A Quick and Dirty Pytest Cookbook
Pytest has become the standard tool to test Python code. In this post I will show you several tricks and hacks to create, configure, and run your tests. Without further ado, let’s eat the bugs!

Configuring Pytest
Since I don’t want to burden my users with testing dependencies, I usually put all the test and documentation dependencies in the extras_require section of the setup.py file, like:
extras_require={
    "test": ["coverage", "mypy", "pycodestyle", "pytest", "pytest-cov",
             "pytest-mock", "pytest-asyncio"]}
In this way, when I am developing code, I can install these testing dependencies like:
pip install -e .[test]
I also like to configure Pytest to automatically look for the tests folder and generate a coverage report. To do so, I add the following lines to the setup.cfg configuration file:
[tool:pytest]
testpaths = tests
addopts = --cov --cov-report xml --cov-report term --cov-report html
Running the tests in GitHub Actions
I use GitHub Actions to test my Python code. The following snippet defines the steps that I use to install the dependencies and run Pytest:
- name: Install the library
  run: |
    pip install -e .[test]
- name: Test
  run: |
    pytest --cov-config setup.cfg
- name: coverage
  uses: codecov/codecov-action@v1
  with:
    file: ./coverage.xml
    name: codecov-umbrella
The final action takes the automatically generated coverage report and sends it to Codecov. Pretty neat, right?
Using a temporary folder
Sometimes you find yourself wanting a temporary folder to write output files from your server, simulation, etc., in order to check that things are going as expected. The following snippet shows how to do so:
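A minimal sketch could look like this (write_simulation_output is a placeholder for your own code that produces output files):

from pathlib import Path


def write_simulation_output(folder: Path) -> None:
    """Placeholder for the code under test: write a result file into the given folder."""
    (folder / "results.txt").write_text("42")


def test_simulation_output(tmp_path: Path) -> None:
    """Check that the output file ends up in the temporary folder."""
    write_simulation_output(tmp_path)
    assert (tmp_path / "results.txt").read_text() == "42"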
Notice that tmp_path is a fixture provided automatically by Pytest; you just need to declare it as an argument of your test.
Mocking code
Mocking functionality is a cool trick when you need to test expensive or complex code, like calling a web service.
Let’s suppose that I have a command line interface to interact with a web service and I would like to test its behavior. I would first mock the reading of the user’s arguments by argparse, and then I would mock the expensive or complex functionality, like the call to the web service. The following snippet shows how you can do so:
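A rough sketch, where my_cli is a placeholder module exposing a main function and a query_web_service helper:

import argparse

import my_cli  # placeholder for the module under test


def test_cli(mocker) -> None:
    """Check the CLI behavior without calling the real web service."""
    # Mock the reading of the user's arguments by argparse
    mocker.patch(
        "argparse.ArgumentParser.parse_args",
        return_value=argparse.Namespace(job="query"))
    # Mock the expensive call to the web service
    mock_service = mocker.patch(
        "my_cli.query_web_service", return_value={"status": "ok"})

    my_cli.main()

    mock_service.assert_called_once()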
Notice that the mocker argument is automatically injected by Pytest through the pytest-mock plugin. For more information, check the pytest-mock page.
You can use multiple Pytest fixtures
A common pattern when testing is to mock part of the functionality while writing some output to a file. To do so, you can use both the mocker fixture and a temporary folder, like:
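A sketch combining both fixtures (my_cli is again a placeholder module, and its main function is assumed to accept an output path):

from pathlib import Path

import my_cli  # placeholder for the module under test


def test_cli_output(mocker, tmp_path: Path) -> None:
    """Mock the web service and check the report written by the CLI."""
    # Mock the expensive call to the web service
    mocker.patch("my_cli.query_web_service", return_value={"status": "ok"})

    # Ask the CLI to write its report into the temporary folder
    my_cli.main(output=tmp_path / "report.json")

    assert (tmp_path / "report.json").exists()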
Checking exceptions
Testing that an exception is raised helps to check that your code functionality behaves as expected, for example when the wrong arguments are passed or the user runs into a corner case. The following snippet allows you to test that behavior:
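For instance, assuming that run_simulation raises a RuntimeError when it receives an unknown parameter:

import pytest

from my_simulation import run_simulation  # placeholder for the code under test


def test_unknown_parameter() -> None:
    """Check that an unknown parameter raises the expected exception."""
    with pytest.raises(RuntimeError):
        run_simulation("nonexistent parameter")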
Skipping tests
Sometimes it is very useful to skip a test if a dependency is not present at runtime. Maybe the dependency takes a long time to install in the CI, or it is a proprietary package that you cannot install easily. You can skip those tests with the following snippet:
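For example, using pytest.importorskip to skip the test when an optional GPU library (cupy here, just as an illustration) is not installed:

import pytest


def test_gpu_simulation() -> None:
    """Run the GPU part of the suite only when cupy is available."""
    # The test is skipped, instead of failing, if cupy cannot be imported
    cupy = pytest.importorskip("cupy")
    array = cupy.arange(10)
    assert int(array.sum()) == 45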
Running MyPy
I personally love to use type hints in my functions and methods, and I use mypy to check that those types make sense. The following snippet presents the configuration that I use to call mypy on a package:
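A rough sketch of such a test, where my_package and the tests/mypy.ini path are placeholders for your own project:

import os
import warnings

from mypy import api


def test_types() -> None:
    """Run mypy on the package and report issues as warnings instead of errors."""
    config_file = os.path.join("tests", "mypy.ini")
    stdout, _, exit_code = api.run(
        ["--config-file", config_file, "-p", "my_package"])
    if exit_code != 0:
        # Turn the mypy report into a warning rather than a failing test
        warnings.warn(f"mypy found type issues:\n{stdout}")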
Notice that you need to define a mypy.ini file somewhere in your repository (I put that file inside the tests folder). I also prefer that mypy raises warnings if there are issues with the types, instead of raising a (very annoying) error.
As an alternative to the above snippet, you can install and run the pytest-mypy-plugins.
Testing the documentation
Testing the documentation is a must-have. The following snippet shows how you can do so:
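One way to do it is to let Pytest collect the examples in your docstrings by adding the --doctest-modules option to addopts in setup.cfg; a documented function like the sketch below is then checked automatically:

def add(a: float, b: float) -> float:
    """Add two numbers.

    Examples
    --------
    >>> add(1, 2)
    3
    """
    return a + b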
Testing async code
If you build web applications or something similar, you have probably run into Python’s asyncio. If you try to run async code as a normal pytest test, chances are you won’t notice that no tests are running at all. Async code is lazy and needs to run inside an event loop, otherwise nothing interesting happens. The following snippet shows how to invoke async code in Pytest:
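A minimal sketch using the pytest-asyncio marker (fetch_answer is just a stand-in for a real coroutine, like a web request):

import asyncio

import pytest


async def fetch_answer() -> int:
    """Stand-in coroutine for an expensive asynchronous call."""
    await asyncio.sleep(0.01)
    return 42


@pytest.mark.asyncio
async def test_fetch_answer() -> None:
    """Check that the coroutine is actually awaited and returns the answer."""
    assert await fetch_answer() == 42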
For more details about testing asynchronous code in Python, check pytest-asyncio.
Running tests in parallel
If you, like me, have the tendency to procrastinate while waiting for the test suite to finish executing, then a good way to reduce the procrastination time is to run the tests in parallel.
To run your tests in parallel, you just need to install the pytest-parallel library, which not only runs your tests in parallel but also does so in a thread-safe manner, using commands as simple as:
pytest --workers 2
Running a test with multiple parameters
Imagine that you have a simulation that receives some parameters as input and gives you back some numerical output. You can use Pytest’s parametrize functionality to feed one parameter at a time to the simulation and check that each one returns the expected output. The following snippet shows how to accomplish that:
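The test looks roughly like this (run_simulation stands in for your own simulation entry point):

import numpy as np
import pytest

from my_simulation import run_simulation  # placeholder for the code under test


@pytest.mark.parametrize("parameter, expected",
                         [("pi", np.pi), ("exp", np.exp(1))])
def test_simulation(parameter: str, expected: float) -> None:
    """Check the simulation."""
    result = run_simulation(parameter)
    assert abs(result - expected) < 1e-8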
If you run the previous snippet, you should see something like:
pytest -v test_parameters.py

test_parameters.py::test_simulation[pi-3.141592653589793] PASSED
test_parameters.py::test_simulation[exp-2.718281828459045] FAILED

parameter = 'exp', expected = 2.718281828459045

    @pytest.mark.parametrize("parameter, expected",
                             [("pi", np.pi), ("exp", np.exp(1))])
    def test_simulation(parameter: str, expected: float) -> None:
        """Check the simulation."""
        result = run_simulation(parameter)
>       assert abs(result - expected) < 1e-8
E       assert 39.28171817154095 < 1e-08
E        +  where 39.28171817154095 = abs((42 - 2.718281828459045))

test_parameters.py:16: AssertionError
Conclusions
Pytest is a flexible tool with a great number of useful extensions. With Pytest, there is always a way to check your code functionality, even those annoying corner cases.
Please comment if you find these tricks useful or if you find another nice trick that you want to share.
Acknowledgement
My special gratitude to Bas van Beek for sharing some of his useful recipes with me. I would also like to thank Florian Huber and Stefan Verhoeven for their feedback.
Thanks to Pablo Rodríguez-Sánchez and Steven Roldan for their help editing the text.