- IPython Interactive Computing and Visualization Cookbook
- Cyrille Rossant
Writing unit tests with nose
Manual testing is essential to ensuring that our software works as expected and does not contain critical bugs. However, manual testing is severely limited because bugs may be introduced every time a change is made in the code. We can't possibly expect to manually test our entire program at every commit.
Nowadays, automated testing is a standard practice in software engineering. In this recipe, we will briefly cover important aspects of automated testing: unit tests, test-driven development, test coverage, and continuous integration. Following these practices is absolutely necessary in order to produce high-quality software.
Getting ready
Python has a native unit-testing module that you can readily use (unittest). Other third-party unit testing packages exist, such as py.test or nose, which we have chosen here. nose makes it a bit easier to write a test suite, and it has a library of external plugins. Your users don't need that extra dependency unless they want to run the test suite themselves. You can install nose with pip install nose.
How to do it...
In this example, we will write a unit test for a function that downloads a file from a URL. A testing suite should run and successfully pass even in the absence of a network connection. We take care of that by fooling Python's urllib module with a mock HTTP server.
Note
The code snippets used in this recipe have been written for Python 3. A few changes are required to make them work with Python 2, and we have indicated these changes in the code. The versions for Python 2 and Python 3 are both available on the book's website.
You may also be interested in the requests module; it provides a much simpler API for HTTP requests (http://docs.python-requests.org/en/latest/).
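For instance, downloading a document with requests takes just a few lines (a quick sketch; requests is a third-party package that you would need to install separately):

```python
import requests

# Fetch a document over HTTP; requests handles redirects,
# encodings, and connection pooling for us.
r = requests.get('http://example.com/file.txt')
print(r.status_code)  # 200 if the request succeeded
print(r.text)         # the decoded body of the response
```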
- We create a file named datautils.py with the following code:

```python
In [1]: %%writefile datautils.py
# Version 1.
import os
from urllib.request import urlopen  # Python 2: use urllib2

def download(url):
    """Download a file and save it in the current folder.
    Return the name of the downloaded file."""
    # Get the filename.
    file = os.path.basename(url)
    # Download the file unless it already exists.
    if not os.path.exists(file):
        with open(file, 'w') as f:
            f.write(urlopen(url).read())
    return file
```

Writing datautils.py
- We create a file named test_datautils.py with the following code:

```python
In [2]: %%writefile test_datautils.py
# Python 2: use urllib2
from urllib.request import (HTTPHandler, install_opener,
                            build_opener, addinfourl)
import os
import shutil
import tempfile
from io import StringIO  # Python 2: use StringIO
from datautils import download

TEST_FOLDER = tempfile.mkdtemp()
ORIGINAL_FOLDER = os.getcwd()

class TestHTTPHandler(HTTPHandler):
    """Mock HTTP handler."""
    def http_open(self, req):
        resp = addinfourl(StringIO('test'), '',
                          req.get_full_url(), 200)
        resp.msg = 'OK'
        return resp

def setup():
    """Install the mock HTTP handler for unit tests."""
    install_opener(build_opener(TestHTTPHandler))
    os.chdir(TEST_FOLDER)

def teardown():
    """Restore the normal HTTP handler."""
    install_opener(build_opener(HTTPHandler))
    # Go back to the original folder.
    os.chdir(ORIGINAL_FOLDER)
    # Delete the test folder.
    shutil.rmtree(TEST_FOLDER)

def test_download1():
    file = download("http://example.com/file.txt")
    # Check that the file has been downloaded.
    assert os.path.exists(file)
    # Check that the file contains the contents of
    # the remote file.
    with open(file, 'r') as f:
        contents = f.read()
    print(contents)
    assert contents == 'test'
```

Writing test_datautils.py
- Now, to launch the tests, we execute the following command in a terminal:

```
$ nosetests
.
Ran 1 test in 0.042s

OK
```
- Our first unit test passes! Now, let's add a new test. We append some code at the end of test_datautils.py:

```python
In [4]: %%writefile -a test_datautils.py
def test_download2():
    file = download("http://example.com/")
    assert os.path.exists(file)
```

Appending to test_datautils.py
- We launch the tests again with the nosetests command:

```
$ nosetests
.E
ERROR: test_datautils.test_download2
Traceback (most recent call last):
  File "datautils.py", line 12, in download
    with open(file, 'w') as f:
IOError: [Errno 22] invalid mode ('w') or filename: ''

Ran 2 tests in 0.032s

FAILED (errors=1)
```
- The second test fails. In a real-world scenario, we might need to debug the program. This should be easy because the bug is isolated in a single test function. Here, by inspecting the traceback and the code, we find that the bug results from the requested URL not ending with a proper file name; the inferred file name, os.path.basename(url), is therefore empty. Let's fix this by replacing the download function in datautils.py with the following version:

```python
In [6]: %%writefile datautils.py
# Version 2.
import os
from urllib.request import urlopen  # Python 2: use urllib2

def download(url):
    """Download a file and save it in the current folder.
    Return the name of the downloaded file."""
    # Get the filename.
    file = os.path.basename(url)
    # Fix the bug by specifying a default filename
    # if the URL does not contain one.
    if not file:
        file = 'downloaded'
    # Download the file unless it already exists.
    if not os.path.exists(file):
        with open(file, 'w') as f:
            f.write(urlopen(url).read())
    return file
```

Overwriting datautils.py
- Finally, let's run the tests again:

```
$ nosetests
..
Ran 2 tests in 0.036s

OK
```
Tip
By default, nosetests hides the standard output (unless errors occur). If you want the standard output to show up, use nosetests --nocapture.
How it works...
A test_xxx.py module should accompany every Python module named xxx.py. This testing module contains functions (unit tests) that execute and test functionality implemented in the xxx.py module.
By definition, a given unit test must focus on one very specific functionality. All unit tests should be completely independent. Writing a program as a collection of well-tested, mostly decoupled units forces you to write modular code that is more easily maintainable.
However, sometimes your module's functions require preliminary work to run (for example, setting up the environment, creating data files, or setting up a web server). The unit testing framework can handle this; just write setup() and teardown() functions (called fixtures), and they will be called at the beginning and at the end of the test module, respectively. Note that the state of the system environment should be exactly the same before and after a testing module runs (for example, temporarily created files should be deleted in teardown).
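Module-level fixtures apply to the whole test module. If you need fixtures around a single test, nose also provides the nose.tools.with_setup decorator. A minimal sketch (the fixture and test bodies are placeholders):

```python
from nose.tools import with_setup

def setup_func():
    # Runs immediately before the decorated test only.
    pass

def teardown_func():
    # Runs immediately after the decorated test only.
    pass

@with_setup(setup_func, teardown_func)
def test_with_fixtures():
    assert 1 + 1 == 2
```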
Here, the datautils.py module contains a single function, download, which accepts a URL as an argument, downloads the file, and saves it locally. This module comes with a testing module named test_datautils.py. You should choose the same convention in your program (test_<modulename> for the testing module of modulename). This testing module contains one or several functions prefixed with test_. This is how nose automatically discovers the unit tests across your project. nose also accepts other similar conventions.
Tip
nose runs all the tests it can find in your project, but you can, of course, have more fine-grained control over the tests to run. Type nosetests --help to get the list of all options. You can also check out the documentation at http://nose.readthedocs.org/en/latest/testing.html.
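For example, nose lets you select an individual module or test on the command line, using a module:function syntax:

```
$ nosetests test_datautils.py
$ nosetests test_datautils.py:test_download1
```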
The testing module also contains the setup and teardown functions, which are automatically detected as fixtures by nose. A custom HTTP handler object is created within the setup function. This object captures all HTTP requests, even those with fictional URLs. The setup function then moves into a test folder (created with the tempfile module) to avoid potential conflicts between downloaded files and existing files. In general, unit tests should not leave any trace; this is how we ensure that they are fully reproducible. Likewise, the teardown function deletes the test folder.
Tip
In Python 3.2 and later versions, you can also use tempfile.TemporaryDirectory to create a temporary directory.
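A minimal sketch of how this would look:

```python
import os
import tempfile

# The directory and its contents are deleted automatically
# when the with block exits (Python 3.2+).
with tempfile.TemporaryDirectory() as test_folder:
    path = os.path.join(test_folder, 'file.txt')
    with open(path, 'w') as f:
        f.write('test')
```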
The first unit test downloads a file from a mock URL and checks whether it contains the expected contents. By default, a unit test passes if it does not raise an exception. This is where assert statements, which raise an exception if the tested expression is False, are useful. nose also comes with convenient routines and decorators for precisely determining the conditions under which a particular unit test is expected to pass or fail (for example, it should raise a particular exception to pass, or it should run in less than X seconds, and so on).
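For example, the nose.tools module provides raises and timed decorators; a short sketch:

```python
from nose.tools import raises, timed

@raises(ValueError)
def test_invalid_literal():
    # This test passes only if a ValueError is raised.
    int('not a number')

@timed(1.0)
def test_fast_enough():
    # This test fails if it takes more than one second to run.
    assert sum(range(1000)) == 499500
```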
Tip
Further convenient assert-like functions are provided by NumPy (see http://docs.scipy.org/doc/numpy/reference/routines.testing.html). They are especially useful when working with arrays. For example, np.testing.assert_allclose(x, y) asserts that the x and y arrays are almost equal, up to a precision that can be specified.
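A short example of what this looks like in practice:

```python
import numpy as np

x = np.linspace(0., 1., 100)
y = x + 1e-9  # y differs from x by a tiny amount

# Passes: the arrays are equal up to the given absolute tolerance.
np.testing.assert_allclose(x, y, atol=1e-8)
```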
Writing a full testing suite takes time. It imposes strong (but good) constraints on your code's architecture. It's a real investment, but it is always profitable in the long run. Also, knowing that your project is backed by a full testing suite is a real load off your mind.
First, thinking about unit tests from the beginning forces you to think about a modular architecture. It is really difficult to write unit tests for a monolithic program full of interdependencies.
Second, unit tests make it easier for you to find and fix bugs. If a unit test fails after introducing a change in the program, isolating and reproducing the bugs becomes trivial.
Third, unit tests help you avoid regressions, that is, fixed bugs that silently reappear in a later version. When you discover a new bug, you should write a specific failing unit test for it. To fix it, make this test pass. Now, if the bug reappears later, this unit test will fail and you will immediately be able to address it.
Let's say that you write a complex program in several layers, with layer n+1 built on top of layer n. Having a battery of successful unit tests for layer n makes you confident that it works as expected. When working on layer n+1, you can then focus on that layer instead of constantly worrying about whether the layer below works.
Unit testing is not the whole story, as it just concerns independent components. Further levels of testing are required in order to ensure good integration of the components within the program.
There's more...
Unit testing is a broad topic, and we have only scratched the surface in this recipe. Here is some further information.
Test coverage
Using unit tests is good. However, measuring test coverage is even better: it quantifies how much of your code is covered by your testing suite. Ned Batchelder's coverage module (http://nedbatchelder.com/code/coverage/) does precisely this. It integrates very well with nose.
First, install coverage with pip install coverage
. Then run your testing suite with the following command:
$ nosetests --with-coverage --cover-package=datautils
This command instructs nose to launch your testing suite with coverage measurement for the datautils package only.
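After the run, the collected data (stored in a .coverage file) can be turned into reports with the coverage command-line tool, for example:

```
$ coverage report -m   # per-module report with missing line numbers
$ coverage html        # HTML report written to the htmlcov/ folder
```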
The coveralls.io service brings test-coverage features to a continuous integration server (refer to the Unit testing and continuous integration section below). It works seamlessly with GitHub.
Workflows with unit testing
Note the particular workflow we have used in this example. After writing our download function, we created a first unit test that passed. Then we created a second test that failed. We investigated the issue and fixed the function, and the second test passed. We could continue writing more and more complex unit tests until we are confident that the function works as expected in most situations.
Tip
Run nosetests --pdb to drop into the Python debugger on failures. This is quite convenient for quickly finding out why a unit test fails.
This workflow is related to test-driven development, which consists of writing unit tests before writing the actual code. It forces us to think about what our code does and how one uses it, instead of how it is implemented.
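In a test-driven style, you would therefore start from a failing test and write just enough code to make it pass. A minimal sketch, using a hypothetical mathutils module (the module and function names are placeholders):

```python
# test_mathutils.py -- written first, before mathutils.py exists.
from mathutils import mean  # hypothetical module and function

def test_mean():
    assert mean([1, 2, 3]) == 2
    assert mean([10]) == 10
```

You would then implement mean in mathutils.py and rerun nosetests until the test passes.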
Unit testing and continuous integration
A good habit to get into is running the full testing suite of our project at every commit. In fact, it is even possible to do this completely transparently and automatically through continuous integration. We can set up a server that automatically runs our testing suite in the cloud at every commit. If a test fails, we get an automatic e-mail telling us what the problem is so that we can fix it.
There are many continuous integration systems and services: Jenkins/Hudson, https://drone.io, http://stridercd.com, https://travis-ci.org, and many others. Some of them play well with GitHub projects. For example, to use Travis CI with a GitHub project, create an account on Travis CI, link your GitHub project to this account, and then add a .travis.yml file with various settings to your repository, as sketched below (see the additional details in the following references).
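As an illustration, a minimal .travis.yml for a nose-based project might look like the following (the Python versions and installed packages are placeholders to adapt to your project):

```yaml
language: python
python:
  - "2.7"
  - "3.4"
install:
  - pip install nose
script:
  - nosetests
```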
In conclusion, unit testing, code coverage, and continuous integration are standard practices that should be used for all significant projects.
Here are a few references:
- Test-driven development, available at http://en.wikipedia.org/wiki/Test-driven_development
- Untested code is broken code: test automation in enterprise software delivery, by Martin Aspeli, available at www.deloittedigital.com/eu/blog/untested-code-is-broken-code-test-automation-in-enterprise-software-deliver
- Documentation of Travis CI in Python, at http://about.travis-ci.org/docs/user/languages/python/