The idea behind unit tests is to check "does this function do what I think it’s doing?". We want to check that, when we write a function, we haven’t put any subtle (or not so subtle) errors in the code, and that when we change some other function we don’t break existing code. So for each function (or maybe pair of functions) we write a test function. These are known as "unit tests".
Unit testing is an essential part of the development process for OPAL. Note that tests are code that we need to read, edit and maintain, so tests should obey any relevant style guides.
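In its simplest form, a unit test is just a function that calls the code under test and checks the result. As a minimal sketch (the clamp function here is invented for illustration):

    # Minimal sketch: a unit test calls a function and checks the result.
    # The function under test is invented for illustration.
    def clamp(value, lowest, highest):
        """Return value limited to the range [lowest, highest]."""
        return max(lowest, min(value, highest))

    def test_clamp():
        assert clamp(5, 0, 10) == 5    # value already in range
        assert clamp(-3, 0, 10) == 0   # below range: clamped to lowest
        assert clamp(99, 0, 10) == 10  # above range: clamped to highest

    test_clamp()  # raises AssertionError if clamp() misbehaves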
C++ Quick Start
- Run build-recipes to build 300-gtest
- From the root OPAL source directory run cmake -DBUILD_OPAL_UNIT_TESTS=1
- Run tests/opal_unit_tests to run the unit tests
PyOpal Quick Start
- From the root OPAL source directory run cmake -DBUILD_OPAL_PYTHON=1
- Look for the line "-- PyOpal install location resolved to /path/to/python/site-packages/" and make sure that /path/to/python/site-packages/ is in your PYTHONPATH
- Don’t forget to run make install to install the pyopal library to your install area
- Either: run python tests/opal_src/PyOpal/test_runner.py to run all the unit tests
- Or: run python tests/opal_src/PyOpal/path/to/a_unit_test.py to run a specific unit test
Setting up the tests
In OPAL we use the Google Test (gtest) framework, which has convenience functions for e.g. equality testing in the presence of floating point errors, setting up common test data, etc.
There is a script for installing gtest in tests/tools/install_gtest.bash. At the moment it doesn’t work on Macs - sorry about that.
Documentation for gtest can be found on the Google Test website.
Go ahead and edit the tests if you need to
We should have one test source file for each header file in the main body of the code. The directory structure is supposed to reflect the directory structure of the main OPAL source (add a directory if you need to). If you want to add an extra test, make sure that it includes gtest, i.e. #include "gtest/gtest.h". It should then be automatically added to the test executable by cmake.
To run the tests, either run the full regression testing suite, or run just the C++ tests by running the executable tests/opal_unit_tests (after building). If you are just interested in a specific test, you can do e.g. tests/opal_unit_tests --gtest_filter=SBend3DTest.* to run all tests from SBend3DTest. There are some other useful command line options; do tests/opal_unit_tests --help to list them all.
In general:
- Unit tests are in the directory tests
- Code which is in src/Classic should have tests in tests/classic_src
- Other code should have tests in tests/opal_src
PyOpal tests
Within the tests area are some specific tests for the OPAL python API. These tests are a mixture of unit tests and integration tests (i.e. testing the pyopal workflow), but they are all quick to run. There is also a slower test for code style. Tests are implemented in python.
Tests will only successfully execute if you have built the OPAL python API. This is controlled by setting BUILD_OPAL_PYTHON to true at cmake time, by doing something like cmake -DBUILD_OPAL_PYTHON=1. It requires dynamic linking of the OPAL libraries, which can be fiddly to achieve. The PyOpal install location is determined dynamically at cmake time; look at src/PyOpal/CMakeLists.txt for the install path resolution code. Make sure that the install location is in your PYTHONPATH environment variable - look for an output during cmake like
-- PyOpal install location resolved to /path/to/python/site-packages/
to figure out which path to include.
Individual tests use the built-in python unittest package. Tests are implemented as subclasses of unittest.TestCase. Each test should be executable as a standalone test by calling it as a script from the command line, like python tests/opal_src/PyOpal/path/to/a_unit_test.py.
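For illustration, a standalone test script might look like the following sketch; the function under test here is a stand-in, not part of the PyOpal API:

    import unittest

    def twice(x):
        """Stand-in for the code under test."""
        return 2 * x

    class TwiceTest(unittest.TestCase):
        """Tests subclass unittest.TestCase; each test_* method is one test."""
        def test_integer(self):
            self.assertEqual(twice(3), 6)

        def test_float(self):
            # assertAlmostEqual handles floating point rounding errors
            self.assertAlmostEqual(twice(0.1), 0.2, places=12)

    if __name__ == "__main__":
        unittest.main()  # lets the script run standalone from the command line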
Some tests execute the main OPAL workflow. Unfortunately, once finished, OPAL tends to leave a lot of things lying about in memory, which can cause problems in subsequent tests. In order to combat this, a small wrapper class has been written which wraps some tests in a forked process using os.fork. Details can be found in src/PyOpal/PyPython/encapsulated_test_case.py.
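The technique looks roughly like this sketch of the fork-and-wait idea (an illustration only, not the actual code in encapsulated_test_case.py):

    import os

    def run_encapsulated(test_function):
        """Run test_function in a forked child process.

        Whatever the test leaves lying about in memory is discarded when
        the child exits, so it cannot pollute subsequent tests.
        """
        pid = os.fork()
        if pid == 0:
            # Child: run the test and report pass/fail through the exit code.
            try:
                test_function()
                os._exit(0)
            except Exception:
                os._exit(1)
        # Parent: wait for the child and convert a failure into an exception.
        _, status = os.waitpid(pid, 0)
        if os.WEXITSTATUS(status) != 0:
            raise AssertionError("encapsulated test failed in child process")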
There exists a set of style tests to check that code is written in a way compatible with the official python style and to perform some static code checks. These checks are executed by doing python tests/opal_src/PyOpal/test_pylint.py. test_pylint.py looks for python files in subdirectories of tests/opal_src/PyOpal, or python files that are installed in the pyopal install target area.
All of the tests can be run using the test runner, python tests/opal_src/PyOpal/test_runner.py. Note that this does run the pylint tests, which can be slow. To disable pylint execution, pass --do_not_run_pylint as a command line argument. test_runner.py looks for tests that are in subdirectories of tests/opal_src/PyOpal.
More on the unit test concept
The idea behind unit tests is to test, at the level of the smallest unit, that the code does what we think. We test at the smallest unit so that:
- test coverage is high
- tests are quick to run
- we get logically separated functions
Let’s consider each of these points individually:
- Test coverage is high: if we imagine the execution path following some branch structure, then longer code snippets have many more possible execution paths. Maintaining high test coverage thus becomes very difficult, and we need many more tests to get good coverage, so it’s a good idea to test the smallest code snippet possible (see the sketch after this list).
- Tests are quick to run: the execution time goes as (number of tests) × (length of each test). If we test bigger code snippets, we need many more tests to keep good coverage, and each test takes longer, so the test suite becomes slow. You want to actually run the tests, and an essential part of this is making sure they are quick enough.
- We get logically separated functions: functions that do simple, understandable things are less likely to be buggy than functions that do complicated or difficult things. The process of really testing whether a function does what you intended forces us to make code simple and understandable - otherwise the test becomes difficult to write.
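To make the coverage point concrete, here is a small sketch (the functions are invented for illustration): three functions with two branches each need only two unit tests apiece, six in total, to cover every branch, whereas testing them only through the function that chains them would need 2 × 2 × 2 = 8 tests for full path coverage.

    # Each function below has two branches. Tested separately, two tests
    # per function (6 in total) cover every branch; tested only through
    # process(), full path coverage needs 2**3 = 8 tests.
    def sanitise(x):
        return 0.0 if x < 0 else x

    def scale(x):
        return x * 2 if x < 10 else x

    def label(x):
        return "small" if x < 5 else "big"

    def process(x):
        return label(scale(sanitise(x)))

    # Unit tests: two per function, one per branch.
    assert sanitise(-1) == 0.0 and sanitise(3) == 3
    assert scale(4) == 8 and scale(20) == 20
    assert label(1) == "small" and label(7) == "big"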
Explicitly, unit tests are not intended to test that code works together in the desired fashion, that code integrates properly with external libraries, that the load on hardware is not too big, etc. These issues are dealt with in the Regression tests.
Test driven design (TDD)
Test driven design (TDD) is a recommended way to design code in which unit tests come for free.
Instead of starting with the class design, TDD starts by writing the unit tests. This focuses attention directly on the usage of the new classes / features.
By putting the test first and then writing code to make the test pass, TDD becomes a way to design code. Even better, as you add tests you incrementally improve your design. In an environment that (deliberately) lacks detailed, up-front design thinking, that last characteristic is critical. The process works so well, in fact, that the architecture that emerges from a TDD environment is usually better than the one that, I at least, can create from whole cloth ahead of time.
TDD consists of the following steps:
- Write unit tests
  - The unit test does not compile
- Write the class/feature design (skeleton)
  - The unit test will compile, but fail
- Write the class/feature implementation
  - The unit test will pass (if the implementation is correct)
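As an illustration of the cycle (the kinematics helper below is invented for this example, not part of OPAL): at step 1 the test fails to run because energy_to_momentum does not exist; at step 2 a skeleton that, say, returns 0.0 makes it run but fail; at step 3 the implementation makes it pass.

    # Hypothetical TDD example; the names are invented, not part of OPAL.
    import math
    import unittest

    # Steps 2 and 3: first a skeleton, then the implementation,
    # both written after the test below.
    def energy_to_momentum(energy, mass):
        """Return momentum for a total energy and rest mass (natural units)."""
        if energy < mass:
            raise ValueError("total energy cannot be less than rest mass")
        return math.sqrt(energy ** 2 - mass ** 2)

    # Step 1: the test, written first; it pins down how the function is used.
    class EnergyToMomentumTest(unittest.TestCase):
        def test_at_rest(self):
            self.assertAlmostEqual(energy_to_momentum(0.938, 0.938), 0.0)

        def test_relativistic(self):
            # E^2 = p^2 + m^2, so E = 5, m = 3 gives p = 4
            self.assertAlmostEqual(energy_to_momentum(5.0, 3.0), 4.0)

        def test_unphysical_input(self):
            with self.assertRaises(ValueError):
                energy_to_momentum(0.5, 0.938)

    if __name__ == "__main__":
        unittest.main()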