Done - ScalingFFAMagnet and EndFieldModels. I tried a clever scheme to automatically detect when the EndFieldModel changed and update the ScalingFFAMagnet, but I was unable to make it work. So the user has to manually call EndFieldModel::update() and then ScalingFFAMagnet::update_end_field() every time the EndFieldModel is changed.
I am working on a validated scaling FFA lattice (integration test). Aim is to complete by the meeting next week!
Looks like I have now committed API code and tests for VariableRFCavity, MultipoleT and Probe. I added "minimal_runner.py", a pure-Python module that provides hooks for running simulations: a convenience class that applies defaults for lots of things. The user just has to add their particular lattice and distribution, and can override anything they want to be non-default; the pattern is sketched below.
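The following toy illustrates the "convenience class with overridable defaults" pattern that minimal_runner.py follows. The class and method names here are invented for illustration and are not the real PyOpal API; consult src/PyOpal/PyPython/minimal_runner.py for the actual hooks.

    # Toy stand-in for the convenience class (all names are hypothetical).
    class Runner:
        def make_lattice(self):
            return "default lattice"
        def make_distribution(self):
            return "default distribution"
        def execute(self):
            # a real runner would set up and run the OPAL simulation here
            print(self.make_lattice(), "+", self.make_distribution())

    # The user overrides only what they need; everything else keeps defaults.
    class MyRunner(Runner):
        def make_distribution(self):
            return "my gaussian distribution"

    MyRunner().execute()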
I have observed, but not dug into, segmentation faults when running multiple simulations in the same process. The minimal_runner provides a hook for running OPAL simulations in an os.fork() to work around the issue for now. Needs some attention.
The aim of this merge is to get to a point where we have a "minimal working example" which allows the entire Python API to be executed. In this case, an example execution (which I used as the basis for many tests) is available at:
src/PyOpal/PyPython/minimal_runner.py
A worked example that builds a scaling FFA and tracks it is available at:
For now, I just check that I can generate reasonable field maps and get some (any) output on the probes. Nothing as clever as tracking a closed orbit or anything; that's the user's job! I attach sample field maps as output (the coarse granularity is because it is a unit test, so I want it to run very quickly).
I also hacked around with some of the layout options for the horizontal FFA. In particular, I added the option to do positive and negative bend angles and fiddled with the placement routines. I am setting up to implement 3D placements.
Build
I am coding against Python 3.8.2, although in principle it should not matter much as long as it is version 3 or greater. I haven't tested against different Python versions. If you do not have the appropriate Boost.Python library, you will need to install that as well.
If you want to run the pylint static code-checking test (see below), you need pylint installed. It is executed by default.
If you want to generate the plots in test_track_run_scaling_ffa.py, you need matplotlib installed.
NOTE: remember to 'make install'. When executing tests and so on, Python will look in PYTHONPATH for installed packages. Note the various directives instructing cmake where to put things, in particular the install location of the pyopal code. I have added to my PYTHONPATH:
${INSTALL_LOCATION}/lib/python/site-packages/
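If you prefer not to change your shell environment, the same effect can be had per-script by appending the directory to sys.path before the first pyopal import (generic Python, nothing PyOpal-specific; substitute your actual install prefix):

    import sys
    # Same directory as the PYTHONPATH entry above.
    sys.path.append("INSTALL_LOCATION/lib/python/site-packages/")
    import pyopal.objects  # should now resolve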
Test
The C++ unit tests should all pass.
There are new Python unit tests that execute from
tests/opal_src/PyOpal/test_runner.py
If you run this it will go quite quickly, a couple of seconds, except for a static Python checking script, "test_pylint", which takes about half a minute. You can disable this by passing a command line argument like
python test_runner.py --do_not_run_pylint
I didn't figure out how to suppress verbose output from OPAL, so it makes lots of noise. I'm sure the ippl experts can tell me that it's easy!
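One generic possibility, offered here as an untested sketch rather than a verified fix, is to redirect file descriptor 1 at the OS level; unlike reassigning sys.stdout, this also captures output written directly by C++ code:

    import os

    def redirect_stdout(path):
        # Point file descriptor 1 (stdout) at a file at the OS level.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        os.dup2(fd, 1)
        os.close(fd)

Whether OPAL/ippl writes to stdout or stderr (fd 2) would need checking.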
Outstanding issues
1/ Documentation! I have not done any documentation.
2/ At least in the scaling FFA, I have not protected the user from silly mistakes. I got hung up for half a day because I had a typo in the end field definition (the fringe field length was undefined, leading to nonsense magnets).
3/ Object persistence: objects can be constructed but not deleted. If two objects are created with the same name, OPAL exits. Among other things, this makes optimisation loops within a single executable impossible, i.e.:
** build a lattice
** track a lattice
** look at the results
** edit the lattice
For now I have a workaround using os.fork(), but it is hacky. It's in minimal_runner.py, in the "execute_fork" method; the idea is sketched below.
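A minimal sketch of the idea (not the actual execute_fork implementation): run the simulation in a forked child process, so that OPAL's global state is thrown away when the child exits and the parent is free to build a fresh lattice.

    import os

    def run_in_fork(function):
        pid = os.fork()
        if pid == 0:
            # Child: run the simulation, then exit hard so OPAL's global
            # state dies with this process.
            try:
                function()
            finally:
                os._exit(0)
        else:
            # Parent: wait for the child to finish before continuing.
            os.waitpid(pid, 0)

Note that os.fork() is POSIX-only, so this workaround will not help on native Windows.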
4/ I didn't complete all of the things I promised. Specifically, I did not implement VariableRFCavity nor Dump[EM]Fields. I just didn't have time to do the former; the latter is superseded by the pyopal.fields module.
5/ What is the verdict on the module name? Should it be pyopal or something else (pyopaltracking is a mouthful)?
6/ Probably lots of other things. I'm sure you will find them!
I have now compiled #745 and run the unit tests. All the tests passed aside from PyObjects/test_distribution.py and PyObjects/test_field_solver.py.
These tests produced an error:
Traceback (most recent call last):
  File "/home/carl/OPAL/src/tests/opal_src/PyOpal/PyObjects/test_field_solver.py", line 7, in <module>
    class TestFieldSolver(pyopal.objects.encapsulated_test_case.EncapsulatedTestCase):
AttributeError: module 'pyopal.objects' has no attribute 'encapsulated_test_case'
To fix this, I added the import:
import pyopal.objects.encapsulated_test_case
to both files.
I'm not sure if this was only an issue on my Linux machine (WSL2, Ubuntu 20.04).