# OPAL/src issues
https://gitlab.psi.ch/OPAL/src/-/issues

## #321: Optimizer crash if SDDSVariable return value is NaN (frey_m, 2019-06-28)
https://gitlab.psi.ch/OPAL/src/-/issues/321

I recently observed that the optimizer crashes with
```
Error{0}>
Error{0}> *** Error:
Error{0}> Internal OPAL error:
Error{0}> input stream error
Error{0}> input stream error
Rank 0 [Wed Jun 26 20:01:25 2019] [c0-0c0s2n0] application called MPI_Abort(MPI_COMM_WORLD, -100) - process 0
SIGABRT
```
if the return value of `SDDSVariable` is `NaN`. This can be fixed with a check using `std::isnan` and `std::isinf`. It's probably best to add the check to the `SDDSParser`, which then throws an exception that is caught by `SDDSVariable`.
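A minimal sketch of such a check, with hypothetical names for the parser hook and the exception type (the real `SDDSParser`/`SDDSVariable` interfaces may differ):

```cpp
#include <cmath>
#include <stdexcept>
#include <string>

// Hypothetical exception type; OPAL's actual error classes may differ.
struct SDDSParserException : public std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Hypothetical hook inside SDDSParser: reject non-finite values so that
// SDDSVariable (and thus the optimizer) never sees NaN or inf.
double checkedValue(double value, const std::string& column) {
    if (std::isnan(value) || std::isinf(value)) {
        throw SDDSParserException("SDDSParser: column '" + column +
                                  "' contains a non-finite value");
    }
    return value;
}
```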
## #318: Optimizer: Generalizing objectives with sddsVariableAt (frey_m, 2019-06-26)
https://gitlab.psi.ch/OPAL/src/-/issues/318

### Summary
Currently, the optimizer allows evaluating an objective in the stat-file with `sddsVariableAt` only according to `spos`. I'd like to use other quantities as well.

## #312: Restart-2 regression test fails after AMR update (snuverink_j, 2019-06-13)
https://gitlab.psi.ch/OPAL/src/-/issues/312

After merging src!105, the Restart-2 regression test fails: http://amas.web.psi.ch/opal/regressionTests/master/results_2019-06-13.xml

## #310: OPAL job failed due to attempt to free memory that is still in use by MPI (bellotti_r, 2019-06-13)
https://gitlab.psi.ch/OPAL/src/-/issues/310

I tried to take 10k samples and obtained the following error message:
```
Ippl{0}> CommMPI: Initialization complete.
Ippl{0}> CommMPI: Parent process waiting for children ...
Ippl{0}> CommMPI: Child 1 ready.
Ippl{0}> CommMPI: Child 2 ready.
Ippl{0}> CommMPI: Child 3 ready.
Ippl{0}> CommMPI: Child 4 ready.
Ippl{0}> CommMPI: Child 5 ready.
Ippl{0}> CommMPI: Child 6 ready.
Ippl{0}> CommMPI: Child 7 ready.
Ippl{0}> CommMPI: Initialization complete.
[merlin-c-002:10963] Attempt to free memory that is still in use by an ongoing MPI communication (buffer 0x7f6a000, size 6098944). MPI job will now abort.
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[63717,1],42]
Exit code: 1
--------------------------------------------------------------------------
```
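This Open MPI message typically appears when a buffer handed to a nonblocking call is freed before the communication completes. A minimal sketch of that pattern, not taken from the OPAL sources:

```cpp
#include <mpi.h>
#include <cstdlib>

// Sketch of the failure pattern behind the message above (run on 2 ranks):
// a buffer handed to a nonblocking send is freed before the send completes.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 762368;  // 6098944 bytes worth of doubles, as in the log
    double* buf = static_cast<double*>(std::malloc(n * sizeof(double)));

    if (rank == 0) {
        MPI_Request req;
        MPI_Isend(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        std::free(buf);  // wrong: MPI may still be using buf
        // correct order: MPI_Wait(&req, MPI_STATUS_IGNORE); then free(buf);
    } else if (rank == 1) {
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::free(buf);
    }

    MPI_Finalize();
    return 0;
}
```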
I think this might be a bug. The following **configuration file** was used:
```
OPTION, INFO=TRUE;
REAL Nsamples = 10000;
// Design variables
p1: DVAR, VARIABLE="p1", LOWERBOUND=-30., UPPERBOUND=0;
// Sampling methods
sp1: SAMPLING, VARIABLE="p1", RANDOM=true, TYPE="UNIFORM", SEED=122, N = Nsamples;
tdc1: DVAR, VARIABLE="PHASE", LOWERBOUND=0.4856, UPPERBOUND=0.7284;
// ---- OPTIMIZER SECTION -------
dv0: DVAR, VARIABLE="IBF", LOWERBOUND=400, UPPERBOUND=500;
dv1: DVAR, VARIABLE="IM", LOWERBOUND=250, UPPERBOUND=440;
dv2: DVAR, VARIABLE="GPHASE", LOWERBOUND=-30.0, UPPERBOUND=0.0;
dv3: DVAR, VARIABLE="FWHM", LOWERBOUND=1.5e-12, UPPERBOUND=10.0e-12;
//Quad values
dv4: DVAR, VARIABLE="KQ1", LOWERBOUND=-8.0, UPPERBOUND=8.0;
dv5: DVAR, VARIABLE="KQ2", LOWERBOUND=-8.0, UPPERBOUND=8.0;
dv6: DVAR, VARIABLE="KQ3", LOWERBOUND=-8.0, UPPERBOUND=8.0;
dv7: DVAR, VARIABLE="KQ4", LOWERBOUND=-8.0, UPPERBOUND=8.0;
stdc1: SAMPLING, VARIABLE="PHASE", TYPE="UNIFORM", SEED=329, N = Nsamples;
sdv0: SAMPLING, VARIABLE="IBF", RANDOM=true, TYPE="UNIFORM", SEED=5979, N = Nsamples;
sdv1: SAMPLING, VARIABLE="IM", RANDOM=true, TYPE="UNIFORM", SEED=2840, N = Nsamples;
sdv2: SAMPLING, VARIABLE="GPHASE", RANDOM=true, TYPE="UNIFORM", SEED=68921, N = Nsamples;
sdv3: SAMPLING, VARIABLE="FWHM", RANDOM=true, TYPE="UNIFORM", SEED=580972, N = Nsamples;
sdv4: SAMPLING, VARIABLE="KQ1", RANDOM=true, TYPE="UNIFORM", SEED=1169, N = Nsamples;
sdv5: SAMPLING, VARIABLE="KQ2", RANDOM=true, TYPE="UNIFORM", SEED=435831, N = Nsamples;
sdv6: SAMPLING, VARIABLE="KQ3", RANDOM=true, TYPE="UNIFORM", SEED=183246, N = Nsamples;
sdv7: SAMPLING, VARIABLE="KQ4", RANDOM=true, TYPE="UNIFORM", SEED=12548, N = Nsamples;
SAMPLE,
RASTER = false,
DVARS = {p1, tdc1, dv0, dv1, dv2, dv3, dv4, dv5, dv6, dv7},
SAMPLINGS = {sp1, stdc1, sdv0, sdv1, sdv2, sdv3, sdv4, sdv5, sdv6, sdv7},
INPUT = "awa.tmpl",
OUTPUT = "awa",
OUTDIR = "output_5k",
TEMPLATEDIR = "tmpl",
FIELDMAPDIR = "fieldmaps",
NUM_MASTERS = 1,
NUM_COWORKERS = 1;
QUIT;
```
Just write me an email if somebody is interested and needs more information.

## #304: All cores in a parallel run with particle-matter-interaction use same sequence of random numbers (kraus, 2019-05-22)
https://gitlab.psi.ch/OPAL/src/-/issues/304

This is similar to the case we had in the Distribution class where the sequences of random numbers were the same. There we could alleviate this problem using two different approaches. The first approach is to discard parts of the sequence on all cores except one (see [here](https://gitlab.psi.ch/OPAL/src/blob/master/src/Distribution/Distribution.cpp#L373)). This doesn't scale well with an increasing number of cores and particles. The other approach was to use a different seed on each core (see [here](https://gitlab.psi.ch/OPAL/src/blob/master/src/Distribution/Distribution.cpp#L307)). This scales well but yields different results when different numbers of cores are used.
In particle-matter-interaction the first approach doesn't seem feasible. The second approach, however, is easy to implement.
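A sketch of the two seeding strategies, using `std::mt19937` for illustration (OPAL has its own RNG infrastructure):

```cpp
#include <random>

// Approach 1: identical seed everywhere; each core discards the part of
// the sequence consumed by lower-ranked cores. Reproducible across core
// counts, but the discard grows with rank and particle count.
std::mt19937 makeSharedSequenceRNG(int myRank, std::size_t drawsPerRank) {
    std::mt19937 rng(42);                 // same seed on every core
    rng.discard(myRank * drawsPerRank);   // skip the other cores' draws
    return rng;
}

// Approach 2: a different seed per core. Scales well, but results change
// when the number of cores changes.
std::mt19937 makePerCoreRNG(int myRank) {
    return std::mt19937(42 + static_cast<unsigned>(myRank));
}
```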
## #78: Particle Matter interaction and Large Angle scattering (adelmann, 2019-05-16)
https://gitlab.psi.ch/OPAL/src/-/issues/78

A 249 MeV proton beam is hitting a degrader:

```
REAL WEDGE_HLEN=0.0197293;
REAL START = 0.02;
DEGPHYS_Wedge : SURFACEPHYSICS, TYPE="DEGRADER", MATERIAL="GraphiteR6710";
Wedge1: DEGRADER, L=WEDGE_HLEN, OUTFN="sWedge1.h5", SURFACEPHYSICS=DEGPHYS_Wedge, ELEMEDGE=START;
```
The claim is that the following transverse real space
![image](/uploads/96f74bd4cd02104fb0f45ba275702de5/image.png)
and transverse momentum space
![image](/uploads/4a30f2ebddb24ba7bc1e7da81e087bb9/image.png)
are **not** correct.
Switching off the large angle scattering (http://amas.web.psi.ch/docs/opal/opal_user_guide.pdf, Sec. 18.2.2), the "halo" disappears, as shown
by the red dots in the following picture:
![image](/uploads/ea17023a70f261b39db30854795d1485/image.png)
Switch off == comment out: https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Solvers/CollimatorPhysics.cpp#L777 and
https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Solvers/CollimatorPhysics.cpp#L746
Now we can enable/disable Rutherford scattering
`DEGPHYS_Wedge : SURFACEPHYSICS, TYPE="DEGRADER", MATERIAL="GraphiteR6710", ENABLERUTHERFORD=TRUE;`
Default is **ENABLED**
Be aware that this input file runs only with OPAL-1.6 (`git checkout OPAL-1.6`):
[sDegrader_70.in](/uploads/8ef0732890ee80d73567650e8e4f810a/sDegrader_70.in)
Milestone: OPAL 1.9.x

## #306: Many failed regression tests (snuverink_j, 2019-05-16)
https://gitlab.psi.ch/OPAL/src/-/issues/306

Today there were many failed regression tests:
http://amas.web.psi.ch/opal/regressionTests/master/results_2019-05-10.xml
There were four merge requests (issue assigned to the authors of these):
1. https://gitlab.psi.ch/OPAL/src/merge_requests/82
1. https://gitlab.psi.ch/OPAL/src/merge_requests/87
1. https://gitlab.psi.ch/OPAL/src/merge_requests/90
1. https://gitlab.psi.ch/OPAL/src/merge_requests/92
I had a quick look and I suspect this change in the PoissonSolver https://gitlab.psi.ch/OPAL/src/blob/master/src/Solvers/FFTPoissonSolver.cpp:
```diff
@@ -388,7 +388,7 @@ void FFTPoissonSolver::computePotentialDKS(Field_t &rho) {
if (Ippl::myNode() == 0) {
IpplTimings::startTimer(GreensFunctionTimer_m);
- integratedGreensFunction();
+ integratedGreensFunctionDKS();
IpplTimings::stopTimer(GreensFunctionTimer_m);
//transform the greens function
int dimsize[3] = {2*nr_m[0], 2*nr_m[1], 2*nr_m[2]};
```

## #302: Cleanup legacy code (snuverink_j, 2019-05-15)
https://gitlab.psi.ch/OPAL/src/-/issues/302

In our source code there are several preprocessor checks for `__GNUC__ < 3`. We no longer support those systems and this legacy code can be safely removed.
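An illustrative example of the kind of guard to be removed (not a verbatim excerpt from the sources):

```cpp
// Typical pre-GCC-3 compatibility guard; on the platforms OPAL still
// supports, only the modern branch is ever taken, so the check can go.
#if defined(__GNUC__) && __GNUC__ < 3
#include <strstream>   // ancient libstdc++ had no <sstream>
typedef std::ostrstream OutStream;
#else
#include <sstream>
typedef std::ostringstream OutStream;
#endif
```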
## #301: Premature termination of integration when reference particle after MAXSTEPS in implicit drift (kraus, 2019-05-15)
https://gitlab.psi.ch/OPAL/src/-/issues/301

The Degrader-1 test is flagged as broken because the number of saved steps in the `.stat` file differs. This is caused by the fact that after 230 steps (MAXSTEPS) the reference particle in the OrbitThreader class is located in a drift that isn't explicitly mentioned in the input file. During the simulation ParallelTTracker stops because it seems to have reached the end of the beamline.

## #149: Coulomb / Rutherford scattering (kraus, 2019-05-11)
https://gitlab.psi.ch/OPAL/src/-/issues/149

Does multiplying R twice with 1000 really make sense?
- [first time here](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Solvers/CollimatorPhysics.cpp#L773)
- [second time here](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Solvers/CollimatorPhysics.cpp#L792)
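If each multiplication converts metres to millimetres, applying both scales the value by 10^6. A schematic of the suspected pattern, with hypothetical names (not the actual CollimatorPhysics code):

```cpp
// Schematic of the suspicion: the same quantity is rescaled twice.
double scaleTwice(double R /* metres */) {
    R = R * 1000.0;  // first factor 1000 (cf. line 773)
    R = R * 1000.0;  // second factor 1000 (cf. line 792) -> net factor 1e6
    return R;
}
```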
@adelmann @baumgarten ?

## #305: Calculation of chord length in RBend wrong (kraus, 2019-05-09)
https://gitlab.psi.ch/OPAL/src/-/issues/305
### Summary
When the deflection angle is negative, the chord length that is calculated in RBend is wrong. In a rectangular bend, when the orientation of the face relative to the beam (`E1`) is half of the deflection angle, the chord length should be equal to the length of the dipole. Instead, the calculated length is as if `E1` were multiplied by `-1`.
### Steps to reproduce
Add `OPTION, LOGBENDTRAJECTORY=TRUE;` to the input file and track a bunch through a rectangular bend with `ANGLE < 0` and `E1 = ANGLE / 2`. Then look up the distance in the file `data/<input_fname>_<bend_name>_traj.dat` between the two locations where the reference particle crosses `x=0`.
### What is the current *bug* behavior?
The current chord length is as if `E1 = -ANGLE / 2`.
### What is the expected *correct* behavior?
The chord length should be equal to `L` in the description of the bend in the input file.
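A sign-safe way to express the expected geometry, assuming the usual relation chord = 2R·sin(α/2) between bend radius R and deflection angle α (illustrative, not the actual RBend code):

```cpp
#include <cmath>

// Chord length of a bend with radius R and deflection angle 'angle'.
// Using the magnitude of the angle keeps the chord positive and makes
// ANGLE < 0 behave symmetrically to ANGLE > 0, as E1 = ANGLE / 2 expects.
double chordLength(double radius, double angle) {
    return 2.0 * std::abs(radius) * std::sin(std::abs(angle) / 2.0);
}
```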
## #288: attribute nullptr bug (gsell, 2019-05-08)
https://gitlab.psi.ch/OPAL/src/-/issues/288

The attribute base class doesn't initialise the base pointer in some cases.
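A sketch of the usual fix pattern, with hypothetical class and member names:

```cpp
// Hypothetical illustration: default-initialise the pointer so that code
// paths that never assign it see nullptr instead of an indeterminate value.
class AttributeBase {
public:
    AttributeBase() : base_m(nullptr) {}
protected:
    AttributeBase* base_m;  // previously left uninitialised in some ctors
};
```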
## #168: MultipoleT Curvature model (ext-rogers_c, 2019-04-18)
https://gitlab.psi.ch/OPAL/src/-/issues/168

Well, I pushed Titus's code. I did some cleanup and checked a few things over. I note that with non-zero radius of curvature:

(a) the processing time is too long. There is some recursive derivative lookup that seems to be not well-optimised, but I suspect it will take a bit of browsing through the maths to understand what he was trying to do.
(b) the field values come out very large when radius of curvature is large. Presumably the code should tend to the straight magnet limit, indicating a bug somewhere.
I will need some time to address these issues - but between data taking and other work I can't give a firm date when I can get into this. I would estimate that I need a good week of work (say two-three weeks in real time) to have some confidence in the code. E.g. it has taken about a week of coding time plus lots of tracking studies for me to gain confidence in the spiral sector FFAG magnet model.
On the plus side, the straight magnet routines seem okay.

## #289: Cleanup unused and outdated files (snuverink_j, 2019-04-18)
https://gitlab.psi.ch/OPAL/src/-/issues/289

I propose to remove all unused and outdated files from the repo.
The following source files are affected:
* MPWriter/MPReader
* ~~bet/math/functions~~
* ~~bet/math/svdfit~~
* ~~bet/error.h/cpp (superseded by BetError.h)~~
* ~~bet/error.C~~
* ~~bet/math/integrate (Leff,Leff2,Labs not used in profile)~~
* ~~bet/math/sort all except sort2~~
* Algorithms/AutophaseTracker
* ippl/src/Particle/ParticleSpatialLayout.hNudge and ippl/src/Particle/ParticleSpatialLayout.cppNudge
* ~~src/Classic/FixedAlgebra/ ComplexEigen and FComplexEigen~~
* ~~src/Classic/Algebra/NormalForm~~
* src/Classic/Utilities/Gauss
* src/Errors
* optimizer/Comm/Splitter/ReadSplitFromFile.h
* ~~ippl/test/ directory~~
* TaperDomain
* TracerParticles
* Elements/OpalBeamBeam3D
* Elements/OpalBeamBeam
Other files:
* src/Classic/ReadMe
* .gitattributes
More (added 16.04):
* Distribution/halton1d_sequence.cpp
* src/Classic/Main.cpp
* src/Classic/DipoleFieldTest.cpp
* Structure/PriEmissionPhysics.cpp

## #299: Implement a better format for the material parameters in CollimatorPhysics (kraus, 2019-04-12)
https://gitlab.psi.ch/OPAL/src/-/issues/299

Currently the material parameters are stored in a big `if ... else if ... else`. This isn't very nice and can't be easily extended.
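A sketch of a table-driven alternative, with a hypothetical struct and illustrative values (not proposed OPAL API):

```cpp
#include <map>
#include <stdexcept>
#include <string>

// One table entry per material replaces one branch of the big if/else-if
// chain; adding a new material becomes a one-line change.
struct MaterialProperties {
    double Z;    // atomic number
    double A;    // atomic mass [u]
    double rho;  // density [g/cm^3]
};

const MaterialProperties& getMaterial(const std::string& name) {
    static const std::map<std::string, MaterialProperties> table = {
        {"AIR",      { 7.0, 14.0, 0.0012}},  // illustrative values only
        {"GRAPHITE", { 6.0, 12.0, 2.21  }},
        {"COPPER",   {29.0, 63.5, 8.96  }},
    };
    auto it = table.find(name);
    if (it == table.end())
        throw std::runtime_error("Unknown material: " + name);
    return it->second;
}
```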
## #295: ParallelTTracker crashes when using particle matter interaction and space charge solver if all particles are in material (kraus, 2019-04-12)
https://gitlab.psi.ch/OPAL/src/-/issues/295

```
ParallelTTracker [2]> --- CollimatorPhysics - Name AIR1 Material AIR
ParallelTTracker [2]> Particle Statistics @ 12:29:52
ParallelTTracker [2]> entered: 1
ParallelTTracker [2]> rediffused: 0
ParallelTTracker [2]> stopped: 0
ParallelTTracker [2]> total in material: 50'000
Error>
Error> *** User error detected by function "boundp() "
Error> h<0, can not build a mesh
Error> h<0, can not build a mesh
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -100.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
```
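The `h<0, can not build a mesh` error suggests a mesh is built from the bounds of an empty local particle set once every particle is inside the material. A defensive sketch, with illustrative names (not the actual `boundp()` code):

```cpp
// Hypothetical guard: report whether a space-charge solve is possible at
// all, so no mesh is built from the bounds of an empty particle container
// while every particle sits in the degrader.
template <typename Bunch>
bool canComputeSelfFields(const Bunch& bunch) {
    return bunch.getTotalNum() > 0;
}
```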
## #292: reading H5hut fieldmap fails due to unset view (gsell, 2019-04-12)
https://gitlab.psi.ch/OPAL/src/-/issues/292

In method FM3dH5Block::readMap() a view must be set before we can read the map.
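A sketch of the call order, assuming H5hut's C API (exact signatures vary between H5hut versions; the surrounding logic is illustrative, not the FM3dH5Block code):

```cpp
#include <H5hut.h>

// Define which (i,j,k) sub-block this core owns *before* reading;
// otherwise the read operates without a valid view.
void readEfield(h5_file_t file,
                h5_int64_t ni, h5_int64_t nj,
                h5_int64_t kStart, h5_int64_t kEnd,
                h5_float64_t* ex, h5_float64_t* ey, h5_float64_t* ez) {
    H5Block3dSetView(file,
                     0, ni - 1,       // full range in i
                     0, nj - 1,       // full range in j
                     kStart, kEnd);   // this core's slice in k (z)
    H5Block3dReadVector3dFieldFloat64(file, "Efield", ex, ey, ez);
}
```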
## #300: reading H5Block formatted field-maps crashes (gsell, 2019-04-12)
https://gitlab.psi.ch/OPAL/src/-/issues/300

If the size of the field-map in z-direction is less than the number of cores, reading the field-map crashes.
This is already fixed in OPAL 2.0: see 0172837a and #292.

## #174: optimiser run hasResultsAvailable() (adelmann, 2019-04-06)
https://gitlab.psi.ch/OPAL/src/-/issues/174
It seems that hasResultsAvailable() is sometimes true after I removed
the pid from the hash string. This was necessary when more than
one $CORE is used for a worker.
I probably need to add this back, not with the pid but with an
id that represents the "worker with more than one core"
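A sketch of such an id, with hypothetical helper names (not the optimizer's actual code):

```cpp
#include <sstream>
#include <string>

// Hypothetical: derive the result tag from the design-variable hash plus
// a stable id of the worker *group*, not from a process pid. All cores
// belonging to one multi-core worker then agree on the tag, while
// distinct worker groups still produce distinct tags.
std::string makeJobTag(const std::string& dvarHash, int workerGroupId) {
    std::ostringstream tag;
    tag << dvarHash << "_worker" << workerGroupId;
    return tag.str();
}
```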
@ineichen, can you point me to that structure?

Milestone: OPAL 2.0.0
## #293: Stripper Element not losing any particles (snuverink_j, 2019-04-06)
https://gitlab.psi.ch/OPAL/src/-/issues/293

Discovered by @ext-calvo_p. The stripper element is not recording any particles.
This is due to a forgotten `bunch->get_bounds()` statement. This was introduced in commits 60b4de13 and 57787997 (OPAL-2.0).
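A self-contained sketch of what the forgotten call recomputes, with a stand-in for OPAL's `Vector_t`:

```cpp
#include <algorithm>
#include <array>
#include <vector>

using Vector_t = std::array<double, 3>;  // stand-in for OPAL's Vector_t

// Stand-in for PartBunch::get_bounds(): recompute the bunch's bounding
// box from the current particle positions. Without this refresh the
// stripper logic works with stale bounds and never flags any particles.
void get_bounds(const std::vector<Vector_t>& R, Vector_t& rmin, Vector_t& rmax) {
    rmin = { 1e30,  1e30,  1e30};
    rmax = {-1e30, -1e30, -1e30};
    for (const auto& r : R) {
        for (int d = 0; d < 3; ++d) {
            rmin[d] = std::min(rmin[d], r[d]);
            rmax[d] = std::max(rmax[d], r[d]);
        }
    }
}
```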