src issues · https://gitlab.psi.ch/OPAL/src/-/issues

---
**Issue 105: RectangularDomain::getBoundaryStencil typo**
https://gitlab.psi.ch/OPAL/src/-/issues/105 · 2017-06-17 · snuverink_j (jochem.snuverink@psi.ch)

lines [51-53](https://gitlab.psi.ch/OPAL/src/blob/master/src/Solvers/RectangularDomain.cpp#L51):
```c++
S = -hr[0] * hr[2] / hr[1];
F = -hr[0] * hr[1] / hr[2];
S = -hr[0] * hr[1] / hr[2];
```
The second `S` assignment is likely a typo and should be `B`, but it would be good if someone could check the formulas.
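For clarity, this is presumably what the corrected block should look like, assuming `S`, `F`, and `B` are the south, front, and back coefficients of the 7-point stencil and `hr` holds the mesh spacings (a sketch of the suggested fix, not a verified one):

```c++
S = -hr[0] * hr[2] / hr[1];  // south: face area hx*hz divided by hy
F = -hr[0] * hr[1] / hr[2];  // front: face area hx*hy divided by hz
B = -hr[0] * hr[1] / hr[2];  // back: same magnitude as front (was a second S)
```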
Milestone: OPAL 1.6.1

---
**Issue 104: --version or --help crashes OPAL**
https://gitlab.psi.ch/OPAL/src/-/issues/104 · 2017-06-17 · adelmann

Milestone: OPAL 1.9.x · adelmann

---
**Issue 102: PSDUMPFRAME report is wrong in OPTION TELL=TRUE**
https://gitlab.psi.ch/OPAL/src/-/issues/102 · 2017-06-17 · adelmann

The following trivial OPAL input file:
```
OPTION, TELL=TRUE;
OPTION, PSDUMPFRAME=REFERENCE;
QUIT;
```
shows:
```
OPAL> Current settings of options:
OPAL> OPTION,ECHO=FALSE,INFO=TRUE,TRACE=FALSE,VERIFY=FALSE,WARN=TRUE,
OPAL> SEED=1.23457e+08,TELL=TRUE,PSDUMPFREQ=10,STATDUMPFREQ=10,
OPAL> PSDUMPEACHTURN=FALSE,PSDUMPLOCALFRAME=FALSE,**PSDUMPFRAME="GLOBAL"**,
OPAL> SPTDUMPFREQ=1,REPARTFREQ=10,REBINFREQ=100,SCSOLVEFREQ=1,
OPAL> MTSSUBSTEPS=1,REMOTEPARTDEL=0,SCAN=FALSE,RHODUMP=FALSE,
OPAL> EBDUMP=FALSE,CSRDUMP=FALSE,AUTOPHASE=0,PPDEBUG=FALSE,
OPAL> SURFDUMPFREQ=-1,NUMBLOCKS=0,RECYCLEBLOCKS=0,NLHS=1,CZERO=FALSE,
OPAL> RNGTYPE="RANDOM",SCHOTTKYCORR=FALSE,SCHOTTKYRENO=-1,ENABLEHDF5=TRUE,
OPAL> ASCIIDUMP=FALSE,BOUNDPDESTROYFQ=10,BEAMHALOBOUNDARY=0,
OPAL> CLOTUNEONLY=FALSE,VERSION=10000;
```

Milestone: OPAL 1.6.1 · adelmann

---
**Issue 98: Placement of elements in 3D coordinates not possible anymore**
https://gitlab.psi.ch/OPAL/src/-/issues/98 · 2017-06-17 · kraus
Placement of elements in 3D coordinates (see attachment) was possible; this isn't the case anymore.
This issue has to do with the fact that I added the attribute ELEMEDGE and introduced access methods.
[Niowave_first_korrektur.dat](/uploads/ad152c3a3e13fa0ec231105ec4711817/Niowave_first_korrektur.dat)
[Banana_ref.in](/uploads/68db2ec88393f764cfebd520466bf2de/Banana_ref.in)
[ez_normalizedcathodepos_4.txt](/uploads/8015defbc8c4082e95296f6ff3133670/ez_normalizedcathodepos_4.txt)

Milestone: OPAL 1.9.x · kraus

---
**Issue 96: DKS 1.1.0 for OPAL 1.6 branch**
https://gitlab.psi.ch/OPAL/src/-/issues/96 · 2017-06-17 · gsell

DKS 1.1.0 must be used in OPAL 1.6, so we have the same toolchain for OPAL 1.6 and master.

---
**Issue 94: Error detected by function "FileStream::fillLine()"**
https://gitlab.psi.ch/OPAL/src/-/issues/94 · 2017-06-17 · ganz_p
I ran some simulations, and at a certain point all simulations gave me the following error:
[Terminal.out](/uploads/8d537807dbf8586b2ec6f08e87a708ae/Terminal.out)
I've tried to vary the opal command (with and without `mpirun`, or `--use-dks`), but all files, even files which already ran well gave me that error.
The Opal Version I use is: `OPAL/1.5.1-20170217`
Example .in file:
[100MeV_InvQuad_1_NoColl.in](/uploads/44d81f1f63a2ffffc828556e7944cfdb/100MeV_InvQuad_1_NoColl.in)

Milestone: OPAL 1.6.0 · adelmann

---
**Issue 91: Documentation for attribute DESIGNENERGY of kickers missing**
https://gitlab.psi.ch/OPAL/src/-/issues/91 · 2017-06-17 · kraus

Milestone: OPAL 2.0.0 · kraus

---
**Issue 90: OPAL-Cycl - COMET**
https://gitlab.psi.ch/OPAL/src/-/issues/90 · 2017-06-17 · adelmann
I have been using a locally compiled code with version number 1.2.1 SVN. I have also run the program through module load with version number 1.4.3. The loss files are basically the same.
Attached is the input file vc.in. Two phase slits CMA1 and CMA2 work quite well. However, the loss data from the vertical collimators, for example, from the pair VC7 and VC8, often register the same particles.
[vc.in](/uploads/8630def3fe171c14cc64887dc9991232/vc.in)

Milestone: OPAL 1.6.0 · adelmann

---
**Issue 86: OPAL-1.6 check DKS version used to compile**
https://gitlab.psi.ch/OPAL/src/-/issues/86 · 2017-06-17 · Uldis Locans

OPAL-1.6 does not check which DKS version is used, so compilation errors are possible due to the wrong versions.

Milestone: OPAL 1.6.0

---
**Issue 85: Error in compiling OPAL-1.6 with -DENABLE_DKS=1**
https://gitlab.psi.ch/OPAL/src/-/issues/85 · 2017-06-17 · Valeria Rizzoglio
I have the following modules loaded:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 4) hdf5/1.8.18 7) trilinos/12.10.1 10) OpenBLAS/0.2.19 13) opal-toolschain/1.6
2) openmpi/1.10.4 5) H5hut/2.0.0rc3 8) root/6.08.02 11) cuda/8.0.44
3) boost/1.62.0 6) gsl/2.2.1 9) cmake/3.6.3 12) dks/1.0.1
```
and I got the following error message:
```
/home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)’:
/home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp:1094:52: error: no matching function for call to ‘DKSBase::callInitRandoms(int&, int&)’
dksbase.callInitRandoms(size, Options::seed);
^
In file included from /home/scratch/opal/src/ippl/src/Utility/IpplInfo.h:59:0,
from /home/scratch/opal/src/ippl/src/Message/Message.hpp:29,
from /home/scratch/opal/src/ippl/src/Message/Message.h:618,
from /home/scratch/opal/src/ippl/src/AppTypes/Vektor.h:16,
from /home/scratch/opal/src/src/Classic/Algorithms/Vektor.h:6,
from /home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.hh:13,
from /home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/opt/psi/MPI/dks/1.0.1/openmpi/1.10.4/gcc/5.4.0/include/DKSBase.h:1077:7: note: candidate: int DKSBase::callInitRandoms(int)
int callInitRandoms(int size);
^
/opt/psi/MPI/dks/1.0.1/openmpi/1.10.4/gcc/5.4.0/include/DKSBase.h:1077:7: note: candidate expects 1 argument, 2 provided
[ 60%] Building CXX object src/CMakeFiles/OPALib.dir/Classic/Utilities/DivideError.cpp.o
```OPAL 1.6.0https://gitlab.psi.ch/OPAL/src/-/issues/81Segfault within Surfacephysics2017-06-17T20:38:35+02:00krausSegfault within SurfacephysicsWith input file [Degrader_70.in](/uploads/4971dc04fcdf6cbee66b92aea9f83832/Degrader_70.in) I got a segmentation fault. Suddenly an incredibly large number of additional particles were generated, then OPAL crashed. Couldn't reproduce it a...With input file [Degrader_70.in](/uploads/4971dc04fcdf6cbee66b92aea9f83832/Degrader_70.in) I got a segmentation fault. Suddenly an incredibly large number of additional particles were generated, then OPAL crashed. Couldn't reproduce it anymore, but something isn't correct.OPAL 2.0.0krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/76Make normalization of 3D fieldmaps optional2017-06-17T20:38:35+02:00krausMake normalization of 3D fieldmaps optionalCurrently the z-component of the (electric, if present) field of a 3D fieldmaps is normalized. The user should be able to disable this normalizaton to simulate e.g. transverse deflecting cavities.Currently the z-component of the (electric, if present) field of a 3D fieldmaps is normalized. The user should be able to disable this normalizaton to simulate e.g. transverse deflecting cavities.OPAL 2.0.0krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/70Regressiontest RingCyclotronMatched failed2017-08-11T10:50:28+02:00adelmannRegressiontest RingCyclotronMatched failedRegressiontest RingCyclotronMatched is failing with OPAL-1.5.x
Regressiontest RingCyclotronMatched is failing with OPAL-1.5.x:
```
OPAL{0}> *** User error detected by function "ClosedOrbitFinder::findOrbit()"
OPAL{0}> *** in line 84 of file "RingCyclotronMatched.in" at end of statement:
OPAL{0}> RUN,METHOD="CYCLOTRON-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST1;
OPAL{0}> p_{r}^2 > p^{2} (defined in Gordon paper) --> Square root of negative number.
```

Milestone: OPAL 1.9.x · frey_m

---
**Issue 69: VariableRFCavity is failing tests**
https://gitlab.psi.ch/OPAL/src/-/issues/69 · 2017-06-17 · ext-rogers_c

Test incorrectly returns True during calls to apply (indicating particles "out of aperture" when they should not be).

Assignee: ext-rogers_c

---
**Issue 63: Unit tests throw segmentation fault**
https://gitlab.psi.ch/OPAL/src/-/issues/63 · 2017-06-17 · ext-rogers_c

Looks like somehow std::cout or std::cerr got set to NULL. I note there are some blobs of code which massage the output buffers to shut up noisy tests. This is handled on a per-test basis, and somewhere I think something went wrong. I would do this as a test fixture, allowing the code to be implemented once per test. NB: test fixture docs are here:
https://github.com/google/googletest/blob/master/googletest/docs/Primer.md
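For reference, a minimal sketch of the suggested fixture (names are made up; the point is that the buffer is redirected in SetUp() and restored exactly once per test in TearDown(), so std::cout can never be left pointing at a stale buffer):

```c++
#include <gtest/gtest.h>
#include <iostream>
#include <sstream>

// Silences std::cout for every test derived from this fixture and
// always restores the original buffer afterwards.
class QuietTest : public ::testing::Test {
protected:
    void SetUp() override {
        oldBuf_ = std::cout.rdbuf(sink_.rdbuf());  // redirect
    }
    void TearDown() override {
        std::cout.rdbuf(oldBuf_);                  // restore, never NULL
    }
private:
    std::ostringstream sink_;
    std::streambuf*    oldBuf_ = nullptr;
};

TEST_F(QuietTest, Example) {
    std::cout << "noisy output swallowed by the fixture\n";
    EXPECT_EQ(1 + 1, 2);
}
```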
Assignee: ext-rogers_c

---
**Issue 59: OpalRing components**
https://gitlab.psi.ch/OPAL/src/-/issues/59 · 2017-06-17 · ext-rogers_c

Looks like something changed in the way classic AbsBeamline/Component is handled. Whenever I do a field lookup I get a segv. On digging, I see that there is an aperture_m sitting somewhere up the inheritance tree that is not set by default, and field lookups now seem to throw a segv if it is not set. This breaks the unit tests.
Fix would be to either:
* Check for NULL in aperture_m before assuming it is set
* Set it by default
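A minimal sketch of the two options just listed, with hypothetical stand-ins for the real classic/AbsBeamline types (the actual interface is not shown in this report):

```c++
#include <cmath>
#include <memory>

struct Aperture {
    double halfWidth, halfHeight;
    bool isInside(double x, double y) const {
        return std::abs(x) < halfWidth && std::abs(y) < halfHeight;
    }
};

struct Element {
    // Option 2: set a default (huge rectangular aperture) so the pointer
    // is always valid and the inner tracking loop stays branch-free.
    Element() : aperture_m(new Aperture{1e6, 1e6}) {}

    // Option 1: check for NULL before assuming the aperture is set
    // (one extra branch per field lookup).
    bool isInsideTransverse(double x, double y) const {
        return !aperture_m || aperture_m->isInside(x, y);
    }

    std::unique_ptr<Aperture> aperture_m;
};
```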
If it were anywhere else, I would do a NULL check. But because it is right in the inner tracking loop, my preference would be to set aperture_m to a stupid default value (e.g. a very large rectangular aperture).

Assignee: ext-rogers_c

---
**Issue 25: SBEND3D - Local and Global Frame: different particle trajectories**
https://gitlab.psi.ch/OPAL/src/-/issues/25 · 2017-12-18 · Valeria Rizzoglio
The tracking of the particles in the **LOCAL frame** reveals different behavior with respect to the **GLOBAL frame**.
**Field Map**
* 120° Combined Function Magnet
* Reference energy: 230 MeV
**Beam distribution**
* From file: 10000 protons
* First 12 particles are:
```
#ID0: Reference particle
#ID1: Reference particle
#ID2 - #ID9: Particles with defined position and divergence offsets
#ID10: Off-momentum particle (-11.5 %) -> py = -0.08531
#ID11: Off-momentum particle (+13.5 %) -> py = 0.1001511
#ID12: Dispersion function (1 %) -> py = 0.0074186
```
**Tracking** for particles #ID1 (reference), #ID10 and #ID11 (with momentum shift)
* **Global Frame**
![GlobalFrame](/uploads/3ade8aaf8db834db004c57cf8d1e49fa/GlobalFrame.png)
* **Local Frame**
![LocalFrame](/uploads/f079b9a5ee7e43b0918223b3d415f4ba/LocalFrame.png)
**Conclusion**
The particle trajectories in the **LOCAL frame**:
* show a "jump" at 3.8 m, where the field map ends.
* cross the reference trajectory, while in the **GLOBAL frame** the trajectories are separated.

Milestone: OPAL 1.5.2 · ext-rogers_c

---
**Issue 14: Particles stored in trackOrbit.dat**
https://gitlab.psi.ch/OPAL/src/-/issues/14 · 2018-01-05 · Valeria Rizzoglio
According to the OPAL manual (page 40):
**Multi-particle tracking mode**
The intermediate phase space data of the central particle (with ID 0) and an off-centering particle (with ID 1) are stored in an ASCII file.
Concerning the particle with *ID0*:
* particle position is not updated in case OFFSETY > 0 is set in the distribution definition. The tracking of this particle does not reflect the beam behavior (because of the offset)
* Is the general idea: ID0 particle = reference particle?
Concerning the particle with *ID1*:
* ***Distribution from file:*** the second particle in the distribution file is used as ID1
* ***Generated distribution:*** It seems that a random particle from the distribution is set as ID1
A possible suggestion:
* ***Distribution from file:*** the first and the second particle in the file are used as ID0 and ID1, respectively. The user is completely free to decide which particles to track.
* ***Generated distribution (Option 1):*** ID0 is by default assigned to the reference particle (with updated offset). An option can be added to the DISTRIBUTION command where the user can specify which particle to use as ID1 (i.e. DISPERSION, CENTROID, USERDEF, or NULL = not stored). This means that the first two particles generated by OPAL are replaced with ID0 (reference) and ID1, if option NULL is not specified.
* ***Generated distribution (Option 2):*** An option can be added to the DISTRIBUTION command where the user can specify which particle to use as ID0 and ID1 (i.e. DISPERSION, CENTROID, USERDEF, or NULL = not stored).
A possible problem could arise in the case of a multi-distribution or a vector of distributions.

Milestone: OPAL 1.5.1 · adelmann

---
**Issue 119: Periodic BC's**
https://gitlab.psi.ch/OPAL/src/-/issues/119 · 2021-07-06 · winklehner_d

It seems that when I set BCFFTT = PERIODIC, not only the z-direction but all directions are automatically set to periodic boundary conditions. @uldis_l I am assuming "UL" in the comment of PartBunch::setBCForDCBeam() is you. Was there a particular reason to do this? In my understanding, a DC beam would have open BC in x and y and periodic BC in z.
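A sketch of the behaviour the report expects (illustrative only, not OPAL's actual PartBunch or field-layout API): open faces transversally, periodic only along z:

```c++
#include <array>

// Illustrative only: per-dimension boundary conditions for a DC beam.
enum class FaceBC { OPEN, PERIODIC };

std::array<FaceBC, 3> bcForDCBeam() {
    return {FaceBC::OPEN,       // x: open
            FaceBC::OPEN,       // y: open
            FaceBC::PERIODIC};  // z: periodic along the beam axis
}
```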
In addition, the manual calls the parameters "BCFFTZ" and "PARFFTZ", but OPAL tells me those don't exist and throws an exception; I have to use "BCFFTT" and "PARFFTT". Just a minor bug.

Assignees: kraus, adelmann, kraus

---
**Issue 120: Particle Termination**
https://gitlab.psi.ch/OPAL/src/-/issues/120 · 2019-12-12 · winklehner_d

Hi,
Anybody else noticing that particles are not terminated correctly anymore if Bin is set to -1 (which is the usual way in the CyclotronTracker) since last week's commits to the head? It still works for the BoundaryGeometry, but not, for example, for the Cyclotron outer boundaries. I think it might have to do with removing all the boundp's.
Best,
Daniel

Milestone: OPAL-2.2.0 · winklehner_d

---
**Issue 123: No stat-file output in case of MTS tracking**
https://gitlab.psi.ch/OPAL/src/-/issues/123 · 2017-07-05 · frey_m

Running the regression test [RingCyclotronMTS](https://gitlab.psi.ch/OPAL/regression-tests/blob/master/RegressionTests/RingCyclotronMTS/RingCyclotronMTS.in), however with `nsteps = 2000` and `SPTDUMPFREQ = 10` -- as in the test [RingCyclotron](https://gitlab.psi.ch/OPAL/regression-tests/blob/master/RegressionTests/RingCyclotron/RingCyclotron.in) using RK-4 -- I get only one dump in the RingCyclotronMTS.stat.

Milestone: OPAL 1.9.x · frey_m

---
**Issue 125: Vector of time steps: error in the parser**
https://gitlab.psi.ch/OPAL/src/-/issues/125 · 2017-07-13 · Valeria Rizzoglio
If I track the particles using a vector of time steps:
```
TRACK, LINE=BEAMLINE_TOT,
BEAM=BEAM_G3_LA1,
MAXSTEPS={5e+08,5e+08,5e+08},
...[PROSCAN-G3-230.in](/uploads/0f541b042bd39fdf2fe62688529cc406/PROSCAN-G3-230.in)
If I track the particles using a vector of time steps:
```
TRACK, LINE=BEAMLINE_TOT,
BEAM=BEAM_G3_LA1,
MAXSTEPS={5e+08,5e+08,5e+08},
DT={5*PICOSECONDS,1*PICOSECONDS,5*PICOSECOND},
ZSTOP={6.145,6.75,16}
```

Milestone: OPAL 1.6.0 · kraus

---
**Issue 128: Let each distribution in array of distributions have its own offset in R and P.**
https://gitlab.psi.ch/OPAL/src/-/issues/128 · 2017-07-15 · kraus

When providing an array of distributions and each distribution has its own OFFSET{X|Y|Z|PX|PY|PZ}, then, so far, all distributions use the offsets of the first distribution.

Milestone: OPAL 1.6.0 · kraus

---
**Issue 129: Array of distributions containing FROMFILE**
https://gitlab.psi.ch/OPAL/src/-/issues/129 · 2017-08-13 · kraus

This won't work properly because e.g. the number of particles in a FROMFILE distribution is fixed. Thus, when computing the number of particles the other distributions should contain, we first have to subtract the number of particles in the FROMFILE distributions.

Milestone: OPAL 1.6.0 · kraus

---
**Issue 131: Segmentation fault - dks - SurfacePhysics Collimators**
https://gitlab.psi.ch/OPAL/src/-/issues/131 · 2021-06-10 · Valeria Rizzoglio
I got a segmentation fault running this input file: [PROSCAN-G3-230.in](/uploads/7820209c33311fcdd68601832deacf30/PROSCAN-G3-230.in). It includes SurfacePhysics on 3 consecutive collimators.
The error message:
```
ParallelTTracker {0}> Coll/Deg statistics: bunch to material 2 redifused 0 stopped 1
[opalrunner:20589] *** Process received signal ***
[opalrunner:20589] Signal: Segmentation fault (11)
[opalrunner:20589] Signal code: Address not mapped (1)
[opalrunner:20589] Failing at address: 0x1b70f000
[opalrunner:20589] [ 0] /lib64/libc.so.6[0x32e9632660]
[opalrunner:20589] [ 1] opal(_ZN14ParticleAttribI6VektorIdLj3EEE7destroyERKSt6vectorISt4pairImmESaIS5_EEb+0x1f0)[0xe531d0]
[opalrunner:20589] [ 2] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE14performDestroyEv+0xc2)[0xdac9e2]
[opalrunner:20589] [ 3] opal(_ZN21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES1_EE6updateER16IpplParticleBaseIS4_EPK14ParticleAttribIcE+0x45)[0xdae095]
[opalrunner:20589] [ 4] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE6updateEv+0x1a)[0xdae60a]
[opalrunner:20589] [ 5] opal(_ZN9PartBunch6boundpEv+0x406)[0xe225e6]
[opalrunner:20589] [ 6] opal(_ZN16ParallelTTracker21computeExternalFieldsEv+0xf19)[0x107ec79]
[opalrunner:20589] [ 7] opal(_ZN16ParallelTTracker21executeDefaultTrackerEv+0x637)[0x1084b77]
[opalrunner:20589] [ 8] opal(_ZN16ParallelTTracker7executeEv+0x1f)[0x108566f]
[opalrunner:20589] [ 9] opal(_ZN8TrackRun7executeEv+0x751)[0x104c4b1]
[opalrunner:20589] [10] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [11] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [12] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [13] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [14] opal(_ZN8TrackCmd7executeEv+0x343)[0xd6ccc3]
[opalrunner:20589] [15] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [16] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [17] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [18] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [19] opal(_ZNK10OpalParser3runEP11TokenStream+0x6a)[0xcb9cea]
[opalrunner:20589] [20] opal(main+0x8e8)[0xc48658]
[opalrunner:20589] [21] /lib64/libc.so.6(__libc_start_main+0xfd)[0x32e961ed1d]
[opalrunner:20589] [22] opal[0xc3fab5]
[opalrunner:20589] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 20589 on node opalrunner exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
```
I tried with two different time steps (1 ps and 5 ps) and I got the same error. The same file runs to the end without the option `--use-dks`.
Run configuration: opalrunner with 8 cores
Modules load:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 3) OPAL/1.6.0rc3 5) root/6.08.02 7) Tcl/8.6.4 9) Python/2.7.12 11) gsl/2.2.1
2) openmpi/1.10.4 4) OPAL/1.6 6) openssl/1.0.2j 8) Tk/8.6.4 10) boost/1.62.0 12) H5root/1.3.2rc4-1
```

Assignees: kraus, adelmann, kraus

---
**Issue 132: _M_range_check error**
https://gitlab.psi.ch/OPAL/src/-/issues/132 · 2017-08-13 · winklehner_d
Since pulling today, this happens:
```
Error{1}> *** Error:
Error{1}> *** in line 86 of file "RFQ_VECC-T.in":
Error{1}> RUN,METHOD="PARALLEL-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST;
Error{1}> vector::_M_range_check
```
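For context, `vector::_M_range_check` is the text libstdc++ puts into the `std::out_of_range` exception thrown by `std::vector::at()` on an out-of-bounds index, e.g.:

```c++
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<double> v(3);
    try {
        v.at(5) = 1.0;  // bounds-checked access, index 5 >= size 3
    } catch (const std::out_of_range& e) {
        // with libstdc++, e.what() begins with "vector::_M_range_check"
        std::cout << e.what() << '\n';
    }
}
```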
Any insights, anyone? @kraus, did you write something about distributions now being arrays? @adelmann?

---
**Issue 133: BeamLine fails isInside test during OrbitThreader execute() when Aperture CIRCLE is defined in RFCavity**
https://gitlab.psi.ch/OPAL/src/-/issues/133 · 2017-08-02 · winklehner_d

It took me a long time to find out why my RFCavity was not in the imap_m generated by the OrbitThreader during execute(), so I wasn't able to test this with other apertures, but it seems that having a "CIRCLE(0.008, 1)" aperture defined in the RFCavity element prevents it from being added to the elementSet list in the getElements(nextR) function. I think the culprit is somehow the ElementBase::isInsideTransverse() function.

Assignee: kraus

---
**Issue 106: Segfault in case of Material at beginning of beamline**
https://gitlab.psi.ch/OPAL/src/-/issues/106 · 2017-07-24 · frey_m
We run [sim.in](/uploads/94c1f0db20f572ae97a6a320574d9545/sim.in).
Error output:
```
OPAL>
OPAL> --- BEGIN FIELD LIST ---------------------------------------------------------------
OPAL>
OPAL> --- 0.2 m -- 0.200228 m -- has surface physics ------------------------------------
OPAL> DMA_DEG1
OPAL> --- 0.200228 m -- 1.20023 m -- -----------------------------------------------------
OPAL> D1
OPAL>
OPAL> --- END FIELD LIST -----------------------------------------------------------------
OPAL>
[opalrunner:18498] *** Process received signal ***
[opalrunner:18498] Signal: Segmentation fault (11)
[opalrunner:18498] Signal code: Address not mapped (1)
[opalrunner:18498] Failing at address: 0x30
[opalrunner:18498] [ 0] /lib64/libpthread.so.0[0x32ea20f7e0]
[opalrunner:18498] [ 1] opal(_ZN12OpalBeamline14switchElementsERKdS1_S1_RKb+0x1cf)[0xf1829f]
[opalrunner:18498] [ 2] opal[0x10758bd]
[opalrunner:18498] [ 3] opal(_ZN16ParallelTTracker21executeDefaultTrackerEv+0x2c0)[0x107c520]
[opalrunner:18498] [ 4] opal(_ZN16ParallelTTracker7executeEv+0x1f)[0x107d35f]
[opalrunner:18498] [ 5] opal(_ZN8TrackRun7executeEv+0x751)[0x1043b51]
[opalrunner:18498] [ 6] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcac6c5]
[opalrunner:18498] [ 7] opal(_ZNK10OpalParser11parseActionER9Statement+0x11a)[0xcb062a]
[opalrunner:18498] [ 8] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb0076]
[opalrunner:18498] [ 9] opal(_ZNK10OpalParser3runEv+0x2c)[0xcb158c]
[opalrunner:18498] [10] opal(_ZN8TrackCmd7executeEv+0x343)[0xd63cb3]
[opalrunner:18498] [11] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcac6c5]
[opalrunner:18498] [12] opal(_ZNK10OpalParser11parseActionER9Statement+0x11a)[0xcb062a]
[opalrunner:18498] [13] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb0076]
[opalrunner:18498] [14] opal(_ZNK10OpalParser3runEv+0x2c)[0xcb158c]
[opalrunner:18498] [15] opal(_ZNK10OpalParser3runEP11TokenStream+0x6a)[0xcb0a8a]
[opalrunner:18498] [16] opal(main+0x8e8)[0xc3f858]
[opalrunner:18498] [17] /lib64/libc.so.6(__libc_start_main+0xfd)[0x32e961ed1d]
[opalrunner:18498] [18] opal[0xc36cb5]
[opalrunner:18498] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 18498 on node opalrunner exited on signal 11 (Segmentation fault).
```
It doesn't crash when we add a drift in front of the material.

Milestone: OPAL 1.6.1 · adelmann

---
**Issue 103: Overlap of field maps OPAL-cycl**
https://gitlab.psi.ch/OPAL/src/-/issues/103 · 2017-07-24 · adelmann
Communicated by @zhang_h
Case maps for COMET.
We have four non-superpose RF maps and one superpose electrostatic map. The read-in loop could be stopped at the third RF map, without reading the electrostatic map. We may put the electrostatic map in front, but it could cause other problems.

Milestone: OPAL 1.9.x · adelmann

---
**Issue 72: Removal of data from a particle without reducing number of particles**
https://gitlab.psi.ch/OPAL/src/-/issues/72 · 2017-07-24 · kraus
This leads to wrong results: https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Algorithms/PartBunch.cpp#L1930. It is as if position and momentum were replaced with zero.
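A toy illustration of why this is wrong (not OPAL code): zeroing a particle's slots instead of destroying it biases every quantity averaged over the bunch, because the dead particle still counts toward the size:

```c++
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> pz = {1.0, 2.0, 3.0};

    pz[1] = 0.0;  // "removed" by zeroing: still counted in the size
    std::cout << std::accumulate(pz.begin(), pz.end(), 0.0) / pz.size()
              << '\n';  // 1.3333..., biased toward zero

    std::vector<double> pz2 = {1.0, 3.0};  // actually removed
    std::cout << std::accumulate(pz2.begin(), pz2.end(), 0.0) / pz2.size()
              << '\n';  // 2.0, the correct mean
}
```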
Please remember to add the patch that solves this issue to the master as well.

Assignee: adelmann

---
**Issue 137: Segmentation fault - Degrader 70 MeV**
https://gitlab.psi.ch/OPAL/src/-/issues/137 · 2021-06-10 · Valeria Rizzoglio
I am trying to test the influence of the time step on the results of the OPAL Monte Carlo using the Multi-Slabs degrader for 70 MeV ([Degrader_70.in](/uploads/bc2a35adc56108066470d475851794f4/Degrader_70.in)).
I set the time step to 1e-10 s and got a segmentation fault. So I did a few tests, trying different configurations of time steps, number of cores, and options (ENABLERUTHERFORD = TRUE/FALSE, or with/without GPU).
-- **Configuration 1**
- protons = 1e5, DT = 1e-10 s, cores = 4, with dks and ENABLERUTHERFORD = TRUE
- result: segmentation fault [Config1.out](/uploads/e22237cd275e223eafc1f393b7f00c3f/Config1.out)
-- **Configuration 2**
- protons = 1e5, DT = 1e-10 s, cores = 4, with dks and ENABLERUTHERFORD = FALSE
- result: OK [Config2.out](/uploads/e1744843830b2f7480ec1d210f9100e2/Config2.out)
-- **Configuration 3**
- protons = 1e5, DT = 1e-10 s, cores = 4, without dks and ENABLERUTHERFORD = TRUE
- result: segmentation fault [Config3.out](/uploads/1729b7c9fa264b2d19ef0b2ab8a30d2a/Config3.out)
-- **Configuration 4**
- protons = 1e7, DT = 1e-10 s, cores = 4, without dks and ENABLERUTHERFORD = TRUE
- result: OPAL stops at 4.4 mm with 4 protons while the ZSTOP is 4.3 m [Config4.out](/uploads/cd16cc8612b11ca5a93c4d2838406fab/Config4.out)
-- **Configuration 4.b**
- protons = 1e5, DT = 1e-10 s, cores = 8, without dks and ENABLERUTHERFORD = TRUE
- result: segmentation fault [Config4b.out](/uploads/2fe58a350447fc863a07bdf0f398bb93/Config4b.out)
-- **Configuration 5** (on Merlin)
- protons = 1e7, DT = 1e-10 s, cores = 32, without dks and ENABLERUTHERFORD = FALSE
- result: OK
-- **Configuration 6**
- protons = 1e5, DT = 1e-11 s, cores = 4, with dks and ENABLERUTHERFORD = FALSE
- result: OK [Config6.out](/uploads/85b27a193d0e9de8d463b99502220dfa/Config6.out)
-- **Configuration 7**
- protons = 1e5, DT = 1e-11 s, cores = 4, with dks and ENABLERUTHERFORD = TRUE
- result: OK [Config7.out](/uploads/89264c89f20abc3bfc933de6acaf2e52/Config7.out)
Run on opalrunner and Merlin with these settings:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 4) OPAL/1.6 7) Tcl/8.6.4 10) boost/1.62.0
2) openmpi/1.10.4 5) root/6.08.02 8) Tk/8.6.4 11) gsl/2.2.1
3) OPAL/1.6.0rc3 6) openssl/1.0.2j 9) Python/2.7.12 12) H5root/1.3.2rc4-1
```

Assignees: gsell, adelmann, gsell

---
**Issue 138: Setting autophase option without a cavity in beamline throws mysterious error**
https://gitlab.psi.ch/OPAL/src/-/issues/138 · 2017-08-05 · ext-hall_c
With `"OPTION, AUTOPHASE=4;"` in my input file, when I use a beamline without a cavity, I see an error like:
`opal(7879,0x7fff7f140000) malloc: *** error for object 0x7fff9a15b9f3: pointer being freed was not allocated`
Turning autophase off allowed my input file to run without error, but this error was not very informative and it took quite a while to find the culprit. It might be helpful if making this mistake generated a specific error message.

Milestone: OPAL 1.6.0 · kraus

---
**Issue 140: Particle delete**
https://gitlab.psi.ch/OPAL/src/-/issues/140 · 2017-08-05 · adelmann
With OPAL-1.6 (newest pull) and regression test PSIGUN-1, Bin 0 gets no particles at timestep 2:
....
OPAL {0}[3]> * Wrote beam statistics.
Ippl{0}[2]> Bin 0 gamma = 1.00717e+00; NpInBin= 667
Ippl{0}[2]> Bin 1 has no particles
Ippl{0}[2]> Bin 2 has no particles
Ippl{0}[2]> Bin 3 has no particles
Ippl{0}[2]> Bin 4 has no particles
Ippl{0}[3]> * Bin number: 2 has emitted all particles (new emit).
ParallelTTracker {0}> * Deleted 667 particles, remaining 4755 particles
ParallelTTracker {0}[3]> 12:03:09 Step 1 at -0.053 [mm] t= 1.060e-11 [s] E= 5.388 [keV]
...
OPAL {0}>
OPAL {0}[3]> * Wrote beam statistics.
Ippl{0}[2]> Bin 0 has no particles
Ippl{0}[2]> Bin 1 gamma = 1.01054e+00; NpInBin= 4755
Ippl{0}[2]> Bin 2 has no particles
Later on we are running into
I + M < LocalSize
@kraus Is there still an autophase problem?

Milestone: OPAL 1.6.0 · kraus

---
**Issue 143: BoundaryGeometries VTK output produces odd results**
https://gitlab.psi.ch/OPAL/src/-/issues/143 · 2018-05-16 · kraus

Used the SAAMG-Test-1 to produce the [attached screenshot](/uploads/695901d8c9e8a2e7afc37278f666eef7/Pipe_1m_10cm.png) (serial and parallel).

Assignee: gsell

---
**Issue 146: Rewrite the ArbitraryDomain class.**
https://gitlab.psi.ch/OPAL/src/-/issues/146 · 2021-06-09 · kraus

Currently the ArbitraryDomain class only works when it is partitioned in the z-direction. Rewrite it such that the global linear indexing also works with PARFFTX=TRUE and/or PARFFTY=TRUE.

Assignees: frey_m, winklehner_d, frey_m

---
**Issue 149: Coulomb / Rutherford scattering**
https://gitlab.psi.ch/OPAL/src/-/issues/149 · 2019-05-11 · kraus

Does multiplying R twice with 1000 really make sense?
- [first time here](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Solvers/CollimatorPhysics.cpp#L773)
- [second time here](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Solvers/CollimatorPhysics.cpp#L792)
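For illustration, if both statements are meant as a metres-to-millimetres conversion, applying the factor twice scales the value by 10^6 instead of 10^3 (a toy example, not the actual code):

```c++
#include <iostream>

int main() {
    double R = 0.5;          // say R is a position in metres
    R = R * 1000.0;          // first conversion: 0.5 m -> 500 mm
    R = R * 1000.0;          // applied again: 500000, no longer millimetres
    std::cout << R << '\n';  // 1e6 times the metre value
}
```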
@adelmann @baumgarten?

Assignee: kraus

---
**Issue 151: OPAL does not compile with DKS enabled after recent commits**
https://gitlab.psi.ch/OPAL/src/-/issues/151 · 2017-08-14 · gsell
@kraus, @uldis_l:
```
[ 59%] Building CXX object src/CMakeFiles/OPALib.dir/Classic/Structure/LossDataSink.cpp.o
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:38:30: error: ‘const int CollimatorPhysics::numpar’ is not a static data member of ‘class CollimatorPhysics’
const int CollimatorPhysics::numpar = 13;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp: In constructor ‘CollimatorPhysics::CollimatorPhysics(const string&, ElementBase*, std::__cxx11::string&, bool, double)’:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:77:7: error: class ‘CollimatorPhysics’ does not have any field named ‘curandInitSet’
, curandInitSet(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:78:7: error: class ‘CollimatorPhysics’ does not have any field named ‘ierr’
, ierr(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:79:7: error: class ‘CollimatorPhysics’ does not have any field named ‘maxparticles’
, maxparticles(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:80:7: error: class ‘CollimatorPhysics’ does not have any field named ‘numparticles’
, numparticles(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:81:7: error: class ‘CollimatorPhysics’ does not have any field named ‘par_ptr’
, par_ptr(NULL)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:82:7: error: class ‘CollimatorPhysics’ does not have any field named ‘mem_ptr’
, mem_ptr(NULL)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::applyDKS(PartBunch&, size_t)’:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:875:58: error: cannot allocate an object of abstract type ‘Degrader’
Degrader deg = dynamic_cast<Degrader *>(element_ref_m);
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:16:0,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Degrader.h:38:7: note: because the following virtual functions are pure within ‘Degrader’:
class Degrader: public Component {
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Component.h:26:0,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:14,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/ElementBase.h:190:29: note: virtual BGeometryBase& ElementBase::getGeometry()
virtual BGeometryBase &getGeometry() = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/ElementBase.h:195:35: note: virtual const BGeometryBase& ElementBase::getGeometry() const
virtual const BGeometryBase &getGeometry() const = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/ElementBase.h:311:26: note: virtual ElementBase* ElementBase::clone() const
virtual ElementBase *clone() const = 0;
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:14:0,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Component.h:64:22: note: virtual EMField& Component::getField()
virtual EMField &getField() = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Component.h:69:28: note: virtual const EMField& Component::getField() const
virtual const EMField &getField() const = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:875:14: error: cannot declare variable ‘deg’ to be of abstract type ‘Degrader’
Degrader deg = dynamic_cast<Degrader *>(element_ref_m);
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:878:60: error: no matching function for call to ‘CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader&, size_t&)’
setupCollimatorDKS(bunch, deg, numParticlesInSimulation);
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:0:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:110:10: note: candidate: void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)
void setupCollimatorDKS(PartBunch &bunch, Degrader *deg, size_t numParticlesInSimulation);
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:110:10: note: no known conversion for argument 2 from ‘Degrader’ to ‘Degrader*’
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)’:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:1063:51: error: ‘numpar’ was not declared in this scope
par_mp = dksbase_m.allocateMemory<double>(numpar, ierr_m);
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:1082:50: error: ‘class Degrader’ has no member named ‘getZSize’
double params[numpar_ms] = {zBegin, deg->getZSize(), rho_m, Z_m,
^
make[2]: *** [src/CMakeFiles/OPALib.dir/Classic/Solvers/CollimatorPhysics.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [src/CMakeFiles/OPALib.dir/all] Error 2
make: *** [all] Error 2
```

Assignee: kraus

---
**Issue 152: More than 1 coworker**
https://gitlab.psi.ch/OPAL/src/-/issues/152 · 2019-01-10 · adelmann

**--num-coworkers=2** does not work. The simulation of the first generation is not terminating.

Assignee: Yves Ineichen

---
**Issue 153: Constraints validation fails**
https://gitlab.psi.ch/OPAL/src/-/issues/153 · 2017-11-08 · frey_m
I tried out the constraints with the condition that the number of particles should be greater than zero.
```
...
//c1: CONSTRAINT, EXPR="numParticles > 0";
//objs: OBJECTIVES=(dpeak1,dpeak2,dpeak3_5);
//constrs: CONSTRAINTS = (c1);
//opt: OPTIMIZE, OBJECTIVES = objs, DVARS = dvars, CONSTRAINTS = constrs;
...
```
This is a dummy constraint since in our simulation we lose no particles. 'numParticles' is part of the SDDS file, i.e. *.stat file (OPAL 1.6).
For some reason -- I do not understand why -- I get the following message in [opt.trace.0](/uploads/71d42dd821ddfcc95d3fa165cb5ef5ad/opt.trace.0):
```
invalid individual, constraint "c1" failed to yield true; result: 0
```
OPT-Pilot never finds a solution. Without the constraint, it works fine. The template and data files are attached:
[Ring.tmpl](/uploads/c94789c099aa26a0d20acd0daca29f93/Ring.tmpl)
[Ring.data](/uploads/95c2dac28b6a7785e708cc363977957c/Ring.data)
Best,
Matthias :bug:

Assignee: snuverink_j (jochem.snuverink@psi.ch)

---
**Issue 156: The Degrader-1 test yields different results when dks is enabled**
https://gitlab.psi.ch/OPAL/src/-/issues/156 · 2020-05-01 · kraus

rms x and rms y seem to be fine; only the energy is affected. On a first inspection of the DKS code (CudaCollimatorPhysics.cu) I couldn't find anything obvious. I have neither the expertise nor the hardware to debug code for CUDA.

Milestone: OPAL 2.4.0 · locans_u

---
**Issue 158: Somehow PSDump has influence on dumped statistics**
https://gitlab.psi.ch/OPAL/src/-/issues/158 · 2017-08-18 · kraus

[red has PSDump simultaneously](/uploads/f289a4e3acd9d43703dc6b5c9c5c50fe/influencePSDump.png) This doesn't hurt any further, but it's annoying.

Milestone: OPAL 1.6.0 · adelmann

---
**Issue 92: ENABLERUTHERFORD and DKS**
https://gitlab.psi.ch/OPAL/src/-/issues/92 · 2019-03-15 · Valeria Rizzoglio
I am testing the attribute **ENABLERUTHERFORD=FALSE** using the new OPAL module OPAL/1.5.2.
Analysing the particle distribution, I have noticed that phase space is different with and without DKS.
* **Run without DKS:** ` mpirun -np 8 opal Degrader_1Slab_230.in`
![OPAL_1.5.2_nodks](/uploads/135423a9df3842bc730cd54969389a75/OPAL_1.5.2_nodks.png)
* **Run with DKS:** ` mpirun -np 8 opal --use-dks Degrader_1Slab_230.in`
![OPAL_1.5.2_dks](/uploads/13a3731c8b3871d1e88630ab08d851cf/OPAL_1.5.2_dks.png)
It seems that, when running with DKS, the attribute **ENABLERUTHERFORD** has not been implemented.
Here is the input file: [Degrader_1Slab_230.in](/uploads/de37f170435fcdda5e621019974dda1e/Degrader_1Slab_230.in)

Milestone: OPAL 2.0.0 · baumgarten (christian.baumgarten@psi.ch)

---
**Issue 82: IPPL extra message error**
https://gitlab.psi.ch/OPAL/src/-/issues/82 · 2017-12-21 · frey_m
OPAL crashes for > 16 cores (but works with #cores = 4) with the error message
>>>
Error{0}> get_iter(): no more items in Message
Error{0}> reduce: mismatched element count in vector reduction.
Warning{0}> CommMPI: Found extra message from node 11, tag 10218: msg = Message contains 2 items (0 removed). Contents:
Warning{0}> Item 0: 1 elements, 1 bytes total, needDelete = 0
Warning{0}> Item 1: 3 elements, 24 bytes total, needDelete = 0
>>>
in case of serial x and y directions (i.e. PARFFTX=false, PARFFTY=false) and parallel z direction (i.e. PARFFTT=true). The simulation that was run is [psiring.in](/uploads/06e3f41f765be149e96b56bd6b277485/psiring.in). The fieldmaps can be found in the repository [AMAS-BDModels / PSI-Ring](https://gitlab.psi.ch/AMAS-BDModels/PSI-Ring/tree/master/Fieldmaps). The following modules were used for running on Merlin:
>>>
module use unstable
module add gcc/5.4.0
module add openmpi/1.10.4
module add hdf5/1.8.18
module add H5hut/2.0.0rc3
module add trilinos/12.10.1
module add gsl/2.2.1
module add boost/1.62.0
>>>
When changing to parallel x, y and serial z (i.e. PARFFTX=true, PARFFTY=true and PARFFTT=false) no error occurs.

Milestone: OPAL 1.9.x · frey_m

---
**Issue 78: Particle Matter interaction and Large Angle scattering**
https://gitlab.psi.ch/OPAL/src/-/issues/78 · 2019-05-16 · adelmann

A 249 MeV proton beam is hitting a degrader:
```
REAL WEDGE_HLEN=0.0197293;
REAL START = 0.02;
DEGPHYS_Wedge : SURFACEPHYSICS, TYPE="DEGRADER", MATERIAL="GraphiteR6710";
Wedge1: DEGRADER, L=WEDGE_HLEN, OUTFN="sWedge1.h5", SURFACEPHYSICS=DEGPHYS_Wedge, ELEMEDGE=START;
```
The claim is that the following transverse real space
![image](/uploads/96f74bd4cd02104fb0f45ba275702de5/image.png)
and transverse momenta space
![image](/uploads/4a30f2ebddb24ba7bc1e7da81e087bb9/image.png)
is **not** correct.
Switching off the large-angle scattering (http://amas.web.psi.ch/docs/opal/opal_user_guide.pdf, Section 18.2.2), the "halo" disappears, as shown by the red dots in the following picture:
![image](/uploads/ea17023a70f261b39db30854795d1485/image.png)
Switching off == commenting out https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Solvers/CollimatorPhysics.cpp#L777 and
https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Solvers/CollimatorPhysics.cpp#L746
Now we can enable/disable Rutherford scattering
`DEGPHYS_Wedge : SURFACEPHYSICS, TYPE="DEGRADER", MATERIAL="GraphiteR6710", ENABLERUTHERFORD=TRUE;`
Default is **ENABLED**.
Be aware of the fact that this input file runs only with OPAL-1.6 (git checkout OPAL-1.6):
[sDegrader_70.in](/uploads/8ef0732890ee80d73567650e8e4f810a/sDegrader_70.in)
Milestone: OPAL 1.9.x · baumgarten (christian.baumgarten@psi.ch)

---
**Issue 73: 'RestartTest-2' fails on master branch**
https://gitlab.psi.ch/OPAL/src/-/issues/73 · 2018-03-20 · gsell
'RestartTest-2' fails with the following error message:
```
Error{0}> get_iter(): no more items in Message
Error{0}> reduce: mismatched element count in vector reduction.
```

Milestone: OPAL 1.9.x · adelmann

---
**Issue 2: option SCAN broken**
https://gitlab.psi.ch/OPAL/src/-/issues/2 · 2018-01-09 · adelmann

The option SCAN is broken.

Milestone: OPAL 1.9.x · adelmann

---
**Issue 162: K0 attribute in RBend**
https://gitlab.psi.ch/OPAL/src/-/issues/162 · 2017-10-22 · Valeria Rizzoglio
It seems that the attribute K0 (to set the magnetic field) is not working in the RBend element.
In the regression test, if I am not wrong, only the Angle attribute is tested.
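For reference, the field amplitude reported below for the ANGLE case is consistent with the design radius and energy via B = p / (q rho); a quick check, assuming a proton beam:

```c++
#include <cmath>
#include <iostream>

int main() {
    // Check of "Field amplitude: 1.53217 T" from the ANGLE case below,
    // assuming protons (rest mass 938.272 MeV/c^2).
    const double T    = 7.0;       // design kinetic energy [MeV]
    const double m    = 938.272;   // proton rest mass [MeV/c^2]
    const double rho  = 0.249982;  // bend design radius [m]
    const double p    = std::sqrt(T * T + 2.0 * T * m);  // momentum [MeV/c]
    const double Brho = p / 299.792458;                  // rigidity [T m]
    std::cout << "B = " << Brho / rho << " T\n";         // ~1.532 T
}
```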
From a simple RBend test:
1- if **ANGLE** attribute is used
```
RBend [2]> 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]> BEND using file 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]>
RBend [2]> Start of field map: 0.146472 m (in s coordinates)
RBend [2]> End of field map: 0.484418 m (in s coordinates)
RBend [2]> Entrance edge of magnet: 0.25 m (in s coordinates)
RBend [2]>
RBend [2]> Reference Trajectory Properties
RBend [2]> ===============================
RBend [2]>
RBend [2]> Bend angle magnitude: 0.523599 rad (30 degrees)
RBend [2]> Entrance edge angle: 0.261799 rad (15 degrees)
RBend [2]> Exit edge angle: 0.261799 rad (15 degrees)
RBend [2]> Bend design radius: 0.249982 m
RBend [2]> Bend design energy: 7e+06 eV
RBend [2]>
RBend [2]> Bend Field and Rotation Properties
RBend [2]> ==================================
RBend [2]>
RBend [2]> Field amplitude: 1.53217 T
RBend [2]> Field index: 0
RBend [2]> Rotation about x axis: 0 rad (0 degrees)
RBend [2]> Rotation about y axis: 0 rad (0 degrees)
RBend [2]> Rotation about z axis: 0 rad (0 degrees)
RBend [2]>
RBend [2]> Reference Trajectory Properties Through Bend Magnet with Fringe Fields
RBend [2]> ======================================================================
RBend [2]>
RBend [2]> Reference particle is bent: 0.523599 rad (30 degrees) in x plane
RBend [2]> Reference particle is bent: 0 rad (0 degrees) in y plane
RBend [2]>
```
2- if **K0** attribute is used:
```
RBend [2]> 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]> BEND using file 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]>
RBend [2]> Start of field map: 0.146472 m (in s coordinates)
RBend [2]> End of field map: -nan m (in s coordinates)
RBend [2]> Entrance edge of magnet: 0.25 m (in s coordinates)
RBend [2]>
RBend [2]> Reference Trajectory Properties
RBend [2]> ===============================
RBend [2]>
RBend [2]> Bend angle magnitude: -nan rad (-nan degrees)
RBend [2]> Entrance edge angle: 0.261799 rad (15 degrees)
RBend [2]> Exit edge angle: -0.261799 rad (-15 degrees)
RBend [2]> Bend design radius: 0.25001 m
RBend [2]> Bend design energy: 7e+06 eV
RBend [2]>
RBend [2]> Bend Field and Rotation Properties
RBend [2]> ==================================
RBend [2]>
RBend [2]> Field amplitude: 1.532 T
RBend [2]> Field index: 0
RBend [2]> Rotation about x axis: 0 rad (0 degrees)
RBend [2]> Rotation about y axis: 0 rad (0 degrees)
RBend [2]> Rotation about z axis: 0 rad (0 degrees)
RBend [2]>
RBend [2]> Reference Trajectory Properties Through Bend Magnet with Fringe Fields
RBend [2]> ======================================================================
RBend [2]>
RBend [2]> Reference particle is bent: -0 rad (-0 degrees) in x plane
RBend [2]> Reference particle is bent: 0 rad (0 degrees) in y plane
RBend [2]>
```
Could someone please check?

Milestone: OPAL 2.0.0 · kraus, adelmann, kraus

---
**Issue 163: Charge zero in OPAL-cycl & OPAL-t**
https://gitlab.psi.ch/OPAL/src/-/issues/163 · 2017-10-02 · adelmann
Compare beam size of 1.6 and 1.9.x
![opal-cycl](/uploads/e0253f94e8aaec164cae26c992f33eab/opal-cycl.png)
for the IsoDAR cyclotron. Input files can be found on
`merlin-l-01: /gpfs/home/adelmann/scratch/UQ/isodar-1/Accelerated and
...../Accelerated-1.9`
FUN fact: **Qtot = 0.000**
```
OPAL{0}> * ************** B U N C H *********************************************************
OPAL{0}> * NP = 133000
OPAL{0}> * Qtot = 0.000 [fC] Qi = 1.017 [fC]
OPAL{0}> * Ekin = 361.221 [keV] dEkin = 1.445 [keV]
OPAL{0}> * rmax = ( 3.18003 , 8.91427 , 9.34380 ) [um]
OPAL{0}> * rmin = ( -3.18003 , -8.95209 , -9.36713 ) [um]
OPAL{0}> * rms beam size = ( 1.02826 , 2.91108 , 3.02269 ) [mm]
OPAL{0}> * rms momenta = ( 1.70888e-04 , 3.92498e-05 , 7.85035e-05 ) [beta gamma]
OPAL{0}> * mean position = ( 0.00000 , -0.00000 , 0.00009 ) [um]
OPAL{0}> * mean momenta = ( 2.92045e-15 , 1.96206e-02 , -1.26375e-09 ) [beta gamma]
OPAL{0}> * rms emittance = ( 8.78539e-06 , 5.71264e-06 , 1.18639e-05 ) (not normalized)
OPAL{0}> * rms correlation = ( 2.39105e-04 , 1.14814e-03 , 1.85573e-03 )
OPAL{0}> * hr = ( 0.44096 , 1.23873 , 1.29729 ) [mm]
OPAL{0}> * dh = 2.00000e+00 [%]
OPAL{0}> * t = 0.000 [fs] dT = 28.251 [ps]
OPAL{0}> * spos = 0.000 [um]
OPAL{0}> * **********************************************************************************
```

Milestone: OPAL 1.9.x · adelmann, winklehner_d, adelmann

---
**Issue 164: Compiling OPAL on Daint causes internal compiler error**
https://gitlab.psi.ch/OPAL/src/-/issues/164 · 2017-09-18 · frey_m
```
/users/freym/git/opal/src/opt-pilot/Util/MPIHelper.cpp:36:11: required from here
/users/freym/git/opal/src/opt-pilot/Util/Types.h:50:16: internal compiler err...When compiling OPAL on Piz Daint one obtains an internal compiler error
```
/users/freym/git/opal/src/opt-pilot/Util/MPIHelper.cpp:36:11: required from here
/users/freym/git/opal/src/opt-pilot/Util/Types.h:50:16: internal compiler error: Segmentation fault
typedef struct {
^
0xb0248f crash_signal
../../cray-gcc-5.3.0/gcc/toplev.c:383
0xafa0ff layout_decl(tree_node*, unsigned int)
../../cray-gcc-5.3.0/gcc/stor-layout.c:783
0x660ba4 require_complete_types_for_parms
../../cray-gcc-5.3.0/gcc/cp/decl.c:11148
0x660ba4 check_function_type
../../cray-gcc-5.3.0/gcc/cp/decl.c:13297
0x660ba4 start_preparsed_function(tree_node*, tree_node*, int)
../../cray-gcc-5.3.0/gcc/cp/decl.c:13471
0x70f654 synthesize_method(tree_node*)
../../cray-gcc-5.3.0/gcc/cp/method.c:798
0x6b29f3 mark_used(tree_node*, int)
../../cray-gcc-5.3.0/gcc/cp/decl2.c:5196
0x651dc4 build_over_call
../../cray-gcc-5.3.0/gcc/cp/call.c:7536
0x650976 build_new_method_call_1
../../cray-gcc-5.3.0/gcc/cp/call.c:8252
0x650976 build_new_method_call(tree_node*, tree_node*, vec<tree_node*, va_gc, vl_embed>**, tree_node*, int, tree_node**, int)
../../cray-gcc-5.3.0/gcc/cp/call.c:8322
0x64a6ba build_special_member_call(tree_node*, tree_node*, vec<tree_node*, va_gc, vl_embed>**, tree_node*, int, int)
../../cray-gcc-5.3.0/gcc/cp/call.c:7862
0x709877 build_value_init(tree_node*, int)
../../cray-gcc-5.3.0/gcc/cp/init.c:358
0x70de58 perform_member_init
../../cray-gcc-5.3.0/gcc/cp/init.c:646
0x70de58 emit_mem_initializers(tree_node*)
../../cray-gcc-5.3.0/gcc/cp/init.c:1167
0x684656 tsubst_expr
../../cray-gcc-5.3.0/gcc/cp/pt.c:13962
0x6844fc tsubst_expr
../../cray-gcc-5.3.0/gcc/cp/pt.c:14142
0x683267 instantiate_decl(tree_node*, int, bool)
../../cray-gcc-5.3.0/gcc/cp/pt.c:20589
0x6b271d mark_used(tree_node*, int)
../../cray-gcc-5.3.0/gcc/cp/decl2.c:5217
0x651dc4 build_over_call
../../cray-gcc-5.3.0/gcc/cp/call.c:7536
0x650976 build_new_method_call_1
../../cray-gcc-5.3.0/gcc/cp/call.c:8252
Please submit a full bug report,
```frey_mfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/166Placement of elements with PSI different from 0 and pi2017-10-15T22:12:10+02:00krausPlacement of elements with PSI different from 0 and piThe elements between dipoles are placed incorrectly when the dipoles have e.g. PSI = pi/2.OPAL 2.0.0krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/171OPAL-t wrong results2017-10-22T10:10:53+02:00adelmannOPAL-t wrong resultsThe attached input files give consistent solutions in V1.4 and V1.6 as
demonstrated in regtest.pdf.
[regtest.pdf](/uploads/57c7e0d924b6ae43095fe9b0ca133088/regtest.pdf)
With 1.9.x depending on the BFREQ we get a different set of solutions:
![v1.9err_2](/uploads/221e8b0cd7f810cc8eafaa1c8ac85fa9/v1.9err_2.png)
In case of Hz as units the energy is correct; in case of MHz (as it should be) the energy is wrong.
In both cases Autophase finds the correct energy.
[M_440.T7](/uploads/01c55ee8da1bcbc085eafbf42c7d9338/M_440.T7)
[DriveGun.T7](/uploads/015112fe7906c295a044d37b941c673a/DriveGun.T7)
[BF_550.T7](/uploads/1b31be08c1ea892ed0bfb23110e2310d/BF_550.T7)
[RFphotoinjector-1.9.in](/uploads/7bd8b3675626b4f84e2816f19bea1c74/RFphotoinjector-1.9.in)OPAL 2.0.0krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/172All Fields in an expression must be aligned. (Do you have enough guard cells...2018-12-10T14:33:40+01:00adelmannAll Fields in an expression must be aligned. (Do you have enough guard cells?) OPAL-cyclmerlin-l-01:/gpfs/home/adelmann/scratch/UQ/isodar-1-O3/
Accelerated_BEAMCURRENT=0.0075_HW1=8.344454524637621_HL1=2.542497630448107_HW2=8.344454524637621_HL2=2.607642038949257
slurm-12130.out
```
OPAL{0}> *** Finished turn 23, Total number of live particles: 330298
OPAL{0}> * Cavity RF1B Phase= 8.1602 [deg] transit time factor= 0.99972 dE= 0.091803 [MeV] E_kin= 15.05 [MeV]
OPAL{0}> * Cavity RF2A Phase= 17.119 [deg] transit time factor= 0.99972 dE= 0.089672 [MeV] E_kin= 15.14 [MeV]
OPAL{0}> * Cavity RF2B Phase= 8.4183 [deg] transit time factor= 0.99972 dE= 0.092922 [MeV] E_kin= 15.233 [MeV]
Error{4}> All Fields in an expression must be aligned. (Do you have enough guard cells?)
Error{4}> This error occurred while evaluating an expression for an LField with domain {[8:15:1],[0:8:1],[0:7:1]}
slurmstepd: error: *** JOB 12130 ON merlin-c-07 CANCELLED AT 2017-10-13T15:39:58 ***
```
*AND*
Accelerated_BEAMCURRENT=0.0075_HW1=7.655545475362379_HL1=2.607642038949257_HW2=8.344454524637621_HL2=2.542497630448107]
slurm-12492.out
```
OPAL{0}> * Cavity RF3B Phase= -1.6424 [deg] transit time factor= 0.99995 dE= 0.20723 [MeV] E_kin= 94.092 [MeV]
OPAL{0}> * Cavity RF4A Phase= 9.16 [deg] transit time factor= 0.99995 dE= 0.20493 [MeV] E_kin= 94.297 [MeV]
OPAL{0}> *** Finished turn 92, Total number of live particles: 289527
OPAL{0}> * Cavity RF2A Phase= 8.5664 [deg] transit time factor= 0.99995 dE= 0.20576 [MeV] E_kin= 95.123 [MeV]
OPAL{0}> * Cavity RF3A Phase= 8.5013 [deg] transit time factor= 0.99995 dE= 0.20631 [MeV] E_kin= 95.537 [MeV]
OPAL{0}> * Cavity RF3B Phase= -1.9567 [deg] transit time factor= 0.99995 dE= 0.2086 [MeV] E_kin= 95.746 [MeV]
Error{3}> All Fields in an expression must be aligned. (Do you have enough guard cells?)
Error{3}> This error occurred while evaluating an expression for an LField with domain {[0:5:1],[8:15:1],[8:15:1]}
slurmstepd: error: *** JOB 12492 ON merlin-c-40 CANCELLED AT 2017-10-16T03:25:28 DUE TO TIME LIMIT ***
```
*Go from 8 to 4 cores* in order to find out if the job is terminating nicely.OPAL 2.0.0adelmannadelmannhttps://gitlab.psi.ch/OPAL/src/-/issues/177python xxxx_ElementPositions.py --export-web2017-11-06T22:50:24+01:00adelmannpython xxxx_ElementPositions.py --export-web`adelmann@eduroam062-061 ~/Desktop/ANL/optLinac_40nC/data $ python optLinac_40nC_ElementPositions.py --export-web
Traceback (most recent call last):
File "optLinac_40nC_ElementPositions.py", line 590, in <module>
exportWeb()
File "optLinac_40nC_ElementPositions.py", line 170, in exportWeb
decodeVertices()
File "optLinac_40nC_ElementPositions.py", line 16, in decodeVertices
for i in xrange(len(numVertices)):
NameError: name 'xrange' is not defined`
[optLinac_40nC_ElementPositions.py](/uploads/06f0c11ef6d6f05fcc0b40f43b8b7e67/optLinac_40nC_ElementPositions.py)OPAL 1.9.xkrauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/178H5root in the binary Linux distribution2017-12-28T14:30:38+01:00adelmannH5root in the binary Linux distribution
```
[aandreas@beboplogin1 OPAL-1.6.1]$ source etc/profile.d/opal.sh
[aandreas@beboplogin1 OPAL-1.6.1]$ opal
```
OPAL works as expected.
```
[aandreas@beboplogin1 OPAL-1.6.1]$ H5root
/home/aandreas/OPAL-1.6.1/bin/H5root: line 13: /opt/psi/Compiler/H5root/1.3.4/gcc/5.4.0/bin/H5root.bin: No such file or directory
```
OPAL 1.6.1gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/179opt-pilot CONSTRAINT command not known2017-11-07T10:03:58+01:00snuverink_jjochem.snuverink@psi.chopt-pilot CONSTRAINT command not knownAn opt-pilot constraint such as:
```
c1: CONSTRAINT, EXPR="fabs(rms_x)<1.5e-2";
constrs: CONSTRAINTS = (c1);
```
gives:
```
Error>
Error> *** Parse error detected by function "OpalParser::parseDefine()"
Error> *** in line 46 of file "RingOpt.in" before token ",":
Error> The object "CONSTRAINT" is unknown.
```
This is because the keyword CONSTRAINT is not skipped in [AbsFileStream](https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Parser/AbsFileStream.cpp#L42).
I will fix this there.
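A hedged sketch of such a skip (the surrounding structure of AbsFileStream is assumed, not quoted from the source):

```c++
// Treat CONSTRAINT like the other optimizer-only keywords that the core
// parser must skip rather than interpret.
if (token == "OBJECTIVE" || token == "DVAR" || token == "CONSTRAINT") {
    skipStatement();  // hypothetical helper: consume input up to ';'
}
```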
One concern is that CONSTRAINT is also a keyword in the match routine: https://gitlab.psi.ch/OPAL/src/blob/master/src/Match/MatchParser.cpp.
However, as far as I can see this is currently not in OPAL? Quoting the manual: "Please note this is not yet available in: `DOPAL-t` and `DOPAL-cycl`." @adelmann can you confirm?snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/182Autophasing gives unexpected results2017-11-19T13:26:34+01:00adelmannAutophasing gives unexpected resultsThe attached lattice works perfectly in OPAL 1.4.0 and does not show autophasing information on the *master*.
In xxxDesignPath.dat we only have NaN's.
[csu_linac.in](/uploads/d8df65871270d3d1f51bf62ca2498266/csu_linac.in)[UOF20LFCell1_B.T7](/uploads/0518005dfebb87560f7d4d796e4c683b/UOF20LFCell1_B.T7)[UOF20LHCell1_B.T7](/uploads/050945818499b0fa4b79b3fbd7c5310c/UOF20LHCell1_B.T7)
[UOF20LFCell2_B.T7](/uploads/a0d905e3e6908b712fa2daa96307e650/UOF20LFCell2_B.T7)
[UOF20S1.T7](/uploads/aa07e8534f8cec72e47148db6d45e196/UOF20S1.T7)OPAL 2.0.0krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/188C version of FFTPACK segfaults2017-12-21T09:52:44+01:00snuverink_jjochem.snuverink@psi.chC version of FFTPACK segfaultsOpalRingTest and RingCyclotron-Tests segfault as [found by Christoph]
(https://gitlab.psi.ch/OPAL/src/commit/ee8a32ba55683d5dea34e7522e3c2cba2384d4d8#note_3812) with:
```
[pc12290:01336] Signal: Segmentation fault (11)
[pc12290:01336] Signal code: (128)
[pc12290:01336] Failing at address: (nil)
[pc12290:01336] [ 0] /lib64/libpthread.so.0[0x3ab660f7e0]
[pc12290:01336] [ 1] opal(rffti1_+0x1c)[0x25b43ac]
[pc12290:01336] [ 2] opal(rffti_+0x24)[0x25b4374]
[pc12290:01336] [ 3] opal(_ZN3FFTI11RCTransformLj3EdEC1ERK7NDIndexILj3EES5_RKbi+0x2c7)[0x13109d7]
[pc12290:01336] [ 4] opal(_ZN16FFTPoissonSolverC1EP16UniformCartesianILj3EdEP19CenteredFieldLayoutILj3ES1_4CellENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESC_+0xf5c)[0x130428c]
[pc12290:01336] [ 5] opal(_ZN11FieldSolver10initSolverEP13PartBunchBaseIdLj3EE+0x7c5)[0x1038d35]
[pc12290:01336] [ 6] opal(_ZN8TrackRun16setupFieldsolverEv+0x1ef)[0x13efb6f]
[pc12290:01336] [ 7] opal(_ZN8TrackRun21setupCyclotronTrackerEv+0xbb)[0x13f3c2b]
[pc12290:01336] [ 8] opal(_ZN8TrackRun7executeEv+0x632)[0x13f5d02]
[pc12290:01336] [ 9] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xff7f35]
```gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/196Dumping phase space in global frame (Cyclotron-Tracker)2020-04-22T11:27:56+02:00frey_mDumping phase space in global frame (Cyclotron-Tracker)When dumping the phase space in global frame one obtains bad results if a core does not have particles, e.g.
```
OPAL{0}> * Integration step 0 (no phase space dump for <= 2 particles)
OPAL{0}> * T = 0 ns, Live Particles: 80640000
OPAL{0}> * E = 71.6 MeV, beta * gamma = 0
OPAL{0}> * Bunch position: R = 0 mm, Theta = 0 Deg, Z = 0 mm
OPAL{0}> * Local Azimuth = -90 Deg, Local Elevation = -nan Deg
```
The reason is the usage of
```
meanR = itsBunch_m->R[0];
meanP = itsBunch_m->P[0];
```
in ```bunchDumpPhaseSpaceData()``` and ```bunchDumpStatData()```.
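A minimal fix sketch (an assumption, not the committed change; `Vector_t`, `getLocalNum()`, `getTotalNum()` and IPPL's `reduce` are taken from the surrounding code base):

```c++
// Average R and P over the whole bunch instead of reading R[0]/P[0] of the
// local container, which is invalid on a core that holds no particles.
Vector_t meanR(0.0, 0.0, 0.0), meanP(0.0, 0.0, 0.0);
for (size_t i = 0; i < itsBunch_m->getLocalNum(); ++i) {
    meanR += itsBunch_m->R[i];
    meanP += itsBunch_m->P[i];
}
reduce(&(meanR[0]), &(meanR[0]) + 3, &(meanR[0]), OpAddAssign()); // sum over ranks
reduce(&(meanP[0]), &(meanP[0]) + 3, &(meanP[0]), OpAddAssign());
const size_t totalNum = itsBunch_m->getTotalNum();
if (totalNum > 0) {
    meanR /= double(totalNum);
    meanP /= double(totalNum);
}
```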
frey_mfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/198Distribution-Binomial regression test is broken2019-02-15T08:51:23+01:00snuverink_jjochem.snuverink@psi.chDistribution-Binomial regression test is brokenThe binomial distribution regression test is broken.
According to the nightly tests this happened between [21 July](http://amas.web.psi.ch/opal/regressionTests/master/results_2017-07-21.xml), when the test was failing by a bit, and [23 July](http://amas.web.psi.ch/opal/regressionTests/master/results_2017-07-23.xml), when the test was off much more.
So between commits b884784 and 3655140:
* 3655140 lift restriction on CORR[X|Y|Z] for binomial distributions
* e331e8b fixing issue with convertion to eV when ratio is small
* 6b08a26 add silencer to all tests
* 1a5bff8 fixing few issues binomial distribution and cleaning up
* acaf84b whitespaces
* 1c0fa9a further improve CMake files for opt-pilot: remove #define GIT_VERSION since already defined in OPAL
* df1840a (tag: OPAL-dev, tag: OPAL-1.9.0) fixing CMake files
* 85bc105 cleaning up Gauss distribution unit test; improve SilenceTest class to print output if tests fail
Of these 1a5bff8 is the most likely culprit. Assigning to @kraus who did all of those commits.krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/199Rethink catching of exceptions2018-01-09T06:41:26+01:00krausRethink catching of exceptionsCurrently all exceptions are caught by the OpalParser. When it catches exceptions it calls exit. This is a problem for the optimizer since it's very likely that an individual (from the optimization perspective) fails while the others run smoothly.
OpalParser shouldn't catch any exceptions except ParseErrors.
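A hedged sketch of that policy (the type and helper names are assumptions):

```c++
// Catch only parse errors; everything else propagates to the caller (e.g.
// the optimizer), which can decide whether one failing individual matters.
try {
    parseStatement();
} catch (const ParseError &err) {
    handleParseError(err);  // hypothetical helper: report and recover
}
// deliberately no catch-all: other exceptions must not end in exit()
```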
krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/200Optimiser throws unexpected Exceptions and later die2018-01-04T18:14:26+01:00adelmannOptimiser throws unexpected Exceptions and later dieAfter 3ed381f8 I see more Exceptions (merlinl01:/gpfs/home/adelmann/scratch/awa-optim/code/optLinac_40nC.o78706)
I am now a bit more confused:
- the *.data file should only be read once (but we have over 600 Exceptions)
- we again have directories that seem to exist
The real problem is reported in merlinl01:/gpfs/home/adelmann/scratch/awa-optim-0/code/optLinac_40nC.o78651
**terminate called after throwing an instance of 'OpalException'**OPAL 2.0.0krausadelmannYves Ineichenkraushttps://gitlab.psi.ch/OPAL/src/-/issues/203Structure/H5PartWrapper.cpp Error2018-01-11T08:45:06+01:00adelmannStructure/H5PartWrapper.cpp ErrorOn merlin: */gpfs/home/adelmann/scratch/opal-scaling/Merlin/test* check **scaling-1.o92428**
`Error{91}> H5 rc= -2 in /gpfs/home/adelmann/opal/opal-1.9/src/Structure/H5PartWrapper.cpp @ line 94`
The simulation does not crash, maybe also because **OPTION, PSDUMPFREQ = 1E9;**OPAL 2.0.0krausgsellkraushttps://gitlab.psi.ch/OPAL/src/-/issues/206Cyclotron elements require end position to have larger radius than start position2018-12-10T14:30:45+01:00snuverink_jjochem.snuverink@psi.chCyclotron elements require end position to have larger radius than start positionFound with @nesteruk_k:
In the [current implementation](https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/AbsBeamline/Probe.cpp#L195) it is assumed that the Probe (x,y) end position has a larger radius than the (x,y) starting position.
If not, almost no particles are recorded. This is not documented and, without a warning, not very user-friendly; therefore labelling this a bug. For 1.6 perhaps a documentation update would be enough.
EDIT (23 April): (C)Collimator, Septum, Stripper make the same assumption.snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/208Destructors are noexcept(true) in C++112018-03-22T16:33:26+01:00frey_mDestructors are noexcept(true) in C++11With ```gcc/7.3.0``` one gets the following warning due to the ```CLOSE_FILE``` macro in ```LossDataSink```:
```
src/Classic/Structure/LossDataSink.cpp: In destructor 'LossDataSink::~LossDataSink()':
/scratch1/scratchdirs/frey_m/libs/opal/src/src/Classic/Structure/LossDataSink.cpp:100:70: warning: throw will always call terminate() [-Wterminate]
throw GeneralClassicException(std::string(__func__), ss.str()); \
^
/scratch1/scratchdirs/frey_m/libs/opal/src/src/Classic/Structure/LossDataSink.cpp:156:9: note: in expansion of macro 'CLOSE_FILE'
CLOSE_FILE ();
^
/scratch1/scratchdirs/frey_m/libs/opal/src/src/Classic/Structure/LossDataSink.cpp:100:70: note: in C++11 destructors default to noexcept
throw GeneralClassicException(std::string(__func__), ss.str()); \
```
As the warning explains: in C++11 destructors are ```noexcept(true)``` by default. I'll fix this by setting the destructor ```noexcept(false)``` explicitly.
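A minimal illustration of the fix (class body abbreviated):

```c++
class LossDataSink {
public:
    // noexcept(false) restores the pre-C++11 behaviour: the exception thrown
    // by CLOSE_FILE() can propagate instead of calling std::terminate().
    ~LossDataSink() noexcept(false);
    // ...
};
```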
OPAL 1.9.xfrey_mfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/216FindHDF5 is broken2018-04-09T21:41:15+02:00gsellFindHDF5 is brokenThe FindHDF5 in OPAL is broken. If you link statically, the order of libraries matters on Linux. Linking with a static HDF5 library will most likely fail since the libraries required by libhdf5.a are not listed after libhdf5.a:
Linking with
```
g++ -o opal ... -ldl -lm -lz ... -lhdf5
```
will fail on Linux with a static HDF5 library. This will do:
```
g++ -o opal ... -ldl -lm -lz ... -lhdf5 -ldl -lm -lz
```OPAL-1.6.2gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/219Missing header attributes for H5 in OPAL-T2018-04-11T09:12:23+02:00frey_mMissing header attributes for H5 in OPAL-TThe H5PartWrapper for OPAL-T does not write all units into the header. E.g. the phase space variables are missing. For OPAL-Cycl they are in
```
WRITESTRINGFILEATTRIB(file_m, "xUnit", "m");
WRITESTRINGFILEATTRIB(file_m, "yUnit", "m");
WRITESTRINGFILEATTRIB(file_m, "zUnit", "m");
WRITESTRINGFILEATTRIB(file_m, "pxUnit", "#beta#gamma");
WRITESTRINGFILEATTRIB(file_m, "pyUnit", "#beta#gamma");
WRITESTRINGFILEATTRIB(file_m, "pzUnit", "#beta#gamma");
```OPAL 1.9.xadelmannfrey_madelmannhttps://gitlab.psi.ch/OPAL/src/-/issues/220Missing unit attributes in H5 file in OPAL-Cycl2018-04-11T09:12:23+02:00frey_mMissing unit attributes in H5 file in OPAL-CyclThe H5-file for OPAL-Cycl does not write the units of the following attributes (a sketch of a possible fix follows the list):
* centroid
* minX, maxX
* minP, maxP
* LOCAL
* REFPR
* REFPT
* REFPU
* REFTHETA
* REFZ
* AZIMUTH
* ELEVATION
* Ex, Ey, Ez
* Bx, By, Bz
* rho
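A sketch of how the missing units could be written, mirroring the OPAL-Cycl snippet quoted in the previous issue (attribute names from the list above; the unit strings are assumptions):

```c++
// Hypothetical additions to the OPAL-Cycl H5 header writing:
WRITESTRINGFILEATTRIB(file_m, "REFTHETAUnit", "deg");
WRITESTRINGFILEATTRIB(file_m, "AZIMUTHUnit", "deg");
WRITESTRINGFILEATTRIB(file_m, "ELEVATIONUnit", "deg");
```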
OPAL 1.9.xsnuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/223The flexible collimator yields different results on different computers2018-04-19T09:16:01+02:00krausThe flexible collimator yields different results on different computersThe regression tests slit-1 and slit-2 fail on opalrunner while they yield correct results on my laptop.krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/230Non-existing objective gives segfault2018-06-22T11:11:14+02:00snuverink_jjochem.snuverink@psi.chNon-existing objective gives segfaultAs seen with @frey_m: in the `OPTIMIZE` command, a non-existing objective gives a segfault. Rather, a warning should be printed.
```
OPAL{0}> opal opt-pilot/optRing.in --inputfile=template/Ring.tmpl --outfile=RingOpt --outdir=RingOpt --initialPopulation=31 --num-masters=1 --num-coworkers=1 --num-ind-gen=31 --maxGenerations=30 --gene-mutation-probability=0.5 --mutation-probability=0.5 --recombination-probability=0.5 --simtmpdir=/home/scratch/AMAS-BDSModels/PSI-Ring/tmp --templates=/home/scratch/AMAS-BDSModels/PSI-Ring/template
[pc12290:09252] *** Process received signal ***
[pc12290:09252] Signal: Segmentation fault (11)
[pc12290:09252] Signal code: Address not mapped (1)
[pc12290:09252] Failing at address: 0x10
[pc12290:09249] *** Process received signal ***
[pc12290:09249] Signal: Segmentation fault (11)
[pc12290:09249] Signal code: Address not mapped (1)
[pc12290:09249] Failing at address: 0x10
[pc12290:09252] [ 0] [pc12290:09249] [ 0] /lib64/libpthread.so.0[0x3ab660f7e0]
[pc12290:09249] [ 1] /lib64/libpthread.so.0[0x3ab660f7e0]
[pc12290:09252] [ 1] opal(_ZNK9Objective13getExpressionB5cxx11Ev+0x1)[0x710301]
[pc12290:09252] opal(_ZNK9Objective13getExpressionB5cxx11Ev+0x1)[0x710301]
[pc12290:09249] [ 2] [ 2] -------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
opal(_ZN11OptimizeCmd7executeEv+0x24cb)[0x6d679b]
[pc12290:09249] [ 3] opal(_ZN11OptimizeCmd7executeEv+0x24cb)[0x6d679b]
[pc12290:09252] [ 3] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3a)[0x6c8b1a]
[pc12290:09249] [ 4] opal(_ZNK10OpalParser11parseActionER9Statement+0x142)[0x6cce92]
[pc12290:09249] [ 5] opal(_ZNK10OpalParser5parseER9Statement+0x173)[0x6cc733]
[pc12290:09249] [ 6] opal(_ZNK10OpalParser3runEv+0x2c)[0x6c87bc]
[pc12290:09249] [ 7] opal(_ZNK10OpalParser3runEP11TokenStream+0x70)[0x6cd5b0]
[pc12290:09249] [ 8] opal(main+0x1e8f)[0x618bef]
[pc12290:09249] [ 9] /lib64/libc.so.6(__libc_start_main+0xfd)[0x3ab5e1ed1d]
[pc12290:09249] [10] opal[0x60b559]
[pc12290:09249] *** End of error message ***
opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x3a)[0x6c8b1a]
[pc12290:09252] [ 4] opal(_ZNK10OpalParser11parseActionER9Statement+0x142)[0x6cce92]
[pc12290:09252] [ 5] opal(_ZNK10OpalParser5parseER9Statement+0x173)[0x6cc733]
[pc12290:09252] [ 6] opal(_ZNK10OpalParser3runEv+0x2c)[0x6c87bc]
[pc12290:09252] [ 7] opal(_ZNK10OpalParser3runEP11TokenStream+0x70)[0x6cd5b0]
[pc12290:09252] [ 8] opal(main+0x1e8f)[0x618bef]
[pc12290:09252] [ 9] /lib64/libc.so.6(__libc_start_main+0xfd)[0x3ab5e1ed1d]
[pc12290:09252] [10] opal[0x60b559]
[pc12290:09252] *** End of error message ***
```
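A hedged sketch of the suggested behaviour (names such as `find()` and the message are assumptions, not the actual fix):

```c++
// Look the objective up before using it and fail with a readable message
// instead of dereferencing a null pointer.
Object *obj = OpalData::getInstance()->find(objectiveName);
if (obj == nullptr) {
    throw OpalException("OptimizeCmd::execute()",
                        "The objective '" + objectiveName + "' is unknown.");
}
```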
snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/233Optimiser: Parsing of expressions2018-12-10T14:34:25+01:00frey_mOptimiser: Parsing of expressionsI added the new function ```infNormRadialPeak```. However, it's not recognized. It crashes with ```Parsing failed!```. I do not see a difference to other expressions.
According to [line 180](https://gitlab.psi.ch/OPAL/src/blob/master/optimizer/Expression/Expression.h#L180) it checks
```
if (success && iter != end) {
std::cout << "Parsing failed!" << std::endl;
throw new OptPilotException("Expression::parse()",
"Parsing failed!");
}
```
I think it should be ```!success``` and ```iter == end``` instead. A first test confirms my proposal, i.e. it still works with
e.g. ```sumErrSqRadialPeak``` but now also with ```infNormRadialPeak```.
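One reading of that proposal as code (an assumption, not the committed fix): parsing has failed when the grammar did not match or when input is left unconsumed.

```c++
// success && iter == end is the only good outcome; throw otherwise.
if (!success || iter != end) {
    std::cout << "Parsing failed!" << std::endl;
    throw new OptPilotException("Expression::parse()",
                                "Parsing failed!");
}
```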
frey_mfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/235Optimiser occasional crashes with HDF5 enabled.2018-12-10T14:33:54+01:00snuverink_jjochem.snuverink@psi.chOptimiser occasional crashes with HDF5 enabled.@ext-neveu_n reported crashes with the optimizer. It turned out that the crashes were coming from the HDF5 output (with `ENABLEHDF5=FALSE` there were no more crashes).
Stack trace:
```
H5PartWrapper::storeCavityInformation()
H5PartWrapper::open(int)
h5_open_file2
h5_error
h5_report_errorhandler
```
`H5PartWrapper::open(int)`:
```c++
void H5PartWrapper::open(h5_int32_t flags) {
close();
h5_prop_t props = H5CreateFileProp ();
MPI_Comm comm = Ippl::getComm();
h5_err_t h5err = H5SetPropFileMPIOCollective (props, &comm);
#if defined (NDEBUG)
(void)h5err;
#endif
assert (h5err != H5_ERR);
file_m = H5OpenFile (fileName_m.c_str(), flags, props);
assert (file_m != (h5_file_t)H5_ERR);
H5CloseProp (props);
}
```
So the opening of the file failed for some reason (perhaps the optimiser has deleted the directory?).
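A sketch of a more defensive variant of the open (an assumption; the error type and message are illustrative):

```c++
// Report a failed open with context instead of relying on assert(), which is
// compiled out in release builds and gives no hint about the cause.
file_m = H5OpenFile(fileName_m.c_str(), flags, props);
if (file_m == (h5_file_t)H5_ERR) {
    throw OpalException("H5PartWrapper::open",
                        "could not open '" + fileName_m +
                        "'; was the directory removed by the optimiser?");
}
```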
- [x] unit fix sigma-matrix --> particles @cortes_c
- [x] update of rinit and prinit for tracking @frey_m
- [x] simplify input parameters @frey_m
- [x] bug fix in field interpolation @frey_m @cortes_c
-...- [x] use EV solver @cortes_c
- [x] unit fix sigma-matrix --> particles @cortes_c
- [x] update of rinit and prinit for tracking @frey_m
- [x] simplify input parameters @frey_m
- [x] bug fix in field interpolation @frey_m @cortes_c
- [x] allow closed orbit calculation only #234 @frey_m
- [x] update regression test @frey_m
- [x] update OPAL-manual @frey_m
Fixes done in [matched-gauss-fixes](https://gitlab.psi.ch/OPAL/src/tree/matched-gauss-fixes)frey_mcortes_cfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/243(C)Collimator does not check for lost particles correctly2018-12-10T14:30:45+01:00snuverink_jjochem.snuverink@psi.ch(C)Collimator does not check for lost particles correctlyDiscovered with @nesteruk_k:
The Cyclotron collimator does not check for lost particles correctly.
It checks whether the beam is close to the collimator, first in `z` and then in `r`:
```
bunch->get_bounds(rmin, rmax);
double r1 = sqrt(rmax(0) * rmax(0) + rmax(1) * rmax(1));
if (rmax(2) >= zstart_m && rmin(2) <= zend_m) {
if ( r1 > rstart_m - 100.0 && r1 < rend_m + 100.0 ){
```
The check in `z` is correct, but the check in `r` only checks on the maximum bunch bound (it assumes that the bunch is increasing in radius and that the maximum bound is not skipping the collimator in the next turn).
A check in `r` similar to the one in `z` should be implemented instead.
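A hedged sketch of such a check (an assumption, not the committed fix; the component-wise bounds only approximate the radial extent, as in the snippet above):

```c++
// Use both bunch bounds in r, analogous to the z check, so a bunch lying
// across the collimator radius is not missed.
bunch->get_bounds(rmin, rmax);
double rbunch_min = std::sqrt(rmin(0) * rmin(0) + rmin(1) * rmin(1));
double rbunch_max = std::sqrt(rmax(0) * rmax(0) + rmax(1) * rmax(1));
if (rmax(2) >= zstart_m && rmin(2) <= zend_m) {
    if (rbunch_max > rstart_m - 100.0 && rbunch_min < rend_m + 100.0) {
        // ... check individual particles against the collimator
    }
}
```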
This is both an issue in [2.0](https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/AbsBeamline/CCollimator.cpp#L126)
and [1.6](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/AbsBeamline/Collimator.cpp#L280)snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/244Linear transfer maps in SigmaGenerator.h2020-04-07T17:11:06+02:00cortes_cLinear transfer maps in SigmaGenerator.hThe linear transfer maps used in SigmaGenerator.h, delivered from the MapGenerator.h class, do not agree with the theoretical expectation. Please fix.OPAL 2.4.0frey_mfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/246Strange collimator behavior2018-12-15T18:27:03+01:00luethi_mStrange collimator behaviorThe collimators in OPAL 2.0.0 show a strange behavior.
When a particle hits the collimator, it is not removed from the simulation, but it's scattered and leaves the Cyclotron in a straight line.
The particle also doesn't appear in the COLL.loss file.
This is a minimal example, which reproduces the error:
[comet_min_example.in](/uploads/c72cd4108dd4d7a76a97c6481066ec23/comet_min_example.in)
The LINE should be the one with the collimator (coll), the distribution is dist1 and NPART=1.
The magnetic map: [BMap_Dummy_m.txt](/uploads/6a86367992c1f59c5a27126fe1b786bb/BMap_Dummy_m.txt)
The distribution file: [dist_single_part.dat](/uploads/a110622515bd374276247a9e3903c70e/dist_single_part.dat)
The plotted particle track looks like this:
[gnuplot](/uploads/fea537b42c0319d4ea924807ba8d54ba/gnuplot.png)
If I use dist2 in the example and simulate with e.g. NPART=100, then the particles are correctly stopped by the collimator and written to the COLL.loss file.
But in that case, in the terminal output from OPAL, the number of 'Live Particles' always stays at 100, even though the particles clearly hit the collimator.snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/247Strange Probe behavior2018-12-10T14:30:45+01:00luethi_mStrange Probe behaviorThe Probes in OPAL 2.0.0 also show a strange behavior.
I placed a test probe in the cyclotron. Every particle in the cyclotron goes through this probe exactly once.
I noticed that some particles are counted twice, some particles not at all.
I made a minimal example to show the problem.
This is the input file:
[comet_min_example.in](/uploads/6ca49aab2a954fb42fb7bd958c054f7d/comet_min_example.in)
The magnetic field map:
[BMap_Dummy_m.txt](/uploads/1d796cbaeddf63359605094c31d57745/BMap_Dummy_m.txt)
Here is a plot of the tracks:
![probe](/uploads/6aa0aaa1410d4d4dc578026b38674ff4/probe.png)
You can see clearly that every particle goes through the probe.
The simulation was done with NPART=1000. However in the PROBE.hist file, only 996 particles are counted.
With this example I was not able to reproduce a situation where some particles are counted twice, but this also happened in my original simulation.frey_msnuverink_jjochem.snuverink@psi.chfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/248Problems with flexible collimator when used with optimizer2018-11-20T20:30:57+01:00adelmannProblems with flexible collimator when used with optimizerI am getting messages like this:
```
No reference point provided: using the origin
Individual not viable, I try again: iter= 0
Individual not viable, I try again: iter= 0
No reference point provided: using the origin
Individual not viable, I try again: iter= 0
Individual not viable, I try again: iter= 1
Individual not viable, I try again: iter= 2
```
and then the optimisation is killed with
```
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 61951 RUNNING AT bdw-0026
= EXIT CODE: 9
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
Intel(R) MPI Library troubleshooting guide:
https://software.intel.com/node/561764
===================================================================================
```
No further information!
All simulations in progress at the time the optimisation is killed are terminating just fine.
There is the new flexible collimator involved and a constraint w.r.t. number of particles.
When removing the flexible collimator the optimisation works just fine.
Will add input files later. Need to catch my airplane now!krausadelmannkraushttps://gitlab.psi.ch/OPAL/src/-/issues/253Collimator Losses2018-12-10T14:31:51+01:00snuverink_jjochem.snuverink@psi.chCollimator LossesAs reported by @luethi_m in https://gitlab.psi.ch/OPAL/src/issues/246#note_7357:
I found some other strange behavior working with collimators (but not with the test example I uploaded here).
I start with a beam of 10'000 particles which have a gauss distribution.
I have a vertical collimator and a probe in front of the collimator. If plotting the tracks, I see that there are some particles hitting the vertical collimator, but they are not registered in the collimator.loss file.
This is a plot of the profile:
![image](/uploads/c097c15589e6087c166391a9c6aefdac/image.png)
However, if I make the collimator thicker (10mm instead of 2mm), some particles are now registered in the collimator.loss file. It looks like this:
![image](/uploads/1d1460e7081147029bf2c73d9608c8a2/image.png)
The red points are the coordinates from the probe.loss file, the green dots are from the collimator.loss file. You can see that some, but not all particles are now registered in the collimator.
I thought it might be a problem with the time step, so I increased the number of steps per turn from 5'000 up to 100'000, but the particles are still not registered in the collimator.loss file.
This behavior happens whether or not I use PARTICLEMATTERINTERACTION.snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/257GA interface is appending new generation output to new json file, rather than having one generation output per json file2019-07-04T17:15:48+02:00ext-edelen_aGA interface is appending new generation output to new json file, rather than having one generation output per json fileaccording to the input file:
```
INITIALPOPULATION=100,
MAXGENERATIONS=2,
NUM_MASTERS=1,
NUM_COWORKERS=8,
SIMTMPDIR="tmp",
TEMPLATEDIR="tmpl",
FIELDMAPDIR="fieldmaps",
NUM_IND_GEN=100,
GENE_MUTATION_PROBABILITY=0.8,
MUTATION_PROBABILITY=0.8,
RECOMBINATION_PROBABILITY=0.2;
```
It seems like this should be 200 samples (100 in population x 2 generations).
However, using mldb to convert to pk reports 300 samples:
```
OPAL ML Database Generator
Found 2 json files.
Write ML-Database 1nCGA-small-test.pk
xDim = 6 -> ['G0', 'G1', 'K0', 'K1', 'PH0', 'PH1']
yDim = 7 -> ['DE', 'EMITS', 'EMITX', 'EMITY', 'RMSS', 'RMSX', 'RMSY']
generations = 2
Data points = 300
```
Looking at the json files:
- 1_1nCGA_0.json has 100 individuals
- 2_1nCGA_0.json has 200 individuals
And if we compare the first individual in each case, they are both the same:
```
"G0": 63.7949,
"G1": 17.5757,
"K0": 474.876,
"K1": 185.284,
"PH0": -8.94274,
"PH1": -2.44182
```
So it looks like new data is just being appended to the new json file in subsequent generations, rather than creating a new file for each generation with only that generation's data in it.kraussnuverink_jjochem.snuverink@psi.chkraushttps://gitlab.psi.ch/OPAL/src/-/issues/267RFCavity can be applied twice in ParallelCyclotronTracker2018-12-10T14:31:18+01:00snuverink_jjochem.snuverink@psi.chRFCavity can be applied twice in ParallelCyclotronTracker@frey_m and I have seen that sometimes a particle can see the same cavity twice in two consecutive steps. The cavity is applied if the particle crossed the cavity. If so, the last step is recalculated with the RF kick added. However, it can then happen that the new step doesn't completely cross the cavity, and the cavity is applied again.
Example:
```
OPAL> ( -904.91 , -2521.4 , 0 )
OPAL> dist to cav before: 0.00073554 after: -6.3257e-07
OPAL> * Cavity FT1 Phase= 208.053 [deg] transit time factor= 0.892176 dE= -0.390086 [MeV] E_kin= 135.409 [MeV] Time dep freq = 1
OPAL> ( -904.21 , -2521.7 , 0 )
OPAL> dist to cav before: 2.7412e-07 after: -0.000735
OPAL> * Cavity FT1 Phase= 208.334 [deg] transit time factor= 0.891931 dE= -0.388959 [MeV] E_kin= 135.02 [MeV] Time dep freq = 1
```
We propose to add a flag such that the same cavity is not applied again in the next time step.
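A hedged sketch of such a flag (names are assumptions, not the actual implementation):

```c++
// Remember, per particle, the cavity applied in the previous step and skip
// it once, so a partial re-crossing does not add the same RF kick twice.
if (crossedCavity && lastCavity_m[i] != cavity) {
    applyRFKick(cavity, i);     // hypothetical helper for the recalculated step
    lastCavity_m[i] = cavity;
} else if (!crossedCavity) {
    lastCavity_m[i] = nullptr;  // clear the flag once the cavity is passed
}
```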
snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/271OPAL gets stuck2018-12-20T12:24:44+01:00frey_mOPAL gets stuckAs several users (@luethi_m, @ext-calvo_p) already noticed (me included), OPAL sometimes gets stuck. In my case it's the ```reduce``` function of IPPL that is used all over OPAL. I currently have no clue what the reason for this is.
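One common way such a hang can arise (a guess for illustration, not a diagnosis from this issue): ```reduce``` is a collective operation, so every rank must reach it.

```c++
// If one rank leaves early, the remaining ranks block forever in reduce().
if (bunch->getLocalNum() == 0)
    return;                                  // rank-dependent early exit ...
reduce(localSum, globalSum, OpAddAssign()); // ... strands the other ranks here
```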
CC: @gsell, @snuverink_jhttps://gitlab.psi.ch/OPAL/src/-/issues/274SBend3d field map not parsed2023-11-30T15:49:12+01:00ext-rogers_cSBend3d field map not parsedWhen attempting to load an SBend3D field map, the input is not parsed correctly. OPAL dies with error
> User error detected by function SectorMagneticFieldMap::IO::getInterpolator
> Ran out of field points during read operation; check bounds and ordering
> Ran out of field points during read operation; check bounds and ordering
and then the usual MPI_ABORT stuff
Files:
[Bfield.dat.gz](/uploads/ac220399c42a92b40ee8542ca208a753/Bfield.dat.gz)
[Bfield-nobump.dat.gz](/uploads/1227ca0fa9b8a501630e01298989099b/Bfield-nobump.dat.gz)
[tosca.in](/uploads/bd535625cba6ef794dfd2680e0ed7711/tosca.in)ext-rogers_csnuverink_jjochem.snuverink@psi.chext-rogers_c2020-09-23https://gitlab.psi.ch/OPAL/src/-/issues/277OPAL-Cycl - Particle out of bounds2019-03-20T12:06:00+01:00frey_mOPAL-Cycl - Particle out of boundsClose to the septum of the PSI Ring cyclotron we lose a lot of particles, and particles might end up outside the field map. Thus we get magnetic field values of $`\mathcal{O}(10^{177})`$, which is not physical. This ends up producing ```NAN``` values for the particle positions, which causes problems in case of AMR (adaptive mesh refinement) simulations.
I propose to destroy the particles directly after applying the field [line 3494](https://gitlab.psi.ch/OPAL/src/blob/master/src/Algorithms/ParallelCyclotronTracker.cpp#L3494) of ```ParallelCyclotronTracker::computeExternalFields_m``` by
```C++
if ( outOfBound ) {
itsBunch_m->destroy(1, i, true);
}
```
With this fix it works; that is, a small test worked.
Are there any concerns or better suggestions?
CC: @snuverink_jfrey_msnuverink_jjochem.snuverink@psi.chfrey_mhttps://gitlab.psi.ch/OPAL/src/-/issues/282Closed Orbit Finder not always converging2019-04-01T15:01:25+02:00snuverink_jjochem.snuverink@psi.chClosed Orbit Finder not always convergingThe closed orbit finder is not always converging, as found by @matlocha_t. His input file is [u2_seo.in](/uploads/ca99e5634a65602ee0f2a91c61033aa6/u2_seo.in).
cc: @frey_msnuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/283Element positions in *_ElementPositions.txt file not correct2019-03-24T11:59:14+01:00krausElement positions in *_ElementPositions.txt file not correct- If ELEMEDGE isn't at the beginning of the element then BEGIN and END are computed as if it were at the beginning.
- Dipoles with negative angle are flipped.krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/285Matched Distribution: Not matched?2020-07-15T16:53:42+02:00frey_mMatched Distribution: Not matched?@cortes_c experienced that a particle tracking with a matched distribution doesn't remain matched after 1 turn. @baumgarten suggested to check the orientation of the integration.
Corresponding branch: https://gitlab.psi.ch/OPAL/src/tree/matched-gauss-fixes **(deleted)**
CC: @snuverink_j @cortes_c
- [x] fix crash of regression test (see https://gitlab.psi.ch/OPAL/src/merge_requests/364)
- [x] ~~check unit~~ [**Edit:** Checked with !393]
- [x] check why not matched (see !393)
- [x] update regression test due to changes of !393 (see https://gitlab.psi.ch/OPAL/regression-tests/merge_requests/20)OPAL 2.4.0frey_mfrey_m2020-07-24https://gitlab.psi.ch/OPAL/src/-/issues/292reading H5hut fieldmap fails due to unset view2019-04-12T10:57:21+02:00gsellreading H5hut fieldmap fails due to unset viewIn method FM3dH5Block::readMap() a view must be set before we can read the map.gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/293Stripper Element not losing any particles2019-04-06T09:55:12+02:00snuverink_jjochem.snuverink@psi.chStripper Element not losing any particlesDiscovered by @ext\-calvo\_p. The stripper element is not recording any particles.
This is due to a forgotten `bunch->get_bounds()` statement. This was introduced in commits 60b4de13 and 57787997 (OPAL-2.0).
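A sketch of the missing call (per the description; variable names follow the snippet quoted in issue 243 above):

```c++
// Refresh the cached bunch bounds before the stripper's coarse overlap check.
Vector_t rmin, rmax;
bunch->get_bounds(rmin, rmax);
```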
snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/294ParallelCyclotronTracker crashes (in single particle?) mode when all particles are lost2019-04-04T13:32:15+02:00snuverink_jjochem.snuverink@psi.chParallelCyclotronTracker crashes (in single particle?) mode when all particles are lostI noticed in single particle mode (might not be related) that if all particles (in this case on a stripper) are lost, OPAL crashes:
```
OPAL> At step 25071, lost 1 particles on stripper, collimator, septum, or out of cyclotron aperture
Error>
Error> *** User error detected by function "boundp() "
Error> h<0, can not build a mesh
Error> h<0, can not build a mesh
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -100.
```
In [ParallelCyclotronTracker::deleteParticle()](https://gitlab.psi.ch/OPAL/src/blob/master/src/Algorithms/ParallelCyclotronTracker.cpp#L2169) there needs to be a check, after the particles are removed, whether any particles are left, with an early return before the [boundp()](https://gitlab.psi.ch/OPAL/src/blob/master/src/Algorithms/ParallelCyclotronTracker.cpp#L2243) call.
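A minimal sketch of that guard (an assumption, not the committed fix):

```c++
// After the lost particles have been removed: boundp() cannot build a mesh
// for an empty bunch ("h<0"), so return early when nothing is left.
if (itsBunch_m->getTotalNum() == 0)
    return;
itsBunch_m->boundp();
```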
snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/295ParallelTTracker crashes when using particle matter integration and space charge solver if all particles are in material2019-04-12T13:26:34+02:00krausParallelTTracker crashes when using particle matter integration and space charge solver if all particles are in material```
ParallelTTracker [2]> --- CollimatorPhysics - Name AIR1 Material AIR
ParallelTTracker [2]> Particle Statistics @ 12:29:52
ParallelTTracker [2]> entered: 1
ParallelTTracker [2]> rediffused: 0
ParallelTTracker [2]> stopped: 0
ParallelTTracker [2]> total in material: 50'000
Error>
Error> *** User error detected by function "boundp() "
Error> h<0, can not build a mesh
Error> h<0, can not build a mesh
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -100.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
```krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/298Reference particle has to be slowed down by material2019-08-01T05:57:13+02:00krausReference particle has to be slowed down by materialAt the moment the reference particle isn't slowed down when passing a degrader. The beam then has a different kinetic energy, which poses a problem in subsequent dipoles and in the statistical analysis.krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/300reading H5Block formatted field-maps crashes2019-04-12T10:47:46+02:00gsellreading H5Block formatted field-maps crashesIf the size of the field-map in z-direction is less than the number of cores, reading the field-map crashes.
This is already fixed in OPAL 2.0: see 0172837a and #292 gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/301Premature termination of integration when reference particle after MAXSTEPS in implicit drift.2019-05-15T22:43:30+02:00krausPremature termination of integration when reference particle after MAXSTEPS in implicit drift.The Degrader-1 test is flagged as broken because the number of saved steps in the `.stat` file differs. This is caused by the fact that after 230 steps (MAXSTEPS) the reference particle in the OrbitThreader class is located in a drift that isn't explicitly mentioned in the input file. During simulation ParallelTTracker stops because it seems to have reached the end of the beamline.krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/305Calculation of chord length in RBend wrong2019-05-09T14:16:31+02:00krausCalculation of chord length in RBend wrong### Summary
When the deflection angle is negative then the chord length that is calculated in RBend is wrong. In a rectangular bend when the orientation of the face relative to the beam (`E1`) is half of the deflection angle then the chord length should be equal to the length of the dipole. Instead the calculated length is as if `E1` was multiplied by `-1`.
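For reference (standard geometry, not taken from the OPAL source): a circular arc with bending radius $`\rho`$ and deflection angle $`\alpha`$ has the chord

```math
c = 2\rho\,\sin\left(\frac{|\alpha|}{2}\right),
```

which is even in $`\alpha`$; with the usual rectangular-bend radius $`\rho = L / \left(2\sin(\alpha/2)\right)`$ this gives $`c = L`$, so a chord that changes when only the sign of `ANGLE` flips points to the sign handling of `E1`.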
### Steps to reproduce
Add `OPTION, LOGBENDTRAJECTORY=TRUE;` to the input file and track a bunch through a rectangular bend with `ANGLE < 0` and `E1 = ANGLE / 2`. Then look up the distance in the file `data/<input_fname>_<bend_name>_traj.dat` between the two locations where the reference particle crosses `x=0`.
### What is the current *bug* behavior?
The current chord length is as if `E1 = -ANGLE / 2`.
### What is the expected *correct* behavior?
The chord length should be equal to `L` in the description of the bend in the input file.krauskraushttps://gitlab.psi.ch/OPAL/src/-/issues/308Time shift and time structure in phase space after particle-matter interaction2020-07-30T21:23:18+02:00krausTime shift and time structure in phase space after particle-matter interaction### Summary
The longitudinal phase space exhibits a time structure and the time needed to penetrate the material differs significantly depending on the seed and number of cores used to compute. This can be seen in the following plot showing histograms for the time of arrival at a monitor after a degrader:
![hist_t](/uploads/86eed088ff9fbb4ac4b533f6510d5616/hist_t.png)
### Steps to reproduce
Run the Degrader-1 regression test and plot a histogram of the time of arrival at the monitor M1. Use different seeds and number of cores to run the test.
### What is the current *bug* behavior?
The phase space exhibits a time structure that corresponds to the length of the time step, and the mean time of arrival differs significantly for different setups. The size of this time shift cannot be explained by the stochastic nature of the particle-matter interaction.
### What is the expected *correct* behavior?
There shouldn't be a regular time structure and the difference of mean time of arrival for different setups should be very small.
### Relevant logs and/or screenshots
See above.OPAL 2.4.0krauskraus2020-07-24https://gitlab.psi.ch/OPAL/src/-/issues/309review OPAL CMakeModule files2020-04-22T11:25:48+02:00gsellreview OPAL CMakeModule filesSearching for a library with
```
FIND_LIBRARY (GSL_LIBRARY gsl
HINTS $ENV{GSL_ROOT_DIR}/lib $ENV{GSL_LIBRARY_PATH} $ENV{GSL_LIBRARY_DIR} $ENV{GSL_PREFIX}/lib $ENV{GSL_DIR}/lib $ENV{GSL}/lib
PATHS ENV LIBRARY_PATH
)
```
can fail if the library is installed in `/lib` or `/lib64`. For some unknown reason this works on Merlin-5 but fails on Merlin-6 if e.g. `libgsl` is installed.
What is the problem?
* On RHEL7 `/usr/lib64` is a symbolic link to `/lib64`.
* If e.g. `GSL_ROOT_DIR` is not set in the environment `/lib` and `/lib64` are used to search for the library.
* In RHEL7 most system libraries are installed in `/lib64`, which is the first hint if `GSL_ROOT_DIR` is not set.gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/313PluginElements: particles not recorded when element crosses (or close to) origin2019-07-14T16:56:42+02:00snuverink_jjochem.snuverink@psi.chPluginElements: particles not recorded when element crosses (or close to) origin### Summary
Noticed by @nesteruk\_k: Particles are not recorded in some Probes.
Likely the other PluginElements are affected too.
### Steps to reproduce
A Probe crossing the origin, e.g. one defined as:
```
P: Probe, XSTART=-1e10, YSTART=0, XEND=1e10, YEND=0;
```
will not record any particles.
### What is the expected *correct* behavior?
Recorded particle and output files: P.hist, P.h5, P.peaks
### Possible fixes
A check is performed if the bunch is close to the probe (and only then individual particles are checked) as follows:
https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/AbsBeamline/Probe.cpp#L62
```c++
if( rbunch_max > rstart_m - 10.0 && rbunch_min < rend_m + 10.0 ) {
```
With `rstart_m` and `rend_m` defined as:
https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/AbsBeamline/PluginElement.cpp#L86
```cpp
rstart_m = std::hypot(xstart, ystart);
rend_m = std::hypot(xend, yend);
// start position is the one with lowest radius
if (rstart_m > rend_m) {
std::swap(xstart_m, xend_m);
std::swap(ystart_m, yend_m);
std::swap(rstart_m, rend_m);
}
```
Instead of `rstart_m`, the closest point of the element to the origin should be used in the check.
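A hedged sketch of that check (an assumption, not the committed fix; the helper is hypothetical):

```c++
#include <algorithm>
#include <cmath>

// Minimum distance from the origin to the segment (x1,y1)-(x2,y2): replaces
// rstart_m in the coarse check, so an element crossing the origin still
// passes.
double distToOrigin(double x1, double y1, double x2, double y2) {
    const double dx = x2 - x1, dy = y2 - y1;
    const double len2 = dx * dx + dy * dy;
    // parameter of the origin's projection onto the segment, clamped to [0,1]
    const double t = (len2 > 0.0)
        ? std::max(0.0, std::min(1.0, -(x1 * dx + y1 * dy) / len2))
        : 0.0;
    return std::hypot(x1 + t * dx, y1 + t * dy);
}
```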
snuverink_jjochem.snuverink@psi.chsnuverink_jjochem.snuverink@psi.chhttps://gitlab.psi.ch/OPAL/src/-/issues/314in BoundaryGeometry: replace recursive algorithm to set orientation of triangle with iterative2019-07-03T15:59:34+02:00gsellin BoundaryGeometry: replace recursive algorithm to set orientation of triangle with iterativeFor the time being a recursive algorithm is used to make the normal vector of each triangle inward pointing. This is inefficient for large meshes and, more importantly, can cause crashes due to memory consumption.gsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/315cleanup/fixes in Bend.cpp, Cyclotron.cpp and BeamStrippingPhysics.cpp2019-07-11T10:48:57+02:00gsellcleanup/fixes in Bend.cpp, Cyclotron.cpp and BeamStrippingPhysics.cppgsellgsellhttps://gitlab.psi.ch/OPAL/src/-/issues/316reading fields in H5Block format fails if z-dimension is less than the number of cores2020-07-01T15:46:46+02:00gsellreading fields in H5Block format fails if z-dimension is less than the number of coresOPAL 2.4.0gsellgsell