[Issue 163: Charge zero in OPAL-cycl & OPAL-t](https://gitlab.psi.ch/OPAL/src/-/issues/163) (adelmann, updated 2017-10-02)
Compare the beam size of 1.6 and 1.9.x
![opal-cycl](/uploads/e0253f94e8aaec164cae26c992f33eab/opal-cycl.png)
for the IsoDAR cyclotron. Input files can be found on
`merlin-l-01: /gpfs/home/adelmann/scratch/UQ/isodar-1/Accelerated` and
`...../Accelerated-1.9`
Fun fact: **Qtot = 0.000**
```
OPAL{0}> * ************** B U N C H *********************************************************
OPAL{0}> * NP = 133000
OPAL{0}> * Qtot = 0.000 [fC] Qi = 1.017 [fC]
OPAL{0}> * Ekin = 361.221 [keV] dEkin = 1.445 [keV]
OPAL{0}> * rmax = ( 3.18003 , 8.91427 , 9.34380 ) [um]
OPAL{0}> * rmin = ( -3.18003 , -8.95209 , -9.36713 ) [um]
OPAL{0}> * rms beam size = ( 1.02826 , 2.91108 , 3.02269 ) [mm]
OPAL{0}> * rms momenta = ( 1.70888e-04 , 3.92498e-05 , 7.85035e-05 ) [beta gamma]
OPAL{0}> * mean position = ( 0.00000 , -0.00000 , 0.00009 ) [um]
OPAL{0}> * mean momenta = ( 2.92045e-15 , 1.96206e-02 , -1.26375e-09 ) [beta gamma]
OPAL{0}> * rms emittance = ( 8.78539e-06 , 5.71264e-06 , 1.18639e-05 ) (not normalized)
OPAL{0}> * rms correlation = ( 2.39105e-04 , 1.14814e-03 , 1.85573e-03 )
OPAL{0}> * hr = ( 0.44096 , 1.23873 , 1.29729 ) [mm]
OPAL{0}> * dh = 2.00000e+00 [%]
OPAL{0}> * t = 0.000 [fs] dT = 28.251 [ps]
OPAL{0}> * spos = 0.000 [um]
OPAL{0}> * **********************************************************************************
```
Milestone: OPAL 1.9.x (adelmann, winklehner_d).
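The "fun fact" is easy to quantify: with NP macro-particles of Qi = 1.017 fC each, the printed total charge should be NP × Qi rather than zero. A quick sanity check using the numbers from the bunch summary above:

```python
# Sanity check of the bunch summary above: Qtot should equal NP * Qi.
NP = 133000      # number of macro-particles (from the dump)
Qi = 1.017       # charge per macro-particle [fC]
Qtot = NP * Qi   # expected total charge [fC]
print(Qtot)      # ~135261 fC, i.e. about 0.135 nC -- not the reported 0.000
```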
[Issue 162: K0 attribute in RBend](https://gitlab.psi.ch/OPAL/src/-/issues/162) (Valeria Rizzoglio, updated 2017-10-22)
It seems that the attribute K0 (to set the magnetic field) is not working in the RBend element.
In the regression test, if I am not mistaken, only the ANGLE attribute is tested.
From a simple RBend test:
1. If the **ANGLE** attribute is used:
```
RBend [2]> 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]> BEND using file 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]>
RBend [2]> Start of field map: 0.146472 m (in s coordinates)
RBend [2]> End of field map: 0.484418 m (in s coordinates)
RBend [2]> Entrance edge of magnet: 0.25 m (in s coordinates)
RBend [2]>
RBend [2]> Reference Trajectory Properties
RBend [2]> ===============================
RBend [2]>
RBend [2]> Bend angle magnitude: 0.523599 rad (30 degrees)
RBend [2]> Entrance edge angle: 0.261799 rad (15 degrees)
RBend [2]> Exit edge angle: 0.261799 rad (15 degrees)
RBend [2]> Bend design radius: 0.249982 m
RBend [2]> Bend design energy: 7e+06 eV
RBend [2]>
RBend [2]> Bend Field and Rotation Properties
RBend [2]> ==================================
RBend [2]>
RBend [2]> Field amplitude: 1.53217 T
RBend [2]> Field index: 0
RBend [2]> Rotation about x axis: 0 rad (0 degrees)
RBend [2]> Rotation about y axis: 0 rad (0 degrees)
RBend [2]> Rotation about z axis: 0 rad (0 degrees)
RBend [2]>
RBend [2]> Reference Trajectory Properties Through Bend Magnet with Fringe Fields
RBend [2]> ======================================================================
RBend [2]>
RBend [2]> Reference particle is bent: 0.523599 rad (30 degrees) in x plane
RBend [2]> Reference particle is bent: 0 rad (0 degrees) in y plane
RBend [2]>
```
2. If the **K0** attribute is used:
```
RBend [2]> 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]> BEND using file 1DPROFILE1-DEFAULT (1D Profile type 1)
RBend [2]>
RBend [2]> Start of field map: 0.146472 m (in s coordinates)
RBend [2]> End of field map: -nan m (in s coordinates)
RBend [2]> Entrance edge of magnet: 0.25 m (in s coordinates)
RBend [2]>
RBend [2]> Reference Trajectory Properties
RBend [2]> ===============================
RBend [2]>
RBend [2]> Bend angle magnitude: -nan rad (-nan degrees)
RBend [2]> Entrance edge angle: 0.261799 rad (15 degrees)
RBend [2]> Exit edge angle: -0.261799 rad (-15 degrees)
RBend [2]> Bend design radius: 0.25001 m
RBend [2]> Bend design energy: 7e+06 eV
RBend [2]>
RBend [2]> Bend Field and Rotation Properties
RBend [2]> ==================================
RBend [2]>
RBend [2]> Field amplitude: 1.532 T
RBend [2]> Field index: 0
RBend [2]> Rotation about x axis: 0 rad (0 degrees)
RBend [2]> Rotation about y axis: 0 rad (0 degrees)
RBend [2]> Rotation about z axis: 0 rad (0 degrees)
RBend [2]>
RBend [2]> Reference Trajectory Properties Through Bend Magnet with Fringe Fields
RBend [2]> ======================================================================
RBend [2]>
RBend [2]> Reference particle is bent: -0 rad (-0 degrees) in x plane
RBend [2]> Reference particle is bent: 0 rad (0 degrees) in y plane
RBend [2]>
```
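As a cross-check of the ANGLE output (and assuming a proton beam, which the input is not quoted as stating): the printed field amplitude is consistent with B = p/(qρ) for the 7 MeV design energy and the 0.249982 m design radius, so the ANGLE branch computes a sensible field while the K0 branch produces NaNs:

```python
import math

# Cross-check of the ANGLE case: B = p / (q * rho), using
# B*rho [T m] = p [GeV/c] / 0.299792458 (assuming a proton beam).
m_p = 938.272                        # proton rest mass [MeV]
E_kin = 7.0                          # bend design energy [MeV]
E_tot = m_p + E_kin                  # total energy [MeV]
p = math.sqrt(E_tot**2 - m_p**2)     # momentum [MeV/c]
b_rho = (p / 1000.0) / 0.299792458   # magnetic rigidity [T m]
B = b_rho / 0.249982                 # field for the printed design radius [T]
print(round(B, 5))                   # ~1.532 T, matching "Field amplitude"
```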
Could someone please check?
Milestone: OPAL 2.0.0 (adelmann, kraus).

[Issue 158: Somehow PSDump has influence on dumped statistics](https://gitlab.psi.ch/OPAL/src/-/issues/158) (kraus, updated 2017-08-18)
[red has PSDump simultaneously](/uploads/f289a4e3acd9d43703dc6b5c9c5c50fe/influencePSDump.png) This doesn't hurt anything further, but it's annoying.
Milestone: OPAL 1.6.0 (adelmann).

[Issue 156: The Degrader-1 test yields different results when dks is enabled](https://gitlab.psi.ch/OPAL/src/-/issues/156) (kraus, updated 2020-05-01)
rms x and rms y seem to be fine; only the energy is affected. On a first inspection of the DKS code (CudaCollimatorPhysics.cu) I couldn't find anything obvious. I have neither the expertise nor the hardware to debug code for CUDA.
Milestone: OPAL 2.4.0 (locans_u).

[Issue 153: Constraints validation fails](https://gitlab.psi.ch/OPAL/src/-/issues/153) (frey_m, updated 2017-11-08)
I tried out the constraints with the condition that the number of particles should be greater than zero.
```
...
//c1: CONSTRAINT, EXPR="numParticles > 0";
//objs: OBJECTIVES=(dpeak1,dpeak2,dpeak3_5);
//constrs: CONSTRAINTS = (c1);
//opt: OPTIMIZE, OBJECTIVES = objs, DVARS = dvars, CONSTRAINTS = constrs;
...
```
This is a dummy constraint, since in our simulation we lose no particles. 'numParticles' is part of the SDDS file, i.e. the *.stat file (OPAL 1.6).
For some reason that I do not understand, I get the following message in [opt.trace.0](/uploads/71d42dd821ddfcc95d3fa165cb5ef5ad/opt.trace.0):
```
invalid individual, constraint "c1" failed to yield true; result: 0
```
OPT-Pilot never finds a solution. Without the constraint, it works fine. The template and data file are attached:
[Ring.tmpl](/uploads/c94789c099aa26a0d20acd0daca29f93/Ring.tmpl)
[Ring.data](/uploads/95c2dac28b6a7785e708cc363977957c/Ring.data)
Best,
Matthias :bug:
(snuverink_j, jochem.snuverink@psi.ch)

[Issue 152: More than 1 coworker](https://gitlab.psi.ch/OPAL/src/-/issues/152) (adelmann, updated 2019-01-10)
**--num-coworkers=2** does not work. The simulation of the first generation does not terminate.
(Yves Ineichen)

[Issue 151: OPAL does not compile with DKS enabled after recent commits](https://gitlab.psi.ch/OPAL/src/-/issues/151) (gsell, updated 2017-08-14)
@kraus, @uldis_l:
```
59%] Building CXX object src/CMakeFiles/OPALib.dir/Classic/Structure/LossDataSink.cpp.o
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:38:30: error: ‘const int CollimatorPhysics::numpar’ is not a static data member of ‘class CollimatorPhysics’
const int CollimatorPhysics::numpar = 13;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp: In constructor ‘CollimatorPhysics::CollimatorPhysics(const string&, ElementBase*, std::__cxx11::string&, bool, double)’:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:77:7: error: class ‘CollimatorPhysics’ does not have any field named ‘curandInitSet’
, curandInitSet(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:78:7: error: class ‘CollimatorPhysics’ does not have any field named ‘ierr’
, ierr(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:79:7: error: class ‘CollimatorPhysics’ does not have any field named ‘maxparticles’
, maxparticles(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:80:7: error: class ‘CollimatorPhysics’ does not have any field named ‘numparticles’
, numparticles(0)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:81:7: error: class ‘CollimatorPhysics’ does not have any field named ‘par_ptr’
, par_ptr(NULL)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:82:7: error: class ‘CollimatorPhysics’ does not have any field named ‘mem_ptr’
, mem_ptr(NULL)
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::applyDKS(PartBunch&, size_t)’:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:875:58: error: cannot allocate an object of abstract type ‘Degrader’
Degrader deg = dynamic_cast<Degrader *>(element_ref_m);
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:16:0,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Degrader.h:38:7: note: because the following virtual functions are pure within ‘Degrader’:
class Degrader: public Component {
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Component.h:26:0,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:14,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/ElementBase.h:190:29: note: virtual BGeometryBase& ElementBase::getGeometry()
virtual BGeometryBase &getGeometry() = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/ElementBase.h:195:35: note: virtual const BGeometryBase& ElementBase::getGeometry() const
virtual const BGeometryBase &getGeometry() const = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/ElementBase.h:311:26: note: virtual ElementBase* ElementBase::clone() const
virtual ElementBase *clone() const = 0;
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:14:0,
from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Component.h:64:22: note: virtual EMField& Component::getField()
virtual EMField &getField() = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/AbsBeamline/Component.h:69:28: note: virtual const EMField& Component::getField() const
virtual const EMField &getField() const = 0;
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:875:14: error: cannot declare variable ‘deg’ to be of abstract type ‘Degrader’
Degrader deg = dynamic_cast<Degrader *>(element_ref_m);
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:878:60: error: no matching function for call to ‘CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader&, size_t&)’
setupCollimatorDKS(bunch, deg, numParticlesInSimulation);
^
In file included from /home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:0:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:110:10: note: candidate: void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)
void setupCollimatorDKS(PartBunch &bunch, Degrader *deg, size_t numParticlesInSimulation);
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.hh:110:10: note: no known conversion for argument 2 from ‘Degrader’ to ‘Degrader*’
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)’:
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:1063:51: error: ‘numpar’ was not declared in this scope
par_mp = dksbase_m.allocateMemory<double>(numpar, ierr_m);
^
/home/opalci/NightlyBuild/workspace/OPAL-1.6-DKS/src/src/Classic/Solvers/CollimatorPhysics.cpp:1082:50: error: ‘class Degrader’ has no member named ‘getZSize’
double params[numpar_ms] = {zBegin, deg->getZSize(), rho_m, Z_m,
^
make[2]: *** [src/CMakeFiles/OPALib.dir/Classic/Solvers/CollimatorPhysics.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [src/CMakeFiles/OPALib.dir/all] Error 2
make: *** [all] Error 2
```
(kraus)

[Issue 149: Coulomb / Rutherford scattering](https://gitlab.psi.ch/OPAL/src/-/issues/149) (kraus, updated 2019-05-11)
Does multiplying R twice by 1000 really make sense?
- [first time here](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Solvers/CollimatorPhysics.cpp#L773)
- [second time here](https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Solvers/CollimatorPhysics.cpp#L792)
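For context: if R starts in metres, one factor of 1000 yields millimetres, and a second factor yields micrometres while the value is presumably still read as millimetres, inflating it by 10³. A toy illustration with a hypothetical value (not the actual CollimatorPhysics variables):

```python
R = 0.005              # hypothetical radius [m]
R_mm = R * 1000        # one conversion: 5.0 (mm)
R_twice = R_mm * 1000  # second *1000: 5000.0 (micrometres, if still read as mm)
print(R_mm, R_twice)
```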
@adelmann @baumgarten ?
(kraus)

[Issue 146: Rewrite the ArbitraryDomain class](https://gitlab.psi.ch/OPAL/src/-/issues/146) (kraus, updated 2021-06-09)
Currently the ArbitraryDomain class only works when it is partitioned in the z-direction. Rewrite it such that the global linear indexing also works with PARFFTX=TRUE and/or PARFFTY=TRUE.
(winklehner_d, frey_m)

[Issue 143: BoundaryGeometries VTK output produces odd results](https://gitlab.psi.ch/OPAL/src/-/issues/143) (kraus, updated 2018-05-16)
Used the SAAMG-Test-1 to produce the [attached screenshot](/uploads/695901d8c9e8a2e7afc37278f666eef7/Pipe_1m_10cm.png) (serial and parallel).
(gsell)

[Issue 140: Particle delete](https://gitlab.psi.ch/OPAL/src/-/issues/140) (adelmann, updated 2017-08-05)
With OPAL-1.6 (newest pull) and regression test PSIGUN-1, Bin 0 gets no particles at timestep 2:
```
....
OPAL {0}[3]> * Wrote beam statistics.
Ippl{0}[2]> Bin 0 gamma = 1.00717e+00; NpInBin= 667
Ippl{0}[2]> Bin 1 has no particles
Ippl{0}[2]> Bin 2 has no particles
Ippl{0}[2]> Bin 3 has no particles
Ippl{0}[2]> Bin 4 has no particles
Ippl{0}[3]> * Bin number: 2 has emitted all particles (new emit).
ParallelTTracker {0}> * Deleted 667 particles, remaining 4755 particles
ParallelTTracker {0}[3]> 12:03:09 Step 1 at -0.053 [mm] t= 1.060e-11 [s] E= 5.388 [keV]
...
OPAL {0}>
OPAL {0}[3]> * Wrote beam statistics.
Ippl{0}[2]> Bin 0 has no particles
Ippl{0}[2]> Bin 1 gamma = 1.01054e+00; NpInBin= 4755
Ippl{0}[2]> Bin 2 has no particles
```
Later on we are running into
`I + M < LocalSize`
@kraus Is there still an autophase problem?
Milestone: OPAL 1.6.0 (kraus).

[Issue 138: Setting autophase option without a cavity in beamline throws mysterious error](https://gitlab.psi.ch/OPAL/src/-/issues/138) (ext-hall_c, updated 2017-08-05)
With `"OPTION, AUTOPHASE=4;"` in my input file, when I use a beamline without a cavity I see an error like:
`opal(7879,0x7fff7f140000) malloc: *** error for object 0x7fff9a15b9f3: pointer being freed was not allocated`
Turning autophase off allowed my input file to run without error, but this error was not very informative and it took quite a while to find the culprit. It would be helpful if making this mistake generated a specific error message.
Milestone: OPAL 1.6.0 (kraus).

[Issue 137: Segmentation fault - Degrader 70 MeV](https://gitlab.psi.ch/OPAL/src/-/issues/137) (Valeria Rizzoglio, updated 2021-06-10)
I am trying to test the influence of the time step on the results of the OPAL Monte Carlo using the Multi-Slabs degrader for 70 MeV ([Degrader_70.in](/uploads/bc2a35adc56108066470d475851794f4/Degrader_70.in)).
I set the time step to 1e-10 s and got a segmentation fault. So I did a few tests, trying different configurations of time step, number of cores, and options (ENABLERUTHERFORD = TRUE/FALSE, with/without GPU):
| Configuration | Protons | DT [s] | Cores | DKS | ENABLERUTHERFORD | Result |
|---|---|---|---|---|---|---|
| 1 | 1e5 | 1e-10 | 4 | yes | TRUE | segmentation fault ([Config1.out](/uploads/e22237cd275e223eafc1f393b7f00c3f/Config1.out)) |
| 2 | 1e5 | 1e-10 | 4 | yes | FALSE | OK ([Config2.out](/uploads/e1744843830b2f7480ec1d210f9100e2/Config2.out)) |
| 3 | 1e5 | 1e-10 | 4 | no | TRUE | segmentation fault ([Config3.out](/uploads/1729b7c9fa264b2d19ef0b2ab8a30d2a/Config3.out)) |
| 4 | 1e7 | 1e-10 | 4 | no | TRUE | OPAL stops at 4.4 mm with 4 protons while the ZSTOP is 4.3 m ([Config4.out](/uploads/cd16cc8612b11ca5a93c4d2838406fab/Config4.out)) |
| 4b | 1e5 | 1e-10 | 8 | no | TRUE | segmentation fault ([Config4b.out](/uploads/2fe58a350447fc863a07bdf0f398bb93/Config4b.out)) |
| 5 (on Merlin) | 1e7 | 1e-10 | 32 | no | FALSE | OK |
| 6 | 1e5 | 1e-11 | 4 | yes | FALSE | OK ([Config6.out](/uploads/85b27a193d0e9de8d463b99502220dfa/Config6.out)) |
| 7 | 1e5 | 1e-11 | 4 | yes | TRUE | OK ([Config7.out](/uploads/89264c89f20abc3bfc933de6acaf2e52/Config7.out)) |
Run on opalrunner and Merlin with these settings:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 4) OPAL/1.6 7) Tcl/8.6.4 10) boost/1.62.0
2) openmpi/1.10.4 5) root/6.08.02 8) Tk/8.6.4 11) gsl/2.2.1
3) OPAL/1.6.0rc3 6) openssl/1.0.2j 9) Python/2.7.12 12) H5root/1.3.2rc4-1
```
(gsell, adelmann)

[Issue 133: BeamLine fails isInside test during OrbitThreader execute() when Aperture CIRCLE is defined in RFCavity](https://gitlab.psi.ch/OPAL/src/-/issues/133) (winklehner_d, updated 2017-08-02)
It took me a long time to find out why my RFCavity was not in the imap_m generated by the OrbitThreader during execute(), so I wasn't able to test this with other apertures, but it seems that having a "CIRCLE(0.008, 1)" aperture defined in the RFCavity element prevents it from being added to the elementSet list in the getElements(nextR) function. I think the culprit is somehow the ElementBase::isInsideTransverse() function.
(kraus)

[Issue 132: _M_range_check error](https://gitlab.psi.ch/OPAL/src/-/issues/132) (winklehner_d, updated 2017-08-13)
Since pulling today, this happens:
```
Error{1}> *** Error:
Error{1}> *** in line 86 of file "RFQ_VECC-T.in":
Error{1}> RUN,METHOD="PARALLEL-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST;
Error{1}> vector::_M_range_check
```
Any insights, anyone? @kraus, did you write something about distributions now being arrays? @adelmann?

[Issue 131: Segmentation fault - dks - SurfacePhysics Collimators](https://gitlab.psi.ch/OPAL/src/-/issues/131) (Valeria Rizzoglio, updated 2021-06-10)
I got a segmentation fault running this input file: [PROSCAN-G3-230.in](/uploads/7820209c33311fcdd68601832deacf30/PROSCAN-G3-230.in). It includes SurfacePhysics on 3 consecutive collimators.
The error message:
```
ParallelTTracker {0}> Coll/Deg statistics: bunch to material 2 redifused 0 stopped 1
[opalrunner:20589] *** Process received signal ***
[opalrunner:20589] Signal: Segmentation fault (11)
[opalrunner:20589] Signal code: Address not mapped (1)
[opalrunner:20589] Failing at address: 0x1b70f000
[opalrunner:20589] [ 0] /lib64/libc.so.6[0x32e9632660]
[opalrunner:20589] [ 1] opal(_ZN14ParticleAttribI6VektorIdLj3EEE7destroyERKSt6vectorISt4pairImmESaIS5_EEb+0x1f0)[0xe531d0]
[opalrunner:20589] [ 2] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE14performDestroyEv+0xc2)[0xdac9e2]
[opalrunner:20589] [ 3] opal(_ZN21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES1_EE6updateER16IpplParticleBaseIS4_EPK14ParticleAttribIcE+0x45)[0xdae095]
[opalrunner:20589] [ 4] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE6updateEv+0x1a)[0xdae60a]
[opalrunner:20589] [ 5] opal(_ZN9PartBunch6boundpEv+0x406)[0xe225e6]
[opalrunner:20589] [ 6] opal(_ZN16ParallelTTracker21computeExternalFieldsEv+0xf19)[0x107ec79]
[opalrunner:20589] [ 7] opal(_ZN16ParallelTTracker21executeDefaultTrackerEv+0x637)[0x1084b77]
[opalrunner:20589] [ 8] opal(_ZN16ParallelTTracker7executeEv+0x1f)[0x108566f]
[opalrunner:20589] [ 9] opal(_ZN8TrackRun7executeEv+0x751)[0x104c4b1]
[opalrunner:20589] [10] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [11] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [12] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [13] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [14] opal(_ZN8TrackCmd7executeEv+0x343)[0xd6ccc3]
[opalrunner:20589] [15] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [16] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [17] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [18] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [19] opal(_ZNK10OpalParser3runEP11TokenStream+0x6a)[0xcb9cea]
[opalrunner:20589] [20] opal(main+0x8e8)[0xc48658]
[opalrunner:20589] [21] /lib64/libc.so.6(__libc_start_main+0xfd)[0x32e961ed1d]
[opalrunner:20589] [22] opal[0xc3fab5]
[opalrunner:20589] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 20589 on node opalrunner exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
```
I tried two different time steps (1 ps and 5 ps) and got the same error. The same file runs to the end without the option `--use-dks`.
Run configuration: opalrunner with 8 cores.
Loaded modules:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 3) OPAL/1.6.0rc3 5) root/6.08.02 7) Tcl/8.6.4 9) Python/2.7.12 11) gsl/2.2.1
2) openmpi/1.10.4 4) OPAL/1.6 6) openssl/1.0.2j 8) Tk/8.6.4 10) boost/1.62.0 12) H5root/1.3.2rc4-1
```
(adelmann, kraus)

[Issue 129: Array of distributions containing FROMFILE](https://gitlab.psi.ch/OPAL/src/-/issues/129) (kraus, updated 2017-08-13)
This won't work properly because e.g. the number of particles in a FROMFILE distribution is fixed. Thus, when computing the number of particles the other distributions should contain, we first have to subtract the number of particles in the FROMFILE distributions.
Milestone: OPAL 1.6.0 (kraus).

[Issue 128: Let each distribution in an array of distributions have its own offset in R and P](https://gitlab.psi.ch/OPAL/src/-/issues/128) (kraus, updated 2017-07-15)
When providing an array of distributions where each distribution has its own OFFSET{X|Y|Z|PX|PY|PZ}, so far all distributions use the offsets of the first distribution.
Milestone: OPAL 1.6.0 (kraus).

[Issue 125: Vector of time steps: error in the parser](https://gitlab.psi.ch/OPAL/src/-/issues/125) (Valeria Rizzoglio, updated 2017-07-13)
[PROSCAN-G3-230.in](/uploads/0f541b042bd39fdf2fe62688529cc406/PROSCAN-G3-230.in)
If I track the particles using a vector of time steps:
```
TRACK, LINE=BEAMLINE_TOT,
BEAM=BEAM_G3_LA1,
MAXSTEPS={5e+08,5e+08,5e+08},
DT={5*PICOSECONDS,1*PICOSECONDS,5*PICOSECOND},
ZSTOP={6.145,6.75,16}
```
Milestone: OPAL 1.6.0 (kraus).

[Issue 123: No stat-file output in case of MTS tracking](https://gitlab.psi.ch/OPAL/src/-/issues/123) (frey_m, updated 2017-07-05)
Running the regression test [RingCyclotronMTS](https://gitlab.psi.ch/OPAL/regression-tests/blob/master/RegressionTests/RingCyclotronMTS/RingCyclotronMTS.in), however with `nsteps = 2000` and `SPTDUMPFREQ = 10` (as in the test [RingCyclotron](https://gitlab.psi.ch/OPAL/regression-tests/blob/master/RegressionTests/RingCyclotron/RingCyclotron.in) using RK-4), I get only one dump in RingCyclotronMTS.stat.
Milestone: OPAL 1.9.x (frey_m).