src issues (https://gitlab.psi.ch/OPAL/src/-/issues)

**Issue #137: Segmentation fault - Degrader 70 MeV** (Valeria Rizzoglio, 2021-06-10)
https://gitlab.psi.ch/OPAL/src/-/issues/137

I am trying to test the influence of the time step on the results of the OPAL Monte Carlo using the Multi-Slabs degrader for 70 MeV ([Degrader_70.in](/uploads/bc2a35adc56108066470d475851794f4/Degrader_70.in)).
I set the time step to 1e-10 s and got a segmentation fault. So I did a few tests, trying different configurations of time step, number of cores, and options (ENABLERUTHERFORD = TRUE/FALSE, with/without GPU):
- **Configuration 1**
  - protons = 1e5, DT = 1e-10 s, cores = 4, with dks and ENABLERUTHERFORD = TRUE
  - result: segmentation fault [Config1.out](/uploads/e22237cd275e223eafc1f393b7f00c3f/Config1.out)
- **Configuration 2**
  - protons = 1e5, DT = 1e-10 s, cores = 4, with dks and ENABLERUTHERFORD = FALSE
  - result: OK [Config2.out](/uploads/e1744843830b2f7480ec1d210f9100e2/Config2.out)
- **Configuration 3**
  - protons = 1e5, DT = 1e-10 s, cores = 4, without dks and ENABLERUTHERFORD = TRUE
  - result: segmentation fault [Config3.out](/uploads/1729b7c9fa264b2d19ef0b2ab8a30d2a/Config3.out)
- **Configuration 4**
  - protons = 1e7, DT = 1e-10 s, cores = 4, without dks and ENABLERUTHERFORD = TRUE
  - result: OPAL stops at 4.4 mm with 4 protons, while the ZSTOP is 4.3 m [Config4.out](/uploads/cd16cc8612b11ca5a93c4d2838406fab/Config4.out)
- **Configuration 4.b**
  - protons = 1e5, DT = 1e-10 s, cores = 8, without dks and ENABLERUTHERFORD = TRUE
  - result: segmentation fault [Config4b.out](/uploads/2fe58a350447fc863a07bdf0f398bb93/Config4b.out)
- **Configuration 5** (on Merlin)
  - protons = 1e7, DT = 1e-10 s, cores = 32, without dks and ENABLERUTHERFORD = FALSE
  - result: OK
- **Configuration 6**
  - protons = 1e5, DT = 1e-11 s, cores = 4, with dks and ENABLERUTHERFORD = FALSE
  - result: OK [Config6.out](/uploads/85b27a193d0e9de8d463b99502220dfa/Config6.out)
- **Configuration 7**
  - protons = 1e5, DT = 1e-11 s, cores = 4, with dks and ENABLERUTHERFORD = TRUE
  - result: OK [Config7.out](/uploads/89264c89f20abc3bfc933de6acaf2e52/Config7.out)
Run on opalrunner and Merlin with these settings:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 4) OPAL/1.6 7) Tcl/8.6.4 10) boost/1.62.0
2) openmpi/1.10.4 5) root/6.08.02 8) Tk/8.6.4 11) gsl/2.2.1
3) OPAL/1.6.0rc3 6) openssl/1.0.2j 9) Python/2.7.12 12) H5root/1.3.2rc4-1
```

**Issue #136: Duplicated Bunch** (frey_m, 2017-08-13)
https://gitlab.psi.ch/OPAL/src/-/issues/136

```ParallelCyclotronTracker``` is a derived class of ```Tracker``` that has a protected member variable ```PartBunch```. As far as I see, the bunch is copied at construction, leading to a duplicated bunch: one stored in the instance of ```Tracker``` and the other stored in ```Track::block```.

**Issue #135: Restart in Opal-Cycl** (kraus, 2018-04-11)
https://gitlab.psi.ch/OPAL/src/-/issues/135

The restart in the RestartTest-2 looks odd to me. Maybe some Opal-Cycl expert should look into it.

**Issue #134: Perfect Diode Regression-Test** (adelmann, 2017-07-24)
https://gitlab.psi.ch/OPAL/src/-/issues/134

I remember seeing this lately and connect the solution with @winklehner_d:
```
Ippl{0}> *** Error:
Ippl{0}> RUN,METHOD="PARALLEL-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST2;
Ippl{0}> Internal OPAL error: vector::_M_range_check: __n (which is 25000) >= this->size() (which is 25000)
```
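For reference, the quoted message is the standard libstdc++ range-check failure: an index equal to the container size is one past the end. A minimal, self-contained reproduction (not OPAL code; the container name is made up) looks like this:

```cpp
#include <vector>

int main() {
    // 25000 elements -> valid indices are 0 .. 24999.
    std::vector<double> attrib(25000, 0.0);

    // at() range-checks the index and throws std::out_of_range with the message
    // "vector::_M_range_check: __n (which is 25000) >= this->size() (which is 25000)"
    // -- the classic one-past-the-end access, e.g. a loop running to <= size().
    return attrib.at(25000) > 0.0 ? 1 : 0;
}
```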
It concerns the failure of the PerfectDiode regression test.

**Issue #133: BeamLine fails isInside test during OrbitThreader execute() when Aperture CIRCLE is defined in RFCavity** (winklehner_d, 2017-08-02)
https://gitlab.psi.ch/OPAL/src/-/issues/133

It took me a long time to find out why my RFCavity was not in the imap_m generated by the OrbitThreader during execute(), so I wasn't able to test this with other apertures, but it seems that having a "CIRCLE(0.008, 1)" aperture defined in the RFCavity element prevents it from being added to the elementSet list in the getElements(nextR) function. I think the culprit is somehow the ElementBase::isInsideTransverse() function.
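For illustration only, a transverse aperture test of the kind isInsideTransverse() is expected to perform might look like the sketch below (names and the probe handling are assumptions, not the actual OPAL implementation). With CIRCLE(0.008, 1) the radius is 8 mm, so any mismatch in units or in the probe position handed to the test makes every probe fail, and the element never makes it into the element set:

```cpp
#include <cmath>

// Hypothetical sketch, not OPAL code: a purely transverse circular-aperture check.
// For CIRCLE(0.008, 1) the radius is 0.008 m = 8 mm; a probe position expressed in
// the wrong frame or the wrong units (e.g. millimetres instead of metres) ends up
// outside the circle and the test always returns false.
struct Vec3 { double x, y, z; };

bool isInsideCircularAperture(const Vec3& probe, double radius) {
    return std::hypot(probe.x, probe.y) <= radius;
}
```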
**Issue #132: _M_range_check error** (winklehner_d, 2017-08-13)
https://gitlab.psi.ch/OPAL/src/-/issues/132

Since pulling today, this happens:
```
Error{1}> *** Error:
Error{1}> *** in line 86 of file "RFQ_VECC-T.in":
Error{1}> RUN,METHOD="PARALLEL-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST;
Error{1}> vector::_M_range_check
```
Any insights, anyone? @kraus, did you write something about distributions now being arrays? @adelmann?

**Issue #131: Segmentation fault - dks - SurfacePhysics Collimators** (Valeria Rizzoglio, 2021-06-10)
https://gitlab.psi.ch/OPAL/src/-/issues/131

I got a segmentation fault running this input file: [PROSCAN-G3-230.in](/uploads/7820209c33311fcdd68601832deacf30/PROSCAN-G3-230.in). It includes SurfacePhysics on 3 consecutive collimators.
The error message:
```
ParallelTTracker {0}> Coll/Deg statistics: bunch to material 2 redifused 0 stopped 1
[opalrunner:20589] *** Process received signal ***
[opalrunner:20589] Signal: Segmentation fault (11)
[opalrunner:20589] Signal code: Address not mapped (1)
[opalrunner:20589] Failing at address: 0x1b70f000
[opalrunner:20589] [ 0] /lib64/libc.so.6[0x32e9632660]
[opalrunner:20589] [ 1] opal(_ZN14ParticleAttribI6VektorIdLj3EEE7destroyERKSt6vectorISt4pairImmESaIS5_EEb+0x1f0)[0xe531d0]
[opalrunner:20589] [ 2] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE14performDestroyEv+0xc2)[0xdac9e2]
[opalrunner:20589] [ 3] opal(_ZN21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES1_EE6updateER16IpplParticleBaseIS4_EPK14ParticleAttribIcE+0x45)[0xdae095]
[opalrunner:20589] [ 4] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE6updateEv+0x1a)[0xdae60a]
[opalrunner:20589] [ 5] opal(_ZN9PartBunch6boundpEv+0x406)[0xe225e6]
[opalrunner:20589] [ 6] opal(_ZN16ParallelTTracker21computeExternalFieldsEv+0xf19)[0x107ec79]
[opalrunner:20589] [ 7] opal(_ZN16ParallelTTracker21executeDefaultTrackerEv+0x637)[0x1084b77]
[opalrunner:20589] [ 8] opal(_ZN16ParallelTTracker7executeEv+0x1f)[0x108566f]
[opalrunner:20589] [ 9] opal(_ZN8TrackRun7executeEv+0x751)[0x104c4b1]
[opalrunner:20589] [10] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [11] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [12] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [13] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [14] opal(_ZN8TrackCmd7executeEv+0x343)[0xd6ccc3]
[opalrunner:20589] [15] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [16] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [17] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [18] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [19] opal(_ZNK10OpalParser3runEP11TokenStream+0x6a)[0xcb9cea]
[opalrunner:20589] [20] opal(main+0x8e8)[0xc48658]
[opalrunner:20589] [21] /lib64/libc.so.6(__libc_start_main+0xfd)[0x32e961ed1d]
[opalrunner:20589] [22] opal[0xc3fab5]
[opalrunner:20589] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 20589 on node opalrunner exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
```
I tried two different time steps (1 ps and 5 ps) and got the same error. The same file runs to the end without the option `--use-dks`.
Run configuration: opalrunner with 8 cores
Modules loaded:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 3) OPAL/1.6.0rc3 5) root/6.08.02 7) Tcl/8.6.4 9) Python/2.7.12 11) gsl/2.2.1
2) openmpi/1.10.4 4) OPAL/1.6 6) openssl/1.0.2j 8) Tk/8.6.4 10) boost/1.62.0 12) H5root/1.3.2rc4-1
```

**Issue #130: Unit tests report** (snuverink_j, 2017-10-09)
https://gitlab.psi.ch/OPAL/src/-/issues/130

Running the unit tests on master gives the following result:
```
[==========] 66 tests from 12 test cases ran. (1266 ms total)
[ PASSED ] 61 tests.
[ FAILED ] 5 tests, listed below:
[ FAILED ] RingTest.TestApply
[ FAILED ] RingTest.TestApply2
[ FAILED ] RingTest.TestApply3
[ FAILED ] GaussTest.FullSigmaTest1
[ FAILED ] GaussTest.FullSigmaTest2
5 FAILED TESTS
```
Tentatively assigned to @ext-rogers_c. Please reassign or open a new report for individual tests.
The Ring tests were not failing on `OPAL-1.6`.
`RingTest.TestApply`:
```
tests/classic_src/AbsBeamline/RingTest.cpp:259: Failure
The difference between B(i) and BRef(i) is 0.90010000000000012, which exceeds 1e-6, where
B(i) evaluates to 0,
BRef(i) evaluates to -0.90010000000000012, and
1e-6 evaluates to 9.9999999999999995e-07.
for pos ( 0.099899999999999878 , -2.2000000000000002 , -0.5 )
```
`RingTest.TestApply2`:
```
tests/classic_src/AbsBeamline/RingTest.cpp:298: Failure
Value of: ring.apply(pos, Vector_t(0.0), 0., E, B)
Actual: true
Expected: false
tests/classic_src/AbsBeamline/RingTest.cpp:303: Failure
Expected: (-B(2)) >= (0.1), actual: -0 vs 0.1
```
`RingTest.TestApply3`:
```
tests/classic_src/AbsBeamline/RingTest.cpp:395: Failure
The difference between B(0) and bx is 3, which exceeds 1e-6, where
B(0) evaluates to 0,
bx evaluates to 3, and
1e-6 evaluates to 9.9999999999999995e-07.
```
The `GaussTests` both fail in the same way (also on `OPAL-1.6`); output for Test1:
```
tests/opal_src/Distribution/GaussTest.cpp:119: Failure
Expected: (std::abs(expectedR11 - R11)) < (0.05 * expectedR11), actual: 0.247124 vs 0.1957
src/tests/opal_src/Distribution/GaussTest.cpp:120: Failure
Expected: (std::abs(expectedR21 - R21)) < (-0.05 * expectedR21), actual: 0.062553 vs 0.03243
src/tests/opal_src/Distribution/GaussTest.cpp:121: Failure
Expected: (std::abs(expectedR22 - R22)) < (0.05 * expectedR22), actual: 0.0412111 vs 0.03198
src/tests/opal_src/Distribution/GaussTest.cpp:123: Failure
Expected: (std::abs(expectedR52 - R52)) < (0.05 * expectedR52), actual: 0.0466059 vs 0.036325
src/tests/opal_src/Distribution/GaussTest.cpp:124: Failure
Expected: (std::abs(expectedR61 - R61)) < (0.05 * expectedR61), actual: 0.0998879 vs 0.0681
src/tests/opal_src/Distribution/GaussTest.cpp:125: Failure
Expected: (std::abs(expectedR62 - R62)) < (-0.05 * expectedR62), actual: 0.0256172 vs 0.013425
[ FAILED ] GaussTest.FullSigmaTest1 (552 ms)
```

**Issue #129: Array of distributions containing FROMFILE** (kraus, 2017-08-13)
https://gitlab.psi.ch/OPAL/src/-/issues/129

This won't work properly because, e.g., the number of particles in a FROMFILE distribution is fixed. Thus, when computing the number of particles the other distributions should contain, we first have to subtract the number of particles in the FROMFILE distributions.

**Issue #128: Let each distribution in array of distributions have its own offset in R and P** (kraus, 2017-07-15)
https://gitlab.psi.ch/OPAL/src/-/issues/128

When providing an array of distributions where each distribution has its own OFFSET{X|Y|Z|PX|PY|PZ}, then, so far, all distributions use the offsets of the first distribution.

**Issue #127: Bug in BorisPusher ParallelTTracker?** (frey_m, 2017-07-12)
https://gitlab.psi.ch/OPAL/src/-/issues/127

Comparing the push part of the BorisPusher with the Boris-Buneman algorithm as written in [Toggweiler_BorisIntegrator.pdf](/uploads/1fc710b9496b0bcb9a7def7f505d3fef/Toggweiler_BorisIntegrator.pdf) (pseudo code in Algorithm 2), I noticed that the time step ```dt```, respectively ```h``` in the paper, is missing. Is this on purpose or is it really a bug? I don't know whether it is taken care of in the ParallelTTracker.
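For context, a minimal sketch of the position ("push") half-step of the Boris scheme as given in the cited pseudo code, with the half time step written out explicitly (illustrative names and normalised units, not the actual OPAL BorisPusher interface):

```cpp
#include <cmath>

// Sketch only (not the OPAL BorisPusher): the drift half-step of the Boris scheme.
// With the momentum p stored as beta*gamma and lengths measured in units where c = 1,
// the position advances by (dt/2) * p / gamma.  The point of the report is that this
// factor dt/2 (h/2 in the paper) has to appear somewhere -- either here or already
// folded into the momentum by the caller.
struct Vec3 { double x, y, z; };

void pushHalfStep(Vec3& r, const Vec3& p, double dt) {
    const double gamma = std::sqrt(1.0 + p.x * p.x + p.y * p.y + p.z * p.z);
    r.x += 0.5 * dt * p.x / gamma;
    r.y += 0.5 * dt * p.y / gamma;
    r.z += 0.5 * dt * p.z / gamma;
}
```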
**Issue #126: Documentation** (adelmann, 2017-07-12)
https://gitlab.psi.ch/OPAL/src/-/issues/126

![image](/uploads/38526391c24562a6608f256bdbc1e23d/image.png)
@ext-mayes_c This seems not to be a fatal error; with "Q" the manual compiles.

Chris is getting:
```
! Misplaced \noalign.
\hline ->\noalign
{\ifnum 0=`}\fi \hrule \@height \arrayrulewidth \futurelet...
l.72 \hline
```
**Issue #125: Vector of time steps: error in the parser** (Valeria Rizzoglio, 2017-07-13)
https://gitlab.psi.ch/OPAL/src/-/issues/125

[PROSCAN-G3-230.in](/uploads/0f541b042bd39fdf2fe62688529cc406/PROSCAN-G3-230.in)
If I track the particles using a vector of time steps:
```
TRACK, LINE=BEAMLINE_TOT,
BEAM=BEAM_G3_LA1,
MAXSTEPS={5e+08,5e+08,5e+08},
DT={5*PICOSECONDS,1*PICOSECONDS,5*PICOSECOND},
ZSTOP={6.145,6.75,16}
```

**Issue #124: Reimplementation of ParallelCyclotronTracker** (frey_m, 2020-04-22)
https://gitlab.psi.ch/OPAL/src/-/issues/124

We should re-implement the tracker such that it is independent of the integrator, in order to avoid duplicated code. Adding new integrators is then also simplified.

**Issue #123: No stat-file output in case of MTS tracking** (frey_m, 2017-07-05)
https://gitlab.psi.ch/OPAL/src/-/issues/123

Running the regression test [RingCyclotronMTS](https://gitlab.psi.ch/OPAL/regression-tests/blob/master/RegressionTests/RingCyclotronMTS/RingCyclotronMTS.in), however with ```nsteps = 2000``` and ```SPTDUMPFREQ = 10``` -- as in the test [RingCyclotron](https://gitlab.psi.ch/OPAL/regression-tests/blob/master/RegressionTests/RingCyclotron/RingCyclotron.in) using RK-4 -- I get only one dump in RingCyclotronMTS.stat.

**Issue #122: Attempt to create IpplInfo with argc, argv again.** (adelmann, 2018-11-25)
https://gitlab.psi.ch/OPAL/src/-/issues/122

Branch: scalable-emission; use OPAL in optimiser mode.
```
Warning> Attempt to create IpplInfo with argc, argv again.
Warning> Using previous argc,argv settings.
```
merlinl01:/gpfs/home/adelmann/scratch/opt-pilot-week/ANL/optLinac-1
use runopt-opal.sge

**Issue #121: Interpolator::getFieldIter: attempt to access non-local** (adelmann, 2017-07-08)
https://gitlab.psi.ch/OPAL/src/-/issues/121

Reproducible error on one core: Interpolator::getFieldIter: attempt to access non-local:
merlinl01:error/25f0fe5c361321294c559e667430d6125c346809_6
Branch: scalable-emission

**Issue #120: Particle Termination** (winklehner_d, 2019-12-12)
https://gitlab.psi.ch/OPAL/src/-/issues/120

Hi,
Anybody else noticing that particles are not terminated correctly anymore if Bin is set to -1 (which is the usual way in the CyclotronTracker) since last week's commits to the head? It still works for the BoundaryGeometry, but not, for example, for the Cyclotron outer boundaries. I think it might have to do with removing all the boundp calls.
Best,
Daniel

**Issue #119: Periodic BC's** (winklehner_d, 2021-07-06)
https://gitlab.psi.ch/OPAL/src/-/issues/119

It seems that when I set BCFFTT = PERIODIC, not only the z-direction but all directions are automatically set to periodic boundary conditions. @uldis_l I am assuming "UL" in the comment of PartBunch::setBCForDCBeam() is you. Was there a particular reason to do this? In my understanding, a DC beam would have open BC in x and y and periodic BC in z.
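As a sketch of the distinction being made (hypothetical names, not the OPAL/IPPL boundary-condition API):

```cpp
// Hypothetical sketch, not the OPAL/IPPL API: it only spells out the two
// boundary-condition combinations discussed in the report.
enum class FaceBC { Open, Periodic };

struct SolverBC { FaceBC x, y, z; };

// What a DC beam would be expected to use: open transversely, periodic in z.
constexpr SolverBC expectedDCBeamBC{FaceBC::Open, FaceBC::Open, FaceBC::Periodic};

// What BCFFTT = PERIODIC reportedly produces: periodic in all three directions.
constexpr SolverBC reportedBC{FaceBC::Periodic, FaceBC::Periodic, FaceBC::Periodic};
```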
In addition, the manual calls the parameters "BCFFTZ" and "PARFFTZ", but OPAL tells me those don't exist and throws an exception; I have to use "BCFFTT" and "PARFFTT". Just a minor bug.

**Issue #118: ParallelCyclotronTracker::applyPluginElements** (frey_m, 2017-07-27)
https://gitlab.psi.ch/OPAL/src/-/issues/118

In ParallelCyclotronTracker the particles are mapped several times from local to global coordinates and vice versa. The PartBunch::boundp() operation, where the particles are redistributed among the cores, is always performed in local coordinates, except during the call to ParallelCyclotronTracker::applyPluginElements in case the boolean flag_stripper is true
(line 3389 ff.).
```
if(((*sindex)->first) == ElementBase::STRIPPER) {
bool flag_stripper = (static_cast<Stripper *>(((*sindex)->second).second))
-> checkStripper(itsBunch, turnnumber_m, itsBunch->getT() * 1e9, dt);
if(flag_stripper) {
itsBunch->boundp();
*gmsg << "* Total number of particles after stripping = " << itsBunch->getTotalNum() << endl;
}
}
```
The workflow of ParallelCyclotronTracker::Tracker_Generic()
```
...
ParallelCyclotronTracker::initDistInGlobalFrame(); // (line 1235) --> particle in global coordinates
ParallelCyclotronTracker::applyPluginElements(dt); // (line 1285) --> PartBunch::boundp() in global coordinates !!!
...
// start tracking
```
Shouldn't the PartBunch::boundp() operation always be performed in local coordinates?
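For illustration, the ordering the question suggests could look like the following sketch (stand-in types and helpers, not the actual PartBunch/ParallelCyclotronTracker interface): rotate the particle positions into the local frame, redistribute them, then rotate back, so the domain decomposition always happens in local coordinates.

```cpp
#include <array>
#include <vector>

// Sketch only -- stand-in types, not the OPAL classes.
using Vec3    = std::array<double, 3>;
using Matrix3 = std::array<std::array<double, 3>, 3>;   // rotation matrix

static Vec3 apply(const Matrix3& m, const Vec3& v) {
    Vec3 out{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i] += m[i][j] * v[j];
    return out;
}

static Matrix3 transpose(const Matrix3& m) {             // inverse of a rotation
    Matrix3 t{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t[i][j] = m[j][i];
    return t;
}

static void boundp(std::vector<Vec3>&) { /* stand-in for PartBunch::boundp() */ }

void boundpInLocalFrame(std::vector<Vec3>& R, const Matrix3& globalToLocal) {
    for (auto& r : R) r = apply(globalToLocal, r);        // global -> local
    boundp(R);                                            // redistribute among cores
    const Matrix3 localToGlobal = transpose(globalToLocal);
    for (auto& r : R) r = apply(localToGlobal, r);        // local -> global
}
```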
Best,
Matthias