src issues — https://gitlab.psi.ch/OPAL/src/-/issues (feed updated 2018-04-11)

---

**Issue #135: Restart in Opal-Cycl** — https://gitlab.psi.ch/OPAL/src/-/issues/135
kraus · updated 2018-04-11

The restart in the RestartTest-2 looks odd to me. Maybe some Opal-Cycl expert should look into it. (assignee: adelmann)

---

**Issue #133: BeamLine fails isInside test during OrbitThreader execute() when Aperture CIRCLE is defined in RFCavity** — https://gitlab.psi.ch/OPAL/src/-/issues/133
winklehner_d · updated 2017-08-02

It took me a long time to find out why my RFCavity was not in the imap_m generated by the OrbitThreader during execute(), so I wasn't able to test this with other apertures, but it seems that having a "CIRCLE(0.008, 1)" aperture defined in the RFCavity element prevents it from being added to the elementSet list in the getElements(nextR) function. I think the culprit is somehow the ElementBase::isInsideTransverse() function. (assignee: kraus)

---

**Issue #132: _M_range_check error** — https://gitlab.psi.ch/OPAL/src/-/issues/132
winklehner_d · updated 2017-08-13

Since pulling today, this happens:
```
Error{1}> *** Error:
Error{1}> *** in line 86 of file "RFQ_VECC-T.in":
Error{1}> RUN,METHOD="PARALLEL-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST;
Error{1}> vector::_M_range_check
```
Any insights, anyone? @kraus, did you write something about distributions now being arrays? @adelmann?

---

**Issue #131: Segmentation fault - dks - SurfacePhysics Collimators** — https://gitlab.psi.ch/OPAL/src/-/issues/131
Valeria Rizzoglio · updated 2021-06-10

I got a segmentation fault running this input file: [PROSCAN-G3-230.in](/uploads/7820209c33311fcdd68601832deacf30/PROSCAN-G3-230.in). It includes SurfacePhysics on 3 consecutive collimators.
The error message:
```
ParallelTTracker {0}> Coll/Deg statistics: bunch to material 2 redifused 0 stopped 1
[opalrunner:20589] *** Process received signal ***
[opalrunner:20589] Signal: Segmentation fault (11)
[opalrunner:20589] Signal code: Address not mapped (1)
[opalrunner:20589] Failing at address: 0x1b70f000
[opalrunner:20589] [ 0] /lib64/libc.so.6[0x32e9632660]
[opalrunner:20589] [ 1] opal(_ZN14ParticleAttribI6VektorIdLj3EEE7destroyERKSt6vectorISt4pairImmESaIS5_EEb+0x1f0)[0xe531d0]
[opalrunner:20589] [ 2] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE14performDestroyEv+0xc2)[0xdac9e2]
[opalrunner:20589] [ 3] opal(_ZN21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES1_EE6updateER16IpplParticleBaseIS4_EPK14ParticleAttribIcE+0x45)[0xdae095]
[opalrunner:20589] [ 4] opal(_ZN16IpplParticleBaseI21ParticleSpatialLayoutIdLj3E16UniformCartesianILj3EdE24BoxParticleCachingPolicyIdLj3ES2_EEE6updateEv+0x1a)[0xdae60a]
[opalrunner:20589] [ 5] opal(_ZN9PartBunch6boundpEv+0x406)[0xe225e6]
[opalrunner:20589] [ 6] opal(_ZN16ParallelTTracker21computeExternalFieldsEv+0xf19)[0x107ec79]
[opalrunner:20589] [ 7] opal(_ZN16ParallelTTracker21executeDefaultTrackerEv+0x637)[0x1084b77]
[opalrunner:20589] [ 8] opal(_ZN16ParallelTTracker7executeEv+0x1f)[0x108566f]
[opalrunner:20589] [ 9] opal(_ZN8TrackRun7executeEv+0x751)[0x104c4b1]
[opalrunner:20589] [10] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [11] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [12] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [13] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [14] opal(_ZN8TrackCmd7executeEv+0x343)[0xd6ccc3]
[opalrunner:20589] [15] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcb57e5]
[opalrunner:20589] [16] opal(_ZNK10OpalParser11parseActionER9Statement+0x143)[0xcb9803]
[opalrunner:20589] [17] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb9196]
[opalrunner:20589] [18] opal(_ZNK10OpalParser3runEv+0x2c)[0xcba7ec]
[opalrunner:20589] [19] opal(_ZNK10OpalParser3runEP11TokenStream+0x6a)[0xcb9cea]
[opalrunner:20589] [20] opal(main+0x8e8)[0xc48658]
[opalrunner:20589] [21] /lib64/libc.so.6(__libc_start_main+0xfd)[0x32e961ed1d]
[opalrunner:20589] [22] opal[0xc3fab5]
[opalrunner:20589] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 20589 on node opalrunner exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
```
I tried two different time steps (1 ps and 5 ps) and got the same error. The same file runs to the end without the option `--use-dks`.
Run configuration: opalrunner with 8 cores.
Loaded modules:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 3) OPAL/1.6.0rc3 5) root/6.08.02 7) Tcl/8.6.4 9) Python/2.7.12 11) gsl/2.2.1
2) openmpi/1.10.4 4) OPAL/1.6 6) openssl/1.0.2j 8) Tk/8.6.4 10) boost/1.62.0 12) H5root/1.3.2rc4-1
```
(assignees: adelmann, kraus)

---

**Issue #130: Unit tests report** — https://gitlab.psi.ch/OPAL/src/-/issues/130
snuverink_j (jochem.snuverink@psi.ch) · updated 2017-10-09

Running the unit tests on master gives the following result:
```
[==========] 66 tests from 12 test cases ran. (1266 ms total)
[ PASSED ] 61 tests.
[ FAILED ] 5 tests, listed below:
[ FAILED ] RingTest.TestApply
[ FAILED ] RingTest.TestApply2
[ FAILED ] RingTest.TestApply3
[ FAILED ] GaussTest.FullSigmaTest1
[ FAILED ] GaussTest.FullSigmaTest2
5 FAILED TESTS
```
Tentatively assigned to @ext-rogers_c. Please reassign or open a new report for individual tests.
The Ring tests were not failing on `OPAL-1.6`.
`RingTest.TestApply`:
```
tests/classic_src/AbsBeamline/RingTest.cpp:259: Failure
The difference between B(i) and BRef(i) is 0.90010000000000012, which exceeds 1e-6, where
B(i) evaluates to 0,
BRef(i) evaluates to -0.90010000000000012, and
1e-6 evaluates to 9.9999999999999995e-07.
for pos ( 0.099899999999999878 , -2.2000000000000002 , -0.5 )
```
`RingTest.TestApply2`:
```
tests/classic_src/AbsBeamline/RingTest.cpp:298: Failure
Value of: ring.apply(pos, Vector_t(0.0), 0., E, B)
Actual: true
Expected: false
tests/classic_src/AbsBeamline/RingTest.cpp:303: Failure
Expected: (-B(2)) >= (0.1), actual: -0 vs 0.1
```
`RingTest.TestApply3`:
```
tests/classic_src/AbsBeamline/RingTest.cpp:395: Failure
The difference between B(0) and bx is 3, which exceeds 1e-6, where
B(0) evaluates to 0,
bx evaluates to 3, and
1e-6 evaluates to 9.9999999999999995e-07.
```
Both `GaussTest`s fail in the same way (also on `OPAL-1.6`); output for Test1:
```
tests/opal_src/Distribution/GaussTest.cpp:119: Failure
Expected: (std::abs(expectedR11 - R11)) < (0.05 * expectedR11), actual: 0.247124 vs 0.1957
src/tests/opal_src/Distribution/GaussTest.cpp:120: Failure
Expected: (std::abs(expectedR21 - R21)) < (-0.05 * expectedR21), actual: 0.062553 vs 0.03243
src/tests/opal_src/Distribution/GaussTest.cpp:121: Failure
Expected: (std::abs(expectedR22 - R22)) < (0.05 * expectedR22), actual: 0.0412111 vs 0.03198
src/tests/opal_src/Distribution/GaussTest.cpp:123: Failure
Expected: (std::abs(expectedR52 - R52)) < (0.05 * expectedR52), actual: 0.0466059 vs 0.036325
src/tests/opal_src/Distribution/GaussTest.cpp:124: Failure
Expected: (std::abs(expectedR61 - R61)) < (0.05 * expectedR61), actual: 0.0998879 vs 0.0681
src/tests/opal_src/Distribution/GaussTest.cpp:125: Failure
Expected: (std::abs(expectedR62 - R62)) < (-0.05 * expectedR62), actual: 0.0256172 vs 0.013425
[ FAILED ] GaussTest.FullSigmaTest1 (552 ms)
```
(assignee: ext-rogers_c)

---

**Issue #127: Bug in BorisPusher ParallelTTracker?** — https://gitlab.psi.ch/OPAL/src/-/issues/127
frey_m · updated 2017-07-12

Comparing with the push part of the Boris-Brunemann algorithm as written in [Toggweiler_BorisIntegrator.pdf](/uploads/1fc710b9496b0bcb9a7def7f505d3fef/Toggweiler_BorisIntegrator.pdf) (pseudo code in Algorithm 2), I noticed that the time step `dt` (`h` in the paper) is missing. Is this on purpose, or is it really a bug? I don't know whether this is taken care of elsewhere in the ParallelTTracker. (assignee: kraus)

---

**Issue #119: Periodic BC's** — https://gitlab.psi.ch/OPAL/src/-/issues/119
winklehner_d · updated 2021-07-06

It seems that when I set BCFFTT = PERIODIC, not only the z-direction but all directions are automatically set to periodic boundary conditions. @uldis_l, I am assuming "UL" in the comment of PartBunch::setBCForDCBeam() is you. Was there a particular reason to do this? In my understanding, a DC beam would have open BCs in x and y and periodic BCs in z.
In addition, the manual calls the parameters "BCFFTZ" and "PARFFTZ", but OPAL tells me those don't exist and throws an exception; I have to use "BCFFTT" and "PARFFTT". Just a minor bug. (assignees: adelmann, kraus)

---

**Issue #118: ParallelCyclotronTracker::applyPluginElements** — https://gitlab.psi.ch/OPAL/src/-/issues/118
frey_m · updated 2017-07-27

In ParallelCyclotronTracker the particles are mapped several times from local to global coordinates and vice versa. The PartBunch::boundp() operation, where the particles are redistributed among the cores, is always performed in local coordinates, except during the call to ParallelCyclotronTracker::applyPluginElements when the boolean flag_stripper is true
(line 3389 ff.).
```
if(((*sindex)->first) == ElementBase::STRIPPER) {
bool flag_stripper = (static_cast<Stripper *>(((*sindex)->second).second))
-> checkStripper(itsBunch, turnnumber_m, itsBunch->getT() * 1e9, dt);
if(flag_stripper) {
itsBunch->boundp();
*gmsg << "* Total number of particles after stripping = " << itsBunch->getTotalNum() << endl;
}
}
```
The workflow of ParallelCyclotronTracker::Tracker_Generic()
```
...
ParallelCyclotronTracker::initDistInGlobalFrame(); // (line 1235) --> particle in global coordinates
ParallelCyclotronTracker::applyPluginElements(dt); // (line 1285) --> PartBunch::boundp() in global coordinates !!!
...
// start tracking
```
Shouldn't the PartBunch::boundp() operation always be performed in local coordinates?
Best,
Matthias

(assignee: winklehner_d)

---

**Issue #110: PartBunch::get_bounds can produce NaNs** — https://gitlab.psi.ch/OPAL/src/-/issues/110
snuverink_j (jochem.snuverink@psi.ch) · updated 2019-10-25

While trying to update the [PSI-Ring](https://gitlab.psi.ch/AMAS-BDModels/PSI-Ring) simulations to the master branch, I encountered the following runtime error:
```
OPAL> PartBunch.cpp: 1574 nan 2.000000e-02
Error>
Error> *** User error detected by function "PartBunch::boundp() "
Error> *** in line 311 of file "Ring.in":
Error> RUN,METHOD="CYCLOTRON-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST;
Error> h<0, can not build a mesh
```
The `nan` gets introduced in line 1521: `get_bounds(rmin_m, rmax_m);`
Printing out rmax and rmin before and after this line gives (ymmv):
before:
```
(i,rmax, rmin) 0 0.0000000000000000e+00 0.0000000000000000e+00
(i,rmax, rmin) 1 0.0000000000000000e+00 0.0000000000000000e+00
(i,rmax, rmin) 2 0.0000000000000000e+00 0.0000000000000000e+00
```
after:
```
(i,rmax, rmin) 0 7.1153710538428058e-03 -6.9640951722910538e-03
(i,rmax, rmin) 1 4.0421699390708048e-02 -4.0512781208033796e-02
(i,rmax, rmin) 2 -nan -nan
```
I am probably doing something wrong in my input, but I believe the code should not get this far, and it should produce a better error message.
This can be reproduced with OPAL master (0469d1ac) and the latest version of [PSI-Ring](https://gitlab.psi.ch/AMAS-BDModels/PSI-Ring) by executing `runOpal --nobatch`.
**Edit 20 July:**
https://gitlab.psi.ch/OPAL/src/issues/110#note_1914: Simplified input file [Ring.in](https://gitlab.psi.ch/OPAL/src/uploads/ec9579a7c5009c1b7465266afe4373c0/Ring.in)
https://gitlab.psi.ch/OPAL/src/issues/110#note_1916: Regression test `RingCyclotron` has the same bug when one changes the distribution from gauss to either single particle or binomial. (assignee: snuverink_j)

---

**Issue #109: OPAL does not compile with AMR Solver and unit tests** — https://gitlab.psi.ch/OPAL/src/-/issues/109
snuverink_j (jochem.snuverink@psi.ch) · updated 2017-07-24

Compiling OPAL with ENABLE_AMR_SOLVER=1 and DBUILD_OPAL_UNIT_TESTS=1 gives the following compile error:
```
/home/scratch/OPAL/src/tests/opal_src/Distribution/GaussTest.cpp: In member function ‘virtual void GaussTest_FullSigmaTest1_Test::TestBody()’:
<command-line>:0:6: error: expected unqualified-id before numeric constant
/home/scratch/OPAL/src/tests/opal_src/Distribution/GaussTest.cpp:74:15: note: in expansion of macro ‘OPAL’
OpalData *OPAL = OpalData::getInstance();
^
```
This is because OPAL is defined as a preprocessor macro (with value 1) within CMake:
```
[ 93%] Building CXX object tests/CMakeFiles/opal_unit_tests.dir/opal_src/Distribution/GaussTest.cpp.o
g++ -DBL_FORT_USE_UNDERSCORE -DBL_Linux -DBL_NOLINEVALUES -DBL_PARALLEL_IO -DBL_SPACEDIM=3 -DBL_USE_DOUBLE -DBL_USE_MPI -DMG_USE_F90_SOLVERS -DMG_USE_FBOXLIB -DNDEBUG -DOPAL .... tests/opal_src/Distribution/GaussTest.cpp
```
This is done in [CMakeModules/CCSEOptions.cmake](https://gitlab.psi.ch/OPAL/src/blob/master/CMakeModules/CCSEOptions.cmake#L74):
`list(APPEND BL_DEFINES "OPAL")`
I don't see a reason to add OPAL here since, as far as I know, it is not used as such anywhere.
That said, it might be good to adopt [camelCase for the variable name](https://gitlab.psi.ch/OPAL/src/wikis/for-developers#28-method-argument-and-local-variable-names).

---

**Issue #108: Revise macros such as DBG_SCALARFIELD and replace them with an Option command** — https://gitlab.psi.ch/OPAL/src/-/issues/108
snuverink_j (jochem.snuverink@psi.ch) · updated 2017-07-24

Compiling OPAL with the option DBG_SCALARFIELD gives the following compiler error:
```
src/Classic/Algorithms/PartBunch.cpp: In member function ‘void PartBunch::computeSelfFields(int)’:
/home/scratch/OPAL/src/src/Classic/Algorithms/PartBunch.cpp:715:29: error: ‘rmin’ was not declared in this scope
*gmsg << (rmin(0) - origin(0)) / spacing(0) << "\t"
^
```
This was introduced in commit https://gitlab.psi.ch/OPAL/src/commit/595b4b83818596b5f7a72e086cbbda4325f70aa8#852edcbb7804c7416aa51f7264a7a36fc1fa3fef_781_683 (assignee: snuverink_j)

---

**Issue #107: keyword register** — https://gitlab.psi.ch/OPAL/src/-/issues/107
snuverink_j (jochem.snuverink@psi.ch) · updated 2017-07-24

The source code contains the keyword `register` in several locations. It is of no use anymore (see e.g. http://www.drdobbs.com/keywords-that-arent-or-comments-by-anoth/184403859), and with gcc compilers it may also give warnings such as:
```
src/Solvers/TaperDomain.cpp:89:66: warning: address requested for ‘y’, which is declared ‘register’ [-Wextra]
IntersectXDir.insert(std::pair<int, double>(y, xd));
```
Therefore, I propose to remove this keyword from the code everywhere. (assignee: snuverink_j)

---

**Issue #101: OPAL version** — https://gitlab.psi.ch/OPAL/src/-/issues/101
frey_m · updated 2017-07-24

A runtime flag that prints the current OPAL version would be nice, i.e.
```
matthias@R2-D2:~$ opal --version
```
(due 2017-05-02)

---

**Issue #100: Kickers with field maps** — https://gitlab.psi.ch/OPAL/src/-/issues/100
kraus · updated 2021-06-10

Kickers at bERLinPro are far from perfect dipoles. Instead, they have strong higher-order components. The current implementation only provides a hard-edge model. Let the user add field maps to model them better. (assignee: kraus)

---

**Issue #96: DKS 1.1.0 for OPAL 1.6 branch** — https://gitlab.psi.ch/OPAL/src/-/issues/96
gsell · updated 2017-06-17

DKS 1.1.0 must be used in OPAL 1.6, so that we have the same toolchain for OPAL 1.6 and master.

---

**Issue #95: OpalRingTest** — https://gitlab.psi.ch/OPAL/src/-/issues/95
adelmann · updated 2020-04-22

OpalRingTest, now with a 2x2x2 space-charge grid, of course gives different answers w.r.t. emittance etc.
Please check that this still makes sense. I updated the reference with the actual results. (assignee: ext-rogers_c)

---

**Issue #88: PSI Opal build chain failt** — https://gitlab.psi.ch/OPAL/src/-/issues/88
baumgarten (christian.baumgarten@psi.ch) · updated 2019-10-25

`cmake ..`
```
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/5.4.0/bin/gcc
-- Check for working C compiler: /afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/5.4.0/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
```
I proceeded as described in the wiki, i.e.:
```
mkdir $HOME/opal
cd $HOME/opal
git clone git@gitlab.psi.ch:OPAL/src.git
git checkout OPAL-1.6
mkdir build
cd build
cmake ..
```
Unfortunately, the following error message then appears:
```
-- Check for working CXX compiler: /afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/5.4.0/bin/g++
-- Check for working CXX compiler: /afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/5.4.0/bin/g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build type is: RelWithDebInfo
-- Host OS System: Linux-2.6.32-642.13.1.el6.x86_64
-- Hostname: pc10758
-- Unable to determine MPI from MPI driver /afs/psi.ch/sys/psi.x86_64_slp6/Compiler/openmpi/1.10.4/gcc/5.4.0/bin/mpicc
CMake Error at /afs/psi.ch/sys/psi.x86_64_slp6/Programming/cmake/3.6.3/share/cmake-3.6/Modules/FindPackageHandleStandardArgs.cmake:148 (message):
  Could NOT find MPI_C (missing: MPI_C_LIBRARIES MPI_C_INCLUDE_PATH)
Call Stack (most recent call first):
  /afs/psi.ch/sys/psi.x86_64_slp6/Programming/cmake/3.6.3/share/cmake-3.6/Modules/FindPackageHandleStandardArgs.cmake:388 (_FPHSA_FAILURE_MESSAGE)
  /afs/psi.ch/sys/psi.x86_64_slp6/Programming/cmake/3.6.3/share/cmake-3.6/Modules/FindMPI.cmake:628 (find_package_handle_standard_args)
  CMakeLists.txt:33 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/l_baumgarten/opal/devel/src/build/CMakeFiles/CMakeOutput.log".
```
Loaded modules:
```
module list
Currently Loaded Modulefiles:
  1) cmake/3.6.3     4) Tcl/8.6.4      7) boost/1.62.0   10) trilinos/12.10.1  13) gnuplot/5.0.0
  2) gcc/5.4.0       5) Tk/8.6.4       8) gsl/2.2.1      11) hdf5/1.8.18
  3) openssl/1.0.2j  6) Python/2.7.12  9) openmpi/1.10.4 12) H5hut/2.0.0rc3
```
(assignee: gsell)

---

**Issue #80: Format of selected particle ID1 and ID2** — https://gitlab.psi.ch/OPAL/src/-/issues/80
Valeria Rizzoglio · updated 2021-06-10

Just a reminder about the use of the selected particles with ID1 and ID2.
From Andreas's email:
```
the ID1 and ID2 are in the format (x,y,z,px,py,pz) this is different than in the case when you read from file.
```
A format for ID1 and ID2 consistent with the distribution read from file would be nicer, as well as a unique definition of the longitudinal momentum between OPAL-T and OPAL-Cyc. At the moment, they differ not only in the variable used (pz for OPAL-T and py for OPAL-Cyc) but also in meaning:
* **pz** in OPAL-T indicates the longitudinal momentum of the particle,
* **py** in OPAL-Cyc indicates the momentum offset with respect to the reference momentum

---

**Issue #75: VERSION string** — https://gitlab.psi.ch/OPAL/src/-/issues/75
adelmann · updated 2017-03-28

Can someone explain why we write
`OPTION, VERSION=10500;`
and not
`Option, VERSION="1.5.1"`

---

**Issue #72: Removal of data from a particle without reducing number of particles** — https://gitlab.psi.ch/OPAL/src/-/issues/72
kraus · updated 2017-07-24

This leads to wrong results: https://gitlab.psi.ch/OPAL/src/blob/OPAL-1.6/src/Classic/Algorithms/PartBunch.cpp#L1930. It is as if position and momentum were replaced with zero.
Please remember to add the patch that solves this issue to the master as well. (assignee: adelmann)