# OPAL/src issues
Issue list export from https://gitlab.psi.ch/OPAL/src/-/issues (2023-07-18)

## #768: testing
(https://gitlab.psi.ch/OPAL/src/-/issues/768, 2023-07-18, sadr_m)

### Summary
(Summarize the bug encountered concisely)
### Steps to reproduce
(How one can reproduce the issue - this is very important)
### What is the current *bug* behavior?
(What actually happens)
### What is the expected *correct* behavior?
(What you should see instead)
### Relevant logs and/or screenshots
(Paste any relevant logs - please use code blocks (```) to format console output,
logs, and code as it's very hard to read otherwise.)
### Possible fixes
(If you can, link to the line of code that might be responsible for the problem)

Milestone: 2023.1

## #767: Problem in Command DUMPEMFIELDS and DUMPFIELDS
(https://gitlab.psi.ch/OPAL/src/-/issues/767, 2023-08-22, ext-rogers_c)

### Summary
By email from Dou Gouliang
>>>
When I use the commands "DUMPFIELDS" and "DUMPEMFIELDS", OPAL 2.4 shows the error (annex 1):
```
Error> *** Error:
Error> Internal OPAL error:
Error> Assertion 'idx.i < num_gridpx_m - 1' failed.
Error> idx.i = 118, num_gridpx_m - 1 = 118.000000
Error> in
Error> /afs/psi.ch/user/g/gsell/private/src/OPAL/src/src/Classic/Fields/FM3DH5BlockBase.h, line 163
```
I have checked my input file, and there is no wrong expression. The detailed expression is:
```
DUMPEMFIELDS, COORDINATE_SYSTEM = Cartesian, X_START= -0.8, X_STEPS=1601, DX= 0.001, Y_START=-0.25, Y_STEPS=501, DY= 0.001, Z_START=-0.02, Z_STEPS=41, DZ=0.001, T_START=0,T_STEPS=1, DT=0.1 ,FILE_NAME="FIELDEM-MAPXYZ.dat";
```
When I try to split the original X output range into two parts, OPAL can output the corresponding two parts normally: first changing X_STEPS to 117, and then changing X_START and X_STEPS to -0.215 and 204.
I re-ran another cyclotron simulation and got the same error, except that idx.i changed to 14.
>>>
and later
>>>
I ran it again in a newer version of OPAL (opal-2022.01) and found the same errors. In particular, I used the OPAL-2022.01 Linux binary package from the official website.
>>>
lattice in attached zip...
[input_file.zip](/uploads/0c4926991866db21e66977b22b9e06c7/input_file.zip)

Milestone: 2023.1
Assignee: ext-rogers_c

## #763: ascii2h5block error reading data
(https://gitlab.psi.ch/OPAL/src/-/issues/763, 2023-08-24, ext-calvo_p <pedro.calvo@ciemat.es>)

### Summary
I get the following error running the ascii2h5block tool:
```
[proc 0] E: H5Block3dWriteVector3dFieldFloat64: Write to dataset '/Step#0/Block/Efield/0' failed.
```
In addition, the loop to save the results in the h5part format is not necessary, since the input files should already be sorted in the correct format. Therefore, it is sufficient to read the data directly.
An enhancement can be included to ensure that the number of data in the fields matches the grid specified in the input header.
### Possible fixes
`H5Block3dSetView` has to be adapted to each field, separately for the E-field and the H-field.

Milestone: 2023.1
Assignee: ext-calvo_p

## #756: Wrong class member
(https://gitlab.psi.ch/OPAL/src/-/issues/756, 2023-08-23, ext-calvo_p <pedro.calvo@ciemat.es>)

OPAL compilation [fails](http://amas.web.psi.ch/opal/master/output/2023-04-05_10-49.txt). The bug was introduced in OPAL/src!613. I get the following error when compiling with the SAAMG solver:
```
error: ‘class Tpetra::Map<>’ has no member named ‘getLocalNumElements’
```

Milestone: 2023.1

## #751: Cyclotron out of range field lookup
(https://gitlab.psi.ch/OPAL/src/-/issues/751, 2023-03-31, snuverink_j <jochem.snuverink@psi.ch>)

### Summary
The magnetic field lookup can be out of range, resulting in a segmentation fault.
### Steps to reproduce
One way to reproduce this is to have a trim coil with a very large `BMAX`, e.g.:
```
tc15: TRIMCOIL, TYPE="PSI-PHASE", RMIN = 3000, RMAX = 4560.073, BMAX=100.0025264327051118017, COEFNUM = {-0.0312020990404, 0.0227946756108, -0.00354827255973}, COEFDENOM = {14.7460286849, -16.9186605846, 7.61516943548, -1.53074181639, 0.11384470123};
psi_ring: Cyclotron, TYPE="RING", CYHARMON=6, PHIINIT=0.0, PRINIT=pr0, RINIT=r0 , SYMMETRY=8.0, RFFREQ=frequency, FMAPFN="./s03av.nar", FMLOWE=0.072, FMHIGHE=0.595,
TRIMCOIL={tc15};
```
### What is the current *bug* behavior?
segmentation fault:
```
OPAL> * ---------------------- Start tracking ---------------------------------- *
/afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/10.3.0/include/c++/10.3.0/bits/stl_vector.h:1045: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = double; _Alloc = std::allocator<double>; std::vector<_Tp, _Alloc>::reference = double&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
Thread 1 "opal" received signal SIGABRT, Aborted.
0x00007ffff43a2aff in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: yum debuginfo-install bzip2-libs-1.0.6-26.el8.x86_64 glibc-2.28-211.el8.x86_64 zlib-1.2.11-21.el8_7.x86_64
(gdb) bt
#0 0x00007ffff43a2aff in raise () from /lib64/libc.so.6
#1 0x00007ffff4375ea5 in abort () from /lib64/libc.so.6
#2 0x00000000004c60c2 in std::__replacement_assert (
__file=__file@entry=0xce5908 "/afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/10.3.0/include/c++/10.3.0/bits/stl_vector.h", __line=__line@entry=1045,
__function=__function@entry=0xce77d8 "std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = double; _Alloc = std::allocator<double>; std::vector<_Tp, _Alloc>::reference ="..., __condition=__condition@entry=0xce5e98 "__builtin_expect(__n < this->size(), true)")
at /afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/10.3.0/include/c++/10.3.0/x86_64-pc-linux-gnu/bits/c++config.h:461
#3 0x00000000007785c0 in std::vector<double, std::allocator<double> >::operator[] (__n=19210, this=0x1999428)
at /home/snuverink_j/OPAL/src/src/Classic/AbsBeamline/Cyclotron.cpp:767
#4 Cyclotron::interpolate (this=this@entry=0x1998fd0, rad=@0x7fffffff6ac8: 4.7088452431449133,
tet_rad=@0x7fffffff6ad0: 0.1981665351710836, brint=@0x7fffffff6ad8: 0, btint=@0x7fffffff6ae0: 0,
bzint=@0x7fffffff6ae8: 0) at /home/snuverink_j/OPAL/src/src/Classic/AbsBeamline/Cyclotron.cpp:750
#5 0x000000000077887b in Cyclotron::apply (this=0x1998fd0, R=..., t=@0x7fffffff6f38: 6000.00000001014, E=..., B=...)
at /home/snuverink_j/OPAL/src/src/Classic/AbsBeamline/Cyclotron.cpp:454
#6 0x0000000000775b38 in Cyclotron::apply (this=0x1998fd0, id=@0x7fffffff70d8: 1, t=@0x7fffffff6f38: 6000.00000001014,
E=..., B=...) at /home/snuverink_j/OPAL/src/src/Classic/AbsBeamline/Cyclotron.cpp:409
#7 0x00000000006d2154 in ParallelCyclotronTracker::computeExternalFields_m (this=this@entry=0x1996380,
i=@0x7fffffff70d8: 1, t=<optimized out>, Efield=..., Bfield=...)
at /home/snuverink_j/OPAL/src/src/Algorithms/ParallelCyclotronTracker.cpp:3417
#8 0x00000000006d21bf in ParallelCyclotronTracker::getFieldsAtPoint (this=0x1996380, t=<optimized out>,
Pindex=@0x7fffffff70d8: 1, Efield=..., Bfield=...)
at /home/snuverink_j/OPAL/src/src/Algorithms/ParallelCyclotronTracker.cpp:1463
#9 0x00000000006ebe66 in std::function<bool (double const&, unsigned long const&, Vektor<double, 3u>&, Vektor<double, 3u>&)>::operator()(double const&, unsigned long const&, Vektor<double, 3u>&, Vektor<double, 3u>&) const (__args#3=...,
__args#2=..., __args#1=@0x7fffffff70d8: 1, __args#0=@0x7fffffff6da8: 6.9533465388994597e-310, this=<optimized out>)
at /afs/psi.ch/sys/psi.x86_64_slp6/Programming/gcc/10.3.0/include/c++/10.3.0/bits/std_function.h:248
#10 RK4<std::function<bool (double const&, unsigned long const&, Vektor<double, 3u>&, Vektor<double, 3u>&)>>::derivate_m(PartBunchBase<double, 3u>*, double*, double const&, double*, unsigned long const&) const (this=this@entry=0x193f8f0,
bunch=bunch@entry=0x193c500, y=y@entry=0x7fffffff7030, t=@0x7fffffff6f38: 6000.00000001014,
yp=yp@entry=0x7fffffff7000, i=@0x7fffffff70d8: 1) at /home/snuverink_j/OPAL/src/src/Steppers/RK4.hpp:108
#11 0x00000000006ec366 in RK4<std::function<bool (double const&, unsigned long const&, Vektor<double, 3u>&, Vektor<double, 3u>&)>>::doAdvance_m(PartBunchBase<double, 3u>*, unsigned long const&, double const&, double) const (this=0x193f8f0,
bunch=0x193c500, i=@0x7fffffff70d8: 1, t=@0x7fffffff7168: 5999.9341888880599, dt=0.065811122079631468)
at /home/snuverink_j/OPAL/src/src/Steppers/RK4.hpp:70
#12 0x00000000006d2a51 in Stepper<std::function<bool (double const&, unsigned long const&, Vektor<double, 3u>&, Vektor<double, 3u>&)>>::advance(PartBunchBase<double, 3u>*, unsigned long const&, double const&, double) const (
dt=0.065811122079631468, t=@0x7fffffff7168: 5999.9341888880599, i=@0x7fffffff70d8: 1, bunch=0x193c500,
this=<optimized out>) at /home/snuverink_j/OPAL/src/src/Algorithms/ParallelCyclotronTracker.cpp:3046
#13 ParallelCyclotronTracker::seoMode_m (this=this@entry=0x1996380, t=@0x7fffffff7168: 5999.9341888880599,
dt=0.065811122079631468, Ttime=..., Tdeltr=..., Tdeltz=..., TturnNumber=...)
at /home/snuverink_j/OPAL/src/src/Algorithms/ParallelCyclotronTracker.cpp:3046
#14 0x00000000006e21dd in ParallelCyclotronTracker::GenericTracker (this=0x1996380)
at /home/snuverink_j/OPAL/src/src/Algorithms/ParallelCyclotronTracker.cpp:1429
#15 0x00000000006e2765 in ParallelCyclotronTracker::execute (this=0x1996380)
at /home/snuverink_j/OPAL/src/src/Algorithms/ParallelCyclotronTracker.cpp:1238
#16 0x00000000006b2651 in TrackRun::execute (this=0x193e6c0) at /home/snuverink_j/OPAL/src/src/Track/TrackRun.cpp:245
#17 0x000000000053531b in OpalParser::execute (this=this@entry=0x193bb50, object=object@entry=0x193e6c0, name=...)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:140
#18 0x0000000000538d29 in OpalParser::parseAction (this=0x193bb50, stat=...)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:173
#19 0x00000000005392d9 in OpalParser::parse (this=0x193bb50, stat=...)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:91
#20 0x00000000005390d6 in OpalParser::run (this=0x193bb50)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:608
#21 0x00000000006aaba9 in TrackCmd::execute (this=0x18947b0) at /home/snuverink_j/OPAL/src/src/Track/TrackCmd.cpp:230
#22 0x000000000053531b in OpalParser::execute (this=this@entry=0x7fffffff8490, object=object@entry=0x18947b0, name=...)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:140
#23 0x0000000000538d29 in OpalParser::parseAction (this=0x7fffffff8490, stat=...)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:173
#24 0x00000000005392d9 in OpalParser::parse (this=0x7fffffff8490, stat=...)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:91
#25 0x00000000005390d6 in OpalParser::run (this=0x7fffffff8490)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:608
#26 0x0000000000535800 in OpalParser::run (this=this@entry=0x7fffffff8490, is=is@entry=0x1890ad0)
at /home/snuverink_j/OPAL/src/src/OpalParser/OpalParser.cpp:633
#27 0x00000000004b3843 in opalMain (argc=<optimized out>, argv=<optimized out>)
at /home/snuverink_j/OPAL/src/src/Main.cpp:364
#28 0x00007ffff438ed85 in __libc_start_main () from /lib64/libc.so.6
#29 0x00000000004af24e in _start () at /home/snuverink_j/OPAL/src/src/Main.cpp:131
```
### What is the expected *correct* behavior?
OPAL should not crash but ignore the field; the particle will be cleaned up afterwards.
### Possible fixes
https://gitlab.psi.ch/OPAL/src/-/blob/master/src/Classic/AbsBeamline/Cyclotron.cpp#L747:
```cpp
if (fieldType_m != BFieldType::FFABF) {
/*
For FFA this does not work
*/
r1t1 = it + ntetS * ir - 1;
r1t2 = r1t1 + 1;
r2t1 = r1t1 + ntetS;
r2t2 = r2t1 + 1 ;
} else {
/*
With this we have B-field AND this is far more
intuitive for me ....
*/
r1t1 = idx(ir, it);
r2t1 = idx(ir + 1, it);
r1t2 = idx(ir, it + 1);
r2t2 = idx(ir + 1, it + 1);
}
if ((it >= 0) && (ir >= 0) && (it < Bfield_m.ntetS_m) && (ir < Bfield_m.nrad_m)) {
// lookup and apply field
```
The range check should rather be `ir + 1 < Bfield_m.nrad_m`.

Milestone: 2023.1
Assignee: snuverink_j

## #750: Trimcoil implementation bug
(https://gitlab.psi.ch/OPAL/src/-/issues/750, 2023-07-24, snuverink_j <jochem.snuverink@psi.ch>)

### Summary
In #276 and !53 a trim coil range in the azimuthal direction was introduced. However, this was not properly tested, and it contained a bug that was discovered by @zhang_h. This was partly fixed in #736 / !598, but not completely tested.
### What is the current *bug* behavior?
The trim coils no longer work unless `PHIMIN` and `PHIMAX` are specified.
### What is the expected *correct* behavior?
Trim coils working as normal.
### Possible fixes
The bug was introduced in 77c975dcca3b99cf195cbf020d5039f8be745646: in particular, the default value of `PHIMAX` was not specified (https://gitlab.psi.ch/OPAL/src/-/commit/77c975dcca3b99cf195cbf020d5039f8be745646#fa26d1e4b267fcc893fcd886f40d73b50d62cdef_46_56) and the `TrimCoil::setAzimuth` method was not well implemented (https://gitlab.psi.ch/OPAL/src/-/commit/77c975dcca3b99cf195cbf020d5039f8be745646#c96f8a350e29295ac068d6a60d9a39c1972d72e6_16_28).

Milestone: 2023.1
Assignee: snuverink_j

## #747: OPAL hangs when running FFA in parallel
(https://gitlab.psi.ch/OPAL/src/-/issues/747, 2022-11-08, ext-rogers_c)

### Summary
OPAL hangs when running FFA in parallel
### Possible fixes
`Ring::apply` has a call to `lossDS_m->save();` which in turn calls `ippl::allreduce`. This is a type of MPI reduce. At the same time, `ParallelCyclotronTracker::deleteParticle` calls `allreduce(flagNeedUpdate, 1, std::logical_or<bool>());`. So the procs can never align and we get a hang.
The fix is to call `lossDS_m->save()` in the `Ring::finalise` method, which seems to be the approach taken by `Cyclotron`.

Milestone: 2023.1
Assignee: ext-rogers_c

## #743: Compilation fails
(https://gitlab.psi.ch/OPAL/src/-/issues/743, 2022-10-19, ext-calvo_p <pedro.calvo@ciemat.es>)

After OPAL/src!602, OPAL compilation fails. The format of the new version is invalid (see [link](http://amas.web.psi.ch/opal/master/output/2022-10-19_10-49.txt)).

Assignee: gsell

## #727: Hard coded momentum tolerance
(https://gitlab.psi.ch/OPAL/src/-/issues/727, 2023-08-30, ext-rogers_c)

### Summary
In OPAL-Cycl, there is a hardcoded parameter which requires the mean beam momentum to be within a relative 1e-2 of the reference particle momentum. This is completely inappropriate for many simulations.
### Steps to reproduce
Run a lattice with PC != mean momentum of the beam
### What is the current *bug* behavior?
Opal throws an exception. This comes from line 2411 of ParallelCyclotronTracker.cpp
### What is the expected *correct* behavior?
Really, OPAL should not throw an exception at all. There are many use cases where the reference momentum should differ from the momentum of the actual distribution. At the very least, the tolerance should be soft-coded.
### Relevant logs and/or screenshots
Line 2414 of src/Algorithms/ParallelCyclotronTracker.cpp
```
if (std::abs(pTotalMean - referencePtot) / pTotalMean > 1e-2) { // ROGERS BUG; 1e-2 should be user parameter
throw OpalException("ParallelCyclotronTracker::checkFileMomentum",
"The total momentum of the particle distribution\n"
"in the global reference frame: " +
std::to_string(pTotalMean) + ",\n"
"is different from the momentum given\n"
"in the \"BEAM\" command: " +
std::to_string(referencePtot) + ".\n"
"In Opal-cycl the initial distribution\n"
"is specified in the local reference frame.\n"
"When using a \"FROMFILE\" type distribution, the momentum \n"
"must be the same as the specified in the \"BEAM\" command,\n"
"which is in global reference frame.");
}
```
I guess the easiest fix would be to add a tolerance parameter to the FROMFILE distribution type, defaulting to 1e-2.

Milestone: 2023.1
Assignee: ext-rogers_c

## #717: Writing initial distribution fails in multicore case
(https://gitlab.psi.ch/OPAL/src/-/issues/717, 2022-05-19, ext-calvo_p <pedro.calvo@ciemat.es>)

Writing the initial distribution to file (making use of the `WRITETOFILE` attribute) fails for injected distributions when the simulation is run in a parallel environment. The output file only saves the particles of the first node.

## #645: Fix turnNumber in loss output file
(https://gitlab.psi.ch/OPAL/src/-/issues/645, 2021-04-06, ext-calvo_p <pedro.calvo@ciemat.es>)

After implementing OPAL/src#503, loss files in ASCII format do not consider the `turnNumber` info when simulations are performed in a parallel environment unless all nodes have particles.
`hasTurnInformations()` could be modified to fix it.

Assignee: ext-calvo_p

## #625: Fix exceptions in parallel
(https://gitlab.psi.ch/OPAL/src/-/issues/625, 2021-06-09, ext-calvo_p <pedro.calvo@ciemat.es>)

The following discussion from !458 should be addressed:
- [ ] @snuverink_j started a [discussion](https://gitlab.psi.ch/OPAL/src/-/merge_requests/458#note_28753): (+5 comments)
> If I understand correctly the file is only read by node 0, so this check can be done by node 0 only (as it was before).
>
> But to be honest, I don't understand why it was not working before in the parallel environment. Can you elaborate a bit?
@ext-calvo_p
> I thought the same, but when OPAL is run in a parallel environment and the distribution file doesn't exist, the OpalException is not thrown.
>
> I think that checking if the file exists before opening it does not have to be done exclusively by node 0.
@snuverink_j
> I can reproduce it: the simulation hangs.
>
> But this seems something more fundamental to me, because I would also have expected that a throw by a single node would be enough to stop the simulation. @gsell: Should that not be the case?
@kraus
> In Main.cpp we catch the exception and then call MPI_Abort on MPI_COMM_WORLD. I thought that this should also stop the other nodes, but this isn't the case. So we try to throw the exception on all nodes.

## #615: Opal version in master branch
(https://gitlab.psi.ch/OPAL/src/-/issues/615, 2020-10-09, ext-calvo_p <pedro.calvo@ciemat.es>)

The OPAL VERSION should be changed to 2.5 in the master branch (CMakeLists.txt currently shows 2.3).

Assignee: ext-calvo_p

## #605: BANDRF fieldmaps have no effect beginning with OPAL 2.2
(https://gitlab.psi.ch/OPAL/src/-/issues/605, 2022-02-04, winklehner_d)

h5hut field maps (.h5part) loaded as part of the BANDRF cyclotron type, which produce the desired effect in OPAL 2.0, don't seem to have any effect in OPAL 2.2 and up. Cyclotron units used to be mm and kV/mm (the same as MV/m). Have these input units been changed somehow?
My current example is that of an electrostatic extraction septum that correctly pushes the final turn out by ~2 cm in OPAL 2.0, but does nothing in OPAL 2.2 and up.

## #603: WHAT command used in regression tests
(https://gitlab.psi.ch/OPAL/src/-/issues/603, 2021-01-04, kraus)

The `WHAT` command was removed in revision 01405a79. However, it is used in the regression tests to determine the revision of the source repository, [see here](https://gitlab.psi.ch/OPAL/NightlyBuild/-/blob/master/scripts/OpalRegressionTests/regressiontest.py#L78). We should either find a different way to determine the revision or revert the deletion of the `WHAT` command.

## #588: compiler errors with clang9
(https://gitlab.psi.ch/OPAL/src/-/issues/588, 2020-08-04, snuverink_j <jochem.snuverink@psi.ch>)

### Summary
Compiler error with clang.
### Steps to reproduce
```
-- The C++ compiler identification is: Clang
-- The C++ compiler version is: 9.0.1
-- The MPI C++ compiler is: /opt/local/bin/mpicxx-mpich-clang90
```
### Relevant logs and/or screenshots
```
[ 45%] Building CXX object src/CMakeFiles/libOPAL.dir/Classic/BeamlineCore/RBendRep.cpp.o
/Users/jsnuverink/Documents/OPAL/fork/src/src/Classic/BeamlineCore/RBendRep.cpp:35:17: error: unused variable
'entries' [-Werror,-Wunused-const-variable]
const Entry entries[] = {
^
1 error generated.
make[2]: *** [src/CMakeFiles/libOPAL.dir/Classic/BeamlineCore/RBendRep.cpp.o] Error 1
```

Assignee: snuverink_j

## #583: Particles miss step in element with particle / matter interaction
(https://gitlab.psi.ch/OPAL/src/-/issues/583, 2023-11-30, kraus)

If two elements with particle / matter interaction are closer to each other than a single time step, then the particles drift for one time step after leaving the first element and before entering the second element. In #308 the variable `tau` was introduced in the class `CollimatorPhysics` in order to get rid of a time structure. The quantity `tau` is computed for the first and the last time step. The meaning of `tau` is the fraction of a time step that the current position of a particle is away from the edge. `tau` has to be between `0.0` and `1.0`. However, this isn't true if a particle hops from one element with particle / matter interaction to another. Currently this isn't handled correctly; instead, the particle drifts for at least one time step before it can enter another element with particle / matter interaction.

Assignees: kraus, ext-calvo_p

## #574: P3M solver declaration is missing
(https://gitlab.psi.ch/OPAL/src/-/issues/574, 2021-05-26, ext-calvo_p <pedro.calvo@ciemat.es>)

The P3M solver is missing as a FieldSolver type (`FSTYPE`) in FieldSolver.cpp.
In addition, it is not documented in the manual (OPAL/documentation/manual#41).

Milestone: OPAL-3.0

## #567: Segmentation fault simulation
(https://gitlab.psi.ch/OPAL/src/-/issues/567, 2020-07-16, ext-calvo_p <pedro.calvo@ciemat.es>)

After !395 I've got a segmentation fault:
```
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Invalid permissions (2)
Failing at address: 0x15c69b0
[ 0] /lib64/libpthread.so.0[0x30b520f130]
[ 1] opal(_ZTVN10__cxxabiv120__si_class_type_infoE+0x10)[0x15c69b0]
*** End of error message ***
Violación de segmento (`core' generado)
```
The OPAL compilation was successful; I don't know the cause of the error.
cc: @frey\_m @snuverink\_j

## #557: Optimizer gets stuck
(https://gitlab.psi.ch/OPAL/src/-/issues/557, 2021-06-10, snuverink_j <jochem.snuverink@psi.ch>)

As reported by Finn O'Shea: the optimizer gets stuck from time to time with the following example:
[fdopt.in](/uploads/e8ef7cb47cc60dcfbb8daab723b461df/fdopt.in)
[fdopt.data](/uploads/2f965a6b27a71ba36ab64519a5d8c74d/fdopt.data)
[fdopt.tmpl](/uploads/6e7dc37c7fe54832e01c0d7a41755e2f/fdopt.tmpl)

Assignee: snuverink_j