src issues
https://gitlab.psi.ch/OPAL/src/-/issues

Issue 158: Somehow PSDump has influence on dumped statistics
https://gitlab.psi.ch/OPAL/src/-/issues/158 (kraus, 2017-08-18; milestone OPAL 1.6.0; assignee adelmann)

[red has PSDump simultaneously](/uploads/f289a4e3acd9d43703dc6b5c9c5c50fe/influencePSDump.png) This doesn't hurt anything further, but it is annoying.

Issue 150: Add a user definable transverse limit to degrader class
https://gitlab.psi.ch/OPAL/src/-/issues/150 (kraus, 2017-08-12; milestone OPAL 1.6.0; assignee kraus)

Issue 140: Particle delete
https://gitlab.psi.ch/OPAL/src/-/issues/140 (adelmann, 2017-08-05; milestone OPAL 1.6.0; assignee kraus)

With OPAL-1.6 (newest pull) and regression test PSIGUN-1, Bin 0 gets no particles at time step 2:

....
OPAL {0}[3]> * Wrote beam statistics.
Ippl{0}[2]> Bin 0 gamma = 1.00717e+00; NpInBin= 667
Ippl{0}[2]> Bin 1 has no particles
Ippl{0}[2]> Bin 2 has no particles
Ippl{0}[2]> Bin 3 has no particles
Ippl{0}[2]> Bin 4 has no particles
Ippl{0}[3]> * Bin number: 2 has emitted all particles (new emit).
ParallelTTracker {0}> * Deleted 667 particles, remaining 4755 particles
ParallelTTracker {0}[3]> 12:03:09 Step 1 at -0.053 [mm] t= 1.060e-11 [s] E= 5.388 [keV]
...
OPAL {0}>
OPAL {0}[3]> * Wrote beam statistics.
Ippl{0}[2]> Bin 0 has no particles
Ippl{0}[2]> Bin 1 gamma = 1.01054e+00; NpInBin= 4755
Ippl{0}[2]> Bin 2 has no particles

Later on we run into `I + M < LocalSize`.

@kraus Is there still an autophase problem?

Issue 138: Setting autophase option without a cavity in beamline throws mysterious error
https://gitlab.psi.ch/OPAL/src/-/issues/138 (ext-hall_c, 2017-08-05; milestone OPAL 1.6.0; assignee kraus)

With `"OPTION, AUTOPHASE=4;"` in my input file, when I use a beamline without a cavity I see an error like:
`opal(7879,0x7fff7f140000) malloc: *** error for object 0x7fff9a15b9f3: pointer being freed was not allocated`

Turning autophase off allowed my input file to run without error, but this error was not very informative and it took quite a while to find the culprit. It might be helpful if making this mistake generated a specific error message.

Issue 134: Perfect Diode Regression-Test
https://gitlab.psi.ch/OPAL/src/-/issues/134 (adelmann, 2017-07-24; milestone OPAL 1.6.0; assignee kraus)

I remember seeing this lately and connect the solution with @winklehner_d:
Ippl{0}> *** Error:
Ippl{0}> RUN,METHOD="PARALLEL-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST2;
Ippl{0}> Internal OPAL error: vector::_M_range_check: __n (which is 25000) >= this->size() (which is 25000)

It concerns the failure of the PerfectDiode regression test.

Issue 129: Array of distributions containing FROMFILE
https://gitlab.psi.ch/OPAL/src/-/issues/129 (kraus, 2017-08-13; milestone OPAL 1.6.0; assignee kraus)

This won't work properly because, e.g., the number of particles in a FROMFILE distribution is fixed. Thus, when computing the number of particles the other distributions should contain, we first have to subtract the number of particles in the FROMFILE distributions.

Issue 128: Let each distribution in array of distributions have its own offset in R and P
https://gitlab.psi.ch/OPAL/src/-/issues/128 (kraus, 2017-07-15; milestone OPAL 1.6.0; assignee kraus)

When providing an array of distributions in which each distribution has its own OFFSET{X|Y|Z|PX|PY|PZ}, then, so far, all distributions use the offsets of the first distribution.

Issue 125: Vector of time steps: error in the parser
https://gitlab.psi.ch/OPAL/src/-/issues/125 (Valeria Rizzoglio, 2017-07-13; milestone OPAL 1.6.0; assignee kraus)

[PROSCAN-G3-230.in](/uploads/0f541b042bd39fdf2fe62688529cc406/PROSCAN-G3-230.in)
If I track the particles using a vector of time steps:

```
TRACK, LINE=BEAMLINE_TOT,
       BEAM=BEAM_G3_LA1,
       MAXSTEPS={5e+08,5e+08,5e+08},
       DT={5*PICOSECONDS,1*PICOSECONDS,5*PICOSECOND},
       ZSTOP={6.145,6.75,16}
```

Issue 111: Move download page from Trac to gitlab
https://gitlab.psi.ch/OPAL/src/-/issues/111 (kraus, 2017-07-24; milestone OPAL 1.6.0; assignee gsell)

Currently the download page is a page in the old Trac instance. Move it to gitlab.

Issue 94: Error detected by function "FileStream::fillLine()"
https://gitlab.psi.ch/OPAL/src/-/issues/94 (ganz_p, 2017-06-17; milestone OPAL 1.6.0; assignee adelmann)

I ran some simulations, and at a certain point all simulations gave me the following error:
[Terminal.out](/uploads/8d537807dbf8586b2ec6f08e87a708ae/Terminal.out)

I've tried varying the opal command (with and without `mpirun`, or `--use-dks`), but all files, even files that already ran well, gave me that error.

The OPAL version I use is `OPAL/1.5.1-20170217`.

Example .in file:

[100MeV_InvQuad_1_NoColl.in](/uploads/44d81f1f63a2ffffc828556e7944cfdb/100MeV_InvQuad_1_NoColl.in)

Issue 90: OPAL-Cycl - COMET
https://gitlab.psi.ch/OPAL/src/-/issues/90 (adelmann, 2017-06-17; milestone OPAL 1.6.0; assignee adelmann)

I have been using a locally compiled code with version number 1.2.1 SVN. I have also run the program through module load with version number 1.4.3. The loss files are basically the same.

Attached is the input file vc.in. The two phase slits CMA1 and CMA2 work quite well. However, the loss data from the vertical collimators, for example from the pair VC7 and VC8, often register the same particles.

[vc.in](/uploads/8630def3fe171c14cc64887dc9991232/vc.in)

Issue 86: OPAL-1.6 check DKS version used to compile
https://gitlab.psi.ch/OPAL/src/-/issues/86 (Uldis Locans, 2017-06-17; milestone OPAL 1.6.0)

OPAL-1.6 does not check which DKS version is used, so compilation errors are possible due to wrong versions.

Issue 85: Error in compiling OPAL-1.6 with -DENABLE_DKS=1
https://gitlab.psi.ch/OPAL/src/-/issues/85 (Valeria Rizzoglio, 2017-06-17; milestone OPAL 1.6.0)

I have the following modules loaded:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 4) hdf5/1.8.18 7) trilinos/12.10.1 10) OpenBLAS/0.2.19 13) opal-toolschain/1.6
2) openmpi/1.10.4 5) H5hut/2.0.0rc3 8) root/6.08.02 11) cuda/8.0.44
3) boost/1.62.0 6) gsl/2.2.1 9) cmake/3.6.3 12) dks/1.0.1
```
and I got the following error message:
```
/home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)’:
/home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp:1094:52: error: no matching function for call to ‘DKSBase::callInitRandoms(int&, int&)’
dksbase.callInitRandoms(size, Options::seed);
^
In file included from /home/scratch/opal/src/ippl/src/Utility/IpplInfo.h:59:0,
from /home/scratch/opal/src/ippl/src/Message/Message.hpp:29,
from /home/scratch/opal/src/ippl/src/Message/Message.h:618,
from /home/scratch/opal/src/ippl/src/AppTypes/Vektor.h:16,
from /home/scratch/opal/src/src/Classic/Algorithms/Vektor.h:6,
from /home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.hh:13,
from /home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/opt/psi/MPI/dks/1.0.1/openmpi/1.10.4/gcc/5.4.0/include/DKSBase.h:1077:7: note: candidate: int DKSBase::callInitRandoms(int)
int callInitRandoms(int size);
^
/opt/psi/MPI/dks/1.0.1/openmpi/1.10.4/gcc/5.4.0/include/DKSBase.h:1077:7: note: candidate expects 1 argument, 2 provided
[ 60%] Building CXX object src/CMakeFiles/OPALib.dir/Classic/Utilities/DivideError.cpp.o
```

Issue 57: Description of BANDRF in Manual
https://gitlab.psi.ch/OPAL/src/-/issues/57 (adelmann, 2017-03-16; milestone OPAL 1.6.0; assignee adelmann)

Needs a format description in the manual.

Issue 30: Regression test 'RestartTest-2' is running forever
https://gitlab.psi.ch/OPAL/src/-/issues/30 (gsell, 2017-03-16; milestone OPAL 1.6.0; assignee gsell)

Issue 27: RingCyclotron- and RingCyclotronMTS-Test broken
https://gitlab.psi.ch/OPAL/src/-/issues/27 (adelmann, 2017-03-18; milestone OPAL 1.6.0; assignee adelmann)

commit 005f20628c7049f5fd1c8c06610ea084d1db2983
Author: Andreas Adelmann <andreas.adelmann@psi.ch>
Date: Tue Nov 8 21:45:26 2016 +0100

    remove particle with ID==0 from the H5 file in case of opal-cycl and from the statistics calculation
src/Classic/Algorithms/PartBunch.cpp | 34 +++++++++++++++++++++++++++++++++-
src/Structure/H5PartWrapperForPC.cpp | 21 +++++++++++++++++++++
2 files changed, 54 insertions(+), 1 deletion(-)
Not entirely unexpectedly, this breaks the RingCyclotron and RingCyclotronMTS tests.

Issue 26: regression tests do not run on head of the repo
https://gitlab.psi.ch/OPAL/src/-/issues/26 (adelmann, 2017-03-16; milestone OPAL 1.6.0; assignee gsell; 2017-02-06)

Hi all,

I just noticed that the regression tests no longer fetch the latest version of the code from the git repo; they are stuck at Andreas' commit of 19 August (2016!), 987ab1f. Perhaps one of you two should look into what the problem is.

Regards,
christof

Issue 24: Fieldsolver ?
https://gitlab.psi.ch/OPAL/src/-/issues/24 (adelmann, 2017-03-16; milestone OPAL 1.6.0; assignee adelmann)

Dear OPAL users, I am writing to you because we have some problems with the simulation of a big number of particles.
We simulate our machine without problems with the correct electric and magnetic fields, including the geometry in the simulation. But when we increase the number of particles above 1000, we obtain the following error in the output:
Error> Interpolator::getFieldIter: attempt to access non-local index{[-2147483648:-2147483648:1],[-2147483648:-2147483648:1],[-2147483648:-2147483648:1]} on node 0
Error> Dumping local owned and allocated domains:
Error> 0: owned = {[0:31:1],[0:31:1],[0:31:1]}, allocated = {[-1:32:1],[-1:32:1],[-1:32:1]}
Error> Error occurred for BareField with layout = Domain = {[0:31:1],[0:31:1],[0:31:1]}
Error> FieldLayoutUsers = 3
Error> Total number of vnodes = 1
Error> Local Vnodes = 1
Error> vnode 0: Node = 0 ; vnode_m = -1 ; Domain = {[0:31:1],[0:31:1],[0:31:1]}
Error> Remote Vnodes = 0
Error>
Error> Calling abort ...
This error doesn't occur when the number of particles is lower than 1000.

We have tried to solve this error by changing the fieldsolver in the code for 1000 particles, but the results are completely different if we compare with the simulation with 999 particles.

I will appreciate any suggestion to solve this problem.
Best Regards
Pedro Calvo

Issue 19: DKS Documentation in OPAL manual
https://gitlab.psi.ch/OPAL/src/-/issues/19 (adelmann, 2018-01-30; milestone OPAL 1.6.0)

Add DKS documentation to the OPAL manual:

- reference to paper
- how to compile
- how to use
- update OPTION, remove DKS

A brief description of how to install cuda and install dks is available on the Wiki.
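
Issue 129 above describes the accounting needed for an array of distributions that contains FROMFILE entries: a FROMFILE distribution's particle count is fixed by the file, so it must be subtracted from the requested total before the remaining particles are shared among the other distributions. A minimal sketch of that bookkeeping, with all names (`DistSpec`, `splitParticles`) hypothetical and not the actual OPAL Distribution API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical model of one entry in the distribution array: a nonzero
// 'fixedCount' marks a FROMFILE distribution, whose particle number is
// dictated by the file; the others share the remainder by relative weight.
struct DistSpec {
    std::size_t fixedCount;  // particles read from file (0 = scalable)
    double weight;           // relative weight of a scalable distribution
};

std::vector<std::size_t> splitParticles(std::size_t total,
                                        const std::vector<DistSpec>& specs) {
    // First subtract the fixed FROMFILE counts from the requested total.
    std::size_t fixed = 0;
    double weightSum = 0.0;
    for (const auto& s : specs) {
        fixed += s.fixedCount;
        if (s.fixedCount == 0) weightSum += s.weight;
    }
    assert(fixed <= total && "FROMFILE particles exceed requested total");

    const std::size_t remaining = total - fixed;
    std::vector<std::size_t> counts(specs.size(), 0);
    std::size_t assigned = 0;
    long lastScalable = -1;
    for (std::size_t i = 0; i < specs.size(); ++i) {
        if (specs[i].fixedCount > 0) {
            counts[i] = specs[i].fixedCount;  // fixed by the file
        } else {
            counts[i] = static_cast<std::size_t>(
                remaining * specs[i].weight / weightSum);
            assigned += counts[i];
            lastScalable = static_cast<long>(i);
        }
    }
    // Give rounding leftovers to the last scalable distribution so the
    // counts add up to 'total' exactly.
    if (lastScalable >= 0) counts[lastScalable] += remaining - assigned;
    return counts;
}
```

For example, requesting 1000 particles over one FROMFILE distribution with 300 particles and two equally weighted scalable distributions yields 300, 350 and 350.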