OPAL/src issues (https://gitlab.psi.ch/OPAL/src/-/issues)

Issue 120: Particle Termination
https://gitlab.psi.ch/OPAL/src/-/issues/120 (winklehner_d, 2019-12-12)

Hi,
Has anybody else noticed that particles are no longer terminated correctly when Bin is set to -1 (the usual setting in the CyclotronTracker) since last week's commits to the head? It still works for the BoundaryGeometry, but not, for example, for the Cyclotron outer boundaries. I think it might be related to the removal of all the boundp calls.
Best,
Daniel
Milestone: OPAL-2.2.0 (winklehner_d)

Issue 119: Periodic BC's
https://gitlab.psi.ch/OPAL/src/-/issues/119 (winklehner_d, 2021-07-06)

It seems that when I set BCFFTT = PERIODIC, not only the z-direction but all directions are automatically set to periodic boundary conditions. @uldis_l, I am assuming "UL" in the comment of PartBunch::setBCForDCBeam() is you. Was there a particular reason to do this? In my understanding, a DC beam would have open BCs in x and y and a periodic BC in z.
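The expected per-dimension choice can be sketched with hypothetical types (these are not OPAL's actual boundary-condition classes, just an illustration of the convention described above):

```cpp
#include <array>

// Hypothetical sketch, not OPAL's actual API: per-dimension boundary
// conditions for a DC beam. Only the longitudinal (z) direction is
// periodic; the transverse x and y directions stay open.
enum class BC { Open, Periodic };

std::array<BC, 3> bcForDCBeam() {
    return {BC::Open, BC::Open, BC::Periodic};  // x, y, z
}
```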
In addition, the manual calls the parameters "BCFFTZ" and "PARFFTZ", but OPAL tells me those don't exist and throws an exception; I have to use "BCFFTT" and "PARFFTT". Just a minor bug.
Assignees: adelmann, kraus

Issue 110: PartBunch::get_bounds can produce NaNs
https://gitlab.psi.ch/OPAL/src/-/issues/110 (snuverink_j, jochem.snuverink@psi.ch, 2019-10-25)

While trying to update the [PSI-Ring](https://gitlab.psi.ch/AMAS-BDModels/PSI-Ring) simulations to the master branch, I encountered the following running error:
```
OPAL> PartBunch.cpp: 1574 nan 2.000000e-02
Error>
Error> *** User error detected by function "PartBunch::boundp() "
Error> *** in line 311 of file "Ring.in":
Error> RUN,METHOD="CYCLOTRON-T",BEAM=BEAM1,FIELDSOLVER=FS1,DISTRIBUTION=DIST;
Error> h<0, can not build a mesh
```
The `nan` gets introduced in line 1521: `get_bounds(rmin_m, rmax_m);`
Printing out rmax and rmin before and after this line gives (ymmv):
before:
```
(i,rmax, rmin) 0 0.0000000000000000e+00 0.0000000000000000e+00
(i,rmax, rmin) 1 0.0000000000000000e+00 0.0000000000000000e+00
(i,rmax, rmin) 2 0.0000000000000000e+00 0.0000000000000000e+00
```
after:
```
(i,rmax, rmin) 0 7.1153710538428058e-03 -6.9640951722910538e-03
(i,rmax, rmin) 1 4.0421699390708048e-02 -4.0512781208033796e-02
(i,rmax, rmin) 2 -nan -nan
```
I am likely doing something wrong in my input, but the code should not get this far; it should produce a better error message.
This can be reproduced with OPAL master (0469d1ac) and the latest version of [PSI-Ring](https://gitlab.psi.ch/AMAS-BDModels/PSI-Ring) by executing `runOpal --nobatch`.
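A guard of roughly this shape (a sketch with assumed names, not OPAL's actual `boundp()` code) could catch the invalid bounds right after `get_bounds` and fail with a clearer message:

```cpp
#include <array>
#include <cmath>
#include <sstream>
#include <stdexcept>

using Vector3 = std::array<double, 3>;

// Validate the bounds before they are used to build a mesh, so a NaN
// fails loudly here instead of as "h<0, can not build a mesh" later.
void checkBounds(const Vector3& rmin, const Vector3& rmax) {
    for (int i = 0; i < 3; ++i) {
        if (!std::isfinite(rmin[i]) || !std::isfinite(rmax[i]) || rmax[i] < rmin[i]) {
            std::ostringstream msg;
            msg << "invalid bounds in dimension " << i
                << ": rmin = " << rmin[i] << ", rmax = " << rmax[i];
            throw std::runtime_error(msg.str());
        }
    }
}
```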
**Edit 20 July:**
https://gitlab.psi.ch/OPAL/src/issues/110#note_1914: Simplified input file [Ring.in](https://gitlab.psi.ch/OPAL/src/uploads/ec9579a7c5009c1b7465266afe4373c0/Ring.in)
https://gitlab.psi.ch/OPAL/src/issues/110#note_1916: Regression test `RingCyclotron` has the same bug when one changes the distribution from Gauss to either single-particle or binomial.
Assignee: snuverink_j (jochem.snuverink@psi.ch)

Issue 106: Segfault in case of Material at beginning of beamline
https://gitlab.psi.ch/OPAL/src/-/issues/106 (frey_m, 2017-07-24)

We run [sim.in](/uploads/94c1f0db20f572ae97a6a320574d9545/sim.in).
Error output:
```
OPAL>
OPAL> --- BEGIN FIELD LIST ---------------------------------------------------------------
OPAL>
OPAL> --- 0.2 m -- 0.200228 m -- has surface physics ------------------------------------
OPAL> DMA_DEG1
OPAL> --- 0.200228 m -- 1.20023 m -- -----------------------------------------------------
OPAL> D1
OPAL>
OPAL> --- END FIELD LIST -----------------------------------------------------------------
OPAL>
[opalrunner:18498] *** Process received signal ***
[opalrunner:18498] Signal: Segmentation fault (11)
[opalrunner:18498] Signal code: Address not mapped (1)
[opalrunner:18498] Failing at address: 0x30
[opalrunner:18498] [ 0] /lib64/libpthread.so.0[0x32ea20f7e0]
[opalrunner:18498] [ 1] opal(_ZN12OpalBeamline14switchElementsERKdS1_S1_RKb+0x1cf)[0xf1829f]
[opalrunner:18498] [ 2] opal[0x10758bd]
[opalrunner:18498] [ 3] opal(_ZN16ParallelTTracker21executeDefaultTrackerEv+0x2c0)[0x107c520]
[opalrunner:18498] [ 4] opal(_ZN16ParallelTTracker7executeEv+0x1f)[0x107d35f]
[opalrunner:18498] [ 5] opal(_ZN8TrackRun7executeEv+0x751)[0x1043b51]
[opalrunner:18498] [ 6] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcac6c5]
[opalrunner:18498] [ 7] opal(_ZNK10OpalParser11parseActionER9Statement+0x11a)[0xcb062a]
[opalrunner:18498] [ 8] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb0076]
[opalrunner:18498] [ 9] opal(_ZNK10OpalParser3runEv+0x2c)[0xcb158c]
[opalrunner:18498] [10] opal(_ZN8TrackCmd7executeEv+0x343)[0xd63cb3]
[opalrunner:18498] [11] opal(_ZNK10OpalParser7executeEP6ObjectRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x35)[0xcac6c5]
[opalrunner:18498] [12] opal(_ZNK10OpalParser11parseActionER9Statement+0x11a)[0xcb062a]
[opalrunner:18498] [13] opal(_ZNK10OpalParser5parseER9Statement+0x186)[0xcb0076]
[opalrunner:18498] [14] opal(_ZNK10OpalParser3runEv+0x2c)[0xcb158c]
[opalrunner:18498] [15] opal(_ZNK10OpalParser3runEP11TokenStream+0x6a)[0xcb0a8a]
[opalrunner:18498] [16] opal(main+0x8e8)[0xc3f858]
[opalrunner:18498] [17] /lib64/libc.so.6(__libc_start_main+0xfd)[0x32e961ed1d]
[opalrunner:18498] [18] opal[0xc36cb5]
[opalrunner:18498] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 18498 on node opalrunner exited on signal 11 (Segmentation fault).
```
It doesn't crash when we add a drift in front of the material.
Milestone: OPAL 1.6.1 (adelmann)

Issue 105: RectangularDomain::getBoundaryStencil typo
https://gitlab.psi.ch/OPAL/src/-/issues/105 (snuverink_j, jochem.snuverink@psi.ch, 2017-06-17)

Lines [51-53](https://gitlab.psi.ch/OPAL/src/blob/master/src/Solvers/RectangularDomain.cpp#L51):
```c++
S = -hr[0] * hr[2] / hr[1];
F = -hr[0] * hr[1] / hr[2];
S = -hr[0] * hr[1] / hr[2];
```
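For reference, a sketch of what the coefficients presumably should be, assuming the usual symmetry of a 7-point stencil (opposite faces get the same weight); this is an illustration with assumed names, not the verified formula:

```cpp
#include <array>

// Sketch of the presumably intended coefficients. hr is the mesh spacing;
// W/E, S/N, F/B are the west/east, south/north, front/back face weights.
struct Stencil { double W, E, S, N, F, B; };

Stencil boundaryStencil(const std::array<double, 3>& hr) {
    Stencil st;
    st.W = st.E = -hr[1] * hr[2] / hr[0];
    st.S = st.N = -hr[0] * hr[2] / hr[1];
    st.F = st.B = -hr[0] * hr[1] / hr[2];  // the source assigns S here a second time
    return st;
}
```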
The second `S` assignment is likely a typo and should be `B`, but it would be good if someone could check the formulas.
Milestone: OPAL 1.6.1

Issue 104: --version or --help crashes OPAL
https://gitlab.psi.ch/OPAL/src/-/issues/104 (adelmann, 2017-06-17, milestone OPAL 1.9.x)

Issue 103: Overlap of field maps OPAL-cycl
https://gitlab.psi.ch/OPAL/src/-/issues/103 (adelmann, 2017-07-24)

Communicated by @zhang_h.
Case maps for COMET.
We have four non-superposed RF maps and one superposed electrostatic map. The read-in loop could stop at the third RF map without reading the electrostatic map. We could put the electrostatic map in front, but that could cause other problems.
Milestone: OPAL 1.9.x (adelmann)

Issue 102: PSDUMPFRAME report is wrong with OPTION TELL=TRUE
https://gitlab.psi.ch/OPAL/src/-/issues/102 (adelmann, 2017-06-17)

The following trivial OPAL input file:
```
OPTION, TELL=TRUE;
OPTION, PSDUMPFRAME=REFERENCE;
QUIT;
```
shows
OPAL> Current settings of options:
OPAL> OPTION,ECHO=FALSE,INFO=TRUE,TRACE=FALSE,VERIFY=FALSE,WARN=TRUE,
OPAL> SEED=1.23457e+08,TELL=TRUE,PSDUMPFREQ=10,STATDUMPFREQ=10,
OPAL> PSDUMPEACHTURN=FALSE,PSDUMPLOCALFRAME=FALSE,**PSDUMPFRAME="GLOBAL"**,
OPAL> SPTDUMPFREQ=1,REPARTFREQ=10,REBINFREQ=100,SCSOLVEFREQ=1,
OPAL> MTSSUBSTEPS=1,REMOTEPARTDEL=0,SCAN=FALSE,RHODUMP=FALSE,
OPAL> EBDUMP=FALSE,CSRDUMP=FALSE,AUTOPHASE=0,PPDEBUG=FALSE,
OPAL> SURFDUMPFREQ=-1,NUMBLOCKS=0,RECYCLEBLOCKS=0,NLHS=1,CZERO=FALSE,
OPAL> RNGTYPE="RANDOM",SCHOTTKYCORR=FALSE,SCHOTTKYRENO=-1,ENABLEHDF5=TRUE,
OPAL> ASCIIDUMP=FALSE,BOUNDPDESTROYFQ=10,BEAMHALOBOUNDARY=0,
OPAL> CLOTUNEONLY=FALSE,VERSION=10000;OPAL 1.6.1adelmannadelmannhttps://gitlab.psi.ch/OPAL/src/-/issues/98Placement of elements in 3D coordinates not possible anymore2017-06-17T20:38:34+02:00krausPlacement of elements in 3D coordinates not possible anymorePlacement of elements in 3D coordinates (see attachment) was possible, this isn't the case anymore.
This issue has to do with the fact that I added the attribute ELEMEDGE and introduced access methods.
[Niowave_first_korrektur.dat](/u...Placement of elements in 3D coordinates (see attachment) was possible, this isn't the case anymore.
This issue has to do with the fact that I added the attribute ELEMEDGE and introduced access methods.
[Niowave_first_korrektur.dat](/uploads/ad152c3a3e13fa0ec231105ec4711817/Niowave_first_korrektur.dat), [Banana_ref.in](/uploads/68db2ec88393f764cfebd520466bf2de/Banana_ref.in), [ez_normalizedcathodepos_4.txt](/uploads/8015defbc8c4082e95296f6ff3133670/ez_normalizedcathodepos_4.txt)
Milestone: OPAL 1.9.x (kraus)

Issue 96: DKS 1.1.0 for OPAL 1.6 branch
https://gitlab.psi.ch/OPAL/src/-/issues/96 (gsell, 2017-06-17)

DKS 1.1.0 must be used in OPAL 1.6, so that we have the same toolchain for OPAL 1.6 and master.

Issue 94: Error detected by function "FileStream::fillLine()"
https://gitlab.psi.ch/OPAL/src/-/issues/94 (ganz_p, 2017-06-17)

I ran some simulations, and at a certain point all simulations gave me the following error:
[Terminal.out](/uploads/8d537807dbf8586b2ec6f08e87a708ae/Terminal.out)
I've tried varying the opal command (with and without `mpirun`, or with `--use-dks`), but all files, even files that already ran well, gave me that error.
The OPAL version I use is `OPAL/1.5.1-20170217`.
Example .in file:
[100MeV_InvQuad_1_NoColl.in](/uploads/44d81f1f63a2ffffc828556e7944cfdb/100MeV_InvQuad_1_NoColl.in)
Milestone: OPAL 1.6.0 (adelmann)

Issue 93: SAAMG-Test-1.in PARALLEL
https://gitlab.psi.ch/OPAL/src/-/issues/93 (adelmann, 2017-08-09)
The test is from git@gitlab.psi.ch:OPAL/regression-tests.git,
branch OPAL-1.6 (`git checkout OPAL-1.6`).
The parallel run fails; the serial run is OK.
```
mpirun -np 4 opal SAAMG-Test-1.in
* Node:0, Filling RHS...
* Node:1, Filling RHS...
* Node:1, Rho for final element: 0.0000000000000000e+00
* Node:2, Filling RHS...
* Node:2, Rho for final element: 0.0000000000000000e+00
* Node:2, Local nx*ny*nz = 1575
* Node:2, Number of reserved local elements in RHS: 832
* Node:2, Number of reserved global elements in RHS: 3328
* Node:3, Filling RHS...
* Node:3, Rho for final element: 0.0000000000000000e+00
* Node:3, Local nx*ny*nz = 3375
* Node:3, Number of reserved local elements in RHS: 832
* Node:3, Number of reserved global elements in RHS: 3328
* Node:0, Rho for final element: 0.0000000000000000e+00
* Node:0, Local nx*ny*nz = 735
* Node:0, Number of reserved local elements in RHS: 832
* Node:0, Number of reserved global elements in RHS: 3328
* Node:1, Local nx*ny*nz = 1575
* Node:1, Number of reserved local elements in RHS: 832
* Node:1, Number of reserved global elements in RHS: 3328
* Node:2, Number of Local Inside Points 832
* Node:0, Number of Local Inside Points 832
* Node:3, Number of Local Inside Points 832
* Node:3, Done.
* Node:0, Done.
* Node:1, Number of Local Inside Points 832
* Node:1, Done.
* Node:2, Done.
[fast-dude:02195] *** Process received signal ***
[fast-dude:02195] Signal: Segmentation fault: 11 (11)
[fast-dude:02195] Signal code: Address not mapped (1)
[fast-dude:02195] Failing at address: 0x7fe2336ae600
[fast-dude:02195] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 3 with PID 2195 on node fast-dude exited on signal 11 (Segmentation fault: 11).
--------------------------------------------------------------------------
```
Milestone: OPAL 2.0.0 (Yves Ineichen)

Issue 92: ENABLERUTHERFORD and DKS
https://gitlab.psi.ch/OPAL/src/-/issues/92 (Valeria Rizzoglio, 2019-03-15)

I am testing the attribute **ENABLERUTHERFORD=FALSE** using the new OPAL module OPAL/1.5.2.
Analysing the particle distribution, I have noticed that the phase space is different with and without DKS.
* **Run without DKS:** ` mpirun -np 8 opal Degrader_1Slab_230.in`
![OPAL_1.5.2_nodks](/uploads/135423a9df3842bc730cd54969389a75/OPAL_1.5.2_nodks.png)
* **Run with DKS:** ` mpirun -np 8 opal --use-dks Degrader_1Slab_230.in`
![OPAL_1.5.2_dks](/uploads/13a3731c8b3871d1e88630ab08d851cf/OPAL_1.5.2_dks.png)
It seems that, when running with DKS, the attribute **ENABLERUTHERFORD** has not been implemented.
Here is the input file: [Degrader_1Slab_230.in](/uploads/de37f170435fcdda5e621019974dda1e/Degrader_1Slab_230.in)
Milestone: OPAL 2.0.0 (baumgarten, christian.baumgarten@psi.ch)

Issue 91: Documentation for attribute DESIGNENERGY of kickers missing
https://gitlab.psi.ch/OPAL/src/-/issues/91 (kraus, 2017-06-17, milestone OPAL 2.0.0)

Issue 90: OPAL-Cycl - COMET
https://gitlab.psi.ch/OPAL/src/-/issues/90 (adelmann, 2017-06-17)

I have been using a locally compiled code with version number 1.2.1 (SVN). I have also run the program through module load with version number 1.4.3. The loss files are basically the same.
Attached is the input file vc.in. Two phase slits CMA1 and CMA2 work quite well. However, the loss data from the vertical collimators, for example, from the pair VC7 and VC8, often register the same particles.
[vc.in](/uploads/8630def3fe171c14cc64887dc9991232/vc.in)
Milestone: OPAL 1.6.0 (adelmann)

Issue 86: OPAL-1.6 check DKS version used to compile
https://gitlab.psi.ch/OPAL/src/-/issues/86 (Uldis Locans, 2017-06-17, milestone OPAL 1.6.0)

OPAL-1.6 does not check which DKS version is used, so compilation errors are possible due to a wrong version.

Issue 85: Error in compiling OPAL-1.6 with -DENABLE_DKS=1
https://gitlab.psi.ch/OPAL/src/-/issues/85 (Valeria Rizzoglio, 2017-06-17)

I have the following modules loaded:
```
Currently Loaded Modulefiles:
1) gcc/5.4.0 4) hdf5/1.8.18 7) trilinos/12.10.1 10) OpenBLAS/0.2.19 13) opal-toolschain/1.6
2) openmpi/1.10.4 5) H5hut/2.0.0rc3 8) root/6.08.02 11) cuda/8.0.44
3) boost/1.62.0 6) gsl/2.2.1 9) cmake/3.6.3 12) dks/1.0.1
```
and I got the following error message:
```
/home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp: In member function ‘void CollimatorPhysics::setupCollimatorDKS(PartBunch&, Degrader*, size_t)’:
/home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp:1094:52: error: no matching function for call to ‘DKSBase::callInitRandoms(int&, int&)’
dksbase.callInitRandoms(size, Options::seed);
^
In file included from /home/scratch/opal/src/ippl/src/Utility/IpplInfo.h:59:0,
from /home/scratch/opal/src/ippl/src/Message/Message.hpp:29,
from /home/scratch/opal/src/ippl/src/Message/Message.h:618,
from /home/scratch/opal/src/ippl/src/AppTypes/Vektor.h:16,
from /home/scratch/opal/src/src/Classic/Algorithms/Vektor.h:6,
from /home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.hh:13,
from /home/scratch/opal/src/src/Classic/Solvers/CollimatorPhysics.cpp:9:
/opt/psi/MPI/dks/1.0.1/openmpi/1.10.4/gcc/5.4.0/include/DKSBase.h:1077:7: note: candidate: int DKSBase::callInitRandoms(int)
int callInitRandoms(int size);
^
/opt/psi/MPI/dks/1.0.1/openmpi/1.10.4/gcc/5.4.0/include/DKSBase.h:1077:7: note: candidate expects 1 argument, 2 provided
[ 60%] Building CXX object src/CMakeFiles/OPALib.dir/Classic/Utilities/DivideError.cpp.o
```
Milestone: OPAL 1.6.0

Issue 82: IPPL extra message error
https://gitlab.psi.ch/OPAL/src/-/issues/82 (frey_m, 2017-12-21)

OPAL crashes for > 16 cores (but works with 4 cores) with the error message
```
Error{0}> get_iter(): no more items in Message
Error{0}> reduce: mismatched element count in vector reduction.
Warning{0}> CommMPI: Found extra message from node 11, tag 10218: msg = Message contains 2 items (0 removed). Contents:
Warning{0}>   Item 0: 1 elements, 1 bytes total, needDelete = 0
Warning{0}>   Item 1: 3 elements, 24 bytes total, needDelete = 0
```
in the case of serial x and y directions (i.e. PARFFTX=false, PARFFTY=false) and a parallel z direction (i.e. PARFFTT=true). The simulation that was run is [psiring.in](/uploads/06e3f41f765be149e96b56bd6b277485/psiring.in). The fieldmaps can be found in the repository [AMAS-BDModels / PSI-Ring](https://gitlab.psi.ch/AMAS-BDModels/PSI-Ring/tree/master/Fieldmaps). The following modules were used for running on Merlin:
```
module use unstable
module add gcc/5.4.0
module add openmpi/1.10.4
module add hdf5/1.8.18
module add H5hut/2.0.0rc3
module add trilinos/12.10.1
module add gsl/2.2.1
module add boost/1.62.0
```
When changing to parallel x and y and serial z (i.e. PARFFTX=true, PARFFTY=true, PARFFTT=false), no error occurs.
Milestone: OPAL 1.9.x (frey_m)

Issue 81: Segfault within Surfacephysics
https://gitlab.psi.ch/OPAL/src/-/issues/81 (kraus, 2017-06-17)

With the input file [Degrader_70.in](/uploads/4971dc04fcdf6cbee66b92aea9f83832/Degrader_70.in) I got a segmentation fault. Suddenly an incredibly large number of additional particles was generated, then OPAL crashed. I couldn't reproduce it anymore, but something isn't correct.
Milestone: OPAL 2.0.0 (kraus)

Issue 78: Particle Matter interaction and Large Angle scattering
https://gitlab.psi.ch/OPAL/src/-/issues/78 (adelmann, 2019-05-16)

A 249 MeV proton beam is hitting a degrader:
```
REAL WEDGE_HLEN=0.0197293;
REAL START = 0.02;
DEGPHYS_Wedge : SURFACEPHYSICS, TYPE="DEGRADER", MATERIAL="GraphiteR6710";
Wedge1: DEGRADER, L=WEDGE_HLEN, OUTFN="sWedge1.h5", SURFACEPHYSICS=DEGPHYS_Wedge, ELEMEDGE=START;
```
The claim is that the following transverse real space
![image](/uploads/96f74bd4cd02104fb0f45ba275702de5/image.png)
and transverse momenta space
![image](/uploads/4a30f2ebddb24ba7bc1e7da81e087bb9/image.png)
is **not** correct.
Switching off the large-angle scattering (http://amas.web.psi.ch/docs/opal/opal_user_guide.pdf, section 18.2.2) makes the "halo" disappear, as shown
by the red dots in the following picture:
![image](/uploads/ea17023a70f261b39db30854795d1485/image.png)
Switching off means commenting out https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Solvers/CollimatorPhysics.cpp#L777 and
https://gitlab.psi.ch/OPAL/src/blob/master/src/Classic/Solvers/CollimatorPhysics.cpp#L746
Now we can enable/disable Rutherford scattering:
```
DEGPHYS_Wedge : SURFACEPHYSICS, TYPE="DEGRADER", MATERIAL="GraphiteR6710", ENABLERUTHERFORD=TRUE;
```
The default is **ENABLED**.
Be aware that this input file runs only with OPAL-1.6 (`git checkout OPAL-1.6`):
[sDegrader_70.in](/uploads/8ef0732890ee80d73567650e8e4f810a/sDegrader_70.in)
Milestone: OPAL 1.9.x (baumgarten, christian.baumgarten@psi.ch)