%!TEX root=thesis
\begin{lstlisting}[language=bash]
$ ./mcmule
1000
10
10000
50
70998
1.0
1.0
m2enng0
tau-e
$ ./mcmule < r1.in
$ echo -e "1000\n10\n10000\n50\n70998\n1.0\n1.0\nm2enng0\ntau-e" \
| ./mcmule
\end{lstlisting}
\vspace{-0.4cm}
%!TEX root=thesis
\subfloat[Menu file {\tt menu-m2enn-tau-e.menu}]{
\lstinputlisting[language=bash]{figures/lst/lst:menuA.sh}
}
\subfloat[Configuration file {\tt m2enng-tau-e.conf}]{
\lstinputlisting[language=bash]{figures/lst/lst:menuB.sh}
}
## Generated at 16:00 on February 28 2020 by yannickulrich
# git version: redesign (b558978)
conf babar-tau-e/m2enng-tau-e.conf
run 70998 0.500000 m2enngR tau-e 0
run 66707 0.500000 m2enngR tau-e 0
...
run 70998 0.250000 m2enngR tau-e 0
...
run 70998 1.000000 m2enng0 tau-e 0
...
run 70998 1.000000 m2enngV tau-e 0
...
run 70998 0.500000 m2enngC tau-e 0
...
run 70998 0.250000 m2enngC tau-e 0
...
## Generated at 16:00 on February 28 2020 by yannickulrich
# git version: redesign (b558978)
# specify the program to run relative to `pwd`
binary=mcmule
# specify the output folder
folder=babar-tau-e/
# Specify the variables nenter_ad, itmx_ad, nenter and itmx
# for each piece you want to run.
declare -A STAT=(
["m2enngR"]="5000\n50\n10000\n100"
["m2enng0"]="1000\n10\n1000\n50"
["m2enngV"]="1000\n10\n1000\n50"
["m2enngC"]="1000\n10\n1000\n50"
)
# run the following to generate
python tools/create-menu.py \
-ns 3 -xi 0.1 0.3 \
--flavour mu-e \
--piece m2enn \
--output-dir mu-e-nlo \
--prog xs_main \
--stat 0,1000,10,3000,30 \
--stat R,10000,20,10000,40 \
--stat V,2000,10,5000,30
%!TEX root=thesis
\lstdefinelanguage{none}{
identifierstyle=
}
\renewcommand{\ttdefault}{pcr}
\begin{lstlisting}[language=none]
$ python pymule create -i
What generic process? [m2enn] (*@\bfseries m2enng@*)
Which flavour combination? [mu-e] (*@\bfseries tau-e@*)
How many / which seeds? [5]
Which xi cuts? [[0.5, 0.25, 0.125]]
Where to store data? [m2enngtau-e] (*@\bfseries babar-tau-e@*)
Which pieces? [['0', 'V', 'R']] (*@\bfseries0, V, C, R@*)
How much statistics for 0 (pc, pi, c, i)? [(10000, 20, 100000, 100)] (*@\bfseries1000,10,1000,50@*)
How much statistics for V (pc, pi, c, i)? [(10000, 20, 100000, 100)] (*@\bfseries1000,10,1000,50@*)
How much statistics for C (pc, pi, c, i)? [(10000, 20, 100000, 100)] (*@\bfseries1000,10,1000,50@*)
How much statistics for R (pc, pi, c, i)? [(10000, 20, 100000, 100)] (*@\bfseries5000,50,10000,100@*)
Building files. To rerun this, execute
python pymule create\
--seeds 70998 66707 69184 75845 63937 \
-xi 0.5 0.25 0.125 \
--flavour tau-e \
--genprocess m2enng \
--output-dir babar-tau-e \
--prog mcmule \
--stat R,5000,50,10000,100 \
--stat 0,1000,10,1000,50 \
--stat V,1000,10,1000,50 \
--stat C,1000,10,1000,50
Expect 3750 iterations, 20.250000G calls
Created menu, config and submit script in babar-tau-e
Please change the ntasks and time options accordingly
\end{lstlisting}
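The totals quoted by {\tt pymule} can be checked by hand: each run performs {\tt itmx\_ad}${}+{}${\tt itmx} iterations, the $\xc$-dependent pieces ({\tt R}, {\tt C}) are run once per $\xc$ value, and the call counts are in units of $10^3$. A short sketch of this bookkeeping (illustrative only, not part of {\tt pymule}):

```python
# Reproduce pymule's iteration/call estimate for the run above.
# Assumes nenter values are in units of 10^3 calls (cf. the main text)
# and that the xi-dependent pieces (R, C) are run for each of the 3 xi cuts.
seeds = 5
xi_cuts = 3
# (piece, nenter_ad, itmx_ad, nenter, itmx, xi-dependent?)
stat = [
    ("0", 1000, 10, 1000, 50, False),
    ("V", 1000, 10, 1000, 50, False),
    ("C", 1000, 10, 1000, 50, True),
    ("R", 5000, 50, 10000, 100, True),
]

iterations = calls = 0
for piece, nad, iad, nen, itm, xidep in stat:
    runs = seeds * (xi_cuts if xidep else 1)
    iterations += runs * (iad + itm)
    calls += runs * (nad * iad + nen * itm)

print(iterations)          # 3750 iterations
print(calls * 1e3 / 1e9)   # 20.25 G calls
```

This matches the "3750 iterations, 20.250000G calls" reported in the listing.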
%!TEX root=thesis
\begin{lstlisting}
FUNCTION QUANT(Q1,Q2,Q3,Q4,Q5,Q6,Q7)
...
pass_cut = .true.
if(q5(4) < 10._prec) pass_cut = .false.
pol1 = (/ 0._prec, 0._prec, 0._prec, 0._prec /)
names(1) = 'minv'
quant(1) = sqrt(sq(q2+q5))
names(2) = 'Ee'
quant(2) = q2(4)
END FUNCTION QUANT
\end{lstlisting}
\vspace{-0.4cm}
%!TEX root=thesis
\begin{lstlisting}
FUNCTION QUANT(Q1,Q2,Q3,Q4,Q5,Q6,Q7)
...
pass_cut = .true.
qq5 = 0._prec
qq6 = 0._prec
if(present(q5)) qq5=q5
if(present(q6)) qq6=q6
if (qq5(4) > qq6(4)) then
gah = qq5 ; gas = qq6
else
gah = qq6 ; gas = qq5
endif
if (gah(4) < 10._prec) pass_cut = .false.
if (gas(4) > 10._prec) pass_cut = .false.
names(1) = 'minv'
quant(1) = sqrt(sq(q2+gah))
names(2) = 'Ee'
quant(2) = q2(4)
END FUNCTION QUANT
\end{lstlisting}
%!TEX root=thesis
\begin{tabular}{l|l|l}
\bf Variable name& \bf Data type & \bf Comment
\\\hline
\tt nenter\_ad & \tt integer & calls / iteration during pre-conditioning \\
\tt itmx\_ad & \tt integer & iterations during pre-conditioning \\
\tt nenter & \tt integer & calls / iteration during main run \\
\tt itmx & \tt integer & iterations during main run \\
\tt ran\_seed & \tt integer & random seed $z_1$ \\
\tt xinormcut & \tt real(prec) & the $0<\xc\le1$ parameter \\
\tt delcut & \tt real(prec) & the $\delta_{\text{cut}}$ parameter
(or at NNLO the second $\xc$) \\
\tt which\_piece & \tt char(10) & the part of the calculation to perform \\
\tt flavour & \tt char(8) & the particles involved \\
(opt) & unknown & the user can request further input during
{\tt userinit}
\end{tabular}
\input{introduction}
\input{structure}
\input{run}
\bibliographystyle{JHEP}
%!TEX root=manual
\section{Running \mcmule{}: an example}
\label{sec:example}
In order to provide a simple example with concrete instructions on how
to run the code and to illustrate how it works, we consider the
radiative decay of the tau, $\tau\to[\nu\bar\nu] e\gamma$. Since the
neutrinos are not detected, we average over them, indicated by the
brackets. Hence, we have to be fully inclusive w.r.t. the neutrinos.
Still, the code allows arbitrary cuts on the other final-state
particles. As we will see, the \ac{BR} for this process, as measured
by {\sc BaBar}~\cite{Lees:2015gea, Oberhof:2015snl} has a
discrepancy of more than $3\,\sigma$ from the \ac{SM} value. This will
illustrate the importance of fully differential \ac{NLO} corrections
in \ac{QED}.
\input{run/setup}
\input{run/walkthrough}
\input{run/nlo}
\input{run/exec}
\input{run/analysis}
\section{Analysis}
\label{sec:analyse}
After running the code, we need to combine the various {\tt
which\_pieces} into physical results that we will want to use to
create plots. For this purpose, we provide the python tool {\tt
pymule}, though of course other tools can be used as well. Here, we
will only cover the aspects of {\tt pymule} required for the present
analysis as shown in Listing~\ref{lst:pymule}; full documentation
can be found in the docstrings of {\tt pymule} as well as
Appendix~\ref{sec:pymule}. First, we import {\tt pymule}. Next, we
need to point {\tt pymule} to the output directory of {\tt mcmule}
with the {\tt setup} command. In our example this is {\tt
babar-tau-e/out}.
\begin{figure}
\centering
\lstinputlisting[language=python,firstline=10,lastline=51]{figures/mule/babar.py}
\renewcommand{\figurename}{Listing}
\caption{An example code to analyse the results for $\tau\to\nu\bar\nu
e\gamma$ in {\tt pymule}. Note that, in the Fortran code
$G_F=\alpha=1$. In {\tt pymule} they are at their physical
values~\cite{PDG}.}
\label{lst:pymule}
\end{figure}
As a next step, we import the \ac{LO} and \ac{NLO} {\tt
which\_pieces} and combine them using two central {\tt pymule}
commands: {\tt sigma} and {\tt mergefks}. {\tt sigma} takes the {\tt
which\_piece} as an argument and imports matching results, already
merging different random seeds. {\tt mergefks} takes the results of
(multiple) {\tt sigma} invocations, adds results with matching $\xc$
values and combines the result. In the present case, $\sigma_n^{(1)}$
is split into multiple contributions, namely {\tt m2enngV} and {\tt
m2enngC}. This is indicated by the {\tt anyxi} argument.
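Merging different random seeds amounts to combining statistically independent Monte Carlo estimates. A minimal sketch of such a combination using inverse-variance weights, the standard prescription ({\tt sigma} and {\tt mergefks} may differ in detail):

```python
# Combine statistically independent runs (e.g. different random seeds)
# by inverse-variance weighting. This is the textbook Monte Carlo
# combination; pymule's actual merging may differ in detail.
def merge(results):
    """results: list of (value, error) tuples from independent runs."""
    weights = [1.0 / err**2 for _, err in results]
    wsum = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, results)) / wsum
    return value, wsum ** -0.5

# two equally precise runs average with the error reduced by sqrt(2)
val, err = merge([(1.8340e-2, 2e-6), (1.8338e-2, 2e-6)])
```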
Users should keep in mind that \mcmule{} ships with a version of {\tt
global\_def} where the couplings $G_F={\tt GF}$ and $\alpha={\tt
alpha}$ are set to $G_F=\alpha=1$. Hence, we use {\tt pymule}'s
function {\tt scaleset} to multiply the result with the correct values
of $G_F$ (in ${\rm MeV}^{-1}$) and $\alpha$ (in the \ac{OS} scheme).
Next, we can use some of {\tt pymule}'s tools (cf.
Listing~\ref{lst:pymule}) to calculate the full \ac{NLO} \ac{BR}s from
the corrections and the \ac{LO} results
\begin{align}
\mathcal{B}|_\text{LO} &= 1.8339(1) \times 10^{-2}\,, &
\mathcal{B}|_\text{NLO} &= 1.6451(1) \times 10^{-2}\,,
\end{align}
which agree with~\cite{Fael:2015gua,Pruna:2017upz}, but
$\mathcal{B}|_\text{NLO}$ is in tension with the value
$\mathcal{B}|_\text{exp} = 1.847(54)\times 10^{-2}$ reported by
{\sc BaBar}~\cite{Lees:2015gea, Oberhof:2015snl}. As discussed
in~\cite{Pruna:2017upz, Ulrich:2017adq} it is very likely that this
tension would be removed if a full \ac{NLO} result were used to take
into account the effects of the stringent experimental cuts to extract
the signal. This issue has been explained in detail
in~\cite{Pruna:2017upz, Ulrich:2017adq, Ulrich:2020phd}.
As a last step, we can use the {\tt matplotlib}-backed {\tt kplot}
command to present the results for the distributions (logarithmic for
$m_{e\gamma}$ and linear for $E_e$). The results are shown in
Figure~\ref{fig:babares}. The upper panel of
Figure~\ref{fig:babares:minv} shows the results for the invariant mass
$m_{e\gamma}$ at \ac{LO} (green) and \ac{NLO} (blue) in the range
$0\le m_{e\gamma} \le 1\,\gev$. Note that this, for the purposes of
the demonstration, does not correspond to the boundaries given in the
run.
The distribution falls sharply for large $m_{e\gamma}$. Consequently,
only a few events are generated in the tail and the statistical
error becomes large. This can be seen clearly in the lower panel,
where the \ac{NLO} $K$ factor is shown. It is defined as
\begin{align}
K^{(1)}
=1+\frac{\D\sigma^{(1)}}{\D\sigma^{(0)}}\,,
\end{align}
and the band represents the statistical error of the Monte Carlo
integration. To obtain a reliable prediction for larger values of
$m_{e\gamma}$, i.e. the tail of the distribution, we would have to
perform tailored runs. To this end, we should introduce a cut
$m_\text{cut}\ll m_\tau$ on $m_{e\gamma}$ to eliminate events with
larger invariant mass. Due to the adaptation in the numerical
integration, we then obtain reliable and precise results for values of
$m_{e\gamma} \lesssim m_\text{cut}$.
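The error band of the $K$ factor follows from straightforward Gaussian error propagation on the binned results. A sketch with a hypothetical helper, assuming uncorrelated \ac{LO} and correction errors ({\tt pymule}'s {\tt kplot} handles this internally):

```python
# Bin-wise NLO K factor, K = 1 + ds1/ds0, with a naive error band from
# propagating the LO error e0 and the correction error e1 in quadrature.
# Assumes the two contributions are uncorrelated (a simplification).
def kfactor(lo, corr):
    """lo, corr: lists of (value, error) pairs, one per bin."""
    out = []
    for (s0, e0), (s1, e1) in zip(lo, corr):
        k = 1.0 + s1 / s0
        err = ((e1 / s0) ** 2 + (s1 * e0 / s0**2) ** 2) ** 0.5
        out.append((k, err))
    return out

# e.g. a 10% negative correction on a single bin
band = kfactor([(2.0, 0.1)], [(-0.2, 0.02)])
```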
Figure~\ref{fig:babares:el} shows the electron energy distribution,
again at \ac{LO} (green) and \ac{NLO} (blue). As for $m_{e\gamma}$
the corrections are negative and amount to roughly $10\%$. Since this
plot is linear, they can be clearly seen by comparing \ac{LO} and
\ac{NLO}. In the lower panel once more the $K$ factor is depicted.
Unsurprisingly, at the very end of the distribution, $E_e\sim
900\,{\rm MeV}$, the statistical error grows out of control.
\begin{figure}
\centering
\subfloat[The invariant mass distribution $m_{e\gamma}$.\label{fig:babares:minv}
]{
\scalebox{0.45}{\input{figures/mule/tau:minv.pgf}}
}
\subfloat[The electron energy distribution
\label{fig:babares:el}
]{
\scalebox{0.45}{\input{figures/mule/tau:energy.pgf}}
}
\caption{Results of the toy run to compute $m_{e\gamma}$ (left) and
$E_e$ (right) for $\tau\to\nu\bar\nu e\gamma$. Upper panels show the
\ac{LO} (green) and \ac{NLO} (blue) results, the lower panels show
the \ac{NLO} $K$ factor.}
\label{fig:babares}
\end{figure}
%!TEX root=manual
\subsection{Running \mcmule{} natively}
When we run \mcmule{}, we will want to choose various random seeds and
different values for the unphysical parameter $\xc$. Checking the
independence of physical results on the latter serves as a consistency
check. To do this, it helps to disentangle {\tt{m2enngF}} into
{\tt{m2enngV}} and {\tt{m2enngC}}. Only the latter depends on $\xc$
and this part is typically much faster in the numerical evaluation.
However, this can quickly lead to a rather large number of runs that
need to be taken care of.
A particularly convenient way to run \mcmule{} is using \term{menu
files}\footnote{The name menu was originally used by the cryptanalysts
at Bletchley Park to describe a particular set of configurations for
the `computer' to try.}. A menu file contains a list of jobs to be
computed such that the user only has to vary the random seed and
$\xc$ by hand as the statistical requirements are defined globally in
a \term{config file}. This is completed by a \term{submission script},
usually called {\tt submit.sh}. The submit script is what needs to be
launched; it takes care of starting the different jobs. It
can be run on a normal computer or on a Slurm
cluster~\cite{Yoo:2003slurm}.
To prepare the run in this way we can use {\tt pymule} as shown in
Listing~\ref{lst:menubabar}. When using the tool, we are asked various
questions, most of which have a default answer in square brackets. In
the end {\tt pymule} will create a directory that the user decided to
call {\tt babar-tau-e}, where
all results will be stored. The menu and config files generated by
{\tt pymule} are shown in Listing~\ref{lst:menu}.
\begin{figure}
\centering
\input{figures/lst/lst:menubabar}
\renewcommand{\figurename}{Listing}
\caption{The steps necessary to use {\tt pymule} to prepare running
\mcmule{}. Input by the user is shown in bold. A red arrow indicates a
graphical line wrap. Note that numbers listed as {\tt seeds} are
random and hence not reproducible.}
\label{lst:menubabar}
\end{figure}
\begin{figure}
\centering
\input{figures/lst/lst:menu}
\renewcommand{\figurename}{Listing}
\caption{The files required for the present calculations as generated
in Listing~\ref{lst:menubabar}. The file has been massively shortened
for presentation. The full file is attached to this document.}
\label{lst:menu}
\end{figure}
To start {\tt mcmule}, we now just need to execute the created {\tt
babar-tau-e/submit.sh}. Note that by default this will spawn at most
as many jobs as there are CPU cores on the machine {\tt pymule} was
run on. To use a different number of parallel jobs, change the fifth
line of {\tt babar-tau-e/submit.sh} to
\begin{lstlisting}[language=bash]
#SBATCH --ntasks=<number of cores>
\end{lstlisting}
\subsection{Running \mcmule{} as a container}
\term{Linux containers} are an emerging technology in the software
engineering world. The main idea behind such \term{containerisation}
is to bundle all dependencies with a software when shipping. This
allows the software to be executed regardless of the Linux
distribution running on the host system without having to install any
software beyond the containerising tool. This is possible without any
measurable loss in performance. For these reasons, containerising
\mcmule{} allows the code to be easily deployed on any modern
computer, including systems running macOS or Windows (albeit with a
loss of performance), and all results to be perfectly reproducible.
A popular containerisation tool is Docker~\cite{Merkel:2014}.
Unfortunately, Docker requires some processes to be executed in
privileged mode which is rarely available in the multi-user
environments usually found on computing infrastructures. This led to
the creation of {\sl udocker}~\cite{Gomes:2017hct} which circumvents
these problems. {\sl udocker} can be obtained from\footnote{An
internal version, managed by the \ac{MMCT} and used for the legacy
builds, can be obtained from \url{https://gitlab.psi.ch/mcmule/udocker}}
\begin{lstlisting}[language=bash]
(*@\url{https://github.com/indigo-dc/udocker}@*)
\end{lstlisting}
{\sl udocker} can be installed by calling\footnote{It might be
advisable to first point the variable {\tt UDOCKER\_DIR} to a folder
on a drive without quota, as {\sl udocker} requires sizeable disk
space.}
\begin{lstlisting}[language=bash]
$ curl https://raw.githubusercontent.com/indigo-dc/udocker/master/udocker.py > udocker
$ chmod u+rx ./udocker
$ ./udocker install
\end{lstlisting}%stopzone
Once Docker or {\sl udocker} has been installed, \mcmule{} can be
downloaded by simply calling
\begin{lstlisting}[language=bash]
$ docker pull yulrich/mcmule # requires Docker to be installed
$ udocker pull yulrich/mcmule # requires udocker to be installed
\end{lstlisting}
This automatically fetches the latest public release of \mcmule{}
deemed stable by the \ac{MMCT}. We will discuss some technical details
behind containerisation in Section~\ref{sec:docker}.
\mcmule{} can be run containerised on a specified {\tt user.f95} which
is compiled automatically into {\tt mcmule}. This is possible both
directly or using menu files as discussed above. To run \mcmule{}
directly on a specified {\tt user.f95}, simply call
\begin{lstlisting}[language=bash]
$ ./tools/run-docker.sh -i yulrich/mcmule:latest -u path/to/user.f95 -r
\end{lstlisting}%stopzone
This requests the same input already discussed in
Table~\ref{tab:mcmuleinput} and Listing~\ref{lst:mcmuleinput}. To run
a containerised menu file, add an {\tt image} command before the first
{\tt conf} command in the menu file
\begin{lstlisting}[language=bash]
image yulrich/mcmule:latest path/to/user.f95
conf babar-tau-e/m2enng-tau-e.conf
run 70998 0.500000 m2enngR tau-e 0
...
\end{lstlisting}
Note that only one {\tt image} command per menu file is allowed. After
this, the menu file can be executed normally, though the drive where
Docker or {\sl udocker} is installed needs to be shared between all
nodes working on the job. It is recommended that all legacy results
be produced with {\sl udocker} or Docker.
%!TEX root=manual
For a corresponding computation at \ac{NLO}, a few things need to be
modified. First, the observables have to be specified more carefully.
In particular, we need to decide how we treat the additional photon
due to real radiation. In our example we will consider the exclusive
radiative decay, i.e. we request precisely one photon with energy
$E_\gamma>10\,\mev$. The function {\tt quant} will have to take this
into account with the additional argument {\tt q6}, the momentum of
the second photon.
An example of how this could be done is shown in
Listing~\ref{lst:quantnlo}. Here we have just defined the harder and
softer photon {\tt gah} and {\tt gas}, respectively, and require that
the former (latter) has energy larger (smaller) than $10\,{\rm MeV}$.
This version of {\tt quant} is also suitable for the \ac{LO}
calculation, and to ensure infrared-safety, it is generally advisable
to use a single {\tt quant} function for all parts of a computation.
This is also mandatory if \ac{LO} and \ac{NLO} runs are done in one
go, as discussed below.
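For illustration, the photon-ordering logic of the \ac{NLO} {\tt quant} can be transcribed into Python, with momenta as length-4 tuples (energy in the fourth slot) and a missing photon represented by a zero vector, mimicking the optional arguments:

```python
# Exclusive-photon cut as in the NLO quant function: order the two
# photons by energy and keep the event only if the harder one lies above
# and the softer one below the 10 MeV threshold. Illustrative
# transcription only; the actual implementation is the Fortran listing.
def pass_cut(q5=None, q6=None, ecut=10.0):
    zero = (0.0, 0.0, 0.0, 0.0)
    qq5 = q5 if q5 is not None else zero
    qq6 = q6 if q6 is not None else zero
    gah, gas = (qq5, qq6) if qq5[3] > qq6[3] else (qq6, qq5)
    return gah[3] >= ecut and gas[3] <= ecut

# one hard photon of 50 MeV, no second photon: event passes
pass_cut((0.0, 0.0, 0.0, 50.0))
```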
With this version of {\tt quant} we evaluate the virtual and real
corrections, as well as the infrared counterterm (i.e. the integrated
eikonal times the tree-level matrix element). The latter is often
combined with the virtual corrections. The corresponding {\tt
which\_piece} are {\tt m2enngV}, {\tt m2enngR}, and {\tt m2enngC},
respectively. If the counterterm is combined with the virtual part, we
would use {\tt m2enngF}, which is not implemented.
\begin{figure}
\centering
\input{figures/lst/lst:quantnlo}
\renewcommand{\figurename}{Listing}
\caption{An example for the function {\tt quant} to calculate the
radiative $\tau$ decay at \ac{NLO}. This implementation is \ac{IR}
safe and exclusive w.r.t. extra photons.}
\label{lst:quantnlo}
\end{figure}
%!TEX root=manual
\subsection{Preparations}
To be concrete, let us assume that we want to compute two distributions, the
invariant mass of the $e\gamma$ pair, $m_{\gamma e}\equiv
\sqrt{(p_e+p_\gamma)^2}$, and the energy of the electron, $E_e$, in the
rest frame of the tau. To avoid an \ac{IR} singularity in the \ac{BR},
we have to require a minimum energy of the photon. We choose this to
be $E_\gamma \ge 10\,{\rm MeV}$ as used in~\cite{Lees:2015gea,
Oberhof:2015snl}.
As mentioned in Section~\ref{sec:structure}, the quantities are defined
in the module {\tt user} ({\tt src/user.f95}). At the beginning of the
module we set
\begin{lstlisting}
nr_q = 2
nr_bins = 90
min_val = (/ 0._prec, 0._prec /)
max_val = (/ 1800._prec, 900._prec /)
\end{lstlisting}
where we have decided to have 90 bins for both distributions and {\tt
nr\_q} determines the number of distributions. The boundaries for the
distributions are set as $0 < m_{\gamma e} < 1800\,{\rm MeV}$ and $0
\le E_e \le 900\,{\rm MeV}$.
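The histogramming then corresponds to uniform binning between {\tt min\_val} and {\tt max\_val}. Schematically, in Python (the actual treatment of under- and overflow in \mcmule{} may differ):

```python
# Uniform binning as configured above: nr_bins = 90 bins between
# min_val and max_val; values outside the range are not filled.
def bin_index(x, lo, hi, nbins=90):
    if not (lo <= x < hi):
        return None  # out of range, event not binned
    return int(nbins * (x - lo) / (hi - lo))

bin_index(450.0, 0.0, 900.0)  # bin 45 of the E_e histogram
```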
The quantities themselves are defined in the function {\tt quant} of
the module {\tt user}. This function takes arguments {\tt q1} to {\tt
q7}: the momenta of the particles, stored as arrays of length 4 with
the energy in the fourth entry. Depending on the process, not all
momenta are needed; unused momenta are set to zero.
The \ac{PID}, i.e. which momentum corresponds to which particle,
can be looked up in Appendix~\ref{sec:pid} as well as the file {\tt
mat\_el.f95}. In our case we find
\begin{lstlisting}
use mudec, only: pm2enngav!!(q1,n,q2,q3,q4,q5, linear)
!! mu+(p1) -> e+(p2) y(q5) + ( \nu_e \bar{\nu}_\mu )
!! mu-(p1) -> e-(p2) y(q5) + ( \bar{\nu}_e \nu_\mu )
!! for massive electron
!! average over neutrino tensor taken
\end{lstlisting}
This means that we have to use {\tt q1} for the incoming $\tau$, {\tt
q2} for the outgoing $e$, and {\tt q5} for the outgoing $\gamma$. At
\ac{NLO}, we will also need {\tt q6} for the second $\gamma$.
\begin{lstlisting}
use mudec, only: pm2ennggav!!(p1, n1, p2, p3, p4, p5, p6)
!! mu+(p1) -> e+(p2) \nu_e \bar{\nu}_\mu g(p5) g(p6)
!! mu-(p1) -> e-(p2) \bar{\nu}_e \nu_\mu g(p5) g(p6)
!! for massive (and massless) electron
!! average over neutrino tensor taken
\end{lstlisting}
The momenta of the neutrinos do not enter, as we average over them.
Schematically, the function {\tt quant} is shown in
Listing~\ref{lst:quantlo}. Here we have used {\tt sq} provided by
{\tt functions} to compute the square of a four-vector. We have also
specified the polarisation vector {\tt pol1} such that the initial tau is
considered unpolarised. The variable {\tt pass\_cut} controls the
cuts. Initially it is set to true, to indicate that the event is kept.
Applying a cut amounts to setting {\tt pass\_cut} to false. The version
of {\tt quant} in Listing~\ref{lst:quantlo} will work for a \ac{LO}
calculation, but will need to be adapted for the presence of a second
photon in an \ac{NLO} computation. Being content with \ac{LO} for the
moment, all that remains to be done is to prepare the input read by {\tt
mcmule} from {\tt stdin}, as specified in Table~\ref{tab:mcmuleinput}.
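The helper {\tt sq} squares a four-vector with the mostly-minus metric, so that {\tt sqrt(sq(q2+q5))} is the invariant mass of the pair. In Python this reads (a sketch; energies sit in the fourth slot, as in \mcmule{}):

```python
# Minkowski square sq(p) = E^2 - |p|^2 and the invariant mass of a pair
# of momenta, mirroring the use of sq in the quant function.
def sq(p):
    return p[3] ** 2 - (p[0] ** 2 + p[1] ** 2 + p[2] ** 2)

def minv(p, q):
    s = [a + b for a, b in zip(p, q)]
    return sq(s) ** 0.5

minv((0.0, 0.0, 1.0, 2.0), (0.0, 0.0, -1.0, 2.0))  # back-to-back pair
```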
\begin{figure}
\centering
\input{figures/lst/lst:quantlo}
\renewcommand{\figurename}{Listing}
\caption{An example for the function {\tt quant} to calculate the
radiative $\tau$ decay. Note that this is only valid at \ac{LO} and
should not be used in any actual calculation.}
\label{lst:quantlo}
\end{figure}
\begin{figure}
\vspace{0.5cm}
\input{figures/tab/tab:mcmuleread}
\renewcommand{\figurename}{Table}
\caption{The options read from {\tt stdin} by \mcmule{}}
\label{tab:mcmuleinput}
\end{figure}
\begin{figure}
\input{figures/lst/lst:mcmuleread}
\renewcommand{\figurename}{Listing}
\caption{Methods to enter configuration into \mcmule{}. All three
invocations will result in the same run assuming the text file {\tt
r1.in} contains the correct data.}
\label{lst:mcmuleinput}
\end{figure}
To be concrete, let us assume we want to use 10 iterations with
$1000\times 10^3$ points each for pre-conditioning and 50 iterations
with $10\,000\times 10^3$ points each for the actual numerical evaluation
(cf. Section~\ref{sec:stat} for some heuristics to determine the
statistics needed). We pick a random seed between 0 and $2^{31}-1$
(cf. Section~\ref{sec:rng}), say $70\,998$, and for the input
variable {\tt which\_piece} we enter {\tt m2enng0}. Here {\tt m2enng}
stands for the generic process $\mu\to\nu\bar\nu e\gamma$ and {\tt 0}
for tree level.
The {\tt flavour} variable is now set to {\tt tau-e} to change from
the generic process $\mu\to\nu\bar\nu e\gamma$ to the process we are
actually interested in, $\tau\to\nu\bar\nu e\gamma$. This system is
used for other processes as well. The input variable {\tt
which\_piece} determines the generic process and the part of it that
is to be computed (i.e. tree level, real, double virtual etc.). In a
second step, the input {\tt flavour} associates actual numbers to the
parameters entering the matrix elements and phase-space generation.
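Schematically, this two-step resolution could be pictured as follows. The helper name and the list of parts are purely illustrative; the actual parsing lives inside \mcmule{}:

```python
# Hypothetical illustration of the two-step input resolution:
# which_piece = generic process + part; flavour then fixes the numbers.
# The PARTS tuple below is illustrative, not mcmule's actual list.
PARTS = ("0", "V", "R", "C", "F", "VV")

def split_which_piece(wp):
    # longest matching suffix wins, e.g. "m2enngVV" -> ("m2enng", "VV")
    for part in sorted(PARTS, key=len, reverse=True):
        if wp.endswith(part):
            return wp[: -len(part)], part
    raise ValueError(f"no known part in {wp!r}")

split_which_piece("m2enng0")  # ("m2enng", "0"): generic process, tree level
```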
Obviously, in practice the input will typically not be entered by
hand. In Listing~\ref{lst:mcmuleinput}, we have listed three
equivalent ways to input this data into {\tt mcmule}. The two
variables {\tt xinormcut} and {\tt delcut} have no effect at all
for a tree-level calculation and will be discussed below in the
context of the \ac{NLO} run. We also ignore the optional input for the
moment.
%!TEX root=manual
Now the mule is ready to trot. The first step {\tt mcmule} takes
is to associate the numerical values of the masses, as specified
through {\tt flavour}. In particular, we set the generic masses {\tt
Mm} and {\tt Me} to {\tt Mtau} and {\tt Mel}. This is done in {\tt