FAQ database.
--------------------------------------------------------
UPDATED===26.04.2016.10.42.01
TITLE===FAQ database
TYPE===database
--------------------------------------------------------
the title
::::::::::::
question...
::::::::::::
answer...
::::::::::::
question...
::::::::::::
answer...
...
--------------------------------------------------------
Relations with other particle transport codes
::::::::::::
What relation exists between FLUKA and EGS?
::::::::::::
In 1985, EGS was interfaced to the old FLUKA (the version which became known as FLUKA86), four years before the modern FLUKA was born. For several years, starting in 1989, many changes were made to the treatment of electrons and photons in the new code, eliminating approximations, modifying the existing physical models, adding new effects and introducing new sampling techniques.
The two codes EGS and EMF (ElectroMagnetic Fluka) were already de facto two independent codes in 1992, with only a few commonalities left in parts of the cross section preprocessor: EMF had new, revised physics for many processes.
The cross section preprocessor PEMF, although strongly modified, has kept a memory of the EGS equivalent PEGS preprocessor until 2005. In the meantime, the EGS program itself was modified by its authors in completely different and independent directions. The story of the development of the EMF part of FLUKA is narrated in detail in the Manual (section 16.4.10 of the Yellow Report CERN-2005-10).
::::::::::::
What relation exists between FLUKA and GEANT?
::::::::::::
The hadronic event generator of the FLUKA92 version was interfaced with GEANT3 (version 3.15) around 1993, with the collaboration of K. Lassila. That version of FLUKA, described in A. Fassò, A. Ferrari, J. Ranft and P.R. Sala, Proc. IV Int. Conf. on Calorimetry in High Energy Physics, La Biodola (Italy), 21-26 September 1993, Ed. A. Menzione and A. Scribano, World Scientific, p. 493-502, included the first implementation of the PEANUT preequilibrium model (the so-called linear model). It has never been updated and differs greatly from the present version: it should be considered obsolete and should no longer be used, although, surprisingly, it is still widely popular. To avoid confusion with today's FLUKA, that version must be referred to as G-FLUKA. Some additional information can be found in the ATLAS Internal Note PHYS-NO-086 (28 pages, 1996), which can easily be found on the CERN Web site.
A special package, FLUGG, has been written to allow the use of FLUKA with the GEANT4 geometry. Instructions and detailed examples are available on the FLUKA web page: FLUGG
::::::::::::
What relation exists between FLUKA and LAHET/MCNPX?
::::::::::::
A very old version of the FLUKA hadron generator, the so-called EVENTQ contained in FLUKA87 with some corrections made around 1989, was included in the LAHET program and was later inherited by MCNPX. That version has very little in common with the modern FLUKA.
Some other exchanges took place between FLUKA and LAHET (but not MCNPX). In 1993 the RAL high-energy fission model by Atchison was kindly provided by R.E. Prael as implemented in LAHET. Since then the model has undergone important modifications and improvements and little is now left of the original implementation. On the other hand, the event generator which was in FLUKA till spring 1997 was made available to R. Prael in 1997 but it is not included in MCNPX.
::::::::::::
What relation exists between FLUKA and CORSIKA?
::::::::::::
The hadron event generator of FLUKA was interfaced to CORSIKA in 2003. Only the part between 50 MeV and 80 GeV has been included. The version in CORSIKA is kept constantly up to date with the current FLUKA version.
--------------------------------------------------------
Material definitions
::::::::::::
When and why do I need to input a MAT-PROP card?
::::::::::::
The density provided by the user via WHAT(3) of a MATERIAL card is used for transport, i.e. it is used to convert cross sections to mean free paths. Therefore, it must be the actual density in the case of a gas at non-atmospheric pressure, or the average density in the case of a non-uniform material. However, it is not sufficient because:
- if the material is a gas, the Sternheimer-Peierls formula must be evaluated first at NTP and then corrected for the actual temperature and pressure. The density at NTP used in the evaluation of the formula is derived as MATERIAL-WHAT(3)/MAT-PROP-WHAT(1)
- the material density can be non-uniform (for instance a material with voids, like a sponge or a powder). Transport (mean free path) is evaluated based on the average density MATERIAL-WHAT(3), while dE/dx must be evaluated based on the local density MATERIAL-WHAT(3)/MAT-PROP-WHAT(2).
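As a numerical illustration of the two relations above (the input values below are hypothetical, chosen only to show the arithmetic; this is not FLUKA code):

```python
# Density relations quoted above:
#   rho_NTP   = MATERIAL-WHAT(3) / MAT-PROP-WHAT(1)   (gas case)
#   rho_local = MATERIAL-WHAT(3) / MAT-PROP-WHAT(2)   (porous material case)

rho_avg = 1.0e-3   # MATERIAL WHAT(3): density used for transport, g/cm3 (hypothetical)
what1   = 0.5      # MAT-PROP WHAT(1) (hypothetical value)
what2   = 0.2      # MAT-PROP WHAT(2) (hypothetical value)

# Gas: density at NTP, used to evaluate the Sternheimer-Peierls formula
rho_ntp = rho_avg / what1

# Non-uniform material: local density, used for the dE/dx evaluation
rho_local = rho_avg / what2

print(rho_ntp, rho_local)
```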
::::::::::::
When using names for regions, how are ranges of regions defined with ASSIGNMA?
::::::::::::
The ranges refer to the order of appearance in the region definition
section of your geometry and are not defined alphabetically.
--------------------------------------------------------
Geometry
::::::::::::
What is the maximum number of regions allowed in FLUKA?
::::::::::::
The maximum number of regions is 1000, but can be increased to 20000 by means of option GLOBAL (WHAT(1)). Using the LATTICE option in the geometry, however, it is possible to define regions as copies obtained by geometric transformations (rotations and translations). The number of such copies is unlimited, within the available computer memory.
::::::::::::
Is there a maximum number of characters in each line of a geometry input?
::::::::::::
Yes, in a geometry input each line must not exceed a certain number of characters, as shown below. Any further characters are skipped when FLUKA reads the input.
Old, number-based input:
Body reading:
"normal" format : 70 characters (2X,A3,I5,6D10.3)
"extended precision" format: 76 characters (2X,A3,I5,3D22.15)
Region reading:
"normal" format : 73 characters ( 2X, A3, I5, 9(A2,I5) )
"extended reg. num." format: 73 characters ( 2X, A3, I5, 7(A2,I7) )
New, name based input: max 132 characters
::::::::::::
Is there a function in FLUKA that returns for a certain point in space (x,y,z) the region number to which the point belongs as well as the lattice number, if it is a lattice replica?
::::::::::::
In principle one can call the following subroutine:
CALL GEOREG ( X, Y, Z, NREG, IDISC )
where X, Y, Z are the coordinates of the point where you want to know the region (returned in NREG) and the lattice (returned in MLATTC in common LTCLCM). IDISC is a flag which, if different from zero, signals that the point is already outside, or on its way out of, the geometry.
However, the situation is slightly more complex.
--------------------------------------------------------
Low-energy neutrons
::::::::::::
How can I calculate the capture cross-section of neutrons in a given target material?
::::::::::::
In principle, you can get all the information you need on the FLUKA neutron cross sections by setting a printing flag in option LOW-NEUT. From the manual:
WHAT(4) = printing flag: from 0.0 to 3.0 increases the amount of
output about cross-sections, kerma factors, etc.
Default: 0.0 (minimum output)
However, the information is given in a form that is difficult to understand unless one is familiar with multigroup neutron transport codes. In any case, one will get a table of which a few lines are reproduced here:
1 CROSS SECTIONS FOR MEDIA 1
(RESIDUAL NUCLEI INFORMATIONS AVAILABLE)
GROUP SIGT SIGST PNUP PNABS GAMGEN NU*FIS EDEP
DOWNSCATTER MATRIX
barn barn (PNEL PXN PFISS PNGAM) GeV/col
1 5.826E+00 9.287E+00 .0000 1.5939 1.2904 .3886 1.536E-02 .3228
.0079 .0021 .0023 .0020 .0013 .0011 .0035
.......................................................................
26 6.861E+00 6.697E+00 .0000 .9761 1.0609 .0181 1.458E-03
.......................................................................
Explanation of the relevant quantities:
Group 1 (the highest): Total cross section (SIGT) = 5.826E+00 barn
"Scattering" cross. s. (SIGST) = 9.287E+00 barn
Probability of Non Absorption (PNABS) = 1.5939
Group 26 : Total cross section (SIGT) = 6.861E+00 barn
"Scattering" cross. s. (SIGST) = 6.697E+00 barn
Probability of Non Absorption (PNABS) = .9761
The data for the first groups will probably look strange (scattering cross section larger than total cross section): the reason is that there is neutron production through (n,xn) reactions (fission is accounted for separately), and here "scattering" means "number of outgoing neutrons times cross section", where a neutron changing energy group also counts as outgoing. Since, on average, more than one neutron exits a collision, the probability of non-absorption is larger than 1.
But looking at the lower groups (group 26 has been copied here as an example) one will see that the data make more sense. The absorption cross section (all absorption channels included) is = total - scattering = 6.861 - 6.697 = 0.164 barn
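As a quick check of this arithmetic (numbers taken from the group-26 line of the table above; note also that the quoted PNABS is consistent with the ratio SIGST/SIGT):

```python
# Group-26 values quoted from the table above:
sigt  = 6.861    # SIGT,  total cross section (barn)
sigst = 6.697    # SIGST, "scattering" cross section (barn)

sig_abs = sigt - sigst    # absorption (all channels) = total - scattering
pnabs   = sigst / sigt    # consistent with the tabulated non-absorption probability

print(round(sig_abs, 3), round(pnabs, 4))
```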
::::::::::::
Suppose that A=(number of neutrons created)+(number of neutrons absorbed) and B=(number of neutrons created)-(number of neutrons absorbed). Is it true that neutron balance (particle number 222) neut-bala is related to A and neutron (particle number 8) is related to B?
::::::::::::
The neutron balance according to your definition is related to B. It is the term that enters in the diffusion equation for a solution which is steady in time. The diffusion equation reads:
d(Flux(r))/dt = D*nabla^2(Flux(r)) + S(r) - Sigma_abs*Flux(r)
where S(r) is the term describing the sources and Sigma_abs is the macroscopic absorption cross section.
With words the diffusion equation is (per unit volume):
dFlux(r)/dt = -[outflow rate] + [production rate] - [absorption rate]
and for a steady state solution
[outflow rate] = [production rate] - [absorption rate] = neutron balance
Particle 8 (neutron) scores the neutron fluence, the quantity most frequently used for describing neutron fields. Fluence is defined as the number of particles that penetrate a sphere with a cross section of pi*r^2 = 1 cm2, per unit of time and/or energy. Equivalently, it is the number of particles crossing a surface of 1 cm2 that is always kept perpendicular to the direction of each particle.
::::::::::::
How do I define transport thresholds for low-energy neutrons?
::::::::::::
In order to kill all low-energy neutrons below the group transport boundary one should use LOW-BIAS with WHAT(1) set to 1, i.e., selecting the highest energy group as cut-off boundary (inclusive). If you want to select an even higher neutron cut-off, PART-THR has to be used, which will then also stop the low-energy neutron group transport.
Example:
no neutron transport below 19.6 MeV
LOW-BIAS 1.0 0.0 Reg1 Reg2
no neutron transport below 500.0 MeV
PART-THR -0.5 NEUTRON
::::::::::::
Are cross section data for low-energy neutron transport available in FLUKA which take into account molecular bindings?
::::::::::::
Yes, there are low-energy neutron cross section data sets available in FLUKA which take into account molecular bindings. The cross section data set has to be associated with the material by the card LOW-MAT. See Chapter 10 of the manual for a list of available data sets.
--------------------------------------------------------
Particle sources
::::::::::::
How do you sample a Synchrotron Radiation spectrum?
::::::::::::
There are two ways:
- Sample uniformly an energy in the interval of interest, and load in stack a photon with that energy and a weight equal to the corresponding spectral value, multiplied by the width of the sampling interval.
- Pre-calculate a table of values of the cumulative (i.e. integral) spectrum. There is no need to use sophisticated integration techniques: a simple sum of S(E)*DE is sufficient provided you keep the energy intervals DE sufficiently small. Normalise the table dividing all values by the last one. Take a random number x between 0 and 1, and search the table for the smallest value larger than x. Interpolate the energy between that point and the previous one.
The second way can be extended to sample from a biased spectrum: indeed in many problems you don't want to sample too many photons at the lowest energies, which contribute little to most of the quantities of interest. Using the first way, all energies are sampled with the same probability.
These sampling techniques are not limited to the case of a synchrotron radiation spectrum but apply to many similar cases, such as sampling from a galactic cosmic ray spectrum.
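The second method can be sketched in a few lines of Python (an illustrative sketch only, not FLUKA code; the function names and the energy grid are invented for this example):

```python
import bisect
import random

def build_cdf(energies, spectrum):
    """Cumulative spectrum by simple summation of S(E)*DE over each bin,
    normalised by its last value, as described above."""
    cdf, total = [], 0.0
    for i, s in enumerate(spectrum):
        total += s * (energies[i + 1] - energies[i])
        cdf.append(total)
    return [c / total for c in cdf]

def sample_energy(energies, cdf, rng=random.random):
    """Take a random number x in [0,1), search for the smallest cumulative
    value larger than x, and interpolate the energy inside that bin."""
    x = rng()
    i = bisect.bisect_left(cdf, x)
    lo = cdf[i - 1] if i > 0 else 0.0
    frac = (x - lo) / (cdf[i] - lo)
    return energies[i] + frac * (energies[i + 1] - energies[i])
```

To sample from a biased spectrum, the same machinery is applied to the biased S(E), and each sampled photon is given a compensating weight.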
::::::::::::
How do you sample a Gas Bremsstrahlung spectrum?
::::::::::::
In short, the technique is the following: make the electron beam cross a volume of air (or the appropriate gas) at atmospheric pressure, or, if the straight path is longer than about 10 m, at 1/10 atm. The results must be normalised to the actual pressure (dividing by a factor which is generally of the order of 1.E11-1.E12). Very important:
- multiple scattering in the gas must be suppressed (there is no appreciable scattering in the residual gas of very low density, but at atmospheric pressure scattering will introduce a non-physical angular spread of the photons)
- secondary electron and positron production thresholds (Moller and Bhabha thresholds) must be set very large, close to the incoming energy, in order to avoid the angular spread coming from those processes, as for multiple scattering
- kill the electrons at the end of their trajectory in gas (in real life they would be bent out of the way by some magnet). One way to do this is to make a very thin region of gas with electron cutoff higher than beam energy.
See Ferrari et al., Nucl. Instr. Meth. B83, 518 (1993). A detailed example is shown on the FLUKA web page: Examples
::::::::::::
How do I simulate an isotropic source?
::::::::::::
Use a BEAM command with divergence [WHAT(3)] > 6284 (2000 Pi mrad).
::::::::::::
What subroutine should I use to sample from a Gaussian distribution?
::::::::::::
CALL FLNRRN (RGAUSS)
returns one number normally distributed
CALL FLNRR2 (RGAUS1, RGAUS2)
returns two such numbers, independent of each other
::::::::::::
When using the source routine source.f how can beam parameters defined with the BEAM card be preserved?
::::::::::::
By default, all settings defined with the BEAM card are ignored when using source, i.e., you must yourself sample any momentum spread, divergence or distribution of the beam spot. Only the beam momentum, defined with WHAT(1) of the BEAM card, is available to the user as variable PBEAM (common block BEAMCM).
--------------------------------------------------------
Common blocks and information on particle histories
::::::::::::
What is a stack? How many stacks does FLUKA have?
::::::::::::
A stack is a set of arrays containing all information about particles to be transported. The same value of the array index (the "stack pointer") corresponds to one specific particle. There is one array for each of the phase space coordinates (position coordinates, direction cosines, energy, age), and others for the particle type, statistical weight, momentum, generation, various flags, etc. In other codes the stack is called the bank.
At the beginning of a history (or event), a primary is loaded in stack with all its properties (particle ID, position, direction cosines, energy, age, generation number, etc.). The stack index is increased by 1.
Then the particle is "unloaded" (i.e., a copy is made for transport, and the stack index is decreased by 1). The particle is transported, and during its life any produced secondary is loaded onto the stack, increasing the stack index by 1 for each new particle. (Secondaries include delta-ray electrons, evaporation particles, photon-produced pairs, capture gammas, split particles, photoneutrons, etc.) At the end of a particle's life (by energy cutoff, absorption, escape, etc.), the program checks whether there is still any particle in the stack. If there is, the top one (i.e., the one with the largest pointer value) is unloaded and transported; during transport, again, any newly produced particle is stored in the stack.
A primary history is completed when the stack is empty. Then FLUKA calls a subroutine called feeder (or a user-supplied source) to get a new primary to load on stack. And so on.
Note that when a particle changes its energy, that is not recorded in stack: in stack you have only the initial energies, ages, weights etc., and anyway they are deleted at the time that the particle is unloaded. Any change affects the "copy". Note that the stack is a collection of arrays in a COMMON: XFLK(I), YFLK(I) etc., but the copy is a collection of local scalars XTRACK, YTRACK, etc., contained in a COMMON called TRACKR.
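The loading/unloading logic described above is a plain last-in-first-out loop. As an illustration only (with invented toy "physics": above a cutoff each particle produces two secondaries of half its energy; this is a sketch of the mechanism, not FLUKA code):

```python
def interact(energy, cutoff=1.0):
    """Toy interaction: two secondaries of half the energy, or absorption."""
    return [energy / 2.0, energy / 2.0] if energy > cutoff else []

def run_history(primary_energy):
    stack = [primary_energy]         # the primary is loaded on the stack
    transported = 0
    while stack:                     # the history ends when the stack is empty
        current = stack.pop()        # "unload" the top entry (a working copy)
        transported += 1
        for secondary in interact(current):
            stack.append(secondary)  # "load": stack index increases by 1
    return transported

print(run_history(4.0))   # 7 particles transported: 1 primary + 2 + 4 secondaries
```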
In addition to the main stack, called FLKSTK, FLUKA has several secondary stacks, each contained in a Fortran COMMON:
- the electromagnetic stack (EMFSTK),
- the stack of optical photons (OPPHST),
- the hadron generator stack (GENSTK),
- the stack of heavy secondaries created in nuclear evaporation (FHEAVY),
- the stack of radioactive decays (RDPSTK)
::::::::::::
Is it possible to print information on the ancestor ("mother") of a particle produced in FLUKA?
::::::::::::
At any time during a FLUKA run the common block FLKSTK carries those particles which have not yet been followed in the cascade, such as the primary beam particle or any produced secondaries. Besides variables containing information on the type, momentum, position etc. of the respective entry, it also contains variables which can be used by the user to save additional information:
SPAREK (11,0:MFSTCK) (11 double precision variables for each stack entry)
ISPARK (11,0:MFSTCK) (11 integer variables for each stack entry)
LOUSE (0:MFSTCK) (one integer variable for each stack entry)
Furthermore the user-subroutine stuprf (in subdirectory usermvax) is called every time a new particle is produced and put into the stack. The identity and coordinates of the "mother" particle are handed over to stuprf as arguments (IJ, XX, YY, ZZ) and further information on the "mother" particle is stored in common TRACKR.
Hence in stuprf any information on the "mother" particle of a particle, which has just been produced and is put into the stack, can be stored in addition using the above 11 user variables.
For example if you want to save the identity, total energy and z-coordinate of the mother you must edit stuprf as follows
LOUSE (NPFLKA) = JTRACK
SPAREK (1,NPFLKA) = ETRACK
SPAREK (2,NPFLKA) = ZTRACK
and leave the other user variables untouched
DO 100 ISPR = 3, MKBMX1
SPAREK (ISPR,NPFLKA) = SPAUSR (ISPR)
100 CONTINUE
DO 200 ISPR = 1, MKBMX2
ISPARK (ISPR,NPFLKA) = ISPUSR (ISPR)
200 CONTINUE
When a new particle is taken from the stack in order to be transported, it is filled into TRACKR, with the above 11 user variables copied into LLOUSE, SPAUSR and ISPUSR (the following lines are not visible to the user):
LLOUSE = LOUSE (NPFLKA)
DO 96 ISPR = 1, MKBMX1
SPAUSR (ISPR) = SPAREK (ISPR,NPFLKA)
96 CONTINUE
DO 98 ISPR = 1, MKBMX2
ISPUSR (ISPR) = ISPARK (ISPR,NPFLKA)
98 CONTINUE
Therefore common block TRACKR now contains not only the information on the presently transported particle but also the information on its mother and can be printed using mgdraw (see manual, collision tape, input card USERDUMP).
You will therefore have to edit stuprf and mgdraw, compile and link them with the FLUKA library, and use the USERDUMP card to dump all the information.
In a similar way, one can save the information about the mother particle (and possibly other ancestors) when the particle is an electron or a photon. In this case the stack concerned is EMFSTK, and the user subroutine is stupre.
::::::::::::
How can I get the nuclear recoils in neutron interactions? Can I deduce them from the residual nuclei and an energy balance of the outgoing fragments? How else can I get displacements per atom (DPA)?
::::::::::::
For neutrons (and other hadrons) of energy > 19.6 MeV, the recoils can be obtained using the user routine mgdraw. They can be found after each nuclear interaction in COMMONs GENSTK and FHEAVY.
However, this is in general not possible with low-energy neutrons. FLUKA provides recoil production and transport with detailed kinematics only for scattering on hydrogen and for 14-N(n,p), 10-B(n,alpha) and 6-Li(n,x). And don't try to get the recoils of low-energy neutron interactions from an energy balance: it will never work!
All low-energy neutron information, in FLUKA as well as in all other database-based codes (pointwise or group-wise), is unfortunately uncorrelated (i.e. you can get the capture gammas without having the neutron captured, or more energy emitted than the original one, etc.: all the data in the neutron databases are uncorrelated).
To get DPAs from low energy neutrons, the simplest way is to use the user routine fluscw to weight track length with DPA cross sections. These can be obtained for some selected materials from specialised centers (IAEA, NEA, RSICC).
::::::::::::
There exist different common blocks with particle properties, for example PART and PAPROP. What is the difference between both?
::::::::::::
FLUKA distinguishes between particles that are produced and particles that are transported. The properties of the former are listed in COMMON block PART and of the latter in COMMON block PAPROP.
::::::::::::
In the printout of the properties of transported particles stored in the common block TRACKR I noticed that the identity number of the particle (variable Jtrack) can be lower than -6. Which particles carry this label and where can I find their properties?
::::::::::::
These particles are heavy fragments (heavier than 4-He). When detailed transport for ions is activated, the properties of the currently transported ion/fragment are (almost always) cloned into the PART, PAPROP and THRSCM commons at index -2, in addition to their storage in common block FHEAVY under index abs(Jtrack).
--------------------------------------------------------
Functions, subroutines and Fortran programming
::::::::::::
What is the correct way of generating random numbers to be used in a user-specific source routine?
::::::::::::
The FLUKA random number generator can be invoked using the FLRNDM(XDUMMY) function. It returns a 64-bit pseudo-random number in the interval [0.D+00,1.D+00), 1 not being included. See also the next question about the argument of FLRNDM. FLRNDM guarantees an extremely long period (of the order of 10**44). Any other function, for instance the intrinsic random generator of Fortran or functions belonging to external libraries, must absolutely be avoided, since they follow a different seed history and break any possibility of a correct use of the last seeds in successive runs.
::::::::::::
When using the FLRNDM random number generator function in user routines, what argument must be used?
::::::::::::
In general, any name is ok, provided it begins with a letter in A-H or O-Z. If possible, it is suggested to use the name of a variable which has been recently modified; otherwise it doesn't matter. The reason is that in the far past some compilers tried to be "clever" by extracting the function call when it was present inside a DO loop (in principle, a function with a constant argument is supposed to give a constant result - but not a random number generator!). It was therefore safer to fool the compiler by letting it believe that the argument had changed; fortunately such dumb compilers seem to have disappeared. With most compilers accepting f90 extensions, it is now also allowed to write FLRNDM( ) without any argument.
::::::::::::
Are there in FLUKA other utilities for generation and manipulation of random numbers?
::::::::::::
Yes. In particular the use of the following subroutines is recommended whenever useful:
CALL FLNRRN(RGAUSS)
   returns a normal gaussian number RGAUSS
CALL FLNRR2(RGAUS1,RGAUS2)
   returns two independent normal gaussian numbers RGAUS1 and RGAUS2
CALL SFECFE(SINT,COST)
   returns a pair of random numbers SINT and COST such that
   SINT**2+COST**2 = 1.D+00, which can therefore be interpreted as the
   sine and cosine of the same random angle, uniformly distributed in
   the range 0-2 Pi
CALL RACO(TXX,TYY,TZZ)
   returns 3 random numbers TXX, TYY, TZZ such that
   TXX**2 + TYY**2 + TZZ**2 = 1.D+00, which can therefore be interpreted
   as the direction cosines of a random (isotropically distributed)
   direction in 3-D cartesian space, i.e. corresponding to a pair of
   angles: an azimuthal angle phi uniformly distributed in the range
   0-2 Pi, and a polar angle theta uniformly distributed in cos(theta)
   in the range -1,+1
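The sampling performed by RACO can be sketched in Python (an independent illustration of the same technique, not the FLUKA source): azimuth uniform in 0-2 Pi, polar angle uniform in cos(theta) between -1 and +1.

```python
import math
import random

def raco_like(rng=random.random):
    """Sample the direction cosines of an isotropic direction."""
    cost = 2.0 * rng() - 1.0            # cos(theta) uniform in [-1, +1]
    sint = math.sqrt(1.0 - cost * cost)
    phi = 2.0 * math.pi * rng()         # phi uniform in [0, 2 Pi)
    return sint * math.cos(phi), sint * math.sin(phi), cost

txx, tyy, tzz = raco_like()
print(txx * txx + tyy * tyy + tzz * tzz)   # always 1 (up to rounding)
```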
::::::::::::
When programming code in the user routines, is there a recommended way of giving numerical constants?
::::::::::::
When a numerical constant has to be inserted in user code, it must always be given explicitly either as INTEGER, if this is the case, or as DOUBLE PRECISION. This is fundamental especially when geometrical accuracy is involved (for example when using a routine like lattic.f). Therefore one should never write something like '4.2' but rather '4.2D+00': most compilers treat the two cases in different ways. In this respect, whenever possible, the user should make use of the constants defined as PARAMETERs in the DBLPRC include file.
It is also important to know that all numerical values passed to the code via data cards are automatically interpreted in DOUBLE PRECISION.
--------------------------------------------------------
Biasing
::::::::::::
I am doing some biasing using the weight window (with command WW-FACTOr). But I have done a test without biasing and I get the same result! Why?
::::::::::::
Check that you have in input a WW-THRESh card. It is compulsory when using WW-FACTOr, otherwise the two energy thresholds are set equal to zero, and it is as if the weight window was not there.
::::::::::::
When simulating electromagnetic showers of high energy, it takes very long time. Which biasing should I use to increase performance?
::::::::::::
If you are not interested in fluctuations or correlations, use Leading Particle Biasing (see EMF-BIAS in the Manual). For shielding calculations at high-energy accelerators, this biasing feature is very effective.
::::::::::::
I am interested in a correlated analysis on an event-by-event basis. Is there anything which I must define in the input for that purpose?
::::::::::::
You must define a "fully analogue" simulation with WHAT(2)<0.0 in the GLOBAL card.
--------------------------------------------------------
Particle transport and energy cutoffs
::::::::::::
How do I replace multiple scattering by single scattering?
::::::::::::
To activate single scattering everywhere, use command MULSOPT with the following values:
WHAT(1): 0.0
WHAT(2): 0.0
WHAT(3): 0.0
WHAT(4): 1.0
WHAT(5): 1.0
WHAT(6): > 1000.0
SDUM: GLOBAL (GLOBEMF applies only to electrons and positrons, GLOBHAD
only to hadrons and muons)
Note that this choice is generally extremely demanding in CPU time, except for particles of very low energy (a few keV), which have a very short history anyway. In such cases, the single scattering option is even recommended.
::::::::::::
Can light nuclei (like for instance 3-He) migrate from region to region and be stored (by RESNUCLEi card) not in the region where they were born?
::::::::::::
Yes, if you have activated recoil transport with EVENTYPE. In general, however, they don't travel very far.
::::::::::::
In FLUKA, is energy conserved at the level of each single interaction?
::::::::::::
Yes, with the exception of low-energy neutron interactions. In those interactions, energy is deposited by charged recoils. Some of them (protons from scattering on hydrogen and from 14-N(n,p), alphas from 10-B(n,alpha), light fragments from 6-Li(n,x)) are explicitly transported by FLUKA and their energy is deposited by dE/dx. But in most cases, the recoil energy is deposited via kerma factors averaged over all possible reactions for a given energy group, so that energy is conserved on average but not necessarily in a single interaction.
Note that "kerma" in principle refers only to charged particles and not to gammas. But in the unlucky case of Germanium, and of some other elements listed in Chap. 10 of the manual, gamma production is not available. In such cases, the energy of the gamma(s) is added to the kerma, making the situation even worse.
The reason for this is that the NJOY code, which is used to process the evaluated data files (ENDF, JEF, JENDL...), calculates kerma as the result of an energy-mass balance: and whatever energy cannot be accounted for is added to the kerma. This is clearly a weakness of the system, intrinsically related to the way the neutron cross section databases are built (in some cases one even gets negative values due to inconsistencies in the evaluated neutron cross section databases).
No separate balance for each transition can be done, due to the lack of correlations in the original databases: only averaging over all transitions makes sense, and it produces exact, albeit uncorrelated, dose calculations (within the limits of the accuracy of the evaluated databases).
::::::::::::
How are K0 and anti-K0 related to K_short?
::::::::::::
The question of K0's in FLUKA is very complicated. You can have "pure states" and superpositions of states, hadronic and leptonic, short and long: and some of these are relevant at production time but not at transport time and vice-versa.
FLUKA treats neutral kaons differently according to their origin: if they come from the weak decay of some resonance, for instance a phi particle, where the transport eigenstates matter, they are labelled KAONLONG (I.D. 12) and KAONSHRT (I.D. 19). Otherwise, if they come directly from the hadronisation chain, or from strong decays of resonances, they are labelled KAONZERO (I.D. 24) and AKAONZER (I.D. 25), according to their parton content. During particle transport, these kaons are then treated as a proper combination of KAONLONG and KAONSHRT in order to have the right decay properties.
From the point of view of hadronic interactions, instead, only KAONZERO and AKAONZER have meaning. Therefore, if a KAONLONG has to interact with another hadron, it is first decomposed into KAONZERO/AKAONZER.
::::::::::::
I would like to score stars in an Al shell from a carbon source, but the total number and weight of stars come out zero. Why?
::::::::::::
The transport of ions is off by default and has to be invoked by the user with WHAT(3) of the EVENTYPE card. Note that this switches on only ion *transport*. In order to simulate ion *interactions*, the event generators DPMJET and RQMD have to be linked using the script ldpm3qmd, which is located in the flutil subdirectory. Also keep in mind that no nuclear interactions are simulated below 100 MeV/n in the present version.
::::::::::::
I do not see the phi(1020) meson appearing in the list of transportable Fluka particles. Does it mean that this particle (resonance) is discarded completely?
::::::::::::
No: FLUKA distinguishes between particles that are produced (COMMON block PART) and particles that are transported (COMMON block PAPROP). All particles and resonances listed by the Particle Data Group, with the exception of those containing quarks heavier than charm, are produced. Those which have a very short lifetime, i.e. the resonances with hadronic decays, are decayed immediately after their production: even at the highest energies their path would be negligibly short. In particular, phi(1020) is indeed produced and decays according to the measured branching ratios, in this case K+K- (49.2%) and Klong Kshort (34%).
Internally to the code, produced particles and transported particles have different numbering schemes. Only transported particle numbers are normally accessible to the user.
::::::::::::
I am interested in the bremsstrahlung generated by an electron beam. However, I would like to kill any electrons or positrons with energies below the beam energy since, in reality, they are removed by a magnet. How can this be done?
::::::::::::
Define a thin region, with a material of low density, in which you set the transport thresholds for electrons and positrons to a value just below the beam energy (card EMFCUT). Electrons and positrons will then be stopped in that region, with their energy entirely deposited at the stopping point. The region should be thin enough (and of a low-density material) so that other particles interact as little as possible in it. Note that the region cannot be vacuum: electron transport cutoffs are not allowed in vacuum, since the energy deposition (at the stopping point) would be unphysical.
--------------------------------------------------------
Default settings
::::::::::::
I want to simulate an electromagnetic cascade including photonuclear interactions. Unfortunately, the program crashes. What could be wrong?
::::::::::::
When invoking photonuclear reactions (card PHOTONUC), DEFAULTS must not be set to EM-CASCA; otherwise, the code will crash.
--------------------------------------------------------
Output and Error messages
::::::::::::
If I change the value of a magnetic field in my problem, the program suddenly starts to output error messages of the type:
MAGNEW,TXYZ: 0.31640638004497
U,V,W -6.56290015166051E-04 9.07632797085983E-03 0.56242650150838
::::::::::::
In option MGNFIELD, one inputs the components of the magnetic field, but if they are all set = 0, a user subroutine MAGFLD is called, with the following arguments: ( X, Y, Z, BTX, BTY, BTZ, B, NREG, IDISC ). Here, BTX, BTY and BTZ are NOT the field components, but the direction cosines! The absolute value of the field is given by B. Looking at the direction cosines printed in the error message, it is clear that U, V and W (the direction cosines) are not properly normalised: the sum of their squares is 0.316406, as shown by the TXYZ value.
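In other words, MAGFLD must return unit direction cosines plus the field magnitude. The required normalisation can be sketched as follows (a Python illustration, not FLUKA code; the function name is invented and the variable names simply mirror the MAGFLD arguments):

```python
import math

def normalise_field(bx, by, bz):
    """Split a field vector into its magnitude B and unit direction
    cosines (BTX, BTY, BTZ), as MAGFLD is expected to return them."""
    b = math.sqrt(bx * bx + by * by + bz * bz)
    if b == 0.0:
        # Zero field: return an arbitrary unit vector with B = 0
        return 0.0, 0.0, 0.0, 1.0
    return b, bx / b, by / b, bz / b

b, btx, bty, btz = normalise_field(0.0, 0.0, 1.5)
# The sum of the squared cosines is now exactly 1, which is what
# the MAGNEW,TXYZ check in the error message verifies.
assert abs(btx**2 + bty**2 + btz**2 - 1.0) < 1e-12
```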
::::::::::::
I get the following message: **** Photonuclear interaction not performed because of missing room in FLUKA stack *** What shall I do?
::::::::::::
Probably you applied excessive biasing of the interaction length (WHAT(2) of option LAM-BIAS) and got too many interactions. Try increasing the absolute value of WHAT(2): the smaller the reduction factor, the stronger the biasing.
::::::::::::
I get in my FLUKA error file the message 'GEOMETRY SEARCH ARRAY FULL'. What does this mean and can I ignore it?
::::::::::::
This message indicates that insufficient memory has been allocated for the "contiguity list" (list of zones contiguous to each zone). This is not an actual error, but it is highly preferable that the user optimise computer time by increasing the values of the NAZ variable in the geometry region specifications.
In particular, for each region, FLUKA sets up a list of "neighbour regions" to be tested first when a particle leaves any of the bodies making up that region.
If the new region is not found in the list, the other regions are searched. When found, the region is added to the list of neighbours, provided there is enough space left in the array. The space allocated is by default 5 neighbours per region, which is normally more than sufficient. The value can be modified by the user as explained in the manual (description of region data).
The integer in columns 6-10 is the number of regions which can be entered by a particle leaving any of the bodies defined for the region being described (leave blank in continuation cards). This number is used to allocate memory, and it is not essential that it be exact (if left blank in number-based geometry inputs, it is set to 5). Any number is accepted, and only the sum of all numbers indicated for each region matters. There is no need for this number to be close to the actual value for each individual region, provided the total is large enough. For example, setting NAZ to 91 for 1 region and to 1 for 9 other regions is exactly the same as setting it to 10 for all ten regions.
Note that memory allocation is done globally, not region by region. In general, the impact on the CPU time can vary wildly depending on whether the contiguity list is marginally small or severely underestimated, and on the overall complexity of the geometry. For complex geometries, with severely underestimated NAZ's, the impact can be huge, while in other cases it can be hard to appreciate.
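The point that only the total of the NAZ values matters can be checked at a glance (a trivial Python illustration of the arithmetic above):

```python
# The contiguity-list memory is allocated from the SUM of the NAZ
# values over all regions, not region by region, so these two
# ten-region inputs request exactly the same amount of memory:
naz_lopsided = [91] + [1] * 9   # 91 for one region, 1 for nine others
naz_uniform = [10] * 10         # 10 for all ten regions

assert sum(naz_lopsided) == sum(naz_uniform) == 100
```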
::::::::::::
In the output file, what does the "missing energy" value indicate?
::::::::::::
Don't worry: the "missing energy" means nothing wrong, it is an honest physical quantity! It is what we call "Q" in nuclear physics. When you have an endoenergetic nuclear reaction, for instance an (n,2n) reaction, you have a positive missing energy (Q<0). It is missing because it has been transformed into mass of the final nucleus. When you have an exoenergetic nuclear reaction, for instance an (n,gamma) reaction or a thermal fission, you have a negative "missing energy" (Q>0). That means the energy is not missing at all, but is created out of the nuclear binding energy balance. In the end, FLUKA does its total energy balance, which can be positive or negative. In a pure electromagnetic run, the missing energy is practically zero. In a run with nuclear reactions, it can be positive or negative. With a thermal neutron source or with fissionable materials the missing energy can have a very large negative value. With Pb and Ta, as in your case, a lot of energy is needed to break the nucleus (think of all the reactions having an energy threshold), and the missing energy can be positive. The value you found of 6.3% is not particularly high.
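As a concrete illustration of the sign convention, consider the two simplest reactions on hydrogen (a Python sketch; the mass excesses are standard tabulated values, rounded to three decimals):

```python
# Atomic mass excesses in MeV (standard tabulated values, rounded):
DELTA = {"n": 8.071, "H1": 7.289, "H2": 13.136}

def q_value(reactants, products):
    """Q = (sum of initial mass excesses) - (sum of final ones).
    Q > 0 is exoenergetic, Q < 0 is endoenergetic."""
    return sum(DELTA[x] for x in reactants) - sum(DELTA[x] for x in products)

# Exoenergetic capture n + p -> d + gamma: Q > 0, "missing energy" < 0
q_capture = q_value(["n", "H1"], ["H2"])            # about +2.224 MeV

# Endoenergetic breakup n + d -> 2n + p: Q < 0, "missing energy" > 0
q_breakup = q_value(["n", "H2"], ["n", "n", "H1"])  # about -2.224 MeV
```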
::::::::::::
The error file contains the following message. What does this mean and can I ignore it?
Geofar: Particle in region 3 (cell # 0) in position 1.000000000E+00
0.000000000E+00 1.000000000E+00
is now causing trouble, requesting a step of 6.258867675E-07 cm
to direction -2.285059979E-01 -9.412338141E-01 2.487245789E-01, error count: 0
[...skipped...]
Particle index 3 total energy 5.189748600E-04 GeV Nsurf 0
We succeeded in saving the particle: current region is n. 2 (cell # 0)
::::::::::::
The message indicates a real problem only if repeated more than a few times. As can be seen, the program has some difficulty tracking a particle in a certain direction, and it tries to fix the problem by "nudging" the particle by a small amount, in case the problem is due to a rounding error near a boundary. If the message appears often, it is recommended to run the geometry debugger, centred around the reported position, in order to find out whether there is an error in the geometry description.
--------------------------------------------------------
Scoring
::::::::::::
I would like to calculate energy depositions in a calorimeter; something like the summary table that FLUKA prints out at the end. How can I do it?
::::::::::::
Energy deposition scoring can be done in FLUKA in several ways:
- Option SCORE: gives you the energy deposited (total or electromagnetic only) in each region. However, it does not provide a distribution like that of the summary table.
- Option USRBIN: does the same in a detailed spatial mesh independent of geometry. The variant EVENTBIN gives the results separately for each primary event.
- Option EVENTDAT: gives a detailed energy balance per region, similar to that of the summary table at the end of the standard output but more extended, at the end of each primary event.
EVENTDAT produces binary output files; EVENTBIN can produce text or binary output files, depending on the user's choice. The instructions for reading them are given in the manual.
The energy deposited in scintillators can be weighted by a quenching factor (option TCQUENCH).
The user routine comscw.f can be called at every energy deposition event (see option USERWEIG) and can be used to multiply the amount deposited by a weighting factor or to perform any other manipulation.
Be aware also that the distribution of deposited energy as electromagnetic, heavy recoil, ionisation etc. is in part arbitrary: for instance changing the threshold for delta-ray production can affect the ratio between ionisation and electromagnetic; similarly, if recoils are transported, their energy is deposited as ionisation, otherwise it is deposited as recoil energy, etc.
::::::::::::
The output corresponding to the SCORE command announces in the title of each column: "GeV/cm**3/one beam particle" or "Stars/cm**3/one beam particle" However, the units actually used seem to be respectively GeV/beam particle and Stars/beam particle. Why does it say "per cm**3"?
::::::::::::
The volume used to calculate the energy density and the star density is the one reported in the second column ("volume in cubic cm"), which is equal to 1.0 by default. The actual region volumes can optionally be input by the user at the end of the geometry section of the input (just before the GEOEND card), provided the IVOPT variable in the Geometry Title card has been set equal to 3. As many volume definition cards must be given as are needed to input a volume for every region. The input variable is an array VNOR(I) = volume of the Ith region. The format is (7E10.5). Volume data are used by FLUKA only to normalise energy or star densities in the regions requested by the SCORE command.
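For example, a small helper can write the volumes seven per line, each in a ten-character exponent field that a (7E10.5) read will accept (a Python sketch; the helper name and the volume values are invented for illustration, and %10.3E is used here simply as one rendering that fits the ten-column fields):

```python
def volume_cards(volumes):
    """Format region volumes 7 per line for the FLUKA geometry input.
    Each value occupies a 10-character field, to be read by FLUKA
    with the (7E10.5) format."""
    lines = []
    for i in range(0, len(volumes), 7):
        chunk = volumes[i:i + 7]
        lines.append("".join("%10.3E" % v for v in chunk))
    return lines

# Hypothetical volumes (in cm**3) for a 9-region geometry:
cards = volume_cards([1.0, 125.0, 4188.79, 1.0, 1.0, 1.0, 1.0, 2.5, 2.5])
# -> two lines: seven values on the first, two on the second
```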
::::::::::::
How can I use FLUKA to score n-tuples with HBOOK?
::::::::::::
A detailed example is available from the FLUKA web page: see the example "Demonstration of simple muon transport", which also shows how to link FLUKA with the CERN library in order to use the HBOOK functionality.
::::::::::::
I would like to calculate the dose generated by gammas and neutrons. How can I do this?
::::::::::::
You can calculate dose with a special fluscw routine by Stefan Roesler et al. named deq99c.f, together with USRBIN, USRBDX or USRTRACK. Note that this routine converts fluence into dose equivalent, not absorbed dose.
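Conceptually, what such a routine does at each scoring call is fold the fluence with an energy-dependent fluence-to-dose-equivalent conversion coefficient h(E). A minimal sketch of that folding (Python, with entirely hypothetical coefficient values and function names; the real deq99c.f interpolates tabulated ICRP-type coefficients):

```python
# Hypothetical conversion coefficients h(E) in pSv*cm**2, tabulated
# at a few energies in GeV (placeholder numbers, NOT real data):
COEFF = [(1e-4, 4.0), (1e-3, 10.0), (1e-2, 100.0), (1e-1, 300.0)]

def h_of_e(energy):
    """Step-wise lookup: take the coefficient of the highest tabulated
    energy not above `energy` (purely illustrative)."""
    best = COEFF[0][1]
    for e, h in COEFF:
        if energy >= e:
            best = h
    return best

def dose_equivalent(fluence_spectrum):
    """Fold a fluence spectrum {energy: fluence in cm**-2} with h(E)."""
    return sum(phi * h_of_e(e) for e, phi in fluence_spectrum.items())

# Hypothetical spectrum: fluence (cm**-2) in two energy groups
dose = dose_equivalent({1e-3: 100.0, 1e-2: 10.0})  # result in pSv
```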
::::::::::::
How can I score a histogram of LET?
::::::::::::
LET can be scored with the USRYIELD card. The bins will be in units of keV/(micron g/cm^3). Note that when scoring with USRYIELD, differential yields are scored over any desired number of intervals for the first quantity, but over only one interval for the second quantity. However, the results are always expressed as second derivatives and not as interval-integrated yields. If LET is your first quantity, the content of your bins will be normalised to a unit interval of your second variable. Furthermore, WHAT(6) of the continuation card of USRYIELD must contain the code of the material in which the LET has to be calculated: it is not taken to be equal to that of the ingoing region, and if absent it will be set equal to hydrogen (material number 3, the first non-vacuum material in FLUKA).
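To relate this unit to the more familiar keV/micron: dividing a linear LET by the material density gives the density-normalised quantity in which the bins are expressed. A quick check of the arithmetic (Python; the numerical values are illustrative):

```python
def let_density_normalised(let_kev_per_micron, density_g_cm3):
    """Convert a linear LET in keV/micron into the density-normalised
    unit keV/(micron g/cm**3) by dividing out the material density."""
    return let_kev_per_micron / density_g_cm3

# In water (density 1 g/cm**3) the two numbers coincide:
assert let_density_normalised(2.0, 1.0) == 2.0

# In a denser material the density-normalised value is smaller:
let = let_density_normalised(2.0, 4.0)  # 0.5 keV/(micron g/cm**3)
```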
::::::::::::
How can I calculate a spectrum in energy per nucleon independent of the ion?
::::::::::::
This can be achieved with the user routine fluscw.f, however in a non-standard way: you do not assign any value to FLUSCW (leave it at its default value ONEONE), but exploit the fact that fluscw.f is called at track-length scoring, in order to manipulate the energy of the ion. Put the following lines in fluscw.f:
      ...................................
      INCLUDE '(FHEAVY)'
      INCLUDE '(PAPROP)'
      ...................................
*  Leave the scoring weight at its default value of one:
      FLUSCW = ONEONE
*  Mass number of the particle: IBARCH for normal particles
*  (IJ .GE. -6), IBHEAV for heavy ions (IJ .LT. -6):
      IF ( IJ .GE. -6 ) THEN
         IA = IBARCH (IJ)
      ELSE
         IA = IBHEAV (-IJ)
      END IF
*  Divide by the mass number to obtain a per-nucleon value,
*  keeping the sign convention of PLA:
      PLA = -PLA / DBLE (IA)
Of course, in your USRTRACK commands you must set the maximum and minimum energy of the spectrum consistently with the fact that it will be a spectrum of E/n and not of E.
::::::::::::
When scoring activity at a certain cooling time, by associating a RESNUCLE card through the DCYSCORE card with that cooling time, I obtain results, e.g. for 24Na, which are not identical to the activity I calculate offline based on a residual nuclide scoring (RESNUCLE without association to a cooling time) and exponential build-up and decay. How can this be explained?
::::::::::::
This happens when the residual nucleus (obtained by RESNUCLE) decays to another radioactive nucleus. RESNUCLE gives you the parent nucleus, but not the daughter. On the other hand, when you use DCYTIMES, FLUKA follows the decay of the parent and gives you the daughter (and sometimes even the daughter of the daughter). This is obtained using the full Bateman equations, which govern the chain of nuclear transformations; see for instance http://www.neutron.kth.se/courses/transmutation/Bateman/Bateman.html In the case of 24Na, the nuclide is produced in at least two different ways: 1) directly, and 2) indirectly through the decay of another nucleus. Probably it is 24Ne, which decays into 24Na with a half-life of about 3 min: check your RESNUCLE results to see whether you get any 24Ne nuclei.
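The two-member chain behind the 24Na discrepancy can be reproduced with the Bateman solution for a parent decaying into a radioactive daughter (a Python sketch; the half-lives used, roughly 3.38 min for 24Ne and 14.96 h for 24Na, are the standard tabulated values, and the initial parent population is invented):

```python
import math

LN2 = math.log(2.0)

def daughter_atoms(n1_0, t_half_parent, t_half_daughter, t):
    """Bateman solution: number of daughter atoms at time t, starting
    from n1_0 parent atoms at t = 0 and no direct daughter production:
    N2(t) = N1(0) * l1/(l2-l1) * (exp(-l1*t) - exp(-l2*t))"""
    l1 = LN2 / t_half_parent
    l2 = LN2 / t_half_daughter
    return n1_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

# 24Ne (T1/2 ~ 3.38 min) feeding 24Na (T1/2 ~ 14.96 h), times in seconds:
t_ne, t_na = 3.38 * 60.0, 14.96 * 3600.0

# After ~10 parent half-lives essentially the whole 24Ne population
# has been converted into 24Na, which has barely started to decay:
n_na = daughter_atoms(1.0e6, t_ne, t_na, 10.0 * t_ne)
```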
--------------------------------------------------------
Installation and running
::::::::::::
How can I run several jobs simultaneously on different machines using a different random number sequence?
::::::::::::
Set WHAT(2) in command RANDOMIZE equal to a different number for each independent job you want to run. You can then even run several jobs in sequence for each number used. But remember that each sequence is independent of the others and creates its own series of seed files, which cannot be mixed.
::::::::::::
Can FLUKA be run under CYGWIN?
::::::::::::
A FLUKA version for CYGWIN is presently being prepared and tested. Among other things, it has to be validated with a few benchmark cases to verify its consistency with the default FLUKA version for Linux. This is a tedious process that needs some time. Once completed, the CYGWIN version will be announced on the FLUKA discussion list.
::::::::::::
How do I compile the utility programs in $FLUPRO/flutil?
::::::::::::
There is a Makefile for that purpose in the same directory, which can be activated with the 'make' command. It is recommended to run 'make' during the installation of FLUKA.
--------------------------------------------------------
How do I reference FLUKA correctly?
::::::::::::
How do I reference FLUKA correctly?
::::::::::::
In accordance with the User License, the use of the FLUKA code must be acknowledged explicitly by quoting the following set of references:
[[FLUKA_REFERENCE]]
Additional FLUKA references can be added, provided they are relevant for the FLUKA version under consideration.
This set of references is subject to change in time. New ones will be communicated, when necessary, in the Release Notes of new FLUKA versions.