
The history of FLUKA goes back to 1962-1967. During that period, Johannes Ranft was at CERN doing work on hadron cascades under the guidance of Hans Geibel and Lothar Hoffmann, and wrote the first high-energy Monte Carlo transport codes.

Starting from those early pioneering attempts, and according to a personal recollection of Ranft himself [Ran97], it is possible to distinguish three different generations of "FLUKA" codes over the years, which can be roughly identified as the FLUKA of the '70s (main authors J. Ranft and J. Routti), the FLUKA of the '80s (P. Aarnio, A. Fassò, H.-J. Moehring, J. Ranft, G.R. Stevenson), and the FLUKA of today (A. Fassò, A. Ferrari, J. Ranft and P.R. Sala).

These codes stem from the same root and of course every new "generation" originated from the previous one. However, each new "generation" represented not only an improvement of the existing program, but rather a quantum jump in the code physics, design and goals. The name "FLUKA" itself has been preserved as a reminder of this historical development, mainly as a homage to J. Ranft, who has been involved in it as an author and mentor from the beginning until the present day, but the present code is completely different from the versions released before 1990, and far more powerful.

First generation (the CERN SPS Project, 1962-1978)

The first codes of Ranft [Ran64,Gei65,Ran66,Ran67,Ran70a,Ran72] were originally non-analogue and were used as a tool for designing shielding of high-energy proton accelerators. The first analogue cascade code was written and published at Rutherford High Energy Lab, where Ranft worked from 1967 to 1969. His work was supported by Leo Hobbis of RHEL, who at the time was the radiation study group leader for the CERN 300 GeV project. The analogue code was called FLUKA (FLUktuierende KAskade), and was used to evaluate the performance of NaI crystals used as hadron calorimeters [Ran70a].

Around 1970, J. Ranft got a position at Leipzig University. During the SPS construction phase in the Seventies he was frequently invited by the CERN-Lab-II radiation group, led by Klaus Goebel, to collaborate in the evaluation of radiation problems at the SPS on the basis of his hadron cascade codes. These codes were FLUKA and versions with different geometries and slightly differing names [Sch74]. Jorma Routti, of Helsinki University of Technology, collaborated with Ranft in setting up several such versions [Ran72a,Ran74]. The particles considered were protons, neutrons and charged pions.

At that time, FLUKA was used mainly for radiation studies connected with the 300 GeV Project [Goe71,Goe73,Fas78]. The development of FLUKA was entirely managed by Ranft, although many suggestions for various improvements came from Klaus Goebel, partly from Jorma Routti and later from Graham Stevenson (CERN). In that version of FLUKA, inelastic hadronic interactions were described by means of an inclusive event generator [Ran74,Ran80a]. In addition to nucleons and charged pions, the generator could now sample also neutral pions, kaons and antiprotons.

Ionisation energy losses and multiple Coulomb scattering were implemented only in a crude way, and a transport cutoff was set at 50 MeV for all particles. The only quantities scored were star density and energy deposited. The electromagnetic cascade and the transport of low-energy particles were not simulated in detail but the corresponding energy deposition was sampled from "typical" space distributions.

Second generation (development of new hadron generators, 1978-1989)

After the SPS construction phase, a complete re-design of the code was started in 1978 on the initiative of Graham Stevenson (CERN), with the support of Klaus Goebel, then leader of the CERN Radiation Protection Group, and of Jorma Routti, Chairman of the Department of Technical Physics at the Helsinki University of Technology (HUT), in the form of a collaboration between CERN, HUT and Leipzig University [Moh81,Aar84,Aar84a,Ran85b]. The goal was to make FLUKA a more user-friendly hadron cascade code with flexible geometry and with a modern formulation of the hadron interaction model. Work on the new FLUKA code was started by visitors from Leipzig University (H.-J. Moehring) and from HUT (Jorma Sandberg). The project was finished by Pertti Aarnio, also a visitor from Helsinki. Other contributions came from Jukka Lindgren (Helsinki) and from Stevenson himself, who acted as coordinator.

The existing versions of Ranft's programs (at least 14) were unified into a single code under the name FLUKA. The new code was capable of performing multi-material calculations in different geometries and of scoring energy deposition, star density and differential "fluxes" (actually, angular yields around a target).

This second generation resulted in the release of several versions. In FLUKA81 [Moh81] only one geometry was available (cylindrical). High-energy hadronic events were still sampled from inclusive distributions, but the low-energy generators HADRIN [Han79,Han86] and NUCRIN [Han80,Han86a] were introduced for the first time.

In FLUKA82 [Aar84,Ran85b], Cartesian and spherical geometries were added, and in principle Combinatorial Geometry too (but the latter option was rarely used, since there was no scoring associated with it and it did not support charged particle multiple scattering and/or magnetic fields). After a first release with the old inclusive hadron generator, an update [Aar84a] was soon released in which a new quark-chain generator developed by Ranft and his collaborators was tentatively introduced [Ran83,Ran85,Ran85a]. At least four Ph.D. projects at Leipzig University contributed to this new generator, based on the Dual Parton Model and known as EVENTQ. The model soon turned out to be far superior to all those used before in hadron Monte Carlo, and various versions of it were later adopted also in other codes (HETC [Als89,Als90], HERMES [Clo88], CALOR [Gab89], and the simulation codes used for the H1 and ZEUS experiments).

The link to the EGS4 program [Nel85] was introduced in the FLUKA86 version by G.R. Stevenson and A. Fassò, as an alternative to the parameterised electromagnetic cascade used before. The link worked both ways, allowing the transport of gammas from pi0 decay and also of photohadrons. Production of the latter was implemented only for energies above the Delta resonance, in the frame of the Vector Meson Dominance model, by J. Ranft and W.R. Nelson [Ran87b].

The possibility to work with complex composite materials was introduced in the FLUKA81 version by Moehring and Sandberg. P. Aarnio restructured the code by encapsulating all COMMON blocks into INCLUDE files. In that version, and in FLUKA87 which soon followed [Aar87], several other new features were introduced. A first attempt at simulating ionisation fluctuations (with the Landau approach) was contributed by P. Aarnio, and a rudimentary transport of particles in magnetic fields was provided by J. Lindgren (for charged hadrons only). Some fluence estimators (boundary crossing, collision, tracklength) were added in a preliminary form by Alberto Fassò, based on the same algorithms he had written for the MORSE code [Fas87]. J. Ranft and his group improved the EVENTQ hadron generator with the inclusion of diffractive events and Fermi momentum and provided a first model (later abandoned) of nucleus-nucleus collisions.

Practically none of these features, however, survives today in the same form: in all cases, with the exception of the hadron event generator, even the basic approach is now completely different.

Third generation (the modern multiparticle/multipurpose code, 1988 to present)

At about the time when the last version was frozen (1987), a new generation of proton colliders, with large luminosities and energies of the order of several TeV, started to be planned. Because of its superior high-energy hadron generator, FLUKA attracted great interest and began to be employed for shielding calculations and especially to predict radiation damage to the components of the machines and of the experiments. But soon many limitations of the code became evident: the design of the new accelerators (SSC and LHC) and of the associated experiments required capabilities which the code lacked, such as handling large multiplicities, strong magnetic fields, energy deposition in very small volumes, high-energy effects and low-energy neutron interactions. A. Ferrari (INFN) and A. Fassò set up a plan to transform FLUKA from a high-energy code mostly devoted to radiation shielding and beam heating into a code which could handle most particles of practical interest and their interactions over the widest possible energy range. This plan was entirely supported by INFN, since after the retirement of K. Goebel the CERN Radiation Protection Group had decided to stop supporting any further FLUKA development. The Leipzig group was dissolved following German reunification, but J. Ranft continued to contribute, especially during three 6-month stays in different INFN labs.

Over a period of six years, FLUKA evolved from a code specialised in high-energy accelerator shielding into a multipurpose multiparticle code successfully applied in a very wide range of fields and energies, going well beyond what was originally intended in the initial reworking plan of Fassò and Ferrari. As examples, a few of the fields where the modern FLUKA has been successfully applied are listed in the following:

  • Neutrino physics and Cosmic Ray studies: initiated within ICARUS
    • Neutrino physics: ICARUS, CNGS, NOMAD, CHORUS
    • Cosmic Rays: First 3D neutrino flux simulation, Bartol, MACRO, Notre-Dame, AMS, Karlsruhe (CORSIKA)
    • Neutron background in underground experiments (MACRO, Palo Verde)
  • Accelerators and shielding: the very first FLUKA application field
    • Beam-machine interactions: CERN, NLC, LCLS, IGNITOR
    • Radiation Protection: CERN, INFN, SLAC, Rossendorf, DESY, GSI, TERA, APS
    • Waste Management and environment: LEP dismantling, SLAC
  • Synchrotron radiation shielding: SLAC
  • Background and radiation damage in experiments: Pioneering work for ATLAS
    • all LHC experiments, NLC
  • Dosimetry, radiobiology and therapy:
    • Dose to Commercial Flights: E.U., NASA, AIR project (USA)
    • Dosimetry: INFN, ENEA, GSF, NASA
    • Radiotherapy: Already applied to real situations (Optis at PSI, Clatterbridge, Rossendorf/GSI)
    • Dose and radiation damage to Space flights: NASA, ASI
  • Calorimetry:
    • ATLAS test beams
    • ICARUS
  • ADS, spallation sources (FLUKA+EA-MC, C.Rubbia et al.)
    • Energy Amplifier
    • Waste transmutation with hybrid systems
    • Pivotal experiments on ADS (TARC, FEAT)
    • nTOF

This effort, mostly done in Milan by Ferrari and Paola Sala (also of INFN), started in 1989 and immediately branched out in many directions: a new structure of the code, a new transport package including in particular an original multiple Coulomb scattering algorithm for all charged particles, a complete remake of the electromagnetic part, an improvement and extension of the hadronic part, a new module for the transport of low-energy neutrons, an extension of Combinatorial Geometry and new scoring and biasing facilities. At the end of 1990, most of these goals had been achieved, although only in a preliminary form. All the new features were further improved and refined in the following years.

Code structure

One of the first changes which led to the modern FLUKA was a complete redesign of the code structure. The main change was a general dynamical memory allocation scheme, giving great flexibility while keeping the global memory size requirements within reasonable limits. Other improvements were a re-arrangement of COMMON blocks to optimise variable alignment, a parameterisation of constants making the program easier to maintain and update, the possibility to insert comments freely in the input, and a special attention devoted to portability (FLUKA87 could run only on IBM machines under VM-CMS).

The greatest importance was attached to numerical accuracy: the whole code was converted to double precision (but the new allocation scheme allowed for implementation also in single precision on 64-bit computers). As a result, energy conservation was ensured within 10^{-10}.

A decision was also made to systematically take maximum advantage of the available machine precision, avoiding all unnecessary rounding and consistently using the latest recommended set of physical constant values. This effort succeeded in strongly reducing the number of errors in energy and momentum conservation and especially the number of geometry errors.

A double precision random number generator was also adopted, kindly provided by Fred James (CERN) [Jam90] and based on the RANMAR algorithm by Marsaglia and Zaman of Florida State University [Mar87,Mar91]. The possibility to initialise different independent random number sequences was introduced in 2001. In 2005, the new double-precision generator proposed by Marsaglia and Tsang [Mar04] was implemented.
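
For illustration, the following is a minimal Python transcription of the classic RANMAR generator in the formulation published by James [Jam90]; it is a sketch for readers unfamiliar with the algorithm, not the actual FLUKA implementation (which is written in Fortran and in double precision):

    class Ranmar:
        """Lagged-Fibonacci generator of Marsaglia and Zaman (RANMAR)."""
        def __init__(self, ij=1802, kl=9373):          # default seeds of James' RMARIN
            i, j = ij // 177 % 177 + 2, ij % 177 + 2
            k, l = kl // 169 % 178 + 1, kl % 169
            self.u = []
            for _ in range(97):                        # fill the table of 97 seeds
                s, t = 0.0, 0.5
                for _ in range(24):                    # 24 bits per seed
                    m = (i * j % 179) * k % 179
                    i, j, k = j, k, m
                    l = (53 * l + 1) % 169
                    if (l * m) % 64 >= 32:
                        s += t
                    t *= 0.5
                self.u.append(s)
            self.c = 362436.0 / 16777216.0             # arithmetic-sequence part
            self.cd = 7654321.0 / 16777216.0
            self.cm = 16777213.0 / 16777216.0
            self.i97, self.j97 = 96, 32                # lags 97 and 33 (0-based)

        def __call__(self):
            uni = self.u[self.i97] - self.u[self.j97]
            if uni < 0.0:
                uni += 1.0
            self.u[self.i97] = uni
            self.i97 = (self.i97 - 1) % 97
            self.j97 = (self.j97 - 1) % 97
            self.c -= self.cd
            if self.c < 0.0:
                self.c += self.cm
            uni -= self.c
            return uni + 1.0 if uni < 0.0 else uni     # uniform in [0,1)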

A deliberate choice was made at an early stage to give preference to table look-up over analytical parameterisations or rejection sampling. The burden of large file management was more than compensated by the better accuracy and increased efficiency. Cumulative tabulations optimised for fast sampling were initialised at run-time for the materials of the problem at hand, and were obtained mainly from complete binary data libraries stored in external files.
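
As a minimal illustration of the table look-up idea (a generic sketch, not FLUKA code), a cumulative table built once at initialisation allows each sample to be drawn with a binary search plus an interpolation:

    import bisect, random

    grid = [1.0, 2.0, 5.0, 10.0, 20.0]        # illustrative grid (e.g. MeV)
    pdf = [0.1, 0.4, 0.3, 0.2]                # illustrative per-bin probabilities

    # Initialisation: build the normalised cumulative table once.
    cdf, tot = [0.0], sum(pdf)
    for p in pdf:
        cdf.append(cdf[-1] + p / tot)

    def sample():
        """Draw one value by inverse-CDF look-up in the cumulative table."""
        r = random.random()
        i = bisect.bisect_right(cdf, r) - 1   # bin with cdf[i] <= r < cdf[i+1]
        f = (r - cdf[i]) / (cdf[i + 1] - cdf[i])
        return grid[i] + f * (grid[i + 1] - grid[i])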

The concern for self-consistency was and still is the main guiding principle in the design of the code structure. The same attention has been devoted to each component of the hadronic and of the electromagnetic cascade, with the aim of obtaining a uniform degree of accuracy. For this reason, FLUKA can now be used just as well to solve problems where only a single component is present (pure hadron, neutron, muon or electromagnetic problems). An effort has also been made to give a complete description of the mutual interaction between the different components, preserving the possible correlations.

A set of default settings recommended for several applications (shielding, radiotherapy, calorimetry etc.) was introduced in 1996 to help the user in a task which is difficult but essential for obtaining reliable results.

Geometry

The original Combinatorial Geometry (CG) package from MAGI [Gub67,Lic79] was adopted and extensively improved by Fassò and Ferrari, starting from the one used in their improved version of the MORSE code. In 1990, new bodies were added (infinite planes and cylinders) which made the task of writing geometry input much easier and allowed more accurate and faster tracking.

CG had originally been designed for neutral particles, but charged particles definitely required a fully new treatment near boundaries, especially when magnetic fields were present. Previous attempts to use CG to track charged particles, in FLUKA87, EGS4 and other codes, had always resulted in a large number of particle rejections, due to rounding errors and to the "pathological" behaviour of charged particles scattering near boundaries, making it in practice impossible to use CG for these purposes.

The tracking algorithm was thoroughly redesigned, achieving a complete elimination of rejections. A new tracking strategy brought about large speed improvements for complex geometries, and the so-called DNEAR facility (minimum distance from any boundary) was implemented for most geometry bodies and all particles. A sophisticated algorithm was written to ensure a smooth approach of charged particles to boundaries by progressively shortening the length of the step as the particle gets closer to a boundary. Boundary crossing points could now be identified precisely even in the presence of very thin layers. The combined effect of multiple scattering and magnetic/electric fields was taken into account.
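
The idea of the boundary approach can be sketched as follows (an illustrative fragment with assumed names and parameter values, not the actual FLUKA algorithm): the step proposed by the physics is capped by a fraction of the current distance to the nearest boundary, down to a minimum step used to cross the boundary region:

    def limited_step(proposed, dnear, frac=0.5, min_step=1.0e-4):
        """Shorten the step progressively while approaching a boundary."""
        if dnear > min_step:
            return min(proposed, max(frac * dnear, min_step))
        return min(proposed, min_step)   # crawl through the boundary layer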

In 1994, the PLOTGEOM program, written by R. Jaarsma and H. Rief in Ispra and adapted as a FLUKA subroutine by G.R. Stevenson in 1990, was modified by replacing its huge fixed-dimension arrays with dynamically allocated ones. The same year, a repetitive (lattice) geometry capability was introduced in CG by Ferrari, and a powerful debugger facility was made available.

In 1997-1998, following a request from the ATLAS experiment, INFN hired a fellow, S. Vanini, who, together with Sala, developed an interface called FLUGG which allows FLUKA to be used with the GEANT4 geometry routines for detector description. This interface was further improved by Sala and more recently by I. Gonzalez and F. Carminati from ALICE.

In 2001-2002, following a collaboration between INFN-Milan and GSF (Germany), Ferrari developed a generalised voxel geometry model for FLUKA. This algorithm was originally developed to allow the use inside FLUKA of the human phantoms developed at GSF from whole-body CT scans of real persons. It was general enough to be coupled with generic CT scans, and it is already used in Rossendorf (Germany) for hadron therapy applications.

Particle transport

The number of particles which can be transported by FLUKA was 25 in 1990; after the muon (anti)neutrinos and several hyperons were added, the number increased to 36. In 1992, transport of light ions (deuterons, tritons, 3He, alphas) was introduced, without nuclear interactions. In 1996, transport (and in some cases interactions) of tau leptons and neutrinos was added. In 1999 the transport of optical photons was set up. The same year, charmed hadrons brought the total number of transported particles to 63, plus all kinds of heavy ions.

The transport threshold, originally fixed at 50 MeV, was lowered in 1991 to the energy of the Coulomb barrier. Below that value, charged particles were ranged out to rest.
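
For orientation (standard nuclear physics, not a statement about FLUKA internals), the Coulomb barrier for a projectile of charge z and mass number a on a nucleus (Z, A) is of the order of

    B = z Z e^2 / [r_0 (a^{1/3} + A^{1/3})],   with r_0 ~ 1.2 fm and e^2 ~ 1.44 MeV fm,

which gives roughly 2 MeV for protons on carbon: a far lower, physically motivated limit than the old 50 MeV cutoff.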

Particle decays

The old FLUKA allowed only for two- and three-body phase-space-like particle decays, with no account of possible matrix elements, particle polarisation or higher multiplicities. Starting in 1995, particle decays were rewritten from scratch by Ferrari, allowing for more than three-body decays, implementing polarised decays of pions, kaons, taus and muons when relevant, and introducing proper matrix elements for kaon and muon decays. Among these features, polarised muon decays with the proper V-A matrix element were developed by G. Battistoni (INFN-Milan), and the K_mu3 and K_l3 decays of K+/- and K_Long with the proper matrix elements were based on the KL3DCAY code kindly provided by Vincenzo Patera (INFN-Frascati).

Various methods of particle decay biasing were also introduced by Ferrari (described later in the Biasing subsection).

Magnetic fields

Transport in magnetic fields was extended to electrons and positrons in 1990 by Ferrari. In 1992 and again in 1994, the magnetic field tracking algorithm was completely reworked by Ferrari and Sala in order to overcome the many limitations of the previous one. The new algorithm was much more precise and fast, supported the complex particle/boundary interactions typical of multiple scattering, allowed for charged particle polarisation, and no longer missed, by construction, any geometrical feature met along the curved path, however small.

Multiple Coulomb scattering

The version of EGS4 which had been linked to FLUKA87 was an early one, which did not include the PRESTA algorithm for the control of the multiple scattering step and was therefore very sensitive to the step length chosen. In 1989, Ferrari and Sala developed and implemented in FLUKA an independent model which had several advantages even with respect to PRESTA: it accounted for several correlations, sampled the path length correction from the first and second moments of its distribution, allowed the introduction of high-energy effects (nuclear form factors), and could be easily integrated within the Combinatorial Geometry package. The algorithm, which also included higher-order Born approximations, was very efficient and took care also of special high-energy effects, very grazing angles, correlations between angular, lateral and longitudinal displacements, and backscattering near a boundary.

The Ferrari-Sala model, which has proved very robust and has been shown to reproduce with good accuracy even backscattering experiments, was applied to both electrons and heavier charged particles. The final revision and update of the algorithm were made in 1991. In 1995, the Fano correction for multiple scattering of heavy charged particles was introduced.
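
For orientation, the typical angular scale handled by such algorithms is given by the well-known Highland parameterisation of Molière theory (quoted here as standard reference physics; it is not the Ferrari-Sala model itself):

    theta_0 = (13.6 MeV / (beta c p)) z sqrt(x/X_0) [1 + 0.038 ln(x/X_0)],

the r.m.s. projected scattering angle for a particle of momentum p, velocity beta*c and charge number z traversing a thickness x/X_0 expressed in radiation lengths.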

Ionisation losses

The treatment of ionisation losses was completely rewritten in 1991-1992 by Fassò and Ferrari to eliminate many crude approximations, and delta-ray production was added. The ranging-out of stopping charged particles was also changed. Quenching according to Birks' law was introduced to calculate the response of scintillators.
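
For reference, Birks' law (standard scintillator phenomenology) expresses the quenched light yield per unit path length as

    dL/dx = S (dE/dx) / (1 + kB (dE/dx)),

where S is the scintillation efficiency and kB the material-dependent Birks constant: for densely ionising particles the response saturates instead of growing linearly with the deposited energy.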

Application of FLUKA to proton therapy called for further refinements of the stopping power routines in 1995, with the inclusion of tabulated data of effective ionisation potentials and density effect parameters. Shell corrections were added. The new treatment was fully compliant with the ICRU recommended formulae and parameters and included all corrections, including the low-energy shell corrections as worked out by Ziegler et al. [Zie77].

In 1996, a new formalism for energy loss fluctuations by Ferrari replaced the old treatment of Landau fluctuations. This formalism, based on the statistical properties of the cumulants of a distribution, was applied to both heavy charged particles and e+e-, and was fully compatible with any user-defined threshold for delta ray emission.

Other improvements concerned the possibility to define materials with local density different from average (porous substances), and the ranging out of particles with energies lower than the transport cutoff.

In 1999-2000, heavy ion dE/dx was improved by the inclusion of effective Z and straggling (Ferrari).

High-energy energy loss mechanisms for heavy charged particles were implemented by Ferrari both as a continuous and as an explicit treatment: bremsstrahlung and pair production in 1992, nuclear interaction via virtual photons in 1993.

Time dependence

Time-dependent calculations were made available in FLUKA for all particles in 1991. Time cut-offs could be set by the user for both transport and scoring.

Nuclear data and cross sections

All the nuclear data used by the original evaporation routines by Dresner [Dre61] (see below) were replaced by Ferrari at an early stage with the most recent ones available from the NNDC database, thanks to the help of Federico Carminati. In 1990, the isotopic composition of elements was included, taken from modern evaluations.

In 1991, the proton and neutron inelastic cross sections between 10 and 200 MeV were updated by Ferrari and Sala with fits to experimental data. An accurate treatment of cross section energy dependence for all charged particles, independent of the step size, was introduced at that stage through the fictitious sigma method.

Hadron-nucleus cross sections underwent further extensive reworking starting from 1994 by Ferrari and Sala. The present treatment is based on a novel approach blending experimental data, data driven theoretical approaches, PDG fits and phase shift analysis when available.

Elastic scattering on hydrogen of protons, neutrons, and pions was rewritten from scratch in 1994 by Ferrari and Sala. The same work was done for kaons in 1997.

Electron and photon transport (EMF)

The original EGS4 implementation in FLUKA was progressively modified, substituted with new algorithms and increasingly integrated with the hadronic and the muon components of FLUKA, giving rise to a very different code, called EMF (Electro-Magnetic-Fluka). In 2005, the last remaining EGS routine was eliminated, although some of the structures still recall the original EGS4 implementation.

The main developments were made according to the following sequence.

The Ferrari-Sala multiple scattering algorithm was the first major addition in 1989. It has already been described elsewhere since it was applied to hadrons and muons as well. Following its implementation, the whole electron/positron transport algorithm had to be reworked from scratch in order to comply with the features (initial and final step deflections, complex boundary crossing algorithm) of the new model.

In 1990, the treatment of photoelectric effect was completely changed. Shell-by-shell cross sections were implemented, the photoelectron angular distribution [Sau31] was added, taking into account the fine structure of the edges, and production of fluorescence X-rays was implemented.

Many new features were added in 1991. The emission angles of pair-produced electrons and positrons and of bremsstrahlung photons, which were only crudely approximated in the original EGS4 code, were now sampled from the actual physical distributions.

The full set of the electron-nucleus and electron-electron bremsstrahlung cross sections, differential in photon energy and angle, published by Seltzer and Berger for all elements up to 10 GeV [Sel86] was tabulated in extended form and introduced into the code together with a brand new sampling scheme by Fassò and Ferrari. The energy mesh was concentrated, especially near the photon spectrum tip, and the maximum energy was extended to higher energies. The Landau-Pomeranchuk-Migdal effect [Lan53,Lan53a,Mig56,Mig57] for bremsstrahlung and the Ter-Mikaelyan polarisation effect [Ter54] (suppressing soft photon emission) were implemented.

Positron bremsstrahlung was treated separately, using below 50 MeV the scaling function for the radiation integral given by Kim [Kim86] and differential cross sections obtained by fitting proper analytical formulae to numerical results of Feng et al. The photon angular distribution was obtained by sampling the emission angle from the double differential formula reported by Koch and Motz [Koc59], fully correlated with the photon energy sampled from the Seltzer-Berger distributions.

The Compton effect routines were rewritten in 1993 by Ferrari and Luca Cozzi (University of Milan), including binding effects. At the end of the same year, the effect of photon polarisation was introduced for Compton, Rayleigh and photoelectric interactions by Ferrari.

In 1993 and 1994, A. Fassò and A. Ferrari implemented photonuclear reactions over the whole energy range, opening the way to the use of Monte Carlo in the design of electron accelerator shielding [Fas94]. Giant Dipole Resonance, Delta Resonance and high-energy photonuclear total cross sections were compiled from published data [Fas98] (further updated in 2000 and 2002), while the quasideuteron cross section was calculated according to the Levinger model, with the Levinger's constant taken from Tavares et al. [Tav92], and the damping factor according to Chadwick et al. [Cha91]. The photon interaction with the nucleus was handled in the frame of the FLUKA hadronic event generators PEANUT and DPM (see below).

In 1995, a single Coulomb scattering option was made available for electrons and positrons by Ferrari and Sala. The aim of this option was mainly to eliminate some artifacts which affected the angular distributions of charged particles crossing a boundary, but it also turned out to be very useful in solving some problems at very low electron energies or with low-density materials (gases). In the same year, the electron transport algorithm was reworked once more by Ferrari and Sala, introducing an adaptive scheme which "senses" close boundaries in advance and automatically adapts the step length depending on their distance. Also in 1995, Ferrari discovered that the EGS4 implementation of Möller and Bhabha scattering, still used at that time in FLUKA, was flawed. The bug was duly reported to the EGS4 authors, who took corrective actions on their own code, while Ferrari developed a new algorithm for Möller and Bhabha scattering for FLUKA.

In 1997 mutual polarisation of photons emitted in positron annihilation at rest was introduced by Ferrari.

Cherenkov photon production and optical photon transport were implemented in 1999 by Ferrari. In 2002 scintillation photon production was added as well.

In 1998-2001 an improved version of the Ferrari-Sala multiple scattering model was developed, introducing further refinements and the so-called "polygonal" step approach. This version has been fully tested offline and will soon be introduced into the production code.

In 2005, the need for an external data preprocessor was eliminated by integrating all the needed functionality into the FLUKA initialisation stage. At the same time, data from the EPDL97 [EPDL97] photon data library became the source for pair production, photoelectric and total coherent cross section tabulations, as well as for atomic form factor data.

Also in 2005, Rayleigh scattering was reworked from scratch by Ferrari with a novel approach, and the photoelectric interaction model was further improved with respect to the 1993 Ferrari-Sala approach, extending it, among other things, down to a few eV.

Finally, the energy sampling for pair production was completely reworked by Ferrari using a vastly superior approach, which can distinguish between interactions in the nuclear and electron fields and properly samples the element of a compound or mixture on which the interaction is going to occur. The new algorithm is also capable of producing meaningful results for photon energies close to threshold, where several corrections are important and the electron/positron symmetry is broken, in a fashion similar to the bremsstrahlung case.

Low-energy neutrons

The 50 MeV energy cutoff was one of the most important limitations of the FLUKA87 version. The cutoff concerned muons and all hadrons, but it was the absence of neutron transport below 50 MeV which constituted the most serious drawback for many applications. The limitations stemmed from the increasing inadequacy of the hadron interaction models in dealing with interactions below 1 GeV, and from the lack of any detailed nuclear physics treatment, i.e. of an evaporation model and of low-energy particle production, at all energies.

Actually, several early attempts to overcome these weaknesses of the code had been made by H.-J. Moehring, H. Kowalski and T. Tymieniecka (code NEUKA [Kow87,Tym90], for uranium/lead scintillator only) and J. Zazula (code FLUNEV [Zaz90,Zaz91]), with mixed results. The most promising approach was that of Jan Zazula, of the Institute of Nuclear Physics in Cracow: he had coupled FLUKA87 with the EVAP-5 evaporation module which he had extracted from the HETC/KFA code, and had then interfaced the code with the only available multi-group neutron cross section library extending to 50 MeV and beyond, the HILO library.

The main limitation of these approaches was their inability to address the real deficiencies of the FLUKA87 hadron interaction model: its lack of nuclear physics detail and therefore the unreliability of its excitation energy predictions, which indeed had never been intended by the original authors for any real use.

Furthermore, it became apparent that HILO had several weaknesses: the cross section set had been put together by extending a low-energy one of rather coarse structure, based on evaluated experimental data, with the addition of much less accurate data calculated with an intranuclear cascade code (HETC); for the same reason the library did not contain any information on (n,gamma) generation above 20 MeV and was based on different Legendre angular expansions below and above that energy. Finally, because the library contained a very small number of materials, the possibilities of application were rather limited.

The approach followed by Ferrari and Sala to overcome those shortcomings was two-fold:

  • improve/substitute the hadronic interaction models in order to describe with reasonable accuracy low energy particle production and medium-low energy particle interactions
  • develop an ad-hoc neutron cross section library for FLUKA extending up to 20 MeV (in collaboration with G.C. Panini and M. Frisoni of ENEA - Bologna [Cuc91])

The former point is discussed in detail in the section on hadronic models, the latter in the following.

Since Ferrari and Sala had started to work on a preequilibrium model (later known as PEANUT, see next section) which was expected to cover intermediate energies more accurately than the traditional intranuclear cascade codes, it was decided to lower the FLUKA energy cutoff to the more usual limit of 20 MeV (thus making HILO unnecessary) and to create a dedicated multigroup neutron cross section library to be used with FLUKA. The task was carried out with the essential collaboration of G.C. Panini, an expert of an ENEA laboratory in Bologna specialised in nuclear data processing for reactor and fusion applications. Several neutron cross section libraries have been produced for FLUKA over the years as a result of a contract between INFN-Milan and ENEA [Cuc91].

These libraries, designed by Ferrari, had a format similar to the ANISN one [Eng67] used for example by MORSE [Emm75], but modified to include partial cross sections and kerma factors for dose calculations (critically revised). Because at that time there was still a US embargo on the most recent ENDF/B evaluated files, the cross sections were originally derived from the European compilations JEF-1 and JEF-2 (later, they were regularly updated with the best evaluations available from JEF, ENDF, JENDL and others). The choice of materials was particularly tailored to detector and machine applications for high-energy colliders, including also cryogenic liquids at various temperatures, and was much wider than in most other libraries: it contained initially about 40 different materials (elements or isotopes), which soon became 70 (in 1991) and are now more than 130. Hydrogen cross sections were also provided for different H molecular bonds (H gas, water, polyethylene). Reduced Doppler broadening was implemented for a few materials at liquid argon (87 K) and liquid helium (approximately 0 K) temperatures.

The incorporation of the neutron multigroup transport module into FLUKA by Ferrari was loosely based on the approach followed in MORSE and other multigroup codes, with which Ferrari and Fassò had deep expertise. The low-energy neutron transport and interaction routines were rewritten from scratch, progressively introducing many extra features which are detailed in the following. Capture and inelastic gamma generation was still implemented in the multigroup framework, but gamma transport was taken care of by the EMF part of FLUKA. Survival biasing was left as an option to the user, with the possibility to replace it by analogue survival.

Energy deposition computed via kerma factors was preserved, but in the case of hydrogen the recoiling protons were explicitly generated and transported. The same was done for the protons from the 14-N(n,p)14-C reaction, owing to its importance for tissue dosimetry, and later for all reactions on 6-Li.

The new low-energy neutron transport was ready at the end of 1990 [Fer91b]. The corresponding FLUKA version was called FlukaN for a while, to underline the neutron aspect, but the name was soon abandoned.

At the beginning of 1997, the possibility to score residual nuclei produced by low energy neutrons was introduced. Many improvements were made in that same year. Federico Carminati, who was using FLUKA for calculations related to C. Rubbia's Energy Amplifier, added to the program a few routines and nuclear data necessary to score low-energy fission products. Pointwise cross sections were introduced for the neutron interactions with hydrogen. Ferrari and Sala also developed a model to sample gammas from neutron capture in Cadmium, an important reaction for which no data were available in evaluated cross section files. Similar schemes for capture photon generation were established in 2001 for other nuclei (Ar, Xe) [Fas01b]. Pointwise transport and interactions for 6-Li were also provided.

Hadron event generators

The two Leipzig event generators developed in the '80s, one for intermediate energies and the other for high energies (> 5 GeV), were a remarkable achievement with great potential. In particular, the high-energy model was among the first in the world to be based on partonic ideas and quark degrees of freedom (specifically on the so-called Dual Parton Model [Cap80,Cap80a]).

The part of the code concerning hadron-nucleus primary interactions at energies above 4 GeV has been extensively extended and updated since 1987 and is now virtually a new model, even though the physics foundations are still essentially the same. Several bugs and approximations have been removed too. The intermediate-energy resonance model has also been deeply modified and its use is currently restricted to a few particles over a restricted energy range. The newly developed preequilibrium-cascade model PEANUT has progressively replaced it.

The main lines of the work developed mostly in Milan by Ferrari and Sala starting from 1990 can be summarised as follows [Fer96b,Col00]:

  • Further develop and improve the high-energy, DPM-based part of the models. This was performed in four main stages, which eventually led to an almost completely new code, still based on the same physics foundations but with some extensions
  • Introduce a self-consistent nuclear environment in both the medium and high energy models, allowing for a physically meaningful excitation energy and excited residual A and Z calculations
  • Develop a state-of-the-art evaporation/fission/break-up/deexcitation stage to describe the "slow" part of nuclear interactions. This actually took place in two major steps
  • Develop a completely new model (PEANUT) based on a novel approach for describing the low-to-intermediate (up to a few GeV) energy range, while progressively phasing out the improved version of the intermediate-energy Leipzig code. This effort also took place in two main steps.

High energy model improvements

In all the developments described in this paragraph, and also in some described in other paragraphs, J. Ranft always acted as the main mentor and source of theoretical and often practical support. Even when he did not contribute to the code directly, his ideas, help and suggestions were an essential part of its development.

The two models developed by the Leipzig group were initially improved by removing a number of known bugs and approximations (mainly, but not only, in the kinematics). In the years 1990-1991 all hyperons and anti-hyperons were added as possible projectiles and, most important, nuclear effects, previously restricted to Fermi momentum, were expanded and treated more accurately, with an explicit treatment of the nuclear potential well, the inclusion of detailed tables of nuclear masses to account for nuclear binding energy, a consistent exact determination of nuclear excitation energy and an overall "exact" conservation of energy and momentum on an event-by-event basis. These changes were the minimal modifications required for introducing a sensible evaporation module and the related low-energy particle production: they made up the first stage of the upgrade of the intermediate- and high-energy event generator and were performed by Ferrari and Sala.

In the following years, negative binomial multiplicity distributions, correlations between primary interactions and cascade particles, and better energy-angle distributions were implemented. Sea quark distributions were updated, new distributions were used for the number of primary collisions using an improved Glauber cascade approach, and Reggeon-mediated interactions (single chains) were introduced at the lower-energy end of the application range of the Dual Parton Model. An initial improvement of the diffraction treatment, as well as of the hadronisation algorithm, was performed. These developments ended up in the 1993 version, which represented the second stage of the high-energy generator development (and which was made available to GEANT3 users, see later).

Several major changes were performed on both the intermediate- and high-energy hadron generators in the years 1994-1996 by Ferrari and Sala. The high-energy one was extensively improved, bringing its results into much better agreement with available experimental data from as low as 4 GeV up to several hundreds of GeV. A fully new treatment of transverse momentum and of all DPM aspects in general was developed, including a substantially improved version of the hadronisation code and a new driver model for managing two-chain events. The existing treatment of high-energy photonuclear reactions, previously already based on the VMD model but in an approximate way, was improved by implementing the contributions of all the different vector mesons, as well as the quasielastic contribution. The simulation of diffractive events was completely reworked, distinguishing between resonant, single-chain and two-chain events, and smeared mass distributions for resonances were introduced. This version of the model was completed in 1996 and performed very well together with the new "sophisticated" PEANUT when applied to a variety of problems, ranging from radiation protection to cosmic-ray showers in the atmosphere and to the test beams of the ATLAS calorimeters.

The latest round of improvements originated from the new interest of Ferrari and Sala in neutrino physics, triggered by their participation in the ICARUS experiment, and resulted in several improvements in the high-energy interaction model. In 1998, a new chain fragmentation/hadronisation scheme was put to use, and a new diffraction model was worked out once more according to rigorous scaling, including low-mass diffraction and antibaryon diffraction. In 1999, charm production was set up by Ranft and Ferrari (reasonable at least for integrated rates), and charmed particle transport and decay were introduced. The chain building algorithm was thoroughly revised to ensure a continuous transition to low energies, and a significant reworking was done on the chain hadronisation process, providing a smooth and physically sound passage to chains made up of only two particles, resulting in an overall better description of particles emitted in the fragmentation region. This model was thoroughly benchmarked against data taken at WANF by NOMAD and against the particle production data measured by SPY. It constituted the basis for all calculations performed for CNGS, both in the early physics design stage and later in the optimisation and engineering studies.

Cascade-Preequilibrium model (PEANUT)

There were two main steps in the development of the FLUKA preequilibrium cascade model (PEANUT) by Ferrari and Sala:

  • The so called "linear" preequilibrium cascade model
  • The full preequilibrium cascade model

The first implementation of the FLUKA cascade-preequilibrium model, the "linear" one, was finalised in July 1991 [Fer94]. The model, loosely based for the preequilibrium part on the exciton formalism of M. Blann and coworkers called Geometry Dependent Hybrid Model (GDH) [Bla71,Bla72,Bla75,Bla83a,Bla83b], now cast in a Monte Carlo form, was able to treat nucleon interactions at energies between the Coulomb barrier (for protons) or 10-20 MeV (for neutrons) and 260 MeV (the pion threshold). The model featured a very innovative concept, coupling a preequilibrium approach with a classical intranuclear cascade model supplemented with modern quantum-mechanical corrections. This approach was adopted for the first time by FLUKA and independently by the LAHET code [Pra89] at LANL. Capture of stopping negative pions, previously handled very crudely (the available alternatives being forced decay or energy deposition on the spot), was also introduced in this framework. This first implementation was called "linear" since refraction and reflection in the nuclear mean field were not yet taken into account in the cascade part, resulting in straight ("linear") paths of particles through the nuclear medium. First-order corrections for these effects were nevertheless implemented on the final state angular distributions. This model immediately demonstrated superb performance when compared with nucleon-induced particle production data. Its implementation in FLUKA made it possible to overcome some of the most striking limitations of the code and, through its ability to produce sound results down to 20 MeV, permitted the use of the new neutron cross section library: in this way it opened a huge range of new application fields for the code.

However, despite its good performance, the "linear" cascade-preequilibrium model was always felt by Ferrari and Sala to be a temporary solution for the low-energy side of particle interactions, while waiting for something even more sophisticated. The work on the "full" cascade-preequilibrium, which in the meantime had been named PEANUT (Pre-Equilibrium Approach to Nuclear Thermalisation), started at the end of 1991 and produced the first fully working version by mid-1993. Despite its improved quality, this version was not included in any of the general-use FLUKA versions until 1995, due to its complexity and to the overall satisfactory results of the "linear" one for most applications. Until 1995, the full version was in use only by a few selected groups, including the EET group led by Carlo Rubbia at CERN, which meanwhile had decided to adopt FLUKA as their standard simulation tool above 20 MeV, mostly because of the superior performance of the full PEANUT version.

It would take too long to describe in detail all the features of this model, which represented a quantum jump in FLUKA performance and a significant development in the field. Actually, PEANUT combines an intranuclear cascade part and a preequilibrium part (very similar in the "linear" and full versions), with a smooth transition around 50 MeV for secondary nucleons and 30 MeV for primary ones. It included nuclear potential effects (refraction and reflection), as well as quantal effects such as Pauli blocking, nucleon-nucleon correlations, fermion antisymmetrisation, formation zone and coherence length (a new concept introduced by Ferrari and Sala which generalises the formation zone concept to low energies and two-body scattering). The model featured a sophisticated pion complex optical potential approach, together with two- and three-nucleon absorption processes, and took into account the modifications due to the nuclear medium on the pion resonant amplitudes. For all elementary hadron-hadron scatterings (elastic, charge and strangeness exchanges) extensive use was made of available phase-shift analyses. Particle production was described in the framework of the isobar model, using a much extended version of the original HADRIN code from Leipzig, and with the FLUKA DPM model at higher energies.

In 1995, distinct neutron and proton nuclear densities were adopted, and shell-model density distributions were introduced for light nuclei. In 1994, with the inclusion of pion interactions, the energy range of the original "linear" model was extended from 260 MeV to about 1 GeV. Giant Resonance and quasideuteron photonuclear reactions were added in 1994 and improved in 2000. In 1996-1997 the emission of energetic light fragments (up to alphas) in the GINC stage was described through the coalescence mechanism.

The upper limit of PEANUT was further increased in 1996 to 1.8 GeV for nucleons and pions, and to 0.6 GeV for K+/K0; then again one year later (2.4 GeV for nucleons and 1.6 GeV for pions), and in 2000 (3.5 GeV for both pions and nucleons). In 1998, PEANUT was extended to K- and K0bar induced interactions. In the 2005 version, all nucleon and pion reactions below 5 GeV/c of momentum are treated by PEANUT, while for kaons and hyperons the upper threshold is around 1.5 GeV (kinetic energy). Since 2005, anti-nucleon interactions are also treated in the PEANUT framework. It is planned to progressively extend PEANUT up to the highest energies by incorporating the Glauber cascade and the DPM part of the high-energy model into its sophisticated nuclear framework.

One of the fall-outs of the work done for ICARUS was the introduction of nucleon decays and neutrino nuclear interactions in 1997 [Cav97], which prompted improvements in PEANUT, for instance concerning Fermi momentum and coherence length. Quasielastic neutrino interactions can be dealt with by PEANUT natively; in 1999, the code was coupled with the NUX neutrino-nucleon interaction code developed by André Rubbia at ETH Zurich to produce fully online neutrino-nucleus interactions, including resonance production and deep inelastic scattering. The combined FLUKA(PEANUT)+NUX model gave outstanding results when compared with NOMAD data, thus lending support to all the predictions made for ICARUS.

Negative muon capture was also introduced in 1997 because of ICARUS needs. Somewhat surprisingly, it turned out to be a key factor in the understanding of the unexpected background at the nTOF facility during its initial operation phase in 2001.

Evaporation/Fission and Fermi Break-Up

Evaporation was initially implemented in FLUKA in 1990-1991 by Ferrari and Sala through an extensively modified version of the original Dresner model based on Weisskopf's theory [Dre61]. Relativistic kinematics replaced the original one; fragmentation of small nuclei was also introduced, although initially only in a rough manner. The mass scale was changed to a modern one and the atomic masses were updated from a recent compilation. Improvements also included a more sophisticated treatment of nuclear level densities, now tabulated both with A and Z dependence and with the high-temperature behaviour suggested by Ignatyuk [Ign75]. A brand new model for gamma production from nuclear deexcitation was added, with a statistical treatment of E1, E2 and M1 transitions and accounting for the yrast line and pairing energy. This "initial capability" evaporation was used together with the first-stage improved high-energy hadron generator and the HILO library for the very first calculations carried out in 1990 for the LHC detector radiation environment. Later, in 1991, with the introduction of the "linear" preequilibrium model, full model coverage down to 20 MeV became available, and the new neutron cross section library developed together with ENEA-Bologna [Cuc91] started to be used.

In 1993 the RAL high-energy fission model by Atchison [Atc80], kindly provided by R.E. Prael as implemented in the LAHET code, was included, after extensive modifications to remove some unphysical patches which the presence of a preequilibrium stage had made unnecessary. The model was further developed and improved over the years, and little is now left of the original implementation. Competition between evaporation and fission in heavy materials was implemented. This development was prompted by a collaboration on energy amplifiers with C. Rubbia's group at CERN. Eventually, Ferrari joined that group in 1998.

In 1995, a newly developed Fermi break-up model, with a maximum of six bodies in the exit channel, was introduced by Ferrari and Sala to describe the deexcitation of light nuclei (A <= 17). This development provided better multiplicities of evaporated neutrons and better distributions of residual nuclei. The deexcitation gamma generation model was improved and benchmarked in the following year.

A completely new evaporation treatment was developed by Ferrari and Sala in 1996 and 1997, replacing the improved Dresner model. This new algorithm adopted a sampling scheme for the emitted particle spectra which no longer made any Maxwellian approximation, included sub-barrier effects and took the full energy dependence of the nuclear level densities into account. Gamma competition was introduced too. These physics improvements allowed a much more accurate description of the production of residual nuclei. A refinement of this new package took place in 2000-2001. The production of fragments up to mass 24 was tentatively included around 2003, subsequently developed and benchmarked [Bal04], and is now available in the distributed version as an option to be activated by the user.

Radioactivity evolution and radioactive decays

FLUKA has been capable of making predictions about residual nuclei following hadronic and electromagnetic showers since late 1994. The accuracy of these predictions has steadily improved over the years, in particular after the introduction of the new evaporation/fragmentation module and the improvements and extensions of the PEANUT model: versions before the end of 1996 were unlikely to give satisfactory results. Of course, all FLUKA versions prior to 1989 were totally unable to formulate predictions on this issue. Since 1995, an offline code by Ferrari has been distributed together with FLUKA which allows the computation of the time evolution of a radionuclide inventory calculated with FLUKA, for arbitrary irradiation profiles and decay times. In 2004-2005, this capability was brought online by Ferrari and Sala, with an exact analytical implementation (Bateman equations) of the activity evolution during irradiation and cooling down, for arbitrary irradiation conditions. Furthermore, the generation and transport of decay radiation (limited to gamma, beta- and beta+ emissions for the time being) is now possible. A dedicated database of decay emissions has been created by Ferrari and Sala, using mostly information obtained from NNDC, sometimes supplemented with other data and checked for consistency. As a consequence, results for the production of residuals, their time evolution and the residual doses due to their decays can now be obtained in the same run, for an arbitrary number of decay times and for a given, arbitrarily complex, irradiation profile.
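
For reference, the Bateman solution for a linear decay chain 1 -> 2 -> ... -> n with decay constants lambda_i, starting from N_1(0) parent atoms only, reads

    N_n(t) = N_1(0) [prod_{i=1..n-1} lambda_i] sum_{i=1..n} e^{-lambda_i t} / prod_{j=/=i} (lambda_j - lambda_i);

the online implementation applies this kind of analytical solution piecewise over the irradiation and cooling intervals.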

Biasing

Variance reduction techniques, a speciality of modern FLUKA, have been progressively introduced along the years. Transport biasing under user control is a common feature of low-energy codes, but in the high energy field biasing has generally been restricted to built-in weighted sampling in event generators, not tunable by the user. In addition, Monte Carlo codes are in general either weighted or analogue, but not both. In the modern FLUKA, the user can decide in which mode to run the code, and has the possibility to adjust the degree of biasing by region, particle and energy.

Many different biasing options have been made available. Multiplicity reduction in high-energy hadron-nucleus interactions was the first to be introduced, by Fassò in 1987, to manage the huge number of secondaries produced by the 20 TeV proton beams of the SSC. Ferrari made it possible for the user to tune it on a region-dependent basis. In 1990 Ferrari also added geometry splitting and Russian Roulette for all particles based on user-defined region importances, as well as several biasing options for low-energy neutrons, inspired by MORSE but adapted to the FLUKA structure.

Region-, energy- and particle-dependent weight windows were introduced by Fassò and Ferrari in 1992. In this case the implementation was different from that of MORSE (two biasing levels instead of three), and the technique was applied not only to neutrons but to all FLUKA particles. Decay length biasing was also introduced by Ferrari (useful for instance to improve the statistics of muons or other decay products, or to amplify the effect of rare short-lived particles surviving at some distance from the production point). Inelastic length biasing, similar to the previous option and also implemented by Ferrari, makes it possible to modify the interaction length of some hadrons (and of photons) in one or all materials. It can be used to force a larger frequency of interactions in a low-density medium, and it is essential in all shielding calculations for electron accelerators.
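
The mechanics of such length biasing can be sketched as follows (a generic importance-sampling fragment with assumed names, not FLUKA code): the flight distance is sampled from an artificially shortened interaction length, and the particle weight is corrected by the ratio of the true to the biased probability densities:

    import math, random

    def biased_flight(lam, f):
        """Sample a flight with interaction length lam/f; return (s, weight)."""
        lam_b = lam / f                       # biased (shorter) mean free path
        s = -lam_b * math.log(random.random())
        # weight = p_true(s) / p_biased(s) for the two exponential densities
        w = (lam_b / lam) * math.exp(-s * (1.0 / lam - 1.0 / lam_b))
        return s, w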

Two biasing techniques were implemented by Fassò and Ferrari, which are applicable only to low-energy neutrons.

Neutron Non-Analogue Absorption (or survival biasing) was derived from MORSE, where it was applied systematically and outside user control. In FLUKA it was generalised to give the user full freedom to fix the ratio between scattering and absorption probability in selected regions and within a chosen energy range. While it is mandatory in some problems in order to keep neutron slowing-down under control, it is also possible to switch it off completely to get an analogue simulation.
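
In sketch form (illustrative only, with assumed variable names), the difference between the analogue and the non-analogue treatment of a collision is:

    import random

    def collide(weight, sig_s, sig_t, analogue=False):
        """Return the neutron weight after a collision (0.0 means absorbed)."""
        p_surv = sig_s / sig_t                # scattering / total probability
        if analogue:
            return weight if random.random() < p_surv else 0.0
        return weight * p_surv                # survival biasing: weight reduction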

Neutron Biased Downscattering, also for low-energy neutrons, makes it possible to accelerate or slow down the moderation process in selected regions. It is an option not easily managed by the average user, since it requires good familiarity with neutronics.

Leading particle biasing, which already existed in EGS4, was deeply modified in 1994 by Fassò and Ferrari, who made it tunable by region, particle, interaction type and energy. A special treatment was introduced for positrons, to account for the penetrating power of annihilation photons.
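
One common scheme, sketched here in Python (illustrative; the function and data layout are hypothetical): only one of the secondaries emerging from an electromagnetic interaction is retained, selected with a probability proportional to its energy, and its weight is corrected so that energy is conserved on average.

    import random

    def keep_leading(secondaries, weight):
        """Leading particle biasing at an electromagnetic vertex.
        secondaries: list of (energy, particle) pairs.
        Returns the retained particle and its corrected weight.
        Sketch only, not the FLUKA coding."""
        e_tot = sum(e for e, _ in secondaries)
        u = random.random() * e_tot
        acc = 0.0
        for e, p in secondaries:
            acc += e
            if u < acc:
                # kept with probability e/e_tot -> weight factor e_tot/e
                return p, weight * e_tot / e
        e, p = secondaries[-1]          # numerical round-off fallback
        return p, weight * e_tot / e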

In 1997, in the framework of his work for ICARUS and CNGS, Ferrari implemented biasing of the direction of decay neutrinos.

Scoring

The stress put on built-in generalised scoring options is another aspect of the FLUKA "philosophy" which differentiates it from many other programs, where users are expected to write their own ad-hoc scoring routines for each problem. This characteristic, already typical of the old Ranft codes, has allowed the development in modern FLUKA of some rather sophisticated scoring algorithms that would have been too complex for a generic user to program: for instance the "track-length apportioning" technique, introduced in 1990 by Fassò and Ferrari and used in dose and fluence binning, which computes the exact length of the segment travelled by the particle in each bin of a geometry-independent grid. This technique ensures fast convergence even when the scoring mesh is much smaller than the charged-particle step.
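
A one-dimensional sketch of the apportioning idea (illustrative Python; FLUKA applies it to full 3-D binnings):

    def apportion_step(x0, x1, xmin, dx, nbins, weight, score):
        """Distribute one charged-particle step among the bins of a
        regular 1-D grid, scoring the exact segment length travelled
        in each bin.  Sketch only, not the FLUKA coding."""
        a, b = sorted((x0, x1))
        for i in range(nbins):
            lo = xmin + i * dx
            hi = lo + dx
            seg = min(b, hi) - max(a, lo)   # overlap of step with bin i
            if seg > 0.0:
                score[i] += weight * seg    # weighted track length
        return score

Dividing the accumulated weighted track length in each bin by the bin volume then yields the fluence (or, with the appropriate response folded in, the dose) estimate.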

Different kinds of fluence detectors (track-length, collision, boundary crossing) were implemented in 1990-1992, replacing the corresponding old estimators. The dimension limitations (number of energy intervals) were removed and replaced by much greater flexibility thanks to dynamic memory allocation. Scoring as a function of angle with respect to the normal to a surface at the point of crossing was also introduced. Facilities were made available to score event-by-event energy deposition and coincidences or anti-coincidences between energy deposition signals in different regions, and to study fluctuations between different particle histories.
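
In their standard textbook forms (quoted for orientation; not FLUKA's internal notation), the three fluence estimators read

    \Phi_{\mathrm{track}} = \frac{1}{V} \sum_i w_i \ell_i, \qquad
    \Phi_{\mathrm{coll}} = \frac{1}{V \Sigma_t} \sum_i w_i, \qquad
    \Phi_{\mathrm{cross}} = \frac{1}{A} \sum_i \frac{w_i}{|\cos\theta_i|},

where the w_i are particle weights, the \ell_i track segments inside the volume V, \Sigma_t the total macroscopic cross section, A the area of the crossed surface, and \theta_i the crossing angle with respect to its normal.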

The pre-existing option to write a collision file was completely rewritten and adapted to the extended capabilities of the new code. In 1991, time gates became applicable to most scoring facilities, making it possible to ignore delayed radiation components such as multiply scattered low-energy neutrons.

In 1994, two new options were added: residual nuclei scoring and scoring of particle yields as a function of angle with respect to a fixed direction. In the latter case, several new quantities can be scored, such as rapidity, various kinematical quantities in the lab and in the centre-of-mass frame, Feynman-x etc.

In 2005, the possibility to follow on-line the radiation from unstable residual nuclei was implemented, together with an exact analytical calculation (Bateman equations) of activity evolution during irradiation and cooling down. As a consequence, results for the production of residuals and their effects as a function of time can be obtained in the same run.

Heavy ions

Heavy ion transport (energy loss, effective charge and associated fluctuations, multiple scattering) was developed by Ferrari as early as 1998, largely based on already existing tools in FLUKA.

There was an increasing demand for extending the FLUKA interaction models to heavy ions, both for basic and applied physics applications (cosmic rays, hadron therapy, radiation problems in space). A long-standing collaboration has been going on since 1997 with Prof. L. Pinsky, chair of the Physics Department at the University of Houston. This collaboration became formal in 2000 with a NASA grant covering three years of FLUKA developments in the field of heavy ion transport and interactions, as well as the development of user-friendly tools based on ROOT for better management of the code (project FLEUR). Further support came from ASI, as a grant to a collaborating group in Milan to hire a person for one year to work on these issues.

The DPMJET code has been interfaced to cover the high (> 5 GeV/n) energy range, and an extensively modified version of the RQMD-2.4 code is used at lower energies.

At very low energy, below ~ 0.1 GeV/n, a treatment based on the Boltzmann Master Equation (BME) is in preparation.

In 2004, a model for electromagnetic dissociation of ions in ion-ion interactions was implemented [Bal04].

DPMJET interface

DPMJET is a high-energy hadron-hadron, hadron-nucleus and nucleus-nucleus interaction model developed by J. Ranft, S. Roesler and R. Engel, capable of describing interactions from several GeV per nucleon up to the highest cosmic ray energies. There are strong ties between the FLUKA and DPMJET teams (with J. Ranft being an author of both), and the collaboration has been ongoing since the mid-'90s. An interface with DPMJET-2.5 was developed by Toni Empl (Houston), Ferrari and Ranft [Emp02]. The interface allows FLUKA to treat arbitrary ion interactions at any energy in excess of 5 GeV/n. The excited projectile and target residual leftovers are passed back to the FLUKA evaporation/fission/break-up routines for the final deexcitation and "low" (in the excited residual rest frame) energy particle production. As part of the interface work, a new fast multi-ion/multi-energy initialisation scheme has been developed for DPMJET, and a new cross-section algorithm has been worked out for runtime calculations, based on a fine mesh of DPMJET runs with various ions and energies.

An interface with the new DPMJET-3 [Roe01] has been developed in collaboration with Stefan Roesler (CERN) and is now available.

RQMD interfaces

A very similar interface has been developed by Francesco Cerutti (University of Milan and INFN), Toni Empl (University of Houston), Alfredo Ferrari, Maria Vittoria Garzelli (University of Milan and INFN) and Johannes Ranft, with the Relativistic Quantum Molecular Dynamics code (RQMD) of H. Sorge. In this case too, the evaporation and deexcitation of the excited residuals is performed by FLUKA. Significant interventions on the original code were necessary to bring under control the energy/momentum balance of each interaction and allow a meaningful excitation energy calculation. This brand-new development allows FLUKA to be used for ions from roughly 100 MeV/n up to cosmic ray energies. The results of this modified model can be found in [Aig05, And04, Fas03]. Work is in progress to develop new original code to replace this RQMD interface.

Code size

The present FLUKA alone totals about 400,000 lines of Fortran code (17 MBytes of source code), plus some 60,000 lines (2 MBytes) of ancillary codes used offline to generate and/or test the various data files required for running. Of these, roughly one third are associated with PEANUT. For comparison, the latest release of the previous FLUKA generation, FLUKA87, contained roughly 30,000 lines (1.2 MBytes), of which very few survive in the present code, mostly in the high energy generator and in the old intermediate-energy one.


Last updated: 11th of December, 2008

© FLUKA Team 2000–2024