RE: [fluka-discuss]: Electron models and dose clarification

From: Zafar Yasin <Zafar.Yasin_at_cern.ch>
Date: Sun, 29 Mar 2015 16:00:48 +0000

Dear Mary,

Thank you for your answer and the detailed explanation. I hope the fluence-to-dose conversion factors will not affect my units.

Secondly, when merging the data files for my new run, in the data directory I see all the .bnn and fort. files, but only from my previous run, and no file from
my recent run (although I have all the new files from the recent run in the cycle directories, 001, 002, etc.).

Then I deleted all the .bnn and fort. files in the data directory and reran the input. This gives data-merging errors in the
.out file, such as: "Processed file is older than some of the fort.###. May be the run is still going on?
Error processing file: dump_30.bnn", etc.

I even made a new input file and it shows the same errors. I could not find anything about these in the FLUKA discussion forum.

If someone could also comment on these, thank you in advance.

Zafar


________________________________
From: me_at_marychin.org [me_at_marychin.org]
Sent: 27 March 2015 23:11
To: Zafar Yasin; fluka-discuss_at_fluka.org
Subject: Re: [fluka-discuss]: Electron models and dose clarification

Dear Zafar,

Your PHOTONUC card needs WHAT(4) and WHAT(5), unless the material for which you want photonuclear reactions is the default pre-defined material 3 (hydrogen).
To confirm successful request we usually look for the following line in .out:
 ***** Gamma Photo-nuclear int. activated for media # ...... with code ....
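As a sketch only (the material index range 3-12 below is hypothetical; check it against your own material assignments and the PHOTONUC entry in the FLUKA manual), a card requesting photonuclear interactions for a range of materials could look like:

```
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7...
* WHAT(1)=1 activates photonuclear interactions; WHAT(4)/WHAT(5)/WHAT(6)
* give the lower material, upper material and step (here: 3 to 12, step 1).
PHOTONUC         1.0                           3.0      12.0       1.0
```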

As you pointed out, there is already a wealth of discussion on LAM-BIAS, much of it by Alberto Fasso. I have only the following to add.

Computing time should indeed increase because you'll be tracking more (pseudo-)particles. Having more samples is better than having none, but it comes at a cost: tracking samples takes time. Within a given computing time, a run with biasing should give a smaller standard error than a run without biasing. Simulation efficiency is defined not by the computing time but by the figure of merit, FOM = 1 / (s^2 T), where s is the standard error and T is the computing time.
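As a toy illustration of that definition (all numbers invented, not from any real run): a biased run can spend twice the CPU time and still win on FOM if the standard error drops enough.

```python
# Figure-of-merit comparison for two hypothetical runs.
# FOM = 1 / (s^2 * T): s is the standard error of the scored quantity,
# T the computing time. The numbers below are purely illustrative.

def figure_of_merit(std_error: float, cpu_time: float) -> float:
    """Return FOM = 1 / (s^2 * T)."""
    return 1.0 / (std_error ** 2 * cpu_time)

# Unbiased run: larger standard error for the given CPU time.
fom_unbiased = figure_of_merit(std_error=0.20, cpu_time=3600.0)

# Biased run: each primary costs more time, but the standard error
# drops enough that the overall efficiency improves.
fom_biased = figure_of_merit(std_error=0.05, cpu_time=7200.0)

print(f"FOM unbiased: {fom_unbiased:.3e}")
print(f"FOM biased:   {fom_biased:.3e}")
print("biasing pays off:", fom_biased > fom_unbiased)
```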

In the counter-productive case of poor biasing, the FOM may even drop below that of a run without biasing. This is where we need to optimise the biasing factor. In the bad case of over-biasing, a single primary can even take forever and the simulation can appear to hang.

As for .out becoming too big, I don't expect so. If without biasing there were no inelastic interactions but with biasing there were, then there will indeed be additional counters at the end tabulating:
Number of stars generated
Number of secondaries generated in inelastic interactions
Number of decay products
Number of particles decayed
Number of stopping particles
Number of secondaries created by low energy neutron
but these are tables of finite number of lines and should not cause the file size to blow out of control.

Whether biasing is necessary, and how far we should bias, depends on the irradiation conditions and the scoring. Inspect the tabulated counters at the end of .out.

The activation of EM transport as well as low-energy neutrons will also be reported in .out.

When you multiply pSv/primary by primaries/hr, you are normalising (linearly calibrating) according to your beam intensity. AUXSCORE is different: it allows selective filtering of the particles being scored. The particles being scored are quite different from the beam particles ('primaries').
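A minimal sketch of that normalisation, assuming a beam intensity of 1e5 electrons per hour and an invented bin value (both purely illustrative):

```python
# Convert a USRBIN DOSE-EQ score (pSv per primary) into a dose rate
# in uSv/hr, given the beam intensity in primaries per hour.

PSV_PER_USV = 1e6  # 1 uSv = 1e6 pSv

def dose_rate_usv_per_hr(psv_per_primary: float,
                         primaries_per_hr: float) -> float:
    """Linear calibration: (pSv/primary) * (primary/hr) -> uSv/hr."""
    return psv_per_primary * primaries_per_hr / PSV_PER_USV

# Example: a bin scoring 2.5 pSv/primary with 1e5 electrons/hr
print(dose_rate_usv_per_hr(2.5, 1e5))  # -> 0.25 uSv/hr
```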

To plot scored quantities with the geometry imposed: FLAIR > Plot > Geometry > Use: Auto. Normally this is already on by default when we use Oz to generate the plots.

:) mary
On 27 March 2015 at 04:45 Zafar Yasin <Zafar.Yasin_at_cern.ch> wrote:

Dear Fluka experts,

I am using FLUKA to model a beam dump for a 2.5-5 GeV electron beam. I am not an expert
in FLUKA and would like to clarify the following, although I have read about these topics in the FLUKA discussion forum.

Firstly, when I activate the PHOTONUC and LAM-BIAS cards, the computing time increases
greatly and the output file also becomes very large. Are these cards necessary at such a high energy range?
Is there a need to activate any special models or libraries for electrons, photons and neutrons?

Secondly, I want to calculate dose rates in µSv/hr, and for this I am using USRBIN with the DOSE-EQ option.
This gives dose in pSv/primary. If I multiply this by, say, my 10^5 electrons/hr,
then I can get the dose in pSv/hr or µSv/hr. In this case, what is the role of the AUXSCORE card?

Thirdly, I want to plot dose rates superimposed on the geometry, since my current plots do not seem convincing (plot attached).

My relevant cards for scoring are:

PHOTONUC 1.
LAM-BIAS 0.0 0.005 ELECTRON PHOTON
USRBIN 10. DOSE-EQ -21. 80. 80. 125.Dose-eq
USRBIN -80. -80. -50. 200. 100. 200. &
USRBIN 10. DOSE -22. 80. 80. 125.
USRBIN -80. -80. -50. 200. 100. 200. &

Thank you in advance

Zafar Yasin
ELI-np





Received on Sun Mar 29 2015 - 19:23:38 CEST
