RE: [fluka-discuss]: Electron models and dose clarification

From: <me_at_marychin.org>
Date: Sun, 29 Mar 2015 15:01:30 -0700 (PDT)

Dear Zafar,

Fluence-to-dose conversion factors will necessarily change the units. Why would
you hope that they won't? That's the whole purpose of the conversion, which is
put into action because we don't have a human model standing there in our
geometry to allow dose deposition by different particle types (each with a
different radiation weighting factor) in different organs (each with a
different radiosensitivity). Generally (i.e. in physics textbooks), fluence is
in units of counts per area; dose equivalent is in Sieverts. In Monte Carlo
simulations, quantities are typically normalised per primary (so that results
from short and long runs are comparable -- so that we don't get 3 Sv/hr from a
run consisting of 1e3 primaries but 6 Sv/hr from a run consisting of 2e3
primaries). So for fluence we get counts per area per primary, and for dose
equivalent we get Sieverts per primary. Regulatory authorities impose limits
such as "one person is allowed to be exposed to x microSieverts within y time",
so in radioprotection applications we typically use Sieverts per unit time as
the measure.
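As a concrete example of that per-primary normalisation, here is a minimal Python sketch converting a DOSE-EQ score in pSv/primary into a dose rate in microSieverts per hour. The function name and the numbers are purely illustrative, not part of FLUKA:

```python
# Hedged sketch: scale a per-primary dose-equivalent score by an assumed
# beam intensity to get a dose rate. Values are illustrative only.

PSV_PER_USV = 1e6  # 1 microSievert = 1e6 picoSieverts

def dose_rate_usv_per_hr(score_psv_per_primary: float,
                         primaries_per_hr: float) -> float:
    """Multiply the per-primary score by the beam intensity,
    then convert pSv to microSv."""
    return score_psv_per_primary * primaries_per_hr / PSV_PER_USV

# e.g. a USRBIN DOSE-EQ score of 2.5 pSv/primary at 1e9 primaries/hr:
print(dose_rate_usv_per_hr(2.5, 1e9))  # 2500.0 (microSv/hr)
```

The same number of primaries simulated in one long run or several short cycles gives the same per-primary score, which is exactly why the normalisation makes runs comparable.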

To make sure that the results are from the most recent run:
1. Run > Data > Refresh, check the time stamp of *_fort.*
2. Run > Data > Process
3. Run > Files > data, check the time stamp of *.bnn

:) mary

> On 29 March 2015 at 09:00 Zafar Yasin <Zafar.Yasin_at_cern.ch> wrote:
>
> Dear Mary,
>
> Thank you for the answer and detailed explanation. I hope the fluence-to-dose
> conversion factors will not affect my units.
>
> Secondly, about merging the data files for my new run: in Data I see all the
> files, .bnn and fort.*, but only from my previous run, and no file from my
> recent run (although I have all the new files from my recent run in the
> cycles, 001, 002, etc.).
>
> Then, I deleted all the files in Data, .bnn and fort.*, and reran the input.
> It gives data merging errors in the .out file, like:
> "Processed file is older than some of the fort.###. May be
> the run is still going on?
> Error processing file: dump_30.bnn", etc.
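That message is essentially a timestamp comparison: the merged .bnn is considered stale if any of the per-cycle fort.* files it was built from is newer. A minimal Python sketch of that kind of check (the function name and glob pattern are my own illustration, not Flair's actual code):

```python
# Hedged sketch of a staleness check like the one behind the
# "Processed file is older than some of the fort.###" message.
import glob
import os

def merged_is_stale(merged_path: str, fort_pattern: str) -> bool:
    """Return True if any file matching fort_pattern has a modification
    time newer than the merged file's -- i.e. the merge is out of date."""
    merged_mtime = os.path.getmtime(merged_path)
    return any(os.path.getmtime(f) > merged_mtime
               for f in glob.glob(fort_pattern))
```

If the merged file was left over from a previous run (or the clock ordering is off), the check fires even though nothing is actually still running, which matches the behaviour described above.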
>
> I even made a new input file and it shows the same errors. I could not find
> anything about these in the FLUKA discussion forum.
>
> If someone can also comment on these, thank you in advance.
>
> Zafar
>
>
>
> ---------------------------------------------
> From: me_at_marychin.org [me_at_marychin.org]
> Sent: 27 March 2015 23:11
> To: Zafar Yasin; fluka-discuss_at_fluka.org
> Subject: Re: [fluka-discuss]: Electron models and dose clarification
>
> Dear Zafar,
>
> Your PHOTONUC card needs WHAT(4) and WHAT(5), unless the material for which
> you wish photonuclear reactions is the default pre-defined material 3 (hydrogen).
> To confirm successful request we usually look for the following line in .out:
> ***** Gamma Photo-nuclear int. activated for media # ...... with code ....
>
> As you pointed out, there is already a wealth of discussions on LAM-BIAS,
> many by Alberto Fasso. I have only the following to complement.
>
> Computing time should indeed increase, because you'll be tracking more
> (pseudo-)particles. Having more samples is better than having none, but it
> comes at a cost: tracking samples takes time. Within a given computing time,
> a run with biasing should give a smaller standard error than a run without
> biasing. Simulation efficiency is defined not by the computing time alone,
> but by the figure of merit, FOM = 1/(s^2 T), where s is the standard error
> and T is the computing time.
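That trade-off is easy to put into numbers. A small Python sketch of the figure of merit, with illustrative values (not from any actual run):

```python
# Hedged sketch: FOM = 1 / (s^2 * T) from the text above.
# Higher FOM means a more efficient simulation.

def figure_of_merit(std_error: float, cpu_time: float) -> float:
    """s is the (relative) standard error, T the computing time."""
    return 1.0 / (std_error ** 2 * cpu_time)

# A biased run that halves the standard error at twice the CPU cost
# still doubles the FOM:
fom_unbiased = figure_of_merit(0.10, 100.0)  # s = 10 %, T = 100 s
fom_biased = figure_of_merit(0.05, 200.0)    # s =  5 %, T = 200 s
print(fom_biased / fom_unbiased)  # 2.0
```

Since s typically falls as 1/sqrt(N) while T grows linearly with N, the FOM of a given setup is roughly constant in run length, which is what makes it a fair measure for comparing biasing schemes.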
>
> In the counter-productive case of poor biasing, the FOM may even drop below
> that of a run without biasing. This is where we need to optimise the biasing
> factor. In the bad case of over-biasing, a single primary can even take
> forever and the simulation can appear to hang.
>
> As for .out becoming too big, I don't expect that. If without biasing there
> were no inelastic interactions but with biasing there are, then there will
> indeed be additional counters at the end tabulating:
> Number of stars generated
> Number of secondaries generated in inelastic interactions
> Number of decay products
> Number of particles decayed
> Number of stopping particles
> Number of secondaries created by low energy neutron
> but these are tables with a finite number of lines and should not cause the
> file size to blow out of control.
>
> Whether biasing is necessary, and how far we should bias, depends on the
> irradiation condition and the scoring. Inspect the tabulated counters at the
> end of .out.
>
> The activation of EM transport as well as low-energy neutrons will also be
> reported in .out.
>
> When you multiply pSv/primary with primary/hr, you are normalising /
> linear-calibrating according to your beam intensity. AUXSCORE is different:
> it allows selective filtering of the particles being scored. 'Particles
> being scored' are quite different from beam particles ('primaries').
>
> To plot scored quantities with the geometry imposed: FLAIR > Plot > Geometry
> > Use: Auto. Normally this is already on by default when we use Oz to generate
> the plots.
>
> :) mary
>
> > > On 27 March 2015 at 04:45 Zafar Yasin <Zafar.Yasin_at_cern.ch> wrote:
> >
> >
> > Dear Fluka experts,
> >
> >
> >
> > I am using FLUKA to model a beam dump for a 2.5-5 GeV electron beam. I am
> > not an expert in FLUKA and want to clarify the following, although I have
> > read about these in the fluka discussion forum.
> >
> >
> >
> > Firstly, when I activate the PHOTONUC and LAM-BIAS cards, the computing
> > time increases greatly and the output file is also very big. Are these
> > cards necessary at such a high energy range?
> >
> > Is there a need to activate any special models or libraries for electrons,
> > photons and neutrons?
> >
> >
> > Secondly, I want to calculate dose rates in microSv/hr and for this I am
> > using USRBIN with the DOSE-EQ option. This gives dose in pSv/primary. If I
> > multiply this by, say, my 10^5 electrons/hr, then I can get the dose rate
> > in pSv/hr or microSv/hr. In this case what will be the role of the
> > AUXSCORE card?
> >
> >
> >
> > Thirdly, I want to plot dose rates superimposed over the geometry, as my
> > plots so far do not seem convincing; plot attached.
> >
> >
> >
> > My relevant cards for scoring are:
> >
> >
> >
> > PHOTONUC 1.
> > LAM-BIAS 0.0 0.005 ELECTRON PHOTON
> > USRBIN 10. DOSE-EQ -21. 80. 80. 125. Dose-eq
> > USRBIN -80. -80. -50. 200. 100. 200. &
> > USRBIN 10. DOSE -22. 80. 80. 125.
> > USRBIN -80. -80. -50. 200. 100. 200. &
> >
> >
> >
> > Thank you in advance
> >
> >
> > Zafar Yasin
> >
> > ELI-np
> >
> >
> >
> > >
>
>



Received on Mon Mar 30 2015 - 01:33:26 CEST
