[fluka-discuss]: Normalization for the 2nd step of a two-step process

From: Mina Nozar <nozarm_at_triumf.ca>
Date: Thu, 14 Nov 2013 14:06:56 -0800

Hello everyone,

I am not sure about the normalization in the second step of a two-step process.

In the first step:

Via mgdraw.f and a USERDUMP card, I write out the particles I am
interested in (selected by type and by the boundary they cross). Since I
am using importance biasing in the first step, I write out the particle
weights as well.

So the output file contains one record per particle, like this:

 Id      x          y          z           cosx                     cosy                     cosz                    Ekin       weight
  8  -.2635E+03  0.6864E+02  0.2944E+04  -0.6332783577022189E+00  -0.3722034999484587E+00  -0.6785448226109299E+00  0.6606E-06  0.6400E-05
  7  -.2635E+03  0.6589E+02  0.2946E+04  -0.4822515648543289E+00  -0.8047950128287192E+00   0.3460323908560768E+00  0.8389E-03  0.2133E-06
  7  -.2635E+03  0.7252E+02  0.2941E+04  -0.7274812055368878E+00   0.1436728665088557E+00  -0.6709166885834075E+00  0.1702E-03  0.2133E-06

Out of the 702M primaries in the first step, I get 717984 particles
written out.
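To make the bookkeeping concrete, here is a small Python sketch (not my actual routine, and the file name is just a placeholder) of how I tally the dump file: count the records and sum the weight column.

```python
def tally_dump(path):
    """Count records and sum the weights in a step-1 dump file.

    Assumes one record per line with 9 whitespace-separated fields:
    Id, x, y, z, cosx, cosy, cosz, Ekin, weight (weight last).
    """
    n_records = 0
    total_weight = 0.0
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 9:
                continue  # skip blank or partial lines
            n_records += 1
            total_weight += float(fields[-1])  # weight is the last field
    return n_records, total_weight
```

On my file this gives 717984 records; the weight sum is what my questions below are about.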


In the second step:

Using source.f, I read in the above information and assign the particle weights:

      WTFLK(NPFLKA) = Weight(line)
      WEIPRI = WEIPRI + WTFLK(NPFLKA)

The manual says WEIPRI is the total weight of the primary particles. So
is this basically the sum of the weights (the last column above) of the
particles that get read in? Does WEIPRI get written out somewhere?

I then set up several runs (450 M events in total). The way I understand
it, the program loops over the 717984 records several times to get to
the 450 M primaries. But does the looping happen in a random order? Am
I correct to think that the sum of the weight column in the input file
is NOT equal to WEIPRI?
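To state my assumption explicitly: if source.f keeps cycling through the file, then after N2 primaries WEIPRI should be roughly N2 times the mean record weight, which equals the plain column sum only when N2 happens to equal the number of records. A back-of-envelope sketch (my assumption, not something from the manual):

```python
def expected_weipri(n_primaries, sum_file_weights, n_records):
    """Expected WEIPRI if each sampled record's weight is assigned to
    WTFLK and accumulated: n_primaries times the mean record weight."""
    return n_primaries * (sum_file_weights / n_records)
```

So with 450 M primaries drawn from 717984 records, WEIPRI would be the column sum scaled up by 450e6/717984, not the column sum itself. Is that right?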

And my last question: given the above, how do I normalize the results
from the second step to a per-primary basis?
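To make this question concrete, here is the normalization I *think* applies (please correct me if this is wrong): since FLUKA scorings come out per unit primary weight (divided by WEIPRI), multiplying a step-2 result by the dumped weight per step-1 primary should give a result per step-1 primary.

```python
def per_step1_primary(score_per_unit_weight, sum_dump_weights, n1_primaries):
    """My guess at the two-step normalization:
    scale a step-2 score (per unit primary weight) by the
    dumped weight yielded per step-1 primary."""
    factor = sum_dump_weights / n1_primaries  # dumped weight per step-1 primary
    return score_per_unit_weight * factor
```

Here n1_primaries would be my 702 M first-step primaries and sum_dump_weights the weight sum over the 717984 records. Is this the correct recipe?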


Thank you very much,
Mina
Received on Fri Nov 15 2013 - 00:02:20 CET

This archive was generated by hypermail 2.3.0 : Fri Nov 15 2013 - 00:02:21 CET