RE: [fluka-discuss]: Normalization for the 2nd step of a two-step process

From: <nozarm_at_triumf.ca>
Date: Wed, 20 Nov 2013 20:26:01 -0800 (PST)

Dear Alberto,

Thank you for the explanation. I agree with what you say about the
drawbacks; however, I am only trying to 'speed up' the parts of the
simulation that need a few iterations, to see the effectiveness of
different shielding materials/configurations for the upstream part. I
would use the single-step simulation at the end. Is it OK to use the
two-step approach for this purpose?

From the file of source particles for the 2nd step (written out in the
first step) I get:

Total particles: 717984, Sum of weights: 1427.37

From the first-step output files, I get:
Total primaries: 702000000, Sum of weights: 702000000

So the normalization factor should be 1427.37/702000000,

and I apply this normalization to the various distributions scored in the
second step. Correct?
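
For concreteness, this is how I would fold the factor in at post-processing
(a minimal sketch, using the numbers above; every second-step score would
simply be multiplied by W2/W1):

      PROGRAM RENORM
*     Correction factor to re-express 2nd-step scores "per 1st-step
*     primary": W2 = sum of weights in the 2nd-step source file,
*     W1 = total primary weight of the 1st step.
      DOUBLE PRECISION W1, W2
      PARAMETER ( W1 = 702.0D+06, W2 = 1427.37D0 )
      WRITE (*,*) 'Multiply 2nd-step scores by W2/W1 =', W2/W1
      END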

Thank you, Alberto.
Mina

> Hi Mina and Vittorio,
>
> first of all let me say that I strongly advise against two-step
> calculations. Sometimes they are necessary (I have done some myself), but
> they present several drawbacks.
> 1) They make it nearly impossible to do a good estimate of the
> statistical errors. I say "nearly" because in theory it can be done, but
> the way to do it is so complicated that in practice it is never done.
> 2) In the second step of a two-step calculation one simply "cuts out" a
> large portion of phase space, while one can never be sure that particles
> from that portion would not have contributed to the score of interest.
> The ignored contributions are not necessarily direct, but could arise
> from multiply scattered particles or from their secondaries. The
> principle of good biasing is to sample the whole phase space, although
> more frequently in the parts which are more likely to contribute, and
> less frequently in those that are less likely. But in a two-step
> calculation, the portion of phase space which is not used to create the
> second-step source is sampled ZERO times, while the second-step source is
> generally sampled MANY times. It is a kind of extreme splitting which
> easily introduces systematic errors.
> 3) The last drawback, which is the subject of the present thread, is the
> difficulty of normalization. It is very easy to make mistakes.
> I will try to clarify this point.
>
> FLUKA's standard scoring (i.e. neither the scoring written by the user via
> MGDRAW, nor the event-by-event scoring provided by some commands, but the
> scoring of most detectors such as USRBIN, USRTRACK, etc.) is normalized
> per unit primary WEIGHT. Primary weight, not number of primaries! (They
> may have the same value, but only in the complete absence of biasing,
> which is not the case in Mina's problem.)
> Therefore, all the scores output in the second step have been divided by
> the total weight of the particles used in that step. And those particles
> could have been used exactly once, or several times, or a non-integer
> number of times, e.g. 3.2 times, depending on how the source has been
> written. But it does not matter: what is important is what is in the file,
> not how many times the file is used in the 2nd-step run.
> The correction factor to be applied is the ratio W2/W1, where W2 is the
> sum of weights in the file used in the second step, and W1 is the sum of
> the weights used in the first step.
> W2 must be calculated by a small program or a spreadsheet, and W1 can be
> found at the end of the standard output of the first step:
> "Total number of particles run: xxxxxxxxx for a weight of
> xxxxxxxx"
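>
> For example, W2 can be obtained with a small Fortran program along these
> lines (an untested sketch; it assumes the 9-column ASCII format of Mina's
> dump, quoted further below, and a file name chosen by the user):
>
>       PROGRAM SUMW
> *     Sum the weights (last column) of the 2nd-step source file:
> *     id, x, y, z, cosx, cosy, cosz, Ekin, weight
>       DOUBLE PRECISION X, Y, Z, CX, CY, CZ, EKIN, WT, W2
>       INTEGER ID, N, IOS
>       W2 = 0.D0
>       N  = 0
>       OPEN ( UNIT=21, FILE='step1_dump.dat', STATUS='OLD' )
>    10 READ ( 21, *, IOSTAT=IOS ) ID, X, Y, Z, CX, CY, CZ, EKIN, WT
>       IF ( IOS .NE. 0 ) GO TO 20
>       W2 = W2 + WT
>       N  = N  + 1
>       GO TO 10
>    20 CLOSE ( 21 )
>       WRITE (*,*) 'Total particles:', N, '  Sum of weights:', W2
>       END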
>
> Alberto
>
> ________________________________________
> From: owner-fluka-discuss_at_mi.infn.it [owner-fluka-discuss_at_mi.infn.it] On
> Behalf Of Mina Nozar [nozarm_at_triumf.ca]
> Sent: Friday, November 15, 2013 10:35 AM
> To: Vittorio Boccone
> Cc: fluka-discuss_at_fluka.org
> Subject: Re: [fluka-discuss]: Normalization for the 2nd step of a two-step
> process
>
> Dear Vittorio,
>
> Thank you.
>
> But where (how) exactly do I apply the additional normalization factor of
> (717984/702M)? Is this at post-processing stage of the second step or
> before?
>
> Best wishes,
> Mina
>
> On 13-11-15 08:08 AM, Vittorio Boccone wrote:
> Hi Mina,
> I assume you are saving the file with these lines (which I extracted from
> the previous long topic):
>> WRITE(IODRAW,100) JTRACK,XSCO,YSCO,ZSCO,CXTRCK,CYTRCK,CZTRCK,
>> & ETRACK-AM(JTRACK),WTRACK
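>> * (JTRACK = FLUKA particle id; XSCO, YSCO, ZSCO = coordinates of the
>> *  dumped point; CXTRCK, CYTRCK, CZTRCK = direction cosines;
>> *  ETRACK - AM(JTRACK) = kinetic energy; WTRACK = statistical weight)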
>
> WTRACK is indeed the weight of the particle with respect to the primary
> that generated it.
>
> You get 717984 particles out of the 702M of the first step, to be used as
> primaries for the second step.
>
> For this reason you must apply an additional normalization factor of
> 717984/702M to the weights of the primaries which you load in the second
> step.
>
> You then loop over the 700K particles (400M times) in a random way or
> sequentially. The random-number seed is what makes the history of each
> particle different. You just need to be sure that these 700K particles
> are a representative sample of your real distribution.
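>
> To give an idea, the random pick inside the standard source.f template
> could look roughly like this (a sketch only: the NREC records are assumed
> to have been read into user arrays - IDREC, EREC, XREC, YREC, ZREC,
> CXREC, CYREC, CZREC, WREC - at the first call; only FLRNDM and the FLUKA
> stack variables are real names, the arrays are made up):
>
> *      Choose one of the NREC stored records uniformly at random
>        IREC = 1 + INT ( FLRNDM (XDUMMY) * DBLE (NREC) )
>        IF ( IREC .GT. NREC ) IREC = NREC
>        ILOFLK (NPFLKA) = IDREC (IREC)
>        TKEFLK (NPFLKA) = EREC  (IREC)
>        XFLK   (NPFLKA) = XREC  (IREC)
>        YFLK   (NPFLKA) = YREC  (IREC)
>        ZFLK   (NPFLKA) = ZREC  (IREC)
>        TXFLK  (NPFLKA) = CXREC (IREC)
>        TYFLK  (NPFLKA) = CYREC (IREC)
>        TZFLK  (NPFLKA) = CZREC (IREC)
>        WTFLK  (NPFLKA) = WREC  (IREC)
>        WEIPRI = WEIPRI + WTFLK (NPFLKA)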
>
> Best regards
> Vittorio
>
> Dr. Vittorio Boccone - University of Geneva
>
> o Address:
> UniGe: Département de physique nucléaire et corpusculaire
> 24 Quai Ernest-Ansermet, CH-1211 Geneve 4, Switzerland
> CERN: CERN, CH-1211 Geneve 23, Switzerland
>
> o E-mail:
> dr.vittorio.boccone_at_ieee.org (professional)
> vittorio.boccone_at_gmail.com (private)
>
> On 14 Nov 2013, at 23:06, Mina Nozar <nozarm_at_triumf.ca> wrote:
>
> Hello everyone,
>
> I am not sure of the normalization in the 2nd step of a two-step process.
>
> In the first step:
>
> I write out the particles I am interested in (selected by type and by the
> boundary they cross) via mgdraw.f and USERDUMP. Since I am using
> importance biasing in the first step, I write out the weights as well.
>
> So in the output file I have lines like this:
>
> Id, x, y, z, cosx, cosy, cosz, kinetic energy, weight
> 8 -.2635E+03 0.6864E+02 0.2944E+04 -0.6332783577022189E+00
> -0.3722034999484587E+00 -0.6785448226109299E+00 0.6606E-06 0.6400E-05
> 7 -.2635E+03 0.6589E+02 0.2946E+04 -0.4822515648543289E+00
> -0.8047950128287192E+00 0.3460323908560768E+00 0.8389E-03 0.2133E-06
> 7 -.2635E+03 0.7252E+02 0.2941E+04 -0.7274812055368878E+00
> 0.1436728665088557E+00 -0.6709166885834075E+00 0.1702E-03 0.2133E-06
>
> Out of the 702M primaries in the first step, I get 717984 particles
> written out.
>
>
> In the second step:
>
> Using source.f, I read in the above information and assign the particle
> weights:
>
> WTFLK(NPFLKA) = Weight(line)
> WEIPRI = WEIPRI + WTFLK(NPFLKA)
>
> The manual says WEIPRI is the total weight of the primary particles. So is
> this basically the sum of the weights (the last column above) of the
> particles that get read in? Does WEIPRI get written out somewhere?
>
> I then set up several runs (450 M events). The way I understand it, the
> program loops over the 717984 particles several times to get to the 450 M
> primaries. But does the looping happen in a random way? Am I correct to
> think that the sum of the weight column in the input file IS NOT equal to
> WEIPRI?
>
> And my last question: how do I normalize the information (per primary)
> from the second step, given the above?
>
>
> Thank you very much,
> Mina
>
>
>