Re: [fluka-discuss]: Normalization for the 2nd step of a two-step process

From: Mina Nozar <nozarm_at_triumf.ca>
Date: Fri, 15 Nov 2013 10:35:47 -0800

Dear Vittorio,

Thank you.

But where (how) exactly do I apply the additional normalization factor
of 717984/702M? Is this done at the post-processing stage of the second
step, or before?

Best wishes,
Mina

On 13-11-15 08:08 AM, Vittorio Boccone wrote:
> Hi Mina,
> I assume you are saving the file with these lines (which I extracted
> from the previous long topic):
> > WRITE(IODRAW,100) JTRACK,XSCO,YSCO,ZSCO,CXTRCK,CYTRCK,CZTRCK,
> > & ETRACK-AM(JTRACK),WTRACK
>
> WTRACK is indeed the weight of the particle with respect to the
> primary that generated it.
>
> You get 717984 particles out of the 702M primaries of the first step,
> to be used as primaries for the second step.
>
> For this reason you must apply an additional normalization factor of
> 717984/702M to the weight of each primary which you load in the
> second step.
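>
> In source.f this could be as simple as the following sketch (RNORM
> and WGT are my placeholder names, WGT being the weight read from your
> file):
>
>       PARAMETER ( RNORM = 717984.D+00 / 702.D+06 )
> * RNORM ~ 1.023E-03 = step-1 particles written / step-1 primaries
> * ... then, for each particle loaded onto the stack:
>       WTFLK (NPFLKA) = WGT * RNORM
>       WEIPRI = WEIPRI + WTFLK (NPFLKA)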
>
> You then loop over the 700K particles (400M times), either randomly
> or sequentially. The random seed history is what makes each
> particle's history different. You just need to be sure that these
> 700K particles are a representative sample of your real distribution.
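>
> If you preload the 700K records into arrays, drawing one at random
> for each history could look like this sketch (NREC and WGTARR are my
> names; FLRNDM is the FLUKA random number generator, uniform in [0,1)):
>
> * random record index in [1,NREC]
>       IREC = 1 + INT ( FLRNDM (XDUMMY) * DBLE (NREC) )
>       WGT  = WGTARR (IREC)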
>
> Best regards
> Vittorio
>
> Dr. Vittorio Boccone - University of Geneva
>
> o Address:
> UniGe: Département de physique nucléaire et corpusculaire
> 24 Quai Ernest-Ansermet, CH-1211 Geneve 4, Switzerland
> CERN: CERN, CH-1211 Geneve 23, Switzerland
>
> o E-mail:
> dr.vittorio.boccone_at_ieee.org (professional)
> vittorio.boccone_at_gmail.com (private)
>
> On 14 Nov 2013, at 23:06, Mina Nozar <nozarm_at_triumf.ca> wrote:
>
>> Hello everyone,
>>
>> I am not sure of the normalization in the 2nd step of a two-step
>> process.
>>
>> In the first step:
>>
>> I write out the particles I am interested in (selected by type and
>> boundary crossing) via mgdraw.f and USERDUMP, roughly as sketched
>> below. Since I am using importance biasing in the first step, I
>> write out the weights as well.
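>>
>> The selection happens in the BXDRAW entry of mgdraw.f, roughly like
>> this (NR1 and NR2 stand in for my actual region numbers):
>>
>>       ENTRY BXDRAW ( ICODE, MREG, NEWREG, XSCO, YSCO, ZSCO )
>> * keep only the wanted particle types (7 and 8 in the sample below)
>> * crossing the boundary between the two regions of interest
>>       IF ( MREG .EQ. NR1 .AND. NEWREG .EQ. NR2 ) THEN
>>          IF ( JTRACK .EQ. 7 .OR. JTRACK .EQ. 8 ) THEN
>>             WRITE(IODRAW,100) JTRACK,XSCO,YSCO,ZSCO,CXTRCK,CYTRCK,
>>      &        CZTRCK,ETRACK-AM(JTRACK),WTRACK
>>          END IF
>>       END IF
>>       RETURN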
>>
>> So in the output file I have lines like this:
>>
>> Id, x, y, z, cosx, cosy, cosz, kinetic energy, weight
>> 8 -.2635E+03 0.6864E+02 0.2944E+04 -0.6332783577022189E+00 -0.3722034999484587E+00 -0.6785448226109299E+00 0.6606E-06 0.6400E-05
>> 7 -.2635E+03 0.6589E+02 0.2946E+04 -0.4822515648543289E+00 -0.8047950128287192E+00 0.3460323908560768E+00 0.8389E-03 0.2133E-06
>> 7 -.2635E+03 0.7252E+02 0.2941E+04 -0.7274812055368878E+00 0.1436728665088557E+00 -0.6709166885834075E+00 0.1702E-03 0.2133E-06
>>
>> Out of the 702M primaries in the first step, I get 717984 particles
>> written out.
>>
>>
>> In the second step:
>>
>> Using source.f, I read in the above information and assign the
>> particle weights:
>>
>>       WTFLK (NPFLKA) = Weight(line)
>>       WEIPRI = WEIPRI + WTFLK (NPFLKA)
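>>
>> For context, the rest of the stack loading in my source.f looks
>> roughly like this (standard source.f stack variables; Id(line),
>> Ekin(line), etc. are the values read from one line of the file):
>>
>>       ILOFLK (NPFLKA) = Id(line)
>>       XFLK   (NPFLKA) = x(line)
>>       YFLK   (NPFLKA) = y(line)
>>       ZFLK   (NPFLKA) = z(line)
>>       TXFLK  (NPFLKA) = cosx(line)
>>       TYFLK  (NPFLKA) = cosy(line)
>>       TZFLK  (NPFLKA) = cosz(line)
>>       TKEFLK (NPFLKA) = Ekin(line)
>> * momentum from kinetic energy, as in the source.f template
>> * (TWOTWO = 2.D0 from DBLPRC)
>>       PMOFLK (NPFLKA) = SQRT ( TKEFLK (NPFLKA) * ( TKEFLK (NPFLKA)
>>      &                + TWOTWO * AM (ILOFLK (NPFLKA)) ) )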
>>
>> The manual says WEIPRI is the total weight of the primary particles.
>> So is this basically the sum of the weights (the last column above)
>> of the particles that get read in? Does WEIPRI get written out
>> somewhere?
>>
>> I then set up several runs (450M events total). The way I understand
>> it, the program loops over the 717984 particles several times to get
>> to the 450M primaries. But does the looping happen in a random way?
>> Am I correct to think that the sum of the weight column in the input
>> file IS NOT equal to WEIPRI?
>>
>> And my last question: given the above, how do I normalize results
>> (per primary) from the second step?
>>
>>
>> Thank you very much,
>> Mina
>>
>>
>>
>
Received on Fri Nov 15 2013 - 20:28:43 CET
