From: Alberto Fasso' <fasso_at_mail.cern.ch>

Date: Fri, 18 Feb 2011 00:43:23 +0100

Dear Roger,

I don't see why the group algorithm should change the statistical errors.

The statistical errors are just a measure of the dispersion around an average

of the results from a number of runs (the results of a run being a "sample"

taken from a statistical distribution). They have nothing to do with the

physics. People calculate standard deviations of quantities which

are not physical at all. An example from wikipedia:

"the standard deviation on the rate of return on an investment is a measure of

the volatility of the investment".

The group algorithm is based on probabilities (cross sections, angular

distributions) just as what you call "microscopic analogue Monte Carlo".

Tracking of neutrons happens along straight lines (since they are not charged),

the sampled position of an interaction is based on the total cross section

(probability per cm), the kind of interaction (absorption, scattering) is

based on the relative probability of the possible outcomes, and the

jump from one energy group to another (which seems to worry you) happens

according to a matrix of probabilities. The possible angles of scattering are

sampled so that the angular distribution is reproduced up to its 6th

moment.
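The group-wise random walk described above can be sketched in a few lines. This is a toy illustration only, with made-up three-group numbers (SIGMA_TOT, P_ABSORB, P_TRANSFER are assumptions, not real FLUKA data): the distance to the next interaction is exponential in the total cross section, the kind of interaction is chosen from the relative probabilities, and a scattering picks the new group from the transfer-probability matrix.

```python
import random

random.seed(42)

# Hypothetical 3-group data, purely for illustration (not FLUKA cross sections).
SIGMA_TOT = [0.5, 0.8, 1.2]   # total macroscopic cross section per group (1/cm)
P_ABSORB  = [0.3, 0.2, 0.1]   # probability that an interaction is an absorption
# P_TRANSFER[g][g'] = probability of scattering from group g into group g'
P_TRANSFER = [
    [0.6, 0.3, 0.1],
    [0.0, 0.7, 0.3],
    [0.0, 0.0, 1.0],
]

def track_neutron(group):
    """Follow one neutron through the group-wise random walk until absorption."""
    path = 0.0
    while True:
        # Distance to the next interaction: exponential, mean 1/Sigma_tot.
        path += random.expovariate(SIGMA_TOT[group])
        # Kind of interaction from the relative probabilities.
        if random.random() < P_ABSORB[group]:
            return path, group   # absorbed here
        # Scattering: jump to a new group via the transfer-probability matrix.
        group = random.choices(range(len(SIGMA_TOT)), weights=P_TRANSFER[group])[0]

total_path, final_group = track_neutron(0)
print(total_path, final_group)
```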

The content of a bin is the result of all these random processes, in exactly

the same way as if a microscopic analogue Monte Carlo were used.

And averaging over neighbouring bins just increases the size of each sample,

therefore reducing the standard deviation: that is true both with and without

the group algorithm.
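The effect of averaging neighbouring bins on the standard deviation can be demonstrated numerically. The sketch below assumes, purely for illustration, four bins whose scores are independent and share the same true mean: merging them increases the sample size fourfold and roughly halves the standard error, exactly as for larger bins requested in input.

```python
import random
import statistics

random.seed(0)

# Simulated per-run scores for 4 neighbouring bins over many independent runs
# (same true mean in every bin -- an assumption made here for illustration).
runs = 1000
bins = 4
scores = [[random.gauss(10.0, 2.0) for _ in range(bins)] for _ in range(runs)]

# Standard error of a single bin across the runs:
single = [row[0] for row in scores]
se_single = statistics.stdev(single) / runs**0.5

# Standard error of the average over the 4 neighbouring bins:
merged = [sum(row) / bins for row in scores]
se_merged = statistics.stdev(merged) / runs**0.5

# Averaging 4 independent bins reduces the error by about 1/sqrt(4).
print(se_single, se_merged)
```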

Biasing is another matter. One of the purposes of biasing is to reduce the

variance of the results (the other one is to reduce the CPU time needed to

get the same average result). One samples from a distribution different from

the actual physical one, with the same average and smaller variance. Averaging

over neighbouring bins increases the size of the sample also in this case,

although it is a sample from a different distribution. So, in addition to

the variance reduction obtained thanks to the biasing, one gets a further

reduction due to the larger sample size.
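The variance reduction from biasing can be seen in a minimal importance-sampling sketch, which is a generic textbook example and not FLUKA's actual scheme: both estimators of the same quantity have the same average, but the one sampled from a biased distribution (with statistical weights restoring the physical mean) has a smaller variance.

```python
import math
import random
import statistics

random.seed(1)

# Estimate E[exp(-X)] = 1/2 for X ~ Exponential(1) in two ways:
# analogue sampling vs a biased distribution Exponential(1.5) with weights.
N = 20000

# Analogue: sample directly from the physical distribution.
analogue = [math.exp(-random.expovariate(1.0)) for _ in range(N)]

# Biased: sample from Exponential(1.5), weight by the pdf ratio p(x)/q(x).
biased = []
for _ in range(N):
    x = random.expovariate(1.5)
    weight = math.exp(-x) / (1.5 * math.exp(-1.5 * x))   # p(x)/q(x)
    biased.append(weight * math.exp(-x))

# Same mean (about 0.5), but the biased estimator has a smaller variance,
# because sampling is concentrated where the scored quantity is large.
print(statistics.mean(analogue), statistics.variance(analogue))
print(statistics.mean(biased), statistics.variance(biased))
```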

But remember what Mario told you: all this concerns PRECISION, i.e. the

dispersion of results, not ACCURACY, i.e. how much your average results are

close to the quantity you want to calculate. Averaging neighbouring bins makes you

lose spatial resolution: you could have obtained the same result by asking in

input for larger bins.

Alberto

On Thu, 17 Feb 2011, Roger Hälg wrote:

> Dear Alberto
>
> Thank you for your illustrative explanations. I have a further question.
> FLUKA is a microscopic analogue Monte Carlo code with the exception of
> the handling of low energy neutrons. Does this group-algorithm change
> anything concerning statistical errors when averaging over neighbouring
> bins? Or in other cases when biasing is used?
>
> Regards,
>
> Roger Hälg

Received on Fri Feb 18 2011 - 10:44:03 CET
