Re: Problem with source.f

From: Alberto Fasso' (fasso@SLAC.Stanford.EDU)
Date: Fri Nov 03 2006 - 03:34:42 CET

  • Next message: Lev Shekhtman: "reciol spectrum from low energy neutrons in lAr"

    I was sure that somebody would not like my message about double
    precision. Like everybody else, I was taught by my physics
    professors (many years ago) to never use more significant figures than
    indicated by the experimental uncertainties.

    My experience has shown me that, WHEN DOING CALCULATIONS WITH COMPUTERS,
    that teaching is badly wrong. It is not a question of uncertainty,
    it is something related to the way computers work and do rounding and
    truncation, especially when calculating a small difference between
    two large numbers, or when solving a system of equations. It is essentially
    a question of consistency. Modern textbooks on numerical computation
    are well aware of that, but the old myth of the significant figures
    matching experimental uncertainty is still very alive.
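    The cancellation problem mentioned above can be shown in a few lines. This is
    a minimal Python sketch (FLUKA itself is Fortran; Python floats are the same
    IEEE double precision), using the classic example (1 - cos(x))/x**2 for small x,
    where the subtraction of two nearly equal numbers destroys every significant digit:

```python
import math

x = 1e-8

# Naive form: cos(x) is 0.999...9, so 1 - cos(x) cancels almost all
# significant digits and keeps only rounding noise.
naive = (1.0 - math.cos(x)) / x**2

# Algebraically identical but stable form: 2*sin(x/2)**2 never subtracts
# two nearly equal numbers, so the digits survive.
stable = 2.0 * math.sin(x / 2.0)**2 / x**2

print(naive)   # 0.0 -- every digit lost to cancellation
print(stable)  # ~0.5, the correct limiting value
```

    Both inputs are "known" far better than any experimental uncertainty; it is the
    arithmetic, not the data, that destroys the result.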

    I can tell here an anecdote concerning the first years of modern FLUKA,
    when Alfredo Ferrari and myself started modifying an old existing code,
    written in the classical physicist style I targeted in my previous mail.
    The code had a number of problems (apparent non-conservation of energy,
    frequent crashes and so on).
    We tried to clean it and improve it: for instance, in that code, at different
    places the number pi was written in many different ways: 3.14, 3.1415,
    3.14159, atan(1.)*4.... None of these ways was wrong in itself: after all, the
    error of each approximation was always smaller than 0.05 %. The same
    was true for the particle masses, written at different places with a different
    number of decimal figures, but always within the known experimental error.
    After rewriting each number with the maximum number of figures allowed
    by the computer, and ALWAYS IDENTICAL, the performance of the code
    improved dramatically (and energy was conserved to within a relative error of 1.E-11!)
    Since then, we decided that FLUKA should be fully in double precision,
    and all numbers should be written exactly with the (about) 16 digits allowed
    by double precision.
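    The effect of inconsistent constants is easy to reproduce. A hypothetical
    sketch in Python (the constant names and the degree/radian round trip are
    illustration only, not FLUKA code): when two parts of a program use different
    approximations of pi, an identity that should hold exactly fails at the level
    of the sloppier constant, no matter how "accurate enough" each value is on its own.

```python
import math

PI_SLOPPY = 3.1415   # "good to 0.003%", classical physicist style
PI = math.pi         # full double precision, used everywhere identically

angle = 90.0

# Round trip degrees -> radians -> degrees with MIXED constants:
# the two pi's disagree, so the identity angle == angle fails.
mixed = (angle * PI_SLOPPY / 180.0) * 180.0 / PI

# Same round trip with ONE consistent constant: exact to rounding error.
consistent = (angle * PI / 180.0) * 180.0 / PI

print(mixed)       # ~89.9973, off by about 0.003
print(consistent)  # 90.0 to within a few ulps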
    Other areas where this approach had a dramatic effect were particle transport
    and geometry. Cosines MUST be normalized so that the sum of the squares is 1
    in double precision. Body description, especially for slanted bodies
    (cones, tilted planes), must be entered with full precision to avoid the
    risk of crashes or wrong results. Yet, a cosine or a cone is never
    "known" (or measured, or whatever) with an accuracy better than a few percent.
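    A small Python sketch of the cosine normalization (the function name is mine,
    not FLUKA's): direction cosines entered with only a few significant figures
    are each individually "accurate enough", yet the sum of their squares misses 1
    by far more than double precision tolerates, until they are rescaled.

```python
import math

def normalize(u, v, w):
    """Rescale direction cosines so that u*u + v*v + w*w == 1 in double precision."""
    norm = math.sqrt(u*u + v*v + w*w)
    return u / norm, v / norm, w / norm

# A direction along the cube diagonal, typed with 3 significant figures:
u, v, w = 0.577, 0.577, 0.577
print(u*u + v*v + w*w)   # 0.998787 -- off by ~1.2e-3

u, v, w = normalize(u, v, w)
print(u*u + v*v + w*w)   # 1.0 to within a rounding error
```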

    There are many other tricks used in FLUKA to minimize computer error.
    a**2 - b**2 is never written this way, but always (a+b)*(a-b). Even solving
    a simple 2nd degree equation requires special attention: what if
    the value of b**2 is very close to 4*a*c? A few percent inaccuracy
    can become amplified to an error of several orders of magnitude.
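    Both tricks can be demonstrated in a few lines of Python (a sketch of the
    standard techniques, not FLUKA source): the factored form (a+b)*(a-b), and a
    quadratic solver that computes the numerically safe root first and recovers
    the other from the product of the roots, c/a = x1*x2.

```python
import math

# --- Trick 1: a**2 - b**2 written as (a+b)*(a-b) --------------------------
# When a and b are close, a*a and b*b each lose their low-order digits to
# rounding, and the subtraction keeps only the damaged part.
a, b = 1e8 + 1.0, 1e8
direct = a*a - b*b            # a*a is already rounded: the last digit is gone
factored = (a + b) * (a - b)  # every intermediate here is exact
print(direct, factored)       # factored gives the exact answer 200000001.0

# --- Trick 2: a cancellation-free quadratic solver ------------------------
def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 without catastrophic cancellation.

    The textbook formula (-b +/- sqrt(b**2 - 4*a*c)) / (2*a) subtracts two
    nearly equal numbers for one of the roots whenever b**2 >> 4*a*c.
    Remedy: compute the safe root first, then the other from x1*x2 = c/a.
    """
    disc = math.sqrt(b*b - 4.0*a*c)
    q = -0.5 * (b + math.copysign(disc, b))
    return q / a, c / q

x1, x2 = quadratic_roots(1.0, 1e8, 1.0)   # exact roots: ~-1e8 and ~-1e-8

# The naive formula for the small root keeps only rounding noise:
naive_small = (-1e8 + math.sqrt(1e16 - 4.0)) / 2.0
print(x2, naive_small)   # x2 is correct; naive_small is wrong in its first digit
```

    Here the input coefficients are exact, yet the naive small root is off by
    tens of percent: exactly the amplification by orders of magnitude described above.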

    So, believe it or not, this paranoia of ours concerning numerical
    precision has become one of the best assets of FLUKA, nearly as important
    as the quality of the physical models, and muuuuuch more important than
    the availability (or non-availability) of friendly user interfaces.
    Unfortunately most often it is from the latter that people judge the
    quality of a code.
    So, if you were "irritated" by my message, perhaps now you can understand
    how irritated I am when I see user-written routines spoiling the quality
    of our code, when just a little discipline would allow them to obtain the
    best results. I am writing this not "to get into arguments", but
    just because it is an issue of great importance that most users ignore.

    Kind regards,

    Alberto

    On Thu, 2 Nov 2006, Joseph Comfort wrote:

    > Hi Francesco,
    >
    > Thank you for the suggestion. It worked. And I have a better
    > understanding of how to prepare geometries.
    >
    > On another topic, the message about double precision that was sent out
    > by someone earlier rather irritated me, and I can't get over it. We
    > need to keep in mind that the data upon which the models are being built
    > are seldom (if ever) better than 1% or so, absolute. If a model can get
    > within a few percent (3-4 significant digits) of good data, it is doing
    > well. This is just a comment and I don't mean to get into arguments,
    > nor to lose any friendships.
    >
    > Thank you,
    >
    > Joe

    -- 
    Alberto Fasso`
    SLAC-RP, MS 48, 2575 Sand Hill Road, Menlo Park CA 94025
    Phone: (1 650) 926 4762   Fax: (1 650) 926 3569
    fasso@slac.stanford.edu
    


    This archive was generated by hypermail 2.1.6 : Fri Nov 03 2006 - 09:47:51 CET