RE: Workstation Computers for FLUKA

From: Chris Theis <>
Date: Fri, 26 Nov 2010 17:59:28 +0000

Hello Nicholas,

> I don't know if you've looked at the performance of GPU cards lately, but the
> latest Tesla cards can perform double precision at a 515 GFLOPS peak, where
> as a i7 980X (top of the line i7) scores about 106 GFLOPS.

coming from commercial 3D software development, I try to stay up to
date in that field, so I take the liberty of claiming that I'm quite
well aware of the terrific performance and possibilities that GPUs
offer. However, in practice there are a number of things to pay
attention to:

Currently the implementation of IEEE 754 strongly depends on the model
of your GPU. Some do not implement it at all, some only to a certain
extent, and some are fully compliant. So you have a very nice mixture
out there, in addition to the problem of proprietary platforms like
CUDA. The second point can be solved via OpenCL, but the first one is
a bit more tricky. The situation in professional computer graphics is
also different from that of scientific applications. A studio can buy
a large number of graphics cards and render its images on this
identical farm, so it will not experience any divergence of results.
Even if it did (which actually is the case for motion-blur algorithms
in combination with global illumination), it would not matter much,
because the customer's eye will hardly notice.

However, a typical scientific working environment is first of all
very heterogeneous. In addition, imagine the following case: you run
a simulation and get some results. Some time later you re-run that
old simulation (maybe also with a new version of your MC code) and
you get a deviation. Now the fun part starts, because you will have
to figure out where this deviation actually comes from. Could it be
that the physics model has changed, could it be a bug, or is it maybe
only because you have bought a new graphics card? I can assure you
that tracking down these things can be quite a nightmare.

I'm certainly in favor of using modern GPUs, but what I want to point
out is that for scientific purposes their applicability comes with a
number of constraints. With a view to quality assurance, one would
first of all have to ensure homogeneous computing environments, at
least until we reach the stage where GPUs have become standardized
enough (this should hopefully happen soon!). The second constraint is
that one would need to use either C or something like CUDA FORTRAN
(2003) to implement the algorithms.

But I surely would be interested to read the opinion of the FLUKA
development team on this subject, because presentations on GPU
implementations are popping up more and more frequently at various
conferences.

Received on Sat Nov 27 2010 - 13:24:30 CET
