Re: FLUKA in MPI Environment

From: Niels Bassler <>
Date: Sat, 01 Nov 2008 18:41:11 +0100

On Sat, 2008-11-01 at 01:27 +0100, Chris Theis wrote:

> What one could do without adapting the source is to use a scheduler/load
> balancing system (e.g., Condor) to send multiple individual FLUKA jobs
> to a cluster where each node will run its own batch of calculations.

If anyone is interested, I have uploaded a Python script ("rcfluka")
which does the job for a Condor cluster. You can find it in

The idea is to have a simple interface to Condor for submitting FLUKA
jobs. rcfluka uses a similar input parameter format to the rfluka
command, but runs multiple jobs (specified with the usual -M option)
in parallel instead of the usual sequential execution. For example, to
submit foo.inp on 20 nodes, you simply execute

rcfluka -M20 foo
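As a rough illustration of what such a parallel submission has to do behind the scenes, here is a minimal sketch of preparing one input copy per node with a distinct random seed (the function name make_job_inputs and the seed placement are my own invention, and the exact RANDOMIZ card layout should be checked against the FLUKA manual -- this is not taken from rcfluka itself):

```python
# Hypothetical sketch: copy the input once per Condor job, giving each
# copy a distinct RANDOMIZ seed so the parallel runs are statistically
# independent.

def make_job_inputs(inp_text, n_jobs):
    """Return a list of (filename, contents) pairs, one per job."""
    jobs = []
    for i in range(n_jobs):
        lines = []
        seeded = False
        for line in inp_text.splitlines():
            if line.startswith("RANDOMIZ"):
                # replace the seed (WHAT(2)) with a per-job value
                line = "RANDOMIZ         1.0%10.1f" % float(i + 1)
                seeded = True
            lines.append(line)
        if not seeded:
            # prepend a seed card if the input had none
            lines.insert(0, "RANDOMIZ         1.0%10.1f" % float(i + 1))
        jobs.append(("foo_node%03d.inp" % i, "\n".join(lines) + "\n"))
    return jobs
```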

After execution, the files are renamed as if the job had been submitted
with rfluka, so the output can be used directly by the uswxxx routines.
Note, however, that the output data is not averaged: if you submit a
-M20 run with 100,000 primary particles specified in the input file, it
corresponds to having simulated 2,000,000 particles in total.
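Since the per-node output is not averaged, the combination has to be done afterwards. For a quantity scored per primary, one simple scheme (a sketch of the statistics only, not of what the uswxxx utilities do internally) is the mean of the per-run means with its standard error:

```python
import math

# Sketch: combine N independent runs that each simulated the same
# number of primaries, e.g. per-run dose estimates normalised per
# primary. The combined value is the mean of the per-run means, with
# the standard error of that mean as the statistical uncertainty.
def combine_runs(per_run_means):
    n = len(per_run_means)
    mean = sum(per_run_means) / n
    var = sum((x - mean) ** 2 for x in per_run_means) / (n - 1)
    stderr = math.sqrt(var / n)
    return mean, stderr
```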

The script can also take user routines as arguments and compiles them
"on the fly" on the remote node:

rcfluka -M5 foo -s fluscw.f,comscw.f -l ldpm3qmd

A working FLUKA installation and g77 must therefore be available on
every node for this script to function. The advantage of this approach
is that it enables the use of heterogeneous clusters running different
Linux flavours, glibc versions, etc. However, currently only the
vanilla universe is supported. Multiple Condor pools are not
implemented (since we only have a single one here), but this should be
easy to add.
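For readers unfamiliar with Condor, a vanilla-universe submission of this kind would be driven by a submit description roughly like the following (the file and executable names here are purely illustrative, not taken from rcfluka):

```
# Hypothetical Condor submit description for 20 parallel FLUKA jobs
universe                = vanilla
executable              = run_fluka.sh
arguments               = foo_node$(Process).inp
output                  = foo_node$(Process).out
error                   = foo_node$(Process).err
log                     = rcfluka.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue 20
```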

Please check rcfluka -h for the complete option list. The "-t" option
is useful for testing the submission process first. "-d" disables the
use of random seeds, which may be helpful for debugging. It is also
strongly recommended to test your input files with a single ordinary
"rfluka" run before submitting to the cluster.

Cheers and hope someone may find this useful,

Niels Bassler
Received on Sat Nov 01 2008 - 19:31:10 CET
