[fluka-discuss]: RES: Estimating hardware configurations for bremsstrahlung simulations

From: Marlon Saveri Silva <marlon.saveri_at_lnls.br>
Date: Fri, 26 Feb 2016 16:48:54 +0000

                Thanks for answering. I haven't tried using Linux directly, only because I'm using Windows for other work at the same time, but it seems to be a good idea.

                Well, actually I don't know how FLUKA is really running. When I use several spawns I see in the System Monitor that all processors (and memory) are being used, so I assumed it was running in parallel (number of spawns in Flair = number of processors (or cores) working in parallel?). However, according to http://willworkforscience.blogspot.com.br/2012/01/quick-and-dirty-guide-for-parallelizing.html, FLUKA runs in "serial", so I didn't understand what was meant by "serial".

                About these different cores: does it mean that I need to run FLUKA through software or code other than Flair, programming the task division myself?

                About the computational time: given that it takes 8 days for 1E8 primaries with our present hardware configuration, is it possible to estimate the number of processors/cores/memory needed to run it in 1 day?
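For a back-of-the-envelope estimate, assuming the cycles are statistically independent and throughput scales linearly with the number of concurrent spawns (ignoring memory and I/O contention, which is optimistic), the arithmetic would look like this:

```python
# Rough scaling estimate from the numbers in this thread:
# ~1e8 primaries take ~8 days when spread over 6 spawns.
# Assumes perfect linear speedup, which real hardware won't quite reach.

current_days = 8.0
current_spawns = 6

cpu_days_total = current_days * current_spawns   # total serial work, in CPU-days

target_days = 1.0
spawns_needed = cpu_days_total / target_days     # concurrent cores required

print(f"Total work: {cpu_days_total:.0f} CPU-days")
print(f"Cores needed to finish in {target_days:.0f} day: {spawns_needed:.0f}")
```

Under that assumption, finishing the same 1E8-primary job in 1 day would take on the order of 48 cores, and 5E8 primaries proportionally more.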

                And, finally, about Rick's answer: it seems to be an excellent solution. We're looking at gas bremsstrahlung and neutrons. What's the easiest way to import such spectra (outputs of the initial run for neutrons and photons) and use them as input for other runs? Is there some automatic tool, or do you suggest using a USRBDX card, extracting the spectra, saving them in a .dat file and then reading them from source.f? Can this second approach handle all the parameters of the new source (different weights, angular directions)?
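The usual pattern for a tabulated spectrum is to build a cumulative distribution over the energy bins and sample by inverse transform. In FLUKA this would live in the Fortran source.f user routine; the sketch below shows the idea in Python, with made-up bin edges and fluences standing in for real USRBDX output:

```python
import bisect
import itertools
import random

rng = random.Random(42)

# Hypothetical tabulated spectrum: bin lower/upper edges [GeV] and a
# relative fluence per bin. In practice these numbers would come from
# the USRBDX output of the initial electron run; they are invented here.
edges = [0.001, 0.01, 0.1, 1.0, 10.0]     # 5 edges -> 4 energy bins
weights = [5.0, 3.0, 1.5, 0.5]            # relative intensity per bin

# Normalized cumulative distribution over the bins.
cum = list(itertools.accumulate(weights))
cdf = [c / cum[-1] for c in cum]

def sample_energy():
    """Pick a bin by inverse-CDF lookup, then sample uniformly inside it.
    (A real source.f would also set the particle's statistical weight
    and direction, which a 1-D energy table alone cannot reproduce.)"""
    i = bisect.bisect_left(cdf, rng.random())
    lo, hi = edges[i], edges[i + 1]
    return lo + rng.random() * (hi - lo)

energies = [sample_energy() for _ in range(1000)]
```

Note the caveat in the comment: a plain energy histogram loses the weight and angular information, which is exactly the limitation the question raises; a double-differential (energy × angle) table would be needed to keep both.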







-----Original Message-----
From: owner-fluka-discuss_at_mi.infn.it [mailto:owner-fluka-discuss_at_mi.infn.it] On behalf of akardeep_at_barc.gov.in
Sent: Friday, 26 February 2016 09:10
To: Marlon Saveri Silva; fluka-discuss_at_fluka.org <fluka-discuss_at_fluka.org>
Subject: Re: [fluka-discuss]: RE: Estimating hardware configurations for bremsstrahlung simulations



You can spawn the run, which assigns different cycles of the run to different cores.



From: Rick Donahue [mailto:rjdonahue_at_lbl.gov]
Sent: Thursday, 25 February 2016 18:37
To: Marlon Saveri Silva <marlon.saveri_at_lnls.br>
Subject: Re: [fluka-discuss]: Estimating hardware configurations for bremsstrahlung simulations

Marlon,

You mentioned 1 atm so I'm guessing you're looking specifically at gas brem. If so then maybe you only need one
initial run with e- to get the downstream brem spectrum. Subsequent runs can just sample this brem spectrum as
the source. You can normalize results per watt of brem power. If you start w/ e- in each run then you're using
a lot of computing time calculating the same brem spectrum over and over. Good luck.

-Rick



From: Trinh Ngoc-Duy [mailto:Ngoc-Duy.Trinh_at_ganil.fr]
Sent: Friday, 26 February 2016 05:36
To: Marlon Saveri Silva <marlon.saveri_at_lnls.br>; fluka-discuss_at_fluka.org
Subject: RE: Estimating hardware configurations for bremsstrahlung simulations

Dear Marlon Saveri Silva,

From my experience, when I run a heavy program on a virtual machine, CPU performance slows down (RAM does not play a big role there; your 11 GB is already plenty). Have you ever tried installing Linux on your machine and running FLUKA directly within it?

From your e-mail, I suppose that you run FLUKA in parallel on different cores of your machine. If you want to run FLUKA on a supercomputer, keep in mind that you will have to run FLUKA on different cores of different CPUs of the supercomputer. That task will be more complicated; I have never tried it.

The computational time in fact depends on each simulation, so you have to estimate it for each job. It's difficult to choose one set of computational parameters for all jobs.

Best regards.

Ngoc Duy TRINH, GANIL, France


________________________________
From: owner-fluka-discuss_at_mi.infn.it [owner-fluka-discuss_at_mi.infn.it] on behalf of Marlon Saveri Silva [marlon.saveri_at_lnls.br]
Sent: Thursday, 25 February 2016 20:30
To: fluka-discuss_at_fluka.org
Subject: [fluka-discuss]: Estimating hardware configurations for bremsstrahlung simulations

Dear Fluka experts,

   I'm working on bremsstrahlung simulations for synchrotron beamlines in order to check the dose absorbed by some components and the equivalent dose in some regions. I'm already applying 1 atm inside the electron pipe and using the recommended biasing techniques (from the ADONE example) to speed up the simulation. Even so, in order to get USRBIN statistics better than 20%, I need at least 1E8 primaries. Using 11 GB RAM and 7 Intel Xeon 3.2 GHz processors in an Ubuntu virtual machine running on Windows 7 64-bit, with 4 cycles and 6 spawns, it takes more than 8 days and generates more than 6 GB of files.
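On the relation between statistics and primaries: for a Monte Carlo estimate the relative statistical error typically falls as 1/sqrt(N), so (under that assumption, and starting from the figures quoted above) tightening the uncertainty gets expensive quadratically:

```python
import math

# Figures from this thread: ~1e8 primaries give roughly a 20% USRBIN
# uncertainty. Assumes the usual err ~ 1/sqrt(N) Monte Carlo scaling.
n_now = 1e8
err_now = 0.20

def primaries_for(err_target):
    """Primaries needed for a target relative error, scaling by
    (err_now / err_target)**2 under the 1/sqrt(N) assumption."""
    return n_now * (err_now / err_target) ** 2

print(f"10% uncertainty: {primaries_for(0.10):.1e} primaries")
print(f" 5% uncertainty: {primaries_for(0.05):.1e} primaries")
```

So halving the 20% uncertainty to 10% would need about 4E8 primaries, which is consistent with the 5E8 figure mentioned below being a meaningful step up rather than a marginal one.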

   I believe some of you have already done this kind of study and could enlighten me about the best hardware configuration for such a case; for example, what would we need to run that simulation, with 5E8 primaries, in 1 day?

   Since there are lots of cases to simulate (and some require more than 1E8 primaries), I intend to rent time on a supercomputer at another institution. To do that, I need to fill in a form telling them the computational parameters I need (number of cores, processors, disk space, parallelism technique, RAM...) and the number of hours of simulation.

Regards,

Marlon Saveri Silva
Mechanical Engineer
Beamlines Instrumentation and Support Group - SIL
Brazilian Synchrotron Light Laboratory- LNLS - CNPEM

________________________________
Preserve our environment, print this email only if necessary.


__________________________________________________________________________
You can manage unsubscription from this mailing list at https://www.fluka.org/fluka.php?id=acc_info
Received on Fri Feb 26 2016 - 17:48:54 CET

This archive was generated by hypermail 2.3.0 : Fri Feb 26 2016 - 19:17:49 CET