Re: [fluka-discuss]: Fluka on OpenPBS - only one core is used

From: Ševčik Aleksandras <aleksandras.sevcik_at_ktu.edu>
Date: Tue, 21 Mar 2017 14:37:50 +0000

Many thanks for your input! I've successfully run the tasks in parallel, and hopefully now have a much more serious tool.


And yes, that's what I thought about the statistics; I just wanted to double-check that I wasn't missing any additional point when dividing the job into many parts.

Thanks again!

Regards

Alex

________________________________
From: Philippe Schoofs <philippe.schoofs_at_cern.ch>
Sent: Tuesday, March 21, 2017 3:29:51 PM
To: Ševčik Aleksandras; Alfredo Ferrari
Subject: RE: [fluka-discuss]: Fluka on OpenPBS - only one core is used

Hi Alex,

If you do 1 run with 1E9 primaries, you only obtain 1 estimate of the values you are looking for (for example, energy deposition in a given region). If you have 10 runs with 1E8 particles each, every run will return an estimate.
In order to assess the confidence interval on that estimate you need their variance, and it is impossible to compute the variance of an ensemble of only one estimate (see the equation on slide 33 of the course I linked in my previous email). It would therefore be a bad idea to have only 1 run of 1E9 particles, as you would not have any idea of the error on the values you're calculating.

Note that slide 34 advises using “at least 5-10 batches” of primaries for that reason.
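The batch-statistics point above can be illustrated with a short sketch: given the estimates returned by N independent runs, the mean and its standard error follow from the sample variance of the batch estimates (the ten numbers below are made-up values, not real FLUKA output).

```python
import math

def batch_statistics(estimates):
    """Mean and standard error of the mean from independent batch estimates."""
    n = len(estimates)
    if n < 2:
        # With a single run there is no way to estimate the variance.
        raise ValueError("at least 2 batches are needed to estimate the error")
    mean = sum(estimates) / n
    # Unbiased sample variance of the batch estimates
    var = sum((x - mean) ** 2 for x in estimates) / (n - 1)
    # Standard error of the mean over n batches
    sem = math.sqrt(var / n)
    return mean, sem

# 10 hypothetical energy-deposition estimates, one per run of 1E8 primaries
runs = [4.02, 3.97, 4.10, 3.95, 4.05, 3.99, 4.08, 4.01, 3.96, 4.03]
mean, sem = batch_statistics(runs)
print(f"{mean:.3f} +/- {sem:.3f}")
```

With only one batch the function (deliberately) fails, which is exactly the situation of a single 1E9-primary run: an estimate with no error bar.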

Cheers
Philippe



From: owner-fluka-discuss_at_mi.infn.it [mailto:owner-fluka-discuss_at_mi.infn.it] On Behalf Of Ševcik Aleksandras
Sent: 21 March 2017 14:51
To: Giorgi Kharashvili <georgek_at_jlab.org>
Cc: Alfredo Ferrari <fluka-discuss_at_fluka.org>
Subject: Re: [fluka-discuss]: Fluka on OpenPBS - only one core is used


Dear George,



Thanks for the input. Do I understand correctly that this is equivalent to simply copy-pasting the input files, changing the RANDOMIZ What(2) values to different numbers, and sending all the tasks to the cluster as separate jobs?



Just curious: is 1 run x 10^9 primaries statistically equivalent to 10 runs x 10^8 primaries?



A.

________________________________
From: George Kharashvili <georgek_at_jlab.org<mailto:georgek_at_jlab.org>>
Sent: Tuesday, March 21, 2017 1:43:02 PM
To: Ševčik Aleksandras
Cc: Alfredo Ferrari
Subject: Re: [fluka-discuss]: Fluka on OpenPBS - only one core is used

Dear Alex,

One of the ways to do this is to use flair: spawn as many jobs as you wish on your local machine and run all of them with Submitting queue = null. This will generate the input files with consecutive numbers as What(2) in the RANDOMIZe card. You can then copy the input files to your cluster, run them, copy the resulting files back to your local machine, and do the data analysis in the same flair file.
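The spawning step described above can be sketched outside flair as well: clone a template input, replacing What(2) of the RANDOMIZ card with consecutive seeds. The file names, the template content, and the fixed-format card layout below are assumptions for illustration, not flair's actual code.

```python
# Sketch: clone a template FLUKA input N times, giving each copy its own
# RANDOMIZ What(2) seed (what flair does when spawning runs).
# File names and the 10-character fixed-format fields are assumptions.

def reseed(template_lines, seed):
    """Return a copy of the input with RANDOMIZ What(2) set to `seed`."""
    out = []
    for line in template_lines:
        if line.startswith("RANDOMIZ"):
            # Fixed-format card: keyword field, then What(1), What(2)
            line = "RANDOMIZ  " + "1.0".rjust(10) + f"{float(seed):.1f}".rjust(10)
        out.append(line)
    return out

template = [
    "TITLE",
    "test run",
    "RANDOMIZ        1.0       1.0",
    "START     1000000.0",
    "STOP",
]

for i in range(1, 11):  # run01.inp ... run10.inp with seeds 1..10
    lines = reseed(template, i)
    with open(f"run{i:02d}.inp", "w") as f:
        f.write("\n".join(lines) + "\n")
```

The resulting inputs can then be copied to the cluster and submitted as independent jobs.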

Best regards,
George

--
George Kharashvili
Jefferson Lab Radiation Control
757-269-6435
----- Original Message -----
From: "Ševčik Aleksandras" <aleksandras.sevcik_at_ktu.edu<mailto:aleksandras.sevcik_at_ktu.edu>>
To: "FLUKA Discussion List" <fluka-discuss_at_fluka.org<mailto:fluka-discuss_at_fluka.org>>
Sent: Tuesday, March 21, 2017 5:42:14 AM
Subject: [fluka-discuss]: Fluka on OpenPBS - only one core is used
Dear experts,
I've got access to a cluster grid that uses OpenPBS. The only help available to me is Google, which, combined with my complete lack of experience, leads me to this forum as a last resort.
In a nutshell, I have a script that successfully launches the job (see the attached file), but only one core out of 8 is used (each node has 8 cores).
1) How should I modify the script to launch 8 jobs at once? I imagine the process is similar to what Flair uses. I'm aware that the .inp file should contain a RANDOMIZ card with the default value.
2) If I want to utilize 3 nodes with 8 cores each, can I then launch 24 jobs? As I understand it, 12 runs x 1E8 events are not equivalent to 24 runs x 5E7 events, are they?
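One way to fill 8 cores, as asked in 1), is to generate one single-core PBS script per run and qsub each of them. The sketch below writes such scripts; the queue name, the $FLUPRO path, and the rfluka options are assumptions that would need adapting to the actual cluster.

```python
# Sketch: one single-core PBS script per FLUKA run, each running its own
# input file (run01.inp ... run08.inp, already seeded differently).
# Queue name, $FLUPRO location and rfluka options are assumptions.

PBS_TEMPLATE = """#!/bin/bash
#PBS -N fluka_{i:02d}
#PBS -l nodes=1:ppn=1
#PBS -q batch
cd $PBS_O_WORKDIR
$FLUPRO/flutil/rfluka -N0 -M1 run{i:02d}
"""

def make_job_script(i):
    return PBS_TEMPLATE.format(i=i)

for i in range(1, 9):  # 8 independent jobs -> 8 cores of one node
    with open(f"job{i:02d}.pbs", "w") as f:
        f.write(make_job_script(i))
    # then submit each one: qsub job01.pbs, qsub job02.pbs, ...
```

Each job then occupies one core with one run, and the PBS scheduler is free to pack 8 such jobs onto the 8 cores of a node (or 24 jobs onto 3 nodes).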
Regards
Alex
__________________________________________________________________________
You can manage unsubscription from this mailing list at https://www.fluka.org/fluka.php?id=acc_info
Received on Tue Mar 21 2017 - 17:02:56 CET

This archive was generated by hypermail 2.3.0 : Tue Mar 21 2017 - 17:03:00 CET