Dear Ana,
it took us a bit of time to reproduce the artefact you pointed out, since
the sudden local decrease in your flat dose profile was of the order of a few
per mil. The problem is that, at scoring level, the proton
energy loss by ionization is distributed uniformly along the particle step,
so each bin traversed by the step gets a dose value proportional to the
respective step fraction. In reality, the change of the stopping power
(normally an increase) along the step implies that the energy loss is larger
towards the step end. This variation is fully taken into account when
evaluating the energy loss suffered by a particle in a given step;
however, when that loss is scored, it is distributed uniformly along the
step itself. Therefore, if the simulated step length is too large with
respect to the input scoring grid, one sees the artefacts also shown here:
https://indico.cern.ch/event/489973/contributions/2000430/attachments/1269537/1881112/04_AdvancedSettings2016.pdf
(slide 28).
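To make the mechanism concrete, here is a small Python sketch (ours, not FLUKA code; the linear 1%/cm dE/dx is a made-up toy model) of how uniform apportioning flattens the real dose gradient within a step:

```python
def dedx(x):
    """Toy stopping power (arbitrary units): a linear 1%/cm rise."""
    return 1.0 + 0.01 * x

def exact_loss(x0, x1):
    """Exact energy loss over [x0, x1] = integral of dE/dx.
    The midpoint rule is exact for a linear dE/dx."""
    return (x1 - x0) * dedx(0.5 * (x0 + x1))

def score_uniform(x0, x1, nbins, grid_length):
    """Spread the step's total loss uniformly over the bins it crosses,
    each bin getting a share proportional to its overlap with the step."""
    dose = [0.0] * nbins
    width = grid_length / nbins
    total = exact_loss(x0, x1)
    for i in range(nbins):
        lo, hi = i * width, (i + 1) * width
        overlap = max(0.0, min(hi, x1) - max(lo, x0))
        dose[i] += total * overlap / (x1 - x0)
    return dose

# One long 2 cm step scored into four 0.5 cm bins: every bin receives the
# step-average value, while the true deposition rises by ~1.5% across them.
flat = score_uniform(0.0, 2.0, 4, 2.0)
true = [exact_loss(i * 0.5, (i + 1) * 0.5) for i in range(4)]
```

Note that the total energy loss per step is exact; only its distribution among the bins is flat, which stays invisible as long as the steps are short compared to the scoring bins.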
We would like to stress that this is purely an artefact due to the scoring,
which is already complex enough in "apportioning" the step length among the
possibly several bins crossed by it. The physics is "exact" to within
the accuracy of our stopping power models and of the integration
algorithm used to compute the average energy loss over a step, accounting
for dE/dx variations.
Coming to your specific case, the introduction of a geometry boundary
shortens the nearby particle steps (which are forced to end on the
boundary). This means that the respective scored dose value will be
lower than that of a 'normal', longer step starting from the same
initial point, due to the smaller increase of the stopping power along the
shortened step; BUT the consequent local dose decrease is not compensated by
the larger dose values that should in principle come from the final fraction
of the particle steps ending where the shortened steps start.
To visualize it, think about a 2 cm step, starting 1 cm prior to the
boundary, over which there is an overall 2% dE/dx variation (numbers are
purely fictitious). In the absence of the boundary, the SCORED dose along
the 2 cm step will be 1% higher than the one at the starting point and
1% lower than the one at the end point. If the step is split into two sub-steps
by the boundary, the first half will score a dose 0.5% higher than the
one at its initial point (and 0.5% lower than the one at the
sub-step's final point, which is also the midpoint of the initial 2 cm
long step). The second half also scores a dose 0.5% higher than the one at
its initial point and 0.5% lower than the one at its final point, but now
its initial point is the midpoint of the original step, and it is easy to
verify that the second half will produce a SCORED dose 1% higher than
that of the first half.
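The arithmetic can be checked in a few lines of Python (the linear 1%/cm dE/dx is our fictitious stand-in for the 2% variation over 2 cm; with uniform apportioning, a step's scored dose is simply its average dE/dx):

```python
def dedx(x):
    """Fictitious stopping power: 2% linear rise over the 2 cm step."""
    return 1.0 + 0.01 * x   # arbitrary units, 1%/cm

def scored(x0, x1):
    """Dose scored for a step = its average dE/dx
    (exactly the midpoint value for a linear dE/dx)."""
    return dedx(0.5 * (x0 + x1))

full   = scored(0.0, 2.0)   # unsplit 2 cm step
first  = scored(0.0, 1.0)   # first half, forced to end on the boundary
second = scored(1.0, 2.0)   # second half, restarting at the boundary
```

Indeed `full` sits 1% above dedx(0) and 1% below dedx(2), `first` only 0.5% above dedx(0), and `second` scores about 1% more than `first`.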
Actually, life is significantly more complex because dE/dx does not vary
linearly with x, step lengths corresponding to the same fractional loss
vary wildly with energy, multiple scattering plays a role, and,
in FLUKA, particles moving away from a boundary are restarted with short
steps that become progressively longer, which somewhat washes out the "up"
effect on the right side of the boundary (which in fact you do not see).
The solution is to properly shorten (ALL) transport steps down to <= 1 mm
via STEPSIZE. Since this carries a huge CPU penalty, you may want to
apply it only to the geometry regions of interest, e.g. creating a new
boundary somewhat upstream (we used 2 cm upstream) of the boundary you are
interested in. This way you can apply STEPSIZE only to the regions
downstream of this new boundary, assuming that you do not care about the
small scoring artefact that you will obviously see at the position of
the latter.
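Schematically, the card could look like the following (this is only our sketch, to be checked against the FLUKA manual: WHAT(1)/WHAT(2) are the minimum/maximum step size in cm, WHAT(3)-WHAT(4) the region range, and RegNew/RegLast are placeholder region names for your downstream regions):

```
* Cap the maximum step at 0.1 cm (1 mm), only in the regions
* downstream of the auxiliary boundary:
STEPSIZE       0.0       0.1    RegNew   RegLast
```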
Another more complex solution would be to alter the scoring algorithm to
take into account the stopping power gradient along the step, in order to
calculate accordingly the dose values in the scoring bins traversed by the
track step. This is something we already discussed internally some
time ago, but we are a bit scared by the complexity of the implementation in
the already very complex "track apportioning" scoring scheme. Also, if
it turned out to be as CPU expensive as shortening the steps, there would
be no advantage...
Maybe we will implement something like that in the future.
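As a rough sketch of the idea (ours, not the FLUKA implementation), apportioning with a linear dE/dx weight instead of a flat one would look like this in Python for the simplest 1D binning, where s0 and s1 stand for the stopping powers at the step ends:

```python
def score_linear(x0, x1, total, s0, s1, nbins, grid_length):
    """Apportion the step's total loss among the bins it crosses,
    weighting by a dE/dx assumed linear from s0 (at x0) to s1 (at x1),
    instead of by the plain overlap fraction."""
    dose = [0.0] * nbins
    width = grid_length / nbins
    length = x1 - x0
    norm = 0.5 * (s0 + s1) * length          # integral of the linear weight
    for i in range(nbins):
        lo = max(i * width, x0)
        hi = min((i + 1) * width, x1)
        if hi <= lo:
            continue                          # bin not crossed by the step
        mid = 0.5 * (lo + hi)
        # midpoint rule is exact for a linear weight over [lo, hi]:
        w = (s0 + (s1 - s0) * (mid - x0) / length) * (hi - lo)
        dose[i] += total * w / norm
    return dose

# A 2 cm step with a 2% dE/dx rise, scored into four 0.5 cm bins:
# the bins now reproduce the within-step gradient instead of a flat value.
dose = score_linear(0.0, 2.0, 2.02, 1.0, 1.02, 4, 2.0)
```

The total is conserved by construction; whether the extra per-bin arithmetic would end up cheaper than simply shortening the steps is exactly the open question mentioned above.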
As a side remark, not really relevant here but still with some impact: in your
input you forgot to apply FLUKAFIX to the 1st material (WATER), which
was left at the larger - default - fractional energy loss.
Kind regards,
Alfredo, Anton, Francesco and friends
**************************************************
Francesco Cerutti
CERN-EN/STI
CH-1211 Geneva 23
Switzerland
tel. ++41 22 7678962
fax ++41 22 7668854
Received on Thu Jun 16 2016 - 21:16:02 CEST