Examples of Monte Carlo techniques applied for nuclear data uncertainty propagation

The aim of this work is to review different Monte Carlo techniques used to propagate nuclear data uncertainties. First, we introduce the Monte Carlo technique applied to Uncertainty Quantification studies in safety calculations of large-scale systems. As an example, the impact of the JEFF-3.3 nuclear data uncertainties of 235U, 238U and 239Pu is demonstrated for the main design parameters of a typical 3-loop Westinghouse PWR unit. Second, the Bayesian Monte Carlo technique for data adjustment is presented. An example of 235U adjustment using criticality and shielding integral benchmarks shows the importance of performing a joint adjustment based on different sets of integral benchmarks.


Introduction
This paper focuses on the application of Monte Carlo (MC) methodologies for the propagation of nuclear data uncertainties. The first section presents the MC technique used to perform Uncertainty Quantification (UQ) analysis in large-scale systems, which is increasingly necessary in reactor safety calculations of LWRs [1]. The second section reviews current nuclear data adjustment methodologies based on a Bayesian approach in combination with MC techniques.

Monte Carlo technique for Uncertainty Quantification
Two different methodologies have been developed for propagating nuclear data uncertainties to large-scale systems. The first is based on the MC approach, which uses random samples of nuclear data libraries and performs a separate reactor calculation for each random sample. The second is based on first-order perturbation theory, which makes use of sensitivity coefficients and available covariance files.
Sensitivity and UQ (S/U) analysis is based on the first-order perturbation method. Once the sensitivity coefficients are computed, the uncertainty can be obtained through the sandwich rule. This technique allows an easy decomposition of the uncertainty into isotopic partial cross-section components. Different techniques have been developed for the determination of the sensitivity coefficients of neutronics parameters to nuclear data, such as Standard Perturbation Theory (SPT), used for keff uncertainties, and Generalized Perturbation Theory (GPT), used for power distribution uncertainties [2]. These S/U techniques show different weaknesses in uncertainty quantification studies. One can point out the following drawbacks: 1) low efficiency for a large number of response functions, because the number of adjoint functions increases linearly with the number of responses; 2) applicability only to small uncertainties, because the method is based on a linear approach and large uncertainties may exceed the range of linearity of the model; and 3) severe limitations as a consequence of the non-linearity of the multi-physics (neutronics, thermal-hydraulics, depletion, ...) in reactor calculations.
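As a concrete illustration, the sandwich rule var(R) = S^T V S can be evaluated in a few lines. The sensitivity vector and covariance matrix below are illustrative placeholders, not values from any evaluation:

```python
import numpy as np

# Hypothetical 3-group example: sensitivity coefficients S of a response
# (e.g. keff) to a cross-section, and a relative covariance matrix V for
# that cross-section. All numbers are illustrative.
S = np.array([0.10, 0.35, 0.20])            # relative sensitivity vector
V = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.5, 0.5],
              [0.0, 0.5, 1.0]]) * 1e-4      # relative covariance matrix

# Sandwich rule: relative variance of the response is S^T V S
rel_var = S @ V @ S
rel_unc = np.sqrt(rel_var)                  # relative uncertainty
print(f"relative uncertainty: {rel_unc:.4%}")
```

Decomposing the uncertainty into partial components then amounts to evaluating the same product with the covariance blocks of individual isotopes or reactions.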
The MC approach overcomes those limitations, but due to its stochastic nature it carries an inherent statistical uncertainty. Reducing this statistical uncertainty often requires a large number of samples, which implies a longer computational time. This can be seen as a significant disadvantage compared to first-order approximation techniques. MC approaches can also be classified by their nuclear data uncertainty inputs: "Total Monte Carlo" (TMC) [3] relies on model parameter covariances; NUDUNA [4] and SANDY [5] take as input the information provided by nuclear data evaluations or multigroup covariances and apply it to continuous-energy data; XSUSA [6] takes multigroup covariance matrices and applies them only to multigroup data.
For the MC approach, the nuclear data are sampled at the beginning of the simulation, and then the reactor core simulator, which solves the coupled multi-physics at different levels of approximation, is executed. In this work, we took advantage of the SEANAP system [7] developed for and applied to 3-D PWR core analysis. SEANAP has already demonstrated very good agreement with the broad sets of parameters and cycles analysed at the Spanish PWR units [8]. One of the major advantages of using SEANAP is its very low CPU time requirement compared to other codes: a full core cycle and branching lattice calculations can be performed in only a few minutes.
For this work, SANDY provides full MC sampling of the nuclear data files for transport and depletion calculations. Each nuclear data random file is used to perform a separate core calculation. With SANDY we sample reaction cross-sections, fission neutron multiplicities and prompt fission neutron energy distributions using the evaluated nuclear data and covariances from JEFF-3.3. The random files are then processed with NJOY into WIMS format, which serves as input for the SEANAP code. The statistics of all SEANAP simulations finally yield the desired uncertainty quantification. In this work, sampling simulations for 235U, 238U and 239Pu are carried out separately. Table 1 shows the UQ analysis for the critical boron concentration in a 3-loop Westinghouse PWR unit along the burnup. The total uncertainty of the boron concentration is around ~65 ppm. The main contributor to this uncertainty is the 235U nubar; the contribution due to the 238U cross-sections is constant during the cycle, while the 239Pu contribution becomes larger than that of 235U at end-of-cycle.
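The overall workflow (sample the nuclear data, run one core calculation per random file, take statistics over the responses) can be sketched as follows. The toy `core_model` function merely stands in for the SANDY → NJOY → SEANAP chain; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def core_model(nd):
    # Illustrative nonlinear response (e.g. a boron concentration in ppm)
    # as a function of a perturbed nuclear-data vector. NOT the SEANAP solver.
    return 1000.0 - 800.0 * nd[0] + 50.0 * nd[1] ** 2

n_samples = 300                       # 300 samples per nuclide, as in this work
mean = np.array([1.0, 1.0])           # nominal nuclear data (normalized)
cov = np.array([[4e-4, 1e-4],
                [1e-4, 9e-4]])        # assumed relative covariance

# Step 1: draw random nuclear-data files; Step 2: one core run per sample
samples = rng.multivariate_normal(mean, cov, size=n_samples)
responses = np.array([core_model(s) for s in samples])

# Step 3: the spread of the responses is the propagated uncertainty
print(f"mean response: {responses.mean():.1f}")
print(f"uncertainty:   {responses.std(ddof=1):.1f}")
```

Running such a loop per isotope (235U, 238U, 239Pu) separately is what allows the contribution of each nuclide to be isolated in Table 1.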

Bayesian Monte Carlo technique
In this section, we review the current nuclear data adjustment methodologies based on the Bayesian approach together with MC techniques. Hereafter, experimental data refers to integral information, such as criticality integral benchmarks (e.g. keff and spectral indices), shielding/transmission benchmarks (e.g. neutron leakage), etc. It is well recognized that integral benchmarks should be avoided for the adjustment of general-purpose libraries, because this can lead to potential compensating effects. These compensating errors can be due both to the impact of other isotopes in the benchmark and to errors in the calculation attributed to complicated multi-physics.
However, it is also recognized that such integral data may be used to perform a fine-tuned adjustment of specific nuclear data to improve the overall performance of an entire general-purpose library. Thus, "tuned" nuclear data adjustments should rely on high-fidelity experiments that can be used as simple (e.g. a single isotope), well-understood and so-called clean benchmarks. In this sense, one should avoid adjustments that include error sources unrelated to nuclear data in the evaluation procedure. Consequently, reactor core measurements (such as the experimental data presented in Table 1) can be used only to derive specific adjustments of nuclear data libraries for reactor applications.
Nuclear data adjustment methodologies can be categorized into two distinct classes: deterministic and stochastic methods.

Generalized Linear Least Squares (GLLS)
The GLLS is a deterministic method consisting in the minimization of the cost function in Eq. 1:

chi^2(σ) = (σ - σC)^T VC^-1 (σ - σC) + (yE - y(σ))^T VE^-1 (yE - y(σ))   (1)

where σC and VC are the prior nuclear data and their covariance, and yE and VE are the integral experimental values and their covariance. The GLLS is based on several assumptions [9]: 1) experimental and nuclear data are normally distributed, 2) linear approximations hold between all observables, and 3) model and experimental data are uncorrelated. Thus, GLLS can be seen as a Bayesian approach in the sense that experimental data are used to adjust prior values. Although probability density functions are not considered explicitly, the GLLS is used to derive the posterior means and covariances of nuclear data.
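Under the linearity and normality assumptions listed above, the GLLS posterior means and covariances take a Kalman-like closed form. The sketch below uses a single integral benchmark and purely illustrative numbers:

```python
import numpy as np

# Illustrative GLLS update: two nuclear-data parameters, one benchmark.
sigma0 = np.array([1.00, 1.00])            # prior nuclear data (normalized)
VC = np.array([[2.5e-3, 5.0e-4],
               [5.0e-4, 1.6e-3]])          # prior covariance
S = np.array([[0.7, 0.3]])                 # sensitivity of the benchmark
yE = np.array([1.02])                      # measured integral value
VE = np.array([[1.0e-4]])                  # experimental covariance
y0 = S @ sigma0                            # prior calculated value

# Gain matrix: K = VC S^T (S VC S^T + VE)^-1
K = VC @ S.T @ np.linalg.inv(S @ VC @ S.T + VE)
sigma_post = sigma0 + K @ (yE - y0)        # posterior means
V_post = VC - K @ S @ VC                   # posterior covariance (reduced)
```

Note that the posterior covariance is always smaller than the prior one, which is precisely why compensating effects in the chosen benchmarks propagate directly into overconfident adjusted data.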

Monte Carlo nuclear data adjustment methodologies
MC nuclear data adjustment methodologies have been developed to avoid the need to linearize non-linear models, and to handle models which are not necessarily normally distributed [9] (e.g. model parameters x which lead, through a model transformation y = M(x), to the calculated observables). These methodologies are able to include integral data, reducing uncertainties and yielding improved posterior nuclear data. The Bayesian MC techniques are based on a direct application of Bayes' theorem, Eq. 2:

p(σ|yE) ∝ p0(σ|σC,VC) × L(yE,VE|σ)   (2)

The prior probability p0(σ|σC,VC) and the likelihood L(yE,VE|σ) are independent probability density functions. The principle of maximum entropy ensures that p0(σ|σC,VC) and L(yE,VE|σ) are known and can be approximated by multivariate normal distributions. In case the normality assumption is not acceptable, σ may be mapped onto an approximately normally distributed vector by an invertible transformation [4].
Under these conditions, Bayes' theorem yields a multivariate normal posterior probability density function. This Bayesian approach can be defined as the Multivariate Normal Bayesian Model (MNBM), and the resulting equations are widely known as the MOCABA equations [4,10]. A stochastic methodology can be used to randomly sample the "a priori" nuclear data. Codes such as SANDY and NUDUNA are able to provide samples of nuclear data. They work with normal (or log-normal) distributions, because no further information on the distribution of the nuclear data is provided in the evaluated files. In this work, the SANDY code is used assuming normal distributions. The sampling can also be performed at the level of nuclear model parameters in nuclear reaction codes such as TALYS [11,12]. In this case, it is unlikely that p0(σ|σC,VC) will be normal, resulting in a loss of information about the prior function if it is approximated by just a normal distribution.
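A minimal sketch of log-normal sampling of prior nuclear data is shown below. It is written in the spirit of, but does not reproduce, the SANDY algorithm: the mean cross-sections and relative covariance are hypothetical, and the moment-matching formulas preserve the target mean and relative covariance exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior: mean cross-sections (barns) and relative covariance.
mean = np.array([1.2, 0.8])
rel_cov = np.array([[2.5e-3, 5.0e-4],
                    [5.0e-4, 1.6e-3]])

# Log-normal sampling preserving the target mean and relative covariance:
# if X = exp(Z) with Z ~ N(mu, C), then
#   C  = log(1 + rel_cov)   (elementwise)
#   mu = log(mean) - diag(C)/2
C = np.log1p(rel_cov)
mu = np.log(mean) - 0.5 * np.diag(C)
samples = np.exp(rng.multivariate_normal(mu, C, size=100_000))

# Unlike normal sampling with large variances, log-normal samples
# can never produce unphysical negative cross-sections.
assert samples.min() > 0.0
```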

Bayesian Monte Carlo approach
The requirement of a well-known prior and likelihood probability density function is a serious limitation of the previous approach when only non-informative prior distributions are available [9,11]. To overcome this limitation, the Bayesian Monte Carlo (BMC) approach [9,11,12,13] takes advantage of the TMC method to generate "a priori" random files which are not explicitly normal. The BMC technique incorporates the integral information through likelihood factors, whose chi-squared is defined in Eq. 3:

chi^2_k = (yE - y(σ_k))^T VE^-1 (yE - y(σ_k))   (3)

Based on this definition, one can calculate a "weight" for any k-th sample as in Eq. 4, where the normality assumption is implicitly given by the chi-squared form:

ω_k = exp(-chi^2_k / 2) / Σ_j exp(-chi^2_j / 2)   (4)

These weights are used to calculate the "a posteriori" moments (Eq. 5):

σ'_i = Σ_k ω_k σ_k,i ,   V'_ij = Σ_k ω_k (σ_k,i - σ'_i)(σ_k,j - σ'_j)   (5)

One can work with different definitions of the weights. Simple definitions of the ω_k values can lead to very small weights. To overcome this limitation, different renormalized weight functions have been introduced. For instance, the Backward-Forward MC method (BFMC) [12,13] proposed a renormalization function to avoid such a large dispersion of weights. Recently, other BMC-based methods have been developed to compensate for model deficiencies and inconsistent data [14,15].
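The BMC weighting scheme of Eqs. 3-5 can be sketched with a one-dimensional toy model; the model, the measurement and the sample size below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# "A priori" random files (e.g. from TMC) and their calculated
# integral responses y_k = M(sigma_k); the linear model is a toy stand-in.
n = 5000
sigma_k = rng.normal(1.0, 0.05, size=n)    # prior samples of a parameter
y_k = 0.9 * sigma_k + 0.1                  # toy integral response

yE, VE = 1.02, 0.01 ** 2                   # measurement and its variance

# Eq. 3-4: chi-squared per sample and normalized weights
chi2_k = (yE - y_k) ** 2 / VE
w_k = np.exp(-0.5 * chi2_k)
w_k /= w_k.sum()

# Eq. 5: "a posteriori" moments as weighted sample statistics
post_mean = np.sum(w_k * sigma_k)
post_var = np.sum(w_k * (sigma_k - post_mean) ** 2)
```

With sharply peaked likelihoods most weights collapse toward zero, which is exactly the dispersion problem that the BFMC renormalization is designed to mitigate.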

Selection of integral Benchmarks
The selection of dedicated integral experiments determines the scope of the nuclear data adjustment. Nuclear data evaluations have mainly used criticality benchmarks to match C/E values (e.g. keff and spectral indices). However, keff is a global parameter whose value can be reproduced through many possible combinations of nuclear data, which can lead to inherent compensating effects in the adjustment work [12,13]. In recent works, other sources of integral data have been pointed out to extend the application and prediction capabilities of future evaluations. Apart from the well-known spectral index measurements, one can consider other measurements such as the delayed neutron fraction and shielding/transmission leakage neutron spectra [16].

As an example of BMC adjustment, two different integral benchmarks have been selected to see the impact of the 235U adjustment between 500 keV and 10 MeV: for criticality, the Godiva (HMF1) benchmark; for shielding, the LLNL pulsed sphere for 235U. A comparison of sensitivity profiles is performed to assess the importance of these benchmarks. Fig. 1 shows the experimental values of the neutron leakage and their sensitivities to the continuum inelastic scattering (MT91) cross-section of 235U; the sensitivities are calculated with the MCSEN5 code [17]. For this analysis, MT91 is chosen because it is the main inelastic channel between 2 and 10 MeV. Fig. 2 shows the results of the BMC technique for the 235U adjustment. The 235U random files are obtained from TENDL-2014; a total of 5000 samples were used in this work, which seems enough to reach an adequate convergence in BMC [18]. The high sensitivity of the fission cross-section in Godiva leads to an increase in fission of around +1.4% if only Godiva is used in the adjustment. The LLNL pulsed sphere goes in the opposite direction, with -1.0%. However, the high sensitivity to MT91 of the pulsed sphere gives an increase of up to +2.5%. These changes are dominant in the case of a joint adjustment.
The BMC technique also predicts "a posteriori" covariances. The Godiva experiment leads to a strong reduction of the uncertainty in the fission cross-section, of around 1.5%. The LLNL pulsed sphere only changes the inelastic uncertainty, with a reduction of 0.6%. In addition, the Godiva experiment leads to a cross-correlation between fission and nubar as a consequence of the criticality constraint (see Fig. 3-right). However, no cross-correlation is seen for the LLNL pulsed sphere (see Fig. 3-left). In Table 2, prior and posterior results for keff in the criticality case, and for the chi-squared in the shielding case, are shown. It can be concluded that a joint constraint is important to obtain a better and more reliable adjustment of nuclear data and to avoid compensation effects.

Conclusion
Two different examples have been used to demonstrate the importance of Monte Carlo techniques in the nuclear data evaluation procedure. First, reactor core measurements have been used to perform a UQ assessment, which can help to identify nuclear data trends and to determine the main contributors and uncertainty targets for the evaluation procedure.
The second example has shown the importance of the selection of benchmarks in the Bayesian approach, avoiding as much as possible compensations in the "a posteriori" evaluated data. Benchmarks should be selected focusing on individual isotopes, energy ranges, reaction types and different constraints.

Fig. 2.
Fig. 2. Results of the BMC adjustment for the fission (left) and total inelastic (right) cross-sections. The figures show the predicted posterior values, using experimental adjustments with Godiva (HMF1), the LLNL pulsed sphere, or both benchmarks together, divided by the prior cross-sections.

Table 1.
UQ assessment of the critical boron concentration in a PWR Westinghouse unit. The SEANAP system is used to perform the calculations, with a total of 2100 random files generated by the SANDY tool (300 samples for each case). Measured and calculated values are shown.