A NOVEL COMPUTATIONAL PLATFORM FOR THE PROPAGATION OF NUCLEAR DATA UNCERTAINTIES THROUGH THE FUEL CYCLE CODE ANICCA

This paper presents the first results of a computational platform dedicated to the propagation of nuclear data covariances all the way to fuel cycle scenario observables. Such a platform, based on in-house codes developed at SCK•CEN in Belgium for the creation of many randomized nuclear data libraries in ENDF format and for fuel cycle scenario studies (known as SANDY and ANICCA, respectively), was employed for the uncertainty assessment of the time-dependent inventory computed from a mono-recycling of plutonium scenario based on a PWR fleet. An essential part of the procedure, the creation of the input data libraries for ANICCA, was carried out this time by the SERPENT2 code. Since its neutron transport and depletion calculation, parallelized over 72 cores for up to 1640 days and 60 MWd/kg-HM, takes almost one hour, a total of 100 ANICCA runs based on randomized input libraries created from ENDF/B-VII.1 neutron-reaction covariances can be finished in about one week. It is therefore considered that the output population statistics can be inferred from 100 observables representing time-dependent mass inventories. To mention a few results from the aforementioned NEA/OECD benchmark scenario, it was found that the relative standard deviation of the accumulated plutonium in the final disposal after 120 years was 7%, while for curium it corresponded to 8%. Thus, sources of uncertainty arising from neutron-reaction covariances do have an impact on the final quantitative analysis of the fuel cycle output uncertainties.


INTRODUCTION
ANICCA (Advanced Nuclear Inventory Cycle Code) has been continuously under development at SCK•CEN since 2012 [1]. It was conceived to address the challenges that arise at the different fuel cycle stages, ranging from the front end (i.e. mining, enrichment, fuel fabrication, irradiation) all the way to the back end (i.e. interim storage, reprocessing and final repositories), in order to help policy makers in Belgium (and also at a regional level) to shape the future of nuclear energy in a sustainable manner. The code has become versatile enough to include different reactor technologies while simulating closed scenarios, and has been verified in international benchmarks for cases that include light water reactors (LWR) with different fuel types and even accelerator-driven systems (ADS) [2,3]. Thus, ANICCA shares the same common interests as other state-of-the-art computational tools used around the world to address the different issues relevant to the nuclear fuel cycle.
In recent years, it has been stressed that computer codes employed for the simulation of physical phenomena are inherently surrounded by uncertainties [4]. The simulation of the many stages of a nuclear fuel cycle, for the final computation of the different observables of interest as a function of time (i.e. the mass isotopic inventory at different locations such as interim and final disposal, reactor cores and pools, factories, etc., as well as others related to public safety such as activation and radiotoxicity), requires the handling of many models which, by nature, contain many input parameters. Therefore, the sources of uncertainty in this type of code are vast. A few examples where sources of uncertainty were considered to arise only from parametric inputs can be found in references [3,5]. There, input parameters ranging from fuel enrichment, power, thermal and loading factors, burnup, reprocessing capacity, and time-related variables for cooling, fabrication processes, start-up and/or shutdown of reactors, etc., were treated as random variables to be sampled in order to propagate such uncertainties all the way to the observables of interest. Nevertheless, other important sources of uncertainty actually come from the nuclear reaction data that are used as inputs by deterministic or Monte Carlo (MC) neutron transport models. These are fundamental to obtain the neutron spectra of the reactor core used in the depletion calculation, which is essential to obtain the isotopic inventory as a function of burnup.
Depletion calculations have historically been lengthy and expensive in terms of computational resources. The inclusion of a full-core depletion calculation in fuel cycle studies is nowadays still quasi-unfeasible, because many re-loadings take place while simulating a specific scenario. Thus, it is customary to perform one depletion calculation (either for a representative or a full-core model) whose results, in the end, serve as the basic input data to the reactor irradiation model. For the specific case of the ANICCA code, pre-computed libraries based on energy-collapsed fluxes, cross-sections, fission yields, criticality level and burnup form the basis of its core irradiation model. Once a scenario is executed, ANICCA makes use of such a priori information to compute the isotopic vector at the required burnup for a certain equivalence-based fuel composition. In this work, pre-computed irradiation libraries were obtained via depletion calculations performed by the SERPENT2 code [6]. Instead of computing only a single nominal library, the goal was to produce many of the so-called randomized ANICCA input libraries from the many SERPENT2 calculations based on ENDF/B-VII.1 [7] covariance data. To achieve this purpose, the in-house SANDY [8] computational platform was used to randomize raw nuclear data files that, after being processed by NJOY [9] for the production of ACE-formatted files, could be fed into SERPENT2 for the further computation of ANICCA input libraries.
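The sampling chain described above can be sketched as a simple orchestration loop. The directory layout, function name and the commented-out command lines below are illustrative assumptions, not the actual SANDY/NJOY/SERPENT2 interfaces:

```python
import subprocess  # would drive the external codes in a real run
from pathlib import Path

def build_randomized_libraries(endf_file: str, n_samples: int) -> list:
    """Sketch of the sampling chain: SANDY perturbs the ENDF-6 file,
    NJOY turns each perturbed copy into an ACE file, and SERPENT2
    depletes the assembly model to produce one ANICCA library per sample.
    All paths and commands here are hypothetical placeholders."""
    libraries = []
    for i in range(n_samples):
        pert = Path(f"perturbed/{endf_file}_{i}")   # would be written by SANDY
        ace = Path(f"ace/{endf_file}_{i}.ace")      # would be written by NJOY
        lib = Path(f"libs/anicca_lib_{i}.txt")      # built from SERPENT2 output
        # Placeholder calls (not real command-line interfaces):
        # subprocess.run(["sandy", endf_file, "-o", str(pert)])
        # subprocess.run(["njoy"], stdin=open(f"njoy_input_{i}"))
        # subprocess.run(["sss2", f"serpent_input_{i}", "-omp", "72"])
        libraries.append(lib)
    return libraries
```

In this work the loop was executed 100 times per fuel type, each iteration yielding one randomized ANICCA input library.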
In this way, not only was the nominal simulation of a fuel cycle scenario corresponding to a "mono-recycling of Pu in PWR" carried out, but many fuel cycle simulations were run, allowing the quantification and assessment of the degree of uncertainty associated with interesting fuel cycle outputs. The objective is therefore to perform an uncertainty analysis on a well-known scenario previously proposed by the Nuclear Energy Agency (NEA/OECD) in 2012 [10], where the sources of uncertainty considered come solely from neutron-reaction covariances. Although some previous work had already been done on propagating nuclear data-related uncertainties all the way to fuel cycle observables by using surrogate models [11], not many references can be found when it comes to the propagation of the covariances associated with the major nuclear data libraries that exist around the world.

METHODOLOGY
This work made use of a computational scheme for the propagation of nuclear data covariances all the way to fuel cycle observables, which are usually presented as a function of time. Its general configuration is depicted in figure 1, followed by a brief description of the SANDY and ANICCA codes, respectively.

SANDY
This code is a numerical tool employed to perform random sampling of the parameters stored in nuclear data libraries. Since it can read and process any library file in the ENDF-6 [12] format, the tool is compatible with most of the libraries provided by the international data groups, including the general-purpose libraries JEFF, ENDF/B and JENDL. SANDY's random samples are written to perturbed copies of the original ENDF-6 file and can be used as inputs for statistically-based uncertainty propagation. The advantages of using a sampling-based tool like SANDY, rather than algorithms based on perturbation theory, are clear: SANDY can work with any nuclear physics model and solver as long as the nuclear data they use are in the ENDF-6 format, and the responses calculated with SANDY take into account first- and higher-order effects and are not limited by constraints of linearity.
SANDY can sample covariance matrices from the following nuclear data:
- Cross-sections (MF=3)
- Resonance parameters (MF=2)
- Fission neutron multiplicities (MF=1)
- Secondary particle angular distributions (MF=4)
- Secondary particle energy distributions (MF=5)
- Radioactive decay and fission yields (MF=8)

The variables sampled with SANDY are assumed to follow a normal (or lognormal) multivariate probability density function (PDF), where the mean value corresponds to the nominal estimate from the data and the standard deviation is extracted from the diagonal of the covariance matrix. The random sampling procedure is carried out in SANDY by performing a Cholesky decomposition of the correlation matrix C, in order to calculate the lower triangular matrix L that fulfills the relation

C = L L^T,

where the correlated normally distributed samples Y ~ N(0, C) are obtained from a matrix X formed by n_smp independent sets of n samples drawn from a standard normal distribution N(0, I), such that

Y = L X.

SANDY also ensures that the covariance extracted from the original file is positive-definite, so that the lower triangular matrix L is unique with real and positive diagonal elements. More information about this methodology, and about SANDY in general, can be found in [8].
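The Cholesky-based sampling step can be illustrated with a minimal NumPy sketch (here the covariance matrix is decomposed directly rather than the correlation matrix; the toy 2x2 covariance is an assumption for demonstration only):

```python
import numpy as np

def sample_correlated(mean, cov, n_smp, seed=0):
    """Draw n_smp correlated normal samples: decompose the covariance as
    C = L L^T and map independent standard-normal draws X ~ N(0, I)
    to correlated ones via Y = mean + L X."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)      # fails if cov is not positive-definite
    X = rng.standard_normal((len(mean), n_smp))
    return mean[:, None] + L @ X     # shape (n_parameters, n_smp)

# Toy example: two parameters with 5% relative uncertainty, 0.5 correlation
mean = np.array([1.0, 2.0])
std = 0.05 * mean
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
cov = np.outer(std, std) * corr
samples = sample_correlated(mean, cov, 10000)
```

The sample mean and sample covariance of `samples` reproduce the requested `mean` and `cov` to within statistical noise, which is the essential property the perturbed ENDF-6 copies must preserve.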

ANICCA
Many facilities that are part of the nuclear fuel cycle can be modeled with ANICCA. Components representing the fuel cycle front end (mining, initial legacy, fuel fabrication and energy production) are included. For the back end, reprocessing, interim storage, reactor cooling pools and final repositories are included. Different fuels can be fabricated, such as UOX and MOX for LWR and Fast Reactors (FR), as well as an inert matrix composed of Pu and minor actinides for ADS. They can be created using fixed fractions of materials or equivalence-based methods. The latter are fundamental when multi-reprocessing is required, as the reactivity of the fuel needs to be preserved. Three equivalence models (U-236 compensation, Pu-239 and MOX_EU) are implemented in ANICCA to obtain a reference reactivity level for certain types of fuel. Such a reactivity level, given initially by the irradiation libraries, is re-calculated when the fuel is loaded in the reactor and considered during the fabrication process. A more detailed description of the code modules and the associated models can be found in reference [3].
To perform the burnup process in ANICCA, a specific library for each fuel/reactor type needs to be used. For this work, this library was created by means of the SERPENT2 code. Its general output file sums up all the information about the irradiated fuel by type and the evolution of the isotopes generated per simulated time step. Along with this file, cross-section and fission yield files need to be generated at every time step as well. All this leads to the management of vast quantities of output files to create the averaged library for ANICCA. This process is carried out by a library builder script included in the code, which takes the required files and weights them by flux, burnup and volume for each type of fuel in the reactor to create a single library.
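The averaging performed by the library builder can be illustrated for a single one-group cross-section; the three-zone values below are made-up numbers, and the function is a simplified sketch rather than the actual builder logic:

```python
import numpy as np

def collapse_xs(xs_zones, flux_zones, vol_zones):
    """Flux-volume weighted one-group cross-section over fuel zones,
    mirroring in spirit the averaging done by the library builder:
    sigma_avg = sum_i sigma_i * phi_i * V_i / sum_i phi_i * V_i."""
    w = np.asarray(flux_zones, float) * np.asarray(vol_zones, float)
    return float(np.sum(np.asarray(xs_zones, float) * w) / np.sum(w))

# Hypothetical three-zone example (barns, n/cm^2/s, cm^3)
sigma = collapse_xs([2.0, 2.2, 2.4], [1e14, 8e13, 5e13], [1.0, 1.0, 1.0])
```

The result lies between the zone-wise values and is pulled toward the zones with the highest flux-volume product, as expected of a flux weighting.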
This library contains the information needed for ANICCA to perform the burnup process in a single step (or more steps, if required) by means of a built-in routine based on CRAM16 [13]. The information in the library includes an averaged flux, the effective full-power days of irradiation (EFPD), as well as averaged isotopic cross-sections and fission yields. This library is also required to create the nuclear fuels, since a calculated parameter from the reference fuel simulated in SERPENT2 is also included to fabricate MOX or advanced fuels from the different material stocks, maintaining the same reactivity conditions as in the initial loading.
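The single-step burnup solve amounts to evaluating N(t) = exp(At) N(0) for the burnup matrix A. ANICCA's routine uses CRAM16 for this; the sketch below substitutes a dense matrix exponential (`scipy.linalg.expm`) on a made-up two-nuclide decay chain purely to illustrate the operation:

```python
import numpy as np
from scipy.linalg import expm

# Toy burnup matrix for a two-nuclide chain 1 -> 2 (decay constants in 1/day).
# A dense matrix exponential stands in for the CRAM16 approximation used by
# ANICCA; both evaluate the same solution N(t) = exp(A t) N(0).
lam1, lam2 = 0.1, 0.01
A = np.array([[-lam1,  0.0],
              [ lam1, -lam2]])
n0 = np.array([1.0, 0.0])   # initial inventory (arbitrary units)
t = 30.0                    # irradiation/decay time in days
n_t = expm(A * t) @ n0      # one-step depletion solve
```

For this small chain the result matches the analytic Bateman solution, which is a convenient sanity check for any matrix-exponential depletion routine.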

ANALYSIS OF THE SCENARIO AND RESULTS
An exhaustive benchmark for nuclear fuel cycle simulation codes was organized by the NEA/OECD in 2012 [10]. This benchmark offered three different scenarios: the first is an open cycle; the second, a mono-recycling of Pu in a PWR; and the third, a mono-recycling of Pu in a PWR fleet plus the deployment of GEN-IV fast reactors. In this paper, the uncertainty quantification results are discussed for the second scenario, which is described below.

Mono-recycling of Pu in a PWR fleet
In this study, a PWR fleet loaded with UOX and MOX is taken into account in order to fulfill the energy requirements depicted in figure 2. Starting from 60 GWe supported only by a PWR-UOX fleet, during the first years the cycle experiences the start-up of the PWR-MOX fleet (and the consequent reduction of the UOX one) until equilibrium is reached in the fifth year. Thereafter, mono-recycling takes place to support the constant 5 GWe supplied by the MOX fleet.

Figure 2. Installed electric capacity of the scenario
To provide a better understanding of the reprocessing required to support the PWR-MOX fleet, the cycle fuel material flow-chart is presented in figure 3. It can be said that the scenario output observables have been sampled 100 times, so that statistical techniques can be applied to infer information on the output population.
This sample size is low when trying to reach convergence of the uncertainty while inferring the output population standard deviation. Nevertheless, other techniques, such as the one based on non-parametric tolerance intervals [14], can be applied to determine the limits within which the population would lie with a certain confidence. For a sample size of 100 elements, it can be inferred that the interval between the minimum and the maximum sampled values of the observable of interest covers 95% of the population with a 95% confidence.

CONCLUSIONS

This paper introduced a computational scheme with the objective of propagating (via a Monte Carlo-based methodology) uncertainties related to neutron-reaction data all the way to time-dependent mass inventories computed with fuel cycle codes. An application to a mono-recycling of Pu scenario that has been widely studied in a previous NEA/OECD benchmark served as the test case for assessing the degree of inventory uncertainty in the reactor and the final disposal associated with ENDF/B-VII.1 nuclear data covariances from 14 different nuclides.
The relative standard deviation after 100 calculations gives a first quantitative idea of the degree of uncertainty. For instance, the MA inventory in the reactor at the end of cycle shows a 2.3% relative STD, compared to the 1.1% value for the Pu inventory. On the other hand, the final disposal presents a higher degree of uncertainty as a function of time. The relative STD of the output sample in the disposal is monotonically increasing, because the inventory in the disposal grows with time. At the end of the scenario, a maximum relative STD of 8% is observed for the plutonium mass, while 7% is observed for curium and only 2% for americium. As an example of the interpretation of the tolerance interval concept, the curium case in the final disposal at the end of cycle can be used as a reference: it can be said with a 95% confidence that 95% of the total Cm in the final disposal lies between 6 and 8 tonnes.
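The 95%/95% statement for a 100-element sample can be checked with the first-order, two-sided non-parametric (Wilks-type) formula, where the confidence that the interval spanned by the sample minimum and maximum covers at least a fraction gamma of the population is beta = 1 - gamma^n - n (1 - gamma) gamma^(n-1):

```python
def two_sided_confidence(n, gamma):
    """Confidence that the (min, max) of an n-element random sample covers
    at least a fraction gamma of the population (first-order, two-sided
    non-parametric tolerance interval)."""
    return 1.0 - gamma**n - n * (1.0 - gamma) * gamma**(n - 1)

beta = two_sided_confidence(100, 0.95)   # exceeds the 0.95 target
```

For n = 100 and gamma = 0.95 the confidence is about 96%, which is why 100 samples suffice for the 95%/95% claim quoted above.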
Finally, it was observed that the total mass in the final disposal presented very little uncertainty (only 0.06% as a maximum value). Since the final disposal mainly consists of fission products and, in turn, cross-section uncertainties are not propagated to fission products while depleting at constant power, many nuclides in the final disposal isotopic vector are not sensitive to nuclear data changes. This needs to be investigated in the future in order to develop a technique whereby the depletion methodology employed for the creation of the fuel cycle code libraries takes into account the proper cross-section uncertainty treatment for all the nuclides computed by ANICCA.
Since these were preliminary results of the application of an in-house Monte Carlo-based computational scheme for the uncertainty quantification of fuel cycle observables, more results (and a larger statistical sample) should be drawn in the future. Because the SERPENT2 code is considered fast at computing depletion calculations, 200 runs were feasible in one week's time (100 for MOX and 100 for UOX libraries). Employing 72 cores, a parallel SERPENT2 depletion calculation tallying all the energy-collapsed inventory reaction rates, reaching 1640 days and 60 MWd/kg-HM (divided into 44 time steps), took about 52 minutes. This means that the use of SERPENT2 and access to a regular computer cluster make it feasible to carry out studies like the one presented in this paper.