Unresolved resonance range processing and probability tables generation in the GAIA2 system

The objective of this paper is to present the work carried out at the French Institut de Radioprotection et de Sûreté Nucléaire (IRSN) to process nuclear data in the unresolved resonance range (URR). Recently, a great deal of effort has been devoted at IRSN to developing an independent nuclear data processing system, GAIA2. First, a nuclear data storage object, independent of the ENDF-6 format, was implemented in order to transmit information between the components of a module-based scheme. Then, the generation of probability tables in the URR was added as an independent module named TOP (for Table Of Probability) and tested on a selected set of benchmarks. The methods used and the results are discussed, and some limitations in the way the tables are constructed are pointed out.


Introduction
Recently, IRSN has been looking into improving its nuclear data processing tool, the GAIA2 system. GAIA2 aims at producing processed cross sections in continuous-energy ACE formatted files to be used in criticality safety applications by Monte-Carlo codes such as MCNP or MORET. GAIA2 development started in 2012; the first release of the code included features such as processing of the resolved resonance range (RRR) for all R-Matrix formalisms, and Doppler-broadening techniques based on Fourier transforms [1]. Since then, two main steps have been taken toward a fully self-supporting system: the development of a nuclear data handler, a set of C++ classes able to store the relevant nuclear data in an independent format, and the generation of probability tables in the unresolved resonance range (URR). The former is briefly mentioned here as it is of interest for the upcoming inclusion of new features. The nuclear data handler is used in the flow of GAIA2 to store and transmit the initial and processed nuclear data between the several modules which form the code. The main motivation for this development is the need to get rid of the constraints linked to the ENDF-like format, which is still massively used between the different steps of the processing. Moreover, the use of an externalized storage object enables the modules to be developed as distinct executables, which facilitates the validation routines. Two executables enable the transformation of a nuclear data handler into a PENDF file and vice versa. PENDF files can then be converted into ACE files using the NJOY module ACER. A simplified scheme of the nuclear data handler in Unified Modeling Language (UML) can be seen in Figure 1. We will now focus on the development of one of the GAIA2 modules, named TOP, which implements the probability tables generation in the URR.
In this range, resonances cannot be distinguished experimentally, hence resonance parameters are provided as average quantities only. These can be exploited to obtain a probability distribution of cross section values at particular energies, and its representation as probability tables.

Cross section sampling in the URR
For a single reference energy in the URR, probability tables are produced at different temperatures using a Monte-Carlo technique derived from the one implemented in the AMPX code from ORNL [2]. Acceptable statistical ladders of resolved resonances are produced around the reference energy from average resonance parameters and statistical theoretical laws (the Wigner law for the spacing between resonances, and χ² laws for the resonance neutron, capture, and fission widths). The Single-Level Breit-Wigner (SLBW) formulas derived from the R-Matrix theory and ψ-χ Doppler broadening are used to reconstruct cross sections (total and corresponding partials) at the reference energy for all temperatures. The operation is repeated a large number of times (10 000 times in the calculations below) to obtain a representative and exploitable sampling of the cross sections. The resonances are sampled as pairs around the energy, as represented in Figure 2. One can notice that, in order to mimic the real repartition of resonances, the central spacing must not be sampled from the Wigner distribution W(x) but from xW(x). This is necessary to avoid the inspection paradox, a well-known particularity of point processes in this context. Enough pairs must be sampled so that the most distant resonances have almost no influence on the final cross section computed at the reference energy (for instance, 120 pairs of resonances were used for 238 U). For a more detailed description of the procedure, one can refer to [2] and [3]. To estimate the quality of the procedure, the sampling average values can be compared to the average values computed directly from the average resonance parameters [4]. Table 1 displays some examples of the average values for 238 U from the nuclear data library JEFF-3.2 [5]. In Table 1 we have compared the average cross section values above and below the competitive threshold energy.
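As a concrete illustration of this sampling scheme, the following Python sketch (not the actual TOP implementation; the numbers of degrees of freedom and the values used in the test below are placeholder choices) builds one statistical ladder around a reference energy, drawing the central spacing from the length-biased density xW(x) and the partial widths from χ² laws:

```python
import math
import random

def sample_wigner_spacing(rng, d_avg):
    """Spacing drawn from the Wigner surmise W(x) = (pi*x/2)*exp(-pi*x^2/4),
    with x = D/<D>, via inverse-CDF sampling."""
    u = rng.random()
    return d_avg * 2.0 * math.sqrt(-math.log(1.0 - u) / math.pi)

def sample_central_spacing(rng, d_avg):
    """Spacing of the interval containing the reference energy, drawn from
    the length-biased density x*W(x) to avoid the inspection paradox.
    x*W(x) is proportional to x^2*exp(-pi*x^2/4), i.e. a chi distribution
    with 3 degrees of freedom and sigma^2 = 2/pi."""
    sigma = math.sqrt(2.0 / math.pi)
    return d_avg * math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))

def sample_width(rng, g_avg, dof):
    """Partial width from a chi-squared law with `dof` degrees of freedom
    (Porter-Thomas for dof = 1), normalized to the average width."""
    return g_avg * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(dof)) / dof

def sample_ladder(rng, e_ref, d_avg, gn_avg, gg_avg, n_pairs=120,
                  dof_n=1, dof_g=40):
    """One statistical ladder of 2*n_pairs resonances around e_ref, as
    (energy, neutron width, capture width) tuples. dof_g = 40 is a
    placeholder for the large effective number of capture channels."""
    d0 = sample_central_spacing(rng, d_avg)
    lo = e_ref - rng.random() * d0  # e_ref uniform in the central interval
    ladder = []
    for start, sign in ((lo, -1.0), (lo + d0, +1.0)):
        e = start
        for _ in range(n_pairs):
            ladder.append((e, sample_width(rng, gn_avg, dof_n),
                              sample_width(rng, gg_avg, dof_g)))
            e += sign * sample_wigner_spacing(rng, d_avg)
    return sorted(ladder)
```

Each call produces one ladder; repeating the call many times and reconstructing the cross section at e_ref from each ladder yields the sampling described above.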
Indeed, the URR procedure relies on the SLBW formalism, for which only the neutron, fission and capture widths are provided in the ENDF format. To take other channels into account, a competitive reaction width can be specified. For 238 U for instance, the first inelastic level opens around 44.91 keV. A corresponding average competitive resonance width is provided above this threshold, to be added to the other partial widths when the partial cross sections are computed. As the total resonance cross section is the sum of the elastic, capture, and fission cross sections, a background cross section must be added to it. Probability tables may be normalized to one after they have been built, when the "LSSF = 1" procedure of the ENDF format applies (which is the case for 238 U). Monte-Carlo codes are then instructed to multiply them by a tabulated value found in the evaluation. The resulting cross section samplings of GAIA2 have been cross-checked against the NJOY code. Figure 3 shows the empirical distribution functions of the total and capture sampled cross sections for 238 U by NJOY and GAIA2. Even though the methods used by the two codes are different (NJOY reconstructs a pointwise cross section over a whole range and not only at the reference energy), the results are very close.
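As a reminder of how the competitive width enters the reconstruction, a minimal 0 K SLBW capture sketch is given below; the interference terms, the ψ-χ Doppler broadening, and the energy dependence of the neutron wave number k are deliberately omitted, and the resonance tuples and constants in the test are purely illustrative:

```python
import math

def slbw_capture(e, resonances, g_j, k):
    """0 K Single-Level Breit-Wigner capture cross section at energy e,
    summed over a ladder of (e_r, gamma_n, gamma_g, gamma_x) resonances.
    The competitive width gamma_x enters the total width only, as in the
    ENDF-6 URR convention; k is taken constant here for simplicity."""
    sigma = 0.0
    for e_r, gn, gg, gx in resonances:
        gtot = gn + gg + gx  # total width includes the competitive channel
        sigma += (math.pi / k ** 2) * g_j * gn * gg \
                 / ((e - e_r) ** 2 + (gtot / 2.0) ** 2)
    return sigma
```

The last assertion in the test checks the expected physical effect: adding a competitive width broadens the total width and lowers the capture peak.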

Construction of a probability table from cross sections sampling
From the obtained set of sampled cross sections, one can derive a probability table which will be used by the Monte-Carlo codes in the URR to take self-shielding into account. Such a table should be seen as a discrete representation of the probability distribution function (density or cumulative) of the cross section at a particular energy. In practice, when a cross section must be used in the URR, Monte-Carlo codes sample a random number between 0 and 1 and select a corresponding value in a cumulative probability table. It must be underlined that, currently, these codes only pick cross sections among a fixed number of given values (typically 20) which correspond to the bin values of the table. This loss of continuity (the real underlying cross section distribution is a continuous function) in the selection procedure was originally motivated by speed and storage constraints. This has a direct influence on the way a probability table is generated. The goal is to provide an accurate and exhaustive representation of the distribution function, keeping in mind that the bin values are not interpolated and that a small number of points is required so as not to slow down the Monte-Carlo computations.
Actually, the simplest density estimator corresponding to such requirements is a histogram. The construction of a histogram is very easy, as one only has to provide bin limits and iterate through the sampled values to find each bin probability. Usually each bin's midpoint is used as the reference value for the bin. In our case however, as Monte-Carlo codes attach great importance to the average cross section value (which is related to the reaction rate in neutronics computations), we use for each bin the average of the cross section values falling in the bin. From the mathematical point of view of density estimation, the bias due to the discretization becomes harder to estimate, but it provides a set of data points whose mean is exactly that of the sampling. Two methods have been investigated to determine the bin limits. First, a logarithmic binning has been used, as cross section distributions usually have a long tail (most pronounced at low temperatures). Then, an equiprobable distribution of the bins has been used, which gives each value an equivalent weight. This latter option could be very convenient for Monte-Carlo codes, which would no longer have to implement a search routine to pick cross section values. GAIA2 enables the probability tables to be constructed according to both methods, whose outcomes for 238 U at 40 keV are displayed in Figure 4, along with the empirical distribution function.
From the picture, the logarithmic binning appears too coarse to be used without interpolation: almost half of the y-axis is covered by only three points. On the other hand, the equiprobable binning looks more robust, even if the tails of the cumulative cross section are not well represented. In the next part, we use these tables in criticality benchmark calculations, which confirm this analysis. Of course, the discretization effects become less significant for both approaches as the number of bins in the table increases. To remove these limitations altogether, though, the probability table method used by Monte-Carlo codes could be replaced by interpolation on a direct estimator of the underlying cumulative distribution function. For now, neither the ACE format nor the Monte-Carlo codes themselves allow such a use.

Benchmarking results and discussion
The implementation of the probability tables processing has been tested on five benchmarks from the ICSBEP handbook (of the IEU-MET-FAST series). The results are gathered in Table 2 (NJOY and GAIA2, both with probability tables) and Table 3 (NJOY and GAIA2, both without probability tables). A direct comparison between GAIA2 and NJOY for both cases (with and without probability tables) exhibits a very good agreement between the codes. Experimental values are available for these benchmarks [6], and are displayed in Table 2. Corresponding plots along with C/E comparisons are displayed in Figure 5. In order to investigate the impact of the probability tables binning on benchmark computations, the probability tables from GAIA2 (computed by the module TOP) have been computed using both an equiprobable and a logarithmic binning. These tables have then been combined with the NJOY processing, in order to focus on the unresolved resonance range only. The same set of benchmarks as previously has been used with the resulting processed files. These calculations have been made with a final statistical uncertainty of 20 pcm. Results are gathered in Table 4 and plotted in Figure 6. For the sake of clarity, one should note that Figure 5 refers to the comparison between NJOY and GAIA2, whereas Figure 6 corresponds to the methods developed in this work and included in GAIA2. According to the keff computations, several comments can be made about the importance of the probability tables processing. First, it appears that without the 238 U probability tables the results differ largely for IEU-MET-FAST-007 and IEU-MET-FAST-016. Secondly, the NJOY computations with probability tables provide equivalent results when the tables from the PURR module of NJOY are replaced by the GAIA2 tables computed with an equiprobable binning. Third, the whole GAIA2 processing (resolved and unresolved) is in good agreement with the NJOY results.
Fourth, the benchmark results with a logarithmic binning for the tables show discrepancies with the results for equiprobable tables, while both tables are obtained from the exact same set of sampled cross sections.


Conclusion
In this paper, several new developments implemented in the IRSN GAIA2 nuclear data processing code have been presented. In particular, a first version of the probability tables computation in the unresolved resonance range has been detailed. It appears that deriving the probability tables from a set of sampled cross sections must be done carefully, even if an equiprobable repartition of the table's bins seems rather acceptable for the benchmark calculations which have been conducted. To properly estimate the influence of the discretization of the tables, Monte-Carlo codes should however implement an interpolation routine selecting the cross section value from a direct estimation of the cumulative distribution. In any case, the GAIA2 implementation has been compared to NJOY on 238 U from JEFF-3.2, which has been processed by both codes. Results are in very good agreement on some criticality benchmarks from the ICSBEP which are sensitive to the unresolved range. As both codes use a common formalism to compute cross sections, this represents a first validation step for upcoming developments. The nature of such developments is two-fold. The first point concerns the construction of a probability table once cross sections have been sampled. Presently, the proposed equiprobable binning does not represent the tails with precision, even if the results on criticality benchmarks seem rather good. A method allowing the tails to be well represented should be developed, without degrading the representation in the bulk. The second point concerns the cross section sampling itself, as some open questions remain. In particular, a precise estimation of the sufficient number of resonances to generate for each sampling should be investigated.
Moreover, the use of the Wigner law to sample the resonance spacings is an approximation of the underlying physics, as resonances are actually correlated, following properties of Random Matrix Theory. Producing more physically acceptable resonance ladders and more accurate probability tables appear to be the relevant next steps toward a full unresolved resonance range processing.