APPLICATION OF RECENT DEVELOPMENTS IN INTEGRAL DATA ASSIMILATION TO IN-DEPTH ANALYSIS OF UH1.2 EXPERIMENT AND TRANSPOSITION TO WHOLE PWR CORE

Recent developments in Integral Data Assimilation (IDA) methods within a Bayesian framework have been achieved at CEA to tackle the problem of experiments correlated through technological uncertainties and of numerical effects in neutron transport models. Reference Monte-Carlo and deterministic calculations (TRIPOLI4 and APOLLO3) are used together to solve the neutron transport equation and obtain the sensitivity profiles. Furthermore, the technological parameters of the mock-up experiments are analyzed to derive accurate uncertainties and correlations between the experiments (and finally the experimental covariance matrix required for IDA). We apply here the IDA approach, with a new, extended set of statistical indicators (Cook's distance, Bayesian and Akaike Information Criteria (BIC, AIC)) implemented in the CEA nuclear physics code CONRAD, to the UH1.2 integral experiments in reference and voided configurations (standard PWR fuel assembly in the EOLE mock-up reactor). The adjusted multigroup cross-sections and posterior covariances obtained with different ingredients of the assimilation process are compared. Finally, the investigated key neutron parameters (reactivity, reactivity worth (void effects) and fission rates) are transposed (with the same CONRAD code) to a standard PWR core. This in-depth analysis enables us to predict the residual uncertainties and biases due to the multigroup cross-section adjustments, while assessing the similarity of these integral experiments with respect to the main PWR neutronic safety parameters. In addition, technological parameter uncertainties and their impact on the Bayesian adjustment process are taken into account through a global experimental covariance matrix. We point out that the UH1.2 experiments bring relevant additional information to PWR keff calculations, significantly reducing the posterior uncertainties, but are less relevant for the fission rate distribution in reference and voided configurations.


INTRODUCTION
The Integral Data Assimilation is the final step of the French Verification/Validation and Uncertainty Quantification (so-called VVUQ) process for scientific calculation tools, mainly applied to neutron transport code libraries (JEF2.2 and JEFF3.1) over the past 30 years. Based on Bayesian statistical inference principles, this essential step has more recently been complemented by the transposition of integral mock-up reactor results (trends on nuclear data and/or multigroup cross-sections) to real core applications.
In modern reactor computation and analysis, the main drawback of this approach is that it requires the assumption that the only sources of uncertainty come from nuclear data. This problem is overcome by using, besides the multigroup deterministic code, pointwise Monte-Carlo reference calculations with a high level of statistical convergence (thanks to modern parallelized computers), which drastically reduces numerical method errors. In addition, the technological uncertainties (material balance and "as built" random uncertainties) of the integral experiments have to be taken into account through dedicated experimental correlation matrices.
In this paper we focus on the analysis of the EOLE reactor UH1.2 integral experiments [1] using our advanced VVUQ process, and transpose the investigated key neutron parameter results (regarding the measured critical states, void reactivity effects and fission rates) to a real PWR core (SLB1 core configurations, zero-power starting reactor). After a brief description of the fundamentals and mathematical background of Bayesian inference (and of the new diagnostic criteria implemented in the CONRAD code, detailed in [2]), we present the in-depth uncertainty analysis of the UH1.2 experiments. First, the Integral Data Assimilation is performed (considering the measured physical quantities one by one or all together); then, the results are transposed to a standard PWR core (keff, void reactivity worth and fission rate).
Notably, we investigated the impact of technological parameter correlations on the posterior covariance matrices and on the resulting adjusted cross-sections. We used the Cook's distance, BIC and AIC statistical criteria to assess the complexity of the Bayesian model (most relevant parameters, residual "pollution" errors).

INTEGRAL DATA ASSIMILATION AND TRANSPOSITION WITHIN THE FRENCH VVUQ PROCESS
The French CEA VVUQ process for scientific calculation tools, and neutron transport codes in particular, is divided into three steps. A first Verification step is devoted to checking the development of each elementary function of the code, ensuring non-regression through versioning supervision and dedicated test cases. The second Validation step aims at quantifying and calibrating the main relevant errors coming from the deterministic method assumptions; we mainly use Monte-Carlo reference calculations to fulfill this objective. Finally, the third Uncertainty Quantification step is devoted to the assessment of the residual errors of the global calculation route, thanks to mock-up reactor cores or in situ operated core experiments. The Integral Data Assimilation and transposition steps take part in this final Uncertainty Quantification step. Hereafter, we briefly describe the fundamentals and mathematical background of the statistical Bayesian inference approach on which both IDA and transposition are based, and we define the advanced statistical indicators that have been recently implemented in the CONRAD code for diagnostic purposes.

IDA and transposition: mathematical background
Bayesian inference has been extensively used over the past 30 years to improve the knowledge of nuclear data (the European JEFF file for instance) and of the corresponding multigroup cross-sections (APOLLO libraries) for neutron transport applications. High-quality nuclear data are of prime importance when considering the design of advanced thermal and fast reactors. Nuclear data uncertainties generally rank as the most important source of uncertainty in neutron transport calculations and directly impact the safety studies of operating nuclear power plants. To improve the quality of these data, the basic idea is to use mock-up measurements as new integral information, thanks to Bayesian inference methods (least-square techniques or Monte-Carlo sampling are implemented, for instance, in the CONRAD code).
Using Bayes' theorem, and especially its generalization to continuous variables x (which stands for the multigroup cross-section or nuclear data vector), one can write the following relation between conditional probability density functions p when the analysis of a new data set y is considered:

p(x | y, U) = p(x | U) p(y | x, U) / p(y | U)    (1)

Here U represents the "background" or "prior" information on x. U is supposed to be independent of y through the model M (the discretized neutron transport equation in our case). In this framework, the denominator is just a normalization constant. The formal rule used to take into account information coming from new observations is therefore: posterior ∝ prior × likelihood. A data assimilation analysis can be seen as an estimation of at least the first two moments of the posterior probability density of a set of parameters x, knowing a prior information on these parameters and a likelihood, which gives the probability density function of observing a data set y knowing x. To solve this problem, two major solutions exist:

1. Adding approximations and hypotheses, one can obtain an equation that can be solved numerically: find the minimum of a cost function, which is mostly a generalized chi-square.

2. Using Monte-Carlo sampling of all prior distributions and estimating the final posterior distributions.

The standard fitting procedure (e.g. the generalized least square method) is used here with some necessary assumptions: the prior probability density and the likelihood are considered as Gaussian multi-parametrized distributions. Thus, if the prior parameters and their covariance matrix are respectively x_m and M_x (resp. y and M_y for the integral data), the parameters can be re-estimated (updated) by minimizing the cost function:

chi2(x) = (x - x_m)^T M_x^{-1} (x - x_m) + (y - t(x))^T M_y^{-1} (y - t(x))    (2)

Using the least-square fitting technique, the new values are:

x_{n+1} = x_n + M_x G_n^T (G_n M_x G_n^T + M_y)^{-1} (y - t(x_n))    (3)

where t(x) is the theoretical model (i.e. the result of computer code simulations) depending on the investigated basic parameters x (i.e. the multigroup cross-sections). The posterior covariance matrix is obtained at the same time:

M_post = M_x - M_x G_n^T (G_n M_x G_n^T + M_y)^{-1} G_n M_x    (4)

where G_n is the sensitivity matrix, i.e. the matrix of the derivatives of the physical quantities with respect to the parameters (generally obtained through perturbation methods), and n stands for the iteration number used to converge the fit numerically.
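The GLS update and posterior covariance above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the CONRAD implementation; all names are chosen for this example:

```python
import numpy as np

def gls_update(x_prior, M_x, y, M_y, t_x, G):
    """One iteration of the generalized least-square (GLS) update.
    x_prior : prior parameters (e.g. multigroup cross-sections)
    M_x     : prior covariance of the parameters
    y, M_y  : integral data and experimental covariance
    t_x     : model prediction t(x_prior)
    G       : sensitivity matrix dt/dx evaluated at x_prior"""
    # Gain matrix combining prior and experimental covariances
    K = M_x @ G.T @ np.linalg.inv(G @ M_x @ G.T + M_y)
    x_post = x_prior + K @ (y - t_x)   # updated parameters
    M_post = M_x - K @ G @ M_x         # posterior covariance, cf. (4)
    return x_post, M_post
```

With a single parameter and a single datum of equal variance, the update moves the parameter halfway toward the measurement and halves the prior variance, as expected from the formulas.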
Beyond the classical "chi-square" test, additional statistical criteria are required to quantify the influence of each individual datum (parameters and integral data) on the overall adjustment process. The Cook's distance, recently implemented in the CONRAD code, meets this objective. If x_post denotes the original posterior data (the adjusted multigroup cross-sections in our case) and x_post,(i) the posterior obtained when the i-th parameter is discarded from the Bayesian model, the Cook's distance can be calculated as (see [3] for instance):

D_i = (x_post - x_post,(i))^T M_post^{-1} (x_post - x_post,(i))

To investigate the data hierarchy more in depth, the Akaike, Bayesian and Deviance Information Criteria (resp. AIC, BIC, DIC) are also used. They are defined for a collection of Bayesian models M = {p(y; theta) : theta in Theta} respectively as:

AIC = 2k - 2 ln L(theta_hat)
BIC = k ln(n) - 2 ln L(theta_hat)
DIC = -2 ln L(theta_bar) + 2 p_D

where k is the number of model parameters, ln L(theta_hat) is the log of the maximum likelihood function, p_D is the effective number of degrees of freedom, and n is the number of integral experimental data used in the Bayesian adjustment process.
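These diagnostics are straightforward to compute once the posteriors are available. A minimal sketch (illustrative names; the M_post-metric form of Cook's distance follows the definition above, and AIC/BIC their textbook definitions):

```python
import numpy as np

def cooks_distance(x_post, x_post_wo_i, M_post):
    """Cook's distance between the full posterior x_post and the posterior
    x_post_wo_i obtained when the i-th datum is discarded, using the
    posterior covariance M_post as metric."""
    d = x_post - x_post_wo_i
    return float(d @ np.linalg.solve(M_post, d))

def aic(k, log_lik):
    """Akaike Information Criterion: AIC = 2k - 2 ln L_max."""
    return 2.0 * k - 2.0 * log_lik

def bic(k, n, log_lik):
    """Bayesian Information Criterion: BIC = k ln(n) - 2 ln L_max."""
    return k * np.log(n) - 2.0 * log_lik
```

A large Cook's distance flags a datum whose removal displaces the adjusted cross-sections significantly; AIC/BIC additionally penalize model complexity through k.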
Using the Kullback-Leibler residual information [3], the effective number of degrees of freedom can also be calculated. Using information theory arguments, a measure of the effective number of parameters in such a hierarchical model is obtained as the difference between the posterior mean of the deviance and the deviance at the posterior mean of the parameters of interest, p_D = D_bar(theta) - D(theta_bar). In general, p_D approximately corresponds to the trace of the product of Fisher's information matrix and the posterior covariance, which in normal models is the trace of the 'hat' matrix projecting the observations onto the fitted values:

p_D ≈ tr( I(theta_hat) M_post )

Besides the Bayesian adjustment process, the final transposition step is nothing else than the application of the posterior results (propagated through a first-order Taylor expansion) to the real application case R:

R_post = R_prior + S_R^T (x_post - x_m)

To assess the similarity of the experiment E with respect to the concept R, we classically use the scalar product of the sensitivity vectors normalized with respect to the covariance matrix M:

r_{E,R} = (S_E^T M S_R) / sqrt( (S_E^T M S_E) (S_R^T M S_R) )
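The transposition and similarity steps above can be sketched as follows (a minimal sketch with illustrative names, assuming sensitivity vectors and covariances are already available):

```python
import numpy as np

def transpose(R_prior, S_R, x_post, x_prior, M_post):
    """First-order transposition of the adjustment to an application case R:
    the response is shifted by S_R . (x_post - x_prior) and the posterior
    covariance is propagated through the sensitivity vector S_R."""
    R_post = R_prior + S_R @ (x_post - x_prior)
    var_R = float(S_R @ M_post @ S_R)   # posterior variance on R
    return R_post, var_R

def representativity(S_E, S_R, M):
    """Similarity factor between experiment E and application R: scalar
    product of the sensitivity vectors, normalized with respect to M.
    Equal to 1 for identical sensitivities, 0 for orthogonal ones."""
    num = S_E @ M @ S_R
    return float(num / np.sqrt((S_E @ M @ S_E) * (S_R @ M @ S_R)))
```

A representativity close to 1 (as for keff here) means the experiment constrains the application case efficiently; low values (as for the fission ratios) mean little uncertainty reduction is transposed.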

UH1.2 experiments
The reference homogeneous core is made of a regular lattice of 1400 standard UOx 3.7% 235U-enriched PWR fuel pins (Zircaloy-4 cladded), 80 cm high, and 17 stainless steel guide tubes corresponding to the safety and pilot control rods. Except for the pilot control rods, the UH1.2 core is π/2 radially symmetric (see Figure 1). The radial fission rate measurements were performed in 1989 [5] during the first experimental phase. The void configurations (30% and 50% void) have been obtained by replacing the claddings of the 7x7 central fuel pins by larger-diameter AG3 ones (colored in dark blue in Figure 2), simulating hot-power moderation ratios. For the 100% void configuration, the central 7x7 lattice is replaced by a massive AG3 tank built with the same 7x7-cell cross-section. In the reference UH1.2 configuration, many axial fission rate measurements have been carried out with dedicated 235U, 238U and 239Pu fission chambers located at different radial positions from the center to the periphery of the core, with a 2 cm axial spacing (all along the fissile height). The axial bucklings are inferred by adjusting, by generalized least squares, the measured fission rates to the fundamental cosine mode. Fission rate distributions have been measured through integral gamma spectrometry performed directly on the fuel pins. The measurements covered radially the main diagonal: close to the core-reflector interface for the reference configuration, and in the central region, inside and outside the 7x7-cell lattice, for the voided configurations. In 1989, for the reference configuration, the experimentalists associated a 1.5% experimental accuracy (1 std. dev.) with these measurements, including technological uncertainties (material balance and as-designed geometry), fuel pin positions in the core, and the gamma spectrometer statistical counting error.
This 1.5% uncertainty value has since been considered as overestimated, if we refer to the specific studies performed in 1992 during the UH1.4 and UH1.4-ABS experiments, which yielded values close to 1% (0.8% in [5]).
For the calculation-to-experiment comparisons, the results have been normalized to several points outside the 10x10 central zone and three rows away from the reflector, in order to avoid systematic errors (due to severe perturbations) in the normalization process.
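The axial buckling extraction mentioned above (least-squares adjustment of the measured axial fission rates to the fundamental cosine mode) can be illustrated by a simple grid-scan sketch. All names are illustrative; this is not the actual analysis code, and the real fit would also handle the extrapolated height and mid-plane offset:

```python
import numpy as np

def axial_buckling(z, f, b_grid):
    """Least-squares fit of measured axial fission rates f(z) to the
    fundamental cosine mode A*cos(Bz*z), with the core mid-plane at z = 0;
    Bz**2 is the axial buckling. Scans candidate Bz values and solves for
    the optimal amplitude A analytically at each one."""
    best = None
    for b in b_grid:
        c = np.cos(b * z)
        a = float(f @ c) / float(c @ c)        # optimal amplitude for this Bz
        res = float(np.sum((f - a * c) ** 2))  # residual sum of squares
        if best is None or res < best[2]:
            best = (float(b), a, res)
    return best[0], best[1]
```

For noise-free cosine data the scan recovers the underlying mode exactly; with real measurements, the residual surface provides the fit uncertainty on the buckling.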

Monte-Carlo and deterministic computations
The computational analysis of the UH1.2 experiments has been carried out using both the 3D TRIPOLI4 ® [4] Monte-Carlo and APOLLO3 ® [5] deterministic codes, in order to obtain C/E values. A dedicated APOLLO3 ® calculation route has been developed and applied to these experiments in a previous work (see ref. [1]). This calculation route is based on 2D MOC lattice calculations (the calculation pattern being a single reflective cell, or 3x3 cells when including the guide tube cell, with self-shielding calculations performed using the Livolant-Jeanpierre equivalence method) coupled with a 3D cell-by-cell core calculation (APOLLO3 ® IDT solver, [1]). The application library is the JEFF3.1.1 281-group cross-section library recommended for thermal reactors.
The resulting C/E discrepancies for both keff and fission rates are summarized in the following tables. The deterministic (APOLLO3 ® IDT) and probabilistic (TRIPOLI4 ® ) calculations of these measurements are fully consistent and show an overestimation of the calculated keff for both the reference and void configurations (and consequently small reactivity worth discrepancies), together with a slight underestimation of the local fission rate in the central void area (3x3 central pins).

Sensitivity analysis
The cross-section sensitivity calculations with respect to keff, reactivity worth and local fission rates have been carried out using the most recent perturbation methods implemented in the APOLLO3 ® code (respectively the standard, equivalent and generalized perturbation methods). Concerning the technological parameters (namely the lattice pitch, 235U enrichment, and fuel and cladding geometry), we obtained the sensitivity coefficients (from which the experimental correlations are built) using direct perturbation calculations. The linearity and additivity properties of these parameters with respect to keff and the fission rate parameters have been checked for each case (linear within the technological parameter uncertainty margins).
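A direct perturbation sensitivity of this kind can be sketched as a central finite difference around the nominal parameter value (a generic sketch with illustrative names, not the APOLLO3 implementation):

```python
def direct_perturbation_sensitivity(model, p0, rel_step=0.01):
    """Relative sensitivity coefficient S = (dR/R)/(dp/p) of a response
    R = model(p) to a technological parameter p, estimated by direct
    (central) perturbation around the nominal value p0."""
    dp = rel_step * p0
    # Central difference: two perturbed runs around the nominal case
    slope = (model(p0 + dp) - model(p0 - dp)) / (2.0 * dp)
    return slope * p0 / model(p0)
```

In practice, the linearity check mentioned above amounts to verifying that S stays stable when rel_step is varied within the technological uncertainty margins.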

Integral Data Assimilation
The assimilation of the UH1.2 integral data has been done thanks to the standard Bayesian routine (Generalized Least Square method) implemented in CONRAD [2], with a new, extended set of statistical diagnostic criteria (Cook's distance, and the Deviance, Bayesian and Akaike Information Criteria). We first considered the keff results and then the void reactivity worths and fission rate ratios (center-to-periphery fission rate ratio) in the four configurations. Technological uncertainties have been accounted for through experimental covariance matrices, and several tests have been done to evaluate precisely their impact on the adjustment results.
As no significant numerical method biases have been observed using APOLLO3 ® , the cross-sections and the associated nuclear data can be considered here as independent of the model. The fitted parameters are basically the 26-group cross-sections used in the APOLLO3 ® 3D IDT core calculations (condensed from the initial 281-group CEA library thanks to a 2D SHEM-MOC calculation route, as shown in [1]). They are associated with the 26-group COMAC_V2 covariance matrix (see refs. [6] and [7] for details) recommended for the JEFF3.1.1 library.
In the following table, we present the keff IDA results when all measurements (keff and fission ratios in the reference and voided configurations) and experimental correlations (through technological uncertainties) are taken into account. A first analysis of the produced uncertainties seems to indicate that the additional voided experiments do not bring relevant additional information (but merely uncertainty reduction), since the experiments are strongly correlated (correlations close to 0.99). Nevertheless, if we consider the Cook's distances (detailed in the presentation), which appear to be the most relevant criterion (compared to the AIC/BIC/DIC criteria) for assessing the complexity of Bayesian models, the relevance of the scattering cross-sections of 235U (elastic) and 238U (elastic and inelastic) is highlighted. In particular, we found trends on the 238U (n,n') cross-sections (a 5-10% reduction above the threshold) consistent with the latest CEA/DEN evaluation, which is planned to be included in the new JEFF4 evaluation.

Transposition to standard PWR core calculations
Once the IDA process is done, we transposed the UH1.2 IDA results (taking the experimental correlations into account) to a standard (water-reflected) PWR core, multi-enriched (3 enrichment zones) and with classical pitch (control rods withdrawn). We obtained the corresponding sensitivity profiles for the investigated key neutron parameters (keff, local fission rates) as indicated previously (same calculation route).
In the next table, the transposition results of all the UH1.2 experiments to the PWR case are reported. It appears clearly that the UH1.2 experimental information significantly reduces the keff prior uncertainty. On the other hand, the fission ratios (center to periphery) are only slightly affected (around 1%), due to lower similarity coefficients (less than 0.4).

CONCLUSIONS
Recent developments in the Integral Data Assimilation (IDA) methods within a Bayesian framework have been achieved at CEA (implemented in the CONRAD code) and applied to the UH1.2 experiments, to be finally transposed to a standard PWR core. The computational analysis has been carried out with reference Monte-Carlo and deterministic calculations (TRIPOLI4 ® and APOLLO3 ® ) to evaluate the impact of potential deterministic model/method biases. Attention has been paid to obtaining the global experimental correlations coming from technological data uncertainties. First, we pointed out that, despite the presence of strong correlations between the reference and voided configurations (for the keff and fission rate measurements), new interesting results (influence of the 235U and 238U scattering cross-sections in the voided configurations) are observed after an in-depth Bayesian analysis (thanks to the newly implemented Cook's distance criterion). In addition, the transposition of the UH1.2 keff IDA to a standard PWR core yielded a significant reduction in C/E (by a factor of 2 for the UH1.2 experiments) and in the transposed uncertainty (half the prior uncertainty in the PWR case). The use of the local fission rate measurements in the IDA process also brought a bias correction (around 1.5%) but poor transposition results to the standard PWR case (since the similarity coefficients are too low, due to the significant core size differences). Prospects and new IDA calculations are in progress to obtain experiments (CAMELEON and FLUOLE mock-up measurements) that are more representative of poisoned and baffle-reflected PWR cores. Investigations of (large-core) size effects on the local power are also planned by adding large-core benchmarks as new integral data in the IDA process (the Japanese KUCA coupled-core experiments [8]).