ANSWERS TOOLS FOR UNCERTAINTY QUANTIFICATION AND VALIDATION

ANSWERS is developing a set of uncertainty quantification (UQ) tools for use with its major physics codes: WIMS/PANTHER (reactor physics), MONK (criticality and reactor physics) and MCBEND (shielding and dosimetry). The Visual Workshop integrated development environment allows the user to construct and edit code inputs, launch calculations, post-process results and produce graphs; uncertainty quantification and optimisation tools have recently been added. Prior uncertainties arising from nuclear data or manufacturing tolerances can be estimated using the sampling method, or using the sensitivity options in the physics codes combined with appropriate covariance matrices. To aid the user in the choice of appropriate validation experiments, the MONK categorisation scheme and/or a similarity index can be used. An interactive viewer has been developed which allows the user to search through, and browse details of, over 2,000 MONK validation experiments that have been analysed from the ICSBEP and IRPhE validation sets. A Bayesian updating approach is used to assimilate the measured data with the calculated results. It is shown how this process can be used to reduce bias in calculated results and reduce the calculated uncertainty on those results. This process is illustrated by application to a PWR fuel assembly.


INTRODUCTION
When calculating best estimate reactor parameters of interest it is not only important to provide an accurate estimated value of a given parameter, but also to provide a reliable estimate of the uncertainty on that estimated value. The move in recent years from pessimistic estimates to BEPU (best estimate plus uncertainty) requires the use of sophisticated tools for uncertainty quantification (UQ) [1]. The aim of an ongoing strand of ANSWERS [2] development work is to establish UQ tools for use with the major ANSWERS' physics codes, including: WIMS/PANTHER (reactor physics), MONK (criticality and reactor physics) and MCBEND (shielding and dosimetry). For some years ANSWERS has been developing Visual Workshop, an Integrated Development Environment to accompany the physics codes. This allows the user to construct and edit code inputs, launch calculations, post-process results and produce graphs, and recently uncertainty quantification and optimisation tools have been added.
Initial UQ tool development focused on the sampling method in which the user can specify statistical distributions rather than numerical values for user-specified input parameters [3]. We have also produced sampled nuclear data libraries in which the data on the evaluated nuclear data files are selected from statistical distributions, rather than using the reported central values. Monte Carlo sampling or Latin hypercube sampling can be chosen by the user. Additionally, capabilities have been included in the physics codes to calculate sensitivities which can be combined with a covariance matrix for the input parameters as an alternative way of undertaking UQ. These methods are described and results for a PWR fuel assembly are presented.
The above approaches do not account for evidence obtained from plant measurements or validation experiments, which can be used to refine best estimate values for parameters and their uncertainties. When using validation data, a major concern is what constitutes appropriate data. Two main tools are provided to aid the user in the choice of appropriate experiments: the MONK categorisation scheme (see ref [4] for details) and a similarity index described in Section 5. To aid this, an interactive viewer has been developed which allows the user to search through details of roughly 2,000 MONK validation experiments that have been analysed from the ICSBEP and IRPhE validation sets.
ANSWERS has investigated a number of methods for combining plant calculations and validation data including: data assimilation, Bayesian updating, maximum likelihood estimation and extreme value theory. In this paper we concentrate on the Bayesian updating approach and describe how this is implemented in ANSWERS software. It is shown how this process can be used to reduce bias in calculated results and reduce the uncertainty on the estimated quantities. This process is illustrated by application to a PWR fuel assembly.

VISUAL WORKSHOP
Visual Workshop is the ANSWERS' IDE (integrated development environment) for preparing and verifying models, launching calculations, post-processing results and graphical display; see Figure 1. It is designed to work with ANSWERS' physics codes, including WIMS, MONK®, MCBEND and RANKERN. Visual Workshop also contains tools to help the user undertake uncertainty analyses with ANSWERS' codes, as described in Sections 3 to 6 below.

SAMPLING TOOL FOR UNCERTAINTY QUANTIFICATION
Tools have been implemented in Visual Workshop for uncertainty quantification and optimisation [5]. A sampling methodology is available for estimating prior uncertainties, by running a number of calculations in which uncertain input parameters are varied by choosing values from user-specified distributions; Monte Carlo, stratified and Latin hypercube sampling options are currently available [3]. Wilks' method [6] is also available for user-defined probability and confidence levels [3]. Figure 2 shows an example input for the sampling tool, for estimating prior uncertainties arising from manufacturing tolerances (geometry, composition and density). In this simple, illustrative example, a 19 × 19 UO2 fuel assembly partially immersed in water is investigated. The uncertainty in the calculated value of k-effective (using MONK's "K(THREE)" estimator) arising from uncertainties in fuel enrichment, fuel density, length of the fuel pins, pitch of the fuel pins, fuel pellet diameter and clad thickness is estimated. This is achieved by sampling the uncertain parameters from normal distributions in this instance; truncated-normal, uniform and beta distributions are also available. Only five sampled calculations are requested in order to keep the output to manageable proportions. From the output it is a simple matter to estimate the mean and standard deviation, and such basic statistics are saved in the runref.statistics.csv file.
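The sampling workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not the tool itself: the six parameter nominal values and standard deviations are invented, and the MONK calculation is replaced by a simple linear surrogate around a hypothetical k-effective.

```python
import csv
import random
import statistics

random.seed(1)

# Hypothetical nominal values and 1-sigma uncertainties for the six
# uncertain parameters (illustrative numbers, not real tolerances).
parameters = {
    "enrichment_wt_pct":  (4.0,   0.02),
    "fuel_density_g_cm3": (10.4,  0.05),
    "pin_length_cm":      (360.0, 0.5),
    "pin_pitch_cm":       (1.26,  0.002),
    "pellet_diameter_cm": (0.82,  0.001),
    "clad_thickness_cm":  (0.057, 0.0005),
}

def mock_k_effective(sample):
    """Stand-in for a MONK calculation: a linear surrogate around a
    nominal k-effective of 0.92 (purely illustrative)."""
    k = 0.92
    k += 0.02 * (sample["enrichment_wt_pct"] - 4.0)
    k += 0.005 * (sample["fuel_density_g_cm3"] - 10.4)
    k -= 0.01 * (sample["clad_thickness_cm"] - 0.057) / 0.057
    return k

# Draw Monte Carlo samples from normal distributions, one set per
# requested calculation, and evaluate the response for each.
results = []
for _ in range(1000):
    sample = {name: random.gauss(mu, sigma)
              for name, (mu, sigma) in parameters.items()}
    results.append(mock_k_effective(sample))

mean = statistics.mean(results)
stdev = statistics.stdev(results)
print(f"k-effective: mean = {mean:.5f}, std dev = {stdev:.5f}")

# Save basic statistics, analogous to the runref.statistics.csv file.
with open("statistics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["quantity", "mean", "stdev"])
    writer.writerow(["k-effective", mean, stdev])
```

In the real tool only a handful of sampled calculations may be affordable, which is where Wilks' method or the stratified and Latin hypercube options become valuable.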
The nuclear data used by the codes are themselves subject to uncertainty. The values of the cross-sections etc. in the evaluated nuclear data files, such as the JEFF, ENDF/B, CENDL and JENDL series of evaluations, are provided with uncertainties by the evaluators. The cross-sections etc. must be processed to produce the continuous energy (BINGO) nuclear data libraries required by the MONK and MCBEND Monte Carlo codes and also to produce the multigroup libraries required by WIMS/PANTHER. In order to propagate the evaluated nuclear data uncertainties through the physics calculations, sets of nuclear data libraries have been produced in which the evaluated parameters are drawn from statistical distributions chosen to represent the nominal values and their associated uncertainties. These are processed into sets of sampled BINGO and WIMS libraries as described in [5]. Sets of 25, 60 and 120 Latin hypercube sampled libraries have been produced. In addition a set of 1,000 Monte Carlo sampled libraries has been generated in the WIMS energy group scheme as a reference set. These libraries can be chosen for use with the UQ calculations to allow the uncertainty resulting from nuclear data to be evaluated. The sampled libraries can also be used in combination with variations in the geometric and compositional data to estimate the total uncertainty [3].
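Latin hypercube sampling, the scheme used to generate the sets of 25, 60 and 120 sampled libraries, can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the library-generation code: only two independent "nuclear data" parameters are sampled (real libraries sample many correlated quantities), and the uncertainties are invented.

```python
import random
from statistics import NormalDist

random.seed(2)

def latin_hypercube(n_samples, dists):
    """Latin hypercube sampling: for each parameter, stratify [0, 1] into
    n_samples equal bins, draw one uniform point per bin, shuffle the bin
    order independently per parameter, then map through the inverse CDF."""
    columns = []
    for dist in dists:
        # One point per stratum, then randomise the stratum order.
        points = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(points)
        columns.append([dist.inv_cdf(p) for p in points])
    # Transpose: one row per sampled library/calculation.
    return list(zip(*columns))

# Two hypothetical nuclear data scaling factors with 1-sigma uncertainties.
dists = [NormalDist(mu=1.0, sigma=0.03), NormalDist(mu=1.0, sigma=0.05)]
samples = latin_hypercube(25, dists)
print(len(samples), "sampled parameter sets, e.g.", samples[0])
```

Because every stratum of every parameter is hit exactly once, far fewer samples are needed than with plain Monte Carlo to cover the distributions, which is why the 25-sample library set can be compared against the 1,000 Monte Carlo reference set.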

VALIDATION DATABASE VIEWER
Once the prior uncertainty has been estimated the next task is to choose measured data for validation. Here "validation" is defined to be the process by which measured data are combined with calculated results to refine the calculated values for parameters of interest, i.e. to remove calculation bias and update the estimated uncertainty. At the time of writing, the ANSWERS' criticality database contains 828 Tier 1 (independently checked) experimental configurations and 1,205 Tier 2 (self-checked) configurations for use with the MONK reactor physics and criticality code; see ref [5] for more details. The Tier 1 and 2 validation cases are displayed in Figure 3. To assist with the choice of measured data for MONK analysis, a validation database viewer has been implemented in Visual Workshop. The viewer allows the user to search and browse the Tier 1 and 2 cases in the MONK validation database, and click on individual cases to display details, as shown in Figure 4.

SIMILARITY INDEX
A similarity index is evaluated from the nuclear data sensitivities of a candidate benchmark experiment (B) and the application system (S). This gives a value that indicates how similar the nuclear data sensitivities of systems B and S are, that essentially ranges from 0 (no similarity) to 1 (complete similarity). A Similarity Index tool evaluates the similarity indices for each of the validation experiments appropriate to the chosen application and displays the results in descending order of magnitude.

Figure 5. Screen Shot from the Similarity Index Tool
An example is shown in Figure 5. In this case, the top 20 matches all have ESUM similarity indices between 0.94 and 0.95. (Also given are the total sensitivity and two quantities, AVALS and DSUM, associated with an alternative similarity measure not discussed here.)
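The ranking step can be illustrated with a short sketch. Note the hedge: the ESUM index actually used by the tool is defined in the MONK documentation; here a simple normalised overlap (cosine) of sensitivity vectors stands in for it, with the same 0-to-1 behaviour, and both the sensitivity values and the use of ICSBEP-style case names are purely illustrative.

```python
import math

def similarity(s_app, s_exp):
    """Illustrative similarity index: normalised overlap (cosine) of two
    sensitivity vectors; a stand-in for the ESUM index, ranging from
    0 (no similarity) to 1 (complete similarity) for aligned profiles."""
    dot = sum(a * b for a, b in zip(s_app, s_exp))
    norm = (math.sqrt(sum(a * a for a in s_app))
            * math.sqrt(sum(b * b for b in s_exp)))
    return dot / norm if norm else 0.0

# Hypothetical sensitivity profiles over a handful of nuclear data items.
application = [0.30, 0.10, -0.05, 0.20]
experiments = {
    "HEU-MET-FAST-001":   [0.28, 0.12, -0.04, 0.21],
    "LEU-COMP-THERM-002": [0.05, 0.30,  0.10, -0.02],
    "PU-SOL-THERM-003":   [0.10, -0.05, 0.25,  0.05],
}

# Rank candidate experiments in descending order of similarity,
# as the Similarity Index tool does.
ranked = sorted(
    ((similarity(application, s), name) for name, s in experiments.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{name}: {score:.3f}")
```

A cut-off on the ranked list (such as the 0.78 threshold used in the example calculation below) then selects the experiments carried forward to the validation step.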

VALIDATION
A number of methods are being made available within Visual Workshop to combine the measured data with the calculated results to improve the estimated value of k-effective and its uncertainty. The UK Working Party on Criticality (WPC) produced a summary of general techniques available to derive the safety criterion used in criticality assessments [8], including (where EPD = error in physical data and USL = upper sub-critical limit):
- EPD - standard error method;
- EPD - standard deviation method;
- Systematic bias and uncertainty - subtraction;
- Systematic bias and uncertainty - addition;
- USL method 1 - H to fissile material ratio;
- USL method 1 - mean log of exponential energy of neutrons causing fission (MLENCF);
- USL method 1 - mean log of exponential energy of neutrons causing capture (MLENCC).
In addition, a Bayesian updating scheme is available based on the method discussed in ref [9], and also the generalised linear least squares (GLLS) method described below. The estimated bias for application case $\alpha$, $k_{\alpha,\mathrm{bias}}$, is given by (using the Einstein summation convention over repeated suffices):

$$k_{\alpha,\mathrm{bias}} = S_{\alpha i}\, C_{ij}\, S_{\varepsilon j} \left[ S_{\varepsilon k}\, C_{kl}\, S_{\delta l} + V_{\varepsilon\delta} \right]^{-1} \left( \Delta k / k \right)_{\delta}$$

where $S_{\alpha i}$ and $S_{\varepsilon i}$ are the sensitivities of the application ($\alpha$) and experiment ($\varepsilon$), respectively, to nuclear data item $i$; $C_{ij}$ is the nuclear data covariance matrix; $V_{\varepsilon\delta}$ is the covariance between experiments $\varepsilon$ and $\delta$ resulting from uncertainties in dimensions and compositions etc.; $[\,\cdot\,]^{-1}$ denotes the $(\varepsilon,\delta)$ element of the inverse of the bracketed matrix; and $(\Delta k / k)_{\delta}$ is the relative code bias for experiment $\delta$.
The posterior uncertainty, $\sigma_{\alpha,\mathrm{post}}$, is related to the prior uncertainty, $\sigma_{\alpha,\mathrm{prior}}$ (where $\sigma^2_{\alpha,\mathrm{prior}} = S_{\alpha i} C_{ij} S_{\alpha j}$), by:

$$\sigma^2_{\alpha,\mathrm{post}} = \sigma^2_{\alpha,\mathrm{prior}} - S_{\alpha i}\, C_{ij}\, S_{\varepsilon j} \left[ S_{\varepsilon k}\, C_{kl}\, S_{\delta l} + V_{\varepsilon\delta} \right]^{-1} S_{\delta m}\, C_{mn}\, S_{\alpha n}$$
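A GLLS update of this form can be sketched numerically with tiny matrices. The example below uses pure Python with two nuclear data items and two experiments; every sensitivity, covariance and bias value is invented for illustration. It evaluates the bias $S_{\alpha} C S_{\varepsilon}^{T} [S C S^{T} + V]^{-1} (\Delta k/k)$ and the corresponding posterior variance $\sigma^2_{\mathrm{prior}} - S_{\alpha} C S_{\varepsilon}^{T} [S C S^{T} + V]^{-1} S_{\delta} C S_{\alpha}^{T}$.

```python
# Tiny illustrative GLLS update; all numbers are hypothetical.
S_alpha = [0.30, -0.10]          # application sensitivities S_{alpha i}
S_exp = [[0.28, -0.08],          # experiment sensitivities S_{eps i}
         [0.10,  0.20]]
C = [[0.0004, 0.0001],           # nuclear data covariance C_{ij}
     [0.0001, 0.0009]]
V = [[1e-6, 0.0],                # experimental covariance V_{eps delta}
     [0.0, 1e-6]]
dk_over_k = [0.002, -0.001]      # relative code bias per experiment

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def inv2(M):
    """Inverse of a 2x2 matrix (enough for this sketch)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# M_{eps delta} = S_{eps k} C_{kl} S_{delta l} + V_{eps delta}
CS = [matvec(C, s) for s in S_exp]
M = [[dot(S_exp[e], CS[d]) + V[e][d] for d in range(2)] for e in range(2)]
Minv = inv2(M)

# w_eps = S_{alpha i} C_{ij} S_{eps j}
CSa = matvec(C, S_alpha)
w = [dot(S_exp[e], CSa) for e in range(2)]

# Bias: k_bias = w_eps (M^-1)_{eps delta} (dk/k)_delta
k_bias = dot(w, matvec(Minv, dk_over_k))

# Prior and posterior variances.
var_prior = dot(S_alpha, CSa)
var_post = var_prior - dot(w, matvec(Minv, w))

print(f"bias = {k_bias:.6f}")
print(f"prior sigma = {var_prior**0.5:.6f}, "
      f"posterior sigma = {var_post**0.5:.6f}")
```

Because the subtracted term is non-negative, the posterior variance is never larger than the prior, which is why assimilating well-chosen experiments tightens the estimated uncertainty.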

EXAMPLE CALCULATION
An example calculation has been performed for a GBC-32 flask holding PWR fuel elements with a burnup of 45 GWd/te and five years of cooling; actinide-only compositions were transferred from the reactor to the flask using the COWL material transfer facility in MONK [10]. The similarity to 1,967 experimental configurations was evaluated and those with similarity index > 0.78 were chosen, giving 175 experiments for consideration. The prior uncertainty was estimated using the sensitivity matrix and the nuclear data covariance matrix. The MONK calculations were run using 5,000 superhistories per stage with a target standard deviation of 0.0002 on k-effective.
The results of the GLLS analysis are displayed in Table I. Note that the use of the experimental data has more than halved the estimated uncertainty on the calculated result. Also, the bias-corrected value of k-effective plus three standard deviations is less than 0.95. Note, however, that the correlation between experiments within an experimental series has been neglected. Estimating such correlations is a complex and time-consuming process; a way to approach this is described in [11,12], and a simple way of circumventing it is discussed below. Note that, although the posterior estimate of k-effective is higher than the prior estimate, the posterior estimate of k-effective plus three standard deviations is lower than the prior estimate.
For comparison, results for two of the methods listed in Section 6 are displayed in Table II. Both of the methods indicate that the maximum allowable value for the prior k-effective is less than the value of 0.9241 arrived at above. Hence the operation would not be considered safe, despite the GLLS analysis indicating that the posterior best estimate value of k-effective is nearly nine standard deviations below 0.95.
Neglecting correlations in the uncertainties of experiments in a single series can lead to an underestimate of the uncertainty. One way to address this is to use only a single experiment from each series; in this case the experiment with the highest similarity index was chosen from each series, reducing the number of experimental configurations used to 13. The results of the revised analysis are shown in Table III. Again the use of the experimental data leads to a significant reduction in the estimated uncertainty. In this case 0.95 is more than seven standard deviations above the posterior best estimate value for k-effective. Although the posterior estimate of k-effective is higher than the prior estimate, the posterior estimate of k-effective plus three standard deviations is again lower than the prior estimate.
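The one-experiment-per-series selection can be sketched as a simple reduction over the ranked similarity results. The experiment identifiers, series names and index values below are hypothetical placeholders for the tool's actual output.

```python
# Hypothetical (experiment id, series, similarity index) records; real
# inputs would come from the Similarity Index tool output.
experiments = [
    ("LCT-002-001", "LCT-002", 0.93),
    ("LCT-002-002", "LCT-002", 0.95),
    ("LCT-006-001", "LCT-006", 0.88),
    ("LCT-006-003", "LCT-006", 0.91),
    ("PST-011-005", "PST-011", 0.82),
]

# Keep only the most similar experiment from each series, so that
# unknown intra-series correlations cannot distort the GLLS update.
best = {}
for exp_id, series, index in experiments:
    if series not in best or index > best[series][1]:
        best[series] = (exp_id, index)

selected = sorted(best.values(), key=lambda t: t[1], reverse=True)
for exp_id, index in selected:
    print(f"{exp_id}: similarity {index:.2f}")
```

This trades statistical power (fewer experiments) for robustness against unquantified intra-series correlations, which is the compromise made in the revised 13-experiment analysis.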
For comparison, results for two of the methods listed in Section 6 are displayed in Table IV. In this case use of the EPD methods would again indicate that the operation is not safe, but the USL method would suggest that it is safe. This illustrates some of the issues associated with establishing safe critical limits, but also shows that the ANSWERS tools available for uncertainty analysis can greatly assist in providing increased confidence, or when used carefully could potentially support a less conservative approach.

CONCLUSIONS
ANSWERS is developing a coherent set of tools to aid the user in the estimation of uncertainty on predicted values. The tools are implemented in the Visual Workshop IDE so that they are available for use with ANSWERS' WIMS/PANTHER, MONK and MCBEND physics codes. The tools have been applied to the criticality safety of irradiated PWR fuel elements in a flask. More traditional approaches are compared to the best estimate plus uncertainty (BEPU) approach. The BEPU approach is shown to provide a higher degree of confidence in the criticality safety of the configuration studied.