CROSS-EVALUATION OF HIGHLY ENRICHED METAL URANIUM FAST CRITICALITY EXPERIMENTS

Criticality experiments are the foundation of criticality safety validation, reactor parameter prediction and nuclear data validation, and they have been used in the field of nuclear data adjustment for decades. In applications such as criticality safety validation and nuclear data adjustment, many criticality experiments are used together. In practice, experts have found that some experiments have an adverse influence on nuclear data adjustments and have excluded them from these applications, but the reason why these experiments should be excluded is not clear. To give these exclusions a clear physical explanation, we have developed the cross-evaluation method, which evaluates the random biases of experimental results by analyzing the C − E (calculation result minus experimental result) values of similar experiments. In this paper, we use the cross-evaluation method to assess the random biases of some highly enriched metal uranium fast criticality experiments. With the cross-evaluation method, experts can choose which criticality experiments should be used in criticality safety validation or nuclear data adjustment, and may find the reason why some experiments should be excluded from nuclear data adjustments.


INTRODUCTION
Criticality experiments are the foundation of criticality safety validation [1], reactor parameter prediction [2] and nuclear data validation [3]. Up to now, about 5000 criticality experiments have been made public worldwide, and they have been collected into the ICSBEP (International Criticality Safety Benchmark Evaluation Project) Handbook [4].
In applications of cross section adjustment, experts have found that there may be conflicts between some criticality experiments [5]. Besides that, some experts think that the experimental uncertainties of some criticality experiments might be underestimated [6].
The experimental uncertainties of many highly enriched metal uranium fast criticality experiments are around 0.003 in k_eff. If the errors of criticality experiments follow a normal distribution, the errors of some experiments may exceed three standard deviations, which is about 0.009. The cases which can occur in a criticality experiment are shown in Fig. 1. Therefore, it is meaningful to evaluate the random error of a criticality experiment; based on such an evaluation, we could choose criticality experiments with small random errors for use in applications. In this paper, the cross-evaluation method is presented and preliminary results are discussed. If several criticality experiments had been performed independently with identical physical designs, their results could be compared with each other directly to see which experiment has the smallest random error. But in ICSBEP, almost all the experiments have different physical designs, and because of these differences it is reasonable that the experimental results differ from each other. Therefore, we cannot assess the random errors of the criticality experiments by comparing the experimental results directly.
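As a quick numerical check of the 3σ figure quoted above, the standard normal tail probability can be evaluated with the Python standard library (a sketch; `tail_prob` is a helper defined here, not part of any library):

```python
from math import erf, sqrt

def tail_prob(k):
    """Two-sided probability that a normal deviate exceeds k standard deviations."""
    return 1.0 - erf(k / sqrt(2.0))

sigma = 0.003            # typical experimental uncertainty of k_eff
bound = 3 * sigma        # the 3-sigma bound, about 0.009
p = tail_prob(3.0)       # about 0.0027, i.e. roughly 1 experiment in 370
```

So among several hundred benchmark experiments, a few results beyond 3σ would not be surprising even if the stated uncertainties are realistic.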
Although there are few criticality experiments whose physical designs are identical, there are some experiments which are similar to each other. For example, in ICSBEP-2003 (the 2003 version of the ICSBEP Handbook) [4], there are several spherical highly enriched uranium assemblies reflected by natural uranium which are similar to each other: they use identical fissile material and identical reflector material, and the radii of the active zones and the thicknesses of the reflectors are close to each other.
In this section, we present the cross-evaluation method, which assesses the random errors of criticality experiments through a statistical analysis of a group of similar experiments. Two problems have to be solved first: how to measure the similarity or dissimilarity between criticality experiments, and what characteristic the results of similar experiments should have. Once these two problems are solved, we can analyze the experimental errors by comparing the characteristic that the statistical analysis actually shows with the characteristic that similar experiments should have.
We first recommend a dissimilarity measure for assessing the dissimilarity between criticality experiments; then the cross-evaluation method itself is presented.

Similarity or Dissimilarity of Criticality Experiments
In this paper, we define two criticality experiments to be similar to each other if the calculation biases of the two experiments (with the calculations done with the same nuclear data library and the same program) are close to each other.
If the Monte Carlo method is used, the calculation error is introduced mostly by the nuclear data. Therefore, the calculation bias can be expressed approximately by Eq. (1), ΔC = C − T ≈ S^T Δσ, where ΔC is the calculation bias, C is the calculation result, T is the true value of the experiment, S is the sensitivity vector of the experiment to the nuclear data, and Δσ is the vector of biases of the nuclear data. The true value T and the nuclear data biases Δσ are unknown, so ΔC cannot be calculated directly.
Suppose that there are two criticality experiments, whose calculation biases are ΔC_i and ΔC_j respectively. According to the definition of similarity, the closer ΔC_i and ΔC_j are, the more similar the two experiments are to each other. Based on Eq. (1), we can express |ΔC_i − ΔC_j| approximately as Eq. (2), |ΔC_i − ΔC_j| ≈ |(S_i − S_j)^T Δσ|.
If the Monte Carlo calculations are done with the same nuclear data library, the nuclear data biases Δσ are the same in the two calculations. Therefore, the difference |ΔC_i − ΔC_j| depends only on S_i − S_j; in particular, if S_i = S_j the difference vanishes, which means that the two experiments are effectively the same in this sense.
Based on the analysis above, we can use the sensitivity vector S_i as the characteristic vector of a criticality experiment and apply existing similarity or dissimilarity measures to it. We suggest using the covariance-weighted Euclidean distance (denoted the F dissimilarity measure) to assess the dissimilarity between criticality experiments; its definition is shown in Eq. (3), F_ij = sqrt((S_i − S_j)^T M (S_i − S_j)), where M is the covariance matrix of the nuclear data.

Figure 2: Physical meaning of F dissimilarity measure
To explain the physical meaning of the F dissimilarity measure, suppose that there are two criticality experiments, marked i and j, whose sensitivity vectors are S_i and S_j. We assume that there is a hypothetical criticality experiment whose sensitivity vector is S_i − S_j, as shown in Fig. 2. The calculation uncertainty of this hypothetical system is sqrt((S_i − S_j)^T M (S_i − S_j)), which is exactly the F dissimilarity measure. Because the calculation bias of the hypothetical experiment ΔC_d equals ΔC_i − ΔC_j, the F dissimilarity measure represents the uncertainty of the difference between the calculation biases of the two criticality experiments.
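A minimal numerical sketch of Eq. (3) follows; the function name `f_dissimilarity` and all numbers are invented for illustration, and the sketch assumes F is the square root of the quadratic form:

```python
import numpy as np

def f_dissimilarity(s_i, s_j, m):
    """Covariance-weighted Euclidean distance between two sensitivity vectors.

    s_i, s_j : sensitivity vectors of the two experiments (to the nuclear data)
    m        : covariance matrix of the nuclear data
    """
    d = np.asarray(s_i, dtype=float) - np.asarray(s_j, dtype=float)
    return float(np.sqrt(d @ m @ d))

# Toy 2-group example with made-up numbers:
m = np.array([[4.0e-6, 1.0e-6],
              [1.0e-6, 9.0e-6]])   # nuclear data covariance matrix
s_i = np.array([0.30, 0.10])
s_j = np.array([0.28, 0.12])
f = f_dissimilarity(s_i, s_j, m)   # small F -> similar experiments
```

Identical sensitivity vectors give F = 0, matching the statement that the difference of the calculation biases then vanishes.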

Cross-Evaluation Method
The bias of the experimental result can be defined as Eq. (4), ΔE = E − T, where E is the experimental result. Subtracting Eq. (1) from Eq. (4), we obtain Eq. (5), ΔE − ΔC = E − C. Both E and C are available from experiment and calculation. If we analyze a group of n criticality experiments and obtain a series of equations like Eq. (5), we can compute the average of the E − C values in the group, which gives Eq. (6), (1/n) Σ_i (E_i − C_i) = (1/n) Σ_i ΔE_i − (1/n) Σ_i ΔC_i.
If the criticality experiments in the group are similar to each other (as assessed by the F dissimilarity measure), the standard deviation of the differences between the calculation biases of the experiments in the group is guaranteed to be smaller than the F threshold. If the threshold is small enough, the calculation biases of the experiments in the group can be treated as approximately equal to each other, as in Eq. (7), ΔC_i ≈ ΔC_j. Therefore, the mean of the ΔC_i can be expressed approximately as Eq. (8), (1/n) Σ_j ΔC_j ≈ ΔC_i.
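The grouping step can be sketched as follows; the function `similar_group`, the reference-experiment convention and the threshold value are illustrative assumptions, not from the paper:

```python
import numpy as np

def similar_group(sens, m, threshold):
    """Return the indices of experiments whose F dissimilarity (Eq. (3))
    to the reference experiment (index 0) is below the threshold."""
    sens = np.asarray(sens, dtype=float)
    group = []
    for i, s in enumerate(sens):
        d = s - sens[0]
        if np.sqrt(d @ m @ d) < threshold:
            group.append(i)
    return group

# Made-up sensitivity vectors; the third experiment is dissimilar:
m = np.eye(2) * 1.0e-6
sens = [[0.30, 0.10], [0.30, 0.1002], [0.50, 0.40]]
group = similar_group(sens, m, threshold=1.0e-5)   # -> [0, 1]
```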
Substituting Eq. (8) into Eq. (6), we obtain Eq. (9), (1/n) Σ_j (E_j − C_j) ≈ (1/n) Σ_j ΔE_j − ΔC_i. The left-hand side of Eq. (9) can be calculated directly, which lets us analyze the right-hand side.
Two kinds of errors may exist in the total experimental error: random error and systematic error. The bias of the experimental result can accordingly be separated into two parts, a random part ΔE_rand,i and a systematic part ΔE_sys, as in Eq. (10), ΔE_i = ΔE_rand,i + ΔE_sys. The systematic error is treated as constant within a group of experiments.
The mean of the random biases should be approximately equal to 0 when there are sufficiently many criticality experiments in the group, as in Eq. (11), (1/n) Σ_j ΔE_rand,j ≈ 0. Combining Eq. (9) and Eq. (11), we obtain Eq. (12), (1/n) Σ_j (E_j − C_j) ≈ ΔE_sys − ΔC_i. We then define T_i as Eq. (13), T_i = C_i + (1/n) Σ_j (E_j − C_j) ≈ T + ΔE_sys. Comparing Eq. (13) with Eq. (14), E_i = T + ΔE_sys + ΔE_rand,i, we can see that T_i contains only the systematic error while E_i contains both the systematic and the random error; therefore we recommend using T_i instead of E_i in applications. If there is no systematic error, T_i is approximately equal to the true value T.
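Under the assumptions above, Eq. (13) reduces to a one-line computation over a group of similar experiments. A sketch (the function name and all numbers are invented for illustration):

```python
import numpy as np

def cross_evaluate(e, c):
    """Eq. (13): T_i = C_i + mean(E - C) over a group of similar experiments.

    e : measured k_eff values E_i of the group
    c : calculated k_eff values C_i (same nuclear data library, same code)

    T_i keeps the common systematic bias but averages out the random
    experimental biases.
    """
    e = np.asarray(e, dtype=float)
    c = np.asarray(c, dtype=float)
    return c + np.mean(e - c)

# Illustrative (made-up) numbers for three similar assemblies:
e = [1.0000, 0.9985, 1.0021]
c = [0.9992, 0.9989, 1.0003]
t = cross_evaluate(e, c)
resid = np.asarray(e) - t   # estimated random biases; they sum to zero
```

The residuals E_i − T_i give an estimate of the random bias of each experiment in the group, which is the quantity the cross-evaluation method is after.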
In practice, the number of similar criticality experiments in a group is generally small, so the precondition of the approximate equalities is not satisfied in most cases. Nevertheless, selecting a group of similar criticality experiments and analyzing the experimental results with the cross-evaluation method may still help us evaluate the experimental results. This is shown in the next section.

CROSS-EVALUATE HIGHLY ENRICHED METAL URANIUM FAST CRITICALITY EXPERIMENTS
To test the cross-evaluation method developed in the last section, we use it to analyze some highly enriched metal uranium fast criticality experiments. The first part of this section introduces the experiments analyzed in this paper, and the second part shows the results.

Experiments Analyzed
We have analyzed 121 highly enriched metal uranium fast criticality experiments in ICSBEP-2003. The experiments analyzed are listed in Table 1.
From the analysis results, we can see that the results of the experiments in each group correspond well to each other. Based on Fig. 3, we can see that the experimental result of HMF-002-2 is closer to the mean value of its group than those of the other experiments in the group. Therefore, we recommend using HMF-002-2 rather than the other experiments in this group.

CONCLUSIONS
In this paper, a cross-evaluation method for criticality experiments has been developed, and its results have been shown and discussed. The results show that the cross-evaluation method can evaluate the random biases of criticality experiments when some very similar experiments are available. When the number of mutually similar experiments is small, the cross-evaluation method can still recommend which experiment should be used in applications.
Although the cross-evaluation method has been developed, it should be tested more widely, especially on experiments that have been suspected of having larger errors than claimed. Related work is in progress.