The TCA benchmark for validation of temperature feedback calculations

The TCA benchmark was investigated as a possible candidate for validation of temperature feedback calculations. This benchmark has low-enriched uranium fuel, light water moderation and reflection, and a temperature range of 20–60 °C. Three nuclear data libraries were considered, viz. ENDF/B-VIII.0, JEFF-3.3, and JENDL-4.0. Since the results were not as good as hoped for, additional studies were performed to identify the cause(s) of the discrepancies. The benchmark values depend on a choice of delayed neutron data, so it was investigated whether this choice could be the cause of the discrepancies. An assessment was also made, based on critical configurations only (i.e. avoiding the use of delayed neutron data), of whether the calculations exhibit a bias relative to the benchmark in the results for the effect of temperature. Indications were found that such a bias exists. It is concluded that the choice of delayed neutron data has a significant effect on the benchmark values themselves: the use of three major nuclear data libraries leads to a range of benchmark values for each configuration, rather than a single value. One also has to take into account the possibility of a bias in the calculation of temperature effects. It is not clear at this point what causes the bias.


INTRODUCTION
The effect that temperature has on the reactivity of a reactor is important. A negative temperature feedback can be an important safety feature. Calculations of temperature effects therefore merit careful attention and need to be validated. The International Reactor Physics Experiments Handbook (IRPhE) [1] provides several benchmarks that can be used for this purpose. For the present paper the TCA benchmark was selected, because it has many features in common with the High Flux Reactor in Petten.
Benchmark calculations were performed for TCA, as reported in Section 2. The results were not entirely convincing, prompting further study. One topic was the role of delayed neutron data in the benchmark. In Section 3 it is argued that the benchmark values published in Ref. [1] involve an interpretation based on a specific delayed neutron data set, and the benchmark values are re-evaluated using different nuclear data, leading to a different comparison between calculations and benchmark values, but still not a convincing one. In Section 4 it is investigated whether, by sidestepping the role of delayed neutron data, one can establish whether there is a bias between calculations and benchmark. The conclusions are presented in Section 5.
Subsequently the water height was slightly increased above the critical level in order to obtain the differential reactivity worth of the water level by the positive reactor period method. The measured quantities were the water height increase (ΔH) and the reactor period (τ). Based on the measured values for these quantities, the equation

ρ($) = Σ_n Σ_{i=1}^{6} (β_{i,n}/β_eff) / (1 + λ_{i,n} τ)

was used to get a value for the reactivity in units of a dollar ($). In this equation β_{i,n} is the effective delayed neutron fraction of nuclide n in the i-th time group, β_eff is the total effective delayed neutron fraction, and λ_{i,n} is the time constant of the i-th time group of nuclide n. In this equation it is assumed that the number of time groups is 6.
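As a sketch, the conversion from a measured asymptotic period to a dollar reactivity can be written as follows. The six-group constants below are illustrative placeholders, not the benchmark data, and a single fissile nuclide is assumed (so the sum over nuclides reduces to one term); the prompt-lifetime term of the inhour equation is neglected, which is valid for long periods.

```python
def reactivity_dollars(tau, beta_ratios, lambdas):
    """Reactivity in dollars from the positive reactor period tau (s):
    rho($) = sum_i (beta_i / beta_eff) / (1 + lambda_i * tau),
    i.e. the long-period form of the inhour equation for one nuclide."""
    return sum(b / (1.0 + lam * tau) for b, lam in zip(beta_ratios, lambdas))

# Hypothetical six-group data for illustration (not the benchmark values):
beta_ratios = [0.033, 0.219, 0.196, 0.395, 0.115, 0.042]  # beta_i / beta_eff
lambdas = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]      # time constants, 1/s

rho = reactivity_dollars(tau=100.0, beta_ratios=beta_ratios, lambdas=lambdas)
print(f"rho = {rho:.4f} $")
```

Dividing the measured ΔH by this reactivity then yields the differential reactivity worth of the water level.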

Codes and Data
Calculations were performed using MCNP version 6.1.1 [2]. Models for the various configurations of TCA were developed from scratch, solely on the basis of the IRPhE benchmark description. The temperature in each (non-void) cell was set according to the benchmark specification. Since the nuclear data were also processed for exactly the same temperature, no temperature treatment is performed by MCNP. The MCNP option 'kopts' was used for calculating the reactor kinetics parameters β_eff and Λ, using a block size of 10.
The thermal scattering data had to be treated differently. Using NJOY one can only process thermal scattering data at temperatures specified in the ENDF evaluation. To create thermal scattering ACE files for the exact temperatures used in the TCA benchmark, the MCNP utility MAKXSF was used to interpolate between NJOY-generated ACE files. It was checked with the new interpolation method of Marquez Damian [7] that the MAKXSF interpolation did not introduce a bias in the calculated values for the temperature coefficient of reactivity.

Fixed Water Height at Several Temperatures
For each of the TCA configurations, the water height was fixed at a certain level. Using this water level, calculations were performed using nuclear data that were processed for the exact temperature of the subcase at hand. The fixed water level (Hc) per case was calculated using the fit in the benchmark description:

Hc(T) = a0 + a1 T + a2 T² + a3 T³

The values for the fit parameters a0–a3 were copied from the benchmark description. For each benchmark case the fixed water level was set to Hc(T = 40 °C). The values are listed in Table I.
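Evaluating the critical-height fit can be sketched as follows. The cubic polynomial form is taken from the a0–a3 parameters mentioned above; the coefficient values below are hypothetical, for illustration only, and are not the benchmark fit parameters.

```python
def critical_height(T, a):
    """Cubic fit Hc(T) = a0 + a1*T + a2*T^2 + a3*T^3 (T in deg C, Hc in cm)."""
    a0, a1, a2, a3 = a
    return a0 + a1 * T + a2 * T**2 + a3 * T**3

# Hypothetical fit coefficients for illustration (not the benchmark values):
coeffs = (80.0, 0.05, 2.0e-4, -1.0e-6)
Hc_40 = critical_height(40.0, coeffs)
print(f"Hc(40 C) = {Hc_40:.3f} cm")
```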

From k-Eigenvalues to Temperature Feedback Coefficient
The values for k_eff were translated into values for reactivity ρ, using ρ = 1 − 1/k_eff. The values at different temperatures were fitted to a polynomial in T. The temperature feedback coefficient α(T) was calculated as the derivative dρ/dT. The benchmark values for the feedback coefficient are given in units of 'dollar cents' per degree. Therefore the value obtained here for α(T = 40 °C) was divided by the effective delayed neutron fraction β_eff. The value for β_eff given in the benchmark description was used.
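The chain from k-eigenvalues to a feedback coefficient in dollar-cents can be sketched as below. The k_eff values, the quadratic order of the fit, and the β_eff value are all illustrative assumptions, not results from this work.

```python
import numpy as np

# Hypothetical k_eff values at the benchmark temperatures (illustration only):
T = np.array([20.0, 30.0, 40.0, 50.0, 60.0])   # deg C
keff = np.array([1.00120, 1.00050, 0.99965, 0.99870, 0.99760])

rho = 1.0 - 1.0 / keff                          # reactivity, rho = 1 - 1/k_eff

# Fit rho(T) to a quadratic and differentiate to get the feedback coefficient.
b2, b1, b0 = np.polyfit(T, rho, 2)
alpha_40 = b1 + 2.0 * b2 * 40.0                 # d(rho)/dT at 40 deg C

beta_eff = 0.0075   # assumed effective delayed neutron fraction (illustrative)
alpha_cents = alpha_40 / beta_eff * 100.0       # dollar-cents per deg C
print(f"alpha(40 C) = {alpha_cents:.2f} cents/C")
```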

Results
The calculated results for the temperature feedback coefficient are reasonably close to the benchmark values. The results for the first two pure water cases compare favorably with the benchmark value, when taking uncertainties into account. This does not hold for the smallest configuration with pure water, A3, for which the calculated value is significantly higher than the benchmark value for all three nuclear data libraries.
The results for the borated water cases are all reasonably close to the benchmark values when taking the uncertainties into account. The results for the cases with gadolinium in the water show a bias: all results are higher than the benchmark values, for all three nuclear data libraries. In general, however, the differences between the benchmark values and the results based on the various libraries are not large. It should be noted that the analysis so far makes inconsistent use of nuclear data. The benchmark values in IRPhE are based on an analysis of the raw data, making use of calculated values for the effective delayed neutron fraction (in six time groups) and the time constants (in the same time groups). These calculations were performed by the benchmark evaluators, using the code DANTSYS with nuclear data from JENDL-3.3. For the calculations presented here, a different code and different nuclear data libraries were used. For the calculated temperature coefficients, too, it was necessary to convert the unit of reactivity to dollar-cents; this was done by dividing by the value for β_eff calculated with DANTSYS and JENDL-3.3 nuclear data.

CONSISTENT USE OF NUCLEAR DATA
This use of different sets of nuclear data for one and the same benchmark calculation is not necessary. The MCNP models for the various configurations can be used to calculate β_i/β_eff and λ_i directly. This was done for each of the four (A and B) or five (C) configurations per benchmark case, each with the water height specified in the benchmark description. The averaged results for β_i/β_eff and λ_i are shown in Table III, where the proportionality constant R is defined as the sum of the ratios (β_i/β_eff)/λ_i. A very simple way of comparing the delayed neutron data of the various libraries is to look at the value of R. This value is 13.0 for the IRPhE data, the JENDL-4.0 data and the JEFF-3.3 data, while it is 11.1 for the ENDF/B-VIII.0 data. In other words, the ENDF/B-VIII.0 delayed neutron data lead to a reactivity value that differs by about 15% from that obtained with the JEFF-3.3 and JENDL-4.0 data. The benchmark values for the temperature coefficient of reactivity have been re-evaluated based on the delayed neutron data of each library separately. The results are listed in Table V, together with updated results for the calculated values based on consistent use of nuclear data.
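The comparison via R can be sketched as follows; R is the constant relating the dollar reactivity to the inverse asymptotic period for long periods. The two six-group data sets below are hypothetical placeholders, not the library data of Table III.

```python
def proportionality_constant(beta_ratios, lambdas):
    """R = sum_i (beta_i / beta_eff) / lambda_i: for long periods,
    rho($) ~ R / tau, so R characterizes a delayed neutron data set."""
    return sum(b / lam for b, lam in zip(beta_ratios, lambdas))

# Two hypothetical six-group data sets (illustration only, not library data):
set_a = ([0.033, 0.219, 0.196, 0.395, 0.115, 0.042],
         [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])
set_b = ([0.035, 0.240, 0.190, 0.380, 0.115, 0.040],
         [0.0133, 0.0327, 0.121, 0.303, 0.850, 2.86])

R_a = proportionality_constant(*set_a)
R_b = proportionality_constant(*set_b)
print(f"R_a = {R_a:.1f}, R_b = {R_b:.1f}, ratio = {R_b / R_a:.3f}")
```

The ratio of the two R values directly gives the relative difference in the dollar reactivity inferred from one and the same measured period.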

INVESTIGATION OF POSSIBLE BIAS
The results are still not entirely satisfactory, with almost half of the calculated results lying more than one standard deviation away from the 1σ experimental range. One can ask whether anything can be said about the validity of calculations of temperature effects by looking at critical configurations only.
In this way one would avoid the influence of the delayed neutron data. In fact, the β_i/β_eff and λ_i results reported in the previous section were based on simulations of all critical configurations described in the benchmark. One can also use the calculated results for k_eff, and look at trends of reactivity with temperature. The reactivity values are listed in Table VI for the three nuclear data libraries. It should be noted that in the values for k_eff the delayed neutron data play only a minor role, whereas for β_i/β_eff and λ_i they are dominant.
For each of the benchmark configurations, the temperature increases from sub-case 1 through sub-case 4 (A and B) or sub-case 5 (C). It can be seen in the table that, irrespective of the choice of nuclear data library, the calculated reactivity is higher (or less negative) at higher temperatures. Since these models correspond to critical configurations, the ideal result would be zero for all cases, or at least more or less the same calculated reactivity value at all temperatures. In reality the calculated reactivities at higher temperatures are higher than at lower temperatures, implying that a feedback coefficient based on these calculations is less negative than it should be (or too positive when the coefficient is positive, as in case C3). When one analyzes the results in Table V, it becomes clear that the mismatch between calculations and benchmark values is in line with the trend observed in the calculations for critical configurations. As an illustration, an attempt was made to correct for the bias indicated by the results in Table VI. First the bias was estimated, rather crudely, by fitting the data of Table VI to a straight line around the temperature of 40 °C. The slope of this line is an estimate of the bias in the temperature coefficient of reactivity. Next, for each of the benchmark cases, the bias estimate was added to the calculated results for the temperature coefficient of reactivity. The results are shown in Figure 2; for lack of space, only the JEFF-3.3 results are shown. It appears that the presence of a bias can explain part of the discrepancies between the calculations and the benchmark values. For JENDL-4.0 the results are similar (not shown, for brevity). For ENDF/B-VIII.0 the results (also not shown) are inconclusive, with improvements for the C configurations but worse results for the A and B configurations when the bias correction is applied.
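The crude bias estimate described above amounts to a linear least-squares fit of the calculated reactivities of the critical configurations against temperature. The reactivity values below are hypothetical illustration data, not the contents of Table VI.

```python
import numpy as np

# Hypothetical calculated reactivities (cents) of critical configurations
# at the benchmark temperatures -- illustration of the fitting step only:
T = np.array([20.0, 30.0, 40.0, 50.0, 60.0])      # deg C
rho_cents = np.array([-1.5, -0.6, 0.2, 1.1, 2.0])  # ideally all zero

# Straight-line fit around 40 C; the slope estimates the bias in the
# temperature coefficient of reactivity (cents per deg C).
slope, intercept = np.polyfit(T, rho_cents, 1)
print(f"estimated bias = {slope:.3f} cents/C")
```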

CONCLUSIONS
Calculations were performed for the TCA benchmark. Because the results were not entirely convincing, some extra work was done. First the benchmark was re-evaluated based on different delayed neutron data sets. Next it was studied whether there is a bias between calculations and benchmark, by examining only critical configurations. The conclusions are as follows.