Power calibration methodology at the CROCUS reactor

In the present article, we detail the method used to experimentally determine the power of the CROCUS zero-power reactor, and to subsequently calibrate its ex-core monitor fission chambers. Knowledge of the reactor power is mandatory for safe operation. Furthermore, most experimental research programs rely on absolute fission rates in design and interpretation – for instance, tally normalization of reaction rate studies in dosimetry, or normalization of the power spectral density in neutron noise measurements. Minimizing the associated uncertainties is only possible with an accurate power determination method. The main experiment consists of the irradiation, and therefore activation, of several axially distributed Au-197 foils on the central axis of the core, whose activities are measured with a High-Purity Germanium (HPGe) gamma spectrometer. The effective cross sections are determined by MCNP and Serpent Monte Carlo simulations. We quantify the reaction rate of each gold foil and derive the corresponding fission rate in the reactor. The variance-weighted average over the distributed foils then provides a calibration factor for the count rates measured in the fission chambers during the irradiation. We detail the calibration process with minimization of the uncertainties arising in each sub-step, from power control after reactivity insertion to the calibration of the HPGe gamma spectrometer. Biases arising from different nuclear data choices are also discussed.


I. INTRODUCTION
The CROCUS reactor is a two-zone, uranium-fuelled, light-water-moderated facility operated by the Laboratory for Reactor Physics and Systems Behaviour (LRS) at the Swiss Federal Institute of Technology Lausanne (EPFL). With a maximum power of 100 W, it is a zero-power reactor used for teaching and research purposes, most recently for studies on intrinsic and induced neutron noise, highly-localized measurements, and nuclear data [1]–[19]. Knowledge of the reactor power is mandatory for safe operation. Furthermore, most experimental research programs rely on absolute fission rates for design and interpretation [20] – for instance, tally normalization of reaction rate studies in dosimetry, or normalization of the power spectral density in neutron noise measurements. Minimizing the associated uncertainties is only possible with an accurate power determination method. We present hereafter the method used to determine the reactor power and to subsequently calibrate the ex-core monitor fission chambers [21]–[23]. The main experiment consists of the irradiation, and therefore activation, of several axially distributed ¹⁹⁷Au foils on the central axis of the core, whose activities are measured with an HPGe gamma spectrometer. The effective cross sections are determined using MCNP6.1 [24] and Serpent2 [25] Monte Carlo simulations. We quantify the reaction rate of each gold foil and derive the corresponding fission rate in the reactor. The variance-weighted average over all foils then provides a calibration factor for the count rates measured in the fission chambers during the foil irradiation. We detail the calibration process with minimization of the uncertainties arising in each sub-step, from power control after reactivity insertion to the calibration of the HPGe gamma spectrometer. Biases arising from different nuclear data choices are also discussed.

II. EXPERIMENTS
In this part we briefly present CROCUS, the methodology and experimental setup for foil activations and measurements, and the dedicated experimental campaign.

A. The CROCUS reactor
A complete description of the reference core of CROCUS is available in the International Reactor Physics Experiments Handbook (IRPhE) [26], [27]. The reactor is licensed to operate at a maximum power of 100 W, i.e. a total neutron flux of ~2.5×10⁹ cm⁻²·s⁻¹ at the core center. Criticality is controlled either by the water level using a spillway, or by two B4C absorber control rods, with an accuracy of ±0.1 mm (equivalent to approximately ±0.4 pcm) and ±0.5 mm (up to ±0.2 pcm), respectively. CROCUS operates at a core temperature between 17.5°C and 22.5°C, maintained by a controlled water loop with secondary and tertiary circuits, two heat exchangers, and an electrical heater.
The core is located in an Al-6060 grade open vessel of 130 cm in diameter, 160 cm in height, and 1.2 cm in thickness. The vessel is filled with demineralized light water used as both moderator and reflector. The active part of the core is approximately cylindrical, 100 cm in height and about 60 cm in diameter. It consists of two interlocked fuel zones with square lattices of different pitches:
• an inner zone of 336 UO2 rods with an enrichment of …,
• a varying water gap between the two zones because of the two different pitches.
A picture of the reactor and its critical configuration are shown in Figure 1. Both types of uranium fuel rods consist of a 1-m stack of cylindrical pellets clad in aluminum. The rods are held vertically by two octagonal aluminum grid plates spaced 1 m apart. In the current experimental configuration for the COLIBRI program [28]–[30], the grids carry a 1-mm cadmium layer to limit axial neutron leakage to the environment, i.e. activation of surrounding structures, with the active zone of the fuel starting in the middle of the lower cadmium layer.
The neutron flux is monitored using two Merlin Gerin CC54 compensated ionization chambers (north and south), and two Photonis CFUM21 10-mg ²³⁵U fission chambers (east and west), which are the calibrated power and safety monitors.
The reactor possesses six independent shutdown mechanisms to bring it to a subcritical state in less than one second: two cruciform cadmium (Cd) safety blades in the inner zone, and four expansion tanks to drain the moderator, set in the vessel corners and each controlled by a valve. The safety systems are also used for shutdown under normal operating conditions.

B. Methodology and experimental setup
The calibration methodology consists of three steps:
- in-core irradiation of the dosimeters, monitored by the fission chambers to be calibrated,
- determination of the dosimeters' absolute activities with a High-Purity Germanium (HPGe) gamma spectrometer,
- simulation of the irradiation with Monte Carlo codes, to extract each dosimeter's reaction rate and the corresponding fission rate, i.e. the power of the reactor.
Gold dosimeters are employed because the ¹⁹⁷Au(n,γ) cross section is a standard for neutron flux measurements in the thermal energy range, with a high cross section and low uncertainties [31]. In this study, 14 disc-shaped foil dosimeters with a diameter of 15 mm and a thickness of 10 µm were used. For each irradiation, six dosimeters were axially distributed on a PMMA plate (see Figure 2). They are laminated in a plastic film to avoid deformation and contamination by the vessel's water.
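The activation step can be sketched numerically. Assuming a constant fission rate during the irradiation, the ¹⁹⁸Au activity at end of irradiation relates to the capture rate per atom through the usual saturation factor. The half-life is standard decay data; the function name and the foil numbers below are illustrative, not values taken from the article.

```python
import math

# 198Au decay constant from its 2.6941 d half-life (standard decay data)
HALF_LIFE_198AU_S = 2.6941 * 24 * 3600
LAMBDA_198AU = math.log(2) / HALF_LIFE_198AU_S

def reaction_rate_per_atom(activity_eoi_bq, n_atoms, t_irr_s):
    """Capture rate per 197Au atom from the end-of-irradiation activity.

    For a constant reaction rate r over an irradiation of length t_irr,
    A_EOI = N * r * (1 - exp(-lambda * t_irr)), inverted here for r.
    """
    saturation = 1.0 - math.exp(-LAMBDA_198AU * t_irr_s)
    return activity_eoi_bq / (n_atoms * saturation)

# Illustrative foil: 15 mm diameter, 10 um thick gold disc (~34 mg)
foil_mass_g = 19.3 * math.pi * 0.75**2 * 1.0e-3   # rho * area * thickness [g]
n_au = foil_mass_g / 196.967 * 6.022e23           # number of 197Au atoms
r = reaction_rate_per_atom(5.0e3, n_au, 3600.0)   # assumed 5 kBq after a 1 h run
```

Because the irradiation is short compared to the 2.69 d half-life, the saturation factor is small and the inferred per-atom rate is correspondingly sensitive to the irradiation timing.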
The irradiation plate is positioned above the core center while the reactor is brought to criticality at the desired power by raising the water level. Once the reactor is stabilized (~10 min for the delayed neutron contribution to settle), the plate is dropped along the central axis of the core. The end-of-irradiation is a fast process, less than a second, thanks to the safety systems, which also drain around half of the vessel's water. The dosimeters and irradiation plate are recovered by opening the lid of the reactor cavity immediately after shutdown. The dosimeters' activities are then measured one by one using a Canberra HPGe gamma spectrometer (detector: GC4518/S; shielding: VG-BB-98/16D1-2). The HPGe detector is calibrated with a point-like ¹⁵²Eu calibration source (efficiency uncertainty at the 411.8 keV Au line: 0.5%).
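The spectrometry step can be sketched as follows: the net counts in the 411.8 keV photopeak are converted to an activity at the start of the count, then decay-corrected back to the end of irradiation. The ¹⁹⁸Au half-life and the 411.8 keV emission probability are standard decay data; the function itself is a simplified illustration (no pile-up, geometry, or self-absorption corrections) and not the facility's analysis code.

```python
import math

T_HALF_198AU_S = 2.6941 * 24 * 3600   # 198Au half-life [s]
LAM = math.log(2) / T_HALF_198AU_S
I_GAMMA_412 = 0.9562                  # emission probability of the 411.8 keV line

def activity_at_eoi(net_counts, efficiency, t_live_s, t_cool_s):
    """End-of-irradiation activity from an HPGe photopeak.

    net_counts : background-subtracted counts in the 411.8 keV peak
    efficiency : full-energy-peak efficiency at 411.8 keV
    t_live_s   : live counting time
    t_cool_s   : delay between end of irradiation and start of the count
    """
    # Average decay of the source over the counting window
    decay_during_count = (1.0 - math.exp(-LAM * t_live_s)) / (LAM * t_live_s)
    a_start = net_counts / (efficiency * I_GAMMA_412 * t_live_s * decay_during_count)
    # Correct back to end of irradiation
    return a_start * math.exp(LAM * t_cool_s)
```

For counting times short compared to the half-life, the in-count decay factor is close to unity and the cooling-time correction dominates.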
Since the monitors' count rates are recorded during the irradiation, the measured activities and the calculated reaction and fission rates allow us to derive the monitors' count rates per fission, or per unit of power.
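This last conversion can be sketched as follows, using the effective energy per fission quoted in Section III; the function name and the example numbers are illustrative, not measured values.

```python
E_PER_FISSION_J = 3.245507e-11   # effective energy per fission (Serpent2 value, Sec. III)

def monitor_calibration(count_rate_cps, fission_rate_fps):
    """Monitor calibration: counts per fission and counts per second per watt."""
    counts_per_fission = count_rate_cps / fission_rate_fps
    power_w = fission_rate_fps * E_PER_FISSION_J
    return counts_per_fission, count_rate_cps / power_w

# Illustrative: 27 080 cps recorded at a fission rate corresponding to 10 W
cpf, cps_per_w = monitor_calibration(27080.0, 10.0 / E_PER_FISSION_J)
```

The two figures of merit are equivalent up to the constant energy-per-fission factor; the facility quotes its calibration factors in cps/W.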

C. Experimental campaign
In order to investigate possible biases, and to quantify and reduce uncertainties, a set of irradiations was performed with selected repetitions and parameter changes (see Table I). All irradiations were carried out without control rods (the guide tubes were left empty), so criticality was controlled only by the water level. At criticality, the PuBe start-up source is sent back to its storage shielding. This configuration keeps the flux distribution as symmetrical as possible, without local perturbations. The primary water-cooling circuit was engaged for at least one hour before each irradiation, in order to stabilize the water temperature in the reactor vessel.
The irradiation in the reference configuration at (20±0.1)°C was carried out three times, at different power levels (10, 20 and 40 W) and with different sets of dosimeters. One irradiation was performed with additional instrumentation in the reflector, in order to modify the critical water level. Two irradiations were performed in the reference configuration but at two different temperatures, 18°C and 22°C, in order to assess the temperature dependence. A supplementary irradiation was undertaken at a subcritical level in the presence of the start-up source, at a water level of 800 mm, to observe the deformation of the flux distribution.

III. MONTE CARLO CALCULATIONS
The irradiations were simulated with the MCNP6 and Serpent2 Monte Carlo codes, using the JEFF3.3 [32], ENDF/B-VII.1 [33], and IRDFF-v1.05 [34] nuclear data libraries for comparison purposes. The detailed models included each individual dosimeter and the water level of each irradiation. The models were used to compute the reaction rate of each dosimeter at its location during the irradiation, and the reactor's fission rate. The reactor power was derived by multiplying the latter by the effective energy released per fission, computed directly by the codes (e.g., 3.245507×10⁻¹¹ J/fission ± 0.0025% for Serpent2). In the case of the IRDFF library, the calculations were performed with Serpent2 only, by replacing only the dosimetry cross sections. Figure 4 presents the use of the IRDFF gold files to propagate uncertainties from the cross section to the individual reaction rates, as a function of neutron energy, for all axially distributed foils of one experiment (irradiation #1).
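The per-foil power inference can be sketched as follows. Assuming the Monte Carlo tally gives the foil's capture rate normalized per reactor fission (an assumption about the normalization convention, made here for illustration), dividing the measured capture rate by it yields the absolute fission rate, and multiplying by the energy per fission yields the power. The function name and sample numbers are hypothetical.

```python
E_FISSION_J = 3.245507e-11   # effective energy released per fission (Serpent2)

def power_from_foil(r_measured, r_calc_per_fission):
    """Fission rate and power inferred from a single foil.

    r_measured         : measured capture rate in the foil [reactions/s]
    r_calc_per_fission : Monte Carlo capture rate in the same foil,
                         normalized per reactor fission
    """
    fission_rate = r_measured / r_calc_per_fission
    return fission_rate, fission_rate * E_FISSION_J

# Illustrative: a foil whose per-fission capture rate is 1e-15
fission_rate, power_w = power_from_foil(10.0 / E_FISSION_J * 1.0e-15, 1.0e-15)
```

Repeating this for each of the six axially distributed foils yields six power estimates, which are then combined as described in Section IV.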

IV. RESULTS
The experimentally determined reaction rates were combined in a variance-weighted average in order to obtain a single calibration factor per monitor and per irradiation. The dispersion between irradiations was used to estimate biases and uncertainties, while the results adopted by the facility are those from the first reference irradiation. As MCNP is the code validated for CROCUS by the regulatory body, the preliminary calibration factors based on MCNP6.2 with JEFF3.3 for the 2019 campaign are (2708 ± 54) cps/W for Monitor 1 (east) and (2784 ± 56) cps/W for Monitor 2 (west). The dosimeters at the top positions were discarded, as they present systematic errors, probably due to their proximity to the water level: we observe a significant shift in the distribution there, which requires further investigation. For illustration, the measured reaction rates are compared in Figure 5 to those calculated with Serpent2 and JEFF3.3, using an arbitrary normalization.
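The averaging step is the standard inverse-variance weighted mean, sketched generically below; this is not code from the facility, and the example values are illustrative.

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    w_sum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / w_sum
    return mean, w_sum ** -0.5

# e.g. combining per-foil calibration factors (illustrative numbers, cps/W)
m, s = weighted_mean([2700.0, 2716.0, 2705.0], [60.0, 80.0, 70.0])
```

The combined uncertainty is always smaller than that of the best single foil, which is why discarding the biased top-position dosimeters, rather than down-weighting them, is the safer choice.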
Results obtained with MCNP6.2 and Serpent2 are in good agreement, within their statistical uncertainty (σstat = 0.5%). We observe a discrepancy of about +1% between JEFF3.3 and ENDF/B-VII.1, which is small but still significant compared to the statistical uncertainty (σstat = 0.2%). There is also a systematic discrepancy of (+1.1 ± 0.1)% between JEFF3.3 with and without the IRDFF gold cross sections, when compared using Serpent2.

V. CONCLUSION AND OUTLOOK
In the present article, we introduced the methodology used in CROCUS for the power calibration of the safety monitors. It is based on the in-core neutron dosimetry technique, using the ¹⁹⁷Au(n,γ) reaction for activation analysis, complemented by detailed Monte Carlo calculations. For the 2019 calibration campaign, a set of irradiations was carried out to identify possible biases and quantify uncertainties. Preliminary calibration factors based on MCNP6.2 with JEFF3.3 are (2708 ± 54) cps/W for Monitor 1 (east) and (2784 ± 56) cps/W for Monitor 2 (west). Good agreement was found between codes (within the low statistical uncertainties), as well as limited discrepancies between nuclear data libraries (on the order of 1%). Further investigations will include understanding the discrepancies at the water-air interface, as well as a detailed study of temperature effects and of flux distributions at subcriticality.