14 MeV NEUTRON IRRADIATION EXPERIMENTS: GAMMA SPECTROSCOPY ANALYSIS AND VALIDATION AUTOMATION

An important area of research required for fusion reactor design is the study of materials under high-energy neutron irradiation. Deuterium-Tritium (D-T) reactions release 14.1 MeV neutrons, and studies of transmutation and activation under such high-energy neutrons are paramount for fusion tokamak devices such as ITER and DEMO. To understand neutron damage and transmutation-induced radioactivity at fusion-regime energies, a series of experimental campaigns was performed at the ASP facility based at Aldermaston in the UK, which uses a deuteron accelerator to bombard a tritium-loaded target and generate 14 MeV neutron emission rates of up to 2.5 × 10^11 s^-1. In this work, a holistic treatment of the 11,000 gamma spectra (time-series data) collected over five experimental campaigns is applied to identify radioisotopes and to validate nuclear data and the inventory code FISPACT-II. Whilst previous analyses have examined single spectra and foil irradiations using traditional, human-driven methods, this work applies novel methods using Artificial Neural Networks (ANNs) and classification algorithms to allow a fully automated approach. Using these methods we show broad agreement with FISPACT-II inventory simulations, and an overview of results is given as C/E values.


INTRODUCTION
Designing tokamak devices such as ITER and DEMO requires in-depth knowledge and understanding of material damage, transmutation and activation due to 14.1 MeV neutrons arising from Deuterium-Tritium (D-T) reactions. These reactors are expected to experience neutron fluxes of the order of 10^18 neutrons m^-2 s^-1, and quantities such as nuclear heating, neutron damage and transmutation-induced radioactivity are crucial for their design and operation [1]. Currently, no facilities exist which offer the ability to expose materials to neutron fluxes and energies equivalent to those predicted for fusion. Whilst plans are underway to develop a DEMO Oriented Neutron Source (DONES) [2] to tackle this issue, very little data is available for material exposure under 14 MeV neutron irradiation. In addition to material damage studies, gamma spectroscopy offers the ability to monitor the plasma inside the reactor through the determination of neutron fluences in such intense and harsh environments.
The multi-physics and inventory code FISPACT-II [3] allows time-dimensional analysis of neutron-irradiated materials, based on an extensive set of modern nuclear data libraries. Codes such as FISPACT-II are crucial when developing fusion nuclear technology, but are highly reliant on accurate and complete nuclear data libraries. Spectral data are included in the nuclear data libraries, and FISPACT-II is capable of reading spectral lines and can produce discrete spectra based on the radioisotopes present in the inventory. This offers a direct approach to compare and validate existing nuclear data libraries against experimental data in the fusion energy regime [4][5]. This work therefore presents an automated approach to perform direct comparison and validation between ASP gamma spectra and inventory simulations for a wide range of materials and reactions.

ASP FACILITY
The ASP facility, based at Aldermaston in the UK, generates deuterium-tritium neutrons via a low-energy, high-current deuteron accelerator operating at a 50 kV extraction potential, focused onto a tritium target [6]. Prior to impact on the target, the ionised deuterium beam is collimated along the 18 m accelerator tube and assembly and focused to a diameter of roughly 1 cm. The subsequent fusion reaction between deuterium and tritium produces neutrons at an emission rate of up to 2.5 × 10^11 s^-1. Thin film foils of a variety of materials are placed directly in front of the target for an irradiation period and promptly extracted via a specially adapted pneumatic rabbit system, which can deliver a foil to the measurement area in around 10 seconds. Gamma spectroscopy measurements are then performed using a high-purity germanium (HPGe) gamma-ray spectrometer to measure the decay emission energy spectrum.

Modelling
A Computer-Aided Design (CAD) model has been constructed for the accelerator assembly [7]. A simplified version of this model is used with MCNP6 [8] to estimate the surface-averaged fluence at various distances from the target and to provide an expected neutron spectrum at the target. The near-isotropic source definition used in the model derives from a basic kinetic model and describes incident neutrons near the target with binned energy and angular distributions. The estimated incident particle energy spectrum at the foil surface has been determined with and without the rabbit system, using the FENDL 3.1 [9] and ENDF/B-VII [10] nuclear data libraries.
Geant4 [11] and MCNP models were used to estimate gamma-ray detector efficiencies with the Monte Carlo method, matching incident peak energies with energy deposited. The latter was adapted to match experimental calibration data at the time of the experiments, with the former used for comparison and uncertainty estimates. It was assumed that foils are placed directly in contact with the top of the detector end cap; however, this was not always strictly the case. The foil-to-end-cap distance is estimated at 1 cm, with values taken 1 cm either side to provide a conservative estimate of the uncertainty due to the source position. An exponential fit of the form exp(∑_{i=0}^{5} a_i ln^i(E/E_max)), with E_max = e^2 ≈ 7.39 MeV, aligned well with the data points. Since the MCNP model has been well adapted (refinements mainly to the dead layer) specifically to match the experimentally collected data, its fit is used to estimate detector efficiencies in this work.
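The efficiency fit described above can be evaluated as below; this is a minimal sketch, and the six coefficients a_0..a_5 are placeholders since the fitted values are not reproduced in the text.

```python
import math

# E_max = e^2 ~ 7.39 MeV, as stated in the text.
E_MAX_MEV = math.e ** 2

def efficiency(energy_mev, coeffs):
    """Evaluate the detector-efficiency fit eps(E) = exp(sum a_i ln^i(E/E_max)).

    coeffs holds the six fit parameters a_0..a_5; the values used in the
    paper are not given here, so callers must supply their own.
    """
    x = math.log(energy_mev / E_MAX_MEV)
    return math.exp(sum(a * x ** i for i, a in enumerate(coeffs)))
```

At E = E_max the logarithm vanishes, so the fit reduces to exp(a_0), which gives a quick sanity check on any supplied coefficient set.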

Experimental Campaigns
Five experimental campaigns represent a series of five individual trips to Aldermaston, each consisting of an itinerary of foils to be irradiated. Campaign 1 involved only single-foil irradiations, with 238U-based fission counters, placed on either side of the target, used to monitor neutron fluences. Spatial fluctuations of the beam can dramatically affect results from the fission counters, which therefore provide an unreliable estimate of the neutron fluence; campaign 1 is consequently excluded from this work. Some foils produce gamma peaks in close proximity to each other (around 0.84 MeV); however, it was possible to discriminate between these peaks using the gamma-ray spectrometer employed in this work.

PEAK IDENTIFICATION
Traditionally, peak-searching algorithms perform moving averages over sliding windows, with additional techniques taking into account first and second derivatives [15][16][17]. In our approach, algorithms existing in open-source tools and packages, such as ROOT [18], were assessed and combined with these traditional methods. Most of these methods require some prior knowledge of where to expect peaks or, for agnostic procedures, fail to identify peaks in close proximity, known as multiplets, as is the case for the 27Al(n,p)27Mg and 56Fe(n,p)56Mn reactions. Additional difficulties arise when attempting to identify peaks in regions governed by low statistics, with counts varying dramatically across a log scale. For comparison, we examine supervised learning techniques using Artificial Neural Networks (ANNs) for peak classification [19][20].
With 16384 channels across the HPGe detector, the network input layer required to take a full spectrum is too large to be computationally practical. Instead, a discretised window method was used, whereby the energy regime is fragmented into windows of N bins, where N ∈ {5, 7, 9, 11, 13, 15, 17, 19, 21}. Each ANN was trained on real and synthetically generated data sets based on the campaign data and a Geant4 detector model, using labels from simulation and from other conventional peak-finding algorithms. The training data sets consist of over 200,000 samples, with roughly equal proportions of each label (peak or no peak). The networks consist of 6 hidden layers, with an input layer of N perceptrons fed log-normalised values. The output layer, of size 1, represents the probability of a peak, as in figure 2.
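The preparation of N-bin, log-normalised network inputs described above might be sketched as follows; the function name and the particular normalisation (per-window min-max scaling of log counts) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_windows(spectrum, n_bins):
    """Slice a channel spectrum into centred, log-normalised windows.

    Each channel gets a window of n_bins channels centred on it (n_bins
    odd, so a candidate peak sits at the centre index). Counts are
    log-transformed and scaled to [0, 1] per window, approximating the
    log-normalised inputs described in the text.
    """
    assert n_bins % 2 == 1
    half = n_bins // 2
    padded = np.pad(np.asarray(spectrum, dtype=float), half, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, n_bins)
    logged = np.log1p(windows)  # log(1 + counts), safe for zero-count channels
    lo = logged.min(axis=1, keepdims=True)
    span = logged.max(axis=1, keepdims=True) - lo
    span[span == 0] = 1.0       # avoid divide-by-zero on flat windows
    return (logged - lo) / span
```

Each row of the returned array is one candidate input vector for the corresponding ANN with N = n_bins input perceptrons.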
Figure 2: The architecture of the neural network used for peak finding. It consists of N input nodes varying between 5 and 21 in odd numbers (so the peak is always at the centre index), 6 hidden layers with varying numbers of perceptrons, and a single output perceptron indicating the peak probability.

A more traditional method of peak finding, using a moving-average window that takes into account standard-deviation thresholds and examines first and second derivatives, is used for comparison. A window size of 40 and a threshold of the mean plus 2 standard deviations were determined to be the optimal parameters for this method, given the constraints of maximising true positives and minimising false positives. The ANN shows good agreement with the conventional method and surpasses it for multiplet recognition, as can be seen in figure 3. It is stressed that both methods required no prior knowledge of the peaks and did not enforce any requirements on peak shape. No smoothing or background subtraction was applied, to avoid loss of peak information. Considering this, the ANN produces remarkable results and can correctly identify the doublet from 27Mg and 56Mn at around 843 and 846 keV, which is missed by the conventional method. It is also noticeable that, for the experiment presented in the figures and in general, the ANNs produce a larger number of false positives in the high-energy range where counts are below 500. This varies depending on the number of input nodes, N, but is likely due to a lack of accurately labelled data in this regime.
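The threshold step of the conventional moving-average method can be sketched as below, using the stated window size of 40 and mean-plus-2-sigma threshold; the derivative checks are omitted for brevity, so this is a simplified assumption rather than the exact method used.

```python
import numpy as np

def find_peaks(counts, window=40, n_sigma=2.0):
    """Flag channels whose counts exceed the local mean plus n_sigma
    local standard deviations, computed over a sliding window of
    `window` channels centred on each channel."""
    counts = np.asarray(counts, dtype=float)
    half = window // 2
    peaks = []
    for i in range(len(counts)):
        lo, hi = max(0, i - half), min(len(counts), i + half + 1)
        local = counts[lo:hi]
        if counts[i] > local.mean() + n_sigma * local.std():
            peaks.append(i)
    return peaks
```

In practice the first- and second-derivative tests mentioned in the text would be applied on top of this threshold to reject statistical spikes.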

FLUX ESTIMATION
The reference peaks in table 1 are used to estimate the incident neutron flux for each experiment. Due to poor statistics, low intensities and efficiencies, and the long half-life of 24Na, its peaks are ignored and estimates are based on Gaussian fits to the peaks at 843, 846, 1014, and 1810 keV. The SNIP method [21] is used to estimate and remove background counts, and both line efficiencies and detector efficiencies are taken into account. The activity at the end of irradiation, A_0, can then be determined from an exponential fit of integral counts against real time.
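A minimal sketch of a SNIP background estimate follows, assuming a basic LLS-compressed clipping variant; the iteration count and implementation details here are illustrative, and the actual algorithm used is that of [21].

```python
import numpy as np

def snip_background(counts, iterations=24):
    """Estimate the smooth background of a spectrum with the SNIP
    (Statistics-sensitive Non-linear Iterative Peak-clipping) method.

    The counts are compressed with the log-log-sqrt (LLS) operator,
    iteratively clipped against the average of neighbours at increasing
    distance m, then decompressed back to count space.
    """
    y = np.asarray(counts, dtype=float)
    v = np.log(np.log(np.sqrt(y + 1.0) + 1.0) + 1.0)  # LLS compression
    for m in range(1, iterations + 1):
        clipped = v.copy()
        clipped[m:-m] = np.minimum(v[m:-m], (v[:-2 * m] + v[2 * m:]) / 2.0)
        v = clipped
    return (np.exp(np.exp(v) - 1.0) - 1.0) ** 2 - 1.0  # inverse LLS
```

Subtracting the returned background from the raw spectrum leaves net peak counts suitable for the Gaussian fits described above.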
The flux, φ, is then related to the activity via equation 1 [22], with α_RR representing the total reaction rate, N_p the number of parent atoms, λ the decay constant of the nuclide, and t_irrad the irradiation time. Reaction rates are determined with FISPACT-II version 4.0 using TENDL 2017 [23] libraries.
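The displayed equation 1 is not present in the extracted text; from the symbols defined above, the standard saturation-activation relation would take the following form, assuming α_RR is the spectrum-averaged reaction rate per parent atom per unit flux:

```latex
% Reconstruction of the missing equation 1 from the symbols defined in
% the text; the normalisation of alpha_RR is an assumption.
\begin{equation}
  \phi = \frac{A_0}{\alpha_{RR}\, N_p \left(1 - e^{-\lambda t_{\mathrm{irrad}}}\right)}
\end{equation}
```

The bracketed factor is the usual saturation term: for t_irrad much longer than the half-life it tends to 1, while for short irradiations it scales the activity down linearly with λ t_irrad.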
The calculated fluxes for each experiment are shown in figure 4, with values in the region of 10^8 to 10^9 neutrons cm^-2 s^-1. It is clear from this figure that the flux decreases with time, starting from a mean flux of (8.5 ± 1.9) × 10^8 neutrons cm^-2 s^-1 for campaign 2 and reducing to (5.4 ± 1.5) × 10^8 and (3.4 ± 1.1) × 10^8 for campaigns 3 and 5 respectively. A likely explanation for the decrease is tritium depletion of the beam target; whilst there is no record of changes to the target, such changes cannot be ruled out. The beam was indeed moved between experiments to reduce depletion effects, which also introduces wide variation in flux estimates within campaigns.

COMPARISON WITH FISPACT-II
An automated system has been constructed to form a digital twin of each experiment using FISPACT-II. UKAEA has developed an internal version of FISPACT-II which allows it to be driven via an Application Programming Interface (API). Besides the several performance advantages which make the automation of large data sets possible, the API allows trivial interfacing with other codes and allows such holistic studies to be realised. The FISPACT-II API gives direct access to nuclear data lines based on the current inventory and allows bespoke gamma bounds to match those of an HPGe detector with 16384 channels. It is then trivial to compare simulation results with experiment following the integration of gamma lines to match the experimental measurement times.

Figure 5: The comparison of FISPACT-II simulated gamma lines (red) to experimental gamma lines (black) following peak identification, background subtraction and Gaussian broadening, using a sliding-window approach (top) and a neural network (bottom) for peak identification, for the full energy range. C/E values are given below each spectrum.
All experimental data are stored in a NoSQL database (MongoDB), and each experiment is analysed by querying the database and constructing an equivalent FISPACT-II simulation. Peak-matching and background-removal algorithms are applied to the data, as described previously, and selected peaks are fitted with Gaussian distributions which are matched to broadened FISPACT-II lines, taking into account detector efficiencies. This comparison is shown for a Au experiment (experiment 61) at 924 seconds (the final measurement time) following irradiation in figure 5, using C/E values: the FISPACT-II simulated line count, C, divided by the estimated experimental line count, E, for each peak.

Figure 6: The comparison of FISPACT-II simulated gamma lines (red) to experimental gamma lines (black) following peak identification, background subtraction and Gaussian broadening, using a sliding-window approach (top) and a neural network (bottom) for peak identification, in the reduced energy range 50 keV to 450 keV.
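The per-peak C/E computation described above can be sketched as follows; the matching tolerance and function names are illustrative assumptions, not the paper's actual parameters.

```python
def c_over_e(sim_lines, exp_peaks, match_tol_kev=1.5):
    """Compute C/E per matched peak.

    sim_lines: list of (energy_keV, counts) for simulated discrete lines.
    exp_peaks: list of (energy_keV, counts) for Gaussian-fitted
               experimental peaks.
    Simulated line intensities within match_tol_kev of an experimental
    peak are summed, approximating integration of the broadened
    simulated peak; unmatched experimental peaks are skipped.
    Returns a list of (energy_keV, C/E).
    """
    out = []
    for e_exp, counts_exp in exp_peaks:
        c = sum(c_sim for e_sim, c_sim in sim_lines
                if abs(e_sim - e_exp) <= match_tol_kev)
        if c > 0 and counts_exp > 0:
            out.append((e_exp, c / counts_exp))
    return out
```

For the 843/846 keV doublet, the tolerance must be smaller than the line separation so that each simulated line is attributed to the correct experimental peak.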
One notable outlier is the positron-electron annihilation peak at 511 keV, where FISPACT-II identifies spectral lines very close to the signature peak at much lower intensity, causing a low C/E value. This is much more evident for the ANN, due to the larger number of peaks detected. Ignoring the outliers, it is evident that there is generally good agreement for most peaks, with C/E values centred around 1, although a slight skew towards C < E is noticeable, particularly in the low-energy regime (< 500 keV). The peaks identified with the ANN include a much larger number agreeing with calculation and altogether show better general agreement.

CONCLUSIONS
An automated approach to peak identification can save considerable time in gamma spectroscopy, which is typically performed by hand at the cost of prolonged human effort. Alongside traditional windowing methods, neural networks show promising results and can dramatically improve the number of peaks correctly identified, specifically for multiplets, but they also introduce high false-positive rates, typically in the high-energy, low-count regime. Much work is still needed to develop these methods further; hybrid approaches could be adopted in the future to improve multiplet recognition whilst reducing false-positive rates.
Whilst there is some good agreement between simulation and experiment, wide gaps remain. It is hard to identify the direct source of the disagreement; better understanding and propagation of uncertainties in the modelling is needed.
This holistic approach has shown that the process can be fully automated and used to perform analysis immediately after data collection, providing feedback and data-quality assessments to experimentalists whilst on site, thereby allowing data to be analysed and experiments to be altered and adjusted on timescales of hours instead of years.