Event-based neutron activation spectroscopy and analysis algorithm using MLE and metaheuristics

Techniques used in neutron activation analysis often depend on the experimental setup. In the context of developing a portable, high-efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods [1] given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis [2] was therefore needed. The use of multiple detectors calls for flexible storage of the acquisition data to enable sophisticated post-processing of the information. Analogously to what is done in heavy-ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup, and also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes involved [3] was used to create a statistical model. Maximum likelihood estimation was combined with metaheuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code, yielding positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research, such as the identification of the quasi-target (QT) and quasi-projectile (QP) in heavy-ion collisions. The approach appears to translate well to other fields of research.


Introduction
In-field neutron activation for use in geology and medical physics requires a robust experimental setup and a potent analysis algorithm to produce useful data. Considering a multi-crystal detection array and event-based acquisition, we developed an analysis method based on maximum likelihood estimation combined with metaheuristics. This fitting method must be able to fit a wide range of both analytic and piece-wise functions while taking into account the random nature of the data.

Basis for the method
The experimental setup consists of an array of NaI crystals with photomultiplier tubes connected to an analogue-to-digital converter. The maximum amplitude of each pulse is recorded with a time stamp in a database. This type of recording enables the development of a more sophisticated fitting method. As a first step, it gives us the ability to create histograms with both time and energy windows selected after the experiment.

Mathematical basis of the method
The basis for the method is to look at the number of counts in a given channel of a given histogram as the outcome of multiple binomial trials. By letting p, the probability of success for a given trial, take any value between 0 and 1, we can compute the likelihood of obtaining a number of counts x for an arbitrarily large number of trials n: L = (n choose x) p^x (1-p)^(n-x).
Given a scenario where p is a function of the channel, one can compute p(c) as a function of the channel number c and then calculate the total likelihood of obtaining such a distribution of counts by multiplying the likelihoods of the individual channels: L_total = ∏_c L_c.
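As a concrete sketch in code (working in log space, so the per-channel log-likelihoods add instead of multiplying; the Gaussian-shaped p(c) and the numbers are purely illustrative, not the experimental values):

```python
import math

def log_binom_likelihood(x, n, p):
    """Log-likelihood of observing x counts in n trials with success probability p."""
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
            + x * math.log(p) + (n - x) * math.log(1 - p))

def total_log_likelihood(counts, n, p_of_c):
    """Sum over channels of the per-channel log-likelihoods (the log of L_total)."""
    return sum(log_binom_likelihood(x, n, p_of_c(c)) for c, x in enumerate(counts))

def gaussian_p(amp, mu, sigma):
    """Illustrative channel-dependent success probability p(c)."""
    return lambda c: amp * math.exp(-0.5 * ((c - mu) / sigma) ** 2)

counts = [0, 1, 3, 7, 3, 1, 0]  # toy histogram peaked at channel 3
ll_good = total_log_likelihood(counts, 1000, gaussian_p(7e-3, 3.0, 1.0))
ll_bad = total_log_likelihood(counts, 1000, gaussian_p(7e-3, 5.0, 1.0))
```

A p(c) centred on the observed peak yields a higher log-likelihood than a displaced one, which is exactly the signal the optimiser exploits.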
Once the equation for L_total has been determined, it is only a matter of optimising the function by varying the parameters of p(c). We have chosen a metaheuristic approach, discussed below.
In the case of multiple random processes, the number of counts in a given channel c is the result of the sum of two or more binomial distributions. For a sufficient number of counts, we can assume that each random process follows a Poisson distribution; the resulting distribution is then also a Poisson distribution whose mean equals the sum of the daughter means. This means that our likelihood function L_total can be computed for different parameter sets and used in an algorithm to optimise them. In the special case where the numbers of trials and counts are too small to permit the Poisson approximation and multiple processes are presumed, no general closed form for the combination of two binomial distributions exists. In our case, this is only of interest for single-count-per-channel fitting. There, the likelihood must be computed in an n-fold loop where we sum the likelihoods over every permutation of attributions of the single count in a channel to a given process. Needless to say, this carries a high computational cost compared to cases where the Poisson approximation holds. Alternatively, we can calculate the convolution of the random processes and use the resulting distribution as our likelihood function.
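For the well-populated case, the additivity of Poisson means makes the multi-process likelihood straightforward. A minimal sketch (the flat background and decaying line are illustrative placeholders for the per-channel mean functions):

```python
import math

def poisson_logpmf(x, lam):
    """Log of the Poisson probability of observing x counts with mean lam."""
    return x * math.log(lam) - lam - math.lgamma(x + 1)

def two_process_loglike(counts, mu1, mu2):
    """Counts in channel c from two independent processes follow a Poisson
    law whose mean is the sum mu1(c) + mu2(c) of the daughter means."""
    return sum(poisson_logpmf(x, mu1(c) + mu2(c)) for c, x in enumerate(counts))

# Illustrative mean functions: a flat background plus an exponentially decaying line
background = lambda c: 2.0
decay_line = lambda c: 10.0 * math.exp(-0.3 * c)
ll = two_process_loglike([11, 9, 6, 4, 3, 2], background, decay_line)
```

Adding a third process only means adding a third mean function to the sum, which is what keeps this branch of the code cheap compared with the permutation loop.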

Metaheuristic optimisation
Given the large number of possible functions with an arbitrary number of parameters, a non-analytic method was chosen for greater adaptability. The algorithm computes values of p(c) following a function, or sum of functions, of the type f(c; a, b, ...), then computes the total likelihood L_total described above. We then modify the parameters that enter the p function and compute L_total again. Through metaheuristic iterations we try to converge to a maximum. Depending on the task at hand, various methods can be used; in most cases encountered in our work, a simple local search algorithm works well.
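A minimal sketch of the simple local-search variant (hill climbing on the log-likelihood; the quadratic surrogate objective, with its known maximum at (2, -1), merely stands in for L_total):

```python
import random

def local_search(loglike, params, step=0.1, iters=2000, seed=0):
    """Hill climbing: perturb one randomly chosen parameter at a time and
    keep the move only when the log-likelihood improves."""
    rng = random.Random(seed)
    best = list(params)
    best_ll = loglike(best)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        ll = loglike(cand)
        if ll > best_ll:
            best, best_ll = cand, ll
    return best, best_ll

# Surrogate objective with a known maximum at (2, -1)
surrogate = lambda p: -(p[0] - 2.0) ** 2 - (p[1] + 1.0) ** 2
best, best_ll = local_search(surrogate, [0.0, 0.0])
```

More elaborate metaheuristics (random restarts, simulated annealing) slot into the same interface when the likelihood surface has multiple maxima.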

Simple curve fitting
The following are simple curve-fitting tests used as a proof of concept for the algorithm. Simple and multiple functions have been fitted on both simulated and experimental data, with good results in terms of peak identification and noise reduction. In addition, precision single-count-per-channel fitting was done for exponential functions. The method enables us to fit functions without binning events together, which reduces the error introduced by coarse binning.
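For the exponential case, the unbinned (event-by-event) maximum likelihood even has a closed-form solution, which makes it a convenient cross-check for the metaheuristic fit. A sketch, with simulated decay times standing in for recorded event time stamps:

```python
import math
import random

def exp_loglike(times, lam):
    """Unbinned log-likelihood of decay times under the density lam * exp(-lam * t)."""
    return sum(math.log(lam) - lam * t for t in times)

def mle_decay_constant(times):
    """Closed-form maximiser of the unbinned likelihood: lam_hat = n / sum(t) = 1 / mean(t)."""
    return len(times) / sum(times)

# Simulated event-by-event decay times with a known decay constant
rng = random.Random(1)
true_lam = 0.5
times = [rng.expovariate(true_lam) for _ in range(5000)]
lam_hat = mle_decay_constant(times)
```

Because each event enters the likelihood individually, no binning choice is made at all, which is the point of the single-count-per-channel approach.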

Advanced applications
To test the limits of our optimisation algorithm, we tried to fit the sum of 1024 Gaussian functions with means of increasing value and predetermined variance; essentially, we had 1024 free parameters, namely the amplitudes. We fed in an unresolved double peak similar to one that could be encountered in a typical NaI spectrum. The result in amplitude space is a resolution of the two peaks. Because of the large number of free parameters, convergence is slow, and the fit was stopped after an arbitrary amount of time. We expect that, given enough time, the amplitude-space spectrum would become even clearer.
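Amplitude optimisation over a fixed Gaussian basis can also be sketched at small scale with a multiplicative maximum-likelihood update (the EM/Richardson-Lucy iteration for a Poisson model, which monotonically increases the likelihood). The 30-channel setup, peak positions and iteration count below are illustrative, not the 1024-function configuration of the text, and this update rule is a stand-in for our metaheuristic search:

```python
import math

N = 30       # channels; one fixed-width Gaussian basis function per channel
SIGMA = 3.0  # predetermined standard deviation of every basis function

# Kernel K[c][j]: response in channel c of basis function j (centred at channel j)
K = [[math.exp(-0.5 * ((c - j) / SIGMA) ** 2) for j in range(N)] for c in range(N)]

def model(a):
    """Predicted spectrum: amplitude-weighted sum of the Gaussian basis."""
    return [sum(K[c][j] * a[j] for j in range(N)) for c in range(N)]

def em_step(counts, a):
    """One multiplicative EM update of the amplitudes for the Poisson model."""
    m = model(a)
    return [a[j] * sum(K[c][j] * counts[c] / m[c] for c in range(N))
            / sum(K[c][j] for c in range(N)) for j in range(N)]

def poisson_ll(counts, a):
    """Poisson log-likelihood of the data given amplitudes a (constants dropped)."""
    m = model(a)
    return sum(x * math.log(mc) - mc for x, mc in zip(counts, m))

# Unresolved double peak built from two overlapping Gaussians at channels 12 and 18
counts = [round(50 * math.exp(-0.5 * ((c - 12) / SIGMA) ** 2)
                + 40 * math.exp(-0.5 * ((c - 18) / SIGMA) ** 2)) for c in range(N)]
a = [1.0] * N
for _ in range(200):
    a = em_step(counts, a)
```

Iterating concentrates amplitude onto the basis functions that actually generated the data, which is the mechanism by which the overlapping peaks separate in amplitude space.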

Future work
The next step in completing the analysis algorithm is the creation of pseudo-functions that fit the full-energy peak together with the Compton edge and background for gammas of different energies in NaI crystals. These functions will then be easy to fit to our experimental spectra to extract the elemental composition of the irradiated material. Once this is done, we will add an exponential tail in the time dimension to these functions to build three-dimensional element-block functions. We will then be able to fit element amplitudes directly on the raw data without having to make energy and time cuts.