Metrology of Acquisition Chains and Signal Processing of LMJ Experiments

Since the first experiment in 2014, more and more plasma diagnostics have been deployed on the Laser MegaJoule (LMJ) facility operated by CEA-DAM. These diagnostics measure the radiation and particles emitted during laser experiments in order to study high-energy physics, especially inertial confinement fusion (ICF). Different types of sensors surround the LMJ target chamber and convert the quantities of interest into an electric signal. The signal is then transmitted via coaxial cables, acquired by a broadband oscilloscope, and digitally post-processed. Each step of this typical acquisition chain adds measurement errors and increases the overall uncertainty. First, a numerical model of the digitizer is presented, alongside a specific hardware system designed to perform its metrology in situ. The model accounts for error sources such as offset, gain, and skew, and provides a measurement of the effective number of bits (ENOB) of the digitizer. The experimental characterization of the electrical chain via the measurement of its transfer function is also detailed. Finally, the numerical methods deployed to handle the inverse problem, based on deconvolution, are introduced, including future developments exploiting Bayesian inference and statistical approaches.


I. INTRODUCTION
The Laser MegaJoule (LMJ) is a major component of the French "Simulation Program" led by the Military Applications Division of the Atomic Energy Commission (CEA-DAM). This program aims at validating theoretical models by performing numerical simulations and experiments, especially in high-energy physics domains. To reach these extreme conditions, LMJ will deliver shaped pulses from 0.7 ns to 25 ns with a maximum energy of 1.8 MJ of UV light on the target. In 2019, the first successful ICF campaign was carried out, and the associated nuclear diagnostic serves as the example for the following sections. The aim was to measure the primary neutron yield, the ion temperature, and the neutron bang time in deuterium (D2) implosions by using a neutron time-of-flight (nTOF) diagnostic. This diagnostic is composed of several detectors made of fast plastic scintillators coupled to gated photomultiplier tubes (GPMT). These detectors are placed all around the target chamber, a 10-meter-diameter aluminum sphere covered with a neutron shield made of 40-cm-thick borated concrete. The electric signal coming out of the GPMT is then transmitted through ultralow-loss coaxial cables to a digital oscilloscope, located in a Faraday room next to the target chamber. Many parameters have to be estimated precisely (integral value, full-width at half-maximum, instant of maximum); however, the cable length can be up to 80 meters and attenuators of tens of dB are used along the lines, which makes the output signal strongly filtered and attenuated. This requires an accurate determination of the transfer function of the chain, so that deconvolution algorithms can be implemented to retrieve the initial signal measured by the detector. Moreover, error sources due to the digitizing process itself decrease the signal-to-noise ratio. This has motivated the development of a customized system to perform different metrology procedures with the digitizers associated with this diagnostic.
Based on these results, the uncertainties associated with the acquisition can be estimated more accurately. As indicated in Table 1, the influence of these noise sources increases with both the frequency and the amplitude of the input signal. The bandwidth of the digital oscilloscopes used for the ICF campaign was set to 4 GHz, which implies a thermal noise floor of 57 µVRMS. Then, since the electric signal transmitted by the detector has a frequency signature up to 100 MHz, the phase noise is estimated at around 885 µVRMS for a specified intrinsic jitter of 1 psRMS. Finally, the input signal is split before being sent to the 4 channels of the digitizer, each of which is set to a different vertical sensitivity. This configuration allows the signal of interest to be measured with the most appropriate full-scale range and can reduce the quantization noise by a factor of 10. Indeed, since the vertical resolution of the ADC is 8 bits and the full scale varies between 0.4 V and 4 V, the quantization noise can be bounded between 450 µVRMS and 4.5 mVRMS.
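The three contributions above can be reproduced with textbook formulas. This is a minimal sketch, not the paper's model: the 50-ohm input impedance, 290 K temperature, and 2 V peak sine amplitude used for the jitter term are assumptions chosen to match the quoted orders of magnitude.

```python
import math

k_B = 1.380649e-23                   # Boltzmann constant (J/K)
T, R, BW = 290.0, 50.0, 4e9          # assumed temperature, impedance; 4 GHz bandwidth

# Thermal noise floor: sqrt(4 k T R B)
v_thermal = math.sqrt(4 * k_B * T * R * BW)          # ~57 uVrms

# Phase (jitter) noise for a 100 MHz signal and 1 psRMS intrinsic jitter,
# assuming a 2 V peak (1.41 Vrms) input sine
f_sig, t_jitter, v_rms = 100e6, 1e-12, 2 / math.sqrt(2)
v_phase = 2 * math.pi * f_sig * t_jitter * v_rms     # ~0.9 mVrms

# Quantization noise q/sqrt(12) for an 8-bit ADC, full scale 0.4 V to 4 V
def v_quant(full_scale, n_bits=8):
    return full_scale / (2**n_bits * math.sqrt(12))

print(f"thermal: {v_thermal*1e6:.0f} uVrms")
print(f"phase:   {v_phase*1e6:.0f} uVrms")
print(f"quant:   {v_quant(0.4)*1e6:.0f} to {v_quant(4.0)*1e6:.0f} uVrms")
```

With these assumptions the three terms come out at roughly 57 µVRMS, 0.9 mVRMS, and 450 µVRMS to 4.5 mVRMS, consistent with the values quoted above.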

B. Effective Number of Bits (ENOB)
An easy way to access the effective noise due to the random errors described above is to measure the effective number of bits (ENOB) of the digital oscilloscope. The procedure is the following:
- A pure sinusoidal signal is sent to the input of the oscilloscope;
- The signal is fitted using a least-squares optimization;
- The standard deviation σeff of the difference between the measurement and the fit is computed to obtain the effective noise;
- The ENOB is computed using the following formula:

ENOB = log2 [ FSR / (σeff · √12) ]

This value quantifies the quality of the ADC and is given for a specific full-scale range (FSR). Fig. 3 shows the ENOB values for each vertical sensitivity setting of the digitizer (from 5 mV/div to 1 V/div) and for a sinusoidal input signal whose frequency varies between 1 kHz and 20 MHz (upper image). The simulations based on the noise sources identified in Table 1 can be extended to 4 GHz (lower image). It can be seen that the phase noise becomes the main limiting factor of the ENOB above 1 GHz, while the quantization noise and thermal agitation limit the ENOB in the lower frequency range. The nominal value is around 6.3 ENOB.
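The sine-fit procedure can be sketched on synthetic data. The sample rate, frequency, amplitude, and injected noise level below are illustrative assumptions, not the paper's measurement conditions; the fit is a linear least squares at the known frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, fsr = 1e9, 10e6, 2.0         # assumed sample rate, sine frequency, full scale
t = np.arange(4096) / fs
ideal = 0.9 * np.sin(2 * np.pi * f0 * t)
measured = ideal + rng.normal(0, 5e-3, t.size)   # additive effective noise

# Least-squares sine fit at the known frequency: A*sin + B*cos + C
M = np.column_stack([np.sin(2 * np.pi * f0 * t),
                     np.cos(2 * np.pi * f0 * t),
                     np.ones_like(t)])
coeffs, *_ = np.linalg.lstsq(M, measured, rcond=None)
sigma = (measured - M @ coeffs).std()            # effective noise

# ENOB: bit count of an ideal quantizer with the same RMS noise
enob = np.log2(fsr / (sigma * np.sqrt(12)))
print(f"effective noise = {sigma*1e3:.2f} mVrms, ENOB = {enob:.1f}")
```

Fitting amplitude and phase as a sin/cos pair keeps the problem linear, so no iterative optimizer is needed when the frequency is known.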

C. Customized system for in situ metrology
During the ENOB measurement, the sinusoidal signal is fitted to compute the residual noise. This operation cancels the influence of other error sources due to the oscilloscope, such as:
- Zero-volt error
- Offset error
- Gain error
- Timebase error
- Impedance mismatch
- Skew error

Each of these errors requires a specific metrology procedure that could previously only be carried out by the manufacturer. Moreover, the constraints of the LMJ facility impose placing the acquisition devices inside Faraday rooms, and removing these instruments periodically for metrological purposes would drastically reduce the operational time available for experiments. To prevent this, a customized system has been designed so that each digital oscilloscope of every diagnostic can be tested in situ at any time. The system depicted in Fig. 4 is composed of a digital multimeter, a waveform generator, a high-voltage fast pulse generator, and a pure sinusoidal crystal generator. A metrology test can be launched by the operator from the control room, and a full report containing all the errors measured for every channel at each vertical sensitivity is generated, in order to help quantify the uncertainties of the acquisition system.

In order to measure h(t) for the nuclear diagnostic, an extremely fast step generator featuring a rise time tr of 45 ps was used. The step is first sent directly to the digitizer as the reference signal, and then it is sent into the chain at the detector level. The time derivative of the step response gives the impulse response. However, the reference signal is not a perfect Dirac impulse, i.e. an infinitely high and infinitely short signal of unit energy. It is fast enough to cover the whole bandwidth of the experiment (BW ≈ 0.35/tr ≈ 7.7 GHz), but the amplitude has to be calibrated by dividing the step response by the integral value of the reference signal.
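The step-to-impulse calibration can be sketched as follows. The chain is modeled here as a first-order low-pass filter with an ideal reference step, both synthetic assumptions standing in for the measured signals.

```python
import numpy as np

dt = 10e-12                          # assumed 10 ps sample period
t = np.arange(2000) * dt
tau = 200e-12                        # assumed first-order time constant

step_ref = 0.25 * (t > 100e-12)      # reference step, 0.25 V amplitude
step_out = 0.25 * (1 - np.exp(-np.clip(t - 100e-12, 0, None) / tau)) * (t > 100e-12)

# Impulse response: time derivative of the step response, normalized by the
# amplitude of the reference step
h = np.gradient(step_out, dt) / step_ref[-1]

# Sanity check: the integral of h equals the DC gain of the chain (1 here)
dc_gain = h.sum() * dt
print(f"DC gain = {dc_gain:.3f}")
```

The normalization step corresponds to the amplitude calibration described above: without it, h(t) would carry the (arbitrary) amplitude of the step generator.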
The Fourier transform of the impulse response gives the transfer function H(f) in the frequency domain, which is used to check the behavior of the chain in terms of attenuation and delay within the bandwidth of interest (cf. Fig. 5). The low-pass filtering effect clearly appears on the amplitude curve (red); it will be corrected by the deconvolution algorithm.
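Extracting attenuation and delay from H(f) can be sketched on the same kind of synthetic first-order response (an assumption, with a 200 ps time constant, i.e. a cutoff near 800 MHz):

```python
import numpy as np

dt = 10e-12
t = np.arange(2000) * dt
tau = 200e-12
h = np.exp(-t / tau) / tau                   # unit-area first-order impulse response

H = np.fft.rfft(h) * dt                      # continuous-time approximation of H(f)
f = np.fft.rfftfreq(t.size, dt)

atten_db = 20 * np.log10(np.abs(H))          # attenuation curve
phase = np.unwrap(np.angle(H))
group_delay = -np.gradient(phase) / (2 * np.pi * f[1])

i = np.searchsorted(f, 800e6)
print(f"attenuation at 800 MHz: {atten_db[i]:.1f} dB")
print(f"group delay at DC: {group_delay[0]*1e12:.0f} ps")
```

For this model the attenuation is about -3 dB at the cutoff and the low-frequency delay is close to tau, the two quantities read off the curves of Fig. 5 for the real chain.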

B. Deconvolution
One of the simplest approaches to deconvolution is to set up an inverse filter. Using the Fourier transform, the convolution equation becomes a simple product, and the estimated input signal ê(t) is obtained by dividing the output by the transfer function:

ê(t) = F⁻¹[ S(f) / H(f) ]

Fig. 6 shows an example of inverse filtering. The blue trace is a typical output of the nTOF detector. The main spectral components of this signal are located around 50 MHz.
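Inverse filtering reduces to a spectral division. In this sketch the detector pulse (a Gaussian) and the chain response (first-order low pass) are synthetic assumptions, and no noise is added, which is the favorable regime discussed below.

```python
import numpy as np

dt = 1e-10
t = np.arange(4096) * dt
e_true = np.exp(-0.5 * ((t - 100e-9) / 5e-9) ** 2)   # assumed "input" pulse
h = np.exp(-t / 2e-9) / 2e-9                          # assumed chain impulse response

s = np.convolve(e_true, h)[: t.size] * dt             # simulated measured output

# Inverse filter: divide the spectra, then transform back
E_hat = np.fft.rfft(s) / np.fft.rfft(h * dt)
e_hat = np.fft.irfft(E_hat, n=t.size)

print(f"max reconstruction error: {np.abs(e_hat - e_true).max():.2e}")
```

With a noise-free output and a well-behaved H(f), the division recovers the input essentially exactly; the limitations appear as soon as H(f) has small values where the output carries noise.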
According to the amplitude plotted in Fig. 5, the attenuation due to the chain is -13.5 dB (i.e. a factor of 4.73). A simple gain correction leads to the red curve, while the green curve shows the input estimated by inverse filtering. In the case of the nuclear diagnostic, several pieces of information must be extracted from these signals: the area under the curve, corresponding to the neutron yield; the full-width at half-maximum (FWHM), corresponding to the ion temperature; and the instant of maximum emission, i.e. the bang time. Table 2 details how much more efficient the deconvolution correction is and how it improves the accuracy of the measurement. Taking into account the behavior of the chain in terms of amplitude and delay over the whole bandwidth straightens the curve and yields rise and fall times in good agreement with the predictions.

Fig. 6. Output signal (blue) and estimated inputs using a simple gain correction (red) and an inverse filter deconvolution (green).

In the case of the nuclear diagnostic, the signal-to-noise ratio is high enough for deconvolution by inverse filtering to be efficient. However, this process may not give relevant results in many other cases. To illustrate the limits of this method, the convolution equation can be rewritten by constructing the matrix H from the elements hi of the impulse response:

s = H·e, with Hij = h(i−j)

Solving the inverse problem then requires satisfying the three Hadamard conditions [2]: existence, uniqueness, and stability. Indeed:
- The solution may not exist: the operator H may not be invertible (H⁻¹ may not exist). In many systems, the first term h0 of the impulse response is equal to zero (e.g. h(t) ~ t·e^(−t/τ)), in which case the matrix H is singular. This can be avoided by taking h1 as the first term and considering that the impulse is displaced by one unit of time from its true position.
- The solution may not be unique: H admits more than one inverse (∃ e1 ≠ e2 such that H·e1 = H·e2 = s). To make sure the solution is unique, one has to restrict the solutions to the space of functions whose spectrum is included in the system bandwidth.
- The solution may not be stable: a small variation δs of the output generates a large deviation of the estimate (lim_{δs→0} ‖H⁻¹(s) − H⁻¹(s + δs)‖ ≠ 0). The stability can be estimated by computing the condition number of H, i.e. the ratio between its largest and smallest singular values. If this factor is much larger than 1, then a small noisy perturbation of the output will result in a large variation of the estimated input.
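The existence and stability conditions can be checked numerically. This sketch builds the lower-triangular convolution matrix for an assumed impulse response with h(0) = 0, shows that the naive matrix is singular, and that the one-sample shift mentioned above restores a finite condition number.

```python
import numpy as np

n = 200
dt = 1e-10
t = np.arange(n) * dt
h = t * np.exp(-t / 2e-9)            # assumed response with h[0] = 0
h = h / (h.sum() * dt)               # normalize to unit area

# Lower-triangular Toeplitz matrix: s_i = sum_j h_{i-j} e_j dt
idx = np.subtract.outer(np.arange(n), np.arange(n))
H = np.where(idx >= 0, h[np.clip(idx, 0, n - 1)], 0.0) * dt
cond_H = np.linalg.cond(H)           # singular: h[0] = 0 zeroes the diagonal

# One-sample shift: take h[1] as the first term of the kernel
H1 = np.where(idx >= 0, h[np.clip(idx + 1, 0, n - 1)], 0.0) * dt
cond_H1 = np.linalg.cond(H1)

print(f"cond(H)  = {cond_H:.2e}")
print(f"cond(H1) = {cond_H1:.2e}")
```

The shifted matrix is invertible, but its condition number is still well above 1, which is why noise on the output is amplified by a direct inversion.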
These inherent limitations of inverse filtering lead to the consideration of other deconvolution methods that can take into account a priori information about the input to be recovered. This is one of the main advantages of probabilistic methods.

C. Probabilistic approaches
Since deterministic methods lack the tools to handle prior knowledge about the statistical properties of the unknown parameters and of the noise, probabilistic approaches can be implemented to solve the inverse problem. One of them is based on Bayes' theorem [1]:

P(ej | si) = P(si | ej) · P(ej) / P(si)

where P(ej) is the a priori probability of the cause ej and P(si | ej) is the probability that the output si was caused by the input ej. This term P(si | ej) corresponds to one of the elements of the matrix H containing the impulse response measured on the diagnostic. Using this Bayesian inference approach makes it straightforward to include uncertainties on the data and on the computed solution of the inverse problem (ej) via probability distributions.
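One classical algorithm that follows from the repeated application of Bayes' theorem to nonnegative signals is Richardson-Lucy iteration; the sketch below applies it to synthetic data (the pulse, the response, and the noise level are assumptions, not the paper's method or data).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n, dtype=float)
e_true = np.exp(-0.5 * ((t - 150) / 10.0) ** 2)      # assumed input pulse
h = np.exp(-t / 8.0)
h /= h.sum()                                          # normalized response

s = np.convolve(e_true, h)[:n]
s = np.maximum(s + rng.normal(0, 1e-3, n), 1e-12)     # noisy, positive output

# Richardson-Lucy: multiplicative updates keep the estimate nonnegative
e_hat = np.full(n, s.mean())                          # flat starting guess
for _ in range(100):
    blur = np.maximum(np.convolve(e_hat, h)[:n], 1e-12)
    corr = np.convolve(s / blur, h[::-1])[n - 1:]     # correlation with h
    e_hat *= corr

peak_err = abs(int(np.argmax(e_hat)) - 150)
print(f"peak position error: {peak_err} samples")
```

Unlike the inverse filter, the multiplicative update never divides by the transfer function, so it tolerates noise and enforces the positivity prior naturally; full Bayesian inference would additionally propagate probability distributions on ej rather than a point estimate.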

IV. CONCLUSION
This paper describes how the acquisition chains of the plasma diagnostics installed on the LMJ facility are controlled: first, by designing a customized instrument to perform the metrology procedures used for the digital oscilloscopes, especially the ENOB measurement, which can be launched before every campaign. The way the behavior of the entire chain is characterized by the measurement of its impulse response is then detailed, and the final results obtained with the deconvolution algorithm based on inverse filtering are illustrated. New methods based on Bayesian approaches will be used to infer more precisely the uncertainty of the computed solutions.