Uncertainty Considerations that Impact the Use of Dosimetry Metrics in Modern Semiconductors

The use of calculated dosimetry metrics is a crucial element in predicting neutron damage to modern semiconductors under various conditions, e.g., control electronics in avionic systems, satellite sensors, or power output from solar panels. These dosimetry metrics have little value unless they are accompanied by a quantified uncertainty. This paper outlines a mathematical framework that captures the response models for most semiconductor damage metrics and addresses some of the challenges faced in quantifying the relevant sources of uncertainty. The energy-dependent correlations in the damage functions are a critical underpinning in propagating the uncertainty back to a measured quantity. Significant issues are associated with “model defect” in some of the models used, i.e., assumptions in the model form that are not easily considered in the uncertainty estimate. Other issues relate to fundamental differences between the experimentally measured quantity and the calculated metric used to represent the damage mode.


Introduction
Dosimetry addresses the quantification of radiation metrics. The heart of dosimetry is in the proper treatment of the uncertainty in a calculated metric associated with a given measurement. A dosimetry measurement without a supportable uncertainty has no value in an application. This paper addresses important considerations in the expression of the uncertainty in dosimetry for neutron damage metrics that are useful in addressing damage in modern semiconductors due to exposure to neutron fields.
Uncertainties are routinely available for the measurement of a dosimeter response, and for a calculated metric that describes the radiation field to which the dosimeter was exposed. However, one critical aspect that is often lacking in a complete uncertainty quantification is a consideration of the correspondence between the measurement and the calculated radiation metric.

Calculated Damage Metrics
Examples of calculated damage metrics, D_type, of a given type and in a given facility, include quantities such as the total kerma, displacement kerma, ionizing kerma, and displacements-per-atom (dpa). The damage metric is expressed by a scalar, as shown in Equation 1, in which a type-dependent response scaling factor, k_type, multiplies the convolution of the energy-dependent neutron fluence with the energy-dependent response function:

    D_type = k_type ∫ φ(E) R_type(E) dE .    (1)

Because the semiconductor damage from neutron interactions, e.g., the energy deposition, comes from the outgoing recoil ions resulting from the neutron interaction, most scalar response functions can be expressed as a summation over the various reaction channels and an integration over the response due to the energy and angle of the resulting recoil particles.
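In discretized (multigroup) form, Equation 1 reduces to a scaled sum of fluence-response products over energy groups. A minimal sketch, with purely illustrative group values and a hypothetical scaling factor:

```python
import numpy as np

# Hypothetical 3-group fluence [n/cm^2] and response function values;
# these numbers are illustrative, not evaluated nuclear data.
phi = np.array([1.0e10, 5.0e9, 1.0e9])
R_type = np.array([2.0e-15, 8.0e-15, 3.0e-14])
k_type = 1.0  # type-dependent response scaling factor (assumed)

# Scalar damage metric: k_type times the group-wise fluence-response sum
D_type = k_type * float(np.dot(phi, R_type))
```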
The uncertainties in the underlying nuclear data that go into the response functions are addressed in Section 5. However, in addition to that nuclear data uncertainty, how the component nuclear data uncertainty is propagated through these equations can have a significant effect, and involves some assumptions. In some cases, there can be a strong correlation between different nuclear data uncertainties, e.g., a correlation between the reaction-channel-specific neutron cross sections within a given isotope, and between different isotopes in a naturally occurring element.
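The effect of such correlations on the propagated uncertainty can be illustrated with first-order ("sandwich") propagation, var(D) = s^T C s. The sensitivities and 10% group uncertainties below are hypothetical:

```python
import numpy as np

# Hypothetical sensitivities dD/dx_g of a damage metric D to three group
# cross sections, and an assumed 10% standard deviation on each group.
s = np.array([1.0, 2.0, 3.0])
sig = np.array([0.1, 0.1, 0.1])

def propagated_sd(corr):
    """First-order ("sandwich") propagation: var(D) = s^T C s."""
    C = np.outer(sig, sig) * corr
    return float(np.sqrt(s @ C @ s))

uncorrelated = propagated_sd(np.eye(3))        # independent groups
fully_corr = propagated_sd(np.ones((3, 3)))    # fully correlated groups
```

The fully correlated case yields a noticeably larger propagated standard deviation than the independent case, which is why the correlation assumption cannot be ignored.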
In addition to the issues about cross correlations between relevant parameters, the representation of the uncertainty in the form of a covariance matrix, and in parameter sampling, often makes the assumption that the uncertainty can be represented by a normal distribution. This can be a significant assumption, and it is often not supported by a careful consideration of the physics for the parameter or the sampled data. For example, when large uncertainties are present, the physical constraint of non-negativity in some quantities means that the uncertainty often should not be described by a normal/Gaussian distribution, e.g., in the cases where a log-normal distribution or a Poisson distribution is indicated by the physical basis. If one knows the distribution associated with a parameter that influences the damage metric, it can be properly propagated into the damage metric, even if the approach is through the use of Total Monte Carlo sampling. Issues arise when sufficient data, or a physics-based argument, are not available to support the assumption that a parameter is normally distributed.
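A minimal Total Monte Carlo-style sketch of why the normality assumption matters: for a strictly positive parameter with 50% relative uncertainty, a normal distribution yields unphysical negative samples, while a log-normal matched to the same mean and standard deviation does not. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

mean, sd = 1.0, 0.5     # strictly positive parameter, 50% relative uncertainty
n = 100_000

normal_samples = rng.normal(mean, sd, n)

# Log-normal parameters matched to the same mean and standard deviation
s2 = np.log(1.0 + (sd / mean) ** 2)
mu = np.log(mean) - 0.5 * s2
lognormal_samples = rng.lognormal(mu, np.sqrt(s2), n)

frac_negative = float(np.mean(normal_samples < 0.0))   # unphysical draws
all_positive = bool(np.all(lognormal_samples > 0.0))
lognormal_mean = float(np.mean(lognormal_samples))
```

Roughly 2% of the normal draws are negative here, which would have to be discarded or clipped, biasing the propagated metric; the matched log-normal avoids the problem entirely.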

Observed Damage Metrics
Typical measured quantities that represent damage metrics in semiconductors include:
1. Temperature, e.g., thermocouple response for a silicon calorimeter;
2. Current, e.g., the response of a diamond photoconductive detector (PCD);
3. Resistivity, e.g., Frenkel pair creation in a metal;
4. Minority carrier recombination lifetime, e.g., gain of a bipolar transistor or light output from an LED [2];
5. Track density, e.g., optically measured fission tracks in an emulsion containing fissile isotopes.
The observed damage metrics all have a direct measurement uncertainty, but, when they are correlated with a calculated damage metric, that correspondence can introduce critical dependencies based on assumptions about the physical basis for the response of the measurement device. For example, in the case of the temperature measurement, one has to include consideration of the uncertainty associated with the inference that the thermocouple temperature is indicative of the temperature in the calorimeter material itself, e.g., silicon is used as the base calorimeter material whereas the thermocouple may be Type B [platinum-rhodium]. One must include heat loss and thermal conductivity corrections in relating these two quantities. For the current measurement in the PCD, one must address the uncertainty in: a) the charge collection, e.g., a spatial component to the efficiency of the charge collection/sweep-out; b) the charge recombination, i.e., a charge-density-dependent recombination factor, a factor that depends upon the charged particle that produces the ionization, during charge sweep-out. For the minority carrier lifetime in a bipolar transistor or an LED, one must address uncertainty due to: a) charge-injection annealing within the active area of the semiconductor; b) temperature/time-dependent annealing based on the time elapsed since the displacement damage.

Connection between Measurement Metric and Calculated Quantity
In addition to the direct nuclear data uncertainties, there are additional uncertainties associated with the assumed connection between the measured quantity and the calculated quantity. The following subsections address some representative considerations here. The uncertainty considerations below are in addition to the measurement issues, which were addressed in Section 3, and the direct nuclear data uncertainties.

Calorimeter Temperature Measurement
The general assumption in the calculated metric used here is that the change in thermocouple temperature can be directly related to the calculated kerma in the calorimeter material. One uncertainty consideration is that a better match is available through the metric of the deposited dose rather than the metric of kerma. While current radiation transport tools can provide a high-fidelity estimate of the dose, if a sufficiently detailed CAD-based geometric model with complete material identification (including impurity levels) is used, many users revert to the "easier to calculate", geometry-insensitive, kerma metric, and do not compensate for this model assumption in their uncertainty analysis. The lack of charged-particle equilibrium in the actual application can play a role when different materials are in close proximity to the active calorimeter material, e.g., use of a high-Z thermocouple in a silicon calorimeter may result in a lack of electron equilibrium and a difference between a kerma and a dose in the silicon. Further, a kerma assumes that emitted secondary photons are not locally deposited but are transported out of the material, an assumption that may not be valid for low-energy x-ray emissions. Neutron-induced activation can also deposit energy over a decay period for the residual particle, and this resulting energy may conflict with the assumptions underlying the calculated kerma metric if calorimeter temperature measurements are made over an extended time period.
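In its simplest adiabatic form, the measurement model assumed at the start of this subsection is that the absorbed dose equals the specific heat times the temperature rise. A sketch with an illustrative temperature rise, deliberately omitting the heat-loss and thermocouple-coupling corrections discussed above:

```python
# Adiabatic silicon calorimeter: dose [Gy = J/kg] = c_p * delta_T,
# before any heat-loss or thermocouple-coupling corrections.
c_p_si = 705.0     # specific heat of silicon near room temperature [J/(kg K)]
delta_T = 0.010    # measured temperature rise [K], illustrative

dose_Gy = c_p_si * delta_T
```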

Diamond PCD Current Measurement
The general assumption in the use of the calculated PCD response metric is that the PCD time-dependent current is directly proportional to the rate of the ionizing kerma in the diamond. As noted in Section 4.1 with respect to total dose and total kerma, the difference between an ionizing kerma metric and the more relevant ionizing dose metric is also a concern here. In addition, the neutron ionizing kerma is generally obtained by subtracting the displacement kerma from the neutron total kerma. As such, there is a large uncertainty in this subtraction process due to the fidelity of the modelling of the recoil spectrum, an uncertainty addressed in Section 5.2. Since there can be a spatial dependence to the charge collection efficiency within the diamond, and because there can be a charge-density dependence to the charge recombination correction based on the neutron-induced recoil particles and their energy, there can also be a dependence upon the stopping power, an uncertainty addressed in Section 5.3.
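The uncertainty inflation in the subtraction can be seen with a small numerical sketch. All values are hypothetical, and the two kerma terms are assumed uncorrelated so that their variances add:

```python
import math

# Hypothetical total and displacement kerma (arbitrary units), each
# known to 5%; their difference is the inferred ionizing kerma.
k_total, sd_total = 100.0, 5.0
k_disp, sd_disp = 90.0, 4.5

k_ion = k_total - k_disp                # small difference of large numbers
sd_ion = math.hypot(sd_total, sd_disp)  # uncorrelated: variances add

rel_ion = sd_ion / k_ion                # ~67% from two 5% inputs
```

Two quantities known to 5% thus yield a difference with roughly 67% relative uncertainty; any positive correlation between the two kerma terms would reduce this, which is why the correlation assumption matters here.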

Carrier Lifetime Transistor Gain Measurement
The general assumption in the calculated response metric used here is that the transistor gain change is related to the displacement kerma through the Messenger-Spratt equation [2]. Several factors in the behavior of the bipolar transistor can affect the validity of this assumed behavior, e.g., recombination in the neutral base and in the emitter-base depletion region shows different dependences on the applied voltage, base doping, and neutral base width [3].
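The Messenger-Spratt relation referenced above can be sketched in one common form, in which the change in reciprocal gain is proportional to the damage-weighted fluence. The numbers below are illustrative and the damage constant K is hypothetical:

```python
# Messenger-Spratt relation in one common form:
#     1/beta_post - 1/beta_pre = phi / K
# where K is an empirically fitted damage constant.
beta_pre = 100.0    # pre-irradiation common-emitter gain
K = 1.0e15          # fitted damage constant [n/cm^2], hypothetical
phi = 1.0e14        # 1-MeV-equivalent fluence [n/cm^2]

beta_post = 1.0 / (1.0 / beta_pre + phi / K)   # degraded gain
```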
Published work with the same devices in different neutron fields clearly shows that, for silicon, the displacement kerma fails to match the observed response for low-energy neutrons. Part of the issue here is that, while displacement kerma may be proportional to the initial Frenkel pair creation, the observed semiconductor device is sensitive to the exact type of evolved defects, not merely to the initial defects. Deep level transient spectroscopy (DLTS) has shown that, in silicon, it is the divacancy and the vacancy-phosphorus defects (defects that result from the vacancy's interaction with phosphorus dopants) that affect the minority carrier recombination lifetime, and, furthermore, that even the charge state of the divacancy, i.e., the local electron density, can be a critical consideration in the transistor gain degradation.

Status of Nuclear Data for Semiconductors
Section 4 addressed some issues associated with the uncertainty in relating a calculated metric to observed damage in a semiconductor, i.e., model defect. In this section we look at how, even when we use the best models, various nuclear data uncertainties can affect the understanding of dosimetry metrics in semiconductors. We start by addressing nuclear data evaluations, but nuclear data involves much more than what is found in the current ENDF-6 format evaluations, e.g., stopping powers and damage partition functions. Other subsections detail some less visible nuclear data needs.

Nuclear Data Evaluations
Nuclear data evaluations, and the availability of MF33 covariance data, have shown significant improvements in recent file releases. Figure 1 compares the legacy (circa 1990) and most recent GaAs cross sections. Significant differences are seen, along with significantly improved fidelity in the representation of the resonance region. However, more improvement in the consistency of the nuclear data is still desired. The latest nuclear data evaluations still have deficiencies in the emitted photon energy that result in the calculated kerma being much larger than what is kinematically possible. While this was a significant problem in the legacy, ENDF/B-V, cross sections, this is supposed to be an attribute that is now examined when new nuclear data files are verified prior to release. File format verification appears to be lacking in many cases, e.g., where the recoil spectra appear to have been entered in the MF6 file using the flag for the wrong coordinate system (lab vs. CM) [4]. This is a case that should have been caught during testing through a simple comparison based on kinematic constraints.
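A verification test of the kind described above can be sketched as a simple kinematic bound: the kerma factor at incident energy E_n cannot exceed the energy available to the reaction, E_n + Q. All values here are illustrative placeholders, not evaluated data:

```python
# Simple kinematic sanity check: local energy deposition at incident
# energy E_n cannot exceed the available energy E_n + Q.
def kerma_factor_is_kinematic(kerma_factor_eV, E_n_eV, Q_eV):
    """True if the stated energy deposition is kinematically possible."""
    return kerma_factor_eV <= E_n_eV + Q_eV

ok = kerma_factor_is_kinematic(1.2e6, 1.0e6, 0.5e6)   # within the budget
bad = kerma_factor_is_kinematic(2.0e6, 1.0e6, 0.5e6)  # exceeds E_n + Q
```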

Recoil Atom Spectra
More nuclear data evaluations now contain MF6 recoil spectra, but the ENDF-6 format does not even permit the evaluator to include uncertainty data for the recoil spectra. The recoil spectra generally lack any validation data, and they can vary considerably with the modelling code/evaluation.

Stopping Power
Validation data is lacking for the recoil ion stopping powers for most non-silicon semiconductor materials, e.g., Ga in GaAs, N in GaN, and Al in AlN. The stopping power has an obvious strong energy-dependent correlation, which can affect systematic uncertainties in comparisons of dosimetry metrics, but this correlation is rarely quantified.
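One hedged way to represent such an energy-dependent correlation, assuming a Gaussian correlation in lethargy (log energy) with an assumed correlation length, is to build a relative covariance matrix for the stopping power. All numbers are illustrative:

```python
import numpy as np

# Energy-correlated relative covariance for a stopping-power uncertainty.
E = np.logspace(3, 7, 5)        # energy grid [eV]
sig = 0.05 * np.ones_like(E)    # assumed 5% relative standard deviation
u = np.log(E)
L_corr = 2.0                    # assumed correlation length in lethargy

corr = np.exp(-0.5 * ((u[:, None] - u[None, :]) / L_corr) ** 2)
C = np.outer(sig, sig) * corr   # relative covariance matrix

var_diag = float(C[0, 0])       # 0.05**2 on the diagonal
offdiag = float(C[0, 4])        # decays with energy separation
```

The correlation length is the key (and rarely quantified) modelling choice: a long correlation length turns the 5% point-wise uncertainty into a nearly systematic offset across the whole energy range.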

Experimental Data
There is a need for improved experimental data in a number of areas. One area is cross section data at high incident neutron energies, i.e., between a fission spectrum and the 14-MeV DT accelerator data. A second area is, as noted in the Section 5.3 discussion, validation data for ion stopping powers. A third area is validation evidence supporting the energy-dependent modelling of the recoil spectra. A fourth area is the energy-dependent characterization of the associated emitted gamma spectra, which is currently very poorly characterized except for thermal capture gammas. All of these quantities can have a very large effect on the calculated damage metrics of interest. Equation 2 showed that the nuclear data is modified by various terms to account for the physical response of the semiconductor or dosimeter. There can be significant uncertainty in the modelling of these modifying terms, and this uncertainty has a strong energy-dependent correlation that needs to be considered and propagated into damage metrics. Figure 5 shows the effect of varying the model for the division of the recoil particle energy into ionization and displacement deposition paths. There can be large differences here based on the code used and the calculational method, e.g., binary collision approximation code vs. molecular dynamics code, and based on the form used for the interatomic potential. Reference [6] shows that there can be big differences in the modelling of the damage efficiencies for polyatomic lattice materials where there is a significant difference in the atomic weight of the recoil ion and lattice atoms.
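The ionization/displacement partition discussed above is commonly evaluated with the Robinson analytic fit used in the NRT standard (Norgett, Robinson, Torrens). A sketch for a monatomic silicon lattice, treating the parameters as illustrative of that fit rather than as a validated damage model:

```python
def nrt_damage_energy(T_eV, Z=14, A=28):
    """Damage energy T_dam for a recoil of energy T in a monatomic Z, A
    lattice, using the Robinson analytic fit from the NRT standard."""
    k = 0.1337 * Z ** (2.0 / 3.0) / A ** 0.5
    eps = T_eV / (86.93 * Z ** (7.0 / 3.0))   # reduced (Lindhard) energy
    g = 3.4008 * eps ** (1.0 / 6.0) + 0.40244 * eps ** 0.75 + eps
    return T_eV / (1.0 + k * g)

# Fraction of the recoil energy going to displacements rather than
# ionization: large for low-energy recoils, small at high energy.
frac_low = nrt_damage_energy(1.0e3) / 1.0e3     # 1-keV silicon recoil
frac_high = nrt_damage_energy(1.0e6) / 1.0e6    # 1-MeV silicon recoil
```

Different codes and interatomic potentials effectively replace this analytic partition with their own, which is the source of the model-to-model spread noted above.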

Stochastic Effects
Most damage metrics use an average value for the relevant response, e.g., a neutron deposited energy or the number of Frenkel pairs. However, for modern small feature-size (<5 nm) electronics, there may only be one or two neutron interactions in the sensitive volume, and stochastic effects must be considered in the material response. Figure 7 shows the large variation in Frenkel pair generation that can be seen for a 1-MeV neutron incident on a silicon lattice.
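A stochastic sketch of the point above: sample the number of interactions in a tiny sensitive volume and a dispersed pair count per interaction, instead of using the mean. The compound-Poisson assumptions and all rates below are illustrative, not a validated damage model:

```python
import numpy as np

rng = np.random.default_rng(7)

mean_interactions = 1.5    # expected neutron interactions in the volume
pairs_per_event = 40.0     # expected Frenkel pairs per interaction

# Compound-Poisson sketch: Poisson number of interactions per trial,
# then a Poisson-dispersed total pair count given that number.
n_events = rng.poisson(mean_interactions, 100_000)
total_pairs = rng.poisson(pairs_per_event * n_events)

frac_zero = float(np.mean(total_pairs == 0))           # no damage at all
mean_pairs = float(np.mean(total_pairs))
rel_spread = float(np.std(total_pairs) / mean_pairs)   # far from "average"
```

Even with 60 Frenkel pairs expected on average, a substantial fraction of trials see no damage at all, so a single-average metric misrepresents the response of an individual small-volume device.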

Summary
This paper has provided a framework for representing the calculated damage metrics and highlighted the challenges faced in providing quantified uncertainties for the calculated responses that correspond to measured responses in semiconductor materials.