Malfunctions in Radioactivity Sensors' Networks

Abstract— The capacity to promptly and efficiently detect any source of contamination of the environment (a radioactive cloud) at a local and a country scale is mandatory for a safe and secure exploitation of civil nuclear energy. It must rely upon a robust network of measurement devices, to be optimized against several parameters, including the overall reliability, the investment, and the operation and maintenance costs. We show that a network can be arranged in different ways, but many of them are inadequate. Through simulations, we test the efficiency of several configurations of sensors in the same domain. The denser arrangement turns out to be the most efficient, but the efficiency is further increased when sensors are non-uniformly distributed over the country, with accumulation at the borders. In the case of France, as radioactive threats are most likely to come from the East, the best solution is to densify the sensors close to the eastern border. Our approach differs from previous work because it is "failure oriented": we determine the laws of probability for all types of failures and deduce from them the best organization of the network.

The topic "Malfunctions in Sensors' Networks" [3] deals with the propagation of radionuclides in the atmosphere and their likelihood of detection by a suitable network of sensors, accounting for the features of the network and its possible malfunctions originating from uncertainties, failures and false alarms.
The objectives of the ongoing research are:
-Checking the effectiveness of a given network of sensors: is it possible to reconstruct the shape of a radioactive cloud relying upon the indications provided by a given network of sensors? And how accurately?
-Optimizing the design of the networks of sensors: is it possible to increase their robustness (vs. uncertainties, failures and false alarms) while keeping the investment and exploitation costs at a reasonable level?
To answer these questions, the problem will be addressed through a gradual step-by-step approach, adopting at each step various assumptions upon the type of cloud, its direction, speed, and so on. We start with a complete investigation of the information given by a single sensor and the possible errors, and gradually scale up to a whole network.

A. Theory
First of all, the cloud is "pixellised". In such an approach, no precise geometrical shape is attributed to the cloud. A radioactivity level, which is a real number, is attributed to each pixel of the domain. Certainly, this assumption leads to a loss in precision, because the cloud no longer has specific boundaries; moreover, the usual difficulties connected with pixels hold: the radioactivity level is homogenized inside a given pixel.
Compared to the representation of the cloud as a collection of geometrical shapes with different levels of radioactivity, our approach is closer to reality. In fact, there is always a natural level of radioactivity (which may differ from one zone to another) and radioactive clouds do not have precise boundaries. A pixel of 1 x 1 km seems reasonable in terms of precision.
Also, the administrator of the network of sensors has no knowledge about the shape of the cloud.He has at his disposal the measurement results provided by the sensors only, plus some meteorological information (such as the speed and direction of the wind).
In this simulation, the radioactive cloud is assumed to move along a straight line from East to West: the entrance point is located at A and the exit point at B. For the sake of simulation, it is assumed that the cloud has the shape of a disk of radius r: as we said earlier, the sensors only see pixels.
We denote by V the speed of cloud propagation; in the numerical examples below, we will assume V = 10 km/h. Let ℓ be the length of the segment AB and C the center of the disk representing the radioactive cloud. The equation of motion of C follows from the law of uniform linear motion, assuming the origin of time (t = 0) at the moment C is located in position A.

B. The domain
Since the shape of the domain does not matter, we take a simplified domain, namely a square whose dimensions are comparable to those of continental France, namely 750 x 750 km. Remembering the size of the pixel assumed here (1 x 1 km), the domain is thus divided into 562 500 pixels. The origin of the axes is located at the bottom-left corner of the square.

C. Time units
The sensor monitors the environment continuously, but transmits the information every ten minutes only. In other words, all motions are discretized and viewed every 10 minutes. The time unit (TU) is denoted by τ.
The network sees the radioactive cloud as a collection of pixels. The coordinates of its center (x_C, y_C) are determined from the law of uniform linear motion. We denote by n the number of time units; at each TU, n is increased by 1. The time will be written as: T = n τ.
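The discretized motion of the cloud center described above can be sketched as follows (the function and variable names are ours, not the paper's; the entry point on the East border of the 750 x 750 km square is an assumed example):

```python
# Sketch of the discretized East-to-West cloud motion, sampled every 10 minutes
# and mapped onto 1 x 1 km pixels (assumed names, not from the paper).

V = 10.0           # cloud speed [km/h]
TAU = 10.0 / 60.0  # time unit [h] (10 minutes)

def center_position(n, x_entry, y_entry):
    """Center (x_C, y_C) after n time units, for an East-to-West straight path."""
    return (x_entry - V * n * TAU, y_entry)

def to_pixel(x, y):
    """Map a position [km] to the index of its 1 x 1 km pixel."""
    return (int(x), int(y))

# Assumed entry point A on the East border of the 750 x 750 km square:
x0, y0 = 750.0, 375.0
# After 1 h (6 time units) the center has moved 10 km westward:
xc, yc = center_position(6, x0, y0)
```

This makes explicit that, at the network side, only the pixel indices ever matter, not the exact position of the center.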

A. Introduction
We know from the Fukushima and Chernobyl accidents ([4], [5]) that a radioactive cloud spreads far from the initial emission point and gets larger and less radioactive with time. Therefore, it can be assumed that, on average and at least in a first approximation, areas which are far from the nuclear power plant are likely to be less contaminated than nearby ones.
Let's consider situations where the shape of the radioactive cloud changes with time: the radius increases and the radioactivity decreases at each time step. The increase of the radius is likely to allow an earlier detection of the cloud compared to the stationary case (cloud detected after 30 hours). If we assume an increase of 0.2 km per time unit, the sensor will detect the cloud earlier (26.78 hours), and after 24.19 hours if the radius increase is 0.4 km per time unit.
When the radioactivity decreases, the results will be different, depending on the initial radioactivity value of the cloud. Assuming a radioactivity decrease of 0.01 µSv/h per time unit, we can consider three bounding cases:
-A cloud with an initial concentration of contaminants generating 29 µSv/h: even with the decrease, the cloud will remain dangerous, with a minimum value of 25.6 µSv/h at the last time of detection by the sensor;
-A cloud with an initial concentration of contaminants generating 27 µSv/h: with the decrease, the radioactivity will go from 25.4 µSv/h (first time of detection by the sensor) to 23.6 µSv/h (last time of detection);
-A cloud with an initial concentration of contaminants generating 25 µSv/h: with the decrease, the radioactivity will never cross the threshold of 25 µSv/h.
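The detection times quoted above can be checked with a short sketch, under the stated assumptions: in the constant-radius case the 30 h detection corresponds to a 300 km gap closed at V = 10 km/h; a radius growth of Δr km per time unit adds 6·Δr km/h to the closing speed; and a decrease of 0.01 µSv/h per 10-minute time unit is tested against the 25 µSv/h threshold. The constant-closing-speed approximation is ours:

```python
# Sketch verifying the detection figures quoted in the text (assumed model).

V = 10.0         # cloud speed [km/h]
TAU = 1.0 / 6.0  # one time unit = 10 minutes [h]
GAP = V * 30.0   # 300 km: gap closed in the constant-radius case (30 h detection)

def detection_time(dr_per_tu):
    """First-detection time [h] when the radius grows dr_per_tu km per time unit.
    The growth adds dr/tau km/h to the closing speed (our approximation)."""
    return GAP / (V + dr_per_tu / TAU)

def time_below_threshold(a0, da_per_tu=0.01, threshold=25.0):
    """Hours until the radioactivity a0 [uSv/h] falls to the threshold;
    None if it starts at or below it (third bounding case)."""
    if a0 <= threshold:
        return None
    return (a0 - threshold) / da_per_tu * TAU
```

With these assumptions, `detection_time(0.2)` gives about 26.79 h and `detection_time(0.4)` about 24.19 h, matching the values above, and a 27 µSv/h cloud falls below the threshold after about 33.33 h.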
Besides, false alarms, failures and the uncertainty of measurement can modify the detection, mainly in the last two cases: the sensor can miss the cloud or send a false alarm. However, the increase of the radius, if the cloud is radioactive enough, can improve the likelihood of detection. For instance, if we assume an increase of 0.4 km per time unit, as the cloud increases in radius (average radius 207.4 km at the time of first detection), it becomes almost twice as big compared to the previous case.
Assuming a repair time of 24 h in the case with increased radius, part of the cloud can be detected before the failure appears and the last part after the sensor has been repaired. This situation cannot happen in the case of constant radius, since the diameter is less than 240 km. In the situation with an increasing radius, we can calculate the true radius of the cloud even if the failure occurs.

B. General description
Assuming a linear increase of the cloud radius (α [km] at each time unit), it can be expressed as: r(n) = r(1) + α (n − 1). In this situation, the cloud is likely to reach the sensor faster compared to the constant-radius case. Let us denote by d the distance spanned by the fixed-shape cloud from the entrance point to the point of its first detection. Then the first announced time for the cloud detection is T₁ = n₁ τ, where n₁ is the number of time units that corresponds to the first detection of the cloud; the last announced time for the cloud detection is T₂ = n₂ τ, defined analogously. The radioactivity of the cloud decreases by an amount β [µSv/h] at each time unit. We assume in our simulations that the radioactivity level of the cloud decreases linearly: radioactivity(n) = radioactivity(1) − β (n − 1).
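A minimal sketch of the two linear evolution laws used in this section (the symbols for the growth and decrease rates, and the indexing from n = 1 at the entrance point, are our notation):

```python
# Linear evolution of the cloud radius and radioactivity level
# (our notation; n = 1 corresponds to the entrance point).

def radius(n, r1, alpha):
    """Cloud radius [km] at time unit n: grows by alpha km per time unit."""
    return r1 + alpha * (n - 1)

def radioactivity(n, a1, beta):
    """Radioactivity level [uSv/h] at time unit n: decreases by beta per time unit."""
    return a1 - beta * (n - 1)
```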

C. Numerical application: parameters
Let us now consider different couples of changes in radius and radioactivity of the cloud. We assume the following couples of values and adopt them in all examples:

D. First simulation
The initial value of the radioactivity is radioactivity(1) = 29 µSv/h and the decrease is 0.01 µSv/h per time unit. The initial value of the radius is such that the average radius is 150 km. The radius of the cloud, taking into account the uncertainty due to the detection delay, is: … The first and last announced times of the cloud detection are T₁ = … (h) and T₂ = 57.33 (h). The uncertainty has a quite low influence, so that the sensor actually detects one cloud whose size is 2.5 km bigger compared to the original case: we overestimate the size of the cloud.

E. Second simulation
… T = 4.66 (h). We will simply say that the sensor "sees" two clouds.
Consequently, the sensor can overestimate the value of radioactivity and send a false alarm from the time T = 33.33 h. Therefore, the sensor sends a false alarm from time 33.33 h to 34.66 h. Then, from the indications given by the sensor, one may deduce that there are five clouds, which is not correct.
The size of the circular region including all small clouds is: …

F. Third simulation
Since the radioactivity of the cloud is lower than the threshold during the whole time of possible detection, the sensor is not able to detect the cloud; nevertheless, it can send a false alarm.
In this case, the sensor "sees" three false clouds during the detection time. The size of the circular region including all small false clouds is: … Therefore, due to the likelihood of false alarms, a nonzero-radius detection region can show up.

G. Fourth simulation
The initial value of the radioactivity is … In this case, the sensor underestimates the value of radioactivity only once during the detection and it "sees" two clouds.
The size of the circular region including all small false clouds is: … As a consequence, we overestimate the size of the cloud.

H. Fifth simulation
The sensor undergoes a failure. The repair lasts for 24 h, so we can miss 240 km of the cloud. Since the radius increases at each time unit and the average radius for this case is 414.8 km, we can see one part of the cloud before the failure and the other after the repair of the sensor. We take the values: radioactivity(1) = 100 µSv/h, … The sensor detects the radioactive cloud at 24.333 h and then, from 38.666 h to 63.333 h, it is not operating. After the repair, the sensor detects the last part of the cloud.
The size of the circular region including all small clouds is: …
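The failure window of this fifth simulation can be sketched as a simple timeline check: the 24 h repair means V · 24 = 240 km of cloud passes undetected, and each instant can be flagged as detected or missed (the last detection time used in the example call is an assumed placeholder, since it is not given in the text):

```python
# Timeline sketch of a sensor failure during cloud passage (our naming).

V = 10.0          # cloud speed [km/h]
REPAIR_H = 24.0   # repair duration [h]

# Cloud length passing the sensor during the outage:
missed_km = V * REPAIR_H

def sensor_state(t, first, fail_start, fail_end, last):
    """Return 'down', 'detecting' or 'idle' at time t [h]."""
    if fail_start <= t < fail_end:
        return "down"
    if first <= t <= last:
        return "detecting"
    return "idle"

# Times from the fifth simulation; the last time (120 h) is a placeholder.
state_before = sensor_state(30.0, 24.333, 38.666, 63.333, 120.0)
state_during = sensor_state(50.0, 24.333, 38.666, 63.333, 120.0)
```

This confirms the 240 km figure and makes explicit that the sensor sees the cloud both before the outage and after the repair.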

A. Introduction
In this paragraph, we assume that the cloud enters a region where it can be detected by several sensors. The information about the radioactive cloud is more reliable when we can rely upon several sensors, see [6].
We consider that each sensor provides information within a circular region of radius 20 km around it. We do not actually know the real shape of the cloud, so that, in a first approximation, we attribute to it a circular shape.
Therefore, the area spanned by each sensor and the area where the radioactive cloud is located form two circles, generating different situations:
-The circle of the cloud contains the circle spanned by the sensor;
-The circle of the cloud contains only a part of the circle spanned by the sensor;
-The circle of the cloud does not contain the circle around the sensor.
We define the ratio of "efficient detection" as the area spanned by the network divided by the area of the radioactive cloud. If the ratio is 1, then the network is able to completely detect the cloud. We can find the maximum ratio for a given simulation at the time when the detection is best. With an appropriate position of the sensors in the network and enough sensors, we could detect almost all of the radioactive cloud.
An "efficient" network has to:
-Detect the radioactive cloud as soon as possible (time of first detection);
-Detect the cloud as long as possible (detection duration);
-Span the greatest possible area (ratio of detection).

B. General description
Let's consider several cases modifying the location of the sensors as well as their number, and assuming different trajectories for the cloud.We determine the maximum "efficient detection" ratio, the first time of detection and the duration of the detection.
The area spanned by a sensor is delimited by a circle of radius R. At each time unit, it can be inside, partially inside, or outside the radioactive cloud. When the sensor starts facing the cloud, its area can be fully or partially inside it. Denoting by l the distance between the center of the cloud (radius r) and the sensor, the area of the sensor circle lying inside the cloud follows from the standard circle-intersection formulas: the distances from the centers to the chord of intersection, the corresponding angles, the areas of the circular segments, and finally the area inside the cloud. 4. A fourth case is possible: the circles intersect, but the sensor does not detect the radioactivity level because it is located outside the cloud. At first, let's assume that the cloud has constant shape and radioactivity, and that it may move along different trajectories, starting from the East border of France. In the simulations, the radioactivity of the cloud is assumed to be 35 µSv/h, not accounting for the measurement uncertainty.
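The area of the sensor circle lying inside the cloud, and the resulting "efficient detection" ratio, can be sketched with the standard circle-circle intersection (lens) formula; the function names are ours, since the paper's exact formulas were lost in extraction:

```python
import math

def intersection_area(l, r, R):
    """Area of intersection of two circles of radii r (cloud) and R (sensor)
    whose centers are a distance l apart (standard lens formula)."""
    if l >= r + R:            # case 1: the circles do not intersect
        return 0.0
    if l <= abs(r - R):       # case 2: one circle is entirely inside the other
        return math.pi * min(r, R) ** 2
    # case 3: partial overlap -- two circular segments minus the kite triangle
    a1 = math.acos((l * l + r * r - R * R) / (2 * l * r))
    a2 = math.acos((l * l + R * R - r * r) / (2 * l * R))
    tri = 0.5 * math.sqrt((-l + r + R) * (l + r - R) * (l - r + R) * (l + r + R))
    return r * r * a1 + R * R * a2 - tri

def detection_ratio(areas_inside, cloud_radius):
    """'Efficient detection' ratio: summed sensor areas inside the cloud
    over the cloud area (sensors assumed not to overlap each other)."""
    return sum(areas_inside) / (math.pi * cloud_radius ** 2)
```

For instance, a single 20 km sensor fully inside a 150 km cloud contributes π·20² km², i.e. a ratio of about 0.018, which shows why tens of sensors are needed before the ratio becomes significant.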
We simulate the movement of the radioactive cloud and calculate ratio(n) at each time unit. We can find the maximum among the values of ratio(n) in each simulation, reached when the radioactive cloud was best detected. This maximum ratio increases with the number of sensors for all types of networks:
-For the networks with 20 sensors, the ratio is bigger for the third network, ratio = 0.120. The first time of detection is smaller (0.333 h) for the third network, but the duration of detection is not as long as for the first or second networks.
-For the networks with 40 sensors, the ratio is bigger for the second network, ratio = 0.1622. The time of the first detection is 0.333 h for the third network for all the trajectories of the cloud; the duration of detection on some trajectories is less than for the first and second networks.
-For the networks with 60 sensors, the ratio is bigger for the third network, ratio = 0.2217. The time of the first detection is 0.333 h for the third and fourth networks independently of the trajectory of the cloud, and the duration of detection is bigger for the fourth network.
-For the networks with 100 sensors, it is bigger for the first network, ratio = 0.3028.
-For the networks with 200 sensors, it is bigger for the third network, ratio = 0.5375.
-For all the networks with 100 or 200 sensors, the time of the first detection is 0.333 h and the duration of detection is maximal for all the trajectories of the cloud.
The network with 400 sensors has a ratio of 0.9523.
The mathematical expectations and standard deviations of the maximum ratio for each network are the following:
-The mathematical expectation is bigger for the third type of network among the networks with 20 sensors (M = 0.013), but the standard deviation is σ = 0.016.
-The mathematical expectation is bigger for the third type of network among the networks with 40 sensors (M = 0.158).
-The mathematical expectation is bigger for the first network among the networks with 60 sensors (M = 0.213), but the standard deviation is σ = 0.016.
-The mathematical expectation is bigger for the first network among the networks with 100 sensors (M = 0.296).
-The mathematical expectation is bigger for the third network among the networks with 200 sensors (M = 0.532).
-The network with 400 sensors covers almost all the area and the mathematical expectation is 0.946.
The third type of network shows good results in achieving the maximum ratio and early detection of the cloud. When it has 200 sensors, it shows a maximum duration of radioactive cloud detection and its ratio is 0.5375.

D. Average cost of the network per year
It is worth considering not only the detection ratio and the time characteristics of a network, but also its cost. Let's consider the average cost per year of a network with N sensors. A network with a high number of sensors is good for detection but has a high cost, too.
We assume that each sensor may provide a false alarm or undergo a failure at any time unit, independently of the other sensors. The probability for each sensor to have at least one failure per year, without false alarms, is: Proba(failure per year) = 0.9804. The probability that each sensor provides at least one false alarm per year, not accounting for failures, is: Proba(false alarm per year) = …
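The yearly probabilities can be linked to a per-time-unit malfunction probability p by assuming independent events at each of the 52 560 ten-minute time units of a year; this back-computation of p is ours, not the paper's:

```python
# Linking a per-time-unit malfunction probability to the yearly figure
# (assumed independence of events across time units).

TU_PER_YEAR = 365 * 24 * 6  # ten-minute time units in a year

def yearly_probability(p_per_tu):
    """Probability of at least one event per year: 1 - (1 - p)^N."""
    return 1.0 - (1.0 - p_per_tu) ** TU_PER_YEAR

# Per-TU failure probability consistent with Proba(failure per year) = 0.9804:
p_fail = 1.0 - (1.0 - 0.9804) ** (1.0 / TU_PER_YEAR)
```

Under this assumption, the quoted 0.9804 corresponds to a per-time-unit failure probability of roughly 7.5 × 10⁻⁵.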
The average cost of the network per year consists of:
-The casual maintenance cost; the sensors must be inspected all the time to keep them operating. The casual maintenance cost for one sensor per year is K_CM; then, for N sensors, the cost will be K_CM · N;
-The cost of a false alarm, which mobilizes services. Experts have to move to the place where the sensor is located and inspect it. The cost of one false alarm is K_FA. If we consider that all sensors present malfunctions independently, the cost is likely to increase with the number of false alarms, therefore with the number of sensors. A false alarm occurs with some probability, so for N sensors the cost will be K_FA · N · Proba(false alarm per year).
We take the values: … The cost function increases linearly with the number of sensors. Let us denote the mathematical expectation of the maximum detection ratio for each network as M(ratio); then the efficiency of the network is: efficiency = M(ratio) / Cost(N). Let's now investigate the networks' efficiency, which corresponds to the best combination of cost and ratio of detection. The efficiency of each network is shown in Fig. 6.
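The cost and efficiency computation can be sketched as follows; the unit costs and probabilities used in the example call are placeholders, since the paper's numerical values were lost in extraction:

```python
# Average yearly cost of a network of N sensors and its efficiency figure of
# merit (our naming; K values and probabilities below are placeholders).

def yearly_cost(n_sensors, k_cm, k_fa, p_fa, k_br, p_br):
    """Cost(N) = K_CM*N + K_FA*N*Proba(false alarm/yr) + K_BR*N*Proba(failure/yr)."""
    return n_sensors * (k_cm + k_fa * p_fa + k_br * p_br)

def efficiency(m_ratio, cost):
    """Efficiency = mathematical expectation of the max detection ratio / cost."""
    return m_ratio / cost

# Placeholder unit costs and probabilities (not the paper's values):
c20 = yearly_cost(20, 1000.0, 500.0, 0.5, 2000.0, 0.9804)
c40 = yearly_cost(40, 1000.0, 500.0, 0.5, 2000.0, 0.9804)
```

The linear dependence on N is visible directly: doubling the number of sensors doubles the yearly cost, whatever the unit costs are.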
The greatest efficiency (3.579 × 10⁻⁶) is met with the third network with 20 sensors (that has more sensors at the East border). We can use the first, the second and the third types of networks with 20, 40, or 60 sensors, as they have better efficiency than the networks with 100, 200, or 400 sensors.
The third type of network shows better results in efficiency for almost all numbers of sensors.

E. Results
The following considerations hold:
-The first time of cloud detection and the duration of detection are different for the networks made of 20, 40, or 60 sensors and equal for the networks with 100 or 200 sensors;
-The "efficient detection" ratio increases with the number of sensors. In the case of France, the most interesting network is the one with more sensors at the East border (see footnote 3). In most cases, it has a good ratio and an earlier detection of the cloud;
-If we consider the detection of the cloud only (so that the value of the "efficient detection" ratio does not matter), the networks with 60 sensors have a detection efficiency equivalent to that of the networks with more sensors. The fourth type of network (more sensors on the borders) is able to detect the cloud early (0.333 h), and the duration of the detection is maximal when it has 60 or more sensors;
-If we want to detect and reconstruct the cloud intensity and size, considering the "efficient detection" ratio is mandatory. It is bigger than 0.5 (50%) for almost all types of networks with 200 sensors.
Let's compute the average costs of the networks per year, taking into account the casual maintenance, the false alarms and the failures of the sensors.For each network, we can evaluate its efficiency (i.e. the "efficient detection" ratio divided by the cost of the network).
Our conclusions about the costs of the networks are as follows:
-The average cost of the network per year increases linearly with the number of sensors;
-The third type of network (sensors on the East border) with 20 sensors has the best efficiency;
-The third type of network with 40 sensors has almost the same efficiency as the first type of network with 20 sensors, so we can use more sensors for the detection of the cloud and the efficiency will remain the same;
-The networks with 20, 40, or 60 sensors have a better combination of detection ratio and cost than the networks with 100, 200, or 400 sensors.

V. CONCLUSION
The capacity to promptly and efficiently detect any source of contamination of the environment at a local and a country scale is mandatory for a safe and secure exploitation of civil nuclear energy worldwide. This capacity must rely upon a robust network of measurement devices, which is to be optimized against several main parameters, including its overall reliability, the investment, and the operation and maintenance costs.
The present paper investigates the sensitivity of the efficiency vs. cost of such a network of detectors to several parameters, including their density, layout, etc., also considering the major failure modes which may affect them.
Eventually, a modern version of "Archimedes' method" is proposed for optimization. It relies upon systematic comparisons between the results of simulations stored in suitable databases and the actual measurements on-site. In our approach, the best network would be made of three items:
-A few radioactivity sensors, preferably put on the East border;
-Several mobile units, to be sent where detection is supposed to occur;
-A database of simulations, to be used for comparisons.

1. The estimated radius is bigger by 12.2 km compared to the case in the numerical simulation, where the values of the estimated radius are between r_C1 = 138.97 km and r_C2 = 140.63 km. We overestimate the size of the cloud because of the false alarms that occur due to the uncertainty of measurements.
The average radius is 207.4 km. The radius, taking into account the uncertainty due to the delay in detection, is: …

The estimated radius is bigger by 1.765 km compared to the case in the numerical simulation, where the values of the estimated radius are between r_C1 = 206.57 km and r_C2 = 208.23 km. We overestimate the size of the cloud. In this situation, the sensor presents a failure, but we can calculate the size of the cloud, since we see a part of it after the repair.

1. The circles do not intersect when the distance l between their centers satisfies l ≥ r + R.

Fig. 1. The cloud and the area spanned by the sensor do not intersect.

Fig. 2. The area spanned by the sensor is inside the cloud.

Fig. 3. The area seen by the sensor is partially inside the cloud.

Fig. 4. The sensor is not able to detect the cloud.
Since the sensor does not detect the cloud in this case, the part of its area in the cloud is not considered: S_in = 0. If several sensors are located in the region facing the cloud, their influence areas may cover only part of it. It is possible to evaluate the ratio of the cloud area the sensors are able to detect over its actual size. If N is the number of sensors, this ratio is: ratio(n) = (Σ_{i=1}^{N} S_in,i) / S_cloud.

Fig. 5. The 4 types of networks used for the simulations.

-The cost of the failure, as the device must be repaired or replaced. The cost of one failure is K_BR. With the number of failures, therefore with the number of sensors, this cost increases; but a failure occurs with some probability, so for N sensors the cost will be K_BR · N · Proba(failure per year). Therefore, the average cost of the network per year is: Cost(N) = K_CM · N + K_FA · N · Proba(false alarm per year) + K_BR · N · Proba(failure per year).

Fig. 6. Efficiency of the 4 types of networks, depending on the number of sensors.
EPJ Web of Conferences 170, 08002 (2018) https://doi.org/10.1051/epjconf/201817008002 ANIMMA 2017
-The third type of network with 20, 40, or 200 sensors has the best efficiency compared to other types of networks with the same number of sensors; therefore, it is better to use the network with more sensors at the East border.