The previous parts of this section have presented the basics of gross counting, detector selection, pulse processing and nuclide identification. Once the system is selected, installed and operating, it will collect and display the natural gamma-ray and neutron background. So how do we know if a radioactive source is present in the vehicles, pedestrians, freight or other objects being measured? How can the portable and mobile search tools determine whether the radiation levels are increasing significantly, or whether the variations are just the natural fluctuations seen in radiation measurements? When does a radiation alarm occur and what does it mean? The following discussions attempt to provide simple answers to these questions.
Before we try to understand the fluctuations in radiation measurements and their associated statistics, it is important to define some terms. The following table presents definitions for a false alarm, a nuisance or innocent alarm and a true alarm.
Therefore, an innocent or true alarm is a valid alarm produced by an instrument that is working properly. We might want to minimize the innocent alarms if possible, but they are valid responses of the instrument to the presence of radiation. The instrument should be set up so that the false alarm rate is minimized as much as possible. Some false alarms will occur because of the statistical fluctuations in the background count rate.
Where does the fluctuation in the background count rate come from? More to the point, what is the source of the variation in any radiation measurement? Radioactive decay is a random process. While we can predict with great accuracy the number of atoms that might decay and emit radiation in a given time frame, for any single atom there is no way to predict whether it will decay in that time. Consequently, in any series of measurements, such as measuring the count rate each second, the frequency of occurrence of any particular value follows a probability distribution.
If we take repetitive measurements of the background, or of the background plus some source, we find a distribution of count rates that follows a normal (Gaussian or bell-shaped) distribution. The average value is the mean, and we can calculate how many measurements fall within the mean plus or minus some count rate. For a normal distribution, plus or minus one sigma includes 68.3% of the population; one sigma is also called the standard deviation of the data distribution.
If repeated measurements of a single variable are taken, a bell-shaped, normal distribution function or Gaussian curve will be generated. The true value is assumed to be the centroid of the normal distribution function, that is, its mean value.
2σ (95% CL) = 1.96 × 1σ,
3σ (99% CL) = 2.58 × 1σ.
If the same measurement with a ±10% uncertainty is stated at 2σ, then there is confidence that 95 times out of 100 the measurement is within 10% of the true value. Since these levels of confidence are statistically predictable properties of a normal distribution function, one level of confidence can be converted to another by a multiplier.
To convert 1 σ to 1 σ, multiply by 1.0.
To convert 1 σ to 2 σ, multiply by 1.96.
To convert 1 σ to 3 σ, multiply by 2.58.
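These multipliers fall directly out of the standard normal quantile function. A minimal check using only the Python standard library (the 1.96 and 2.58 factors come from leaving 2.5% and 0.5% in each tail, respectively):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution (mean 0, sigma 1)
# A two-sided 95% CL leaves 2.5% in each tail; a 99% CL leaves 0.5% per tail.
mult_95 = z.inv_cdf(0.975)  # ~1.96
mult_99 = z.inv_cdf(0.995)  # ~2.58
print(f"2 sigma multiplier: {mult_95:.2f}, 3 sigma multiplier: {mult_99:.2f}")
```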
For most radiation measurements, the statistical distribution is a Poisson distribution. In this distribution, the uncertainty of a measurement is equal to the square root of the measured value. As an example, if we measure 100 counts per second (cps), the uncertainty is equal to the square root of 100, or 10 cps. We can use the Normal distribution as an approximation to the Poisson distribution at higher count rates. The Normal or Gaussian distribution is familiar to us as the classic bell curve shown in the figure below.
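The square-root rule can be illustrated by simulation. This sketch (standard library only; the 100 cps rate is the example value from the text, and Poisson counts are drawn by summing exponential inter-arrival times) shows that the spread of repeated 1 second counts comes out near √100 = 10:

```python
import random
import statistics

random.seed(42)

def poisson_count(rate_cps, seconds=1.0):
    """Count events in a fixed time window of a Poisson process,
    generated from exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate_cps)
        if t > seconds:
            return n
        n += 1

# 20,000 repeated 1 second background measurements at 100 cps
samples = [poisson_count(100.0) for _ in range(20_000)]
mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
print(f"mean = {mean:.1f} cps, standard deviation = {sd:.1f} cps")
# the standard deviation comes out near sqrt(100) = 10, as the Poisson rule predicts
```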
Note that the distribution has a maximum at the mean of the distribution and a standard deviation that characterizes the variation around the mean. In a Normal distribution, the deviation from the mean can be positive or negative. The red area in the figure bounds plus or minus 1 sigma, where sigma (σ) is the standard deviation. Using our example above, for a 100 cps background and a 10 cps standard deviation, the Normal distribution predicts that 68.3% of the measurements will fall between 100 − 10 = 90 and 100 + 10 = 110 cps. Stated another way, about 7 out of 10 measurements should fall between 90 and 110. Similarly, 2 sigma is 2 × 10 = 20, so plus or minus 2 standard deviations represents 95.5% of the population of measured data, shown in the red plus blue areas of the graph. About 19 out of 20 measured data points will have values between 80 cps and 120 cps. At plus or minus 3 standard deviations, 99.7% of the population (997 out of a thousand) will fall between 70 and 130 cps. We could calculate 4, 5, 6 sigma and higher if desired; only about 2 measurements in a billion fall outside plus or minus 6 standard deviations.
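The coverage fractions quoted above can be reproduced with the Normal approximation; a short sketch using the 100 cps, sigma = 10 example:

```python
from statistics import NormalDist

# The 100 cps example: mean 100, sigma = sqrt(100) = 10,
# using the Normal approximation to the Poisson distribution.
bkg = NormalDist(mu=100, sigma=10)
coverage = {k: bkg.cdf(100 + 10 * k) - bkg.cdf(100 - 10 * k) for k in (1, 2, 3)}
for k, frac in coverage.items():
    print(f"+/- {k} sigma: {100 * frac:.1f}% of measurements")
```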
All measurements have associated uncertainties. For radioactivity measurements, the uncertainty arises from variations in detection equipment and analysis procedures, human error, natural background radiation, counting uncertainty, variances in the distribution of the compound targeted for analysis in the media being analyzed and other sources.
Counting uncertainty is calculated from the background and sample data and is used to determine if a sample (package, person, vehicle or container) contains enough radioactivity over the measured background to exceed an alarm threshold. This uncertainty exists because radioactive atoms disintegrate in a random way, and only a fraction of the particles/energy released strikes the detector. This means that if the number of radioactive disintegrations from one sample is counted multiple times, each for the same duration, that number will vary around some average value. Background radiation makes this true even for a sample that has no radioactivity. If a sample containing no radioactivity were analyzed multiple times, the result should vary around an average of zero. Therefore, samples with radioactivity levels very close to zero will have results that are negative values approximately 50% of the time. In order to avoid censoring data, these negative values, rather than “not detectable” or “zero,” are reported for radionuclides of interest. This provides more information than merely truncating to the detection limits for results near background activities, and allows for improved statistical analyses and measures of trends in the data.
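The roughly-50%-negative behavior of a blank sample is easy to demonstrate by simulation. A sketch under assumed conditions (a 50-count background chosen for illustration; a blank sample's gross count is background only, so its net is one background draw minus another):

```python
import random

random.seed(7)

def poisson_count(rate, seconds=1.0):
    """Poisson-distributed count drawn via exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > seconds:
            return n
        n += 1

BKG = 50.0  # assumed background counts per interval (illustrative value)

# Net result for a blank sample: gross (background only) minus background
nets = [poisson_count(BKG) - poisson_count(BKG) for _ in range(10_000)]
neg_frac = sum(1 for n in nets if n < 0) / len(nets)
print(f"fraction of negative net results: {neg_frac:.2f}")  # close to 0.5
```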
The table to the right shows the sigma over background that corresponds to one false alarm per day, week and month for 1 second measurements. Data taken for shorter or longer times will differ from these values. Note that if you have 10 channels of data, each channel has its own probability of a false alarm in these time frames, so on average one would expect a false alarm on some channel in 1/10 of the time stated here.
Now that we have examined the distribution around a set of background measurements, which defined a Normal probability distribution, consider the case of counting a sample. The sample measurement includes the radiation from the sample plus the background. As a result, its mean will be higher than the mean of the background distribution. In the figure, the background distribution is shown in red and the source distribution is shown in blue. These distributions come from data where many seconds of measurements were made for each. As the source radioactivity gets smaller and approaches the background count rate, the two distributions may overlap.
In order to determine an appropriate value at which to set an alarm, we want to evaluate the probability of two types of possible errors.
The two main types of errors that may be made when reporting levels of radioactivity are:
Reporting something as not present when it actually is (a false negative), and;
Reporting something as present when it actually is not (a false positive).
Both types of errors are undesirable, but they have very different consequences. A false negative means you may miss detecting radioactive material, which is the very reason for having the system. A false positive means you mistakenly had an alarm when no significant amount of radioactivity was present. A false alarm might result in a secondary inspection, taking time and manpower. Most false alarms will not recur, so the simplest way to ensure an alarm is not false is to repeat the measurement.
There are other concerns about setting an alarm level. If it is set too low in order to make the system more sensitive to radiation, the false alarm rate may increase to the point where so many false alarms occur that either the throughput of the system is affected or the user loses confidence in the system. If the alarm set point is too high, we may lose sensitivity and not be able to meet the regulatory or other sensitivity requirements. One way to avoid this problem is to ensure the system has a low enough background and a large enough response function (total efficiency) to keep the two distributions as separated as possible. This is accomplished by having the right size and type of detector in the system.
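This trade-off can be sketched numerically. The numbers here (100 cps background, a hypothetical source adding 30 cps, 1 second counts, Normal approximation to Poisson) are illustrative assumptions, not values from the text:

```python
from math import sqrt
from statistics import NormalDist

# Illustrative assumptions: 100 cps background, weak source adding 30 cps.
bkg_mean, src_extra = 100.0, 30.0
bkg = NormalDist(bkg_mean, sqrt(bkg_mean))               # background only
src = NormalDist(bkg_mean + src_extra,
                 sqrt(bkg_mean + src_extra))             # source + background

rows = []
for n_sigma in (2, 3, 4, 5):
    threshold = bkg_mean + n_sigma * sqrt(bkg_mean)
    false_alarm = 1.0 - bkg.cdf(threshold)  # background alone trips the alarm
    missed = src.cdf(threshold)             # source present but no alarm
    rows.append((n_sigma, false_alarm, missed))
    print(f"{n_sigma} sigma: false alarm {false_alarm:.2e}, missed {missed:.2e}")
```

Raising the set point drives the false alarm rate down while the missed-detection rate climbs, which is exactly the tension described above.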
Confidence in Detection Standards

It is the goal of the various national and international standards, such as ANSI N42.35, to minimize the error of saying something is not present when it actually is. To do this, a two standard deviation (2s) reporting level is used. In a distribution of analysis results for one sample, the average analysis result, plus or minus (±) two standard deviations (2s) of that average, approximates the 95% confidence interval for that average. When a sample analysis result is greater than 2s from zero, we have about 95% confidence that the value came from a distribution with an average greater than zero. The uncertainty of measurements in this report is denoted by following the result with a “±” 2s uncertainty term, and all results that are greater than 2s from zero are reported in the text.
By using a 2s value as a reporting level (i.e. reporting results that are greater than two times their uncertainty), we are controlling the error rate for saying something is not there when it is, to less than 5% (we have 95% confidence the value is greater than zero).
However, there is a relatively high error rate for false detections (reporting something as present when it actually is not) for results near their 2s uncertainty. This is because the variability around zero for samples with no radioactivity may substantially overlap the variability around the sample result. The variability associated with current analysis techniques was used to calculate the level at which we are 95% certain the sample result is greater than the distribution of values for a sample with no radioactivity. This level is known as the detection limit. When sample net results are greater than the detection limit, we have 95% confidence the results are not false detections.
If a sufficient number of replicate analyses are run, it is to be expected that the results will fall in a normal Gaussian distribution. Standard statistics allow an estimate of the probability of any particular deviation from the mean value, and it is common practice to report the mean ± either one or two standard deviations as the final result. In routine analysis such replication is not carried out, and it is not possible to report a Gaussian standard deviation. With counting procedures, however, it is possible to estimate a Poisson standard deviation directly from the count, and data are commonly reported as the measured value ± one or more Poisson standard deviations, equal to the square root of the counts. This type of reported value is considered to give some indication of the range in which the true value might be expected to occur.
The simplest possible case to consider is one in which the background is negligible and the sample activity is zero. It is sometimes not realized that if a series of counts is taken on such a system, half the net values should be less than zero. Negative counts are not possible, of course, but when there is an appreciable background, the entire scale is shifted up and the situation becomes one where half the sample counts on a zero-activity sample are less than background. This negative net could occur frequently in low-level measurements, causing considerable concern. Actually, such results are to be expected.
It must also be considered that the background is not a fixed value; a series of replicate background measurements will be normally distributed. The desired net activity is thus the difference between the gross and background activity distributions. Interpreting this difference becomes a problem if the two distributions overlap.
Pasternack and Harley (1970) developed a procedure for calculating what they defined as the lower limit of detection (LLD): the smallest amount of sample activity that will yield a net count for which there is confidence, at a predetermined level, that activity is present. In its original form, the calculation was practical only for gamma counting because it required that the number of counts be sufficient for the Poisson distribution to approach the Gaussian distribution, so that Gaussian statistics could be used.
Actually, the approximation is good down to a few total counts, and the calculations can be applied to any system.
The LLD may be approximated as
LLD ≈ (kα + kβ) s0
where
kα is the value of the upper percentile of the standardized normal variate corresponding to the preselected risk α of falsely concluding that activity is present,
kβ is the corresponding value for the predetermined degree of confidence 1 − β for detecting the presence of activity, and
s0 is the estimated standard error of the net sample activity.
A still shorter approximation may be made if the values of α and β are set at the same level and if the gross activity and background are very close. In the latter case,
snet = √(sgross² + sbkg²) = sb √2
The equation then becomes
LLD = 2√2 k sb
The values of k for common α's are:
α      1 − β    k        2√2 k
0.01   0.99     2.327    6.58
0.02   0.98     2.054    5.81
0.05   0.95     1.645    4.66
0.10   0.90     1.282    3.63
0.20   0.80     0.842    2.38
0.50   0.50     0.000    0.00
Thus, if we have a counter with a background of 80 cps and count sample and background for 1 second,
sb = √80 = 8.94 .

For the 0.05 level of α, that is, a 5% chance that a background fluctuation is falsely counted as a signal, and a 95% (1 − β) chance that a real signal at the LLD is detected, then

LLD = 4.66 × 8.94 ≈ 41.7 cps

Therefore, a sample would need to have a gross count rate of about 80 + 42 = 122 cps.
Note that if a 50% chance of finding activity is accepted, the LLD is zero. This is to be expected from the previous qualitative description. The concept of LLD is really designed for gross counting applications: LLD is a good measure of the minimum activity that can ever be detected on a given system for a given nuclide. In gross counting, the LLD measures the smallest amount of a radionuclide added to a background for which there is a 5% probability that an observed signal is not real, and a 95% probability that a real signal is detected. In other words, α is equal to β.
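The worked example above can be checked directly; a sketch using the text's numbers (80 cps background, 1 second counts, α = β = 0.05):

```python
from math import sqrt
from statistics import NormalDist

# Worked LLD example from the text: 80 cps background, alpha = beta = 0.05,
# so LLD = 2*sqrt(2) * k * sb.
k = NormalDist().inv_cdf(0.95)  # ~1.645 for alpha = 0.05
sb = sqrt(80)                   # ~8.94 cps
lld = 2 * sqrt(2) * k * sb      # ~41.6 cps net (4.66 x 8.94 ~ 41.7 differs only by rounding)
gross = 80 + lld                # ~122 cps gross count rate
print(f"k = {k:.3f}, LLD = {lld:.1f} cps, gross = {gross:.0f} cps")
```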
From the above discussions, one can set alarm levels to meet both a desired false positive alarm rate and a desired false negative rate (missed alarms). For false positives equal to false negatives, the table below provides the Normal distribution confidence limit multiplier needed to attain a given rate. To ensure no more than 1 false alarm per 10,000 measurements (about 8 per day for 1 second measurements), the alarm set point should be no less than 3.9 sigma over background.
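The sigma levels in the table that follows can be reproduced from the standard normal quantile function, treating 1:n as a two-sided tail probability; a minimal sketch:

```python
from statistics import NormalDist

# Reproduce a few rows of the sigma table: the sigma level exceeded about
# once in n measurements, with 1:n split between the two tails.
z = NormalDist()
sigmas = {n: z.inv_cdf(1 - 0.5 / n) for n in (20, 100, 10_000, 1_000_000)}
for n, s in sigmas.items():
    print(f"1:{n} -> {s:.2f} sigma")
```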
Sigma Calculations

n          1:n        sigma
2          5.00E-01   0.67
3          3.33E-01   0.97
5          2.00E-01   1.28
10         1.00E-01   1.64
20         5.00E-02   1.96
25         4.00E-02   2.05
50         2.00E-02   2.33
100        1.00E-02   2.58
200        5.00E-03   2.81
500        2.00E-03   3.09
1000       1.00E-03   3.29
2000       5.00E-04   3.48
5000       2.00E-04   3.72
10000      1.00E-04   3.89
20000      5.00E-05   4.06
50000      2.00E-05   4.27
100000     1.00E-05   4.42
500000     2.00E-06   4.75
1000000    1.00E-06   4.89
1500000    6.67E-07   4.97
1735123    5.76E-07   5.00
5009334    2.00E-07   5.20
5.04E+08   1.99E-09   6.00
