5% or 1% level of significance for the number of measurements (n). If T is larger than that value, then x_H or x_L is an outlier.
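For illustration only, the following sketch applies this screening in Python. The function names, the data, and the placeholder critical value are assumptions; the actual critical value must be taken from a published table (e.g., reference 7) for the chosen significance level and number of measurements.

```python
import statistics

def t_statistic(data, suspect):
    """Grubbs-type statistic: T = |suspect - mean| / s."""
    mean = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation (n - 1 denominator)
    return abs(suspect - mean) / s

def is_outlier(data, suspect, critical_value):
    """True if T exceeds the critical value tabulated for the chosen
    significance level (5% or 1%) and number of measurements n."""
    return t_statistic(data, suspect) > critical_value

# Example: screen the highest of six results against a table value.
results = [4.2, 4.3, 4.1, 4.4, 4.2, 5.9]
critical_5pct = 1.89  # placeholder only; take the real value from a published table
print(is_outlier(results, max(results), critical_5pct))
```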
Further information on statistical techniques is available elsewhere.5–7
5. References
1. SPIEGEL, M.R. & L.J. STEPHENS. 1998. Schaum’s Outline—Theory and Problems of Statistics. McGraw-Hill, New York, N.Y.
2. LAFARA, R.L. 1973. Computer Methods for Science and Engineering.
Hayden Book Co., Rochelle Park, N.J.
3. TEXAS INSTRUMENTS, INC. 1975. Texas Instruments Programmable Calculator Program Manual ST1. Statistics Library, Dallas, Texas.
4. BARNETT, V. & T. LEWIS. 1995. Outliers in Statistical Data, 3rd ed. John Wiley & Sons, New York, N.Y.
5. NATRELLA, M.G. 1963. Experimental Statistics, Handbook 91. National Bur. Standards, Washington, D.C.
6. SNEDECOR, G.W. & W.G. COCHRAN. 1980. Statistical Methods. Iowa
State University Press, Ames.
7. VERMA, S.P. & A. QUIROZ-RUIZ. 2006. Critical values for 22 discordancy test variants for outliers in normal samples up to sizes 100, and applications in science and engineering. Revista Mexicana de Ciencias Geologicas 23(3):302.
1010 C. Glossary
This glossary defines concepts, not regulatory terms. It is not
intended to be all-inclusive.
Accuracy—estimate of how close a measured value is to the true value; includes expressions for bias and precision.
Analyte—the element, compound, or component being analyzed.
Bias—consistent deviation of measured values from the true value, caused by systematic errors in a procedure.
Calibration check standard—standard used to determine an
instrument’s accuracy between recalibrations.
Confidence coefficient—the probability (%) that a measurement
will lie within the confidence interval (between the confidence
limits).
Confidence interval—set of possible values within which the
true value will lie with a specified level of probability.
Confidence limit—one of the boundary values defining the
confidence interval.
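As a hedged illustration of the three preceding definitions, the sketch below computes a two-sided confidence interval for the mean of replicate results. It assumes approximately normal data; the data values and the Student t multiplier (2.571 for a 95% confidence coefficient with six results) are arbitrary choices, not part of the glossary.

```python
import math
import statistics

def confidence_interval(data, t_value):
    """Two-sided confidence interval for the mean of replicate results.

    Assumes approximately normal data; t_value is the Student t multiplier
    for the chosen confidence coefficient and n - 1 degrees of freedom.
    """
    n = len(data)
    mean = statistics.mean(data)
    s = statistics.stdev(data)                   # sample standard deviation
    half_width = t_value * s / math.sqrt(n)
    return mean - half_width, mean + half_width  # lower and upper confidence limits

results = [4.2, 4.3, 4.1, 4.4, 4.2, 4.3]
print(confidence_interval(results, t_value=2.571))  # 95% confidence, n = 6 (5 df)
```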
Detection levels—various levels in use (an illustrative calculation sketch follows these definitions) are:
Instrument detection level (IDL)—the constituent concentration that produces a signal greater than five times the instrument’s signal:noise ratio. The IDL is similar to the critical level and criterion of detection, which is 1.645 times the s of blank analyses (where s is the estimate of standard deviation).
Lower level of detection (LLD) [also called detection level and
level of detection (LOD)]—the constituent concentration in
reagent water that produces a signal 2(1.645)s above the mean
of blank analyses. This establishes both Type I and Type II
errors at 5%.
Method detection level (MDL)—the constituent concentration that, when processed through the entire method, produces a signal that has a 99% probability of being different from the blank. For seven replicates of the sample, the mean must be 3.14s above the blank result (where s is the standard deviation of the seven replicates). Compute MDL from replicate measurements of samples spiked with analyte at concentrations of one to five times the estimated MDL. The MDL will be larger than the LLD because typically seven or fewer replicates are used. Additionally, the MDL will vary with matrix.
Reporting level (RL)—the lowest quantified level within an
analytical method’s operational range deemed reliable
enough, and therefore appropriate, for reporting by the
laboratory. RLs may be established by regulatory mandate
or client specifications, or arbitrarily chosen based on a
preferred level of acceptable reliability. Examples of
RLs typically used (besides the MDL) include:
Level of quantitation (LOQ)/minimum quantifiable level
(MQL)—the analyte concentration that produces a signal
sufficiently stronger than the blank, such that it can be
detected with a specified level of reliability during
routine operations. Typically, it is the concentration
that produces a signal 10s above the reagent water
blank signal, and should have a defined precision and
bias at that level.
Minimum reporting level (MRL)—the minimum concen-
tration that can be reported as a quantified value for a
target analyte in a sample. This defined concentration
is no lower than the concentration of the lowest cali-
bration standard for that analyte and can only be used
if acceptable QC criteria for this standard are met.
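To tie the detection-level definitions together, here is a minimal sketch that assumes the multipliers given above: 1.645 for the critical level, 2(1.645) for the LLD, 3.14 for an MDL based on seven replicates, and 10 for the LOQ/MQL. The function name and the blank and spiked data are invented for illustration.

```python
import statistics

def detection_levels(blank_replicates, spiked_replicates):
    """Illustrative detection-level estimates from replicate analyses.

    blank_replicates: repeated analyses of reagent-water blanks.
    spiked_replicates: seven replicates of a sample spiked near the
    estimated MDL. Multipliers follow the definitions above.
    """
    s_blank = statistics.stdev(blank_replicates)
    s_spike = statistics.stdev(spiked_replicates)
    return {
        "critical level (IDL-like)": 1.645 * s_blank,  # ~5% Type I error
        "LLD": 2 * 1.645 * s_blank,                    # Type I and II errors at 5%
        "MDL": 3.14 * s_spike,                         # t for 7 replicates, 99% level
        "LOQ/MQL": 10 * s_blank,                       # 10s above the blank signal
    }

blanks = [0.02, 0.03, 0.01, 0.02, 0.03, 0.02, 0.01]
spikes = [0.11, 0.09, 0.12, 0.10, 0.08, 0.11, 0.10]
for name, value in detection_levels(blanks, spikes).items():
    print(f"{name}: {value:.3f}")
```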
Duplicate—1) the smallest number of replicates (two), or 2) duplicate samples, i.e., two samples taken at the same time from one location (field duplicate), or a replicate of a laboratory-analyzed sample.
Fortification—adding a known quantity of analyte to a sample or blank to increase the analyte concentration, usually to compare the result with that of the unfortified sample and to estimate percent recovery or matrix effects on the test, to assess accuracy.
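As a brief illustration of this definition, the sketch below computes percent recovery as 100(fortified result - unfortified result)/amount added, which is the usual convention; the function name and the example concentrations are assumptions.

```python
def percent_recovery(fortified_result, unfortified_result, amount_added):
    """Percent recovery of a known fortification (spike); all three values
    must be in the same concentration units."""
    return 100.0 * (fortified_result - unfortified_result) / amount_added

# Example: sample at 2.0 mg/L, fortified with 5.0 mg/L, measured at 6.8 mg/L.
print(percent_recovery(6.8, 2.0, 5.0))  # 96.0 (% recovery)
```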
Internal standard—a pure compound added to a sample extract
just before instrumental analysis to permit correction for inefficiencies.
Laboratory control standard—a standard usually certified by an
outside agency that is used to measure the bias in a procedure.
For certain constituents and matrices, use National Institute of
Standards and Technology (NIST) or other national or inter-
national traceable sources (Standard Reference Materials),
when available.
Mean—the arithmetic average (the sum of measurements divided
by the number of items being summed) of a data set.
Median—the middle value (odd count) or mean of the two middle
values (even count) of a data set.
Mode—the most frequent value in a data set.
Percentile—a value between 1 and 100 that indicates what percent-
age of the data set is below the expressed value.
Precision (usually expressed as standard deviation)—a measure of
the degree of agreement among replicate analyses of a sample.
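To make the statistical terms above concrete, the following sketch computes the mean, median, mode, and standard deviation (the usual expression of precision) for a set of replicate results using Python's standard library; the data are invented.

```python
import statistics

replicates = [4.2, 4.3, 4.1, 4.4, 4.2, 4.2, 4.5]  # arbitrary replicate results

print("mean:  ", statistics.mean(replicates))    # arithmetic average
print("median:", statistics.median(replicates))  # middle value of the sorted data
print("mode:  ", statistics.mode(replicates))    # most frequent value
print("stdev: ", statistics.stdev(replicates))   # sample standard deviation (precision)
```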