Standard Guide for The Use of Various Turbidimeter Technologies for Measurement of Turbidity in Water
Standard published on 1.5.2011
Standard designation: ASTM D7726-11
Note: WITHDRAWN
Publication date: 1.5.2011
SKU: NS-39025
Number of pages: 17
Approximate weight: 51 g (0.11 lbs)
Country: American technical standard
Category: Technical standards ASTM
Keywords:
back-scatter, benchtop turbidity, light-scattering, continuous monitoring turbidity, in-situ turbidity, nephelometer, on-line turbidity, portable turbidity, ratio turbidity, sediments, turbidimeter, turbidity, turbidity application, turbidity interferences, turbidity meter, turbidity technology, turbidity units, water monitoring
ICS Number Code: 17.180.30 (Optical measuring instruments)
Significance and Use
Turbidity is a measure of scattered light that results from the interaction between a beam of light and particulate material in a liquid sample. Particulate material is typically undesirable in water from a health perspective, and its removal is often required when the water is intended for consumption. Thus, turbidity has been used as a key indicator of water quality to assess the health and quality of environmental water sources. Higher turbidity values are typically associated with poorer water quality.

5.1.1 Turbidity is also used in environmental monitoring to assess the health and stability of water-based ecosystems such as lakes, rivers, and streams. In general, the lower the turbidity, the healthier the ecosystem.

Turbidity measurement is a qualitative parameter for water, but its traceability to a primary light-scatter standard allows the measurement to be applied as a quantitative measurement. When used as a quantitative measurement, turbidity is typically reported generically in turbidity units (TUs). Turbidity measurements are based on the instruments' calibration with primary standard reference materials. These reference standards are traceable to formazin concentrate (normally at a value of 4000 TU). The reference concentrate is linearly diluted to provide calibration standard values (an illustrative dilution calculation is sketched below). Alternative standard reference materials, such as SDVB co-polymer or stabilized formazin, are manufactured to match the formazin polymer dilutions and provide highly consistent and stable values against which to calibrate turbidity sensors. When used for regulatory compliance reporting, specific turbidity calibration standards may be required. The user of this method should check with regulatory entities regarding specifics of allowable calibration standard materials.

The traceability of calibrations from different technologies (and other calibration standards) to primary formazin standards provides a basis for defined turbidity units. This provides equivalence in the magnitude of the turbidity unit between the different measurement technologies when they are all calibrated on standards that are traced to primary formazin. This means that a TU is equivalent in its magnitude to a nephelometric turbidity unit (NTU) and to all other units described in this guide. See Table 1.

Turbidity is not an inherent property of the sample, such as temperature, but is in part dependent on the technology used to derive the value. Even though the magnitudes of turbidity units are equivalent and are based on turbidity standards, the units do not maintain this equivalence when samples are measured. Turbidity standards are generally free of interferences; samples are not. Depending on the type of technology employed for measurement, the magnitude of the different interferences on a given sample can differ significantly with respect to the different measurement technologies. The user of a turbidity technology should expect to observe a lack of measurement equivalence across different turbidity measurement designs when common samples are analyzed. See Section 6 on interferences.

Depending on the application, some instruments are calibrated on a sample that has been characterized (or defined) by some independent means. The calibration may include one or more samples that have been characterized with respect to the application of its use. See Test Method .

Turbidity is not a quantitative measure of any chemical or physical property of water.
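Note (illustrative only, not part of the standard): the linear dilution of the 4000 TU formazin concentrate described above follows the simple relation C_stock x V_stock = C_target x V_final. The short Python sketch below shows that arithmetic; the function name, flask volume, and target values are assumptions chosen purely for illustration.

# Illustrative sketch (not part of ASTM D7726): preparing calibration
# standards by linear dilution of the 4000 TU formazin reference concentrate.

STOCK_TU = 4000.0  # nominal turbidity value of the formazin concentrate

def stock_volume_ml(target_tu, final_volume_ml):
    """Return the volume (mL) of 4000 TU concentrate needed to prepare a
    calibration standard of target_tu in a total volume of final_volume_ml,
    with turbidity-free dilution water making up the balance."""
    if not 0 < target_tu <= STOCK_TU:
        raise ValueError("target value must lie between 0 and the stock value")
    return target_tu * final_volume_ml / STOCK_TU

if __name__ == "__main__":
    # Hypothetical calibration series prepared in 100 mL volumetric flasks.
    for target in (1000.0, 100.0, 20.0, 1.0):
        v = stock_volume_ml(target, 100.0)
        print(f"{target:7.1f} TU standard: {v:7.3f} mL concentrate, dilute to 100 mL")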
Different expected interactions between a given measurement technology and a given sample with a unique combination of interferences can significantly impact the final turbidity result. As stated in 5.3, depending on the technology used, the result will differ. It is imperative to provide a linkage of metadata that reflects the design type (that is, the technology) used to generate the turbidity values. In all ASTM standards, the measurement units reflect the design criteria, and the information is presented in Table 1. The actual reporting units, signified by a two- to four-letter code, are based upon distinguishing design criteria for each of the common measurement technologies (an illustrative decoding sketch follows the Range of Measurement discussion below). The intent of attaching the measurement unit to the determined turbidity value is to indicate the type of technology used.

Even though various instrument designs may be grouped by technology type (for example, FNU, NTU, FBU; refer to Table 1), instruments within a group should not be considered identical, nor is it proposed that sample values obtained will be alike. Instruments within each technology may still have other design differences whereby samples give different results. For example, pathlength differences between two instruments with the same reporting units can impact measurements and the relative difference in results.

Discussion of Table 1: Table 1 provides a summary of the technologies and their respective reporting units that appear in the different ASTM methods. The reporting unit is a two- to four-letter code that has been assigned to a unique type of technology. The reporting unit follows every reported turbidity measurement and serves as metadata for the respective measurement. The key design features are based on three criteria: (1) type of light source used; (2) primary detector angle with respect to the incident light beam; (3) number of detectors used.

If the measurement unit begins with an "F", the light source is a near-IR wavelength. Most designs will encompass a light source in the 860 ± 60 nm range. The strength of this wavelength is that most natural colors do not absorb at this level, which reduces or eliminates color interference. Two materials that do interfere in the near-IR are carbon black and copper sulfate. Second, the incident light beams are easily collimated, which extends the overall operational range. Third, the output of the light source can be regulated to provide a stable output over time. The weakness is that longer wavelengths are less sensitive to smaller particles with respect to response at very low turbidities.

If the measurement unit either begins with an "N" or is a two-letter unit (for example, BU, AU), the incident light source will be in the 400-680 nm range. The strength of this wavelength range is increased sensitivity to smaller particles compared with longer wavelengths (such as those in the near-infrared (IR) range). The weakness of this wavelength range is that color that absorbs at the same wavelengths as those emitted by the light source will cause a negative interference. Second, if the source is an incandescent light source, additional optics are required to maintain collimation and stability over time, and the light source will typically need periodic replacement over the life of an instrument.

If the measurement unit includes an "R", it is a nephelometric method that utilizes a 90-degree detector plus one other detector.
This is referred to as a ratiometric technique and helps to compensate for color interference, regardless of the wavelength of the incident light source. The technique also helps to linearize the response to turbidity at higher levels and can provide an extended measurement range. The technique can also help to stabilize measurement outputs. The technique is the most flexible across different applications because of the combination of sensitivity at low turbidity ranges and the ability to measure very high turbidity levels.

If the measurement unit includes a "B", it indicates a backscatter technique. These techniques typically have a wide range but are not sensitive at low turbidities. They are also more susceptible to color and particulate absorbance interferences.

If the measurement unit includes an "A", it indicates an attenuation or absorbance measurement. The measurement is a combination of light that is attenuated and light that is absorbed. Color is a significant interference, except for applications that require color to be considered part of the overall turbidity measurement. The method is very sensitive to wavelength; thus, the reporting unit should also include the wavelength of the incident light beam.

If the measurement unit contains an "M", it indicates a technology in which at least two incident light beams and two detectors are employed. The method also encompasses a ratio technique. These designs are very similar to ratio techniques, as are their advantages and limitations.

Other units: (a) mNTU – the unit indicates a monochromatic incident light source in the visible wavelength range and a nephelometric technique. The design allows for an improved limit of detection over conventional light sources. Its primary use is for low-turbidity measurements, such as monitoring for membrane breaches and ultra-purification processes. (b) SSU – the "SS" portion of the unit indicates that a surface-scatter technique is being used. The technique positions both the light source and the detector in the same horizontal plane above the sample. Light that is scattered by particles at or very near the surface is detected at an angle of 90 degrees to the centerline of the incident light beam. The system has a high detection range but low sensitivity. It is also susceptible to color interferences, but to a lesser degree than techniques that pass light completely through the sample. The technique is valuable for applications where it is desirable for the sample not to touch the optics of the instrument.

The table provides information regarding the most prominent applications and discusses interference concerns. This information is based on technologies that were in the field at the time this method was written and does not constitute an endorsement of any manufacturer of a given technology. In some cases, a design can be successfully used outside of the applications stated in Table 1. The user should perform testing to ensure the technology meets the limit of detection, sensitivity, and range requirements needed to acquire representative data.

Range of Measurement: Table 1 provides guidance on the estimated range of use for the different measurement technologies. A key design criterion is the pathlength of measurement. This is the actual distance that light travels through a sample to generate the scatter that is ultimately detected. It encompasses both the incident light distance and the receive angles for the scattered light detectors.
The longer the pathlength, the lower the measurement range, but the better the sensitivity. Shorter pathlengths may provide a greater range, but poorer sensitivity and a poorer limit of detection.
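Note (illustrative only, not part of the standard): the following short Python sketch restates the decoding rules described above for the letters of a reporting-unit code. The function name, the wording of the output, and the example codes in the demonstration loop are assumptions chosen for illustration; Table 1 of the guide remains the authoritative listing, and the special-case units mNTU and SSU are deliberately not handled here.

# Illustrative sketch (not part of ASTM D7726): map the letters of a
# turbidity reporting-unit code to the design criteria they imply.

def describe_unit(unit):
    """Decode a reporting-unit code (e.g. 'NTU', 'FNU', 'FAU', 'FBU', 'BU')
    into the light source and detection geometry implied by its letters."""
    u = unit.upper()
    info = {"unit": u}

    # Light source: codes beginning with 'F' imply a near-IR source
    # (approx. 860 +/- 60 nm); 'N...' codes and two-letter codes imply a
    # visible source in the 400-680 nm range.
    if u.startswith("F"):
        info["light source"] = "near-IR, approx. 860 +/- 60 nm"
    else:
        info["light source"] = "visible, 400-680 nm"

    # Detection geometry encoded by the remaining letters.
    if "R" in u:
        info["geometry"] = "ratiometric: 90-degree detector plus at least one other detector"
    elif "B" in u:
        info["geometry"] = "backscatter detector"
    elif "A" in u:
        info["geometry"] = "attenuation/absorbance (report the incident wavelength as well)"
    elif "M" in u:
        info["geometry"] = "multibeam: at least two sources and two detectors, ratio technique"
    else:
        info["geometry"] = "nephelometric: single detector at 90 degrees"

    return info

if __name__ == "__main__":
    for code in ("NTU", "FNU", "FNRU", "FAU", "FBU", "BU"):
        print(code, "->", describe_unit(code))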
1. Scope
1.1 This guide covers best practices for the use of various turbidimeter designs for measurement of turbidity in waters, including drinking water, wastewater, and industrial waters, and for regulatory and environmental monitoring. This guide covers both continuous and static measurements.

1.1.1 In principle, there are three basic applications for on-line measurement setups. The first is the bypass or slipstream technique, in which a portion of sample is transported from the process or sample stream to the turbidimeter for analysis; it is then either transported back to the sample stream or sent to waste. The second is the in-line measurement, in which the sensor is submerged directly into the sample or process stream, which is typically contained in a pipe. The third is in-situ, in which the sensor is inserted directly into the sample stream. The in-situ principle is intended for the monitoring of water during any step within a processing train, including immediately before or after the process itself.

1.1.2 Static covers both benchtop and portable designs for the measurement of water samples that are captured into a cell and then measured.

1.2 Depending on the monitoring goals and desired data requirements, certain technologies will deliver more desirable results for a given application. This guide will help the user align a technology to a given application with respect to best practices for data collection.

1.3 Some designs are applicable to either a lower or an upper measurement range. This guide will help provide guidance on the technologies best suited to a given range of turbidity.

1.4 Modern electronic turbidimeters are comprised of many parts that can cause them to produce different results on samples. The wavelength of incident light used, detector type, detector angle, number of detectors (and angles), and optical pathlength are all design criteria that may differ among instruments. When these sensors are all calibrated with the same turbidity standards, they will all read the standards the same. However, samples comprise completely different matrices and may measure quite differently among these different technologies.

1.4.1 This guide does not provide calibration information but rather defers the user to the appropriate ASTM turbidity method and its calibration protocols. When calibrated on traceable primary turbidity standards, the assigned turbidity units, such as those used in Table 1, are equivalent. For example, a 1 NTU formazin standard is also equivalent in measurement magnitude to a 1 FNU, a 1 FAU, and a 1 BU standard, and so forth.

1.4.2 Improved traceability beyond the scope of this guide may be practiced and would include listing the make and model number of the instrument used to determine the turbidity values.

1.5 This guide does not purport to cover all available technologies for high-level turbidity measurement.

1.6 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only.

1.7 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Refer to the MSDSs for all chemicals used in this procedure.
2. Referenced Documents