Test Accuracy Ratio (TAR) and Test Uncertainty Ratio (TUR) are common measurement risk assessment tools used in metrology. TAR is commonly defined as “Accuracy of the Device Under Test (DUT) / Accuracy of the Standard”. It is most often used early in the design of a calibration system to qualify a measurement standard’s capability to provide the desired accuracy.
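As a simple illustration, here is a minimal sketch of that ratio using hypothetical accuracy values (not tied to any particular instrument):

```python
# Minimal TAR sketch with hypothetical accuracy values.
dut_accuracy = 1.0        # DUT accuracy, e.g. +/-1.0% of reading (hypothetical)
standard_accuracy = 0.25  # standard accuracy, e.g. +/-0.25% of reading (hypothetical)

tar = dut_accuracy / standard_accuracy  # TAR = DUT accuracy / standard accuracy
print(f"TAR = {tar:.1f}:1")             # prints: TAR = 4.0:1
```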
The primary limitation of using TAR for measurement risk decisions is that its definition does not capture uncertainty influences from the unique calibration system and processes used in each user’s lab.
A standard’s accuracy specification is ultimately based on operating within a set of assumptions (commonly defined by the standard’s manufacturer). In a flow calibration, such limits may include a specific flow range, an operating pressure or temperature range, a settling or stabilization time requirement, or certain methods of averaging measurement results. The lab’s ability to control these factors within these limits is a key part of achieving the accuracy specification of the standard.
A set of operating assumptions also applies to the DUT itself. For example, calibrating the DUT outside of its specified flow range may reduce the accuracy or repeatability of the measurement reported by the DUT, which in turn reduces confidence in the calibration when that result is compared to the standard’s measurement.
In comparison, TUR is commonly defined as “Accuracy of the DUT / Uncertainty of the Measurement Ensemble”. The term measurement ensemble refers to the other equipment and processes used to compare the DUT’s measurement to the standard. In a flow calibration, factors such as flow profile, calibration media, and connecting volume will also impact flow measurement technologies in numerous ways, so it may only be practical to quantify TUR after validation testing of each unique calibration system.
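For comparison, a similarly hedged sketch, assuming a hypothetical expanded uncertainty for the measurement ensemble, shows that TUR differs from TAR only in its denominator:

```python
# Minimal TUR sketch with hypothetical values.
dut_accuracy = 1.0          # DUT accuracy, e.g. +/-1.0% of reading (hypothetical)
ensemble_uncertainty = 0.4  # expanded uncertainty of the measurement ensemble,
                            # e.g. +/-0.4% of reading (hypothetical)

tur = dut_accuracy / ensemble_uncertainty  # TUR = DUT accuracy / ensemble uncertainty
print(f"TUR = {tur:.1f}:1")                # prints: TUR = 2.5:1
```

Note how the same DUT yields a lower TUR than TAR once the full measurement ensemble, rather than the standard alone, is accounted for in the denominator.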
In the end, these ratios are often helpful tools for users who design and manage calibration processes. Some accreditation standards include explicit definitions for these values that must be used for all accredited calibrations performed in the lab. However, relying solely on an accreditation standard’s prescribed TUR value may not produce calibration results that fall within the standard’s acceptance test limits for every calibration the lab performs. Any such inconsistencies should be reviewed with the responsible metrologist and/or the auditor for the lab’s accreditation.
Notably, different industry associations also publish definitions for these ratios that are inconsistent with one another and differ from the calculations shown above, so confirming that all values are derived by the same method is critical to making accurate comparisons. For best results, these ratios should be used as one small part of a comprehensive strategy for managing how measurement risk decisions are made in the lab.