Measurement uncertainty - definition, calculation & significance for metrology

Measurement uncertainty as an important factor in quality assurance

Anyone in industrial metrology who wants not only to work economically but also to remain competitive must be able to rely on precise measurement results. However, accurate measurements are not possible without classifying the quality of the measured values, and this is where measurement uncertainty plays an important role. Yet it is not only the measurement results that are decisive for quality assurance; the suitability of the measuring equipment for the measurement task matters just as much. So how can measurement accuracy be improved by determining the measurement uncertainty, and which parameters are important for this?

In this article, we address the most important aspects of measurement uncertainty - from its definition and determination to its significance for metrology in practice.

What does measurement uncertainty mean - a brief definition

The International Dictionary of Metrology ("International vocabulary of basic and general terms in metrology") gives the following definition of measurement uncertainty: "Uncertainty of measurement is a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand."1 This definition assumes that a measurement result represents the best estimate of the value of a measurand, and that all components of measurement uncertainty contribute to the dispersion of the measured values.

In practice, measurement uncertainty is often abbreviated to uncertainty. In contrast to everyday usage, however, the term uncertainty in metrology does not have a negative connotation; it is an important parameter for the quality of measurements. Measurement uncertainty should therefore by no means be equated with measurement error: rather, it is what makes it possible to evaluate a result precisely and thus to detect measurement errors in the first place.

Stated deviations and measurement uncertainties are in fact fundamental for assessing and comparing measurement results, measuring instruments and methods, and thus also for increasing measurement accuracy. Even when high-precision measuring instruments such as coordinate measuring machines are used, measurement uncertainties occur and should be determined in the course of quality assurance.

The term standard deviation also appears in connection with measurement uncertainty. The standard deviation cannot be equated with the uncertainty of measurement, but it is an important factor for its determination. Basically, the measurement uncertainty is made up of various components. Of these, some can be determined from the statistical distribution of the measurement results of a series of measurements. These results can in turn be characterized by the standard deviation. Frequently, the measurement uncertainty is also given as the standard deviation of normally distributed data. In this case, one speaks of the so-called standard uncertainty or standard measurement uncertainty.
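To make the relationship between standard deviation and standard uncertainty concrete, the following Python sketch computes the empirical standard deviation of a short series of repeat measurements and derives the standard uncertainty of the mean from it, as in a statistical (Type A) evaluation. The values and the function name are purely illustrative and not taken from this article.

```python
import math
import statistics

def standard_uncertainty(readings):
    """Type A evaluation: empirical standard deviation of a measurement
    series and the resulting standard uncertainty of the mean value."""
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)        # empirical standard deviation
    u = s / math.sqrt(len(readings))      # standard uncertainty of the mean
    return mean, s, u

# Illustrative series of repeated length measurements in mm
readings = [10.002, 9.998, 10.001, 10.000, 9.999, 10.003]
mean, s, u = standard_uncertainty(readings)
print(f"mean = {mean:.4f} mm, s = {s:.4f} mm, u = {u:.4f} mm")
```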

Important terms related to measurement uncertainty

In connection with the determination, calculation and definition of measurement uncertainty, other important terms also crop up repeatedly. The following terms should therefore be known in conjunction with measurement uncertainty:

Expanded Uncertainty of Measurement

The expanded measurement uncertainty is the product of the combined (standard) measurement uncertainty and a coverage factor greater than 1. This value characterizes the interval around the measurement result that is expected to encompass a large fraction of the distribution of values that can be attributed to the measurand.
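Written as a formula in the notation commonly used in the GUM, where u_c denotes the combined standard uncertainty and k the coverage factor:

\[ U = k \cdot u_c , \qquad k > 1 \]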

Relative Uncertainty

The relative measurement uncertainty is the ratio of the measurement uncertainty to the measured value. Unlike the absolute uncertainty, it is a dimensionless figure and therefore allows measurement results of different magnitudes to be compared.

Absolute Uncertainty

The measurement uncertainty is always specified as a positive number. The absolute measurement uncertainty is expressed in the same unit of measurement as the measured value.
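A brief worked example with illustrative numbers: for a measured length of 50.00 mm with an absolute uncertainty of 0.01 mm (stated in millimeters, like the measured value), the relative uncertainty is

\[ \frac{u}{|x|} = \frac{0.01\ \mathrm{mm}}{50.00\ \mathrm{mm}} = 2 \times 10^{-4} = 0.02\ \% \]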

Statistical Measurement Uncertainty

The statistical measurement uncertainty (also standard uncertainty) is a component of the uncertainty of measurement determined from the statistical distribution of the results of a series of measurements and characterized by the empirical standard deviation.

Systematic Measurement Uncertainty

The systematic measurement uncertainty corresponds to the measurement deviation, i.e., the deviation of the measured value assigned to the measurand from the true value.

Measurement accuracy and uncertainty: Why is the uncertainty important?

In principle, measurements cannot provide exact values. The value of a measurand cannot be determined exactly, and the result of a measurement can only ever be an estimate of the true value of the measurand. For this reason, a measure is needed to assess the quality of a measurement result, and this purpose is fulfilled by the measurement uncertainty. The uncertainty makes it possible to state the range within which the true value of the measurand is expected to lie with a given probability. Determining the measurement uncertainty is therefore essential to be able to evaluate the quality of measurement results and the measurement accuracy at all; a metrological task simply cannot be solved without first determining the measurement uncertainty. The uncertainty of measurement is not only important for evaluating measurement results, however, it also indicates whether a test process or measurement method is suitable for a measurement task in the first place.

But why is it important to determine the measurement uncertainty at all if the measurement deviation of the measuring device is already known? Simply because the stated deviation of the device cannot automatically be transferred to the measurement uncertainty expected for a specific measurement task. A measurement task involves many more influencing factors than the measuring device alone: in addition to the device used, the workpiece to be measured and the measurement procedure also play a role in the result. For this reason, the measurement uncertainty must be determined with all relevant factors taken into account in order to obtain meaningful measurement methods and results.

Metrology in practice: What is the significance of measurement uncertainty?

Having established that the determination of measurement uncertainty is indispensable for unambiguous measurement results, the question arises as to what significance uncertainty has for practical metrology. First, it is important, especially in everyday metrology, to know whether a measurement method or measuring instrument is suitable for a measurement task at all and can deliver meaningful results. This is best determined via the measurement uncertainty. Furthermore, specifying the uncertainty not only supports the company's own quality assurance, but is also an expression of quality-conscious measurement and thus creates transparency for the customer. After all, those who test their measurement procedures and measuring instruments for suitability can convince customers with precise results.

Furthermore, the determination of the measurement uncertainty offers the following advantages in practice:

  • Clear assessment of the quality of measurement results
  • Revealing potential for improvement in the measurement process
  • Traceable measurement uncertainties
  • Creation of comparable measurement results

In addition to these advantages, measurement uncertainty also plays an important role in productivity and cost-effectiveness and can be decisive for competitiveness.

Increase productivity

Measurement uncertainty has a decisive influence on product quality and productivity. To guarantee economical production, the achievable measurement uncertainty must be taken into account as early as the definition of tolerances and again for the subsequent proof of conformity. To ensure the conformity of the products in later production, the limit values that apply to measurements in production must be specified in advance, because only these specified limit values may be used for the conformity decision when workpieces are checked by measurement. They cannot be determined without knowledge of the measurement uncertainty. In practice, the expanded measurement uncertainty is used for such checking measurements in manufacturing.

Early in the product development process, the measurement uncertainty of the measuring equipment used must be determined in order to establish proof of conformity. Once the measurement uncertainty is known, the range of conformity can be calculated for each characteristic and the limit values can be defined. This is important in order to compare the financial expenditure for manufacturing and measuring equipment during production planning and to minimize it with regard to the proof of conformity.
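One common way of deriving such limit values is to reduce the tolerance zone by the expanded measurement uncertainty U at both ends, in the spirit of the decision rules of ISO 14253-1. The following Python sketch illustrates this guard-banded acceptance rule; the function names and numbers are illustrative, and the decision rule actually applied should always follow the agreed specification.

```python
def conformance_zone(lower_tol, upper_tol, U):
    """Shrink the tolerance zone by the expanded uncertainty U at both ends
    to obtain the acceptance (conformance) zone for checking measurements."""
    lower_acc, upper_acc = lower_tol + U, upper_tol - U
    if lower_acc >= upper_acc:
        raise ValueError("expanded uncertainty consumes the entire tolerance zone")
    return lower_acc, upper_acc

def conforms(measured_value, lower_acc, upper_acc):
    """Conformity may only be claimed inside the reduced zone."""
    return lower_acc <= measured_value <= upper_acc

# Illustrative: nominal 20 mm, tolerance +/- 0.05 mm, U = 0.008 mm
lo, hi = conformance_zone(19.95, 20.05, 0.008)
print(lo, hi, conforms(20.044, lo, hi))   # 19.958 20.042 False
```

The sketch also makes the economic trade-off of the next paragraph visible: the smaller the expanded uncertainty, the wider the conformance zone that remains usable for production.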

As a rule, a higher expenditure on measuring equipment combined with a low measurement uncertainty is considered economical, since in this case the range of conformity can be increased. Determining the measurement uncertainty can thus not only ensure economical and efficient production, but also strengthen the customer's confidence: if accurate and realistic results are already communicated during sample testing, this creates transparency and trust.

Precise determination of measurement accuracy for more economic efficiency

Are you working on new product series, facing complex measurement tasks or want to increase your productivity and optimize measurement processes? At ZEISS, we support you with competent measurement services for your metrological challenges.

How to determine the measurement uncertainty?

To ensure that the interpretation of measurement results is based on a standard that is uniform for all, an international procedure for determining measurement uncertainty is necessary. Such a procedure was first published in 1993 (and revised again in 2008) with the Guide to the Expression of Uncertainty in Measurement (GUM). In the meantime, other procedures for determining measurement uncertainty in industrial metrology have been developed on the basis of this guide, such as [VDA 5]. These newer procedures are often more practice-oriented than the GUM, but both [VDA 5] and the GUM are suitable for calculating measurement uncertainty. To understand the different procedures, it is worthwhile to take a closer look at the GUM as well as an overview of the determination of measurement uncertainty according to [VDA 5].

Determination of the measurement uncertainty according to GUM

The "Guide to the Expression of Uncertainty in Measurement (GUM)” is regarded as the internationally recognized procedure for determining the uncertainty of a measurement. The guide is still a binding standard at national metrology institutes and international organizations to ensure reliable and complete measurement results. The determination of the measurement uncertainty according to GUM is based on modern probability theory and requires the quantitative consideration of all influencing factors. The following factors are used to calculate the uncertainty: statistical parameters, derivations from previous measurement results, data from manuals or data books, or general knowledge about the measuring equipment, the measuring procedure or the manufacturer's specifications. In the case of measurement uncertainty according to GUM, the input quantities are described according to specified statistical rules. In the procedure as per GUM, the same procedure applies both for the calculation of the standard measurement uncertainty and for the determination of the expanded measurement uncertainty.

All sources for the determination of the measurement uncertainty are recorded in a list as a measurement uncertainty budget. The measurement uncertainty budget contains not only the associated standard deviations and information on their determination, but also observations on the measurement performed.
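As a highly simplified sketch of such a budget, the following Python example combines a few illustrative standard uncertainty contributions by root-sum-of-squares, as the GUM law of propagation of uncertainty prescribes for uncorrelated input quantities; the sources, sensitivity coefficients and values are assumptions for illustration only.

```python
import math

# Illustrative budget: (source, standard uncertainty, sensitivity coefficient), lengths in mm
budget = [
    ("calibration of the standard", 0.0008, 1.0),
    ("repeatability (Type A)",      0.0005, 1.0),
    ("temperature deviation in K",  0.2,    11.5e-6 * 100.0),  # 100 mm steel part
]

# Combined standard uncertainty for uncorrelated input quantities
u_c = math.sqrt(sum((u * c) ** 2 for _, u, c in budget))

# Expanded uncertainty with coverage factor k = 2 (approx. 95 % confidence)
U = 2 * u_c
print(f"u_c = {u_c * 1000:.2f} µm, U = {U * 1000:.2f} µm")
```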

The calculation of the measurement uncertainty according to GUM represents a complex set of rules that has only been very roughly touched upon here. Although in theory the procedure should be applicable to any type of measurement, it is rarely usable to its full extent in industrial production. For this reason, the GUM is most often used in calibration laboratories to determine the calibration uncertainty of standards. However, based on the guideline, other methods for determining the measurement uncertainty have since developed, such as the fairly new standard [VDA 5].

A practicable implementation of the requirements of GUM for measurements with coordinate measuring machines is the determination of the measurement uncertainty according to [VDI 2617-8]. This focuses on the relationship between the test equipment suitability and the determined measurement uncertainty. In addition to this standard, a relatively new method for determining the measurement uncertainty exists.

The [VDA 5] standard of the German automotive industry describes a simplified procedure for determining uncertainty based on GUM. The [VDA 5] comprises a two-stage procedure for determining the test equipment usability and test process suitability based on the determined measurement uncertainty. For this purpose, an examination of the test equipment is carried out without taking environmental influences into account. The aim is to determine whether the measuring instrument is at all suitable for the measurement task with the required measurement uncertainty. The environmental influences are included as an estimated value and checked after installation.

The determination of the measurement uncertainty according to [VDA 5] is carried out in a total of four steps:

  1. Investigation of the suitability of the test equipment
  2. Determination of the standard deviation
  3. Measurement system analysis
  4. Summary of the individual standard uncertainties in the measurement uncertainty budget

Analysis of measurement uncertainty according to [VDA 5]

Practical implementation: Determining measurement uncertainty by means of measurement system analysis

The methods presented above for determining measurement uncertainty are complex and not very practicable for everyday use in metrology. For practical applications, therefore, the measurement uncertainty is often determined by performing a measurement system analysis. The measurement system analysis treats the measurement process as a statistical process that includes both random and systematic influences, and its results can only be obtained through experimental investigations.

Measurement system analysis is composed of two studies conducted consecutively:

  • Type 1 capability study:
    This study determines whether the measurement system is suitable for the measurement task. For this purpose, a calibrated part is measured repeatedly, and the results are used to calculate the capability of the measuring system (a simplified calculation is sketched after this list).
  • Type 2 or Type 3 study:
    If the measuring equipment is suitable, a second procedure tests under actual operating conditions whether the measurement results are repeatable and comparable. This procedure is also called a GR&R (gage repeatability and reproducibility) study.
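As an illustration of the Type 1 study, the following Python sketch computes the capability indices Cg and Cgk from repeated measurements of a calibrated part. The 20-percent-of-tolerance convention and the acceptance limit of 1.33 used here are widespread but not universal; they are assumptions for this example, not requirements taken from this article.

```python
import statistics

def type1_study(readings, reference_value, tolerance):
    """Type 1 capability study: repeated measurement of a calibrated part.
    Uses the common 20 %-of-tolerance convention; company guidelines may
    define the indices differently."""
    mean = statistics.mean(readings)
    s_g = statistics.stdev(readings)
    bias = abs(mean - reference_value)
    c_g = (0.2 * tolerance) / (6 * s_g)
    c_gk = (0.1 * tolerance - bias) / (3 * s_g)
    return c_g, c_gk

# Illustrative (in practice, typically 25 or more repeat measurements are taken)
readings = [10.001, 10.002, 10.000, 10.001, 9.999, 10.002, 10.001, 10.000]
c_g, c_gk = type1_study(readings, reference_value=10.000, tolerance=0.05)
print(f"Cg = {c_g:.2f}, Cgk = {c_gk:.2f}")   # often required to be >= 1.33
```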

Important terms for the determination of the measurement uncertainty

In connection with the calculation of measurement uncertainty, there are some terms that are important not only for the determination of uncertainty, but also for its basic understanding. These include:

Calibration:

Calibration of a measuring instrument involves determining, under specified conditions, the relationship between the value indicated by the instrument (the output quantity) and the associated true or correct value of the measurand (the input quantity). A measurement uncertainty, the so-called calibration uncertainty, is stated as part of every calibration.

Adjustment:

Adjustment refers to setting or aligning a measuring instrument so that systematic measurement deviations are eliminated to the extent necessary for the specified application or measurement task.

Traceability:

In metrology, traceability refers to the property of a measurement result to be related to a suitable standard by an unbroken chain of comparative measurements with specified measurement uncertainties. In most cases, this is an international or national standard.

National/International Standard:

A national standard is a standard that is recognized in a country by a national resolution as the basis for establishing the values of other standards. Similarly, the international standard is recognized by an international agreement as the basis of the standards of a given quantity.

Reproducibility:

If a measurement repeated under exactly the same environmental conditions yields the same displayed value on the measuring instrument, the measurement result is referred to as reproducible.

Systematic deviation:

Systematic deviation refers to the value determined during a measurement minus the correct value of the measurand.

Linearity:

Linearity refers to a constant relationship between the output variable and the input variable of a piece of test equipment as the input changes. The deviations from this constant relationship describe the linearity deviation.

Repeatability:

Repeatability is the closeness of agreement between the results of measurements of the same measurand carried out at short time intervals under the same measurement conditions. The standard deviation, for example, is a measure of repeatability.

Intercomparison precision:

Intercomparison precision refers to the closeness of agreement between measurement results of the same measurand obtained under different measurement conditions. It is essential, however, to ensure that only one condition is changed at a time. A measure of intercomparison precision is, for example, the overall mean value.

Stability:

In metrology, stability refers to the closeness of agreement between results of measurements of the same measurand performed at specified time intervals.

Confidence level/Confidence range:

Every specification of the measurement uncertainty should include the confidence level used. A confidence level of 95 %, for example, means that the true value of the measurand is expected to lie within the specified range with a probability of 95 %. If only a standard deviation is specified, the confidence level is merely about 68 %. For a higher confidence level, the standard uncertainty has to be multiplied by a coverage factor k, which yields an expanded uncertainty with a higher confidence level. The most commonly used coverage factor is k = 2, which corresponds to a confidence level of approximately 95 %. In short, a measurement uncertainty is not meaningful without the specification of the confidence level used.
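The connection between the coverage factor and the confidence level can be checked numerically under the assumption of normally distributed values. The short Python snippet below uses scipy for the normal distribution (an assumption about the available tooling) and reproduces the figures mentioned above: roughly 68 % for k = 1 and roughly 95 % for k = 2.

```python
from scipy.stats import norm

def coverage_probability(k):
    """Two-sided probability that a normally distributed value lies within
    +/- k standard uncertainties of the measurement result."""
    return norm.cdf(k) - norm.cdf(-k)

for k in (1, 2, 3):
    print(f"k = {k}: confidence level ≈ {coverage_probability(k) * 100:.1f} %")
# k = 1: ≈ 68.3 %, k = 2: ≈ 95.4 %, k = 3: ≈ 99.7 %
```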

Which factors have an influence on the measurement uncertainty?

In order to be able to classify the quality of measurement results with the help of measurement uncertainty, one must also consider the influences on the measurement results. The causes of measurement uncertainty can lie in the measuring equipment, in environmental influences or in the measurement procedure. From this, various factors can be derived that influence the measurement result and thus also the measurement uncertainty of coordinate measuring machines. These include, for example:

  • Measurement strategy (e.g., sensor selection, test planning, measurement method)
  • Measuring device (e.g., sensor technology, measuring range, measuring software)
  • Environment (e.g., temperature in the measuring room, thermal radiation, dirt, vibrations)
  • User (e.g., choice of equipment, care, motivation)
  • Task (e.g., order orientation, quantity orientation, tolerance orientation)
  • Workpiece (e.g., shape deviation, dimension, material, mass)

For a meaningful determination of the measurement uncertainty, all these factors must be included. A distinction is usually made between random and systematic influences. If, for example, fluctuations occur in optical measurements due to extraneous light, this is a random influence. Since random influences cannot be predicted or calculated exactly, their magnitude is usually estimated.

Systematic influences, on the other hand, are the same for each repetition of the measurement. If their magnitude is known over the entire measurement range, their influence on the measurement result can be calculated and corrected for. Typical systematic factors are, for example, errors during calibration or a change in temperature.
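A typical example of such a calculable systematic influence is the temperature just mentioned: if the workpiece temperature deviates from the 20 °C reference temperature and the material's expansion coefficient is known, the measured length can be corrected accordingly. The following minimal Python sketch uses illustrative values; the expansion coefficient and temperatures are assumptions for the example.

```python
def correct_to_reference_temperature(length_measured, temperature, alpha, t_ref=20.0):
    """Remove the known systematic effect of thermal expansion by correcting
    a measured length back to the 20 degC reference temperature."""
    return length_measured / (1.0 + alpha * (temperature - t_ref))

# Illustrative: 100 mm steel part measured at 24 degC, alpha ~ 11.5e-6 per K
corrected = correct_to_reference_temperature(100.0046, 24.0, 11.5e-6)
print(f"corrected length = {corrected:.4f} mm")   # 100.0000 mm
```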

Do you need measurements with meaningful results within the shortest possible time? With our optical measurement services, we can guarantee just that. Using high-precision CT and X-ray measurements, we at ZEISS offer a holistic assessment of components.

Conclusion: Specification of measurement uncertainty as a professional standard

Measurement uncertainty is an important criterion for the quality and reliability of measured values. Uncertainty in metrology is therefore by no means synonymous with measurement errors or inaccurate measurement procedures; rather, it is a sign of professionalism. After all, only tested measurement procedures and tasks can deliver precise results. This is not only crucial for productivity and quality assurance, but also creates confidence in the measurement results. Meaningful statements about measurement accuracy are therefore not even possible without specifying the measurement uncertainty.

At ZEISS, we offer professional measurement services with reliable results in compliance with international standards on measurement procedures and measurement uncertainty. Our high-precision coordinate measuring machines enable fast and accurate measurements. At ZEISS, measurement uncertainty is not only important in everyday metrology, but is also covered in our seminars, e.g. AUKOM Level 3.

Contact us