Why is Measurement Uncertainty Important?

Quality and cost are directly impacted by measurement uncertainty. Many industries, including research, manufacturing, finance, and healthcare, rely on reports that contain quantitative data from measurement results. Product quality, experiment results, financial decisions, and medical diagnoses can all be directly impacted by errors introduced when measurement uncertainty is omitted. Without awareness and consideration of the impact measurement uncertainty has on quality, the probability of increased operating costs and failure rates grows. How much would it cost a manufacturer to issue a recall due to product failure, or to experience downtime as a result of a system failure? If quality improves, costs can be reduced, and organizations can become more efficient and more profitable.
For example, if an organization doesn’t account for measurement uncertainty, there is a real danger that calibration measurements could be incorrect. This could mean that instruments are not performing to the proper standards and/or certifications, and could result in deficient product builds, leading to revenue losses from recalls and rebuilds.

Defining Measurement Uncertainty

The uncertainty of a measurement tells us something about its quality. It is the doubt that exists about the result of any measurement. You might think that well-made rulers, clocks and thermometers should be trustworthy, and give the right answers. But for every measurement, even the most careful, there is always a margin of doubt or level of uncertainty.
Since there is always some uncertainty about any measurement, we need to ask, ‘How big is the margin?’ and ‘How certain are we?’ Thus, two numbers are really needed in order to quantify an uncertainty. One is the width of the margin (interval) and the other is a confidence level, which states how sure we are that the ‘true value’ is within that margin.

For example:

We might say that the length of a certain stick measures 20 centimetres plus or minus 1 centimetre, at the 95 per cent confidence level. This result could be written:

  • 20 cm ±1 cm, at a level of confidence of 95%.

The statement says that we are 95 per cent certain that the stick is between 19 centimetres and 21 centimetres long.
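The same reporting convention can be sketched in code (a minimal Python illustration; the function name and values here are our own, not from any standard library):

```python
# A minimal sketch of reporting a result as value ± expanded uncertainty.
# The stated interval is the range the 'true value' is believed to lie in,
# at the quoted level of confidence (here, 95%).

def reported_interval(value, expanded_uncertainty):
    """Return the (lower, upper) bounds implied by value ± U."""
    return value - expanded_uncertainty, value + expanded_uncertainty

low, high = reported_interval(20.0, 1.0)  # 20 cm ± 1 cm
print(f"{low} cm to {high} cm, at 95% confidence")  # 19.0 cm to 21.0 cm
```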

Error versus uncertainty

It is important not to confuse the terms ‘error’ and ‘uncertainty’. Error is the difference between the measured value and the ‘true value’ of the item being measured. Uncertainty, on the other hand, is a quantification of the doubt about the measurement result.

Whenever possible we try to correct for any known errors, but any error whose value we do not know is a source of uncertainty.

Where do errors and uncertainties come from?

Many things can undermine a measurement and flaws in the measurement may be visible or invisible. Since real measurements are never made under perfect conditions, errors and uncertainties can come from a variety of sources:

  • The measuring instrument – instruments can suffer from problems including poor repeatability, bias, changes due to ageing, wear or other kinds of drift, poor readability, noise (for electrical instruments) and many others.
  • The item being measured – which may not be stable. (Imagine trying to measure the size of an ice cube in a warm room.)
  • The measurement process – the measurement itself may be difficult to make. For example, measuring the weight of small but lively animals presents particular difficulties in getting the subjects to co-operate.
  • ‘Imported’ uncertainties – calibration of your instrument has an uncertainty which is then built into the uncertainty of the measurements made.
  • Operator skill – some measurements depend on the skill and judgement of the operator. One person may be better than another at the delicate work of setting up a measurement or at reading fine detail. The use of an instrument such as a stopwatch depends on the reaction time of the operator.
  • Sampling issues – the measurements made must be properly representative of the process you are trying to assess. If you want to know the temperature at the workbench, don’t measure it with a thermometer placed on the wall near an air conditioning outlet. If you are choosing samples from a production line for measurement, don’t always take the first ten made on a Monday morning.
  • The environment – temperature, air pressure, humidity and many other conditions can affect the measuring instrument, or the item being measured.

Where the size and effect of an error are known (e.g. from a calibration certificate) a correction can be applied to the measurement result. But, in general, uncertainties from each of these sources, and from other sources, would be individual ‘inputs’ contributing to the overall uncertainty in the measurement.
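Applying a known correction before treating the remaining effects as uncertainty inputs can be sketched as follows (the reading and correction values are invented for illustration):

```python
# Sketch: a calibration certificate reports a known error, so a correction
# of equal size and opposite sign is applied to the raw reading. Effects
# whose size is NOT known remain as uncertainty inputs.
reading = 20.12                 # raw instrument reading, in cm
certificate_correction = -0.05  # from the calibration certificate, in cm
corrected = reading + certificate_correction
print(f"corrected result: {corrected:.2f} cm")  # corrected result: 20.07 cm
```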

Random or Systematic Uncertainties

The effects that give rise to uncertainty in measurement can be either:

  • Random – where repeating the measurement gives a randomly different result. If so, the more measurements you make and then average, the better the estimate you can generally expect to achieve.

  • Systematic – where the same influence affects the result of each repeated measurement. In this case, you learn nothing extra just by repeating measurements; other methods are needed to estimate uncertainties due to systematic effects, such as making measurements by a different method, or by calculation.
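The difference between the two can be illustrated with a short simulation (the bias and noise values below are invented for illustration):

```python
import random
import statistics

random.seed(1)
true_value = 20.0
bias = 0.3      # systematic effect: the same shift on every reading
noise_sd = 0.5  # random effect: different on every reading

def measure():
    return true_value + bias + random.gauss(0.0, noise_sd)

readings = [measure() for _ in range(1000)]
mean = statistics.mean(readings)

# Averaging 1000 readings shrinks the random scatter of the mean to about
# noise_sd / sqrt(1000) ≈ 0.016, but the mean still carries the full
# systematic bias of ~0.3 — repetition alone cannot reveal or remove it.
print(round(mean - true_value, 2))
```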

What is not a measurement uncertainty?

Miscalculations made by operators are not measurement uncertainties and they should not be counted as contributing to uncertainty. Mistakes can be avoided by working carefully and by checking work.

  • Tolerances are not uncertainties. They are acceptance limits which are chosen for a process or a product.
  • Specifications are not uncertainties. A specification tells you what you can expect from a product. It may be very wide-ranging, including ‘non-technical’ qualities of the item, such as its appearance.
  • Accuracy or inaccuracy is not the same as uncertainty. Unfortunately, the usage of these words is often confused. Correctly speaking, ‘accuracy’ is a qualitative term (e.g. you could say that a measurement was ‘accurate’ or ‘not accurate’). Uncertainty is quantitative. When a ‘plus or minus’ figure is quoted, it may be called an uncertainty, but not an inaccuracy.
  • Errors are not the same as uncertainties. It has been common in the past to use the words interchangeably in phrases like ‘error analysis’, but the correct term for quantifying doubt is ‘uncertainty analysis’.
  • Statistical analysis is not the same as uncertainty analysis. Statistics can be used to draw all kinds of conclusions which do not by themselves tell us anything about uncertainty. Uncertainty analysis is only one of the uses of statistics.

How to calculate uncertainty of measurement

To calculate the uncertainty of a measurement, you need to identify the sources of uncertainty in the measurement. Then you must estimate the size of the uncertainty from each source. Finally, the individual uncertainties are combined to give an overall figure.

The two ways to estimate uncertainties

No matter what the sources of your uncertainties are, there are two approaches to estimating them: ‘Type A’ and ‘Type B’ evaluations. In most measurement situations, uncertainty evaluations of both types are needed.

Type A evaluations – uncertainty estimates using statistics (usually from repeated readings)
Type B evaluations – uncertainty estimates from any other information. This could be information from experience of the measurements, from calibration certificates, manufacturer’s specifications, from calculations, from published information, and from common sense.

There is a temptation to think of ‘Type A’ as ‘random’ and ‘Type B’ as ‘systematic’, but this is not necessarily true.
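Both kinds of evaluation can be sketched briefly (the readings and the ±0.05 resolution limit below are invented for illustration, and the rectangular-distribution assumption is one common Type B choice, not the only one):

```python
import math
import statistics

# Type A: standard uncertainty of the mean, estimated statistically
# from repeated readings (sample standard deviation / sqrt(n)).
readings = [20.1, 19.9, 20.0, 20.2, 19.8]  # illustrative values, in cm
type_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: from other information, e.g. a stated resolution limit of
# ±0.05 cm. Assuming a rectangular distribution, the standard
# uncertainty is the half-width divided by sqrt(3).
half_width = 0.05
type_b = half_width / math.sqrt(3)

print(f"Type A: {type_a:.4f} cm, Type B: {type_b:.4f} cm")
```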

Eight main steps to evaluating uncertainty

The main steps to evaluating the overall uncertainty of a measurement are as follows.

  1. Decide what you need to find out from your measurements. Decide what actual measurements and calculations are needed to produce the result.
  2. Carry out the measurements needed.
  3. Estimate the uncertainty of each input quantity that feeds into the result. Express all uncertainties in similar terms.
  4. Decide whether the errors of the input quantities are independent of each other. If you think not, then some extra calculations or information are needed.
  5. Calculate the result of your measurement (including any known corrections for things such as calibration).
  6. Find the combined standard uncertainty from all the individual aspects.
  7. Express the uncertainty in terms of a coverage factor together with a size of the uncertainty interval and state a level of confidence.
  8. Write down the measurement result and the uncertainty, and state how you got both.

This is a general outline of the process. Note that before they are combined, uncertainty contributions must be expressed in similar terms: all the uncertainties must be given in the same units, and at the same level of confidence.
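Steps 6 and 7 can be sketched for the common case of independent inputs, where standard uncertainties combine in quadrature (root sum of squares) and a coverage factor of k = 2 gives a level of confidence of approximately 95% (the input values below are invented for illustration):

```python
import math

def combined_standard_uncertainty(uncertainties):
    """Combine independent standard uncertainties in quadrature."""
    return math.sqrt(sum(u**2 for u in uncertainties))

# Two illustrative standard uncertainties, in the same units (cm)
# and at the same level of confidence, as the text requires.
u_c = combined_standard_uncertainty([0.03, 0.04])  # ≈ 0.05 cm
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95%)
print(f"u_c = {u_c:.3f} cm, U = {U:.3f} cm at k = 2")
```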

Tips to reduce uncertainty in measurement

Always remember that it is usually as important to minimise uncertainties as it is to quantify them. There are some good practices which can help to reduce uncertainties in making measurements generally. A few recommendations are:

  • Ensure the measuring instruments are calibrated and use the calibration corrections which are given on the certificate.
  • Make corrections to compensate for any errors you know about.
  • Make your measurements traceable to national standards – by using calibrations which can be traced to a National Metrology Institute (e.g. NIST, NRC) via an unbroken chain of measurements.
  • Choose the best measuring instruments and use calibration facilities with the smallest uncertainties.
  • Check measurements by repeating them, or by getting someone else to repeat them from time to time or use other kinds of checks. Checking by a different method may be best of all.
  • Check calculations, especially where numbers are copied from one place to another.
  • Use an uncertainty budget to identify the worst uncertainties, and address them.
  • Be aware that in a successive chain of calibrations, the uncertainty increases at every step of the chain.
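A simple uncertainty budget can be sketched as follows (the input names and values are hypothetical; ranking is by contribution to the combined variance, so the worst contributors stand out):

```python
import math

# A hypothetical uncertainty budget: each input's standard uncertainty,
# all in the same units (cm). Contributions combine as variances, so an
# input's share of the combined variance shows where to focus effort.
budget = {
    "calibration":   0.040,
    "repeatability": 0.015,
    "temperature":   0.008,
}

total_variance = sum(u**2 for u in budget.values())
combined = math.sqrt(total_variance)

for name, u in sorted(budget.items(), key=lambda kv: kv[1], reverse=True):
    share = u**2 / total_variance
    print(f"{name:14s} u = {u:.3f}  share of variance = {share:.0%}")
print(f"combined standard uncertainty: {combined:.4f}")
```

Ranking by share of variance (rather than by the raw uncertainties) reflects how quadrature combination works: the largest input usually dominates, so reducing it pays off far more than polishing the small ones.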

Overall, use recognised good practices in measurements, for example:

  • Follow the maker’s instructions for using and maintaining instruments.
  • Use experienced staff and provide training for measurement.
  • Check or validate software, to make sure it works correctly.
  • Use rounding correctly in your calculations.
  • Keep good records of your measurements and calculations. Write down readings at the time they are made. Keep a note of any extra information that may be relevant. If past measurements are ever called into doubt, such records can be very useful.

Many more good measurement practices and calculations are detailed elsewhere; for example, the international standard ISO/IEC 17025 and the GUM (Guide to the Expression of Uncertainty in Measurement) contain a lot of useful information.

In Conclusion

Improving quality is the key to mitigating risks and reducing costs. However, measurement uncertainty is a parameter that is often overlooked, even though it is an important aspect of measurement that affects quality, costs, decisions, and risks. Accuracy needs only to be adequate to satisfy each organization’s established requirements, and measurement uncertainty should be included and acknowledged when assessing whether stated results meet those accuracy requirements. Through awareness and education, organizations and consumers can experience better quality while reducing costs and risks.