Introduction to Accuracy, Precision & Error in Measurement — Physics
A concise, practical guide explaining the differences between precision and accuracy, types of measurement errors, uncertainty, resolution and how to propagate errors — with worked examples and FAQs for quick revision.
Definitions: Precision, Accuracy, Error & Uncertainty
Precision — how close repeated measurements are to each other (repeatability / reproducibility). It reflects random errors and statistical variability.
Accuracy (Trueness) — how close a measurement is to the true or accepted value. ISO often uses trueness for systematic offset; accuracy = trueness + precision (i.e., absence of both systematic and random errors).
Error — the difference between a measured value and the true value. In experimental contexts we focus on measurement errors (systematic and random), not “mistakes” like decimal misplacement.
Uncertainty — a quantified interval around a reported value that is believed to contain the true value with a stated level of confidence. It expresses the range of expected variability in repeated measurements.
Types of measurement errors
Systematic errors (bias)
Systematic errors always push results in the same direction and do not reduce with averaging. Examples: a meter stick that is printed incorrectly, an instrument with a constant offset, mis-calibrated zero, or environmental bias (consistent temperature shift).
Random errors (statistical)
Random errors vary unpredictably from one measurement to the next (both above and below the mean). Sources include small fluctuations in environment, instrument noise, and operator variability. These can be reduced by averaging and quantified statistically (standard deviation).
Key point
Increasing sample size reduces the random error (precision improves) but does not remove systematic error — you must detect and correct systematics (calibration, different methods, or corrections).
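A minimal simulation sketch of this point (the constants `TRUE_VALUE`, `BIAS`, and `NOISE_SD` are hypothetical values chosen for illustration): averaging many readings shrinks the random scatter of the mean, but the mean converges to the biased value, not the true one.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 10.0   # the quantity we are trying to measure
BIAS = 0.5          # hypothetical systematic offset (e.g., a zero error)
NOISE_SD = 0.2      # standard deviation of the random error

def measure(n):
    """Simulate n readings carrying both random noise and a constant bias."""
    return [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD) for _ in range(n)]

for n in (10, 1000):
    data = measure(n)
    mean = statistics.mean(data)
    sem = statistics.stdev(data) / n ** 0.5   # standard error of the mean
    print(f"n={n:5d}  mean={mean:.3f}  standard error={sem:.4f}")

# The standard error shrinks as n grows, but the mean settles near
# TRUE_VALUE + BIAS, not TRUE_VALUE: averaging cannot remove the bias.
```

Only calibration (estimating and subtracting `BIAS`) would recover the true value here.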
Resolution — the smallest detectable change
Resolution is the smallest change in the measured quantity that the instrument can reliably distinguish. Two common ways to express it:
- Absolute resolution (e.g., a multimeter with resolution 1 mV).
- Digital resolution (bits) for A/D converters: the number of bits sets the smallest step size across the range (step = range / 2^bits). Higher resolution → smaller quantization steps → better ability to distinguish values.
Resolution affects precision: coarse resolution introduces quantization uncertainty. It may also contribute to systematic bias if not accounted for.
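The bit-depth relationship above can be sketched in two small helper functions (the names `adc_step` and `quantization_sd` are my own; the step/√12 figure is the standard uncertainty of a uniformly distributed quantization error):

```python
def adc_step(v_range, bits):
    """Smallest step of an ideal ADC spanning v_range with the given bit depth."""
    return v_range / (2 ** bits)

def quantization_sd(step):
    """Standard uncertainty of uniform quantization error: step / sqrt(12)."""
    return step / 12 ** 0.5

step = adc_step(10.0, 12)        # a 12-bit ADC over 0-10 V
print(step)                      # ~2.44 mV per step
print(quantization_sd(step))     # standard quantization uncertainty
```

Halving the range or adding a bit halves the step, and with it the quantization uncertainty.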
Propagation of uncertainty (basic rules)
When a result depends on measured quantities, their uncertainties combine. For common operations (assuming errors are independent and small):
| Operation | Uncertainty rule (approx.) |
|---|---|
| Add / Subtract: \(Q = A \pm B\) | \(\sigma_Q = \sqrt{\sigma_A^2 + \sigma_B^2}\) (absolute) |
| Multiply / Divide: \(Q = A\times B\) or \(Q = A/B\) | \(\frac{\sigma_Q}{\lvert Q\rvert} = \sqrt{\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2}\) (relative) |
| Power: \(Q = A^n\) | \(\frac{\sigma_Q}{\lvert Q\rvert} = \lvert n\rvert\frac{\sigma_A}{\lvert A\rvert}\) |
These come from linearizing functions (Taylor expansion) and assuming independent Gaussian-like errors. For more complex dependencies use full propagation (covariance matrix) or Monte Carlo sampling.
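The three rules in the table translate directly into small helper functions (a sketch assuming independent errors; the function names are my own):

```python
import math

def add_sub_unc(sigma_a, sigma_b):
    """Absolute uncertainty of A + B or A - B (independent errors)."""
    return math.hypot(sigma_a, sigma_b)   # sqrt(sa^2 + sb^2)

def mul_div_rel_unc(a, sigma_a, b, sigma_b):
    """Relative uncertainty of A*B or A/B (independent errors)."""
    return math.hypot(sigma_a / a, sigma_b / b)

def power_rel_unc(n, a, sigma_a):
    """Relative uncertainty of A**n."""
    return abs(n) * sigma_a / abs(a)

# Example: relative uncertainty of (2.00 ± 0.05) squared is twice
# the relative uncertainty of the base.
print(power_rel_unc(2, 2.00, 0.05))   # 0.05, i.e. 5 %
```

For correlated inputs these quadrature formulas underestimate or overestimate the result; that is when the covariance matrix or a Monte Carlo approach mentioned above becomes necessary.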
Examples & practice problems
Imagine four targets, each showing a cluster of measurement points:
- High precision & high accuracy — tight cluster around bullseye.
- High precision & low accuracy — tight cluster far from bullseye (systematic bias).
- Low precision & high accuracy — broad scatter centered near bullseye.
- Low precision & low accuracy — broad scatter away from bullseye.
1. You measure two lengths: \(A = 2.35 \pm 0.02\ \mathrm{m}\) and \(B = 1.12 \pm 0.01\ \mathrm{m}\). Find \(Q = A + B\) and its uncertainty.
2. A current \(I = 2.00 \pm 0.05\ \mathrm{A}\) flows through a resistor \(R = 10.0 \pm 0.2\ \Omega\). Find power \(P = I^2 R\) and uncertainty.
3. A digital voltmeter has range 0–10 V and 3-digit resolution (e.g., steps of 0.01 V). What is the quantization uncertainty roughly?
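A quick numerical check of the three problems, applying the propagation rules from the table above (a sketch; the variable names are my own):

```python
import math

# Problem 1: Q = A + B  ->  absolute uncertainties add in quadrature.
A, sA = 2.35, 0.02
B, sB = 1.12, 0.01
Q = A + B
sQ = math.hypot(sA, sB)
print(f"Q = {Q:.2f} ± {sQ:.2f} m")        # 3.47 ± 0.02 m

# Problem 2: P = I^2 * R  ->  relative uncertainties add in quadrature,
# with the power rule putting a factor |n| = 2 on the current term.
I, sI = 2.00, 0.05
R, sR = 10.0, 0.2
P = I ** 2 * R
rel = math.hypot(2 * sI / I, sR / R)
print(f"P = {P:.1f} ± {P * rel:.1f} W")   # 40.0 ± 2.2 W

# Problem 3: a 0.01 V step quantizes the reading to within half a step;
# treated as a uniform error, the standard uncertainty is step / sqrt(12).
step = 0.01
print(f"±{step / 2:.3f} V (half a step), "
      f"or {step / 12 ** 0.5:.4f} V standard uncertainty")
```

Note how each reported result keeps only as many decimal places as its uncertainty, matching the significant-figures advice in the FAQs below.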
FAQs
Q: Can a measurement be precise but not accurate?
A: Yes — repeated results can be tightly clustered (high precision) but offset from the true value due to systematic error (low accuracy).
Q: Does taking more measurements always improve accuracy?
A: More measurements reduce random error (improve precision) but will not remove systematic bias. You must identify and correct systematic sources (calibration, method changes) to improve accuracy.
Q: How do you decide how many significant figures to report?
A: Match the reported digits to the uncertainty: report values to about the same decimal place as the uncertainty (usually one significant digit in the uncertainty, sometimes two for better precision).
Q: When should I use relative vs absolute uncertainty?
A: Use absolute uncertainty for sums/differences and relative (percentage) for products/quotients and when comparing measurement scales.