# 4.1: Background Material

## Percentage Uncertainty

In our previous labs, we discussed the importance of measuring and accounting for uncertainty in experimental results. In those cases, we calculated uncertainty from a range of measurements of a single quantity (the landing position of a marble, and the time elapsed to fall a known distance). The uncertainty in a quantity usually has little meaning out of context. For example, if we measure the speed of an object and compute the uncertainty in that speed to be \(\pm1.0\frac{cm}{s}\), then our knowledge of this object's speed is quite impressive if we are talking about a bullet, and not so impressive if we are talking about a tortoise. It is therefore useful to define *percentage uncertainty*, which is the ratio of the *absolute uncertainty* (the standard deviation we discussed previously) to the quantity in question:

\[\text{percentage uncertainty in measured quantity } x = e_x = \dfrac{\sigma_x}{x}\times 100\%\]
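As a quick sketch in Python, with made-up speed measurements (purely illustrative, not data from this lab), the percentage uncertainty is just the sample standard deviation divided by the mean:

```python
import statistics

# Hypothetical repeated measurements of a speed, in cm/s (illustrative only).
speeds = [23.9, 24.6, 24.1, 24.4, 24.0]

mean_speed = statistics.mean(speeds)    # best estimate of the quantity
sigma = statistics.stdev(speeds)        # absolute uncertainty (standard deviation)
percent_uncertainty = sigma / mean_speed * 100

print(f"{mean_speed:.2f} cm/s ± {percent_uncertainty:.1f}%")
```

For these numbers the result is about 24.20 cm/s with roughly a 1.2% uncertainty; whether that is "good" depends, as noted above, on whether we are timing bullets or tortoises.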

## Uncertainty Propagation

In this lab, we will not measure the physical quantity in question directly. Instead, we will measure several quantities and combine them mathematically to compute what we are looking for. This poses a new problem: there will be uncertainties in all of our measurements, so how do we use these to determine the uncertainty of their combination? We will virtually never be adding or subtracting quantities, so we really only have to worry about multiplication/division and raising to powers.

Without going into the mathematical details behind it, we will simply state that whenever two uncertain quantities are multiplied or divided, the percentage uncertainty in the product or ratio is given by the *quadrature* of their individual percentage uncertainties:

\[\left.\begin{array}{c} z=x\cdot y \\ or \\ z=\dfrac{x}{y}\end{array}\right\} \;\;\;\Rightarrow\;\;\; e_z = \sqrt{e_x^2+e_y^2}\]

If the quantity we are calculating involves a power, then the rule is a little different. For example, if we have \(z=x^2\), it is *not* correct to simply use the quadrature formula above with \(x\) replacing the \(y\) (this would result in an uncertainty for \(z\) that is \(\sqrt{2}\) times the uncertainty of \(x\)). Instead, the rule is to multiply the percentage uncertainty of the measured quantity by the power:

\[z=x^n \;\;\;\Rightarrow\;\;\; e_z = n\cdot e_x\]
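Both propagation rules are easy to sketch in Python. The helper names below (`propagate_product`, `propagate_power`) are our own, not standard library functions:

```python
import math

def propagate_product(*percent_uncertainties):
    """Percentage uncertainty of a product or ratio: quadrature of the parts."""
    return math.sqrt(sum(e**2 for e in percent_uncertainties))

def propagate_power(percent_uncertainty, n):
    """Percentage uncertainty of x**n: n times the percentage uncertainty of x."""
    return n * percent_uncertainty

# z = x * y (or x / y) with e_x = 1% and e_y = 4%
print(propagate_product(1.0, 4.0))   # sqrt(17) ≈ 4.12

# z = x**2 with e_x = 3%  ->  6%, not sqrt(2) * 3%
print(propagate_power(3.0, 2))       # 6.0
```

Note that squaring is handled by the power rule, not by feeding \(x\) into the quadrature formula twice, exactly as cautioned above.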

## Weakest Link Rule

Given that this is a physics lab, we don't want to spend all of our time doing uncertainty calculations (notwithstanding the focus of these first two labs), so we will employ a shortcut that reduces our workload somewhat. In just about every case where we need to propagate uncertainty associated with multiple measurements, one of the measurements will have a significantly larger percentage uncertainty than the others. Say, for example, that we measure two quantities that are multiplied, where one of the percentage uncertainties is 1% and the other is 4%. Putting these together gives:

\[\left.\begin{array}{c} z=x\cdot y \\ e_x = 1\% \\ e_y = 4\%\end{array}\right\} \;\;\;\Rightarrow\;\;\; e_z = \sqrt{\left(1\%\right)^2+\left(4\%\right)^2} = \sqrt{17}\%=4.1\%\]

As you can see, the resulting percentage uncertainty differs very little from the larger of the two percentage uncertainties. We will therefore use a shortcut we call the *weakest link rule*: simply find the component with the largest percentage uncertainty, and use that as the total uncertainty. Note that we still need to apply the power rule shown above, however. For example, if the quantity we are computing looks like \(z=xy^2\) and \(x\) has a 4% uncertainty while the uncertainty of \(y\) is 3%, the square of \(y\) in the computation of \(z\) makes its 6% contribution the weakest link.
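A minimal Python check of the shortcut for the \(z=xy^2\) example, comparing the weakest-link estimate against the full quadrature (combining the product and power rules) that it approximates:

```python
import math

# z = x * y**2 with e_x = 4% and e_y = 3%
e_x = 4.0
e_y = 3.0

# Power rule first: y**2 contributes 2 * 3% = 6%, making it the weakest link.
weakest_link = max(e_x, 2 * e_y)

# Full quadrature of the two contributions, for comparison.
full_quadrature = math.sqrt(e_x**2 + (2 * e_y)**2)

print(weakest_link)                 # 6.0
print(round(full_quadrature, 1))    # 7.2
```

The shortcut understates the full quadrature somewhat (6% vs. about 7.2% here), which is the price of the reduced workload; it is closest when one contribution clearly dominates.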

## Comparing Two Uncertain Results

We know how to determine whether an experimental result agrees with an "exact" (theoretical) number: we just check whether the experimental result lands within its absolute uncertainty of the exact value. But something we will do in several labs is perform two different experiments to find the same value (this is most common when we don't actually have a theoretical number to check against). We will want to know whether these two experiments confirm each other's results, but how do we do this when both provide inexact answers? The answer (again, without going into details) is to take the difference of the two results (which are of course both averages of the data) and determine whether that difference lies within a certain range, defined by the quadrature of the *absolute* uncertainties of the two results:

\[range = \sqrt{\sigma_1^2+\sigma_2^2}\]

Let's look at a quick example. One experiment yields a (unitless) result of \(7.40\pm 3\%\), while the result of the other experiment is \(7.63\pm 2\%\) (perhaps these percentages were found for each experiment using the weakest link method). Do these two experiments agree to within uncertainty? Well, if we add 3% to the first result, we get 7.622, so the second result does not land within the uncertainty of the first. Conversely, the first result does not lie within the uncertainty of the second result. But the real question is whether their difference of 0.23 lands within the range:

\[\left. \begin{array}{l} 0.03\cdot 7.40 = 0.222 \\ 0.02\cdot 7.63 = 0.1526\end{array} \right\} \;\;\; range = \sqrt{0.222^2+0.1526^2} = 0.269 > 0.23\]

So these experimental results are consistent with each other to within uncertainty. Notice that if one of the results is "exact" (whether it is a theoretical answer or an experiment with very small errors), its uncertainty is zero, and the range is just the absolute uncertainty of the other experiment.
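This agreement test can be sketched as a small Python function. The name `agree_within_uncertainty` is ours, and the percentage uncertainties are passed as plain numbers (3 for 3%):

```python
import math

def agree_within_uncertainty(value1, percent1, value2, percent2):
    """Check whether two results agree, using the quadrature of their
    absolute uncertainties as the allowed range for their difference."""
    sigma1 = value1 * percent1 / 100   # absolute uncertainty of result 1
    sigma2 = value2 * percent2 / 100   # absolute uncertainty of result 2
    allowed_range = math.sqrt(sigma1**2 + sigma2**2)
    return abs(value1 - value2) <= allowed_range

# The worked example above: 7.40 ± 3% vs. 7.63 ± 2%
print(agree_within_uncertainty(7.40, 3, 7.63, 2))   # True
```

With tighter uncertainties (say 1% on each), the same pair of values would fail the test, since the allowed range shrinks below their 0.23 difference.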