Did you ever wonder how the tolerances of resistors influence a voltage divider? What happens if you put two components in parallel — does the effective tolerance increase or decrease?

I had some very wrong assumptions about the answers to these two questions and am glad I looked deeper into the topic. Some very surprising, yet satisfying and reassuring insights into error boundaries and propagation lie ahead. So buckle up and get ready for some math! (I enabled $\LaTeX$ on my blog for this, hihi)

Basics#

What is component tolerance?#

Anyone who has ever done any kind of DIY electronics project will know about the famous colorful rings on resistors that indicate their values. One needs to know that they don’t ship with the exact value that is printed on the package. They get sorted into different “classes” of quality, indicated by the last ring:

  • gold: 5%
  • silver: 10%
  • no ring: 20%

These percentages give us a range within which the actual measured value falls. The bounds are equidistant from the nominal value and describe guaranteed boundaries, meaning there is a 0% chance that the value lies outside of that range.

I am suggesting the following framework for handling error bounds:

$$\lang x \rang = \dot{x} \pm \Delta_x$$

$$\lang x \rang = \dot{x} ~ (1 \pm \delta_x)$$

with the error interval bounded as

$$\lang~.~\rang = \{ x \in \R ,~ x_{min} \le x \le x_{max} \}$$

where $x$ represents the real-world measured value, $x_{min/max}$ the respective boundary edges, and $\dot{x}$ the ideal or middle value. There are two types of errors that we will look at:

  • absolute error $\Delta_x$: distance from the middle value, with unit
  • relative error $\delta_x$: relative distance, unitless, provided in percent

We will assume that both of these are symmetric and can therefore each be represented by a single value. We can deduce that:

$$\boxed{\Delta_x = \dot{x} \cdot \delta_x}$$

Okay, let’s put this into some context. Let’s say you have a trusty 1 kΩ resistor ($\dot{R}$) and want to know the maximal ($R_{max}$) and minimal ($R_{min}$) values that you can expect from it before measuring. Taking a look at the tolerance ($\delta_R$) gives you these options:

| relative error | absolute error | min. value | max. value |
|---|---|---|---|
| 20% | 200 Ω | 800 Ω | 1200 Ω |
| 10% | 100 Ω | 900 Ω | 1100 Ω |
| 5% | 50 Ω | 950 Ω | 1050 Ω |
| 1% | 10 Ω | 990 Ω | 1010 Ω |
| 0.1% | 1 Ω | 999 Ω | 1001 Ω |

These are the guaranteed boundaries within which resistors of a given tolerance and nominal value will lie. This is also (mostly) true for other components like capacitors and inductors.
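
To make this concrete, here is a minimal Python sketch (helper names are my own) that reproduces the table above from the relation $\Delta_x = \dot{x} \cdot \delta_x$:

```python
def bounds(nominal: float, tolerance: float) -> tuple[float, float, float]:
    """Return (absolute error, min value, max value) for a nominal
    value and a relative tolerance, using Delta = nominal * delta."""
    abs_err = nominal * tolerance
    return abs_err, nominal - abs_err, nominal + abs_err

# Reproduce the table above for a 1 kOhm resistor:
for tol in (0.20, 0.10, 0.05, 0.01, 0.001):
    abs_err, r_min, r_max = bounds(1000.0, tol)
    print(f"{tol:6.1%}  {abs_err:6.1f} Ohm  {r_min:7.1f} Ohm  {r_max:7.1f} Ohm")
```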

Series Resistance#

Let’s assume we have two resistors $R_a$ and $R_b$ with their respective tolerances $\delta_{R_a}$, $\delta_{R_b}$ and ideal values $\dot{R}_a$, $\dot{R}_b$. What will the resulting ideal resistance $\dot{R}_{series}$ and its effective tolerance $\delta_{R_{series}}$ look like when we hook them up in series?

Let’s walk through it with a simple numerical example:

$$\dot{R}_a = \dot{R}_b = \dot{R} = 1000Ω$$

$$\delta_{R_a} = \delta_{R_b} = \delta_R = 1\%$$

If you will, ponder for a moment and work out a solution yourself. Come back once you think you have found one and check if you got it right.

Our core equation (note the missing dot over the R) is:

$$R_{series} = R_a + R_b = 2~R$$

I think we can safely assume that the following should be true:

$$\dot{R}_{series} = \dot{R}_a + \dot{R}_b = 2~\dot{R}$$

The takeaway here is that we can keep using our equations as usual for the ideal values: calculating the resulting ideal value is unaffected by the bounding errors!

The maximum of the possible errors is defined, and can be rearranged for the effective tolerance, like so:

$$\lang R_{series} \rang = \dot{R}_{series} ~ (1 \pm \delta_{R_{series}}) \iff \delta_{R_{series}} = \frac{\lang R_{series} \rang}{\dot{R}_{series}} \mp 1$$

This gives us a direct link to the effective tolerance, if we know the ideal series resistance and its boundaries. Let’s go through the maximum error case:

$$R_{max} = 1000Ω ~ (1 + 1\%) = 1010Ω$$

$$R_{series_{max}} = 2~R_{max} = 2020Ω$$

$$\dot{R}_{series} = 2~\dot{R} = 2000Ω$$

Keeping in mind that we assumed symmetric error bounds, we do not need to calculate the minimum error case separately.

$$\boxed{\delta_{R_{series}} = \frac{2020Ω}{2000Ω} - 1 = 1\%}$$

Hopefully, you found the same result! Let’s interpret this:

In the case of equal input values and tolerances, the effective tolerance of the series resistance stays exactly the same. This means we can carry the base tolerance of the two individual resistors into further calculations, like voltage dividers, etc.!
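
If you would rather let the computer do the pondering, here is a small brute-force sanity check in Python (a sketch, not part of the derivation): it evaluates the series resistance at the worst-case corners of both tolerance intervals, which is sufficient here because the function is monotonic in each input.

```python
from itertools import product

def series(r_a: float, r_b: float) -> float:
    return r_a + r_b

nominal, tol = 1000.0, 0.01
ideal = series(nominal, nominal)

# All extreme corners of the two tolerance intervals.
extremes = [nominal * (1 - tol), nominal * (1 + tol)]
corners = [series(r_a, r_b) for r_a, r_b in product(extremes, repeat=2)]

effective_tol = max(abs(r - ideal) for r in corners) / ideal
print(f"{effective_tol:.2%}")  # -> 1.00%
```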

As a next step, we could take a look at the parallel resistance case. However, the formula is more complex $(\frac{1}{R_p} = \frac{1}{R_a} + \frac{1}{R_b})$ and (at least for me) I wouldn’t exactly know, off the top of my head or by logically combining error bounds, how to handle the error bounds of an inverse function. We have to find another, more rigorous way of describing intervals in a symbolic or arithmetic way.

Mathematical Adventures#

This is an attempt at half-rigorously proving my thesis of functional error bound calculus. It is not 100% mathematically sound; however, I tried to stick to best principles and am happy to receive feedback from others (and professionals)!

Derivation of Functional Error Bounds#

The Taylor series expansion can approximate a function as an infinite series around a specific point $\dot{x}$:

$$f(x) \approx \sum_{n=0}^{\infin}{\frac{f^{(n)}(\dot{x})}{n!}(x - \dot{x})^n}$$

When defining $x - \dot{x} = \Delta_x \iff x = \dot{x} + \Delta_x$ and inserting this into the above equation, we get:

$$f(\dot{x} + \Delta_x) \approx \sum_{n=0}^{\infin}{\frac{f^{(n)}(\dot{x})}{n!}(\Delta_x)^n} = f(\dot{x}) + f'(\dot{x})\Delta_x + \frac{f''(\dot{x})}{2!}(\Delta_x)^2 + \dots$$

The remaining error shrinks rapidly with each additional term, so we assume that truncating at first degree $n=1$ is accurate enough. Furthermore, we introduce the symmetry of the error bounds:

$$f(\dot{x} \pm \Delta_x) \approx f(\dot{x}) \pm f'(\dot{x})\Delta_x$$

$$f(\lang x \rang) \approx f(\dot{x}) \pm f'(\dot{x})\Delta_x$$

We found an approximation for our error bound model by looking more closely at the Taylor series expansion! Notice how the approximation of $f(\lang x \rang)$ has a very familiar construction: ideal value + absolute error. This is great, because it means chaining functions $g \circ f$ will result in the same layout again!

Alright, now we can throw (almost) any function with a derivative at our error bound model and get a workable result. However, what we still have not solved is the question of how to deal with functions that have more than one input variable, or in our case, more than one error-bounded input.
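
As a quick plausibility check of the first-order model, here is a Python sketch (function names are mine) that compares the linearized bound $f'(\dot{x})\Delta_x$ with the exact interval for $f(x) = 1/x$, using a 1 kΩ resistor at 1%:

```python
def linear_bound(f, df, x0: float, dx: float) -> tuple[float, float]:
    """First-order error bounds: f(x0) -/+ |f'(x0)| * dx."""
    err = abs(df(x0)) * dx
    return f(x0) - err, f(x0) + err

def exact_bound(f, x0: float, dx: float) -> tuple[float, float]:
    """Exact interval, valid for a function monotonic on [x0-dx, x0+dx]."""
    a, b = f(x0 - dx), f(x0 + dx)
    return min(a, b), max(a, b)

f = lambda x: 1 / x
df = lambda x: -1 / x**2

print(linear_bound(f, df, 1000.0, 10.0))  # (0.00099, 0.00101)
print(exact_bound(f, 1000.0, 10.0))       # (0.00099009..., 0.00101010...)
```

The two intervals agree to within a tiny fraction of the ideal value, which is exactly the first-order behaviour we assumed.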

Derivation of Functional Multi-Variable Error Bounds#

Let’s first define new symbols:

$$\bold{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, ~ \bold{\dot{x}} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix}, ~ \bold{\Delta_x} = \begin{bmatrix} \Delta_{x_1} \\ \Delta_{x_2} \\ \vdots \\ \Delta_{x_n} \end{bmatrix} \textrm{ , where } \lang . \rang : \R^n \to \R$$

The Taylor series approximation has a lesser-known higher-dimensional generalization that we can use. Specifically, we are interested in functions $f:\R^n \to \R$ with the $n$-dimensional input vector $\bold{x}$, consisting of ideal values $\bold{\dot{x}}$ and their associated absolute errors $\bold{\Delta_x}$. We will evaluate the series at the point $\bold{\dot{x}}$ in $n$-dimensional space to first order:

$$f(\bold{x}) \approx f(\bold{\dot{x}}) + Df(\bold{\dot{x}})~(\bold{x}-\bold{\dot{x}})$$

This looks very familiar and is strongly reminiscent of the one-dimensional case! The difference is that we now deal with vectors and with $Df(\bold{\dot{x}})$, a matrix of partial derivatives, which in our case has the size $1 \times n$:

$$Df(\bold{\dot{x}}) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(\bold{\dot{x}}) & \frac{\partial f}{\partial x_2}(\bold{\dot{x}}) & \dots & \frac{\partial f}{\partial x_n}(\bold{\dot{x}}) \end{bmatrix}$$

We can apply the same trick as earlier, looking at the equation from a different point of view with $\bold{x} - \bold{\dot{x}} = \bold{\Delta_x} \iff \bold{x} = \bold{\dot{x}} + \bold{\Delta_x}$, while evaluating the matrix multiplication and inserting the symmetry of our error bounds:

$$f(\bold{\dot{x}} \pm \bold{\Delta_x}) \approx f(\bold{\dot{x}}) \pm Df(\bold{\dot{x}}) \cdot \bold{\Delta_x} = f(\bold{\dot{x}}) \pm \sum_n \frac{\partial f}{\partial x_n}(\bold{\dot{x}}) \cdot \Delta_{x_n}$$

Again, this looks very promising. The resulting equation has the layout ideal + error, where (luckily) the ideal part is just the function applied to the ideal values, and the error now consists of not just one term, but $n$ terms!

Worst-Case Assumption#

There is one more addition I’d like to make to this error bound model. In the case of $f(\bold{x}) = \frac{x_1}{x_2}$, the resulting partial derivatives end up being:

$$\frac{\partial f}{\partial x_1}(\bold{\dot{x}}) = \frac{1}{\dot{x}_2} ~\textrm{ and }~ \frac{\partial f}{\partial x_2}(\bold{\dot{x}}) = -\frac{\dot{x}_1}{{\dot{x}_2}^2}$$

With $\lang x \rang = \dot{x} \pm \Delta_x$ we now try to represent the absolute error by adding up the partial derivatives, each multiplied by its corresponding $\Delta_{x_n}$:

$$\Delta_x = \frac{1}{\dot{x}_2} \Delta_{x_1} - \frac{\dot{x}_1}{{\dot{x}_2}^2}\Delta_{x_2}$$

However, since we want to represent the maximum and minimum error bound in $\lang x \rang$, adding a negatively signed term will not bring us closer to the actual worst-case bound. Therefore, we should only ever consider the magnitude of each individual partial term, because we require symmetry by definition:

$$\boxed{f(\bold{\dot{x}} \pm \bold{\Delta_x}) \approx f(\bold{\dot{x}}) \pm \sum_n \left| \frac{\partial f}{\partial x_n}(\bold{\dot{x}}) \right| \Delta_{x_n}}$$

Finally we are able to handle more complex functions, so let’s take a look at a few examples.
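
To avoid computing the partial derivatives by hand every time, the boxed formula can also be sketched generically in Python (my own helper; the partials are estimated with central finite differences, which is good enough for a first-order model):

```python
def worst_case_error(f, x0: list[float], dx: list[float]) -> float:
    """Sum of |df/dx_n(x0)| * dx_n, the boxed worst-case absolute error."""
    total = 0.0
    for n in range(len(x0)):
        step = 1e-6 * max(abs(x0[n]), 1.0)
        hi, lo = x0.copy(), x0.copy()
        hi[n] += step
        lo[n] -= step
        partial = (f(hi) - f(lo)) / (2 * step)  # central difference
        total += abs(partial) * dx[n]
    return total

# Example: x1 / x2 with 1% error bounds on both inputs.
f = lambda x: x[0] / x[1]
print(worst_case_error(f, [1000.0, 2000.0], [10.0, 20.0]))  # ~0.01, i.e. 2% of 0.5
```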

Application#

Series Resistance (Reprise)#

Our trusty $R_{series} = R_1 + R_2$ is representable for $n=2$:

$$f(\bold{R}) = R_1 + R_2 \textrm{ , where } \bold{R} = \begin{bmatrix} R_1 \\ R_2 \end{bmatrix}$$

When applying the Taylor series approximation from above, we need to compute the magnitudes of the partial derivatives of $f$ with respect to all input variables and evaluate them at the ideal values $\bold{\dot{R}}$:

$$\left| \frac{\partial f}{\partial R_1}(\bold{\dot{R}})\right| = 1 ~\textrm{ and }~ \left| \frac{\partial f}{\partial R_2}(\bold{\dot{R}})\right| = 1$$

Hence:

$$f(\bold{\dot{R}} \pm \bold{\Delta_R}) \approx \dot{R}_1 + \dot{R}_2 \pm (\Delta_{R_1} + \Delta_{R_2})$$

With $\lang R_{series} \rang = \dot{R}_{series} \pm \Delta_{R_{series}}$ we now try to represent the absolute and relative error of our series resistance:

$$\Delta_{R_{series}} = \Delta_{R_1} + \Delta_{R_2}$$

$$\delta_{R_{series}} = \frac{\Delta_{R_{series}}}{\dot{R}_{series}} = \frac{\Delta_{R_1} + \Delta_{R_2}}{\dot{R}_1 + \dot{R}_2} = \frac{\dot{R}_1 \delta_{R_1} + \dot{R}_2 \delta_{R_2}}{\dot{R}_1 + \dot{R}_2}$$

Great! Now we have a full representation of how the error bounds will behave. Note that we also gained the behaviour for resistors with differing tolerances.

Another neat conclusion arises when we assume $\delta_{R_1} = \delta_{R_2} = \delta_R$, which is a very realistic case, since series combinations are usually built from resistors of the same tolerance class:

$$\boxed{\delta_{R_{series}} = \delta_R ~ \frac{\cancel{\dot{R}_1 + \dot{R}_2}}{\cancel{\dot{R}_1 + \dot{R}_2}} = \delta_{R_1} = \delta_{R_2}}$$

See how nicely the ideal resistance values cancel? This aligns with the example we computed by hand earlier. Let’s draw some conclusions and do the same spiel with parallel resistance in the next section.

When combining resistors of the same tolerance class, the effective tolerance stays the same! And when mixing tolerances, one can compute the effective tolerance with:

$$\delta_{R_{series}} = \frac{\dot{R}_1 \delta_{R_1} + \dot{R}_2 \delta_{R_2}}{\dot{R}_1 + \dot{R}_2}$$
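
For mixed tolerances, a one-liner in Python makes the formula tangible (the 1 kΩ / 9 kΩ pairing is just an illustrative example): the larger resistor dominates the effective tolerance.

```python
def series_tolerance(r1: float, d1: float, r2: float, d2: float) -> float:
    """Effective relative tolerance of two resistors in series."""
    return (r1 * d1 + r2 * d2) / (r1 + r2)

# 1 kOhm at 1% in series with 9 kOhm at 5%:
print(f"{series_tolerance(1000, 0.01, 9000, 0.05):.2%}")  # 4.60%
```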

Parallel Resistance#

Finally, we are able to look at the parallel case. We make the same assumptions and follow the same steps as in the series resistance derivation; only the function itself changes:

$$f(\bold{R}) = \frac{1}{\frac{1}{R_1} + \frac{1}{R_2}} \textrm{ , where } \bold{R} = \begin{bmatrix} R_1 \\ R_2 \end{bmatrix}$$

Let’s compute the partial derivatives of $f$ with respect to all input variables and evaluate their magnitudes at the ideal values $\bold{\dot{R}}$:

$$\left| \frac{\partial f}{\partial R_1}(\bold{\dot{R}}) \right| = \frac{{\dot{R}_2}^2}{\left( \dot{R}_1+\dot{R}_2 \right)^2} ~\textrm{ and }~ \left| \frac{\partial f}{\partial R_2}(\bold{\dot{R}}) \right| = \frac{{\dot{R}_1}^2}{\left( \dot{R}_1+\dot{R}_2 \right)^2}$$

With $\lang R_{parallel} \rang = \dot{R}_{parallel} \pm \Delta_{R_{parallel}}$ we represent the absolute and relative error:

$$\Delta_{R_{parallel}} = \frac{{\dot{R}_2}^2 \Delta_{R_1} + {\dot{R}_1}^2 \Delta_{R_2}}{\left( \dot{R}_1 + \dot{R}_2 \right)^2}$$

Dividing by the ideal parallel resistance $\dot{R}_{parallel} = \frac{\dot{R}_1 \dot{R}_2}{\dot{R}_1 + \dot{R}_2}$ and substituting $\Delta_{R_n} = \dot{R}_n \delta_{R_n}$ yields the relative error:

$$\delta_{R_{parallel}} = \frac{\dot{R}_2 \delta_{R_1} + \dot{R}_1 \delta_{R_2}}{\dot{R}_1 + \dot{R}_2}$$

Wow, that was quite fast. Other than computing the partial derivatives and some equation juggling, this was pretty straightforward. Let’s assume $\delta_{R_1} = \delta_{R_2} = \delta_R$ again:

$$\boxed{\delta_{R_{parallel}} = \delta_R ~ \frac{\cancel{\dot{R}_1 + \dot{R}_2}}{\cancel{\dot{R}_1 + \dot{R}_2}} = \delta_{R_1} = \delta_{R_2}}$$

And again, we find that the effective tolerance is completely independent of the ideal resistor values and stays the same (in most real-life scenarios)! This was a very surprising find for me. I always assumed that the parallel case might magically improve the effective tolerances somehow. Now I know better.

When combining resistors of the same tolerance class, the effective tolerance stays the same! And when mixing tolerances, one can compute the effective tolerance with:

$$\delta_{R_{parallel}} = \frac{\dot{R}_2 \delta_{R_1} + \dot{R}_1 \delta_{R_2}}{\dot{R}_1 + \dot{R}_2}$$

This in turn means that trying to “fake” a higher E-series resistor by combining two lower E-series values in parallel is a valid strategy, with the benefit of not giving up on accuracy! For that purpose I made a calculator that explores exactly this principle.
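
As a flavor of what such a calculator does, here is a minimal Python sketch (the E24 base values are standard; the target value and decade range are purely illustrative) that searches for the parallel pair closest to a non-standard target:

```python
def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

# E24 base values, spread over three decades (100 Ohm .. 91 kOhm).
e24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]
values = [b * 10**d for d in (2, 3, 4) for b in e24]

target = 3140.0  # a value no single E24 resistor provides
r1, r2 = min(((a, b) for a in values for b in values),
             key=lambda p: abs(parallel(*p) - target))
print(r1, r2, parallel(r1, r2))
# The pair keeps the base tolerance class, as derived above.
```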

Voltage Divider#

Why not look at another obvious test subject before we wrap up? I’d like to explore a realistic approach of thinking in terms of voltages that are affected by component tolerances. Let’s think about how to set up the function and take a look at the following diagram:

[Schematic: a voltage divider with input $V_i$ across $R_1$ and $R_2$ in series; the output $V_o$ is taken across $R_2$.]

The ratio of the output to input voltages is the most convenient way to find the formula:

$$\frac{V_o}{V_i} = \frac{R_2}{R_1 + R_2}$$

We will assume for this example that the input voltage $V_i$ does not have an error bound and is merely a constant:

$$f(\bold{R}) = V_i \frac{R_2}{R_1 + R_2} \textrm{ , where } \bold{R} = \begin{bmatrix} R_1 \\ R_2 \end{bmatrix} \textrm{ and } V_i = const$$

Let’s compute the partial derivatives of $f$ with respect to all input variables and evaluate their magnitudes at the ideal values $\bold{\dot{R}}$:

$$\left| \frac{\partial f}{\partial R_1}(\bold{\dot{R}})\right| = V_i \frac{\dot{R}_2}{\left( \dot{R}_1+\dot{R}_2 \right)^2} ~\textrm{ and }~ \left| \frac{\partial f}{\partial R_2}(\bold{\dot{R}})\right| = V_i \frac{\dot{R}_1}{\left( \dot{R}_1+\dot{R}_2 \right)^2}$$

With $\lang V_o \rang = \dot{V}_o \pm \Delta_{V_o}$ we represent the absolute and relative error:

$$\Delta_{V_o} = V_i ~ \frac{\dot{R}_2 \Delta_{R_1} + \dot{R}_1 \Delta_{R_2}}{\left( \dot{R}_1 + \dot{R}_2 \right)^2}$$

Dividing by the ideal output voltage $\dot{V}_o = V_i \frac{\dot{R}_2}{\dot{R}_1 + \dot{R}_2}$ gives the relative error:

$$\delta_{V_o} = \frac{\dot{R}_1}{\dot{R}_1 + \dot{R}_2}(\delta_{R_1} + \delta_{R_2})$$

Let’s assume $\delta_{R_1} = \delta_{R_2} = \delta_R$ again, with $r = \frac{\dot{R}_2}{\dot{R}_1}$:

$$\delta_{V_o} = \frac{2}{1+r}~\delta_R$$

It would also be very useful to know how $\delta_R$ affects the absolute error, since that is what we usually care about when it comes to voltages:

$$\boxed{\Delta_{V_o} = V_i ~ \frac{2r}{(1+r)^2}~\delta_R}$$

For the case of $r=1$ the absolute error reaches its maximum of $\Delta_{V_o} = V_i \frac{\delta_R}{2}$, and it shrinks as the ratio moves away from 1 in either direction. That makes the equation a handy rule of thumb for a very rough estimate of how badly a voltage divider might perform!
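
To close the loop, here is one more Python cross-check (a sketch with illustrative values): the rule-of-thumb estimate against the brute-force corners for a 5 V supply and two 1 kΩ 1% resistors.

```python
def divider(v_i: float, r1: float, r2: float) -> float:
    return v_i * r2 / (r1 + r2)

v_i, r_nom, tol = 5.0, 1000.0, 0.01  # two 1 kOhm 1% resistors, ratio r = 1
ideal = divider(v_i, r_nom, r_nom)   # 2.5 V

# Rule of thumb at r = 1: Delta_Vo = V_i * delta_R / 2.
print("estimate:", v_i * tol / 2)  # 0.025 V

# Brute force over the four worst-case corners of both intervals.
extremes = [r_nom * (1 - tol), r_nom * (1 + tol)]
deviation = max(abs(divider(v_i, a, b) - ideal)
                for a in extremes for b in extremes)
print("corners: ", deviation)  # 0.025 V
```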

Conclusion#

This little adventure into the math world brought us a very promising generalization for approximating error bounds of arbitrary functions. It has proven to be accurate (within its limits, i.e. the right order of magnitude) when compared to a traditional min-max worst-case study for a specific set of values.

In my opinion, the most interesting part is that we can now study the behaviour of error bounds and see based on which input variables they improve or worsen. The prospect of being able to analyze any circuit with this new method excites me.

I deliberately left out a statistical analysis of the error bound’s distribution function. That is a possible point of improvement. If we can account for correlation effects, the error bound might become smaller overall. However, with statistical probability also comes a certain risk of outliers, a risk which my method does not carry.

As a final touch, I compiled a small table of basic operations and their resulting error bounds.

| Operation | Absolute error | Relative error |
|---|---|---|
| $x_1+x_2$ | $\Delta_{x_1} + \Delta_{x_2}$ | $(\dot{x}_1 \delta_{x_1} + \dot{x}_2 \delta_{x_2})/(\dot{x}_1 + \dot{x}_2)$ |
| $x_1-x_2~~(x_1>x_2)$ | $\Delta_{x_1} + \Delta_{x_2}$ | $(\dot{x}_1 \delta_{x_1} + \dot{x}_2 \delta_{x_2})/(\dot{x}_1 - \dot{x}_2)$ |
| $x_1\cdot x_2$ | $\dot{x}_1 \Delta_{x_2} + \dot{x}_2 \Delta_{x_1}$ | $\delta_{x_1} + \delta_{x_2}$ |
| $x_1/x_2$ | $(\dot{x}_1 \Delta_{x_2} + \dot{x}_2 \Delta_{x_1})/\dot{x}_2^2$ | $\delta_{x_1} + \delta_{x_2}$ |
| $1/x$ | $\Delta_x/\dot{x}^2$ | $\delta_x$ |
| $x^n~~(n>0)$ | $n \cdot \dot{x}^{n-1} \Delta_x$ | $n \cdot \delta_x$ |
| $\sqrt{x}$ | $\Delta_x/(2\sqrt{\dot{x}})$ | $\delta_x/2$ |
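
The table translates almost directly into code. Here is a tiny value-with-error class in Python (entirely my own sketch of the worst-case rules above, not any standard library) that makes the operations composable:

```python
from dataclasses import dataclass

@dataclass
class Bound:
    """An ideal value with a symmetric worst-case absolute error."""
    val: float
    err: float

    def __add__(self, other: "Bound") -> "Bound":
        return Bound(self.val + other.val, self.err + other.err)

    def __mul__(self, other: "Bound") -> "Bound":
        return Bound(self.val * other.val,
                     abs(self.val) * other.err + abs(other.val) * self.err)

    def inv(self) -> "Bound":
        return Bound(1 / self.val, self.err / self.val**2)

# Parallel resistance via 1/(1/R1 + 1/R2), two 1 kOhm 1% resistors:
r1, r2 = Bound(1000.0, 10.0), Bound(1000.0, 10.0)
r_p = (r1.inv() + r2.inv()).inv()
print(r_p.val, r_p.err / r_p.val)  # 500.0, 0.01 -> the tolerance stays at 1%
```

Chaining the first-order rules reproduces the parallel resistance result from the derivation above: 500 Ω at an effective 1%.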