Demystifying Electrical Component Tolerances
Did you ever wonder how the tolerances of resistors influence a voltage divider? What happens if you put two components in parallel — does the effective tolerance increase or decrease?
I had some very wrong assumptions about the answers to these two questions and am glad I looked deeper into the topic. Some very surprising, yet satisfying and reassuring insights into error boundaries and propagation lie ahead. So buckle up and get ready for some math! (I enabled math rendering on my blog for this, hihi)
Basics
What is component tolerance?
Anyone who has ever done any kind of DIY electronics project will know about the famous colorful rings on resistors that indicate their values. One needs to know that they don’t ship with the exact value that is printed on the package. They get sorted into different “classes” of quality, indicated by the last ring:
- gold: 5%
- silver: 10%
- no ring: 20%
These percentages give us a range within which the actual measured value falls. The bounds are equidistant from the supposed value and describe guaranteed boundaries, meaning there is a 0% chance that the value lies outside of that range.
I am suggesting the following framework for handling error bounds:

$$R = \bar{R} \pm \Delta R$$

with the error interval bounded as

$$R^- \leq R \leq R^+$$

where $R$ represents the real-world measured value, $R^-$ and $R^+$ the respective boundary edges and $\bar{R}$ the ideal or middle value. There are two types of errors that we will look at:
- absolute error $\Delta R$: distance from the middle value, with unit
- relative error $\delta R = \Delta R / \bar{R}$: relative distance, unitless, provided in percent

We will assume that both of these are symmetric and can therefore be represented by a single value. We can deduce that:

$$R^+ = \bar{R} + \Delta R = \bar{R}\,(1 + \delta R) \qquad R^- = \bar{R} - \Delta R = \bar{R}\,(1 - \delta R)$$
Okay, let’s put this into some context. Let’s say you have a trusty 1 kΩ resistor ($\bar{R} = 1000\,\Omega$) and want to know the maximal ($R^+$) and minimal ($R^-$) values you can expect from it before measuring. Taking a look at the tolerance ($\delta R$) gives you these options:
| relative error | absolute error | min. value | max. value |
|---|---|---|---|
| 20% | 200 Ω | 800 Ω | 1200 Ω |
| 10% | 100 Ω | 900 Ω | 1100 Ω |
| 5% | 50 Ω | 950 Ω | 1050 Ω |
| 1% | 10 Ω | 990 Ω | 1010 Ω |
| 0.1% | 1 Ω | 999 Ω | 1001 Ω |
These are the guaranteed boundaries in which resistors with a given tolerance and nominal value can lie within. This is also (mostly) true for other components like capacitors and inductors.
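As a quick sketch (Python here purely for illustration), the bounds follow directly from the nominal value and the relative tolerance:

```python
# Guaranteed bounds of a component value under the symmetric tolerance model.
def bounds(nominal, tolerance):
    """Return (min, max) for a nominal value and a relative tolerance (e.g. 0.05 for 5%)."""
    delta = nominal * tolerance  # absolute error, same unit as the nominal value
    return nominal - delta, nominal + delta

# Reproduce the 1 kOhm table above:
for tol in (0.20, 0.10, 0.05, 0.01, 0.001):
    lo, hi = bounds(1000, tol)
    print(f"{tol:6.1%}: {lo:7.1f} Ohm .. {hi:7.1f} Ohm")
```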
Series Resistance
Let’s assume we have two resistors $R_1$ and $R_2$ with their respective tolerances $\delta R_1$, $\delta R_2$ and ideal values $\bar{R}_1$, $\bar{R}_2$. What will the resulting ideal resistance $\bar{R}_s$ and its effective tolerance $\delta R_s$ be like, when we hook them up in series?
Let’s walk through it with a simple numerical example: two equal resistors with $\bar{R}_1 = \bar{R}_2 = 1\,\text{k}\Omega$ and $\delta R_1 = \delta R_2 = 10\%$.
If you will, ponder for a second and think of a solution yourself. Come back once you think you found a solution and check if you got it right.
Our core equation (note the missing bar over the $R$, meaning we are dealing with measured values here) is:

$$R_s = R_1 + R_2$$

I think we can safely assume that the following should be true:

$$\bar{R}_s = \bar{R}_1 + \bar{R}_2$$
The takeaway here is that we can continue using our usual equations for the ideal values: the error bounds do not change how the resulting ideal value is calculated!
The maxima of the possible errors would be defined and further solved like so:

$$R_s^+ = R_1^+ + R_2^+ = \bar{R}_1\,(1 + \delta R_1) + \bar{R}_2\,(1 + \delta R_2)$$

This gives us a direct link to the effective tolerance, if we know the ideal series resistance and its boundaries. Let’s go through the maximum error case:

$$\delta R_s = \frac{R_s^+ - \bar{R}_s}{\bar{R}_s} = \frac{\bar{R}_1\,\delta R_1 + \bar{R}_2\,\delta R_2}{\bar{R}_1 + \bar{R}_2}$$

While keeping in mind that we assumed equal error bounds, the calculation of the minimum error is not needed because of symmetry.
Hopefully, you found the same result! Let’s interpret this:
In the case of same input values and tolerances, the effective tolerance for series resistance stayed exactly the same. This means we can assume the base tolerance of the two individual resistors for further calculations, like voltage dividers, etc.!
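To double-check this numerically, here is a small brute-force sketch (the 1 kΩ / 10% values are assumed example numbers):

```python
# Brute-force worst case for two resistors in series:
# evaluate R1 + R2 at every combination of tolerance bounds.
nominal, tol = 1000.0, 0.10                      # assumed: two 1 kOhm, 10% parts
extremes = (nominal * (1 - tol), nominal * (1 + tol))

series = [r1 + r2 for r1 in extremes for r2 in extremes]

ideal = 2 * nominal
effective_tol = (max(series) - ideal) / ideal    # relative worst-case deviation
print(f"effective tolerance: {effective_tol:.4f}")
```

The effective tolerance comes out at the original 10%, matching the claim that a series connection keeps the tolerance class.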
As a next step, we could take a look at the parallel resistance case. However, the formula is more complex and (at least for me) I wouldn’t know off the top of my head, or by logically combining error bounds, how to handle the error bounds of an inverse function. We have to find another, more rigorous way of describing intervals in a symbolic or arithmetic way.
Mathematical Adventures
This is an attempt at half-rigorously proving my thesis of functional error bound calculus. It is not 100% mathematically sound, however I tried to stick to best principles and am happy to receive feedback from others (and professionals)!
Derivation of Functional Error Bounds
The Taylor series expansion can approximate a function as an infinite series around a specific point $a$:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n$$

When defining $x = \bar{x} \pm \Delta x$ and inserting $a = \bar{x}$ in the above equation, we get:

$$f(\bar{x} \pm \Delta x) = f(\bar{x}) \pm f'(\bar{x})\,\Delta x + \frac{f''(\bar{x})}{2!}\,\Delta x^2 \pm \dots$$

The error shrinks very strongly with each additional term, so we assume that the first degree is accurate enough. Furthermore, we introduce the symmetry of the error bounds:

$$f(\bar{x} \pm \Delta x) \approx f(\bar{x}) \pm \Delta f \qquad \text{with} \qquad \Delta f = |f'(\bar{x})|\,\Delta x$$
We found an approximation for our error bound model by looking at the Taylor series expansion more closely! Notice how the approximation of $f(x)$ has a very familiar construction: ideal value + absolute error. This is great, because it means chaining functions will result in the same layout again!
Alright, now we can throw (almost) any function with a derivative at our error bound model and get a workable result. However, what we still have not solved is the question of how to deal with functions that take more than one input variable, or in our case, more than one error-bounded input.
Derivation of Functional Multi-Variable Error Bounds
Let’s first define new symbols: the input vector $\mathbf{x} = (x_1, \dots, x_n)$, its ideal values $\bar{\mathbf{x}} = (\bar{x}_1, \dots, \bar{x}_n)$ and the associated absolute errors $\Delta\mathbf{x} = (\Delta x_1, \dots, \Delta x_n)$.
The Taylor series approximation has a lesser known higher-dimensional generalization that we can use. Specifically, we are interested in functions $f: \mathbb{R}^n \to \mathbb{R}$, with the $n$-dimensional input vector $\mathbf{x}$, consisting of ideal values $\bar{x}_i$ and their associated absolute errors $\Delta x_i$. We will evaluate the series at the point $\bar{\mathbf{x}}$ in $n$-dimensional space for the first order:

$$f(\bar{\mathbf{x}} + \Delta\mathbf{x}) \approx f(\bar{\mathbf{x}}) + J_f(\bar{\mathbf{x}})\cdot\Delta\mathbf{x}$$

This looks very familiar and strongly reminds of the one-dimensional case! The difference is that we deal with the vectors $\bar{\mathbf{x}}$ and $\Delta\mathbf{x}$, and with $J_f$, a matrix of partial derivatives, which in our case has the size $1 \times n$:

$$J_f = \begin{pmatrix} \frac{\partial f}{\partial x_1} & \cdots & \frac{\partial f}{\partial x_n} \end{pmatrix}$$

We can apply the same trick as earlier, where we look at the equation from a different point of view with $\bar{\mathbf{x}} \pm \Delta\mathbf{x}$ while evaluating the matrix multiplication and inserting the symmetry of our error bounds:

$$f(\bar{\mathbf{x}} \pm \Delta\mathbf{x}) \approx f(\bar{\mathbf{x}}) \pm \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(\bar{\mathbf{x}})\,\Delta x_i$$
Again, this looks very promising. The resulting equation has the layout ideal + error, where (luckily) the ideal part is just the function itself applied, and the error now consists not of one term, but of $n$ terms!
Worst-Case Assumption
There is one more addition I’d like to make to this error bound model. In the case of, for example, $f(x_1, x_2) = x_1 - x_2$, the resulting partial derivatives end up being:

$$\frac{\partial f}{\partial x_1} = 1 \qquad \frac{\partial f}{\partial x_2} = -1$$

With these we now try to represent the absolute error by adding up the partial derivatives with the corresponding $\Delta x_i$:

$$\Delta f = 1 \cdot \Delta x_1 + (-1) \cdot \Delta x_2 = \Delta x_1 - \Delta x_2$$

However, since we want to represent the maximum and minimum error bound in $f(\bar{\mathbf{x}}) \pm \Delta f$, adding a negatively signed term will not bring us closer to the actual worst-case bound. Therefore, we should always consider only the magnitude of each individual partial term, because we require symmetry by definition:

$$\Delta f = \sum_{i=1}^{n} \left|\frac{\partial f}{\partial x_i}(\bar{\mathbf{x}})\right| \Delta x_i$$
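The worst-case rule can be turned into a small generic helper. This is my own sketch, estimating the partial derivatives numerically with central differences:

```python
# Worst-case error propagation: Δf = Σ |∂f/∂xᵢ(x̄)| · Δxᵢ,
# with the partial derivatives estimated by central differences.
def propagate(f, ideals, abs_errors, h=1e-6):
    total = 0.0
    for i, dx in enumerate(abs_errors):
        step = h * max(abs(ideals[i]), 1.0)
        hi = list(ideals); hi[i] += step
        lo = list(ideals); lo[i] -= step
        partial = (f(*hi) - f(*lo)) / (2 * step)
        total += abs(partial) * dx           # magnitudes only: worst case
    return total

# Two 1 kOhm, 5% resistors in series -> 100 Ohm absolute error:
print(propagate(lambda r1, r2: r1 + r2, [1000.0, 1000.0], [50.0, 50.0]))
```

Note that a subtraction `r1 - r2` would give the same 100 Ω bound, because only the magnitudes of the partials enter the sum.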
Finally we are able to handle more complex functions, so let’s take a look at a few examples.
Application
Series Resistance (Reprise)
Our trusty series resistance is representable as a function $f$:

$$R_s = f(R_1, R_2) = R_1 + R_2$$

When applying the Taylor series approximation from above, we need to compute the magnitudes of the partial derivatives of $f$ with respect to all input variables and evaluate them at the ideal values $\bar{R}_1, \bar{R}_2$:

$$\left|\frac{\partial f}{\partial R_1}(\bar{R}_1, \bar{R}_2)\right| = 1 \qquad \left|\frac{\partial f}{\partial R_2}(\bar{R}_1, \bar{R}_2)\right| = 1$$

Hence:

$$\Delta R_s = 1 \cdot \Delta R_1 + 1 \cdot \Delta R_2 = \Delta R_1 + \Delta R_2$$

With $\Delta R_i = \bar{R}_i\,\delta R_i$ we can now represent the absolute and relative error of our series resistance:

$$\Delta R_s = \bar{R}_1\,\delta R_1 + \bar{R}_2\,\delta R_2 \qquad \delta R_s = \frac{\Delta R_s}{\bar{R}_s} = \frac{\bar{R}_1\,\delta R_1 + \bar{R}_2\,\delta R_2}{\bar{R}_1 + \bar{R}_2}$$
Great! Now we have a full representation of how the error bounds will behave. Note that we also gained the behaviour for resistors with differing tolerances.
Another neat conclusion arises when we assume $\delta R_1 = \delta R_2 = \delta R$, which is a very realistic case, since series resistance is usually built from the same tolerance class:

$$\delta R_s = \frac{\bar{R}_1\,\delta R + \bar{R}_2\,\delta R}{\bar{R}_1 + \bar{R}_2} = \delta R$$

See how nicely the ideal resistance values cancel? This aligns with the example computed by hand earlier. Let’s draw some conclusions and do the same spiel with parallel resistance in the next section.
When combining resistors of the same tolerance class, the effective tolerance stays the same! And when mixing tolerances, one can compute the effective tolerance with:

$$\delta R_s = \frac{\bar{R}_1\,\delta R_1 + \bar{R}_2\,\delta R_2}{\bar{R}_1 + \bar{R}_2}$$
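The mixed-tolerance formula is easy to play with; a tiny sketch (the example values are my own):

```python
# Effective tolerance of two resistors in series with mixed tolerance classes:
# δRs = (R̄1·δR1 + R̄2·δR2) / (R̄1 + R̄2)
def series_tolerance(r1, tol1, r2, tol2):
    return (r1 * tol1 + r2 * tol2) / (r1 + r2)

print(series_tolerance(1000, 0.05, 1000, 0.05))  # same class: stays at 5%
print(series_tolerance(1000, 0.01, 9000, 0.10))  # dominated by the big 10% part
```

The result is a resistance-weighted average, so the larger resistor’s tolerance dominates.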
Parallel Resistance
Finally, we are able to look at the parallel case. We make the same assumptions and take the same steps as in the series resistance derivation; only the function itself changes:

$$R_p = f(R_1, R_2) = \frac{R_1\,R_2}{R_1 + R_2}$$

Let’s compute the magnitudes of the partial derivatives of $f$ with respect to all input variables and evaluate them at the ideal values $\bar{R}_1, \bar{R}_2$:

$$\left|\frac{\partial f}{\partial R_1}(\bar{R}_1, \bar{R}_2)\right| = \frac{\bar{R}_2^2}{(\bar{R}_1 + \bar{R}_2)^2} \qquad \left|\frac{\partial f}{\partial R_2}(\bar{R}_1, \bar{R}_2)\right| = \frac{\bar{R}_1^2}{(\bar{R}_1 + \bar{R}_2)^2}$$

With $\bar{R}_p = \frac{\bar{R}_1\,\bar{R}_2}{\bar{R}_1 + \bar{R}_2}$ we represent the absolute and relative error:

$$\Delta R_p = \frac{\bar{R}_2^2\,\Delta R_1 + \bar{R}_1^2\,\Delta R_2}{(\bar{R}_1 + \bar{R}_2)^2} \qquad \delta R_p = \frac{\Delta R_p}{\bar{R}_p} = \frac{\bar{R}_2\,\delta R_1 + \bar{R}_1\,\delta R_2}{\bar{R}_1 + \bar{R}_2}$$
Wow, that was quite fast. Other than computing the partial derivatives and some equation juggling, this was pretty straightforward. Let’s assume $\delta R_1 = \delta R_2 = \delta R$ again:

$$\delta R_p = \frac{\bar{R}_2\,\delta R + \bar{R}_1\,\delta R}{\bar{R}_1 + \bar{R}_2} = \delta R$$

And again, we find that the effective tolerance is completely independent of the ideal resistor values and stays the same (in the most common real-life scenario)! This was a very surprising find for me. I always assumed that the parallel case might magically improve the effective tolerances somehow. Now I know better.
When combining resistors of the same tolerance class, the effective tolerance stays the same! And when mixing tolerances, one can compute the effective tolerance with:

$$\delta R_p = \frac{\bar{R}_2\,\delta R_1 + \bar{R}_1\,\delta R_2}{\bar{R}_1 + \bar{R}_2}$$
This in turn means that trying to “fake” a higher E-series resistor by combining two lower E-series values in parallel is a valid strategy, with the benefit of not giving up on accuracy! For that purpose I made a calculator that explores exactly this principle.
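To convince myself, here is a brute-force corner check against the derived formula (my own sketch; the 2 kΩ values are assumed):

```python
import itertools

# Parallel resistance: exhaustive check over all tolerance-bound corners,
# compared with the derived formula δRp = (R̄2·δR1 + R̄1·δR2) / (R̄1 + R̄2).
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def corner_tolerance(r1, tol1, r2, tol2):
    ideal = parallel(r1, r2)
    vals = [parallel(r1 * (1 + s1 * tol1), r2 * (1 + s2 * tol2))
            for s1, s2 in itertools.product((-1, 1), repeat=2)]
    return (max(vals) - ideal) / ideal

def formula_tolerance(r1, tol1, r2, tol2):
    return (r2 * tol1 + r1 * tol2) / (r1 + r2)

# "Faking" a 1 kOhm part from two 2 kOhm, 5% resistors in parallel:
print(corner_tolerance(2000, 0.05, 2000, 0.05))   # stays in the 5% class
print(formula_tolerance(2000, 0.05, 2000, 0.05))
```

The corner check is exact here because $R_p$ is monotonic in each resistor; for equal tolerances it coincides with the first-order formula.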
Voltage Divider
Why not look at another obvious test subject before we wrap it up? I’d like to explore a realistic approach of thinking in terms of voltages that are affected by component tolerances. Let’s think about how to set up the function and take a look at the following diagram:
The ratio of the output to input voltage is the most convenient way to find the formula:

$$\frac{V_{out}}{V_{in}} = \frac{R_2}{R_1 + R_2}$$

We will assume for this example that the input voltage $V_{in}$ does not have an error bound and is merely a constant:

$$V_{out} = f(R_1, R_2) = V_{in}\,\frac{R_2}{R_1 + R_2}$$

Let’s compute the magnitudes of the partial derivatives of $f$ with respect to all input variables and evaluate them at the ideal values $\bar{R}_1, \bar{R}_2$:

$$\left|\frac{\partial f}{\partial R_1}(\bar{R}_1, \bar{R}_2)\right| = V_{in}\,\frac{\bar{R}_2}{(\bar{R}_1 + \bar{R}_2)^2} \qquad \left|\frac{\partial f}{\partial R_2}(\bar{R}_1, \bar{R}_2)\right| = V_{in}\,\frac{\bar{R}_1}{(\bar{R}_1 + \bar{R}_2)^2}$$

With $\bar{V}_{out} = V_{in}\,\frac{\bar{R}_2}{\bar{R}_1 + \bar{R}_2}$ we represent the absolute and relative error:

$$\Delta V_{out} = V_{in}\,\frac{\bar{R}_2\,\Delta R_1 + \bar{R}_1\,\Delta R_2}{(\bar{R}_1 + \bar{R}_2)^2} \qquad \delta V_{out} = \frac{\Delta V_{out}}{\bar{V}_{out}} = \frac{\bar{R}_1\,(\delta R_1 + \delta R_2)}{\bar{R}_1 + \bar{R}_2}$$
Let’s assume $\delta R_1 = \delta R_2 = \delta R$ again, with the divider ratio $r = \frac{\bar{V}_{out}}{V_{in}} = \frac{\bar{R}_2}{\bar{R}_1 + \bar{R}_2}$:

$$\delta V_{out} = \frac{2\,\bar{R}_1\,\delta R}{\bar{R}_1 + \bar{R}_2} = 2\,(1 - r)\,\delta R$$

It would also be very useful to know how $r$ affects the absolute error, since that is what we usually care about when it comes to voltages:

$$\Delta V_{out} = \delta V_{out}\,\bar{V}_{out} = 2\,(1 - r)\,\delta R \cdot r\,V_{in} = 2\,\delta R\,V_{in}\,r\,(1 - r)$$

For the case of $r = \frac{1}{2}$ the absolute error reaches its maximum of $\Delta V_{out} = \frac{\delta R\,V_{in}}{2}$, and from there the absolute error becomes smaller and smaller in whichever direction the ratio moves, so one can remember that equation for a very rough estimate of how badly a voltage divider might perform!
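The last formula invites a quick exploration; a sketch with assumed example values (5 V supply, 5% resistors):

```python
# Absolute divider error ΔVout = 2·δR·Vin·r·(1 − r),
# which peaks at the symmetric ratio r = 1/2.
def divider_abs_error(v_in, ratio, tol):
    return 2 * tol * v_in * ratio * (1 - ratio)

v_in, tol = 5.0, 0.05   # assumed: 5 V input, 5% resistors
for r in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"r = {r:.2f}: up to +/-{divider_abs_error(v_in, r, tol) * 1000:.0f} mV")
```

At $r = \frac{1}{2}$ this gives $\pm 125\,\text{mV}$ on a 5 V supply, worth keeping in mind before trusting a divider as a reference.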
Conclusion
This little adventure into the math world brought us a very promising generalization of approximating error bounds in arbitrary functions. It has proven to be accurate (within its limits, i.e. order of magnitude) when compared to a traditional min-max worst case study for a specific set of values.
In my opinion, the most interesting part is that we can now study the behaviour of error bounds and see which input variables improve or worsen them. The prospect of being able to analyze any circuit with this new method excites me.
I deliberately left out a statistical analysis of the error bounds’ distribution functions. That is a possible point of improvement: if we can account for correlation effects, the error bound might become smaller overall. However, with statistical probability also comes a certain risk of outliers, a risk that my worst-case method does not carry.
As a final touch, I compiled a small table of basic operations and their resulting error bounds.

| Operation | Absolute error | Relative error |
|---|---|---|
| $f = x_1 + x_2$ | $\Delta x_1 + \Delta x_2$ | $\frac{\bar{x}_1\,\delta x_1 + \bar{x}_2\,\delta x_2}{\bar{x}_1 + \bar{x}_2}$ |
| $f = x_1 - x_2$ | $\Delta x_1 + \Delta x_2$ | $\frac{\bar{x}_1\,\delta x_1 + \bar{x}_2\,\delta x_2}{\lvert\bar{x}_1 - \bar{x}_2\rvert}$ |
| $f = x_1 \cdot x_2$ | $\lvert\bar{x}_2\rvert\,\Delta x_1 + \lvert\bar{x}_1\rvert\,\Delta x_2$ | $\delta x_1 + \delta x_2$ |
| $f = x_1 / x_2$ | $\frac{\Delta x_1}{\lvert\bar{x}_2\rvert} + \frac{\lvert\bar{x}_1\rvert\,\Delta x_2}{\bar{x}_2^2}$ | $\delta x_1 + \delta x_2$ |