30 July, 2017

Uncertainty Estimation (III): Resolution, Drift, Temperature & Measured Samples

6. Examples of uncertainty calculation: resolution, drift, influence of temperature on measurements



1st example: Resolution and Rounding


Evaluate the contribution to the uncertainty due to the resolution of a measuring instrument; for example, the resolution of a millesimal micrometer (r = 0.001 mm) in the measurement of a standard gauge block.

For both analog and digital equipment, the use of an instrument with resolution r assumes that the measured value lies within ±r/2 of the indication with equal probability throughout this range; in this case we have a rectangular distribution with a half-width of 0.5 µm.

Therefore, its contribution to the uncertainty is assessed as:

u(res) = (r/2)/√3 = (0.5 µm)/√3 ≈ 0.29 µm

Both digital and analogue resolution, as well as rounding, give rise to an uncertainty contribution that can be analyzed by considering a rectangular distribution with limit values of ±r/2.

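The resolution contribution above can be checked with a short sketch (the values are those of the example; the variable names are ours):

```python
import math

r = 0.001  # micrometer resolution, mm
a = r / 2  # half-width of the rectangular distribution, mm

# Standard uncertainty of a rectangular distribution of half-width a
u_res = a / math.sqrt(3)

print(f"u(res) = {u_res * 1000:.3f} um")
```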
2nd example: Drift

Calibrated gauge blocks are used for the calibration of a micrometer. Assuming that the blocks are within their calibration period and that a drift of ±d = ±30 nm is allowed between calibrations, evaluate the contribution to the uncertainty due to the drift of the block value since the last calibration.

A triangular distribution can be considered for the drift, although a rectangular distribution could also be used. Taking into account that the drift between calibrations does not exceed ±d, we obtain:

u(d) = d/√6 = (30 nm)/√6 ≈ 12.2 nm

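A sketch of the triangular drift evaluation (values from the example):

```python
import math

d = 30.0  # allowed drift between calibrations, nm

# Standard uncertainty of a triangular distribution of half-width d
u_d = d / math.sqrt(6)

print(f"u(d) = {u_d:.1f} nm")
```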
3rd example: Calibration Certificate

The gauge blocks used to calibrate an external micrometer are calibrated by comparison and have a certified uncertainty of 60 nm with a coverage factor k = 2.

The typical uncertainty associated with a certified value is obtained from the certificate itself, knowing the coverage factor employed. The result in this case is:

u(cal) = U/k = (60 nm)/2 = 30 nm

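As a sketch (values from the example):

```python
U_cert = 60.0  # certified expanded uncertainty, nm
k = 2.0        # coverage factor stated in the certificate

# Typical (standard) uncertainty recovered from the certificate
u_cal = U_cert / k
print(f"u(cal) = {u_cal:.0f} nm")
```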
4th example: Reference Standard

In order to calibrate an external micrometer, a set of gauge blocks calibrated by comparison is used; their certificate states an uncertainty of 60 nm with a coverage factor k = 2. Assuming that the gauge blocks are within their calibration period and that a drift of ±30 nm is allowed between calibrations, evaluate the uncertainty associated with the use of the gauge blocks as a reference in the micrometer calibration.

It is evaluated by the law of propagation of uncertainty as the positive square root of the quadratic combination of the uncertainty due to the calibration certificate and that due to the drift of the block values over time, i.e.:

u(ref) = √(u(cal)² + u(d)²) = √((30 nm)² + (12.2 nm)²) ≈ 32.4 nm

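The quadratic combination of the two previous examples can be sketched as:

```python
import math

u_cal = 60.0 / 2            # certificate contribution, nm
u_d = 30.0 / math.sqrt(6)   # drift contribution (triangular), nm

# Quadratic combination of independent contributions
u_ref = math.sqrt(u_cal**2 + u_d**2)
print(f"u(ref) = {u_ref:.1f} nm")
```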
5th example: Temperature Influence

In the above-mentioned case, we assume that the gauge blocks are made of steel, with an expansion coefficient of α = (11.5 ± 1) × 10⁻⁶ °C⁻¹, and that the measurement is performed under laboratory conditions in which a temperature variation of ±0.025 °C is allowed during the measurement process. To measure the temperature we use sensors whose uncertainty is 0.02 °C and whose resolution is 0.001 °C. Evaluate the uncertainty associated with temperature.

In the mentioned case we can assume the mathematical model:

l = l₀ · (1 + α · ΔѲ)

where l₀ is the block length at the reference temperature of 20 °C and ΔѲ is the temperature deviation from 20 °C during the measurement.


In the mathematical model the corrections are not specified, since what is sought is to evaluate the uncertainty associated with temperature.

The sensitivity coefficients are evaluated by calculating the partial derivatives of the mathematical model with respect to the expansion coefficient (α) and with respect to the temperature deviation (ΔѲ).

With this we obtain:

c_α = ∂l/∂α = l₀ · ΔѲ    and    c_ΔѲ = ∂l/∂ΔѲ = l₀ · α

And the contribution to the uncertainty associated with the length:

u²(l) = (l₀ · ΔѲ)² · u²(α) + (l₀ · α)² · u²(ΔѲ)


In this case, we must take into account two different uncertainty contributions:

1. one due to the uncertainty associated with the expansion coefficient of the gauge blocks used, and

2. one due to the uncertainty associated with the non-exact knowledge of the temperature difference with respect to the 20 °C reference temperature at the time of calibration.

1. u(α) : Uncertainty due to the coefficient of expansion

To evaluate this uncertainty we will assume a rectangular distribution around the mean value of the expansion coefficient, whose half-width is 1 × 10⁻⁶ °C⁻¹, so the associated uncertainty is obtained as:

u(α) = (1 × 10⁻⁶ °C⁻¹)/√3 ≈ 0.58 × 10⁻⁶ °C⁻¹


2. u(Ѳ), Uncertainty due to non-exact knowledge of the temperature difference with the reference temperature (20 °C) at the calibration time

To evaluate the temperature of the blocks, Pt-100 temperature sensors with a resolution of 0.001 °C are used, and a temperature variation of ±0.025 °C is allowed during the measurement.

We will consider the following sources of uncertainty:

2.1 u(ΔѲ) Temperature difference ΔѲ during the measurement
Assuming a rectangular distribution we obtain:

u(ΔѲ) = (0.025 °C)/√3 ≈ 0.0144 °C



2.2 u(Ѳr) Resolution of the Pt-100 temperature sensor. The associated uncertainty is:

u(Ѳr) = (0.001 °C / 2)/√3 ≈ 0.0003 °C


2.3 u(Ѳc) Thermometer calibration

As specified in the calibration certificate, its expanded uncertainty is U(Ѳcal) = 0.02 °C for k = 2. Therefore:

u(Ѳc) = (0.02 °C)/2 = 0.01 °C


2.4 u(Ѳd) Thermometer drift

We can estimate a drift between calibrations of ±0.01 °C. In this case we will assume a rectangular distribution, more conservative than a triangular one:

u(Ѳd) = (0.01 °C)/√3 ≈ 0.0058 °C



Taking into account all the contributions associated with the unknown gauge-block temperature (2.1 to 2.4), we obtain:

u(Ѳ) = √(u(ΔѲ)² + u(Ѳr)² + u(Ѳc)² + u(Ѳd)²) ≈ 0.0185 °C



Substituting u(α) and u(Ѳ) into the expression for the length contribution obtained above, we obtain the uncertainty that temperature contributes to the calibrated length.


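The whole temperature budget above can be collected in a short sketch. The 100 mm block length l0 used to express the final contribution in nanometres is a hypothetical choice of ours, and the best estimate of ΔѲ is taken as zero, so only the α·u(Ѳ) term contributes to the length:

```python
import math

# Contributions to the gauge-block temperature uncertainty, in degrees C
u_dtheta = 0.025 / math.sqrt(3)        # temperature variation, rectangular
u_res    = (0.001 / 2) / math.sqrt(3)  # Pt-100 resolution, rectangular
u_cal    = 0.02 / 2                    # Pt-100 calibration certificate, k = 2
u_drift  = 0.01 / math.sqrt(3)         # Pt-100 drift, rectangular

u_theta = math.sqrt(u_dtheta**2 + u_res**2 + u_cal**2 + u_drift**2)

# Hypothetical 100 mm block; best estimate of dTheta taken as zero,
# so the length contribution reduces to l0 * alpha * u(theta)
l0 = 100e6       # nominal length, nm
alpha = 11.5e-6  # steel expansion coefficient, 1/degC
u_len = l0 * alpha * u_theta  # nm

print(f"u(theta) = {u_theta:.4f} C, u(len) = {u_len:.1f} nm")
```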


7. Steps to follow to estimate the result of a measurement

In a schematic way, and following the recommendations of the GUM, the steps to follow to estimate the uncertainty of a measurement are the following:

1) Mathematically express the relationship between the measurand Y and the input quantities Xi on which Y depends in the form Y = f(Xi). The function f must contain all magnitudes, including corrections.

2) Determine xi, the estimated value of each input quantity Xi, either from the statistical analysis of a series of observations or by other methods.

3) Evaluate the typical uncertainty u(xi) of each input estimate xi. The evaluation may be type A or type B.

4) Evaluate the covariances associated with all input estimates that are correlated.

5) Calculate the measurement result; that is, the estimate y of the measurand Y, from the functional relationship f, using for the input quantities Xi the estimates xi obtained in step 2.

6) Determine the combined standard uncertainty uc(y) of the measurement result y based on the typical uncertainties and covariance associated with the input estimates.

7) If it is necessary to give an expanded uncertainty U, whose purpose is to provide an interval [y − U, y + U] in which one can expect to find most of the distribution of values that could reasonably be attributed to the measurand Y, multiply the combined standard uncertainty uc(y) by a coverage factor k, usually 2 or 3, to obtain U = k·uc(y).
Select k considering the confidence level required for the interval (k = 2 corresponds to a confidence level of approximately 95 %).

8) Document the measurement result y together with its combined standard uncertainty uc(y) or its expanded uncertainty U.
When the result of a measurement is accompanied by the expanded uncertainty U = k·uc(y), we must:

a) fully describe the manner in which the measurand Y has been defined;

b) indicate the result of the measurement in the form Y = y ± U, and give the units of y, and of U;

c) include the relative expanded uncertainty U/|y|, |y| ≠ 0, when applicable;

d) give the value of k used to obtain U [or, to facilitate use of the result, provide both the value of k and that of uc(y)];

e) give the approximate confidence level associated with the interval y ± U, and indicate how it has been determined;

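The steps above can be sketched for a hypothetical model ρ = m/V (all numerical values and uncertainties below are illustrative assumptions, not from the article):

```python
import math

# Hypothetical direct measurements: mass in g, volume in cm^3
m, u_m = 500.0, 0.1    # estimate and standard uncertainty of the mass
V, u_V = 64.0, 0.05    # estimate and standard uncertainty of the volume

# Step 5: estimate of the measurand
rho = m / V

# Step 6: combined standard uncertainty via the propagation law
c_m = 1 / V       # sensitivity coefficient d(rho)/dm
c_V = -m / V**2   # sensitivity coefficient d(rho)/dV
u_rho = math.sqrt((c_m * u_m)**2 + (c_V * u_V)**2)

# Step 7: expanded uncertainty with k = 2 (approx. 95 % confidence)
U = 2 * u_rho
print(f"rho = {rho:.3f} +/- {U:.3f} g/cm^3 (k = 2)")
```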



8. Conclusions


Every measurement process is aimed at obtaining information about the measurand in order to evaluate its conformity with specifications, make comparisons, or make other decisions. In any case, the quality of the measurement is as important as its result. The quality of a measurement is quantified by its uncertainty.

The result of any measurement should be documented together with its combined standard uncertainty uc(y) or its expanded uncertainty U, indicating the coverage factor or the confidence level associated with the interval y ± U.

The evaluation of uncertainty is not a simple mathematical task, but thanks to the Guide to the Expression of Uncertainty in Measurement (GUM) it can be carried out according to general rules.

This guide, now widely adopted, has facilitated the comparison of interlaboratory results by providing a common language.

In this article we have tried to give a brief description, accompanied by examples, of the steps to follow in the determination of uncertainties following the GUM.

In a first step, the physical model of the measurement must be represented by a mathematical model, and it is necessary to identify each of the input quantities on which it depends, as well as their relationships, if any.

Subsequently, the uncertainties are evaluated from both statistical and other information, i.e. taking into account all aspects that influence the result of a measurement, such as factors inherent to the instrument, environmental conditions, etc.

The law of propagation of uncertainty, or other methods, yields the combined standard uncertainty associated with the final estimate of the measurand. Finally, this uncertainty is multiplied by a coverage factor to obtain an expanded uncertainty, so that the confidence level of the interval y ± U is greater.

Useful References:

26 June, 2017

Uncertainty Estimation (II): Combined & Expanded Uncertainties. Cover Factor kp

5. Estimation of Combined & Expanded Uncertainty. Cover factor kp

A physical measurement, however simple, has a model associated with the actual process. The physical model is represented by a mathematical model that in many cases implies approximations.


5.1. Uncertainties Propagation Law (UPL), or "Ley de Propagación de Incertidumbres (LPI)”. Combined Uncertainty
In most cases, the measurand Y is not measured directly, but is determined from other N quantities X1, X2, ..., XN through a functional relationship f: Y = f(X1, X2, ..., XN).
The function f expresses not so much a physical law as the measurement process, and it must contain all the magnitudes that contribute to the final result, including corrections, even if these are of null value, in order to be able to consider the uncertainties of such corrections.
In principle, by means of type A evaluation or type B evaluation, we would be able to know the distribution functions of each of the input magnitudes, and we could derive from this the distribution function of the indirect magnitude.
Taking the first-order Taylor series expansion around the expected value, we obtain the uncertainty propagation law, which facilitates the estimation of the variance:

uc²(y) = Σ ci² u²(xi) + 2 Σ Σ ci cj u(xi, xj),  with ci = ∂f/∂xi

The terms ci, cj are the sensitivity coefficients and indicate the weight of each input quantity in the output quantity represented by the measurement function. The second term is the covariance term, in which the mutual influence of input quantities appears in case they are correlated. If the input quantities are independent, the second term disappears and the equation simplifies to the first term alone.

Example 1:
UPL or “LPI” for models of the form Y = X1/X2. In this case we obtain:

(uc(y)/y)² = (u(x1)/x1)² + (u(x2)/x2)²

A typical example of this model is Young's modulus of elasticity for metals:
E = σ/ε
When it is not possible to write the measurement model explicitly, the sensitivity coefficients cannot be calculated analytically, but numerically, by introducing small changes xi + Δxi in the input quantities and observing the changes produced in the output quantity.

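A minimal sketch of this numerical approach, using central differences on a hypothetical model (the function, values and step size are our assumptions):

```python
def sensitivity(f, x, i, dx=1e-6):
    """Approximate the sensitivity coefficient df/dx_i at point x
    by a central finite difference."""
    xp = list(x)
    xm = list(x)
    xp[i] += dx
    xm[i] -= dx
    return (f(xp) - f(xm)) / (2 * dx)

# Example model: Young's modulus E = sigma / epsilon
f = lambda x: x[0] / x[1]
x = [200.0, 0.001]  # hypothetical stress and strain values

c_sigma = sensitivity(f, x, 0)  # should approximate 1/epsilon
c_eps = sensitivity(f, x, 1)    # should approximate -sigma/epsilon^2
print(c_sigma, c_eps)
```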
5.2. Uncertainty Propagation Law “LPI” limitations.

Uncertainty Propagation Law can be applied when:
  1. Only one output magnitude appears in the mathematical model.
  2. The mathematical model is an explicit model, i.e., Y = f(Xi).
  3. The mathematical expectation, the typical uncertainties and the mutual uncertainties of the input magnitudes could be calculated.
  4. The model is a good approximation to a linear development around the best estimator of the input magnitudes.
When dealing with nonlinear models, we can perform a second-order approximation of the Taylor series, or even obtain the values of the mathematical expectation and variance directly, without approximations, although these solutions are mathematically much more complex than the law of propagation of uncertainties.

After the elaboration of the GUM, additional guides have been developed for the evaluation of uncertainties by other methods. One of them addresses the calculation of uncertainty using the Monte Carlo method.

The basic idea of this method, useful for both linear and nonlinear models, is the following: assuming a model Y = f(Xi) in which all input quantities can be described by their distribution functions, a mathematical algorithm generates a sequence of values τi = (x1, ..., xN), where each xi is randomly drawn from its distribution function. The value yi corresponding to each sequence τi is calculated using the measurement model, and the process is repeated a large number of times, on the order of 10⁵ or 10⁶. This high number of repetitions makes it possible to obtain a distribution function for the magnitude Y, and to calculate its mathematical expectation and standard deviation, which lead to the best combined estimator and its associated uncertainty.

In this process, the results of the measurement of the input parameters are not used directly to give a measurement result, but to establish the distribution functions of the input magnitudes, from which values of the output magnitude can be generated randomly. Since the generation is automated, many more values can be obtained than if the measurement were actually performed, and from all of them the distribution function of the magnitude that depends on those inputs is found; once it is known mathematically, the results and associated uncertainties are obtained.

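A minimal Monte Carlo sketch along these lines, for a hypothetical model Y = X1/X2 with assumed input distributions:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical inputs: X1 normal, X2 rectangular; in practice these
# distributions would be established from the measured data.
N = 100_000
ys = []
for _ in range(N):
    x1 = random.gauss(100.0, 0.5)    # draw from the normal input
    x2 = random.uniform(9.9, 10.1)   # draw from the rectangular input
    ys.append(x1 / x2)

y = statistics.fmean(ys)  # best estimate of Y
u = statistics.stdev(ys)  # standard uncertainty associated with y
print(y, u)
```

With these assumed inputs the Monte Carlo result agrees with the propagation law: the relative uncertainty is the quadratic sum of the relative input uncertainties.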
5.3. Expanded Uncertainty

The purpose of the combined standard uncertainty is to characterize the quality of the measurements. In practice what is needed is to know the interval within which it is reasonable to suppose, with a high probability of not being wrong, that the infinite values that can be "reasonably" attributed to the measurand are found. We might ask if we could use the combined standard uncertainty to define that interval (y-u , y+u). In this case, the probability that the true value of the measurand is inside the range (y-u, y+u) is low since, assuming that the distribution function of the measurand “y” is a normal function, we are talking about 68.3% of probability.

To increase this probability to values more useful for later decision making, we can multiply the combined uncertainty by a number called the “coverage factor” kp and use the interval (y − kp·uc(y), y + kp·uc(y)).

The product kp uc(y) = Up is called expanded uncertainty, where kp is the coverage factor for a confidence level p.

Mathematically this means that the area of the density function associated with Y within this interval is p:

P(y − Up ≤ Y ≤ y + Up) = p

The interval we want to know is (y − Up, y + Up).
The relation between p and kp depends, of course, on the density function f(Y) obtained from the information accumulated during the measurement process.

 

5.4. Statistical Distributions

5.4.1. Rectangular distribution
For a confidence level p we calculate the integral of the density function over the interval (μ − kp·σ, μ + kp·σ):

p = kp·σ/a

The standard deviation is σ = a/√3, where a is the half-width of the distribution. In order to calculate the coverage factor, operating shows:

kp = √3 · p
Example:
Given a certain magnitude t, it is known to be described by a symmetrical rectangular distribution whose limits are 96 and 104. Determine the coverage factor and the expanded uncertainty for a 99% confidence level.

In this case we have a = 4, μ = 100 and σ = 2.31, and for a confidence level of 99 % the coverage factor is kp = √3 × 0.99 = 1.71. The expanded uncertainty is then U99 = kp·u = 1.71 × 2.31 ≈ 3.96.

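The example can be checked with a short sketch (the variable names are ours):

```python
import math

def kp_rectangular(p):
    """Coverage factor of a rectangular distribution, from p = kp * sigma / a."""
    return math.sqrt(3) * p

a = 4.0                    # half-width of the interval (96, 104)
sigma = a / math.sqrt(3)   # standard deviation, about 2.31
kp = kp_rectangular(0.99)  # coverage factor for p = 0.99
U99 = kp * sigma           # expanded uncertainty; note kp * sigma = p * a
print(kp, U99)
```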
5.4.2.- Triangular distribution
For a confidence level p, the integral of the density function is calculated:

p = 1 − (1 − kp/√6)²

As the standard deviation is σ = a/√6, operating gives kp = √6 · (1 − √(1 − p)).


5.4.3. Normal distribution
In most measurement processes the distribution that best describes what is observed is the normal distribution.
A normal distribution with μ = 0 and σ = 1 is termed the standard normal distribution.
The integration required for the determination of confidence intervals is more complicated. However, in the case of the standard normal distribution there are tables that allow the calculation to be carried out in a simple way.

The way to obtain the confidence intervals is by standardization. If we have a variable or magnitude Y that is distributed according to an N(µ, σ), we define the standardized normal variable Z = (Y − µ)/σ.
The integral of the standard distribution N(0,1) is tabulated, and from these tables the confidence intervals are determined.

Example:
If, for a confidence level p, the confidence interval defined for a distribution N(0,1) is (−kp, kp), then, as Z is a distribution N(0,1), the corresponding interval for Y is (µ − kp·σ, µ + kp·σ).
In a normal distribution the coverage factor kp for a confidence level p is obtained from the tables of the standard normal distribution (for example, k95 ≈ 1.96 and k99 ≈ 2.58).
The expanded uncertainty is calculated as Up(y) = kp · uc(y).

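Instead of tables, the coverage factor can be obtained numerically; a sketch using Python's statistics.NormalDist:

```python
from statistics import NormalDist

def kp_normal(p):
    """Coverage factor kp for a normal distribution at confidence level p."""
    # Two-sided interval: kp is the (1 + p)/2 quantile of N(0, 1)
    return NormalDist().inv_cdf((1 + p) / 2)

print(kp_normal(0.95))  # ~1.96
print(kp_normal(0.99))  # ~2.58
```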
5.4.4.- Student's T-distribution

Student's T-distribution is used to make hypothesis tests when the sample size is small.
Let Z be a random variable of expectation µz and standard deviation σz, of which n observations are made, estimating a mean value z̄ and an experimental standard deviation s(z).

It is possible to define the following variable, whose distribution is a Student's t with ν degrees of freedom:

t = (z̄ − µz) / s(z̄),  with s(z̄) = s(z)/√n

The number of degrees of freedom is ν = n − 1 for a quantity estimated by the arithmetic mean of n independent observations.

If n independent observations are used to make a least square fit of m parameters, the number of degrees of freedom is ν = n -m

For a Student's t variable with ν degrees of freedom, the interval for the confidence level p is (−tp(ν), tp(ν)).

The factor tp(ν) is found in Student's t tables.

Example:
Suppose that the measurand Y is simply a magnitude estimated by the arithmetic mean X̄ of n independent observations, where s(X̄) is the experimental standard deviation of the mean.
The best estimate of Y is y = X̄, with associated uncertainty uc(y) = s(X̄).
The variable (X̄ − µ)/s(X̄) is distributed according to Student's t. Thus the coverage factor is tp(ν) and the expanded uncertainty is Up(ν) = tp(ν) · uc(y).

 5.5. Expanded Uncertainty determination after measurement.

The problem that arises is to determine the expanded uncertainty after a measurement has been made.
If it is a direct measure, we can act in different ways:
- Uncertainty determined according to a type A evaluation:
The coverage factor for a given confidence level is obtained from the Student's t-distribution with n − 1 degrees of freedom, where n is the number of measurements, or from the normal distribution if the number of repetitions is sufficiently large.
- Uncertainty determined if the distribution of Y is rectangular, centred on the estimator y:

The coverage factor is obtained from the rectangular distribution.

If it is a matter of calculating the expanded uncertainty for an indirect measurement, calculating the distribution function is somewhat complicated and must be done by special analytical or numerical methods. But we can simplify:

A first approximation would be to calculate the probability distribution function by convolution of the probability distributions of the input quantities.

A second approximation would be to assume the indirect measure to be a linear function of the input quantities.
 5.5.1. Central limit theorem

The central limit theorem, in its different versions, ensures that the sum of independent and identically distributed random variables converges to a normal one. In theory convergence is usually very fast, but in actual experiments one may despair before seeing the Gaussian bell. There is no contradiction in this; for example, we can understand the probability 1/2 of obtaining heads as a limit when we toss a coin infinitely many times, and we cannot demand that after 20 or 30 tosses the percentage of heads gives a precise approximation.
The following graphs show the histograms of the sum of the 10 dice scores compared to the corresponding normal when the experiment is repeated one hundred, one thousand and ten thousand times. Naturally they come from a computer simulation.
[Figures: sum of the scores of ten dice thrown one hundred, one thousand, and ten thousand times.]
Suppose an indirect measure is a linear function of the input quantities according to equation 33:
• The central limit theorem says:
“The distribution associated with Y will approximate a normal distribution of expectation E(Y) = Σ ci E(Xi) and variance σ²(Y) = Σ ci² σ²(Xi), where E(Xi) is the expectation of Xi and σ²(Xi) the variance of Xi, provided that the Xi are independent and σ²(Y) is much larger than any individual component ci² σ²(Xi).”
Conditions of the theorem are fulfilled when the combined uncertainty uc(y) is not dominated by typical uncertainty components obtained by type A evaluations based on few observations or type B evaluations based on rectangular distributions.

Convergence towards the normal distribution is faster the greater the number N of variables involved, the closer these are to normal, and when none of them is dominant.
Consequently, a first approximation to determine the expanded uncertainty defining a confidence level interval p will be to use the cover factor proper of a normal distribution, kp.
If the number of random readings is small, the derived value of uA may be inaccurate, and the distribution of the random component is better represented by the Student's t-distribution; otherwise we could misestimate the uncertainty, especially if the number of readings is small and uA and uB are comparable in size.

5.5.2. Effective degrees of freedom. Welch-Satterthwaite approach

The problem can be solved by using the Welch-Satterthwaite approximation, which calculates the effective number of degrees of freedom of the combination of Student's t distributions with Gaussian distributions. The resulting distribution is treated as a Student's t-distribution with the calculated number of degrees of freedom:

ν_eff = uc⁴(y) / Σ [ui⁴(y)/νi],  with ui(y) = |ci|·u(xi)

If relative uncertainties are used, the same expression applies with relative uncertainties in place of the absolute ones.

In type B evaluations, to estimate the degrees of freedom we can use the approximation

νi ≈ (1/2)·[Δu(xi)/u(xi)]⁻²

where Δu(xi)/u(xi) is the relative uncertainty of the uncertainty u(xi). The greater the number of degrees of freedom, the greater the confidence in the uncertainty. In practice, in type B evaluations νi → ∞.

In Type A evaluations the calculation of the number of degrees of freedom will depend on the statistic used to evaluate the most probable value.

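The Welch-Satterthwaite combination can be sketched as follows (the example contributions are hypothetical):

```python
import math

def welch_satterthwaite(contributions):
    """Effective degrees of freedom for a list of (u_i, nu_i) pairs of
    independent contributions to a combined standard uncertainty."""
    uc2 = sum(u**2 for u, _ in contributions)
    return uc2**2 / sum(u**4 / nu for u, nu in contributions)

# Hypothetical example: a type A contribution evaluated from 10 readings
# (nu = 9) combined with a type B contribution with nu -> infinity
nu_eff = welch_satterthwaite([(1.0, 9), (1.0, math.inf)])
print(nu_eff)  # ~36
```

The effective number of degrees of freedom is then used to pick tp(ν_eff) from Student's t tables.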