
Chapter 4 - Measurement Accuracy

Measurement Accuracy

Terminology
Definitions of Accuracy

- The closeness with which an instrument reading approaches the true value of the variable being measured.
- The maximum error in the measurement of a physical quantity in terms of the output of an instrument when referred to the individual instrument calibrations.
- The degree of conformance of a test instrument to absolute standards.
- The ability to produce an average measured value which agrees with the true value or standard being used.

Measurement Accuracy

Terminology
Precision

- A measure of the reproducibility of the measurements: given a fixed value of a variable, precision is a measure of the degree to which successive measurements differ from one another.
- The degree to which repeated measurements of a given quantity agree when obtained by the same method and under the same conditions. Also called repeatability or reproducibility.
- The ability to repeatedly measure the same product or service and obtain the same results.

Measurement Accuracy

Book Terminology
Accuracy - refers to the overall closeness of an averaged measurement to the true value.
Repeatability - the consistency with which that measurement can be made.

The word precision will be avoided.

Accuracy takes all error sources into account:
- Systematic errors
- Random errors
- Resolution (quantization errors)

Measurement Accuracy

Terminology
Systematic Errors

- Errors that appear consistently from measurement to measurement.
- Example: ideal value = 100 mV; measurements: 101 mV, 103 mV, 102 mV, 101 mV, 102 mV, 103 mV, 103 mV, 101 mV, 102 mV; average error = 2 mV.
- Caused by DC offsets, gain errors, and non-linearities in the DVM.
- Systematic errors can often be reduced through calibration.
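As a quick illustration, this minimal Python sketch computes the average (systematic) error from the readings quoted above; the variable names are illustrative only.

```python
# Minimal sketch: estimating the systematic error as the average deviation
# of repeated readings from the ideal value (numbers from the example above).
readings_mv = [101, 103, 102, 101, 102, 103, 103, 101, 102]
ideal_mv = 100

average_mv = sum(readings_mv) / len(readings_mv)   # 102 mV
systematic_error_mv = average_mv - ideal_mv        # 2 mV average (systematic) error

print(f"average = {average_mv:.1f} mV, systematic error = {systematic_error_mv:.1f} mV")
```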

Measurement Accuracy

Terminology
Random Errors

- Notice that the list of numbers on the last slide varies from 101 mV to 103 mV.
- All measurement tools have random errors - even $2 million automated test instruments.
- Random errors are perfectly normal in analog and mixed-signal measurements.
- The big challenge is in determining whether the random error is caused by a bad DIB design, a bad DUT design, or by the tester itself.

Measurement Accuracy

Terminology
Resolution (Quantization Errors)

- Notice that in the previous list of numbers, the measurement was always rounded off to the nearest millivolt.
- Limited resolution results from the fact that continuous analog signals must be converted to digital format (using ADCs) before a computer can evaluate the test results.
- The inherent error in ADCs and measurement instrumentation is called quantization error.
- Quantization error is a result of the conversion from an infinitely variable input voltage to a finite set of possible outputs from the ADC.
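To make the idea concrete, here is a small Python sketch of an idealized round-to-nearest ADC; the full-scale range and resolution are assumed values, not those of any particular instrument.

```python
# Sketch of quantization in an idealized N-bit ADC (round-to-nearest code).
def quantize(v_in, full_scale=2.0, bits=12):
    lsb = full_scale / (2 ** bits)              # size of one code step
    code = round(v_in / lsb)                    # nearest digital code
    code = max(0, min(code, 2 ** bits - 1))     # clamp to the available codes
    return code * lsb                           # voltage the ADC actually reports

v_in = 1.23456
v_out = quantize(v_in)
print(f"input = {v_in} V, reported = {v_out:.6f} V, "
      f"quantization error = {v_out - v_in:+.6f} V")  # bounded by +/- 1/2 LSB
```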

Measurement Accuracy

Terminology
Repeatability

- Non-repeatable answers are a fact of life for mixed-signal test engineers.
- They can be caused by random noise or other external influences.
- If a test engineer gets exactly the same value multiple times in a row, it should raise a question about the ranging of the measurement tool.
- Repeatability is desirable, but it does not in itself guarantee accuracy.

Measurement Accuracy

Terminology
Stability

- The degree to which a series of supposedly identical measurements remains constant over time, temperature, humidity, and all other time-varying factors is referred to as stability.
- Testers are equipped with temperature sensors to allow recalibration if a certain change in temperature occurs.
- Caution must be exercised in the power-up of the tester, since the temperature of the tester electronics must stabilize before calibrations are accurate.
- Also, if the test cabinet or test head is opened, the temperature must stabilize before any calibrations can be performed.

Measurement Accuracy

Terminology
Correlation

The ability to get the same answer using different pieces of hardware or software:
- Tester-to-bench correlation
- Tester-to-tester correlation
- Program-to-program correlation
- DIB-to-DIB correlation
- Day-to-day correlation

Measurement Accuracy

Terminology
Reproducibility

- Reproducibility is often incorrectly used interchangeably with repeatability.
- Reproducibility is defined as the statistical deviation between a particular measurement taken by any operator on any group of testers on any given day using any DIB board.
- Repeatability is used to describe the ability of a single tester and DIB board to get the same answer multiple times as the test program is repeatedly executed.
- If a measurement is highly repeatable but not reproducible, then the test program may consistently pass a particular DUT one day but consistently fail the same DUT on another day or on another tester.

Calibration and Checkers

Traceability to Standards
National Institute of Standards and Technology (NIST)
- A thermally stabilized standardized instrument is periodically replaced by a freshly calibrated source.

Hardware Calibration
Any mechanical process which brings a piece of equipment back into agreement with calibration standards.
- Usually not a convenient process.
- Robotic manipulations can be used to automate the process, but it is still not optimal.

Calibration and Checkers

Software Calibration
The basic idea behind software calibration is the separation of the instrument's ideal operation from its non-idealities, so that a model of the instrument's non-ideal operation can be constructed, followed by a correction of the non-ideal behavior using a mathematical routine written in software.
Most testers have extensive calibration processes for each measurement range in the tester instrumentation.
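A minimal sketch of the idea, assuming the instrument's non-ideality can be modeled as a simple gain and offset measured against two known reference levels; the reference values and function names are illustrative, not a real tester's calibration API.

```python
# Sketch of a two-point (gain/offset) software calibration. Assumes the
# instrument's non-ideality can be modeled as: reading = gain * v_true + offset.
# The reference levels and function names are illustrative, not a tester API.
def compute_cal_factors(read_low, read_high, ref_low=0.0, ref_high=2.5):
    gain = (read_high - read_low) / (ref_high - ref_low)
    offset = read_low - gain * ref_low
    return gain, offset

def correct(reading, gain, offset):
    return (reading - offset) / gain   # invert the assumed error model

# Raw readings taken against the two calibration reference levels
gain, offset = compute_cal_factors(read_low=0.012, read_high=2.531)
print(f"corrected = {correct(1.250, gain, offset):.4f} V")
```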

Calibration and Checkers

System Calibrations & Checkers


Checkers verify the functionality of the hardware instruments in the tester.
Calibration and checkers are often found in the same program.

- Several levels of checkers and calibrations are used.
- Calibration reference source replacement and recalibration is performed approximately every six months.
- An extensive performance verification (PV) process is used to verify that the tester is in compliance with its published specifications.
- Automated calibrations are run on the test floor as conditions warrant them.

Calibration and Checkers

Focussed Instrument Calibrations


The accuracy of faster instruments can be improved by periodically referencing them back to slower, more accurate instruments.
- Test-specific calibrations focus on the exact parameters of the test.
- Tester-focussed calibration may no longer be necessary on all tests, yet DIB-focussed calibrations will remain a major task of the test engineer.

Calibration and Checkers

Focussed DIB Circuit Calibrations


Often, circuits are added to the DIB board to improve the accuracy of a particular test, or to buffer a weak output of a device before it is tested.
- Since DIB circuits are added in series between the DUT and the tester, the contribution of their calibration factors must be treated accordingly.
- It is critical that the test engineer have a clear understanding of which characteristics of each DIB circuit affect the test being performed.
- Review example 4-3 found on p. 4-17.
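As a hedged illustration of treating a series DIB circuit's calibration factors, the sketch below assumes the DIB buffer has already been characterized by a gain and offset; the numbers are purely illustrative and are not taken from example 4-3.

```python
# Sketch: removing the effect of a DIB buffer placed in series between the
# DUT output and the tester's meter. Assumes the buffer has already been
# characterized (focussed DIB calibration) as a gain and an offset; the
# numbers below are purely illustrative.
dib_gain = 0.998      # measured buffer gain
dib_offset = 0.0004   # measured buffer offset, volts

def remove_dib(meter_reading_v):
    # The meter sees: meter_reading = dib_gain * dut_output + dib_offset,
    # so the DUT output is recovered by inverting that series model.
    return (meter_reading_v - dib_offset) / dib_gain

print(f"estimated DUT output = {remove_dib(1.5021):.4f} V")
```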

Calibration and Checkers

DIB Checkers
Verify the basic functionality of the DIB circuits.
- Performed in the first run of the test program, along with the calibrations.
- Every possible relay and circuit path should be checked to produce a go/no-go response verifying the functionality of as much of the DIB board as possible.

Calibration and Checkers

Tester Specifications
Test engineers must determine if the tester instrument is capable of making the measurements they require.
Due to the lack of information about specification values from the manufacturer, the test engineer needs to understand the spec conditions and the variations from that spec which will affect the performance of the instrument.

A good example is a specification of the noise floor of a tester. In a professional shielded room with no digital circuits operating, the noise floor will be totally different from when the same tester is operating at a university.

Calibration and Checkers

Tester Specifications
Example of a DC meter:
- Five output ranges (set internally by a PGA and calibrated).
- Accuracy is specified as a percentage of the measured value, with a limit of 1 mV or 2.5 mV.
- The spec assumes the measurement is made 100 times and averaged; a single measurement may have greater measurement error along with repeatability error.
- The meter may also pass the signal through a low-pass filter, which can be enabled or disabled. Extra settling time is indicated; if the filter is disabled, is the spec still true?
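As an illustration of reading such a spec, the sketch below assumes an accuracy of the form "±0.1% of the measured value, or ±1 mV, whichever is greater"; the percentage and floor values are invented for the example, not the actual meter spec.

```python
# Sketch: worst-case error for a DC meter reading, assuming a spec of the form
# "+/- 0.1% of the measured value, or +/- 1 mV, whichever is greater".
# The percentage and floor are invented for illustration, not a real spec.
def worst_case_error_v(reading_v, pct=0.001, floor_v=0.001):
    return max(abs(reading_v) * pct, floor_v)

for v in (0.05, 0.5, 2.5):
    print(f"reading = {v:4} V -> worst-case error = +/- {worst_case_error_v(v) * 1e3:.2f} mV")
```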

Dealing with Measurement Error

Filtering
Acts as a hardware averaging circuit and allows only the desired frequencies to pass.
- The closer the cutoff frequency is to the measurement frequency, the more noise is removed.
- Unfortunately, the lower the cutoff frequency, the longer the test time required for settling.
- Settling time is inversely proportional to the cutoff frequency.
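A small sketch of that inverse relationship, assuming a single-pole (first-order) low-pass filter and settling to within 0.1% of the final value.

```python
# Sketch: settling time versus cutoff frequency for a single-pole low-pass
# filter, settling to within 0.1% of the final value. Illustrates that
# settling time is inversely proportional to the cutoff frequency.
import math

def settling_time_s(cutoff_hz, settle_fraction=0.001):
    tau = 1.0 / (2.0 * math.pi * cutoff_hz)          # filter time constant
    return tau * math.log(1.0 / settle_fraction)     # ~6.9 time constants for 0.1%

for fc in (10.0, 100.0, 1000.0):
    print(f"fc = {fc:6.0f} Hz -> settles to 0.1% in {settling_time_s(fc) * 1e3:.2f} ms")
```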

Dealing with Measurement Error

Averaging
A form of discrete-time filtering that can be used to improve the repeatability of a measurement.
- To reduce the effect of noise on a voltage measurement by a factor of two, one has to take four times as many readings and average them.
- This quickly reaches a point of diminishing returns with respect to test time.

Note: Do not average values in dB - always convert to linear form and average - then return them to dB form.
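A short sketch of the recommended procedure, using made-up gain readings in dB.

```python
# Sketch: averaging gain measurements expressed in dB by converting to linear
# form first, as the note above advises. Readings are made-up values.
import math

readings_db = [-0.13, -0.12, -0.14, -0.11, -0.13]

linear = [10 ** (db / 20.0) for db in readings_db]   # dB -> linear voltage ratio
avg_linear = sum(linear) / len(linear)
avg_db = 20.0 * math.log10(avg_linear)               # back to dB

print(f"average gain = {avg_db:.3f} dB")
```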

Dealing with Measurement Error

Guardbanding
If a particular measurement is known to be accurate and repeatable with a worst-case uncertainty of ±ε, then the final test limits should be tightened by ε to ensure that no bad devices are shipped to the customer.

Guardbanded positive test limit = positive test limit - ε
Guardbanded negative test limit = negative test limit + ε

The only way to reduce guardbanding is to increase accuracy and repeatability - this increases test time and may not be a viable option.
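A minimal sketch of the limit tightening described above; the limits and uncertainty value are illustrative.

```python
# Sketch: tightening the test limits by the worst-case measurement uncertainty
# (guardbanding). The limits and uncertainty are illustrative values.
def guardband(neg_limit, pos_limit, epsilon):
    return neg_limit + epsilon, pos_limit - epsilon

# Datasheet limits of +/- 100 mV, worst-case measurement uncertainty of 5 mV
lo, hi = guardband(-100.0, 100.0, 5.0)
print(f"guardbanded limits: {lo} mV to {hi} mV")   # -95.0 mV to 95.0 mV
```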

Basic Data Analysis

Datalogs
A concise list of results generated by the test program, typically including:
- test number
- test category
- test description
- maximum and minimum test limits
- measured result
- pass/fail indication

Example datalog:

Sequencer: S_continuity
 1000  Neg PPMU Cont   Failing Pins: 0
Sequencer: S_VDAC_SNR
 5000  DAC Gain Error  T_VDAC_SNR    -1.00 dB  <   -0.13 dB      <    1.00 dB
 5001  DAC S/2nd       T_VDAC_SNR    60.0 dB   <=   63.4 dB
 5002  DAC S/3rd       T_VDAC_SNR    60.0 dB   <=   63.6 dB
 5003  DAC S/THD       T_VDAC_SNR    60.00 dB  <=   60.48 dB
 5004  DAC S/N         T_VDAC_SNR    55.0 dB   <=   70.8 dB
 5005  DAC S/N+THD     T_VDAC_SNR    55.0 dB   <=   60.1 dB
Sequencer: S_UDAC_SNR
 6000  DAC Gain Error  T_UDAC_SNR    -1.00 dB  <   -0.10 dB      <    1.00 dB
 6001  DAC S/2nd       T_UDAC_SNR    60.0 dB   <=   86.2 dB
 6002  DAC S/3rd       T_UDAC_SNR    60.0 dB   <=   63.5 dB
 6003  DAC S/THD       T_UDAC_SNR    60.00 dB  <=   63.43 dB
 6004  DAC S/N         T_UDAC_SNR    55.0 dB   <=   61.3 dB
 6005  DAC S/N+THD     T_UDAC_SNR    55.0 dB   <=   59.2 dB
Sequencer: S_UDAC_Linearity
 7000  DAC POS ERR     T_UDAC_Lin   -100.0 mV  <     7.2 mV      <  100.0 mV
 7001  DAC NEG ERR     T_UDAC_Lin   -100.0 mV  <     3.4 mV      <  100.0 mV
 7002  DAC POS INL     T_UDAC_Lin    -0.90 lsb <     0.84 lsb    <    0.90 lsb
 7003  DAC NEG INL     T_UDAC_Lin    -0.90 lsb <    -0.84 lsb    <    0.90 lsb
 7004  DAC POS DNL     T_UDAC_Lin    -0.90 lsb <     1.23 lsb (F) <   0.90 lsb
 7005  DAC NEG DNL     T_UDAC_Lin    -0.90 lsb <    -0.83 lsb    <    0.90 lsb
 7006  DAC LSB SIZE    T_UDAC_Lin     0.00 mV  <     1.95 mV     <  100.00 mV
 7007  DAC Offset V    T_UDAC_Lin   -100.0 mV  <     0.0 mV      <  100.0 mV
 7008  Max Code Width  T_UDAC_Lin     0.00 lsb <     1.23 lsb    <    1.50 lsb
 7009  Min Code Width  T_UDAC_Lin     0.00 lsb <     0.17 lsb    <    1.50 lsb

Bin: 10

Basic Data Analysis

Histograms
A graphical method used to view the repeatability of numerical data.
- Ideally, the values of the acquired data should be closely packed.
- The statistical relevance of the data is determined by the number of samples taken; in test engineering, the minimum for statistical relevance is 100 samples.
- Histograms also give numerical values which indicate the fit to the standard bell curve, including the mean and standard deviation.
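A short sketch of producing such a histogram in Python, using numpy and matplotlib with simulated readings in place of real datalog values.

```python
# Sketch: histogram and summary statistics of 100 repeated measurements.
# Requires numpy and matplotlib; simulated readings stand in for real data.
import numpy as np
import matplotlib.pyplot as plt

readings = np.random.normal(loc=-0.130, scale=0.0029, size=100)   # gain in dB

print(f"mean = {readings.mean():.4f} dB, std dev = {readings.std(ddof=1):.4f} dB")

plt.hist(readings, bins=20)
plt.xlabel("Measured gain (dB)")
plt.ylabel("Count")
plt.title("Repeatability histogram (100 samples)")
plt.show()
```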

Basic Data Analysis

Normal (Gaussian) Distributions


The summation of a large number of independent random variables tends toward a Gaussian distribution.
- The variations in a typical mixed-signal measurement come from a summation of many different sources of noise and crosstalk in both the device and the tester instrument.
- The standard deviation of a Gaussian distribution is roughly equal to one sixth of the total variation from the minimum value to the maximum value.
- In the example, the standard deviation is 0.0029 dB, so we would expect to see values ranging from -0.139 dB to -0.121 dB. These values are labeled as Mean - 3 sigma and Mean + 3 sigma.
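For reference, the ±3-sigma bounds quoted above can be reproduced in a couple of lines; the mean of -0.130 dB is inferred from the stated range and is an assumption.

```python
# Sketch: the +/- 3 sigma range quoted above. The mean of -0.130 dB is inferred
# from the stated range and is an assumption.
mean_db = -0.130
sigma_db = 0.0029

low = mean_db - 3 * sigma_db    # about -0.139 dB
high = mean_db + 3 * sigma_db   # about -0.121 dB
print(f"expected range: {low:.3f} dB to {high:.3f} dB")
```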

Basic Data Analysis

Non-Gaussian Distributions
- Bimodal distributions
- Outliers

Basic Data Analysis

Noise, Test Time and Yield


Yield = total good devices / total tested devices
- There is a definite trade-off between test time and production yield.
- The designer controls the design margins, which reduce the need for guardbanding.
- Centering the design within the specifications:
  - may cost extra silicon or extra power
  - may make the test unnecessary
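The sketch below ties yield to guardbanding using simulated devices; all distributions, limits, and the guardband value are invented for illustration.

```python
# Sketch: yield and its trade-off with guardbanding, using simulated devices.
# All distributions, limits, and the guardband value are invented for illustration.
import numpy as np

true_values = np.random.normal(0.0, 20.0, 10_000)   # device parameter, mV (centered design)
noise = np.random.normal(0.0, 3.0, 10_000)          # measurement noise, mV
measured = true_values + noise

spec_limit = 100.0    # specification limit, +/- mV
guardband = 9.0       # limits tightened by ~3 sigma of the measurement noise

passed = np.abs(measured) < (spec_limit - guardband)
print(f"yield = {passed.sum()} / {passed.size} = {passed.mean():.3%}")
```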
