by
Prof. Dr.-Ing. Oliver Nelles
University
To complete the MKS system and to improve the accuracy and generality of the units with the help of modern physics, the SI system (Système International d'Unités) consisting of 7 base units was established in 1960.
• All units can be derived from the basic 7 SI units.
• The definitions are mainly based on physical constants.
• In principle, these units could be understood by aliens!
Speed:
Acceleration:
Force:
Torque:
Energy:
Power:
Magnetic Field:
Electric Voltage:
Torque and Energy have identical units! Does this mean they are the same?
Note: Torque throughout this script is denoted M, not T as usual in English, because this is the common German abbreviation for "Moment".
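The derivation of units from the 7 SI base units can be sketched by bookkeeping exponents of the base units; the helper names below are illustrative, not from the script:

```python
# Sketch (not from the lecture): represent SI units as exponent vectors over
# the 7 base units (m, kg, s, A, K, mol, cd) to check derived units.

def unit(m=0, kg=0, s=0, A=0, K=0, mol=0, cd=0):
    return (m, kg, s, A, K, mol, cd)

def mul(a, b):  # multiply two quantities: add exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):  # divide two quantities: subtract exponents
    return tuple(x - y for x, y in zip(a, b))

METER, KG, SECOND = unit(m=1), unit(kg=1), unit(s=1)

speed        = div(METER, SECOND)       # m/s
acceleration = div(speed, SECOND)       # m/s^2
force        = mul(KG, acceleration)    # N = kg*m/s^2
torque       = mul(force, METER)        # N*m
energy       = mul(force, METER)        # J = N*m
power        = div(energy, SECOND)      # W = J/s

# Torque and energy have identical units, yet describe different quantities:
print(torque == energy)  # True
```

This makes the point of the slide concrete: the unit check cannot distinguish torque from energy; the physical interpretation must.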
In many applications in signal processing a delay is not very tragic. If you see a football goal 100 ms late because of computations in your digital TV, this is no significant drawback.
This is different in feedback control! The controlled variable must be fed back immediately to the comparison of desired and actual value. Any delay due to a slow sensor, filtering, or other signal processing deteriorates the control performance. You can never make up for a delay in a subsequent step!
A measurement consists of a number and a unit. The number describes which multiple of the
unit is assigned:
Requirements:
1. The quantity to be measured must be qualitatively uniquely determined.
2. The standard unit must be defined by a convention.
These requirements are not met by many quantities in our everyday lives, such as wellness, beauty, or intelligence.
Source: http://en.wikipedia.org/wiki/File:Balance_scale_IMGP9755.jpg
[Figure: sampling and quantization of a continuous-time signal; the panels show the effects of sampling and of quantization.]
A reference:
Mayer, J.R. Rene: "Measurement, Instrumentation and Sensors Handbook", CRC Press, 1999.
A good book in English:
Morris, A.S., Langari, R.: "Measurement and Instrumentation: Theory and Application", Academic Press, 2012.
[Figure: sine signals with f = 0.5 Hz and f = 0.3 Hz sampled with f0 = 1 Hz, shown for t = 0…6 s.]
Sampling Theorem
From the examples on the previous slide we see that the sampling frequency must be at least twice the signal frequency to reconstruct the original signal from its sampled version (f = 0.5 Hz sampled with f0 = 1 Hz). Real signals consist of many (typically infinitely many) frequencies. Then this requirement relates to the highest contained frequency fmax.
If this theorem is violated, aliasing occurs, i.e., frequency components above half the sampling frequency (f > ½ f0) are mirrored into a lower frequency range. By this effect, high-frequency noise can disturb the signal in any frequency range. Thus aliasing should be avoided or at least kept to a minimum.
In practice one chooses f0 ≈ 5…10 · fmax.
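The mirroring can be sketched numerically; the helper below is illustrative, not from the script:

```python
# Illustrative helper: the apparent frequency of a sinusoid of frequency f
# after sampling with f0. Components above f0/2 are mirrored into the base
# band [0, f0/2].

def alias_frequency(f, f0):
    f = f % f0               # the spectrum repeats with period f0
    return min(f, f0 - f)    # ... and is mirrored at f0/2

print(alias_frequency(0.5, 1.0))            # 0.5 -> exactly at the limit
print(round(alias_frequency(0.9, 1.0), 6))  # 0.1 -> 0.9 Hz appears as 0.1 Hz
```

This is why high-frequency noise above f0/2 reappears inside the useful band.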
[Figure: spectrum of a bandlimited signal.]
Each signal component of frequency ω1 is mirrored through the sampling process to:
As long as ω1 lies inside the red area (solid), i.e., the sampling theorem is not violated, the mirrored components (dashed) remain outside the red area (left figure).
As soon as ω1 lies outside the red area (solid), i.e., the sampling theorem is violated, the mirrored components (dashed) lie inside the red area (right figure). Aliasing occurs!
If a component changes from ω1 to ω0, a mirrored alias component at ω = 0 is created.
Quantization Noise
Although the quantization error is caused systematically, it appears to be of random nature. Thus one speaks of quantization noise, which any A/D conversion creates in principle. Since all error values are equally probable, it can be modeled by a uniform probability distribution.
In old synthesizers or CD players, quantization noise could be heard for low-volume sounds.
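The uniform-noise model can be checked numerically; step size and signal below are illustrative choices:

```python
import math

# Sketch: the uniform-noise model of quantization. The quantization error of
# an A/D conversion with step size q has variance q^2/12 if all error values
# in [-q/2, q/2] are equally probable (model assumption; values illustrative).

q = 0.01                                        # quantization step
x = [math.sin(0.1 * k) for k in range(100000)]  # test signal
e = [xi - q * round(xi / q) for xi in x]        # quantization error

var = sum(ei * ei for ei in e) / len(e)
print(var, q * q / 12)  # both approximately 8.3e-9
```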
Well-suited for low-frequency signals. The reference frequency is artificially generated.
Well-suited for high-frequency signals.
• Step
• Ramp
• Periodic signals: sine, rectangular, ...
• Noise with different distributions:
- Gaussian,
- uniform, ...
• Frequency characteristics:
- white: all frequencies have the same power,
- band-limited: only a certain frequency range is present, ...
[Figure: two test signals over time t = 0…200 s.]
[Block diagram: adaptive filter adjusted for minimum power of the error e.]
Source: http://www.techradar.com/news/phone-and-communications/mobile-phones/background-noise-reduction-one-of-your-smartphone-s-greatest-tools-1228924
22 features used for face detection.
Source: www.markus-hofmann.de
[Figure: component measurement with cameras to supervise tolerances.]
How to design a filter that fulfills its task (disturbance suppression) well?
• What does "well" mean? → A criterion is needed!
• The structure of the filter (linear/nonlinear, FIR/IIR, order, ...) has to be determined.
• The parameters of the filter have to be determined.
• Prior knowledge about the disturbance is required:
- kind: stochastic or deterministic,
- frequency range: single frequencies, certain frequency bands, ...
[Figure: sound frequency analysis classifies the machine condition as no damage, emerging damage, or advanced damage.]
[Block diagram: control loop with controller, notch filter, and plant.]
[Figure: notch filter added to suppress resonances in the dynamics of the shuttle.]
In English:
Oppenheim A.V., Schafer R.W., Buck J.R.: "Discrete-Time Signal Processing", Prentice-Hall, 9th ed., 2008, 950 p.
Ifeachor E., Jervis B.: "Digital Signal Processing: A Practical Approach", Prentice-Hall, 8th ed., 2001, 960 p.
The discrete-time unit step simply corresponds to the continuous-time unit step sampled with T0. At the first sample (k = 0) the unit step and the delta impulse are identical!
Connection: δ(k) = σ(k) − σ(k−1)
This corresponds to the derivative relation between impulse and step in continuous time.
[Figure: discrete-time unit step over k = −2…5.]
Knowledge about the previous time steps k−1, k−2, ..., k−n is required.
Such a system is also called FIR (finite impulse response) because its output to an impulse input decays to zero after m steps.
Such a system is also called IIR (infinite impulse response) because its output to an impulse input never decays to zero.
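The distinction can be sketched by simulating the difference equation directly; the coefficients below are arbitrary illustrative choices:

```python
# Sketch: impulse responses of a simple FIR and IIR system. The list a holds
# the feedback coefficients a1..an of
#   y(k) = sum_i b[i] u(k-i) - sum_j a[j] y(k-j),  u = unit impulse.

def impulse_response(b, a, n):
    u = [1.0] + [0.0] * (n - 1)
    y = []
    for k in range(n):
        yk = sum(bi * u[k - i] for i, bi in enumerate(b) if k - i >= 0)
        yk -= sum(aj * y[k - j] for j, aj in enumerate(a, start=1) if k - j >= 0)
        y.append(yk)
    return y

g_fir = impulse_response(b=[0.5, 0.3, 0.2], a=[], n=10)   # feedforward only
g_iir = impulse_response(b=[1.0], a=[-0.8], n=10)         # with feedback

print(g_fir)  # decays to exactly zero after m = 2 steps
print(g_iir)  # 1, 0.8, 0.64, ... never exactly zero
```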
If the initial condition y(–1) is known, the output y(k) can be calculated for all times k:
For difference equations of order n > 1 the calculation proceeds correspondingly. However, in the general case n initial values y(–1), y(–2), ..., y(–n) are required because y(k) depends on y(k–1), y(k–2), ..., y(k–n).
In the homogeneous case we have y(–1) = 0 and thus the output y(k) for all times k becomes:
(identical with the impulse response)
Differences replace differentials, and sums replace integrals. In discrete time the handling is much simpler with the help of a computer. However, in this form the number of sum terms (summands) increases with k! Therefore we look for another way to calculate the output of a discrete-time system.
In discrete time the corresponding expression is the convolution sum. With it the output y(k) for every input signal u(k) can be calculated:
y(k) = Σi g(i) u(k−i) = Σi u(i) g(k−i)
[Block diagram: u(k) → g(k) → y(k)]
Usually we assume that for negative times the input is equal to zero, i.e., u(k) = 0 for k < 0. This means that the first sum must be calculated only up to i = k, or alternatively the second sum has to start at i = 0. Additionally, if the system is causal, i.e., g(k) = 0 for k < 0, then the first sum can start at i = 0 and the second sum run up to i = k.
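For causal signals (u(k) = 0 and g(k) = 0 for k < 0), the finite sum above can be written down directly; the signals below are illustrative:

```python
# Sketch of the convolution sum y(k) = sum_{i=0}^{k} u(i) g(k-i) for causal
# signals; u and g below are arbitrary illustrative examples.

def convolve(u, g, n):
    return [sum(u[i] * g[k - i]
                for i in range(k + 1) if i < len(u) and k - i < len(g))
            for k in range(n)]

g = [0.5**k for k in range(20)]   # impulse response of some system
u = [1.0, 1.0, 1.0]               # short input pulse

y = convolve(u, g, 10)
print(y[:4])  # [1.0, 1.5, 1.75, 0.875]
```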
Obviously, both sums are identical! With the help of a computer the sums are very fast and easy to calculate, much easier than the convolution integral in the continuous-time case.
WARNING: With increasing simulation time k → ∞ the number of terms in the sum increases linearly. If the impulse response g(k) is of infinite length (IIR), then the computational and storage demand increases without limit! This means that we have to find a way to calculate the output of IIR systems in a more practical and efficient manner. For systems with finite impulse responses of length L (FIR), the number of terms in the sum is limited to L.
In discrete time we choose u(k) = δ(k) and calculate with the convolution sum:
y(k) = Σi g(i) δ(k−i) = g(k),   since δ(k−i) = 1 only for i = k.
This is exactly the result corresponding to the continuous-time case.
Source: http://www.mathcs.org/analysis/reals/infinity/graphics/hilberts_hotel.jpg
Thus, for |x| < 1 (for |x| ≥ 1 the series diverges to infinity):
Σ_{k=0}^{∞} x^k = 1 / (1 − x)
These formulas represent only an idealized model because in reality the impulses are not of infinite height, of course. Such Dirac impulses do not exist in reality. But they associate a finite energy with each sampled signal point. Thus, the multiplication with u(k) also makes sense.
[Figure: continuous signal uc(t) and sampled signal us(t) as a train of Dirac impulses from δ(t+2T0) to δ(t–3T0).]
Laplace Transform:
If we choose for u(t) a sampled signal, i.e., u(t) = us(t), then we obtain:
Remember:
The Laplace transform of a sampled signal is called the z-transform (the index "s" can be skipped because the variable "z" already makes clear that we deal with discrete time):
z-Transform: U(z) = Σ_{k=0}^{∞} u(k) z^{−k}   with   z = e^{T0 s}
Frequency Response
To calculate the frequency response of a continuous-time system, the Laplace variable s is evaluated on the imaginary axis of the s-plane by setting s = iω for ω = 0 … ∞. The frequency response of a discrete-time system can be calculated in the same way. Correspondingly, the z-variable becomes z = e^{iωT0}. For ω = 0 … ∞ we run along the unit circle in the z-plane; it would be circled infinitely many times. Thus the frequency response is periodic, which is caused by the sampling! But according to the sampling theorem the frequency has to be limited to ωT0 = π, so we circle only once! (Symmetry with respect to ±ω!)
... that the frequency response repeats at all multiples of ω0 (each time we circle around the unit circle in the z-plane). This means the frequency response is a periodic function. It is identical for: ω, ω ± ω0, ω ± 2ω0, ω ± 3ω0, ...
The unit step u(k) = σ(k) has the following z-transform:
U(z) = z / (z − 1)
A unit step delayed by d time steps, u(k) = σ(k−d), has the following z-transform:
u(0) = 0, ..., u(d−1) = 0, u(d) = 1, u(d+1) = 1, ... → U(z) = z^{−d} · z / (z − 1)
Start Value
The start value of a sequence can be calculated from its z-transform by: u(0) = lim_{z→∞} U(z)
End Value
The end value (if it exists!) of a sequence can be calculated from its z-transform by: lim_{k→∞} u(k) = lim_{z→1} (z − 1) U(z)
[Figure: sequence u(k) and the backward-shifted sequence u(k−1).]
Forward Shift (To the Left)
A prediction by the time Tp = dT0 is equivalent to a forward shift (shift to the left) by d samples. This operation corresponds to e^{Tp s} in the Laplace domain. In the z-domain this means multiplication by z^d.
[Figure: sequence u(k) and the forward-shifted sequence u(k+1).]
In G(z), as in g(k), all properties of a linear dynamic system are contained. For the calculation of the system output over time, only the system input over time and either G(z) or g(k) are required.
Multiplication ↔ Convolution
The multiplication in the z-domain corresponds to the convolution sum in the discrete-time domain, just as it corresponds to the convolution integral in the continuous-time domain.
If the impulse response sequence g(k) is of finite length, the same is true for the number of terms in G(z). If g(k) is of infinite length, however, so is G(z), and an easier-to-handle alternative has to be found to avoid an infinite sum.
[Figure: continuous-time impulse response over t [sec] and the corresponding discrete-time impulse response over k.]
For a first-order system this requires an impulse response of:
Because this infinite series is difficult to handle, we compute the explicit sum with the formula for the infinite geometric series with x = 0.82 z–1:
Gain:
Therefore this G(z) corresponds to G(s) in the sense of impulse response invariance.
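The geometric-series step can be verified numerically: the infinite sum of g(k) = 0.82^k equals 1/(1 − 0.82), which is the value of G(z) at z = 1 (the gain). A minimal check:

```python
# Numerical check of the geometric-series formula used above:
# sum over k of 0.82^k equals 1 / (1 - 0.82).

x = 0.82
partial = sum(x**k for k in range(200))   # truncated "infinite" series
closed_form = 1.0 / (1.0 - x)

print(partial, closed_form)  # both approximately 5.5556
```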
The choice of the criterion distinguishes the different types of such transformations. Invariance of the impulse response accounts for all frequencies in the same way because all frequencies are weighted equally (constant spectrum of an impulse). Therefore it is commonly applied for filter design.
Invariance of the step response, however, weights lower frequencies more strongly and is the appropriate choice for control applications, where the manipulated variable typically is of stepwise character. It also ensures a correct transformation of the gain.
The coefficient a0 can be set to 1 through cancellation. This yields the following difference equation in the time domain:
In contrast to the s-domain, a dead time in the z-domain still keeps the transfer function of rational type (numerator / denominator)!
For n = m this transfer function is identical to the one on the previous slide. For n > m a dead
time can be factored out in the numerator:
with d = n – m. The case m > n does not occur (negative dead time → non-causal)!
A transfer function of the form
G(z) = (bm z^m + ... + b1 z + b0) / (an z^n + ... + a1 z + a0)
has numerator degree m and denominator degree n, which are positive integers. Causality requires: denominator degree ≥ numerator degree, i.e., n ≥ m. If this requirement is met, then G(z) is causal. However, if m > n, then G(z) is non-causal and negative dead times arise, i.e., values in the future would have to be predicted.
The condition denominator degree ≥ numerator degree is known from the s-domain. There it is a condition for properness or realizability, i.e., avoiding pure differentiators! For discrete-time systems such limitations do not exist: every causal system can be realized.
Example:
1.) New starting time step:
2.) Time transformation such that this value is mapped to y(k): k := k–3
3.) Transformation into the z-domain, separation of Y(z) and U(z), and division to obtain the transfer function:
Examples:
non-causal! non-causal!
Examples:
non-causal!
non-causal!
The gain of G(z) can be calculated according to the final value theorem of the z-transform by setting z = 1:
Gain:
The poles pi and zeros ni can be transformed into the s-domain via z = e^{s T0}, i.e., s = ln(z)/T0, and can be interpreted accordingly.
From this, conditions for stability and minimum phase of poles and zeros in the z-domain result immediately.
Phase Minimality
• A system has minimum phase if it has only stable and marginally stable poles and zeros.
The location of the zeros typically changes during the transformation from the s-domain into the z-domain. Therefore the property "minimum phase" is generally not preserved by the transformation.
Pole: (stable)
with
Zero: (unstable)
The corresponding all-pass in the z-domain has a stable pole and the inverse zero mirrored at the unit circle. It is not the direct transformation from s to z!
Source: ftp://ftp.ifn-magdeburg.de/pub/MBLehre/sv06_130509-ftp.pdf
[Figure: amplitude spectrum over frequency ω and a time signal composed of the 1st, 2nd, and 3rd harmonics.]
• If non-periodic signals shall be dealt with: period length → ∞, fundamental oscillation → 0.
[Figure: a signal and its 2nd and 3rd harmonics over time t = 0…200.]
A transformation from the time domain to the frequency domain allows us to examine the frequency components contained in the signal.
[Figure: time signal x(k) for discrete time k = 0…8 (N = 9) and the amplitude of its DFT over discrete frequency n = −9…17.]
[Figure: amplitude and phase of X(n) over discrete frequency n = 0…9.]
The upper half of the spectrum contains no new information and can be generated by mirroring. Therefore commonly only the left range is displayed!
• Linearity:
• Time shift:
• Frequency shift:
• Convolution:
• Multiplication:
Inverse DFT
For completeness, here is the formula for the transformation back into the time domain:
x(k) = (1/N) Σ_{n=0}^{N−1} X(n) e^{i 2π n k / N}
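The DFT pair can be sketched in a few lines of plain code (direct sums, no FFT), confirming that the inverse transform reproduces the time signal:

```python
import cmath

# Sketch of the DFT pair: X(n) = sum_k x(k) e^{-i2πnk/N} and the inverse
# x(k) = (1/N) sum_n X(n) e^{+i2πnk/N}. Signal values are illustrative.

def dft(x):
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N))
            for n in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[n] * cmath.exp(2j * cmath.pi * n * k / N) for n in range(N)) / N
            for k in range(N)]

x = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0, 0.3, 0.1]
x_rec = idft(dft(x))

err = max(abs(xi - xr) for xi, xr in zip(x, x_rec))
print(err)  # tiny rounding error: the round trip reproduces the signal
```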
with wN = e^{−i 2π/N}.
This can be written for n = 0, 1, 2, ..., N−1 as the following equation system:
[Figure: three time signals x(k) of different lengths and the amplitudes of their DFTs X(n) over discrete frequency n.]
- N = 32:
- N = 40:
• A clever choice of N via zero padding can achieve frequency intervals of the desired size even if the original signal is shorter than N values.
If a certain frequency is of interest and its amplitude must be known with high accuracy, it should be exactly contained in the frequency discretization by an appropriate choice of N (see picket fence effect)!
[Figure: amplitude of X(n) over discrete frequency n (N = 40).]
The DFT for N = 20 yields identical values (at every second point) as the DFT for N = 40.
Remarks:
• The phase of X(n) is sometimes interesting as well. We focus on the amplitudes, but an analysis of the phase can also be important.
• MATLAB creates the plots shown in these lecture notes; fft() yields X(n) in the frequency range 0 to f0.
• Commonly the upper half of the spectrum is omitted because it does not carry any additional information. A symmetric plot around the origin from –f0/2 to +f0/2 is also popular.
[Figure: three displays of the DFT amplitude: n = 0…40 with the redundant upper half marked, symmetric n = −20…20, and the left half n = 0…20 only.]
[Figure: time signal x(k) and amplitude of X(n) for N = 40.]
The highest significant signal frequency must lie below f0/2. Otherwise we get aliasing!
[Figure: three time signals x(k) and the amplitudes of their DFTs over discrete frequency n = 0…30.]
[Figure: signals for N = 32, N = 64, and N = 128 over n.]
[Figure: time signal x(k) and amplitude of X(n) over discrete frequency n = 0…30.]
2. Due to the periodicity of the complex exponential function, the DFT "thinks" the signal repeats itself infinitely often, i.e., the original signal for k = 0, 1, ..., N−1 is repeated for k = N, N+1, ..., 2N−1 and so on, as well as to the left.
[Figure: time signal x(k) with its periodically continued left neighbor, and the amplitude of X(n).]
• Additionally, the spectrum "smears" (leaks) across the whole frequency range. This is a direct consequence of the discontinuity of the time signal, which induces disturbing "steps" in the (imagined) periodic signal.
→ Leakage Effect
[Figure: periodically continued time signal x(k), k = 0…60, with a discontinuity at the period boundary.]
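The leakage effect can be reproduced with a direct DFT sum: a sine with an integer number of periods in the window hits exactly one bin (plus its mirror), while a non-integer number of periods smears across many bins. Frequencies below are illustrative:

```python
import cmath, math

# Sketch: leakage demo. 4 periods fit the window exactly; 4.5 periods do not.

def dft_amp(x):
    N = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / N)
                    for k in range(N)))
            for n in range(N)]

N = 64
exact = dft_amp([math.sin(2 * math.pi * 4.0 * k / N) for k in range(N)])
leaky = dft_amp([math.sin(2 * math.pi * 4.5 * k / N) for k in range(N)])

bins_exact = sum(1 for a in exact if a > 1e-6)  # only bins 4 and N-4
bins_leaky = sum(1 for a in leaky if a > 1e-6)  # energy spread over many bins
print(bins_exact, bins_leaky)
```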
[Figure: time signals x(k) and DFT amplitudes |X(n)| for N = 32, 64, 128, and 256.]
The original signal can be thought of as a multiplication with the rectangular window wL(k) of length L.
• This multiplication in the time domain corresponds to a convolution in the frequency domain. Here W(n) is the Fourier transform / DFT of the window.
[Figure: window w(t), periodically continued signal xp(k), rectangular window wL(k), and the windowed signal x(k), each over k = 0…60.]
[Figure: |W(n)| of the rectangular window for N = 32; n = 1 corresponds to f = f0/N, up to n = N−1.]
The zeros of the DFT of the rectangular window of length N lie at multiples of f0/N. If the time signal is an oscillation of frequency M·f0/N (integer M), then the zeros fall on integer values of n. In this case the convolution with such a signal is trivial and no leakage effect results.
Window functions: Uniform / Rectangular, Hann, Hamming, Bartlett, Blackman, Gauss.
[Figure: two time signals x(k), k = 0…250, and the amplitudes of their DFTs |X(n)|.]
[Figure: |X(n)| for a signal of frequency f1 = 10.5 Hz with a rectangular window (significant leakage into high frequencies) and with a Hann window (less leakage into high frequencies).]
Observations:
• The Hann window reduces the leakage effect significantly.
• Since the Hann window has a smaller area than the rectangular window, signal energy is lost and the amplitudes in the spectrum are smaller. It makes sense to normalize with respect to the window area in order to compensate for this influence.
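The normalization can be sketched numerically: the area (coherent gain) of the Hann window is half that of the rectangular window, so amplitudes have to be multiplied by about 2 to compensate:

```python
import math

# Sketch: coherent gain of the Hann window and the resulting amplitude
# correction factor (window length N is an illustrative choice).

N = 1024
hann = [0.5 - 0.5 * math.cos(2 * math.pi * k / N) for k in range(N)]

coherent_gain = sum(hann) / N          # window area relative to rectangular
correction = 1.0 / coherent_gain       # multiply spectrum amplitudes by this
print(coherent_gain, correction)       # ~0.5 and ~2.0
```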
Source: https://community.plm.automation.siemens.com/t5/Testing-Knowledge-Base/Window-Correction-Factors/ta-p/431775
Non-stationary Signals:
• Signals that change their characteristics / properties over time.
• In practice most signals are non-stationary. However, for a short time interval they can be considered, at least approximately, stationary. Examples:
o Signals with trends, i.e., with a slowly changing mean. This is typical for larger time scales, e.g., if we look at stock indices over years (not days!). A varying mean changes the d.c. value of the spectrum at n = 0 or f = 0 Hz.
o Through wear, the properties of construction elements change over time. Certain signals of machines (rotation speed, sound, ...) may change characteristics such as the frequency of their peak value.
o Instead of wear, a failure can also be the cause of such changes. However, this happens much faster!
[Figure: two time signals x(k), k = 0…500, containing frequencies from 0 Hz to 20 Hz and from 0 Hz to 60 Hz, and the amplitudes of their DFTs |X(n)|.]
[Figure: two window functions w(k) over k = 0…500.]
[Figure: original signal, k = 0…500, and the window positions used for the short-time DFT.]
[Figure: short-time DFT amplitudes |X(n)|, n = 0…50, at several window positions; around k0 = 255 strong noise is visible.]
Compromise: Δt · Δf = const. (time-frequency area = const.)
• With the width of the window in the short-time DFT, not only the time resolution Δt but implicitly also the frequency resolution Δf is fixed.
[Figure: time-frequency tilings for different choices of Δt and Δf.]
[Figure: signal amplitude over time t = 0…0.4 s.]
Goal of a short-time DFT: frequency analysis of the signal as a function of time. We want to know when which frequency occurs.
[Figure: spectrograms with window widths Δt = 0.025 s and Δt = 0.125 s.]
Source: Wikipedia
[Figure: spectrograms with Δt = 0.02 and Δt = 0.10.]
Source: lecture notes "Time-Frequency Analysis and Wavelet Transforms" by M. Clausen and M. Müller, Universität Bonn
Wavelet Transform
• Looks for wave packets of different lengths and frequencies.
• Long wave packets are of low frequency → high frequency resolution but low time resolution.
• Short wave packets are of high frequency → low frequency resolution but high time resolution.
• Idea: high frequencies commonly occur briefly and thus should be resolved more accurately in time than low frequencies, which typically are present for long time intervals.
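The multi-resolution idea can be sketched with the Haar wavelet, the simplest possible choice (the script does not prescribe a particular wavelet; this is only an illustration). Each level splits the signal into a coarse approximation (low frequencies, halved time resolution) and detail coefficients (high frequencies):

```python
# Sketch: one Haar analysis step; averages give the coarse approximation,
# halved differences give the detail (high-frequency) coefficients.

def haar_step(x):
    approx = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 0.0]
a1, d1 = haar_step(x)    # level 1: 4 approximations, 4 details
a2, d2 = haar_step(a1)   # level 2: coarser in time, finer in frequency

print(a1)  # [3.0, 5.0, 2.0, 0.0]
print(d1)  # [1.0, 0.0, -1.0, 0.0]
```

Repeating the step on the approximations yields exactly the tiling described above: short packets for high frequencies, long packets for low frequencies.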
Construction of Wavelets
[Figure: wavelet with scale 2 and time shift t0 = 0 over t = −2…2.]
Properties of Wavelets
• Through the time shift t0 the signal can be analyzed around t = t0.
[Figure: wavelets at different scales and time shifts over t = −2…2, and the resulting tilings of the time-frequency plane.]
Non-parametric Methods
• A large number N of data samples is described with a large number n of parameters. Often n = N, i.e., no averaging or noise suppression in the statistical sense takes place.
• The parameters themselves and their number have no direct physical motivation. The number just reflects issues such as accuracy, resolution, variance, etc.
• The parameters have no direct physical meaning or interpretation.
• Examples: FIR models (= impulse response models), DFT, ...
[Figure: |X(Ω)| in dB; modeling of one damped oscillation.]
[Figure: |X(n)|, n = 0…50.]
... with respect to noise!
• Leakage and picket fence effects distort the spectrum from a peak at f1 = 5.5 Hz to a broader bump.
Digital Filters
We focus on digital filters, i.e., filters that are discrete in time and can be described by difference equations. They can be implemented directly in digital electronic circuits (hardware) but usually are implemented as programs on a computer (software).
Time ↔ Frequency
Usually we consider signals as functions of continuous or discrete time t or k: x(t) or x(k). In many applications, however, the signals depend on other variables, such as location. This is the case for the vast field of image processing. "Frequency" then means the inverse of space (just as frequency normally is the inverse of time).
Coffee filter: lets only liquids pass!
Soot filter: lets only small particles pass!
Air filter: lets only small particles pass!
Optical filter: lets only certain colors pass!
Analog electronic filter: lets certain frequencies pass! Realized as an R-L-C circuit.
Digital filter: lets certain frequencies pass! Realized in software on a computer.
[Figure: signals u(t), t = 100…500, unfiltered and filtered with a PT1 filter with f = 0.2 and with f = 1.]
[Figure: amplitude over frequency; the desired signal lies at low frequencies, the disturbance at high frequencies.]
Then it is possible to place the limit (cut-off) frequency ωg in such a way that a significant part of the desired signal can pass while a significant part of the disturbance cannot.
Filter Types
If, as in the above example, the desired signal lies mostly in the low-frequency range while the disturbance lies mostly in the high-frequency range, a low-pass filter can improve the signal quality a lot. A low-pass filter lets all low-frequency components pass but suppresses all high-frequency components. It is the most commonly used filter type. In many applications, however, the desired signal and the disturbance lie in other frequency ranges.
[Figure: amplitude responses of low-pass, high-pass, band-pass, and band-stop filters.]
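A low-pass of this kind can be sketched with a first-order difference equation (a discrete PT1-type filter); the coefficient a below is an illustrative choice, not a value from the script:

```python
import math

# Sketch: discrete first-order low-pass y(k) = a*y(k-1) + (1-a)*u(k);
# a = 0.9 is an arbitrary illustrative choice.

def lowpass(u, a=0.9):
    y, yk = [], 0.0
    for uk in u:
        yk = a * yk + (1 - a) * uk
        y.append(yk)
    return y

slow = [math.sin(0.05 * k) for k in range(400)]  # desired low-frequency signal
fast = [math.sin(2.5 * k) for k in range(400)]   # high-frequency disturbance

amp_slow = max(lowpass(slow)[100:])  # passes almost unchanged
amp_fast = max(lowpass(fast)[100:])  # strongly attenuated
print(amp_slow, amp_fast)
```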
Ideal Filter
• Perfect transmission of the signal in the passband, i.e., amplitude gain = 1.
• Perfect suppression of the signal in the stopband, i.e., amplitude gain = 0.
• Infinitely steep transition from passband to stopband, i.e., steepness = ∞.
• No phase shift (no delay) of the signal, i.e., phase = 0.
[Figure: filter tolerance scheme with pass-band, transition band, and stop-band; p = pass, s = stop.]
Remarks:
• The closer Ωp and Ωs lie together and the smaller δ1 and δ2 are chosen, the more extreme the requirements become.
• More extreme requirements necessarily lead to more complex filters.
A filter with such a transfer function responds to an input oscillation u(t) with amplitude A1, frequency ω1, and phase φ1, after the transients have decayed, with the output:
y(t) = |G(iω1)| · A1 · sin(ω1 t + φ1 + arg G(iω1))   (amplitude gain, phase shift)
Because the phase shift is linear in the frequency, arg G(iω1) = −ω1 Td, this can be written as:
y(t) = |G(iω1)| · A1 · sin(ω1 (t − Td) + φ1)   (time shift / dead time)
The phase φ1 of the input signal u(t) is not changed by the filter, and this holds independently of the signal frequency ω1.
[Figure: linear phase characteristic.]
This means: F(z) has for every zero zn a mirrored zero at 1/zn and for every pole zp a mirrored pole at 1/zp. If zn and zp lie inside the unit circle (stable!), then 1/zn and 1/zp automatically lie outside the unit circle (unstable!). Consequently, zero-phase filters have the following properties:
• FIR: non-causal.
• IIR: unstable and non-causal.
[Figure: zero-phase (forward-backward) filtering of a signal u(k), k = 0…50: filter with G(z), reverse in time, filter again with G(z), reverse again; the overall magnitude response is |G(z)|².]
Scheme: u(k) → G(z) → x(k) → time reverse → x(N−k) → G(z) → y(N−k) → time reverse → y(k)
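The scheme can be sketched in a few lines; the first-order filter and its coefficient are illustrative choices:

```python
# Sketch: forward-backward (zero-phase) filtering with a simple first-order
# low-pass. The phase lag of the forward pass is canceled by the backward
# pass, leaving the squared magnitude response |G|^2.

def forward(u, a=0.7):
    y, yk = [], 0.0
    for uk in u:
        yk = a * yk + (1 - a) * uk
        y.append(yk)
    return y

def filtfilt(u):
    x = forward(u)          # forward pass: introduces a phase lag
    x = forward(x[::-1])    # filter the time-reversed signal
    return x[::-1]          # reverse again: the lags cancel

# A symmetric pulse stays centered, i.e., there is no delay:
u = [0.0] * 20 + [1.0] * 5 + [0.0] * 20
y = filtfilt(u)
peak = max(range(len(y)), key=lambda k: y[k])
print(peak)  # 22 = center of the pulse
```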
Properties
IIR filter:
• Order n is small: e.g. n = 2, 3, 4, ...
• Feedforward: bi u(k–i)
• Feedback: ai y(k–i)
• Infinite impulse response (IIR)
FIR filter:
• Order m is large: m = 10, 20, 30, ...
• Feedforward: bi u(k–i)
• No feedback!
• Finite impulse response (FIR)
[Figure: impulse responses over k = 0…40; the IIR filter offers little flexibility (5 degrees of freedom), while the FIR filter can realize any shape up to k = 16 (16 degrees of freedom).]
[Figure: impulse response over k = 0…20, truncated at m = 9; bm is chosen such that the gain remains correct.]
This is a reasonable approach for low-pass filters. For high-pass filters an alternative could be to require identical gains for ω → ∞ / z → ∞.
[Figure: amplitude responses of u(k) and y(k) on a logarithmic scale.]
Remember:
Addition of two conjugate complex numbers: (a + ib) + (a − ib) = 2a → purely real!
The same numbers written in absolute value and phase form: r e^{iφ} + r e^{−iφ} = 2r cos(φ) → purely real!
[Figure: conjugate complex pair z1, z2 in the complex plane with imaginary parts +b and −b.]
... are purely real and therefore have phase 0. Thus the phase of this filter finally is:
(+ π if the sign of "{...}" is negative!)
This impulse response typically is non-causal and of infinite length. We have to shift it and
crop it at a certain finite order m to make the FIR filter realizable.
[Figure: non-causal impulse response over k = –30 ... 30.]
[Figure: amplitude responses of windowed FIR designs with 31 coefficients (left) and 11 coefficients (right); ripples appear near the band edges.]
This explains the ripples. Unfortunately, they do not become smaller if more coefficients are
spent to describe the impulse response more accurately. This is the so-called Gibbs
phenomenon (see math, “Fourier series”).
In order to reduce this undesirable effect, the impulse response is multiplied with a smoother
window, as in the DFT context. Such a window can reduce high frequencies by letting the
impulse response slowly decay towards zero. For FIR filter design the so-called Kaiser
window is commonly applied.
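A windowed-sinc low-pass design with a Kaiser window can be sketched as follows (Python/NumPy sketch; the cutoff frequency and the window parameter β are example values, not taken from the script):

```python
import numpy as np

def kaiser_lowpass(m, wc, beta):
    """FIR low-pass of order m (m+1 coefficients) with normalized cutoff
    wc (0..pi): the ideal sinc impulse response is shifted to be causal
    and multiplied with a Kaiser window instead of being hard-cropped."""
    k = np.arange(m + 1) - m / 2                    # symmetric around the center
    h_ideal = wc / np.pi * np.sinc(wc / np.pi * k)  # ideal low-pass impulse response
    return h_ideal * np.kaiser(m + 1, beta)         # smooth decay towards zero

b = kaiser_lowpass(m=30, wc=0.4 * np.pi, beta=6.0)

assert np.allclose(b, b[::-1])           # symmetric coefficients: linear phase
assert abs(np.sum(b) - 1.0) < 0.01       # DC gain stays close to 1
```

Compared to a rectangular crop, the Kaiser window trades a wider transition band for much smaller ripples (the Gibbs overshoot is suppressed).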
However, the algorithm according to Parks and McClellan minimizes the maximal (not the
squared) error because this has yielded more reliable results:
The minimization of the maximal absolute value ensures that the ripples are equally
distributed over all frequencies, which led to the name equiripple filter. This criterion is also
important in many other approaches to robust optimization and control.
Because the absolute value of the error is orders of magnitude larger in the pass-band than in
the stop-band, it is important to multiply the errors with a normalization weight that
guarantees no frequency ranges are preferred:
The default mode of operation of firls and firpm is to design type I or type II linear phase filters, depending on whether
the desired order is even or odd, respectively. A low-pass example with approximate amplitude 1 from 0 to 0.4 Hz
and approximate amplitude 0 from 0.5 to 1.0 Hz is:
n = 20; % Filter order
f = [0 0.4 0.5 1]; % Frequency band edges
a = [1 1 0 0]; % Desired amplitudes
b = firpm(n,f,a); % Parks-McClellan FIR design
From 0.4 to 0.5 Hz, firpm performs no error minimization; this is a transition band or "don't care" region. A transition
band minimizes the error more in the bands that you do care about, at the expense of a slower transition rate. In this
way, these types of filters have an inherent trade-off similar to FIR design by windowing. To compare least squares to
equiripple filter design, use firls to create a similar filter.
The filter designed with firpm exhibits equiripple behavior. Also note that the firls filter has a better response over
most of the pass-band and stop-band, but at the band edges (f = 0.4 and f = 0.5) its response is further away from the
ideal than that of the firpm filter. This shows that the firpm filter's maximum error over the pass-band and stop-band is
smaller and, in fact, the smallest possible for this band edge configuration and filter length.
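The least-squares design behind firls can be imitated with plain linear algebra (a NumPy sketch, not the MATLAB implementation): for a symmetric (type I) FIR filter of even order, the amplitude response is linear in the coefficients, so sampling it on a dense frequency grid gives an ordinary least-squares problem.

```python
import numpy as np

M = 10                                    # half order: 2*M+1 = 21 symmetric taps
# dense frequency grid: pass-band (desired 1) and stop-band (desired 0),
# with a "don't care" transition band in between
w_pass = np.linspace(0, 0.4 * np.pi, 200)
w_stop = np.linspace(0.5 * np.pi, np.pi, 200)
w = np.concatenate([w_pass, w_stop])
d = np.concatenate([np.ones_like(w_pass), np.zeros_like(w_stop)])

# amplitude of a type I FIR filter: A(w) = c0 + sum_k 2*c_k*cos(k*w)
k = np.arange(M + 1)
C = np.cos(np.outer(w, k))
C[:, 1:] *= 2.0

c, *_ = np.linalg.lstsq(C, d, rcond=None)   # least-squares fit of the amplitude
h = np.concatenate([c[::-1], c[1:]])        # symmetric impulse response

A = C @ c                                   # achieved amplitude response
assert np.allclose(h, h[::-1])              # linear phase
assert np.max(np.abs(A - d)) < 0.2          # largest error sits at the band edges
```

As the quoted comparison states, such a least-squares fit is good on average but worst at the band edges, whereas the equiripple (Parks-McClellan) design spreads the error evenly.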
[Figure: equiripple filter example: amplitude and phase responses, and the noisy signals u(k) and y(k) over k.]
[Figure: input signals u(k) over k for both filter variants.]
Left:
• Positive and negative half waves have to be symmetrical in order to cancel each other.
• m has to be even.
• Little distortion for other frequencies.
• Removes only multiples of 2ωp.
Right:
• Positive and negative half waves must accumulate to zero.
• Strong averaging (low-pass effect).
• Removes only multiples of ωp.
[Figure: amplitude and phase responses of both filters, and their noisy step responses over k.]
[Figure: pole-zero configurations in the z-plane.]
The upper bound for the digital frequency is given by the half sampling frequency according to Shannon:
[Figure: approximation of an impulse by a rectangle of finite height and width u(t).]
In the z-domain this results in:
[Figure: amplitude responses of Butterworth (left) and Chebyshev type I (right) low-pass filters.]
Butterworth Filter
• Design with focus on maximal flatness of the amplitude response close to the limit
frequency.
• Monotone amplitude response, i.e., no ripples.
• Fast drop-off in the amplitude response at the limit frequency.
• Strong overshoot of the impulse response.
• Relatively low steepness with 20·n dB / decade (n = filter order).
Amplitude Response:
[Figure: s-plane pole configurations for n = 3 and n = 4; the n stable poles lie equally spaced on a circle in the left half plane.]
where si are the n stable ones among the 2n roots of the denominator polynomial.
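These Butterworth properties can be checked numerically (a NumPy sketch with the cutoff normalized to ωc = 1): the magnitude response is |H(jω)|² = 1/(1 + ω^(2n)), and the stable poles lie equally spaced on the circle |s| = ωc in the left half plane.

```python
import numpy as np

n = 4                                        # filter order, cutoff wc = 1
# stable Butterworth poles: equally spaced on the circle |s| = wc
k = np.arange(1, n + 1)
poles = np.exp(1j * np.pi * (2 * k + n - 1) / (2 * n))
assert np.all(poles.real < 0)                # all poles in the left half plane

w = np.logspace(-1, 2, 500)
mag = 1.0 / np.sqrt(1.0 + w ** (2 * n))      # |H(jw)| of a Butterworth filter

assert np.all(np.diff(mag) < 0)              # monotone: no ripples
# -3 dB (factor 1/sqrt(2)) at the limit frequency wc = 1:
assert abs(mag[np.argmin(np.abs(w - 1.0))] - 1 / np.sqrt(2)) < 0.02
```

Raising n sharpens the drop-off (20·n dB per decade) while the response stays maximally flat below the limit frequency.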
[Figure: Bode plots of Butterworth low-pass filters of order n = 2, 4, 6: magnitude [dB] and phase [°] over frequency.]
[Figure: step responses of Butterworth low-pass filters of order n = 2, 4, 6 over time [s].]
Chebyshev Filter
• Steeper than the Butterworth filter.
• Ripples in the pass-band (type I) or stop-band (type II) of the amplitude response.
Acceptance of the ripple drawback for benefits in steepness.
• Step response oscillates more than for the Butterworth filter.
• Turns into the Butterworth filter if the allowed ripple factor ε → 0!
• Design parameters: limit frequency, order n, allowed ripple factor ε.
Chebyshev Polynomial of Order n:
ε: ripple factor
Because the Chebyshev polynomial oscillates between 0 and 1 in the pass-band, a lower limit
on the gain is given by:
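The Chebyshev polynomials obey the recursion T0(x) = 1, T1(x) = x, Tn(x) = 2x·Tn–1(x) – Tn–2(x). Inside the pass-band |x| ≤ 1 they stay bounded, which produces the ripples, while outside they grow rapidly, which explains the steepness (Python sketch):

```python
def chebyshev(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) via the recursion
    T_0 = 1, T_1 = x, T_n = 2*x*T_{n-1} - T_{n-2}."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# bounded in the pass-band |x| <= 1 (ripples) ...
assert all(abs(chebyshev(6, x / 100)) <= 1.0 + 1e-9 for x in range(-100, 101))
# ... but growing quickly outside: steep transition of the filter
assert chebyshev(6, 2.0) > 100
```

The ripple factor ε only scales this polynomial inside the amplitude response; the bounded/explosive behavior of Tn(x) is what distinguishes the pass-band from the stop-band.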
Source: https://en.wikipedia.org/wiki/Chebyshev_filter
[Figure: Bode plots of Chebyshev low-pass filters of order n = 2, 4, 6: magnitude [dB] and phase [°] over frequency.]
[Figure: step responses of Chebyshev low-pass filters of order n = 2, 4, 6 over time [s].]
[Figure: Bode plot: magnitude (dB) and phase (deg) over frequency (rad/s).]
Source: https://en.wikipedia.org/wiki/Elliptic_filter
[Figure: Bode plots of elliptic (Cauer) low-pass filters of order n = 2, 4, 6: magnitude [dB] and phase [°] over frequency.]
[Figure: step responses of elliptic (Cauer) low-pass filters of order n = 2, 4, 6 over time [s].]
Shannon frequency
WARNING: Formally such a block diagram is wrong because it mixes time and frequency
domain. However, such a sloppy representation is commonly found and easy to read. More
strictly, the following time delay is meant:
Addition:
Subtraction:
[Figure: computation order (steps 1. to 4.) in the filter block diagram realizations.]
If the order of the numerator is smaller than the order of the denominator (m < n), then simply
the lacking bi = 0 for i > m. This transfer function can be split into two parts in two ways:
Direct Form I:
Direct Form II:
In this product each factor represents a second-order system with two conjugate complex or
two real poles. For an even order n of the complete filter, l = n/2.
For an odd n we have l = (n+1)/2 and bl2 = al2 = 0.
Parallel Form
Consists of a parallel circuit of filters derived from a partial fraction expansion:
This means that filters with poles at 0, with real poles at –ai, and with conjugate complex pole
pairs at –fi and –fi* are run in parallel.
Causal Filters
For a causal filter the output y(k) depends only on the current and previous inputs u(k–i)
with i ≥ 0. This automatically means that the impulse response is equal to zero for negative
times, since g(i) = 0 for i < 0; otherwise future inputs would influence the present.
[Figure: causal impulse response g(k), zero for k < 0.]
Non-Causal Filters
For a non-causal filter the output y(k) also depends on future inputs u(k–i) with i < 0.
This automatically means that the impulse response is not equal to zero for negative times,
since g(i) ≠ 0 for i < 0, because future inputs are relevant.
[Figure: non-causal impulse response g(k), commonly symmetrical (but this is not necessary), non-zero for k < 0.]
[Figure: step responses of a non-causal and a causal filter.]
• Because a non-causal filter can “react” to a step input before it
Median Filter
Probably the most important and most frequently used nonlinear filter is the median filter. It is
helpful in eliminating outliers. In contrast to the arithmetic average, the median gives the
number which is right in the middle of a sorted sequence, i.e., half of the numbers are larger,
half of the numbers are smaller.
Example:
Sequence: 4, 7, 20, 21, 30 → median = 20, arithmetic average = 16.4
Sequence: 4, 7, 20, 21, 1000 → median = 20, arithmetic average = 210.4
The median is commonly used to eliminate outliers, e.g. in statistics, where the arithmetic
average does not represent the “typical” case, like study program duration, house prices, etc.
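A sliding median filter can be sketched in a few lines (Python sketch; the window length of 3 and the edge handling by a shrinking window are example choices):

```python
import statistics

def median_filter(u, window=3):
    """Slide a window over the signal and replace each sample by the
    median of its neighborhood; at the edges the window shrinks."""
    half = window // 2
    return [statistics.median(u[max(0, k - half):k + half + 1])
            for k in range(len(u))]

# a ramp with one outlier at k = 3
u = [1, 2, 3, 100, 5, 6, 7]
print(median_filter(u))   # the outlier is removed completely
```

A linear FIR filter of the same length would only smear the outlier over its neighbors; the median removes it without distorting the ramp.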
[Figure: outlier elimination over time k. Left: signal u(k) with single and double outliers. Middle: median filter output y(k) removes the outliers. Right: linear FIR filter output y(k) only smears the outliers.]
This means the update is proportional to the (new) step size η´, to the error e(k) and to the
“excitation” (regressor) of the corresponding parameter θi, namely u(k–i).
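This update rule is the LMS algorithm for an adaptive FIR filter. A minimal NumPy sketch (the true filter, signal length, and step size are example choices, not taken from the script):

```python
import numpy as np

def lms(u, y, m, step):
    """Adapt FIR parameters theta so that theta' * [u(k), ..., u(k-m)]
    tracks y(k): theta <- theta + step * e(k) * regressor."""
    theta = np.zeros(m + 1)
    for k in range(m, len(u)):
        phi = u[k - m:k + 1][::-1]   # regressor [u(k), ..., u(k-m)]
        e = y[k] - theta @ phi       # prediction error e(k)
        theta += step * e * phi      # update: proportional to error and excitation
    return theta

rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
true_b = np.array([0.5, 0.3, 0.2])           # unknown FIR filter to identify
y = np.convolve(u, true_b)[:len(u)]

theta = lms(u, y, m=2, step=0.05)
assert np.allclose(theta, true_b, atol=0.01)  # parameters are recovered
```

Each parameter θi is only corrected when its regressor u(k–i) is excited, which is exactly the proportionality stated above.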
FIR filter:
fir1 % FIR filter using the window method
firls % FIR filter using least-squares optimization
firpm % FIR filter using Parks-McClellan optimization
IIR filter:
besself % Bessel filter
butter % Butterworth filter
cheby1 % Chebyshev filter type 1
cheby2 % Chebyshev filter type 2
ellip % Cauer filter (elliptic filter)
Filter parameter identification:
[b,a] = invfreqz(h,w,n,m); % Identifies a discrete-time amplitude
                           % and phase response (continuous-time:
                           % “invfreqs”)
11. Selected Methods in Signal Processing
11.1 Principal Component Analysis
Data Preprocessing
Complex tasks in signal processing often are partitioned into two or more steps that each can
be handled more simply individually. Typically, an early (first) step is called signal preprocessing.
Depending on the specific task, signal preprocessing can be:
• Filtering, smoothing, interpolation
• Transformation of data into a new coordinate system
• Dimension reduction, data compression
• Transformation into the frequency domain
• Feature extraction
• Nonlinearity transform
Some of the most common and important data preprocessing approaches will be discussed in
the following.
[Figure: processing chain from the original inputs u1, u2, u3, ..., up via data preprocessing to the transformed inputs x1, x2, x3, ..., xq, further data processing, and the outputs y1, y2, y3, ..., yr.]
[Figure: N data points in p dimensions; projection of a data point onto a candidate axis.]
The scalar products uT(k) x are the projections of the k = 1, 2, ..., N data points onto an
arbitrary axis x = {x1, x2, ..., xp}. If the data has zero mean (if not, the mean has to be
subtracted first), then the expression (uT(k) x)2 corresponds to the squared distance to the
mean (which is equal to 0).
We want to maximize this expression. However, we must prevent the variance from becoming
large just by scaling up the axis (and thereby generating large numbers). Thus the axes’ scaling
is restricted to a norm of 1:
This constraint is included in the optimization. With λ as Lagrange multiplier we obtain the
following optimization problem:
The eigenvector corresponding to the highest eigenvalue λ1 is the first axis x1, the eigenvector
corresponding to the second highest eigenvalue λ2 is the second axis x2, and so on up to the
smallest eigenvalue λp with the p-th axis xp. The eigenvalues of UTU are the squared singular
values of U and thus can be computed with a singular value decomposition (SVD). This can
be done to extremely high accuracy without explicitly squaring the matrix U. These
eigenvalues are all non-negative and the associated eigenvectors are orthogonal to each other.
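The whole procedure can be verified in a NumPy sketch (the data matrix is a small random example):

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((100, 3))        # N = 100 data points, p = 3 dimensions
U = U - U.mean(axis=0)                   # PCA requires zero-mean data

# SVD without squaring U: rows of Vt are the principal axes,
# s are the singular values, sorted from large to small
_, s, Vt = np.linalg.svd(U, full_matrices=False)

# the eigenvalues of U'U are the squared singular values of U
eigvals = np.linalg.eigvalsh(U.T @ U)[::-1]   # sorted large to small
assert np.allclose(eigvals, s ** 2)

# the principal axes are orthonormal
assert np.allclose(Vt @ Vt.T, np.eye(3))

# the projection onto the first axis carries the maximal variance s1^2
scores = U @ Vt[0]
assert np.isclose(scores @ scores, s[0] ** 2)
```

Working with the SVD of U instead of the eigendecomposition of UTU avoids explicitly squaring the matrix and is therefore numerically more accurate.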
If U (m×n) has more rows than columns, the following matrix dimensions arise:
[Figure: SVD of U: the m×n matrix U equals the product of an m×n matrix with orthonormal columns, the n×n diagonal matrix S = diag{s1, s2, ..., sn}, and the n×n matrix V T. S contains the singular values of U on its diagonal. They are identical to the square roots of the eigenvalues of UTU and are sorted from large to small.]
Therefore the matrix U can be decomposed into a sum of n outer products (each of rank 1),
whose influence becomes smaller with the decreasing singular values s1, s2, ..., sn, with
maximal rank = n. If the rank of U is r < n, then sr+1 = ... = sn = 0.
[Numerical example: U = 35.1826 · (first outer product) + 1.4769 · (second outer product) + 0 · (third outer product).]
The last equality holds because V is unitary, i.e., V TV = I and V V T = I, and thus V T = V ‒1.
In the case of dimensionality reduction, only the most important axes are selected. They
belong to the largest eigenvalues of UTU or to the largest singular values of U, respectively.
Because an SVD sorts the eigenvalues according to their absolute values, this corresponds to
the first singular values.
• The most important 5-10 axes from a PCA already represent the picture quite well; they
capture 97% of the variance, and the singular values quickly decline to 0.
• Computational effort is high. This method is not used in practice.
[Figure: singular values over their index (logarithmic scale) and reconstructions of the image from the leading axes.]
Figure 2. Rank 12, 50, and 120 approximations to a rank 598 color photo of Gene Golub.
[Reprint of an excerpt from Cleve Moler’s column “Professor SVD”; the interleaved columns read:]
“... You can judge whether the singular values are small enough to be regarded as negligible, and if they are, analyze the relevant singular system. Let Ek denote the outer product of the k-th left and right singular vectors, that is Ek = uk vkT. Then A can be expressed as a sum of rank-1 matrices, A = Σk σk Ek. If you order the singular values in decreasing order, σ1 > σ2 > ... > σn, and truncate the sum after r terms, the result is a rank-r approximation to the original matrix. The error in the approximation depends upon the magnitude of the neglected singular values. When you do this with a matrix of data that has been centered, by subtracting the mean of each column from the entire column, the process is known as principal component analysis (PCA). The right singular vectors, vk, are described in terms of the eigenvalues and eigenvectors of the covariance matrix, AAT, but the SVD approach sometimes has better numerical properties.
SVD and matrix approximation are often illustrated by approximating images. Our example starts with the photo on Gene Golub’s Web page (Figure 2). The image is 897-by-598 pixels. We stack the red, green, and blue JPEG components vertically to produce a 2691-by-598 matrix. We then do just one SVD computation. After computing a low-rank approximation, we repartition the matrix into RGB components. With just rank 12, the colors are accurately reproduced and Gene is recognizable, especially if you squint at the picture to allow your eyes to reconstruct the original image. With rank 50, you can begin to read the mathematics on the white board behind Gene. With rank 120, the image is almost indistinguishable from the full rank 598. (This is not a particularly effective image compression technique. In fact, my friends in image processing call it “image degradation.”)
It is possible to discuss singular values without discussing eigenvalues, but, of course, the two are closely related. In fact, if A is square, symmetric, and positive definite, its singular values and eigenvalues are equal, and its left and right singular vectors are equal to each other and to its eigenvectors. More generally, the singular values of A are the square roots of the eigenvalues of ATA or AAT. Singular values are relevant when the matrix is regarded as a transformation from one space to a different space with possibly different dimensions. Eigenvalues are relevant when the matrix is regarded as a transformation from one space into itself, as, for example, in linear ordinary differential equations.”
[The excerpt closes with a list of further SVD/PCA resources (the Wikipedia pages on SVD and PCA, chemometrics work by Rasmus Bro and Barry Wise, the Tensor Toolbox for MATLAB by Tammy Kolda and Brett Bader, protein substate modeling by Tod Romo, SVD/PCA for gene expression analysis, motion-capture analysis, and SVD-related patents) and with Sirovich’s Supreme Court example: between 1994 and 2002 the court heard 468 cases, so the nine justices’ majority/minority positions form a 468-by-9 matrix of +1s and -1s. If the judges had made their decisions by flipping coins, this matrix would almost certainly have rank 9, but the third singular value is an order of magnitude smaller than the first one, so the matrix is well approximated by a matrix of rank 2: most of the court’s decisions are close to being in a two-dimensional subspace of all possible decisions.]
11.1 Principal Component Analysis
Difficulties with Dimensionality Reduction
The assumption that low-variance axes are redundant and can be removed can be wrong! A
small variance points towards a possible linear dependency, but this is not necessarily the
case. An analysis based on input space distributions only can never ensure this with certainty.
The output has to be considered in order to be sure.
For example, for dynamic processes a strong correlation of two subsequent outputs y(k–1)
and y(k–2) occurs. However, they are not redundant if the process is of AR(2)-type, for
example, that is, it follows the equation:
[Figure: time series y(k) over time k, and scatter plot of y(k–1) versus y(k–2): strongly correlated, but important information!]
[Figure: feature extraction maps the inputs u1, u2, ..., up to features x1, x2, ..., xq; each output xi can depend on all inputs uj. Feature selection picks a subset, e.g. u2, u5, ..., uq; each output is identical to one input.]
Basics of Clustering
Like PCA, clustering operates on the input data. The task is to find groups (clusters) of data
points. These groups can be of different shapes and sizes. Depending on the method, a special
prototype is defined that determines how a cluster should look. In two dimensions, examples
are: hollow or filled circles or ellipsoids, lines, ...
[Figure: data in the (u1, u2)-plane with K = 4 clusters.]
where K is the number of clusters and the sum runs over the data points belonging to the
cluster j whose center of gravity is closest (in the Euclidean sense).
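The resulting K-means algorithm alternates between assigning each point to the closest center and recomputing each center as the mean of its assigned points (a NumPy sketch with a made-up two-blob data set):

```python
import numpy as np

def kmeans(data, K, iterations=20, seed=0):
    """Plain K-means: assign each point to its nearest center, then move
    every center to the center of gravity of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), K, replace=False)]  # init on data points
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iterations):
        # squared Euclidean distance of every point to every center
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)               # assignment step
        for j in range(K):
            if np.any(labels == j):              # update step
                centers[j] = data[labels == j].mean(axis=0)
    return centers, labels

# two well separated blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)),
                  rng.normal(5, 0.3, (50, 2))])
centers, labels = kmeans(data, K=2)   # one center lands in each blob
```

Note that both steps decrease the loss function, so the iteration converges; with badly scaled axes the Euclidean distances (and thus the clusters) are distorted, which is why normalization matters.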
Observations:
• Convergence is very fast; only a few iterations are needed.
• The global minimum of the loss function is reached in most cases.
• The sensitivity with respect to the initialization is low.
• For reasonable results, the number of clusters has to be chosen in the right manner.
• Normalization of the data is important because some dimensions can be dominant
(and others almost irrelevant) if axes are scaled differently.
[Figure: first and second K-means iterations on example data: cluster centers and assignments after each step.]
[Figure: further K-means iterations until convergence.]
[Figure: K = 3, 10 iterations until convergence; the scaling of the x-axis is a factor 100 larger, which distorts the clustering.]
[Figure: K-means iterations on the example data.]
The second sum runs over all data points (not only those belonging to a single cluster j).
K-means is a special case of fuzzy K-means with
μij = 1 if the data point belongs to cluster j,
μij = 0 if the data point does not belong to cluster j.
The variable μij denotes the degree of membership to a cluster. A value of “1” means this
point fully belongs to that cluster; a value of “0” means it does not. The degree of
membership μij can be extended from a binary value to a real value between 0 and 1. Each
point then belongs to each cluster to a certain degree, and the degrees have to sum up to 1. A
degree of membership of 0.51 to cluster A is similar to 0.49 to cluster B and would yield
similar results. In classical K-means the membership is binary and the point would fully be
associated with cluster A and not at all with cluster B. Therefore, fuzzy clustering is less
prone to bad initialization.
... they are clustered first. With the help of these clusters, the classifier has an easier task to
perform the classification. The underlying idea is that a certain distribution of the data
reflects the associated classes. Often this is the case. However, this assumption can also
go astray.
[Figure: top: clusters = classes (class 1 and class 2 coincide with the found clusters); bottom: clusters ≠ classes (the class boundary cuts through the clusters).]
PCA:
[COEFF,SCORE] = princomp(X);1
1 : Statistics Toolbox
2 : Fuzzy Logic Toolbox
The relative error er is the absolute error divided by the true value yw and commonly is given
in percent:
Often the quadratic error e2 (absolute or relative) is utilized as a criterion for optimization.
Many reasons for this exist. An important one is that the resulting optimization problem is
particularly easy to solve and manage (least squares).
• Static errors: In the ideal case, the characteristics of the sensor are linear/affine.
In practice, nonlinearities distort the result.
Example: quantity = temperature, output = voltage:
T [°C]: –100, –50, 0, 50, 100
U [V]: 1, 1.7, 3, 6, 10
[Figure: nonlinear characteristic U(T) compared to a linear/affine one.]
• Dynamic errors: If the measured quantity changes over time, the sensor follows with a
time constant and delay. If we do not wait long enough until the measurement values
reach steady state (settling time), a dynamic error occurs.
[Figure: step in T and delayed sensor response U over time t.]
• Quantization errors: During the A/D conversion the discretization causes errors in time
(through sampling) and in amplitude (through quantization). The latter corresponds to a
stepwise characteristic. The maximum error is eQ/2.
[Figure: stepwise quantization characteristic UQ(U) with quantization step eQ.]
The accuracy rating declares the maximal error to be expected, in percent of the instrument
range. Typical accuracy ratings: 0.1; 0.2; 0.5; 1; 1.5; 2.5.
Examples:
a) Determination of electrical power from voltage and current:
c) Determination of force via resistance change dependent on length, area, and specific
conductivity:
with
How do errors in the measurement of U, I, s, t, l, A (or r), ρ affect the final results?
The errors of the single measurements xi are denoted by Δxi. This yields the following
systematic error accumulation for the final output y:
This equation is directly obtained from the Taylor series expansion of the function f, in which
all terms of higher than first (linear) order are neglected. Thus it is approximately correct if the
errors are small, i.e., Δxi is close to zero.
In the above equation, measurement errors can cancel or attenuate each other because they
might be of opposite sign. Of course, this requires knowledge about the sign of Δxi and
the slope of f(·) and therefore about the systematic over- or underestimation.
Examples:
a) Power measurement:
If for example the voltage is measured too small (ΔU < 0) and the current too large
(ΔI > 0) (and U > 0, I > 0), then these errors can (partly) compensate each other. If
nothing is known about the sign of the errors and only their magnitude can be assessed,
then a maximal error assessment has to be made in which the individual errors
accumulate.
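For the power example P = U·I, the linearized error accumulation and the maximal error assessment look like this (Python sketch with made-up numbers):

```python
# power measurement P = U * I with partial derivatives dP/dU = I, dP/dI = U
U, I = 230.0, 5.0          # measured values
dU, dI = -1.0, 0.05        # assumed measurement errors (signs known)

# systematic error accumulation (first-order Taylor expansion):
dP = I * dU + U * dI       # opposite signs can partly compensate
# maximal error assessment (only magnitudes known): errors accumulate
dP_max = abs(I * dU) + abs(U * dI)

print(dP, dP_max)          # 6.5 vs. 16.5
```

The compensated error (6.5 W) is much smaller than the worst-case bound (16.5 W), which illustrates why knowledge of the error signs is valuable.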
In this example a (partial) compensation happens if both the distance and the time interval
are over- or underestimated, because of the “−” sign. Notice that the second term can
become extremely large if the time interval t is chosen very small, i.e., then the speed
measurement is very sensitive with respect to measurement errors in time.
The standard deviations of the individual input factors xi shall be given by sxi. Then the
standard deviation of the output quantity y becomes:
This is a universal statistical law! 100 times more measurement values improve the quality
by a factor of 10 by reducing the standard deviation of the output y correspondingly.
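This 1/√N law can be verified numerically (Python sketch with simulated standard normal measurements): the scatter of the mean of N samples is √N times smaller than the scatter of a single sample.

```python
import random

random.seed(0)
N, runs = 100, 2000

# compute the mean of N noisy measurements, repeated over many runs
means = [sum(random.gauss(0.0, 1.0) for _ in range(N)) / N
         for _ in range(runs)]

# empirical scatter of the mean (true mean is 0)
std_of_mean = (sum(m * m for m in means) / runs) ** 0.5

# theory: sigma / sqrt(N) = 1 / sqrt(100) = 0.1
print(std_of_mean)   # close to 0.1
```

A single measurement scatters with σ = 1, but the mean of 100 measurements scatters with only about 0.1, i.e., 100 times more data gives a 10 times better result.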
[Figure: histogram of relative frequencies.]
The relative frequencies of observations sum up to 1:
It is:
The density p(x) is a continuous, not a stepwise function. We can calculate the probability
of a measurement falling into a certain interval (x1, x2] by:
The true density p(x) according to which the measurements are distributed is usually
unknown. Typically, realistic assumptions are made from insights into the first principles and
a histogram. In most cases a Gaussian distribution is assumed if nothing contrary is known.
Here is why... (see next slide)
[Figure: probability density function p(x).]
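For the Gaussian density, the interval probability can be evaluated with the error function (Python sketch; a standard normal distribution is assumed as an example):

```python
import math

def interval_probability(x1, x2, mu=0.0, sigma=1.0):
    """P(x1 < x <= x2) for a Gaussian density, using the cumulative
    distribution Phi(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))."""
    Phi = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return Phi(x2) - Phi(x1)

print(round(interval_probability(-1, 1), 4))   # 0.6827: the 1-sigma interval
print(round(interval_probability(-3, 3), 4))   # 0.9973: the 3-sigma interval
```

These are exactly the confidence levels (68.27%, 95.45%, 99.73%, ...) used for the confidence intervals on the following slides.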
The estimation results depend on the actual measurement data. If the same quantity is
measured twice (even under identical conditions) we obtain different results and thus
different estimates, because the random disturbances (noise) have different values.
σ²estimator ∼ 1/N,  σestimator ∼ 1/√N
A data set 4 times the size reduces the scattering by a factor of 2!
If the bias (and the variance) tend to 0 for N → ∞, then we call this a consistent estimation:
Sample mean:
It can be shown that the sample mean approaches the true value (unbiased) if N becomes large.
It can also be shown that for statistically independent data the variance of the sample mean
estimate decreases for increasing data set size N, such as [4]:
is its estimate!
Sample variance:
The true mean μx is usually unknown and is replaced by its best estimate. Because of this,
the sum is divided by N–1 and not by N. One degree of freedom (dof) was already exploited
or exhausted (figuratively speaking) for the estimation of this mean value and is not available
anymore for the variance estimation. Only N–1 dof are remaining. It can also be shown
theoretically that due to the denominator N–1 we have an unbiased estimation [4]:
→ unbiased!
The variance of an estimate can be used for assessing the reliability of the estimate itself. It is
required, for example, for the determination of the confidence intervals that indicate the
reliability of the estimate.
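The N–1 denominator is what statistics libraries call “ddof = 1”; a quick check (Python sketch with a made-up data set):

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
N = len(data)

mean = sum(data) / N
# sample variance with denominator N-1 (unbiased)
var_unbiased = sum((x - mean) ** 2 for x in data) / (N - 1)
# denominator N would systematically underestimate the variance
var_biased = sum((x - mean) ** 2 for x in data) / N

print(mean, var_biased, var_unbiased)   # 5.0, 4.0, ~4.571
```

The difference matters most for small N; for large N both estimates practically coincide.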
Confidence Interval
The trust or confidence in an estimate can be quantified based on its probability density
function (pdf). The pdf allows one to calculate the probability that the true value lies within
some interval. Typically a symmetric interval around the mean is considered. Most pdfs also
have their maximal value at their mean. The probability that the deviation from the mean is
smaller than ±δ is:
For any interval size (width) δ we can calculate the associated probability. It is called a
confidence interval.
[Figure: standard normal density with confidence levels: 68.27% within ±1σ, 95.45% within ±2σ, 99.73% within ±3σ, 99.99% within ±4σ.]
The associated probability values 1–α are called confidence levels. The probability of error is
denoted by α.
This means it is possible, in principle, to decrease the standard deviation of the mean to an
arbitrary accuracy. We just have to measure often enough! To double the accuracy we
have to measure 4 times as many values. In the end, this is just a matter of cost and time.
where the factor c corresponds to the requested confidence level 1–α or error probability α,
e.g. c = 3 for a confidence level of 99.73%.
Instead of measuring the value x a single time, the mean can be calculated from N measure-
ments. Then we replace x with the sample mean, and its standard deviation decreases
according to 1/√N:
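Putting the pieces together (Python sketch; the data values are made up): the confidence interval of the mean is x̄ ± c·s/√N, with c chosen for the requested confidence level.

```python
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
N = len(data)

mean = sum(data) / N
# sample standard deviation with the N-1 denominator
s = (sum((x - mean) ** 2 for x in data) / (N - 1)) ** 0.5

c = 3.0                        # 99.73% confidence level (normal distribution)
half_width = c * s / N ** 0.5  # standard deviation of the mean: s / sqrt(N)

print(f"{mean:.3f} +/- {half_width:.3f}")
```

For small N the factor c should be taken from the t-distribution instead of the normal distribution, as discussed below.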
The formula for known standard deviation is used, i.e., the confidence interval is
calculated from the normal distribution, because the standard deviation is well-known
from the previous history of the instrument. (Or we assume N → ∞ for the estimate.)
Sample mean:
This result is more accurate by a factor of 3.16 for the same error probability of 0.3%.
Even more measurements would improve the accuracy further.
Factor c for the t-distribution with the confidence level of 1–α = 99.7%: c = 3.96
6.1 Overview
6.2 Static Behavior of Sensors
6.3 Dynamic Behavior of Sensors
6.4 Filtering of Sensor Signals
Against these error sources, countermeasures can be taken that eliminate or at least
reduce the error:
1. Compensation of the nonlinear distortion.
2. Compensation of the dynamic lag, or waiting for the signal to settle (dynamics have faded).
3. Filtering to suppress noise.
Even if these countermeasures are not completely successful or sufficient, it is important to
understand their effects. Only this allows one to assess the errors appropriately.
For converting between input and output (or back), only the
proportionality constant k is necessary. It is independent of
the operating point (OP). This is also true for the almost as
simple affine relationship that includes an additional offset:
Method 2 is better, if x changes slowly and it is possible to adjust the line as the OP changes.
If the behavior is rapidly time-variant the 1. method might be better.
we recognize that the quadratic terms (and all terms of even powers) are eliminated in the
difference calculation:
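This cancellation can be sketched with a Taylor expansion around the operating point x₀ (a generic characteristic f; the notation is illustrative):

```latex
f(x_0 + \Delta x) - f(x_0 - \Delta x)
  = 2 f'(x_0)\,\Delta x + \frac{2 f'''(x_0)}{3!}\,\Delta x^3 + \dots
```

All even-power terms appear with the same sign in both evaluations and therefore cancel in the difference.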
The inverse function only exists if f (x) is biunique (bijective), i.e., if for every y from the physically
reasonable range exactly one x exists. If f (x) does not fulfill this property (most sensor characteristics do), then
the inversion can be carried out in intervals in which this property holds. By such an inversion,
the electronics can compensate for all (or at least most) nonlinearities in the sensor. The “~“ shall
indicate that an exact inversion is never possible in practice.
A prerequisite for an inversion is that the function f (x) is known accurately. Special care is
necessary for very small sensitivities (where the inverse becomes very large), because tiny
errors then cause huge deviations.
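As a sketch (assuming a monotonically increasing characteristic f on a known interval), the compensation can be implemented by numerically inverting f, e.g. with bisection; the example characteristic is purely illustrative:

```python
def invert(f, y, lo, hi, tol=1e-10):
    """Find x in [lo, hi] with f(x) = y for a monotonically increasing f.
    This plays the role of the approximate inverse characteristic."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = lambda x: x + 0.1 * x**3        # illustrative nonlinear sensor curve
x_true = 1.7
x_rec = invert(f, f(x_true), 0.0, 5.0)
print(abs(x_rec - x_true) < 1e-6)   # True
```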
Sensor Evaluation
[Figure: static characteristic y = f(x) with operating points x01 … x06 and corresponding outputs y01 … y06 over time t]
A typical characteristic map out of
(1) soft-iron core, (2) permanent magnet, (3) pole shoes, (4) scale, (5) mirror scale, (6) restoring spring, (7) moving coil, (8)
rest position, (9) maximum deflection, (10) coil former, (11) adjustment screw, (12) pointer, (13) south pole, (14) north pole
Change of Range:
[Circuit diagrams: an ammeter with measuring-range extension by factors of 10 and 100 using shunt resistors in parallel to the meter resistance RM; each stage divides the current I by 10 and changes the effective internal resistance]
[Circuit diagram: current measurement with source U0, resistance R, and ammeter A. The internal resistance RM of the ammeter distorts the measurement!]
[Circuit diagram: voltage measurement with source current I0, resistance R, and voltmeter V. The finite internal resistance RM of the voltmeter distorts the measurement!]
[Circuit diagrams: a voltmeter with measuring-range extension by factors of 10 and 100 using series resistors 9RM and 99RM in front of the meter resistance RM]
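The underlying resistor values can be sketched as follows (standard relations for ideal meters; n is the range-extension factor and the function names are illustrative):

```python
def shunt_resistance(r_meter: float, n: float) -> float:
    """Shunt (parallel) resistor extending an ammeter's range by factor n:
    the shunt carries (n-1)/n of the total current."""
    return r_meter / (n - 1)

def series_resistance(r_meter: float, n: float) -> float:
    """Series resistor extending a voltmeter's range by factor n."""
    return (n - 1) * r_meter

print(series_resistance(1.0, 10))    # 9*RM, as in the circuit above
print(series_resistance(1.0, 100))   # 99*RM
```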
Replacing the permanent magnet of the moving-coil mechanism, which creates the magnetic field B,
by an electromagnet yields the electrodynamic instrument. It can measure power. If
the electromagnet is fed with voltage U, this creates a current and subsequently a magnetic
field proportional to U:
If the power is constant over time, energy is simply power times time:
Otherwise the power can be fed to an integration circuit (see Chapter 2.6) and the energy can be computed in an analog
manner. Alternatively it can be measured (counted) by a motor meter. A motor meter
basically is an induction measuring system (see Chapter 2.5) in which the electromagnets are
replaced with an electric motor whose torque is proportional to the power. The number of
revolutions of the disk is proportional to the energy.
Mean: x̄ = (1/T) ∫₀ᵀ x(t) dt        Peak: x̂ = max |x(t)|
Rectified: (1/T) ∫₀ᵀ |x(t)| dt      RMS: X = √( (1/T) ∫₀ᵀ x²(t) dt )
By far the most important periodic signal type is a sine or cosine signal. A sine oscillation
with amplitude A has the following characteristic values:
Mean: 0        Peak: A
Rectified: 2A/π ≈ 0.637·A        RMS: A/√2 ≈ 0.707·A
For a rectangular oscillation the peak, rectified, and RMS values are all identical to its
amplitude A (the mean of a symmetric rectangular oscillation is zero). The rectified value is the mean of the absolute value. The RMS value is a
measure for the signal power or energy.
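The sine values above can be checked numerically (a sketch; the amplitude A = 2 and the number of samples are arbitrary choices):

```python
import math

N = 100_000
A = 2.0
samples = [A * math.sin(2 * math.pi * k / N) for k in range(N)]

rectified = sum(abs(s) for s in samples) / N       # -> 2*A/pi
rms = math.sqrt(sum(s * s for s in samples) / N)   # -> A/sqrt(2)

print(rectified, 2 * A / math.pi)   # approximately equal
print(rms, A / math.sqrt(2))        # approximately equal
```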
But the entire apparent power cannot perform work. One part of it just oscillates around the
mean value 0. The really useful part of it is called active power (“Wirkleistung” in German).
This part can perform work and is calculated (with RMS values U, I and phase shift φ) by:
P = U·I·cos φ
The part that cannot perform any work is called reactive power (“Blindleistung” in German)
and is calculated by:
Q = U·I·sin φ
[Plots: voltage, current, and instantaneous power over time for different phase shifts]
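Numerically, the three power quantities follow the standard relations (a sketch; the RMS values and phase shift are illustrative):

```python
import math

U, I, phi = 230.0, 5.0, math.radians(30)

S = U * I                  # apparent power
P = U * I * math.cos(phi)  # active power ("Wirkleistung")
Q = U * I * math.sin(phi)  # reactive power ("Blindleistung")

print(P, Q)
print(math.isclose(S**2, P**2 + Q**2))  # True: the power triangle S^2 = P^2 + Q^2
```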
The displayed deflection is proportional to the product of voltage and current.
The second cosine term averages out to 0, because we can assume a high frequency of the AC
quantities (e.g. 50 Hz) compared to the bandwidth of the instrument (around 1 Hz). This
gives the mean value of the apparent power pS(t), which is identical to the mean of the
amplitude of the active power:
The reactive power can be measured by shifting the voltage by −90° before feeding it to the
instrument. The displayed value is then proportional to the reactive power:
Besides these possibilities there are some tricky measurement circuits for three-phase
systems that are beyond the scope of this chapter.
Differentiator
From the OpAmp circuit it is obvious that this is the exact
opposite of the integrator shown above.
If the resistance R2 is unknown, we can tune one resistor (in principle, any one or more than
one) until the diagonal voltage is zero: Ud = 0. The bridge is then balanced. The unknown
resistance can thus be calculated from:
Advantage: independent of the quality of the voltage source U0. Only a measurement of Ud around
zero is necessary.
Drawback: tedious tuning of the comparison resistance.
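As a sketch (the resistor labeling is an assumption: R1/R2 and R3/R4 are taken as the two voltage dividers of the bridge), the balance condition R1/R2 = R3/R4 yields the unknown resistance:

```python
def unknown_resistance(r1: float, r3: float, r4: float) -> float:
    """Unknown R2 from the balance condition R1/R2 = R3/R4 of a
    Wheatstone bridge (assumed labeling: R1-R2 and R3-R4 form the
    two voltage dividers; Ud = 0 when balanced)."""
    return r1 * r4 / r3

print(unknown_resistance(r1=100.0, r3=100.0, r4=220.0))  # 220.0
```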
Full Bridge
A further increase of sensitivity can be achieved by utilizing
2 positively (red, R + ΔR) and 2 negatively (green, R − ΔR)
changed resistances. This is, e.g., a common approach for
resistance strain gauges. Typically the strain gauges are attached
on opposite sides of a bar.
Ud = U0 · ΔR/R
Then the impedance of the oscillator is purely ohmic. In the ideal case of no energy loss
(R → 0 or, in the mechanical case, damping constant d → 0) the current would be
of infinite amplitude, oscillating at the resonance frequency of:
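For the electrical LC oscillator this is the standard expression (the mechanical analog replaces L and C by mass and compliance):

```latex
\omega_0 = \frac{1}{\sqrt{LC}}, \qquad f_0 = \frac{\omega_0}{2\pi}
```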
Sensor Systems
• Sensors integrated with intelligent components such as micro-controllers with software
(also called smart sensors).
• Combination of many identical or different sensors.
• Integration of sensors, actuators, and appropriate control equipment.
If the wire is pulled apart with a force F, this influences the relative resistance:
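This relation is commonly written with the gauge factor k (k ≈ 2 for metallic strain gauges) and the strain ε = Δl/l:

```latex
\frac{\Delta R}{R} = k \, \varepsilon = k \, \frac{\Delta l}{l}
```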
[Figure: bending bar with strain gauges — the upper strain gauges are stretched, the lower strain gauges are compressed]
This means that only for tiny displacements is the inductivity L roughly proportional to the
displacement Δs (with negative sign, i.e., Δs > 0 → ΔL < 0). To enlarge the roughly linear
range, the differential approach was developed. The idea is to introduce a second coil whose
inductivity operates in the other direction. The displacement drives the armature opposite to
the first coil, and a displacement Δs leads to a decrease in the first but an increase in the second
coil, or the other way round:
The differential principle together with the bridge circuit results in an exact proportionality
between displacement and diagonal voltage. This type of “physical linearization” is widely
applied in many circumstances (also with capacitors, etc.).
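A sketch of why the bridge output becomes exactly proportional (assuming each coil's inductivity varies as L ∝ 1/(s₀ ± Δs) with nominal air gap s₀):

```latex
L_1 = \frac{c}{s_0 + \Delta s}, \quad L_2 = \frac{c}{s_0 - \Delta s}
\quad\Rightarrow\quad
\frac{L_2 - L_1}{L_1 + L_2} = \frac{\Delta s}{s_0}
```

The nonlinear denominators cancel in the ratio, so the bridge's diagonal voltage is exactly proportional to Δs.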
Similar to the inductivity change, the capacitor can be built according to the differential
principle (differential capacitor). Again, together with a bridge circuit, a linear
characteristic can be created.
[Figure: plate capacitor with plate depth b, displacement Δs, and non-conducting support]
With an original plate area of A = b·s, this yields a change of that area of ΔA = b·Δs. Thus, the
capacitance changes linearly with the displacement of the plates against each other:
[Plots: signals over Δt [s] (0–30) and t [s] (0–100), with sampling time Δt = 10⁻² s]
spring
If ω0 is chosen to be big (via a stiff spring and a small mass), then the
third term dominates the left side of this equation, which yields
approximately:
Acceleration measurement:
c >> 1, m << 1, D << 1, ω0 >> 1
Velocity measurement:
c << 1, m << 1, D >> 1
Displacement measurement:
c << 1, m >> 1, D << 1, ω0 << 1
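The equation referred to above is the standard seismic-pickup relation (the notation is an assumption: relative displacement x_r of the seismic mass m, damping d, spring stiffness c, measured motion x):

```latex
m\,\ddot{x}_r + d\,\dot{x}_r + c\,x_r = -m\,\ddot{x}
\quad\Rightarrow\quad
x_r \approx -\frac{m}{c}\,\ddot{x} = -\frac{\ddot{x}}{\omega_0^2}
\qquad \text{for } \omega_0 = \sqrt{c/m} \gg \omega
```

With a stiff spring and small mass, the spring term c·x_r dominates, so the relative displacement becomes proportional to the acceleration.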
[Diagram: frequency response of the seismic pickup — acceleration measurement requires a high resonance frequency, displacement measurement a low one]
Source: http://www.telemetrie-world.de/fachartikel/7._Drehmomentmessung_mit_Telemetrie.pdf
One difficulty with measuring torques is the transmission
of the measurement signals from the rotating axle to the
surrounding fixed system. This can be solved via slip rings.
A more robust technique is via a transformer.
Modern systems are based on infrared or radio systems.
[Figure: thermocouple with materials A and B, Cu connection leads, and an evaluation unit measuring the thermovoltage for the temperature difference T − T0]
The coefficients α and β are material-dependent; R0 denotes the resistance at a reference
temperature T0 (also material-dependent). Because β is much smaller than α, the quadratic
term can be neglected, at least for small and moderate temperature changes.
Typically the reference temperature is chosen as T0 = 0 °C:
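For the common Pt100 platinum resistance thermometer (R0 = 100 Ω at T0 = 0 °C), the coefficients are standardized in IEC 60751; a sketch of the quadratic characteristic for temperatures above 0 °C:

```python
def pt100_resistance(temp_c: float) -> float:
    """Resistance of a Pt100 sensor in ohms for temperatures >= 0 degC
    (IEC 60751 coefficients, quadratic Callendar-Van Dusen form)."""
    R0 = 100.0          # resistance at T0 = 0 degC [ohm]
    alpha = 3.9083e-3   # [1/degC]
    beta = -5.775e-7    # [1/degC^2]
    return R0 * (1 + alpha * temp_c + beta * temp_c**2)

print(pt100_resistance(0.0))    # 100.0
print(pt100_resistance(100.0))  # ~138.5 ohm, the standard Pt100 value at 100 degC
```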
3.6 Temperature
E) Miscellaneous
Besides the discussed temperature measurement approaches, there exist
many alternatives that also work according to a contact principle. The
following things have to be considered:
• First, the sensors measure their own temperature.
• The instrumentation engineer has to ensure that the sensor adopts
the temperature of the medium which shall be measured.
• The sensors affect the medium which shall be measured. Thus, the sensors can introduce
or draw heat from the medium. This means, the measurement is interacting!
Alternatively, there exist sensors which work according to the radiation principle.
Especially for high temperatures this is a common approach. The sensors do not have any
contact to the measured medium. They evaluate its radiation, e.g.:
• Thermopile: Series connection of thermocouples that are sensitive to heat radiation.
• Pyroelectric temperature sensor (see picture): Based on the change of polarization of
certain dielectric materials whose charge density on their surface is measured.
• Radiation pyrometer: Based on the measurement of the radiation power density ∝ σT⁴ (Stefan–Boltzmann law).
If the density is known theoretically (commonly the case for incompressible fluids) or can be
measured, then it is possible to convert volume flow into mass flow and vice versa.
Mass flow as a quantity has the advantage that it is constant in closed systems, while volume
flow of compressible fluids depends on their density and thus also on pressure and
temperature. On the other hand, the measurement of volume flows is cheaper, simpler, and
more widely used.
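The conversion itself is just a multiplication by the density (a trivial sketch; units are noted in the comments and the numeric values are illustrative):

```python
def mass_flow(volume_flow_m3_s: float, density_kg_m3: float) -> float:
    """Mass flow [kg/s] from volume flow [m^3/s] and density [kg/m^3]."""
    return volume_flow_m3_s * density_kg_m3

print(mass_flow(0.002, 1000.0))  # ~2.0 kg/s for water at 2 liters per second
```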
The volume flow can be calculated from the square root of the differential pressure:
Depending on the kind of narrowing (orifice, nozzle, venturi), an additional pressure drop of
9%–60% has to be considered due to turbulence (energy loss). This is taken into
account with a proportionality factor k.
With a Pitot tube, made well known by Prandtl, such differential pressures can be measured.
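A sketch of the square-root relation (the proportionality factor k, the opening area A, and the density rho follow the text; the numeric values are illustrative assumptions):

```python
import math

def volume_flow(dp_pa: float, area_m2: float, density: float, k: float = 0.6) -> float:
    """Volume flow [m^3/s] through a narrowing from the differential
    pressure dp [Pa]: V = k * A * sqrt(2*dp/rho)."""
    return k * area_m2 * math.sqrt(2.0 * dp_pa / density)

# Quadrupling the pressure difference only doubles the flow:
q1 = volume_flow(1000.0, 1e-3, 1000.0)
q2 = volume_flow(4000.0, 1e-3, 1000.0)
print(q2 / q1)  # ≈ 2.0
```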
Modern method: the energy for turning the wheel is not taken from the fluid flow; rather, it
is supplied from outside. The pressure drop is feedback-controlled to zero.
Here v is the flow velocity inside the annular opening A − AK between the tube and the
floating body. According to the continuity balance, the flow is proportional to the square
of the height h (~ diameter):
[Figure: flow meter with floating body — measuring tube, guidance, primary coil, secondary coil, floating body]
In order to not only display the height but transmit the signal to the outside world, it is
reasonable to convert it into an electrical signal. An effective way to realize that is to use a
ferromagnetic floating body as the coupling between two coils, which works like a transformer.
Properties:
– No internal constructions necessary.
– Robust with respect to all fluid properties.
– Suitable for liquids and gases.
[Figure: Coriolis flow meter in straight configuration]
Properties:
– Especially well suited for low velocities.
– Sensitive with respect to dirt and burn-out.
– Because of aging, frequent calibrations are necessary.
[Figure: hot-wire meter with bridge circuit]