Image Coding-II
Outline
• Waveform Coding
  – Pixel Coding
    · PCM
  – Entropy Coding
    · Huffman
    · Group Coding
    · LZW Coding
  – Predictive Coding
    · Delta Modulation
    · Line-by-Line DPCM
    · 2-D DPCM
    · Lossless DPCM
>> cr = imratio('Tracy.tif','Tracy_LZW.tif')
cr =
1.6729
which is a larger CR than that of the Huffman encoding, because LZW
exploits correlations between neighboring pixels, which Huffman coding
does not.
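This effect can be illustrated outside MATLAB as well. The Python sketch below (an illustration, not the slides' `imratio`; the helper of the same name is redefined here) compresses a highly correlated "smooth" byte sequence and an uncorrelated one with zlib, an LZ77-based (dictionary) coder, and compares the resulting compression ratios:

```python
import random
import zlib

def imratio(raw: bytes, compressed: bytes) -> float:
    """Compression ratio: original size / compressed size."""
    return len(raw) / len(compressed)

random.seed(0)

# A "smooth" row: neighboring samples differ by at most 1,
# i.e. strong interpixel correlation.
smooth = bytearray([128])
for _ in range(4095):
    smooth.append(max(0, min(255, smooth[-1] + random.choice([-1, 0, 1]))))

# A white-noise row of the same length: no interpixel correlation.
noise = bytes(random.randrange(256) for _ in range(4096))

cr_smooth = imratio(bytes(smooth), zlib.compress(bytes(smooth), 9))
cr_noise = imratio(noise, zlib.compress(noise, 9))

print(f"CR on correlated data:   {cr_smooth:.2f}")
print(f"CR on uncorrelated data: {cr_noise:.2f}")
```

The dictionary coder gains on the correlated sequence but not on the noise, mirroring why LZW outperforms per-symbol Huffman coding on natural images.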
ECE G311 – Image Coding-II Slide 24 of 36
Predictive Coding
• This technique exploits interpixel redundancy to achieve a
  significant amount of data compression.
• Let
    x(n1,n2): zero-mean random field
  and
    x̂(n1,n2): causal optimal predictor
      = E[ x(n1,n2) | past signal values x(k1,k2), (k1,k2) < (n1,n2) ]
  Then
    e(n1,n2) = x(n1,n2) − x̂(n1,n2)
  is the prediction error (also known as the Innovations Process).
• It is a causal, causally invertible, and uncorrelated random field.
• Encoder Structure:
  [Figure: x(n1,n2) enters a subtractor whose other input is the predictor
  output x̂(n1,n2); the difference e(n1,n2) passes through the QUANTIZER and
  CODEWORD ASSIGNMENT blocks to produce the bit stream.]
  The transformation converts the highly correlated, large-variance signal
  x(n1,n2) into an uncorrelated, small-variance signal e(n1,n2).
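A minimal Python sketch of this transformation, assuming (for illustration) a first-order autoregressive row with correlation ρ = 0.95: the optimal one-step prediction error has a far smaller variance than the signal itself.

```python
import random

random.seed(1)
rho = 0.95  # assumed horizontal correlation coefficient

# Synthesize a zero-mean first-order autoregressive row:
# x[n] = rho*x[n-1] + w[n], w ~ N(0, 1)
x = [0.0]
for _ in range(20000):
    x.append(rho * x[-1] + random.gauss(0.0, 1.0))
x = x[1:]

# Prediction error of the optimal one-step predictor x_hat[n] = rho*x[n-1]
e = [x[n] - rho * x[n - 1] for n in range(1, len(x))]

def var(s):
    """Mean-squared value of a zero-mean sequence."""
    return sum(v * v for v in s) / len(s)

print(f"signal variance:           {var(x):.3f}")  # ~ 1/(1 - rho^2)
print(f"prediction-error variance: {var(e):.3f}")  # ~ 1 (innovations variance)
```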
  [Figure: reconstruction loop — ẽ(n1,n2) is added to the predictor output
  x̂(n1,n2) = ρh·x̃(n1,n2−1), formed by a 1-pixel delay, to give
  x̃(n1,n2) ≈ x(n1,n2).]
Predictive Coding (11)
1. Delta Modulation: (continued)
   In the receiver, the decoder has to generate predictor estimates based
   on x̃(n1,n2) or, equivalently, ẽ(n1,n2). Hence, it is necessary that the
   encoder also use ẽ in order to maintain synchronization between the
   transmitter and the receiver.
   Therefore, we have to use ẽ(n1,n2) in order to generate x̂(n1,n2), as
   shown below:
   [Figure: ẽ(n1,n2) drives a filter — x̃(n1,n2) = ρh·x̃(n1,n2−1) + ẽ(n1,n2),
   and the predictor output x̂(n1,n2) = ρh·x̃(n1,n2−1) is obtained with a
   1-pixel delay.]
   Encoder:
     e(n1,n2) = x(n1,n2) − x̂(n1,n2),  with x̂(n1,n2) = ρh·x̃(n1,n2−1)
     ẽ(n1,n2) = +Δ if e(n1,n2) ≥ 0, −Δ otherwise   (1-bit quantizer)
     x̃(n1,n2) = x̂(n1,n2) + ẽ(n1,n2)   (1-pixel-delay feedback loop)
   Decoder:
     x̃(n1,n2) = ρh·x̃(n1,n2−1) + ẽ(n1,n2)
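These encoder/decoder recursions can be sketched in Python. This is an illustration, not code from the slides; it assumes ρh = 1 and a step size Δ = 2, and tracks a ramp that flattens out (where granularity appears).

```python
def dm_encode(x, rho=1.0, delta=2.0):
    """Delta-modulation encoder: a 1-bit quantizer (+delta / -delta)
    inside the prediction loop, predictor x_hat[n] = rho * x_tilde[n-1].
    rho = 1 and delta = 2 are assumed illustrative values."""
    bits, x_tilde_prev = [], 0.0
    for sample in x:
        x_hat = rho * x_tilde_prev
        bit = 1 if sample - x_hat >= 0 else 0
        e_tilde = delta if bit else -delta
        x_tilde_prev = x_hat + e_tilde  # reconstruction tracked by both ends
        bits.append(bit)
    return bits

def dm_decode(bits, rho=1.0, delta=2.0):
    """Decoder mirrors the encoder loop: x_tilde[n] = rho*x_tilde[n-1] +/- delta."""
    out, x_tilde_prev = [], 0.0
    for bit in bits:
        x_tilde_prev = rho * x_tilde_prev + (delta if bit else -delta)
        out.append(x_tilde_prev)
    return out

# A slowly rising ramp that flattens at 50: DM tracks the slope, then
# oscillates around the flat level (granularity).
ramp = [min(n, 50) for n in range(100)]
recon = dm_decode(dm_encode(ramp))
```

Making the input slope steeper than Δ per sample would instead show slope overload, the other disadvantage listed above.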
Predictive Coding (13)
• Delta Modulation: (continued)
  Disadvantages:
  – Uses only the horizontal correlation, not higher-order or
    2-D correlations.
  – Slope overload.
  – Granularity.
  – Limited compression ratios (< 3).
  e(n1,n2) = x(n1,n2) − x̂(n1,n2);   ẽ(n1,n2) = Q[e(n1,n2)]
  x̃(n1,n2) = Σ_{p=1}^{P} a_p·x̃(n1,n2−p) + ẽ(n1,n2)
  R a = r  ⇒  a = R⁻¹ r
where
R is the autocorrelation matrix,
r is the autocorrelation vector, and
a is the vector of unknown coefficients.
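As a numerical sketch (assuming a separable autocorrelation model, Rx(k1,k2) = σx²·ρv^|k1|·ρh^|k2|, and a three-neighbor causal support), the normal equations R a = r can be solved directly:

```python
import numpy as np

# Assumed separable autocorrelation: Rx(k1,k2) = sigma2 * rho_v**|k1| * rho_h**|k2|
sigma2, rho_v, rho_h = 1.0, 0.95, 0.95
Rx = lambda k1, k2: sigma2 * rho_v ** abs(k1) * rho_h ** abs(k2)

# Predictor support: the three causal neighbors (1,0), (0,1), (1,1)
support = [(1, 0), (0, 1), (1, 1)]

# Build and solve the normal equations R a = r
R = np.array([[Rx(k1 - l1, k2 - l2) for (l1, l2) in support]
              for (k1, k2) in support])
r = np.array([Rx(k1, k2) for (k1, k2) in support])
a = np.linalg.solve(R, r)
print(a)  # ≈ [rho_v, rho_h, -rho_v*rho_h] for the separable model

# Prediction-error variance: sigma_e^2 = sigma_x^2 - a^T r
sigma_e2 = sigma2 - a @ r
print(sigma_e2)  # ≈ sigma2 * (1 - rho_v**2) * (1 - rho_h**2)
```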
  The prediction error is zero-mean, and its mean-squared value or variance
  is given by
    σe² = σx² − Σ_{(k1,k2)} a_{k1,k2}·Rx(−k1,−k2)
  Rx(1,1) = σx²·ρv·ρh = (σx²·ρv)(σx²·ρh)/σx² = Rx(1,0)·Rx(0,1)/Rx(0,0)
  Then Ra = r yields
    a1,0 = Rx(1,0)/Rx(0,0) = ρv;  a0,1 = Rx(0,1)/Rx(0,0) = ρh;
    a1,1 = −Rx(1,1)/Rx(0,0) = −ρv·ρh
  This gives a very simple prediction mask:
      −ρv·ρh   ρv
       ρh     (n1,n2)
Predictive Coding (21)
3. 2-D DPCM:
Example-10: (continued)
Note that
E [ e( n1 , n2 )] = 0
E [ e( n1 , n2 )e( k1 , k2 )] = 0,( k1 , k2 ) ≠ ( n1 , n2 ) ⇒ Uncorrelated
σ e2 = σ 2x (1 − ρv2 )(1 − ρh2 )
Thus this predictor requires only the horizontal and
vertical correlations which are easy to compute.
   If the image is highly correlated, then ρv ≈ ρh ≈ 1 (but < 1).
   Hence
     (1 − ρh²)(1 − ρv²) ≪ 1  ⇒  σe² ≪ σx²
Therefore, we are successful in converting a highly
correlated, high variance image into an uncorrelated, low
variance “difference” image.
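This claim can be checked numerically. The sketch below synthesizes a separable first-order Markov field (constructed so the model holds exactly, with assumed ρv = ρh = 0.95) and applies the three-tap prediction mask:

```python
import numpy as np

rho_v = rho_h = 0.95  # assumed vertical/horizontal correlation coefficients
N = 128

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, (N, N))  # unit-variance white driving noise

# Synthesize the separable first-order Markov field recursively:
# x[i,j] = rho_v*x[i-1,j] + rho_h*x[i,j-1] - rho_v*rho_h*x[i-1,j-1] + w[i,j]
x = np.zeros((N, N))
for i in range(1, N):
    for j in range(1, N):
        x[i, j] = (rho_v * x[i - 1, j] + rho_h * x[i, j - 1]
                   - rho_v * rho_h * x[i - 1, j - 1] + w[i, j])

# Apply the 2-D DPCM prediction mask to form the difference image
e = x[1:, 1:] - (rho_v * x[:-1, 1:] + rho_h * x[1:, :-1]
                 - rho_v * rho_h * x[:-1, :-1])

print(f"image variance:    {x.var():.1f}")   # large: ~ 1/((1-rho_v^2)(1-rho_h^2))
print(f"residual variance: {e.var():.2f}")   # small: ~ 1 (the driving noise)
```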
Predictive Coding (22)
3. 2-D DPCM:
Example-10: (continued)
   The rate reduction is given by
     RPCM − R2D-DPCM = ½·log2(σx²/σe²)
                     = ½·log2( 1 / ((1 − ρv²)(1 − ρh²)) )
   For example, if ρh = ρv = 0.97, then
     RPCM − R2D-DPCM ≈ 4 bpp  ⇒  CR = 8/(8 − 4) = 2
   If ρh = ρv = 0.997, then
     RPCM − R2D-DPCM ≈ 7.4 bpp  ⇒  CR = 8/(8 − 7.4) ≈ 13
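The arithmetic behind these figures can be sketched as:

```python
from math import log2

def rate_reduction(rho_v: float, rho_h: float) -> float:
    """Rate saving (bits/pixel) of 2-D DPCM over PCM for a separable
    Markov image: 0.5 * log2( 1 / ((1 - rho_v^2)(1 - rho_h^2)) )."""
    return 0.5 * log2(1.0 / ((1 - rho_v**2) * (1 - rho_h**2)))

for rho in (0.97, 0.997):
    dr = rate_reduction(rho, rho)
    cr = 8 / (8 - dr)  # compression ratio for an 8 bpp original
    print(f"rho = {rho}: saving = {dr:.1f} bpp, CR = {cr:.1f}")
```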
Predictive Coding (23)
3. 2-D DPCM: (continued)
   Encoder:
     e(n1,n2) = x(n1,n2) − x̂(n1,n2)  →  QUANTIZER  →  ẽ(n1,n2)
     where x̂(n1,n2) is produced by an NSHP (nonsymmetric half-plane)
     predictor driven by the reconstruction x̃(n1,n2) = x̂(n1,n2) + ẽ(n1,n2).
   Decoder:
     x̃(n1,n2) = ẽ(n1,n2) + x̂(n1,n2), using the same NSHP predictor.
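A minimal Python sketch of this loop. For simplicity it uses the three-tap causal mask derived earlier rather than a full NSHP support, with an assumed uniform quantizer of step size 0.5; both are illustrative choices, not the slides' design.

```python
import numpy as np

def dpcm2d(x, a_v, a_h, a_d, step):
    """2-D DPCM with a uniform quantizer inside the prediction loop.
    The predictor uses the three causal neighbors of the *reconstructed*
    image, so the decoder (running the same loop on the indices) stays
    synchronized. Returns quantizer indices and the reconstruction."""
    x_tilde = np.zeros_like(x, dtype=float)
    idx = np.zeros_like(x, dtype=int)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            up = x_tilde[i - 1, j] if i > 0 else 0.0
            left = x_tilde[i, j - 1] if j > 0 else 0.0
            diag = x_tilde[i - 1, j - 1] if i > 0 and j > 0 else 0.0
            x_hat = a_v * up + a_h * left + a_d * diag
            e = x[i, j] - x_hat
            idx[i, j] = round(e / step)              # quantize prediction error
            x_tilde[i, j] = x_hat + idx[i, j] * step  # closed-loop reconstruction
    return idx, x_tilde

rng = np.random.default_rng(1)
img = np.cumsum(rng.normal(0, 1, (32, 32)), axis=1)  # smooth-ish test rows
codes, recon = dpcm2d(img, 0.95, 0.95, -0.9025, step=0.5)
```

Because the quantizer sits inside the loop, the reconstruction error equals e − Q[e] and is bounded by step/2 at every pixel, with no error accumulation.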
4. Lossless DPCM:
   Encoder:
     The predictor output is rounded with NEAREST INTEGER, so
       x̂(n1,n2) = NINT[predictor output]
     and the prediction error e(n1,n2) = x(n1,n2) − x̂(n1,n2) is an integer.
     There is no quantizer; e(n1,n2) goes directly to CODEWORD ASSIGNMENT
     to form the bit stream.
   Decoder:
     The bit stream passes through the symbol decoder to recover e(n1,n2),
     and x(n1,n2) = e(n1,n2) + x̂(n1,n2) reconstructs the image exactly.
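A minimal sketch of the lossless round trip, assuming the simplest integer predictor (the previous pixel) in place of a general NINT predictor:

```python
def lossless_dpcm_encode(row):
    """Lossless DPCM on one image row. The predictor output is already an
    integer (here, the previous pixel — an assumed simple choice), so the
    residual is exact and the scheme is perfectly invertible."""
    prev = 0
    residuals = []
    for pixel in row:
        x_hat = prev            # integer predictor: previous reconstructed pixel
        residuals.append(pixel - x_hat)
        prev = pixel            # no quantizer, so reconstruction == original
    return residuals

def lossless_dpcm_decode(residuals):
    """Decoder mirrors the encoder: x = e + x_hat."""
    prev = 0
    out = []
    for e in residuals:
        prev = e + prev
        out.append(prev)
    return out

row = [100, 101, 103, 103, 102, 110, 111]
res = lossless_dpcm_encode(row)
assert lossless_dpcm_decode(res) == row  # exact: no quantizer in the loop
```

The small integer residuals are then entropy-coded (e.g. Huffman) for compression without any loss.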