

CHAPTER 6

COUNTER PROPAGATION NEURAL NETWORK FOR IMAGE RESTORATION

6.1 INTRODUCTION

Neural networks have high fault tolerance and potential for adaptive
training. A Full Counter Propagation Neural Network (Full CPNN) is used
for the restoration of degraded images. The quality of the restored image
is almost the same as that of the original image. This chapter is organized as
follows. In section 6.2, the features and architecture of the CPN and the
Full CPNN are discussed. In section 6.3, the training phases of the
Full CPNN are described. Some experimental results which confirm the
performance of the CPNN for image restoration are presented in section 6.4.
Finally, section 6.5 concludes the chapter.

6.2 COUNTER PROPAGATION NETWORK

Counter Propagation Networks (CPN) are multilayer networks
built from a combination of input, competitive and output layers. Counter
propagation combines two well-known algorithms: the self-organizing
map of Kohonen and the Grossberg outstar (Liang et al 2002). The Counter
Propagation network can be applied to data compression, function
approximation or pattern association.

Counter propagation network training includes two stages:

1. Input vectors are clustered. Clusters are formed using the dot
product metric or the Euclidean norm.

2. Weights from the cluster units to the output units are adjusted
to produce the desired response.

The CPN is classified into two types: i) the full counter propagation network
and ii) the forward-only counter propagation network. The advantages of the CPN are that
it is simple and forms a good statistical model of its input vector environment.
The CPN trains rapidly and, if appropriately applied, it can save a large amount of
computing time. It is also useful for rapid prototyping of systems.

6.2.1 Full Counter Propagation Neural Network (Full CPNN)

The full CPNN possesses a generalization capability which allows it
to produce a correct output even when it is given an input vector that is
partially incomplete or partially incorrect (Freeman and Skapura 1999). The full
CPNN can represent a large number of vector pairs, x:y, by constructing a
look-up table.

Figure 6.1 shows the schematic block diagram of the restoration process
using the full CPNN. The architecture of the full CPNN for image restoration is
shown in Figure 6.2. The architecture of a counter propagation network
resembles a combination of the instar and outstar models. Basically, it has two input layers and
two output layers, with a hidden (cluster) layer common to the input and output
layers. The model which connects the input layers to the hidden layer is
called the instar model and the model which connects the hidden layer to the
output layers is called the outstar model. The weights are updated in both the
instar and outstar models. The instar model performs the first phase of training and the
outstar model performs the second phase of training. The network is fully
interconnected.

The major aim of the full CPNN is to provide an efficient means of
representing a large number of vector pairs, X:Y, by adaptively constructing a
look-up table. It produces an approximation of X:Y based on the input of an X vector
alone, a Y vector alone, or an X:Y pair, possibly with some
distorted or missing elements in either or both vectors.

During the first phase of training of the full counter propagation
network, the training pairs X:Y are used to form the clusters.

The full CPNN is used for bi-directional mapping. X is the original
image and Y is the degraded image; X* and Y* are the restored image and the
reconstructed degraded image, respectively.

[Block diagram: the original image and the degraded image are fed to the Full Counter Propagation Neural Network, which produces the restored image]

Figure 6.1 The schematic block diagram of the Full CPNN



[Architecture: the original image pixels x1 ... xn and the degraded image pixels y1 ... ym form the two input layers; weights W and U connect them to the cluster layer Z1 ... Znn, and weights V and T connect the cluster layer to the output layers producing the restored image x*1 ... x*n and the reconstructed degraded image y*1 ... y*m]

Figure 6.2 The architecture of the Full CPNN for image restoration

6.3 TRAINING PHASES OF FULL CPNN

The training of the full CPNN is carried out in two phases.

6.3.1 First Phase

This phase of training is called instar modelled training. The active
units are the units in the x input, z competitive (cluster) and y input layers.

In the CPNN, only the winning cluster unit is allowed to learn. This winning unit
uses the Kohonen learning rule for its weight updating, which is given by
$w_{iJ}(k+1) = [1-\alpha(k)]\, w_{iJ}(k) + \alpha(k)\, x_i$  (6.1)

$u_{jJ}(k+1) = [1-\alpha(k)]\, u_{jJ}(k) + \alpha(k)\, y_j$  (6.2)
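A minimal sketch of this winning-unit update in NumPy is given below; the array names (w and u for the instar weight matrices, alpha for α(k)) and the weight layout are illustrative assumptions, not taken from the source.

```python
import numpy as np

def instar_update(w, u, x, y, J, alpha):
    """Kohonen (instar) update of the winning cluster unit J, Eqs. (6.1)-(6.2).

    w : (n, nn) weights from the X input layer to the cluster layer
    u : (m, nn) weights from the Y input layer to the cluster layer
    """
    w[:, J] = (1.0 - alpha) * w[:, J] + alpha * x   # Eq. (6.1)
    u[:, J] = (1.0 - alpha) * u[:, J] + alpha * y   # Eq. (6.2)
    return w, u
```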

6.3.2 Second Phase

In this phase, only the winning unit J remains active in the cluster layer. The
weights from the winning cluster unit J to the output units are adjusted so that the
vector of activations in the y* output layer is an approximation of the input
vector y, and x* is an approximation of the input vector x. This phase is called
outstar modelled training. The weight updating is done by the Grossberg
learning rule, which is used only for outstar learning. In outstar learning, no
competition is assumed among the units and learning occurs for all units
in a particular layer. The weight updating rule is given as:

$v_{iJ}(k+1) = v_{iJ}(k) + \beta(k)\,[x_i - v_{iJ}(k)]$  (6.3)

$t_{jJ}(k+1) = t_{jJ}(k) + \beta(k)\,[y_j - t_{jJ}(k)]$  (6.4)
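A corresponding sketch of the outstar update for the winning unit J is shown below; as before, the array names (v, t, beta) and the weight layout are illustrative assumptions rather than the author's implementation.

```python
import numpy as np

def outstar_update(v, t, x, y, J, beta):
    """Grossberg (outstar) update of the weights leaving winning unit J, Eqs. (6.3)-(6.4).

    v : (nn, n) weights from the cluster layer to the X* output layer
    t : (nn, m) weights from the cluster layer to the Y* output layer
    """
    v[J, :] += beta * (x - v[J, :])   # Eq. (6.3)
    t[J, :] += beta * (y - t[J, :])   # Eq. (6.4)
    return v, t
```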

6.3.3 Training Algorithm

The training algorithm for the full CPNN is given below:

Step 1: Set the X input layer activations to vector X, which consists of the input
image pixels of size n. Set the Y input layer activations to vector Y, which consists of the
degraded image pixels of size m. Express the input X and output Y as vectors

X = {x1, x2, x3, ..., xn}

Y = {y1, y2, ..., ym}

Step 2: Initialize the weights U(n,nn), W(n,nn), V(n,nn) and T(n,nn).

Step 3: Find the net input of each cluster unit j:

$Z_j = \sum_{i=1}^{n} (x_i - w_{i,j})^2 + \sum_{k=1}^{m} (y_k - u_{k,j})^2$  (6.5)

Find min Zj and thus the winning neuron J, which receives the
minimum value.

Step 4: If the number of iterations is greater than the specified
number of iterations, then stop. Otherwise, update the weights of the winning neuron as
given below and then go to step 3.

$w_{iJ}(k+1) = [1-\alpha(k)]\, w_{iJ}(k) + \alpha(k)\, x_i$

$u_{jJ}(k+1) = [1-\alpha(k)]\, u_{jJ}(k) + \alpha(k)\, y_j$

$v_{iJ}(k+1) = v_{iJ}(k) + \beta(k)\,[x_i - v_{iJ}(k)]$

$t_{jJ}(k+1) = t_{jJ}(k) + \beta(k)\,[y_j - t_{jJ}(k)]$

where the Kohonen learning function is given by

$\alpha(k) = \alpha(0)\, \exp\!\left(-\frac{k}{k_0}\right)$  (6.6)
α(0) is the initial learning rate, k is the number of iterations and k0 is a
positive constant.

Similarly, the Grossberg learning function of the output stage is given by

$\beta(k) = \beta(0)\, \exp\!\left(-\frac{k}{k_0}\right)$  (6.7)

where β(0) is the initial learning rate and k0 is a positive constant.

Restoration Procedure

Step 1: Give any degraded image as the input, with

X = {0}

Y = {y1, y2, ..., ym}

Step 2: Find Zj

$Z_j = \sum_{i=1}^{n} (x_i - w_{ij})^2$  (6.8)

Find min Zj and the winning neuron J.

Step 3: Find X* and Y*, where X* is the restored image:

$x_i^* = v_{iJ}$  (6.9)

$y_j^* = t_{jJ}$  (6.10)

6.4 EXPERIMENTAL RESULTS

The CPNN approach to restore the image is implemented in VC++
and experiments are carried out to evaluate its performance. In the simulation
environment, where the original signal is available, the improvement in
performance is measured using the difference in signal-to-noise ratio between
the original image and the restored image. To evaluate the performance of the
proposed approach, different experiments are conducted using the Lena, mandrill,
flowers and boat images of size 256 × 256. The quality of the restored images has
been assessed by the Peak Signal-to-Noise Ratio (PSNR). The restored images
obtained with this approach are of good visual quality with higher PSNR. Different
experiments are conducted with varying values of α and β and the quality of the
restored image is noted. From the experimental results, it is found that the noisy
images are restored to good quality when the value of α is kept close to 0.7 and
β = 0.5. Also, as the value of α is decreased below 0.6 and the value of β is decreased
below 0.5, it is found that the restored image quality deteriorates considerably
and the mean squared error is as high as 3.43 × 10³. Figure 6.3 shows a degraded
Lena image and Figure 6.4 shows the restored Lena image after 25 iterations.

Figure 6.3 Degraded Lena image with impulse noise
Figure 6.4 Restored Lena image (PSNR = 49 dB)
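For reference, the PSNR and MSE figures quoted in this section can be computed for 8-bit images as in the short sketch below; this is a generic evaluation routine, not the author's VC++ code.

```python
import numpy as np

def mse(original, restored):
    """Mean squared error between two images of the same size."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(original, restored)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```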

A few experiments were also conducted by varying the number of
hidden layer neurons, and it was found that the quality of the restored image
is optimum when the number of neurons in the hidden layer is 64. It
is also found that as the number of iterations is increased, the restored image
quality improves considerably. The effect of the number of iterations
on the PSNR in dB is given in Figure 6.5. From the figure, it is evident that
the proposed approach performs significantly better as the number of
iterations increases.
[Plot: PSNR in dB versus number of iterations]

Figure 6.5 Number of iterations versus PSNR

A few experiments were also conducted using degraded images of
low information (Lena image), medium information (cameraman image) and
large information (crowd image). Figure 6.6 shows the large information image
degraded with 0.8 noise probability and the corresponding restored image is
shown in Figure 6.7.

Figure 6.6 Image degraded by impulse noise
Figure 6.7 Restored image (PSNR = 49.57 dB)

Figure 6.8 shows the medium information image degraded with 0.8
noise probability and the restored image is shown in Figure 6.9.

Figure 6.8 Image degraded by impulse noise
Figure 6.9 Restored image (PSNR = 49.34 dB)

A graph of the number of iterations versus the MSE for the low information,
medium information and large information images is shown in Figure 6.10.
[Plot: MSE (0-350) versus number of iterations (5-30) for the more information, medium information and less information images]

Figure 6.10 Number of iterations versus MSE for low information,
medium information and large information images

For all three types of images, the MSE decreases as the number
of iterations increases.

The same network is also used for the restoration of colour images.
A colour image is decomposed into its R, G and B components. Each component
can then be regarded as a grey image and is processed by the counter propagation
neural network used for grey images. Finally, the components are combined to obtain the
restored colour image. In the first experiment, the image is degraded using
impulse noise. Figure 6.11 shows the image degraded by impulse noise and
the corresponding restored image is shown in Figure 6.12.

Figure 6.11 Image degraded by impulse noise
Figure 6.12 Restored image (PSNR = 41.254 dB)
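The channel-wise processing of colour images described above can be sketched as follows: the image is split into its R, G and B planes, each plane is restored as a grey image, and the planes are recombined. The restore helper is the hypothetical retrieval function from the earlier sketch; names and shapes are assumptions.

```python
import numpy as np

def restore_colour(img_rgb, u, v, t):
    """Restore a colour image channel by channel with the Full CPNN (sketch)."""
    h, w_px = img_rgb.shape[:2]
    planes = []
    for c in range(3):                                  # R, G and B planes
        plane = img_rgb[:, :, c].astype(np.float64).ravel()
        x_star, _ = restore(plane, u, v, t)             # grey-level restoration per plane
        planes.append(x_star.reshape(h, w_px))
    return np.stack(planes, axis=2)                     # recombine into a colour image
```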

A blurred sail boat image is shown in Figure 6.13 and the restored image
obtained using the full CPNN is shown in Figure 6.14.

Figure 6.13 Blurred sail boat image
Figure 6.14 Restored image (PSNR = 41.59 dB)

In another experiment, the Lena image is degraded with Gaussian noise
of standard deviation σ = 0.5, as shown in Figure 6.15. The corresponding
restored image is shown in Figure 6.16.

Figure 6.15 Image degraded with Gaussian noise (σ = 0.5)
Figure 6.16 Restored image (PSNR = 41.89 dB)

In another experiment, the Lena image is degraded with mixed noise
(σ = 0.5, p = 0.5) and is restored by the full CPNN. Figure 6.17 shows the image
degraded by mixed noise and the corresponding restored image is shown in
Figure 6.18.

Figure 6.17 Image degraded with mixed noise
Figure 6.18 Restored image (PSNR = 41.79 dB)

Table 6.1 gives the processing time taken by the proposed CPNN and
the resultant PSNR for different colour images such as Lena, Mandrill and
parrot, each corrupted with impulse noise of p = 0.6.

Table 6.1 Processing time in seconds and PSNR of the restored images for
different colour images corrupted with impulse noise

Image      Processing time (seconds)   PSNR of restored image (dB)
Lena       28                          41.254
Mandrill   26                          41.48
Parrot     30                          41.72

6.5 CONCLUSION

In this chapter, a full counter propagation network is proposed for
restoring colour images. It was found that the quality of the restored image
improves as the number of iterations is increased. The restored
images obtained with this approach are of good visual quality with high
PSNR. The average time taken by this approach to restore the degraded
colour images was about 26 seconds, while the quality of the restored image is
about 3 dB higher than that of the proposed MLMNN.
