
Pattern Recognition Letters 28 (2007) 759–770
www.elsevier.com/locate/patrec

A statistical framework based on a family of full range autoregressive models for edge extraction

K. Seetharaman a,*, R. Krishnamoorthi b

a Department of Computer Science and Engineering, Annamalai University, Annamalainagar 608 002, Tamilnadu, India
b Department of Information Technology, Bharathidasan Institute of Technology, Bharathidasan University, Trichy 620 024, Tamilnadu, India

Received 7 September 2005; received in revised form 10 August 2006
Available online 3 January 2007

Communicated by K. Tumer

* Corresponding author. Tel.: +91 4144 229225; fax: +91 4144 38080. E-mail address: kseethadde@yahoo.com (K. Seetharaman).
doi:10.1016/j.patrec.2006.11.003

Abstract

In this paper, a novel technique based on a family of Full Range Autoregressive (FRAR) models is proposed to extract edges in 2D monochrome images. The model parameters are estimated using a Bayesian approach and are used to smooth the input images. At each pixel location, a residual value is calculated as the difference between the original image and its smoothed version. Edge magnitudes and directions are measured from the residual. The edge magnitudes are squared to enhance the edges, whereas the other values are suppressed using a confidence limit based on global descriptive statistics. The threshold value is fixed automatically from the autocorrelation value calculated on the smoothed image. This extracts thick edges. To obtain thin and continuous edges, the nonmaxima suppression algorithm is applied with a confidence limit based on local descriptive statistics. The performance of the proposed technique is then compared with that of existing standard algorithms, including Canny's algorithm. Since Canny's algorithm oversmoothes across the edges, it detects spurious and weak edges; this problem is overcome in the proposed technique because it smoothes minimally across the edges. The extracted edge map is superimposed on its original image to verify that the proposed technique characterizes the edges correctly at the local level. The proposed technique is also tested on synthetic images, such as concentric circle and square images, to show that it detects edges in all directions as well as edge junctions.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Edge extraction; Full range autoregressive; Confidence limit; Nonmaxima suppression; Edge map; Residual

1. Introduction

Edge detection is a fundamental step in most applications of image analysis. It constitutes a crucial initial step before performing high-level tasks such as object recognition, object classification, pattern matching, image segmentation, image compression and boundary detection in computer vision applications. Edges are broadly classified as roof edges, step edges, ridge edges, etc. An edge is characterized by an abrupt change in the gray value of an image. Whether a pixel represents an edge is mostly decided only on the basis of local characteristics, viz. the neighbourhood pixel values.

In the literature, many reviews on edge detection are reported and a large number of algorithms have been proposed for edge extraction in gray level images. The edge detection methods are broadly classified into enhancement/threshold type, gradient based operators, edge-fitting edge detection, zero-crossings in second derivatives, optimality criteria, residual analysis based techniques, etc. The enhancement/threshold type is well known due to its simplicity and low computational complexity. This method is based on gradient values of the image under analysis. The amplitude response of the image is obtained by taking either the root mean square or the sum of the absolute gradient values of the image along the two spatial coordinates. An edge is assumed to be present if a single criterion, such as maximization of the amplitude response of the image, is satisfied. The gradient values of different orders are calculated by convolving the various gradient operators. The gradient based operators such as the Roberts (1965), Prewitt (1970) and Sobel (1970) operators are well known and are widely used to find step edges rather than roof and ridge edges. All these operators are first order derivatives. Hueckel (1971) proposed a method based on edge-fitting edge detection, which was later simplified by Rosenfield (1981); it is not widely used in computer vision applications when compared to the gradient based operators. The Laplacian, a zero-crossing operator, is a second order derivative and is used to establish the location of edges present in the image. This operator is not used in its original form for edge detection. Instead, it is used with the Gaussian function and is called the Laplacian of Gaussian (LoG) function (Marr and Hildreth, 1980). Krishnamoorthi and Bhattacharya (1998) used a curve fitting model based on zero-crossings in the second derivatives and claimed that their technique is superior to the vector order statistics and entropy schemes. Edge detectors based on optimality criteria are also reported by many authors. For example, Canny's operator (Canny, 1986) is very popular among all the edge detectors; it uses a Gaussian filter to smooth the images before extracting the edges. The noise can be reduced by smoothing the images, and then the actual edges are identified. Though the noise in the images is reduced by smoothing, Canny's algorithm captures spurious edges (Rakesh et al., 2004). Statistical approaches are also adopted to detect edges (Qie and Bhandarkar, 1996; Stern and Kurz, 1988; Li, 2000; Rakesh et al., 2004), as they give better results for images both with and without noise (Rakesh et al., 2004). The residual analysis based approach has been studied by many authors (Chen et al., 1991; Lee et al., 1988; Pavlidis and Lee, 1988). For instance, Chen et al. (1991) considered the distribution of the residuals, the difference between the original image and its smoothed version. They then compute the autocorrelation of the residual, from which they extract the features. Zheng et al. (2004) proposed a hybrid edge detector combining gradient and zero-crossing information based on a Least Squares Support Vector Machine (LS-SVM) with a Gaussian filter. It is reported to take less time than the Canny detector with similar edge extraction performance. In earlier works, the threshold is chosen on a heuristic basis. Even in the Canny edge detector, the default value of the upper limit is suggested to be the 75th percentile of the gradient strength, and it also requires at least two threshold values, namely the lowest and highest thresholds. Though Rakesh et al. (2004) reported that this problem was overcome in their work, their algorithm needs an initial threshold value and parameters as input.

Kim et al. (2004) proposed a procedure to determine the edge magnitude and direction, and found the 3 × 3 ideal binary pattern for a pixel as follows. The average value is calculated for the pixels in a 3 × 3 block and compared to each pixel in the block. If the pixel is greater than the average value then it is marked as 1; otherwise it is marked as 0. They then proposed fixed weights of 1, 2, 4, 8, 16, 32, 64 and 128 for the eight neighbouring pixels depending on their position relative to the centre pixel, and determined an 8-bit code by summing the weights of the pixels marked as 1. This is not justifiable because the pixel values in a 3 × 3 block generally influence the centre pixel more or less equally. In their approach, they assign weight 1 to the pixel at location (1, 1) and 128 to the pixel at location (3, 3). This implies a large difference between the influences of the two pixels on the centre pixel. Generally, this is not true, since both pixels lie in opposite directions and are the closest neighbours of the centre pixel, so their influences on the centre are more or less equal. From this point of view, giving fixed weights may lead to a wrong decision on the directionality of an edge. This problem is overcome in our proposed scheme, and the output of our proposed scheme is compared with that of other existing techniques, such as that of Zheng et al. and the traditional Prewitt, Sobel and Canny detectors.

Stochastic model based methods, such as Hidden Markov Random field (Chen and Kundu, 1995; Romberg et al., 2001), Markov Random field (Sarkar et al., 2000; Elia et al., 2003), Autoregressive (Kadaba et al., 1998; Krishnamoorthi and Seetharaman, 2007), Multiresolution Gaussian Autoregressive (Comer and Delp, 1999) and Gibbs field (Chalmond, 1989; Geman and Geman, 1984) models have attracted many researchers in image processing and computer vision applications such as pattern recognition, object recognition, feature analysis, edge detection, image classification, segmentation, etc. Markov Random field models have been used by many researchers (Bouman and Sauer, 1993; Li, 2000) to capture edges. These models are adopted to impose smoothness on the image surface function.

The autoregressive model utilizes the linear dependency of the pixels in an image to estimate its surface. This model also takes advantage of the spatial interaction of pixels in a local neighbourhood. To estimate the gray tone of a pixel in an image region, it needs the conditional probability density function of that pixel given the gray tones of the neighbourhood pixels in that region, that is,

P{(gray tone of the pixel to be estimated)/(gray tone of the neighbourhood pixels in the image region)}

Generally, a model with statistical properties that describes the probability structure of a time series, and in general of any sequence of observations, is called a stochastic process. The image to be analysed can be thought of as one particular realization produced by the underlying probability mechanism of the image under study. That is, in analysing an image we regard it as a realization of a stochastic process.
Definition. If a stochastic process, that is, a family of time dependent random variables {X(t)}, satisfies

E(X(t)/X(t-j), j = 1, 2, \ldots) = E(X(t)/X(t-1), \ldots, X(t-p))

then {X(t)} is said to satisfy the Markov property.

In the above equation, on the left hand side (LHS) the expectation is conditional on the infinite history of X(t). On the right hand side (RHS) it is conditional only on part of the history. From the definition, an AR(p) model is seen to satisfy the Markov property. In that sense, time series models, Markov random models and stochastic processes are all interrelated and associated with each other in the context of image processing. In the next section, the proposed family of Full Range Autoregressive (FRAR) models is introduced.

Most of the aforesaid techniques capture spurious and minor edges and fail to extract the edge junction, that is, the meeting point of the edges. It is observed from the literature that the Gaussian derivative is used in most existing techniques to estimate the smoothed surface of the input images. Generally, the Gaussian derivative oversmoothes across the edges in the images. This is the main reason such techniques capture spurious and minor edges and miss the edge junction. Another important point is that several of the existing techniques require initial input parameters or threshold values. Mainly these two observations motivated us to carry out this study. In the proposed technique, input parameters or threshold values are not required because the thresholds and parameter values are computed automatically according to the nature of the image data, which is discussed in detail in Sections 5 and 8. The main advantage of the proposed technique is that it extracts fine and correct edges, and it captures the edge junction.

The purpose of this paper is to provide an automatic threshold and to maintain high performance for edge extraction. Initially, a technique based on the FRAR model is proposed to smooth the original input image. Here, smoothing means a slow change of the gray level where there is an abrupt change in the original image. The input image is modelled as a Gaussian Markov Random Field (GMRF), viz., assuming that each given pixel X(s) depends statistically on the rest of the image only through a selected group of neighbourhood pixels Xn(s), i.e. P(X(s)/Xn(s)). In fact, this assumption reveals that the group of pixels satisfies the linear dependency criterion. Generally, with this assumption, the MRF model captures features like edges and boundaries of complicated images to some extent. In our method, when attempting to predict a pixel that represents a feature from its neighbourhood, the model predicts the actual pixel value only to some extent. This indicates that the features are filtered (smoothed) according to the local characteristics. The filtered features are captured by differencing the original and smoothed images; the differences are stored in another image array, which can be called the residual image. The 'residual image' is segregated into various non-overlapping blocks of equal size 3 × 3. The confidence limit is measured based on the autocorrelation values of each block, and the pixel values that fall outside the upper limit are identified. The identified pixel values are squared to enhance the edges and replaced at the corresponding locations, while the other values (within the limits) are replaced with 0. Values other than 0 represent edges, and 0 represents the non-edge part of the image.

Fig. 1. Flow chart giving an overview of the work (input image → estimate the smoothing parameters K, α, θ, φ and the coefficients a_r → compute the smoothed surface f_s → compute the residual image f_d and the magnitude M(x, y) → compute the global confidence limit (CL) on f_d → set f_te = f_d² where f_d ≥ CL, else 0 → apply nonmaxima suppression at each pixel of f_te with a local CL, setting f_em = 255 for retained pixels and 0 otherwise → track the edges).
In the proposed method, the centre pixel value in the small image region (3 × 3) is estimated from the local neighbourhood values using a conditional probability mechanism. The centre pixel value is estimated at each pixel location by considering the image region in raster-scan fashion. The estimated value is very close to the actual value in homogeneous regions, whereas there is a small variation between the actual and estimated pixel values in inhomogeneous regions, that is, regions that contain features. So the proposed method smoothes the image surface minimally across the edges.

Overview of the proposed work: The overall concepts used in the proposed technique are presented in the flow chart given in Fig. 1. The algorithm discussed in Section 5 gives more details of the flow chart.

The rest of the paper is organised as follows. In Section 2, the proposed model to smooth the image is introduced, and the smoothing parameters of the model are estimated in Section 3. Section 4 deals with image smoothing and Section 5 focuses on edge magnitude and edge direction. In Section 6, the performance of the proposed technique is compared with existing standard methods. Section 7 deals with the comparison of edge maps extracted by the proposed technique for different types of synthetic images. The results and conclusions are drawn in Sections 8 and 9, respectively.

2. Proposed smoothing model

Let X be a random variable that represents the intensity value of a pixel at location (k, l) in an image. We assume that X may contain noise, which is considered an independently and identically distributed Gaussian random variable with discrete time space and continuous state space, with mean zero and variance σ², denoted e(k, l), i.e. e(k, l) ~ N(0, σ²).

Since {X(s); s ∈ S} is a stochastic process, where S = {s: (k, l); 1 ≤ k, l ≤ M}, {X(s)} can be considered a Markov process because we have the conditional probability

P\{X(s_n) = i_n \mid X(s_k) = i_k,\ k = 0, 1, 2, \ldots, n-1\} = P\{X(s_n) = i_n \mid X(s_{n-1}) = i_{n-1}\}

for all i_k, k = 0, 1, 2, ..., n − 1 and s_k belonging to the state space S, with s_0 < s_1 < ... < s_n.

Thus, we propose the model in Eq. (1), a family of Full Range Autoregressive (FRAR) models:

X(k, l) = \sum_{p=-M}^{M} \sum_{\substack{q=-M\\ (p,q)\neq(0,0)}}^{M} a_r\, X(k+p,\, l+q) + e(k, l) \qquad (1)

where

a_r = \frac{K \sin(r\theta)\cos(r\varphi)}{\alpha^{r}}, \qquad r = |p| + |q| + M(M-1)/2 \qquad (1a)

and K, α, θ and φ are real parameters. The a_r are the model coefficients, computed by substituting the model parameters K, α, θ and φ in Eq. (1a). The model parameters are interrelated.

The proposed model is employed to analyse a two-dimensional discrete gray-scale image. The image is partitioned into various subimages of size M × M to locally characterize the nature of the image. With the Markovian assumption, the conditional probability of X(s) given all other values depends only upon the nearest neighbourhood values.

The initial assumptions about the parameters are K ∈ R, α > 1, and θ, φ ∈ [0, 2π]. Examining the identifiability of the model places further restrictions on the range of the parameters. It is interesting to note that some of the models used in previous works, that is, the white noise, finite order autoregressive and infinite order autoregressive models, can be regarded as special cases of the proposed model. Thus

(i) If we set θ = 0, the FRAR model reduces to the white noise process.
(ii) When α is large, the coefficients a_r become negligible as r increases, so the FRAR model reduces approximately to an AR(r) model for a suitable value of r, where r is the order of the model.
(iii) When α is chosen to be less than one, the FRAR model becomes an explosive infinite order AR model.

The fact that X(s) has a regression on its neighbourhood pixels gives rise to the terminology autoregressive process. However, in this case the dependence of X(s) on the neighbourhood values may hold only to some extent. In fact, the process is Gaussian under the assumption that the e(k, l) are Gaussian, and in this case its probabilistic structure is completely determined by its second order properties. With respect to its second order properties, the proposed FRAR model is asymptotically stationary up to order two, provided 1 − α < K < α − 1. Finally, the range of the parameters of the model is set with the constraints K ∈ R, α > 1, 0 < θ < π, 0 < φ < π/2.

The noncausal model given in Eq. (1) represents the pixel X(s) as a linear combination of the nearest neighbourhood values on each side, as shown in Fig. 2(a); the influence of the horizontal and vertical pixels on the centre is higher than that of the diagonal pixels. The order of influence of the neighbourhood pixels on the centre is shown in Fig. 2(b). The pixel X(s) is predicted using the closest neighbourhood pixels on each side.

Fig. 2. (a) Relationship among neighbourhood pixels; (b) order of the influence of the neighbourhood pixels on the centre.
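As a concrete illustration of Eqs. (1) and (1a), the following Python sketch computes the coefficient a_r for a neighbourhood offset (p, q) and predicts the centre pixel of a 3 × 3 window as the a_r-weighted sum of its eight neighbours. This is only a minimal sketch under stated assumptions: the function names are ours, the parameters (K, α, θ, φ) are assumed to have been estimated already (Section 3), and the neighbourhood radius is taken as M = 1 so that the offset term M(M − 1)/2 vanishes, matching the 3 × 3 window used later in Section 4.

```python
import numpy as np

def frar_coefficient(K, alpha, theta, phi, p, q, M=1):
    """Coefficient a_r of the FRAR model, Eq. (1a).
    r = |p| + |q| + M(M-1)/2; for the 3x3 neighbourhood (radius M = 1)
    the offset term vanishes, so r = |p| + |q|."""
    r = abs(p) + abs(q) + M * (M - 1) // 2
    return K * np.sin(r * theta) * np.cos(r * phi) / alpha ** r

def frar_predict(window, K, alpha, theta, phi):
    """Predict the centre pixel of a 3x3 window as the a_r-weighted
    sum of its eight neighbours (noncausal model of Eq. (1))."""
    est = 0.0
    for p in (-1, 0, 1):
        for q in (-1, 0, 1):
            if (p, q) == (0, 0):
                continue
            est += frar_coefficient(K, alpha, theta, phi, p, q) * window[1 + p, 1 + q]
    return est
```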
3. Estimation of smoothing parameters

In order to implement the proposed FRAR model, we must estimate the parameters. The parameters K, α, θ and φ are estimated by taking suitable prior information for the hyper parameters β, γ and δ, based on a numerical integration technique and Bayesian methodology. The hyper parameters are the parameters of the prior distributions of the actual model parameters K, α, θ and φ. The hyper parameters are approximately estimated using the mean and standard deviation of the pixel values of the subimage to be estimated. For computational purposes only, the pixel values of each subimage are arranged as a one-dimensional vector X(t), t = 1, 2, 3, ..., N (M × M = M² = N). Since the error terms e(k, l) in Eq. (1) are independent and identically distributed Gaussian random variables, the joint probability density function of the stochastic process {X(t)} is given by

P(X/\Theta) \propto (\sigma^2)^{-N/2} \exp\Big[-\frac{1}{2\sigma^2}\sum_{t=1}^{N}\Big\{X_t - K\sum_{r=1}^{\infty} S_r X_{t-r}\Big\}^2\Big] \qquad (2)

where X = (X_1, X_2, ..., X_N), Θ = (K, α, θ, φ, σ²) and S_r = \sin(r\theta)\cos(r\varphi)/\alpha^{r}.

When analysing real data with a finite number N of observations, the range of the index r, viz. 1 to ∞, reduces to 1 to N, so in the joint probability density function of the observations given by (2) the summation \sum_{r=1}^{\infty} can be replaced by \sum_{r=1}^{N}, which gives

P(X/\Theta) \propto (\sigma^2)^{-N/2} \exp\Big[-\frac{1}{2\sigma^2}\sum_{t=1}^{N}\Big\{X_t - K\sum_{r=1}^{N} S_r X_{t-r}\Big\}^2\Big] \qquad (3)

By expanding the square in the exponent, we get

P(X/\Theta) \propto (\sigma^2)^{-N/2} \exp\Big[-\frac{1}{2\sigma^2}\Big\{T_{00} + K^2\sum_{r=1}^{N} S_r^2 T_{rr} + 2K^2\sum_{\substack{r,s=1\\ r<s}}^{N} S_r S_s T_{rs} - 2K\sum_{r=1}^{N} S_r T_{0r}\Big\}\Big] \qquad (4)

where T_{rs} = \sum_{t=1}^{N} X_{t-r} X_{t-s}, r, s = 0, 1, 2, ..., N.

The above joint probability density function can be written as

P(X/\Theta) \propto (\sigma^2)^{-N/2} \exp\Big(-\frac{Q}{2\sigma^2}\Big) \qquad (5)

where

Q = T_{00} + K^2\sum_{r=1}^{N} S_r^2 T_{rr} + 2K^2\sum_{\substack{r,s=1\\ r<s}}^{N} S_r S_s T_{rs} - 2K\sum_{r=1}^{N} S_r T_{0r};

K ∈ R, α > 1, 0 < θ < π, 0 < φ < π/2 and σ² > 0.

The prior distributions of the parameters are assigned as follows:

1. α is distributed as the displaced exponential distribution with parameter β, i.e.

P(\alpha) = \beta \exp(-\beta(\alpha - 1)); \quad \alpha > 1,\ \beta > 0 \qquad (6)

2. σ² has the inverted gamma distribution with parameters m and δ, i.e.

P(\sigma^2) \propto \exp(-m/\sigma^2)(\sigma^2)^{-(\delta+1)}; \quad \sigma^2 > 0;\ m, \delta > 0 \qquad (7)

3. K, θ and φ are uniformly distributed over their domain, i.e.

P(K, \theta, \varphi) = C, \text{ a constant}; \quad K \in \mathbb{R},\ 0 < \theta < \pi,\ 0 < \varphi < \pi/2

So, the joint prior density function of Θ is given by

P(\Theta) \propto \beta \exp(-\beta(\alpha - 1) - m/\sigma^2)(\sigma^2)^{-(\delta+1)}; \quad \sigma^2 > 0,\ \alpha > 1,\ 0 < \theta < \pi,\ 0 < \varphi < \pi/2 \qquad (8)

where P is used as a general notation for the probability density function of the random variables given within the parentheses following P.

Using (5), (8) and Bayes theorem, the joint posterior density of K, α, θ, φ and σ² is obtained as

P(\Theta/X) \propto \exp(-\beta(\alpha - 1))\exp\Big(-\frac{Q + 2m}{2\sigma^2}\Big)(\sigma^2)^{-(N/2 + \delta + 1)}; \quad K \in \mathbb{R},\ \alpha > 1,\ 0 < \theta < \pi,\ 0 < \varphi < \pi/2 \text{ and } \sigma^2 > 0 \qquad (9)

Integrating (9) with respect to σ², the posterior density of K, α, θ and φ is obtained as

P(K, \alpha, \theta, \varphi/X) \propto \exp(-\beta(\alpha - 1))(Q + 2m)^{-(N/2 + \delta)}; \quad K \in \mathbb{R},\ \alpha > 1,\ 0 < \theta < \pi,\ 0 < \varphi < \pi/2 \qquad (10)

where

[Q + 2m] = \Big[\Big(K^2\sum_{r=1}^{N} S_r^2 T_{rr} + 2K^2\sum_{\substack{r,s=1\\ r<s}}^{N} S_r S_s T_{rs} - 2K\sum_{r=1}^{N} S_r T_{0r}\Big) + T_{00} + 2m\Big] \qquad (11)

That is,

(Q + 2m) = aK^2 - 2Kb + T_{00} + 2m = C[1 + a_1(K - b_1)^2] \qquad (12)
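The posterior density in (10)–(12) depends on the data only through the statistics T_rs and the quadratic form Q. The sketch below evaluates them for a vectorised subimage; it is a hedged illustration rather than the authors' code — in particular, lagged values X_{t−r} with t − r < 1 are treated as zero, a boundary convention the paper does not state.

```python
import numpy as np

def S_coeffs(K, alpha, theta, phi, N):
    """S_r = sin(r*theta)*cos(r*phi)/alpha**r for r = 1..N (Eq. (2))."""
    r = np.arange(1, N + 1)
    return np.sin(r * theta) * np.cos(r * phi) / alpha ** r

def Q_statistic(x, K, alpha, theta, phi):
    """Q of Eq. (5) for a vectorised subimage x = (X_1, ..., X_N).
    T_rs = sum_t X_{t-r} X_{t-s}; terms with t - r < 1 are taken as zero
    (an assumed boundary convention)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    S = S_coeffs(K, alpha, theta, phi, N)
    # lagged copies: row r holds X_{t-r}, with zeros where undefined
    lag = np.zeros((N + 1, N))
    lag[0] = x
    for r in range(1, N + 1):
        lag[r, r:] = x[:N - r]
    T = lag @ lag.T                 # T[r, s] = sum_t X_{t-r} X_{t-s}
    quad = S @ T[1:, 1:] @ S        # = sum_r S_r^2 T_rr + 2 sum_{r<s} S_r S_s T_rs
    lin = S @ T[1:, 0]              # = sum_r S_r T_0r
    return T[0, 0] + K ** 2 * quad - 2.0 * K * lin
```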
where

C = T_{00} - \frac{b^2}{a} + 2m, \qquad a = \sum_{r=1}^{N} S_r^2 T_{rr} + 2\sum_{\substack{r,s=1\\ r<s}}^{N} S_r S_s T_{rs}, \qquad b = \sum_{r=1}^{N} S_r T_{0r}, \qquad a_1 = \frac{a}{C}, \quad b_1 = \frac{b}{C}

Thus, the above joint posterior density of K, α, θ and φ can be rewritten as

P(K, \alpha, \theta, \varphi/X) \propto \exp(-\beta(\alpha - 1))\,[C\{1 + a_1(K - b_1)^2\}]^{-d}; \quad K \in \mathbb{R},\ \alpha > 1,\ 0 < \theta < \pi,\ 0 < \varphi < \pi/2 \qquad (13)

where d = N/2 + δ.

This shows that, given α, θ and φ, the conditional distribution of K is a t distribution located at b₁ with (2d − 1) degrees of freedom.

Proper Bayesian inference on K, α, θ and φ can be obtained from their respective posterior densities. The joint posterior density of α, θ and φ, namely P(α, θ, φ/X), can be obtained by integrating Eq. (13) with respect to K. Thus, the joint posterior density of α, θ and φ is obtained as

P(\alpha, \theta, \varphi/X) \propto \exp(-\beta(\alpha - 1))\, C^{-d}\, a_1^{-1/2}; \quad \alpha > 1,\ 0 < \theta < \pi,\ 0 < \varphi < \pi/2 \qquad (14)

The marginal posterior densities of α, θ and φ in Eq. (14) are complicated functions and are not analytically solvable. Therefore, we find the marginal posterior densities of α, θ and φ numerically from the joint density in Eq. (14). That is,

P(\alpha) \propto \iint P(\alpha, \theta, \varphi/X)\, d\theta\, d\varphi

Similarly,

P(\theta) \propto \iint P(\alpha, \theta, \varphi/X)\, d\alpha\, d\varphi \quad \text{and} \quad P(\varphi) \propto \iint P(\alpha, \theta, \varphi/X)\, d\alpha\, d\theta \qquad (15)

The point estimates of the parameters α, θ and φ may be taken as the means of their respective marginal posterior distributions, i.e. the posterior means. With a view to minimizing the computations, we first obtain the posterior mean of α numerically. We then fix α at its posterior mean and evaluate the conditional means of θ and φ with α held at its mean. Finally, we fix α, θ and φ at their posterior means and evaluate the conditional mean of K. Thus, the estimates are

\hat{\alpha} = E(\alpha), \qquad (\hat{\theta}, \hat{\varphi}) = E(\theta, \varphi/\alpha = \hat{\alpha}) \quad \text{and} \quad \hat{K} = E(K/\alpha = \hat{\alpha}, \theta = \hat{\theta}, \varphi = \hat{\varphi}) \qquad (16)

The estimated parameters K, α, θ and φ are used to compute the coefficients a_r of the model in Eq. (1), and then the model is used to smooth the images.
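A minimal sketch of the numerical estimation described above: the unnormalised joint posterior of Eq. (14) is evaluated on a grid, and α is estimated by its posterior mean after marginalising θ and φ by summation. The grid quadrature, the function names, the hyper parameter values and the reuse of the T matrix (with the zero-padding convention of the previous sketch) are our assumptions; the conditional means of θ, φ and then K in Eq. (16) would be obtained in the same way with α held fixed.

```python
import numpy as np

def posterior_kernel(alpha, theta, phi, T, beta, m, delta):
    """Unnormalised joint posterior P(alpha, theta, phi / X) of Eq. (14).
    T is the (N+1)x(N+1) matrix of statistics T_rs built from the subimage."""
    N = T.shape[0] - 1
    r = np.arange(1, N + 1)
    S = np.sin(r * theta) * np.cos(r * phi) / alpha ** r
    a = S @ T[1:, 1:] @ S           # quadratic-form statistic 'a' of Eq. (12)
    b = S @ T[1:, 0]                # linear statistic 'b' of Eq. (12)
    C = T[0, 0] - b ** 2 / a + 2.0 * m
    a1 = a / C
    d = N / 2.0 + delta
    return np.exp(-beta * (alpha - 1.0)) * C ** (-d) * a1 ** (-0.5)

def posterior_mean_alpha(T, beta, m, delta, alphas, thetas, phis):
    """Posterior mean of alpha on a grid, marginalising theta and phi
    by summation (a crude quadrature; the grid choices are ours)."""
    joint = np.array([[[posterior_kernel(al, th, ph, T, beta, m, delta)
                        for ph in phis] for th in thetas] for al in alphas])
    p_alpha = joint.sum(axis=(1, 2))
    p_alpha /= p_alpha.sum()
    return float(np.dot(alphas, p_alpha))
```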
4. Image smoothing

In this section, we discuss the estimation of the input image surface. We consider a gray level input image f_i(x, y) of size L × L (L = 256) with pixel values in the range 0 to 255. The image is divided into sliding windows of equal size M × M (M < L; M = 3), with the pixel of interest at the centre. The parameters K, α, θ and φ of the model are estimated as discussed in the previous section. The pixels in the horizontal and vertical directions with respect to the centre pixel of a window (3 × 3) are treated as one set, and the diagonal pixels are treated as another set. The former have spatially equal impact on the centre pixel, while the latter also have equal impact, but less than the former. By substituting the computed parameters in Eq. (1a), the coefficients a_r in Eq. (1) are calculated as follows:

r = 1, when p = 0; q = 1, −1 and p = 1, −1; q = 0 (pixels in the horizontal and vertical directions):

a_1 = \frac{K \sin(\theta)\cos(\varphi)}{\alpha} \qquad (17)

and r = 2, when p = 1, −1; q = 1, −1 (pixels in the diagonal directions):

a_2 = \frac{K \sin(2\theta)\cos(2\varphi)}{\alpha^{2}} \qquad (18)

The coefficients a₁ and a₂ are used to estimate the centre pixel value in the sliding window. As already mentioned in Section 1, the model filters the features of the pixel of interest at the centre. The estimated value is stored in another image array. By continuing this process for the entire image, the estimated image surface of the input image is obtained. The estimated surface can now be called the smoothed image f_s(x, y), which is shown in Fig. 3. Here, smoothing means that the proposed model filters the features; that is, it smoothes minimally across the edges. This is the main advantage of the proposed model when compared with the existing methods, namely the Canny, Sobel and Prewitt edge detectors. In the Canny edge detector the Gaussian filter is used, which oversmoothes the edges, so it detects spurious edges and also fails to capture the edge junctions (Ding and Goshtasby, 2001). The proposed method detects only the fine and correct edges. This is discussed in detail in Section 6. By taking the difference between the original and the smoothed versions of the images, the features, that is, the thick edges, are extracted. The minimal smoothing across the edges is what causes thick edges to be extracted.

Fig. 3. (a) and (c) are original images; (b) and (d) are smoothed images.
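The smoothing step can be sketched as follows, assuming for brevity that a single set of estimated parameters (K, α, θ, φ) is applied to the whole image; the paper actually re-estimates the parameters for each sliding window. The coefficients a₁ and a₂ follow Eqs. (17) and (18).

```python
import numpy as np

def smooth_image(f_i, K, alpha, theta, phi):
    """Estimate the smoothed surface f_s by predicting each interior pixel
    from its 3x3 neighbourhood with the coefficients of Eqs. (17) and (18).
    A single global parameter set is assumed here for brevity."""
    a1 = K * np.sin(theta) * np.cos(phi) / alpha               # Eq. (17)
    a2 = K * np.sin(2 * theta) * np.cos(2 * phi) / alpha ** 2  # Eq. (18)
    f_i = np.asarray(f_i, dtype=float)
    f_s = f_i.copy()                      # borders are left unchanged
    for x in range(1, f_i.shape[0] - 1):
        for y in range(1, f_i.shape[1] - 1):
            hv = f_i[x - 1, y] + f_i[x + 1, y] + f_i[x, y - 1] + f_i[x, y + 1]
            dg = (f_i[x - 1, y - 1] + f_i[x - 1, y + 1]
                  + f_i[x + 1, y - 1] + f_i[x + 1, y + 1])
            f_s[x, y] = a1 * hv + a2 * dg
    return f_s
```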
5. Edge magnitude and direction

The edge magnitude is defined as the difference between the original image f_i(x, y) and the smoothed image f_s(x, y). At each pixel location (x, y), the edge magnitude is measured by taking the absolute value of the difference between the pixels at the corresponding locations of the original and smoothed images. That is,

f_d(x, y) = f_i(x, y) - f_s(x, y) \qquad (19)

M(x, y) = |f_d(x, y)| \qquad (20)

where M(x, y) represents the edge magnitude and f_d(x, y) represents the difference image.

According to statistical theory, the difference between the actual and estimated values is known as the residual. In our scheme, the residual represents the features (i.e. edges, textures, creases, etc.) of the untextured image that the model fails to capture while estimating the smooth image surface of the input image. The highest gradient between the original and smoothed image surfaces represents the edge pixels. Noise gives zero-crossings where the two image surfaces are tangent but with a very small gradient. If the smoothed image is parallel to the original with slight wavering, it indicates the presence of texture. These cases are illustrated in Fig. 4. A detailed discussion is available in (Chen et al., 1991).

Fig. 4. Geometrical representation of the features in the image.

For the 'residual image', the global statistics, the mean X̄ and the standard deviation σ, are calculated and applied in Eq. (21) to measure the confidence limit:

\text{Upper Confidence Limit} = \bar{X} + t\,\frac{\sigma}{\sqrt{n}} \qquad (21)

where t is (1 − ρ₁), ρ₁ = a₁/(1 − a₁) is the first order autocorrelation, which ranges from −1 to +1 (the value −1 represents non-homogeneity of the gray levels of the pixels in the image, whereas +1 represents homogeneity), and n is the number of pixels in the image. Each value in the residual image is then compared with the confidence limit of Eq. (21). If the pixel value is greater than or equal to the confidence limit it is squared; otherwise it is replaced with zero. That is,

f_{te}(x, y) = \begin{cases} [f_d(x, y)]^2 & \text{if } f_d(x, y) \geq \text{Upper Confidence Limit} \\ 0 & \text{otherwise} \end{cases} \qquad (22)

where f_te(x, y) represents the thick edges.
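A hedged sketch of Eqs. (21) and (22): the global upper confidence limit is computed from the mean, standard deviation and first order autocorrelation of the residual image, and residuals at or above the limit are squared to give the thick edge map. The use of the model coefficient a₁ of Eq. (17) for ρ₁ follows the text; the function names and data handling are our assumptions.

```python
import numpy as np

def upper_confidence_limit(values, a1):
    """Upper confidence limit of Eq. (21), with t = 1 - rho1 and
    rho1 = a1 / (1 - a1) as described in the text."""
    rho1 = a1 / (1.0 - a1)
    t = 1.0 - rho1
    values = np.asarray(values, dtype=float)
    return values.mean() + t * values.std() / np.sqrt(values.size)

def thick_edge_map(f_d, a1):
    """Thick-edge image f_te of Eq. (22): residuals at or above the global
    upper confidence limit are squared, the rest are set to zero."""
    f_d = np.asarray(f_d, dtype=float)
    ucl = upper_confidence_limit(f_d, a1)
    return np.where(f_d >= ucl, f_d ** 2, 0.0)
```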
For example, consider the 3 × 3 subimages shown in Figs. 5(a) and 6(a), which are the original forms of a background scene/textured subimage and an untextured subimage, respectively. The subimages shown in Figs. 5(b) and 6(b) are the smoothed versions of the aforesaid subimages (Figs. 5(a) and 6(a)). The original and estimated pixel values are plotted to describe geometrically the structures of the various features in the image; the plotted geometric structure is shown in Fig. 7. The following points can be observed from Fig. 7(a): (i) the highest gradient (corresponding to the third x-ordinate value) represents the fine edges, while the moderate gradients (corresponding to the 2nd and 5th x-ordinate values) represent the weak edges; (ii) a tangent with very small gradient between the original and smoothed images indicates noise; (iii) slightly wavering parallel lines with minimal gradient (corresponding to the 7th, 8th and 9th x-ordinate values) represent textures/background scenes. Fig. 7(b) shows the pixels that fall outside the upper confidence limit (UCL).

Fig. 5. Edge magnitude for background images: (a) original image; (b) smoothed image; (c) residual image.

Fig. 6. Edge magnitude and direction for an edge map: (a) original image; (b) smoothed image; (c) residual image between (a) and (b); (d) region extracted around the edge pixel; (e) edge direction.

Fig. 7. (a) Gradient between the original image and its smoothed version; (b) pixels representing the thick edge map.

It is observed that the result obtained after implementing the above procedure may contain thick edges. The thick edges indicate that the procedure has captured a region around an edge pixel and has not extracted the exact edge pixel. The extraction of thick edges is caused by the minimal smoothing across the edges. The output is shown in Fig. 8.

Fig. 8. Edge region extracted image.

The edge pixel that belongs to the extracted region is the pixel which has a higher intensity value than any other pixel around it. So, to detect the actual edge pixel from the extracted region, the nonmaxima suppression algorithm of Eq. (23) is applied:

f_{em}(x, y) = \text{nms}[f_{te}(x, y), \text{UCL}] \qquad (23)

where f_em(x, y) represents the extracted edge map and nms denotes nonmaxima suppression.

In order to obtain thin and continuous edges, we use the nonmaxima suppression algorithm with the confidence limit of Eq. (21). The confidence limit is calculated using the local mean, standard deviation and autocorrelation value of the thick edge map. Each value in the thick edge map is compared with the confidence limit. If a value is greater than the confidence limit, it is identified as an edge pixel and represented with 255; otherwise it is identified as a non-edge pixel and represented with 0. The output of this procedure is shown in Fig. 9.

Fig. 9. Edge map extracted using the proposed method.
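The final suppression stage is described only briefly in the text, so the sketch below is one possible reading of it: each interior pixel of the thick edge map is compared with a confidence limit computed from its own 3 × 3 window and is kept (value 255) only if it also is the local maximum of that window. The local-maximum test is our interpretation of the nonmaxima suppression step; the paper does not spell out the exact comparison rule.

```python
import numpy as np

def local_nms(f_te, a1):
    """Thin-edge map f_em: keep a pixel of the thick edge map only if it
    exceeds the local upper confidence limit (Eq. (21) on its 3x3 window)
    and is the maximum of that window; otherwise suppress it."""
    f_te = np.asarray(f_te, dtype=float)
    rho1 = a1 / (1.0 - a1)
    t = 1.0 - rho1
    f_em = np.zeros_like(f_te)
    for x in range(1, f_te.shape[0] - 1):
        for y in range(1, f_te.shape[1] - 1):
            win = f_te[x - 1:x + 2, y - 1:y + 2]
            cl = win.mean() + t * win.std() / np.sqrt(win.size)
            if f_te[x, y] > cl and f_te[x, y] == win.max():
                f_em[x, y] = 255
    return f_em
```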
In order to justify that the extracted edge map is fine and correct, the edge map is superimposed on its original image, so that the minor and fine edges can be seen to be correctly extracted and localized; such superimposed edge maps are shown in Fig. 10.

Fig. 10. Edge maps of Fig. 9(a) and (b) superimposed on their original images.

5.1. Algorithm

The proposed technique for edge extraction, explained in the preceding sections, is presented here in an algorithmic way.

Step 1: Input the gray scale image f_i of size m × m.
Step 2: Define four matrices f_s, f_d, f_te and f_em, each of size m × m, with all elements equal to zero.
Step 3: For i = 1, ..., m − 2
          For j = 1, ..., m − 2
            For k = i, ..., i + 2
              For l = j, ..., j + 2
                estimate the smoothed image f_s(i + 1, j + 1) from the input image by using Eq. (1).
Step 4: Find f_d:
          For i = 2, ..., m − 1
            For j = 2, ..., m − 1
              f_d(i, j) ← f_i(i, j) − f_s(i, j)
Step 5: Find the Confidence Limit (CL) globally on f_d, as in Eq. (21).
Step 6: Find f_te(i, j) by comparing each pixel with the CL:
          If f_d(i, j) ≥ CL then
            f_te(i, j) ← [f_d(i, j)]²
          else f_te(i, j) ← 0
          endif
Step 7: For i = 1, ..., m − 1
          For j = 1, ..., m − 1
            For k = i, ..., i + 2
              For l = j, ..., j + 2
                find the edge map by calculating the CL as in Eq. (21) and applying the nonmaxima suppression algorithm with the calculated CL:
                If f_te(i, j) ≥ CL then
                  f_em(i, j) ← 255
                else f_em(i, j) ← 0
                endif

Here, 255 represents an edge pixel and 0 represents a non-edge pixel.
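Putting the steps together, a compact driver might look as follows. It simply chains the helper functions sketched earlier (smooth_image, thick_edge_map, local_nms) and assumes the model parameters have already been estimated as in Section 3; it is an illustration of the control flow of Section 5.1 under those assumptions, not the authors' implementation.

```python
import numpy as np

def extract_edges(f_i, K, alpha, theta, phi):
    """End-to-end sketch of the algorithm in Section 5.1."""
    a1 = K * np.sin(theta) * np.cos(phi) / alpha   # Eq. (17); also used for rho1
    f_s = smooth_image(f_i, K, alpha, theta, phi)  # Steps 1-3: smoothed surface
    f_d = np.asarray(f_i, dtype=float) - f_s       # Step 4: residual image
    f_te = thick_edge_map(f_d, a1)                 # Steps 5-6: global CL, squaring
    f_em = local_nms(f_te, a1)                     # Step 7: local CL + suppression
    return f_em                                    # 255 = edge pixel, 0 = non-edge
```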
6. Comparison with some standard techniques

To validate the efficiency of the proposed technique, comparisons are made with the existing standard techniques Canny, Sobel and Prewitt. Several images were considered for the experiment, but only the outputs for the Lena and boat images are presented here. For the standard techniques, the edge maps are obtained using Matlab 6.5, release 13.0. The edge maps shown in Column 1 of Fig. 11 are the output of the proposed technique, and for the Canny, Sobel and Prewitt techniques the edge maps are given in Columns 2, 3 and 4, respectively. In this section, the comparative discussion is restricted to Canny's algorithm, since it is the one most commonly referred to by other authors. The default parameters, such as the lowest and highest threshold values fixed in Matlab, are used as they are. It can be observed from the obtained output that, in the Lena image, the minute edges in the rightmost bottom corner and the leftmost top corner and the ridge edges at the peak of the cap are detected well by the proposed technique, but these are not captured by the Canny detector. As for the boat image, the Canny detector detects spurious and very minor edges, whereas the proposed technique detects only the visible and fine edges. For instance, the ropes in the rightmost top corner and the tyre hanging on the front side of the boat are detected correctly; the Canny detector failed to detect these edges. It is evident that the proposed technique is more or less similar to, or better than, the Canny detector.

Fig. 11. Edge maps obtained by different detectors. Column 1 is obtained by the proposed technique; Columns 2, 3 and 4 are obtained by the Canny, Sobel and Prewitt detectors.

7. Edge map in synthetic images

In order to validate the efficiency of the proposed edge detection technique on images other than the natural Lena and boat images, two kinds of synthetic images are considered in this section. The artificially synthesised images, (i) a concentric circle image and (ii) a square image, with well defined homogeneous regions formed with different contrasts, are considered, and the output of the proposed technique is also compared with Canny's output. A detailed discussion of the circle and square images can be found in (Rakesh et al., 2004; Ding and Goshtasby, 2001). The edge map obtained for the circle image is evidence that the proposed technique extracts edges in all directions. Ding and Goshtasby have reported that their algorithm extracts the edge junction at the meeting point of the four regions in the square image better than Canny's algorithm; however, a slight blur is visible at the junction of the four regions in their result. Our proposed technique captures the edges well and clearly at the junction of the four regions. The output of the proposed technique is given in Fig. 12.

Fig. 12. Column 1: concentric circle and square images of size 256 × 256; Column 2: smoothed image; Column 3: edge map of the proposed method; Column 4: edges extracted by the Canny detector.

The Gaussian derivative is used in the Canny and revised Canny edge detectors. It oversmoothes the edges in the image. This may be the reason the edges at the meeting point of the four regions of the square image are not clearly detected. This oversmoothing can be seen in the experimental results of Ding and Goshtasby.

8. Results and discussion

Most of the existing edge detection algorithms use a Gaussian filter to smooth the input image surface. The Gaussian filter oversmoothes the images; a detailed discussion is available in (Ding and Goshtasby, 2001). Since it oversmoothes the images, detectors like Canny capture spurious and minor edges and also do not capture the edge junction well. In our proposed technique, the FRAR model is used to smooth the images. It predicts the input image surface accurately, close to the actual input image surface, for background scenes (homogeneous regions), while there is a reasonable difference between the original and smoothed images for edge portions (inhomogeneous regions). So our proposed method detects the edge magnitude and direction of a pixel with minimal smoothing across the edges.

The number of input parameters is high in the existing algorithms, and the threshold values vary from image to image. In the proposed technique, the parameters and the threshold t are computed automatically from the intensity values of the image concerned. Since σ and the autocorrelation ρ₁ are calculated automatically, the proposed technique works adaptively for different kinds of images. For discussion purposes, only a few images are given here. The experiments were carried out with Visual Basic 6.0 on a Pentium III 850 MHz PC.

9. Conclusion

In this paper, we have presented a novel technique based on the FRAR model for minimal smoothing across the edges. The proposed technique does not require any input parameters or threshold. Each pixel magnitude is measured by differencing the original and smoothed images, and the edge direction is identified implicitly. The confidence limit measures are used at two stages: the first with global statistics to extract thick edges, and the second with local statistics in the nonmaxima suppression algorithm to remove the minimal intensity pixel values from the thick edges. To justify the performance of our method in terms of accuracy, a series of experiments was conducted on different types of natural and synthetic images and their results are presented. The output of the proposed technique is also compared with the outputs of the Canny, Sobel and Prewitt algorithms. The proposed method provides more or less similar or better results than the existing standard methods.

Acknowledgements

The authors wish to thank the reviewers for their valuable suggestions to improve the readability of the paper.

References

Bouman, C., Sauer, K., 1993. A generalised Gaussian image model for edge preserving MAP estimation. IEEE Trans. Image Process. 2, 296–310.
Canny, J., 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Machine Intell. 8 (6), 679–698.
Chalmond, B., 1989. An iterative Gibbsian technique for simultaneous structure estimation and reconstruction of M-ary images. Pattern Recognition 22 (6), 747–761.
Chen, Jia-Lin, Kundu, Amlan, 1995. Unsupervised segmentation using multichannel decomposition and hidden Markov models. IEEE Trans. Image Process. 4 (5), 603–619.
Chen, M.H., Lee, D., Pavlidis, T., 1991. Residual analysis for feature detection. IEEE Trans. Pattern Anal. Machine Intell. 13, 30–40.
Comer, M.L., Delp, E.J., 1999. Segmentation of textured images using a multiresolution Gaussian autoregressive model. IEEE Trans. Image Process. 8 (3), 408–420.
Ding, Lijun, Goshtasby, Ardeshir, 2001. On the Canny edge detector. Pattern Recognition 34, 721–725.
Elia, C.D., Poggi, G., Scarpa, G., 2003. A tree-structured Markov random field model for Bayesian image segmentation. IEEE Trans. Image Process. 12 (10), 1259–1273.
Geman, S., Geman, D., 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell. 6, 721–741.
Hueckel, M., 1971. An edge operator which locates edges in digital pictures. J. ACM 18, 113–125.
Kadaba, S.R., Gelfand, S.B., Kashyap, R.L., 1998. Recursive estimation of images using non-Gaussian autoregressive models. IEEE Trans. Image Process. 7 (10), 1439–1452.
Kim, Dong-Su, Lee, Wang-Heon, Kweon, In-So, 2004. Automatic edge detection using 3 × 3 ideal binary pixel patterns and fuzzy-based edge thresholding. Pattern Recognition Lett. 25, 101–106.
Krishnamoorthi, R., Bhattacharya, P., 1998. Color edge extraction using orthogonal polynomials based zero-crossing scheme. Inform. Sci. 112, 51–65.
Krishnamoorthi, R., Seetharaman, K., 2007. Image compression based on a family of stochastic models. Signal Process. 87 (3), 408–416.
Lee, D., Pavlidis, T., Huang, K., 1988. Edge detection through residual analysis. In: Proc. IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, pp. 215–222.
Li, S.Z., 2000. Roof-edge preserving image smoothing based on MRFs. IEEE Trans. Image Process. 9 (6), 1134–1138.
Marr, D., Hildreth, E.C., 1980. Theory of edge detection. Proc. Roy. Soc. London B 207, 187–217.
Pavlidis, T., Lee, D., 1988. Residual analysis for feature extraction. In: Simon, J.C. (Ed.), From Pixels to Features (Proc. COST13 Workshop, Bonas, France, August 22–27, 1988). North-Holland, Amsterdam, The Netherlands, pp. 219–227.
Prewitt, J.M.S., 1970. Object enhancement and extraction. In: Lipkin, B.S., Rosenfield, A. (Eds.), Picture Processing and Psychopictorics. Academic Press, New York.
Qie, P., Bhandarkar, S.M., 1996. An edge detection technique using local smoothing and statistical hypothesis testing. Pattern Recognition Lett. 17 (8), 849–872.
Rakesh, Rishi R., Chaudhuri, Probal, Murthy, C.A., 2004. Thresholding in edge detection: a statistical approach. IEEE Trans. Image Process. 13 (7), 927–936.
Roberts, L.G., 1965. Machine perception of three dimensional solids. In: Tipper, J.T. (Ed.), Optical and Electro-optical Information Processing. MIT Press, Cambridge, MA.
Romberg, Justin K., Choi, Hyeokho, Baraniuk, Richard G., 2001. Bayesian tree-structured image modeling using wavelet-domain hidden Markov models. IEEE Trans. Image Process. 10 (7), 1056–1068.
Rosenfield, A., 1981. The Max Roberts operator is a Hueckel-type edge detector. IEEE Trans. Pattern Anal. Machine Intell. 3 (1), 101–103.
Sarkar, Anjan, Manoj Biswas, K., Sharma, K.M.S., 2000. A simple unsupervised MRF model based image segmentation approach. IEEE Trans. Image Process. 9 (5), 801–812.
Sobel, I.E., 1970. Camera models and machine perception. Ph.D. Thesis, Stanford University.
Stern, D., Kurz, L., 1988. Edge detection in correlated noise using Latin squares models. Pattern Recognition 21, 119–129.
Zheng, Sheng, Liu, Jain, Tian, Jin Wen, 2004. A new efficient SVM-based edge detection method. Pattern Recognition Lett. 25, 1143–1154.
