
AUTOMATIC RETINA EXUDATES SEGMENTATION WITHOUT A MANUALLY LABELLED TRAINING SET

L. Giancardo, F. Mériaudeau, T. P. Karnowski, Y. Li, K. W. Tobin, E. Chaum, MD

Université de Bourgogne
Oak Ridge National Laboratory
University of Tennessee Health Science Center
ABSTRACT
Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy which can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, two new methods for the detection of exudates are presented which do not use a supervised learning step; therefore, they do not require labelled lesion training sets, which are time consuming to create, difficult to obtain and prone to human error. We introduce a new dataset of fundus images from various ethnic groups and levels of DME which we have made publicly available. We evaluate our algorithm with this dataset and compare our results with two recent exudate segmentation algorithms. In all of our tests, our algorithms perform better or comparably with an order of magnitude reduction in computational time.
Index Terms — fundus image database, segmentation, computer-aided diagnosis, retina normalisation
1. INTRODUCTION
Diabetic retinopathy (DR) is a progressive eye disease that currently affects 250 million people worldwide. Diabetic macular edema (DME) is a complication of DR and it is a common cause of vision loss and blindness. DME occurs from swelling of the retina in diabetic patients due to leaking of fluid from microaneurysms within the macula. Ophthalmologists can infer the presence of the fluid that causes the retina thickening in diabetic patients by the presence of accompanying lipid deposits called exudates. They appear as bright structures with well defined edges and variable shapes.
The approaches to exudate segmentation presented in the literature can be roughly divided into four different categories. Thresholding methods base the exudate identification on a global or adaptive grey level analysis [1, 2]. Region growing methods segment the images using the spatial contiguity of grey levels; a standard region growing approach is used in [3], which is computationally expensive. In [4] the computational issues are reduced by employing edge detection to limit the size of regions. Morphology methods employ grey scale morphological operators to identify all structures
with predictable shapes (such as vessels). These structures are removed from the image so that exudates can be identified [5, 6]. Classification methods build a feature vector for each pixel or pixel cluster, which is then classified by employing a machine learning approach into exudates or non-exudates [7, 8, 9] or additional types of bright lesions [10, 11] (such as drusen and cotton wool spots), all of which require good ground-truth datasets.
In this paper we present two variations of a new exudate segmentation method that falls into the category of thresholding methods and does not require supervised learning. By avoiding a supervised learning step, we prevent common issues with human segmentation inconsistencies. In addition, many supervised learning methods require large amounts of data, which can be very difficult to obtain in this application domain. We introduce a new way to normalise the fundus image and directly compare our method with an implementation of a morphology based technique [6] and another thresholding based technique [2]. In Section 2 we introduce the new public dataset used for testing the algorithm; Section 3 presents the details of the two automatic exudate segmentation methods; Section 4 presents the results by comparing them to two other published techniques; finally, Section 5 concludes the discussion.
2. MATERIALS
For the past 5 years, our group has been designing a semi-automated, HIPAA-compliant, teleophthalmology network for the screening of diabetic retinopathy and related conditions by employing a single macula centred fundus image. We have randomly extracted a set of 169 images representative of various degrees of DME and patient ethnicity (the appearance of the retinal fundus greatly varies between ethnic groups). For testing purposes each image of the dataset was manually segmented by one of the co-authors (E. Chaum). The dataset has been made publicly available to the research community (http://vibot.u-bourgogne.fr/luca/heimed.php).
Fig. 1. (a) Example of a fundus image of the dataset used. The square represents the area shown in (d) and (e); (b) Estimated background, bgEst2; (c) Initial exudate candidates, $I_{cand}$; (d) Image detail of the stationary Haar wavelet analysis; (e) Image detail of Kirsch's edges analysis; (f) Exudate probability with wavelets, $I_{wav}$; (g) Exudate probability with Kirsch's edges, $I_{kirsch}$.
3. METHOD
The two methods presented are divided in two parts: Preprocessing (which is shared by both techniques) and Exudate detection. Fig. 1 shows an example of a fundus image and the intermediate and final steps of our techniques.
3.1. Preprocessing
3.1.1. Background Estimation
Our approach uses the green channel $I_g$ of the image and the intensity channel $I_i$ from the HSI colour space. We start the analysis by estimating the background with a large median filter, whose size is 3/10 of the height of the fundus image, applied on $I_i$. This approach has been used previously [12, 13] and has great computational performance advantages over other methods [14]. In other median filtering normalisation techniques, the background is subtracted from the original image in order to obtain a normalised version. In our approach, we enhance the normalisation with morphological reconstruction [15], as opposed to the more common practice of subtracting the median filtered result, which seems to improve the removal of the nerve fibre layer and other structures at the edges of the optic nerve (ON). The pseudocode of Algorithm 1 illustrates this step.
Algorithm 1 Background Estimation
1: function MORPHBGEST(Ig)
2:   bgEst ← MEDIANFILTER(Ig)
3:   INITIALISE(bgMask)              ▷ set all image pixels to 0
4:   for y ← 0 to HEIGHT(Ig)-1 do
5:     for x ← 0 to WIDTH(Ig)-1 do
6:       if Ig(x, y) < bgEst(x, y) then
7:         bgMask(x, y) ← bgEst(x, y)
8:       else
9:         bgMask(x, y) ← Ig(x, y)
10:      end if
11:    end for
12:  end for
13:  bgEst2 ← MORPHRECONSTR(bgEst, bgMask)
14:  return bgEst2
15: end function
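As an illustration, a minimal Python sketch of Algorithm 1 (the paper's own implementation is in unoptimised Matlab); the filter size, function names and image value range here are assumptions, not details taken from the paper.

import numpy as np
from scipy.ndimage import median_filter
from skimage.morphology import reconstruction

def morph_bg_est(Ig, filter_size=101):
    """Sketch of Algorithm 1: coarse background from a large median filter,
    refined with greyscale morphological reconstruction.
    `Ig` is a single-channel image; `filter_size` is illustrative."""
    bg_est = median_filter(Ig, size=filter_size)
    # Lines 3-12 of Algorithm 1: bgMask is the pointwise maximum of the
    # image and the coarse background estimate.
    bg_mask = np.maximum(Ig, bg_est)
    # Reconstruction by dilation of bg_est under bg_mask (bg_est <= bg_mask).
    bg_est2 = reconstruction(bg_est, bg_mask, method='dilation')
    return bg_est2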
3.1.2. Image Normalisation
Once the background bgEst2 is estimated, it is subtracted from the original image with signed precision. In the resulting image, the highest peak of the histogram is always centred on zero regardless of the ethnicity of the patient or disease stratification. The histogram shows a clear distinction between dark structures and bright structures. The former represent the vasculature, macula, dark lesions and other structures, and their distribution varies depending on the ethnicity or pigmentation of the patient. Bright structures are found on the positive side of the histogram and include the ON, bright lesions,
and other structures. The distribution is fairly constant across different ethnicities.

Because of the characteristics of the normalised image, we can select all the exudate candidates $I_{cand}$ with a hard threshold $th_{cand}$. This has obvious computational advantages in comparison with model fitting operations, which are more sensitive to suboptimal background subtraction and do not offer clear motivations for threshold selection. In our case we use $th_{cand} = 3$, simply by empirically choosing a value slightly above 0 in order to accommodate small background estimation errors.
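A minimal sketch of this normalisation and candidate selection, assuming an 8-bit green channel and the background estimate from the previous sketch; variable names are illustrative.

import numpy as np

def exudate_candidates(Ig, bg_est2, th_cand=3):
    """Signed background subtraction followed by a hard threshold slightly
    above zero (th_cand = 3 in the paper) to select exudate candidates."""
    # Signed precision: the main histogram peak of I_norm sits at zero,
    # dark structures fall on the negative side, bright ones on the positive.
    I_norm = Ig.astype(np.int16) - bg_est2.astype(np.int16)
    I_cand = I_norm > th_cand          # binary mask of candidate pixels
    return I_norm, I_cand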
The pre-processing step of ON removal seems to be a constant across all the methods in the literature. This is understandable given the potential colour resemblance between the ON and exudates in fundus images. Since ON detection is a mature technique and we want to compare the exudate detection only, we did not implement an automatic ON detection for our experiments. Instead, we simply removed the ON manually and masked out a region slightly larger than the average ON size. An example of a technique for automatic ON localisation (that works flawlessly on our dataset) can be found in [16].
3.2. Exudate detection
The exudate detection is performed by assigning a score to each exudate candidate. The exudate candidates are selected by running an 8-neighbour connected component analysis on $I_{cand}$. We have implemented two ways to assign this score, one based on Kirsch's edges [17] and the other based on stationary wavelets. Both methods seek to take advantage of the higher inner and outer edge values of exudates in comparison to non-exudate structures.
3.2.1. Kirsch's Edges
Kirsch's edges try to capture the external edges of the lesion candidate. This edge detector is based on the kernel $k$ (shown below) evaluated at 8 different directions on $I_{cand}$. The kernel outputs are combined together by selecting the maximum value found at each pixel. The result is stored in the final $I_{kirsch}$ image.
$$k = \frac{1}{15}\begin{bmatrix} 5 & 5 & 5 \\ -3 & 0 & -3 \\ -3 & -3 & -3 \end{bmatrix} \qquad (1)$$
The average edge outputs of $I_{kirsch}$ under each lesion cluster are calculated and assigned to the lesion in its entirety. The thresholds used to evaluate the final output are $th_{kirsch} \in \{0 : 0.5 : 30\}$.
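A minimal sketch of the Kirsch scoring, using the normalised image and candidate mask from the previous sketch. Computing the edge response on the normalised image, the 1/15 normalisation and the kernel-rotation helper are assumptions rather than details stated above.

import numpy as np
from scipy import ndimage as ndi

# One orientation of the standard Kirsch kernel; the /15 factor follows Eq. (1).
KIRSCH = np.array([[ 5,  5,  5],
                   [-3,  0, -3],
                   [-3, -3, -3]], dtype=float) / 15.0
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate45(k):
    """Next of the 8 Kirsch orientations: shift the kernel's outer ring by one."""
    out = k.copy()
    vals = [k[r, c] for r, c in RING]
    vals = vals[-1:] + vals[:-1]
    for (r, c), v in zip(RING, vals):
        out[r, c] = v
    return out

def kirsch_scores(I_norm, I_cand):
    """Mean of the maximum Kirsch response under each candidate cluster."""
    img = I_norm.astype(float)
    k, I_kirsch = KIRSCH, np.full(img.shape, -np.inf)
    for _ in range(8):                       # max over the 8 directions
        I_kirsch = np.maximum(I_kirsch, ndi.convolve(img, k, mode='nearest'))
        k = rotate45(k)
    labels, n = ndi.label(I_cand, structure=np.ones((3, 3)))   # 8-connectivity
    return ndi.mean(I_kirsch, labels=labels, index=np.arange(1, n + 1))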
3.2.2. Stationary Wavelets
Quellec et al. [18] proposed a method for the detection of retinal microaneurysms in the wavelet space. We try to capture the strong peak at the centre of exudates by developing a method that employs a similar wavelet to Quellec et al., but that evaluates the results in image space. A stationary Haar wavelet analysis is performed up to the second level on $I_{cand}$. The process is inverted maintaining the last vertical, diagonal and horizontal details only, as these are the wavelet coefficients that seem to contain most of the foreground structures. By transforming back to the image space we obtain a background-less image $I_{wav}$ with a strong response at the centre of exudates. Similarly to the previous approach, we evaluate the candidates by detecting the variation in the $I_{wav}$ area which corresponds to each lesion cluster of $I_{cand}$, as follows.
$$\mathit{score}_{wav} = \frac{\max(px_{wav}) - \min(px_{wav})}{\max(px_{wav})} \qquad (2)$$

where $px_{wav}$ are the pixels of $I_{wav}$ corresponding to a lesion candidate cluster. The thresholds used to evaluate the final output are $th_{wav} \in \{0 : 0.05 : 1\}$.
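A minimal sketch of the wavelet-based score, assuming the PyWavelets swt2/iswt2 routines, that the analysis runs on the normalised image produced earlier, and that the image dimensions are already a multiple of 4; the coefficient ordering noted in the comment is an assumption to verify.

import numpy as np
import pywt
from scipy import ndimage as ndi

def wavelet_scores(I_norm, I_cand):
    """Score each candidate cluster with Eq. (2) on the detail-only
    reconstruction of a 2-level stationary Haar analysis."""
    img = I_norm.astype(float)
    coeffs = pywt.swt2(img, 'haar', level=2)   # dimensions must be multiples of 4
    kept = []
    for i, (cA, (cH, cV, cD)) in enumerate(coeffs):
        z = np.zeros_like(cA)
        # Keep only the second-level details; coeffs[0] is assumed to hold the
        # deepest level (check the ordering used by your PyWavelets version).
        kept.append((z, (cH, cV, cD)) if i == 0 else (z, (z, z, z)))
    I_wav = pywt.iswt2(kept, 'haar')           # background-less image

    labels, n = ndi.label(I_cand, structure=np.ones((3, 3)))  # 8-connectivity
    scores = []
    for j in range(1, n + 1):
        px = I_wav[labels == j]
        mx = px.max()
        scores.append((mx - px.min()) / mx if mx > 0 else 0.0)
    return np.array(scores)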
4. RESULTS
In our results, we compare the two techniques described in this paper with our implementation of [6] and [2]. In all instances, we evaluate the performance of the lesion segmentation algorithms on a lesion by lesion basis for each image. The analysis starts from the labelled image in the dataset and each lesion is compared to the automatic segmentation one by one. A lesion is considered a true positive (TP) if it overlaps at least in part with the ground-truth; a false negative (FN) if no corresponding lesion is found in the automatic segmentation; a false positive (FP) if an exudate is found in the automatic segmentation but no corresponding lesion has been manually segmented. In the evaluation of the segmentation, we did not employ any true negatives (TN). As such, we avoid any evaluation of specificity, which is inherently high in images where the lesions represent a very small percentage of the total image area. In order to compare the methods fairly, all the images in the dataset have been resized to the resolution used in the original papers (but maintaining the original width/height ratio). It should be noticed that in the test results of this section all the regions corresponding to bright lesions other than exudates have not been taken into account.
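The lesion-by-lesion matching described above can be sketched as follows; a minimal illustration assuming binary ground-truth and detection masks, with function and variable names of our own choosing.

import numpy as np
from scipy import ndimage as ndi

def lesion_counts(gt_mask, det_mask):
    """Per-image lesion-level counts: a ground-truth lesion overlapped by any
    detection is a TP, an unmatched ground-truth lesion is an FN, and a
    detection with no ground-truth overlap is an FP."""
    s8 = np.ones((3, 3))                        # 8-connectivity
    gt_lab, n_gt = ndi.label(gt_mask, structure=s8)
    det_lab, n_det = ndi.label(det_mask, structure=s8)
    tp = sum(1 for i in range(1, n_gt + 1) if det_mask[gt_lab == i].any())
    fn = n_gt - tp
    fp = sum(1 for j in range(1, n_det + 1) if not gt_mask[det_lab == j].any())
    return tp, fn, fp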
Fig. 2(a) shows the free-response receiver operating characteristic (FROC) analysis [19] of the implemented algorithms on the entire dataset. The global sensitivity is evaluated as $\frac{TP}{TP+FN}$ on each image and then averaged together. The global positive predictive value (PPV) is computed in a similar way but employing $\frac{TP}{TP+FP}$. It can be noticed how our two methods and the technique developed by Sanchez et al. performed comparably, with our technique (with Kirsch's edges) performing somewhat better.
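For one operating point of the FROC curve, the per-image ratios are then averaged across the dataset; a minimal sketch assuming the per-image counts from the sketch above (skipping images where a ratio is undefined is our assumption).

import numpy as np

def froc_point(counts):
    """Average per-image sensitivity TP/(TP+FN) and PPV TP/(TP+FP).
    `counts` is a list of (tp, fn, fp) tuples, one per image."""
    sens = [tp / (tp + fn) for tp, fn, fp in counts if tp + fn > 0]
    ppv = [tp / (tp + fp) for tp, fn, fp in counts if tp + fp > 0]
    return float(np.mean(sens)), float(np.mean(ppv))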
Fig. 2. FROC curves (sensitivity against positive predictive value). (a) the exudate segmentation performance of Sanchez 2009, Sopharak 2008 and our two methods on all images; (b) the performance on the images diagnosed with DME only; (c) the exudate segmentation performance of our method using Kirsch's edges in the macula area; each line corresponds to a growing circular area centred on the fovea.

Similar results are shown in Fig. 2(b). In this case only the 54 images with DME are used in order to have a better
evaluation of the segmentation performance. In this case both
of our methods seem to have a higher sensitivity than the other
two methods implemented.
Since exudation that is close to the fovea is more likely to cause vision loss, we tested the performance of the Kirsch's edges method in this region (Fig. 2(c)). The algorithm performance improves because the macular pigment in the area around the fovea tends to be darker than the rest of the image, therefore exudates have more contrast, and there are fewer FPs due to the removal of structures near the larger vessels.
We note that the true goal of automated screening is a diagnosis of the patient condition, and lesion segmentation is a step towards this goal. Consequently, the detection of every exudate in an image with many may not be as important as finding a single significant lesion in an image with only one. Therefore, we have evaluated the algorithms based on their ability to discern patients with or without DME by employing the hard threshold of a single lesion. If one or more exudates are found, the image is diagnosed with DME; otherwise the patient is classified as being negative. Fig. 3 shows the results of this experiment by means of a standard ROC analysis on our dataset. This is possible because we are classifying the image DME condition and not the lesion segmentation, therefore we can employ TNs. Again, our methods seem to perform better than or comparably to the other algorithms. Although the area under the ROC curve (AUC) is highest for our method with Kirsch's edges, the technique with wavelets shows a higher sensitivity at a higher specificity, a useful aspect for the development of an automatic DME screening tool.
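A minimal sketch of this image-level rule; the threshold sweep is illustrative, and the candidate scores come from either scoring method above.

import numpy as np

def diagnose_dme(scores, th):
    """Call an image DME-positive if at least one candidate lesion score
    reaches the threshold, negative otherwise."""
    return any(s >= th for s in scores)

# Sweeping th over the method's range, e.g. the Kirsch thresholds
# {0 : 0.5 : 30}, yields one ROC operating point per threshold.
example = [diagnose_dme([0.0, 4.5, 12.0], th) for th in np.arange(0, 30.5, 0.5)]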
Fig. 3. ROC curves (sensitivity against 1 - specificity) of the diagnosis of DME by employing only the lesions segmented, i.e. an image is considered positive if it shows at least a lesion.

The computational performances are evaluated on a Dual Core 2.6 GHz machine with 4 GB of RAM. All the algorithms are implemented in unoptimised Matlab code. Our methods require 2.4 and 1.9 seconds per image for the wavelet and Kirsch's edges methods, while the methods of [2] and [6] require 39 and 36 seconds respectively. A large reason for the computational savings is a bilinear image interpolation and
expectation maximization step in [2], and the calculation of
the standard deviation for each pixel for [6].
5. DISCUSSION AND CONCLUSION
We have evaluated the global segmentation performance with the real distribution of patients in a screening setting and on only the patients showing signs of exudation. The results are particularly encouraging, especially because of the comparison with the other techniques by Sopharak et al. and Sanchez et al. The method by Sanchez et al. is somewhat close to ours in these tests; however, the image normalisation procedure gives a substantial computational advantage to our method. The median filter with morphological reconstruction approach maintains a good contrast of the foreground structures by limiting the effects of the noise due to nerve fibre layer reflections and other small artefacts. In addition, we have evaluated
our algorithms' ability to identify patients with DME based on the segmentation of one or more lesions in a fundus image. This is a simplistic method for DME diagnosis, but it does provide a baseline of the possible screening performance that can be achieved by employing the output of the segmentation as a classification feature.
Many other exudate segmentation methods have been published throughout the years using a variety of datasets and evaluation methods. This makes a direct comparison almost impossible, as emphasized by many of the authors themselves. We note this issue in this paper as we found that our implementation of two algorithms did not perform as well as on the respective datasets employed by the original authors. We contribute to a solution to this problem by making available our heterogeneous, ground-truthed dataset to the research community.
6. ACKNOWLEDGEMENTS
These studies were supported in part by grants from Oak Ridge National Laboratory, the National Eye Institute (EY017065), by an unrestricted UTHSC Departmental grant from Research to Prevent Blindness (RPB), New York, NY, Fight for Sight, New York, NY, by The Plough Foundation, Memphis, TN, and by the Regional Burgundy Council, France. Dr. Chaum is an RPB Senior Scientist.
7. REFERENCES
[1] R Phillips, J Forrester, and P Sharp, "Automated detection and quantification of retinal exudates," Graefes Arch Clin Exp Ophthalmol, vol. 231, no. 2, pp. 90-94, Feb 1993.
[2] C I Sanchez, M Garcia, A Mayo, M I Lopez, and R Hornero, "Retinal image analysis based on mixture models to detect hard exudates," Medical Image Analysis, vol. 13, no. 4, pp. 650-658, Aug 2009.
[3] C Sinthanayothin, J F Boyce, T H Williamson, H L Cook, E Mensah, S Lal, and D Usher, "Automated detection of diabetic retinopathy on digital fundus images," Diabetic Medicine, vol. 19, no. 2, pp. 105-112, Feb 2002.
[4] H Li and O Chutatape, "Automated feature extraction in color retinal images by a model based approach," IEEE Transactions on Biomedical Engineering, vol. 51, no. 2, pp. 246-254, Feb 2004.
[5] T Walter, J C Klein, P Massin, and A Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy - detection of exudates in color fundus images of the human retina," IEEE Transactions on Medical Imaging, vol. 21, no. 10, pp. 1236-1243, Oct 2002.
[6] Akara Sopharak, Bunyarit Uyyanonvara, Sarah Barman, and Thomas H Williamson, "Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods," Computerized Medical Imaging and Graphics, vol. 32, no. 8, pp. 720-727, Dec 2008.
[7] G G Gardner, D Keating, T H Williamson, and A T Elliott, "Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool," Br J Ophthalmol, vol. 80, no. 11, pp. 940-944, Nov 1996.
[8] A Osareh, M Mirmehdi, B Thomas, and R Markham, "Automated identification of diabetic retinal exudates in digital colour images," Br J Ophthalmol, vol. 87, no. 10, pp. 1220-1223, Oct 2003.
[9] M Garcia, C I Sanchez, M I Lopez, D Abasolo, and R Hornero, "Neural network based detection of hard exudates in retinal images," Comput Methods Programs Biomed, vol. 93, no. 1, pp. 9-19, Jan 2009.
[10] M Niemeijer, B van Ginneken, S R Russell, M S A Suttorp-Schulten, and M D Abramoff, "Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis," Invest Ophthalmol Vis Sci, vol. 48, no. 5, pp. 2260-2267, May 2007.
[11] A D Fleming, K A Goatman, and S Philip, "The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy," British Journal of Ophthalmology, August 2009.
[12] M Niemeijer, B van Ginneken, J Staal, M S A Suttorp-Schulten, and M D Abramoff, "Automatic detection of red lesions in digital color fundus photographs," IEEE Transactions on Medical Imaging, vol. 24, no. 5, pp. 584-592, 2005.
[13] A D Fleming, S Philip, K A Goatman, J A Olson, and P F Sharp, "Automated microaneurysm detection using local contrast normalization and local vessel detection," IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1223-1232, Sept. 2006.
[14] M Foracchia, E Grisan, and A Ruggeri, "Luminosity and contrast normalization in retinal images," Medical Image Analysis, vol. 9, no. 3, pp. 179-190, Jun 2005.
[15] L Vincent, "Morphological grayscale reconstruction in image analysis: applications and efficient algorithms," IEEE Transactions on Image Processing, vol. 2, no. 2, pp. 176-201, 1993.
[16] T. P. Karnowski, D. Aykac, E. Chaum, L. Giancardo, Y. Li, K. W. Tobin, and M. D. Abramoff, "Practical considerations for optic nerve location in telemedicine," in Proc. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009), Sept. 3-6, 2009, pp. 6205-6209.
[17] R A Kirsch, "Computer determination of the constituent structure of biological images," Computers and Biomedical Research, vol. 4, no. 3, pp. 315-328, Jun 1971.
[18] G Quellec, M Lamard, P M Josselin, G Cazuguel, B Cochener, and C Roux, "Optimal wavelet transform for the detection of microaneurysms in retina photographs," IEEE Trans on Medical Imaging, vol. 27, no. 9, pp. 1230-1241, 2008.
[19] P Bunch, G Hamilton, J Sanderson, and A Simmons, "A free-response approach to the measurement and characterization of radiographic-observer performance," Journal of Applied Photographic Engineering, vol. 4, pp. 166-171, 1978.
