
Automatic Vessel Extraction with combined Bottom-hat and Match-Filter

Danu Onkaew1, Rashmi Turior1, Bunyarit Uyyanonvara1


School of Information, Computer and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand. Email: {danu.onkaew,rashmi.turior,bunyarit}@siit.tu.ac.th

Nishihara Akinori2
The Center for Research and Development of Educational Technology, W9-108, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8550, Japan. Email: aki@cradle.titech.ac.jp

Chanjira Sinthanayothin3
National Electronics and Computer Technology Center, 112 Thailand Science Park, Phahon Yothin Rd., Klong I, Klong Luang, Pathumthani, Thailand. Email: chanjira.sinthanayothin@nectec.or.th

Abstract—Retinal vessel extraction is important for the detection of numerous eye diseases and plays an important role in automatic retinal disease screening systems. In this paper, a vessel extraction algorithm based on a combination of the matched filter and the bottom-hat transform is proposed. First, the green channel is extracted from the original image. It is then passed to the matched filter and the bottom-hat transform separately in order to enhance the contrast of vessels against the background. Both enhanced images are binarized by thresholding. Finally, the two binary images are combined by aligning them together: any pixel that appears in both binary images is considered a vessel. The receiver operating characteristic (ROC) curve, the area under the ROC curve, and the segmentation accuracy are taken as the performance criteria. The method's performance is evaluated on two publicly available databases of manually labeled images (DRIVE and STARE). The results demonstrate that the proposed method outperforms other unsupervised methods in terms of maximum average accuracy (MAA). The proposed method achieves an area under the ROC curve and an accuracy of 0.8557 and 0.9388 for the DRIVE database, and 0.9019 and 0.9405 for the STARE database, respectively.

I. INTRODUCTION

The characteristics of retinal blood vessels, for example vessel width, color, and tortuosity, are risk indicators for many diseases such as diabetes, arteriosclerosis, hypertension and retinopathy of prematurity. Assessment of the vessels therefore plays an important role in a wide range of ophthalmological diagnoses. Vessel extraction is the most vital step in retinal image analysis: the accuracy of all measurements related to vascular morphology depends on the detection procedure. There are many published methods for detection of the retinal blood vessel tree. The method presented by Chaudhuri et al. [1] is based on a 2-D matched filter; the matched filter is used for detection of piecewise linear segments of retinal

blood vessels. Hoover et al. [2] improved the method in [1] with the concept of threshold probing; the results clearly show an increase in the true positive rate over basic thresholding of a matched filter response. Kande et al. [3] also use the matched filter of [1] to detect the vessel tree; an improved result is achieved with a thresholding algorithm based on Spatially Weighted Fuzzy c-Means (SWFCM) clustering. Staal et al. [4] proposed an automated segmentation of vessels in two-dimensional color images of the retina, based on extraction of image ridges, which approximate the vessel centerlines. Akram et al. [5] and Oloumi et al. [6] detected the vascular pattern and thin vessels using the 2-D Gabor wavelet. Sofka and Stewart [7] improved the detection of low-contrast and narrow vessels with multi-scale matched filters; their algorithm combines matched-filter responses, confidence measures and vessel boundary measures into a six-dimensional measurement vector at each pixel, and a training technique maps this vector to a likelihood ratio that serves as the vesselness measure at each pixel. A supervised approach based on an artificial neural network (ANN) was proposed for blood vessel extraction in [8], [9]; the sensitivity and specificity achieved by that method are quite high, but post-processing was required to remove misclassified vessels. The methods mentioned above work well for detecting the main parts of the vessel tree; however, they do not perform well at extracting narrow vessels. Because vessels have a wide range of widths, and areas of small width usually have very low contrast, a narrow vessel can easily be missed. In addition, detection of non-vascular structures such as the camera

aperture boundary and the optic disc along with the vascular structure is a concern. To address these problems, a vessel extraction algorithm based on a combination of the matched filter and the bottom-hat transform is proposed. First, the green channel of the original image is passed to the matched filter and the bottom-hat transform separately, since it offers the best contrast of vessels against the background. Then, both resulting images are binarized by thresholding. Finally, the two binary images are combined by aligning the images together: any pixel that appears in both binary images is considered a vessel. This paper is organized in four sections. Section II gives a schematic overview of our methodology and explains, step by step, the techniques required for retinal blood vessel segmentation. Experimental results and an evaluation of the algorithm on the images of the DRIVE [4] and STARE [2] databases are given in Section III. Discussion and conclusion are in Section IV.

II. METHODOLOGY

A. Overview

The implementation methodology presented herein is schematically described in Fig. 1. The proposed algorithm is divided into three main parts: the first part is matched filtering, the second part is the bottom-hat transform, and the last part combines the binary results of parts one and two into the final output. For the first part, the green channel of the RGB retinal image is used as input. Before convolution with the matched filter kernel, the strong contrast between the area of interest and the area outside the camera's aperture is reduced by a padding algorithm [10]. Then, the matched filter and unsharp masking [11] are applied to enhance and sharpen the matched filter response, respectively. The image is then binarized by Otsu's thresholding [12]. The second part also uses the green channel as input. The bottom-hat transform and high-boost filtering are applied to enhance and sharpen the retinal vessel tree, respectively, and thresholding is then performed to obtain the binary image.
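Otsu's thresholding [12], used above to binarise both enhanced images, picks the gray level that maximises the between-class variance of the histogram. A minimal sketch (not the authors' implementation; the toy image below is illustrative only):

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximising between-class variance (Otsu, 1979)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                      # weight of class "<= t"
        if w0 == 0:
            continue
        w1 = total - w0                    # weight of class "> t"
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# toy bimodal image: dark background, a few bright "vessel" pixels
img = np.array([[10, 12, 11, 200],
                [9, 13, 210, 205],
                [11, 198, 202, 12]], dtype=np.uint8)
t = otsu_threshold(img)
binary = img > t
```

On a bimodal histogram like this one, the chosen threshold falls between the two clusters, so `binary` marks exactly the bright pixels.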
Then, all misclassified connected components smaller than 50 pixels are removed from both binary results using isolated island removal. Finally, the two binary images are combined by aligning the images together; any pixel that appears in both binary images is considered a vessel.

B. Preprocessing

Each component of the RGB image is analyzed separately. The results show that the green channel has the greatest contrast between background and vessels [Fig. 2(a)], while the red and blue channels show lower contrast between vessels and background and are also contaminated by noise. Consequently, the green channel of the RGB image is used as input for extracting the retinal blood vessels. This choice is supported by the experiments of [13]. We then reduce the strong contrast between the retinal region of interest (ROI) and the region outside the aperture with an iterative padding algorithm, whose aim is to eliminate the strong filter response in that area. There are three steps to achieve the padding result. First, determine the group of
Fig. 1. Implementation methodology of retinal vessel extraction algorithms.

Fig. 2. (a) Green channel of an RGB retinal image. (b) Preprocessed image obtained by padding the border of the region of interest.

pixels outside the border of the ROI. Then, replace the intensity value of each such pixel with the mean value of its neighbors inside the ROI. Finally, the ROI is enlarged by repeating this process, as shown in Fig. 2(b). The number of iterations depends on the size of the matched filter kernel used.

C. Matched Filtering

In [1], the gray-level profile of the cross section of a retinal blood vessel is approximated by a Gaussian-shaped curve. The matched filter detects the rough intensity profile of a vessel by matching a number of cross sections along its length. Because blood vessels typically have very low contrast, a two-dimensional matched filter kernel is designed and applied to the retinal image in order to enhance the contrast between blood vessels and background. Such a kernel is expressed as

Ki(x, y) = -exp(-x^2 / (2σ^2)),  for |y| ≤ L/2,   (1)

where σ defines the spread of the intensity profile and L is the length of the vessel segment, which is assumed to be a straight line with a fixed orientation. For this study, L is chosen as 9, since a vessel segment is estimated to lie within the range of 9-15 pixels. The authors made some experiments on the values of σ (ranging from 1.5 to 3.0) and found that the best parameter values

Fig. 3. (a) Images from DRIVE. (b) Result images using the matched filtering approach.
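The matched-filter kernel bank of Sec. II-C can be sketched as follows, assuming σ = 2, L = 9 and 15° angular steps as in the text; the convolution loop and the toy test image are illustrative, not the authors' code:

```python
import numpy as np

def matched_filter_bank(sigma=2.0, L=9, n_angles=12):
    """Zero-mean Gaussian matched-filter kernels at 15-degree steps (Eqs. 1 and 3)."""
    half = int(np.ceil(max(3 * sigma, L / 2)))
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_angles):
        theta = k * np.pi / n_angles                   # 0..180 deg in 15-deg steps
        u = xs * np.cos(theta) + ys * np.sin(theta)    # across the vessel
        v = -xs * np.sin(theta) + ys * np.cos(theta)   # along the vessel
        mask = (np.abs(u) <= 3 * sigma) & (np.abs(v) <= L / 2)
        kern = np.where(mask, -np.exp(-u ** 2 / (2 * sigma ** 2)), 0.0)
        kern[mask] -= kern[mask].mean()                # Eq. (3): coefficients sum to zero
        kernels.append(kern)
    return kernels

def filter_response(img, kernels):
    """Per-pixel maximum of the kernel responses (edge-padded correlation)."""
    img = img.astype(float)
    h = kernels[0].shape[0] // 2
    padded = np.pad(img, h, mode='edge')
    resp = np.full(img.shape, -np.inf)
    for kern in kernels:
        out = np.zeros_like(img)
        for dy in range(-h, h + 1):
            for dx in range(-h, h + 1):
                out += kern[h + dy, h + dx] * padded[h + dy: h + dy + img.shape[0],
                                                     h + dx: h + dx + img.shape[1]]
        resp = np.maximum(resp, out)
    return resp

kernels = matched_filter_bank()
toy = np.full((21, 21), 200.0)
toy[:, 10] = 50.0                 # a dark vertical "vessel" on a bright background
resp = filter_response(toy, kernels)
```

Because the kernel is an inverted Gaussian with zero mean, a dark line on a bright background yields a strongly positive response at the vessel centre and a near-zero response in flat regions.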

Fig. 5. DRIVE database images. (a) Segmentation results of the matched filter method. (b) Segmentation results of the bottom-hat method. (c) Segmentation results of the proposed method.

Fig. 4. (a) Images from DRIVE. (b) Result images using the bottom-hat transform approach.

Fig. 6. DRIVE database images. (a) Segmentation results of the matched filter method. (b) Segmentation results of the bottom-hat method. (c) Segmentation results of the proposed method.
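The combination step shown in Figs. 5(c) and 6(c) reduces, for each pixel, to a logical AND of the two binary maps after small isolated components are discarded. A sketch, assuming 8-connectivity and the 50-pixel size limit mentioned in Sec. II-A (the toy masks and `min_size=3` below are illustrative):

```python
import numpy as np
from collections import deque

def remove_small_islands(binary, min_size=50):
    """Drop 8-connected components smaller than min_size pixels."""
    binary = binary.astype(bool)
    out = np.zeros_like(binary)
    seen = np.zeros_like(binary)
    H, W = binary.shape
    for sy in range(H):
        for sx in range(W):
            if binary[sy, sx] and not seen[sy, sx]:
                comp, q = [(sy, sx)], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                               # BFS flood fill
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < H and 0 <= nx < W
                                    and binary[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                comp.append((ny, nx))
                                q.append((ny, nx))
                if len(comp) >= min_size:              # keep only large components
                    for y, x in comp:
                        out[y, x] = True
    return out

def combine(mask_mf, mask_bh, min_size=50):
    """A pixel is vessel iff both maps mark it, after island removal on each."""
    return remove_small_islands(mask_mf, min_size) & remove_small_islands(mask_bh, min_size)

# toy masks: a 5-pixel line in each (overlapping on 4 pixels) plus a 1-pixel speck
a = np.zeros((5, 8), dtype=bool); a[2, 1:6] = True; a[0, 7] = True
b = np.zeros((5, 8), dtype=bool); b[2, 0:5] = True
vessels = combine(a, b, min_size=3)
```

The speck at (0, 7) is removed as an island, and the AND keeps only the pixels where the two lines overlap.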

were those that give the best response at σ = 2. The vessel is assumed to have a fixed width and orientation over a short length. A neighborhood N is defined as

N = {(u, v) : |u| ≤ 3σ, |v| ≤ L/2}.   (2)

The matched filter kernel is rotated from 0° up to 180° with an angular resolution of 15° in order to detect retinal blood vessels in all possible directions. The resulting set of 12 kernels is convolved with the retinal image, and at each location only the maximum of their responses is taken. However, if the kernel of Eq. 1 is used directly, the response is affected by noise in non-vessel areas. We therefore subtract the mean from the coefficients so that the coefficients of each kernel sum to zero, as in Eq. 3. Examples of the results are shown in Fig. 3.
K'i(x, y) = Ki(x, y) − (1/N) Σ Ki,   (3)

where the sum runs over the N coefficients of the kernel's neighborhood.

E. Proposed Method

The proposed method is based on the combination of mathematical morphology and the matched filter approach. The drawback of the matched filter is that it gives a very strong response to thin vessels and to the optic disc [Fig. 3(b)], so the sizes of thin vessels and of the optic disc obtained with the matched filter are much larger than their actual sizes; that is, the number of background pixels wrongly detected as vessel is high (a high false positive rate). On the other hand, the bottom-hat transform detects thin vessels at sizes very close to their actual sizes and also masks the optic disc, but it produces too much noise connected to the vessel tree [Fig. 4(b)]. We combine the results of both methods by aligning the images together; any pixel that appears in both binary images is considered a vessel. The combined result shows that the sizes of thin vessels in the resulting image are closer to the original ones, and the optic disc and noise are also removed [Fig. 5(c) and Fig. 6(c)].

III. EXPERIMENTS AND RESULTS

A. Experiments

The performance of the proposed method is evaluated using two publicly available databases of retinal images and manual segmentations: DRIVE and STARE. There are 40 color images in the DRIVE database; the size of each image is 565x584 pixels. The set of 40 images is divided into a training set and a test set of 20 images each. A single manual segmentation is available for the training set and two for the test set. All human observers were instructed and trained by an ophthalmologist in order to provide the manual segmentations. The STARE database consists of 20 RGB images, each of size 700x605 pixels. Ten of the images are of patients with no pathology and the rest contain pathology. There


D. Bottom-hat Transform

The bottom-hat transform, also called the closing residue, is used to extract valleys such as dark lines and dark spots. It is computed by subtracting the original image from its morphological closing. The blood vessels of the retina, which appear as dark lines, can therefore be extracted by applying the bottom-hat transform, expressed as

h = (f • b) − f,   (4)

where f is the image to be processed, b is the square structuring element, with size increasing from 5 pixels to 10 pixels, used by the closing (•) operator, and h is the closing residue. Examples of the results are shown in Fig. 4.
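Eq. (4) can be sketched directly with flat grayscale morphology; the single odd-sized square element below is an assumption for simplicity, whereas the paper varies the element size from 5 to 10 pixels:

```python
import numpy as np

def _dilate(img, size):
    """Grayscale dilation with a flat size x size square structuring element."""
    h = size // 2
    p = np.pad(img, h, mode='edge')
    out = np.full(img.shape, -np.inf)
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def _erode(img, size):
    """Grayscale erosion with a flat size x size square structuring element."""
    h = size // 2
    p = np.pad(img, h, mode='edge')
    out = np.full(img.shape, np.inf)
    for dy in range(size):
        for dx in range(size):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def bottom_hat(img, size=11):
    """Eq. (4): closing residue (f . b) - f; dark thin structures come out bright."""
    img = img.astype(float)
    closing = _erode(_dilate(img, size), size)   # closing = dilation then erosion
    return closing - img

# toy image: bright background with a one-pixel-wide dark vertical "vessel"
toy = np.full((21, 21), 200.0)
toy[:, 10] = 50.0
bh = bottom_hat(toy, size=11)
```

The closing fills in any dark structure narrower than the structuring element, so the residue is bright exactly on the vessel (here 150.0 on the line, 0.0 elsewhere).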

TABLE I. RESULTS OF THE DIFFERENT VESSEL EXTRACTION ALGORITHMS USING THE STARE DATABASE [14].

Segmentation Method     Accuracy (standard deviation)   Az       Comment
Staal et al. [4]        0.9516                          0.9641   Supervised
Soares et al. [10]      0.948                           0.9671   Supervised
Kande et al. [14]       0.9485                          0.9602   Unsupervised
Our method              0.9405 (0.0187)                 0.9019   Unsupervised
Hoover et al. [2]       0.9275                          0.759    Unsupervised
Jiang et al. [15]       0.9009                          0.9289   Unsupervised

TABLE II. RESULTS OF THE DIFFERENT VESSEL EXTRACTION ALGORITHMS USING THE DRIVE DATABASE [14], [16].

Segmentation Method          Accuracy   Standard deviation   Az       Comment
Human observer               0.9473     0.0048               n/a      n/a
Staal et al. [4]             0.9442     0.0065               0.952    Supervised
Niemeijer et al. [16]        0.9416     0.0065               0.9294   Supervised
Kande et al. [14]            0.9437     n/a                  0.9515   Unsupervised
Our method                   0.9388     0.008                0.8557   Unsupervised
Zana et al. [17]             0.9377     0.0077               0.8984   Unsupervised
Jiang et al. [15]            0.9212     0.0076               0.9114   Unsupervised
Martínez-Pérez et al. [18]   0.9181     0.024                n/a      Unsupervised
Chaudhuri et al. [1]         0.8773     0.3357               0.7878   Unsupervised

Fig. 7. The proposed method ROC curve for the DRIVE database.

are two hand-labeled segmentations from different observers available for all 20 images. We used three performance measures to evaluate the algorithm. The first is the receiver operating characteristic (ROC) curve. An ROC space is defined by the false positive rate (FPR) and the true positive rate (TPR) as x and y axes respectively, and depicts the relative trade-off between true positives (benefits) and false positives (costs). Since the TPR is equivalent to the sensitivity (SN) and the FPR is equal to (1 − specificity), the ROC graph is sometimes called the sensitivity vs. (1 − specificity) plot. Both measures are evaluated using four metric values: true positives (TP), the number of pixels marked as vessel in both the result and the ground truth image; false positives (FP), pixels marked as vessel in the result image but not in the ground truth image; false negatives (FN), pixels marked as background in the result image but as vessel in the ground truth image; and true negatives (TN), pixels marked as background in both the result and the ground truth image. The sensitivity and the specificity are computed as shown in Eq. 5 and 6 respectively. We create the ROC curve by varying the threshold on the soft classification image. In our proposed method there are two soft classifications, from the matched filter and from the bottom-hat transform, so we produce an ROC curve for each; averaging the two curves from the two classification methods yields the final ROC plot. The best possible prediction method would yield a point in the upper left corner, coordinate (0, 1), of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives); the point (0, 1) is also called a perfect classification. The second measure is the area under the ROC curve (Az); a larger Az signifies a greater discriminatory ability of the segmentation method. The third

measure is the maximum average accuracy (MAA). The accuracy of an image is calculated by dividing the sum of TP and TN by the sum of the total number of vessel pixels (P) and the total number of non-vessel pixels (N), as illustrated in Eq. 7. In our experiments, we used the manual segmentations by the first observer of the DRIVE database and the segmentations provided by Hoover for the STARE database as the gold standard for calculating all three measures (ROC, area under ROC, and MAA); only pixels inside the field of view (FOV) are taken into account.

Sensitivity = true positive rate (TPR) = TP / (TP + FN)   (5)

Specificity = true negative rate (TNR) = TN / (TN + FP)   (6)

Accuracy = (TN + TP) / (N + P)   (7)

B. Results
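Eqs. (5)-(7) follow directly from the four pixel counts; a sketch on a pair of toy binary masks (illustrative, not the evaluation code used in the paper):

```python
import numpy as np

def vessel_metrics(pred, truth):
    """Sensitivity, specificity and accuracy (Eqs. 5-7) from binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)        # vessel in both result and ground truth
    tn = np.sum(~pred & ~truth)      # background in both
    fp = np.sum(pred & ~truth)       # vessel in result only
    fn = np.sum(~pred & truth)       # background in result, vessel in ground truth
    sens = tp / (tp + fn)                    # Eq. (5)
    spec = tn / (tn + fp)                    # Eq. (6)
    acc = (tp + tn) / (tp + tn + fp + fn)    # Eq. (7): P + N = all pixels
    return sens, spec, acc

truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0]], dtype=bool)
pred  = np.array([[1, 0, 0, 0], [1, 0, 0, 1]], dtype=bool)
sens, spec, acc = vessel_metrics(pred, truth)
```

Here TP = 2, FN = 1, FP = 1 and TN = 4, giving a sensitivity of 2/3, a specificity of 0.8, and an accuracy of 0.75.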

Tables I and II compare the maximum average accuracy, the area under the ROC curve and the standard deviation for different supervised and unsupervised segmentation approaches. Figs. 7 and 8 show the ROC curves for DRIVE and STARE. Results of the proposed method are shown in Figs. 9 and 10. The MAA of the proposed method is calculated on the test set of the DRIVE database using the first observer as the gold standard, and on the STARE data set using the hand-labelings of Hoover as the gold standard. It is observed from the tables that our method outperforms many unsupervised methods on both the DRIVE and STARE databases.

IV. DISCUSSION AND CONCLUSION

A combination of mathematical morphology and a filtering approach is implemented and its performance is evaluated in terms of the ROC, Az and MAA measures. An increase in accuracy is achieved by combining the matched filter and bottom-hat transform approaches. This implies that choosing the criteria by which to measure the performance of a vessel detection algorithm is very important, depending on the application for which the algorithm is to be used. The

Fig. 8. The proposed method ROC curve for the STARE database.
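The area under the ROC curve (Az) can be estimated by sweeping a threshold over a soft response map and integrating the resulting (FPR, TPR) points with the trapezoidal rule; a sketch with toy scores (the threshold grid and endpoint handling are assumptions):

```python
import numpy as np

def area_under_roc(scores, labels, n_thresholds=100):
    """Estimate Az from a soft response map and binary ground-truth labels."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = labels.sum(), (~labels).sum()
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    fprs, tprs = [1.0], [1.0]          # threshold below min: everything positive
    for t in thresholds:
        pred = scores > t
        tprs.append(np.sum(pred & labels) / pos)
        fprs.append(np.sum(pred & ~labels) / neg)
    fprs.append(0.0)                   # threshold above max: nothing positive
    tprs.append(0.0)
    # trapezoidal rule over (FPR, TPR) points, ordered from (1,1) down to (0,0)
    area = 0.0
    for i in range(1, len(fprs)):
        area += 0.5 * (tprs[i] + tprs[i - 1]) * (fprs[i - 1] - fprs[i])
    return area

# perfectly separable toy scores -> Az should be 1.0
scores = np.array([0.1, 0.2, 0.8, 0.9])
labels = np.array([0, 0, 1, 1], dtype=bool)
az = area_under_roc(scores, labels)
```

For a classifier that separates the classes perfectly, the curve passes through (0, 1) and the estimate equals 1.0; a random scoring would give roughly 0.5.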

Fig. 10. STARE database images. (a), (d): Original images. (b), (e): Ground truth images. (c), (f): Proposed method results.

Fig. 9. DRIVE database images. (a), (d): Original images. (b), (e): Ground truth images. (c), (f): Proposed method results.

area under the ROC curve and the accuracy achieved by our method are 0.8557 and 0.9388 for the DRIVE database, and 0.9019 and 0.9405 for the STARE database, respectively.

ACKNOWLEDGMENT

This research is financially supported by the Thailand Advanced Institute of Science and Technology (TAIST), the National Science and Technology Development Agency (NSTDA), Tokyo Institute of Technology and the Sirindhorn International Institute of Technology (SIIT), Thammasat University (TU).

REFERENCES
[1] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, "Detection of blood vessels in retinal images using two-dimensional matched filters," IEEE Trans. Med. Imag., vol. 8, pp. 263-269, 1989.
[2] A. Hoover, V. Kouznetsova, and M. Goldbaum, "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response," IEEE Trans. Med. Imag., vol. 19, pp. 203-210, 2000.
[3] G. B. Kande, T. S. Savithri, and P. V. Subbaiah, "Extraction of exudates and blood vessels in digital fundus images," in Proc. 2008 IEEE 8th International Conference on Computer and Information Technology, 2008, pp. 526-513.

[4] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Trans. Med. Imag., vol. 23, pp. 501-509, 2004.
[5] M. Akram, A. Tariq, and S. Khan, "Retinal image blood vessel segmentation," in International Conference on Information and Communication Technologies, 2009, pp. 181-192.
[6] F. Oloumi, R. Rangayyan, P. Eshghzadeh-Zanjani, and F. Ayres, "Detection of blood vessels in fundus images of the retina using Gabor wavelets," in 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 6451-6454.
[7] M. Sofka and C. V. Stewart, "Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures," IEEE Trans. Med. Imag., vol. 26, pp. 1531-1545, 2007.
[8] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automatic localisation of the optic disk, fovea, and retinal blood vessels from digital colour fundus images," Br. J. Ophthalmol., vol. 83, pp. 902-910, 1999.
[9] C. Sinthanayothin, J. F. Boyce, T. H. Williamson, H. L. Cook, E. Mensah, S. Lal, and D. Usher, "Automated detection of diabetic retinopathy on digital fundus images," Diabet. Med., vol. 83, pp. 902-910, 2002.
[10] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree, "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Trans. Med. Imag., vol. 25, pp. 1214-1222, 2006.
[11] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Prentice Hall, 2010.
[12] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. 9, pp. 62-66, 1979.
[13] M. M. Fraz, M. Javed, and A. Basit, "A threshold selection method from gray-level histograms," in 4th IEEE International Conference on Emerging Technologies, 2008, pp. 232-236.
[14] G. B. Kande, T. S. Savithri, and P. V. Subbaiah, "Comparative study of retinal vessel segmentation methods on a new publicly available database," in IEEE International Conference on Systems, Man and Cybernetics, 2008, pp. 3448-3453.
[15] X. Jiang and D. Mojon, "Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images," IEEE Trans. Pattern Anal. Mach. Intell., 2003, pp. 1010-1019.
[16] M. Niemeijer, J. J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, "Comparative study of retinal vessel segmentation methods on a new publicly available database," in SPIE Medical Imaging, 2004, pp. 648-656.
[17] F. Zana and J.-C. Klein, "Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation," IEEE Trans. Image Process., 2001, pp. 1010-1019.
[18] M. Martínez-Pérez, A. Hughes, A. Stanton, S. Thom, A. Bharath, and K. Parker, "Retinal blood vessel segmentation by means of scale-space analysis and region growing," in Medical Image Computing and Computer-Assisted Intervention, 1999, pp. 90-97.