
AUTOMATED BLOOD VESSEL SEGMENTATION OF RETINAL IMAGES

PROJECT REPORT submitted in partial fulfillment of the requirements for the award of the degree of BACHELOR OF TECHNOLOGY in BIOTECHNOLOGY

by

CHITRA. J (10904065), JAYASHREE. S (10904115), NISHA. T (10904177)

under the guidance of

Mrs. S. SUBHASHINI, M.Sc., M.Phil., Ph.D. (Lecturer, Department of Biotechnology)

DEPARTMENT OF BIOTECHNOLOGY
SCHOOL OF BIOENGINEERING
FACULTY OF ENGINEERING AND TECHNOLOGY
SRM UNIVERSITY, KATTANKULATHUR 603 203
April 2008

CERTIFICATE
Certified that the project report entitled AUTOMATED BLOOD VESSEL SEGMENTATION OF RETINAL IMAGES submitted by CHITRA. J (10904065), JAYASHREE. S (10904115), NISHA. T (10904177) is a record of project work done by them under my supervision. This project has not formed the basis for the award of any degree, diploma, associateship or fellowship.

INTERNAL GUIDE

HEAD OF THE DEPARTMENT

For the purpose of viva voce:
1.
2.

DECLARATION
We do hereby declare that the project report entitled AUTOMATED BLOOD VESSEL SEGMENTATION OF RETINAL IMAGES is a record of original work carried out by us under the supervision of Mrs. S. Subhashini, Lecturer, Department of Biotechnology, SRM University, Kattankulathur. This project has not been submitted earlier in part or in full for the award of any degree, diploma, associateship or fellowship.

Kattankulathur (CHITRA.J) Date:

(JAYASHREE.S)

(NISHA.T)

ACKNOWLEDGEMENT

I am greatly indebted to Mrs. S. Subhashini for giving me this wonderful opportunity to work on a challenging problem. Her highly insightful perspectives and enthusiasm guided me throughout this work.

I also take this opportunity to thank Dr. Kantha D Arunachalam, Head of the Department, School of Bioengineering, Department of Biotechnology, SRM University, and Dr. K Ramasamy, Dean, School of Bioengineering, Department of Biotechnology, SRM University.

I would like to thank Mr. R. Yoga Saravanan for his encouragement and support throughout this work, and especially for introducing me to MATLAB. I would also like to thank him for providing me with the retinal images and for devoting some of his valuable time to writing the algorithm.

I also thank Dr. Agarwal's Eye Hospital for giving me the opportunity to work with them. It is indeed a privilege to work on a challenging problem with them. I dedicate this project to the medical community, especially ophthalmologists, for whom this project may serve as a means of diagnosing retinal diseases.

List of Tables

Table 1. A brief comparison among the proposed algorithms for retinal image segmentation.

List of Figures

1. Anatomy of the Human Eye
2. Anterior and Posterior Chambers of the Eye
3. Thresholding Histogram
4. Fundus Camera
5. Fluorescein Angiogram
6. Digital Fundus Photograph
7. The Matlab Help
8. The Matlab GUI
9. The Matlab Figure Window
10. Fluorescent Images
    10.1. Original Image
    10.2. Preprocessed Image
    10.3. Manually Segmented Image
    10.4. Automated Segmented Image
11. Fundus Images
    11.1. Original Image
    11.2. Preprocessed Image
    11.3. Manually Segmented Image
    11.4. Automated Segmented Image

CONTENTS

1. Introduction
   1.1. The Human Eye
        1.1.1. Anatomy of the Human Eye
        1.1.2. Retina
   1.2. Eye Diseases
        1.2.1. Glaucoma
        1.2.2. Diabetic Retinopathy
        1.2.3. Arteriosclerosis
   1.3. Function of the Mammalian Eye
   1.4. Image Segmentation
        1.4.1. Interactive Thresholding
        1.4.2. Texture-based Segmentation
2. Objectives
3. Review of Literature
4. Materials and Methods
   4.1. Materials
        4.1.1. Retinal Imaging using Fundus Camera
        4.1.2. MATLAB
               4.1.2.1. Image Processing Toolbox
        4.1.3. Operating System
   4.2. Methods
        4.2.1. Manual Segmentation
        4.2.2. Automated Segmentation
5. Results
   5.1. Subjective Results
        5.1.1. Fluorescent Images
        5.1.2. Fundus Images
   5.2. Quantitative Results
6. Discussion
7. Summary
8. Social Relevance of Retinal Image Segmentation
9. References

Automated Blood Vessel Segmentation of Retinal Images

ABSTRACT

Retinal blood vessel morphology can be an important indicator for many diseases such as diabetes mellitus, hypertension and arteriosclerosis, and the measurement of geometrical changes in retinal veins and arteries can be applied to a variety of clinical studies. Segmentation of the retinal blood vessels helps in understanding their morphology and provides a better source of information for studying the various related diseases. Two of the major problems in the segmentation of retinal blood vessels are the presence of a wide variety of vessel widths and the inhomogeneous background of the retina. Computer-based analysis for automated segmentation of blood vessels in retinal images will help eye care specialists screen larger populations for vessel abnormalities. We present a method of automated segmentation, applied to both fluorescent and fundus images of the retinal blood vessels. These segmentations are compared against manual measurements and between imaging techniques.

1. INTRODUCTION
1.1. THE HUMAN EYE
Eyes are organs that detect light. Different kinds of light-sensitive organs are found in a variety of animals. The simplest eyes do nothing but detect whether the surroundings are light or dark, which is sufficient for the entrainment of circadian rhythms but can hardly be called vision. More complex eyes can distinguish shapes and colors. The visual fields of some such complex eyes largely overlap to allow better depth perception (binocular vision), as in humans; others are placed so as to minimize the overlap, as in rabbits and chameleons. In the human eye (shown in Fig.1), light enters the pupil and is focused on the retina by the lens. Light-sensitive nerve cells called rods (for brightness) and cones (for color) react to the light. They interact with each other and send messages to the brain that indicate brightness, color, and contour. Dimensions vary only 1-2 mm among individuals. The vertical diameter is 24 mm, the transverse being larger. At birth the eye is generally 16-17 mm, enlarging to 22.5-23 mm by three years of age; between then and age 13 the eye attains its mature size. It weighs 7.5 grams and its volume is 6.5 millilitres. Each animal exhibits a different anatomy of the eye when compared to humans [1].

1.1.1. ANATOMY OF THE HUMAN EYE


Fig.1.Anatomy of the Human eye

1. posterior compartment
2. ora serrata
3. ciliary muscle
4. ciliary zonules
5. canal of Schlemm
6. pupil
7. anterior chamber
8. cornea
9. iris
10. lens cortex
11. lens nucleus
12. ciliary process
13. conjunctiva
14. inferior oblique muscle
15. inferior rectus muscle
16. medial rectus muscle
17. retinal arteries and veins
18. optic disc
19. dura mater
20. central retinal artery
21. central retinal vein
22. optic nerve
23. vorticose vein
24. bulbar sheath
25. macula
26. fovea
27. sclera
28. choroid
29. superior rectus muscle
30. retina

The structure of the mammalian eye can be divided into three main layers or tunics whose names reflect their basic functions: the fibrous tunic, the vascular tunic, and the nervous tunic.

The fibrous tunic, also known as the tunica fibrosa oculi, is the outer layer of the eyeball consisting of the cornea and sclera. The sclera gives the eye most of its white color. It consists of dense connective tissue filled with the protein collagen to both protect the inner components of the eye and maintain its shape.

The vascular tunic, also known as the tunica vasculosa oculi, is the middle, vascularized layer, which includes the iris, ciliary body, and choroid. The choroid contains blood vessels that supply the retinal cells with necessary oxygen and remove the waste products of respiration. The choroid gives the inner eye a dark color, which prevents disruptive reflections within the eye. When looking straight into the eye, the iris is seen rather than the cornea, owing to the latter's transparency; the pupil (the central aperture of the iris) appears black because no light is reflected out of the interior of the eye. With an ophthalmoscope, one can see the fundus and its vessels, especially those crossing the optic disc, the point where the optic nerve fibers depart from the eyeball.

The nervous tunic, also known as the tunica nervosa oculi, is the inner sensory layer, which includes the retina. The retina contains the photosensitive rod and cone cells and associated neurons. To maximise vision and light absorption, the retina is a relatively smooth (but curved) layer. It differs at two points: the fovea and the optic disc. The fovea is a dip in the retina directly opposite the lens, which is densely packed with cone cells. It is largely responsible for color vision in humans, and enables high acuity, such as is necessary in reading. The optic disc, sometimes referred to as the anatomical blind spot, is a point on the retina where the optic nerve pierces the retina to connect to the nerve cells on its inside. No photosensitive cells exist at this point; it is thus "blind". In addition to the rods and cones, a small proportion (about 2% in humans) of the ganglion cells in the retina are photosensitive through the pigment melanopsin. They are generally most excitable by blue light, at about 470 nm. Their information is sent to the SCN (suprachiasmatic nuclei), not to the visual center, through the retinohypothalamic tract, which is formed as melanopsin-sensitive axons exit the optic nerve. It is these light signals which regulate circadian rhythms in mammals and several other animals. Many, but not all, totally blind individuals have their circadian rhythms adjusted daily in this way.

The mammalian eye can also be divided into two main segments: the anterior segment and the posterior segment (shown in Fig.2).

The human eye is not a plain sphere but is like two spheres combined: a smaller, more sharply curved one and a larger, less curved one. The former, the anterior segment, is the front sixth of the eye and includes the structures in front of the vitreous humour: the cornea, iris, ciliary body, and lens. Within the anterior segment are two fluid-filled spaces:

the anterior chamber between the posterior surface of the cornea (i.e. the corneal endothelium) and the iris.

the posterior chamber between the iris and the front face of the vitreous.

Fig.2.Anterior and Posterior Chambers of the Human Eye

Aqueous humor fills these spaces within the anterior segment and provides nutrients to the surrounding structures. The posterior segment is the back five-sixths of the eye that includes the anterior hyaloid membrane and all of the optical structures behind it: the vitreous humor, retina, choroid, and optic nerve.

The radii of the anterior and posterior sections are 8 mm and 12 mm, respectively. The point of junction is called the limbus. On the other side of the lens is the second humour, the vitreous humour, which is bounded on all sides by the lens, ciliary body, suspensory ligaments and retina. It lets light through without refraction, helps maintain the shape of the eye and suspends the delicate lens. In some animals, the retina contains a reflective layer (the tapetum lucidum) which increases the amount of light each photosensitive cell perceives, allowing the animal to see better under low light conditions [3].

Lying over the sclera and the interior of the eyelids is a transparent membrane called the conjunctiva. It helps lubricate the eye by producing mucus and tears. It also contributes to immune surveillance and helps to prevent the entrance of microbes into the eye. In many animals, including humans, eyelids wipe the eye and prevent dehydration. They spread tears on the eyes, which contain substances that help fight bacterial infection as part of the immune system. Some aquatic animals have a second eyelid in each eye which refracts the light and helps them see clearly both above and below water. Most creatures will automatically react to a threat to their eyes (such as an object moving straight at the eye, or a bright light) by covering the eyes and/or by turning the eyes away from the threat. Blinking the eyes is, of course, also a reflex. In many animals, including humans, eyelashes prevent fine particles from entering the eye. Fine particles can be bacteria, but also simple dust, which can cause irritation of the eye and lead to tears and subsequent blurred vision. In many species, the eyes are inset in the portion of the skull known as the orbits or eye sockets. This placement of the eyes helps to protect them from injury. In humans, the eyebrows redirect flowing substances (such as rainwater or sweat) away from the eye.

1.1.2. RETINA
The retina contains two forms of photosensitive cells important to vision, rods and cones, in addition to the photosensitive ganglion cells involved in circadian adjustment but not vision. Though structurally and metabolically similar, the functions of rods and cones are quite different. Rod cells are highly sensitive to light, allowing them to respond in dim light and dark conditions; however, they cannot detect color differences. These are the cells that allow humans and other animals to see by moonlight, or with very little available light (as in a dark room). Cone cells, conversely, need high light intensities to respond and have high visual acuity. Different cone cells respond to different wavelengths of light, which allows an organism to see color. The shift from cone vision to rod vision is why the darker conditions become, the less color objects seem to have.

The differences between rods and cones are useful; apart from enabling sight in both dim and bright conditions, they have further advantages. The fovea, directly behind the lens, consists mostly of densely packed cone cells. The fovea gives humans a highly detailed central vision, allowing reading, bird watching, or any other task which primarily requires staring at things. Its requirement for high intensity light does cause problems for astronomers, as they cannot see dim stars or other celestial objects using central vision, because the light from these is not enough to stimulate cone cells. Because cone cells are all that exist directly in the fovea, astronomers have to look at stars through the "corner of their eyes" (averted vision), where rods also exist and where the light is sufficient to stimulate cells, allowing an individual to observe faint objects.

Rods and cones are both photosensitive, but respond differently to different frequencies of light because they contain different pigmented photoreceptor proteins. Rod cells contain the protein rhodopsin, and cone cells contain different proteins for each color range. The process through which these proteins go is quite similar: upon being subjected to electromagnetic radiation of a particular wavelength and intensity, the protein breaks down into two constituent products. Rhodopsin, of rods, breaks down into opsin and retinal; the iodopsin of cones breaks down into photopsin and retinal. The breakdown results in the activation of transducin, which activates cyclic GMP phosphodiesterase, which lowers the number of open cyclic nucleotide-gated ion channels on the cell membrane, leading to hyperpolarization; this hyperpolarization of the cell leads to decreased release of transmitter molecules at the synapse. The difference between rhodopsin and the iodopsins is the reason why rods and cones enable organisms to see in dark and light conditions respectively: each of the photoreceptor proteins requires a different light intensity to break down into its constituent products.

Further, synaptic convergence means that several rod cells are connected to a single bipolar cell, which then connects to a single ganglion cell by which information is relayed to the visual cortex. This convergence is in direct contrast to the situation with cones, where each cone cell is connected to a single bipolar cell. This lack of convergence results in the high visual acuity, or the high ability to distinguish detail, of cone cells compared to rods. If a ray of light were to reach just one rod cell, the cell's response might not be enough to hyperpolarize the connected bipolar cell. But because several rods "converge" onto a bipolar cell, enough transmitter molecules reach the synapses of the bipolar cell to hyperpolarize it. Furthermore, color is distinguishable due to the different iodopsins of cone cells; there are three different kinds in normal human vision, which is why we need three different primary colors to make a color space.

A small percentage of the ganglion cells in the retina contain melanopsin and thus are themselves photosensitive. The light information from these cells is not involved in vision, and it reaches the brain not via the optic nerve but via the retinohypothalamic tract (RHT). By way of this light information, the body clock's inherent approximately 24-hour cycling is adjusted daily to nature's light/dark cycle.

1.2. EYE DISEASES


There are various eye diseases affecting humans. We discuss three of the most prevalent eye diseases with which our project has a social relevance, namely:

Glaucoma
Diabetic Retinopathy
Retinopathic Arteriosclerosis

1.2.1. GLAUCOMA
Glaucoma is a disease caused by increased intraocular pressure (IOP) resulting either from a malformation or malfunction of the eye's drainage structures. Left untreated, an elevated IOP causes irreversible damage to the optic nerve and retinal fibers, resulting in a progressive, permanent loss of vision. However, early detection and treatment can slow, or even halt, the progression of the disease [2].

The eye constantly produces aqueous, the clear fluid that fills the anterior chamber (the space between the cornea and iris). The aqueous filters out of the anterior chamber through a complex drainage system. The delicate balance between the production and drainage of aqueous determines the eye's intraocular pressure (IOP). Most people's IOPs fall between 8 and 21 mmHg. However, some eyes can tolerate higher pressures than others. That is why it may be normal for one person to have a higher pressure than another.

Glaucoma is an insidious disease because it rarely causes symptoms. Detection and prevention are only possible with routine eye examinations. However, certain types, such as angle closure and congenital glaucoma, do cause symptoms.

Angle Closure (emergency)
Sudden decrease of vision
Extreme eye pain
Headache
Nausea and vomiting
Glare and light sensitivity

Congenital
Tearing
Light sensitivity
Enlargement of the cornea

Because glaucoma does not cause symptoms in most cases, those who are 40 or older should have an annual examination including a measurement of the intraocular pressure. Those who are glaucoma suspects may need additional testing.

The glaucoma evaluation has several components. In addition to measuring the intraocular pressure, the doctor will also evaluate the health of the optic nerve (ophthalmoscopy), test the peripheral vision (visual field test), and examine the structures in the front of the eye with a special lens (gonioscopy) before making a diagnosis. The doctor evaluates the optic nerve and grades its health by noting the cup-to-disc ratio. This is simply a comparison of the cup (the depressed area in the center of the nerve) to the entire diameter of the optic nerve. As glaucoma progresses, the area of cupping, or depression, increases. Therefore, a patient with a higher ratio has more damage.

The progression of glaucoma is monitored with a visual field test. This test maps the peripheral vision, allowing the doctor to determine the extent of vision loss from glaucoma and to measure the effectiveness of the treatment. The visual field test is periodically repeated to verify that the intraocular pressure is being adequately controlled. The structures in the front of the eye are normally difficult to see without the help of a special gonioscopy lens. This special mirrored contact lens allows the doctor to examine the anterior chamber and the eye's drainage system.

Most patients with glaucoma require only medication to control the eye pressure. Sometimes, several medications that complement each other are necessary to reduce the pressure adequately. Surgery is indicated when medical treatment fails to lower the pressure satisfactorily. There are several types of procedures; some involve lasers and can be done in the office, while others must be performed in the operating room. The objective of any glaucoma operation is to allow fluid to drain from the eye more efficiently.

1.2.2. DIABETIC RETINOPATHY


Diabetes is a disease that occurs when the pancreas does not secrete enough insulin or the body is unable to process it properly. Insulin is the hormone that regulates the level of sugar (glucose) in the blood. Diabetes can affect children and adults [5].

Patients with diabetes are more likely to develop eye problems such as cataracts and glaucoma, but the disease's effect on the retina is the main threat to vision. Most patients develop diabetic changes in the retina after approximately 20 years. The effect of diabetes on the eye is called diabetic retinopathy. Over time, diabetes affects the circulatory system of the retina.

The earliest phase of the disease is known as background diabetic retinopathy. In this phase, the arteries in the retina become weakened and leak, forming small, dot-like hemorrhages. These leaking vessels often lead to swelling or edema in the retina and decreased vision.

The next stage is known as proliferative diabetic retinopathy. In this stage, circulation problems cause areas of the retina to become oxygen-deprived or ischemic. New, fragile vessels develop as the circulatory system attempts to maintain adequate oxygen levels within the retina. This is called neovascularization. Unfortunately, these delicate vessels hemorrhage easily. Blood may leak into the retina and vitreous, causing spots or floaters, along with decreased vision. In the later phases of the disease, continued abnormal vessel growth and scar tissue may cause serious problems such as retinal detachment and glaucoma.

The effect of diabetic retinopathy on vision varies widely, depending on the stage of the disease. Some common symptoms of diabetic retinopathy are listed below; however, diabetes may cause other eye symptoms.

Blurred vision (this is often linked to blood sugar levels)
Floaters and flashes
Sudden loss of vision

Diabetic patients require routine eye examinations so that related eye problems can be detected and treated as early as possible. Most diabetic patients are frequently examined by an internist or endocrinologist who in turn works closely with the ophthalmologist. The diagnosis of diabetic retinopathy is made following a detailed examination of the retina with an ophthalmoscope. Most patients with diabetic retinopathy are referred to vitreo-retinal surgeons who specialize in treating this disease.

Diabetic retinopathy is treated in many ways depending on the stage of the disease and the specific problem that requires attention. The retinal surgeon relies on several tests to monitor the progression of the disease and to make decisions for the appropriate treatment. These include fluorescein angiography, retinal photography, and ultrasound imaging of the eye.

The abnormal growth of tiny blood vessels and the associated complication of bleeding is one of the most common problems treated by vitreo-retinal surgeons. Laser surgery called pan-retinal photocoagulation (PRP) is usually the treatment of choice for this problem. With PRP, the surgeon uses a laser to destroy oxygen-deprived retinal tissue outside of the patient's central vision. While this creates blind spots in the peripheral vision, PRP prevents the continued growth of the fragile vessels and seals the leaking ones. The goal of the treatment is to arrest the progression of the disease.

Vitrectomy is another surgery commonly needed for diabetic patients who suffer a vitreous hemorrhage (bleeding into the gel-like substance that fills the center of the eye). During a vitrectomy, the retinal surgeon carefully removes blood and vitreous from the eye and replaces it with a clear salt solution (saline). At the same time, the surgeon may also gently cut strands of vitreous attached to the retina that create traction and could lead to retinal detachment or tears.

Patients with diabetes are at greater risk of developing retinal tears and detachment. Tears are often sealed with laser surgery. Retinal detachment requires surgical treatment to reattach the retina to the back of the eye. The prognosis for visual recovery is dependent on the severity of the detachment.

1.2.3. ARTERIOSCLEROSIS
Arteriosclerosis is one of the major health problems in today's society. It leads to heart attacks, cerebral ischemia and a host of other diseases. Because arteriosclerosis poses a significant problem in general medicine and, consequently, to the eye as well, there is a supplementary chapter dedicated to this topic. There, the various risk factors that lead to arteriosclerosis, such as smoking, increased blood pressure and high lipid levels, are discussed [4].

Just as in any other artery, ocular blood vessels can also suffer from arteriosclerosis. Arteriosclerosis is therefore considered an important risk factor in a number of eye diseases, among which occlusions of retinal arteries and veins are the most important. It is interesting to note that patients with arteriosclerosis suffer more frequently, and at earlier ages, from cataracts (lens opacification) and maculopathy (an age-related disease of the central retina). Neither disease is likely the direct result of arteriosclerosis; rather, they share the same pathogenetic mechanism. Therefore, the term risk indicators (from the Latin indicare, to indicate) might be more appropriate than risk factors. Risk factors for arteriosclerosis are thus also important risk indicators for cataracts and maculopathy.

It actually appears that arteriosclerosis does not increase the chance of developing glaucomatous damage. This is quite surprising because it is now known that the average glaucoma patient suffers from reduced ocular perfusion. The cause of circulatory problems in glaucoma is a dysregulation of the eye's perfusion rather than arteriosclerosis. There is, however, only a weak correlation between arteriosclerosis (and its accompanying risk factors) and increased intraocular pressure. This means that people suffering from arteriosclerosis are more likely to have an elevated IOP than healthy subjects of the same age without arteriosclerosis. But it should again be emphasized that this correlation is not strong.

Occasionally, patients confuse intraocular pressure with blood pressure and start asking for a possible connection. Though both are regulated by independent mechanisms, someone having a higher-than-average blood pressure is just slightly more likely to have an increased IOP. But once again, this correlation is not strong. Many people with high blood pressure have a normal IOP and vice versa. The same applies for other risk factors of arteriosclerosis: Smokers and patients with high serum lipid levels have only a slightly higher risk for an increased IOP.

1.3. FUNCTION OF THE MAMMALIAN EYE


The structure of the mammalian eye owes itself completely to the task of focusing light onto the retina. This light causes chemical changes in the photosensitive cells of the retina, the products of which trigger nerve impulses that travel to the brain. There are many diseases, disorders, and age-related changes that may affect the eyes and surrounding structures. Various eye care professionals, including ophthalmologists, optometrists, and opticians, are involved in the treatment and management of ocular and vision disorders. Treatment is possible only after successful diagnosis, and this is aided by automated medical diagnosis systems and procedures.

Rapid advances in computing technology have aroused increasing interest in the development of automated medical diagnosis systems to improve the services provided by the medical community. Medical imaging allows scientists and physicians to obtain potentially lifesaving information without doing anything harmful to the patient. It has become a tool for surgical planning and simulation and for tracking the progress of diseases. With medical imaging playing an increasingly prominent role in the diagnosis and treatment of disease, the challenging problem of extracting clinically useful information about anatomical structures imaged through CT, MR, PET and other modalities has become important. Although modern imaging devices provide exceptional views of internal anatomy, the use of computers to quantify and analyze the embedded structures with accuracy and efficiency has been limited.

Pathological changes of the retinal vessel tree can be observed in a variety of diseases such as diabetes and glaucoma. Retinal imaging reveals information about retinal, ophthalmic and even systemic diseases such as diabetes, hypertension and arteriosclerosis. Image segmentation is an intrinsic determinant of the performance of computer vision applications, as it directly influences the efficiency of subsequent image processing steps. Accurate identification of the region(s) of interest in an image is critical if one is to perform image analysis successfully. Numerous approaches and techniques have been developed to meet this need over the past few decades. However, due to the diversity and complexity of scenes, there is no single technique which produces the best result for every application.

Segmentation of structures from medical images and reconstruction of a compact geometric representation of these structures is difficult due to the sheer size of the datasets and the complexity and variability of the shapes of interest. Sampling artifacts and noise may also cause the structures to be indistinct and disconnected. Examination of vascular modifications and manual analysis are often carried out by an ophthalmologist, but the routine inspection of fundus images can be a laborious and tedious process and may be prone to human error. For example, human measurement of vessel width is subjective and can produce imprecise results. In contrast, automatic computer examination would provide far more objective, precise and repeatable measurements.

The challenge is to extract elements belonging to the same structure and integrate them into a coherent and consistent model. Traditional low-level image processing techniques, which consider only local information, often make incorrect assumptions. As a result, these model/object-free techniques usually require a considerable amount of expert intervention, which is time consuming and tedious.

Although the underlying mechanisms of some eye diseases are not fully understood, their progress can be prevented by early diagnosis and treatment. Accurate blood vessel segmentation is fundamental in the analysis of fundus images, as further analysis usually depends on the accuracy of this segmentation.

1.4. IMAGE SEGMENTATION


Segmentation refers to the partitioning of an image into individual entities, in which an object is distinguished from its surroundings in a scene. It allows a quantitative measurement of the geometrical changes of arteries, such as tortuosity or length, and provides the localization of landmark points, such as bifurcations, needed for image registration. Automated vasculature measurement could therefore reduce both the expenditure of resources, in terms of specialists and examination time, and provide an objective, precise measurement of the retinal blood vessel structure and other pathologies, which motivates the development of a robust vessel segmentation method [6].

A central feature in such diagnosis is the appearance of blood vessels in retinal images. Segmentation of these vessels enables eye care specialists to screen larger populations for vessel abnormalities. However, automated retinal image segmentation is complicated by the fact that the width of retinal vessels can vary from very large to very small, and that the local contrast of vessels is unstable (inhomogeneous background).

Defining a region of interest before image segmentation limits the processing to the defined region, so no computing resources are wasted on irrelevant areas. It also reduces the amount of editing needed after image segmentation, because object boundaries are generated only within the defined regions. Image segmentation by thresholding is a simple but powerful approach for images containing solid objects which are distinguishable from the background or other objects in terms of pixel intensity values. The pixel thresholds are normally adjusted interactively and displayed in real time on screen. When the values are defined properly, the boundaries are traced for all pixels within the range in the image. Grayscale thresholding works well when an image has uniform regions and a contrasting background.

The histogram of the image is first calculated, and an optimal threshold dividing the image into object and background is derived by finding the valley of the histogram (Fig. 3).

Fig.3.Thresholding Histogram
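As a rough illustration of the histogram-based thresholding described above, the sketch below uses Otsu's method (MATLAB's graythresh), which picks a threshold between the two histogram modes. This is only one common automated choice and is not necessarily the exact thresholding used in this project; the filename is a placeholder.

% Minimal MATLAB sketch of histogram-based thresholding (assumes the Image
% Processing Toolbox; 'retina.tif' is a placeholder, not project data).
rgb   = imread('retina.tif');            % colour retinal image (assumed RGB)
I     = im2double(rgb2gray(rgb));        % grayscale intensities in [0,1]
level = graythresh(I);                   % Otsu's threshold, near the histogram valley
BW    = im2bw(I, level);                 % pixels above the threshold become foreground
figure; imhist(I);                       % inspect the bimodal histogram
figure; imshow(BW);                      % binary object/background map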

1.4.1. INTERACTIVE THRESHOLDING

This technique uses two values to define the threshold range. The thresholds are adjusted interactively by showing all pixels within the range in one color and all pixels outside the range in a different color. Since the thresholds are displayed in real time on the image, the threshold range can be defined locally and varied from slice to slice. All pixels within the range are segmented to generate the final boundaries.

1.4.2. TEXTURE-BASED SEGMENTATION

While image texture has been defined in many different ways, a major characteristic is the repetition of a pattern or patterns over a region. The pattern may be repeated exactly, or as a set of small variations on the theme, possibly as a function of position. For medical images, because objects are normally certain types of tissue, such as blood vessels, brain tissue and bone, they provide a rich set of texture information for image segmentation. For some objects with strong texture, texture-based segmentation generates more accurate object boundaries than thresholding-based methods. Texture-based segmentation starts with a user-defined training area, where texture characteristics are calculated and then applied as a pixel classifier to the other pixels in one cross-sectional image or the entire volume, separating them into groups. Object boundaries are traced and their topological relationships are established.

In our dissertation we have fixed the threshold value by both manual and automated methods, and the results are compared for fundus and fluorescent images [6-8].
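As a concrete illustration of the two-value (range) thresholding described in Section 1.4.1, a minimal MATLAB sketch follows. The bounds lo and hi are illustrative values that would, in practice, be adjusted interactively while watching the overlay; the filename and the numeric values are assumptions, not the project's actual parameters.

% Range thresholding sketch: keep only pixels whose intensity lies in [lo, hi].
I  = im2double(imread('slice.png'));     % placeholder single-channel image
lo = 0.35;  hi = 0.80;                   % lower and upper thresholds (illustrative)
BW = (I >= lo) & (I <= hi);              % in-range pixels form the segmented region
overlay = I;  overlay(BW) = 1;           % paint in-range pixels white for visual feedback
imshow(overlay);                         % mimics the real-time display described above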

2. OBJECTIVES
We hope to realize the following objectives through this project:
1. To develop and implement a novel method of automated image segmentation.
2. To develop an algorithm that can differentiate small blood vessels from the background.
3. To compare the segmentation of fluorescent and fundus images by manual and automated methods of segmentation.
4. To quantify and validate the superiority of the automated method over manual thresholding.

Further, we aim to emphasize the significance of our work by putting forward an algorithm that is relatively easy to implement, and hence a realizable and repeatable procedure.

3. REVIEW OF LITERATURE
Digital image-processing techniques can provide an objective and highly repeatable way of quantifying retinal pathology. This study describes an image-processing strategy which detects and quantifies microaneurysms present in digitized fluorescein angiograms. After preprocessing stages, a bilinear top-hat transformation and matched filtering are employed to provide an initial segmentation of the images. Thresholding this processed image results in a binary image containing candidate microaneurysms. A novel region-growing algorithm fully delineates each marked object, and subsequent analysis of the size, shape, and energy characteristics of each candidate results in the final segmentation of microaneurysms. The technique was assessed by comparing the computer's results with microaneurysm counts carried out by five clinicians, using Receiver Operating Characteristic (ROC) curves. The performance of the automated technique matched that of the clinicians' analyses. This strategy was valuable in providing a way of accurately monitoring the progression of diabetic retinopathy (Spencer et al., 1996).
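The top-hat and matched-filtering steps summarised above can be sketched in a few lines of MATLAB. This is only an illustration of the general idea with guessed parameters, not Spencer et al.'s actual pipeline; the filename and structuring-element size are assumptions.

% Illustrative sketch: a morphological top-hat suppresses the slowly varying
% background, and a small Gaussian acts as a crude matched filter for
% dot-like microaneurysms before thresholding.
I    = im2double(imread('angiogram.png'));   % placeholder fluorescein frame
se   = strel('disk', 10);                    % structuring element larger than a microaneurysm
top  = imtophat(I, se);                      % keep small bright details only
h    = fspecial('gaussian', 11, 2);          % matched filter for a small round lesion
resp = imfilter(top, h, 'replicate');        % filter response
cand = im2bw(resp, graythresh(resp));        % binary candidate objects for later region growing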

A new system was proposed for tracking sensitive areas in the retina for computer-assisted laser treatment of choroidal neovascularization (CNV). The system consists of a fundus camera using a red-free illumination mode interfaced to a computer that allows real-time capturing of video input. The first image acquired was used as the reference image and utilized by the treating physician for treatment planning. A grid of seed contours over the whole image is initiated and allowed to deform by splitting and/or merging according to preset criteria until the whole vessel tree is demarcated. Then, the image was filtered using a one-dimensional Gaussian filter in two perpendicular directions to extract the core areas of such vessels. Faster segmentation can be obtained for subsequent images by automatic registration to compensate for eye movement and saccades. An efficient registration technique was developed whereby some landmarks were detected in the reference frame and then tracked in the subsequent frames. Using the relation between these two sets of corresponding points, an optimal transformation can be obtained. The implementation details of the proposed strategy were presented, and the obtained results indicate that it was suitable for real-time location determination and tracking of treatment positions (Solouma et al., 2002).

A method was presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system was based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges were used to compose primitives in the form of line elements. With the line elements an image was partitioned into patches by assigning each image pixel to the closest line element. Every line element constituted a local coordinate frame for its corresponding patch. For every pixel, feature vectors were computed that make use of properties of the patches and the line elements. The feature vectors were classified using a kNN classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieved an area under the receiver operating characteristic curve of 0.952. The method was compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that the method was significantly better than the two rule-based methods (p < 0.01). The accuracy of the method was 0.944 versus 0.947 for a second observer (Staal et al., 2004).

A new method to extract retinal blood vessels from a colour fundus image was described. Digital colour fundus images are contrast enhanced in order to obtain sharp edges. The green bands are selected and transformed into correlation coefficient images using two sets of Gaussian kernel patches of distinct scales of resolution. Blood vessels are then extracted by means of a new algorithm, directional recursive region growing segmentation (D-RRGS). The segmentation results have been compared with clinically generated ground truth and evaluated in terms of sensitivity and specificity. The results are encouraging and will be used for further applications such as blood vessel diameter measurement (Himaga et al., 2004).

Retinal blood vessels are important structures in ophthalmological images. Many detection methods are available, but the results are not always satisfactory. In this paper, a novel model-based method for blood vessel detection in retinal images was presented. It was based on a Laplace and thresholding segmentation step, followed by a classification step to improve performance. The last step ensures incorporation of the inner part of large vessels with specular reflection. The method gives a sensitivity of 92% with a specificity of 91%. The method can be optimized for the specific properties of the blood vessels in the image, and it allows for detection of vessels that appear to be split due to specular reflection (Vermeer et al., 2004).

A new scheme for detection of small blood vessels in retinal images was proposed. A novel filter called the Gabor Variance Filter and a modified histogram equalization technique were developed to enhance the contrast between vessels and background. Vessel segmentation was then performed on the enhanced map using thresholding and branch pruning based on the vessel structures. Experiments on high-resolution images showed desirable results, with a performance of 84.75% true positive rate and 0.15% false positive rate (Zhang et al., 2005).

Retinal vessel segmentation is an essential step in the diagnosis of various eye diseases. In this paper, an automatic, efficient and unsupervised method based on the gradient matrix, the normalized cut criterion and a tracking strategy was proposed. Making use of the gradient matrix of the Lucas-Kanade equation, which consists of only the first-order derivatives, the proposed method can detect a candidate window where a vessel possibly exists. The normalized cut criterion, which measures both the similarity within groups and the dissimilarity between groups, was used to search for a local intensity threshold to segment the vessel in a candidate window. The tracking strategy makes it possible to extract thin vessels without being corrupted by noise. Using a multi-resolution segmentation scheme, vessels with different widths can be segmented at different resolutions, although the window size is fixed. The method was tested on a public database and was demonstrated to be efficient and insensitive to initial parameters (Cai et al., 2006).

This paper presents an automated method for the segmentation of the vascular network in retinal images. The algorithm starts with the extraction of vessel centerlines, which are used as guidelines for the subsequent vessel filling phase. For this purpose, the outputs of four directional differential operators are processed in order to select connected sets of candidate points to be further classified as centerline pixels using vessel-derived features. The final segmentation was obtained using an iterative region growing method that integrates the contents of several binary images resulting from vessel-width-dependent morphological filters. The approach was tested on two publicly available databases and its results were compared with recently published methods. The results demonstrate that the algorithm outperforms other solutions and approximates the average accuracy of a human observer without a significant degradation of sensitivity and specificity (Mendonça et al., 2006).

A method for automated segmentation of the vasculature in retinal images was presented. The method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. A Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures was used, yielding fast classification while being able to model complex decision surfaces. The probability distributions were estimated from a training set of labeled pixels obtained from manual segmentations. The method's performance was evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieved an area under the receiver operating characteristic curve of 0.9614, slightly superior to that presented by state-of-the-art approaches (Soares et al., 2006).
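To make the Gabor-based enhancement more tangible, the sketch below builds a real 2-D Gabor kernel by hand and keeps the maximum response over orientations on the inverted green channel. It illustrates only the enhancement idea at a single scale, not Soares et al.'s wavelet implementation or Bayesian classifier; all parameter values and the filename are assumptions.

% Single-scale Gabor enhancement sketch (a classifier would normally follow).
rgb = imread('fundus.png');                   % placeholder colour fundus photograph
I   = 1 - im2double(rgb(:,:,2));              % inverted green channel: vessels appear bright
sigma = 2;  lambda = 8;                       % illustrative scale and wavelength
[x, y] = meshgrid(-10:10, -10:10);            % 21-by-21 kernel support
resp = zeros(size(I));
for theta = 0:pi/12:pi - pi/12                % sweep 12 orientations
    xr = x*cos(theta) + y*sin(theta);         % coordinate along the modulation direction
    g  = exp(-(x.^2 + y.^2)/(2*sigma^2)) .* cos(2*pi*xr/lambda);   % real Gabor kernel
    resp = max(resp, imfilter(I, g, 'replicate'));                 % strongest directional response
end
imshow(resp, []);                             % vessel-enhanced map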

Computer-based analysis for automated segmentation of blood vessels in retinal images will help eye care specialists screen larger populations for vessel abnormalities. However, automated retinal segmentation is complicated by the fact that the width of retinal vessels can vary from very large to very small, and that the local contrast of vessels is unstable, especially in an unhealthy ocular fundus. We propose a novel method that takes these facts into account. The method includes a multiscale analytical scheme using Gabor filters and scale production, and a threshold probing technique utilizing the features of the retinal vessel network. The method was good at detecting large and small vessels concurrently. It also offers an efficient way to denoise and enhance the responses of line filters, allowing the detection of vessels with low local contrast (Qin Li et al., 2006).

The widespread availability of electronic imaging devices throughout the medical community is leading to a growing body of research on image processing and analysis to diagnose retinal diseases such as diabetic retinopathy (DR). Productive computer-based screening of large, at-risk populations at low cost requires robust, automated image analysis. In this paper, results for the automatic detection of the optic nerve and localization of the macula using digital red-free fundus photography were presented. The method relies on the accurate segmentation of the vasculature of the retina, followed by the determination of spatial features describing the density, average thickness, and average orientation of the vasculature in relation to the position of the optic nerve. Localization of the macula follows, using knowledge of the optic nerve location to detect the horizontal raphe of the retina with a geometric model of the vasculature. A detection performance of 90.4% for the optic nerve and a localization performance of 92.5% for the macula were reported for red-free fundus images representing a population of 345 images corresponding to 269 patients with 18 different diseases (Tobin et al., 2007).

In this paper, a novel algorithm to detect the optic disc location in retinal images was proposed. The optic disc is a bright disk-shaped area from which all major blood vessels and nerves originate. With its high fractal dimension of blood vessels, the optic disc can be easily differentiated from other bright regions such as hard exudates and artifacts. Compared with existing algorithms, the reported method has a much lower computational cost and is more robust. With its location known, segmentation of the optic disc was done with simple local histogram analysis. The algorithm can be valuable for automated processing for early-stage retinal disease (Ying et al., 2007).

An improved implementation of a segmentation method for retinal blood vessels, based on a multi-scale approach and region growing and employing modules from the Insight Segmentation and Registration Toolkit (ITK), was described in this paper. The results of segmentation of retinal blood vessels using this improved method were presented and compared with results obtained using the original implementation in Matlab, as well as with expert manual segmentations obtained from a public database. It was shown that the ITK implementation achieves high-quality segmentations with markedly improved computational efficiency. The ITK version has greater segmentation accuracy than the Matlab version, improving from 0.94 to 0.96 due to a decrease in FPR values, and it was between 8 and 12 times faster than the original version. Furthermore, the ITK implementation was able to segment high-resolution images in an acceptable timescale (Martinez-Perez et al., 2007).

In the framework of computer-aided diagnosis of eye diseases, retinal vessel segmentation based on line operators was proposed. A line detector, previously used in mammography, was applied to the green channel of the retinal image. It was based on the evaluation of the average grey level along lines of fixed length passing through the target pixel at different orientations. Two segmentation methods were considered. The first uses the basic line detector, whose response was thresholded to obtain unsupervised pixel classification. As a further development, two orthogonal line detectors along with the grey level of the target pixel were employed to construct a feature vector for supervised classification using a support vector machine. The effectiveness of both methods was demonstrated through receiver operating characteristic analysis on two publicly available databases of color fundus images (Ricci et al., 2007).
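A rough MATLAB sketch of the basic line-detector idea follows: the response at each pixel is the largest mean grey level along a short line through it, over a set of orientations, minus the mean of the surrounding square window, computed on the inverted green channel. It is a simplified illustration, not Ricci et al.'s implementation; the line length, orientations and filename are assumptions.

% Basic line-operator sketch on the inverted green channel.
rgb = imread('fundus.png');                              % placeholder colour fundus photograph
I   = 1 - im2double(rgb(:,:,2));                         % vessels appear bright
L   = 15;                                                % line length in pixels (illustrative)
N   = imfilter(I, fspecial('average', L), 'replicate');  % mean over the L-by-L window
k0  = zeros(L);  k0(ceil(L/2), :) = 1;                   % horizontal line kernel
R   = -inf(size(I));
for theta = 0:15:165                                     % twelve orientations in degrees
    k = imrotate(k0, theta, 'bilinear', 'crop');         % line at the current orientation
    k = k / sum(k(:));                                   % normalise to a mean filter
    R = max(R, imfilter(I, k, 'replicate') - N);         % line strength relative to the window
end
imshow(R, []);                                           % strong responses along vessels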

With improvements in fundus imaging technology and the increasing use of digital images in screening and diagnosis, the issue of automated analysis of retinal images is gaining more serious attention. The problem of retinal vessel segmentation, a key issue in automated analysis of digital fundus images, was considered. A texture-based vessel segmentation algorithm based on the notion of textons was proposed. Using a weak statistical learning approach, textons for the retinal vasculature were constructed by designing filters that are specifically tuned to the structural and photometric properties of retinal vessels. The performance of the proposed approach was evaluated using a standard database of retinal images. On the DRIVE data set, the proposed method produced an average performance of 0.9568 specificity at 0.7346 sensitivity. This compares well with the best published results on the data set: 0.9773 specificity at 0.7194 sensitivity (Adjeroh et al., 2007).

Proliferative diabetic retinopathy can lead to blindness. However, early recognition allows appropriate, timely intervention. Fluorescein-labeled retinal blood vessels of 27 digital images were automatically segmented using the Gabor wavelet transform and classified using traditional features such as area, perimeter, and an additional five morphological features based on the derivatives-of-Gaussian wavelet-derived data. Discriminant analysis indicated that traditional features do not detect early proliferative retinopathy. The best single feature for discrimination was the wavelet curvature with an area under the curve (AUC) of 0.76. Linear discriminant analysis with a selection of six features achieved an AUC of 0.90 (0.73-0.97, 95% confidence interval). The wavelet method was able to segment retinal blood vessels and classify the images according to the presence or absence of proliferative retinopathy (Jelinek et al, 2007).

The morphology of the retinal blood vessels can be an important indicator for diseases like diabetes, hypertension and retinopathy of prematurity (ROP). Thus, the measurement of changes in the morphology of arterioles and venules can be of diagnostic value. Here, a method to automatically segment retinal blood vessels based upon multiscale feature extraction was presented. This method overcomes the problem of variations in contrast inherent in these images by using the first and second spatial derivatives of the intensity image, which give information about vessel topology. This approach also enables the detection of blood vessels of different widths, lengths and orientations. The local maxima over scales of the magnitude of the gradient and the maximum principal curvature of the Hessian tensor are used in a multiple-pass region growing procedure. The growth progressively segments the blood vessels using feature information together with spatial information. The algorithm was tested on red-free and fluorescein retinal images taken from two local and two public databases. Comparison with the first public database yields values of 75.05% true positive rate (TPR) and 4.38% false positive rate (FPR); values for the second database are 72.46% TPR and 3.45% FPR. The results on both public databases were comparable in performance with those of other authors. However, it was concluded that these values are not sensitive enough to evaluate the performance of vessel geometry detection. Therefore we propose a new approach that uses measurements of vessel diameters and branching angles as a validation criterion to compare our segmented images with those hand segmented from public databases. Comparisons made between both hand-segmented images from public databases showed a large inter-subject variability in geometric values. A final evaluation was made comparing vessel geometric values obtained from our segmented images between red-free and fluorescein paired images, with the latter as the "ground truth". The results demonstrated that the borders found by our method are less biased and follow the border of the vessel more consistently, and therefore they yield more reliable geometric values (Martinez-Perez et al., 2007).
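As a pointer to what the first- and second-derivative features look like in practice, the sketch below computes the gradient magnitude and the maximum principal curvature of the Hessian at a single Gaussian scale in MATLAB. It covers only the feature-extraction step, not the authors' multiple-pass region growing; the scale, filename and use of simple finite differences are assumptions.

% Gradient-magnitude and maximum-principal-curvature features at one scale.
I  = im2double(imread('redfree.png'));        % placeholder single-channel red-free image
s  = 2;                                       % illustrative scale in pixels
G  = fspecial('gaussian', 6*s + 1, s);        % Gaussian smoothing kernel
Is = imfilter(I, G, 'replicate');             % smoothed image
[Ix, Iy]   = gradient(Is);                    % first derivatives (finite differences)
[Ixx, Ixy] = gradient(Ix);                    % second derivatives
[~,   Iyy] = gradient(Iy);
gradmag = sqrt(Ix.^2 + Iy.^2);                % edge-strength feature
% Closed-form eigenvalues of the 2x2 Hessian [Ixx Ixy; Ixy Iyy] at every pixel:
d    = sqrt((Ixx - Iyy).^2 + 4*Ixy.^2);
kmax = 0.5*(Ixx + Iyy + d);                   % maximum principal curvature feature
imshow(kmax, []);                             % ridge-like response along dark vessels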

An automatic system was presented to find the location of the major anatomical structures in color fundus photographs: the optic disc, the macula, and the vascular arch. These structures are found by fitting a single point-distribution model, which contains points on each structure, to the image. The method can handle optic-disc-centered and macula-centered images of both the left and the right eye. The system uses a cost function, based on a combination of both global and local cues, to find the correct position of the model points. The global terms in the cost function are based on the orientation and width of the vascular pattern in the image. The local term was derived from the image structure around the points of the model. To optimize the fit of the point-distribution model to an image, a sophisticated combination of optimization processes was proposed which combines optimization in the parameter space of the model and in the image space, where points are moved directly. Experimental results were presented demonstrating that the specific choices for the cost function components and optimization scheme are needed to obtain good results. The system was developed and trained on a set of 500 screening images, and tested on a completely independent set of 500 screening images. In addition, the system was also tested on a separate set of 100 pathological images. In the screening set it was able to find the vascular arch in 93.2%, the macula in 94.4%, and the optic disc location in 98.4% of cases, and to determine whether it was dealing with a left or right eye in 100% of all tested cases. For the pathological test set, the corresponding figures were 77.0%, 92.0%, 94.0%, and 100%, respectively (Niemeijer et al., 2007).

In this paper, segmentation of blood vessels from colour retinal images using a novel clustering algorithm with a partial supervision strategy was proposed. The proposed clustering algorithm, a Radius-based Clustering Algorithm (RACAL), uses a distance-based principle to map the distributions of the data by utilising the premise that clusters are determined by a distance parameter, without having to specify the number of clusters. Additionally, the proposed clustering algorithm was enhanced with a partial supervision strategy, and it was demonstrated that it is able to segment blood vessels of small diameter and low contrast. Results are compared with those from the kNN classifier and show that the proposed RACAL performs better than the kNN in the case of abnormal images, as it succeeds in segmenting small and low-contrast blood vessels, while it achieves comparable results for normal images. For the automation process, RACAL can be used as a classifier, and results show that it performs better than the kNN classifier on both normal and abnormal images (Salem et al., 2007).

Although it has been proposed that the retinal vasculature is fractal, no standardization has been performed for vascular segmentation or for dimension calculation, resulting in great variability among reported values of fractal dimensions. The study was designed to determine whether the estimation of retinal vessel fractal dimensions depends on the vascular segmentation and dimension calculation methods. Ten eye fundus images were segmented to extract their vascular trees by four computational methods ("multi-threshold", "scale-space", "pixel classification" and "ridge based detection"). Their information, mass-radius and box-counting fractal dimensions were calculated and compared with those of the same vascular trees segmented manually (the gold standard). The mean vascular tree dimension varied among the groups of different segmentation methods, from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension and from 1.48 to 1.57 for the mass-radius dimension. The use of different vascular segmentation methods and different dimension calculation methods introduced significant differences among the fractal dimensions of the vessels. The estimation of retinal vessel fractal dimensions therefore depends on both the vascular segmentation and the dimension calculation method (de Mendonça et al, 2007).
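For illustration, the following is a minimal MATLAB sketch of the box-counting dimension mentioned above, estimated for a binary vessel image bw; the variable name and the choice of box sizes are assumptions, and this is not the calculation procedure used in the cited study.

% Minimal box-counting sketch for a binary vessel image "bw" (logical matrix).
% Illustrative only; the box sizes are an assumed choice.
sizes  = 2.^(1:7);                        % box edge lengths in pixels
counts = zeros(size(sizes));
for k = 1:numel(sizes)
    s = sizes(k); n = 0;
    for i = 1:s:size(bw,1)
        for j = 1:s:size(bw,2)
            block = bw(i:min(i+s-1,end), j:min(j+s-1,end));
            if any(block(:)), n = n + 1; end   % count boxes containing vessel pixels
        end
    end
    counts(k) = n;
end
p    = polyfit(log(1./sizes), log(counts), 1);
Dbox = p(1);                              % slope of the log-log fit = box-counting dimension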

In this paper, a method was proposed for detecting blood vessels in pathological retinal images. Blood vessel-like objects are extracted using the Laplacian operator, and noisy objects are pruned according to the centerlines, which are detected using the normalized gradient vector field. The method was tested on all the pathological retinal images in the publicly available STARE database. Experimental results show that the method can avoid detecting false vessels in pathological regions and produces reliable results for healthy regions (Lam et al, 2008).

4. MATERIALS AND METHODS


4.1 MATERIALS:
We have put forward an algorithm that segments both fluorescent and fundus images. The algorithm is relatively easy to implement as it requires no elaborate instruments or materials. The materials required for our method of segmentation are:

1. Retinal imaging using a fundus camera
2. MATLAB - Image Processing Toolbox
3. Operating system

4.1.1 RETINAL IMAGING USING FUNDUS CAMERA


Fig.4. Fundus Camera

Ophthalmic photography is a highly specialized form of medical imaging dedicated to the study and treatment of disorders of the eye. There are two common procedures for such photography:

1. Angiography
2. Fundus photography

Angiography is the imaging of vessels, and the resulting pictures are angiograms. Angiography of the retina of the eye requires the injection of a small amount of dye through a vein in the patient's arm. The dye travels through the bloodstream and is photographed using special cameras and colored light as it travels through the vessels of the retina. Fluorescein angiography and indocyanine green (ICG) angiography are the two main types of this procedure. [5]

Fluorescein angiography is a test that allows the blood vessels at the back of the eye to be photographed as a fluorescent dye is injected into the bloodstream via the hand or arm. Fluorescein sodium is a highly fluorescent chemical compound that absorbs blue light and re-emits it as yellow-green fluorescence. Although commonly referred to simply as fluorescein, the dye used in angiography is fluorescein sodium, the sodium salt of fluorescein. Fluorescein angiography may detect and quantify changes in blood vessel geometry more accurately than fundus photography because of the high contrast between the blood vessels and the background retinal layer. It is, however, unsuitable for some people because of allergic reactions, and therefore fundus photography is more widely used in clinics. Despite the high resolution of fundus photographs, the contrast between the blood vessels and the retinal background tends to be poor, so accurate vessel segmentation from fundus photographs is harder than from other photographic procedures.

Alternatively, when performing ophthalmic fundus photography for diagnostic purposes, the pupil is dilated with eye drops and a special camera called a fundus camera is used to focus on the fundus. The resulting images are detailed and revealing, showing the optic nerve, through which visual signals are transmitted to the brain, and the retinal vessels, which supply nutrition and oxygen to the tissue. Fundus photographs are usually taken using a green (red-free) filter to acquire images of the retinal blood vessels. Green light is absorbed by blood, so the vessels appear darker in the photograph than the background and the retinal nerve fiber layer.
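In digital processing, a similar red-free view can be obtained by taking the green channel of a colour fundus image, where the vessels tend to show the highest contrast. A minimal sketch is given below; the file name is purely illustrative.

% Extract the green (red-free) channel of a colour fundus image.
% The file name is an illustrative assumption.
rgb = imread('fundus.jpg');
g   = rgb(:, :, 2);        % green channel: vessels show the highest contrast here
figure, imshow(g)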

Fig.5.Fluorescein Angiogram

Fig.6. Digital Fundus Photograph

4.1.2.MATLAB:
SOURCE URL: www.mathworks.com/products/matlab

Jack Little and Cleve Moler, the founders of The MathWorks, recognized the need among engineers and scientists for more powerful and productive computation environments beyond those provided by languages such as Fortran and C. In response to that need, they combined their expertise in mathematics, engineering, and computer science to develop MATLAB. MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include math and computation; algorithm development; data acquisition; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building.

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C or Fortran.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and images among many others.

THE MATLAB SYSTEM


The MATLAB system consists of five main parts:

1. DESKTOP TOOLS AND DEVELOPMENT ENVIRONMENT: This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path. (shown in Fig.7.)

2. THE MATLAB MATHEMATICAL FUNCTION LIBRARY: This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

3. THE MATLAB LANGUAGE: This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create large and complex application programs.

4. GRAPHICS: MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications. (shown in Fig.8)

5. THE MATLAB EXTERNAL INTERFACES/API: This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.

Fig.7.The Matlab Help

Fig.8.The Matlab GUI

4.1.2.1 IMAGE PROCESSING TOOL BOX


The Image Processing Toolbox is a collection of functions that extend the capability of the MATLAB numeric computing environment. The toolbox supports a wide range of image processing operations, including spatial image transformations, morphological operations, neighborhood and block operations, linear filtering and filter design, transforms, image analysis and enhancement, image registration, deblurring, and region-of-interest operations.

The basic data structure in MATLAB is the array, an ordered set of real or complex elements. This object is naturally suited to the representation of images, real-valued ordered sets of color or intensity data.

MATLAB stores most images as two-dimensional arrays (i.e., matrices), in which each element of the matrix corresponds to a single pixel in the displayed image. (Pixel is derived from picture element and usually denotes a single dot on a computer display.)

For example, an image composed of 200 rows and 300 columns of different colored dots would be stored in MATLAB as a 200-by-300 matrix. Some images, such as truecolor images, require a three-dimensional array, where the first plane in the third dimension represents the red pixel intensities, the second plane represents the green pixel intensities, and the third plane represents the blue pixel intensities. This convention makes working with images in MATLAB similar to working with any other type of matrix data, and makes the full power of MATLAB available for image processing applications.
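As a small illustration of this convention, using the same hypothetical 200-by-300 dimensions:

% Illustration of MATLAB's image-as-array convention for a hypothetical
% 200-by-300 truecolor image.
rgb = zeros(200, 300, 3, 'uint8');   % rows x columns x colour planes
rgb(:, :, 2) = 128;                  % second plane holds the green intensities
g  = rgb(:, :, 2);                   % the green plane as a 200-by-300 matrix
px = rgb(50, 120, :);                % R, G and B values of the pixel at row 50, column 120
size(g)                              % returns [200 300]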

The MATLAB Mathematical Function Library has a series of in-built functions that enable us to write algorithms to display, enhance, deblur, morphologically reconstruct and restore an image, remove noise, and bring about its subsequent segmentation. The capabilities of the Image Processing Toolbox were further extended by writing our own M-files, or by using the toolbox in combination with other toolboxes, such as the Signal Processing Toolbox and the Wavelet Toolbox.

Fig.9. The Matlab Figure Window

4.1.3 OPERATING SYSTEM


The following platforms are supported for MATLAB Release 14 with Service Pack 3 (R14SP3):

1. Windows XP
2. Windows 2000, 2003 Server
3. Linux x86 2.4.x, glibc (glibc6) 2.2.5
4. Linux x86 2.4.x, glibc (glibc6) 2.3.2
5. Linux x86 2.6.x, glibc (glibc6) 2.3.2
6. Sun Solaris 2.8, 2.9, and 2.10
7. HP-UX 11.0 and 11.i
8. Mac OS X 10.3.8 (Panther)
9. Mac OS X 10.3.9 (Panther; requires an Apple Java patch, see Solution 1-161VXT)
10. Mac OS X 10.4 (Tiger)

We used a Windows XP system, which made the implementation of the algorithm easier.

4.2. METHODS
4.2.1. MANUAL SEGMENTATION ALGORITHM
The following is a brief description of the steps used to segment images with the manual method (a MATLAB sketch of these steps is given after the list):

The image was resized to 1.5 times the original size.

The contrast of the image was adjusted by stretching the intensity values to the limits 0 and 1.

A morphological opening was performed with a disk-shaped structuring element of size 15 to highlight (estimate) the background.

The highlighted background was subtracted from the original image, so that the blood vessels stood out much more clearly than in the original image.

The image was filtered, by correlation or convolution, with a 5-by-5 filter containing equal weights; this is called an averaging filter.

The image was then filtered using a predefined filter to give the fully preprocessed image.

The preprocessed image was converted to a binary image using a threshold value of 0.4. Because this is the manual method, the threshold value can be changed to suit each image.

The final image was displayed where the blood vessels are shown in white and the background in black.
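The following is a minimal MATLAB sketch of the steps above, assuming the Image Processing Toolbox; the input file name and the choice of 'unsharp' as the predefined filter in the final filtering step are assumptions made for illustration, not part of the original procedure.

% Minimal sketch of the manual segmentation pipeline (Image Processing Toolbox).
% The file name and the 'unsharp' predefined filter are illustrative assumptions.
I = imread('retina.tif');                 % input retinal image (hypothetical file)
if size(I, 3) == 3, I = rgb2gray(I); end  % work on a grey-scale image
I  = imresize(I, 1.5);                    % resize to 1.5 times the original size
I  = imadjust(I);                         % stretch contrast to the limits 0 and 1
bg = imopen(I, strel('disk', 15));        % highlight the background (disk of size 15)
I2 = imsubtract(I, bg);                   % subtract background; vessels stand out
I3 = imfilter(I2, fspecial('average', [5 5]));  % 5-by-5 averaging filter (equal weights)
I4 = imfilter(I3, fspecial('unsharp'));   % predefined filter (assumed: unsharp)
bw = im2bw(I4, 0.4);                      % manual threshold of 0.4
imshow(bw)                                % vessels in white, background in black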

4.2.2. AUTOMATED SEGMENTATION

In a match-filtered retinal image, the enhanced blood vessels are usually very sparse compared with the uniform background. This leads to a highly peaky co-occurrence matrix with low entropy, which is not appropriate for thresholding. Therefore, blood vessels extracted by the manual method are usually not complete, and some detailed structures are missed. In our algorithm, we take the sparse foreground into account when selecting the optimal threshold; a MATLAB sketch of the automated threshold selection is given after the steps below.

A global threshold level can be computed which lies in the range [0, 1].

The histogram is initially segmented into two parts using a starting threshold value which is half the maximum dynamic range.

The sample mean (mf, 0) of the gray values associated with the foreground pixels and the sample mean (mb,0) of the gray values associated with the background pixels are computed.

A new threshold value is then computed as the average of these two sample means. The process is repeated with the new threshold until the threshold value no longer changes.

The threshold is selected so as to maximize the local entropy of foreground and background in a gray-scale image; the larger the local entropy, the more balanced the ratio between foreground and background in the binary image.
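Below is a minimal MATLAB sketch of the iterative threshold selection described in the steps above. The variable name I4 (the output of the preprocessing pipeline) and the convergence tolerance are assumptions made for illustration.

% Iterative threshold selection as described above. "I4" is assumed to be the
% preprocessed image from the earlier pipeline; the tolerance is an assumed choice.
I = im2double(I4);                       % normalize grey values to [0, 1]
T = 0.5 * (min(I(:)) + max(I(:)));       % start at half the maximum dynamic range
Tprev = Inf;
while abs(T - Tprev) > 1e-4              % repeat until the threshold stops changing
    Tprev = T;
    mf = mean(I(I >  T));                % sample mean of the foreground grey values
    mb = mean(I(I <= T));                % sample mean of the background grey values
    T  = (mf + mb) / 2;                  % new threshold = average of the two means
end
bw = im2bw(I, T);                        % final binary image: vessels vs background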

5. RESULTS
We tested the algorithm on 50 fundus and fluorescent images provided by Dr. Agarwal's Eye Hospital and present the results for one fundus and one fluorescent image.

5.1. SUBJECTIVE RESULTS
5.1.1. FLUORESCENT IMAGES

Fig.10.1. Original Image

Fig.10.2. Preprocessed Image

Fig.10.3. Manually Segmented Image

Fig.10.4.Automated Segmented Image

5.1.2. FUNDUS IMAGES


Fig.11.1. Original Image

Fig.11.2. Preprocessed Image

Fig.11.3. Manually Segmented Image

Fig.11.4. Automated Segmented Image

5.2. QUANTITATIVE RESULTS


The sensitivity and specificity at a fixed threshold on 60 images yielded an average sensitivity of approximately 96.143% and an average specificity of approximately 94.097%, confirming the effectiveness of our algorithm.

SE = TP / (TP + FN)
SP = TN / (TN + FP)

where SE = Sensitivity, SP = Specificity, TP = True Positive, TN = True Negative, FN = False Negative and FP = False Positive.
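As an illustration of how these measures can be computed in MATLAB, the sketch below compares an automatically segmented binary image bw with a manually segmented reference gt; both variable names are assumptions for illustration.

% Sensitivity and specificity of a binary segmentation "bw" against a
% manually segmented reference "gt" (both logical images of the same size).
TP = nnz( bw &  gt);                 % vessel pixels correctly detected
FN = nnz(~bw &  gt);                 % vessel pixels missed
TN = nnz(~bw & ~gt);                 % background pixels correctly rejected
FP = nnz( bw & ~gt);                 % background pixels wrongly marked as vessel
SE = TP / (TP + FN);                 % sensitivity
SP = TN / (TN + FP);                 % specificity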

6. DISCUSSION
The manual and automated thresholding methods of segmentation were compared for the fluorescent and fundus images. The results confirmed the advantage of automated thresholding, and the specificity and sensitivity of our algorithm were calculated and compared with other algorithms that use different operators (shown in Table.1).

Table.1. A brief comparison among the proposed algorithms for retinal image segmentation

Algorithm                                 | Accuracy  | Edge continuity | Noise in background
Sobel Operator                            | Very high | Very good       | Highly accepted
Robert Operator                           | High      | Good            | Accepted
Prewitt Operator                          | Low       | Bad             | Accepted
Canny Operator                            | Low       | Bad             | Accepted
Decision Based Directional Edge Detector  | Very high | Very good       | Fairly accepted
Deformable models                         | High      | Very good       | Not accepted
Morphological Gradient                    | High      | Good            | Accepted
Morphological Reconstruction              | High      | Very good       | Highly accepted
Thresholding                              | High      | Very good       | Accepted

7. SUMMARY
Retinal blood vessel morphology is an important indicator for many diseases such as diabetes mellitus, hypertension and arteriosclerosis, and the measurement of geometrical changes in retinal veins and arteries is applied in a variety of clinical studies. Segmentation of the retinal blood vessels helps in understanding their morphology and provides a better source of information for studying the various related diseases. Two of the major problems in the segmentation of retinal blood vessels, namely the presence of a wide variety of vessel widths and the inhomogeneous background of the retina, have been addressed. A method of automated segmentation for both fluorescent and fundus images of the retinal blood vessels has been proposed, this segmentation has been compared against manual measurements, and the efficiency of the automated algorithm has been quantified.

8. SOCIAL RELEVANCE OF RETINAL IMAGE SEGMENTATION
In the case of diabetic retinopathy there are often no symptoms in the early stages of the disease, nor is there any pain. Sometimes blurred vision may occur when the macula, the part of the retina that provides sharp central vision, swells from leaking fluid. This condition is called macular edema. New blood vessels may also grow on the surface of the retina; they can bleed into the eye and block vision. These changes can be observed in the fundus morphology even before any visual loss. Fundus images have poor contrast between the blood vessels and the retinal background, and hence there is a need to segment the images by first preprocessing them and then extracting the features [31]. Glaucoma is estimated to affect 12 million Indians; it causes 12.8% of the total blindness in the country and is considered to be the third most common cause of blindness in India. The symptoms of glaucoma include loss of peripheral vision, blurred or foggy vision, heaviness or dull pain in the eyes, and halos or rainbow-colored rings perceived around lights. Glaucoma is diagnosed by measuring the intraocular pressure (tonometry), looking for optic nerve changes (fundus examination) and documenting visual field defects (perimetry). The diagnosis is confirmed on the basis of the clinical condition and these findings. These examinations rely on retinal image analysis, which is once again aided by segmentation [2]. Hence segmentation, by extracting the vasculature of retinal images, is very important to ophthalmologists as it can be used to diagnose retinal diseases and aid in their subsequent treatment.

9. REFERENCES
1. Curcio, C.A. and Hendrickson, A.E., Organization and development of the primate photoreceptor mosaic, 1991, 10: 89-120.
2. Quigley, H.A., Open-angle glaucoma, N Engl J Med, 1993, 328: 1097-1106.
3. Hendrickson, A.E. and Youdelis, C., The morphological development of the human fovea, Ophthalmology, 1984, 91: 603-612.
4. Nicolela, M.T. and Drance, S.M., Various glaucomatous optic nerve appearances: clinical correlations, Ophthalmology, 1996, 103(4): 640-649.
5. Liesenfeld, B., Kohner, E., Piehlmeier, W., Kluthe, S., Aldington, S. and Porta, M., A telemedical approach to the screening of diabetic retinopathy: digital fundus photography, Diabetes Care, 2000, 23(3): 345-348.
6. Yecheng Ted Wu, Image Segmentation: The First Step in 3-D Imaging, Able Software, 1999.
7. Narasimha-Iyer, H., Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy, IEEE Trans Biomed Eng, 2006, 53(6): 1084-1098.
8. Mabrouk, M.S., Solouma, N.H. and Kadah, Y.M., Survey of Retinal Image Segmentation and Registration, GVIP Journal, 2006, 6(2): 27-30.
9. Youssif, A.R., Ghalwash, A.Z. and Ghoneim, A.R., Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter, IEEE Trans Med Imaging, 2008, 27(1): 11-18.
10. Tobin, K.W., Chaum, E., Govindasamy, V.P. and Karnowski, T.P., Detection of anatomic structures in human retinal imagery, IEEE Trans Med Imaging, 2007, 26(12): 1729-1739.
11. Ying, H., Zhang, M. and Liu, J.C., Fractal-based automatic localization and segmentation of optic disc in retinal images, Conf Proc IEEE Eng Med Biol Soc, 2007, 41: 39-41.
12. Martinez-Perez, M., Hughes, A.D., Thom, S.A. and Parker, K.H., Improvement of a retinal blood vessel segmentation method using the Insight Segmentation and Registration Toolkit (ITK), Conf Proc IEEE Eng Med Biol Soc, 2007, 89: 2-5.
13. Ricci, E. and Perfetti, R., Retinal blood vessel segmentation using line operators and support vector classification, IEEE Trans Med Imaging, 2007, 26(10): 1357-1365.
14. Jelinek, H.F., Cree, M.J., Leandro, J.J., Soares, J.V., Cesar, R.M. Jr. and Luckie, A., Automated segmentation of retinal blood vessels and identification of proliferative diabetic retinopathy, J Opt Soc Am A Opt Image Sci Vis, 2007, 24(5): 1448-1456.
15. Adjeroh, D.A., Kandaswamy, U. and Odom, J.V., Texton-based segmentation of retinal vessels, J Opt Soc Am A Opt Image Sci Vis, 2007, 24(5): 1384-1393.
16. Cai, W. and Chung, A.C., Multi-resolution vessel segmentation using normalized cuts in retinal images, Med Image Comput Comput Assist Interv Int Conf, 2006, 9: 928-936.
17. Salem, S.A., Salem, N.M. and Nandi, A.K., Segmentation of retinal blood vessels using a novel clustering algorithm (RACAL) with a partial supervision strategy, Med Biol Eng Comput, 2007, 45(3): 261-273.
18. Niemeijer, M., Abràmoff, M.D. and van Ginneken, B., Segmentation of the optic disc, macula and vascular arch in fundus photographs, IEEE Trans Med Imaging, 2007, 26(1): 116-127.
19. Martinez-Perez, M.E., Hughes, A.D., Thom, S.A., Bharath, A.A. and Parker, K.H., Segmentation of blood vessels from red-free and fluorescein retinal images, Med Image Anal, 2007, 11(1): 47-61.
20. Soares, J.V., Leandro, J.J., Cesar Júnior, R.M., Jelinek, H.F. and Cree, M.J., Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification, IEEE Trans Med Imaging, 2006, 25(9): 1214-1222.
21. Mendonça, A.M. and Campilho, A., Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction, IEEE Trans Med Imaging, 2006, 25(9): 1200-1213.
22. Staal, J., Abràmoff, M.D., Niemeijer, M., Viergever, M.A. and van Ginneken, B., Ridge-based vessel segmentation in color images of the retina, IEEE Trans Med Imaging, 2004, 23(4): 501-509.
23. Solouma, N.H., Youssef, A.B., Badr, Y.A. and Kadah, Y.M., A new real-time retinal tracking system for image-guided laser treatment, IEEE Trans Biomed Eng, 2002, 49(9): 1059-1067.
24. Spencer, T., Olson, J.A., McHardy, K.C., Sharp, P.F. and Forrester, J.V., An image-processing strategy for the segmentation and quantification of microaneurysms in fluorescein angiograms of the ocular fundus, Comput Biomed Res, 1996, 29(4): 284-302.
25. Lam, B.Y. and Yan, H., A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields, IEEE Trans Med Imaging, 2008, 27(2): 237-246.
26. de Mendonça, M.B., de Amorim Garcia, C.A., Nogueira, R. de A., Gomes, M.A., Valença, M.M. and Oréfice, F., Fractal analysis of retinal vascular tree: segmentation and estimation methods, Arq Bras Oftalmol, 2007, 70(3): 413-422.
27. Zhang, M., Wu, D. and Liu, J.C., On the small vessel detection in high resolution retinal images, Conf Proc IEEE Eng Med Biol Soc, 2005, 3: 3177-3179.
28. Vermeer, K.A., Vos, F.M., Lemij, H.G. and Vossepoel, A.M., A model based method for retinal blood vessel detection, Comput Biol Med, 2004, 34(3): 209-219.
29. Himaga, M., Usher, D. and Boyce, J., Accurate Retinal Blood Vessel Segmentation by Using Multi-Resolution Matched Filtering and Directional Region Growing, IEICE Trans Inf Syst, 2004, 87: 155-163.
30. Qin Li, Zhang, D., Lei Zhang and Bhattacharya, P., A New Approach to Automated Retinal Vessel Segmentation Using Multiscale Analysis, Pattern Recognition, 2006, 4: 77-80.
31. Kyuichi Yamamoto and Shinichi Murakami, Study on Image Segmentation by K-Means Algorithm, IEIC Technical Report, 2003, 103: 130-147.
