INTRODUCTION
Cancer is defined as the abnormal growth of tissue. Prostate cancer is a form of cancer that
develops in the prostate, a gland in the male reproductive system. Most prostate cancers are slow
growing. The cancer cells may spread from the prostate to other parts of the body, particularly
the bones and lymph nodes. Rates of detection of prostate cancer vary widely across the world.
Prostate cancer is the second leading cause of cancer-related death among men and is the
most commonly diagnosed cancer in men.
Image segmentation is the division of an image into regions or categories, which correspond to
different objects or parts of objects. Every pixel in an image is allocated to one of a number of
these categories. A good segmentation is typically one in which: (a) pixels in the same category
have similar greyscale or multivariate values and form a connected region, and (b) neighbouring
pixels in different categories have dissimilar values.
Segmentation is a fundamental operation on images that partitions the image into homogeneous
segments or regions to ease the analysis of images. It is the process of grouping pixels together,
with each group having at least one feature in common. The features may be brightness,
color, motion, texture, etc. Boundary detection is an integral part of this process, since it helps to
identify the individual segments themselves. Segmentation is treated as a clustering problem
in statistics and can be used for 3D reconstruction of objects when applied to a stack of images,
as in medical imaging.
In computer vision, segmentation is widely used to separate the foreground from the background
and for feature extraction, face detection, video compression, etc. It is one of the oldest and most
extensively studied problems; Brice and Fennema contributed significantly by beginning to
analyze scenes using regions in 1970. Early techniques of segmentation used region splitting or
merging, while recent techniques often tend to optimize some global criterion. This report
describes a technique for detecting cancer cells using image segmentation in MATLAB. A new
algorithm is implemented to successfully segment intensity-inhomogeneous images.
Segmentation of cancer cell images in the prostate gland of the male reproductive system using this
proposed algorithm is shown, and the results are discussed.
A medical image archiving system and method can be employed to understand image
segmentation. Such a system receives analog NTSC or PAL video from a
medical imaging device and converts it to a digital format for storage. Storage can be via a
local hard disc drive, a CD writer or other optical storage medium, or via local or wide area
network storage to a remote electronic storage medium. The system includes an integral web
server to permit easy access over a network using a browser. When an image is stored on a CD, it
can be stored as a session and the CD closed to prevent further writing.
The aim of this project is to show how to detect a cell using image segmentation. An object can
be easily detected in an image if the object has sufficient contrast from the background. In this
project we detect prostate cancer cells using edge detection and basic morphology. These days,
the scope of medical imaging is increasing rapidly with the growing involvement of engineering in this
area. This project is thus a step forward in the field of medical imaging and helps identify the
future scope of biomedical applications in the field of digital image processing.
CHAPTER 2
LITERATURE SURVEY
The aim of this section is to acquaint the reader with the basic processes and the literature of
current research in the field of image segmentation in medical imaging for detecting a cancer cell.
2.1 Introduction:
For many years, research efforts have been focused on accurate methods of differentiating
indolent from more aggressive cancers. Since the original description by Gleason of a pathologic
grading system, other methods of predicting cancer aggressiveness, using nuclear morphometry
as well as various molecular markers which include DNA ploidy, have been reported. These
methods have been based primarily on the analysis of fixed, histologic sections of prostate
cancer. By concentrating research efforts solely on the analysis of nonliving tissue, we are
ignoring certain aspects of prostate cancer that may be important in determining the metastatic
potential of individual tumors. Dynamic analysis of live cells to quantify various motility
characteristics has been instrumental in understanding ciliary function, cell development, early
physiologic cellular reactions, nuclear organization, cytoplasmic forces, neuronal death, and the
metastatic potential of animal models of prostate cancer. Common methods of imaging these
cells for motility analysis have been brightfield, fluorescence, phase contrast, modulation
contrast, and differential interference contrast microscopy. Not all of these imaging modalities
are optimal for live prostate cells. For instance, the transparency of cells often leads to low-
contrast images if bright field microscopy is used. Vital dyes have the potential for influencing
cell motility processes and decay of the signal occurs rapidly, thus limiting the extent to which
fluorescence microscopy can be employed to quantify cell motility. Phase contrast techniques
accentuate the boundaries of flat surfaces of prostate cells with a white halo, creating an
artifact that interferes with further analysis of cell shape. In consequence of these shortcomings,
modulation contrast (Hoffman) and differential interference contrast (DIC) microscopy are often
used to achieve high-contrast images of motile prostate cells. Hoffman microscopy detects
optical density gradients using a spatially varying transmittance filter (modulator).
This filter causes the modulation of opposite gradients above and below average background
intensity, which gives a cell image a three-dimensional appearance. DIC microscopy also
gives a three-dimensional appearance to a cell's image, but does so by using birefringence optics
consisting of Nomarski prisms and a polarizer. Live cell boundary detection, i.e., image
segmentation, is a central requirement of any quantitative motility analysis.
Special attributes of Hoffman and DIC images undermine the use of segmentation techniques
that have been developed for analysis of brightfield and fluorescence images. For example, the
intensity of cell borders in Hoffman images can be lighter or darker (or both) than the
surrounding background intensity (asymmetric intensity border), which makes segmentation
difficult. DIC images place an even greater demand on segmentation algorithms due to their
resolution of common delicate cell components (e.g., membrane ruffles, pseudopodal extensions,
undulations) in addition to the asymmetric intensity borders. As a result, effective use of
Hoffman and DIC images requires modification of many segmentation techniques, including
image thresholding, first- and second-order statistical analyses, B-splines, color feature analysis,
and fractal features. In order to segment the borders of cultured cells in Hoffman and DIC
images, we developed specialized algorithms that could be utilized in an automated system for
performing cell motility and morphometry analyses. Our focus has been on the use of segmented
cell information in the prediction of metastatic potential of rat prostatic adenocarcinoma cell
lines. Previously, Partin et al. (19) developed a Fourier analysis method that used a sequence of
spatial and temporal Fast Fourier Transforms to describe quantitatively the cell's dynamic shape
change, based on temporal alterations in the boundary. This Fourier analysis scheme was applied
to in vitro cells from low and high metastatic Dunning sublines of the rat prostatic
adenocarcinoma (19). This study showed a strong correlation between the calculated Fourier
measurements and metastatic potential. Widespread use of this Fourier methodology has been
partially limited because earlier quantitative analyses required time-intensive and subjective
manual tracing of cell boundaries from microscopic images. To eliminate the tedious process of
manual cell tracing, we developed a segmentation algorithm for a cell motility analysis system.
This approach has the further benefit of providing an objective, reproducible method of
determining cell boundaries for our motility system and for other systems involving cell image
analysis. We examine the segmentation of cultured motile prostate cells, imaged using both
Hoffman and DIC video microscopy. Features are extracted from segmented cell boundaries and
used to characterize the morphometry of these motile cells, which can be utilized in prostate
cancer research. Features extracted from cell boundaries derived using our segmentation
algorithms are compared with those derived from manual cell tracings.
Human beings are predominantly visual creatures. We not only look at things to identify and
classify them, but we can scan for differences, and obtain an overall rough feeling for a scene
with a quick glance.
Humans have evolved very precise visual skills: we can identify a face in an instant; we can
differentiate colours; we can process a large amount of visual information very quickly.
(i) Binary: Each pixel is just black or white. Since there are only two possible values for each
pixel, we only need one bit per pixel. Such images can therefore be very efficient in terms of
storage. Images for which a binary representation may be suitable include text (printed or
handwritten), fingerprints, or architectural plans.
(ii) Greyscale: Each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range
means that each pixel can be represented by eight bits.
(iii) RGB (or True Colour) Images: Here each pixel has a particular colour, described
by the amounts of red, green and blue in it. If each of these components has a range of 0 to 255, this
gives a total of 256³ = 16,777,216 different possible colours in the image. This is enough
colours for any image. Since the total number of bits required for each pixel is 24, such images
are also called 24-bit colour images. Such an image may be considered as consisting of a stack of
three matrices, representing the red, green and blue values for each pixel. This means that for
every pixel there correspond three values.
(iv) Indexed: Most colour images only have a small subset of the more than sixteen million
possible colours. For convenience of storage and file handling, the image has an associated
colour map, or colour palette, which is simply a list of all the colours used in that image. Each
pixel has a value which does not give its colour directly (as for an RGB image), but an index to the colour
in the map. It is convenient if an image has 256 colours or fewer, for then the index values will
only require one byte each to store.
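The relationship between an indexed image and its colour map can be sketched in a few lines. The following Python snippet is illustrative only (the report's own code is in MATLAB), and the palette and pixel values are invented:

```python
# A small colour map (palette): each entry is an RGB triple.
palette = [
    (0, 0, 0),      # index 0: black
    (255, 0, 0),    # index 1: red
    (0, 255, 0),    # index 2: green
]

# A tiny 2x3 indexed image: each pixel stores only a palette index,
# which fits in one byte as long as the image has 256 colours or fewer.
indexed = [
    [0, 1, 1],
    [2, 2, 0],
]

def to_rgb(indexed_image, colour_map):
    """Expand an indexed image to full RGB triples using its colour map."""
    return [[colour_map[idx] for idx in row] for row in indexed_image]

rgb = to_rgb(indexed, palette)
print(rgb[0][1])  # (255, 0, 0)
```

Because each pixel stores a one-byte index rather than a 24-bit triple, the indexed form needs roughly a third of the storage of the equivalent RGB image, plus the small palette.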
Image processing is any form of signal processing for which the input is an image, such as
a photograph or video frame; the output of image processing may be either an image or a set of
characteristics or parameters related to the image.
Digital image processing is the use of computer algorithms to perform image processing on
digital images. It allows a much wider range of algorithms to be applied to the input data and
can avoid problems such as the build-up of noise and signal distortion during processing. Since
images are defined over two dimensions (perhaps more) digital image processing may be
modeled in the form of multidimensional systems.
(i) Image enhancement: This refers to processing an image so that the result is more suitable for a
particular application. Examples include:
Sharpening or de-blurring an out of focus image.
Highlighting edges.
Improving image contrast or brightening an image.
Removing noise.
(ii) Image restoration: This may be considered as reversing the damage done to an image by a
known cause. For example:
Removing of blur caused by linear motion.
Removal of optical distortions.
Removing periodic interference.
(iii) Image segmentation: This involves subdividing an image into constituent parts, or isolating
certain aspects of an image. For example:
Finding lines, circles, or particular shapes in an image.
In an aerial photograph, identifying cars, trees, buildings, or roads.
2.2.3 Applications:
Image processing has an enormous range of applications; almost every area of science and
technology can make use of image processing methods. Here is a short list just to give some
indication of the range of image processing applications.
(i) Medicine:
Inspection and interpretation of images obtained from X-rays, MRI or CAT scans,
Analysis of cell images, of chromosome karyotypes.
(ii) Agriculture:
Satellite/aerial views of land, for example to determine how much land is being used for different
purposes, or to investigate the suitability of different regions for different crops,
Inspection of fruit and vegetables, distinguishing good, fresh produce from old.
(iii) Industry:
2.3.1 METHODS:
Several common approaches have appeared in the recent literature on medical image
segmentation. We define each method, provide an overview of its implementation, and discuss its
advantages and disadvantages. Although each technique is described separately, multiple
techniques are often used in conjunction for solving different segmentation problems. We divide
segmentation methods into eight categories: (a) thresholding approaches, (b) region growing
approaches, (c) classifiers, (d) clustering approaches, (e) Markov random field (MRF) models,
(f) artificial neural networks, (g) deformable models, and (h) atlas-guided approaches.
Thresholding:
Thresholding approaches segment scalar images by creating a binary partitioning of the image
intensities. A thresholding procedure attempts to determine an intensity value, called the
threshold, which separates the desired classes. The segmentation is then achieved by grouping all
pixels with intensities greater than the threshold into one class and all other pixels into another
class. Determination of more than one threshold value is a process called multithresholding.
Thresholding is a simple yet often effective means for obtaining a segmentation of images in
which different structures have contrasting intensities or other quantifiable features. The partition
is usually generated interactively, although automated methods do exist. Thresholding is often
performed interactively, based on the operator's visual assessment of the resulting segmentation.
Thresholding is often used as an initial step in a sequence of image-processing operations. It has
been applied in digital mammography, in which two classes of tissue are typically present:
healthy and tumorous. Its main limitations are that, in its simplest form, only two classes are
generated, and it cannot be applied to multichannel images. In addition, thresholding typically
does not take into account the spatial characteristics of an image. This causes it to be sensitive to
noise and intensity inhomogeneities, which can occur in MR images.
Both of these artifacts essentially corrupt the histogram of the image, making separation more
difficult. For these reasons, variations on classical thresholding have been proposed for medical-
image segmentation that incorporate information based on local intensities and connectivity.
Region Growing:
Region growing is a technique for extracting an image region that is connected based on some
predefined criteria. These criteria can be based on intensity information and/or edges in the
image. In its simplest form, region growing requires a seed point that is manually selected by an
operator and extracts all pixels connected to the initial seed based on some predefined criteria.
Figure 2.3: Feature space methods and region growing. (a) Histogram showing three
apparent classes. (b) 2-D feature space. (c) Example of region growing.
For example, one possible criterion might be to grow the region until an edge in the image is
met. This is depicted in Figure 2.3c, in which region growing has been used to isolate one of the
structures.
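The seed-based procedure described above can be sketched as follows. This is an illustrative Python implementation (the report's own work is in MATLAB), using an assumed intensity-similarity criterion `tol` and an invented image:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol` (an assumed predefined
    criterion). Returns the set of (row, col) pixels in the region."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Invented image: a bright 2x2 structure (values near 100) on a dark background.
img = [
    [100, 102,  10,  12],
    [101,  99,  11, 103],
    [ 10,  10,  12,  13],
]
grown = region_grow(img, (0, 0), tol=5)
```

Note that the isolated bright pixel at (1, 3) is not collected, because region growing only extracts pixels connected to the seed.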
Classifiers:
Classifier methods are pattern recognition techniques that seek to partition a feature space
derived from the image by using data with known labels. A feature space is the range space of
any function of the image, with the most common feature space being the image intensities
themselves. All pixels with their associated features on the left side of the partition would be
grouped into one class.
Classifiers are known as supervised methods because they require training data that are manually
segmented and then used as references for automatically segmenting new data. There are a
number of ways in which training data can be applied in classifier methods. A simple classifier is
the nearest-neighbor classifier, in which each pixel is classified in the same class as the training
datum with the closest intensity. The k-nearest-neighbor classifier is a generalization of this
approach, in which the pixel is classified into the same class as the majority of the k-closest
training data. The k-nearest-neighbor classifier is considered a nonparametric classifier because
it makes no underlying assumption about the statistical structure of the data. Another
nonparametric classifier is the Parzen window, in which the classification is made by a weighted
decision process within a predefined window of the feature space, centered at the unlabeled pixel
intensity.
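As an illustration of the nearest-neighbour idea, the following Python sketch classifies a pixel by majority vote among the k training intensities closest to it; the training intensities and labels are invented for the example:

```python
def knn_classify(pixel_intensity, training, k=3):
    """Classify a pixel into the same class as the majority of the k training
    samples with the closest intensity. `training` is a list of
    (intensity, label) pairs with manually assigned labels."""
    nearest = sorted(training, key=lambda s: abs(s[0] - pixel_intensity))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical training data: dark pixels labelled "background", bright "cell".
training = [(10, "background"), (15, "background"), (20, "background"),
            (200, "cell"), (210, "cell"), (220, "cell")]

print(knn_classify(205, training))  # cell
print(knn_classify(12, training))   # background
```

No distributional assumption is made about the intensities, which is what makes this classifier nonparametric.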
A commonly used parametric classifier is the maximum-likelihood or Bayes classifier. It
assumes that the pixel intensities are independent samples from a mixture of probability
distributions, usually Gaussian. This mixture, called a finite mixture model, is given by the
probability density function:

f(y_j; theta, pi) = sum_{k=1}^{K} pi_k f_k(y_j; theta_k)

where y_j is the intensity of pixel j, f_k is the component density of class k (parameterized by
theta_k), and pi_k is the mixing coefficient of class k.
For Gaussian mixtures, this means estimating the K means, covariances, and mixing coefficients.
Classification of new data is obtained by assigning each pixel to the class with the highest
posterior probability. When the data truly follow a finite Gaussian mixture distribution, the
maximum-likelihood classifier can perform well and is capable of providing a soft segmentation
composed of the posterior probabilities.
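The classification rule can be sketched numerically. The Python snippet below evaluates the posterior probabilities for a two-class univariate Gaussian mixture with invented parameters and assigns the pixel to the maximum-posterior class:

```python
import math

def gaussian(y, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posteriors(y, mixture):
    """Posterior probability of each class k for intensity y under a finite
    Gaussian mixture. `mixture` is a list of (pi_k, mean_k, var_k) tuples
    with illustrative (made-up) parameters."""
    joint = [pi * gaussian(y, m, v) for pi, m, v in mixture]
    total = sum(joint)
    return [p / total for p in joint]

# Two assumed classes: background (mean 30) and tumour tissue (mean 180).
mixture = [(0.7, 30.0, 400.0), (0.3, 180.0, 900.0)]
post = posteriors(170.0, mixture)
label = max(range(len(post)), key=lambda k: post[k])  # maximum-posterior class
```

The vector `post` is exactly the soft segmentation mentioned above; taking its argmax yields the hard class assignment.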
Standard classifiers require that the structures to be segmented possess distinct quantifiable
features. Because training data can be labeled, classifiers can transfer these labels to new data as
long as the feature space sufficiently distinguishes each label as well. Being noniterative,
classifiers are relatively computationally efficient, and, unlike thresholding methods, they can be
applied to multichannel images. A disadvantage of classifiers is that they generally do not
perform any spatial modeling. This weakness has been addressed in recent work extending
classifier methods to segmenting images that are corrupted by intensity inhomogeneities.
Another disadvantage is the requirement of manual interaction to obtain training data. Training
sets can be acquired for each image that requires segmenting, but this can be time consuming and
laborious. On the other hand, use of the same training set for a large number of scans can lead to
biased results that do not take into account anatomical and physiological variability between
different subjects.
Clustering:
Clustering algorithms essentially perform the same function as classifier methods without the use
of training data. Thus, they are termed unsupervised methods. To compensate for the lack of
training data, clustering methods iteratively alternate between segmenting the image and
characterizing the properties of each class. In a sense, clustering methods train themselves, using
the available data. Three commonly used clustering algorithms are the K-means or ISODATA
algorithm, the fuzzy c-means algorithm, and the expectation-maximization (EM) algorithm. The
K-means clustering algorithm clusters data by iteratively computing a mean intensity for each
class and segmenting the image by classifying each pixel in the class with the closest mean. The
fuzzy c-means algorithm generalizes the K-means algorithm, allowing for soft segmentations
based on fuzzy set theory. The EM algorithm applies the same clustering principles with the
underlying assumption that the data follow a Gaussian mixture model. It iterates between
computing the posterior probabilities and computing maximum likelihood estimates of the
means, covariances, and mixing coefficients of the mixture model.
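The K-means alternation described above, applied to raw pixel intensities, can be sketched in Python (the intensities and initial means are invented; the report's own implementation is in MATLAB):

```python
def kmeans_1d(intensities, means, iterations=10):
    """K-means on pixel intensities: alternately assign each intensity to the
    class with the closest mean, then recompute each class mean from its
    members. `means` supplies the required initial parameters."""
    means = list(means)
    clusters = [[] for _ in means]
    for _ in range(iterations):
        clusters = [[] for _ in means]
        for y in intensities:
            k = min(range(len(means)), key=lambda k: abs(y - means[k]))
            clusters[k].append(y)
        # Keep the old mean if a class ends up empty.
        means = [sum(c) / len(c) if c else means[i]
                 for i, c in enumerate(clusters)]
    return means, clusters

# Invented intensities with two obvious groups; deliberately poor initial means.
final_means, clusters = kmeans_1d([10, 12, 14, 200, 205, 210], [0.0, 255.0])
```

Even from the poor initialization, the means converge to the two intensity groups; with a less favourable data set, the dependence on initial parameters noted above becomes visible.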
Although clustering algorithms do not require training data, they do require an initial
segmentation (or, equivalently, initial parameters). The EM algorithm has demonstrated greater
sensitivity to initialization than the K-means or fuzzy c-means algorithm. Like classifier
methods, clustering algorithms do not directly incorporate spatial modeling and can therefore be
sensitive to noise and intensity inhomogeneities. This lack of spatial modeling, however, can
provide significant advantages for fast computation. Work on improving the robustness of
clustering algorithms to intensity inhomogeneities in MR images has demonstrated excellent
success. Robustness to noise can be incorporated by MRF modeling, as described in the next
section.
Deformable Models:
Deformable models are physically motivated, model-based techniques for delineating region
boundaries by using closed parametric curves or surfaces that deform under the influence of
internal and external forces. To delineate an object boundary in an image, a closed curve or
surface must first be placed near the desired boundary and then allowed to undergo an iterative
relaxation process. Internal forces are computed from within the curve or surface to keep it
smooth throughout the deformation. External forces are usually derived from the image to drive
the curve or surface toward the desired feature of interest.
Proposed system:
The first stage starts with taking a collection of CT scan images from the database (ACSC).
Images are stored in MATLAB and displayed as grayscale images. CT images have low noise
when compared with scan and MRI images, so CT images can be used for detection. The main
advantages of the computed tomography image are its better clarity and its low noise and
distortion. For the experimental purpose, the CT scans of 10 male patients were examined and
stored in a database of images in the JPEG/PNG image standards.
Image pre-processing
All the images undergo several pre-processing steps, such as noise removal and enhancement.
Noise Removal: Image denoising algorithms are perhaps the most widely used in image processing.
The input image is a normal RGB image. The RGB image is converted into a grayscale image
because the subsequent processing operates on a single intensity channel. The grayscale image
may contain noise such as white noise, salt-and-pepper noise, etc. White noise is one of the most
common problems in image processing. This noise can be removed by applying a filter to the image.
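For salt-and-pepper noise in particular, a median filter is the standard remedy, since an isolated impulse is unlikely to survive as the median of its neighbourhood. The Python sketch below is illustrative only (the report's pipeline is in MATLAB, and the noisy patch is invented):

```python
def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighbourhood,
    which suppresses salt-and-pepper impulses. Border pixels are left
    unchanged for simplicity."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    r = size // 2
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = sorted(image[i + di][j + dj]
                            for di in range(-r, r + 1)
                            for dj in range(-r, r + 1))
            out[i][j] = window[len(window) // 2]
    return out

# A flat grey patch corrupted by one salt (255) and one pepper (0) pixel.
noisy = [
    [50,  50, 50, 50],
    [50, 255, 50, 50],
    [50,  50,  0, 50],
    [50,  50, 50, 50],
]
clean = median_filter(noisy)
```

Unlike a simple averaging filter, the median filter removes both impulses without smearing them into the surrounding pixels.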
Image Enhancement: Image enhancement is defined as a way to improve the quality of an image, so
that the resultant image is better than the original one; here it is the process of improving the quality of a
digitally stored image by manipulating the image with MATLAB software. It is quite easy, for
example, to make an image lighter or darker, or to increase or decrease contrast. The aim of
image enhancement is to improve the visual appearance of an image, or to provide a better
transform representation for future automated image processing. Many images like medical
images, satellite images, aerial images and even real life photographs suffer from poor contrast
and noise. It is necessary to enhance the contrast and remove the noise to increase image quality.
The enhancement technique differs from one field to another according to its objective. In the
image enhancement stage we use Gabor filter enhancement technique.
2.4.2 Processing
This stage mainly involves segmentation, which is explained below:
Image Segmentation: In computer vision, segmentation refers to the process of partitioning a
digital image into multiple segments (sets of pixels, also known as superpixels). Image
segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images.
More precisely, image segmentation is the process of assigning a label to every pixel in an image
such that pixels with the same label share certain visual characteristics. The result of image
segmentation is a set of segments that collectively cover the entire image, or a set of contours
extracted from the image. Each of the pixels in a region is similar with respect to some
characteristic or computed property, such as color, intensity, texture. All image processing
operations generally aim at a better recognition of objects of interest, i.e., at finding suitable local
features that can be distinguished from other objects and from the background. The next step is
to check each individual pixel to see whether it belongs to an object of interest or not. This
operation is called segmentation and produces a binary image. A pixel has the value one if it
belongs to the object; otherwise it is zero. After segmentation, it is known which pixel
belongs to which object.
2.4.3 Post-Processing:
Post-processing of the segmentation is done using the following methods.
Thresholding approach: Thresholding is useful in discriminating foreground from the
background. By selecting an adequate threshold value T, the gray level image can be converted
to binary image. The binary image should contain all of the essential information about the
position and shape of the objects of interest (foreground). The advantage of obtaining first a
binary image is that it reduces the complexity of the data and simplifies the process of
recognition and classification. The most common way to convert a gray-level image to a binary
image is to select a single threshold value (T). Then all the gray-level values below T will be
classified as black (0), and those above T will be white (1). Otsu's method, implemented by the
graythresh function, computes a global image threshold. Otsu's method is based on threshold
selection by statistical criteria. Otsu suggested minimizing the weighted sum of within-class
variances of the object and background pixels to establish an optimum threshold. Recall that
minimization of the within-class variance is equivalent to maximization of the between-class
variance. This method gives satisfactory results for images with bimodal histograms.
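Otsu's criterion can be computed directly from an image histogram. The Python snippet below is illustrative (MATLAB's graythresh performs the equivalent computation); it scans all candidate thresholds and keeps the one maximizing the between-class variance, using an invented bimodal toy histogram:

```python
def otsu_threshold(histogram):
    """Otsu's method: choose the threshold that maximizes the between-class
    variance (equivalently, minimizes the weighted within-class variance).
    `histogram[i]` is the number of pixels with intensity i."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t, h in enumerate(histogram):
        w0 += h                    # background class weight
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * h
        m0 = sum0 / w0                            # background mean
        m1 = (total_sum - sum0) / (total - w0)    # foreground mean
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Invented bimodal histogram over intensities 0..7: dark peak at 1, bright at 6.
hist = [4, 8, 4, 0, 0, 3, 7, 3]
t = otsu_threshold(hist)
```

For this toy histogram, the chosen threshold falls in the valley between the two peaks, which is exactly the behaviour that makes the method effective on bimodal images.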
1 MATLAB window.
2 Current Directory.
3 Command History.
4 Command Editor.
5 Workspace.
3.3 Strengths of MATLAB:
MATLAB may behave as a calculator or as a programming language.
MATLAB combines calculation and graphic plotting nicely.
MATLAB is relatively easy to learn.
MATLAB is interpreted (not compiled), so errors are easy to fix.
MATLAB is optimized to be relatively fast when performing matrix operations.
MATLAB does have some object-oriented elements.
3.4 Weaknesses of MATLAB:
MATLAB is not a general-purpose programming language such as C, C++, or FORTRAN.
MATLAB is designed for scientific computing and is not well suited to other applications.
MATLAB is an interpreted language, slower than a compiled language such as C++.
MATLAB commands are specific to MATLAB usage. Most of them do not have a direct
equivalent in other programming languages.
3.5 SUMMARY OF COMMANDS AND OPERATORS USED IN MATLAB
Table 3.1: Arithmetic Operators and Special Characters
Character Description
+ Addition
- Subtraction
/ Division (right)
^ Power or exponentiation
= Assignment operator
Character Description
.* Array multiplication
.^ Array power
Character Description
== Equal to
~= Not equal to
& Logical or element-wise AND
| Logical or element-wise OR
|| Short-circuit OR
Command Description
figure FIGURE, by itself, creates a new figure window and returns its
handle.
figure(H) makes H the current figure, forces it to become visible, and
raises it above all other figures on the screen. If figure H does not
exist and H is an integer, a new figure is created with handle H.
title title('text') adds text at the top of the current axes.
xlabel xlabel('text') adds text below the X-axis on the current axes.
ylabel ylabel('text') adds text beside the Y-axis on the current axes.
CHAPTER 4
OBJECTIVE
Medical image processing is an exciting and active field of research, where disciplines such as
engineering, computer science, physics, biology and medicine cooperate in an interdisciplinary
manner in order to improve healthcare. Most frequently, medical images are the basis of diagnostics,
treatment planning, and treatment, but medical images are likewise important for medical
education, research and epidemiology.
Segmentation is the process of dividing an image into regions with similar properties such as gray
level, color, texture, brightness, and contrast [79]. The role of segmentation is to subdivide the
objects in an image; in the case of medical image segmentation, the objectives are to:
Identify regions of interest, i.e. locate tumors, lesions and other abnormalities.
Measure tissue volume, to measure the growth of a tumor (or its decrease in size with
treatment).
Automate the process so that a large number of cases can be handled with the same
accuracy, i.e. the results are not affected by fatigue, data overload or missed
manual steps.
Achieve fast and accurate results. Very high-speed computers are now available at
modest cost, speeding up computer-based processing in the medical field.
Support faster communication, wherein patient care can be extended to remote areas
using information technology.
CHAPTER 5
IMPLEMENTATION
This section shows how to detect a cell using edge detection and basic morphology. An object
can be easily detected in an image if the object has sufficient contrast from the background. In
this project, the cells are prostate cancer cells.
Edge Detection:
Edge detection is the name for a set of mathematical methods which aim at identifying points in
a digital image at which the image brightness changes sharply or, more formally, has
discontinuities. The points at which image brightness changes sharply are typically organized
into a set of curved line segments termed edges. The same problem of finding discontinuities in
1D signals is known as step detection and the problem of finding signal discontinuities over time
is known as change detection. Edge detection is a fundamental tool in image processing, machine
vision and computer vision, particularly in the areas of feature detection and feature extraction.
Once we have computed a measure of edge strength (typically the gradient magnitude), the next
stage is to apply a threshold, to decide whether edges are present or not at an image point. The
lower the threshold, the more edges will be detected, and the result will be increasingly
susceptible to noise and to detecting edges of irrelevant features in the image. Conversely, a high
threshold may miss subtle edges or result in fragmented edges. If edge thresholding is applied
to just the gradient magnitude image, the resulting edges will in general be thick and some type
of edge-thinning post-processing is necessary. For edges detected with non-maximum
suppression, however, the edge curves are thin by definition and the edge pixels can be linked
into edge polygons by an edge-linking (edge-tracking) procedure. On a discrete grid, the non-
maximum suppression stage can be implemented by estimating the gradient direction using first-
order derivatives, then rounding the gradient direction to multiples of 45 degrees, and finally
comparing the values of the gradient magnitude in the estimated gradient direction.
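The gradient-magnitude-plus-threshold stage described above can be sketched as follows. This Python snippet is illustrative only, with central differences standing in for a full Sobel operator; the image and threshold value are invented:

```python
import math

def gradient_magnitude(image):
    """Edge strength from central-difference gradients; border pixels get 0."""
    rows, cols = len(image), len(image[0])
    mag = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (image[i][j + 1] - image[i][j - 1]) / 2.0
            gy = (image[i + 1][j] - image[i - 1][j]) / 2.0
            mag[i][j] = math.hypot(gx, gy)
    return mag

def edges(image, threshold):
    """Mark pixels whose gradient magnitude exceeds the threshold."""
    mag = gradient_magnitude(image)
    return [[1 if m > threshold else 0 for m in row] for row in mag]

# Invented vertical step edge: dark left half, bright right half.
img = [[0, 0, 100, 100] for _ in range(4)]
e = edges(img, threshold=25)
```

Lowering the threshold here would admit weaker gradients (and, on real images, noise); raising it would eventually drop the step edge entirely, which is the trade-off discussed above.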
Morphological Operations
Morphological processing is built from operations on sets of pixels. Binary morphology
uses only set membership and is indifferent to the value of a pixel, such as its gray level or
color. It relies on the relative ordering of pixel values and is most often applied to binary and
gray-scale images. Through operations such as erosion, dilation, opening and closing, binary
images can be modified to the user's specifications. Binary images are images whose pixels have
only two possible intensity values; they are normally displayed as black and white, with the two
values typically 0 for black and either 1 or 255 for white. Binary images are often produced by
thresholding a gray-scale or color image in order to separate an object in the image from the
background. The color of the object (usually white) is referred to as the foreground color;
the rest (usually black) is referred to as the background color.
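As a tiny Python sketch of the thresholding step just described (the pixel values below are invented purely for illustration):

```python
import numpy as np

# Hypothetical 8-bit gray-scale patch: a bright object on a dark background.
gray = np.array([[ 10,  12, 200, 210],
                 [ 11, 220, 230,  13],
                 [ 12, 215,  14,  15]], dtype=np.uint8)

# Pixels above the threshold become foreground (1), the rest background (0).
threshold = 128
binary = (gray > threshold).astype(np.uint8)
```

The resulting array has only the two values 0 and 1, matching the binary-image convention described above.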
Morphological Operators:
After converting the image to binary format, some morphological operations are applied to the
converted binary image. The purpose of the morphological operators is to separate the
tumour part of the image. The tumour portion is visible as white, having a higher intensity
than the other regions of the image. Some of the MATLAB commands used in this stage are
strel, which creates a morphological structuring element; imerode, which erodes (shrinks) an
image; and imdilate, which dilates (expands) an image [7]. After segmentation and thresholding
some noise remains, and to remove it two important morphological operations are used: opening
and closing.
The basic morphological operations used are:
1. Dilation: generally used for thickening an object.
2. Erosion: generally used for thinning an object.
3. Opening: generally used for smoothing the contour of an object or eliminating thin protrusions.
4. Closing: generally used for filling gaps between close objects.
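The four operations above can be sketched in plain Python/NumPy; this is a didactic sketch of binary morphology on small arrays, not the MATLAB `imerode`/`imdilate` implementation, and the function names are ours.

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if the structuring element,
    centred on it, overlaps any foreground pixel."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = region[se == 1].max()
    return out

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if the structuring element
    fits entirely inside the foreground."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = region[se == 1].min()
    return out

def opening(img, se):
    """Erosion then dilation: removes specks and thin protrusions."""
    return dilate(erode(img, se), se)

def closing(img, se):
    """Dilation then erosion: fills small gaps and holes."""
    return erode(dilate(img, se), se)
```

Opening a noisy mask with a 3x3 structuring element deletes isolated noise pixels while restoring larger objects to their original extent, which is why it is used here for post-threshold noise removal.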
Smoothing
Images taken from a camera will contain some amount of noise. As noise can mislead the result
when finding edges, we have to reduce the noise. Therefore the image is first smoothed [16] by
applying a Gaussian filter.
We have implemented Gaussian filtering with a specific kernel size (N) and Gaussian envelope
parameter σ. The Gaussian filter mask of N×N size is generated by invoking the following function:
private void GenerateGaussianKernel(int N, float S, out int Weight). The function
GenerateGaussianKernel takes the kernel size (parameter N) and the envelope parameter σ
(parameter S) as input to generate the N×N kernel values.
The following subroutine removes noise by Gaussian filtering: private int[,] GaussianFilter(int[,]
Data). The function GaussianFilter convolves each pixel in the image with the kernel
generated. It returns the smoothed image in a two-dimensional array.
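A Python sketch of these two routines might look as follows; the function names mirror, but do not reproduce, the C# code, and the default N and σ values are our own choices.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Generate an n x n Gaussian mask, normalised to sum to 1
    (a sketch of the GenerateGaussianKernel idea)."""
    half = n // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(data, n=5, sigma=1.4):
    """Smooth a 2-D image by convolving every pixel with the kernel
    (edge padding keeps the output the same size as the input)."""
    k = gaussian_kernel(n, sigma)
    half = n // 2
    padded = np.pad(np.asarray(data, dtype=float), half, mode='edge')
    out = np.empty(np.shape(data), dtype=float)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (padded[y:y + n, x:x + n] * k).sum()
    return out
```

Because the kernel sums to 1, smoothing a constant region leaves it unchanged, while random noise is averaged away.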
Finding Gradients
The principle of the Canny algorithm is based on finding edges where the intensity of the gray
image changes most sharply. These points are marked by determining the gradient of the image.
The gradient at each pixel is calculated by applying the Sobel operator: for each pixel, the
partial gradients in the x and y directions are determined by applying the corresponding kernels.
By applying the Pythagorean theorem, we calculate the magnitude of the gradient (the strength
of the edge) at each point. Edges are marked where the gradients of the image have large
magnitudes.
The Sobel Gx and Gy masks are used to generate the partial derivatives ∂f/∂x and ∂f/∂y (partial
spatial derivatives) of the image. The spatial gradient ∇f(x, y) is obtained from these partial
derivatives. The following function implements differentiation using the Sobel filter mask:
private float[,] Differentiate(int[,] Data, int[,] Filter)
The function takes the gray image as input. The color values of the gray image are stored in a
two-dimensional array. The image array is smoothed first; later, a derivative of the image is
computed by calling the function Differentiate. This function returns the gradient image as a
two-dimensional array.
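The Sobel differentiation and the Pythagorean combination of the two partial derivatives can be sketched in Python as follows; this is an illustrative sketch (borders are simply left at zero), not the project's Differentiate function.

```python
import numpy as np

# Sobel masks for the partial derivatives df/dx and df/dy.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def differentiate(data, mask):
    """Apply a 3x3 mask to every interior pixel (borders stay zero)."""
    out = np.zeros(data.shape, dtype=float)
    for y in range(1, data.shape[0] - 1):
        for x in range(1, data.shape[1] - 1):
            out[y, x] = (data[y - 1:y + 2, x - 1:x + 2] * mask).sum()
    return out

def gradient_magnitude(data):
    """Edge strength: sqrt(Gx^2 + Gy^2) at every pixel (Pythagoras)."""
    gx = differentiate(data, SOBEL_X)
    gy = differentiate(data, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a vertical intensity step, Gy vanishes and the magnitude peaks on the columns adjacent to the step, which is where the edge is marked.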
SOURCE CODE
Step 1: Read Image
Read in the cell.tif image, which is an image of a prostate cancer cell.
I = imread('cell.tif');
figure, imshow(I), title('original image');
text(size(I,2),size(I,1)+15, ...
'Image courtesy of Alan Partin', ...
'FontSize',7,'HorizontalAlignment','right');
text(size(I,2),size(I,1)+25, ...
'Johns Hopkins University', ...
'FontSize',7,'HorizontalAlignment','right');
An alternate method for displaying the segmented object would be to place an outline around the
segmented cell. The outline is created by the bwperim function.
BWoutline = bwperim(BWfinal);
Segout = I;
Segout(BWoutline) = 255;
figure, imshow(Segout), title('outlined original image');
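The idea behind bwperim can be sketched in Python as the set of foreground pixels that touch the background; this is a sketch of the concept, not MATLAB's implementation.

```python
import numpy as np

def perimeter(mask):
    """Outline of a binary object: foreground pixels that have at least one
    4-connected background neighbour (the idea behind MATLAB's bwperim)."""
    m = np.asarray(mask, dtype=bool)
    padded = np.pad(m, 1)
    # A pixel is "interior" if all four 4-connected neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior
```

Setting the original image to white along this one-pixel outline reproduces the outlined display produced by the MATLAB snippet above.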
Conclusions
Segmentation techniques based on gray level techniques such as thresholding, and region based
techniques are the simplest techniques and find limited applications. However, their performance
can be improved by integrating them with artificial-intelligence techniques. Techniques based on
textural features utilizing an atlas or look-up table give excellent results on medical image
segmentation; however, they need expert knowledge to build the atlas. The limitation of
atlas-based techniques is that, under certain circumstances, it becomes difficult to correctly
select and label the data, and they have difficulty segmenting complex structures with variable
shape, size, and properties. In such situations it is better to use unsupervised methods such as
the fuzzy c-means algorithm.
References:
1. Research Project on Image Contrast Enhancement Methods; Dept. of Radio
Communication and Video Technology, University of Sofia.
2. Efficient Algorithm for Contrast Enhancement of Natural Images; Shyam Lal and Mahesh
Chandra, Dept. of ECE, Moradabad Institute of Technology, India.
3. Review of Various Image Contrast Enhancement Techniques; Vijay A. Kotkar and Sanjay S.
Ghardi, Research Scholar, Dept. of CSE, Maharashtra, India.
4. Contrast Limited Adaptive Histogram Equalization for Qualitative Enhancement; Neethu
M. Sasi and V. K. Jayasree, Govt. Model Engg. College, Cochin University of Science &
Technology, Thrikkakara, Kerala, India.
5. Region Based Contrast Limited Adaptive Histogram Equalization; Sonia Goyal and Seema, YCOE,
Punjabi University Campus, Talwandi Sabo, Punjab, India.
6. Different Histogram Equalization Based Contrast Enhancement Techniques; Shefali Gupta and
Yadwinder Kaur, Chandigarh Group of Colleges, Gharuan, Mohali, India.
7. A Comparative Analysis of Histogram Equalization Based Techniques for Contrast
Enhancement and Brightness Preserving; Raju A., Dwarakish G. S., and Venkat Reddy,
National Institute of Technology Karnataka, India.
8. Adaptive Image Contrast Enhancement Using Generalizations of Histogram Equalization;
J. Alex Stark, Cambridge University, U.K.
9. Comparison of Histogram Equalization Techniques for Image Enhancement of Grayscale
Images; Dinesh Sonkar and M. R. Parsai, Dept. of ECE, Jabalpur Engg. College, Jabalpur,
M.P., India.