HARDWARE CONFIGURATION:
Processor : Pentium IV
RAM : 1 GB SDRAM
Monitor : 15" Color
Hard Disk : 80 GB
Keyboard : Standard 102 keys
Mouse : 3 buttons
SOFTWARE CONFIGURATION:
Melanomas can occur anywhere on the body, but are more likely to start in certain
locations. The trunk (chest and back) is the most common site in men. In women,
the legs are the most common site. The neck and face are other common places for
melanoma to start.
Image Acquisition
The database we use contains digital images captured with a digital
camera. These images are fed into a computer system for further
processing. The images used in this research are RGB images; since color
is a powerful descriptor, RGB images are the natural choice for this
work. The database images are obtained from different sources, and the
image sizes are non-standard.
Fig: Input Image
Image preprocessing
Preprocessing, the first stage of the detection system, enhances the quality
of an image by removing hairs, noise, and air bubbles on the skin. The
enhanced image is then fed to the next step. Existing preprocessing
techniques can be classified into two groups: those for binary images and
those for grayscale images. The images chosen for this research, however, are
color images.
Fig: Architecture
Wiener filtering
PROPOSED METHOD
In our methodology, the image is first enhanced by extracting the
highest-frequency components from its Curvelet transform and adding them back
to the original image, in order to sharpen edge detail. Subsequently, the
sharpened image is subjected to morphological processing and thresholding to
obtain a binary image, from which boundaries are extracted after further
morphological processing. Finally, Otsu's algorithm is applied to separate
normal skin from cancerous skin. I thus propose a computerised solution for
replacing clinical calculations with feature extraction.
1. For any segmentation strategy, noise removal is a prerequisite; otherwise
one may get many false edges. Our method starts by removing unwanted
particles or noise from the image (I) with a Wiener filter, giving IW. This
filter is useful in situations where the purpose is to reduce noise while
preserving edges. The Wiener filter is statistical in nature, as it adopts a
least-squares (LS) approach to signal recovery in the presence of noise. It
is very effective at reducing both additive noise and blur, which usually
compete against each other.
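As a hedged illustration of this step, the sketch below uses SciPy's `scipy.signal.wiener` (an adaptive local Wiener filter comparable to MATLAB's `wiener2`); the synthetic image, noise level, and 5x5 window are illustrative assumptions, not values from this work.

```python
import numpy as np
from scipy.signal import wiener

# Synthetic test scene: a bright square (sharp edges) plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

# Local adaptive Wiener filtering over a 5x5 neighbourhood:
# smooths flat regions strongly while largely preserving edges.
denoised = wiener(noisy, mysize=5)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

err_noisy, err_denoised = mse(noisy, clean), mse(denoised, clean)
```

The filter adapts to the local mean and variance in each window, which is why the error against the clean image drops while the square's edges survive.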
2. A Forward Discrete Curvelet Transform (FDCT) is applied to the input image
to obtain the finest detail coefficients. The FDCT is a multi-directional
transform in the sense that not only linear contours but also the curved
edges of the contained objects can be captured through its use. Hence, the
Curvelet transform captures structural activity along radial wedges in the
frequency domain and has very high directional sensitivity. It captures
singularities with very few coefficients in a non-adaptive manner. The edge
and singularity details are processed to extract the feature points.
3. The obtained high-pass image (IHP) is added to IW to give an enhanced
image (Ie). This image now has stronger edges than the original and lends
better edge detail to the segmentation step.
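Steps 2 and 3 can be sketched as follows. Since the FDCT requires a dedicated curvelet library, a simple Gaussian high-pass is used here as a stand-in for the fine-scale curvelet coefficients; this is an illustrative approximation of the enhancement step, not the actual FDCT pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, sigma=2.0, gain=1.0):
    """Add the high-frequency detail back to the image to strengthen edges.

    A Gaussian high-pass stands in for the FDCT fine-scale coefficients.
    """
    img = img.astype(float)
    detail = img - gaussian_filter(img, sigma)  # I_HP: high-pass component
    return img + gain * detail                  # I_e: edge-enhanced image

step = np.zeros((32, 32))
step[:, 16:] = 100.0        # a vertical step edge
out = enhance(step)
```

The characteristic overshoot on both sides of the step (values below 0 and above 100) is exactly the edge sharpening the enhancement step relies on.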
So far, we were interested only in pattern matching with the structuring
elements, so no background operation is required in the hit-or-miss
transform. A more useful expression for thinning A symmetrically is based on
a sequence of rotated structuring elements:

{B} = {B1, B2, ..., Bn}

The process is to thin A by one pass with B1, then thin the result with one
pass of B2, and so on, until A is thinned with one pass of Bn.
The entire process is repeated until no further changes occur.
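The sequential thinning described above can be sketched with `scipy.ndimage.binary_hit_or_miss` supplying the hit-or-miss step. The eight rotated structuring-element pairs below are the standard ones from the morphology literature, an assumption rather than elements taken from this work.

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

def thin(A, max_iter=100):
    """Thin A with one pass of each rotated SE pair, repeating until stable."""
    # Edge and corner patterns; 1 in `hit` = required foreground,
    # 1 in `miss` = required background, 0 in both = don't care.
    hit_e = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 1]], bool)
    miss_e = np.array([[1, 1, 1], [0, 0, 0], [0, 0, 0]], bool)
    hit_c = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 0]], bool)
    miss_c = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], bool)
    seq = []
    for k in range(4):  # four 90-degree rotations of each pair
        seq.append((np.rot90(hit_e, k), np.rot90(miss_e, k)))
        seq.append((np.rot90(hit_c, k), np.rot90(miss_c, k)))
    A = A.astype(bool)
    for _ in range(max_iter):
        before = A.copy()
        for hit, miss in seq:
            # Thinning: A minus the hit-or-miss matches of the current SE.
            A = A & ~binary_hit_or_miss(A, hit, miss)
        if np.array_equal(A, before):  # no further changes occur
            break
    return A

A = np.zeros((9, 9), dtype=bool)
A[2:7, 1:8] = True          # a thick rectangular blob
T = thin(A)
```

Each pass only deletes pixels, so the result is always a subset of the input, and the rotated pairs peel the blob from all sides until a thin residue remains.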
(c) Thickening:
Depending on the nature of A, the thickening procedure may result in
disconnected points. Therefore, this method is usually followed by post-processing
to remove disconnected points.
(d) Skeleton:
Segmentation is the most important stage for analyzing an image properly,
since it affects the accuracy of the subsequent steps. However, proper
segmentation is difficult because of the great variety of cancer shapes,
sizes, and colors, along with different skin types and textures. In addition,
some cancers have irregular boundaries, and in some cases there is a smooth
transition between the lesion and the skin.
To address this problem, several algorithms have been proposed. In this
thesis I discuss one method of thresholding: Otsu's method.
Otsu's thresholding is one of the oldest methods in image segmentation. It is
a statistical method based on the probability distribution of the gray
levels, and it is regarded as one of the best automatic threshold-selection
techniques. The basic principle of Otsu's method is to divide the pixels into
two classes, foreground and background; the threshold is found automatically
by maximizing the contrast (between-class variance) between the two classes.
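The criterion just described can be written out in a few lines of NumPy; the sketch below maximizes the between-class variance over all 256 candidate levels. The synthetic bimodal image is an illustrative assumption.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for an 8-bit grayscale image: pick the level that
    maximizes the between-class variance of foreground vs. background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # gray-level probabilities
    omega = np.cumsum(p)                  # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean up to level t
    mu_T = mu[-1]                         # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)    # empty classes contribute nothing
    return int(np.argmax(sigma_b2))

# Synthetic bimodal image: two gray-level populations around 60 and 190.
rng = np.random.default_rng(1)
img = np.where(rng.random((64, 64)) < 0.5, 60, 190).astype(float)
img = np.clip(img + rng.normal(0, 10, img.shape), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
bw = img > t                              # binary segmentation
```

Because the image has two well-separated modes, the maximizing threshold falls in the valley between them, splitting foreground from background automatically.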
Dermatologists use the ABCDE rule to help people spot the signs of melanoma on
their skin:
o Asymmetry
o Border
o Color
o Diameter
o Evolving
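As one example of turning these criteria into computable features, the sketch below scores the Asymmetry criterion for a binary lesion mask by mirroring it about its bounding-box axes and measuring the mismatch. The function name and scoring convention are illustrative assumptions, not part of the clinical rule.

```python
import numpy as np

def asymmetry_score(mask):
    """Asymmetry feature: fraction of the lesion area that fails to overlap
    its own mirror images about the horizontal and vertical axes.
    0.0 means perfectly symmetric; larger values mean more asymmetric."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(bool)
    lr = np.logical_xor(crop, np.fliplr(crop)).sum()   # left-right mismatch
    ud = np.logical_xor(crop, np.flipud(crop)).sum()   # up-down mismatch
    return (lr + ud) / (2.0 * crop.sum())

square = np.zeros((10, 10), dtype=bool)
square[2:8, 2:8] = True          # symmetric "lesion"
ell = np.zeros((10, 10), dtype=bool)
ell[1:9, 1:4] = True
ell[6:9, 1:9] = True             # asymmetric L-shaped "lesion"
```

A symmetric region scores 0.0, while the L-shape scores higher; in practice such scores would be computed on the segmented lesion mask from the previous stage.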
SAMPLE CODING
close all
clear all
clc
%%
global Input_Image
global EN_IP
[filename, pathname] = uigetfile({'*.jpg;*.png;*.bmp'}, 'Select an input image');
fullpathname = strcat(pathname, filename);
Input_Image = imread(fullpathname);
pause(2)
%%
RGB_G = rgb2gray(Input_Image);
figure, imshow(RGB_G);
G_NSE = imnoise(RGB_G,'gaussian',0,0.025);
figure,imshow(G_NSE)
pause (2)
%%
FLTD_IMG = wiener2(G_NSE,[5,5]);
figure, imshow(FLTD_IMG)
pause (2)
%%
h = fspecial('unsharp');
US_IMG = imfilter(Input_Image, h);
figure, imshow(US_IMG), title('Filtered Image');
pause(2)
%% Boundary Extraction
S = strel('disk',4,0);            % structuring element
D = ~im2bw(Input_Image);          % binary image
F = imerode(D,S);
BND = D & ~F;                     % boundary = image minus its erosion
figure, imshow(D); title('binary image');
figure, imshow(BND); title('extracted boundary');
pause(2)
%%
INP_SVMS = im2bw(Input_Image, graythresh(Input_Image));
INP_SVMS = bwareaopen(INP_SVMS, 50);    % remove objects smaller than 50 pixels
INP_GRP = bwlabel(INP_SVMS);
INP_TR_ST = regionprops(INP_GRP, 'PixelIdxList');
%%
% se = strel('disk',1);
% a_bw = imclose(INP_TR_ST,se);
% a_bw=~INP_TR_ST;
img2=im2bw(Input_Image,graythresh(Input_Image));
figure, imshow(img2)
pause (2)
B = bwboundaries(img2);
figure, imshow(img2)
cc = bwconncomp(img2,4);
number =cc.NumObjects;
number
s =regionprops(img2, 'Area');
N_O = numel(s);
N_O
% INP_TR_S1
hold on
%% Image Classification
SRC = im2double(Input_Image);
In1 = SRC(:,:,1);                 % red channel
IN2 = SRC(:,:,2);                 % green channel
IN3 = SRC(:,:,3);                 % blue channel
INP_S_VL = (In1 + IN2 + IN3)/3;   % mean intensity
SBL_VL = fspecial('sobel');
SBL_VL_INV = SBL_VL';
SB_FLT = imfilter(double(INP_S_VL), SBL_VL, 'replicate');
SB_RPLKT = imfilter(double(INP_S_VL), SBL_VL_INV, 'replicate');
SR_V = sqrt(SB_RPLKT.^2 + SB_FLT.^2);
INP_CNV_FLT = deconvwnr(INP_S_VL,SBL_VL);
LN_VAL = watershed(INP_CNV_FLT);
LBL_IMG = label2rgb(LN_VAL);
figure, imshow(LBL_IMG); title('Image Analysis')
STRL_FUN = strel('disk',20);
IMG_ERD = imerode(INP_S_VL, STRL_FUN);
IMG_RCNST = imreconstruct(IMG_ERD, INP_S_VL);   % opening by reconstruction
IMG_RGNL_ILM = imregionalmin(IMG_RCNST);        % regional minima as markers
figure, imshow(IMG_RGNL_ILM); title('Image Analysis')
FLM_IMG = INP_S_VL;
FLM_IMG(IMG_RGNL_ILM) = 255;
figure, imshow(FLM_IMG); title('Image Analysis')
IMG_STRL_FUN = strel(ones(7,7));
IM_FGM2 = imclose(IMG_RGNL_ILM, IMG_STRL_FUN);
IMG_FGM3 = imerode(IM_FGM2, IMG_STRL_FUN);
IMG_FGM4 = bwareaopen(IMG_FGM3, 20);
FNL = INP_S_VL;
FNL(IMG_FGM4) = 255;
FNL1=im2bw(FNL);
a_citra_keabuan = rgb2gray(Input_Image);     % grayscale conversion
threshold = graythresh(a_citra_keabuan);     % Otsu threshold
a_bww = im2bw(a_citra_keabuan, threshold);
a_bw = bwareaopen(a_bww, 80);                % remove objects smaller than 80 pixels
se = strel('disk',2);
a_bw = imclose(a_bw, se);
a_bw = ~a_bw;                                % invert so the lesion is foreground
[LBL, N_O] = bwlabel(a_bw, 8);
LBL
N_O
if N_O > 0
    uiwait(msgbox('Stage II: The segmented Image is affected by Cancer'));
end
clc
CONCLUSION
In this project, a classic image processing algorithm is designed to segment
the lesion picture for shape analysis. After describing the dataset and
method used in this project, performance analysis shows that the algorithm
can extract shapes even more accurately than the provided labels. Finally,
some suggestions are proposed for further optimization of the algorithm.
REFERENCES
1. M. J. Eide, M. M. Asgari, S. W. Fletcher, A. C. Geller, A. C. Halpern, W. R.
Shaikh, L. Li, G. L. Alexander, A. Altschuler, S. W. Dusza, A. A.
Marghoob, E. A. Quigley, and M. A. Weinstock; Informed (Internet course
for Melanoma Early Detection) Group, “Effects on skills and practice from a
web-based skin cancer course for primary care providers,” J. Am. Board
Fam. Med. 26(6), 648–657 (2013).
2. American Cancer Society, Cancer Facts and Figures (2014).
3. N. R. Telfer, G. B. Colver, and C. A. Morton; British Association of
Dermatologists, "Guidelines for the management of basal cell carcinoma,"
Br. J. Dermatol. 159(1), 35–48 (2008).
4. A. Kricker, B. Armstrong, V. Hansen, A. Watson, G. Singh-Khaira, C.
Lecathelinais, C. Goumas, and A. Girgis, "Basal cell carcinoma and
squamous cell carcinoma growth rates and determinants of size in
community patients," J. Am. Acad. Dermatol. 70(3), 456–464 (2014).
5. E. A. Gordon Spratt and J. A. Carucci, "Skin cancer in
immunosuppressed patients," Facial Plast. Surg. 29(5), 402–410 (2013).
6. S. Ogden and N. R. Telfer, "Skin cancer," Medicine (Baltimore) 37(6),
305–308 (2009).
7. K. Korotkov and R. Garcia, "Computerized analysis of pigmented skin
lesions: a review," Artif. Intell. Med. 56(2), 69–90 (2012).
8. A. O. Berg, D. Best; US Preventive Services Task Force, "Screening for
skin cancer: recommendations and rationale," Am. J. Prev. Med. 20(3
Suppl), 44–46 (2001).
9. M. M. Rahman and P. Bhattacharya, "An integrated and interactive
decision support system for automated melanoma recognition of
dermoscopic images," Comput. Med. Imaging Graph. 34(6), 479–486 (2010).
10. A. G. Isasi, B. G. Zapirain, and A. M. Zorrilla, "Melanomas non-invasive
diagnosis application based on the ABCD rule and pattern recognition image
processing algorithms," Comput. Biol. Med. 41(9), 742–755 (2011).
11. P. G. Cavalcanti and J. Scharcanski, "Automated prescreening of
pigmented skin lesions using standard cameras," Comput. Med. Imaging
Graph. 35(6), 481–491 (2011).
12. J. A. Jaleel, S. Salim, and R. B. Aswin, "Artificial neural network based
detection of skin cancer," IJAREEIE 1, 200–205 (2012).
13. M. Sadeghi, M. Razmara, T. K. Lee, and M. S. Atkins, "A novel method for
detection of pigment network in dermoscopic images using graphs," Comput.
Med. Imaging Graph. 35(2), 137–143 (2011).
14. C. Barata, J. S. Marques, and J. Rozeira, "A system for the detection of
pigment network in dermoscopy images using directional filters," IEEE
Trans. Biomed. Eng. 59(10), 2744–2754 (2012).