I. INTRODUCTION
YUV-KL, and YCgCr-YES,
Y = 0.299R + 0.587G + 0.114B                         (1)

Cb = 0.564(B − Y) + 128
Cr = 0.713(R − Y) + 128                              (2)
The YCbCr colour space was developed in European television studios for image compression work. Y represents the luma, i.e. luminance computed from nonlinear RGB. YCbCr is able to separate luminance from chrominance, which makes this colour space attractive for skin colour modelling. The smooth face skin region is converted into this colour space for sampling. If more than one face is detected in the image, the process is looped until no further faces are detected. The individual results, each obtained with its own dynamic threshold, are then merged. This is because each face generates unique threshold values according to the skin tone variations obtained from that detected face.
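As an illustration of how the YCbCr conversion above is typically computed (a minimal sketch assuming 8-bit RGB input and the standard BT.601 coefficients; the function name and full-range 128 offsets are our choices, not the paper's):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) uint8 RGB image to BT.601 Y, Cb, Cr planes.

    Y is the luma computed from nonlinear RGB; Cb and Cr are the
    blue- and red-difference chroma, offset to centre on 128.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = 0.564 * (b - y) + 128.0            # blue-difference chroma
    cr = 0.713 * (r - y) + 128.0            # red-difference chroma
    return y, cb, cr
```

For a grey pixel with R = G = B, the luma equals that grey value and both chroma planes come out at exactly 128, which is why skin modelling concentrates on the deviation of Cb and Cr from that centre.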
[Figure: elastic elliptical face mask defined by centre point (x0, y0), major axis a, minor axis b, and eye coordinates c, d]
III.
Fig. 3. Online skin colour extraction obtained from the face. (a) input, (b) elastic elliptical mask region based on eye rotation, (c) smooth skin region using edge detector and dilation process, (d)-(f) smoothed skin region in RGB, YCbCr and HSV
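The edge-detector-plus-dilation step shown in panel (c) of Fig. 3 could be sketched as follows. This is a pure-NumPy illustration under our own assumptions: the gradient operator, the edge threshold value, the dilation count, and the function names are not taken from the paper, and `np.roll` wraps at the image borders (a minor artifact acceptable for a sketch).

```python
import numpy as np

def smooth_skin_region(gray, mask, edge_thresh=40.0, dilate_iters=2):
    """Keep only the 'smooth' skin pixels inside the elliptical face mask.

    Strong edges (eyes, brows, mouth) are detected via the gradient
    magnitude, grown by binary dilation, and subtracted from the mask.
    """
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)                 # per-axis intensity gradients
    edges = np.hypot(gx, gy) > edge_thresh  # binary edge map
    # Dilate edges with a 3x3 square structuring element.
    for _ in range(dilate_iters):
        grown = edges.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
        edges = grown
    return mask & ~edges                    # smooth skin = mask minus edges
```

On a uniform image the gradient is zero everywhere, so no pixels are removed and the elliptical mask passes through unchanged.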
skin(x, y) = 1, if Tmin ≤ C(x, y) ≤ Tmax
skin(x, y) = 0, otherwise                            (3)
Multiple thresholds are calculated during the online skin sampling process. These dynamic threshold values are then used to classify skin and non-skin pixels in still colour images.
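As a sketch of how such per-face thresholds might be derived from the sampled skin pixels (the plain per-channel min/max rule and the choice of the Cb and Cr channels are our assumptions for illustration, not necessarily the paper's exact formulation):

```python
import numpy as np

def skin_thresholds(cb, cr, mask):
    """Derive dynamic thresholds from the sampled smooth skin region.

    `cb` and `cr` are chroma planes; `mask` is a boolean array marking
    the smoothed skin pixels inside the elliptical face region.
    Returns (min, max) bounds for each chroma channel.
    """
    samples_cb = cb[mask]   # chroma values of the sampled skin pixels
    samples_cr = cr[mask]
    return ((samples_cb.min(), samples_cb.max()),
            (samples_cr.min(), samples_cr.max()))
```

Because the bounds are recomputed per detected face, each face contributes its own threshold pair, matching the paper's point that thresholds follow the skin tone of the individual face.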
C. Dynamic Skin Classification
Finally, the dynamic threshold values obtained from the online skin sampling method classify each pixel as skin or non-skin according to the minimum and maximum threshold values defined by Equation 3: a value of 1 represents a skin pixel and 0 a non-skin pixel. If the input image contains more than one face, the proposed method is looped and applied to each face until no further face is identified. Lastly, the individual results for each face are merged to produce the final detection output, as shown in Fig. 4. White and black pixels represent skin and non-skin regions, respectively.
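A minimal sketch of this classify-then-merge step, assuming per-channel [min, max] bounds from the sampling stage and a logical OR as the merging operation (the channel choice and function names are ours):

```python
import numpy as np

def classify_skin(cb, cr, cb_range, cr_range):
    """Label each pixel 1 (skin) if every channel lies within its
    per-face [min, max] thresholds, else 0 (non-skin)."""
    cb_lo, cb_hi = cb_range
    cr_lo, cr_hi = cr_range
    return ((cb >= cb_lo) & (cb <= cb_hi) &
            (cr >= cr_lo) & (cr <= cr_hi)).astype(np.uint8)

def merge_faces(masks):
    """Merge the per-face binary results with a logical OR:
    a pixel is skin in the final output if any face's thresholds
    classified it as skin."""
    out = np.zeros_like(masks[0])
    for m in masks:
        out |= m
    return out
```

The OR merge means a looser threshold from one face can never erase skin found with another face's thresholds, which is what produces the combined white regions in the final output.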
Fig. 4. Merged result for more than one face. (a) Input image, (b) individual threshold results, (c) final result
Fig. 5. Qualitative comparison on a single-face image from the Pratheepan dataset [16]. From left to right: input image, ground truth, Yogarajah et al. [16], Tan et al. [17], and the proposed method
IV.
Method       FP (%)     Accuracy (%)   Precision (%)
YCbCr-SV      6.9887       84.05           91.49
CbCr-SV       6.9984       83.48           91.48
YCbCr-RGB     8.3235       83.48           90.04
CbCr-RGB      8.8040       83.81           89.67
CbCr         14.8803       85.86           85.45
RGB          17.6375       83.32           85.33
HSV          19.6069       81.57           81.27
V. CONCLUSION
ACKNOWLEDGMENT
The authors would like to acknowledge Ministry of
Higher Education (MOHE) and Universiti Teknologi
Malaysia (UTM) for supporting this research under Research
University Grant (RUG) vote 10J28 and Science Fund Grant
(MOSTI) vote 01-01-06-SF1167. We would also like to
thanks to Dr. See Seng Chan from Universiti Malaya,
Malaysia for giving an access to their dataset.
REFERENCES