
The 8th International Conference on Science, Technology and Innovation
for Sustainable Well-Being (STISWB VIII), 15-17 June 2016,
Yangon, Myanmar

CST-101

Object Positional Analysis by Using Image Processing Technique
Sivapong Phetsong1*, Nantawatana Weerayuth2, Dheeraporn Saentawee3
1 Department of Mechanical Engineering, Faculty of Engineering and Industrial Technology,
Silpakorn University (Sanam Chandra Palace Campus), Nakhon Pathom 73000, Thailand
E-mail: sivapong@su.ac.th
2 Department of Mechanical Engineering, Faculty of Engineering, Ubon Ratchathani University,
Warinchamrap, Ubon Ratchathani 34190, Thailand, E-mail: weerayuth_gm@hotmail.com
3 Center of Excellence for Astronautical and Marine Engineering,
King Mongkut's University of Technology North Bangkok, Bangkok 10800, Thailand

Abstract
Image processing is an essential technology in modern computer systems. It can be applied to many kinds of work and can reduce human error in tedious tasks. This research studies the feasibility of developing an image processing solution suitable for integration into a microcontroller in future work. We measure the distance to an object, analyze the object's position, and calculate the object's size. The results of this work will be used to implement the solution on a microcontroller. We use the stereo vision technique to calculate the distance, but non-linearity introduces calculation error. After compensating for the non-linearity, we found that the maximum distance error is about 2.82% and the area calculation error is about 5.6%. The maximum error of the object-to-object distance is 6.7%. These errors are caused by the edge detection, which introduces error into the position calculation and propagates into the other results as described.
Keywords: Stereo Vision, Image Processing, Microcontroller

1. Introduction
Image processing is an essential technology in modern computer systems and can be applied to many kinds of work. In traffic systems, it can be used for online license plate recognition together with OCR technology, for measuring vehicle speed, or for analyzing car images in a car park management system [1]. It can also be applied in other areas, such as comparing parts of pictures to check the similarity of two images, a task that otherwise has to be performed continuously by humans [2]. However, manual data analysis introduces considerable human error and requires a great deal of tedious labor. Adopting image processing technology can reduce these errors, but it still requires substantial CPU time. Integrating this technology with a microcontroller, which has less CPU performance and less memory, is therefore a major challenge: even though modern microcontrollers offer high performance, that performance is still quite limited, and applications of image processing on microcontrollers remain rare. This research studies the feasibility of developing an image processing solution suitable for integration into a microcontroller in future work. We measure the distance to an object, analyze the object's position, and calculate the object's size. The results of this work will be used to implement the solution on a microcontroller.


2. Fundamentals
2.1 Image Processing
There are two types of image data in a computer: vector and raster. Vector images are commonly used in engineering design, while raster images are the typical output of a digital camera and are the type used in this work. A raster image records the color of each dot in the picture, called a pixel. One image may contain millions of pixels, depending on the detail needed and on the capture device; the digital camera therefore directly affects picture quality. Before use, the pictures have to be processed to reach a suitable quality and meet the conditions of each task: they may be sharpened, noise-reduced, or cropped to keep the most important part of the picture. The pictures are then ready for other applications, meeting the requirements in both size and quality, and can be processed mathematically.
2.2 The Gray Scale
Grayscale conversion changes the colors of an image into a black-to-white gradient by transforming every pixel in the picture. Each image pixel is composed of three primary colors: red, green, and blue (RGB). To convert to grayscale, the three primary colors are averaged with suitable weights. In the computer system, each color channel is divided into 256 levels ranging from 0 to 255, from dark to light. The problem is that the three primary colors have different perceived brightness, so each color requires a different factor for the best transformation. Equation (1) gives the suitable weights for converting a color pixel to grayscale.

grey = (blue x 0.11) + (green x 0.59) + (red x 0.30)                (1)
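Equation (1) can be sketched in a few lines of Python with NumPy. This is an illustrative implementation, not the software developed in this work:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale using the
    luminance weights of Eq. (1): 0.30 R + 0.59 G + 0.11 B."""
    weights = np.array([0.30, 0.59, 0.11])
    grey = rgb[..., :3] @ weights          # weighted average per pixel
    return np.rint(grey).astype(np.uint8)  # round back to 0-255 levels

# A 1 x 2 test image: one pure-red pixel and one pure-white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(to_grayscale(img))  # red -> 76, white -> 255
```

Because the three weights sum to 1.0, a white pixel stays at 255 and the full 0-255 range is preserved.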

2.3 The distance between the camera and the object
To calculate the distance between the camera and the subject, we use two cameras and the stereo vision technique [3], [4]. First, both cameras must be aligned in parallel and the distance between them fixed. Pictures taken from both cameras then correspond to the two views of the human eye, so the distance can be calculated easily. Consider the images from both cameras taken at the same time: when these pictures are merged, the object appears at overlapping but shifted positions. By analyzing this overlap distance, we can estimate the distance from the camera: the larger the overlap distance, the closer the object; an object with less overlap is farther away. This relation is illustrated in Figure 1.
Fig. 1: Vectors from the cameras to the object (left and right cameras separated by baseline b, with vectors L and R to the object and image positions Xl and Xr in the left and right images)



Considering the geometric relationships in Figure 1, the distance between the object and the camera can be calculated using equation (2).

z = b f / (xl - xr)                (2)

Where  z   is the distance between the camera and the object,
       b   is the distance between the two cameras,
       f   is the focal length of the cameras,
       xl  is the distance of the object from the center of the left image,
       xr  is the distance of the object from the center of the right image.

The difference between the object's distances from the image center in the two images is called the offset. We can use the offset to calculate the distance Z from the camera to the object by using equation (2) [5], [6].
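Equation (2) reduces to a one-line computation. The sketch below assumes a hypothetical focal length in pixels (the paper does not state the calibration values); the 5 cm baseline matches the camera module described in Section 3:

```python
def depth_from_disparity(x_l, x_r, baseline_cm=5.0, focal_px=700.0):
    """Distance z = b*f / (xl - xr), Eq. (2).
    baseline_cm: distance b between the two cameras (5 cm in this work).
    focal_px: focal length f in pixels (hypothetical calibration value)."""
    disparity = x_l - x_r
    if disparity <= 0:
        raise ValueError("disparity must be positive for an object in front")
    return baseline_cm * focal_px / disparity

# With b = 5 cm and f = 700 px, a 70-pixel offset gives z = 50 cm.
print(depth_from_disparity(100, 30))  # 50.0
```

Note the reciprocal form: depth falls off as 1/disparity, which is one reason the relation between offset and distance observed in Section 3 is nonlinear.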

Fig. 2: Overlap of the object in the two pictures

3. Experiment
The test procedure was carried out by developing software to connect to the cameras and analyze the results, and by building a camera module that holds the two cameras in parallel alignment. The distance between the cameras is fixed at 5 cm, as shown in Figure 3. The cameras connect to a computer and interface directly with the software we developed. The main components of the program are the connections to both cameras; each camera can be configured as the left or the right one. The software also has features to improve picture brightness, contrast, and sharpness, and to apply basic error correction. The pictures are then passed to the analysis stage, where the software calculates the offset of the object between the two pictures and the distance to the object.
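The paper does not describe how the software finds the offset between the two pictures; a common approach is window-based matching along a scanline. The sketch below is a simplified sum-of-absolute-differences (SAD) search, offered only as an illustration of the idea:

```python
import numpy as np

def find_offset(left_row, right_row, x_l, win=5, max_disp=60):
    """Estimate the disparity of the feature at column x_l of the left
    scanline by sliding a window over the right scanline and keeping the
    shift with the smallest sum of absolute differences (SAD).
    Simplified sketch; not the matching method used by the actual software."""
    template = left_row[x_l - win : x_l + win + 1].astype(float)
    best_d, best_sad = 0, np.inf
    for d in range(max_disp):
        x_r = x_l - d                       # candidate column in right image
        if x_r - win < 0:
            break
        candidate = right_row[x_r - win : x_r + win + 1].astype(float)
        sad = np.abs(template - candidate).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic scanlines: a bright blob centered at column 80 on the left
# appears centered at column 68 on the right (true disparity = 12).
left = np.zeros(200);  left[78:83] = 255
right = np.zeros(200); right[66:71] = 255
print(find_offset(left, right, 80))  # 12
```

The recovered offset is what equation (2) consumes as (xl - xr).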


Fig 3: Camera Module

Fig. 4: Analysis Software


The first experiment was a calibration to make the calculation more accurate. We found that the relation between the camera-to-object distance and the offset between the two pictures is nonlinear; it can be fitted with an exponential equation, as shown in Figure 5. Likewise, the relation between pixels per centimeter and the distance of the object is also nonlinear, as shown in Figure 6.
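An exponential calibration of the kind described can be fitted by linear least squares on the logarithm of the distance. The data points below are hypothetical, since the paper presents its calibration data only graphically in Figures 5 and 6:

```python
import numpy as np

# Hypothetical calibration pairs (offset in pixels, measured distance in cm).
offsets = np.array([140.0, 70.0, 47.0, 35.0, 28.0])
dists   = np.array([25.0, 50.0, 75.0, 100.0, 125.0])

# Fit z = A * exp(B * offset) by ordinary least squares on log(z):
# log(z) = log(A) + B * offset is linear in the offset.
B, logA = np.polyfit(offsets, np.log(dists), 1)
A = np.exp(logA)

def corrected_distance(offset):
    """Distance estimate using the fitted exponential correction."""
    return A * np.exp(B * offset)
```

Because the fitted slope B is negative, a larger offset maps to a shorter distance, matching the trend in Figure 5.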


Fig. 5: Relation between the offset and the distance between camera and object



Fig. 6: Relation between pixels per centimeter and the distance from the camera

4. Results
The experiments show that the software output still has error when a linear equation is used, caused by the nonlinearity of some parameters as described above. We therefore implemented a correction factor for the nonlinear parameters, using the information in Figures 5 and 6, and repeated the experiment. We tested the model with sample objects of various sizes, triangles, squares, and circles, as shown in Figure 7, varying the distance from 25 to 100 cm. The results show that the program can calculate the distance, as shown in Figure 8. The maximum error of the calculated distance is about 2.82%. We also found that the calculation error occurs at far distances, probably because fewer pixels per centimeter are available there.

Fig. 7: Experiment objects


Fig. 8: Real distance compared with calculated distance

The next experiment measured the object area using an edge detection technique, with the object centroid as a reference. The results show that the area calculated by the software is close to the real area, but still carries a cumulative error of about 5.6%. This may come from edge detection error: the software may detect the wrong edge position. The last experiment measured the object-to-object distance, with a maximum error of about 6.7%, also caused by edge detection error, which makes the centroid calculation inaccurate and leads to error in the vector calculation in the X-Z plane.
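Given an ordered set of detected edge points, the area and centroid can be computed with the shoelace formula. This is a sketch of the standard geometric computation, not the paper's specific implementation, and it assumes the edge pixels have already been ordered around the contour:

```python
import numpy as np

def polygon_area_centroid(pts):
    """Area and centroid of a closed contour given its ordered edge
    points as an N x 2 array, using the shoelace formula."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)   # next vertex, wrapping around
    cross = x * yn - xn * y                   # shoelace cross terms
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return abs(area), (cx, cy)

# A 4 cm x 4 cm square: area 16 cm^2, centroid at (2, 2).
square = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
area, centroid = polygon_area_centroid(square)
print(area, centroid)  # 16.0 (2.0, 2.0)
```

Any error in the detected edge positions shifts both the area and the centroid, which is consistent with the error propagation described above.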
Fig. 9: Area comparison at various distances (measured and calculated areas for the square, circle, and triangle objects)

Fig. 10: Photo and analyzed picture used to calculate the object-to-object distance


Table 1: Distance measurement comparison

                          Real distance (cm)   Calculated distance (cm)   % error
Object 1 and object 2            20                    20.08               0.4%
Object 1 and object 3            20                    20.04               0.2%
Object 2 and object 3            20                    19.45               2.75%
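The object-to-object distances in Table 1 follow from the estimated positions of the two centroids in the camera's X-Z plane. A minimal sketch of that final step, with hypothetical example positions:

```python
import math

def object_to_object(p1, p2):
    """Euclidean distance between two objects given their estimated
    (x, z) positions in the camera's X-Z plane, in cm."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Hypothetical positions: two objects at the same depth z = 50 cm,
# 20 cm apart laterally.
print(object_to_object((-10.0, 50.0), (10.0, 50.0)))  # 20.0
```

Since both coordinates are derived from the centroid and depth estimates, any edge detection error enters this distance twice, once per object.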

5. Summary and conclusion
This research found that the calculation error is small because a correction factor compensates for the distance error, so the calculated distance between the camera and the object has only a small error. However, across the experiments we found that inappropriate brightness and contrast, and sometimes the shade and shadow of the light on the object, are also sources of error. The distance error itself is only 2.82%, but it propagates into larger errors in the subsequent steps. For adoption on a microcontroller, the program still needs improvement to make it less complicated and to consume less CPU time and memory.

6. References
[1] Thanathip Limna, Applying Stereo Vision and Parallel Computing for Supporting the Journey of the Visually Impaired, Thesis, Faculty of Engineering, Prince of Songkla University, 2010.
[2] Chalermpol Longjard, Automatic Lane Detection and Navigation Using Pattern Matching Model, Thesis, School of Electrical Engineering, Suranaree University, 2007.
[3] Orachat Chitsobhuk, Digital Image Processing, Sangunkij Press and Media, Bangkok, 2009.
[4] Milan Sonka, Vaclav Hlavac and Roger Boyle, Image Processing, Analysis and Machine Vision, CL Engineering, 1998.
[5] Stefano Mattoccia, Stereo Vision: Algorithms and Applications, University of Bologna, 2013.
[6] Karl Johnson, Shader-Based Stereo Matching with Local Algorithms, Institution of Computer Science, Lund University, 2003.

