
Visual Servoing for an Omnidirectional Mobile Robot Using a Multilayer Perceptron Neural Network

Jonathan Eduardo Cruz Ortiz
Research Group in Mobile Robotics (ROMA)
Universidad Distrital Francisco Jose De Caldas
Facultad Tecnologica - Ingenieria en Control
Bogota, Colombia
Email: jecruzo@correo.udistrital.edu.co
Abstract: This paper presents an Image-Based Visual Servoing (IBVS) scheme for an omnidirectional robot, together with an algorithm that navigates a mobile robot through an unknown environment, exploring a path from a start configuration to a goal configuration along a set of checkpoints while avoiding obstacles. Such an algorithm can serve as part of a high-level path planner, a critical component in the field of robotics. We have adopted WowWee's Rovio as the robotic platform for our experiments. The system uses vision and image processing to detect obstacles of various shapes and colors and to recognize the environment, and it uses an IR sensor to determine whether detected figures (such as red-colored ones) are obstacles or not. It then analyzes the information about the obstacles and decides which direction the robot should move, thanks to a trained multilayer perceptron neural network. Our algorithm communicates with the Rovio through CGI commands, as described in the API Specification; wrapper classes were written in LabVIEW.
Index Terms: Rovio, neural network, artificial vision, image processing, navigation system, visual servoing, omnidirectional robot, obstacle avoidance.

I. INTRODUCTION
In recent years there have been projects with a type of robot that has unique characteristics compared to other robots [1], [2]: the omnidirectional robots. These are classified as locomotion platforms of the same name and provide a structure that can move in any direction at any time without requiring a specific orientation for navigation. This type of movement requires wheels that can move in more than one direction [3]. Omnidirectional movement has become popular in mobile robots because it allows the robot to move in a straight line from a source point to another point without having to rotate before moving. Additionally, path planning can be combined with a rotation, so that the robot reaches its destination at the correct angle. These robots have already been studied in the field of robotics, [4], [5] from positioning in a plane [6] to the determination of their kinematics and dynamics [7].
On the other hand, the process of combining vision and robot control is commonly known as visual servoing. This kind of control of robotic systems has large potential applications when robots operate in an unstructured environment. In this approach, the vision system mimics the human sense of sight and allows non-contact measurement of the environment [8]. Visual servoing schemes are classified as follows: [9]
Position-based control (PBVS) [10], where the vision information is interpreted with respect to a base or world coordinate system. It requires precise knowledge of the kinematic robot model, the exact location of the target in world coordinates, and a precise camera calibration model. The requirement for prior knowledge makes this technique unsuitable for unstructured environments. Image-based control (IBVS) [11], [12] uses feature vectors from the camera image plane as input to the controller. In this paper we use image-based control and focus on 2D systems: control values are computed directly from image features. Visual servoing is an attractive research topic in robotics that tries to mimic human and animal visual control principles and behaviors for use in various robot control tasks. For instance, visual information obtained from a robot camera can be used to generate control input for the robot to track the trajectory of a moving object [8], [13]. In this paper we propose an IBVS scheme for an omnidirectional mobile robot based on a neural network (multilayer perceptron). In our vision-based approach, robot navigation is controlled by observing and extracting relevant information about objects in the environment through images. The organization of this paper is as follows: related work is reviewed in Section II; the robotic platform, methodology, platform control, vision algorithm, and neural network design are presented in Sections III through VII; implementation and results are reported in Section VIII, followed by conclusions in Section IX.
II. RELATED WORK
In many recent works [14], mobile robot navigation is done by processing information from vision sensors. [15] A visual servoing strategy based on estimating the height of features on the plane of motion was proposed for mobile robots equipped with different types of vision sensors, such as panoramic cameras. [16] The article "Robot motion control from a visual memory" [17] presents a new approach for robot motion control using images acquired by an on-board camera; a particularity of this method is that it avoids reconstructing the entire scene without limiting the possible displacements. In another case a camera was installed on a robot, with an experimental setup using a 7-DOF robot manipulator from PowerCube, using a neural network and artificial vision for movement control. [18] Another article deals with the navigation algorithm of an omnidirectional mobile robot: given some checkpoints in an unknown environment, the robot can be guided autonomously from start to goal along those checkpoints while avoiding obstacles. [9] The proposed algorithm mainly utilizes the robot's embedded video camera, and the navigation scenario considered includes an unknown environment with colored figures. Neural networks are also used in robotic arms: [19] used a self-organizing-map neural network for end-effector control and for pattern recognition. In [20], the team developed a management tool for home robots with a graphical editing interface; the user assigns instructions by selecting a tool from a toolbox and sketching on a bird's-eye view of the environment. Layering supports the management of multiple tasks in the same room, and the layered graphical representation gives a quick overview of, and access to, rich information tied to the physical environment; the paper describes the prototype system and reports an evaluation of it. [21] The goal of another project [22] is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer mounted on the robot. It uses basic visual algorithms to isolate the angular location of the laser dot in the frame and uses that to determine the distance to the object. A related project concerns computer vision with the Rovio's embedded camera: the authors use image processing and analysis to solve several modern technical issues, such as autonomous robot navigation in an unknown environment, obstacle detection, obstacle avoidance, and also map editing of an unknown environment. The final goal of that project is to read physical sensors (their target) to extract information such as the gas level in a mine. [23]
Other projects include: one that seeks to implement planning, vision, and classification algorithms in the context of locating a Heineken mini-keg in an unknown environment, [24] and one that aims to make the Rovio a fully autonomous robot that can follow waypoints based only on images and reach the goal position while learning and avoiding obstacles on its way; that Rovio uses SURF (Speeded Up Robust Features) to determine the direction. [25] PyRovio is a Python implementation of the WowWee Rovio API that allows direct control of a Rovio robot from Python programs; its authors have used PyRovio to implement Python-based actor-agents that participate in a live intermedia performance. [26]

We have implemented our proposed algorithm using WowWee's Rovio, which is Wi-Fi-enabled and fitted with a video camera; IR sensors are also embedded inside the robot. It offers a very simple robot-human interface based on the HTTP protocol. [27] The robot's main features include wheeled locomotion, an IR sensor for basic obstacle avoidance, a webcam, a microphone and speaker, and Wi-Fi connectivity using the 802.11b/g protocols, among others. [28] The Rovio can be controlled [3] by sending HTTP commands to a web server hosted on the robot, [29] which allows controlling the robot and accessing its data. The proposed algorithm is implemented on top of the web-based Rovio API [28] with a neuro-visual servoing design; see Figure 1.
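For illustration, a minimal MATLAB sketch of sending one such CGI command over HTTP is shown below. This is not the LabVIEW wrapper used in our implementation; the IP address is a placeholder, and the action and drive codes are assumptions that should be checked against the API Specification [28].

% Hedged sketch: drive the Rovio forward through its HTTP/CGI interface.
% Assumptions: robot reachable at 192.168.10.18 with no authentication;
% action=18 (manual drive) and drive=1 (forward) per the API Specification.
rovioIP = '192.168.10.18';
drive   = 1;                               % drive code (e.g. 1 = forward)
speed   = 5;                               % 1 (fastest) .. 10 (slowest)
url = sprintf('http://%s/rev.cgi?Cmd=nav&action=18&drive=%d&speed=%d', ...
              rovioIP, drive, speed);
reply = urlread(url);                      % send the command, read the reply
disp(reply)                                % the robot answers with a status string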

III. ROBOTICS PLATFORM

We have adopted the Rovio robot as the robotic platform for our experiments; see Figure 1.
IV. METHODOLOGY

The problem is faced in three stages: first, control algorithms for the robotic platform; second, a pre-processing algorithm that handles the image provided by the camera, filters the desired color, and outlines the objects of that color; third, a neural network that checks whether any of the obtained shapes correspond to the target object. [20] See Figure 2.

Figure 1. Visual servoing architecture: the image processing unit supplies image data which, together with IR sensor data, feeds the neural network; the network output is fed back to the Rovio control unit.

V. ROBOTIC PLATFORM CONTROL

We use a light-meter algorithm that measures the pixel intensities along a line of an image. It is used to obtain certain sections of the image whose intensity level is characteristic; the measured intensities are then averaged, and the result is classified into an action that the robot executes. [33] See Figures 2 and 3.
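A minimal MATLAB sketch of the light-meter idea follows (our illustration, not the LabVIEW implementation used on the platform): it samples the pixel intensities along one horizontal line of the frame and averages them into a single level.

% Hedged sketch of the light-meter measurement on one image line.
I = imread('frame.jpg');              % hypothetical captured frame
G = rgb2gray(I);                      % work on the intensity image
row = round(size(G, 1) / 2);          % measure along the middle row
profile = double(G(row, :));          % pixel intensities on that line
level = mean(profile);                % averaged level, later mapped to an action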

Figure 3. Color filter: the captured image is separated into its RED, GREEN, and BLUE layers; an exponential function (EXP) is applied to each layer; the layers are recombined into one image, a threshold is applied, and the light-meter (histogram) algorithm is run on the result.

Figure 2. Overall project: the neural network classifies each processed image into one of three actions (FORWARD, TURN RIGHT, TURN LEFT).

VI. VISION ALGORITHM DESIGN

The pre-processing stage converts the camera imagery into useful information for the control system based on neural networks. This process can be divided into several steps: a color filter, a "fill hole" algorithm, a "remove particles" algorithm, and other algorithms.
1) Color filter: [30] The color recognition system provides images of the environment as RGB values. Each image is passed through a color filter. First, the three layers of the RGB image (red, green, blue) are separated.

Then an exponential remapping operation is applied, which gives extended contrast for large pixel values and less contrast for small pixel values. The layers are then recombined to form the complete image; pixels whose RGB values match the desired color (red) become white, while the rest become black. Therefore, the original images are converted to binary images in which objects of the target color are white and the background is black. We also apply a "fill hole" algorithm, which fills the holes found in a particle with a pixel value of 1 in the binary image, and a "remove particles" algorithm, which eliminates or keeps particles resistant to a specified number of erosions; the particles that are kept have exactly the same shape as those found in the original source image (the source image must be an 8-bit binary image). Image dilation algorithms are applied as well, in order to obtain an optimal image. [31], [32], [33]
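A compact MATLAB sketch of this pre-processing chain is given below. The threshold values and the particular exponential remapping are illustrative assumptions, and bwareaopen is used here as a stand-in for the erosion-based particle removal.

% Hedged sketch of the pre-processing chain for the red target color.
I = imread('frame.jpg');
R = double(I(:,:,1)); G = double(I(:,:,2)); B = double(I(:,:,3));
expmap = @(X) 255 * (X / 255).^2;      % EXP-like remapping: more contrast
R = expmap(R); G = expmap(G); B = expmap(B);   % for large pixel values
mask = (R > 150) & (G < 80) & (B < 80);        % keep near-red pixels (white)
mask = imfill(mask, 'holes');          % "fill hole" algorithm
mask = bwareaopen(mask, 50);           % stand-in for "remove particles"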
2) Extract data: In this stage the resulting binary image is used to extract the data that will be fed to the neural network.
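One plausible reduction, shown below as a MATLAB sketch, splits the binary mask from the previous sketch into five vertical strips and takes the obstacle-pixel fraction of each strip as one network input. The paper does not spell out the exact encoding, so this five-strip scheme is only an assumption for illustration.

% Hedged sketch: reduce the binary mask to a 5-element input sequence.
nInputs = 5;
edges = round(linspace(1, size(mask, 2) + 1, nInputs + 1));
x = zeros(nInputs, 1);
for k = 1:nInputs
    strip = mask(:, edges(k):edges(k+1) - 1);  % k-th vertical strip
    x(k) = mean(strip(:));                     % obstacle-pixel fraction
end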

VII. NEURAL NETWORK DESIGN

The neural network module consists of an artificial neural network whose inputs are the values provided by the image pre-processing stage together with the IR sensor reading. Based on these data, the neural network classifies the direction to be taken by the robot. The architecture is a multilayer perceptron, consisting of an input layer, one hidden layer, and an output layer.
After performing different experiments, a suitable topology was obtained that provides a trade-off between the number of neurons and acceptable performance. Thus, the number of input neurons was set to 5, which forces the sequence obtained from the image processing stage to be of this length. The hidden layer has 10 neurons with a sigmoid activation function. Lastly, the output layer consists of 3 output neurons, one for each possible direction (forward, turn left, turn right).
The network is trained by means of the back-propagation algorithm, using the gradient-descent method. The training set is made up of 146 sequences obtained from different images, along with the corresponding target output for each sequence. The training of the network is performed in MATLAB (see Algorithm 1) by means of the Neural Network Toolbox [33], [34], over 3x146 training iterations. [20]
VIII. IMPLEMENTATION AND RESULTS

Below we present some results and experiments conducted in a known environment.

Algorithm 1: Neural network training in MATLAB

% 5 input neurons, 10 hidden neurons (S1), 3 output neurons
S1 = 10;
[S2,Q] = size(entrada);          % S2 = input dimension, Q = number of samples
%% Define the network
net = newff(minmax(entrada), [S1 3], {'logsig' 'logsig'}, 'traingdx');
% Training conditions
net.performFcn = 'sse';          % sum-squared-error performance function
net.trainParam.goal = 0.1;       % desired error
net.trainParam.show = 20;        % display frequency (in epochs)
net.trainParam.epochs = 5000;    % maximum number of epochs
net.trainParam.mc = 0.95;        % momentum constant
% Train the network
P = entrada;
T = salida;
[net,tr] = train(net,P,T);
% Extract the weights and biases of both layers
v_1 = net.IW{1};
b_1 = net.b{1};
v_2 = net.LW{2,1};
b_2 = net.b{2};
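Once trained, the extracted weights v_1, b_1, v_2, b_2 allow a manual forward pass. The sketch below (with a made-up input vector, and assuming the output neurons follow the order forward, turn left, turn right listed above) shows how the winning output neuron selects the action.

% Hedged usage sketch: manual forward pass with the extracted weights.
x = [0.1; 0.7; 0.9; 0.6; 0.0];        % example 5-element input sequence
logsigf = @(n) 1 ./ (1 + exp(-n));    % logistic (logsig) activation
h = logsigf(v_1 * x + b_1);           % hidden layer, 10 neurons
y = logsigf(v_2 * h + b_2);           % output layer, 3 neurons
[~, action] = max(y);                 % 1 = forward, 2 = turn left, 3 = turn right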

Figure 4. Test in the environment

IX. CONCLUSIONS

In this paper we introduced a simple visual servoing scheme based on a neural network, which uses images and an IR sensor to estimate the actions the robot should take, working in an unknown environment in the presence of obstacles. There are several features worth pointing out: control based on artificial vision significantly reduces the computational burden of the computing system, although it is not as accurate as other sensory systems. Experimental results indicate that the artificial vision algorithm and the neural-network-based control algorithm are efficient and effective in terms of obstacle detection, goal seeking, and obstacle avoidance behavior. The algorithm endows the robot with a human-like visual ability to reason about its environment, thus improving performance. Equipping the robot with more sophisticated sensory systems, such as sonar sensors and GPS, would improve the performance of the navigation system in terms of path optimization.
ACKNOWLEDGMENT

The author thanks the Research Group in Mobile Robotics (ROMA) of the Universidad Distrital Francisco Jose de Caldas - Facultad Tecnologica, Control Engineering, Bogota, Colombia, and its members, as well as Omar Sanchez, who provided much information for the project.

Figure 5. Neural network implementation

Figure 6. Test in the environment

REFERENCES

[1] V. Martinez, G. Gil-Gomez, and A. Cerezo, "Modelado cinematico y dinamico de un robot movil omni-direccional."
[2] M. Wada and S. Mori, "Holonomic and omnidirectional vehicle with conventional tires," in Robotics and Automation, 1996. Proceedings., 1996 IEEE International Conference on, vol. 4, pp. 3671–3676, 1996.
[3] J. Cruz, "Control de la plataforma robotica movil Rovio, usando LabVIEW y la API Specifications v1.2." http://www.udistrital.edu.co/wpmu/jokelnice/files/2011/12/ControlaRovio-con-la-API.pdf, 2011.
[4] E. Iniesta, "Diseño e implementación de los robots F180 del ITAM," Engineering Graduate Thesis, pp. 16–27, 2006.
[5] R. Rojas and A. Forster, "Holonomic control of a robot with an omnidirectional drive," KI-Künstliche Intelligenz, vol. 20, no. 2, pp. 12–17, 2006.
[6] J. Gonsalves, P. Costa, and P. Moreira, "Control y estimación del posicionamiento absoluto de un robot omnidireccional de tres ruedas," Encontro Científico Robotica, Coimbra, pp. 49–56, 2005.
[7] T. Kalmar-Nagy, R. D'Andrea, and P. Ganguly, "Near-optimal dynamic trajectory generation and control of an omnidirectional vehicle," Robotics and Autonomous Systems, vol. 46, no. 1, pp. 47–64, 2004.
[8] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control," Robotics and Automation, IEEE Transactions on, vol. 12, no. 5, pp. 651–670, 1996.
[9] A. Begum, L. Minkyoung, and Y. J. Kim, "A simple visual servoing and navigation algorithm for an omnidirectional robot," in Human-Centric Computing (HumanCom), 2010 3rd International Conference on, pp. 1–5, 2010.
[10] A. J. Koivo and N. Houshangi, "Real-time vision feedback for servoing robotic manipulator with self-tuning controller," Systems, Man and Cybernetics, IEEE Transactions on, vol. 21, no. 1, pp. 134–142, 1991.
[11] B. Espiau, F. Chaumette, and P. Rives, "A new approach to visual servoing in robotics," Robotics and Automation, IEEE Transactions on, vol. 8, no. 3, pp. 313–326, 1992.
[12] F. Conticelli, B. Allotta, and P. K. Khosla, "Image-based visual servoing of nonholonomic mobile robots," in Decision and Control, 1999. Proceedings of the 38th IEEE Conference on, vol. 4, pp. 3496–3501, 1999.
[13] C. Pham Thuong and M. Nguyen Tuan, "Robust neural control of robot-camera visual tracking," in Control and Automation, 2009. ICCA 2009. IEEE International Conference on, pp. 1825–1830, 2009.
[14] G. N. Desouza and A. C. Kak, "Vision for mobile robot navigation: a survey," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 24, no. 2, pp. 237–267, 2002.
[15] A. Cherubini, F. Chaumette, and G. Oriolo, "A position-based visual servoing scheme for following paths with nonholonomic mobile robots," in Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pp. 1648–1654, 2008.
[16] D. Burschka and G. Hager, "Vision-based control of mobile robots," in Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International Conference on, vol. 2, pp. 1707–1713, 2001.
[17] A. Remazeilles, F. Chaumette, and P. Gros, "Robot motion control from a visual memory," in Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on, vol. 5, pp. 4695–4700, 2004.
[18] I. Siradjuddin, L. Behera, T. M. McGinnity, and S. Coleman, "Image based visual servoing of a 7 DOF robot manipulator using a distributed fuzzy proportional controller," in Fuzzy Systems (FUZZY), 2010 IEEE International Conference on, pp. 1–8, 2010.
[19] P. Prem Kumar and L. Behera, "Visual servoing of redundant manipulator with Jacobian matrix estimation using self-organizing map," Robotics and Autonomous Systems, vol. 58, no. 8, pp. 978–990, 2010.
[20] M. I. de la Fuente, J. Echanobe, I. del Campo, L. Susperregui, and I. Maurtua, "Hardware implementation of a neural-network recognition module for visual servoing in a mobile robot," in Database and Expert Systems Applications (DEXA), 2010 Workshop on, pp. 226–232, 2010.
[21] K. Liu, D. Sakamoto, M. Inami, and T. Igarashi, "Roboshop: multi-layered sketching interface for robot housework assignment and management," 2011.
[22] S. Fladung and J. Mwaura, "CS4758: Rovio augmented vision mapping project," 2010.
[23] S. Bizot, A. Martin, C. Limam, H. Lacote, and P. Iung, "Applications of DSP and computer vision. Computer vision group project: sensor robots for ensuring post-incident mining safety," 2009.
[24] J. Melville and T. Sams, "Thirsty Rovio: autonomous mini-keg locating robot."
[25] J. Sung, J. Lee, and S. Suwajanakorn, "Rovio and Juliet: vision-based autonomous navigation with real-time obstacle learning."
[26] J. Bona and M. Prentice, "PyRovio: Python API for WowWee Rovio," 2009.
[27] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, and T. Berners-Lee, "Hypertext transfer protocol: HTTP/1.1," 1999.
[28] W. G. Limited, "API specification for Rovio." http://www.udistrital.edu.co/wpmu/jokelnice/files/2011/12/RovioAPI-Specifications.pdf, 2008.
[29] S. Settembre, "Cognitive Rovio: using RovioWrap and .NET to control a Rovio," 2009.
[30] R. Gonzalez and R. Woods, Tratamiento digital de imagenes. Addison-Wesley Longman, 1996.
[31] J. Branch and G. Olague, "La visión por computador: una aproximación al estado del arte," Revista Dyna (133), 2001.
[32] H. Mehl and O. Peinado, "Fundamentos del procesamiento digital de imágenes," 1997.
[33] O. Sanchez, "Modelos, control y sistema de visión." http://omarsanchez.net/aboutus.aspx, 2009.
[34] M. Hudson, H. Beale Martin T., and D. Howard B., "Neural Network Toolbox 6 user's guide," 2011.
