A Fuzzy Based Hierarchical Coordination and Control System for a Robotic Agent Team in the Robot Hockey Competition

Hani Hagras, Senior Member, IEEE, Rabie A. Ramadan, Mina Zaher, Hala Gabr and Hussien Fahmy

Abstract—This paper presents the system used by the team of the German University in Cairo (GUC) in the FESTO Hockey Challenge league that took place within RoboCup 2009. The goal of the FESTO Hockey Challenge is to stage a competition between robotic teams, where each team consists of three robots competing in an ice hockey game. All robots have the same mechanical, sensor and electronic capabilities, so that the focus of the competition is on developing novel artificial intelligence techniques for robot control and coordination. The GUC team took 2nd place in this competition after losing by penalty shootout in the final. The proposed approach for the GUC team employed Hierarchical Fuzzy Logic Controllers (HFLCs), in which the low level behaviours are implemented as FLCs and the coordination between the behaviours is implemented by a high level layer. The coordination between the robotic agent team members is implemented by a situation based dynamic role allocation mechanism. The paper describes the employed approaches and reports on the achieved results.
I. INTRODUCTION
The Festo Hockey Challenge league was introduced by the FESTO company in the 2009 International Robotics Competition (RoboCup 2009) that took place in Graz, Austria, in July 2009. The goal of the Hockey Challenge is to present a standard league in which all competing teams use the same robotic platform with the same mechanical, sensor and electronic capabilities. Hence, the focus of the competition is on developing novel artificial intelligence techniques that could advance the fields of robot control and multi robotic agent coordination.
The competition arena consists of a field of 4.5 m x 4.5 m. The goal areas are marked green or blue on the back walls. The width of each goal is 50 cm. In front of the goals there are half circles (called the front lines of the goals) with a radius of 50 cm. These circle lines consist of metallic stripes which can be detected by inductive sensors. There are two additional metallic lines dividing the field into three equal parts: one attacking area and two defensive areas. There are 4 bully-off points in the attacking area and 4 bully-off points in the defensive areas. The bully-off points are marked by black circular
disks of 10 cm diameter. Fig. 1a shows the layout of the arena and Fig. 1b shows a photo of the real arena.

H. Hagras, A. El Molla, M. Zaher, H. Gabr and H. Fahmy are with the German University in Cairo, New Cairo City, Egypt (e-mail: hani@essex.ac.uk).
Each team consists of three Robotino robots, which have the same mechanical, sensing, communication and electronic platform. Each team wears either a striped green or a striped blue t-shirt.

(a) (b)
Fig. 1. (a) The arena layout, in which the field is divided by five lines, with two goals, one kick-off point and 8 bully-off points [5]. (b) A photo of the real arena [5].

This paper presents the machine vision, robot control and multi robotic agent coordination techniques of the GUC team. The employed techniques use Fuzzy Logic Controllers (FLCs) and Hierarchical Fuzzy Logic Controllers (HFLCs). The GUC team took 2nd place in the competition after losing the final match by penalty shootout, which demonstrates the effectiveness of the employed techniques.
Section II provides an overview of the rules of the hockey challenge. Section III gives a high level overview of the Robotino robots [4], [5]. In Section IV, we present the employed computer vision techniques. Section V presents the HFLC approach. Section VI presents the robots' behaviour structure. Section VII presents the robots' intermediate behaviours. Section VIII presents an overview of the robotic agent team coordination system. In Section IX, we present the results, followed by conclusions and future work in Section X.
II. THE FESTO HOCKEY CHALLENGE COMPETITION RULES
The Festo Hockey Challenge rules follow the regulations of the International Ice Hockey Federation (IIHF) but are simplified for the competition. Each match has a referee. The regular playing time of a match is divided into three periods of 5 minutes each.
A goal is only scored if the following conditions are fulfilled:
- The puck has completely crossed the goal line and is inside the goal area.
- The last player who touched the puck was in the defensive area of the corresponding goal.
- The puck was not touched by a player of the attacking team who was beyond the front line of the goal. This means that no part of this player was between the goal and the front line of the goal.
A game is started at the beginning of each period or after a goal. The start takes place at the kick-off point. All team members must be in their own half, in front of their defensive line. The referee places the puck on the kick-off point. Only one player of a team is allowed to be nearby, such that it can immediately catch the puck with the pushing device. After a goal is scored, the team that conceded receives the right to start at the kick-off point.
If there is an irregularity, the referee restarts the game at one of the bully-off points, and then only one player of each team is allowed to be nearby in order to catch the puck. The other team members must be in their own half, at least one meter away from the bully-off point.
An offside is called if an attacking player is in the defensive area of the opponent team before the puck has completely crossed the defensive line. In the case of offside, the referee stops the game and restarts it at the next bully-off point outside the defensive area.
Only one team member is allowed in its own defensive area; otherwise the referee stops the game and awards a penalty shot. If the boundary of a goal is displaced, the game is interrupted and a penalty shot is awarded against the team whose robot displaced the goal. If one player pushes a player of the opponent team to force him into a non regular position, the referee stops the game and a penalty shot is awarded against the pushing player's team.
For penalty shooting, one player of the attacking team is selected to perform the penalty shot and one player of the defensive team is selected as goalkeeper. The player of the attacking team starts moving with the puck from the kick-off point when the referee releases the penalty shot. The player has only one chance to hit the puck into the goal. The goalkeeper can try to stop the attacking player, but no wheel of the goalkeeper robot is allowed to cross the front line of the goal.
III. AN OVERVIEW OF THE ROBOTINO ROBOT
The mobile robot system Robotino (shown in Fig. 2a) [4], [5] is driven by 3 independent omnidirectional drive units, mounted at an angle of 120° to each other. The three omnidirectional drive units make Robotino holonomic, meaning that the number of controllable degrees of freedom equals the total degrees of freedom of the robot. The chassis is protected by a rubber bumper with an integrated switching sensor. The robot diameter is 370 mm and its height, including housing, is 210 mm. Each of the 3 drive units consists of a DC Dunker motor with a nominal speed of 3600 rpm. The robots have incremental encoders with a resolution of 2048 increments per motor rotation. The robot is equipped with 9 Infrared (IR) distance measuring sensors, mounted in the chassis at an angle of 40° to one another. The sensors are capable of relative distance measurements to objects at distances of 4 to 30 cm. The anti-collision sensor comprises a switching strip secured around the entire circumference of the chassis. The robot also has an inductive proximity sensor which serves to detect metallic objects on the floor. In addition, the robot has diffuse sensors consisting of flexible fibre-optic cables connected to a fibre-optics unit which works with visible red light. The robot is also equipped with a colour webcam.
There are numerous communication interfaces on board, including 2 Ethernet ports, 2 USB ports, 2 RS232 ports, a parallel port, a VGA port and a Wireless LAN access point following the 802.11g and 802.11b standards. The access point can be switched into a client mode. A Linux operating system with a real time kernel runs on the embedded PC/104 Plus board with an AMD LX800 processor. The main part of the controller is the Robotino server, a real time Linux application. It controls the drive units and provides interfaces to communicate with external PC applications via W-LAN. There is an API with libraries which allows applications for Robotino to be created in numerous programming languages, including the C++ and C used by the GUC team.
IV. THE EMPLOYED COMPUTER VISION TECHNIQUES
The computer vision system employed by our team processed the images captured by each robot's camera to provide information about the orientation and depth (distance) of the puck, the two goals, the team mates and the opponents, as well as the positions of different arena landmarks. We needed a very robust computer vision system that could provide very fast updates for the dynamic hockey game. We used only one camera per robot, which meant that we could not benefit from stereo vision algorithms, and we did not use panoramic vision. This makes calculating the depths and orientations of different objects quite challenging, so we had to improvise. The application also needed real-time performance, so we had to focus on algorithm efficiency: we could not afford any delays in the robot reaction, and we had to process three video streams on the same server computer. Furthermore, the robots had a complicated shape, which made their identification harder. We also had to adapt to different lighting conditions and, above all, the arena did not have enough reference points.
To overcome these problems, the vision system was divided into four phases: image analysis and region formation; extracting region information; pattern recognition and object identification; and finally calculating the different object locations. These phases are discussed in the following subsections:
A. Image Analysis and Region Formation
This phase was mainly inspired by the open source library CMVision developed at Carnegie Mellon University [3], although we made major modifications to make it suitable for our application. The purpose of this phase is to extract the different connected regions of the image, each having a certain colour. This is done using colour segmentation. First, each pixel is assigned a colour class of interest using 2 bitwise AND operations and a single table lookup, as in [2]. After that, primary regions are formed by applying run length encoding to the image, grouping all consecutive pixels of the same colour into one run. Finally, larger regions are formed by merging primary regions on an 8-connectivity basis using a union-find algorithm with path compression and the union by rank heuristic, which is effectively linear for all practical problem sizes. Unlike the CMVision library, we used the HSV colour space instead of the YUV colour space, because it gave us more power in capturing the different colours we needed and in making the system highly adaptable to different lighting conditions. We also performed 8-connectivity in region grouping instead of 4-connectivity, which gives more accurate results, and we did this without having to iterate over the whole image but only over the formed regions. A minimal sketch of this merging step is given below.
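The following C++ sketch illustrates the merging step described above. The union-find with path compression and union by rank is the standard technique named in the text; the Run structure, the sorting convention and the overlap test are our illustrative assumptions, not the team's actual code.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

struct Run {
    int row;             // image row of this run
    int xBegin, xEnd;    // inclusive column range of same-colour pixels
    std::uint8_t colour; // colour class from the threshold-table lookup
};

class UnionFind {
    std::vector<int> parent_, rank_;
public:
    explicit UnionFind(int n) : parent_(n), rank_(n, 0) {
        for (int i = 0; i < n; ++i) parent_[i] = i;
    }
    int find(int x) {                      // path compression (halving)
        while (parent_[x] != x) {
            parent_[x] = parent_[parent_[x]];
            x = parent_[x];
        }
        return x;
    }
    void unite(int a, int b) {             // union by rank
        a = find(a); b = find(b);
        if (a == b) return;
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent_[b] = a;
        if (rank_[a] == rank_[b]) ++rank_[a];
    }
};

// Two runs on adjacent rows touch under 8-connectivity when their column
// ranges overlap after widening one of them by a single pixel.
static bool touches8(const Run& a, const Run& b) {
    return a.xBegin <= b.xEnd + 1 && b.xBegin <= a.xEnd + 1;
}

// Merge same-colour runs of adjacent rows; `runs` must be sorted by row.
void mergeRuns(const std::vector<Run>& runs, UnionFind& uf) {
    for (std::size_t i = 0; i < runs.size(); ++i)
        for (std::size_t j = i + 1;
             j < runs.size() && runs[j].row <= runs[i].row + 1; ++j)
            if (runs[j].row == runs[i].row + 1 &&
                runs[j].colour == runs[i].colour &&
                touches8(runs[i], runs[j]))
                uf.unite(static_cast<int>(i), static_cast<int>(j));
    // After this pass, uf.find(i) yields a region label for every run i.
}
```

Because each run only has to be compared against runs of the next row, the pass never iterates over individual pixels, which matches the efficiency argument made above.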
B. Extracting Region Information.
After performing the segmentation in phase one, we have a set of regions of different colours in different parts of the image. This phase is responsible for collecting information about each of these regions. For each region produced in the previous phase, we extract the bounding box, the region colour, the area and the centroid.
C. Pattern Recognition and Object Identification
In order to identify a region as a particular object, we use heuristics that try to match the properties of the region extracted in the previous phase with the real properties of the object in question. For example, the puck should be the only red region in the scene with some minimum area (to avoid noisy red patches), the green goal is a rectangular green region with certain geometric ratios, and so on. The robots were trickier because they have an irregular shape and appear in the segmented image as a group of separate black regions. This problem was solved by performing nearest neighbour clustering on these black regions and grouping all the small regions near each other into a single cluster representing one robot; a small illustrative sketch follows.
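The sketch below shows one way such a nearest-neighbour grouping could look (our reconstruction under stated assumptions: the Region/Cluster structures, the distance threshold and the area-weighted centroid update are illustrative, not the team's actual code).

```cpp
#include <cmath>
#include <vector>

struct Region  { float cx, cy; int area; };  // centroid and area from phase B
struct Cluster { float cx, cy; int area; };

// Greedy nearest-neighbour clustering: a region closer than `maxDist` to an
// existing cluster is absorbed into it, otherwise it seeds a new cluster.
std::vector<Cluster> clusterBlackRegions(const std::vector<Region>& regions,
                                         float maxDist) {
    std::vector<Cluster> clusters;
    for (const Region& r : regions) {
        Cluster* best = nullptr;
        float bestD = maxDist;
        for (Cluster& c : clusters) {
            float d = std::hypot(c.cx - r.cx, c.cy - r.cy);
            if (d < bestD) { bestD = d; best = &c; }
        }
        if (best) {  // merge into nearest cluster (area-weighted centroid)
            int total = best->area + r.area;
            best->cx = (best->cx * best->area + r.cx * r.area) / total;
            best->cy = (best->cy * best->area + r.cy * r.area) / total;
            best->area = total;
        } else {
            clusters.push_back({r.cx, r.cy, r.area});
        }
    }
    return clusters;  // each cluster approximates one robot
}
```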
D. Calculating Different Object Locations
Getting the object depth with one camera was one of the trickiest parts. We were able to develop a simple yet powerful method to calculate the real depth of objects accurately in terms of camera parameters such as height, tilt and resolution (Fig. 2b shows the parameters involved in calculating depth and orientation for our vision system). This method can obtain the depth of any object on the ground, or above the ground at a known height, with an accuracy of 90-95%. The idea relies on the observation that objects at the same vertical level but at different distances from the camera appear at different heights in the image taken by that camera. In our case, the relation between the real depth d of the object and the height h of that object in the image is given by:

d = h \tan(\theta)    (1)

where \theta is the viewing angle to the object, derived from the camera height, tilt and resolution. This is just one line of code, which is much faster than the typical approaches that use stereo vision. The orientations can also be calculated using the same concept, as sketched below.
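The following sketch shows one common realization of the idea behind equation (1) for objects on the ground, under a pinhole ground-plane model. The structure and parameter names are our assumptions for illustration; the paper's exact calibration details are not given.

```cpp
#include <cmath>

struct CameraParams {
    double heightM;  // camera height above the floor, metres
    double tiltRad;  // downward tilt of the optical axis, radians
    double focalPx;  // focal length expressed in pixels
    double cxPx;     // horizontal image centre (principal point), pixels
    double cyPx;     // vertical image centre (principal point), pixels
};

// Depth along the floor to a ground point observed at image row vPx.
// The ray through that row leaves the optical axis at atan((vPx - cy)/f);
// adding the tilt gives the angle from horizontal, so
// depth = h / tan(angleFromHorizontal) = h * tan(angleFromVertical).
double groundDepthM(const CameraParams& cam, double vPx) {
    double angleFromAxis = std::atan((vPx - cam.cyPx) / cam.focalPx);
    double angleFromHorizontal = cam.tiltRad + angleFromAxis;
    return cam.heightM / std::tan(angleFromHorizontal);
}

// Horizontal bearing (orientation) of an object at image column uPx,
// computed with the same one-line idea.
double bearingRad(const CameraParams& cam, double uPx) {
    return std::atan((uPx - cam.cxPx) / cam.focalPx);
}
```

For an object above the ground at a known height, the same geometry applies after subtracting the object height from the camera height, which matches the claim made in the text.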
Fig.3a shows the real image obtained from the robot
camera and Fig.3b shows the corresponding segmented
image obtained from the vision system which shows the
accuracy of finding the various objects of interest.
E. Capabilities of the Computer Vision System
By the end of the four phases, the vision system was capable of performing the following using only one camera:
- Identify all objects of interest with very high accuracy, even from long distances.
- Calculate depths and orientations of different objects with an accuracy of 90-95% at a 320 x 240 resolution (higher resolutions would yield better results).
- Process a video stream of 15 frames per second from each of the three robots using 5% utilization of a 3 GHz processor running Windows XP.
- Adapt to different lighting conditions.
- Locate the absolute position of a robot in the arena.

(a) (b)
Fig. 2. (a) Image of Robotino of Festo Didactic GmbH & Co.KG [4],
[5]. (b) The parameters involved in calculating depths and orientations for
our vision system

(a) (b)
Fig.3. (a) The real image captured from the robot camera. (b) The
segmented image obtained by our vision system.



V. THE ROBOTS' HIERARCHICAL FUZZY LOGIC CONTROLLERS (HFLCS)
Single rule base FLCs suffer from the serious limitation that the number of rules increases exponentially with the number of variables involved [10]. If we have a robot with only eight input sensors and represent each input by only three fuzzy sets, then for a single rule base we need to determine 3^8 = 6561 rules, which is very difficult to design; this huge rule base also translates directly into a slower controller response.
To cope with the rule explosion problem and its effects on the design and real-time operation of FLCs, we hierarchically decompose the control problem: we break down the input space and share it among multiple low level fuzzy behaviours, each responding to specific types of situations, and then integrate the recommendations of these behaviours via a high level fuzzy coordination layer. Each behaviour is an independent and self contained FLC with a small number of inputs and outputs and a small rule base, and it serves a single purpose (e.g. edge following, obstacle avoidance, goal seeking, etc.) while operating in a reactive fashion. The behaviours typically (but not necessarily) map different input sensors to common actuator outputs [9]. Such primitive behaviours are building blocks for more intelligent composite behaviours, i.e. their capabilities can be combined through synergistic coordination by a high level fuzzy coordination layer to obtain an overall coherent behaviour that achieves the intended task(s). Fuzzy coordination makes it possible to express partial and concurrent activations of behaviours and smooth transitions between behaviours.
Hierarchical fuzzy systems have the nice property that the total number of rules increases linearly rather than exponentially, as in the single rule base FLC [10]. For example, suppose we divide the robot controller into four cooperating behaviours, namely obstacle avoidance, goal seeking, and left and right edge following [6]. If we represent each input using three fuzzy sets (as in the case of the single rule base FLC), then the obstacle avoidance behaviour, using three forward facing sonar sensors, will have a rule base of 3^3 = 27 rules. The left edge following behaviour, using two left side facing sonar sensors, will have a rule base of 3^2 = 9 rules, and the right edge following behaviour will have the same number of rules. The goal seeking behaviour, using a single goal detection sensor, will have a rule base of only 3 rules. Thus the total number of rules in the low level behaviours is 27 + 9 + 9 + 3 = 48 rules, and the total number of rules in the coordination layer is 4 (the number of behaviours); hence we need only a small number of rules, which are much easier to determine than the 6561 rules of the single rule base FLC.
Many papers report implementations of Hierarchical Fuzzy Logic Controllers (HFLCs) which have produced good results [1], [6], [7], [8], [9]. Our work in this paper confirms the efficiency of the HFLC. The following section explains the behaviour structure of the robots.
VI. THE BEHAVIOURS STRUCTURE
In our HFLC, each low level reactive behaviour is an FLC. Each low level behaviour receives a subset x_h of the total crisp inputs x available to the HFLC. All behaviours produce preferences over the same common outputs, which are the outputs of the HFLC; thus each behaviour maps different inputs to common outputs, namely the omni-drive rotation magnitude and the omni-drive side movement input, whose fuzzy sets are shown in Fig. 4. The low level FLC outputs approximate the centroids of the output fuzzy sets to represent the preferences from the perspective of the low level behaviour goals.

(a) (b)
Fig. 4. The fuzzy sets for the behaviours' FLCs and HFLC outputs. (a) The omni-drive rotation magnitude. (b) The omni-drive side movement input.

We used four low level behaviours: obstacle avoidance, goal seeking, and left and right edge following. In the following subsections, we explain each of these low level behaviours further. These basic FLCs form the building blocks of our robots' low level, intermediate and high level behaviours.
A. The Obstacle Avoidance Behaviour
For the Obstacle Avoidance (OA) behaviour, we used 7 of the 9 IR sensors that surround the robot's casing. To limit the rule base size, we grouped these sensors into three groups. The first group detects obstacles facing the robot; it consists of the front sensor and its two neighbours on the right and left. The OA FLC was fed with the minimum of these three sensors to represent the sensor reading for obstacles located in front of the robot. The other two groups represent obstacles located on the robot's right and left sides. These two groups each share a sensor with the front group, in addition to two more sensors located on the right or left of the robot's case. Again, the OA FLC was fed with the minimum of the three sensors on the right and on the left. Hence, the OA behaviour receives three inputs, each representing the minimum distance on the front, left and right sides respectively. Each of these FLC inputs is represented by two fuzzy sets, Near and Far, as shown in Fig. 5a. Due to the IR sensors' short range, the obstacle avoidance behaviour also takes machine vision inputs into consideration, detecting other robots as obstacles from far distances. Hence, the OA behaviour also receives inputs representing the orientation (represented by three fuzzy sets as shown in Fig. 6a) and depth (represented by two fuzzy sets as shown in Fig. 6b) of the opponent robot, which is viewed as an obstacle. Hence, the OA behaviour has a rule base of 2*2*2*3 = 24 rules.

(a) (b)
Fig. 5. (a) The fuzzy sets used for each sensor group of the OA behaviour FLC. (b) The fuzzy sets for the inductive sensors of the goal keeper behaviour.

B. The Left and Right Edge Following Behaviours
The left edge following behaviour uses two left side facing sonar sensors; representing each sensor by three fuzzy sets leads to a rule base of 3^2 = 9 rules. The right edge following behaviour is identical on the right side.
C. The Goal Seeking Behaviour
For the Goal Seeking (GS) behaviour, the target can be the red puck, the opponent goal, the team's own goal or the opponent's robots. The inputs of the goal seeking behaviour are obtained from the machine vision block; they represent the depth and the orientation of the target (whether the target is the red puck, the opponent goal, the team's own goal or an opponent robot). For the goal seeking behaviour, the orientation input is represented by three fuzzy sets (as shown in Fig. 6a), while the depth (distance) to the goal is represented by two fuzzy sets (as shown in Fig. 6b). Hence, this leads to a rule base of 6 rules. A small illustrative sketch of such a two-input FLC is given after Fig. 6.

(a) (b)
Fig. 6. The fuzzy sets for the GS FLC: (a) the orientation input; (b) the depth input.
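To make the 6-rule structure concrete, the sketch below implements a minimal two-input FLC of this shape with product t-norm and centre-average defuzzification. The membership ranges and rule consequents are our illustrative assumptions, not the team's tuned values.

```cpp
#include <array>

// Triangular membership with feet at a and c, peak at b.
static double tri(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
}
// Shoulder sets that saturate at the range ends.
static double shoulderL(double x, double a, double b) {
    return x <= a ? 1.0 : (x >= b ? 0.0 : (b - x) / (b - a));
}
static double shoulderR(double x, double a, double b) {
    return x <= a ? 0.0 : (x >= b ? 1.0 : (x - a) / (b - a));
}

// Crisp rotation command from target orientation (rad) and depth (m).
double goalSeekRotation(double orient, double depth) {
    const std::array<double, 3> mOrient = {
        shoulderL(orient, -1.6, 0.0),   // Left
        tri(orient, -1.6, 0.0, 1.6),    // Centre
        shoulderR(orient, 0.0, 1.6)};   // Right
    const std::array<double, 2> mDepth = {
        tri(depth, -0.1, 0.0, 1.0),     // Near
        shoulderR(depth, 0.0, 1.0)};    // Far

    // Consequent centroids for the 6 rules:
    // rows = orientation {Left, Centre, Right}, cols = depth {Near, Far}.
    const double consequent[3][2] = {
        {-0.5, -1.0},   // turn left, harder when the target is far off-axis
        { 0.0,  0.0},   // head straight
        { 0.5,  1.0}};  // turn right

    double num = 0.0, den = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 2; ++j) {
            const double firing = mOrient[i] * mDepth[j]; // product t-norm
            num += firing * consequent[i][j];
            den += firing;
        }
    return den > 0.0 ? num / den : 0.0;  // centre-average defuzzification
}
```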
D. Goal Keeper
The goal keeper is considered a key player of the team, as a good goal keeper is a major strength for the team's defensive capabilities. The goal keeper behaviour is another FLC, which takes as inputs two inductive sensor readings and inputs from the machine vision block. The machine vision block provides the puck orientation and depth. According to the puck orientation, the robot moves left or right to block the shooting angle. In order to block the far post angle, the robot needs to stand out from the goal and move in a semi circle in front of the goal. The inductive sensors were used for this purpose. We equipped each robot with two inductive sensors, located behind the gripper: the first exactly behind the gripper and the second 15 cm behind the gripper. These sensors can detect the silver line that forms a semi circle in front of the goal, marking the goal keeper area. When the sensor is on top of the middle of the silver line, the reading ranges from 0.4 to 3.0; as the sensor moves forward or backward to scan the edge of the silver line, the readings range from 3.0 to 6.0; and when the sensor cannot detect the silver line, it returns readings above 8.0 (see the sketch below). Using the two inductive sensors, the goal keeper behaviour implements a line following behaviour to follow the silver line. Together with the puck orientation, the robot can thus be guided to move along the line of the goal area and protect the shooting angles. The two inductive sensor inputs are both expressed using three fuzzy sets (as shown in Fig. 5b) and the puck orientation is expressed using three membership sets, resulting in 3^3 = 27 rules.
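The quoted reading ranges map directly to a coarse line position, as the following snippet illustrates (the function and enum names are ours; the thresholds are those stated above).

```cpp
// Map an inductive sensor reading to a position relative to the silver line.
enum class LinePos { OnMiddle, OnEdge, OffLine, Uncertain };

LinePos classifyInductive(double reading) {
    if (reading >= 0.4 && reading < 3.0) return LinePos::OnMiddle;
    if (reading >= 3.0 && reading <= 6.0) return LinePos::OnEdge;
    if (reading > 8.0) return LinePos::OffLine;
    return LinePos::Uncertain;  // readings between 6.0 and 8.0
}
```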
When the robot cannot locate the puck, the goal keeper replaces the puck orientation input with the orientation of the nearest opponent robot. If the puck does not reappear for several seconds, the robot starts looking around for it while staying tight in goal. On the other hand, if the puck depth falls within a certain range, the goal keeper bursts out and grabs the puck to prevent any shot being made.

E. The High Level Coordination Layer
Fig. 7 shows the HFLC architecture. At the high level there is a high level FLC which is responsible for the coordination of the low level FLC based behaviours. Each low level behaviour has a context of activation C_j representing when it should be activated. The context is represented by a fuzzy set, as shown in Fig. 8, to handle the linguistic and numerical uncertainties associated with these contexts. Each behaviour is activated with a strength given by the truth value of its context, i.e. the degree of firing of the context fuzzy set.

Fig. 7. The HFLC architecture.

Fig. 8. The fuzzy sets for the contexts.

The high level FLC receives the crisp inputs to the contexts d_j (j = 1, ..., H), where H is the total number of behaviours. These crisp inputs are then fuzzified by matching each input against its context fuzzy membership function. When the crisp inputs are fuzzified against the fuzzy context membership functions, we obtain for each crisp input d_j a membership value \mu_{C_j}(d_j) for each context C_j.
The high level coordination FLC has a coordination rule base containing coordination rules that describe, in a fuzzy way, when each behaviour should be activated to influence the operation of the robot at each moment. The coordination rules take the following format:

IF d_j is C_j THEN Behaviour B_j,  j = 1, ..., H    (2)

where d_j is the crisp input to the context C_j and H is the total number of behaviours. B_j is the behaviour output fuzzy set. Note that we have one coordination rule per behaviour, thus a total of H coordination rules. From Equation (2) we see that each behaviour output is activated with a strength given by the truth value of its context C_j.
In the inference engine of the high level FLC, we use the product t-norm for the implication operation and the maximum t-conorm to aggregate the fuzzy outputs of the various rules. Height defuzzification is employed for the high level FLC. For Height defuzzification, we need the centroid of the output fuzzy set of each low level behaviour; in each control cycle, the centroid of the combined output fuzzy set of each behaviour can be approximated by the Height defuzzified output of that low level behaviour.
At each control cycle, each low level FLC based behaviour receives a subset x_h of the total crisp inputs x available to the HFLC and generates a preference from the perspective of its own goal, represented by the defuzzified output of the behaviour (which approximates the centroid of the behaviour's output fuzzy set).

The consequent of each context rule is the behaviour output fuzzy set B_j, whose centroid is approximated by the defuzzified output of the behaviour y_k^j, where k indexes the HFLC outputs, k = 1, ..., c. The crisp value of each HFLC output y_k (k = 1, ..., c) is calculated according to the following equation:

y_k = \frac{\sum_{j=1}^{H} \mu_{C_j}(d_j)\, y_k^j}{\sum_{j=1}^{H} \mu_{C_j}(d_j)}    (3)
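A minimal sketch of the context-weighted combination in equation (3) follows (our reconstruction; the structures and names are illustrative, not the team's actual code). Each behaviour contributes its defuzzified outputs y_k^j weighted by the firing strength \mu_{C_j}(d_j) of its context.

```cpp
#include <cstddef>
#include <vector>

struct BehaviourOutput {
    double contextFiring;         // mu_{C_j}(d_j) in [0, 1]
    std::vector<double> outputs;  // y_k^j for k = 1..c (rotation, side move)
};

// Returns the c crisp HFLC outputs y_k as the firing-strength-weighted
// average of the behaviours' defuzzified outputs, as in equation (3).
std::vector<double> coordinate(const std::vector<BehaviourOutput>& behaviours,
                               std::size_t numOutputs) {
    std::vector<double> numer(numOutputs, 0.0);
    double denom = 0.0;
    for (const BehaviourOutput& b : behaviours) {
        denom += b.contextFiring;
        for (std::size_t k = 0; k < numOutputs; ++k)
            numer[k] += b.contextFiring * b.outputs[k];
    }
    if (denom <= 0.0)                       // no context fired at all
        return std::vector<double>(numOutputs, 0.0);
    for (std::size_t k = 0; k < numOutputs; ++k)
        numer[k] /= denom;
    return numer;
}
```

Because partially fired contexts contribute proportionally, this produces the smooth transitions between behaviours described in Section V.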
The crisp input d_1 to the obstacle avoidance behaviour context is the minimum of the front sonar sensor values and the nearest obstacle depth obtained from the vision system. The context C_1 for the obstacle avoidance behaviour is a fuzzy set, as shown in Fig. 8, which states that the obstacle avoidance behaviour should be active when the robot's path is obstructed by an obstacle; the closer the robot gets to the obstacle, the higher the activation of the obstacle avoidance behaviour. d_1 is fuzzified using the fuzzy context membership function of C_1 to a membership value \mu_{C_1}(d_1). The crisp input d_2 to the left edge following behaviour context C_2 is the minimum value of the left side sonar sensors. The crisp input d_3 to the right edge following behaviour context C_3 is the minimum value of the right side sonar sensors. The crisp input d_4 to the goal seeking behaviour context C_4 is d_1. The context C_4 for the goal seeking behaviour is a fuzzy set, as shown in Fig. 8, which states that the goal seeking behaviour should be active when the robot's path is clear; the clearer the path, the higher the activation of the goal seeking behaviour. d_4 is fuzzified using the context membership function of C_4 to a membership value \mu_{C_4}(d_4). After all the crisp inputs d_j (j = 1, ..., 4) are matched and fuzzified against their fuzzy contexts, the fuzzified values are fed to the inference engine, which determines which rules (and hence which behaviours) are fired from the coordination rule base. The coordination rule base contains one coordination rule per behaviour, relating the contexts to the behaviours.
In an attacking situation, the robot coordinates the OA, GS and the left and right edge following behaviours. As we have four behaviours, we have four coordination rules: IF d_1 is LOW THEN Obstacle Avoidance; IF d_2 is LOW THEN Left Edge Following; IF d_3 is LOW THEN Right Edge Following; IF d_4 is HIGH THEN Goal Seeking. The system is capable of performing very different tasks using identical behaviours by changing only the context rules and coordination parameters. For example, in the goal defensive situation, the robot coordinates the goal keeper behaviour mentioned above with the GS behaviour, which seeks the puck and opponent; when the puck is within a given safe distance, the robot can activate the goal seeking behaviour to get the puck. We can eliminate unneeded behaviours from the context rules according to the robot's mission.



VII. INTERMEDIATE BEHAVIOURS
Besides the basic FLC and HFLC behaviours mentioned above, the robots have intermediate behaviours, as explained in the following subsections.
A. The Attacking Behaviour
The attacking behaviour makes use of the HFLC coordinating the OA and GS behaviours (the robots seek the opponent goal). If the robot cannot locate the goal, it rotates around the puck held in the gripper while performing obstacle avoidance; that is, the axis of rotation is a normal vector to the puck's surface.
When the distance and angle to the goal become optimal for shooting, the HFLC is disabled and the shooting behaviour is triggered. Different shooting techniques are used according to the situation; we have three shooting mechanisms. The first is shooting from long distances. It is triggered far from the goal, as it takes about 1.5 meters to execute: the robot simply rotates while moving forward in a certain direction, and as the robot's side faces the goal it suddenly rotates towards the goal's direction. Another interesting shooting mechanism is used when approaching the goal from a dead angle: the robot simply releases the puck from the gripper and moves diagonally, leaving the puck to the right or left of the gripper. The second shooting mechanism involves the robot bursting to the right or left for about 1 meter, drifting with the puck at the gripper's side; the robot ends this diagonal movement with a sudden rotation of about 340 degrees in the opposing direction, hitting the puck in the facing direction. The third shot is the basic one, which is similar to the second but without the drifting part.
Another attacking mechanism involves the edge following behaviours. This approach involves sneaking into the opponent's defensive area. If the robot with the puck detects that it is still far away from the opponent's goal and opponent robots are between the robot and the goal, this behaviour is enabled. The robot keeps track of its orientation to the goal. It then rotates 90 or -90 degrees to face one of the side walls and moves towards the wall until the IR sensors detect a certain range. The robot then maintains a certain distance from the wall, using the edge following behaviour, while moving towards the opponent's defensive area. As the robot crosses the silver line marking the opponent's defensive area, or detects an obstacle in the way that might be an opponent robot, the attacking robot rotates in the opposite direction until it locates the opponent's goal. Once the robot detects the goal, it continues with the normal attacking behaviour mentioned above. The advantage of this approach is that the robot's gripper can totally hide the puck from the side view; together with the wall, the puck is totally hidden from the opponent's robots.
Finally, when the robot with the puck shoots, the game situation changes, and this is managed by the robotic agent team coordination module described below.
B. Opponent Obstruction
Opponent obstruction is a behaviour designed to take one robot of the opponent team out of play. The motivation behind this behaviour is that while our robot has the puck, the opponent team robots are all obstructing our attacker; it is therefore a good choice to start obstructing one of them to improve the attacker's odds of reaching the goal. The opponent obstruction behaviour simply makes use of the GS behaviour, this time with one of the opponent's robots as the goal. The behaviour consequents are modified slightly in the case of opponent obstruction in order to make the tackles faster and produce the biggest possible impact.
The rules of the game state that only one robot of a team can be present in its own defensive zone at any time. If another robot enters its own defensive area to retrieve the puck or obstruct another robot, the other team is awarded a penalty. To avoid such penalties, robots that are not assigned to be the goal keeper keep track of the inductive lines at all times. Detecting an inductive line means that the robot is moving into one of the teams' defensive areas. If the robot can detect either of the two goals, it can directly decide whether or not to enter the defensive area to pursue its goal: by checking the depth of the detected goal, the robot knows which defensive area it is about to enter or leave. On the other hand, if neither goal is in sight, the robot has to assess its situation: it stops and rotates until it locates one of the two goals, figures out its position with respect to that reference point, and takes the decision accordingly.
C. Defence Behaviour
The defence behaviour is a precautionary action to secure our defensive area when the puck is loose or with the opponent. The defence behaviour makes use of the GS FLC, but this time the input target is our team's goal. Furthermore, the FLC is modified to reduce speed in order to avoid entering our restricted defensive area. When the robot reaches a certain depth from our goal, it stops and starts rotating, trying to locate the puck in the field. Once the robot locates the puck, it starts tracking it. Note that the role assigned to that robot in this situation is only to track the puck and not interfere, because another robot is currently trying to grab the puck and has a better chance of grabbing it. Also note that if the puck grabbing robot loses sight of the puck while the defence behaviour robot has the puck in its sight, the roles suddenly switch: the puck seeking robot that failed to retrieve the puck takes the defence role, and the defending robot bursts out to grab the puck, leaving its position to the former robot. Finally, if all three robots lose track of the puck, the game situation changes to the last situation, discussed in the next section.



VIII. HIERARCHICAL COORDINATION OF THE ROBOTIC
AGENTS TEAM
From our perspective there are three main game situations. Situation one is when one of our robots has the puck. Situation two is when our team does not have the puck but one or more of our robots can locate the puck in the field. Situation three is when none of our robots can see the puck, which directly implies that none of our robots has it either. The following subsections explain each of these situations and how our team handles it.
A. First Situation: A Team Member has the Puck
When one of the team members has the puck in its gripper, this manifests as a low distance on the front middle IR sensor and a low puck depth from the machine vision system. In this situation, the robot with the puck attacks and tries to reach a shooting position. The second robot returns to be goal keeper, or stays as goal keeper if it was previously assigned that role. Finally, the last robot obstructs the opponent's robots.
B. Second Situation: Team Does Not Have the Puck but One or More of Our Robots Can Locate the Puck
In the second situation, none of our team members has the puck; however, one or more of our team robots can see it. The behaviours in this situation are as follows: the robot that can locate the puck performs the goal seeking behaviour, this time with the puck itself as the goal. The robot closest to our team's goal returns to the goal keeping position, or stays in goal if it was already assigned there. The third robot performs the defence behaviour.
It is important to point out that our game situations do not differentiate between a loose puck and an opponent having the puck. In both cases the game situation is set to the second one; however, the puck grabbing procedure, referred to as the goal seeking of situation two, differs in its actions depending on the context, as explained below.
As mentioned, in the second situation one robot performs GS to grab the loose or opponent-held puck. This is a special GS behaviour which can be referred to as puck seeking or puck grabbing. Basically, this behaviour is based on the GS FLC, fed with the puck depth and puck orientation. The behaviour is also fed with the locations of any opponent robots that can be located around the puck. Note that this time OA is only activated if the puck depth is large. Furthermore, if the located puck is close to an opponent robot, the controller outputs an excessive speed when the puck depth is near. This ensures that if the puck is near the opponent robot, or even in its gripper, the excessive speed knocks the puck away from the opponent robot and out of its gripper if it was inside. Note that if the robot loses track of the puck and another robot locates it, the same situation still applies but the roles change: the robot that can locate the puck performs the puck seeking behaviour, and the other robot typically performs the defence behaviour.
C. Third Situation: None of Our Robots can Locate the
Puck
In this situation no robot can locate the puck. For this reason the highest defensive precautions are taken. The robot most suitable for being the goal keeper (namely the robot with the least depth to our team's goal, most probably the one already assigned as goal keeper) returns to goal and enables the goal keeper behaviour. The other two robots perform the defence behaviour: they return to the nearest posts outside our defensive area and start searching for the puck. Note that as soon as any of the three robots detects the puck, the situation changes to situation two. If a certain time duration passes and the puck has not been detected, it is feared that the opponent team is hiding the puck. The roles then change by switching the two defending robots to the opponent obstruction behaviour for a period of time, allowing them to expose the hidden puck; they then return to the defence behaviour.
D. The Team Hierarchical Coordination Mechanism
In the previous sections, the three game situations were explained together with the behaviours associated with them. Our coordination system is dynamic, depending on the situations mentioned above; if one of the robots is missing due to a technical failure or suspension from the game, the behaviour with the least importance is neglected. For instance, assume that one robot is missing and the game situation is situation two. This means that at least one of our team's robots can locate the puck, but the puck is loose or with an opponent robot. According to which robot is more suitable for each role, one robot grabs the loose puck and the other returns to protect the goal by being assigned the goal keeper role. The third role is neglected, as only two robots are available.
The team hierarchical coordinator can be viewed as a high level coach that gathers information from the robots and determines the game situation. According to that game situation, the high level coordinator works out the different roles to be assigned and their priorities. The coordinator then assigns the role with the highest priority to the fittest robot for that role, assigns the second most important role to the more suitable of the remaining two robots, and finally grants the last robot the least important role. This hierarchical coordination technique allows room for dynamic role allocation: if a robot malfunctions during the game it will not respond to the coordinator with updates, which indicates that this robot is the least suitable for any of the roles to be assigned, giving the coordinator the chance to assign the roles to the other functioning robots. A sketch of this allocation procedure is given below.
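The sketch below captures the priority-ordered allocation just described, under stated assumptions: the structures, fitness measures and names are our illustration of the scheme, not the team's actual code.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct RobotState {
    int id;
    bool responding;     // false if the robot stopped sending updates
    double puckDepth;    // distance to the puck (from vision)
    double ownGoalDepth; // distance to our own goal
};

struct Role {
    std::string name;                                  // e.g. "goal keeper"
    std::function<double(const RobotState&)> fitness;  // lower is fitter
};

// Assign roles in priority order, each to the fittest remaining responsive
// robot; unresponsive robots are skipped and surplus roles are dropped.
std::vector<std::pair<int, std::string>>
allocateRoles(std::vector<RobotState> robots,
              const std::vector<Role>& rolesByPriority) {
    std::vector<std::pair<int, std::string>> assignment;
    for (const Role& role : rolesByPriority) {
        auto best = robots.end();
        for (auto it = robots.begin(); it != robots.end(); ++it)
            if (it->responding &&
                (best == robots.end() ||
                 role.fitness(*it) < role.fitness(*best)))
                best = it;
        if (best == robots.end()) break;   // no robot left for this role
        assignment.push_back({best->id, role.name});
        robots.erase(best);                // one role per robot
    }
    return assignment;  // least important roles dropped when robots are few
}
```

For example, in situation two the priority list could be a puck seeker (fitness = puckDepth), a goal keeper (fitness = ownGoalDepth) and a defender; a robot that stops responding is then automatically passed over, which realizes the dynamic role allocation described above.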
Fig. 9 shows the structure of the team hierarchical coordination, which collects from each robot its distance from the puck and from the team goal. The team coordinator then analyses the game situation based on the collected information. Depending on the game situation, the coordinator assigns the roles to the various robots according to their positions relative to the puck and the team goal.


Fig. 9. An overview of the structure of the team hierarchical coordination.

Fig. 10. The GUC (in blue) playing against the University of Osnabrueck (in green) in the final game.
IX. RESULTS
In this section, we summarise the results achieved by the GUC team in the Festo Hockey Challenge held within RoboCup 2009. These game results serve as a comparison against the computer vision, control and agent coordination systems fielded by the other teams.
In the first game, the GUC team played against ESTI Tunis (Tunisia) and won 2-0. In the second game, the GUC played against Polytech'Lille (France) and won 6-0. In these two games, the GUC team played with only one robot against three opponent robots (as two of the GUC robots became faulty on that day). This shows the power of our techniques in terms of defence and attacking: in spite of playing outnumbered, the robotic team compensated with better vision, control and artificial intelligence systems.
The GUC team then played against the University of Osnabrueck (Germany), who employed a panoramic vision approach that gave them 360° vision of the arena. However, the GUC team won 4-0 due to its better fuzzy control and coordination systems. The GUC then won against HEIG Yverdon (Switzerland) 5-0 before losing 2-4 to HHT Budapest (Hungary) due to a sudden hardware failure in the GUC robots. The GUC then qualified for the semi final, playing two games against Polytech'Lille (France); the GUC won the first game 3-1 and the second 5-0. The GUC then played the final game against the University of Osnabrueck (Germany), where the game finished 2-2 and the GUC lost the penalty shootout 1-0. Fig. 10 shows a screenshot of the final game.
X. CONCLUSIONS AND FUTURE WORK
In this paper, we have given a description of the vision system and the control and team coordination mechanisms used by the GUC team in the Festo Hockey Challenge held within RoboCup 2009. We have shown the employed vision techniques, which provided robust and accurate results using only one camera. In addition, we used a fuzzy logic based approach to implement the low level behaviours within the robots, and employed fuzzy context based coordination for the high level coordination of behaviours to form HFLCs. The robotic agent team coordinator involved a situation based hierarchical coordination which enabled dynamic role allocation to the various robots, allowing the team to handle the dynamic environment of the game. The systems developed by the GUC enabled the team to take 2nd place in the competition after losing by penalty shootout in the final.
For our future work, we aim to employ type-2 fuzzy systems, which are better able to handle the high levels of uncertainty encountered in this dynamic game. We will also investigate the application of reinforcement learning techniques to enable the robots to adapt to changing environments and circumstances.
REFERENCES
[1] A. Bonarini, "Anytime learning and adaptation of hierarchical fuzzy logic behaviours," Adaptive Behavior Journal, Vol. 5, No. 3-4, pp. 281-315, 1997.
[2] J. Bruce, T. Balch, and M. Veloso, "Fast and inexpensive color image segmentation for interactive robots," Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), Japan, October 2000, pp. 2061-2066.
[3] CMVision: Open Source Library, http://www.cs.cmu.edu/~jbruce/cmvision/
[4] Festo Didactic GmbH & Co. KG: Technical Documentation of Robotino, Denkendorf, 2007, http://www.festo-didactic.de
[5] Festo Didactic GmbH & Co. KG: http://www.festo-didactic.de
[6] H. Hagras, V. Callaghan and M. Colley, "Prototyping design and learning in outdoor mobile robots operating in unstructured outdoor environments," IEEE Magazine on Robotics and Automation, Vol. 8, No. 3, pp. 53-69, September 2001.
[7] H. Hagras, M. Colley and V. Callaghan, "Learning and adaptation of an intelligent mobile robot navigator operating in unstructured environments based on a novel online fuzzy-genetic system," Journal of Fuzzy Sets and Systems, Vol. 141, No. 1, pp. 107-160, January 2004.
[8] A. Saffiotti, "The uses of fuzzy logic in autonomous robot navigation," Journal of Soft Computing, Vol. 1, No. 4, pp. 180-197, 1997.
[9] E. Tunstel, T. Lippincott and M. Jamshidi, "Behaviour hierarchy for autonomous mobile robots: Fuzzy behaviour modulation and evolution," International Journal of Intelligent Automation and Soft Computing, Vol. 3, No. 1, pp. 37-49, 1997.
[10] L. Wang, "Analysis and design of hierarchical fuzzy systems," IEEE Transactions on Fuzzy Systems, Vol. 7, No. 5, pp. 617-624, October 1999.
