Contents

List of Tables
List of Figures
Conclusion
    7.1 Key results
    7.2 Contribution
    7.3 Future work
Bibliography
List of Tables

2.1
4.1
B.1 - B.9
List of Figures

1.1 - 1.7
2.1 - 2.7
3.1 - 3.6
4.1 - 4.12
5.1 ARToolkit markers
5.2 ARToolkit in action
5.3 Tests using the ESM ROS wrapper
5.4 Example markers of Aruco
5.5 Otsu thresholding example
5.6 Video of Aruco used for visual servoing. Markers are attached to the hand and to the target object in the Gazebo simulator.
5.7 Tests done with Aruco
6.1 - 6.4
Acknowledgements

First I would like to thank my family and my girlfriend for their support and patience during my 5-month journey in science and technology in Barcelona. The same appreciation goes to my friends. I would also like to thank Eötvös Loránd University and PAL Robotics for providing the opportunity, as an Erasmus internship, to conduct such research in a foreign country. Many thanks to my advisors Jordi Pagès, who mentored me at PAL, and Zoltán Istenes, who both helped me form this manuscript and organize my work so that it can be presented. Thumbs up for Thomas Mörwald, who was always willing to answer my questions about BLORT. The conversations and emails exchanged with Ferran Rigual, Julius Adorf and Dejan Pangercic helped a great deal with my research.

I really enjoyed the friendly environment created by the co-workers and interns of PAL Robotics, especially: Laszlo Szabados, Jordi Pagès, Don Joven Agravante, Adolfo Rodriguez, Enrico Mingo, Hilario Tome, Carmen Lopera and all the others.

I would also like to give credit to everyone whose work served as a basis for my thesis. These people are the members of the open source community and the developers of: Ubuntu, C++, OpenCV, ROS, Texmaker, LaTeX, Qt Creator, GIMP, Inkscape and many more.

Thank you.
Chapter 1
Introduction and background
1.1 Introduction

Even though we are not aware of it, we are already surrounded by robots. The most accepted definition of a robot is a machine that is automated in order to help its owner by completing certain tasks. Robots need not have a human form, as one might assume; humanoid robots differ mainly in being bigger and more complex. A humanoid robot could replace humans in various hazardous situations where a human form is still required, for example a rescue mission where the available tools are hand tools designed for humans. Although popular science fiction, and sometimes even scientists, like to paint a highly developed and idealized picture of robotics, the field is still only maturing.
Despite its initial football-oriented goal, even RoboCup, one of the most respected robotics competitions, has a special league called RoboCup@Home where humanoid robots compete in well-defined common tasks in home environments (http://www.ai.rug.nl/robocupathome/). Also, the DARPA Grand Challenge, the most well-funded competition, has announced its latest challenge centered around a humanoid robot (http://spectrum.ieee.org/automaton/robotics/humanoids/darpa-robotics-challenge-here-are-the-official-details).
1.2 REEM introduction

REEM is the humanoid service robot developed by PAL Robotics. Among other equipment it carries a microphone and speaker system, while a touch screen is available on the chest for multimedia applications such as map navigation.

The main goal of this thesis work was to develop applications for this specific robot while making sure that the end result remains general enough to allow the use of other robot platforms.
1.3

Before going into more detail, the basic problems need to be defined.
The goal of this thesis was to implement computer vision modules for grasping tabletop objects with the REEM robot. To be more precise, it consisted of implementing solutions for the sub-problems of visual servoing in order to solve the grasping problem. This covers the following two tasks from the topic statement: detection of textured and non-textured objects, and detection of tabletop objects for robot grasping.

The first and primary problem encountered is the pose estimation problem, which was the main task of this thesis work. There are several examples in the scientific literature solving slightly different problems related to pose estimation. One of them is the object detection problem and the other is the object tracking problem. It is crucial to always keep these problems in mind when dealing with objects through vision.

The pose estimation problem is to compute an estimated pose of an object given some input image(s) and, possibly, additional background knowledge. Ways of defining a pose can be found in Section 1.6.

An object detection problem can be identified by its desired answer type. One is dealing with object detection if the desired answer for an image or image sequence is whether an object is present, or its number of appearances. This problem is typically solved using features.

Numerous examples and articles can also be found for the object tracking problem. Usually these types of methods are specialized to provide real-time speed. To do so they require an initialization stage before starting the tracker. Concretely: the target object has to be set to an initial pose, or the tracker has to be initialized with the pose of the object.

The secondary task of this thesis was to provide a solution for tracking the hand of the REEM robot during the grasping process, so that both the manipulator position and the target position are available during the motion.
1.4 ROS Introduction

ROS [29] (Robot Operating System) is a meta operating system designed to help and enhance the work of robotics scientists. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogeneous computer cluster.

At its very core, ROS provides an implementation of the Observer design pattern [14, p. 293] and additional software tools to keep the system well organized. A ROS system is made up of nodes, which serve as computational processes in the system. ROS nodes communicate via typed messages through topics, which are registered using simple strings as names. A node can publish and/or subscribe to a topic.

A ROS system is completely modular: each node can be dynamically started or stopped, and all nodes are independent components that depend on each other only for data input. Topics provide continuous dataflow-style processing of messages, but they are limiting if one would like to use a node's functionality through a blocking call. There is a way to create such interfaces for nodes; these are called services. To support a dynamic way of storing and modifying commonly used global or local parameters, ROS also has a Parameter Server through which nodes can read, create and modify parameters.

The link below provides more information about ROS:
http://ros.org/wiki
Figure 1.3: Two nodes in the ROS graph connected through topics
Among others, ROS also provides visualization, logging and debugging tools.
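The topic mechanism described above can be illustrated with a small self-contained Python sketch of publish/subscribe message passing. This is purely illustrative: it does not use the real rospy/roscpp API, and the class and topic names are made up.

```python
# Minimal publish/subscribe sketch mimicking ROS topics (illustrative only;
# real ROS code would use rospy or roscpp). All names here are hypothetical.
from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """A toy in-process stand-in for the ROS topic layer."""
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register a callback for all messages published on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        """Deliver the message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

if __name__ == "__main__":
    bus = TopicBus()
    received = []
    # One "node" subscribes to the /object_pose topic...
    bus.subscribe("/object_pose", received.append)
    # ...and another "node" publishes a pose message on it.
    bus.publish("/object_pose", {"x": 0.5, "y": 0.1, "z": 0.3})
    print(received)
```

The key property, as in ROS, is that publisher and subscriber never reference each other directly; they are coupled only through the topic name.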
1.5 OpenCV

OpenCV [9] stands for Open Source Computer Vision; it is a programming library developed for real-time computer vision tasks. OpenCV is released under a BSD license, so it is free for both academic and commercial use. It has C++, C and Python interfaces running on Windows, Linux, Android and Mac. It provides implementations of several image processing and computer vision algorithms, classic and state-of-the-art alike, and a great amount of supplementary material is available on the internet, such as [22]. It is developed by Willow Garage along with ROS and is widely used for vision-oriented applications on all platforms. All image-processing tasks in this thesis work were solved using OpenCV.
1.6

This section will go through the very basic definitions of computer vision.

A rigid body in 3D space is defined by its position and orientation, which together are commonly referred to as its pose. Such a pose is always defined with respect to an orthonormal reference frame where x, y, z are the unit vectors of the frame axes.

The position of a point O' on the rigid body with respect to the coordinate frame Oxyz is expressed by the relation

    o' = o'_x x + o'_y y + o'_z z        (1.1)

where o'_x, o'_y, o'_z denote the components of the vector o' in R^3 along the frame axes. The position of O' can therefore be written as the vector

    o' = [o'_x, o'_y, o'_z]^T        (1.2)

So far we have covered the position element of the object's pose.
The orientation of the body frame O'x'y'z' with respect to the reference frame is described by the rotation matrix R, whose columns are the unit vectors of the body frame expressed in the reference frame:

    R = [ x'  y'  z' ] = | x'^T x   y'^T x   z'^T x |
                         | x'^T y   y'^T y   z'^T y |        (1.4)
                         | x'^T z   y'^T z   z'^T z |
A rotation can also be represented by a unit quaternion (x, y, z, w); the corresponding rotation matrix is

        | 1 - 2y^2 - 2z^2   2xy - 2zw         2xz + 2yw       |
    Q = | 2xy + 2zw         1 - 2x^2 - 2z^2   2yz - 2xw       |        (1.6)
        | 2xz - 2yw         2yz + 2xw         1 - 2x^2 - 2y^2 |
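As a sanity check of the quaternion-to-matrix conversion of Equation 1.6, here is a short Python sketch (not part of the original toolchain):

```python
import math

def quat_to_matrix(x: float, y: float, z: float, w: float):
    """Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix,
    following Equation 1.6 (w is the scalar part)."""
    return [
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ]

if __name__ == "__main__":
    # The identity quaternion maps to the identity matrix.
    print(quat_to_matrix(0.0, 0.0, 0.0, 1.0))
    # A 90-degree rotation about z: q = (0, 0, sin(45 deg), cos(45 deg)).
    s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
    print(quat_to_matrix(0.0, 0.0, s, c))
```

The second call should produce, up to floating-point error, the familiar 90-degree z-rotation matrix with rows (0, -1, 0), (1, 0, 0), (0, 0, 1).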
Note: When talking about transformations the components of a pose are usually called
translation and rotation instead of position and orientation.
[35] provided great help for writing this section.
1.7 Grasping problem

A grasping problem has several definitions depending on specific parameters. Since the goal of this thesis was not visual servoing itself, the grasping problem presented here is a simplified version.

Let F_o^c denote the object frame with respect to the camera frame. The grasp goal frame is then

    F_g^c = F_o^c T_off        (1.7)

where T_off is a transformation that defines a desired offset on the object frame. Also let F_m^c denote the manipulator frame w.r.t. the camera frame.

The next task is to find the sequence T_1, T_2, ..., T_n where

    || T_1 T_2 ... T_n F_m^c - F_g^c || < eps        (1.8)

holds, where eps is a pre-defined error. The transformations T_1, T_2, ..., T_n are applied to a kinematic chain describing the robot's current state.
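The condition of Equation 1.8 can be made concrete with homogeneous 4x4 transforms and a Frobenius-norm error. The following pure-Python sketch is illustrative only (the frames and transforms are made up; translations are used for simplicity):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def frobenius_diff(a, b):
    """|| A - B || in the Frobenius sense, as in Equation 1.8."""
    return math.sqrt(sum((a[i][j] - b[i][j]) ** 2
                         for i in range(4) for j in range(4)))

def translation(dx, dy, dz):
    """Homogeneous transform that translates by (dx, dy, dz)."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

if __name__ == "__main__":
    F_m = translation(0.0, 0.0, 0.0)   # manipulator frame in camera coords
    F_g = translation(0.3, 0.1, 0.0)   # grasp goal frame in camera coords
    # A two-step motion sequence T1, T2 that moves the manipulator to the goal.
    T1 = translation(0.3, 0.0, 0.0)
    T2 = translation(0.0, 0.1, 0.0)
    reached = mat_mul(T1, mat_mul(T2, F_m))
    print(frobenius_diff(reached, F_g) < 1e-9)
```

In the real problem the T_i come from the robot's kinematic chain and include rotations, but the termination test is the same norm comparison.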
1.8 Visual servoing

Following the same principles as motor control, visual servoing stands for controlling a robot manipulator based on feedback; in this particular case the feedback is obtained using computer vision. It is also referred to as vision-based control and has three main types:

- Image Based (IBVS): the feedback is the error between the current and the desired image points on the image plane. It does not involve 3D pose at all and is therefore often referred to as 2D visual servoing.
- Position Based (PBVS): the main feedback is the 3D pose error between the current pose and the goal pose. Usually referred to as 3D visual servoing.
- Hybrid: 2D-3D visual servoing approaches take image features as well as 3D pose information, combining the two servoing methods mentioned above.

Visual servoing is categorized as closed-loop control. Figure 1.7 shows the general architecture of visual servoing.
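The closed loop can be illustrated with a toy one-dimensional position-based servoing sketch. It is purely illustrative; the gain value and function names are made up, and a real PBVS controller works on full 6-DOF pose errors:

```python
def servo_step(current: float, goal: float, gain: float = 0.5) -> float:
    """One closed-loop iteration: observe the error ("vision" feedback)
    and move proportionally toward the goal."""
    error = goal - current           # feedback measured from the scene
    return current + gain * error    # control command applied to the joint

if __name__ == "__main__":
    pose, goal = 0.0, 1.0
    for _ in range(20):              # iterate until the error is negligible
        pose = servo_step(pose, goal)
    print(abs(goal - pose) < 1e-3)
```

With a proportional gain of 0.5 the error halves on every iteration, which is the essence of the closed feedback loop shown in Figure 1.7.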
Chapter 2

Object detection survey and State of the Art

2.1 Introduction

As a precursor to this project, a wide survey of existing software packages and techniques needed to be done. The survey consisted of two stages.

1. A wider survey with shallow testing and research to classify possible subjects. The table of results can be found in Appendix B.

2. A filtered survey based on the attributes, previous results and experiences, with more detailed tests and research, also taking the available sensors into account. The table of results can be found in Appendix A.

This chapter introduces the most promising software packages and techniques from the above surveys, describing the benefits and drawbacks experienced.
2.2 Available sensors

There are several ways to address the task of digitally recording the world. While there is a wide variety of sensors suitable for image-based applications, when building an actual humanoid robot one has to choose the type that best fits the application and that can fit into a robot body or, more preferably, into a robot head.

Laser scanners are more industrial devices and usually substantially more expensive than the others. Due to their primarily industrial design, laser scanners have an extremely low error rate and high resolution. They are mostly used on mobile robots for mapping tasks, or for 3D object scanning in graphical or medical applications.
2.3 Survey work

This section summarizes the research conducted for this thesis, mentioning test experiences where there were any.

Holzer et al. defined so-called distance templates and applied them using regular template matching methods.

Hinterstoisser et al. introduced a method using Dominant Orientation Templates to identify textureless objects and estimate their pose in real time. In their very recent work, Hinterstoisser et al. engineered the LINE-Mod method for detecting textureless objects using gradient normals and surface normals. The advantage of their approach is that even though an RGB-D sensor is required in the learning stage, a simple webcam is enough for detection; of course the error will increase, since no surface points are available from a webcam. A compact implementation is available since OpenCV 2.4.

Experiments done with LINE-Mod showed that this method cannot be applied to textured objects, although it is a reasonably nice alternative for textureless ones. An experience gained by using this method is that the false detection rate was extremely high and no usable 3D pose result could be obtained; it only indicated whether an object was detected or not. The first implementation was released at the time of this thesis work, so it is possible that future versions will improve the results. The product of this thesis work could be extended to textureless objects using this technique.

Test videos prepared for this thesis:
http://www.youtube.com/watch?v=2cCsYfwQGxI
http://www.youtube.com/watch?v=3e3Wola4EWA
The OpenCV group, Rublee et al., defined a new type of feature detector/extractor, Oriented BRIEF (ORB), to provide a BSD-licensed alternative to SURF [5]. The work of [4] was to experiment and create benchmarks for TOD [34] using ORB as its main feature detection/extraction method.

Experimental work was done for this thesis to see if SIFT could be replaced with ORB in Chapter 3, but due to deadlines it was not possible to implement it. As future work, however, it would be a nice addition to the final software.

The work of [38], RoboEarth, is a general communication platform for robots and has a ROS package which contains a database client and a detector module. Even though the detector module of RoboEarth was not precise enough for the task of this thesis, it is still exemplary as robotics software.

Testing the RoboEarth package was really smooth and easy, since its authors provide tutorials and convenient interfaces for their software. The requirements of the system, however, did not exactly match the available hardware, because the detector of RoboEarth needs an RGB-D sensor while REEM only has a stereo camera. Experiments showed that obtaining a precise pose is hard due to its high variance, and the false detection rate was also high.
2.4

Table 2.1 summarizes the previous section in table form, highlighting the most relevant attributes.

Name         | Sensor                                    | Texture     | Speed  | Output          | Keywords
ViSP tracker | Monocular                                 | Only edges  | 30Hz   | Pose            | edge tracking, grayscale, particle filter
RoboEarth    | RGB-D (train, detect), monocular (detect) | Needed      | 11Hz   | Pattern matched |
LINE-Mod     | RGB-D (train, detect), monocular (detect) | Low texture | 30Hz   | Pattern matched |
ESM          | Monocular                                 | Needed      | 30Hz   | Homography      | custom minimization, pattern matching
ODUFinder    | Monocular                                 | Needed      | 4-6Hz  | Matched SIFTs   |
BLORT        | Monocular                                 | Needed      | 30Hz+  | 3D pose         |
Chapter 3

Pose estimation of an object

3.1

As a result of the wide and then the deep survey, one software package was chosen to be integrated with the REEM robot. A crucial factor for all surveyed techniques was the type of sensor required, because the REEM robot does not have a visual depth sensor in its head. Early experiments with the BLORT system showed that it could be capable of serving as a pose estimator on REEM for the grasping task. It provided correct results with a low ratio of false detections, especially when compared to the others, along with reasonably good speed.
The vision and robotics communities have developed a large number of increasingly successful methods for tracking, recognizing and online learning of objects, all of which have their particular strengths and weaknesses. A researcher aiming to provide a robot with the ability to handle objects will typically have to pick amongst these and engineer a system that works for her particular setting. The toolbox is aimed at robotics research and as such we have in mind objects typically of interest for robotic manipulation scenarios, e.g. mugs, boxes and packaging of various sorts. We are not aiming to cover articulated objects (such as walking humans), highly irregular objects (such as potted plants) or deformable objects (such as cables). The system does not require specialized hardware and simply uses a single camera allowing usage on about any robot. The toolbox integrates state-of-the-art methods for detection and learning of novel objects, and recognition and tracking of learned models. Integration is currently done via our own modular robotics framework, but of course the libraries making up the modules can also be separately integrated into own projects.
The algorithmic design of BLORT is a sequence of the detector and tracker modules.
initialization;
while object not detected or (object detected and confidence < threshold_detector) do
    // Run object detector
    Extract SIFT features;
    Match extracted SIFTs to the codebook using kNN;
    Estimate the object pose from matched SIFTs using RANSAC;
    Validate confidence;
    Publish the object pose for the tracker;
end
while object tracking confidence is high do
    // Run object tracker
    Copy the input image and render the textured object into the scene at its known location;
    Run colored edge detection on both (input and rendered) images;
    Use a particle filter to match the images around the estimated pose;
    Average the particle guesses and compute a confidence rate;
    Smooth the confidence values (edge, color) to avoid unrealistically fast changes;
    if confidence > threshold_tracker then
        Publish the pose of the object;
    end
end
Algorithm 2: Overview of BLORT
3.2 CAD Model

CAD models are commonly used in computer-aided design software, mainly by engineers of various disciplines. These models can define simple 3D objects as well as more complex ones. Object trackers often rely on CAD models of the target object(s) to perform edge detection-based tracking.

Related articles of BLORT: [26], [25].

MeshLab [11] proved to be a great tool for handling simple objects and generating convex hulls of complex meshes. A demonstration video about the process of creating a simple juicebox model can be found at the following link: http://www.youtube.com/watch?v=OtduI5MWVag
3.3 Rendering
3.4

To validate a pose guess, the tracker module compares the original input image with one that has the 3D object rendered onto it. Such a comparison can be done in several ways. In the case of object tracking it is reasonable to use the edges of the object, which can be extracted by detecting the edges of the image.

The following steps were implemented using OpenGL shaders, a technique highly optimized for computing image convolution. The procedure takes an input image I and a convolution kernel K and outputs O. A simplified definition could be

    O[x, y] = SUM I[f(x, y)] * K[g(x, y)]        (3.1)

where f(x, y) and g(x, y) are the corresponding indexer functions. The result, however, is often required to be normalized. This can be arranged by adding a normalizing factor to Equation 3.1, which is the sum of the multiplication factors, more concretely the elements of kernel K. The final formula for convolution then looks like the following:

    O[x, y] = (1 / SUM_{a,b} K[a, b]) * SUM I[f(x, y)] * K[g(x, y)]        (3.2)
A 5x5 Gaussian smoothing kernel is used for noise reduction:

    K = (1/159) * |  2  4  5  4  2 |
                  |  4  9 12  9  4 |
                  |  5 12 15 12  5 |        (3.3)
                  |  4  9 12  9  4 |
                  |  2  4  5  4  2 |
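To make the normalized convolution of Equation 3.2 concrete, here is a small pure-Python sketch (illustrative only; BLORT performs this step on the GPU with shaders, and border handling is simplified here):

```python
def convolve_normalized(image, kernel):
    """Normalized convolution following Equation 3.2: each output pixel is the
    kernel-weighted sum of its neighborhood divided by the kernel's sum.
    Border pixels are left untouched for simplicity."""
    kh, kw = len(kernel), len(kernel[0])
    norm = sum(sum(row) for row in kernel)      # the normalizing factor
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(kh // 2, h - kh // 2):
        for x in range(kw // 2, w - kw // 2):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j - kh // 2][x + i - kw // 2] * kernel[j][i]
            out[y][x] = acc / norm
    return out

if __name__ == "__main__":
    # The Gaussian kernel of Equation 3.3 (the 1/159 factor is the norm).
    K = [[2, 4, 5, 4, 2],
         [4, 9, 12, 9, 4],
         [5, 12, 15, 12, 5],
         [4, 9, 12, 9, 4],
         [2, 4, 5, 4, 2]]
    flat = [[10] * 7 for _ in range(7)]   # a constant image...
    out = convolve_normalized(flat, K)
    print(out[3][3])                      # ...stays constant after smoothing
```

Note how the normalization makes a constant image invariant under smoothing, which is exactly why the factor is the sum of the kernel elements.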
The image gradients are computed with the following horizontal and vertical (Scharr) kernels:

    K_x = (1/22) * |  -3   0   3 |
                   | -10   0  10 |        (3.4)
                   |  -3   0   3 |

    K_y = (1/22) * |   3  10   3 |
                   |   0   0   0 |        (3.5)
                   |  -3 -10  -3 |
3. Non-maxima suppression to keep only the strongest edges of the edge detection.

    K_x = |  0  0  0 |
          | -1  0  1 |        (3.6)
          |  0  0  0 |

    K_y = |  0  1  0 |
          |  0  0  0 |        (3.7)
          |  0 -1  0 |

In this step the above convolutions serve as indicators of whether the current pixel is a maximal edge compared to its neighborhood. If it is not, the pixel is discarded and an extremal element is returned (RGB(0,127,128)).
4. Spreading operation to grow the remaining edges from the previous step.

    K = | 1/2  1  1/2 |
        |  1   0   1  |        (3.8)
        | 1/2  1  1/2 |

This step enlarges the previously determined strongest edges, which is important for removing the small errors caused by falsely detected edges.
3.5 Particle filter

As a technique well grounded in statistical methods, particle filters are often used in robotics for localization tasks. In object detection tasks they are utilized for tracking objects in real time.

At its very core, particle filtering is a model estimation technique based on simulation. In such a system a particle can be seen as an elementary guess about one possible estimate of the model, while the simulation consists of continuously validating and resampling these particles to adapt the model to new information given by measurements or additional data.
Figure 3.6: Particles visualized on the object detected by BLORT. Green particles are valid, red ones are invalid.
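The validate-and-resample loop described above can be sketched in a few lines of pure Python. The toy example below estimates a scalar position from noisy measurements; all parameter values (particle count, noise levels) are made up for illustration and have nothing to do with BLORT's actual settings:

```python
import random

def particle_filter_step(particles, measurement, noise=0.5):
    """One iteration: weight each particle by how well it explains the
    measurement, resample proportionally to the weights, then jitter."""
    weights = [1.0 / (1e-9 + abs(p - measurement)) for p in particles]
    total = sum(weights)
    probs = [w / total for w in weights]
    resampled = random.choices(particles, weights=probs, k=len(particles))
    # Jitter keeps particle diversity so the filter can keep adapting.
    return [p + random.gauss(0.0, noise) for p in resampled]

if __name__ == "__main__":
    random.seed(42)
    true_pos = 3.0
    particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
    for _ in range(30):
        measurement = true_pos + random.gauss(0.0, 0.2)   # noisy sensor
        particles = particle_filter_step(particles, measurement)
    estimate = sum(particles) / len(particles)            # averaged guess
    print(round(estimate, 2))
```

In BLORT each particle is a full 6-DOF pose guess and the "measurement" is the edge-matching score between the rendered and observed images, but the validate/resample cycle is the same.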
3.6

Image processing is often only the first step toward further goals such as image analysis or pattern matching. The term image processing refers to operations done at the pixel level, where the information gained is also often pixel-level information; the features used are the individual pixels. However, it is necessary to define features of a higher level in order to increase complexity, robustness or speed, or all of these at the same time. Though a successfully extracted line in an image is also considered a feature, the term feature detection usually refers to feature types centered around a point. Such feature detectors are, for example:

- FAST
- BRIEF
- SIFT [23]
- ORB [32]
- SURF [5]
- FREAK

Figure 3.7: Extracted SIFT and ORB feature points of the same scene

The SIFT detector proved one of the strongest throughout the literature and existing applications, and was therefore chosen as the main feature detector of BLORT. The SIFTs extracted from the surface of the current object in the learning stage are saved in a data structure which will be referred to as the codebook (or object SIFTs) from now on. Later this codebook is used to match image SIFTs: features extracted from the current image.
SIFT details:

- invariant to: scaling, orientation
- partially invariant to: affine distortion, illumination changes

SIFT procedure:

- Convolve the image with a Laplacian of Gaussian (LoG) filter at different scales (scale pyramid)
- Compute the difference between the neighboring filtered images
- Keypoints: local maxima/minima of the difference of LoG

The gradient orientation at a keypoint is computed as

    Theta(x, y) = arctan( (L(x, y + 1) - L(x, y - 1)) / (L(x + 1, y) - L(x - 1, y)) )        (3.10)
3.7

3.7.1 k-Nearest Neighbors

k-Nearest Neighbors (kNN) [12] is an algorithm used for solving classification problems. Given a distance measure over the dataset's data type, it classifies the current element based on the attributes and classes of its nearest neighbors. It is also often used for clustering tasks.

In BLORT, kNN is used during the detection stage to select a fixed-size set (of size k) of features from the codebook most similar to the feature currently being matched.
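A minimal sketch of the kNN idea in pure Python (2-D feature vectors and Euclidean distance; this is not the BLORT implementation, and the labels are made up):

```python
from collections import Counter

def knn_classify(dataset, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors.
    `dataset` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = sorted(dataset, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    data = [((0.0, 0.0), "background"), ((0.1, 0.2), "background"),
            ((5.0, 5.1), "object"), ((5.2, 4.9), "object"),
            ((4.8, 5.0), "object")]
    print(knn_classify(data, (5.0, 5.0)))   # -> object
```

In BLORT's case the "feature vectors" are 128-dimensional SIFT descriptors and the k nearest codebook entries are kept as match candidates rather than put to a vote.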
3.7.2 RANSAC

The RANSAC [13] algorithm is possibly the most widely used robust estimator in the field of computer vision. The abbreviation stands for Random Sample Consensus. RANSAC is an iterative model estimation algorithm which operates by assuming that the input data set contains outliers (elements not inside the validation range of the estimated mathematical model) and minimizes the ratio of outliers to inliers. It is a non-deterministic algorithm, since random number generation is used in the sampling stage.

In BLORT, RANSAC is used to estimate the pose of the object from image features (SIFTs in this case) in order to initialize the tracker module; the RANSAC method can therefore be found in the detector module.

Figure 3.9: Extracted SIFTs. Red SIFTs are not in the codebook; yellow and green ones are considered object points, where green ones are inliers of the model and yellow ones are outliers.
Data: dataset;
      model - whose parameters are to be estimated;
      n-points-to-sample - number of points to use to give a new estimation;
      max-ransac-trials - maximum number of iterations;
      t - a threshold for the maximal error when fitting a model;
      n-points-to-match - the number of dataset elements required to set up a valid model;
      eps0 - an optional, tolerable error limit
Result: best-model; best-inliers; best-error

iterations = 0;
idx = NIL;
w = n-points-to-match / dataset.size;
while iterations < max-ransac-trials or (1.0 - w^n-points-to-match)^iterations >= eps0 do
    idx = random indices from the dataset;
    model = Compute_model(idx);
    inliers = Get_inliers(model, dataset);
    if inliers.size >= n-points-to-match then
        error = Compute_error(model, dataset, idx);
        if error < best-error then
            best-model = model;
            best-inliers = inliers;
            best-error = error;
        end
    end
    increment iterations;
end
Algorithm 4: RANSAC algorithm in BLORT
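As a runnable illustration of the RANSAC idea (this is not the BLORT code; it fits a 2-D line instead of a 6-DOF pose, and all parameter values are made up):

```python
import random

def ransac_line(points, trials=200, threshold=0.2, min_inliers=5):
    """Robustly fit y = m*x + b: repeatedly sample two points, propose a
    line, count inliers within `threshold`, and keep the best-supported
    model. Returns (model, inliers); model is None if nothing qualified."""
    best_model, best_inliers = None, []
    for _ in range(trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # vertical pair, skip this sample
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < threshold]
        if len(inliers) >= min_inliers and len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

if __name__ == "__main__":
    random.seed(0)
    # 20 points on y = 2x + 1 plus 5 gross outliers.
    pts = [(x / 10.0, 2 * x / 10.0 + 1) for x in range(20)]
    pts += [(0.5, 9.0), (1.0, -4.0), (1.5, 7.5), (0.2, 6.0), (1.8, -3.0)]
    model, inliers = ransac_line(pts)
    print(model, len(inliers))
```

The outliers never gather enough support, so the recovered slope and intercept stay close to the true line; in BLORT the sampled "model" is a pose hypothesis from a few SIFT correspondences and the inlier test is reprojection error.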
3.8 Implemented application

3.8.1 Learning module

Like similar applications, BLORT requires a learning stage before any detection can be done. To start the process, a CAD model of the object is needed. This model gets textured during the learning process, and SIFTs are registered onto surface points of the model. The software runs the tracker module, which is able to operate without texture based only on the peripheral edges of the object (i.e. its outline). The learning stage is operated manually.

After the operator starts the tracker, with the object in an initial pose displayed on the screen, the tracker will follow the object. By pressing a single button, both texture and SIFT descriptors are registered for the most dominant face of the object (i.e. the one that is most orthogonal to the camera). All captured information is used on the fly from the moment of recording during the learning stage. As the tracker gets more information by registering textures to different faces of the object, the operator's task becomes more convenient.¹

To make this step easier for new users of BLORT, demonstration videos were recorded:

Training with a Pringles container: http://www.youtube.com/watch?v=pp6RwxbUwrI
Training with a juicebox: http://www.youtube.com/watch?v=Hfg7spaPmY0
3.8.2 Detector module

The detector module, despite its name, performs not only object detection but also pose estimation; the resulting pose, however, is often not completely precise. The detection stage starts with the extraction of SIFTs (Section 3.6), continues with a kNN method (Section 3.7.1) to determine the best matches from the codebook, and these matches are then used by a RANSAC method (Section 3.7.2) to approximate the pose of the object.

¹ Cylindrical objects tend to keep rotating when there is no texture, due to their completely symmetric form.
3.8.3 Tracker module

As mentioned before, the tracker module runs a particle filter-based tracker (Section 3.5), using 3D rendering to predict the expected view and validating it with edge detection and matching running on the GPU to achieve real-time speed. This step is best summarized in the overview of BLORT, Algorithm 2, at the beginning of this chapter.

Figure 3.11: Tracking result; the object visible in the image is rendered
3.8.4
Although BLORT was designed to provide real-time tracking (tracker module) after featurebased initialization (detector module) it yields a different possible use-case which is more desirable for this thesis than the original functionality.
When used in a nearly static scene to determine the pose of an object to be grasped, tracking provides refinement and validation of the pose acquired by the detector. Defining a timeout for the tracker in these cases results in significant resource savings, which is important on a real robot. After the timeout has passed and the confidence is sufficient, the last determined pose can be used. This way, for example, the robot doesn't have to run all the costly algorithms until it reaches the table where it needs to grab an object.
The above behaviour, however, is not always an option; therefore it is also required to have a full-featured tracker which can recover when the object is lost.
Even though the mode is selected by a launch-time parameter of BLORT, the run-time design pattern called State [14, p. 305] brings convenience to the implementation and future use.
tracking
The full-featured version of BLORT. When BLORT is launched in tracking mode, it recovers (or initializes) when needed and tracks continuously.
singleshot
When launched in singleshot mode, BLORT initializes using the detector module and then refines the obtained pose by launching the tracker module, but only when queried for this service through a ROS service interface. The result of one service call is one pose, or an empty answer if the pose estimation failed because the object was absent or the detection was wrong.
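The two modes can be illustrated with a minimal sketch of the State pattern [14]. The class and attribute names are hypothetical, and the detector/tracker bodies are stand-ins for the real BLORT calls.

```python
class TrackerState:
    """Base state: each mode decides what to do with a new camera frame."""
    def handle_frame(self, blort, frame):
        raise NotImplementedError

class TrackingState(TrackerState):
    """Full-featured mode: recover (re-detect) when lost, otherwise track."""
    def handle_frame(self, blort, frame):
        if not blort.confident:
            blort.detect(frame)      # recover / initialize
        blort.track(frame)           # track continuously

class SingleShotState(TrackerState):
    """Service mode: detect once, refine with the tracker, then go idle."""
    def handle_frame(self, blort, frame):
        blort.detect(frame)
        blort.track(frame)           # refine the detected pose
        blort.active = False         # stop until the next service call

class Blort:
    def __init__(self, state):
        self.state, self.confident, self.active = state, False, True
    def handle_frame(self, frame):
        if self.active:
            self.state.handle_frame(self, frame)
    def detect(self, frame):
        self.confident = True        # stand-in for the real detector
    def track(self, frame):
        pass                         # stand-in for the real tracker

node = Blort(SingleShotState())
node.handle_frame("frame0")
print(node.active)  # False: one answer per service call, then idle
```

The mode-specific behaviour lives entirely in the state object, so adding a new mode does not touch the main frame-handling loop.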
3.9
The goal of this thesis was to find a way to approach tabletop object grasping on the REEM robot and to provide an initial solution to it.
Table 4.1 shows detection statistics of BLORT in a few given scenes. The average pose estimation time was between 3 and 5 seconds.
Since the part which takes most of the CPU time is the RANSAC algorithm inside the detector module, it is desirable to decrease the number of extracted SIFT (or any other) features.
Most of the failed attempts were caused by matching the bottom or the top of the boxes to the wall or some other untextured surface. In these cases the detector made a mistake by initializing the tracker with a wrong pose, but the tracker was satisfied with it because the edge-based matching (which requires no texture) was perfect. It would be useful to provide a way to block specific faces of the object in case they have little texture.
3.10
All software developed for BLORT was published as open source on the ROS wiki and can be found at the following link:
http://www.ros.org/wiki/perception_blort
It is a ROS stack which consists of 3 packages.
blort: holds the modified version of the original BLORT sources, used as a library.
blort_ros: contains the nodes using the BLORT library, completely separate from it.
siftgpu: a necessary dependency of the blort package.
The code is hosted on PAL Robotics' public GitHub account at https://github.com/pal-robotics.
3.11
Hardware requirements
BLORT requires an OpenGL-capable graphics card with GLSL ≥ 2.0 (OpenGL Shading Language) support, both for running the parallelized image processing in the tracker module and in the detector module, where SiftGPU uses it to extract SIFT features fast.
3.12
Future work
Use a database to store learned object models. This could also be used to interface with
other object detection systems.
SIFT dependency: Remove the mandatory usage of SiftGPU and SIFT in general. Provide
a way to use different keypoint extractor/detector techniques.
OpenGL dependency: It would be elegant to have build options which also support CUDA
or non-GPU modes.
Chapter 4
Increasing the speed of pose estimation
using image segmentation
4.1
Image processing operators such as feature extractors are usually quite expensive in terms of computation time; therefore it is often beneficial to limit their operating space. Much higher speed can be achieved by restricting the operating space of frequently called, costly image processing operators. The question is how this restriction can be done.
Trying to copy nature is usually a good way to start in engineering, so let's think of how our own image processing works. Human perception tries to keep things simple and fast, while the brain devotes only a tiny part of itself to the task. Most of the information we receive through our eyes is disposed of by the time it reaches the brain. The information that actually arrives is concentrated around a certain area of our vision, perceived in high detail and called the focus point, while we only get highly sparse information about other areas. In this chapter the same approach is followed to increase the speed of image-based systems, in this case focused on boosting BLORT.
In order to limit the operating space, the input image needs to be segmented. Segmentation can be done via direct masking, by painting the masked regions of the image to some color, or by assigning to the image a matrix of 0s and 1s as a mask, marking the valid and invalid pixels, and carrying this mask along with the image.
In general, a priori knowledge is required to know which areas of the input are interesting for a specific costly operator. Most of the time it depends on the actual application environment, which is defined by the hardware, software, camera and physical surroundings. The result of the segmentation is a mask which in the end is used to indicate which pixels are valid for the costly operator:

    O_{i,j} = op(I)_{i,j}  if M_{i,j} = 1,    O_{i,j} = e  otherwise    (4.1)

where O is the output image, I is the input image, i, j are the current indices, M is the mask, while the image operator op and the extremal element e depend on the current use-case.
As it plays an important role in computer vision, image segmentation is a strong tool of medical imaging, face and iris recognition and agricultural imaging, as well as of image operator optimization.
4.2
After creating a mask with a specific method, point-like errors should be eliminated. This step is done by running an erode operator, a standard image processing operation. Erode works on a binary image: during this step, every pixel of the target color (or class) is trialed for survival. Figure 4.1 shows an example and the way the erosion trial works at pixel level.
(a) Original
(b) Result
Figure 4.1: Example of erosion where the black pixel class was eroded
In order to make sure that the mask was not shrunk too much, a dilate step may follow. Like erode, the dilate operator works on a binary image, but it trials all non-target pixels for survival. Figure 4.2 shows an example and the way the dilation trial works at pixel level.
(a) Original
(b) Result
Figure 4.2: Example of dilation where the black pixel class was dilated
By combining erode and dilate, point-like noise can be eliminated and masking errors can be fixed in an adaptive way, reducing mask noise. The parameters of the two operators are exposed to the end-user and can be tuned at run-time.
4.3
Since the segmentation task is well defined, the same ROS node skeleton can be used to implement all segmentation methods. This node has two primary input topics: image and camera info. The latter holds the camera parameters and is published by the ROS node responsible for capturing images. The output topics of the node are: a debug topic which holds information on the inner workings (e.g. a correlation map), a masked version of the input image, and a mask. For efficiency, the node is designed so that messages on these topics are only published when there is at least one node subscribing to them. For this reason the debug topic is usually empty.
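The subscriber-gated publishing can be sketched independently of ROS (in rospy the check would use the publisher's `get_num_connections()`); the class and attribute names below are hypothetical.

```python
class LazyPublisher:
    """Publish (and even build) a message only when someone listens,
    mirroring the node design where debug topics stay empty otherwise."""
    def __init__(self):
        self.subscribers = []
        self.published = []          # stands in for the network transport

    def publish_if_subscribed(self, build_message):
        if not self.subscribers:     # nobody listening: skip the costly work
            return False
        self.published.append(build_message())
        return True

debug = LazyPublisher()
print(debug.publish_if_subscribed(lambda: "correlation map"))  # False
debug.subscribers.append("rviz")
print(debug.publish_if_subscribed(lambda: "correlation map"))  # True
```

Passing a callable instead of a ready message means the debug image is not even computed when nobody subscribes, which is where the efficiency gain comes from.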
It was mentioned before that the parameters of the erode and dilate operators need to be exposed. This is solved through the dynamic_reconfigure 2 interface provided by ROS. For each of erode and dilate, the number of iterations and the kernel size can be set. An extra threshold parameter was included because segmentation methods often use at least one thresholding operator inside.
4.4
4.4.1
2 http://ros.org/wiki/dynamic_reconfigure
In Figure 4.5 the disparity image is computed by matching image patches between the images captured by the two cameras. Subfigure (d) of Figure 4.5 shows a depth map with each pixel colored according to its estimated depth value; black regions have unknown depth. The major drawback of stereo cameras compared to RGB-D sensors is that while depth images acquired from RGB-D sensors are continuous, stereo systems tend to have holes in the depth map where no depth information is available. This effect is mostly due to the fact that stereo cameras operate using feature detection and matching, while most RGB-D cameras use light-emitting techniques. Depth map holes appear in regions where no features could be extracted because of texturelessness. RGB-D techniques solve the texturelessness problem by emitting a light pattern onto the surface and measuring its distortion.
4.4.2
Segmentation
After obtaining a depth image, it is not enough to create a mask based only on the distance values of single pixels. Such masks reflect the raw result of the segmentation, but further steps can be taken to refine them.
Distance-based segmentation is useful but not sufficient in itself. Even though some parts of the input image are usually omitted, it can still forward too much unwanted information to a costly image-processing system. Images of experiments are shown in Figure 4.6. Segmentation steps can be organized in a pipeline fashion, so the obtained result is an aggregate of masks computed using different techniques.
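A distance-based mask and the pipeline aggregation can be sketched with numpy; the working range below is an arbitrary illustrative value.

```python
import numpy as np

def depth_mask(depth, max_distance, min_distance=0.0):
    """Keep pixels whose estimated depth lies inside the working range.
    Zero depth marks stereo holes, excluded by the strict lower bound."""
    return (depth > min_distance) & (depth <= max_distance)

def combine(masks):
    """Pipeline aggregation: a pixel stays valid only if every stage kept it."""
    out = masks[0]
    for m in masks[1:]:
        out = out & m
    return out

depth = np.array([[0.0, 0.6], [1.2, 3.0]])   # metres; 0.0 = stereo hole
near = depth_mask(depth, max_distance=1.5)
print(near)
# [[False  True]
#  [ True False]]
```

In the full pipeline `combine` would take the distance mask together with, for example, a histogram-based mask, producing the hybrid masks evaluated later in this chapter.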
4.5
4.5.1
Template-based segmentation
Template matching
Template matching is a common way to start with object detection but rarely succeeds as a standalone solution. It is perfect for searching for a subimage in a big image, but the matching often fails when the pattern comes from a different source.
The most straightforward approach to template matching is image correlation. The output of image correlation is a correlation map, usually represented as a floating point single-channel image of the same size as the scanned image, where the value of each pixel holds the result of the image-subimage correlation centered around that position.
OpenCV has a highly optimized implementation of template matching in which several different correlation methods can be chosen. 3
4.5.2
Segmentation
Irrelevant regions can be masked by thresholding the correlation map at a certain limit and using the result as the final mask. For tuning convenience and to handle noise, erode and dilate operations can also be used.
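The correlation map and its thresholding can be sketched with a small normalised cross-correlation in numpy. The map here is smaller than the image by the template size (no border handling), and the threshold value is an illustrative assumption; OpenCV's optimized `matchTemplate` is the practical equivalent.

```python
import numpy as np

def correlation_map(image, template):
    """Normalised cross-correlation of the template at every valid
    position: a floating point single-channel map whose pixels hold
    the image/sub-image correlation centered around that position."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    h, w = image.shape[0] - th + 1, image.shape[1] - tw + 1
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            pn = np.sqrt((p * p).sum())
            out[y, x] = (p * t).sum() / (pn * tn) if pn * tn > 0 else 0.0
    return out

# Hide the template pattern at position (2, 3) of an empty image.
image = np.zeros((6, 6))
template = np.array([[0.0, 1.0], [1.0, 0.0]])
image[2:4, 3:5] = template

cmap = correlation_map(image, template)
mask = cmap >= 0.9                 # threshold the correlation map into a mask
print(int(mask.sum()))             # 1
peak = np.unravel_index(np.argmax(cmap), cmap.shape)
print(tuple(int(v) for v in peak))  # (2, 3)
```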
3 OpenCV documentation: http://docs.opencv.org/modules/imgproc/doc/object_detection.html?highlight=matchtemplate
4.6
4.6.1
Histogram backprojection
Calculating the histogram of an image is a fast operation and provides pixel-level statistical information. This type of information is also often used to solve pattern matching in a relatively simple way, based on the assumption that similar images or sub-images often have similar histograms, especially when these are normalized.
OpenCV provides an implementation of histogram backprojection, where the target histogram (the pattern in this case) is backprojected onto the scanned image and a correlation image is computed. This result image indicates how well the target and sub-image histograms match, therefore a maximum search will find the best matching region.
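A minimal grayscale sketch of backprojection with numpy: each pixel receives the relative frequency of its intensity bin in the pattern's normalised histogram. The bin count and threshold are illustrative assumptions; OpenCV's `calcBackProject` typically works on hue histograms instead.

```python
import numpy as np

def backproject(image, pattern, bins=8):
    """Backproject the normalised histogram of the pattern onto the
    image, so regions resembling the pattern light up."""
    hist, _ = np.histogram(pattern, bins=bins, range=(0, 256))
    hist = hist / hist.sum()                       # normalised histogram
    idx = np.clip(image * bins // 256, 0, bins - 1).astype(int)
    return hist[idx]                               # per-pixel bin frequency

image = np.array([[200, 200, 10], [210, 10, 10]])  # bright blob on dark bg
pattern = np.array([205, 210, 200, 195])           # bright, ball-like patch
bp = backproject(image, pattern)
print((bp > 0.5).astype(int))                      # thresholded into a mask
# [[1 1 0]
#  [1 0 0]]
```

Because only bin frequencies matter, moderate brightness shifts that keep pixels inside the same bins do not change the result, which hints at the robustness to light changes discussed below.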
4.6.2
Segmentation
The noise level of these results is low, therefore erode steps are not necessary here; but to enlarge the valid regions of the mask, the dilate operator can still be used. The parameters are, as before, exposed through configuration files.
Experiments have shown that histogram backprojection works far more precisely and faster than the pixel correlation-based template matching approach. Figure 4.9 shows an experiment where the pattern was the orange ball visible in the upper-right corner, and on the left is an image masked according to the result of the histogram backprojection. Image correlation-based matching usually fails under light conditions different from those under which the pattern was captured. It can be seen that histogram backprojection is more robust to changes in light conditions.
Figure 4.9: Histogram segmentation using a template of the target orange ball
4.7
By masking the input image of the BLORT nodes, the overall success rate was increased. The key to this success was to control the ratio of inliers and outliers fed into the RANSAC method inside the detector module. By manipulating the features handled by RANSAC so that the Inliers/Outliers ratio increases, the overall success rate and speed can be enhanced. This ratio cannot be increased directly, but it can be manipulated by decreasing the overall number of extracted features while trying to keep the ones coming from the object. A good indicator is the ratio of Object SIFTs (the features matching the codebook) to All SIFTs extracted from the image.
Method used               Extracted features   Object SIFTs / All SIFTs   Avg. time   Success rate
Nothing                   4106                 52 / 4106                  101 s       41%
Stereo-based              2287                 53 / 2287                  64 s        31%
Matching-based            3406                 32 / 3406                  74 s        50%
Histogram-based           1220                 50 / 1220                  32 s        82%
Stereo+histogram hybrid   600                  52 / 600                   20 s
4.8
Published software
All segmentation software was published as open source on the ROS wiki and can be found at the following link:
http://www.ros.org/wiki/pal_vision_segmentation
4.9
Hardware requirements
4.10
Future work
Future work on this topic may include the introduction of other pattern-matching techniques or even new sensors. Also, most works marked as detectors in Chapter 2, such as LINE-MOD, can be used for segmentation as long as they are reasonable in terms of computation time.
Chapter 5
Tracking the hand of the robot
5.1
The previous chapters of this thesis dealt with estimating the pose of the target objects, which is necessary for grasping; but when considering the grasping problem (Section 1.7) in full detail together with the visual servoing problem (Section 1.8), it is also necessary to be able to track the robot manipulator, the hand in this case.
A reasonable approach could be to use a textured CAD model to track the hand, but the design of REEM does not have any textures on the body by default. To overcome this problem, the marker-based approach was selected. Augmented Reality applications already feature marker-based detectors and trackers, therefore it is worthwhile to test them for tracking a robot manipulator.
5.2
AR Toolkit
As its name indicates, the Augmented Reality Toolkit [1] was designed to support applications implementing augmented reality. It is a widely known and supported open-source project. It provides marker designs and software to detect these markers and estimate their pose in 3D space, or to compute the viewpoint of the user.
(a) Using an ARToolkit marker attached to an object in the Gazebo simulator to test against BLORT. (b) ARToolkit marker on the hand of REEM.
5.3
ESM
ESM is a completely custom pattern matching technique that stands on solid ground thanks to a minimization method defined specifically for it. Unfortunately it is only able to track the target, but given the right circumstances it can achieve high precision while remaining adaptive to light-source changes.
ESM was tested in the Gazebo simulator and also with a real webcam to follow the company
logo of PAL Robotics. Figure 5.3 shows screenshots of the tracking tests and the target pattern.
5.4
Aruco
Even though the Aruco library matches most Augmented Reality libraries in functionality, it is different in implementation and application aspects. The markers used by Aruco might seem similar to the ones seen previously, though they differ in the definition of the inner side of the markers: it is a 5x5 grid made up of black and white rectangles. These patterns encode numbers from 0 to 1024 using a modified Hamming code, which provides error detection as well as a tool to measure the distance between markers: the Hamming distance. Knowing the distance between markers makes it possible to select the markers most distant from each other, minimizing the number of false or uncertain detections.
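The distance-based selection idea can be sketched with plain integer codes and a greedy pick; Aruco's real 5x5 markers use a modified Hamming code, so the toy codes below are only illustrative.

```python
def hamming(a, b):
    """Number of differing bits between two marker codes."""
    return bin(a ^ b).count("1")

def pick_distant(codes, k):
    """Greedily pick k codes maximising the minimum pairwise Hamming
    distance, so the chosen markers are hard to confuse with each other."""
    chosen = [codes[0]]
    while len(chosen) < k:
        best = max((c for c in codes if c not in chosen),
                   key=lambda c: min(hamming(c, s) for s in chosen))
        chosen.append(best)
    return chosen

codes = [0b0000000000, 0b0000000001, 0b1111111111, 0b1111100000]
print(pick_distant(codes, 2))  # [0, 1023]
```

The pick skips 0b0000000001 (distance 1 from the first marker, easily confused by a single corrupted cell) in favour of the code that differs in every bit.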
(a) Marker 300 (b) Marker 582
Figure 5.6: Video of Aruco used for visual servoing. Markers are attached to the hand and to
the target object in the Gazebo simulator.
http://www.youtube.com/watch?v=sI2mD9zRRw4
5.5
Application examples
The implemented Aruco ROS package can now be used for different robot tasks that depend on 3D pose input. Not only the robot hand but also other parts can be equipped with Aruco markers, so robots can locate each other by their marked regions; markers can also serve on self-charging stations, helping the robot to execute a safe approach. Since feature-based pose estimation will always be slower than the marker-based approach, a robot-ready kitchen could be made by marking all important objects with Aruco markers whose IDs correspond to entries in the robot's database. A faster system could be implemented this way.
(a) Aruco in the Gazebo simulator attached to the hand of the REEM model. (b) Real scene with the real REEM robot and an Aruco marker on the hand.
5.6
Software
The Aruco ROS nodes are planned to be released after verification by the Aruco authors.
5.7
Hardware requirements
Chapter 6
Experimental results for visual
pre-grasping
6.1
The design of all components was done keeping the visual servoing architecture in mind. Figure 6.1 shows the final implemented structure of the general architecture presented in Section 1.8. All the software produced in Chapters 3, 4 and 5 was integrated into this architecture.
6.2
At the end of my internship at PAL Robotics, several experiments were done to validate this approach integrated with the visual servoing controller. Results showed that the system is capable of running at a relatively fast speed, given that, except for minimal ones, no deep optimization was done. This speed was an average of between 3 and 5 seconds in a cluttered environment with the object often partially occluded. All final experiments were repeated with the Pringles container and the juicebox.
6.3
Hardware requirements
The hardware requirements of the integrated solution are the sum of the requirements of the ROS nodes introduced in Chapters 3, 4 and 5.
Experiments were done on several different machines.
Desktop:
Intel Xeon Quad-Core E5620 2.4 GHz
4 GB DDR3 memory
NVidia GeForce GTX560
Ubuntu Linux 10.04 Lucid Lynx
Laptop 1:
Intel Core2Duo 2.2 GHz
4 GB DDR2 memory
Intel Graphics Media Accelerator X4500MHD
Ubuntu Linux 11.04 Natty Narwhal
Laptop 2:
Intel Core i7 2.6 GHz
8 GB DDR2 memory
NVidia GeForce GT 650M
Ubuntu Linux 10.04 Lucid Lynx
Inner computer of the REEM robot:
Intel Core2Duo 2.2 GHz
Ubuntu Linux 10.04 Lucid Lynx
Chapter 7
Conclusion
7.1
Key results
The goals defined at the beginning of the work were all reached by the end of my internship at
PAL Robotics.
First, I gathered information about existing object detection techniques and software and tried to classify them by their principal attributes. After the first survey was done, I selected the best candidates for the task and analyzed them further by running demos and tests. I integrated one chosen software package with the REEM robot and ran experiments on it. In order to increase the speed of the software, parameters had to be fine-tuned, and I also introduced a new way to increase the speed of the system by segmenting the images. These segmentation nodes were implemented in a general way so that other packages relying on image processing can also benefit from them. As a result, REEM is now able to estimate the pose of a trained object in common kitchen or home scenes.
Given a working pose estimation node, only a reliable hand tracker was needed. Using the information gathered during the survey work and extra advice from Jordi, I tested 3 different packages to see which one is best for tracking the hand of the REEM robot. It turned out that the Aruco library is capable of doing this job reasonably fast and accurately. After consulting with the author of Aruco, I created a ROS package for Aruco and used it in experiments to accomplish visual pre-grasping poses. REEM is now able to track its own hand using vision, and by using this in a visual servoing architecture it is able to move its hand into a grasping position.
While some tasks, such as tracking the hand, were easier to solve with existing software, the parts regarding object detection were much harder to deal with. The survey work was really interesting and I learned a lot about the field in general during those weeks. Choosing BLORT was the best choice at that time. I consulted several times with fellow MSc students
7.2
Contribution
Most of the work I did during this thesis was given back to the community. This section summarizes those contributions.
I kept a daily blog of my work, which can be accessed here: http://bmagyaratpal.wordpress.com/ (Last accessed: 2012.12.) It can be useful to anyone working on similar problems.
All released software can be found in the GitHub repository of PAL Robotics: https://github.com/pal-robotics (Last accessed: 2012.12.).
BLORT links to documentation and tutorials:
BLORT stack: http://ros.org/wiki/perception_blort
blort package: http://ros.org/wiki/blort
blort_ros package: http://ros.org/wiki/blort_ros
siftgpu package: http://ros.org/wiki/siftgpu
Training tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/Training
Track and detect tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/TrackAndDetect
How to tune? tutorial: http://www.ros.org/wiki/blort_ros/Tutorials/Tune
Image segmentation nodes:
http://www.ros.org/wiki/pal_vision_segmentation
As an additional result, several bugs and suggestions were submitted during the thesis work.
These are:
7.3
Future work
As with most software developed for a thesis, this work could also be further expanded in several directions.
One major feature could be to provide more flexible GPU usage with OpenGL, CUDA or OpenCL implementations.
Further decrease the number of features extracted by the detector module of BLORT to gain speed, increase detection confidence and decrease ambiguity.
A smart addition would be to block the detection of textureless object faces, such as the bottoms of juice boxes; these were the reason for most of the failed detections.
The BLORT library itself could be further optimized and refactored to provide a convenient way for future expansion.
Use a database to store the learned object models of BLORT. This could also be used to interface with other object detection systems.
Use a database designed for grasping to bring this work forward. The database entries should have grasping points marked for each object so the robot can grasp it where it is best.
Chapter 8
Bibliography
[1] AR Toolkit. http://www.hitl.washington.edu/artoolkit/. Accessed: 22/08/2012.
ESM. http://esm.gforge.inria.fr/.
[4] Julius Adorf. Object detection and segmentation in cluttered scenes through perception
and manipulation, 2011.
[5] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. 3951:404-417, 2006.
[6] S. Benhimane and E. Malis. Homography-based 2d visual tracking and servoing. International Journal of Robotic Research (Special Issue on Vision and Robotics joint with the
International Journal of Computer Vision), 2007.
[7] S. Benhimane and E. Mallis. Homography-based 2d visual tracking and servoing. International Journal of Robotic Research (Special Issue on Vision and Robotics joint with the
International Journal of Computer Vision), 2007.
[8] S. Benhimane, A. Ladikos, V. Lepetit, and N. Navab. Linear and quadratic subsets for
template-based tracking. IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, 2007.
[9] G. Bradski. The OpenCV Library. Dr. Dobbs Journal of Software Tools, 2000.
[10] Dmitry Chetverikov. Basic algorithms of digital image processing, slides of course. ELTE.
[11] Paolo Cignoni, Massimiliano Corsini, and Guido Ranzuglia. Meshlab: an open-source 3d
mesh processing system. ERCIM News, 2008(73), 2008. doi: http://ercim-news.ercim.eu/
meshlab-an-open-source-3d-mesh-processing-system.
[12] S.A. Dudani. The distance-weighted k-nearest-neighbor rule. Systems, Man and Cybernetics, IEEE Transactions on, (4):325-327, 1976.
[13] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.
[14] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design patterns: elements of reusable object-oriented software. Addison-Wesley Professional, 1995.
[15] Corey Goldfeder, Matei Ciocarlie, Hao Dang, and Peter K. Allen. The columbia grasp
database. In IEEE Intl. Conf. on Robotics and Automation, 2009.
[16] S. Hinterstoisser, V. Lepetit, S. Ilic, P. Fua, and N. Navab. Dominant orientation templates
for real-time detection of texture-less objects. 2010.
[17] S. Hinterstoisser, S. Holzer, C. Cagniart, S. Ilic, K. Konolige, N. Navab, and V. Lepetit.
Multimodal templates for real-time detection of texture-less objects in heavily cluttered
scenes. 2011.
[18] S. Holzer, S. Hinterstoisser, S. Ilic, and N. Navab. Distance transform templates for object
detection and pose estimation. 2009.
[19] A. Ladikos, S. Benhimane, and N. Navab. A real-time tracking system combining
template-based and feature-based approaches. In International Conference on Computer
Vision Theory and Applications, 2007.
[20] A. Ladikos, S. Benhimane, M. Appel, and N. Navab. Model-free markerless tracking
for remote support in unknown environments. In International Conference on Computer
Vision Theory and Applications, 2008.
[21] A. Ladikos, S. Benhimane, and N. Navab. High performance model-based object detection
and tracking. In Computer Vision and Computer Graphics. Theory and Applications,
volume 21 of Communications in Computer and Information Science. Springer, 2008.
ISBN 978-3-540-89681-4.
[22] Robert Laganière. OpenCV 2 Computer Vision Application Programming Cookbook. Packt Publishing, 2011. ISBN 1849513244.
[35] Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo. Robotics: Modelling, Planning and Control. 2009.
[36] Changchang Wu. SiftGPU: A GPU implementation of scale invariant feature transform
(SIFT). http://cs.unc.edu/ccwu/siftgpu, 2007.
[37] M. Zillich and M. Vincze. Anytimeness avoids parameters in detecting closed convex
polygons. In The Sixth IEEE Computer Society Workshop on Perceptual Grouping in
Computer Vision (POCV 2008), 2008.
[38] Oliver Zweigle, Rene van de Molengraft, Raffaello d'Andrea, and Kai Haussermann. Roboearth: connecting robots worldwide. In Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human, ICIS '09, pages 184-191, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-710-3. doi: 10.1145/1655925.1655958. URL http://doi.acm.org/10.1145/1655925.1655958.
[39] Eric Marchand, Fabien Spindler, and François Chaumette. ViSP: A generic software platform for visual servoing.
Appendix A
Appendix 1: Deep survey tables
These tables were prepared based on the tables in Appendix B. The techniques listed here were further analyzed and evaluated.
RoboEarth
packages
BIGG detector
VHF + ReIn
3
4
Stefan Hinterstoisser,
Holzer: LINEMOD
BLORT - Blocks
World
Robotic
Vision Toolbox
Object recognition,
TOD, Ecto
ROS
Name
ViSP Model based
tracker
Test
1
Yes
Yes
Yes
Yes
Yes
Yes
Detection
Yes
Yes
Yes
Yes
No
No
Yes
Pose
Yes
ROS package
ROS package
ROS package
ROS stack
Implementation
ViSP ros package
...
monocular
monocular
monocular
monocular
kinect(detect, seems
compulsory
for
recording), monocular (detect)
Sensor required
monocular
Experiences
Good and fast but will require some further work.
Tends to get stuck on strong
edges.
The detection rate is too
poor. 2D detection is as
good as 3D for textured objects. For untextured neither
works.
none
False detection is too high,
the code is incomplete, unoptimized, and has memory
leaks. No relevant output
published.
Ferran: dropped it because
of very high false detection
rate, seems almost random
Still waiting for it.
...
none
none
none
4-6
Hz
(republishing
the
input image topic on
object found
10-11 Hz
...
ReIn,
BiGG(monocular),
VHF(point cloud)
related to DOT,
LINE-2D, LINE-3D
DOT
vocabulary tree, sift,
local image regions,
DOT
recognition, kinect
Technique keywords
CAD, edge tracking
http://www.irisa.
fr/lagadic/visp/
computer-vision.html
http://www.ros.org/
wiki/roboearth
http://ros.org/
wiki/fast_template_
detector
http://ros.org/wiki/
objects_of_daily_use_
finder
(meeting notes:
look for
Maria from 2011.07.) http:
//pr.willowgarage.
com/wiki/
OpenCVMeetingNotes
http://www.acin.
tuwien.ac.at/?id=290
...
http://www.ros.org/
wiki/bigg_detector
Link
Test
...
printed
plate
tem-
Extendable
how
Adding different CAD
models
Using
RoboEarths
database
...
online
...
CAD
Adding new
images
to
image data
folder, offline
training, 5Hz
640x480
offline(manually training
selected 3D
bounding box
or segmented
point cloud)
, models can
be stored in
database
online
Printed template
offline
online
online (record
mode
using
printed
templates)
Type of learning
offline
BiGG stands for: Binary Gradient Grid, a faster implementation is BiGGPy where the matching algorithm at
the end is changed to a pyramid matching method. In the
related paper a combination of BiGG and VFH is done
using ReIn and it yields reliable results.
Short desc
...
go to link
http://campar.
cs.tum.edu/pub/
hinterstoisser2011linemod/
hinterstoisser2011linemod.
pdf http://campar.cs.tum.edu/
pub/hinterstoisser2011pami/
hinterstoisser2011pami.pdf
http://www.willowgarage.com/
sites/default/files/icra11.
pdf http://www.ais.uni-bonn.
de/holz/spme/talks/01_
Bradski_SemanticPerception_
2011.pdf http://www.cs.ubc.
ca/lowe/papers/11muja.pdf
http://www.vis.uky.edu/
dnister/Publications/2006/
VocTree/nister_stewenius_
cvpr2006.pdf
same as DOT
none
http://www.irisa.fr/lagadic/
publi/all/all-eng.html
Related papers
3D(record,
2D(detect)
2D + 3D
2D
3D
2D, 3D
2D
...
3
4
5
6
detect),
Data required
CAD, initial pose,
2D
Test
1
...
Yes
Yes
No
...
Yes(trees)
Yes
Textured
Yes
...
No
Yes
Yes
...
No
Yes
...
Library used
OpenCV,
ViSP
...
Yes
Yes
Yes
...
Yes
OpenCV, SiftGPU
...
ReIn
OpenCV
...
OpenCV
Qt interface OpenCV,
for database PCL, there is
apps,
rviz a Java dep at
for
topics, ontology
pcl visualization
Visualization
Yes
model
(record),
matched point
cloud(kinect
detect), detected object
name, pose
...
best matching
template on
ROS topic
Partial
detection roi
pose
pose
Textureless Output
Yes
pose
...
BSD
BSD
BSD
LGPL
BSD
BSD,
GPL,
LGPL
...
2010
2011
2012
2010
2012
2011
...
...
...
...
...
...
Video
http://www.
youtube.com/watch?
v=UK10KMMJFCI
...
Appendix B
Appendix 2: Shallow survey tables
These tables were prepared to help evaluate the available scientific works related to the grasping
and tabletop manipulation topic. Assumptions here are not final but served as a first level for
the deep survey.
of
use
http://campar.
Algorithm
in.tum.de/Chair/
ProjectComputerVisionCADModel
natural 3D markers
(N3M)
similar to HoG-based
representation, template matching, low
textures objects
cells,
non-cyclic
graph,
typed
edge: cell, objectrecognition: bag of
feature representation
recognition, kinect,
Technique keywords
vocabulary tree, sift,
local image regions,
DOT
This tracker works using CAD models (VRML format) and provides location and pose of the followed object. A tracker tracks
one object at a time.
This technique requires a CAD model of the target object and
does offline training on it to choose the best N3Ms that will be
used during tracking.
Short desc
A general framework for detecting object, its database are prebuilt with often used kitchen objects.
Algorithm
Hinterstoisser:
Vision
targeted CAD
models
ViSP Model
based tracker
http://campar.in.tum.de/
Main/StefanHinterstoisser
http://tw.myblog.yahoo.com/
stevegigijoe/article?mid=
275&prev=277&next=264 how to
get it work on linux
http://www.irisa.fr/lagadic/
visp/computer-vision.html
Hinterstoisser:
DOT
Framework
for perception
- seems to
be much like
a
general
framework
for processing
Algorithm
Framework
for simplified
recognition
Framework
for recognition, object
model
creation
Type
Framework
for recognition
http://ros.org/wiki/object_
recognition
http://ecto.
willowgarage.com
https:
//github.com/wg-perception/
object_recognition_ros_
server doesnt really seem stable, no
docs
http://users.acin.tuwien.ac.
at/mzillich/?site=4
Link
http://ros.org/wiki/
objects_of_daily_use_finder
http://answers.ros.org/
question/388/household_
objects_database-new-entries
http://www.ros.org/wiki/
roboearth
Willowgarage
ECTO
BLORT
RoboEarth
ROS
packages
Name
Object
daily
finder
73
30 FPS
on GPU
(geforce
gtx 285)
Willowgarage depends
ECTO
on
the
size
of
graph
and
computational
cost of
cells
DOT
for 12 FPS
Real-Time
on
orDetection
dinary
laptop
ViSP Model
based
tracker
Vision tar- 15 FPS
geted CAD (tested
models
on 1GHz
centrino)
RoboEarth
ROS packages
BLORT
10 FPS
Object
daily
finder
of
use
Speed
Name
Adding different
CAD models
Adding different
CAD models
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Extendable
Extendable how
C++,
native, originally
Windows
but
works on linux
native
OpenGL
ROS stack
ROS package
Implementation
offline
none
online
(record
mode using printed
templates)
online
(record
mode using printed
templates)
online
offline
Type of learning
http://www.irisa.fr/
lagadic/publi/all/
all-eng.html
http://wwwnavab.
in.tum.de/Chair/
PublicationDetail?pub=
hinterstoisser2007N3M
http://campar.in.tum.de/
personal/hinterst/index/
project/CVPR10.pdf
http://users.acin.tuwien.
ac.at/mzillich/files/
zillich2008anytimeness.
pdf
http://ecto.willowgarage.
com/releases/
bleedingedge/ecto/
overview/cells.html
http://www.vis.uky.edu/
dnister/Publications/
2006/VocTree/nister_
stewenius_cvpr2006.pdf
Related papers
CAD,
initial
pose, 2D
CAD, 2D
2D
depends
3D(record,
detect),
2D(detect)
2D
Data required
2D
74
Yes
Yes
Yes
Yes
Yes
depends
Yes
Yes(dot)
Textureless
Yes
Willowgarage depends
ECTO
Hinterstoisser: No
DOT
RoboEarth
ROS packages
BLORT
Yes(trees)
Object
daily
finder
of
use
Textured
Name
esti-
and
Yes
Yes
Yes
depends
Yes
Yes
Yes
Detection
Available
Available
Yes
Yes if
got 3D
info
Yes
Yes
Yes
OpenCV,
ESM
opencv,
ros, pcl
OpenCV,
Intel IPP
, ESM
OpenCV,
ViSP
own
library
OpenCV,
PCL
OpenCV
VisualizationLibrary
used
own
visualization
using
OpenGL
depends highgui
Yes
Yes
Pose
output
of
cells
location and
pose (if got
3D info)
pose
pose
mate
name
pose
Output
monocular
monocular
monocular
any
kinect(detect,
record), monocular
(detect)
monocular
monocular
Sensor required
GPL,
BSD
LGPL
BSD
2007
2011
2010
2011
BSD, 2011
GPL,
LGPL
modified 2010
BSD
License Last
activity
BSD
2012
had to install
deps manually
both SIFT
and DOT for
textures and
textureless
objects.
Additional
comments
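Several of the systems surveyed here (the vocabulary-tree-based Objects of daily use finder, the bag-of-features pipelines) represent an image as a histogram of quantized local descriptors and recognize objects by comparing histograms. A minimal, purely illustrative sketch of that representation follows; the toy vocabulary, descriptors and function names are made up for the example and do not come from any of the surveyed packages.

```python
# Minimal bag-of-features sketch: quantize each local descriptor to its
# nearest "visual word" and build a normalized word histogram.
from math import dist  # Euclidean distance (Python 3.8+)

def quantize(descriptor, vocabulary):
    """Index of the visual word nearest to the descriptor."""
    return min(range(len(vocabulary)), key=lambda i: dist(descriptor, vocabulary[i]))

def bow_histogram(descriptors, vocabulary):
    """L1-normalized histogram of visual-word occurrences."""
    counts = [0] * len(vocabulary)
    for d in descriptors:
        counts[quantize(d, vocabulary)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

# Toy example: a 3-word vocabulary in a 2-D descriptor space.
vocab = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
descs = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.05, 0.0)]
print(bow_histogram(descs, vocab))  # → [0.5, 0.25, 0.25]
```

A vocabulary tree replaces the linear nearest-word search above with a hierarchical one, which is what keeps databases with many objects tractable.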
Table B.2: Shallow survey, part 2 — detection algorithms and OpenCV components. Same survey criteria as Table B.1.

ReIn
Type: simple framework. Keywords: nodelet, computational graph, modular.
Link: http://www.ros.org/wiki/rein
Speed: depends. Extendable by implementing ReIn components and plugging them together. Implementation: ROS package.
Related paper: http://www.cs.ubc.ca/lowe/papers/11muja.pdf

textured object detection (TOD)
Type: algorithm. Keywords: detection, training, feature extraction.
Link: http://ros.org/wiki/textured_object_detection
Learning: offline, database in a ROS package. Implementation: ROS package.

stereo object recognition
Type: algorithm.
Link: http://ros.org/wiki/stereo_object_recognition
Speed: no info. Sensor: stereo.
Related paper: http://www.willowgarage.com/sites/default/files/icra11.pdf

BIGG detector
Type: algorithm.
Link: http://www.ros.org/wiki/bigg_detector
Speed: no info.
Related paper: http://www.willowgarage.com/sites/default/files/icra11.pdf

Hae Jong Seo: LARKS
Type: algorithm.
Link: http://www.ros.org/wiki/larks
Learning: without training.
Related paper: http://www.soe.ucsc.edu/milanfar/publications/journal/TrainingFreeDetection_Final.pdf

ORB
Type: algorithm.
Link: http://pr.willowgarage.com/wiki/OpenCVMeetingNotes/Minutes2011-12-06
Speed: 30 FPS. Implementation: OpenCV implementation available.
Related paper: https://willowgarage.com/sites/default/files/orb_final.pdf

OpenCV BOWImgDescriptorExtractor
Keywords: descriptor (feature) extraction, descriptor matching, bag of words.
Link: http://opencv.itseez.com/modules/features2d/doc/object_categorization.html
Extendable by implementing more extractors and matchers. Implementation: implemented in OpenCV.

Hinterstoisser: LINEMOD
Type: algorithm. Keywords: DOT.
Link: http://campar.in.tum.de/Main/StefanHinterstoisser
Speed: 10 FPS. Textureless: yes. Learning: online, one example (printed template) at a time. Data required: 2D, 3D (optional). Implementation: OpenCV implementation (work in progress).
Related paper: http://campar.cs.tum.edu/pub/hinterstoisser2011linemod/hinterstoisser2011linemod.pdf

[Cells that lost their row alignment in extraction: licenses are BSD throughout, with last activity 2010-2012; data required spans 2D, 3D and combinations; sensors are monocular or stereo, one entry requiring stereo for pose; outputs include detections, ROIs and pose.]
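DOT and LINEMOD, referenced by several rows above, match templates of quantized dominant gradient orientations against a grid of per-cell orientation bitmasks. The sketch below shows only that scoring idea; the grid size, masks and template are invented for illustration, and the real methods add orientation spreading, linearized memory layouts and SSE-accelerated scoring.

```python
# Toy DOT/LINEMOD-style template scoring: gradient orientations are
# quantized into N_BINS bins; each image grid cell stores a bitmask of
# the orientations present in it, and a template scores by the fraction
# of its cells whose expected orientation bit is set in the image.
N_BINS = 8

def score(template, image_masks):
    """template: list of (cell_index, orientation_bin) pairs;
    image_masks: per-cell bitmask of orientations present in the image."""
    hits = sum(1 for cell, ori in template if image_masks[cell] & (1 << ori))
    return hits / len(template)

# Toy example with 4 grid cells.
image = [0b00000010, 0b00000001, 0b10000000, 0b00000110]
tmpl = [(0, 1), (1, 0), (2, 7), (3, 4)]
print(score(tmpl, image))  # → 0.75 (cell 3 lacks a bin-4 orientation)
```

Because a cell's mask can hold several orientation bits at once, the score is tolerant to small shifts and clutter, which is what lets these detectors run at the frame rates quoted in the table.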
Table B.3: Shallow survey, part 3 — 3D and tabletop perception. Same survey criteria as Table B.1.

BOR3D
Type: framework. Keywords: object recognition in 3D data.
Link: http://sourceforge.net/projects/bor3d/
Work in progress. Data required: 3D. Additional comment: under work to release.

ecto: object recognition
Type: framework.
Link: http://ecto.willowgarage.com/recognition/release/latest/object_recognition/index.html
A collection of ecto cells that can be used for object recognition tasks. Implementation: ecto package.

OpenRAVE
Type: framework. Keywords: motion planning.
Link: http://openrave.programmingvision.com/en/main/index.html
Old sources can be misleading: OpenRAVE now concentrates on motion planning only.

tabletop object perception
Type: pipeline. Keywords: tabletop perception, plane detection, object detection.
Links: http://www.ros.org/wiki/tabletop_object_detector, http://www.ros.org/wiki/fast_plane_detection
Implementation: ROS stack.

deformable objects detector
Type: algorithm.
Link: http://ros.org/wiki/dpm_detector

fast template detector
Type: algorithm. Keywords: DOT.
Link: http://ros.org/wiki/fast_template_detector
Data required: 2D. Implementation: ROS package. Related papers: same as DOT.

Holzer and Hinterstoisser: distance transform templates
Type: algorithm. Keywords: distance transform, edge-based templates, template matching, ferns, Lucas-Kanade.
Data required: 2D. Learning: offline.
Related paper: http://ar.in.tum.de/pub/holzerst2009distancetemplates/holzerst2009distancetemplates.pdf

Viewpoint Feature Histogram cluster classifier
Type: algorithm. Keywords: PFH, detection, pose estimation, tabletop manipulation, mobile, KNN, kd-trees (FLANN), point cloud.
Links: http://ros.org/wiki/vfh_cluster_classifier, tutorial: http://www.pointclouds.org/documentation/tutorials/vfh_recognition.php
Learning: offline. Data required: 3D. Libraries: PCL, OpenCV. Implementation: ROS-PCL package.
Related paper: http://www.willowgarage.com/papers/fast-3d-recognition-and-pose-using-viewpoint-f

[Cells that lost their row alignment in extraction: reported speeds of 6 FPS and "well below 30 FPS"; textured support "yes" or "low"; sensors RGB-D, monocular and stereo; licenses GPL, LGPL and BSD, with last activity 2009-2012.]
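The VFH cluster classifier above stores one global histogram per training view and recognizes a segmented query cluster by nearest-neighbour search over those histograms (the actual package builds a FLANN kd-tree for this; a plain linear search is shown here). The histograms, view labels and distance choice in this sketch are made up for illustration.

```python
# Nearest-neighbour classification over global view histograms, the
# recognition step of a VFH-style pipeline (linear search stand-in for
# the kd-tree used in practice).

def chi_square(h1, h2):
    """Chi-square distance between two histograms of equal length."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def classify(query, database, k=1):
    """Return the k nearest (distance, view_label) pairs for the query."""
    scored = sorted((chi_square(query, hist), label) for hist, label in database)
    return scored[:k]

# Toy database of two training views (hypothetical labels).
db = [([0.6, 0.3, 0.1], "mug_view_12"),
      ([0.1, 0.2, 0.7], "bowl_view_03")]
print(classify([0.5, 0.4, 0.1], db))  # the mug view is the nearest match
```

The returned distance of the best match doubles as a confidence measure: a query cluster whose nearest neighbour is still far away can be rejected as unknown.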