Today's Outline
Cognition
Path Planning
Perception
Motion Control
Cognition / Reasoning:
Today's industrial robots can operate without any cognition (reasoning) because their environment is static and highly structured. In mobile robotics, cognition and reasoning are primarily of a geometric nature, such as picking a safe path or determining where to go next. This has already been largely explored in the literature for cases in which complete information about the current situation and the environment exists (e.g. the traveling salesman problem).
However, in mobile robotics the knowledge about the environment and the situation is usually only partial and uncertain. This makes the task much more difficult and requires multiple tasks running in parallel: some for (global) planning, some to guarantee the survival of the robot.
In this chapter we are concerned with path planning and navigation, excluding low-level motion control and localization.
We can generally distinguish between (global) path planning and (local) obstacle avoidance.
Path Planning
The problem: find a path in physical space from the initial position to the goal position, avoiding all collisions with obstacles.
Assumption: there exists a good enough map of the environment for navigation.
Topological, metric, or a mixture of both.
First step:
Representation of the environment by a road map (graph), cells, or a potential field. The resulting discrete locations or cells then allow the use of standard planning algorithms.
Mobile robots operating on flat ground have 3 DoF: (x, y, θ). For simplification, mobile roboticists often assume that the robot is a point. In this way the configuration space is reduced to 2D (x, y). Because each robot is reduced to a point, every obstacle must be inflated by the robot's radius to compensate.
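On an occupancy grid, the inflation step above amounts to growing every obstacle cell by the robot radius. A minimal sketch, assuming a 2D grid map and a hypothetical radius measured in cells:

```python
# Sketch: inflate grid obstacles by the robot radius so the robot can
# then be planned for as a single point (hypothetical occupancy grid).
ROBOT_RADIUS = 1  # radius in cells; illustrative value

def inflate(grid, radius):
    """Return a copy of the occupancy grid with every obstacle cell
    grown by `radius` cells (Chebyshev distance)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],   # single obstacle cell
        [0, 0, 0, 0]]
inflated = inflate(grid, ROBOT_RADIUS)
```

After inflation, any point path that stays on free cells of `inflated` is collision-free for the full-sized robot.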
The main representations:
1. Road map (metric-based), e.g. a visibility graph with nodes where features disappear or become visible, or a Voronoi diagram
2. Cell decomposition
3. Potential field: imposing a mathematical function over the space
Visibility Graph
Particularly suitable for polygon-like obstacles; yields the shortest path length; obstacles are grown to avoid collisions.
R. Siegwart, ETH Zurich - ASL
Pros
It is easy to find the shortest path from the start to the goal position. Implementation is simple when obstacles are polygons.
Cons
The number of edges and nodes increases with the number of polygons, so it can be inefficient in densely populated environments. The solution path found by the visibility graph tends to take the robot as close as possible to obstacles; the common solution is to grow obstacles by more than the robot's radius.
Voronoi Diagram
In contrast to the visibility graph, this approach tends to maximize the distance between the robot and the obstacles. It is easy to execute: maximize the sensor readings. It also works for map building: move on the Voronoi edges. (Sysquake demo)
Pros
Using range sensors like laser or sonar, a robot can navigate along the Voronoi diagram using simple control rules.
Cons
Because the Voronoi diagram tends to keep the robot as far as possible from obstacles, any short-range sensor will be in danger of failing.
Peculiarities
When obstacles are polygons, the Voronoi diagram consists of straight and parabolic segments.
Cell Decomposition
Divide the space into simple, connected regions called cells. Determine which open cells are adjacent and construct a connectivity graph. Find the cells in which the initial and goal configurations (states) lie, and search for a path in the connectivity graph to join them. From the sequence of cells found with an appropriate search algorithm, compute a path within each cell, e.g. passing through the midpoints of the cell boundaries or by a sequence of wall-following movements.
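The cell-decomposition steps above can be sketched end to end. The cells, adjacencies, and boundary midpoints here are all hypothetical placeholders for what a real decomposition would produce:

```python
# Sketch: search the connectivity graph of a cell decomposition, then
# join consecutive cells through the midpoints of their shared boundaries.
from collections import deque

adjacency = {   # hypothetical cell -> neighbouring open cells
    "c1": ["c2"], "c2": ["c1", "c3"], "c3": ["c2", "c4"], "c4": ["c3"],
}
midpoints = {   # hypothetical midpoint of the boundary between two cells
    ("c1", "c2"): (1.0, 2.0), ("c2", "c3"): (3.0, 2.5), ("c3", "c4"): (5.0, 1.5),
}

def cell_path(start, goal):
    """Breadth-first search over the connectivity graph."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        for nxt in adjacency[cell]:
            if nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    path, cell = [], goal
    while cell is not None:          # walk back through the parents
        path.append(cell)
        cell = parent[cell]
    return path[::-1]

cells = cell_path("c1", "c4")
# Waypoints for the robot: midpoints of consecutive cell boundaries
waypoints = [midpoints[(a, b)] for a, b in zip(cells, cells[1:])]
```

The resulting `waypoints` list is the geometric path the robot would then track with its low-level controller.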
Breadth-First Search
Depth-First Search
Breadth-First Search
[Worked example: breadth-first search on a graph of cells A-L, with A = initial and L = goal; the slide animates the queue expanding nodes level by level until the goal L is reached.]
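A minimal breadth-first search over a hypothetical cell-adjacency graph (the slide's exact graph is not reproduced here, only its node names):

```python
# Sketch: BFS over an adjacency-list graph; the first time the goal is
# dequeued, the reconstructed path has the fewest possible edges.
from collections import deque

graph = {   # hypothetical adjacency, nodes named as on the slide
    "A": ["B", "C"], "B": ["A", "E"], "C": ["A", "D", "F"],
    "D": ["C", "G"], "E": ["B"], "F": ["C", "H"],
    "G": ["D", "I"], "H": ["F", "K"], "I": ["G"], "K": ["H", "L"], "L": ["K"],
}

def bfs(start, goal):
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:      # walk back through the parents
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

bfs("A", "L")  # ['A', 'C', 'F', 'H', 'K', 'L']
```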
Depth-First Search
[Worked example: depth-first search on the same graph of cells A-L, with A = initial and L = goal; the slide animates the stack following one branch as deep as possible before backtracking.]
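For contrast with BFS, a depth-first variant on the same kind of hypothetical graph, using an explicit stack so that one branch is followed as deep as possible before backtracking:

```python
# Sketch: DFS with an explicit stack. Unlike BFS, the path returned is
# not guaranteed to be the shortest one.
graph = {   # hypothetical adjacency, nodes named as on the slide
    "A": ["B", "C"], "B": ["A", "E"], "C": ["A", "D", "F"],
    "D": ["C", "G"], "E": ["B"], "F": ["C", "H"],
    "G": ["D", "I"], "H": ["F", "K"], "I": ["G"], "K": ["H", "L"], "L": ["K"],
}

def dfs(start, goal):
    stack, visited = [(start, [start])], set()
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # reversed() so neighbours are popped in their listed order
        for nxt in reversed(graph[node]):
            if nxt not in visited:
                stack.append((nxt, path + [nxt]))
    return None
```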
Greedy Search
A*
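The slide content for A* is not reproduced here, but the idea can be sketched on a small hypothetical grid: expand nodes in order of f = g + h, where g is the cost so far and h an admissible heuristic (Manhattan distance):

```python
# Sketch: A* on a 4-connected grid with a Manhattan-distance heuristic.
import heapq

def astar(grid, start, goal):
    def h(p):  # admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    best_g = {}
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if best_g.get(pos, float("inf")) <= g:
            continue                              # already expanded cheaper
        best_g[pos] = g
        r, c = pos
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and not grid[rr][cc]:
                nxt = (rr, cc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # wall forcing a detour
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours around the wall
```

With an admissible heuristic, the first time the goal is popped the path is optimal; with h = 0 this degenerates to uniform-cost (Dijkstra) search.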
Potential Field Path Planning
Set the robot speed (vx, vy) proportional to the force F(q) generated by the field. The force field drives the robot to the goal, provided the robot is assumed to be a point mass.
Attractive potential (a parabolic well centred at the goal):
ρ_goal(q) = ‖q − q_goal‖
U_att(q) = ½ k_att ρ_goal²(q)
F_att(q) = −∇U_att(q) = −k_att (q − q_goal)
Repulsive potential:
U_rep(q) = ½ k_rep (1/ρ(q) − 1/ρ₀)²  if ρ(q) ≤ ρ₀, and 0 otherwise
ρ(q): minimum distance to the obstacle. The field is positive or zero and tends to infinity as q gets closer to the object.
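Assuming the standard attractive/repulsive formulation, gradient descent on the combined field can be sketched as follows; the gains `K_ATT`, `K_REP`, `RHO0` and the single circular obstacle are illustrative assumptions, not values from the lecture:

```python
# Sketch: potential-field navigation by gradient descent.
import math

K_ATT, K_REP, RHO0 = 1.0, 0.5, 1.0   # hypothetical gains / influence distance

def force(q, goal, obstacle):
    # Attractive force: F_att = -k_att (q - q_goal)
    fx = -K_ATT * (q[0] - goal[0])
    fy = -K_ATT * (q[1] - goal[1])
    # Repulsive force, active only within distance rho_0 of the obstacle:
    # F_rep = k_rep (1/rho - 1/rho_0) (1/rho^2) * (q - q_obst)/rho
    dx, dy = q[0] - obstacle[0], q[1] - obstacle[1]
    rho = math.hypot(dx, dy)
    if 0.0 < rho <= RHO0:
        mag = K_REP * (1.0 / rho - 1.0 / RHO0) / rho ** 2
        fx += mag * dx / rho
        fy += mag * dy / rho
    return fx, fy

def step(q, goal, obstacle, dt=0.1):
    # Robot speed is set proportional to the force (point-mass assumption)
    fx, fy = force(q, goal, obstacle)
    return (q[0] + dt * fx, q[1] + dt * fy)

q = (0.0, 0.0)
for _ in range(300):
    q = step(q, goal=(5.0, 0.0), obstacle=(2.5, 0.4))
```

In this configuration the robot is pushed around the obstacle and converges near the goal; other obstacle placements can trap it in a local minimum, which is exactly the failure mode discussed below.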
Notes:
Potential Field Path Planning: Extensions
Additionally, a rotation potential field and a task potential field are introduced.
Rotation potential field: the force is also a function of the robot's orientation relative to the obstacles. This is done using a gain factor that reduces the repulsive force when obstacles are parallel to the robot's direction of travel.
Local minima: if the robot gets stuck in a local minimum of the potential field, set a new (intermediate) goal point to escape it.
Note: complicated; only a simulation is shown.
This presentation is based on the following papers:
Andrew J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003
Nicholas Molton, Andrew J. Davison and Ian Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004
Outline
Visual SLAM versus Structure from Motion
Extended Kalman Filter for first-order uncertainty propagation
Camera motion model
Matching of existing features
Initialization of new features in the map
Improving feature matching
Structure from Motion: take some images of the object to reconstruct. Features (points, lines, ...) are extracted from all frames and matched among them. All images are processed simultaneously. Both the camera motion and the 3D structure can be recovered by optimally fusing the overall information, up to a scale factor.
Repeatable Localization
The ability to estimate the location of the camera after 10 minutes of motion with the same accuracy as was possible after 10 seconds. Features must be stable, long-term landmarks, not transient ones (as in SFM).
State of the System
Where:
In the SLAM approach the covariance matrix is not diagonal; that is, the uncertainty of any feature affects the position estimates of all the other features and of the camera.
Extended Kalman Filter
The state vector X and the Covariance matrix P are updated during camera motion using an EKF:
Prediction: a motion model is used to predict where the camera will be in the next time step (P increases)
Observation: a new feature is measured (P decreases)
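The predict/observe cycle can be illustrated with a deliberately simplified scalar Kalman filter (not Davison's full 3D model): prediction adds process noise, so P grows; observation blends in a measurement, so P shrinks.

```python
# Sketch: one scalar Kalman predict/update cycle showing how the
# covariance P grows on prediction and shrinks on observation.
def kf_predict(x, P, u, Q):
    """Motion model x' = x + u with process-noise variance Q."""
    return x + u, P + Q          # P increases

def kf_update(x, P, z, R):
    """Fuse measurement z with measurement-noise variance R."""
    K = P / (P + R)              # Kalman gain
    return x + K * (z - x), (1.0 - K) * P   # P decreases

x, P = 0.0, 1.0
x, P = kf_predict(x, P, u=0.5, Q=0.2)   # prediction: P -> 1.2
P_pred = P
x, P = kf_update(x, P, z=0.6, R=0.1)    # observation: P drops sharply
```

In the real system x and P are the full state vector and covariance matrix, and the Jacobians of the motion and measurement models replace the scalar arithmetic (hence "Extended").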
The unknown intentions, and therefore unknown accelerations, are taken into account by modeling the acceleration as a zero-mean Gaussian process:
By setting the covariance matrix of n to small or large values, we define the smoothness or rapidity of the motion we expect. In practice these values were used: 4 m/s, 6 rad/s.
By predicting the next camera pose, we can predict where each feature is likely to appear:
At each frame, the features observed at the previous step are searched for in the elliptic region where they are expected to lie according to the motion model (normalized sum of squared differences is used for matching). Large ellipses mean that the feature is difficult to predict, so the feature inside will provide more information for the camera motion estimation. Once the features are matched, the entire state of the system is updated according to the EKF.
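A 1-D sketch of mean-normalized SSD matching inside a search window (a simplification of the 2-D elliptic region; the strip, template, and offsets are made-up values):

```python
# Sketch: find the offset in a search window that minimizes the
# mean-normalized sum of squared differences against a template.
def nssd(patch, template):
    """SSD after removing each signal's mean, so a global brightness
    offset between patch and template does not hurt the score."""
    mp = sum(patch) / len(patch)
    mt = sum(template) / len(template)
    return sum(((a - mp) - (b - mt)) ** 2 for a, b in zip(patch, template))

def best_match(strip, template, lo, hi):
    """Search offsets [lo, hi) along a 1-D strip; smallest score wins."""
    w = len(template)
    return min(range(lo, hi), key=lambda o: nssd(strip[o:o + w], template))

strip = [10, 10, 50, 60, 50, 10, 10, 10]   # image row with a bright blob
template = [52, 62, 52]                    # brightness-shifted copy of the blob
offset = best_match(strip, template, 0, 5)
```

The normalization is why the shifted template still scores a perfect match at the blob's true position.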
The Shi-Tomasi feature is first initialized as a 3D line. Along this line, 100 possible feature positions are set uniformly in the range 0.5-5 m. At each time step, each hypothesis is tested by projecting it into the image. Each match produces a likelihood for each hypothesis, and their probabilities are recomputed.
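The hypothesis bookkeeping can be sketched as a discrete Bayesian reweighting over the 100 depths; the per-hypothesis likelihoods here are invented stand-ins for the real image-matching scores:

```python
# Sketch: 100 depth hypotheses along the initialization ray, reweighted
# by match likelihoods until the distribution peaks at one depth.
N = 100
depths = [0.5 + i * (5.0 - 0.5) / (N - 1) for i in range(N)]  # 0.5-5 m
probs = [1.0 / N] * N                                         # uniform prior

def reweight(probs, likelihoods):
    """One Bayesian update: posterior ∝ prior × likelihood."""
    post = [p * l for p, l in zip(probs, likelihoods)]
    s = sum(post)
    return [p / s for p in post]

# Hypothetical likelihoods peaked at hypothesis 40 (the "true" depth):
likelihoods = [1.0 / (1.0 + (i - 40) ** 2) for i in range(N)]
probs = reweight(probs, likelihoods)
best = depths[max(range(N), key=lambda i: probs[i])]
```

Once the distribution is sharply peaked, the feature can be converted from a ray-plus-depth-distribution into an ordinary 3D point in the map.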
Map Management
The number of features that must be constantly visible in the image varies (in practice between 6 and 10) according to the localization accuracy and the computing power available. If a feature needs to be added, the detected feature is added only if it is not expected to disappear from the next image. A feature is deleted if more than 50% of its matching attempts fail when it should be visible.
Improved Feature Matching
Up to now, tracked features were treated as 2D templates in image space. Long-term tracking can be improved by approximating the feature as a locally planar region on 3D world surfaces.
The transformation between two views of a planar surface is described by a homography H. The motion model is used to predict the appearance of a feature in the new frame: the feature templates are pre-warped before matching. Then gradient-based image alignment is used to estimate the new surface normal.
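For calibrated views the plane-induced homography is H = R + t·nᵀ/d, with (R, t) the relative camera motion and (n, d) the plane normal and distance. A minimal sketch with identity intrinsics and illustrative numbers:

```python
# Sketch: build the plane-induced homography H = R + t n^T / d and
# warp a template point with it (assumes calibrated, identity-intrinsic
# cameras; all numeric values are illustrative).
def plane_homography(R, t, n, d):
    return [[R[i][j] + t[i] * n[j] / d for j in range(3)] for i in range(3)]

def warp(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    x = H[0][0] * p[0] + H[0][1] * p[1] + H[0][2]
    y = H[1][0] * p[0] + H[1][1] * p[1] + H[1][2]
    w = H[2][0] * p[0] + H[2][1] * p[1] + H[2][2]
    return (x / w, y / w)

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # no rotation between the views
t = [0.1, 0.0, 0.0]                     # small sideways translation
n, d = [0.0, 0.0, 1.0], 2.0             # fronto-parallel plane, 2 m away
H = plane_homography(R, t, n, d)
q = warp(H, (0.0, 0.0))                 # point shifts by t_x / d
```

Pre-warping the template with this H is what lets a plain SSD-style match succeed even after significant viewpoint change.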
References
A. J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003.
N. Molton, A. J. Davison, and I. Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004.
A. J. Davison, Y. Gonzalez Cid, and N. Kita, Real-Time 3D SLAM with Wide-Angle Vision, IFAC Symposium on Intelligent Autonomous Vehicles, 2004.
J. Shi and C. Tomasi, Good Features to Track, CVPR 1994.