Contents
1 Introduction and Motivation
2 Introduction to Path Planning and Obstacle Avoidance
   2.1 Holonomicity
   2.2 Configuration Space
   2.3 The Minkowski-Sum
   2.4 Voronoi Methods
   2.5 Bug Methods
   2.6 Potential Methods
3 Estimation - A Quick Revision
   3.1 Introduction
   3.2 What is Estimation?
      3.2.1 Defining the problem
   3.3 Maximum Likelihood Estimation
   3.4 Maximum A-Posteriori Estimation
   3.5 Minimum Mean Squared Error Estimation
   3.6 Recursive Bayesian Estimation
4 Least Squares Estimation
   4.1 Motivation
      4.1.1 A Geometric Solution
      4.1.2 LSQ Via Minimisation
   4.2 Weighted Least Squares
      4.2.1 Non-linear Least Squares
      4.2.2 Long Baseline Navigation - an Example
5 Kalman Filtering - Theory, Motivation and Application
   5.1 The Linear Kalman Filter
      5.1.1 Incorporating Plant Models - Prediction
      5.1.2 Joining Prediction to Updates
      5.1.3 Discussion
   5.2 Using Estimation Theory in Mobile Robotics
      5.2.1 A Linear Navigation Problem - “Mars Lander”
      5.2.2 Simulation Model
   5.3 Incorporating Non-Linear Models - The Extended Kalman Filter
      5.3.1 Non-linear Prediction
      5.3.2 Non-linear Observation Model
      5.3.3 The Extended Kalman Filter Equations
6 Vehicle Models and Odometry
   6.1 Velocity Steer Model
   6.2 Evolution of Uncertainty
   6.3 Using Dead-Reckoned Odometry Measurements
      6.3.1 Composition of Transformations
7 Feature Based Mapping and Localisation
   7.1 Introduction
   7.2 Features and Maps
   7.3 Observations
   7.4 A Probabilistic Framework
      7.4.1 Probabilistic Localisation
      7.4.2 Probabilistic Mapping
   7.5 Feature Based Estimation for Mapping and Localising
      7.5.1 Feature Based Localisation
      7.5.2 Feature Based Mapping
   7.6 Simultaneous Localisation and Mapping - SLAM
      7.6.1 The role of Correlations
8 Multi-modal and other Methods
   8.1 Monte Carlo Methods - Particle Filters
   8.2 Grid Based Mapping
9 In Conclusion
10 Miscellaneous Matters
   10.1 Drawing Covariance Ellipses
   10.2 Drawing High Dimensional Gaussians
11 Example Code
   11.1 Matlab Code For Mars Lander Example
   11.2 Matlab Code For Ackerman Model Example
   11.3 Matlab Code For EKF Localisation Example
   11.4 Matlab Code For EKF Mapping Example
   11.5 Matlab Code For EKF SLAM Example
   11.6 Matlab Code For Particle Filter Example
Topic 1
Introduction and Motivation
This set of lectures is about navigating mobile platforms or robots. This is a huge topic and in eight lectures we can only hope to undertake a brief survey. The course is an extension of the B4 estimation course, covering topics such as linear and non-linear Kalman Filtering. The estimation part of the lectures is applicable to many areas of engineering, not just mobile robotics. However, I hope that couching the material in a robotics scenario will make it compelling and interesting to you.
Let's begin by dispelling some myths. For the most part, when we talk about “mobile robots” we are not talking about gold-coloured, human-shaped walking machines [1]. Instead we are considering some kind of platform or vehicle that moves through its environment carrying some kind of payload. Almost without exception it is the payload that is of interest, not the platform. However, the vast majority of payloads require the host vehicle to navigate — to be able to parameterise its position and surroundings and to plan and execute trajectories through them. Consider some typical autonomous vehicles and payloads:
- Humans in an airplane on autopilot (or a car in the near-ish future; see the CMU NavLab project)
- Scientific equipment on a Mars lander
- Mine detectors on an autonomous underwater vehicle
- Cargo containers in a port transport vehicle
- Pathology media in a hospital delivery system
- Verbal descriptions in a museum tour guide
- A semi-submersible drill ship (oil recovery)
- Cameras on an aerial survey drone
- Obvious military uses (tend to be single mission only....)
All of the above require navigation. This course will hopefully give you some insight into how this can be achieved.
It is worth enumerating in general terms what makes autonomous navigation so hard. The primary reason is that the majority of mobile robots are required to work in un-engineered environments. Compare the workspace of a welding robot in an automotive plant to that of a robot that delivers blood samples between labs in a hospital. The former operates in a highly controlled, known, time-invariant (apart from the thing being built) scene. If computer vision is used as a sensor then the workspace can be lit arbitrarily well to mitigate shadows and colour ambiguity. Many industrial robots work in such well-known, engineered environments that very little external sensing is needed — they can do their job simply by controlling their own internal joint angles. Hospitals are a different ball-park altogether. The corridors are dynamic — filling and emptying (eventually) with people on stretchers. Even if the robot is endowed with a map of the hospital and fitted with an upward-looking camera to navigate off markers on the ceiling, it still has to avoid fast-moving obstacles (humans) while moving purposefully towards its goal destination. The more generic case involves coping with substantial scene changes (accidental or malicious) — for example doors closing in corridors or furniture being moved. The thing, then, that makes mobile robotics so challenging is uncertainty. Uncertainty is pervasive in this area and we must embrace it to make progress.
Topic 2
Introduction to Path Planning and Obstacle Avoidance
A basic requirement of a mobile autonomous vehicle is path planning. With the vehicle in an arbitrary initial position A, we wish to issue a desired goal position B (including orientation) and have the vehicle execute a trajectory to reach B. This sounds pretty simple, and we can think of several ways in which we could combine simple control laws to get us from A to B [2]. Unfortunately the waters quickly become muddied when we start talking about our other concerns:
- while executing its trajectory the vehicle must not smash into objects in the environment (especially true regarding squishy humans).
- we cannot guarantee that the vehicle in question can turn on the spot, and we would like to be able to operate a vehicle of arbitrary shape. These are called “kinematic” constraints.
- we expect only uncertain estimates of the location of the robot and objects in the environment.
The combination of path-planning, obstacle avoidance, kinematic constraints and uncertainty makes for a very hard problem indeed — one which is still an active area of research. However we can do some interesting things if we decouple some of the issues and make some simplifying assumptions.
Figure 2.1: Two holonomic vehicles. The underwater vehicle (ODIN, University of Hawaii) can move in any direction irrespective of pose, and the complex-wheeled robot PPRK (CMU), driven by a “Palm Pilot”, uses wheels that allow slip parallel to their axis of rotation. A sum of translation and slip combines to achieve any motion irrespective of pose.
We shall begin by discussing vehicle properties and categorizing them into two classes — holonomic and non-holonomic.
2.1 Holonomicity
Holonomicity is the term used to describe the locomotive properties of a vehicle with respect to its workspace. We will introduce a mathematical definition of the term shortly but we will begin by stating, in words, a definition:
A vehicle is holonomic if the number of local degrees of freedom of movement equals the number of global degrees of freedom.
We can make this a little clearer with a few examples:
- a car is non-holonomic: the global degrees of freedom are motion in x, y and heading; however, locally a car can only move forward or turn. It cannot slide sideways (even the turning is coupled to forward motion).
Figure 2.2: A commercial non-holonomic robot vehicle (the Roombavac from the iRobot corporation). This vehicle can be purchased for about 100 USD - but is its utility great enough to warrant the price? Discuss...
- The “spherical” underwater robot (ODIN) and the rolling-wheel vehicle in Figure 2.1 are holonomic: they can turn on the spot and translate instantaneously in any direction without having to rotate first.
- A train is holonomic: it can move forwards or backwards along the track which is parameterised by a single global degree of freedom — the distance along the track.
- The “Roombavac” robot vacuum cleaner (iRobot corporation) in Figure 2.2 is also non-holonomic. It can rotate in place but cannot slide in any direction — it needs to use a turn-translate-turn or turn-while-drive (like a car) paradigm to move.
It should be obvious to you that motion control for a holonomic vehicle is much easier than for a non-holonomic one. If this isn't obvious, consider the relative complexity of parking a car in a tight space compared to driving a vehicle that can simply slide into the space sideways (a hovercraft).
Unfortunately for us automation engineers, the vast majority of vehicles in use today (i.e. used by humans) are non-holonomic. In fact intrinsically holonomic vehicles are so rare and complex (or so simple [3]) that we shall not discuss them further.
We can now place some formalism on our notion of holonomicity. We say a vehicle whose state is parameterised by a vector x is non-holonomic if there exists a constraint Q such that
Q(\mathbf{x}, \dot{\mathbf{x}}, \ddot{\mathbf{x}}, \cdots) = 0    (2.1)
where the state derivatives cannot be integrated out. To illustrate such a constraint we will take the case of a front-wheel-steered vehicle as shown in figure 2.3.
Figure 2.3: A steered non-holonomic vehicle
Immediately we can write a constraint, expressing that the vehicle cannot slip sideways, that is a function of x and its first derivative:

\dot{x}\sin\theta - \dot{y}\cos\theta = 0    (2.2)

which is of the form

Q(\mathbf{x}, \dot{\mathbf{x}}) = 0    (2.3)

The state and its derivatives are inseparable and so, by our definition, the vehicle is non-holonomic.
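As a quick numerical illustration of this constraint (a minimal Matlab sketch; the heading and speed values are made up), we can evaluate Q(x, ẋ) for a velocity aligned with the heading and for a pure sideways slip:

% Minimal illustration of the non-slip constraint Q(x, xdot) = 0 (Eq. 2.2).
% The state is x = [x; y; theta]; any velocity of the form
% [v*cos(theta); v*sin(theta)] satisfies the constraint, whereas a
% sideways slip does not.
theta = pi/6;            % current heading (rad)
v     = 1.0;             % forward speed (m/s)

Q = @(theta, xdot, ydot) xdot*sin(theta) - ydot*cos(theta);

% velocity consistent with rolling without slipping
xdot_ok = v*cos(theta);  ydot_ok = v*sin(theta);
disp(Q(theta, xdot_ok, ydot_ok));       % ~0: constraint satisfied

% pure sideways slip (perpendicular to the heading)
xdot_slip = -v*sin(theta);  ydot_slip = v*cos(theta);
disp(Q(theta, xdot_slip, ydot_slip));   % non-zero: forbidden for this vehicle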
2.2 Configuration Space
We are describing the robot by its state x = [x_1, x_2, \cdots, x_n]^T (for a vehicle moving in a 2D plane, commonly x_1 = x, x_2 = y, x_3 = \theta), which is an n-state parameterisation. We call the space within which x resides the configuration space, or C-space, of the vehicle:

\mathcal{C} = \{\, \mathbf{x} = [x_1, x_2, \cdots, x_n]^T \,\} \quad \forall x_1, x_2, \cdots, x_n    (2.4)
The configuration space (or C-space) is the set of all allowable configurations of the robot. For a simple vehicle moving on a plane the configuration space has the same dimension as the workspace, but for more complicated robots the dimension of the configuration space is much higher. Consider the case of the bomb disposal vehicle in figure 2.4. The configuration space for such a vehicle would be 11-dimensional — 3 for the base and another 8 for the pose of the arm and gripper. We can view obstacles as defining regions of C-space that are forbidden.
Figure 2.4: A vehicle with a high-dimensional configuration space — a commercial bomb-disposal platform (picture courtesy of Roboprobe Ltd). The configuration space for a human is immensely high dimensional — around 230.
We can label this forbidden space as C⊗ and the remaining accessible/permissible space as C¯, such that C = C⊗ ∪ C¯. It is often possible to define and describe the boundaries of C⊗ (and hence the boundaries of C¯, in which the vehicle must move) as a constraint equation. For example, if the workspace of a point robot in 2D with x = [x, y]^T is bounded by a wall ax + by + c = 0, then

\bar{\mathcal{C}} = \{\, \mathbf{x} : ax + by + c > 0 \,\}    (2.5)

that is, the union of all x for which ax + by + c > 0.
If each of the constraints imposed on C-space by an obstacle k is represented by an equation of the form C_k(x) = 0 (with C_k(x) > 0 on the free side), then the open free space C¯ admitted by n obstacles can be written as the following intersection:

\bar{\mathcal{C}} = \bigcap_{k=1}^{n} \{\, \mathbf{x} : C_k(\mathbf{x}) > 0 \,\}    (2.6)
Equation 2.6 simply states what configurations are open to the vehicle — nothing more. Any path planning algorithm must guide the vehicle along a trajectory within this space while satisfying any non-holonomic vehicle constraints and finally deliver the vehicle to the goal pose. This clearly is a hard problem. Two poses that may be adjacent to each other in state space may require an arbitrarily long path to be executed to transition between them. Take, for example, the seemingly simple task of turning a vehicle through 180° when it is near a wall. One solution trajectory to the problem is shown in figure 2.5. The non-holonomic vehicle constraints conspire with the holonomic constraint imposed by the wall to require a complicated solution trajectory.
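To make equation 2.6 concrete, here is a minimal Matlab sketch of a free-space membership test. The two constraint functions (a wall and a disc obstacle) and their coefficients are invented purely for illustration; each returns a positive value on its free side:

% Sketch of a C-space membership test in the spirit of Eq. 2.6.
% Each obstacle k supplies a constraint function Ck(x) that is positive
% in free space; a configuration is admissible only if every constraint
% is satisfied.
wall = @(x) 2*x(1) + 1*x(2) - 4;        % free side of the wall 2x + y - 4 = 0
disc = @(x) norm(x - [6; 6]) - 1.5;     % outside a disc obstacle of radius 1.5
constraints = {wall, disc};

inFreeSpace = @(x) all(cellfun(@(C) C(x) > 0, constraints));

disp(inFreeSpace([5; 2]))   % true:  satisfies both constraints
disp(inFreeSpace([6; 6]))   % false: inside the disc obstacle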
Figure 2.5: A simple task, turning through 180°, quickly becomes complicated by C-space constraints.
2.3 The Minkowski-Sum
Real robots have arbitrary shapes and these shapes make for complicated interactions with obstacles, which we would like to simplify. One way to do this is to transform the problem into one in which the robot can be considered a point object, and a technique called the “Minkowski-Sum” does just this. The basic idea is to artificially inflate the extent of obstacles to accommodate the worst-case pose of the robot in close proximity. This is easiest to understand with the diagram shown in Figure 2.6. The idea is to replace each object with a virtual object that is the union of all poses of the vehicle that touch the obstacle. Figure 2.6 takes a conservative approach and “replaces” a triangular vehicle with a surrounding circle. The minimal Minkowski-Sum would be the union of the obstacle and all vehicle poses with the vehicle nose just touching its boundary. With the obstacles suitably inflated the vehicle can be thought of as a point object, and we have a guarantee that as long as it keeps to the new, shrunken free space it cannot hit an object [4]. Note it is usual to fit a polygonal hull around the results of the Minkowski-Sum calculation to make ensuing path planning calculations easier.
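A minimal Matlab sketch of this conservative inflate-by-a-circle idea follows. The obstacle vertices and the bounding radius are made-up values; the rounded corners of the true Minkowski-Sum are approximated by sampling a circle around each obstacle vertex and taking a convex hull, in the spirit of fitting a polygonal hull around the result:

% Sketch of conservative obstacle inflation (a disc-based Minkowski-Sum).
% The triangular vehicle is replaced by its bounding circle of radius r,
% and the obstacle polygon is "grown" by r: sample a circle of radius r
% around every obstacle vertex and take the convex hull of all samples.
obstacle = [0 0; 4 0; 4 2; 0 2];      % convex obstacle, one vertex per row
r        = 0.8;                       % radius of circle bounding the vehicle

phi = linspace(0, 2*pi, 32)';         % samples around each vertex
pts = [];
for k = 1:size(obstacle, 1)
    pts = [pts; obstacle(k,1) + r*cos(phi), obstacle(k,2) + r*sin(phi)]; %#ok<AGROW>
end
hull = convhull(pts(:,1), pts(:,2));  % polygonal hull of the inflated obstacle

% With the inflated polygon in hand the vehicle can be treated as a point.
plot(pts(hull,1), pts(hull,2), 'r-', obstacle([1:end 1],1), obstacle([1:end 1],2), 'b-');
axis equal; legend('inflated obstacle', 'original obstacle');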
So now we have a method by which we can calculate C¯. The next big question is: how exactly do we plan a path through it? How do we get from an initial pose to a goal pose? We will consider three methods: Voronoi, “Bug” and Potential methods.
Figure 2.6: The Minkowski-Sum transforms an arbitrarily shaped vehicle to a point while inflating the obstacle. The result is guaranteed free space outside the inflated object boundary.
2.4 Voronoi Methods
Figure 2.7: A Voronoi diagram for obstacle avoidance in the presence of point objects.
Voronoi diagrams are elegant geometric constructions [5] that find applications throughout computer science — one is shown in figure 2.7. Points on a 2D Voronoi diagram are equidistant from the nearest two objects in the real world. So the Voronoi diagram of two points a, b is the line bisecting them — all points on that line are equidistant from a and b. The efficient computation of Voronoi diagrams can be quite a complex matter, but what is important here is that algorithms exist that, when given a set of polygonal objects, can generate the appropriate equidistant loci. So how does this help us with path planning? Well, if we follow the paths defined by a Voronoi diagram we are guaranteed to stay maximally far away from nearby objects. We can search the set of points on the diagram to find points that are closest to and visible from the start and goal positions. We initially drive to the “highway entry” point, follow the “Voronoi highways” until we reach the “exit point”, where we leave the highway and drive directly towards the goal point.
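The built-in Matlab voronoi and voronoin functions make it easy to experiment with this idea for point obstacles. The sketch below (obstacle, start and goal coordinates are made up) computes the diagram and picks the Voronoi vertices nearest the start and goal as the “highway entry” and “exit” points; actually following the highway between them would additionally require a graph search along the Voronoi edges, which is not shown:

% Sketch of the "Voronoi highway" idea for point obstacles.
obs   = [1 1; 2 6; 5 3; 7 7; 8 2; 4 8];   % point obstacles (x, y)
start = [0.5 0.5];
goal  = [9 8];

[vx, vy] = voronoi(obs(:,1), obs(:,2));   % finite Voronoi edges for plotting
[V, ~]   = voronoin(obs);                 % Voronoi vertices (first row is at infinity)
V        = V(2:end, :);

% "Highway entry/exit": the Voronoi vertices nearest the start and the goal.
[~, iIn]  = min(sum((V - start).^2, 2));
[~, iOut] = min(sum((V - goal ).^2, 2));

plot(vx, vy, 'b-', obs(:,1), obs(:,2), 'k.', 'MarkerSize', 15); hold on;
plot([start(1) V(iIn,1)],  [start(2) V(iIn,2)],  'g--');   % drive to the highway
plot([V(iOut,1) goal(1)],  [V(iOut,2) goal(2)],  'r--');   % leave it near the goal
axis equal;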
2.5 Bug Methods
The generation of a global Voronoi diagram requires upfront knowledge of the environment. In many cases this is unrealistic. Also, Voronoi planners by definition keep the vehicle as far away from objects as possible, and this can have two side effects. Firstly, the robot may be using the objects to localise, and moving away from them makes them harder to sense. Secondly, the paths generated by a Voronoi planner can be extremely long and far from the shortest path (try playing with the Matlab Voronoi function). An alternative family of approaches goes under the name of “bug algorithms”. The basic algorithm is simple:
1. Starting from A, and given the coordinates of a goal pose B, draw a line AB (it may pass through obstacles that are known or as yet unknown).
2. Move along this line until either the goal is reached or an obstacle is hit.
3. On hitting an obstacle, circumnavigate its perimeter until AB is met.
4. Go to 2.
In contrast to the Voronoi approach, this method keeps the vehicle as close to the obstacles as possible (but we won't hit them because the obstacles have been modified by the Minkowski sum!). However, the path length could be stupidly long. A smarter modification would be to replace step 3 in the original algorithm with “on hitting an obstacle, circumnavigate its perimeter until AB is met or the line AB becomes visible, in which case head for a point on AB closer to B”. Figure 2.8 shows the kind of trajectory this modified “VisBug” algorithm would execute. Clearly, hugging the object boundary is not always a good plan, but interestingly it is guaranteed to get the robot to the goal location if it is indeed reachable.
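The key geometric ingredient of this modification is a line-of-sight test. Below is a minimal Matlab sketch (with made-up obstacle edges and positions) that checks whether a candidate leave point p on AB is visible from the current position q by testing the segment q-p against each obstacle edge; the orientation-based test ignores degenerate collinear/touching cases for brevity:

% Sketch of the visibility test used by the modified "VisBug" step.
edges = [2 1 2 4;          % each row is one obstacle edge [x1 y1 x2 y2]
         2 4 5 4];
q = [1 2];                 % current robot position
p = [4 2];                 % candidate "leave point" on the line AB

orient   = @(a, b, c) sign((b(1)-a(1))*(c(2)-a(2)) - (b(2)-a(2))*(c(1)-a(1)));
segCross = @(a, b, c, d) orient(a,b,c) ~= orient(a,b,d) && orient(c,d,a) ~= orient(c,d,b);

visible = true;
for k = 1:size(edges, 1)
    if segCross(q, p, edges(k,1:2), edges(k,3:4))
        visible = false;   % line of sight to p is blocked by this edge
        break;
    end
end
disp(visible)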
2.6 Potential Methods
A third very popular method of path planning is a so-called “potential method”. Once again the idea is very simple. We view the goal pose as a point of low potential energy and the obstacles as areas of high potential energy.
Figure 2.8: The VisBug algorithm executing a goal-seek trajectory around Southern Italy.
If we think of the vehicle as a ball-bearing free to roll around on a “potential terrain” then it will naturally roll around obstacles and down towards the goal point. The local curvature and magnitude of the potential field can be used to deduce a locally preferred motion. We now firm up this intuitive description with some mathematics.
At any point x we can write the total potential U_Σ as the sum of the potentials U_{o,i} induced by each of k obstacles and the potential U_g induced by the goal:

U_\Sigma(\mathbf{x}) = \sum_{i=1}^{k} U_{o,i}(\mathbf{x}) + U_g(\mathbf{x})    (2.7)

Now we know that the force F(x) exerted on a particle in a potential field U_Σ(x) can be written as:

\mathbf{F}(\mathbf{x}) = -\nabla U_\Sigma(\mathbf{x})    (2.8)
                       = -\sum_{i=1}^{k} \nabla U_{o,i}(\mathbf{x}) - \nabla U_g(\mathbf{x})    (2.9)
where ∇ is the grad operator (don't confuse it with the ∇ notation used for Jacobians in the material covering non-linear estimation in coming pages!). This is a powerful tool — we can simply move the vehicle in the manner in which a particle would move in a location-dependent force field. Equation 2.8 tells us the direction and magnitude of the force for any vehicle position x.
The next question is: what exactly do the potential functions look like? Well, there is no single answer to that — you can “roll your own”. A good choice would be to make the potential of an obstacle an inverse square function of the distance between vehicle and obstacle. We may also choose an everywhere-convex potential for the goal so that wherever we start, in the absence of obstacles, we will “fall” towards the goal point. Figure 2.9 shows two useful potential candidates (forgive the pun).
Figure 2.9: Two typical potential functions - inverse quadratic for obstacle and quadratic for the goal.
Defining ρ(x) as the shortest distance between the obstacle and the vehicle (at x) and x_g as the goal point, the algebraic functions for these potentials are:

U_{o,i}(\mathbf{x}) = \begin{cases} \frac{1}{2}\eta \left( \frac{1}{\rho(\mathbf{x})} - \frac{1}{\rho_0} \right)^2 & \rho(\mathbf{x}) \le \rho_0 \\ 0 & \text{otherwise} \end{cases}    (2.10)

U_g(\mathbf{x}) = \frac{1}{2} \kappa \, \lVert \mathbf{x} - \mathbf{x}_g \rVert^2    (2.11)
The term ρ0 places a limit on the region of space affected by the potential field — the virtual vehicle is only affected when it comes within ρ0 of an obstacle; η and κ are just scale factors. Simple differentiation allows us to write the force vector exerted on a virtual point vehicle by an obstacle as:
\mathbf{F}_{o,i}(\mathbf{x}) = \begin{cases} \eta \left( \frac{1}{\rho(\mathbf{x})} - \frac{1}{\rho_0} \right) \frac{1}{\rho(\mathbf{x})^2} \, \frac{\partial \rho(\mathbf{x})}{\partial \mathbf{x}} & \rho(\mathbf{x}) \le \rho_0 \\ 0 & \text{otherwise} \end{cases}    (2.12)
where ∂ρ(x)/∂x is the vector of derivatives of the distance function ρ(x) with respect to x, y and z. We proceed similarly for the goal potential. So, to figure out the force acting on a virtual vehicle, and hence the direction the real vehicle should move in, we take the sum of all the obstacle forces and the goal force.
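Putting equations 2.8 to 2.12 together, the following Matlab sketch computes the total force on a virtual point vehicle for a handful of point obstacles (so that ρ(x) is just the Euclidean distance to each obstacle). The obstacle positions, the goal and the η, κ, ρ0 values are made-up tuning parameters, and the last line takes a small gradient-descent step along the force direction:

% Sketch of the total potential-field force of Eqs. 2.8-2.12 for point
% obstacles, with a quadratic goal potential and inverse obstacle potentials.
obs   = [2 2; 4 5; 6 1];      % point obstacles, one per row
xg    = [8 6];                % goal position
x     = [3 3];                % current vehicle position
eta   = 1.0;  kappa = 0.5;  rho0 = 2.0;

F = -kappa*(x - xg);                          % attractive force from Eq. 2.11
for k = 1:size(obs, 1)
    d   = x - obs(k,:);
    rho = norm(d);
    if rho <= rho0                            % obstacle only acts within rho0
        drho = d / rho;                       % d(rho)/dx for a point obstacle
        F    = F + eta*(1/rho - 1/rho0)*(1/rho^2)*drho;   % repulsive term, Eq. 2.12
    end
end

% Nudge the vehicle along the force direction (a gradient-descent step).
xNext = x + 0.1 * F / norm(F);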