
Eduardo Nebot
Centre of Excellence for Autonomous Systems
The University of Sydney, NSW 2006, Australia
nebot@acfr.usyd.edu.au
http://acfr.usyd.edu.au/homepages/academic/enebot/

May 5, 2005

Version 1.1

Navigation System Design (KC-4)

Contents

1 Introduction to Navigation Systems
  1.1 An Historical Perspective
  1.2 A Modern Perspective
    1.2.1 The Kalman Filter
    1.2.2 Other Issues
  1.3 Course Outline
2 Vehicle Modelling
  2.1 Kinematics
    2.1.1 Basic Kinematic Model
3 Beacon Based Navigation
  3.1 Linear Kalman Filter Estimator
    3.1.1 Prediction Stage
    3.1.2 Update Stage
  3.2 The Extended Kalman Filter
  3.3 The Information Filter
  3.4 The Extended Information Filter
  3.5 Sensors Used for Beacon Based Navigation Systems
  3.6 Bearing Only Navigation Systems
    3.6.1 State Transition (Prediction) Equations
    3.6.2 Sensor Prediction
    3.6.3 Update Equations
4 The Global Positioning System (GPS)
  4.1 GPS observables
  4.2 Position evaluation
  4.3 Most Common GPS Errors
  4.4 Fundamentals of DGPS (Differential GPS)
  4.5 Phase carrier GPS
  4.6 Coordinate Transformation
5 Inertial Sensors and Inertial Navigation
  5.1 Fundamental Principles of Accelerometers and Gyroscopes
    5.1.1 Accelerometers
    5.1.2 Gyroscopes
    5.1.3 Vibratory gyroscopes
    5.1.4 Fiber Optic Gyros
  5.2 Sensor Errors
    5.2.1 Evaluation of the Error Components
    5.2.2 Faults associated with Inertial Sensors
  5.3 Application of Inertial Sensors
  5.4 Coordinate Frames
  5.5 Inertial Navigation Equations
    5.5.1 The Coriolis Theorem
    5.5.2 Navigation in large areas
    5.5.3 Navigation in local areas, Earth Frame
    5.5.4 Schuler Damping
  5.6 Attitude Algorithms
    5.6.1 Euler Representation
    5.6.2 Direction Cosine Matrix (DCM) Representation
    5.6.3 Quaternion Representation
    5.6.4 Discussion of Algorithms
  5.7 Attitude evaluation error sources
    5.7.1 Errors in attitude computation
    5.7.2 Vibration
    5.7.3 Minimising the Attitude Errors
  5.8 Error Models
  5.9 Calibration and Alignment of an Inertial Measurement Unit
    5.9.1 Alignment Techniques
    5.9.2 Alignment for Low Cost Inertial Units
    5.9.3 Calibration
  5.10 Position, Velocity and Attitude Algorithms
6 GPS/INS Integration
  6.1 Navigation architectures for Aided Inertial Navigation Systems
  6.2 Linear, Direct Feedback Implementation
  6.3 Aiding in Direct Feedback Configurations
    6.3.1 Real Time Implementation Issues
  6.4 Aiding Using a Vehicle Model
    6.4.1 General Three-Dimensional Motion
    6.4.2 Motion of a Vehicle on a Surface
  6.5 Estimation of the Vehicle State Subject to Constraints
    6.5.1 Observability of the States
    6.5.2 Multiple Aiding of an Inertial System
  6.6 Tightly Coupled GPS/INS
    6.6.1 Advantages and Disadvantages of the Tightly and Loosely Coupled Configurations
7 Simultaneous Localisation and Mapping (SLAM)
  7.1 Fundamentals of SLAM
    7.1.1 Process Model
    7.1.2 The Observation Model
  7.5 Optimization of SLAM
    7.5.1 Standard Algorithm Optimization
    7.5.2 Compressed Filter
    7.5.3 Sub-Optimal SLAM
1 Introduction to Navigation Systems

Navigation may be defined as the process of determining vehicle position (also known as Localisation). This is distinct from Guidance (or Control), which is the process of controlling a vehicle to achieve a desired trajectory. An autonomous vehicular system generally must include these two basic competencies in order to perform any useful task. In these notes, the problem of navigation is discussed in detail. Probabilistic methods are used in order to perform accurate, predictable localisation.

1.1 An Historical Perspective

The navigation information that is generally needed is the orientation, speed and position of the vehicle. Modern navigation techniques were introduced several hundred years ago to aid vessels crossing the ocean. The first navigation techniques were used to estimate the position of a ship through dead reckoning, using observations of the ship's speed and heading. With this information, the trajectory of the ship could be predicted, albeit with errors that accumulated over time. The heading of the ship was determined by observing the position of the sun, or other stars. The velocity was found with a log, compensating for the effects of local currents. Absolute information was used to provide a position fix, to compensate for the errors accumulated through dead reckoning. These fixes were obtained when well known natural or artificial landmarks were recognised.

The Mediterranean was the proving ground for these navigational techniques. It was an appropriate environment as the water currents are light in most places, and many natural landmarks can be seen along most possible journeys. A new navigational problem arose amongst the explorers of the fifteenth century. Even with the invention of the compass, dead reckoning was only useful for short periods of time due to difficulties in obtaining accurate measurements of velocity. The difficulties in compensating for local currents made dead reckoning only useful for periods of a few hours. In the open sea, natural landmarks are scarcely available, making an accurate position update impossible. It was clear at that time that new methods, or new sensors, were needed to obtain absolute position information. Techniques to determine Latitude were developed in the early 1550s by the Portuguese. Latitude information could be found with relative accuracy (30 miles) by observation of the Pole star. The determination of Longitude is a much more complicated problem, and in fact took another 300 years to be solved.

This problem was of fundamental economic importance since many ships were lost at sea due to total uncertainty in Longitude. In 1714, England's parliament offered a significant prize to anyone who could formulate a reliable method to obtain absolute position information in the open sea. Two approaches were mainly investigated at this time. One method was based on accurate prediction and observation of the moon. Difficulties in predicting the lunar orbit with enough precision, and the accuracy required of the instrumentation, made this approach almost impossible to implement in standard ships at that time. The other important approach was based on knowing the time with enough accuracy to evaluate the Longitude. By 1770 both techniques were mature enough and were used in many journeys across the ocean. By the end of the century, navigation in the open sea became predictable and safe, making possible the growth of businesses involving commercial transactions from all over the world. In fact, Captain Cook tested one of Harrison's chronometers (the Mark IV) on his last journey.

1.2 A Modern Perspective

The previous section introduced the essential elements of navigation: Prediction and Correction. Prediction can be considered to be the use of a model of some description to provide dead reckoning information. Dead reckoning has the property of accumulating position error with time. Correction is the process whereby the observation of landmarks (either natural or artificial) can reduce the location uncertainty inherent in dead reckoning. It may be argued that with the advent of modern sensors such as the Global Positioning System (GPS), dead reckoning is no longer a necessary part of navigation. This assertion may be readily refuted by the following statement: there is no such thing as a perfect sensor. All sensors have some measure of error or uncertainty present within every measurement. Similarly, if it were possible to perfectly model vehicle motion, external sensors would not be needed. Therefore it is essential to understand not only the sensors used for navigation, but also the model used for prediction, as they both contribute to the fidelity of the position solution. As both prediction and correction steps contain uncertainty, it is useful to pose navigation as an Estimation problem. If the errors in prediction and correction can be modelled as probability distributions (they can, as will be seen in later sections) then the Kalman filter can be used to fuse all available information into a common estimate that may then be used for guidance.

1.2.1 The Kalman Filter

A consistent methodology for estimating position from navigation sensors is through the use of Kalman filtering and, for nonlinear systems, through the use of the extended Kalman filter. The Kalman filter is a linear statistical algorithm used to recursively estimate the states of interest. The states of interest will usually consist of the vehicle pose and other relevant vehicle parameters. In map building, the state vector can be augmented with feature positions, so that they too may be estimated. To aid in the estimation of the states, the Kalman filter requires two mathematical models: the process model and the observation model. The process model describes how the states evolve over time, together with an estimate of the errors committed by the system. The observation model explicitly describes the information supplied by a sensor as a function of the state, together with a model of measurement noise. These models correspond to prediction and correction respectively. For a linear system subject to Gaussian, uncorrelated, zero mean measurement and process noises, the Kalman filter is the optimal minimum mean squared error estimator. An additional benefit of using the Kalman filter estimator is that it keeps track of the uncertainties present in the system via the computation of a covariance matrix. This is important in many applications where it is desired to know how well (or how poorly) a system is performing. The Kalman filter has been used successfully in many multi-sensor AGV applications, including transport vehicles [9], road vehicles [13, 15] and indoor mobile robotics [5].

1.2.2 Other Issues

There is considerable research into different types of dead reckoning and external sensors that can work reliably in different environments and weather conditions. Different sensor suites are usually required for each application, such as navigation in underground mining, underwater vehicles, ships, and aerospace. Each system requires a particular combination of dead reckoning and absolute sensors. In many of these applications, reliability and integrity are very important issues. Such is the case in the autonomous operation of very large machines. In these applications, the navigation system must be designed with enough redundancy to be able to detect any possible fault that may occur. This usually requires that multiple redundant sensors are used, each based on different physical principles to avoid common failure modes.

1.3 Course Outline

This course presents the tools and techniques necessary for designing navigation systems based on natural and artificial landmarks working in structured and unstructured environments.

Part 1 presents the introductory navigation topics of vehicle modelling and beacon (known feature) based navigation. Laboratory: localisation assignment with outdoor experimental data.

Part 2 presents an introduction to inertial sensors and inertial navigation. Laboratory: calibration, alignment and integration of data obtained with a full six degree of freedom inertial measurement unit (IMU).

Part 3 presents the fundamentals of Global Positioning Systems (GPS) and GPS/INS data fusion algorithms. Laboratory: integration of GPS/INS data.

Part 4 presents the fundamentals of Simultaneous Localisation and Mapping (SLAM), and optimal and suboptimal algorithms to reduce the computational and memory requirements of SLAM. Laboratory: SLAM assignment with outdoor experimental data.

2 Vehicle Modelling

This section introduces the topic of vehicle modelling. The vehicle model can be a very important component of a navigation system; however, the issue of vehicle modelling for navigation is largely neglected in the literature. One of the most important issues with modelling is knowing the limitations of the model selected. For example, in the case of a pure kinematic model it is important to know the operational range where the model is valid. A simple kinematic model can be very accurate for predicting the trajectory of a vehicle moving in a straight line, yet degrade significantly when the vehicle is turning or under different load conditions, due to tyre deformation. There are different types of models that can be used for a particular application, such as kinematic, dynamic or lumped parameter models. In this course basic kinematic Ackerman steered vehicle models will be presented and used as a running example for the different navigation algorithms introduced. A much more detailed reference on the various aspects of vehicle modelling as they relate to navigation can be found in [14].

2.1 Kinematics

Perhaps the simplest method of vehicle modelling is simply to exploit the vehicle's kinematic constraints. These constraints are known as the rigid body and rolling motion constraints. The rigid body constraint makes the assumption that the vehicle frame is rigid, whereas the rolling motion constraint assumes that all points on the vehicle rotate about the Instantaneous Centre of Rotation with the same angular velocity. It is further assumed that there is no slip between the vehicle tyres and the ground, and that vehicle motion may adequately be represented as a two-dimensional model whose motion is restricted to a plane. The constraints mentioned above have the additional effect of reducing the total number of degrees of freedom for an Ackerman steered vehicle from a total of six (the speeds of each wheel plus the steer angle of each of the front wheels) to two. It is often convenient for modelling purposes to replace the four wheels of an Ackerman steered vehicle with two virtual wheels located along the vehicle centreline, as can be seen in Figure 1. These virtual wheels encapsulate the two degrees of freedom we are left with once the rigid body and rolling motion constraints are applied.

2.1.1 Basic Kinematic Model

Consider the vehicle model shown in Figure 2. It can be shown that the continuous time form of the vehicle model (with respect to the centre of the front wheel) can be derived as follows:

[Figures 1 and 2: Ackerman steered vehicle with the two 'virtual' wheels located along the vehicle centreline; wheelbase B.]

ẋ(t) = V(t) cos(φ(t) + α(t))
ẏ(t) = V(t) sin(φ(t) + α(t))
φ̇(t) = V(t) sin(α(t)) / B          (1)

where x(t) and y(t) denote the position of the vehicle, the angle φ(t) is the orientation of the vehicle with respect to the x axis, and V(t) represents the linear velocity of the front wheel. The angle α is defined as the steer angle of the vehicle, and B is the base length of the vehicle. These equations are reported widely in the literature [6, 8, 13, 19]. Measurements are made of the wheel's angular velocity ω and steer angle α. It should be noted that for nonlinear models such as this, a closed form solution for the discrete time vehicle model can not in general be found by integrating Equation 1 between the time instants t_{k-1} and t_k, as is possible in the linear case. For nonlinear models, an Euler approximation of the integral can be used, assuming the control inputs are approximately constant over this interval. This approximation yields a discrete time


vehicle model of the form

x(k) = x(k-1) + ΔT V(k-1) cos(φ(k-1) + α(k-1))
y(k) = y(k-1) + ΔT V(k-1) sin(φ(k-1) + α(k-1))
φ(k) = φ(k-1) + ΔT V(k-1) sin(α(k-1)) / B          (2)

where ΔT is defined as the sample time from t_{k-1} to t_k. This discrete time vehicle model is now implementable within a discrete time Extended Kalman Filter.
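For concreteness, the Euler-discretised model of Equation 2 can be written in a few lines of code. This is an illustrative sketch, not part of the original notes; the default wheelbase value B is a placeholder.

```python
import math

def predict_front_wheel(x, y, phi, V, alpha, dT, B=2.5):
    """One Euler prediction step of the kinematic model (Equation 2).

    x, y  : position of the front wheel
    phi   : vehicle orientation; alpha: steer angle
    V     : front wheel velocity; dT: sample time; B: wheelbase (placeholder value)
    """
    return (x + dT * V * math.cos(phi + alpha),
            y + dT * V * math.sin(phi + alpha),
            phi + dT * V * math.sin(alpha) / B)
```

With alpha = 0 the heading stays constant and the vehicle moves along its current orientation, as expected from Equation 1.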

2.2

This section presents a more realistic model of a standard outdoor vehicle, shown in Figure 3. In general, both the models that predict the trajectory of the vehicle and the models that relate the observations to the states are non-linear. The navigation algorithms to be used require the linearisation of these models; in this case the Jacobians of the process and observation models are needed to propagate the uncertainties. Assume a vehicle equipped with dead reckoning capabilities and an external sensor capable

Figure 3: Utility car used for the experiments. The vehicle is equipped with a Sick laser range and bearing sensor, a linear variable differential transformer sensor for the steering, and back wheel velocity encoders.

of measuring the relative distance between the vehicle and the environment, as shown in Figure 4. The steering control α and the speed v_c are used with the kinematic model to predict the position of the vehicle. In this case the external sensor returns range and bearing information to the different features B_i (i = 1..n). This information is obtained with respect to the vehicle coordinates (x_l, y_l), that is z(k) = (r, β), where r is the distance from the beacon to the range sensor and β is the sensor bearing measured with respect to the vehicle coordinate frame. Considering that the vehicle is controlled through a demanded velocity v_c and steering angle α, the process model that predicts the trajectory of the centre of the back axle is given by

ẋ_c = v_c cos(φ)
ẏ_c = v_c sin(φ)
φ̇ = (v_c / L) tan(α)          (3)

where L is the distance between wheel axles, as shown in Figure 5. To simplify the equations in the update stage, the kinematic model of the vehicle is designed to represent the trajectory of the centre of the laser. Based on Figures 4 and 5, the position of the laser can be obtained by translating the centre of the back axle:

P_L = P_C + a T̄(φ) + b T̄(φ + π/2)          (4)

where P_L and P_C are the positions of the laser and the centre of the back axle in global coordinates respectively. The transformation T̄ is defined by the orientation angle φ, according to the following vectorial expression:


Figure 4: Vehicle Coordinate System

T̄(φ) = (cos(φ), sin(φ))

The scalar representation is

x_L = x_c + a cos(φ) + b cos(φ + π/2)
y_L = y_c + a sin(φ) + b sin(φ + π/2)          (5)

Finally, the full state representation can be written

ẋ_L = v_c cos(φ) - (v_c / L) (a sin(φ) + b cos(φ)) tan(α)
ẏ_L = v_c sin(φ) + (v_c / L) (a cos(φ) - b sin(φ)) tan(α)
φ̇_L = (v_c / L) tan(α)          (6)

The velocity v_c is measured with an encoder located on the back left wheel. This velocity is translated to the centre of the axle with the following equation:

v_c = v_e / (1 - (H / L) tan(α))          (7)

where v_e is the velocity measured by the wheel encoder.

For this car H = 0.75 m, L = 2.83 m, b = 0.5 m and a = L + 0.95 m. Finally, the discrete


Figure 5: Kinematics parameters

model in global coordinates can be approximated with the following set of equations:

x(k) = x(k-1) + ΔT [ v_c(k-1) cos(φ(k-1)) - (v_c(k-1) / L) (a sin(φ(k-1)) + b cos(φ(k-1))) tan(α(k-1)) ]
y(k) = y(k-1) + ΔT [ v_c(k-1) sin(φ(k-1)) + (v_c(k-1) / L) (a cos(φ(k-1)) - b sin(φ(k-1))) tan(α(k-1)) ]
φ(k) = φ(k-1) + ΔT (v_c(k-1) / L) tan(α(k-1))          (8)

where ΔT is the sampling time, which in this case is not constant.
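As a sketch (again not code from the notes themselves), Equations 7 and 8 can be combined into a single prediction function for the laser centre; the parameter values are the ones quoted above for the utility car.

```python
import math

H, L = 0.75, 2.83          # encoder offset and wheelbase (values quoted above)
a, b = L + 0.95, 0.5       # laser position relative to the back axle

def encoder_to_axle(v_e, alpha):
    # Equation 7: back-left wheel encoder velocity -> centre of the back axle.
    return v_e / (1.0 - math.tan(alpha) * H / L)

def predict_laser(x, y, phi, v_e, alpha, dT):
    # Equation 8: Euler prediction of the laser centre in global coordinates.
    vc = encoder_to_axle(v_e, alpha)
    t = math.tan(alpha)
    xd = vc * math.cos(phi) - (vc / L) * (a * math.sin(phi) + b * math.cos(phi)) * t
    yd = vc * math.sin(phi) + (vc / L) * (a * math.cos(phi) - b * math.sin(phi)) * t
    return x + dT * xd, y + dT * yd, phi + dT * (vc / L) * t
```

Note that with a zero steer angle the encoder correction of Equation 7 vanishes and the model reduces to straight-line motion.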

The observation equation relating the vehicle states to the observations is

z = h(X, x_i, y_i) = [ z_r^i , z_β^i ]^T = [ √((x_i - x_L)² + (y_i - y_L)²) , arctan((y_i - y_L) / (x_i - x_L)) - φ_L ]^T + γ_h          (9)

where z is the observation vector, (x_i, y_i) are the coordinates of the landmarks, x_L, y_L and φ_L are the vehicle states defined at the external sensor location, and γ_h is the sensor noise. The complete non-linear model can be expressed in general form as:

X(k + 1) = F(X(k), u(k) + γ_u(k)) + γ_f(k)
z(k) = h(X(k)) + γ_h(k)          (10)
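A sketch of the range-bearing observation model follows. The range expression is standard; the bearing sign convention used here is our assumption for illustration, since conventions vary between implementations.

```python
import math

def observe_beacon(xL, yL, phiL, xi, yi):
    # Predicted range and bearing to beacon (xi, yi) from the sensor pose (xL, yL, phiL).
    dx, dy = xi - xL, yi - yL
    z_r = math.hypot(dx, dy)            # range to the beacon
    z_b = math.atan2(dy, dx) - phiL     # bearing relative to the vehicle heading (assumed convention)
    return z_r, z_b
```
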

11

The propagation of uncertainties and the estimation of the states require the evaluation of the Jacobians of the process and measurement models. The Jacobian of the process model with respect to the state variables is:

F_x = ∂F/∂X =
[ 1   0   -ΔT (v_c sin(φ) + (v_c/L) tan(α) (a cos(φ) - b sin(φ))) ]
[ 0   1    ΔT (v_c cos(φ) - (v_c/L) tan(α) (a sin(φ) + b cos(φ))) ]
[ 0   0    1 ]

The evaluation of the Jacobian with respect to the inputs requires a few additional mathematical steps. The full procedure is shown here for two different cases [11]: using the steering encoder, or using gyro information. Writing Δx(φ) = -(a sin(φ) + b cos(φ)) and Δy(φ) = a cos(φ) - b sin(φ), the general kinematic model is

F(v_1, γ) = [ v_1 cos(φ) + Δx(φ) γ ,  v_1 sin(φ) + Δy(φ) γ ,  γ ]^T

with

v_1 = v_2 / (1 - (H/L) tan(α))

and

γ = (v_1 / L) tan(α)   (steering encoder)   or   γ = ω_z   (gyro)

where v_2 is the velocity measured by the back wheel encoder and ω_z is the turn rate measured by the gyro. In terms of the measured inputs the model is G(v_2, α, ω_z) = F(v_1, γ), and the chain rule gives

∂G/∂v_2 = (∂F/∂v_1)(∂v_1/∂v_2) + (∂F/∂γ)(∂γ/∂v_1)(∂v_1/∂v_2)
∂G/∂α = (∂F/∂v_1)(∂v_1/∂α) + (∂F/∂γ)(∂γ/∂α + (∂γ/∂v_1)(∂v_1/∂α))
∂G/∂ω_z = (∂F/∂γ)(∂γ/∂ω_z)

with the partial derivatives

∂F/∂v_1 = (cos(φ), sin(φ), 0)^T
∂F/∂γ = (Δx(φ), Δy(φ), 1)^T
∂v_1/∂v_2 = 1 / (1 - (H/L) tan(α))
∂v_1/∂α = v_2 (H/L) / (cos²(α) (1 - (H/L) tan(α))²)

and, for the steering encoder and gyro cases respectively,

∂γ/∂v_1 = tan(α)/L  or  0
∂γ/∂α = v_1 / (L cos²(α))  or  0
∂γ/∂ω_z = 0  or  1

If the steering angle is used to predict the change in orientation, the input gradients are

∂G/∂v_2 = (1 / (1 - (H/L) tan(α))) [ (cos(φ), sin(φ), 0)^T + (tan(α)/L) (Δx(φ), Δy(φ), 1)^T ]
∂G/∂α = (∂F/∂v_1)(∂v_1/∂α) + (∂F/∂γ)(∂γ/∂α + (∂γ/∂v_1)(∂v_1/∂α))
∂G/∂ω_z = 0

If a gyro is used to predict the change in orientation, then

∂G/∂v_2 = (1 / (1 - (H/L) tan(α))) (cos(φ), sin(φ), 0)^T
∂G/∂α = (v_2 (H/L) / (cos²(α) (1 - (H/L) tan(α))²)) (cos(φ), sin(φ), 0)^T
∂G/∂ω_z = (Δx(φ), Δy(φ), 1)^T

Finally, the Jacobian of the observation model with respect to the state variables, evaluated at (x_L, y_L, φ), is

H_x = ∂h/∂X =
[ (x_L - x_i)/d      (y_L - y_i)/d      0 ]
[ -(y_L - y_i)/d²     (x_L - x_i)/d²    -1 ]

with d = √((x_L - x_i)² + (y_L - y_i)²).

These models will be used with standard EKF algorithms to navigate, and to build and maintain a navigation map of the environment.
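A practical way to check a hand-derived Jacobian such as F_x is to compare it against finite differences. The sketch below (with illustrative parameter values, not prescribed by the notes) does this for the process model of Equation 8.

```python
import math

L, a, b = 2.83, 3.78, 0.5   # illustrative vehicle parameters

def f(state, vc, alpha, dT):
    # Discrete process model of Equation 8 (laser centre).
    x, y, phi = state
    t = math.tan(alpha)
    return (x + dT * (vc * math.cos(phi) - (vc / L) * (a * math.sin(phi) + b * math.cos(phi)) * t),
            y + dT * (vc * math.sin(phi) + (vc / L) * (a * math.cos(phi) - b * math.sin(phi)) * t),
            phi + dT * (vc / L) * t)

def analytic_Fx(state, vc, alpha, dT):
    # Jacobian of f with respect to the state (x, y, phi).
    phi = state[2]
    t = math.tan(alpha)
    dx = -dT * (vc * math.sin(phi) + (vc / L) * t * (a * math.cos(phi) - b * math.sin(phi)))
    dy = dT * (vc * math.cos(phi) - (vc / L) * t * (a * math.sin(phi) + b * math.cos(phi)))
    return [[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]]

def numeric_Fx(state, vc, alpha, dT, eps=1e-6):
    # Central finite differences, built column by column.
    J = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        hi, lo = list(state), list(state)
        hi[j] += eps
        lo[j] -= eps
        fp, fm = f(hi, vc, alpha, dT), f(lo, vc, alpha, dT)
        for i in range(3):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return J
```

The two Jacobians should agree to within finite-difference accuracy at any operating point; disagreement usually indicates a sign or term error in the hand derivation.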


3 Beacon Based Navigation

In this section, a navigation system employing artificial beacons (or targets) is developed. The kinematic vehicle model developed in the previous section will be used, and sensor models will be described. The generally accepted definition of a beacon is a target whose position (and perhaps other parameters) is known to the navigation system. By observing a beacon whose absolute position is known, the position of the vehicle with respect to the beacon may be easily computed. Therefore, beacon based navigation systems are perhaps the easiest to implement. The drawback, of course, is that infrastructure is usually needed to implement a navigation system of this type. A form of navigation that overcomes this limitation will be discussed in Section 10.

3.1 Linear Kalman Filter Estimator

This section presents a brief introduction to Kalman filter techniques, to make the reader familiar with the notation and terms that will be used through the rest of the course. The Kalman filter is a linear estimator that fuses information obtained from a model, observations and prior knowledge in an optimal manner. We start by assuming that the system of interest can be modelled by:

x(k) = F(k) x(k-1) + B(k) u(k) + G(k) v(k)          (11)

where x(k-1) is the state of the system at time k-1, u(k) is an input, v(k) is additive noise, B(k) and G(k) are the input and noise transition matrices, and F(k) is the transition matrix that propagates the state from time k-1 to time k, with k the index denoting one incremental step of time. We also assume that the observation model has the following form:

z(k) = H(k) x(k) + w(k)          (12)

where z(k) is the observation at time k, H(k) is the observation matrix that relates the states to the observations, and w(k) is additive noise. It is also assumed that all noises are Gaussian, uncorrelated and zero mean,

E[v(k)] = E[w(k)] = 0

with covariances:

E[v(i) v^T(j)] = δ_ij Q(i) ,  E[w(i) w^T(j)] = δ_ij R(i)

The process and observation noises are also assumed uncorrelated:

E[v(i) w^T(j)] = 0

Finally, it is assumed that the initial state is known with a given uncertainty:

x̂(0) = x_0 ,  P(0) = P_0


where P_0 is in general a diagonal covariance matrix with an initial guess of the uncertainty in the initial value x_0. With the process and observation models F, B, G, H, the covariances Q, R and P_0, and the initial state x_0, the following Kalman filter equations can be applied to recursively estimate the state in two stages: Prediction and Update.

3.1.1 Prediction Stage

The prediction stage is usually responsible for the high frequency information, and uses the dead reckoning sensor inputs to drive the process model and generate a prediction of the state x at time k given all the information up to time k-1:

x̂(k|k-1) = F(k) x̂(k-1|k-1) + B(k) u(k)          (13)

The uncertainty of this prediction is also evaluated:

P(k|k-1) = F(k) P(k-1|k-1) F^T(k) + G(k) Q(k) G^T(k)          (14)

3.1.2 Update Stage

The filter will work in prediction mode until an observation becomes available. The state will then be updated with the observation obtained at time k. In this stage we obtain the estimate x̂(k|k), usually read as the estimate of the state x(k) given all the information up to time k. The update equation for the state is:

x̂(k|k) = x̂(k|k-1) + W(k) (z(k) - H(k) x̂(k|k-1))          (15)

It can be seen that the estimate of x at time k given all the information up to time k is obtained as the prediction x̂(k|k-1) plus a correction factor formed by the product of the Kalman gain W(k) and the difference between the observation z(k) and the prediction of the observation H(k) x̂(k|k-1). At the same time, since an observation was incorporated, the uncertainty of the state, P(k|k), must decrease. This can be seen in the update equation

for the state covariance P:

P(k|k) = P(k|k-1) - W(k) S(k) W^T(k)          (16)

where the Kalman gain matrix W(k) and the innovation covariance S(k) are given by

W(k) = P(k|k-1) H^T(k) S^{-1}(k)          (17)

S(k) = H(k) P(k|k-1) H^T(k) + R(k)

The last equation represents the covariance of the innovation sequence:

ν(k) = z(k) - H(k) x̂(k|k-1)          (18)

Since in most cases the true value of the state is never known, the innovation sequence gives an indication of how well the filter is working. This measure can be obtained by comparing the actual value of the innovation sequence with the expected variance predicted by S(k). This approach will be used to validate the data before incorporating an observation, and for data association purposes. More on these topics later in this chapter.
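To make Equations 13-18 concrete, here is a minimal one-state implementation. The scalar form is our simplification for illustration, so that the matrix operations of the text reduce to ordinary arithmetic.

```python
def kf_predict(x, P, F, B, u, G, Q):
    # Prediction stage, Equations (13) and (14), scalar form.
    x_pred = F * x + B * u
    P_pred = F * P * F + G * Q * G
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    # Update stage, Equations (15)-(18), scalar form.
    nu = z - H * x_pred        # innovation (18)
    S = H * P_pred * H + R     # innovation covariance
    W = P_pred * H / S         # Kalman gain (17)
    x = x_pred + W * nu        # state update (15)
    P = P_pred - W * S * W     # covariance update (16)
    return x, P
```

After an update the variance P must be smaller than the predicted variance P(k|k-1), reflecting the information gained from the observation.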

3.2 The Extended Kalman Filter

In most real cases the process and/or observation models do not behave linearly, and hence the linear Kalman filter described above cannot be implemented. To overcome this problem, the extended Kalman filter (EKF) is used. It provides the best MMSE estimate and in principle is a linear estimator which linearises the process and observation models about the current estimated state. The non-linear discrete time system is described as

x(k) = F(x(k-1), u(k), k) + w(k)          (19)

where F(·, ·, k) is a non-linear state transition function at time k which forms the current state from the previous state and the current control input. The non-linear observation model is represented as

z(k) = H(x(k)) + v(k)          (20)


Following the same definitions outlined for the Kalman filter, the state prediction equation is

x̂(k|k-1) = F(x̂(k-1|k-1), u(k))          (21)

and the predicted covariance matrix is

P(k|k-1) = ∇F_x(k) P(k-1|k-1) ∇F_x^T(k) + Q(k)          (22)

The term ∇F_x(k) is the Jacobian of the non-linear state transition function F with respect to the state, evaluated at the previous estimate x̂(k-1|k-1). When an observation occurs, the state vector is updated according to

x̂(k|k) = x̂(k|k-1) + W(k) ν(k)          (23)

where ν(k) is the nonlinear innovation,

ν(k) = z(k) - H(x̂(k|k-1))          (24)

The gain and innovation covariance matrices are

W(k) = P(k|k-1) ∇H_x^T(k) S^{-1}(k)          (25)
S(k) = ∇H_x(k) P(k|k-1) ∇H_x^T(k) + R(k)          (26)

where the term ∇H_x(k) is the Jacobian of the observation model with respect to the state, evaluated at the predicted state x̂(k|k-1). The updated covariance matrix is

P(k|k) = (I - W(k) ∇H_x(k)) P(k|k-1) (I - W(k) ∇H_x(k))^T + W(k) R(k) W^T(k)          (27)

In the linear Kalman filter, if the models are time invariant, the Kalman gains can be computed off-line and used within the filter in real time. In the non-linear implementation, however, this benefit is not available, since the process and observation models depend on the current state. Furthermore, the non-linear filter must be properly initialised to ensure that the linearised models are accurate, since poor initialisation can lead to filter instabilities. Finally, the presence of the Jacobians adds a higher level of complexity and increases the computational processing required.
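The EKF cycle of Equations 21-27 can be illustrated on a deliberately simple one-state example. This toy model, with a linear process and the observation z = sin(x), is our own and is not taken from the notes.

```python
import math

def ekf_step(x_est, P, u, z, Q, R, dT=0.1):
    # Toy model: process x(k) = x(k-1) + dT*u  (so the Jacobian Fx = 1),
    # observation z = sin(x) + noise (so the Jacobian Hx = cos(x_pred)).
    x_pred = x_est + dT * u                  # state prediction (21)
    P_pred = P + Q                           # covariance prediction (22), with Fx = 1
    Hx = math.cos(x_pred)                    # observation Jacobian at the prediction
    nu = z - math.sin(x_pred)                # innovation (24)
    S = Hx * P_pred * Hx + R                 # innovation covariance (26)
    W = P_pred * Hx / S                      # gain (25)
    x_new = x_pred + W * nu                  # state update (23)
    j = 1.0 - W * Hx                         # Joseph form covariance update (27)
    P_new = j * P_pred * j + W * R * W
    return x_new, P_new
```

Note the two linearisation points: the process Jacobian is evaluated at the previous estimate, the observation Jacobian at the prediction, exactly as in the text.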


3.3 The Information Filter

The information filter is mathematically equivalent to the Kalman filter; that is, if both filters use the same data then identical results are obtained (the reader is referred to [20] for a derivation of the information filter). The key components of the information filter are the information matrix Y(k|k) and the information vector ŷ(k|k). Assuming that the noise is Gaussian, Y(k|k) is the inverse of the covariance matrix,

Y(k|k) = P^{-1}(k|k)          (28)

and hence represents the amount of information present amongst the states of interest and their correlations. The information vector represents the information content in the states themselves and can be evaluated by transforming the state estimate over to information space,

    \hat{y}(k|k) = Y(k|k)\hat{x}(k|k).    (29)

The predicted information vector is

    \hat{y}(k|k-1) = L(k|k-1)\hat{y}(k-1|k-1) + B(k)u(k),    (30)

where L(k|k-1) = Y(k|k-1) F(k) Y^{-1}(k-1|k-1).

The transformation matrix B(k) transforms the control input u(k) to information space. The corresponding predicted information matrix is

    Y(k|k-1) = [F(k) Y^{-1}(k-1|k-1) F^T(k) + Q(k)]^{-1}.    (31)

Again, these two equations are evaluated at each sample of the dead reckoning sensor. The information contribution to the states due to an observation takes the form of the information observation vector,

    i(k) = H^T(k) R^{-1}(k) z(k).    (32)

The amount of certainty associated with this observation takes the form of the information observation matrix,

    I(k) = H^T(k) R^{-1}(k) H(k).    (33)

Navigation System Design (KC-4)

The benefit of implementing the information filter lies in the update stage. Since the observation has been transformed over to information space, the update is simply the addition of the information contribution of this observation to the prediction:

    \hat{y}(k|k) = \hat{y}(k|k-1) + i(k),    (34)
    Y(k|k) = Y(k|k-1) + I(k).    (35)

If there is more than one sensor providing observations, the updated information vector and matrix are simply the sums of the individual observation information vectors and matrices:

    \hat{y}(k|k) = \hat{y}(k|k-1) + \sum_j i_j(k),    (36)
    Y(k|k) = Y(k|k-1) + \sum_j I_j(k),    (37)

where j ranges over the sensors providing information at time k. There are a number of advantages associated with the information filter:

- No innovation covariance matrix S(k) has to be computed.
- There is no gain matrix W(k) to be computed.
- The initial information matrix can easily be initialised to zero information.
- It is computationally easier to implement an information filter in a multi-sensor environment, since the update is simply the summation of the individual information contributions. A Kalman filter, by contrast, cannot accommodate multiple simultaneous observations in a single update, due to the correlation of the innovations amongst the observations.
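The multi-sensor update of Equations 36-37 can be sketched as below. The matrices and measurements are made up for illustration; the point is that summing the information contributions of two sensors in one step yields the same estimate as two sequential Kalman updates:

```python
import numpy as np

def info_update(y, Y, observations):
    """Information-filter update, Eqs. (32)-(37): sum the information
    contributions i_j(k), I_j(k) of any number of observations."""
    for z, H, R in observations:
        R_inv = np.linalg.inv(R)
        y = y + H.T @ R_inv @ z        # Eq. (32), summed as in Eq. (36)
        Y = Y + H.T @ R_inv @ H        # Eq. (33), summed as in Eq. (37)
    return y, Y

# Example: two sensors observing a 2-state system.
x_pred = np.array([1.0, 2.0])
P_pred = np.array([[2.0, 0.5], [0.5, 1.0]])
Y_pred = np.linalg.inv(P_pred)          # Eq. (28)
y_pred = Y_pred @ x_pred                # Eq. (29)

H1, R1 = np.array([[1.0, 0.0]]), np.array([[0.5]])
H2, R2 = np.array([[0.0, 1.0]]), np.array([[0.2]])
z1, z2 = np.array([1.2]), np.array([1.8])

y_new, Y_new = info_update(y_pred, Y_pred, [(z1, H1, R1), (z2, H2, R2)])
x_new = np.linalg.inv(Y_new) @ y_new    # back to state space
```

For linear models this equivalence is exact, which is the basis of the decentralised fusion argument above.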

3.4 The Extended Information Filter

The derivation of the extended information filter (EIF) can be found in [20]. As with the EKF, the EIF estimates the states of interest given non-linear process and/or observation models, but in information space. The predicted information vector and matrix are

    \hat{y}(k|k-1) = Y(k|k-1) F(\hat{x}(k-1|k-1), u(k)),    (38)
    Y(k|k-1) = [\nabla F_x(k) Y^{-1}(k-1|k-1) \nabla F_x^T(k) + Q(k)]^{-1}.    (39)

When an observation occurs, the information contribution and its associated information matrix are

    i(k) = \nabla H_x^T(k) R^{-1}(k) [\nu(k) + \nabla H_x(k)\hat{x}(k|k-1)],    (40)
    I(k) = \nabla H_x^T(k) R^{-1}(k) \nabla H_x(k),    (41)

where the innovation is

    \nu(k) = z(k) - H(\hat{x}(k|k-1)).    (42)

Although the non-trivial derivation of the Jacobians is still required in this form, the EIF has the benefit of being easily initialised: one can set the initial conditions to zero information. Furthermore, there are the added benefits of decentralising the filter across multiple nodes and of incorporating multiple observations into a single filter [20].

3.5 Sensors Used for Beacon Based Navigation Systems

The sensors used in beacon based navigation systems typically fall into two distinct categories. The first, bearing only, refers to sensors that simply return the bearing and/or azimuth to a given beacon. These types of laser systems are usually cheaper since they do not need to implement the range measurement circuitry. Single cameras also fall into this category, unless some form of stereo vision can also provide range information. The second category comprises sensors that return both range and bearing to a beacon. Typically used range-bearing sensors include sonar, laser, radar, and stereo vision.

Figure 6: Bearing laser, and range and bearing radar and laser.

In most navigation applications the beacons will be clearly differentiated from the background but will not report their identity. The navigation system will have a list with the locations of all the beacons and will be responsible for determining which beacon has been detected. This process is called data association, and it is more difficult and less reliable with bearing only sensors.


3.6 Bearing Only Navigation Systems

This section presents bearing only localisation algorithms using the basic non-linear kinematic and observation models presented in the previous chapter. The UTE model will be used in the laboratory assignments and will be referred to in this chapter when necessary. The extended Kalman filter is also introduced to deal with the non-linear models.

3.6.1 State Transition (Prediction) Equations

Consider the Ackerman steered vehicle discussed in Section 2.1. Equation 1 gives the continuous time vehicle model. For implementation in a discrete time EKF, the model must be discretised as was shown in Equation 2. This discretised vehicle model (state transition equation) is reproduced here for clarity:

    x(k) = x(k-1) + \Delta T \, V(k-1) \cos(\phi(k-1) + \gamma(k-1))
    y(k) = y(k-1) + \Delta T \, V(k-1) \sin(\phi(k-1) + \gamma(k-1))    (43)
    \phi(k) = \phi(k-1) + \Delta T \, \frac{V(k-1)}{B} \sin(\gamma(k-1))

where \phi is the vehicle heading, \gamma the steer angle, V the wheel velocity, B the wheel base and \Delta T the sample interval.

To best illustrate this example, assume that the kinematic assumptions (rigid body, rolling motion) hold true. When this is the case, the only source of process noise is injected via the control inputs, the steer angle and the wheel velocity, so that these parameters may be modelled as

    V(t) = \bar{V}(t) + \delta V(t)
    \gamma(t) = \bar{\gamma}(t) + \delta\gamma(t),

where \delta V(t) and \delta\gamma(t) are Gaussian, uncorrelated, zero mean random noise processes with variances \sigma_V^2 and \sigma_\gamma^2 respectively. The discrete time state vector at time k+1 can now be defined as

    x(k+1) = f(x(k), u(k), k) + v(k) = [x(k+1), y(k+1), \phi(k+1)]^T,    (44)

where u(k) = [V(k), \gamma(k)]^T is the control vector at time k, v(k) is additive process noise representing the uncertainty in slip angles and wheel radius, and f(\cdot) is the non-linear state transition function mapping


the previous state and current control inputs to the current state, here represented by Equation 43. Assume the system has an estimate available at time k which is equal to the conditional mean,

    \hat{x}(k|k) = E[x(k)|Z^k].    (45)

The prediction \hat{x}(k+1|k) for the state at time k+1 based on information up to time k is given by expanding Equation 44 as a Taylor series about the estimate \hat{x}(k|k), eliminating second and higher order terms, and taking expectations conditioned on the first k observations, giving

    \hat{x}(k+1|k) = E[x(k+1)|Z^k] = f(\hat{x}(k|k), u(k), k).    (46)

The vector state prediction function f(\cdot) is defined by Equation 43 assuming zero process and control noise. The prediction of state is therefore obtained by simply substituting the previous state and current control inputs into the state transition equation with no noise. Define the noise source vector as

    w(k) = [\delta V(k), \delta\gamma(k)]^T.    (47)

The error between the true state and the estimated state is given by

    \tilde{x}(k|k) = x(k) - \hat{x}(k|k),    (48)

and the prediction of covariance is then obtained as

    P(k+1|k) = E[\tilde{x}(k+1|k)\tilde{x}^T(k+1|k)|Z^k]
             = \nabla f_x(k) P(k|k) \nabla f_x^T(k) + \nabla f_w(k) \Sigma(k) \nabla f_w^T(k),    (49)

where \nabla f_x(k) represents the gradient or Jacobian of f(\cdot) evaluated at time k with respect to the states, \nabla f_w(k) is the Jacobian of f(\cdot) with respect to the noise sources, and \Sigma(k) is the noise strength matrix given by

    \Sigma(k) = \begin{bmatrix} \sigma_V^2 & 0 \\ 0 & \sigma_\gamma^2 \end{bmatrix}    (50)

and

    \nabla f_x(k) = \begin{bmatrix} 1 & 0 & -\Delta T\, V \sin(\hat\phi(k|k) + \gamma(k)) \\ 0 & 1 & \Delta T\, V \cos(\hat\phi(k|k) + \gamma(k)) \\ 0 & 0 & 1 \end{bmatrix}    (51)

and

    \nabla f_w(k) = \begin{bmatrix} \Delta T \cos(\hat\phi(k|k) + \gamma(k)) & -\Delta T\, V \sin(\hat\phi(k|k) + \gamma(k)) \\ \Delta T \sin(\hat\phi(k|k) + \gamma(k)) & \Delta T\, V \cos(\hat\phi(k|k) + \gamma(k)) \\ \frac{\Delta T \sin(\gamma(k))}{B} & \frac{\Delta T\, V \cos(\gamma(k))}{B} \end{bmatrix}    (52)
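Equations 43 and 51 can be coded directly and the hand-derived Jacobian checked numerically against finite differences. This is an illustrative sketch only; the wheel base and sample interval values are made up:

```python
import numpy as np

DT, B = 0.1, 2.5           # sample interval [s] and wheel base [m] (illustrative)

def f(state, u):
    """Discrete vehicle model, Eq. (43). state = [x, y, phi], u = [V, gamma]."""
    x, y, phi = state
    V, gamma = u
    return np.array([x + DT * V * np.cos(phi + gamma),
                     y + DT * V * np.sin(phi + gamma),
                     phi + DT * V * np.sin(gamma) / B])

def f_jacobian(state, u):
    """Jacobian of f with respect to the state, Eq. (51)."""
    _, _, phi = state
    V, gamma = u
    return np.array([[1.0, 0.0, -DT * V * np.sin(phi + gamma)],
                     [0.0, 1.0,  DT * V * np.cos(phi + gamma)],
                     [0.0, 0.0,  1.0]])
```

Comparing an analytic Jacobian against a central finite difference is a cheap way to catch sign errors before they destabilise the filter.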

For the UTE model the reader needs to select the type of information used to determine the change of bearing (steering encoder or gyro), and then select the appropriate equation to evaluate the Jacobian.

3.6.2 Sensor Prediction

Assume that the vehicle has a laser sensor attached such that the bearing to a number of fixed beacons B_i = [X_i, Y_i], i = 1, \ldots, n can be obtained. This observation can be predicted by a non-linear model

    z(t) = h(x(t)) + v(t).    (53)

The bearing to a beacon is given by \arctan\left(\frac{Y_i - y(t)}{X_i - x(t)}\right). However, the platform is oriented in the direction \phi, so for this system the measurement equation for each beacon detected by the laser is given by the model

    z_i(t) = \arctan\left(\frac{Y_i - y(t)}{X_i - x(t)}\right) - \phi(t) + v_\theta(t),    (54)

where the laser observation noise v_\theta is uncorrelated, zero mean and Gaussian with variance \sigma_\theta^2, and is assumed to be identical for each beacon observed.

The predicted observation is found by taking expectations conditioned on all previous observations, truncating at first order, to give

    \hat{z}(k+1|k) = E[z(k+1)|Z^k] = h(\hat{x}(k+1|k)).    (55)

If there is a predicted state for the vehicle, \hat{x}(k+1|k), we can therefore predict the observations that will be made at that state. From Equations 54 and 55 we have the predicted observations as

    \hat{z}_i(k+1|k) = \arctan\left(\frac{Y_i - \hat{y}(k+1|k)}{X_i - \hat{x}(k+1|k)}\right) - \hat\phi(k+1|k).    (56)

Now, the innovation (or observation prediction error) covariance S(k), used in the calculation of the Kalman gains, must be computed. This is found by squaring the estimated observation error and taking expectations conditioned on the first k measurements, giving

    S(k+1) = \nabla h_x(k+1) P(k+1|k) \nabla h_x^T(k+1) + R(k+1).    (57)

In this case, the Jacobian \nabla h_x(k+1) is given by

    \nabla h_x(k+1) = \left[ -\frac{\hat{y}(k+1|k) - Y_i}{d^2} \quad \frac{\hat{x}(k+1|k) - X_i}{d^2} \quad -1 \right],    (58)

where d = \sqrt{(X_i - \hat{x}(k+1|k))^2 + (Y_i - \hat{y}(k+1|k))^2} is the predicted distance between the beacon and the vehicle. The observation variance term R(k) is given by

    R(k) = \sigma_\theta^2.    (59)

3.6.3 Update Equations

The state update equations for the EKF are given by

    \hat{x}(k+1|k+1) = \hat{x}(k+1|k) + K(k+1)[z(k+1) - h(\hat{x}(k+1|k))],    (60)

where

    K(k+1) = P(k+1|k) \nabla h_x^T(k+1) S^{-1}(k+1).    (61)
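One complete bearing-only update, Equations 56-61, might be sketched as follows. This is an illustration only; the explicit wrapping of the angular innovation into (-pi, pi] is an implementation detail the equations leave implicit:

```python
import numpy as np

def bearing_update(x_pred, P_pred, z, beacon, sigma_theta):
    """EKF update for one bearing observation, Eqs. (56)-(61).
    x_pred = [x, y, phi]; beacon = (Xi, Yi); z is the measured bearing."""
    Xi, Yi = beacon
    dx, dy = Xi - x_pred[0], Yi - x_pred[1]
    d2 = dx**2 + dy**2
    z_pred = np.arctan2(dy, dx) - x_pred[2]            # Eq. (56)
    H = np.array([[-(x_pred[1] - Yi) / d2,
                   (x_pred[0] - Xi) / d2, -1.0]])      # Eq. (58)
    S = H @ P_pred @ H.T + sigma_theta**2              # Eq. (57), R = sigma^2
    K = P_pred @ H.T / S                               # Eq. (61), S is scalar
    nu = np.arctan2(np.sin(z - z_pred), np.cos(z - z_pred))  # wrapped innovation
    x_new = x_pred + (K * nu).ravel()                  # Eq. (60)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```

A perfectly consistent observation (zero innovation) leaves the state unchanged but still reduces the covariance, since information has been gained about the pose.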

3.7 Range-Bearing Navigation Systems

The implementation of a range-bearing beacon based navigation system is very similar to the bearing only system presented in Section 3.6. The only major difference is in the derivation of the sensor model. The sensor still observes a number of fixed beacons, but now also returns the range to these beacons.

The prediction equations for such a sensor can be derived as

    \hat{z}(k+1|k) = \begin{bmatrix} \hat{z}_i^r(k+1|k) \\ \hat{z}_i^\theta(k+1|k) \end{bmatrix} = \begin{bmatrix} \sqrt{(X_i - \hat{x}(k+1|k))^2 + (Y_i - \hat{y}(k+1|k))^2} \\ \arctan\left(\frac{Y_i - \hat{y}(k+1|k)}{X_i - \hat{x}(k+1|k)}\right) - \hat\phi(k+1|k) \end{bmatrix},    (62)

where \hat{z}_i^r is the range measurement returned by the sensor when observing the i-th beacon. In this case, the Jacobian \nabla h_x(k+1) is given by

    \nabla h_x(k+1) = \begin{bmatrix} \frac{\hat{x}(k+1|k) - X_i}{d} & \frac{\hat{y}(k+1|k) - Y_i}{d} & 0 \\ -\frac{\hat{y}(k+1|k) - Y_i}{d^2} & \frac{\hat{x}(k+1|k) - X_i}{d^2} & -1 \end{bmatrix},    (63)

where d = \sqrt{(X_i - \hat{x}(k+1|k))^2 + (Y_i - \hat{y}(k+1|k))^2} is (as before) the predicted distance between the i-th beacon and the vehicle. The observation variance term R(k) is given by

    R(k) = \begin{bmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\theta^2 \end{bmatrix},    (64)

where \sigma_r^2 is the variance associated with the range measurement. As before, it is assumed that the sensor is corrupted by Gaussian, zero-mean, uncorrelated measurement noise.

The state and covariance prediction and update steps are identical to those in Section 3.6.
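The range-bearing model of Equation 62 and its Jacobian, Equation 63, can be sketched and verified by finite differences. The beacon coordinates here are arbitrary illustrative values:

```python
import numpy as np

def h_range_bearing(x, beacon):
    """Range-bearing observation model, Eq. (62). x = [x, y, phi]."""
    dx, dy = beacon[0] - x[0], beacon[1] - x[1]
    return np.array([np.hypot(dx, dy),              # range
                     np.arctan2(dy, dx) - x[2]])    # bearing

def h_jacobian(x, beacon):
    """Jacobian of the model with respect to the state, Eq. (63)."""
    dx, dy = beacon[0] - x[0], beacon[1] - x[1]
    d = np.hypot(dx, dy)
    return np.array([[-dx / d,     -dy / d,      0.0],
                     [ dy / d**2,  -dx / d**2,  -1.0]])
```

Note that the range row does not depend on the heading, while the bearing row carries the -1 term from the vehicle orientation, exactly as in the bearing-only case.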

3.8 Data Association

This is a fundamental problem in beacon navigation: we have a list of beacons with position information, but when a beacon is sensed we face the question: what is the identity of this beacon? There are several ways to address this problem. The first (and simplest) is to use beacons that are not identical. A common method for achieving this in bearing only navigation is to use bar codes to uniquely identify targets. The second method uses statistics to determine how likely it is that a given beacon is the one that was sensed. The first method makes the association very simple, but it can become limited in the number of possible beacons, or expensive to use in large areas. The second method can use an unlimited number of identical passive beacons (reflective tape), but makes the data association more complex. This section will discuss basic data association algorithms based on statistical methods.


The innovation \nu(k) for a navigation filter is defined as the difference between the observed and predicted observations,

    \nu(k) = z(k) - H(k)\hat{x}(k|k-1).    (65)

The innovation mean can be found as

    E[\nu(k)|Z^{k-1}] = E[z(k) - \hat{z}(k|k-1)|Z^{k-1}] = E[z(k)|Z^{k-1}] - \hat{z}(k|k-1) = 0,    (66)

with variance

    S(k) = E[\nu(k)\nu^T(k)]
         = E[(z(k) - H(k)\hat{x}(k|k-1))(z(k) - H(k)\hat{x}(k|k-1))^T]
         = E[(H(k)[x(k) - \hat{x}(k|k-1)] + v(k))(H(k)[x(k) - \hat{x}(k|k-1)] + v(k))^T]
         = R(k) + H(k) P(k|k-1) H^T(k).    (67)

So, the innovation sequence is white with zero mean and variance S(k). For beacon matching (data validation), these properties can be exploited to determine how likely it is that a beacon gave rise to the current observation. The normalised innovation squared q(k) is defined as

    q(k) = \nu^T(k) S^{-1}(k) \nu(k),    (68)

which, under the assumption that the filter is consistent, can be shown to be a \chi^2 random variable with m degrees of freedom, where m = dim(z(k)) is the dimension of the measurement [3]. For gating observations, q is simply formed (usually for each beacon B_i), a threshold is chosen from \chi^2 tables (given a confidence value, 95% or 99% for example), and then q is compared to the threshold. If the value of q for a given beacon is less than the threshold, then that beacon is likely to have been the one observed. One caveat, however: it is important to keep track of the number of beacons gated for a given observation. Depending on the geometry of the beacons, it is sometimes possible for multiple beacons to (quite validly) match an observation. In these cases, it is usually prudent to ignore the observation rather than trying to guess which beacon was observed. In other words, it is usually safer to throw away good data than to use bad data. This is very important, since the incorporation of a wrong beacon will most likely generate a catastrophic fault in the filter. That is, the filter will not be able to match any more


beacons unequivocally, since the vehicle will be in another position and the state covariance error will grow indefinitely. There are other methods to improve the robustness of data association that consider more than one observation at a time, or that perform delayed batch association.
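The gating test of Equation 68 might be sketched as follows. The 95% thresholds are the standard \chi^2 table values for 1 and 2 degrees of freedom, and the discard-on-ambiguity rule follows the caveat above:

```python
import numpy as np

CHI2_95 = {1: 3.841, 2: 5.991}   # 95% chi-square thresholds for m = 1, 2 d.o.f.

def gate_beacons(nu_S_pairs, m):
    """Return indices of beacons whose normalised innovation squared,
    q = nu^T S^-1 nu (Eq. 68), falls inside the 95% gate."""
    accepted = []
    for i, (nu, S) in enumerate(nu_S_pairs):
        q = float(nu.T @ np.linalg.inv(S) @ nu)
        if q < CHI2_95[m]:
            accepted.append(i)
    return accepted

def associate(nu_S_pairs, m):
    """Accept the observation only if exactly one beacon gates;
    otherwise discard it (safer to drop good data than to use bad data)."""
    hits = gate_beacons(nu_S_pairs, m)
    return hits[0] if len(hits) == 1 else None
```

For the bearing-only system m = 1, and for the range-bearing system m = 2.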

3.9 Filter Implementation

Any real localisation application will have dead reckoning sensors that provide high frequency information at fixed intervals, and external sensors that provide low frequency information, not necessarily at a pre-fixed interval. Examples of dead reckoning sensors are velocity and steering encoders that are sampled at given intervals. External sensors usually monitor the environment and eventually report information when a possible beacon is detected. In general there are several prediction cycles between successive pieces of external information. This external information will in most cases not arrive exactly at a prediction time. Furthermore, the external information will in some cases be available only with a significant delay, due to the processing necessary to extract the possible features or to validate them. One example is shown in Figure 7. In this case the system performs predictions 1, 2 and 3. At time 3 we realise that an observation is available with a time stamp somewhere between 2 and 3. The correct procedure to incorporate this information is to save the previous state at 2, make a new prediction up to the observation time, perform an update at that time, and then predict to time 3 again. This process requires the filter to save a few previous states, in accordance with the maximum delay expected in the external information.

Figure 7: Prediction and update sequence with a delayed observation.
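The save-and-repredict procedure can be sketched with a scalar random-walk filter. The noise values are made up, and this is not the course's UTE implementation; the point is that rolling back to a saved state and re-predicting reproduces exactly what in-order processing would have given:

```python
Q_RATE, R = 0.5, 1.0     # process noise rate and observation variance (illustrative)

def predict(state, t_new):
    """Random-walk prediction: the mean is unchanged, uncertainty grows."""
    x, P, t = state
    return (x, P + Q_RATE * (t_new - t), t_new)

def update(state, z):
    """Scalar Kalman update with observation variance R."""
    x, P, t = state
    W = P / (P + R)
    return (x + W * (z - x), (1 - W) * P, t)

def fuse_delayed(history, z, t_obs):
    """Roll back to the last saved state before t_obs, predict to the
    observation time, update, then re-predict to the latest filter time."""
    base = max((s for s in history if s[2] <= t_obs), key=lambda s: s[2])
    latest_t = history[-1][2]
    state = update(predict(base, t_obs), z)
    return predict(state, latest_t)
```

The history buffer only needs to be as deep as the maximum expected observation delay, as noted above.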


4 The Global Positioning System

The Global Positioning System is an all-weather, continuous signal system designed to provide the information needed to evaluate accurate position worldwide. The GPS system achieved initial operational capability on 8 December 1993. It consists of 3 major segments: space, control and user.

Space Segment: This consists of the GPS satellites. The GPS operational constellation consists of 24 satellites that orbit the earth with a period of 12 hours. There are 6 different orbital planes with 55 degrees of inclination. Each plane has a radius of 20200 km and its path is followed by 4 satellites. This distribution assures that at least four satellites are always in view anywhere around the world. The maximum number of satellites that can be seen from any point on the surface of the earth is 12, although this is very rare due to natural terrain undulation or other obstructions.

Control Segment: This segment consists of a system of tracking stations distributed around the world. The main objective is to determine the position of the satellites in order to update their ephemeris. Satellite clock corrections are also updated.

User Segment: The user segment consists of the GPS receivers that use the GPS signal information. The satellites transmit information at frequencies L1 = 1575.42 MHz and L2 = 1227.6 MHz. These are modulated with the pseudo random C/A and P codes and with a navigation message. The C/A code is a repeating 1 MHz pseudo random binary sequence. There is a separate C/A code for each satellite in operation. The code is used by the receiver to identify the satellite and to obtain range information. The Precision Positioning System (PPS) is provided by the P code included in the L2 signal. This code is only available to authorised users. The Standard Positioning System (SPS) is based on the C/A code that is included in both frequencies. The original accuracy of SPS was 100 m horizontal (2drms), 156 m vertical (95%) and 340 ns (95%) in time, with Selective Availability.
The drms metric stands for distance root mean square. In this case 2drms is the radius of the circle that bounds the error with 95% probability. The accuracy of the PPS system is 22 m horizontal (2drms), 27.7 m vertical (95%), and 200 ns (95%) in time. The navigation message is present in both the L1 and L2 frequencies. It contains information about the precise satellite ephemeris, clock correction parameters, and low precision ephemeris data for the other satellites. After 30 seconds of continuous satellite tracking the GPS receiver is able to use the tracked satellite for position determination.


4.1 GPS observables

Two types of observations are of main interest. One is the pseudo-range, which is the distance from the satellite to the vehicle plus additional errors due to clock drifts, the ionosphere, the troposphere, multi-path and Selective Availability (SA). SA is a deliberate error introduced by the US Department of Defense, consisting of additional noise included in the transmitted satellite clock and satellite ephemeris. SA has been eliminated since May 2000, making standard GPS more accurate. Another observation used by GPS receivers is Doppler frequency information. As the vehicle and the satellites are in constant motion with respect to each other, the received signal will experience a change in frequency proportional to their relative velocities. This information, usually referred to as delta range, can be used to generate very accurate velocity estimates. Although most GPS receivers generate position and velocity information, only the more sophisticated systems generate velocity from the Doppler observations, making it independent of the position information. This is an important consideration for the data fusion algorithm, since position and velocity observation errors must not be correlated. The precision of the solution is affected by two main factors: the geometry of the actual relative positions of the satellites with respect to the receiver, usually referred to as PDOP (position dilution of precision), and the precision in range determination. With time stamped range and satellite position information from at least 4 satellites, the 3-D position x, y, z and the GPS receiver clock drift \Delta t can be obtained using standard triangulation techniques. Equation 69 presents the set of non-linear equations that needs to be solved to evaluate the position of the GPS receiver. The position of the satellites is predicted with the ephemeris information. Some GPS receivers include all the satellites in view to obtain a position fix. They usually implement a least squares or Kalman filter algorithm to estimate the position and velocity of the receiver.

4.2 Position evaluation

Once ephemeris data and ranges from 4 satellites become available, the set of non-linear equations presented in Equation 69 can be solved. These equations are functions of the known ranges and satellite positions, and of the unknown position of the GPS receiver and its clock drift:

    r_1 = \sqrt{(x - x_1)^2 + (y - y_1)^2 + (z - z_1)^2} + c\Delta t
    r_2 = \sqrt{(x - x_2)^2 + (y - y_2)^2 + (z - z_2)^2} + c\Delta t    (69)
    r_3 = \sqrt{(x - x_3)^2 + (y - y_3)^2 + (z - z_3)^2} + c\Delta t
    r_4 = \sqrt{(x - x_4)^2 + (y - y_4)^2 + (z - z_4)^2} + c\Delta t

Although there are closed form solutions for this system, GPS receivers usually linearise these equations and evaluate the position by solving a linear set of equations, or by implementing a least squares solution for the case where redundant information is available. Linearising Equation 69 we have

    \begin{bmatrix} r_1(p) - r_1(p_0) \\ r_2(p) - r_2(p_0) \\ r_3(p) - r_3(p_0) \\ r_4(p) - r_4(p_0) \end{bmatrix} = A \begin{bmatrix} x - x_0 \\ y - y_0 \\ z - z_0 \\ c\Delta t \end{bmatrix} + \begin{bmatrix} \epsilon_x \\ \epsilon_y \\ \epsilon_z \\ \epsilon_{clock} \end{bmatrix},    (70)

with

    A = \begin{bmatrix} \frac{x - x_1}{S_1} & \frac{y - y_1}{S_1} & \frac{z - z_1}{S_1} & 1 \\ \frac{x - x_2}{S_2} & \frac{y - y_2}{S_2} & \frac{z - z_2}{S_2} & 1 \\ \frac{x - x_3}{S_3} & \frac{y - y_3}{S_3} & \frac{z - z_3}{S_3} & 1 \\ \frac{x - x_4}{S_4} & \frac{y - y_4}{S_4} & \frac{z - z_4}{S_4} & 1 \end{bmatrix}    (71)

and

    S_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2},    (72)

where p = [x, y, z, c\Delta t]^T and the \epsilon terms are the errors due to range noise and the missing higher order terms in the linearisation. With this approximation the change in position can then be evaluated:

    \Delta p = A^{-1} \Delta r.    (73)

When ranges from more than four satellites are available, a least squares solution can be implemented:

    \Delta p = (A^T A)^{-1} A^T \Delta r.    (74)

Finally, the previous position is updated with the correction to obtain the position of the rover at the time stamp of the pseudo-range information:

    p = p_0 + \Delta p.    (75)

The pseudorange is obtained by the GPS hardware through a correlation process, and the positions of the satellites are computed using the precise satellite ephemerides broadcast as part of the navigation message by each satellite.
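The iteration of Equations 70-75 might be sketched as follows. Any satellite geometry used to exercise it would be synthetic, and a real receiver would also apply the clock and atmospheric corrections discussed in the next section:

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, iterations=10):
    """Iterated least-squares solution of Eqs. (70)-(75).
    sat_pos: (n, 3) satellite positions [m]; pseudoranges: (n,) [m].
    Returns p = [x, y, z, c*dt]."""
    p = np.zeros(4)                    # initial guess: Earth centre, zero clock
    for _ in range(iterations):
        S = np.linalg.norm(sat_pos - p[:3], axis=1)            # Eq. (72)
        predicted = S + p[3]
        A = np.column_stack([(p[:3] - sat_pos) / S[:, None],   # Eq. (71)
                             np.ones(len(S))])
        dp, *_ = np.linalg.lstsq(A, pseudoranges - predicted, rcond=None)  # Eq. (74)
        p = p + dp                                             # Eq. (75)
    return p
```

With more than four satellites the least squares form of Equation 74 is used automatically through `lstsq`; with exactly four it reduces to Equation 73.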


4.3 GPS Error Sources

There are several sources of ranging error for GPS. Depending on the type of GPS implementation they can introduce significant errors into the position estimate.

Satellite Clock Errors: The GPS system ultimately depends on the accuracy of the satellite clocks. The ground stations are responsible for estimating the clock errors. These stations do not correct the clocks, but upload the parameters of the correction formula that the satellites broadcast as part of the navigation message. Each GPS receiver needs to compensate the pseudorange information with a polynomial function of these parameters. The major source of clock error is SA. With SA on, the RMS value of the error can be as large as 20 m. Without SA the clock drift can be predicted, reducing the error to less than 2 m over a 12 hour period.

Ephemeris Errors: These errors result from the satellite transmitting ephemeris parameters with errors. They translate into errors in the predicted positions of the satellites, which in turn generate an incorrect position fix at the GPS receiver. These errors grow with the time since the last update performed by the ground stations. GPS receivers usually do not use satellites with ephemeris data older than 2 hours.

Ionosphere Errors: Since there are free electrons in the ionosphere, the GPS signal does not travel at the speed of light while in transit through this region. The delay is proportional to the number of free electrons encountered and to the inverse of the square of the signal frequency. The user can compensate for these errors using a diurnal model of these delays. The parameters of this model are part of the GPS navigation message. The error remaining after this compensation is of the order of 2-5 m. Another compensation method uses the signals at both frequencies, L1 and L2, to solve for the delay. This method can reduce the errors to 1-2 m.

Troposphere Errors: Another delay is introduced by the variation of the speed of the signal due to variations in temperature, pressure and humidity. Model corrections can reduce this error to the order of 1 m.

Receiver Errors: This is the error introduced by the GPS receiver when evaluating the range through the correlation process. It is mostly dependent on non-linear effects and thermal noise. The magnitude of this error is of the order of 0.2-0.5 m.


Multipath Errors: The range information is evaluated by measuring the travel time of the signal from the satellite to the rover. Sometimes the direct path may be obstructed while the signal still reaches the rover antenna through an indirect path via multiple reflections. The receiver will then obtain erroneous range and carrier phase difference information, and an incorrect position fix will be generated. The multipath problem can be reduced with an appropriate selection of the antenna and GPS receiver, and by accepting observations only from satellites above a minimum elevation angle. Since multipath is due to multiple reflections, satellites with lower elevation angles are more prone to generate these errors. The antenna can also be retrofitted with a choke-ring ground plate to eliminate low angle reflections. Some GPS receivers include algorithms and extra hardware to detect and eliminate various types of multipath errors. Although this technology has made enormous progress in the past few years, it is almost impossible to guarantee a given GPS accuracy 100% of the time when working in applications such as mining or stevedoring. Large pieces of equipment, such as cranes and draglines, and large structures will almost certainly cause important perturbations to the system at unpredictable times.

Selective Availability: Selective Availability (SA) is a deliberate error introduced by the US Department of Defense, consisting of additional noise included in the transmitted satellite clock and satellite ephemeris. SA has been switched off since May 2000 and it is not the intent of the US to ever use SA again on the civil GPS signal. For this reason only a brief description of this type of error is given here, to demonstrate the complexity of filtering coloured noise. Figure 8 shows the longitude and latitude error in meters obtained with a standard GPS receiver with SA.

Figure 8: Longitude error in meters obtained with a standard GPS receiver with SA.

It is clear from this figure that the error is time correlated, showing a characteristic oscillatory behaviour with an amplitude of approximately 60 meters. Figure 9 presents the average PSD obtained from sequences of 512 samples of longitude information. Similar results are obtained for latitude information.

Figure 9: PSD of Longitude Error. The PSD has a gain of approximately 43 dB below 2 mHz, continuing with a sharp roll-off of approximately 40 dB/decade up to 6 mHz, where the magnitude starts to decrease at approximately 20 dB/decade.

The position estimation is corrupted with coloured noise of very low frequency. It becomes clear that without additional information it is practically impossible to eliminate this added noise over short periods of time. For more information about SA the reader can see [22].

Geometric Dilution of Precision: The quality of the solution for the position and clock bias error, in relation to the error in the pseudorange measurements, is a function of the matrix A of Equation 73. Assuming that the standard deviation of the pseudorange observations is \sigma, it can be proved that the covariance matrix of the state p of Equation 75 is given by

    P = (A^T A)^{-1}\sigma^2 = H\sigma^2.    (76)

From this equation the various definitions of estimation accuracy are defined:

    VDOP = \sqrt{H_{33}},    (77)
    HDOP = \sqrt{H_{11} + H_{22}},    (78)
    PDOP = \sqrt{H_{11} + H_{22} + H_{33}},    (79)
    GDOP = \sqrt{H_{11} + H_{22} + H_{33} + H_{44}}.    (80)

Finally, the estimated errors of the individual components of the state vector p can be given as functions of the DOP variables:

    \sqrt{var(z)} = VDOP\,\sigma,    (81)
    \sqrt{var(x) + var(y)} = HDOP\,\sigma,    (82)
    \sqrt{var(x) + var(y) + var(z)} = PDOP\,\sigma,    (83)
    \sqrt{var(x) + var(y) + var(z) + var(c\Delta t)} = GDOP\,\sigma.    (84)

Most GPS receivers evaluate these uncertainties in real time, allowing the user to monitor the accuracy of the solution. This is important in real time applications of stand-alone GPS, or when integration with INS is used. In some cases the GPS may be able to report a solution, but due to the geometrical configuration of the satellites in view the accuracy of the solution can be very poor.
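The DOP quantities of Equations 76-80 follow directly from the design matrix A of Equation 71; a minimal sketch, with an arbitrary illustrative geometry of unit line-of-sight vectors:

```python
import numpy as np

def dops(A):
    """Compute VDOP, HDOP, PDOP and GDOP from the linearised design
    matrix A of Eq. (71), using H = (A^T A)^-1 as in Eq. (76)."""
    H = np.linalg.inv(A.T @ A)
    vdop = np.sqrt(H[2, 2])                            # Eq. (77)
    hdop = np.sqrt(H[0, 0] + H[1, 1])                  # Eq. (78)
    pdop = np.sqrt(H[0, 0] + H[1, 1] + H[2, 2])        # Eq. (79)
    gdop = np.sqrt(pdop**2 + H[3, 3])                  # Eq. (80)
    return vdop, hdop, pdop, gdop
```

Note that by construction PDOP^2 = HDOP^2 + VDOP^2, and GDOP adds the clock term H_{44}, which is why GDOP is always at least as large as PDOP.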

4.4 Differential GPS

The errors in range determination can be significantly reduced with the use of another station placed at a known, surveyed location. This station processes the information from all satellites in view and determines the errors in the range information. These errors are then broadcast by the base station with a radio modem. The other GPS receivers receive the range corrections with a radio modem and use them to compensate the pseudorange observations. The base station is usually placed in a location with good sky visibility. It processes satellites at least 5 degrees above the horizon to avoid multipath problems. This implementation is called differential GPS (DGPS) and is shown in Figure 10. Although the user can provide the range correction, base station and telemetry, there are a number of commercial services that provide standard RTCM corrections. With DGPS the errors due to delays in the troposphere and ionosphere are significantly reduced. It eliminates almost all the errors due to SA (satellite ephemeris and clock dither). DGPS was of fundamental importance with SA on: with this approach the positioning errors were reduced from approximately 100 meters to under a metre. Without SA the reduction is not as important, especially for low quality DGPS systems, as can be seen in the following tables. The predicted magnitude of the errors for different types of GPS (SPS/PPS/DGPS, with and without SA) is presented in Table 1 (SPS autonomous and differential without SA), Table 2 (SPS autonomous and differential with SA) and Table 3 (PPS autonomous and differential). Finally, Table 4 presents actual experimental errors obtained from a 24 hour data set logged with a standard stand-alone GPS receiver. It can be seen that the magnitudes of the errors are smaller than the predicted values.


Error Source Ephemeris Satellite Clock Ionosphere Troposphere Multipath Receiver measurement User Equiv Range Error Vertical 1 error VDOP Horizontal 1 error HDOP 2.0

Error Source Ephemeris Satellite Clock Ionosphere Troposphere Multipath Receiver measurement User Equiv Range Error Vertical 1 error VDOP Horizontal 1 error HDOP 2.0

37

Error Source Ephemeris Satellite Clock Ionosphere Troposphere Multipath Receiver measurement User Equiv Range Error Vertical 1 error VDOP Horizontal 1 error HDOP 2.0

Total 2.1 2.1 1.2 1.4 1.4 0.5 3.6 8.6 6.6

38

4.5

The more sophisticated GPS receivers make use of carrier phase information to improve the accuracy of the position fix. Differential carrier phase tracking consists of measuring the phase shift between the same signal received at the base and the rover stations; this phase shift is proportional to the distance between the two stations. Much more elaborate hardware and software algorithms are needed for this implementation, since these measurements are subject to phase ambiguities, that is, uncertainty in the number of whole carrier waves between the receiver and the satellite.

Algorithm convergence is a vital factor in achieving the accuracy of the carrier phase measurements, and the resolution of the integer ambiguity plays a very important role in the solution of the baseline vector. When the receiver unit is switched on and commences logging, the initial whole-cycle difference (ambiguity) between the satellite and the receiver is unknown. As time progresses and the receiver continuously locks on to satellite signals without interruption, the receiver moves towards generating a fixed integer ambiguity for each corresponding satellite signal. Once the state of the ambiguity is held fixed, the receiver is said to have converged and the ambiguity is resolved. In some cases the receiver is unable to reach a fixed integer ambiguity state; the ambiguity is then held floating with minimal tolerance, and the receiver is also said to have converged. Once the ambiguities have converged, each corresponding satellite signal ambiguity, N, is held constant and the change in phase is used to calculate the change in the receiver's position.

Convergence Time for Carrier Phase GPS: In any navigation application it is expected that on many occasions the number of satellites in view will not be enough to obtain a carrier phase solution. The system will then be restarted and the ambiguities will need to be resolved again. During the convergence period the accuracy is considerably degraded. The convergence time depends on a number of different factors, including the baseline, the number of visible satellites, the satellite configuration and the method of ambiguity resolution, i.e. single frequency, dual frequency, or combined dual frequency and code pseudorange data. The most important factor influencing the time taken for a receiver to solve the unknown ambiguities, to either a fixed or a floating solution, is the applied method of ambiguity resolution; however, both the visible satellite count and the length of the baseline vector also have an important influence.

Method for Ambiguity Resolution: In this section a comparison between two high quality carrier phase implementations is presented. The experimental data was obtained with a GPS receiver manufactured by NovAtel running two different algorithms, named RT20 and RT2. The RT20 algorithm is capable of generating a predictable position accuracy of within 20 cm CEP in real time using the single L1 frequency. The RT2 algorithm is capable of generating a predictable accuracy of within 2 cm CEP, also in real time. The RT2 algorithm utilizes both the L1 and the L2 frequencies, making it possible to resolve ambiguities faster and more accurately using linear combinations of the L1 and L2 frequencies. Figure 11 presents the GPS receiver's own estimate of its position standard deviation for the RT2 and RT20 algorithms. For this test the GPS antenna was located at a perfectly known position so that the real convergence time could be evaluated. It can be seen that, depending on the algorithm implementation, the system will take between 100 and 180 seconds to be within specifications.
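The carrier phase observable described above, an integer ambiguity N plus a measured fractional phase, maps to range as sketched below. The cycle counts and phases are purely illustrative.

```python
# Once the ambiguity N has converged it is held constant, and the measured
# change in phase maps directly to a change in range.
C = 299792458.0          # speed of light (m/s)
F_L1 = 1575.42e6         # GPS L1 carrier frequency (Hz)
WAVELENGTH = C / F_L1    # about 0.19 m

def carrier_range(n_cycles, phase_cycles):
    """Range implied by an integer ambiguity plus a fractional phase (in cycles)."""
    return (n_cycles + phase_cycles) * WAVELENGTH

r1 = carrier_range(105_000_000, 0.25)   # illustrative cycle count to a satellite
r2 = carrier_range(105_000_000, 0.50)
print(f"carrier wavelength: {WAVELENGTH * 100:.1f} cm")
print(f"range change for a quarter-cycle phase change: {(r2 - r1) * 100:.1f} cm")
```

A quarter of a cycle corresponds to under 5 cm of range, which is why resolved-ambiguity solutions reach centimetre-level accuracy, and why losing lock on the ambiguities is so costly.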

Figure 11: Convergence time for steady state position.

The RT2 algorithm has a very distinct and sudden time of convergence when compared to the RT20 algorithm, which demonstrates a more gradual process of solving for the ambiguities, slowly moving towards convergence. This distinct difference is a result of the dual frequency capability of the RT2 algorithm. Each individual ambiguity value for any particular observation corresponds to one possible pseudorange value, defined as a lane. For any particular observation there is a large number of possible lane combinations, so the receiver must analyse each possible lane value in order to select the correct one. Receivers with only single frequency capability have no alternative but to tediously sort through the possible lanes in search of the correct solution. Dual frequency monitoring of the L1 and L2 frequencies, however, introduces the advantage of building linear combinations of the two frequencies, leading to additional wider and narrower lanes. The increased wide-lane wavelength provides an increased ambiguity spacing, making it easier to choose the correct lane. When the correct wide lane has been selected, the software can then search for the narrow lane and as a result resolve the integer ambiguities faster and more accurately. Once the ambiguities are fixed, these values are considered constant, resulting in a more consistent and accurate position estimate. However, when the number of satellites in view becomes less than five the ambiguities are lost and a full convergence cycle is restarted.

Visible Satellites: An adequate number of visible satellites is an essential component of the GPS system. Not only does the number of visible satellites influence the accuracy of the generated position but, in the case of phase measurements, it also influences the resolution of the signal ambiguities. Therefore the number of visible satellites influences the convergence time of a receiver. There is a large number of possible lane combinations for every observation; in a situation of increased satellite visibility the number of possible lanes will also increase, providing a greater chance of selecting the correct lane and improving the ambiguity resolution time.

Baseline Length: The length of the baseline between the remote and the reference station has arguably the second largest influence on the time taken for a receiver to reach a converged state since, essentially, the key to convergence is solving the starting vector. For a receiver to reach convergence, the calculations performed by the relative carrier phase GPS system must generate the position of the remote station with respect to the reference station, i.e. the baseline vector. Therefore the length of the baseline plays an important part in the convergence time. Figure 12 shows the time required for the same receiver, using the RT2 algorithm, to converge to a fixed integer ambiguity solution, first at a baseline of 100 m and then at a baseline of 1 km. At both locations six satellites were visible.

Figure 12: Convergence time for the RT2 algorithm with different baselines (approximately 100 m and 1 km).

GLONASS: GLONASS (Global Navigation Satellite System) is the Russian equivalent of GPS. It also transmits navigation and range data on two frequencies, L1 and L2. Unlike GPS, where the satellites are distinguished by their spread-spectrum code, the satellites in GLONASS are distinguished by the frequency of the signal: L1 between 1597-1617 MHz and L2 between 1240-1260 MHz. The system is also envisioned to handle 24 satellites in almost the same configuration as its GPS equivalent. Both systems employ a military-only code package which is modulated onto both the L1 and L2 frequencies; the corresponding civilian access code lies only on the L1 frequency. The initial purpose of implementing two frequencies is to remove the ionospheric errors associated with the transmission of the signal through the atmosphere.
If both systems are combined then a theoretical doubling of the number of satellites in view is achievable. This increase in the number of satellites improves both the navigation performance and the fault detection capabilities. There are various commercial GPS receivers that use both systems.

4.6

Coordinate Transformation

This section presents the coordinate transformation equations that convert position and velocity given in the standard ECEF frame to the local navigation frame used by the data fusion algorithms presented in this course. The position P^g = {P_X, P_Y, P_Z} and velocity V^g = {V_X, V_Y, V_Z} obtained from the GNSS receiver are delivered in the WGS-84 datum, represented by the ECEF coordinate frame (Figure 13).


The latitude and longitude are obtained by transforming the position observation received by the GNSS receiver from ECEF to geodetic coordinates. Using the WGS-84 parameters

Semimajor Earth axis (metres):  a = 6378137.0
Flattening:                     f = 0.003352810664747
Semiminor Earth axis (metres):  b = a(1 - f)

and the intermediate quantities

p = \sqrt{P_X^2 + P_Y^2}

\theta = \tan^{-1}\left( \frac{P_Z \, a}{p \, b} \right)

e_1^2 = 2f - f^2, \qquad e_2^2 = \frac{a^2 - b^2}{b^2}

the geodetic coordinates are

latitude (degrees):   \phi = \tan^{-1}\left( \frac{P_Z + e_2^2 \, b \sin^3\theta}{p - e_1^2 \, a \cos^3\theta} \right)

longitude (degrees):  \lambda = \tan^{-1}\left( \frac{P_Y}{P_X} \right)

Given the latitude and longitude, a transformation matrix C_g^n can be formed which transforms the position and velocity data given by the GNSS receiver from the ECEF frame to the navigation frame:

C_g^n = \begin{bmatrix} -\sin\phi\cos\lambda & -\sin\phi\sin\lambda & \cos\phi \\ -\sin\lambda & \cos\lambda & 0 \\ -\cos\phi\cos\lambda & -\cos\phi\sin\lambda & -\sin\phi \end{bmatrix}     (85)

Thus the position and velocity of the vehicle in the local navigation frame are

P^n = C_g^n P^g     (86)
V^n = C_g^n V^g     (87)

If the vehicle's work area has negligible change in latitude and longitude, then C_g^n is effectively fixed. Equations 86 and 87 form the observations needed for the aided inertial navigation system.
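The transformation above can be sketched directly in code: a transcription of the closed-form conversion followed by the C_g^n rotation. Function names are my own, and the example ECEF point is illustrative.

```python
import math

A = 6378137.0               # WGS-84 semimajor axis (m)
F = 0.003352810664747       # WGS-84 flattening
B = A * (1.0 - F)           # semiminor axis

def ecef_to_geodetic(px, py, pz):
    """Closed-form ECEF -> (latitude, longitude) in radians (Bowring-style)."""
    p = math.hypot(px, py)
    theta = math.atan2(pz * A, p * B)
    e1sq = 2.0 * F - F ** 2            # first eccentricity squared
    e2sq = (A ** 2 - B ** 2) / B ** 2  # second eccentricity squared
    lat = math.atan2(pz + e2sq * B * math.sin(theta) ** 3,
                     p - e1sq * A * math.cos(theta) ** 3)
    lon = math.atan2(py, px)
    return lat, lon

def ecef_to_nav_matrix(lat, lon):
    """Rotation matrix C_g^n from the ECEF frame to the local NED navigation frame."""
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    return [[-sl * co, -sl * so,  cl],
            [     -so,       co, 0.0],
            [-cl * co, -cl * so, -sl]]

def rotate(C, v):
    return [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]

# Example: an ECEF point roughly over Sydney (approximately -33.87 deg, 151.21 deg)
lat, lon = ecef_to_geodetic(-4646678.0, 2553214.0, -3534772.0)
print(f"latitude {math.degrees(lat):.3f} deg, longitude {math.degrees(lon):.3f} deg")
Cgn = ecef_to_nav_matrix(lat, lon)
v_nav = rotate(Cgn, [10.0, 0.0, 0.0])   # an ECEF velocity expressed in the nav frame
```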


Figure 13: Coordinate representations. XYZ represents the orthogonal axes of the ECEF coordinate frame used by the GNSS receiver. λ and φ represent the longitude and latitude of the vehicle. NED represents the local navigation frame in which the vehicle states are evaluated.


Inertial Sensors

Inertial sensors make measurements of the internal state of the vehicle. A major advantage of inertial sensors is that they are non-radiating and non-jammable, and may be packaged and sealed from the environment. This makes them potentially robust in harsh environmental conditions.

Historically, Inertial Navigation Systems (INS) have been used in aerospace vehicles and military applications such as ships, submarines and missiles and, to a much lesser extent, in land vehicle applications. Only a few years ago the application of inertial sensing was limited to high performance, high cost aerospace and military applications. However, motivated by requirements from the automotive industry, a whole variety of low cost inertial systems have now become available for diverse applications such as heading and attitude determination.

The most common types of inertial sensors are accelerometers and gyroscopes. Accelerometers measure acceleration with respect to an inertial reference frame; this includes gravitational and rotational acceleration as well as linear acceleration. Gyroscopes measure the rate of rotation independent of the coordinate frame.

The most common application of inertial sensors is the use of a heading gyro: integration of the gyro rate information provides the orientation of the vehicle. Another application is the use of accelerometers as inclinometers and for vibration analysis. The most sophisticated use of inertial sensors is in full six degree of freedom navigation systems. In this case at least three gyros are used to track the orientation of the platform, and a minimum of three accelerometers are integrated to provide velocity and position. Such systems can provide 3-D position information and, unlike encoders, have the potential of observing wheel slip. This section presents the different types of inertial sensors and the fundamental principles of inertial navigation.

5.1

5.1.1

Accelerometers

Accelerometers measure the inertial force generated when a mass is affected by a change in velocity. This force may change the tension of a string, cause the deflection of a beam, or change the vibrating frequency of a mass. Accelerometers are composed of three main elements: a mass, a suspension mechanism that positions the mass, and a sensing element that returns an observation proportional to the acceleration of the mass. Some devices include an additional servo loop that generates an opposing force to improve the linearity of the sensor. A basic one degree of freedom accelerometer is shown in Figure 14. This accelerometer is usually referred to as open loop, since the acceleration is indicated directly by the displacement of the mass. The accelerometer described in this example can be modelled with a second-order equation:


Figure 14: Basic Components of an open loop accelerometer

F = m \frac{d^2 x}{dt^2} + c \frac{dx}{dt} + k x     (88)
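As a numerical sketch of Equation 88, with the applied force set to F = m a for a constant acceleration a and illustrative parameter values, the proof mass settles to a displacement x = m a / k, which is what the sensing element reads out:

```python
m, c, k = 0.001, 0.04, 100.0   # proof mass (kg), damping (N s/m), stiffness (N/m); assumed
a = 0.2 * 9.81                 # constant applied acceleration: 0.2 g, typical of a car

# Semi-implicit Euler integration of m*x'' + c*x' + k*x = m*a
x, v, dt = 0.0, 0.0, 1e-5
for _ in range(int(0.5 / dt)):      # simulate 0.5 s, long enough to settle
    acc = a - (c * v + k * x) / m   # x'' = a - (c*x' + k*x)/m
    v += acc * dt
    x += v * dt

print(f"settled displacement:  {x * 1e6:.1f} um")
print(f"predicted m*a/k value: {m * a / k * 1e6:.1f} um")
```

The damping coefficient c sets how quickly the oscillation dies away, which is the adjustment mentioned below for obtaining a fast response without oscillatory behaviour.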

where F is the applied force to be measured. The damping can be adjusted to obtain a fast response without oscillatory behaviour. Many commercial accelerometers are based on the pendulum principle. They are built with a proof mass, a spring hinge and a sensing device, and are usually constructed with a feedback loop to constrain the movement of the mass and avoid cross-coupling accelerations. There are accelerometers that implement variations of this principle, such as the Sundstrand Q-Flex design and the Analog Devices silicon micro-machined accelerometer.

One important specification of an accelerometer is the minimum acceleration that can be measured. This is of fundamental importance when working in land vehicle applications, where the expected acceleration is usually in the range of 0.1-0.3 g. Figure 15 shows the acceleration measured when travelling with a car at low speed. A standard accelerometer for such applications is usually capable of measuring accelerations of less than 500 µg. The dependence of the bias on temperature and the linearity of the device are also important specifications.

5.1.2 Gyroscopes

These devices return a signal proportional to the rotational velocity. There is a large variety of gyroscopes based on different principles, and the price and quality of these sensors vary significantly. The following sections present some of the most common types of gyroscope available for industrial applications.

Figure 15: Typical acceleration in land vehicle applications.

5.1.3 Vibratory gyroscopes

These types of gyroscopes can be manufactured in very different forms, but they are all based on obtaining the rotation rate by measuring the Coriolis acceleration. The device can be modelled by a simple mass-spring system, as shown in Figure 16. The purpose of the gyroscope is to measure the angular velocity of the particle, which is assumed to be rotating about the axis OZ. The particle is made to vibrate with constant amplitude along the axis OX. This motion is usually referred to as the primary motion and is controlled by an embedded circuit that maintains the oscillation at constant amplitude. Under rotation the mass will experience a Coriolis inertial force that is proportional to the applied rate of turn and directed parallel to the OY axis. This motion is referred to as the secondary motion, and its amplitude is measured to provide information proportional to the angular rotation. Some devices provide an additional feedback mechanism for this secondary motion to increase the linearity of the sensor.

The performance of gyroscopes is mainly characterized by two parameters: scale factor and drift. A gyro with low distortion in the scale factor will be able to accurately sense angular rates at different angular velocities. Various system imperfections as well as environmental conditions can cause significant variations in the scale factor. Gyro drift is the nominal output of the gyro in steady state.


Figure 16: Mass-spring model of a vibratory gyroscope.

This non-zero output is usually temperature dependent. Murata manufactures a gyroscope, the Gyrostar ENV05, based on this principle. This device presents very good high frequency characteristics but poor low frequency performance; the expected drift is in the order of 1 degree per minute. Noise and a significant bias dependence on temperature also affect the performance of this particular sensor. Better performance can be expected from the Vibrating Structure Gyroscopes from BAE and the quartz rate sensors built by Systron Donner. These devices have a drift of less than 0.15 degrees per minute and a linearity better than 0.05% of the full range.

5.1.4 Fiber Optic Gyros

The fiber optic gyros are based on the Sagnac effect, discovered by Georges Sagnac in 1913. This effect can be explained by assuming two waves of light circulating in opposite directions around a path of radius R. If the source is rotating at speed Ω, the light travelling against the rotation will reach the source sooner than the wave travelling in the same direction, as shown in Figure 17. The wave travelling with the rotation covers a distance D_1 in a transit time t_1, while the other wave covers a distance D_2 in a time t_2:

D_1 = 2\pi R + \Omega R t_1
D_2 = 2\pi R - \Omega R t_2     (89)

Making the waves travel the path N times, the difference in transit time becomes

\Delta t = N (t_1 - t_2) = \frac{4 \pi N R^2 \Omega}{c^2}     (90)


It is important to relate this time difference to a phase shift at a particular frequency f:

\Delta\phi = 2 \pi \, \Delta t \, f     (91)

For a given rotation Ω the phase shift will be

\Delta\phi = \frac{8 \pi^2 N R^2 f \Omega}{c^2}     (92)

Most low cost implementations of these devices work in an open loop manner. The maximum phase shift that can be evaluated without ambiguity is 90 degrees.
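Equations 90-92 can be checked numerically. The coil radius, turn count and source wavelength below are assumed for illustration only:

```python
import math

C = 299792458.0              # speed of light (m/s)
R = 0.04                     # coil radius: 4 cm (assumed)
N = 1000                     # number of fibre turns (assumed)
f = C / 1550e-9              # optical frequency of a 1550 nm source (assumed)

omega = math.radians(10.0)   # applied rotation rate: 10 deg/s

# Equation 90: transit-time difference between the counter-propagating waves
dt = 4.0 * math.pi * N * R ** 2 * omega / C ** 2
# Equations 91-92: corresponding phase shift at the optical frequency
dphi = 2.0 * math.pi * dt * f

print(f"transit-time difference: {dt:.2e} s")
print(f"phase shift: {math.degrees(dphi):.2f} degrees")
```

Even a kilometre of coiled fibre yields a phase shift of only a few degrees at 10 deg/s, well inside the 90 degree unambiguous window mentioned above, which illustrates why the effect demands interferometric detection.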

Figure 17: Transmission time difference.

There are commercial fiber optic gyros, such as the KVH E-Core models 2000 and 1000, which are capable of drifts as low as 0.036 and 0.12 degrees per minute respectively. Closed loop optical gyros are also available, but they are more expensive.

5.2

Sensor Errors

The measurement errors associated with inertial sensors depend on the physical operating principle of the sensor itself. The general error equation used for gyroscopic performance along, say, the x direction, is written as

\delta\omega_x = b + \mathbf{b}_g \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} + s_f \, \omega_x + m_y \, \omega_y + m_z \, \omega_z + \eta     (93)

Navigation System Design (KC-4)

and that of the accelerometers as

\delta f_x = b + s_f \, a_x + m_y \, a_y + m_z \, a_z + \eta     (94)

where
b is the residual bias,
b_g is a 1 × 3 vector representing the g-dependent bias coefficients,
s_f is the scale factor term,
m represents mounting and internal misalignments, and
η is random noise on the sensor signal.
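A sketch of how the error terms of Equation 93 corrupt an ideal x-axis gyro reading; every coefficient value below is invented for illustration:

```python
import random

def measured_rate_x(w_x, w_y, w_z, a_xyz):
    """True x rate plus the error terms of Equation 93 (illustrative coefficients)."""
    b = 0.01                          # residual bias (deg/s)
    b_g = (0.001, 0.0005, 0.0002)     # g-dependent bias coefficients (deg/s per g)
    s_f = 0.002                       # scale factor error (0.2 %)
    m_y, m_z = 0.001, 0.0015          # misalignment coefficients
    eta = random.gauss(0.0, 0.05)     # random noise (deg/s)
    g_bias = sum(c * a for c, a in zip(b_g, a_xyz))
    return w_x + (b + g_bias + s_f * w_x + m_y * w_y + m_z * w_z + eta)

random.seed(42)
# True rotation: 10 deg/s about x, stationary otherwise, gravity along z (1 g)
measured = measured_rate_x(10.0, 0.0, 0.0, (0.0, 0.0, 1.0))
print(f"true rate 10.000 deg/s, measured {measured:.3f} deg/s")
```

Apart from the noise term, every contribution in this model is repeatable, which is what makes the calibration tests described in the next section possible.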

Other terms, such as anisoelastic and anisoinertia errors, mainly affect mechanical gyros, while vibro-pendulous errors affect mechanical accelerometers, and hence will not be considered here. Apart from the random noise term η, all other error terms are, in principle, predictable, thus allowing for some form of compensation.

5.2.1 Evaluation of the Error Components

This section discusses tests which can be performed on gyros in order to systematically remove the errors; similar tests can be conducted on accelerometers. It should be noted that temperature and memory effects play a significant role in the stability of the output of low cost inertial sensors. It is for this reason that, when one purchases low cost inertial units, not all the values for the error terms are available, and so testing is required based on the application at hand.

If the gyro is left stationary then the only error term encountered is the g-independent bias. One of the strongest correlations that can be found in inertial sensors is that between the g-independent bias and temperature, also known as the in-run bias. Thus, by cycling through the temperatures that would be encountered in the target application, the bias of the gyro can be determined. The better the gyro, the smaller the bias variation over the temperature range; furthermore, the better the sensor, the greater the linearity of this bias variation. There is also a hysteresis effect encountered with most inertial sensors. Thus, in an ensemble of tests cycling through the temperature in the same manner each time, the variation in the bias between ensembles can be determined. This is known as the switch-on to switch-on bias. Again, the better the gyro, the smaller the switch-on to switch-on bias.

When testing for the remaining error components, the g-independent bias is assumed known and is removed. Starting with a rate table, the gyro is placed such that its supposed sensitive axis is orthogonal to the input rotation vector; any output signal is then due to internal misalignment of the sensor's input axis. The purpose of mounting the sensor orthogonal to the input rotation is to deal with a small error about null, as opposed to a small error about a large value (as would be encountered if the sensitive axis were parallel to the rotation input). With the misalignment and g-independent bias available, the gyro is placed on the rate table with its sensitive axis parallel to the rotation input. The rate table is cycled in steps from stationary to the maximum rotation measurable by the gyro, holding the rotation rate at particular points. The scale factor and the scale factor linearity can then be determined by comparing the output of the gyro to the rotation rate input. With low cost gyros the temperature also affects scale factor stability, so these tests should be conducted at varying temperatures as well. Finally, the gyro can be placed in a centrifuge machine, which applies both a rotation rate and an acceleration to the gyro. The output reading from the gyro, minus the previously calibrated terms, gives the bias due to acceleration, or g-dependent bias.

By mathematically integrating Equations 93 and 94, the effect of each error term on the performance of the gyro can be determined. For each error term that is accounted for there is a corresponding increase in performance. Table 5 compares the error values available for the RLG, FOG, ceramic and silicon VSGs.

Characteristics compared: g-independent bias (°/hr), g-dependent bias (°/hr/g), scale factor non-linearity (%).

Table 5: Comparison of some of the major errors with various gyro implementations

5.2.2

The accelerometers measure the absolute acceleration with respect to an inertial frame. We are interested in the translational acceleration, hence the algorithms used should compensate for all other accelerations. For practical purposes it can be considered that gravity is the only other acceleration present. In Figure 15 it can be seen that the average acceleration obtained from a land vehicle is smaller than 0.3 g. This implies that the orientation of the accelerometer has to be known with very good accuracy in order to compensate for gravity without introducing errors comparable to the actual translational acceleration.


This has been the main limitation preventing the widespread use of inertial sensors as position sensors in low cost land navigation applications. The orientation of the accelerometer can be tracked using a gyroscope. These sensors provide an output proportional to the rotation speed, but are always contaminated by noise η and drift. For short periods of time the drift can be approximated by a constant bias b. The orientation is then evaluated with a single integration of the gyroscope output:

\theta_m = \int (\omega + b + \eta) \, dt = \theta + b t + \int \eta \, dt     (95)

The integration in Equation 95 returns the rotation angle with two additional undesired terms: a random walk due to the noise, and a term that grows with time in proportion to the gyro bias. As shown in Figure 18, the gyro drift will introduce an error in acceleration given by

\Delta a = a_x \sin(b t)     (96)

Figure 18: Errors due to drift in the gyro.

For small angle increments the errors in acceleration, velocity and position are given by

\Delta a = a_x \, b \, t     (97)
\Delta v = \frac{1}{2} a_x \, b \, t^2     (98)
\Delta p = \frac{1}{6} a_x \, b \, t^3     (99)

It is important to remark that a constant gyro bias introduces an error in position that grows proportionally to t³. For standard low cost, good quality gyros the expected bias is in the order of 1-10 degrees per hour. Without calibration, the bias could introduce an error of approximately 140 metres, due to the incorrect compensation of gravity, after only 2 minutes of operation. Another aspect that needs to be considered is the resolution of the analog-to-digital converter used, especially when working with low drift gyroscopes: the effective noise standard deviation of the gyro can be increased by selecting an inappropriate converter. A gyro with a stability of the order of 1 degree per hour that works over a range of +/- 100 deg/sec will require a 16-bit converter to operate within its specifications. The bias in the accelerometer will also introduce an error in position, in this case proportional to the square of time:

\Delta v = b_a \, t     (100)
\Delta p = \frac{1}{2} b_a \, t^2     (101)
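The cubic error growth and the 140 metre figure quoted above can be verified directly; the accelerometer bias value below is assumed for illustration:

```python
import math

g = 9.81                           # gravity leaking through the tilt error (m/s^2)
b = math.radians(10.0) / 3600.0    # gyro bias: 10 deg/hour, in rad/s
t = 120.0                          # two minutes of operation

# Position error from an uncompensated gyro bias: delta_p = (1/6) a_x b t^3
dp_gyro = g * b * t ** 3 / 6.0
print(f"gyro-bias position error after 2 minutes: {dp_gyro:.0f} m")  # about 137 m

# Position error from an accelerometer bias (Equation 101): delta_p = (1/2) b_a t^2
b_a = 1e-3 * 9.81                  # a 1 milli-g accelerometer bias (assumed)
dp_accel = 0.5 * b_a * t ** 2
print(f"accelerometer-bias position error: {dp_accel:.1f} m")
```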

The error introduced by the accelerometer is too important to be neglected. Figure 19 presents the bias measured in a set of identical accelerometers belonging to an inertial unit, over a period of six hours whilst they were stationary. Evidently the biases do not remain constant and, in fact, it can be clearly seen that any one accelerometer is not indicative of the general performance of the other accelerometers. The different bias values indicate that they are physically different low cost inertial sensors. The changes in value occur due to variation of the temperature of the unit caused by ambient temperature changes. The estimation of the biases needs to be done each time the vehicle is stationary to minimize this effect. The calibration procedure must incorporate the identification of gyro and accelerometer biases at the beginning of the navigation task, and become active each time the vehicle stops. This identification of the biases can also be done on-line, but information from other sensors is then required.

Figure 19: Bias change of the accelerometers over a period of 6 hours.

A second major error is due to the integration of the noise term, \int \eta \, dt, which causes a growing error known as random walk. Figure 20 presents the effect of integrating a stationary gyro signal on several occasions after the bias was removed. Although the average error is zero, in any particular run the error will grow in a random direction. Assuming unity-strength white Gaussian noise η(t) and letting x(t) = \int_0^t \eta(u) \, du, the ensemble mean value is

E[x(t)] = E\left[ \int_0^t \eta(u) \, du \right] = \int_0^t E[\eta(u)] \, du = 0     (102)

and the mean square value is

E[x^2(t)] = E\left[ \int_0^t \eta(u) \, du \int_0^t \eta(v) \, dv \right] = \int_0^t \int_0^t E[\eta(u)\eta(v)] \, du \, dv     (103)

E[\eta(u)\eta(v)] is the auto-correlation function, which in this case (assuming uncorrelated errors) is simply a Dirac delta function \delta(u - v). Hence Equation 103 becomes

E[x^2(t)] = \int_0^t \int_0^t \delta(u - v) \, du \, dv = t     (104)

Therefore, without any aiding, the white noise will cause an unbounded error growth in the inertial sensors, whose value at any particular point will be bounded by 2\sqrt{t} 95% of the time. For white noise of strength K the standard deviation is \sigma = \sqrt{K t}. The larger the standard deviation of the noise, the greater the standard deviation of the error. This is an important specification of the system, since a good gyro can be degraded by inappropriate signal conditioning and electronics.

Figure 20: Random walk obtained by integrating a stationary gyro signal after bias removal.
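The \sqrt{t} growth of Equation 104 can be checked with a small Monte Carlo simulation of the integrated noise:

```python
import math
import random

random.seed(1)
dt = 0.1        # integration step (s)
T = 100.0       # total integration time (s)
runs = 2000     # number of independent runs

# Integrate unity-strength white noise: each step adds a sample of variance dt,
# so the variance of x(t) grows as t and its standard deviation as sqrt(t).
finals = []
for _ in range(runs):
    x = 0.0
    for _ in range(int(T / dt)):
        x += random.gauss(0.0, math.sqrt(dt))
    finals.append(x)

mean = sum(finals) / runs
std = math.sqrt(sum((v - mean) ** 2 for v in finals) / runs)
print(f"empirical std after {T:.0f} s: {std:.2f} (theory sqrt(t) = {math.sqrt(T):.2f})")
```

Each individual run wanders off in its own direction, exactly as in Figure 20; only the ensemble statistics are predictable.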

5.3

Inclinometers: These sensors are based on accelerometers that resolve the direction of the gravity vector. The gravity vector is a known quantity that is constant in magnitude and direction. Figure 21 presents a basic example of a tilt sensor. These sensors are usually calibrated through a look-up table to determine the offset as a function of temperature. There are tilt sensors available for this purpose that can provide accurate information while the vehicle is stationary; low cost devices can provide bank and elevation information with an accuracy of up to 0.1 degrees. These sensors cannot be used when the platform is moving, since they are sensitive to translational and rotational accelerations. Pendulum accelerometers are also used to determine the bank and elevation angles. These devices use the principle that a pendulum with an equivalent radius of the Earth will always point to the vertical irrespective of the acceleration. This effect can be approximated with appropriate feedback loop compensation. Although they are able to obtain some rejection of the undesired acceleration due to the movement of the platform, they can only be used under very low vehicle accelerations.

Figure 21: Basic principle of inclinometers.

Vibration: Vibration analysis consists of measuring the strength of vibrations over a range of frequencies. Machinery vibration problems can consume excessive power and impose additional wear on bearings, seals, couplings and foundations. Vibrations are typically caused by machinery misalignment and unbalance. These problems can be detected by obtaining the frequency response of the vibration of the machine under investigation, commonly via analysis of the FFT of an acceleration signal. There are other applications where vibration monitoring may be useful, such as the analysis and identification of vibration sources, problems in structures, vibration and shock testing, and analysis to ensure products comply with the vibration specifications required for the application.

Heading: Another application of inertial sensors is as a heading gyro. Integration of the gyro rate information provides the orientation of the vehicle. A good quality gyro will have zero or constant bias, and small noise variance. There are many types of gyroscopes, but few in the price range that can be afforded in commercial mobile robot applications. Fiber optic and vibratory gyroscopes have been released with satisfactory specifications and reasonable prices for many industrial applications.

Inertial Measurement Unit (IMU): A full inertial navigation system (INS) consists of at least three (triaxial) accelerometers and three orthogonal gyroscopes that provide measurements of acceleration in three dimensions and rotation rates about three axes. The physical implementation of inertial sensors can take two forms.

Gimballed arrangement: This unit consists of a platform suspended by a gimbal structure that allows three degrees of freedom, as shown in Figure 22. The device is usually attached to a vehicle through the outermost gimbal. The vehicle can then perform any trajectory in 3-D while the platform is maintained level with respect to a desired navigation frame using feedback control loops. These loops use the gyro signals to provide appropriate control signals to the actuators placed in each gimbal.
With this approach the platform is maintained aligned with respect to a given coordinate system, and the information provided by the accelerometers can be integrated directly to obtain velocity and position.


Figure 22: Gimballed six degree of freedom INS. Each axis has a control loop to maintain the axis at a particular orientation with respect to a coordinate frame. The gyros provide the feedback information to control the actuators of each gimbal.

Strapdown arrangement: An IMU assembled from low cost solid-state components is almost always constructed in a strapdown configuration. This term means that all of the gyros and accelerometers are fixed to a common chassis and are not actively controlled on gimbals to align themselves in a pre-specified direction. As shown in Figure 23, a gyro and an accelerometer are placed on each axis. This design has the advantage of eliminating all moving parts. The strapdown construction, however, means that substantially more complex software is required to compute and distinguish true linear acceleration from angular acceleration and body roll or pitch with respect to gravity. Once true linear acceleration has been determined, vehicle position may be obtained, in principle, by double integration of the acceleration. Vehicle orientation and attitude may also, in principle, be determined by integration of the rotation rates of the gyros. In practice this integration leads to unbounded growth in position errors with time, due to the noise associated with the measurements and the non-linearity of the sensors.
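A minimal planar illustration of the strapdown computation: integrate the gyro to track attitude, rotate the body-frame acceleration into the navigation frame, then integrate twice. This is a 2-D sketch with invented inputs, not a full 3-D mechanisation with gravity compensation.

```python
import math

dt = 0.01
heading = 0.0       # attitude state, tracked by integrating the yaw gyro
vel = [0.0, 0.0]    # navigation-frame velocity (north, east)
pos = [0.0, 0.0]    # navigation-frame position

for _ in range(int(10.0 / dt)):          # a 10-second trajectory
    gyro = math.radians(9.0)             # measured turn rate: 9 deg/s (assumed)
    accel_body = (0.5, 0.0)              # body-frame acceleration (forward, lateral)

    heading += gyro * dt                 # 1. integrate attitude
    c, s = math.cos(heading), math.sin(heading)
    a_nav = (c * accel_body[0] - s * accel_body[1],   # 2. rotate into nav frame
             s * accel_body[0] + c * accel_body[1])
    vel = [vel[i] + a_nav[i] * dt for i in range(2)]  # 3. integrate velocity
    pos = [pos[i] + vel[i] * dt for i in range(2)]    #    and then position

print(f"heading after 10 s: {math.degrees(heading):.1f} deg")
print(f"position: north {pos[0]:.1f} m, east {pos[1]:.1f} m")
```

Note how every sensor error feeds forward: a heading error rotates the acceleration into the wrong direction before it is integrated twice, which is the mechanism behind the t³ position error growth derived earlier.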

5.4

Coordinate Frames

Inertial Navigation systems require the transformation of the dierent measurement quantities between dierent frames of reference. The accelerometer and gyros measure

57

Az Gz Gx Ax x

Figure 23: Strapdown Six degree of Freedom INS. Each axis has an accelerometer and a gyro. The gyro signal are used to determine the attitude of the unit and compensate the acceleration information for gravity. acceleration and rotation rates with respect to an inertial frame. Other sensors such as Global Positioning System (GPS) report information in Earth Centered Earth Fixed (ECEF) coordinate frame. The navigation system needs to transform all the information to a common coordinate frame to provide the best estimate and sometime be able to report the output in various other coordinate frames. This section will introduce the dierent coordinate system used in this course. Inertial Frame According to Newtowns law of motion, an inertial frame is a frame that does not rotate or accelerate. It is almost impossible to nd a true inertial frame. A good approximation is the one that is inertial with a distant star. For practical purposes we dene an inertial frame with origin at the earth center of mass, with x axis pointing towards the vernal equinox at time T0 , the z axis along the earths spin axis and the y axis completing the rigth handed system.



Earth-Centred Earth-Fixed (ECEF) Frame: This frame has its origin at the mass centre of the Earth; the x axis points to the Greenwich meridian in the equatorial plane, the y axis points 90 degrees east in the same plane, and the z axis lies along the rotation axis of the reference ellipsoid. This frame is not an inertial frame; it can be approximated to one by a negative rotation equivalent to the Greenwich Mean Sidereal Time (GMST). The Earth's geoid is a surface that minimises the difference with the Earth's mean sea level. This geoid is approximated by an ellipsoid around the Earth's minor axis; different parameters are available for different areas of the world, and the most common reference ellipsoid used is WGS 84. This frame is also referred to as the Transport frame.

Local Level Frame: This frame, also known as the Geographic or Earth frame, is defined locally relative to the Earth's geoid. The x axis points in the North direction, the z axis is perpendicular to the tangent plane of the ellipsoid, pointing towards the interior of the Earth (not necessarily towards its centre), and the y axis points East to complete a right-handed orthogonal system.

Wander Frame: The Local Level frame presents some numerical problems when integrating the data from an inertial system close to the poles: in these areas significant rotations around the z axis are required to keep the x axis pointing North, even for small movements in the y direction. This problem can be solved by performing all the computations in a frame whose x axis is not required to point North; the x axis then wanders from North at a rate chosen by the user.

Geocentric Frame: This frame is similar to the Local Level frame, with the difference that the z axis points to the centre of the Earth.

Local Geodetic Frame: This coordinate system is very similar to the Local Level frame. The main difference is that when the system is in motion the origin of the tangent plane remains fixed, while in the Local Level frame the origin is the projection of the platform origin onto the Earth's geoid. This frame is mainly used to perform navigation in local areas relative to a given origin point.

Body Frame: Most navigation systems have sensors attached to the vehicle. The body frame is a frame that is rigidly attached to the vehicle. The origin of this frame can be arbitrarily chosen, although some locations may simplify the kinematic models.


Figure 24 shows the coordinate systems used in this course. Absolute sensors like GPS provide position in ECEF coordinates, using either (X, Y, Z) or latitude, longitude and height (\phi, \lambda, H). Inertial sensors return information in the body frame, which is in permanent rotation and translation with respect to the navigation frame (N, E, D) shown in the figure. All the sensory information needs to be converted to the navigation frame before performing the data fusion task that obtains the best position estimate. This is one of the tasks of the navigation system.
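As an illustration of the frame transformations involved, the following sketch (ours, not from the course notes) converts a geodetic position, latitude \phi, longitude \lambda and height H on the WGS 84 ellipsoid, into ECEF (X, Y, Z) coordinates; the WGS 84 constants used are the standard published values.

```python
import math

# WGS 84 ellipsoid constants (standard values)
A = 6378137.0                # semi-major axis [m]
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Convert geodetic latitude/longitude [rad] and height [m] to ECEF [m]."""
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

# A point on the equator at the Greenwich meridian lies on the ECEF x axis:
print(geodetic_to_ecef(0.0, 0.0, 0.0))  # (6378137.0, 0.0, 0.0)
```

The inverse transformation (ECEF to geodetic) is iterative and is normally taken from a library rather than hand-coded.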


5.5 The Inertial Navigation Equations

This section presents the derivation of the inertial navigation equations for global and local-area navigation. For additional information the reader is referred to [24].

5.5.1 The Coriolis Theorem

Navigation with respect to a rotating frame such as the Earth requires the Coriolis theorem. The theorem states that the velocity of a vehicle with respect to a fixed inertial frame, v_i, is equal to the ground speed of the vehicle, v_e (the velocity of the vehicle with respect to the Earth), plus the velocity of the vehicle due to the rotation rate of the Earth with respect to the inertial frame, \omega_{ie}, at the point on Earth where the vehicle is located, r. That is,

v_i = v_e + \omega_{ie} \times r,    (106)

where \omega_{ie} = [0 \;\; 0 \;\; \Omega]^T and \Omega is the rotation rate of the Earth. Differentiating this equation with respect to the inertial frame gives

\dot{v}_i|_i = \dot{v}_e|_i + \dot{\omega}_{ie}|_i \times r + \omega_{ie} \times v_i.    (107)

Now, assuming that the angular acceleration of the Earth is zero, that is \dot{\omega}_{ie} = 0, and substituting Equation 106 into 107,

\omega_{ie} \times v_i = \omega_{ie} \times [v_e + \omega_{ie} \times r] = \omega_{ie} \times v_e + \omega_{ie} \times [\omega_{ie} \times r],    (108)

hence Equation 107 becomes

\dot{v}_i|_i = \dot{v}_e|_i + \omega_{ie} \times v_e + \omega_{ie} \times [\omega_{ie} \times r].    (109)

Now \dot{v}_i|_i = f + g, where f is the specific force encountered due to the vehicle motion and g is the acceleration due to gravity. Hence Equation 109 becomes

f + g = \dot{v}_e|_i + \omega_{ie} \times v_e + \omega_{ie} \times [\omega_{ie} \times r],    (110)

that is,

\dot{v}_e|_i = f - [\omega_{ie} \times v_e] + [g - \omega_{ie} \times [\omega_{ie} \times r]].    (111)

Equation 111 simply states that the acceleration over the Earth's surface is equal to the acceleration measured by the accelerometers, compensated for the Coriolis acceleration encountered due to the velocity of the vehicle over a rotating Earth, \omega_{ie} \times v_e, and for the local gravity acceleration, which comprises the Earth's gravity, g, and the term due to the rotation of the Earth, also known as the centripetal acceleration, \omega_{ie} \times [\omega_{ie} \times r].

5.5.2 Navigation in large areas

The Transport frame is used when a vehicle travels over large distances around the Earth, such as in missile or air-transport applications. The objective of this framework is to obtain the ground speed of the vehicle with respect to a navigation frame, generally expressed in North, East and Down coordinates, at a given location expressed in latitude, longitude and height. In such situations the rotation of the Earth needs to be taken into consideration, along with the fact that the navigation frame is also rotating. As an example, constantly keeping the North axis aligned on a flight from Sydney to London requires the navigation frame to rotate constantly. This, coupled with the fact that the Earth is rotating during the transit, causes a Coriolis acceleration involving the rotation of the navigation frame, the rotation of the Earth and the ground speed of the vehicle. Hence, the Transport framework is defined as follows: the acceleration of the vehicle with respect to the navigation frame, \dot{v}_e|_n, is equal to the acceleration of the vehicle with respect to the inertial frame, \dot{v}_e|_i, minus the Coriolis acceleration due to the navigation frame rotating, \omega_{en}, on a rotating Earth, \omega_{ie}. That is,

\dot{v}_e|_n = \dot{v}_e|_i - (\omega_{ie} + \omega_{en}) \times v_e.    (112)

Substituting in Equation 111,

\dot{v}_e|_n = f_n - [\omega_{ie} \times v_e] + [g - \omega_{ie} \times [\omega_{ie} \times r]] - (\omega_{ie} + \omega_{en}) \times v_e
            = f_n - (2\omega_{ie} + \omega_{en}) \times v_e + [g - \omega_{ie} \times [\omega_{ie} \times r]],    (113)

where f_n represents the acceleration in the navigation frame. Since the inertial unit measures the acceleration in the body frame, f_b, this acceleration needs to be transformed into the navigation frame, that is,

f_n = C_b^n f_b.    (114)

The transformation matrix C_b^n can take three forms, as will be discussed in Section 5.6. Regardless of how C_b^n is obtained, the rotation rate relating the body to the navigation frame, \omega_{bn}, is required. However, the gyros measure the total inertial rotation rate \omega_{ib}, which comprises \omega_{bn} plus the rotation of the navigation frame with respect to the inertial frame; the latter is the rotation rate of the navigation frame with respect to the Earth plus the rotation rate of the Earth with respect to the inertial frame. Hence

\omega_{bn} = \omega_{ib} - [\omega_{ie} + \omega_{en}],    (115)

where \omega_{en} is known as the transport rate and is defined as

\omega_{en} = \left[ \frac{v_e}{R_o + h}, \; \frac{-v_n}{R_o + h}, \; \frac{-v_e \tan L}{R_o + h} \right].    (116)

L is the latitude of the vehicle, R_o is the radius of the Earth and h is the height of the vehicle above the Earth's surface. The transport rate is what rotates the navigation frame as the vehicle travels across the surface of the Earth.
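To get a feel for the magnitudes involved, the sketch below (a rough illustration, not from the notes) evaluates the transport rate of Equation 116 and the Coriolis term 2\omega_{ie} \times v_e for a vehicle travelling East at 100 m/s at 45 degrees latitude; \Omega \approx 7.29e-5 rad/s and R_o \approx 6.378e6 m are standard values, and a North-East-Down component layout is assumed.

```python
import math

OMEGA = 7.292115e-5   # Earth rotation rate [rad/s]
R_O = 6378137.0       # Earth radius (semi-major axis) [m]

def transport_rate(v_n, v_e, lat, h):
    """Transport rate of Equation 116, N-E-D components [rad/s]."""
    return [v_e / (R_O + h), -v_n / (R_O + h), -v_e * math.tan(lat) / (R_O + h)]

def cross(a, b):
    """3-vector cross product."""
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

lat = math.radians(45.0)
v = [0.0, 100.0, 0.0]                                        # 100 m/s due East (N, E, D)
w_ie = [OMEGA * math.cos(lat), 0.0, -OMEGA * math.sin(lat)]  # Earth rate in N-E-D axes
w_en = transport_rate(v[0], v[1], lat, 0.0)

coriolis = [2.0 * c for c in cross(w_ie, v)]
print(w_en)       # ~[1.57e-5, 0, -1.57e-5] rad/s
print(coriolis)   # ~0.01 m/s^2 per component: small, but significant once integrated
```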

Navigation System Design (KC-4)

5.5.3 Navigation in local areas, Earth Frame


In the Earth frame, the acceleration of the vehicle with respect to the Earth, \dot{v}_e|_e, is equal to the acceleration of the vehicle with respect to the inertial frame, \dot{v}_e|_i, minus the Coriolis acceleration due to the velocity of the vehicle, v_e, over a rotating Earth, \omega_{ie}:

\dot{v}_e|_e = \dot{v}_e|_i - \omega_{ie} \times v_e.    (117)

The Earth frame will be used throughout this work, since its definition is well suited to land-vehicle applications. Substituting in Equation 111 gives

\dot{v}_e|_e = f_e - 2[\omega_{ie} \times v_e] + [g - \omega_{ie} \times [\omega_{ie} \times r_e]].    (118)

Again, since the inertial unit measures the acceleration in the body frame, the acceleration measurements need to be transformed into the Earth frame,

f_e = C_b^e f_b,    (119)

where C_b^e now comprises the rotation rates \omega_{be} which relate the body to the Earth frame. However, the gyros measure the total inertial rotation \omega_{ib}, which comprises \omega_{be} plus the rotation of the Earth with respect to the inertial frame transformed into the body frame, so

\omega_{be} = \omega_{ib} - C_e^b \omega_{ie}.    (120)

5.5.4 Schuler Damping

The major difference between the Transport and Earth frame equations is the transport rate defined by Equation 116. The transport rate is subtracted from the body rates and hence defines the attitude of the system with respect to the Transport frame. However, incorporating the transport rate also adds a bound to the errors associated with inertial systems. This is because the acceleration in the navigation frame is twice integrated to provide velocity and then position; the velocity is used to determine the transport rate, which is then subtracted from the body rates. For example, assume that the pitch is in error by a small angle \delta\theta. This error in turn causes an incorrect calculation of the acceleration, due to the misalignment and the measurement of gravity, given by g\,\delta\theta. Repeating this process shows that the misalignment feeds back onto itself. Looking at the characteristic equation of such a system, it may be observed that it exhibits simple harmonic motion with frequency

\omega_s = \sqrt{\frac{g}{R_o}},


or a period of 84.4 minutes. This is known as the Schuler oscillation. In essence, by implementing the Transport frame, the errors are bounded within the Schuler oscillations. If a system is tuned to the Schuler frequency, then some general rules of thumb can be obtained for inertial systems:

An initial velocity error \delta v_o causes a position error of \delta v_o \frac{\sin(\omega_s t)}{\omega_s};

An initial attitude error \delta\theta causes a position error of R\,\delta\theta\,(1 - \cos(\omega_s t));

An acceleration bias \delta f causes a position error of \delta f\,\frac{1 - \cos(\omega_s t)}{\omega_s^2}; and

A gyro bias \delta\omega causes a position error of \delta\omega\,R\left(t - \frac{\sin(\omega_s t)}{\omega_s}\right).

Thus for all errors, except for gyro bias, the position error is contained within the Schuler oscillation. The gyro bias causes Schuler oscillations which, however, grow linearly with time. It is for this reason that, with accurate gyros, passenger airliners can travel vast distances relying solely on inertial navigation. A reasonable question is: why not use the Transport frame for land-vehicle applications and hence benefit from this bounding property? The Transport frame can be used for any inertial navigation implementation and thus allows for bounded error growth, irrespective of how large this error may be. However, as an example, a low-cost inertial system with a typical gyro bias of 0.01 deg/sec will give rise to a bounded position error of 160000 km, while a gyro bias of less than 0.001 deg/hr, such as encountered with the ring laser gyros found in passenger airliners, gives rise to a bounded position error of 4 km. Thus Schuler bounding serves no purpose when using low-grade inertial units. If the transport term is removed from Equation 113, the resulting equation has a similar structure to that of the Earth frame, Equation 118; however, the Earth frame provides a better conceptual understanding when dealing with navigation within confined areas.
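The Schuler frequency and the first rule of thumb above are easy to evaluate numerically; the sketch below (ours) uses nominal values for g and R_o.

```python
import math

G = 9.81          # nominal gravity [m/s^2]
R_O = 6378137.0   # nominal Earth radius [m]

w_s = math.sqrt(G / R_O)                 # Schuler frequency [rad/s]
period_min = 2.0 * math.pi / w_s / 60.0  # Schuler period [minutes]
print(period_min)                        # ~84.4 minutes

# Peak position error from an initial velocity error of 0.5 m/s:
dv0 = 0.5
peak_err = dv0 / w_s                     # amplitude of dv0*sin(w_s*t)/w_s
print(peak_err)                          # a few hundred metres, bounded (oscillatory)
```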

5.6 Attitude Algorithms

The transformation matrix C_b^n is required to evaluate the acceleration vector in the desired frame given the acceleration vector in the body frame. This transformation matrix has to be accurate, since misalignment (error in the estimated attitude) causes the components of the gravity vector measured by the accelerometers to be confused with true vehicle acceleration. Integrated over time, even small misalignment errors cause large position errors. The transformation matrix is built from the roll, pitch and yaw angles required to rotate the body frame to the navigation frame, and hence is updated continuously, since the body frame is always rotating with respect to the navigation frame. The updating process propagates this matrix based on data obtained from the gyros; thus any error in C_b^n is caused by both the physical errors associated with the gyros and the errors in the algorithms used to propagate the transformation matrix. The physical errors are discussed in Section 5.2.

There are a number of algorithms available for attitude propagation. However, regardless of the type of attitude algorithm implemented, in their analytical forms they provide identical results. The choice of algorithm depends on the processing power available, on the errors introduced by the discretisation of the algorithms, and on the errors introduced by their simplification, the purpose of which is to ease the computational load on the processor. There are in principle three approaches to propagating the transformation matrix: the Euler, Direction Cosine and Quaternion methods. The Euler approach is conceptually easy to understand, although it has the greatest computational expense. The Quaternion approach is computationally inexpensive compared to the other two, but it has no direct physical interpretation in terms of the motion of the vehicle. The Direction Cosine approach sits in between, both in terms of physical understanding and in computational expense.

5.6.1 Euler Representation

The Euler representation is the easiest form to understand. It is the mathematical analog of a gimbal system. The roll \phi, pitch \theta and yaw \psi angles used to represent the rotation from the body frame to the navigation frame are placed in a transformation matrix E_b^n, formed by multiplying the consecutive rotation matrices representing a roll, followed by a pitch and then a yaw:

E_b^n = \begin{bmatrix} c_\psi & -s_\psi & 0 \\ s_\psi & c_\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}
        \begin{bmatrix} c_\theta & 0 & s_\theta \\ 0 & 1 & 0 \\ -s_\theta & 0 & c_\theta \end{bmatrix}
        \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & -s_\phi \\ 0 & s_\phi & c_\phi \end{bmatrix}
      = \begin{bmatrix} c_\psi c_\theta & c_\psi s_\theta s_\phi - s_\psi c_\phi & c_\psi s_\theta c_\phi + s_\psi s_\phi \\ s_\psi c_\theta & s_\psi s_\theta s_\phi + c_\psi c_\phi & s_\psi s_\theta c_\phi - c_\psi s_\phi \\ -s_\theta & c_\theta s_\phi & c_\theta c_\phi \end{bmatrix}    (121)

where the subscripts s and c represent the sine and cosine of the corresponding angle. As the body frame rotates, the new angles are obtained from

\dot\phi = (\omega_y \sin\phi + \omega_z \cos\phi)\tan\theta + \omega_x    (122)

\dot\theta = \omega_y \cos\phi - \omega_z \sin\phi    (123)

\dot\psi = (\omega_y \sin\phi + \omega_z \cos\phi)\sec\theta    (124)

where \omega = [\omega_x, \omega_y, \omega_z] is the rotation rate measured by the gyros about the x, y and z axes. As is apparent from these equations, both the roll and yaw updates have singularities when the pitch of the vehicle is \pm\frac{\pi}{2}. This is equivalent to the physical phenomenon of gimbal lock


in gimbal systems. In a strapdown system, as the vehicle pitch approaches \frac{\pi}{2}, theoretically infinite word length and iteration rates are required in order to accurately obtain the result. It is for this reason that Euler updates are not generally implemented, although this is not a problem for land-vehicle applications. Furthermore, this approach is computationally expensive due to the trigonometry required in the updates and in forming E_b^n.

Discretisation: The discretisation procedure is as follows:

The new roll angle is obtained through integration,

\phi(i+1) = \int \dot\phi \, dt + \phi(i),    (125)

and similarly for pitch and yaw. The angles are then placed into the matrix to form E_b^n(i+1).

Obtain the acceleration data in the body frame, f_b = [f_x, f_y, f_z], and evaluate the acceleration in the navigation frame,

f_n = E_b^n f_b.    (126)

Integrate the acceleration to obtain velocity and then position.

5.6.2 Direction Cosine Matrix (DCM) Representation

The direction cosine matrix C_b^n is a 3\times3 matrix containing the cosines of the angles between the body frame and the navigation frame. C_b^n is propagated by

\dot{C}_b^n = C_b^n \Omega_{bn},    (127)

where \Omega_{bn} is a skew-symmetric matrix representing the rotation rates as measured by the gyros,

\Omega_{bn} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}.    (128)

The initial C_b^n is simply the initial E_b^n.


Discretisation: The discretisation procedure is as follows:

Obtain the gyro outputs \omega_x, \omega_y, \omega_z and integrate them to determine the change in angle \delta\theta_x, \delta\theta_y, \delta\theta_z.

Determine the angular vector magnitude

\delta\theta = \sqrt{\delta\theta_x^2 + \delta\theta_y^2 + \delta\theta_z^2}.    (129)

Determine the series coefficients

\alpha = \frac{\sin\delta\theta}{\delta\theta},    (130)

\beta = \frac{1 - \cos\delta\theta}{\delta\theta^2}.    (131)

Form the angular skew matrix

[\delta\theta] = \begin{bmatrix} 0 & -\delta\theta_z & \delta\theta_y \\ \delta\theta_z & 0 & -\delta\theta_x \\ -\delta\theta_y & \delta\theta_x & 0 \end{bmatrix}.    (132)

Update the direction cosine matrix

C_b^n(i+1) = C_b^n(i)\left[ I_{3\times3} + \alpha[\delta\theta] + \beta[\delta\theta]^2 \right].    (133)

The transformation matrix update can be simplified if one assumes small angular rotations (< 0.05 deg); thus

C_b^n(i+1) = C_b^n(i)\,[I + [\delta\theta]]    (134)

with

[\delta\theta] = \begin{bmatrix} 0 & -\delta\theta_Y & \delta\theta_P \\ \delta\theta_Y & 0 & -\delta\theta_R \\ -\delta\theta_P & \delta\theta_R & 0 \end{bmatrix}.    (135)

Obtain the acceleration data in the body frame and evaluate the acceleration in the navigation frame,

f_n = C_b^n(i+1)\,f_b.    (136)

Integrate the acceleration to obtain velocity and then position.
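The DCM discretisation steps above can be sketched as follows; the coefficient names follow Equations 130 and 131, and the gyro increments in the example are made-up test values.

```python
import numpy as np

def skew(d):
    """Skew-symmetric matrix [delta_theta] of Equation 132."""
    x, y, z = d
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

def dcm_update(C, d_theta):
    """One DCM update step (Equation 133) from angle increments d_theta [rad]."""
    mag = np.linalg.norm(d_theta)                      # Equation 129
    if mag < 1e-12:
        return C
    alpha = np.sin(mag) / mag                          # Equation 130
    beta = (1.0 - np.cos(mag)) / mag**2                # Equation 131
    S = skew(d_theta)
    return C @ (np.eye(3) + alpha * S + beta * S @ S)  # Equation 133

# Rotating only about z in 100 steps of 0.9 deg should yield a 90-degree yaw:
C = np.eye(3)
step = np.array([0.0, 0.0, np.radians(0.9)])
for _ in range(100):
    C = dcm_update(C, step)
print(np.round(C, 6))   # close to a 90-degree rotation about z
```

Because the full coefficients are used, each step is exact for a constant rotation axis; the truncation errors discussed in Section 5.7 appear only when the series are truncated.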


5.6.3 Quaternion Representation

In the Quaternion approach, the rotation from one frame to another can be accomplished by a single rotation about a fixed vector through a single angle. A quaternion consists of four parameters which are a function of this vector and angle. The initial quaternion is obtained from the roll, pitch and yaw angles defined in the Euler representation, or alternatively through the Direction Cosine parameters,

q(i) = \begin{bmatrix} 0.5\sqrt{1 + C^n_{b11} + C^n_{b22} + C^n_{b33}} \\ \frac{1}{4q(1)}(C^n_{b32} - C^n_{b23}) \\ \frac{1}{4q(1)}(C^n_{b13} - C^n_{b31}) \\ \frac{1}{4q(1)}(C^n_{b21} - C^n_{b12}) \end{bmatrix}.    (137)

Discretisation: The discretisation procedure is as follows:

Obtain the gyro outputs \omega_x, \omega_y, \omega_z and integrate them to determine the change in angle \delta\theta_x, \delta\theta_y, \delta\theta_z.

Determine the quaternion operator coefficients

a_s = \frac{\sin(\delta\theta/2)}{\delta\theta},    (138)

a_c = \cos(\delta\theta/2).    (139)

Determine the quaternion operator

h(i) = \begin{bmatrix} a_c \\ a_s\,\delta\theta_x \\ a_s\,\delta\theta_y \\ a_s\,\delta\theta_z \end{bmatrix}.    (140)

Generate the quaternion matrix and update:

q(i+1) = \begin{bmatrix} q(i)_1 & -q(i)_2 & -q(i)_3 & -q(i)_4 \\ q(i)_2 & q(i)_1 & -q(i)_4 & q(i)_3 \\ q(i)_3 & q(i)_4 & q(i)_1 & -q(i)_2 \\ q(i)_4 & -q(i)_3 & q(i)_2 & q(i)_1 \end{bmatrix} h(i).    (141)

Obtain the acceleration data in the body frame and evaluate the acceleration in the navigation frame,

f_n = q \otimes f_b \otimes q^*,    (142)

where q^* is the complex conjugate of q. A less computationally expensive approach is to convert the quaternion back into C_b^n by

C_b^n = \begin{bmatrix} (q_1^2 + q_2^2 - q_3^2 - q_4^2) & 2(q_2 q_3 - q_1 q_4) & 2(q_2 q_4 + q_1 q_3) \\ 2(q_2 q_3 + q_1 q_4) & (q_1^2 - q_2^2 + q_3^2 - q_4^2) & 2(q_3 q_4 - q_1 q_2) \\ 2(q_2 q_4 - q_1 q_3) & 2(q_3 q_4 + q_1 q_2) & (q_1^2 - q_2^2 - q_3^2 + q_4^2) \end{bmatrix},    (143)

and then

f_n = C_b^n f_b.

Integrate the acceleration to obtain velocity and then position.

5.6.4 Discussion of Algorithms

All three approaches produce identical results in their complete form. The Euler approach is not commonly used due to the presence of the roll/yaw singularity, although this does not pose a problem for land-vehicle applications; however, the Euler approach requires high computational effort and thus powerful processing if fast updates are required. The Quaternion approach has the benefit of only requiring the update of four variables, and is most often used in military and space applications. In order to save on computational cost, the quaternion is converted to a DCM representation for the transformation of the acceleration from the body frame to the navigation frame; this can be a significant computational burden if fast sampling rates are required. To overcome this, most military users implement a dual sampling system. As an example, a Laser INS (LINS) system typically samples Ring Laser Gyros (RLGs) at 1200 Hz and the accelerometers at 400 Hz; the lower sampling rate is sufficient for typical LINS navigation applications. Three samples of the RLGs are used to update the quaternion, thus forming a super quaternion representing a single rotation. This quaternion is then converted to the DCM representation and used to transform the acceleration measurements.

The DCM approach is commonly used in all forms of inertial navigation, and it also offers the best representation for land navigation. Although nine variables are updated at each sample, it is less computationally expensive than the Euler representation, and unlike the Quaternion approach it requires no conversion to another representation in order to transform the acceleration data from the body frame to the navigation frame. In order to satisfy the small-angle assumption, the update rates generally have to be high. For example, land vehicles can undergo rotation rates of 50 deg/sec, and so a sampling rate of at least 1 kHz is required (to keep the angles below 0.05 deg). Such a sampling rate is high for low-cost inertial systems, although in the more expensive systems used in military and space applications 20 kHz is not uncommon.
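As a concrete illustration of the quaternion bookkeeping discussed above, the following sketch (ours) propagates a quaternion with the update of Equations 138-141 and converts the result back to a DCM with Equation 143; the rotation input is a made-up test case.

```python
import numpy as np

def quat_update(q, d_theta):
    """One quaternion update step from gyro angle increments d_theta [rad]."""
    mag = np.linalg.norm(d_theta)
    a_c = np.cos(mag / 2.0)                                  # Equation 139
    a_s = 0.5 if mag < 1e-12 else np.sin(mag / 2.0) / mag    # Equation 138
    # Quaternion operator h (Equation 140) and composition matrix (Equation 141)
    h = np.array([a_c, a_s * d_theta[0], a_s * d_theta[1], a_s * d_theta[2]])
    q1, q2, q3, q4 = q
    M = np.array([[q1, -q2, -q3, -q4],
                  [q2,  q1, -q4,  q3],
                  [q3,  q4,  q1, -q2],
                  [q4, -q3,  q2,  q1]])
    return M @ h

def quat_to_dcm(q):
    """Direction cosine matrix from quaternion (Equation 143)."""
    q1, q2, q3, q4 = q
    return np.array([
        [q1*q1 + q2*q2 - q3*q3 - q4*q4, 2*(q2*q3 - q1*q4), 2*(q2*q4 + q1*q3)],
        [2*(q2*q3 + q1*q4), q1*q1 - q2*q2 + q3*q3 - q4*q4, 2*(q3*q4 - q1*q2)],
        [2*(q2*q4 - q1*q3), 2*(q3*q4 + q1*q2), q1*q1 - q2*q2 - q3*q3 + q4*q4]])

# 90-degree rotation about z in 100 small steps:
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = quat_update(q, np.array([0.0, 0.0, np.radians(0.9)]))
print(np.round(quat_to_dcm(q), 6))   # close to a 90-degree rotation about z
```

Note that only four variables are propagated per step, which is why this form is favoured when processing power is limited.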

5.7 Attitude Errors

The accuracy of the attitude evaluation can be affected by a number of factors, such as the approximations made by the computation algorithm and the hardware limitations of the technology used. This section presents a description of the most important sources of error.

5.7.1 Errors in attitude computation

Equations 130 and 131 presented the series coefficients required for the DCM updating process. If these coefficients could be determined exactly, then the DCM generated would be exact. However, in practice a Taylor series expansion is required,

\alpha = 1 - \frac{\delta\theta^2}{3!} + \frac{\delta\theta^4}{5!} - \ldots    (144)

\beta = \frac{1}{2!} - \frac{\delta\theta^2}{4!} + \frac{\delta\theta^4}{6!} - \ldots    (145)

The number of terms included in the expansion, and hence the accuracy of the series, defines the order of the algorithm. In [27] it is shown that the error in the DCM algorithm can be represented as

D_{dc} = \frac{1}{\Delta t}\left( a_1 \cos\delta\theta \sin\delta\theta + 2 a_2 \sin\delta\theta \right),    (146)

where a_1 = 1 and a_2 = 0 for a first-order algorithm, a_1 = 1/2 and a_2 = 0.5 for a second-order algorithm, a_1 = 1/3! and a_2 = 0.5 for a third-order algorithm, and so on.

For example, consider a land vehicle whose axes experience rotations in the order of 20 deg/sec. Table 6 shows the drift in the algorithm when the sampling rate is set to 10, 125, 200 and 500 Hz and the order of the algorithm is increased from first to third. What is apparent from the table is that, although the performance of the algorithm improves with each increase in sampling rate, there is a more marked improvement from increasing the order of the algorithm.

Table 6: Algorithm drift rates in deg/hr caused by sampling a continuous rotation rate of 20 deg/sec, for algorithm orders 1 to 3.

Furthermore, although the sampling rates suggested in this table are not comparable to the kHz sampling rates implemented by high-grade inertial systems, it is the relative magnitude between the sampling rate and the continuous rotation rate that is of concern. Hence, as is apparent from the table, the drift errors caused by the simplification of the algorithm are kept small as long as the sampling rate is of higher magnitude than the continuous rotation rate.

5.7.2 Vibration
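The effect of truncating the series of Equations 144 and 145 can be illustrated numerically. The sketch below (ours; it does not reproduce the drift figures of Table 6) compares the exact coefficient sin(\delta\theta)/\delta\theta with one- and two-term truncations for a 20 deg/sec rotation at several sampling rates, showing how quickly the truncation error falls as the sampling rate and the number of retained series terms increase.

```python
import math

RATE = math.radians(20.0)   # continuous rotation rate [rad/s]

for fs in (10.0, 125.0, 500.0):       # sampling rates [Hz]
    d = RATE / fs                     # angle increment per sample [rad]
    exact = math.sin(d) / d           # exact coefficient (Equation 130)
    first = 1.0                       # one series term retained
    third = 1.0 - d**2 / 6.0          # two series terms retained (Equation 144)
    # Error shrinks with sampling rate, and much faster with series order:
    print(fs, abs(first - exact), abs(third - exact))
```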

Vehicle vibration has a detrimental effect on the performance of inertial navigation systems. This is due to the low bandwidth of low-cost inertial sensors, which causes attenuation or total rejection of vibratory motion at high frequencies. Vibration is generally a combination of both angular and translational motion. If the bandwidths of the gyros and accelerometers are equal, and the vibration of the vehicle at frequencies above this bandwidth has a smaller magnitude than the resolution of the sensors (which is likely with low-cost units), then vibratory motion can safely be ignored. However, the bandwidth of low-cost gyros is almost always significantly less than that of low-cost accelerometers. Vibrations cause small changes in attitude which are measured by the accelerometers as a component of gravity. This acceleration component will be inaccurately evaluated as apparent vehicle motion, since the gyros cannot measure this change in attitude. To overcome this problem it is necessary to introduce vibration absorbers into the physical system to remove high-frequency vibration. As an example, ceramic VSGs have a bandwidth of approximately 70 Hz, and a particular brand of piezo-accelerometers one of 300 Hz. Thus the approach would be to obtain vibration absorbers that attenuate as much of the signal as possible above 70 Hz. However, while low-cost inertial units have the benefit of being light weight, it is difficult to find absorbers that can attenuate high frequencies given the light load on the absorbers themselves. Furthermore, it is important to ensure that the natural frequency associated with the absorbers does not lie in a region where any excitation of the vibration causes inaccurate readings of the sensors. The effect of vibration on the attitude algorithms is termed coning or non-commutativity.


The definition of coning motion is simply stated as the cyclic motion of one axis due to the rotational motion of the other two axes. If one axis has an angular vibration rotation of A\sin(\omega t) and another axis A\sin(\omega t + \varphi), then the third axis will have a motion describing a cone. A perfect cone motion would occur if the phase difference between the two axes were ninety degrees; otherwise the motion will exhibit some other form (and in reality would be random). In [10] three forms of coning errors are discussed:

True Coning Errors, Type 1: where true coning motion due to vibration stimulates the presence of coning motion in the algorithms used to determine the attitude of the inertial unit.

True Coning Errors, Type 2: where true coning motion due to vibration causes a real net angular rotation rate \omega_c, which if not properly taken into account will be assumed to be rotation of the vehicle when it is not.

Pseudo Coning Errors, Type 3: where there is no real coning motion due to vibration, but coning motion is produced in the algorithms due to induced errors in the inertial unit or in the algorithms themselves.

Type 1 errors arise because of sensor errors, namely gyro scale-factor and gyro orthogonality errors, both of which are large with low-cost inertial units. An example of Type 2 errors may occur when an inertial unit uses RLGs. This gyro operates on the principle of two opposing light beams, in a fixed optical path, having the same optical frequency. When the gyro undergoes rotation, the frequencies change in order to maintain the resonant frequency required for the laser beams. At very low rotation rates this does not happen and the two beams assume the same frequency, so it appears as though there is no rotation; a phenomenon known as lock-in. To remove this, the gyros are dithered: a low-amplitude angular vibration is applied to the gyro. This vibration occurs at high frequency and hence can cause coning motion. One way to minimise the effect is to dither the gyros at different frequencies.

Type 3 errors are of most concern. They arise from errors associated with truncation in the attitude algorithm and from the limited bandwidth of the algorithm, both of which are alleviated as the order of the algorithm is increased along with the sampling rate. The problem with coning motion is determining whether the right order of algorithm or sampling rate has been chosen, and whether effects such as quantisation errors or dithering are causing any errors. The approach taken in industry is to place the system on a vibration table which can subject the unit to coning motion; the drift in position and attitude is an indication of the coning error magnitude. This, however, is only beneficial in cases where the application is known and the inertial unit is built to suit. For generic low-cost inertial applications one purchases the unit off the shelf, and hence such techniques are not available. In these situations vibration can cause errors in the attitude evaluation and hence drive navigation errors, thus requiring aiding from an external sensor in order to bound these errors.


In light of the arguments presented, the approach taken to minimise the errors associated with attitude algorithms for low-cost inertial units consists of the following steps:

- The bandwidth of the gyros employed will be the limiting factor in performance. Generally the accelerometers employed, and the sampling rate achievable, will have higher bandwidth than the gyros.

- If vehicle vibration exceeds the bandwidth of the gyros, then the only reasonable choice is to use vibration absorbers to attenuate incoming signals above the gyro bandwidth, taking into consideration the natural frequency of the absorber.

- The sampling rate of the inertial sensors should be above the Nyquist frequency and, furthermore, should be as high as possible.

- The higher the order of the algorithm the better; this will come down to the sampling rate and the processing power available.

5.8 Error Models

The accuracy of inertial navigation depends on the accuracy of the sensors employed and on the computer implementation of the inertial algorithms. To analyse how these two effects contribute to inertial navigation system development, inertial error equations are derived. These equations provide the position, velocity and attitude error growth, given the sensor and algorithm errors. The development of the error equations is accomplished by perturbing (or linearising) the nominal equations. The perturbation can occur in two forms:

The Computer Approach (\psi angle), where the analysis occurs about the computed geographical position determined by the inertial computer. Since the computer frame is known, the perturbations of the angular position and rate are zero.

The True Frame Approach (\phi angle), where the perturbation occurs about the true position of the vehicle. The true position is not known and hence all the elements are perturbed.

The benefit of implementing the perturbation in the computer frame is that the misalignment between the computer frame and the true frame is independent of the position of the computer frame. When perturbing in the true frame, the misalignment is coupled with the position. However, perturbation in the true frame is simpler. Both perturbation techniques produce identical results, as shown in [1, 2, 4]. Perturbation analysis has traditionally centred on the Transport or Wander Azimuth frames, due to their widespread use. The perturbation of the Earth frame is the objective of this section, and is carried out using the true frame approach. It will be shown that in this frame the misalignment propagation is independent of position, thus delivering the same benefit as the computer frame approach.

Perturbation in the true frame is accomplished by stating that the error in a particular state is the difference between the estimated state and the true state. Thus, given that the Earth frame velocity equation (Equation 118) is

\dot{v}_e|_e = C_b^e f_b - 2[\omega_{ie} \times v_e] + [g - \omega_{ie} \times [\omega_{ie} \times r_e]],    (147)

and defining the total gravity acceleration as

g_t = [g - \omega_{ie} \times [\omega_{ie} \times r_e]],    (148)

then by definition of the perturbation,

\delta\dot{v}^e = \dot{\hat{v}}^e - \dot{v}^e = [\hat{C}_b^e \hat{f}^b - C_b^e f^b] - [2\omega_{ie} \times \hat{v}^e - 2\omega_{ie} \times v^e] + [\hat{g}_t - g_t],    (149)

where \hat{v}^e is the evaluated velocity vector and v^e is the true velocity vector. The evaluated transformation matrix \hat{C}_b^e is the true transformation matrix C_b^e misaligned by the misalignment angles \delta\Psi. The misalignment is represented in skew-symmetric form, [\delta\Psi]. Hence

\hat{C}_b^e = [I_{3\times3} - [\delta\Psi]]\,C_b^e,    (150)

thus

\delta\dot{v}^e = C_b^e \hat{f}^b - C_b^e f^b - [\delta\Psi] C_b^e \hat{f}^b - [2\omega_{ie} \times \delta v^e] + \delta g_t.    (151)

If the errors in the Coriolis and gravity terms are insignificant, then

\delta\dot{v} = C_b^e\,\delta f^b - [\delta\Psi] C_b^e f^b = C_b^e\,\delta f^b - [\delta\Psi] f^e.    (152)

Now

[\delta\Psi] f^e = -[f^e]\,\delta\Psi,    (153)

hence

\delta\dot{v} = C_b^e\,\delta f^b + [f^e]\,\delta\Psi.    (154)

Thus the propagation of the velocity errors in the Earth frame, \delta\dot{v}, is equal to the acceleration error in the Earth frame due to the misalignment of the frame, [f^e]\,\delta\Psi, together with the errors associated with the measurement of the acceleration, \delta f^b, transformed into the Earth frame via C_b^e. The errors in the measurement of the acceleration are associated with the accelerometers and are discussed in the next section. Rearranging Equation 150 gives

[\delta\Psi] = I_{3\times3} - \hat{C}_b^e C_b^{eT},    (155)

and differentiating gives

[\delta\dot\Psi] = -\dot{\hat{C}}_b^e C_b^{eT} - \hat{C}_b^e \dot{C}_b^{eT}.    (156)

The updating processes of the transformation matrix for the true and evaluated frames are

\dot{C}_b^e = C_b^e \Omega_{be} \quad \mathrm{and} \quad \dot{\hat{C}}_b^e = \hat{C}_b^e \hat{\Omega}_{be}.    (157)

Substituting into Equation 156 gives

[\delta\dot\Psi] = -\hat{C}_b^e \hat{\Omega}_{be} C_b^{eT} - \hat{C}_b^e [C_b^e \Omega_{be}]^T = -\hat{C}_b^e [\hat{\Omega}_{be} - \Omega_{be}] C_b^{eT}.    (158)

Perturbation of the rotation update matrix gives

\hat{\Omega}_{be} = \Omega_{be} + \delta\Omega_{be},    (159)

therefore

[\delta\dot\Psi] = -\hat{C}_b^e\,\delta\Omega_{be}\,C_b^{eT}.    (160)

Substituting in Equation 150,

[\delta\dot\Psi] = -[[I_{3\times3} - [\delta\Psi]]C_b^e]\,\delta\Omega_{be}\,C_b^{eT} = -C_b^e\,\delta\Omega_{be}\,C_b^{eT} + [\delta\Psi]\,C_b^e\,\delta\Omega_{be}\,C_b^{eT}.    (161)

The product of the small terms [\delta\Psi] and \delta\Omega_{be} is assumed to be zero. Therefore

[\delta\dot\Psi] = -C_b^e\,\delta\Omega_{be}\,C_b^{eT},    (162)

or

\delta\dot\Psi = -[C_b^e\,\delta\omega_{be}].    (163)

The error form of Equation 120 is

\delta\omega_{be} = \delta\omega_{ib} - C_e^b\,\delta\omega_{ie}.

Given that the rotation rate of the Earth is known, so that \delta\omega_{ie} = 0, Equation 162 can be rewritten as

\delta\dot\Psi = -C_b^e\,\delta\omega_{ib}.    (164)

That is, the propagation of attitude errors is simply the errors associated with the rotation rate measurements, \delta\omega_{ib}, transformed into the Earth frame via C_b^e. The errors associated with the measurements are purely defined by the gyros and are discussed in the next section. Note that the propagation of the attitude errors is independent of position. Thus, although the derivation was approached through perturbation of the true frame, the result delivers the same benefit as found in the computer frame approach; this is due to the structure of the inertial equations in the Earth frame. Given the velocity and attitude error propagation equations and an input trajectory, the error growth of the inertial system can be determined. The only terms which need to be determined are the errors in the acceleration measurement, \delta f^b = [\delta f_x, \delta f_y, \delta f_z]^T, and the errors in the rotation rate, \delta\omega_{ib}^b = [\delta\omega_x, \delta\omega_y, \delta\omega_z]^T. These terms can be experimentally evaluated.

5.9 Alignment and Calibration

The objective of calibration is to determine the biases in the accelerometers and gyros. This is obtained by first determining the initial alignment of the inertial unit and hence, in turn, evaluating the initial direction cosine matrix.

5.9.1 Alignment Techniques

If the accelerometer readings are known perfectly then the attitude of the inertial unit can be determined by resolving the gravity component. In order to determine the gravity component measured by the accelerometers, Equation 136 is rearranged to give

$$f^b = (C^n_b)^{-1} f^n. \quad (165)$$

Since $C^n_b$ is orthogonal, its inverse is simply its transpose,

$$f^b = (C^n_b)^T f^n. \quad (166)$$

The inertial unit is stationary and hence the only acceleration measured is that due to gravity along the vertical axis. Thus

$$f^n_T = \begin{bmatrix} 0 & 0 & -g \end{bmatrix}^T, \quad (167)$$

where the subscript $T$ represents the true acceleration component due to gravity. Hence

$$f_{xT} = g\sin\theta, \quad (168)$$

$$f_{yT} = -g\sin\phi\cos\theta, \quad (169)$$

$$f_{zT} = -g\cos\phi\cos\theta. \quad (170)$$

Although no sensor is perfect, the higher the accuracy, the tighter the error tolerances, and thus the better the alignment accuracy which can be obtained. As the accuracy of the sensors decreases, due to the errors mentioned previously, the alignment accuracy also decreases. Rearranging Equation 168 to determine the pitch $\theta$, and substituting this into either Equation 169 or 170, will solve for the roll $\phi$. This procedure for determining alignment is called coarse alignment. If the accuracy provided by coarse alignment is not sufficient for navigation performance then external alignment information will be required. This information can come from tilt sensors or GNSS attitude information, for example. Coarse alignment is generally used for rapid alignment, such as in missile applications, where the time required for averaging data from external sensors is not available. The final term which needs to be evaluated is the heading of the vehicle, $\psi$. Gyrocompassing is the self-determination of the heading: once the attitude of the unit is found, the readings on the gyros can be used to determine the components of the Earth's rotation, and by knowing the initial position of the vehicle, heading can be resolved. However, with low cost gyros, gyrocompassing is generally not possible due to the high noise and low resolution of these sensors. In such cases external information is required to determine the initial heading.
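A minimal sketch of coarse alignment, with hypothetical readings and the sign conventions of Equations 168-170: pitch is recovered from the x-accelerometer, then roll from the y-accelerometer.

```python
import math

G = 9.81  # assumed gravity magnitude, m/s^2

def coarse_align(fx, fy):
    """Recover pitch (theta) from Eq. 168 and roll (phi) from Eq. 169."""
    theta = math.asin(fx / G)                     # f_x = g sin(theta)
    phi = math.asin(-fy / (G * math.cos(theta)))  # f_y = -g sin(phi) cos(theta)
    return theta, phi

# A stationary unit pitched up 10 deg and rolled 5 deg would measure roughly:
theta_true, phi_true = math.radians(10), math.radians(5)
fx = G * math.sin(theta_true)
fy = -G * math.sin(phi_true) * math.cos(theta_true)
theta, phi = coarse_align(fx, fy)
```

With perfect readings the angles are recovered exactly; with real accelerometer noise the recovered attitude degrades as described above.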

5.9.2 Alignment for Low Cost Inertial Units


Due to the low accuracy of the inertial units implemented in this thesis, none of the common self-aligning or calibration methods provide results accurate enough for navigation. Instead, the inertial unit uses two tilt sensors which provide measurements of the bank and elevation of the inertial platform, thus providing the initial alignment information required. A positive bank will cause the y-accelerometer to measure a component of gravity equal to

$$f_{yT} = -g\sin(\text{bank}). \quad (171)$$

Similarly, a positive elevation will cause the x-accelerometer to measure

$$f_{xT} = g\sin(\text{elevation}). \quad (172)$$

Equating Equation 168 with 172, and Equation 169 with 171, the pitch and roll angles are

$$\theta = \text{elevation}, \quad (173)$$

$$\phi = \sin^{-1}\!\left(\frac{\sin(\text{bank})}{\cos\theta}\right). \quad (174)$$

To resolve the heading, a compass is used. Equations 173 and 174 are used along with the initial heading angle to determine the initial $C^n_b$.

5.9.3 Calibration

The simplest method of obtaining the biases of the inertial sensors is to measure the readings from each sensor whilst the vehicle is stationary. These bias values are used to calibrate the IMU. For gyros, the bias is simply the reading from these sensors when the vehicle is stationary. However, the alignment of the inertial unit is required in order to determine the biases on the accelerometers. This is accomplished during the alignment stage, since the expected acceleration due to gravity can be determined and hence any anomalies in these values are attributed to bias. Thus the bias on the x-accelerometer is obtained by

$$b_{f_x} = f_x - f_{xT}, \quad (175)$$

where $f_x$ is the measured acceleration and $f_{xT}$ is the expected acceleration obtained during the alignment stage. The bias is obtained similarly for the remaining accelerometers. For more information and experimental results the reader can see [21].
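The stationary calibration can be sketched as below (an illustrative assumption: the expected specific force from the alignment stage is taken as known, and readings are simply averaged before applying Equation 175).

```python
def estimate_biases(samples, expected):
    """Average stationary triaxial samples and subtract the expected reading (Eq. 175).
    samples: list of (fx, fy, fz) tuples; expected: (fxT, fyT, fzT)."""
    n = len(samples)
    mean = [sum(s[i] for s in samples) / n for i in range(3)]
    return [mean[i] - expected[i] for i in range(3)]

# Hypothetical stationary log: true reading (0, 0, -9.81) plus a 0.05 m/s^2 x-bias.
log = [(0.05, 0.0, -9.81)] * 200
biases = estimate_biases(log, (0.0, 0.0, -9.81))
```

For gyros the same routine applies with `expected` set to zero (the Earth-rotation component being below the resolution of low cost sensors).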


5.10

Figure 25 presents the tasks required to work with a full inertial measuring unit. The bank, elevation and heading information is used to calibrate and align the unit while the vehicle is stationary. Once the initial transformation matrix $C^n_b(0)$ is obtained, it is updated at fast rates with the gyroscope information to obtain $C^n_b(k)$. This matrix is used to transform acceleration in the body coordinate frame to the navigation coordinate frame. Then single and double integration is performed to obtain inertial velocity and position.
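The loop just described can be sketched as follows. This is an illustrative outline, not the implementation of these notes: the DCM is propagated with a first-order update $C_{k+1} = C_k(I + [\omega\times]\Delta t)$, and gravity compensation is assumed to be a simple addition in the navigation frame.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])

def strapdown_step(C, v, p, f_b, w_b, g_n, dt):
    """One mechanisation step: update C with gyro rates, transform the
    specific force to the navigation frame, then integrate twice."""
    C = C @ (np.eye(3) + skew(w_b) * dt)   # first-order DCM update
    a_n = C @ f_b + g_n                    # compensate gravity in the nav frame
    v = v + a_n * dt
    p = p + v * dt
    return C, v, p

# Stationary, level unit in NED (z down): accelerometer reads -9.81 on body z.
C, v, p = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(100):
    C, v, p = strapdown_step(C, v, p, np.array([0.0, 0.0, -9.81]),
                             np.zeros(3), np.array([0.0, 0.0, 9.81]), 0.01)
```

For a stationary, perfectly calibrated unit the specific force cancels gravity exactly and the state remains constant; any residual bias produces the drift analysed in the error equations.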

[Figure 25: Tasks of a full inertial navigation system. Bank, elevation and heading feed the calibration algorithm, which produces $C^n_b(0)$; gyro (roll, pitch, yaw) rates update $C^n_b(k)$; the transformed accelerations are integrated once for velocity $(\dot{x},\dot{y},\dot{z})$ and twice for position $(x,y,z)$.]

6 GPS/INS Integration

Inertial sensors have been used in numerous applications for the past 50 years. This technology, originally developed for military purposes, has started to appear in industrial applications. This has been possible due to the significant reduction in cost of inertial sensors. Low cost, full six degree of freedom inertial systems are currently available for less than US$10,000. Unfortunately this reduction in cost comes with a substantial reduction in quality. Without any aiding, these units can only perform navigation for very short periods of time. The solution to this problem is to aid inertial systems with external information to maintain the error within certain bounds. The most common aiding sensor for outdoor applications has been GPS in all its forms (Autonomous / Differential / RTK). This section presents various navigation architectures that fuse GPS, INS and modelling information in an optimal manner.

6.1 Navigation Architectures

The navigation architecture depends on the types of sensors and models employed. For aided inertial navigation systems, the inertial component can either be an Inertial Measurement Unit (IMU), which only provides the raw acceleration and rotation rate data, or an Inertial Navigation System (INS), providing position, velocity and attitude information. The aiding source can either be considered as a sensor providing raw sensor information, or as a navigation system providing position, velocity and/or attitude information. The principal states of interest which are estimated by the filter, and hence which govern the type of model implemented, are the position, velocity and attitude of the vehicle, or the position, velocity and attitude errors. The most natural implementation of an aided inertial navigation system is to drive a non-linear filter with the raw acceleration and rotation rate data provided by the IMU, as shown in Figure 26. This implementation is known as a direct filter structure. The process model usually represents the kinematic relationship between the vehicle and the states of interest. The state vector is propagated by the model and inertial data. The aiding information can be obtained from a navigation system where observations such as position and velocity are supplied to the system. The estimate would be in the form of the vehicle states. The disadvantage of such an implementation is that the prediction equations have to be evaluated at each sample of the inertial data. This requires substantial processing due to the high sample rates of IMUs. Another problem is that, in general, high frequency maneuvers are usually filtered out by the linear (or linearised) model.



Figure 26: The direct structure implements a non-linear filter to estimate the position, velocity and attitude of the vehicle. The inertial data is provided by an IMU and the aiding data from a navigation system.


Figure 27: The indirect feedback method allows a linear filter implementation and minimises the computational overhead of the filter structure.



Figure 28: The direct feedback method overcomes the problem of large observation values being provided to the filter by correcting the INS directly.

The consequence of this omission is that the filter will unnecessarily attenuate the high frequency information provided by the INS. With this approach the system may not be able to track fast maneuvers. To overcome this, an INS should be employed so that a constant stream of vehicle state information is available external to the filter. To correct any errors, the filter estimates the errors in these states. The inertial information can still be obtained even if no additional information is available. Figure 27 illustrates this implementation, which is known as the indirect feedback method. The observation which is delivered to the filter is the observed error of the inertial navigation solution, that is, the difference between the inertial navigation solution and the navigation solution provided by the aiding source. In this implementation the inertial high frequency information is fed directly to the output without attenuation, while the Kalman filter provides the low frequency correction to the IMU. Since the observation is the observed error of the inertial navigation solution, and since the filter is estimating the errors in the inertial navigation solution, the process model has to be in the form of an error model of the standard inertial navigation equations. Thus the inertial navigation equations are linearised to form error equations. Since the equations are linearised, the filter implementation takes on a linear form. Although the prediction stage is still implemented, it can run at the same rate as the sampling rate of the INS or at longer intervals. The disadvantage of the indirect feedback method is the unbounded error growth of the INS, which causes an unbounded growth in the error observation delivered to the filter. This poses a problem for the linear filter, since only small errors are allowed due to the linearisation process.
That is, large drifts in the state values from the INS cause large observation errors to be fed into the filter, invalidating the assumptions held by the filter. The optimal implementation, illustrated in Figure 28, is known as the direct feedback method. In this structure the estimated errors are fed back to the INS, thus minimising the growth of the observed error that is delivered as an observation to the filter.

6.2

This navigation architecture is driven with observation errors formed from the predicted and observed positions and/or velocities. In order to implement the proposed Kalman filter, a model of the propagation of the errors is needed. Equations 154 and 164 described the inertial error equations in the Earth frame for velocity and attitude. The rate of change of the position error can be equated to the velocity error. Thus the error equations are

$$\delta\dot{p} = \delta v, \quad (176)$$

$$\delta\dot{v} = [f^e\times]\,\delta\Psi + C^e_b\,\delta f^b, \quad (177)$$

$$\delta\dot{\Psi} = C^e_b\,\delta\omega^b_{ib}. \quad (178)$$

$C^e_b$ transforms vectors from the body frame to the Earth frame. As in the GNSS implementation, the Earth frame is represented by North (N), East (E) and Down (D) vectors and hence, for clarification, the transformation matrix is represented as $C^n_b$.

The navigation loop implements a linear Kalman filter. The state vector consists of the error states,

$$x = \begin{bmatrix}\delta p_N & \delta p_E & \delta p_D & \delta v_N & \delta v_E & \delta v_D & \delta\Psi_N & \delta\Psi_E & \delta\Psi_D\end{bmatrix}^T. \quad (179)$$

The process model is

$$F = \begin{bmatrix}
0&0&0&1&0&0&0&0&0\\
0&0&0&0&1&0&0&0&0\\
0&0&0&0&0&1&0&0&0\\
0&0&0&0&0&0&0&-f_D&f_E\\
0&0&0&0&0&0&f_D&0&-f_N\\
0&0&0&0&0&0&-f_E&f_N&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0
\end{bmatrix}. \quad (180)$$

In compact form,

$$\begin{bmatrix}\delta\dot{r}\\ \delta\dot{v}\\ \delta\dot{\Psi}\end{bmatrix} =
\begin{bmatrix}0&I&0\\ 0&0&[a^n\times]\\ 0&0&0\end{bmatrix}
\begin{bmatrix}\delta r\\ \delta v\\ \delta\Psi\end{bmatrix} +
\begin{bmatrix}I_{3\times3}&0&0\\ 0&C^n_b&0\\ 0&0&C^n_b\end{bmatrix}
\begin{bmatrix}w_p\\ \delta f_a\\ \delta\omega^b_{ib}\end{bmatrix},$$

$$\dot{x} = F_g x + G w. \quad (181)$$

This model can be extended to consider the biases and drifts of the accelerometers and gyros. The gyro error model presented here consists of a first order Markov process with correlation time $\tau$ plus white noise $\eta_g$:

$$\delta\dot{\omega} = -(1/\tau)\,\delta\omega + \eta_g, \qquad E[\eta_g] = 0, \quad E[\eta_g\eta_g^T] = R_g. \quad (182)$$

The matrix $T_g$ that takes into account the time constants for the three gyros is

$$T_g = \begin{bmatrix}-1/\tau_x & 0 & 0\\ 0 & -1/\tau_y & 0\\ 0 & 0 & -1/\tau_z\end{bmatrix}. \quad (183)$$
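The first order Markov model of Equation 182 discretises to $\delta\omega_{k+1} = e^{-\Delta t/\tau}\,\delta\omega_k + w_k$. A sketch of one axis with hypothetical correlation time and noise level:

```python
import math
import random

def simulate_gyro_drift(tau, sigma, dt, steps, seed=0):
    """Discrete first-order (Gauss-)Markov process:
    x_{k+1} = exp(-dt/tau) x_k + w_k, with the driving noise scaled so that
    the stationary standard deviation is sigma."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)  # keeps Var[x] = sigma^2 in steady state
    x, xs = 0.0, []
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, q)
        xs.append(x)
    return xs

# Hypothetical values: 300 s correlation time, 0.01 rad/s stationary sigma, 100 Hz.
drift = simulate_gyro_drift(tau=300.0, sigma=0.01, dt=0.01, steps=1000)
```

The slowly varying output is what the filter's gyro-drift states are asked to track; white noise alone would not capture this correlation.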

A standard error model for the accelerometers consists of a random constant component plus white noise:

$$\delta\dot{f}_a = \eta_a, \qquad E[\eta_a] = 0, \quad E[\eta_a\eta_a^T] = R_a. \quad (184)$$

The matrix $T_a$ that models the accelerometer errors is

$$T_a = \begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}. \quad (185)$$

Finally, the augmented model that combines these matrices is

$$F = \begin{bmatrix}F_g & B_a & B_g\\ 0 & T_a & 0\\ 0 & 0 & T_g\end{bmatrix}, \qquad
B_a = \begin{bmatrix}0\\ C^n_b\\ 0\end{bmatrix}, \quad
B_g = \begin{bmatrix}0\\ 0\\ C^n_b\end{bmatrix}, \qquad
x = \begin{bmatrix}\delta r_i\\ \delta v_i\\ \delta\Psi_i\\ \delta f_{gi}\\ \delta\omega^b_{ib}\end{bmatrix}, \quad (186)$$

where $\delta r$, $\delta v$ and $\delta\Psi$ are the errors in position, velocity and angle misalignments, and $\delta f_{gi}$ and $\delta\omega^b_{ib}$ are the errors in the accelerometers and gyros in the body frame.


Note that in this implementation the acceleration inputs are fed directly into the process model and hence there is no control vector, $u = 0$. The process model $F$ comprises time-varying terms, so in general numerical methods are required to determine the state transition matrix. If, however, $F$ is constant over the sampling interval, then the state transition matrix is simply $F(k) = \exp(F\Delta t)$, where $\Delta t$ is the sampling time of the inertial unit. In the case of land vehicles the dynamics are of a much lower frequency than the sampling frequency. Hence over the sampling period $F$ remains constant, and

$$F(k) = \exp(F\Delta t) = I + F\Delta t + \frac{(F\Delta t)^2}{2!} + \ldots$$

The discretisation is only taken to the second term, since any extra terms added to the final state transition matrix are of negligible value. Thus the state prediction is

$$\hat{x}(k|k-1) = F(k)\,\hat{x}(k-1|k-1). \quad (187)$$
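The truncated series can be computed directly; a sketch using numpy, where `F` stands for any continuous-time model matrix:

```python
import numpy as np

def discretise(F, dt):
    """Second-order truncation of exp(F dt): I + F dt + (F dt)^2 / 2."""
    Fdt = F * dt
    return np.eye(F.shape[0]) + Fdt + Fdt @ Fdt / 2.0

# Toy 2-state model: position driven by velocity, dx/dt = [[0,1],[0,0]] x.
F = np.array([[0.0, 1.0], [0.0, 0.0]])
Phi = discretise(F, 0.01)
```

For this toy model the series is exact ($F^2 = 0$), giving the familiar constant-velocity transition matrix.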

The advantage of using this model is that the implementation is linear and the model is independent of the vehicle dynamics. Initially the inertial sensors are calibrated and all the errors are removed, thus $\hat{x}(1|0)$ is set to zero. From Equation 187, the state prediction in the next cycle is then also zero, and so forth. Therefore the state prediction is always zero and no correction of the inertial errors occurs during the prediction cycle. That is, the position, velocity and attitude information obtained from the navigation system is simply the data from the INS, since there are no predicted errors to correct it. However, due to the drift in the INS solution, there is a corresponding growth in uncertainty in the states, and this is evaluated through the predicted covariance matrix

$$P(k|k-1) = F(k)P(k-1|k-1)F^T(k) + Q(k). \quad (188)$$

This is a $9\times9$ matrix representing the uncertainty in the inertial predicted errors. $Q(k)$ is the discrete process noise matrix, also of dimension $9\times9$, and is evaluated using [18]

$$Q(k) = \frac{1}{2}\left[F(k)G(k)Q_c(k)G(k)^T F(k)^T + G(k)Q_c(k)G(k)^T\right]\Delta t. \quad (189)$$

Equations 176-178 show how the noise terms (in acceleration and angular velocity) are converted from the body frame to the navigation frame, and hence $Q_c(k)$, which is the uncertainty in the process model over a period of one second, is defined as

$$Q_c(k) = \begin{bmatrix}\sigma^2_p & 0 & 0\\ 0 & \sigma^2_{f^b} & 0\\ 0 & 0 & \sigma^2_{\omega^b_{ib}}\end{bmatrix}. \quad (190)$$

$\sigma^2_p$ is the noise injected into the position error evaluation, and its value is purely dependent on the uncertainty in the evaluation of the position from the integration of velocity. The remaining error terms in this matrix are the noise values on the sensor readings, that is, the errors in the body frame, which can be obtained from manufacturer specifications or through experimentation. $G(k)$ is

$$G(k) = \begin{bmatrix}I_{3\times3} & 0 & 0\\ 0 & C^n_b(k) & 0\\ 0 & 0 & C^n_b(k)\end{bmatrix}. \quad (191)$$
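Equation 189 can be sketched directly; the attitude matrix and noise variances below are placeholder values, not figures from these notes.

```python
import numpy as np

def discrete_process_noise(Phi, G, Qc, dt):
    """Q(k) = 0.5 * (Phi G Qc G^T Phi^T + G Qc G^T) * dt  (Eq. 189)."""
    GQG = G @ Qc @ G.T
    return 0.5 * (Phi @ GQG @ Phi.T + GQG) * dt

n = 9
Cnb = np.eye(3)                     # placeholder attitude (level, north-aligned)
G = np.zeros((n, n))
G[0:3, 0:3] = np.eye(3)
G[3:6, 3:6] = Cnb
G[6:9, 6:9] = Cnb
Qc = np.diag([1e-4] * 3 + [1e-2] * 3 + [1e-6] * 3)  # assumed per-second variances
Qk = discrete_process_noise(np.eye(n), G, Qc, dt=0.01)
```

With the identity transition matrix used here the result reduces to $G Q_c G^T \Delta t$, which is a useful sanity check when tuning.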

6.3

In a direct feedback structure, the model implemented in the filter is a linear error model representing the errors in the vehicle states, generally position, velocity and attitude. When an observation becomes available, the filter estimates the errors in these states. Since the model is an error model of the inertial equations, the observation $z(k)$ is the observed error of the inertial navigation solution, and not the observation delivered by the aiding sensor. Thus if an aiding system provides position and velocity data then the observation vector becomes

$$z(k) = \begin{bmatrix}z_P(k)\\ z_V(k)\end{bmatrix} = \begin{bmatrix}P_{inertial}(k) - P_{GPS}(k)\\ V_{inertial}(k) - V_{GPS}(k)\end{bmatrix}. \quad (192)$$

Figure 29 illustrates the observation structure [18]. The true acceleration, velocity and position of the vehicle have noise added to them to represent measurements taken by the sensors. The acceleration, as measured by the inertial navigation system, is integrated twice to obtain the indicated velocity and position of the vehicle. The acceleration information is obtained by the accelerometers and it is assumed that acceleration due to gravity has been compensated for.


Figure 29: Illustration of how the observation measurements $z_P$ and $z_V$ are obtained from the inertial and aiding information.

By defining the terms $\delta P(k)$ and $\delta V(k)$ as the position and velocity errors in the inertial data after the integration process, the observation model becomes

$$z(k) = \begin{bmatrix}P_{inertial}(k) - P_{GPS}(k)\\ V_{inertial}(k) - V_{GPS}(k)\end{bmatrix}
= \begin{bmatrix}(P_T(k) + \delta P(k)) - (P_T(k) - v_P(k))\\ (V_T(k) + \delta V(k)) - (V_T(k) - v_V(k))\end{bmatrix}
= \begin{bmatrix}\delta P(k) + v_P(k)\\ \delta V(k) + v_V(k)\end{bmatrix}. \quad (193)$$

The observation is thus the error between the inertial indicated position and velocity and that of the aiding sensor, and the uncertainty in this observation is reflected by the noise of the aiding observation. This offers another benefit of the direct feedback implementation, concerning tuning: the noise on the observation is the noise on the aiding sensor. Thus once an inertial unit and process model are fixed, the process noise matrix $Q(k)$ is also fixed, and tuning the filter is solely based on the observation noise matrix $R(k)$.

The estimate of the error states at time $k$, given all observations up to time $k$, can then be evaluated using the standard Kalman filter update equations:

$$\hat{x}(k|k) = \hat{x}(k|k-1) + W(k)\nu(k), \quad (194)$$

where $W(k)$ is a gain matrix produced by the Kalman filter and $\nu(k)$ is the innovation vector,

$$\nu(k) = z(k) - H(k)\hat{x}(k|k-1), \quad (195)$$

$$W(k) = P(k|k-1)H^T(k)S^{-1}(k), \quad (196)$$

where $S(k)$ is known as the innovation covariance and is obtained by

$$S(k) = H(k)P(k|k-1)H^T(k) + R(k). \quad (197)$$

Finally, the covariance matrix is updated due to this observation:

$$P(k|k) = (I - W(k)H(k))P(k|k-1)(I - W(k)H(k))^T + W(k)R(k)W^T(k). \quad (198)$$

However, with this approach, since the prediction of the error states $\hat{x}(k|k-1)$ is always zero, the state estimate becomes

$$\hat{x}(k|k) = W(k)z(k). \quad (199)$$

That is, the update is simply a weighted sum of the observations. The observation model $H(k)$ is

$$H(k) = \begin{bmatrix}I_{3\times3} & 0 & 0\\ 0 & I_{3\times3} & 0\end{bmatrix}. \quad (200)$$
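One update cycle can be sketched as below (illustrative $P$ and $R$ values; the prediction is zero, so the estimate is just $W z$ as in Equation 199).

```python
import numpy as np

def error_state_update(P, H, R, z):
    """Kalman update of the error states with a zero predicted state (Eqs. 196-199)."""
    S = H @ P @ H.T + R                     # innovation covariance (197)
    W = P @ H.T @ np.linalg.inv(S)          # gain (196)
    x = W @ z                               # estimate, since the prediction is zero (199)
    IWH = np.eye(P.shape[0]) - W @ H
    P_new = IWH @ P @ IWH.T + W @ R @ W.T   # Joseph-form covariance update (198)
    return x, P_new

H = np.hstack([np.eye(6), np.zeros((6, 3))])  # observe position and velocity errors (200)
P = np.eye(9)                                 # placeholder predicted covariance
R = np.eye(6)                                 # placeholder observation noise
z = np.ones(6)                                # observed INS-minus-GPS error
x, P_new = error_state_update(P, H, R, z)
```

Note that the unobserved attitude-error states are still corrected in general through the off-diagonal covariance terms; with the diagonal placeholder $P$ used here they remain zero.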

The updated position and velocity errors can now be used to correct the position and velocity of the INS:

$$P_{inertial}(k|k) = P_{inertial}(k|k-1) - \delta\hat{p}(k|k), \quad (201)$$

$$V_{inertial}(k|k) = V_{inertial}(k|k-1) - \delta\hat{v}(k|k). \quad (202)$$

Equation 150 related the true direction cosine matrix to the estimated direction cosine matrix. In the context of the terminology used in this section this is written as

$$\tilde{C}^n_b(k|k-1) = \left[I_{3\times3} - [\delta\Psi\times]\right] C^n_b(k|k). \quad (203)$$

Rearranging this equation as

$$C^n_b(k|k) = \left[I_{3\times3} - [\delta\Psi\times]\right]^{-1} \tilde{C}^n_b(k|k-1) \quad (204)$$

provides the update equation for the direction cosine matrix once an estimate of the attitude errors is available. Since the term in the square brackets is orthogonal, its inverse is simply its transpose ($A^{-1} = A^T$), and furthermore $[\delta\Psi\times]$ is skew-symmetric ($[\delta\Psi\times]^T = -[\delta\Psi\times]$); hence Equation 204 is written as

$$C^n_b(k|k) = \left[I_{3\times3} + [\delta\Psi\times]\right] \tilde{C}^n_b(k|k-1), \quad (205)$$

where

$$[\delta\Psi\times] = \begin{bmatrix}0 & -\delta\Psi_D & \delta\Psi_E\\ \delta\Psi_D & 0 & -\delta\Psi_N\\ -\delta\Psi_E & \delta\Psi_N & 0\end{bmatrix}. \quad (206)$$
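The attitude correction of Equations 205-206 can be sketched as follows (small-angle assumption; in practice an orthonormalisation step is often added after repeated corrections, which is an implementation choice not prescribed here).

```python
import numpy as np

def correct_dcm(C_pred, dpsi):
    """Apply C(k|k) = (I + [dPsi x]) C(k|k-1), Eq. 205, with [dPsi x] from Eq. 206."""
    dN, dE, dD = dpsi
    skew = np.array([[0.0, -dD, dE],
                     [dD, 0.0, -dN],
                     [-dE, dN, 0.0]])
    return (np.eye(3) + skew) @ C_pred

# Hypothetical 1 mrad misalignment estimate about North:
C = correct_dcm(np.eye(3), np.array([1e-3, 0.0, 0.0]))
```

For small angles the corrected matrix remains orthogonal to first order, which is consistent with the linearisation assumption stated above.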

Note that $[\delta\Psi\times]$ is in the navigation frame, yet it is used to correct the $C^n_b$ matrix whose elements are defined in the body frame. It is assumed that the misalignment errors are small, and hence the errors about the navigation frame are equal to those about the body frame. When large misalignments are encountered, the linear assumptions held are not valid; [16] deals with such situations.

6.3.1 Real Time Implementation Issues

Filter Consistency and Fault Detection

The filter implementations discussed provide various methods of estimating the states of interest. However, unless the true values of those states are known, there is no way of determining whether the filter is computing correct estimates. The only information available from the real world is the observation, and hence the only measure for determining the behaviour of the filter is the difference between the observation and the predicted observation, that is, the innovation (Equations 195 and 225 for the Kalman filter and EKF). If the filter is operating correctly, the innovation must be both unbiased and white, with covariance $S(k)$. To determine whether this is the case, the innovation is normalised,

$$\bar{\nu} = \nu^T(k)S^{-1}(k)\nu(k). \quad (207)$$


If the filter assumptions are correct then the samples of $\bar{\nu}$ are distributed as a $\chi^2$ distribution with $m$ degrees of freedom (the number of observations being estimated). Instead of using Equation 207 only as a method of determining filter consistency, it can be used as a gating function. When an observation is obtained, Equation 207 is formed, and if the value is less than a predefined threshold, the observation is accepted. This provides a means of detecting faults within the observation. The threshold value is obtained from standard $\chi^2$ tables and is chosen based on the confidence level required. For example, for a 95% confidence level and a state vector which includes three states of position and three of velocity, the threshold is 12.6.

Detection of Multipath

High frequency faults arise when the GNSS signals undergo multipath errors. This results in a longer time delay of the signals, affecting the fix of the Standard Differential receiver, and also altering the phase of the signal, thus affecting the Carrier Phase Differential fix. Another high frequency fault, although it occurs less often and with less effect, is when the receiver utilises a different set of satellites in order to determine the position fix. The accuracy of the fix is dependent on the geometry of the observed satellites. Changes in satellite configuration due to blockages of the satellite view will in turn alter the resulting fix. Both forms of high frequency faults cause abrupt jumps in the position and velocity fixes obtained by the GNSS receiver. High frequency faults are therefore environment dependent. An open area such as a quarry will be less likely to produce multipath errors than will a container terminal, for example. Consequently, the tuning of the filter which fuses the inertial unit and GNSS data is dependent on the environment. The most common method of rejecting multipath errors is available when the receiver can distinguish between the true signal and the reflected signal.
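The gating test of Equation 207 can be sketched as below; the 12.6 default corresponds to 95% confidence with six degrees of freedom, and the innovation values used are purely illustrative.

```python
import numpy as np

def gate(innovation, S, threshold=12.6):
    """Accept the observation if nu^T S^-1 nu (Eq. 207) is below the chi-square threshold."""
    nu_bar = float(innovation @ np.linalg.solve(S, innovation))
    return nu_bar, nu_bar < threshold

S = np.eye(6)                              # placeholder innovation covariance
ok_val, ok = gate(np.full(6, 0.5), S)      # small innovation: accepted
bad_val, bad = gate(np.full(6, 2.0), S)    # large innovation (e.g. multipath): rejected
```

Using `np.linalg.solve` instead of explicitly inverting `S` is the usual numerically preferable choice.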
How well the receiver can distinguish between these two signals is dependent on the accuracy of the receiver's internal timing procedures. However, these systems cannot reject multipath errors completely, and even with the constant improvement of GNSS technology, high frequency faults such as multipath always need to be factored into the development of the navigation system. Equation 207 can be used to maintain filter consistency by evaluating a gating function which has the property of being a $\chi^2$ distribution. This gating function can be used to determine if the process or observation models are valid, or if any observations are spurious; thus it can potentially determine whether multipath errors have occurred. Due to satellite geometry, the GNSS fix in the vertical plane is significantly less accurate than that in the horizontal plane. Consequently, the fix in North and East may lie well within the validation region, whilst that of Down may exceed it and force the result of the gating function beyond the threshold, if the same noise values were used for all the terms in the observation noise matrix $R(k)$. However, if these noise terms are accounted for by taking the values from the GNSS receiver directly, then the validation gate will detect multipath errors correctly. Similarly, changes in satellite geometry cause the Dilution of Precision (DOP) to vary. The DOP is a measure of the accuracy of the GNSS fix and is used in the GNSS position solution to obtain the optimum configuration of satellites which the receiver should track, if it cannot track all of them due to hardware limitations. Changes in satellite geometry occur when part of the sky is invisible to the receiver antenna due to obstacles blocking the GNSS signals. The receiver then has to obtain a new set of satellites. Consequently, a change in DOP will affect the GNSS solution, causing high frequency faults. These faults can be detected using the same technique as discussed for multipath errors. However, these changes are not as large as those encountered for multipath errors and generally go undetected, unless the accuracy of the inertial unit is comparable to that of the GNSS receiver.

Filter Tuning

There are two stages in the filter flow: the prediction stage, where the predicted inertial errors are always zero and the uncertainty grows with time; and the estimation stage, where the estimates of the inertial errors are obtained by a weighted sum of the observations and the uncertainty is bounded. If no observation is obtained for an extended period of time, or equivalently if GNSS fixes are rejected due to errors, the filter will continuously cycle in prediction mode and no corrections to the inertial navigation solution will be made. The longer the duration without correction, the greater the uncertainty in the navigation solution. When a fix does occur, the observation may pass the gating function test even though the fix may be in error, since the uncertainty in the inertial navigation solution is large. This is shown in Figure ??.

[Figure: growth of the inertial uncertainty between GPS corrections; each accepted GPS fix bounds the error.]

As with any Kalman filter implementation, tuning lies in the choice of values for the process $Q(k)$ and observation $R(k)$ covariance matrices. For example, a large $Q(k)$ will imply an inaccurate inertial system. During the prediction stage the uncertainty in the inertial data will grow according to the magnitude of $Q(k)$. When a GNSS fix does occur, there is a greater possibility the inertial unit will be corrected using the first available GNSS fix, irrespective of the accuracy of this fix. Likewise, small values in $R(k)$ will imply accurate GNSS fixes, which may pose a problem when the fix is in error and is being fused with low accuracy inertial sensors. Thus tuning becomes a delicate adjustment of both the $Q(k)$ and $R(k)$ matrices, along with the employment of the gating function, Equation 207, in order to reject the high frequency faults of the GNSS. The variances along the diagonal of $R(k)$ are determined simply by obtaining the GDOP values from the receiver and assuming there is no correlation between the fixes of the navigation axes (which is how the GDOP is provided). Determining the values for $Q(k)$ depends on the noise level of the sensors implemented, which can be obtained from the manufacturer or through experimentation. The principal tuning parameters which need to be addressed are the variances of velocity and attitude. A large variance in the velocity terms will imply greater reliance on the GNSS velocity fixes. Furthermore, the greater the accuracy of the velocity estimates, the greater the damping of the attitude errors. If there are no attitude fixes, then only the velocity information can contain the drift in attitude, and how quickly and accurately this happens depends on the variance on the attitude.

Algorithm

There are two points of concern in a real time implementation of an inertially aided GNSS navigation system: processing power and latency. Data latency is extremely important, especially with the use of GNSS sensors, where it is common to find position latencies in the order of tens of milliseconds and velocity latencies reaching over a second. The processing power internal to a GNSS sensor is used to control the correlators to lock onto satellites, determine the ephemeris of the satellites, and then compute the position and velocity of the receiver with respect to some coordinate frame. The complexity of these calculations increases with the number of satellites used to determine the solution. Standard pseudorange solutions require the least computational processing; RTK technology requires the most. In addition, if attitude information is required, this demands further processing. The processing time is usually not of concern to most GNSS users, surveyors for example, or more generally those users that do not require real time information. To overcome the latency problem, careful consideration needs to be given to the real time implementation, and when the latency is large a complete redesign of the filter structure may be required. When the latency of the solution is low, the position information can be propagated using velocity data and a constant velocity model. This is quite sufficient for low speed vehicles. Problems arise when the latency of the GNSS solution is high and the vehicle dynamics are fast. A GNSS position solution with a latency of 50 ms will have an error of 0.8 m for a vehicle traveling at 60 km/h. If the


vehicle is moving with constant velocity in a straight line, then the position can simply be propagated forward. However, any deviation from a straight line will lead to incorrect estimates when the GNSS fix is fused with the current inertial state. What is required is to store the inertial data, process the estimates at the time that the GNSS fix should have occurred, and then propagate the inertial solution through the backlog of data. All is well if both the position and velocity GNSS data have equal latencies. However, if there is a requirement for the GNSS receiver to obtain velocity and position with independent measurements, then alternative methods are required. For example, in RTK systems a Doppler approach is used to obtain velocity, while the determination of the phase ambiguity is used for the position measurement. In such systems the latency of the velocity data can sometimes be over 1 s, and hence the position and velocity determination occurs at different points in time. To overcome this, GNSS manufacturers propagate the velocity through the position data, and hence both the position and velocity outputs occur at the same time with the same latencies. However, such an approach produces correlation between the position and velocity data, which is not ideal for navigation systems.

Hardware

It was shown before that extremely high sampling of inertial measurement sensors is not required in land field robotics, since the maximum rotation and acceleration frequencies are not high (with obvious consideration to vibration). In addition, the power of modern processors is sufficient to handle the data throughput required in these applications. The navigation functions required for the aided INS strapdown system are usually partitioned between the fast guidance unit and the navigation processor, as shown in Figure 30.

[Figure 30: Partitioning of the navigation functions. The INS platform delivers gyro and acceleration data at 200 Hz to the guidance unit; accelerations and the transformation matrix are passed to the navigation processor at 10 Hz, together with the DGPS receiver observations.]

The system can be implemented as two independent processes. The rst process is implemented in the fast guidance unit. This process communicates with the inertial unit through a very fast link. The inertial measurement unit transmits triaxial gyro and acceleration information at a high frequency rate. The guidance unit integrates the direction cosine matrix and transforms the acceleration to the navigation frame. It also generates the predictions for velocity and position through single and double integration respectively. The positions, velocities, accelerations and the transformation matrix are transmitted to the navigation unit at a much lower rate. Although the vehicle state information is already transformed into the local navigation coordinate frame, the transformation matrix is still required to implement the data fusion algorithm. The navigation unit receives observed position and velocity information from the DGPS receiver at much lower rates 5-20 Hz. It is responsible for implementing the direct feedback Kalman Filter algorithm. The navigation algorithm ow can be implemented as follows: 1. Since the vehicle is generally autonomous, all guidance commands are known in advance and hence the navigation lter knows when the vehicle is stationary. Whilst the vehicle is stationary the system reads in all acceleration, rotation rate and tilt sensor data from the inertial unit and provide the average of these sensor readings. The logging time for this average depend on the actual vibration present in the system. 2. Once the averaging process has been completed, calibration of the inertial unit is accomplished as described in Section 5.9. This is used to determine biases and to obtain the initial Cn matrix. At this stage the GPS position x is used to determine b the initial Cn matrix, Section 4.6. g 3. 
The navigation system then proceeds with pure inertial navigation, with the initial position as determined by the GPS receiver and the initial attitude as obtained from the alignment stage. Initially the GPS sensor may not provide the position of the vehicle until it has reached its most accurate solution, which in the case of RTK systems may take up to a few minutes. The filter is initialised. The guidance computer is then informed that the navigation system is ready to provide the navigation solution.

4. As the vehicle moves, the acceleration and gyro values are read in and the biases removed (Equation 175). C_b^n is updated (Equation 133), and the acceleration in the navigation frame is computed (Equation 136). These values are then integrated to provide the position and heading of the vehicle.

5. If no GPS fix is available then return to Step 4; otherwise determine the GPS fix in the navigation frame (Equations 86 and 87). When the latency is small and the


vehicle dynamics are low, the velocity data can be used to propagate the position information using a constant velocity model with reasonable accuracy.

6. Fuse the inertial and GPS data as described in Section 6.2.

7. Use the gating function (Equation 207) to determine if there are any high frequency faults. If high frequency faults exist then send out only the current state values as determined by the inertial system and return to Step 4. If the validation check has passed then correct the inertial unit as shown in Section 6.2 and return to Step 4.

The estimation procedure occurs within the sampling time of the inertial unit; if this were not the case, and the latency is again low, the corrected velocity is used to propagate the corrected position value.
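The loop in Steps 4-7 above can be sketched with a scalar stand-in for the filter. The 1-D models, the noise values and the gate threshold are illustrative assumptions rather than the equations referenced in the steps:

```python
def run_navigation_loop(imu_samples, gps_fixes, gate_threshold=9.0):
    """Sketch of Steps 4-7: integrate IMU data, fuse a GPS fix when one
    is available, and gate the innovation before correcting the state.
    A 1-D constant-acceleration model replaces the full strapdown maths."""
    pos, vel = 0.0, 0.0
    var = 1.0                        # scalar position variance
    for k, (dt, acc) in enumerate(imu_samples):
        vel += acc * dt              # Step 4: integrate acceleration
        pos += vel * dt
        var += 0.01                  # process noise growth per step
        fix = gps_fixes.get(k)       # Step 5: is a GPS fix available?
        if fix is None:
            continue
        innov = fix - pos            # Step 6: fuse inertial and GPS
        S = var + 0.25               # innovation variance, R = 0.25 assumed
        if innov ** 2 / S > gate_threshold:
            continue                 # Step 7: reject high-frequency fault
        gain = var / S
        pos += gain * innov
        var *= (1.0 - gain)          # correction shrinks the uncertainty
    return pos, var
```

Without a fix the variance only grows with the process noise; a single gated fix reduces it again, which is the essential behaviour of the direct feedback loop.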

6.4

In this section we present a new type of aiding information that does not come from external sensors. This alternative method is based on applying constraints to the inertial equations when the motion of the vehicle is itself constrained in some way. This approach uses the land vehicle's motion to constrain the error growth of inertial units. The filter structure illustrated in Figure 31 is presented here.

[Figure 31 diagram: IMU measurements and velocity observations derived from the vehicle constraints feed the filter, which produces position, velocity and attitude estimates.]

Figure 31: Filter structure using vehicle constraints as velocity observations

The general three-dimensional inertial equations are derived from Newton's laws and


are thus applicable to all forms of motion. The inertial equations on board a train apply equally as well to an aerobatic aircraft. However, the motion of the train is clearly different to that of the aircraft; in particular, the train is constrained to move along the rails. This fact can be used as a virtual observation, constraining the error growth along the lateral and vertical directions. The motion of a general land vehicle is not much different from that of the train, and hence constrained motion equations can also be used. If the vehicle undergoes excessive slip or movement in the vertical direction off the ground, then extra modelling is required; however, the use of constraints as virtual observations is still valid.

6.4.1 General Three-Dimensional Motion

Let the motion of the vehicle be described by the state equation

ẋ = f(x, u)  (208)

where the vehicle state vector is x = [V_n^T, P_n^T, φ, θ, ψ]^T and the measurement vector is u = [f_b^T, ω_b^T]^T. Using the kinematic relationship between ω_b and the rates of change of the Euler angles, and assuming that the rate of rotation of the earth is negligible, the state equations for vehicle motion can be written as

V̇^n = C_b^n f^b  (209)
Ṗ^n = V^n  (210)
ψ̇ = (ω_y sin φ + ω_z cos φ) / cos θ  (211)
θ̇ = ω_y cos φ − ω_z sin φ  (212)
φ̇ = ω_x + (ω_y sin φ + ω_z cos φ) tan θ  (213)

Equations 209-213 are the fundamental equations that enable the computation of the state x of the vehicle from an initial state x(0) and a series of measurements f^b and ω^b. The Euler method is implemented here for better conceptual understanding. It is important to note the following with respect to these equations:

1. These equations are valid for the general motion of a body in three-dimensional space.

2. Equations 209-213 represent a set of non-linear differential equations that can easily be solved using a variety of different techniques.


3. It is possible to linearise these equations, for sufficiently small sampling intervals, by incorporating all the elements of the direction cosine matrix C_b^n into the state equation.

Clearly, the rate of error growth can be reduced if the velocity of the vehicle and the Euler angles φ and θ can be estimated directly. It will be shown in the next section that this is indeed possible for a vehicle moving on a surface.

6.4.2 Motion of a Vehicle on a Surface

Motion of a wheeled vehicle on a surface is governed by two non-holonomic constraints. When the vehicle doesn't jump off the ground and does not slide on the ground, the velocity of the vehicle in the plane perpendicular to the forward direction (x-axis) is zero. Under ideal conditions, there is no side slip along the direction of the rear axle (y-axis) and no motion normal to the road surface (z-axis), so the constraints are

V_y = 0  (214)
V_z = 0  (215)

In any practical situation these constraints are violated to some degree, due to the presence of side slip during cornering and vibrations caused by the engine and suspension system. In particular, the side slip is a function of the vehicle state as well as of the interaction between the vehicle tyres and the terrain. A number of models are available for determining side slip, but these models require knowledge of the vehicle, tyre and ground characteristics that is not generally available. Alternatively, information from external sensors can be used to estimate slip on-line. As a first approximation, however, it is possible to model the constraint violations as follows:

V_y − ν_y = 0  (216)
V_z − ν_z = 0  (217)

where ν_y and ν_z are Gaussian white noise sources with zero mean and variances σ_y^2 and σ_z^2 respectively. The strength of the noise can be chosen to reflect the extent of the expected constraint violations. Using the following equation that relates the velocities in the body frame V^b = [V_x, V_y, V_z]^T to V^n,

V^b = [C_b^n]^T V^n


it is possible to write the constraint Equations 216 and 217 as a function of the vehicle state x and a noise vector w = [ν_y, ν_z]^T,

[V_y, V_z]^T = M + [ν_y, ν_z]^T,  (218)

where M is

M = [ V_N(sin φ sin θ cos ψ − cos φ sin ψ) + V_E(cos φ cos ψ + sin φ sin θ sin ψ) + V_D(sin φ cos θ),
      V_N(sin φ sin ψ + cos φ sin θ cos ψ) + V_E(cos φ sin θ sin ψ − sin φ cos ψ) + V_D(cos φ cos θ) ]^T  (219)

The navigation frame implemented here is North (N), East (E), Down (D). It is now required to obtain the best estimate for the state vector x modelled by the state Equations 209-213 from a series of measurements f^b and ω^b, subject to the constraint Equation 218.
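The constraint of Equation 218 can be checked numerically. The sketch below builds C_b^n for a standard aerospace roll-pitch-yaw rotation sequence, which is an assumption consistent with Equation 219, and evaluates the lateral and vertical body-frame velocities that the non-holonomic constraints declare to be zero:

```python
import numpy as np

def dcm_body_to_nav(phi, theta, psi):
    """Direction cosine matrix C_b^n for roll phi, pitch theta, yaw psi
    (standard aerospace rotation sequence, assumed here)."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cpsi, -cphi * spsi + sphi * sth * cpsi,  sphi * spsi + cphi * sth * cpsi],
        [cth * spsi,  cphi * cpsi + sphi * sth * spsi, -sphi * cpsi + cphi * sth * spsi],
        [-sth,        sphi * cth,                       cphi * cth]])

def constraint_residual(v_ned, phi, theta, psi):
    """M of Equation 218: lateral and vertical body-frame velocity,
    which the non-holonomic constraints say should be zero."""
    v_body = dcm_body_to_nav(phi, theta, psi).T @ np.asarray(v_ned, float)
    return v_body[1:]        # [V_y, V_z]
```

For a vehicle whose navigation-frame velocity is aligned with its heading, the residual is zero; any side slip or vertical motion shows up directly in these two components.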

6.5

The state equation, obtained by the discretisation of Equations 209-213, is

x̂(k|k−1) = f(x̂(k−1|k−1), u(k)),  (220)

and the discrete-time version of the constraint equation obtained from Equation 218 is

z(k) = H(x(k)) + v(k).  (221)

Estimation of the state vector x subject to stochastic constraints can be done in the framework of an extended Kalman filter. It is proposed to treat Equation 221 as an observation equation where the observation at each time instant k is in fact identically zero. The Kalman filter algorithm proceeds recursively in three stages:

Prediction: The algorithm first generates a prediction for the state estimate and the state estimate covariance at time k according to

x̂(k|k−1) = F(x̂(k−1|k−1), u(k)),  (222)
P(k|k−1) = F_x(k) P(k−1|k−1) F_x^T(k) + Q(k).  (223)

The term F_x(k) is the Jacobian of the non-linear state transition function F(k) with respect to the state x.


Observation: Following the prediction, it is assumed that an observation z(k), identically zero, is made. An innovation is then calculated as follows:

ν(k) = z(k) − ẑ(k|k−1)  (224)

where z(k) is in fact set to zero. An associated innovation covariance is computed using

S(k) = H_x(k) P(k|k−1) H_x^T(k) + R(k)  (225)

Update: The state estimate and corresponding state estimate covariance are then updated according to

x̂(k|k) = x̂(k|k−1) + W(k) ν(k)  (226)

where ν(k) is the non-linear innovation,

ν(k) = z(k) − H(x̂(k|k−1)).  (227)

The gain matrix is

W(k) = P(k|k−1) H_x^T(k) S^{−1}(k)  (228)

where the term H_x(k) is the Jacobian of the observation model with respect to the state x. The updated covariance matrix is

P(k|k) = (I − W(k) H_x(k)) P(k|k−1) (I − W(k) H_x(k))^T + W(k) R(k) W^T(k).  (229)

6.5.1 Observability of the States

The extended Kalman filter algorithm obtains estimates of the state x. However, not all the state variables in this estimate are observable. For example, inspection of the state and observation equations suggests that the estimation of the vehicle position, P^n, requires direct integration and is therefore not observable. Furthermore, if the vehicle moves in a trajectory that does not excite the relevant degrees of freedom, the number of observable states may be further reduced. Intuitively, during motion along straight lines the forward velocity is the direct integral of the measured forward acceleration, and is therefore not observable. Clearly an analysis is required to establish the conditions for observability. As the state and observation equations are non-linear, this is not straightforward. In this


section an alternative formulation of the state equations, one that directly incorporates the non-holonomic constraints, is developed in order to examine this issue. Consider the motion of a vehicle on a surface as shown in Figure 32. Assume that the

Figure 32: Motion of a vehicle on a surface. The navigation frame is fixed and the body frame is on the local tangent plane to the surface and is aligned with the kinematic axes of the vehicle.

non-holonomic constraints are strictly enforced, and therefore the velocity vector V of the vehicle in the navigation frame is aligned with b_x. Let s, ṡ and s̈ be the distance measured from some reference location to the current vehicle location along its path, together with its first and second derivatives with respect to time. Therefore V = ṡ b_x. The acceleration of the vehicle is given by

f = V̇ = s̈ b_x + ṡ ḃ_x.

As the angular velocity of the coordinate frame b is ω^b,

f = s̈ b_x + ṡ ω^b × b_x,
f = s̈ b_x + ṡ ω_z b_y − ṡ ω_y b_z.

Navigation System Design (KC-4)

The components of the acceleration of the vehicle in the body frame b become

f · b_x = s̈,
f · b_y = ṡ ω_z,
f · b_z = −ṡ ω_y.

Given that f^n = C_b^n f^b, and using this in the above equations,

[f_x, f_y, f_z]^T = [V̇_f, V_f ω_z, −V_f ω_y]^T − [C_b^n]^T [0, 0, g]^T,


where g is the gravitational constant and V_f = ṡ is the speed of the vehicle. Rearranging this, the following three equations relating the vehicle motion to the measurements from the inertial unit are obtained:

V̇_f − f_x + g sin θ = 0,  (233)
V_f ω_z − f_y − g sin φ cos θ = 0,  (234)
V_f ω_y + f_z + g cos φ cos θ = 0.  (235)

It is clear from the above equations that:

- when the forward acceleration is zero, the roll (φ) and pitch (θ) can be directly computed from the inertial measurements;
- if one of the angular velocities ω_y or ω_z is not zero, the forward velocity can also be computed directly;
- even when the forward acceleration is non-zero, it is possible to write a differential equation containing only the forward velocity and the inertial measurements by substituting Equations 234 and 235 into 233.

Therefore V_f can be obtained by one integration step involving the inertial measurements. If the constraints are not used, two integration steps are required to obtain velocities. This result is of significant importance. The fact that the forward acceleration is observable makes the forward velocity error growth only a function of the random walk due to the noise present in the observed acceleration. It is possible to use Equations 234 and 235 directly to obtain the complete vehicle state without going through the Kalman filter proposed in the previous section. This, however, makes it difficult to incorporate models for constraint violations in the solution. Also, when the constraint violation is significant, such as in off-road situations or when cornering at high speed, the white noise model is inadequate. For example, if there is significant side slip, explicit slip modelling may be required.
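The first observation above (roll and pitch directly computable when the forward acceleration is zero) follows from dropping the V̇_f and V_f ω terms in Equations 233-235, which leaves f_x = g sin θ, f_y = −g sin φ cos θ and f_z = −g cos φ cos θ. A sketch of the inversion, with g = 9.81 m/s² assumed:

```python
import numpy as np

G = 9.81  # gravitational constant (m/s^2), assumed value

def roll_pitch_from_accel(fx, fy, fz):
    """Invert f_x = g sin(theta), f_y = -g sin(phi) cos(theta),
    f_z = -g cos(phi) cos(theta) for roll phi and pitch theta, which
    is valid when the forward acceleration and rotation terms vanish."""
    theta = np.arcsin(np.clip(fx / G, -1.0, 1.0))
    phi = np.arctan2(-fy, -fz)      # ratio cancels g and cos(theta)
    return phi, theta
```

Feeding the static specific-force components generated from a known attitude recovers that attitude exactly, which is the basis of the accelerometer levelling used in alignment.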


Figure 33 shows the performance of the algorithm in a large area with the vehicle moving at low speed. It can clearly be seen that the incorporation of the constraints V_y = 0, V_z = 0 results in performance very close to that of GPS/INS. For comparison purposes the raw data integration is also presented, showing that a large error is obtained after the first minute of operation.


Figure 33: Comparison between integration of raw data, GPS/INS and constraint algorithms
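The zero pseudo-observation update used in this section (Equations 224-229) can be sketched as follows. The two-state example in the test treats the constrained velocities [V_y, V_z] directly as the state, purely for illustration:

```python
import numpy as np

def constraint_update(x, P, h, H, R):
    """One EKF update in which the observation is identically zero
    (Equations 224-229). h is the evaluated constraint H(x_hat) and H
    its Jacobian; the Joseph form of Equation 229 keeps P symmetric."""
    nu = -h                                   # innovation: z - h(x), z = 0
    S = H @ P @ H.T + R                       # innovation covariance
    W = P @ H.T @ np.linalg.inv(S)            # gain
    x_new = x + W @ nu
    IKH = np.eye(len(x)) - W @ H
    P_new = IKH @ P @ IKH.T + W @ R @ W.T     # Joseph-form covariance update
    return x_new, P_new
```

Each application drives the constrained components toward zero and shrinks their covariance, which is exactly how the virtual observation bounds the lateral and vertical error growth.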

6.5.2

This section will discuss the aiding of an inertial unit with three forms of external observations: position and velocity derived from GPS, speed from a drive shaft encoder, and the vehicle model constraints. By incorporating all three observations, more information is provided for the correction of errors in the inertial data. Furthermore, the constraint algorithms contain the growth of the attitude error when there are no GPS fixes, thus sustaining the accuracy of the inertial unit. The speed information provided by the encoder makes the forward velocity observable. The approach taken is to use a linear information filter with the inertial error model developed as the process model. Thus a direct feedback structure is implemented, as shown in Figure 34. This makes the system practically achievable, and also allows easy addition of information from the aiding sensors, especially since their observations are delivered asynchronously and at different rates. It is shown that the additional sensors significantly improve the quality


of the location estimate. This is of fundamental importance since it makes the inertial system less dependent on external information.

[Figure 34 diagram: the Kalman filter estimates the errors of position, velocity and attitude from the INS/GNSS differences and feeds them back to correct the INS.]

Figure 34: Direct feedback structure

The inertial error model outlined in Section 6.2 is implemented here using the information form of the Kalman filter. This form is adopted in this implementation since various types of external information are incorporated asynchronously into the system. Nevertheless, identical results can be obtained with the standard Kalman filter.

Prediction: The prediction stage is implemented using

ŷ(k|k−1) = L(k|k−1) ŷ(k−1|k−1) + B(k) u(k),  (236)

where

L(k|k−1) = Y(k|k−1) F(k) Y^{−1}(k−1|k−1)

and

Y(k|k−1) = [F(k) Y^{−1}(k−1|k−1) F^T(k) + Q(k)]^{−1}.  (237)

Observation: When an observation from an aiding sensor or constraint is made, the observation vector generated is the observed error of the inertial system,

z(k) = z_inertial(k) − z_aiding(k).

At each sampling of the inertial unit a constraint observation is made, that is, V_y = 0 and V_z = 0. At this stage the velocity vector is only partly complete, requiring the speed of the vehicle along the body x-axis, which is obtained from the speed encoder as V_x.


The sampling rate of the speed encoder is 25 Hz. However, due to the slow dynamics of the vehicle it can safely be assumed that the velocity is constant between these samples, and hence the observation can be formed at the sampling rate of the inertial unit. This observation, which is in the body frame, is converted to the North (N), East (E), Down (D) frame using C_b^n. That is, with V_y = V_z = 0,

z_velocity(k) = C_b^n [V_x(k), V_y(k), V_z(k)]^T  (238)
             = [V_x(k) cos θ cos ψ, V_x(k) cos θ sin ψ, −V_x(k) sin θ]^T.  (239)

Thus the observation vector is z(k) = z_inertial^V(k) − z_velocity(k). The observation model is given by

H_velocity = [0_{3×3} I_{3×3} 0_{3×3}]  (240)

and the observation covariance matrix is

R_velocity = diag(σ_x^2, σ_y^2, σ_z^2).  (241)

Since the velocity vector is transformed from the body frame to the navigation frame, the observation noise covariance needs to be transformed as well:

R̃_velocity = C_b^n R_velocity [C_b^n]^T  (242)
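Equation 242 is a plain similarity transform. In the sketch below a pure yaw rotation is used for C_b^n, which makes it easy to see the forward and lateral variances swap when the vehicle heads east; the variance values are illustrative:

```python
import numpy as np

def rotate_covariance(C_nb, R_body):
    """Equation 242: observation noise defined in the body frame is
    expressed in the navigation frame by R_tilde = C R C^T."""
    return C_nb @ R_body @ C_nb.T

# yaw-only DCM (roll = pitch = 0), an illustrative special case
psi = np.pi / 2
C = np.array([[np.cos(psi), -np.sin(psi), 0.0],
              [np.sin(psi),  np.cos(psi), 0.0],
              [0.0,          0.0,         1.0]])
R_body = np.diag([0.04, 0.01, 0.09])   # encoder/constraint noise (body frame)
R_nav = rotate_covariance(C, R_body)
```

Heading east, the accurate constraint variance (0.01) now applies to the north axis and the noisier encoder variance (0.04) to the east axis, while the vertical component is unchanged.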

When the position and velocity are obtained from the GNSS receiver, the observation vector is

z(k) = [z_inertial^P(k) − z_GNSS^P(k), z_inertial^V(k) − z_GNSS^V(k)]^T.  (243)

The observation model is

H_GNSS(k) = [I_{3×3} 0_{3×3} 0_{3×3}; 0_{3×3} I_{3×3} 0_{3×3}],  (244)

and the observation noise matrix is

R_GNSS(k) = diag(σ_PN^2, σ_PE^2, σ_PD^2, σ_VN^2, σ_VE^2, σ_VD^2)  (245)

where the individual variances are obtained from the GDOP values.

Update: Once the observations are formed, the information state vector is generated:

i(k) = H^T(k) R^{−1}(k) z(k).  (246)

The amount of certainty associated with this observation is in the form of the information observation matrix,

I(k) = H^T(k) R^{−1}(k) H(k).  (247)

The estimate then proceeds according to

ŷ(k|k) = ŷ(k|k−1) + i(k)  (248)
Y(k|k) = Y(k|k−1) + I(k),  (249)

and if there is more than one sensor at this time stamp,

ŷ(k|k) = ŷ(k|k−1) + Σ_j i_j(k)  (250)
Y(k|k) = Y(k|k−1) + Σ_j I_j(k),  (251)

where j indexes the sensors providing information at time k. The algorithms were tested in the very large area shown in Figure 35. In this test we used the velocity constraint and GPS information. The corresponding position and velocity errors are shown in Figures 36 and 37. It can be seen that the velocity error tends to zero, as expected, and the position error is also kept bounded by the near-zero velocity error.
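The additive structure of Equations 246-251 is what makes the information form convenient for asynchronous multi-sensor fusion: each sensor simply contributes its own i_j and I_j, whatever its rate. A sketch, with illustrative sensor models and noise values:

```python
import numpy as np

def information_update(y_pred, Y_pred, observations):
    """Equations 246-251: each sensor contributes i = H^T R^-1 z and
    I = H^T R^-1 H, which are simply summed onto the predicted
    information state vector and information matrix."""
    y, Y = y_pred.copy(), Y_pred.copy()
    for H, R, z in observations:              # any number of sensors
        Rinv = np.linalg.inv(R)
        y = y + H.T @ Rinv @ z                # Equation 250
        Y = Y + H.T @ Rinv @ H                # Equation 251
    return y, Y
```

A small two-state, two-sensor example shows how the information simply accumulates, and the state estimate is recovered as x = Y^{-1} y.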

6.6

The three methods discussed so far are also known as loosely coupled implementations since there is no feedback to the aiding sensor/navigation system. If feedback is provided


Figure 35: Vehicle path

to the aiding source, a tighter configuration is obtained, which in turn improves system integrity. Figure 38 illustrates what is known as a tightly coupled configuration. It offers the advantage of being robust and increases performance, since it allows the system designer to delve into the operation and algorithms of both sensors. The inertial sensor provides raw data to the filter, which usually incorporates a kinematic model of the vehicle. The aiding sensor also provides raw information. The filter estimates the states of the vehicle, and uses these estimates to assist the aiding sensor in its attainment of observations. For example, the aiding information can help GNSS with tracking satellites, or assist a scanning radar with pointing and stabilisation.

6.6.1 Advantages and Disadvantages of the Tightly and Loosely Coupled Configurations

The loosely coupled configuration offers the distinct advantage of being highly modular in accuracy and cost. The system designer can implement the model of choice along with the desired INS in whatever navigation structure is preferred. Any aiding sensor can then be added to the navigation system. Although the tightly coupled configuration is more robust than the loosely coupled configuration, it is more expensive to implement and more difficult to develop. Furthermore, if a different aiding sensor is employed, the models and algorithms must change substantially. The use of GNSS as an aiding system for inertial navigation systems has been the subject


Figure 36: Position error incorporating constraint and GPS

of study for a number of years. The majority of implementations have been loosely coupled. This is because companies specialising in inertial systems have developed INS units with built-in filtering techniques in a loosely coupled configuration, while GNSS companies have focused on delivering state-of-the-art GNSS navigation systems. To implement a tightly coupled configuration requires close collaboration with GNSS companies, since the aiding information from the navigation filter which is fed back to the GNSS sensor assists with the satellite tracking algorithms. These algorithms are kept secret, since the speed of satellite reacquisition (the ability to locate and track satellites after they have been lost) is what separates the quality of receivers from different manufacturers, and hence is a strong marketing tool.


[Figure 37: Velocity error (north and east components) for the constrained inertial and constrained + GPS algorithms.]

[Figure 38 diagram: IMU accelerations and rotation rates, together with raw observations from the aiding sensor, feed a non-linear filter producing position, velocity and attitude estimates.]

Figure 38: The tightly coupled configuration treats both inputs as sensors and not as navigation systems. Furthermore, the filter estimates are sent to the aiding sensor and not the inertial sensor. This configuration allows for a more robust system.


Reliable localization is an essential component of any autonomous vehicle system. The basic navigation loop is based on dead reckoning sensors that predict high frequency vehicle manoeuvres and low frequency absolute sensors that bound positioning errors. The problem of localization given a map of the environment, or of estimating the map knowing the vehicle position, has been addressed and solved using a number of different approaches. Section 3 presents a Kalman filter technique to estimate the position of the vehicle based on the known positions of artificial landmarks. Although this method can be made very reliable, it has the drawback that it requires modification of the environment with the addition of special infrastructure. In addition, the locations of this infrastructure need to be surveyed. A related problem arises when both the map and the vehicle position are unknown. In this case the vehicle starts at an unknown location in an unknown environment and proceeds to incrementally build a navigation map of the environment while simultaneously using this map to update its location. In this problem, vehicle and map estimates are highly correlated and cannot be obtained independently of one another. This problem is usually known as Simultaneous Localization and Map Building (SLAM) and was originally introduced in [26]. During the past few years significant progress has been made towards the solution of the SLAM problem [17] [23] [12] [7] [25]. Kalman filter methods can also be extended to perform simultaneous localization and map building. There have been several applications of this technology in a number of different environments, such as indoors, underwater and outdoors. One of the main problems with the SLAM algorithm has been its computational requirements. Although the algorithm is originally O(N^3), the complexity of the SLAM algorithm can be reduced to O(N^2), N being the number of landmarks in the map.
For long duration missions the number of landmarks will increase and eventually computer resources will not be sufficient to update the map in real time. This N^2 scaling problem arises because each landmark is correlated to all other landmarks. The correlation appears because the observation of a new landmark is obtained with a sensor mounted on the mobile robot, and thus the landmark location error will be correlated with the error in the vehicle location and with the errors in the other landmarks of the map. This correlation is of fundamental importance for the long-term convergence of the algorithm and needs to be maintained for the full duration of the mission. This section presents an introduction to the SLAM problem, a description of its computational complexity, and different approaches that make possible the implementation of SLAM in real time in very large environments.


7.1 Fundamentals of SLAM

The SLAM algorithm addresses the problem of a vehicle with a known kinematic model, starting at an unknown position, moving through an unknown environment populated with artificial or natural features. The objective of SLAM is to localize the vehicle and at the same time build an incremental navigation map with the observed features. The vehicle is equipped with a sensor capable of taking measurements of the relative location between the features and the vehicle itself. This scenario is shown in Figure 39. To facilitate the introduction of the SLAM equations, a linear model for the vehicle and observation is used.

[Figure 39: A mobile vehicle at state x_v making relative observations of features/landmarks p_i.]

7.1.1 Process Model

The state of the system consists of the position and orientation of the vehicle, augmented with the positions of the landmarks. Assuming that the state of the vehicle is given by x_v(k), the motion of the vehicle through the environment can be modelled by the following equation:

x_v(k+1) = F_v(k) x_v(k) + u_v(k+1) + v_v(k+1)  (252)

where Fv (k) is the state transition matrix, uv (k) a vector of control inputs, and vv (k) a vector of temporally uncorrelated process noise errors with zero mean and covariance


Q_v(k) (see [18] for further details). The location of the ith landmark is denoted p_i. SLAM considers all landmarks to be stationary. The state transition equation for the ith landmark is

p_i(k+1) = p_i(k) = p_i.  (253)

It can be seen that the model for the evolution of the landmarks does not have any uncertainty. Assuming that N landmarks have been validated and incorporated by the system, the vector of all N landmarks is denoted

p = [p_1^T ... p_N^T]^T.  (254)

The augmented state vector containing both the state of the vehicle and the states of all landmark locations is denoted

x(k) = [x_v^T(k) p_1^T ... p_N^T]^T.  (255)

The augmented state transition model for the complete system may now be written as

[x_v(k+1); p_1; ...; p_N] = diag(F_v(k), I_{p1}, ..., I_{pN}) [x_v(k); p_1; ...; p_N]
                          + [u_v(k+1); 0_{p1}; ...; 0_{pN}] + [v_v(k+1); 0_{p1}; ...; 0_{pN}]  (256, 257)

x(k+1) = F(k) x(k) + u(k+1) + v(k+1)  (258)

where I_{pi} is the dim(p_i) × dim(p_i) identity matrix and 0_{pi} is the dim(p_i) null vector. As can be seen from Equation 256, the size of the matrices involved is augmented by n·N, where n is the number of states required to represent a landmark and N is the number of landmarks incorporated into the map. In a large environment this number will tend to grow, and eventually the computer resources will not be sufficient to process the information from the external sensor in real time.

7.1.2 The Observation Model

The vehicle is equipped with a sensor that can obtain observations of the relative location of landmarks with respect to the vehicle. Assuming the observation to be linear and synchronous, the observation model for the ith landmark is written in the form

z_i(k) = H_i x(k) + w_i(k)  (259)
       = H_{pi} p − H_v x_v(k) + w_i(k)  (260)

where w_i(k) is a vector of temporally uncorrelated observation errors with zero mean and variance R_i(k). The term H_i is the observation matrix and relates the output of the sensor z_i(k) to the state vector x(k) when observing the ith landmark. It is important to note that the observation model for the ith landmark has the form

H_i = [−H_v, 0 ... 0, H_{pi}, 0 ... 0].  (261)

This structure reflects the fact that the observations are relative between the vehicle and the landmark, often in the form of relative location, or relative range and bearing (see Section 4).

7.2

In the estimation-theoretic formulation of the SLAM problem, the Kalman filter is used to provide estimates of vehicle and landmark location. We briefly summarise the notation and main stages of this process. The Kalman filter recursively computes estimates for a state x(k) which is evolving according to the process model in Equation 256 and which is being observed according to the observation model in Equation 259. The Kalman filter computes an estimate equivalent to the conditional mean x̂(p|q) = E[x(p) | Z^q] (p ≥ q), where Z^q is the sequence of observations taken up until time q. The error in the estimate is denoted x̃(p|q) = x̂(p|q) − x(p). The Kalman filter also provides a recursive estimate of the covariance P(p|q) = E[x̃(p|q) x̃^T(p|q) | Z^q] of the estimate x̂(p|q). The Kalman filter algorithm proceeds recursively in three stages:

Prediction: Given that the models described in Equations 256 and 259 hold, and that an estimate x̂(k|k) of the state x(k) at time k together with an estimate of the covariance P(k|k) exist, the algorithm first generates a prediction for the state estimate, the observation (relative to the ith landmark) and the state estimate covariance at time k+1 according to

x̂(k+1|k) = F(k) x̂(k|k) + u(k)  (262)
ẑ_i(k+1|k) = H_i(k) x̂(k+1|k)  (263)
P(k+1|k) = F(k) P(k|k) F^T(k) + Q(k),  (264)

respectively.


Observation: Following the prediction, an observation z_i(k+1) of the ith landmark of the true state x(k+1) is made according to Equation 259. Assuming correct landmark association, an innovation is calculated as follows:

ν_i(k+1) = z_i(k+1) − ẑ_i(k+1|k)  (265)

together with an associated innovation covariance matrix given by

S_i(k+1) = H_i(k) P(k+1|k) H_i^T(k) + R_i(k+1).  (266)

Update: The state estimate and corresponding state estimate covariance are then updated according to

x̂(k+1|k+1) = x̂(k+1|k) + W_i(k+1) ν_i(k+1)  (267)
P(k+1|k+1) = P(k+1|k) − W_i(k+1) S_i(k+1) W_i^T(k+1)  (268)

where the gain matrix W_i(k+1) is given by

W_i(k+1) = P(k+1|k) H_i^T(k) S_i^{−1}(k+1).  (269)

The update of the state estimate covariance matrix is of paramount importance to the SLAM problem. Understanding the structure and evolution of the state covariance matrix is the key to the solution of the SLAM problem.

7.2.1 Example

Assume a vehicle moving in one dimension and observing the relative range to a number of landmarks. We would like to design a filter to track the position and velocity of the vehicle. We can also assume that the position of the vehicle is known with some uncertainty, although this is not relevant for this example. Since no additional absolute information such as GPS is available, if one assumes the vehicle is travelling at constant velocity, the uncertainty of its position estimate will grow with time. In addition, due to the initial errors in position and velocity, the difference between the estimated and true position will grow indefinitely.

[pos(k+1); vel(k+1)] = [1 Δt; 0 1] [pos(k); vel(k)] + [0; 1] v(k)  (270)

In this example we can assume that two landmarks are already incorporated into the map. The process model is then extended as follows:

[pos(k+1); vel(k+1); p_1(k+1); p_2(k+1)] = [1 Δt 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1] [pos(k); vel(k); p_1(k); p_2(k)] + [0; 1; 0; 0] v(k)  (271)

Once a landmark is incorporated into the map it will remain part of the state vector. The full augmented system needs to be used each time a prediction or observation is made. In a later section we will see that optimizations are possible to reduce the computational complexity of SLAM. On the other hand, the observation model will be a function of the number of landmarks observed. In the case where two landmarks are observed we have:

[z_1(k); z_2(k)] = [−1 0 1 0; −1 0 0 1] [pos(k); vel(k); p_1(k); p_2(k)] + [1; 1] w(k)  (272)

In this example it was also assumed that both observations were taken with the same sensor, or with another sensor having similar noise characteristics.
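The example above (Equations 270-272) can be run directly as a small Kalman filter. The time step, the noise strengths and the initial covariances are illustrative assumptions, and independent measurement noises of equal variance are used in place of a single shared noise source. Because the covariance recursion does not depend on the measured data, the properties of P at the end hold deterministically:

```python
import numpy as np

# 1-D SLAM from Equations 270-272: state [pos, vel, p1, p2] with
# relative-range observations of two stationary landmarks.
dt, q, r = 0.1, 0.01, 0.04
F = np.array([[1.0, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
G = np.array([[0.0], [1], [0], [0]])                 # velocity noise input
H = np.array([[-1.0, 0, 1, 0], [-1, 0, 0, 1]])       # z_i = p_i - pos + w_i
R = r * np.eye(2)

x = np.array([0.0, 1.0, 10.0, 20.0])                 # true state
xh = x.copy()                                        # filter estimate
P = np.diag([0.5, 0.1, 4.0, 4.0])                    # landmarks poorly known

rng = np.random.default_rng(1)
for _ in range(200):
    x = F @ x + G[:, 0] * np.sqrt(q) * rng.standard_normal()
    xh = F @ xh                                      # prediction (Eq. 262-264)
    P = F @ P @ F.T + q * (G @ G.T)
    z = H @ x + np.sqrt(r) * rng.standard_normal(2)  # observation (Eq. 265)
    S = H @ P @ H.T + R
    W = P @ H.T @ np.linalg.inv(S)                   # gain (Eq. 269)
    xh = xh + W @ (z - H @ xh)                       # update (Eq. 267-268)
    P = P - W @ S @ W.T

corr = P[2, 3] / np.sqrt(P[2, 2] * P[3, 3])          # landmark cross-correlation
```

After 200 updates the landmark variances have dropped well below their initial values and the two landmark estimates are almost fully correlated, anticipating the convergence results of the next section.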

7.3

This section presents three fundamental results in the solution of SLAM. For a full demonstration of these results the reader is encouraged to see [7]. The state covariance matrix may be written in block form as

P(i|j) = [P_vv(i|j) P_vm(i|j); P_mv(i|j) P_mm(i|j)]

where P_vv(i|j) is the error covariance matrix associated with the vehicle state estimate, P_mm(i|j) is the map covariance matrix associated with the landmark state estimates, and P_vm(i|j) is the cross-covariance matrix between the vehicle and landmark states.

Theorem 1 The determinant of any sub-matrix of the map covariance matrix decreases monotonically as successive observations are made.

The algorithm is initialised using a positive semi-definite (psd) state covariance matrix P(0|0). The matrices Q and R_i are both psd, and consequently the matrices P(k+1|k), S_i(k+1), W_i(k+1) S_i(k+1) W_i^T(k+1) and P(k+1|k+1) are all psd. From Equation

268, and for any landmark i,

det P(k+1|k+1) = det( P(k+1|k) − Wi(k+1) Si(k+1) Wi^T(k+1) ) ≤ det P(k+1|k)   (273)

The determinant of the state covariance matrix is a measure of the volume of the uncertainty ellipsoid associated with the state estimate. Equation 273 states that the total uncertainty of the state estimate does not increase during an update.

Theorem 2 In the limit, the landmark estimates become fully correlated.

As the number of observations taken tends to infinity, a lower limit on the map covariance will be reached such that

lim_{k→∞} Pmm(k|k) = P̄mm ≥ 0   (274)

Also, in the limit the determinant of the covariance matrix of a map containing more than one landmark tends to zero:

lim_{k→∞} det Pmm(k|k) = 0   (275)

This result implies that the landmarks become progressively more correlated as successive observations are made. In the limit, then, given the exact location of one landmark the location of all other landmarks can be deduced with absolute certainty and the map is fully correlated.

Theorem 3 In the limit, the lower bound on the covariance matrix associated with any single landmark estimate is determined only by the initial covariance in the vehicle estimate P0v at the time of the first sighting of the first landmark.

When the process noise is not zero, the two competing effects of loss of information content due to process noise and gain of information content through observations determine the limiting covariance. It is important to note that the limit on the covariance applies because all the landmarks are observed and initialised solely from observations made by the vehicle. The covariances of landmark estimates cannot be further reduced by making additional observations of previously unknown landmarks. However, incorporation of external information, for example when an observation is made to a landmark whose location is available through external means such as GPS, will reduce the limiting covariance.

In summary, the three theorems presented above describe, in full, the convergence properties of the map and its steady-state behaviour. As the vehicle progresses through the environment the total uncertainty of the estimates of landmark locations reduces monotonically to the point where the map of relative locations is known with absolute precision. In the limit, errors in the estimates of any pair of landmarks become fully correlated. This means that given the exact location of any one landmark, the location of any other landmark in the map can also be determined with absolute certainty. As the map converges in this manner, the error in the absolute location estimate of every landmark (and thus of the whole map) reaches a lower bound determined only by the error that existed when the first observation was made. Thus a solution to the general SLAM problem exists, and it is indeed possible to construct a perfectly accurate map describing the relative location of landmarks and simultaneously compute vehicle position estimates without any prior knowledge of landmark or vehicle locations.
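Theorem 1 can be illustrated numerically: for any positive-definite prior and any linear observation, a Kalman update cannot increase the determinant of the covariance. A minimal sketch (dimensions and matrices are arbitrary, chosen only for the check):

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((6, 6))
P_prior = A @ A.T + 6.0 * np.eye(6)        # P(k+1|k), positive definite
H = rng.standard_normal((2, 6))            # arbitrary linear observation model
R = np.eye(2)                              # observation noise covariance

S = H @ P_prior @ H.T + R                  # innovation covariance
W = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
P_post = P_prior - W @ S @ W.T             # update of Equation 273

# det P(k+1|k+1) <= det P(k+1|k): the uncertainty volume never grows in an update.
assert np.linalg.det(P_post) <= np.linalg.det(P_prior)
```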

7.4 Non-linear Models

In general, the models that predict the trajectory of the vehicle and the models that relate the observations to the states are non-linear. The SLAM problem can still be formulated, but it requires the linearization of these models. In this case the Jacobians of the process and observation models are used to propagate the covariances. In this section we present a more realistic model of a standard outdoor vehicle.

Assume a vehicle equipped with dead reckoning capabilities and an external sensor capable of measuring the relative distance between the vehicle and the environment, as shown in Figure 40. The steering control α and the speed vc are used with the kinematic model to predict the position of the vehicle. The external sensor returns range and bearing information to the different features Bi, i = 1..n. This information is obtained with respect to the vehicle coordinates (xl, yl), that is z(k) = (r, β), where r is the distance from the beacon to the range sensor and β is the sensor bearing measured with respect to the vehicle coordinate frame. Considering that the vehicle is controlled through a demanded velocity vc and steering angle α, the process model that predicts the trajectory of the centre of the back axle is given by

[ẋc; ẏc; φ̇] = [vc cos(φ); vc sin(φ); (vc/L) tan(α)]   (276)

Where L is the distance between the wheel axles, as shown in Figure 41. To simplify the equations in the update stage, the kinematic model of the vehicle is designed to represent


Figure 40: Vehicle Coordinate System

the trajectory of the centre of the laser. Based on Figures 40 and 41, the translation from the centre of the back axle can be given as

PL = PC + a Tφ + b Tφ+π/2   (277)

PL and PC are the positions of the laser and the centre of the back axle in global coordinates, respectively. The transformation is defined by the orientation angle φ according to the following vectorial expression:

Tφ = (cos(φ), sin(φ))

The scalar representation is

xL = xc + a cos(φ) + b cos(φ + π/2)
yL = yc + a sin(φ) + b sin(φ + π/2)   (278)



Figure 41: Kinematics parameters

Finally, the full state representation can be written as

[ẋL; ẏL; φ̇L] = [vc cos(φ) − (vc/L)(a sin(φ) + b cos(φ)) tan(α); vc sin(φ) + (vc/L)(a cos(φ) − b sin(φ)) tan(α); (vc/L) tan(α)]   (279)

The velocity vc is measured with an encoder located on the back left wheel. This velocity is translated to the centre of the axle with the following equation:

vc = ve / (1 − (H/L) tan(α))   (280)

where ve is the velocity measured by the encoder.

Where for this car H = 0.75 m, L = 2.83 m, b = 0.5 m and a = L + 0.95 m. Finally, the discrete model in global coordinates can be approximated with the following set of equations:

xL(k) = xL(k−1) + ΔT vc(k−1) [ cos(φ(k−1)) − (1/L)(a sin(φ(k−1)) + b cos(φ(k−1))) tan(α(k−1)) ]
yL(k) = yL(k−1) + ΔT vc(k−1) [ sin(φ(k−1)) + (1/L)(a cos(φ(k−1)) − b sin(φ(k−1))) tan(α(k−1)) ]
φL(k) = φL(k−1) + ΔT (vc(k−1)/L) tan(α(k−1))   (281)

where ΔT is the sampling time, which in this case is not constant. The observation equation relating the vehicle states to the observations is

z(k) = [zr^i; zβ^i] = [ sqrt((xL − xi)² + (yL − yi)²); atan2(yi − yL, xi − xL) − φL ] + γh   (282)

where z is the observation vector, (xi, yi) are the coordinates of the landmarks, xL, yL and φL are the vehicle states defined at the external sensor location, and γh is the sensor noise. The complete non-linear model can be expressed in general form as:

X(k+1) = F(X(k), u(k) + γu(k)) + γf(k)
z(k) = h(X(k)) + γh(k)   (283)

The effect of the input signal noise is approximated by a linear representation:

F(X(k), u(k) + γu(k)) + γf(k) ≈ F(X(k), u(k)) + γ(k)
γ(k) = Ju γu(k) + γf(k),  Ju = ∂F/∂u |_{X=X(k), u=u(k)}   (284)

The noise is assumed to be zero mean and white:

E{γf(k)} = E{γu(k)} = E{γh(k)} = 0
E{γf(i) γf^T(j)} = δij Qf(i)
E{γh(i) γh^T(j)} = δij R(i)
E{γu(i) γu^T(j)} = δij Qu(i)
E{γf(i) γh^T(j)} = 0
δij = 0 for i ≠ j, δij = 1 for i = j   (285)

so that

E{γ(i) γ^T(j)} = δij (Ju Qu(i) Ju^T + Qf(i)) = δij Q(i)
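The discrete model of Equation 281 and the encoder translation of Equation 280 can be sketched as follows. This is an illustrative Python version; the Euler step and the function names are this sketch's assumptions, while the parameter values are those quoted in the text:

```python
import numpy as np

L = 2.83          # wheel base (m), from the text
H_w = 0.75        # encoder wheel offset from the axle centre (m)
a = L + 0.95      # laser offset along the vehicle (m)
b = 0.5           # laser lateral offset (m)

def encoder_to_axle(v_e, alpha):
    """Translate the back-left wheel encoder velocity to the axle centre (Equation 280)."""
    return v_e / (1.0 - (H_w / L) * np.tan(alpha))

def predict_pose(pose, v_c, alpha, dT):
    """One Euler step of the laser-centre kinematic model (Equation 281)."""
    x, y, phi = pose
    t = np.tan(alpha)
    x += dT * (v_c * np.cos(phi) - (v_c / L) * (a * np.sin(phi) + b * np.cos(phi)) * t)
    y += dT * (v_c * np.sin(phi) + (v_c / L) * (a * np.cos(phi) - b * np.sin(phi)) * t)
    phi += dT * (v_c / L) * t
    return np.array([x, y, phi])
```

A quick sanity check of the discretisation: driving straight (α = 0) reduces to x advancing by ΔT · vc with no change of heading.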

An Extended Kalman Filter (EKF) observer based on the process and output models can be formulated in two stages: a prediction and an update stage. The prediction stage is required to obtain the predicted value of the states X and its error covariance P at time k based on the information available up to time k−1:

X(k+1, k) = F(X(k, k), u(k))
P(k+1, k) = J P(k, k) J^T + Q(k)   (286)

The update stage is a function of the observation model and the covariances, where

J = ∂F/∂X |_{(X,u)=(X(k),u(k))}   (287)

H = H(k) = ∂h/∂X |_{X=X(k)}   (288)

are the Jacobian matrices of the vectorial functions F(x, u) and h(x) with respect to the state X, and R is the covariance matrix characterizing the noise in the observations.

Under the SLAM framework the system will detect new features at the beginning of the mission and when exploring new areas. Once these features become reliable and stable they are incorporated into the map, becoming part of the state vector. The state vector is then given by:

X = [XL; XI]
XL = (xL, yL, φL)^T ∈ R³
XI = (x1, y1, .., xN, yN)^T ∈ R^{2N}   (289)

where (xL, yL, φL) and (xi, yi) are the states of the vehicle and of the features incorporated into the map, respectively. Since this environment is considered to be static, the dynamic model that includes the new states becomes:

XL(k+1) = f(XL(k)) + γ
XI(k+1) = XI(k)   (290)

It is important to remark that the landmarks are assumed to be static. The Jacobian matrix for the extended system is then

J1 = ∂F/∂X = [∂f/∂XL, 0; 0, I],  ∂f/∂XL ∈ R^{3×3},  0 ∈ R^{3×2N},  I ∈ R^{2N×2N}   (291)

The observations zr and zβ are obtained from a range and bearing sensor relative to the vehicle position and orientation. The observation equation given in Equation 282 is a function of the states of the vehicle and of the states representing the position of the landmark. The Jacobian matrix of the vector h with respect to the variables (xL, yL, φL, xi, yi) is

H = [∂zr/∂X; ∂zβ/∂X] = [∂ri/∂(xL, yL, φL, {xj, yj}j=1..N); ∂βi/∂(xL, yL, φL, {xj, yj}j=1..N)]   (292)

This Jacobian will always have a large number of null elements since only a few landmarks will be observed and validated at a given time. For example, when only one feature is observed the Jacobian has the following form:

H = [∂zr/∂X; ∂zβ/∂X] = [Δx/Δ, Δy/Δ, 0, ⋯, −Δx/Δ, −Δy/Δ, ⋯; −Δy/Δ², Δx/Δ², −1, ⋯, Δy/Δ², −Δx/Δ², ⋯]   (293)

where

Δx = (xL − xi),  Δy = (yL − yi),  Δ = sqrt(Δx² + Δy²)

These models can then be used with a standard EKF algorithm to build and maintain a navigation map of the environment and to track the position of the vehicle.
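The analytic Jacobian of Equation 293 can be checked against numerical differentiation. The following sketch is illustrative; the bearing convention β = atan2(yi − yL, xi − xL) − φL is an assumption of this sketch, chosen to match the signs of Equation 293:

```python
import numpy as np

def observe(x, lm):
    """Range and bearing to one landmark (the form of Equation 282)."""
    dx, dy = x[0] - lm[0], x[1] - lm[1]            # xL - xi, yL - yi
    r = np.hypot(dx, dy)
    beta = np.arctan2(-dy, -dx) - x[2]             # bearing relative to the heading
    return np.array([r, beta])

def observe_jacobian(x, lm):
    """Jacobian of h with respect to (xL, yL, phiL, xi, yi), as in Equation 293."""
    dx, dy = x[0] - lm[0], x[1] - lm[1]
    d2 = dx * dx + dy * dy
    d = np.sqrt(d2)
    return np.array([
        [ dx / d,   dy / d,   0.0, -dx / d,  -dy / d ],
        [-dy / d2,  dx / d2, -1.0,  dy / d2, -dx / d2],
    ])
```

Comparing this against a finite-difference approximation of `observe` confirms the analytic form term by term.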

7.5 Optimization of SLAM

Under the SLAM framework the size of the state vector is equal to the number of vehicle states plus twice the number of landmarks, that is M = 2N + 3. This is valid when working with point landmarks in 2-D environments. In most SLAM applications the number of vehicle states will be insignificant with respect to the number of landmarks. The number of landmarks grows with the area of operation, making the standard filter computation impracticable for on-line applications. In this section we present a series of optimizations in the prediction and update stages that reduce the complexity of the SLAM algorithm from O(M³) to O(M²). Then a compressed filter is presented to reduce the real-time computation requirement to O(2Na²), where Na is the number of landmarks in the local area. This also makes the SLAM algorithm extremely efficient when the vehicle remains navigating in this area, since the computational complexity becomes independent of the size of the global map. These algorithms do not make any approximations, and the results are exactly equivalent to a full SLAM implementation.

7.5.1 Standard Algorithm Optimization

Prediction Stage Considering the zeros in the Jacobian matrix of Equation 291, the prediction Equation 286 can be written:

P(k+1, k) = [J1, 0; 0, I] [P11, P12; P21, P22] [J1^T, 0; 0, I^T] + [QV, 0; 0, 0]   (294)

where I ∈ R^{2N×2N}, P12 ∈ R^{3×2N}, P21 = P12^T, and QV is the vehicle process noise covariance.

The time sub-indexes are not used in this explanation for clarity of presentation. Performing the matrix operations explicitly, the following result is obtained:

J P = [J1, 0; 0, I] [P11, P12; P21, P22] = [J1 P11, J1 P12; P21, P22]

J P J^T = [J1 P11 J1^T, J1 P12; P21 J1^T, P22]

(295)

It can be proved that the evaluation of this matrix requires approximately only 9M multiplications. In general, more than one prediction step is executed between two update steps. This is due to the fact that the prediction stage is usually driven by high frequency sensory information that acts as input to the dynamic model of the vehicle and needs to be evaluated in order to control the vehicle. The low frequency external sensors report the observations used in the estimation stage of the EKF. This information is processed at a much lower frequency. For example, the steering angle and wheel speed can be sampled every 20 milliseconds, but the laser frames may be obtained with a sample time of 200 milliseconds. In this case we have a ratio of approximately 10 prediction steps to one update step. The compact form for n prediction steps without an update is

P(k+n, k) = [P11(k+n, k), G1 P12; P21 G1^T, P22]   (296)

where P11(k+n, k) is obtained from the n low-order vehicle prediction steps and

G1 = G1(k, n) = ∏_{i=0}^{n−1} J1(k+i) = J1(k+n−1) ⋯ J1(k)   (297)
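The structure exploited in Equations 294 and 295 is easy to verify numerically: under prediction, only the vehicle rows and columns of P change. An illustrative check (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_lm = 5
M = 3 + 2 * n_lm

J1 = rng.standard_normal((3, 3))                      # vehicle block of the Jacobian
J = np.block([[J1, np.zeros((3, 2 * n_lm))],
              [np.zeros((2 * n_lm, 3)), np.eye(2 * n_lm)]])
A = rng.standard_normal((M, M))
P = A @ A.T                                           # full covariance

P11, P12 = P[:3, :3], P[:3, 3:]
P21, P22 = P[3:, :3], P[3:, 3:]
# Block form of Equation 295: small-block products instead of dense M x M products.
block = np.block([[J1 @ P11 @ J1.T, J1 @ P12],
                  [P21 @ J1.T,      P22]])

assert np.allclose(block, J @ P @ J.T)
```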


Update Stage Since only a few features associated with the state vector are observed at a given time, the Jacobian matrix H will have a large number of zeros. When only one feature is incorporated into the observation vector we have:

H = H(k) = ∂h/∂X |_{X=X(k)} = [H1, 01, H2, 02] ∈ R^{2×M},  M = (2N + 3)   (298)

with

H1 = ∂h/∂XL |_{X=X(k)} ∈ R^{2×3},  H2 = ∂h/∂Xi |_{X=X(k)} ∈ R^{2×2}

and 01, 02 null matrices. Then

P H^T = P1 H1^T + P2 H2^T,  P1 ∈ R^{M×3},  P2 ∈ R^{M×2}

where P1 and P2 are the sub-matrices of P whose columns correspond to the non-zero blocks H1 and H2.

It can be proved that this evaluation requires 10M multiplications. Using the previous result, the matrices S and W can be evaluated with a cost of approximately 20M:

S = H P H^T + R ∈ R^{2×2}
W = P H^T S^{−1} ∈ R^{M×2}   (299)
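The sparse structure of H in Equation 298 is what makes this operation count possible: P H^T needs only the columns of P aligned with the vehicle states and with the observed landmark. A small illustrative check:

```python
import numpy as np

rng = np.random.default_rng(2)
n_lm, i = 6, 2                          # observe landmark i
M = 3 + 2 * n_lm
A = rng.standard_normal((M, M))
P = A @ A.T

H1 = rng.standard_normal((2, 3))        # dh/dX_L block
H2 = rng.standard_normal((2, 2))        # dh/dX_i block
H = np.zeros((2, M))
H[:, :3] = H1
H[:, 3 + 2 * i: 5 + 2 * i] = H2

# P H^T from the two non-zero blocks only: P1 H1^T + P2 H2^T, O(M) work.
P1 = P[:, :3]
P2 = P[:, 3 + 2 * i: 5 + 2 * i]
PHt = P1 @ H1.T + P2 @ H2.T

assert np.allclose(PHt, P @ H.T)
```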

The cost of the state update operation is proportional to M. The main computational requirement is in the evaluation of the covariance update, where the complexity is O(M²).

Experimental results The SLAM algorithm presented was tested in an outdoor environment with a standard utility vehicle retrofitted with dead reckoning sensors and a laser range sensor, as shown in Figure 42. In this application the most common relevant features in the environment were trees. The profiles of the trees were extracted from the laser information. A Kalman filter was also implemented to reduce the errors due to the different profiles obtained when observing the trunks of the trees from different locations. The vehicle was started at a location with known uncertainty and driven in this area for approximately 20 minutes. Figure 43 presents the vehicle trajectory and the navigation landmarks incorporated into the relative map. This run includes all the features in the environment and the optimisation presented. The system built a map of the environment and localized itself. The accuracy of this map is determined by the initial vehicle position uncertainty and the quality of the combination of dead reckoning and external sensors. In this experimental run an initial uncertainty in coordinates x and y was assumed. Figure 44 presents the estimated error of the vehicle position and of selected landmarks. The states corresponding to the vehicle present oscillatory behaviour, displaying the maximum deviation farther from the initial position. This result is expected since there is no


absolute information incorporated into the process. The only way this uncertainty can be reduced is by incorporating additional information not correlated to the vehicle position, such as GPS position information or the recognition of a beacon located at a known position. It can also be seen that the covariances of all the landmarks are decreasing with time. This means that the map is learned with more accuracy while the vehicle navigates. The theoretical limit uncertainty, in the case of no additional absolute information, will be the original uncertainty in the vehicle location. Figure 45 presents the final estimation of the landmarks in the map. It can be seen that after 20 minutes the estimated errors of all the landmarks are below 60 cm.

Figure 42: Utility car used for the experiments. The vehicle is equipped with a Sick laser range and bearing sensor, a linear variable differential transformer sensor for the steering, and back wheel velocity encoders.

7.5.2 Compressed Filter

In this section we demonstrate that it is not necessary to perform a full SLAM update when working in a local area. This is a fundamental result because it reduces the computational requirement of the SLAM algorithm to the order of the number of features in the vicinity of the vehicle, independent of the size of the global map. A common scenario is to have a mobile robot moving in an area and observing features within this area. This situation is shown in Figure 46, where the vehicle is operating in a local area A. The rest of the map is part of the global area B. This approach also presents significant advantages when the vehicle navigates for long periods of time in a local area, or when the


Figure 43: Vehicle trajectory and landmarks. The * symbols show the estimated positions of objects that qualified as landmarks for the navigation system. The dots are laser returns that are not stable enough to qualify as landmarks. The solid line shows the 20 minute vehicle trajectory estimated using full SLAM.


Figure 44: History of selected estimated state errors. The vehicle states show oscillatory behaviour, with an error magnitude that decreases with time due to the learning of the environment. The landmarks present an exponentially decreasing estimated error, with a limit given by the initial uncertainty of the vehicle position.


Figure 45: Final estimated error of all states. For each state the final estimated error is presented. The maximum error is approximately 60 cm.

external information is available at a high rate. Although high frequency external sensors are desirable to reduce the position error growth, they also introduce a high computational cost in the SLAM algorithm. For example, a laser sensor can return 2-D information at frequencies of 4 to 30 Hz. Incorporating this information using the full SLAM algorithm would require updating M states at 30 Hz. In this work we show that while working in a local area, observing local landmarks, we can preserve all the information by processing a SLAM algorithm of the order of the number of landmarks in the local area. When the vehicle departs from this area, the information acquired can be propagated to the global landmarks without loss of information. This also allows incorporating high frequency external information with very low computational cost. Another important implication is that the global map does not need to be updated sequentially at the same rate as the local map.

Update step Consider the state vector divided into two groups:

X = [XA; XB],  XA ∈ R^{2NA+3},  XB ∈ R^{2NB},  X ∈ R^{2N+3},  N = NA + NB   (300)


Figure 46: Local and Global areas

The states XA can be initially selected as all the states representing landmarks in an area of a certain size surrounding the vehicle. The states representing the vehicle pose are also included in XA. Assume that for a period of time the observations obtained are only related to the states XA and do not involve states of XB, that is

h(X) = h(XA)   (301)

Then, at a given time k,

H = ∂h/∂X |_{X=X(k)} = [∂h/∂XA, ∂h/∂XB] = [Ha, 0]   (302)

Considering the zeros of the matrix H, the Kalman gain matrix W is evaluated as follows:

P = [Paa, Pab; Pba, Pbb]
P H^T = [Paa Ha^T; Pba Ha^T]
H P H^T = Ha Paa Ha^T
S = Ha Paa Ha^T + R   (303)
W = P H^T S^{−1} = [Paa Ha^T S^{−1}; Pba Ha^T S^{−1}] = [Wa; Wb]

From these equations it is possible to see that:

1. The Jacobian matrix Ha has no dependence on the states XB.


2. The innovation covariance matrix S and the Kalman gain Wa are functions of Paa and Ha only. They do not have any dependence on Pbb, Pab, Pba or Xb.

The update term ΔP of the covariance matrix can then be evaluated as

ΔP = W S W^T = [Paa β Paa, Paa β Pab; Pba β Paa, Pba β Pab]   (304)

with β = Ha^T S^{−1} Ha and η = Paa β. In the previous demonstration the time sub-indexes were neglected for clarity of presentation. These indexes are now incorporated to present the recursive update equations. The covariance matrix after one update is

Pab(k+1, k+1) = Pab(k+1, k) − η(k) Pab(k+1, k) = (I − η(k)) Pab(k+1, k)
Pbb(k+1, k+1) = Pbb(k+1, k) − Pba(k+1, k) β(k) Pab(k+1, k)   (305)

And the covariance variation after t consecutive updates:

Pab(k+t, k+t) = Φ(k+t−1) Pab(k, k)
Pbb(k+t, k+t) = Pbb(k, k) − Pba(k, k) Ψ(k+t−1) Pab(k, k)   (306)

with

Φ(k+t) = ∏_{i=k}^{k+t} (I − η(i)),  Φ(k−1) = I   (307)
Ψ(k+t) = Σ_{i=k}^{k+t} Φ^T(i−1) β(i) Φ(i−1),  Ψ(k−1) = 0

The evaluation of the matrices Φ(k) and Ψ(k) can be done recursively according to:

Φ(k+t) = (I − η(k+t)) Φ(k+t−1)
Ψ(k+t) = Ψ(k+t−1) + Φ^T(k+t−1) β(k+t) Φ(k+t−1)   (308)


with Φ(k), Ψ(k), β(k), η(k) ∈ R^{2Na×2Na}. During long term navigation missions, the number of states in Xa will in general be much smaller than the total number of states in the global map, that is Na << Nb < M. The matrices β(k) and η(k) are sparse, and the calculation of Φ(k) and Ψ(k) has complexity O(Na²). It is noteworthy that Xb, Pab, Pba and Pbb are not needed while the vehicle is navigating in a local region looking only at the states Xa. They are only required when the vehicle enters a new region. The evaluation of Xb, Pbb, Pab and Pba can then be done in one iteration, with full SLAM computational cost, using the compressed expressions. The estimates Xb can be updated after t update steps using

Xb(k+t, k+t) = Xb(k+t, k) − Pba(k, k) θ(k+t)   (309)

with

θ(k+t) = Σ_{i=k}^{k+t−1} μ(i) ν(i)

where μ(i) = Φ^T(i−1) Ha^T(i) S^{−1}(i) and ν(i) is the innovation at time i. For observations of dimension m, in this case range and bearing, μ(k) ∈ R^{2Na×m}, Z(k) ∈ R^m, Ψ(k) ∈ R^{2Na×2Na}, Ha(k) ∈ R^{m×2Na} and S(k) ∈ R^{m×m}. Similarly, since Ha is a sparse matrix, the evaluation cost of the matrix μ is proportional to Na. The derivation of these equations is presented in [12].

Extended Kalman Filter formulation for the compressed filter In order to maintain the information gathered in a local area it is necessary to extend the EKF formulation presented in Equations (286) and (287). The following equations must be added in the prediction and update stages of the filter to be able to propagate the information to the global map once a full update is required:

Prediction step:  Φ(k) = Jaa(k, k−1) Φ(k−1),  Ψ(k) = Ψ(k−1),  Φ(0) = I,  Ψ(0) = 0
Update step:      Φ(k) = (I − η(k)) Φ(k−1),  Ψ(k) = Ψ(k−1) + Φ^T(k−1) β(k) Φ(k−1)   (310)

When a full update is required, the global covariance matrix P and state X are updated with Equations (306) and (309) respectively. At this time the matrices Φ and Ψ are also re-set to their initial values of time 0, since the subsequent observations will be a function of a new set of local landmarks.

Map Management It has been shown that while the vehicle operates in a local area, all the information gathered can be maintained with a cost complexity proportional to the number of landmarks in this area. The next problem to address is the selection of local areas. One convenient approach consists of dividing the global map into rectangular


regions with size at least equal to the range of the external sensor. The map management method is presented in Figure 47. When the vehicle navigates in region r, the compressed filter includes in the group XA the vehicle states and all the states related to landmarks that belong to region r and its eight neighboring regions. This implies that the local states belong to 9 regions, each of the size of the range of the external sensor. The vehicle will be able to navigate inside this region using the compressed filter. A full update will only be required when the vehicle leaves the central region r. Every time the

Figure 47: Map management for the compressed algorithm. The local area is composed of nine squares of length approximately equal to the range of the sensor. The vehicle is always within the central region or inside the threshold area.

vehicle moves to a new region, the active state group XA changes to those states that belong to the new region r and its adjacent regions. The active group always includes the vehicle states. In addition to the swapping of the XA states, a global update is also required, at full SLAM algorithm cost. Each region has a list of landmarks that are known to be within its boundaries. Each time a new landmark is detected, the region that owns it appends an index of the landmark definition to its list of owned landmarks. It is not critical if the landmark actually belongs to this region or to a closely connected region. In the case of strong updates, where the estimated position of the landmarks changes significantly, the owners of those landmarks can also be changed. A hysteresis region is included bounding the local area r to avoid multiple map switching when the vehicle navigates in areas close to the boundaries between the region r and the surrounding areas. If the side length of the regions is smaller than the range of the external sensor, or if the hysteresis region is made


too large, there is a chance of observing landmarks outside the defined local area. These observations will be discarded, since they cannot be associated with any local landmarks. In such a case the resulting filter will not be optimal, since this information is not incorporated into the estimates. Although these marginal landmarks would not contribute significant information, since they are far from the vehicle, this situation can easily be avoided with an appropriate selection of the size of the regions and of the hysteresis band. Figure 47 presents an example of the application of this approach. The vehicle is navigating in the central region r, and if it never leaves this region the filter will maintain its position and the local map with the cost of a SLAM algorithm over the number of features in the local area formed by the 9 neighbouring regions. A full SLAM update is only required when the vehicle leaves the region.

Computational Cost The total computational requirement for this algorithm is O(Na²), and the cost of the update when the vehicle leaves the local area is O(Na Nb²). Provided that the vehicle remains for a period of time in a given area, the computational saving will be considerable. This has important implications, since in many applications it will allow the exact implementation of SLAM in very large areas, given an appropriate selection of local areas. The system evaluates the location of the vehicle and the landmarks of the local map continuously at the cost of a local SLAM. Although a full update is required at a transition, this update can be implemented as a parallel task. The only states that need to be updated immediately are the new states in the new local area. A selective update can then be done only to those states, while the full update for the rest of the map runs as a background task with lower priority.
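The region bookkeeping described above reduces to simple integer arithmetic on a grid. A minimal sketch (the function names are this sketch's assumptions; the 40 m side follows the experiment reported below):

```python
def region_of(x, y, side=40.0):
    """Grid cell (row, col) that owns the point (x, y), for square regions."""
    return (int(y // side), int(x // side))

def local_region_set(x, y, side=40.0):
    """Central region r plus its eight neighbours; landmarks here form X_A."""
    r, c = region_of(x, y, side)
    return {(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)}

def needs_full_update(prev_region, x, y, side=40.0):
    """A full (global) update is triggered when the vehicle leaves region r."""
    return region_of(x, y, side) != prev_region
```

In practice a hysteresis band around the central region would be added before triggering `needs_full_update`, as discussed above.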
These results are important since they demonstrate that even in very large areas the computational limitations of SLAM can be overcome with the compression algorithm and an appropriate selection of local areas.

Experimental Results The compressed algorithm was implemented using local regions of 40×40 metres. These regions are appropriate for the Sick laser range sensor used in this experiment. Figure 48 shows part of the trajectory of the vehicle, with the local area composed of 9 squares surrounding the vehicle. To demonstrate that the compressed algorithm maintains and propagates all the information obtained, the histories of the landmark covariances were compared with the ones obtained with the full SLAM algorithm. Figure 49 shows the time evolution of the standard deviation of a few landmarks. The dotted line corresponds to the compressed filter and the solid line to full SLAM. It can be seen that the estimated errors of some landmarks are not continuously updated with the compressed filter. These landmarks are not in the local area. Once the vehicle makes a transition, the system updates all the landmarks, performing a full SLAM update. At this time the landmarks outside the local area are updated in one iteration and their estimated errors become exactly equal to those of full SLAM. This is clearly


shown in Figure 50, where at the full update time stamps both estimated covariances become identical. Figure 51 shows the difference between the full SLAM and compressed filter estimated landmark covariances. It can be seen that at the full update time stamps the difference between the estimates produced by both algorithms becomes zero, as demonstrated before. This shows that while working in a local area it is possible to maintain all the information gathered with a computational cost proportional to the number of landmarks in the local area. This information can then be propagated to the rest of the landmarks in the map without any loss of information.
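The equivalence claimed above can be reproduced in a few lines. The sketch below is illustrative: it uses a static system with update steps only, arbitrary innovations, and this sketch's own sign convention for the correction term θ. It runs a full filter and a compressed filter side by side, then recovers the remote block in one step:

```python
import numpy as np

rng = np.random.default_rng(3)
na, nb = 4, 6                          # local and remote state sizes
A = rng.standard_normal((na + nb, na + nb))
P = A @ A.T + (na + nb) * np.eye(na + nb)
x = rng.standard_normal(na + nb)
Ha = rng.standard_normal((2, na))      # observations involve local states only
H = np.hstack([Ha, np.zeros((2, nb))])
R = np.eye(2)

Pf, xf = P.copy(), x.copy()            # full filter
Paa, xa = P[:na, :na].copy(), x[:na].copy()   # compressed filter touches this only
phi, psi, theta = np.eye(na), np.zeros((na, na)), np.zeros(na)

for _ in range(5):
    z = rng.standard_normal(2)         # arbitrary innovations for the check
    # --- full update ---
    S = H @ Pf @ H.T + R
    W = Pf @ H.T @ np.linalg.inv(S)
    xf = xf + W @ (z - H @ xf)
    Pf = Pf - W @ S @ W.T
    # --- compressed update (the recursions of Equations 303-310) ---
    Sinv = np.linalg.inv(Ha @ Paa @ Ha.T + R)
    beta = Ha.T @ Sinv @ Ha
    nu = z - Ha @ xa
    theta += phi.T @ Ha.T @ Sinv @ nu  # uses phi *before* this step
    psi += phi.T @ beta @ phi
    xa = xa + Paa @ Ha.T @ Sinv @ nu
    eta = Paa @ beta                   # uses Paa *before* this step
    phi = (np.eye(na) - eta) @ phi
    Paa = Paa - Paa @ beta @ Paa

# One-shot recovery of the remote states and covariances (Equations 306, 309).
Pab0, Pbb0 = P[:na, na:], P[na:, na:]
xb = x[na:] + Pab0.T @ theta
Pab = phi @ Pab0
Pbb = Pbb0 - Pab0.T @ psi @ Pab0

assert np.allclose(Paa, Pf[:na, :na])
assert np.allclose(Pab, Pf[:na, na:])
assert np.allclose(Pbb, Pf[na:, na:])
assert np.allclose(xa, xf[:na]) and np.allclose(xb, xf[na:])
```

After the one-shot recovery, all blocks match the full filter exactly, mirroring the behaviour shown in Figures 50 and 51.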


Figure 48: Vehicle and local areas. This plot presents the estimated trajectory and the estimated positions of the navigation landmarks. It also shows the local region r with its surrounding regions. The local states XA are the ones included in the nine regions shown enclosed by a rectangle in the bottom left corner of the figure.

7.5.3 Sub-Optimal SLAM

In this section we present a series of simplifications that can further reduce the computational complexity of SLAM. This sub-optimal approach reduces the computational requirements by considering only a subset of the navigation landmarks present in the global map. It is demonstrated that this approach is conservative and consistent, and can generate close to optimal results when combined with an appropriate relative map representation. Most of the computational requirements of the EKF arise during the update of the error covariance matrix. Once an observation is validated and associated to


Figure 49: Landmark estimated position error for full SLAM and the compressed filter. The solid line shows the estimated error provided by the full SLAM algorithm, which updates all the landmarks with each observation. The dotted line shows the estimated error provided by the compressed filter. The landmarks that are not in the local area are only updated when the vehicle leaves the local area. At this time a full update is performed and the estimated error becomes exactly equal to that of full SLAM.



Figure 50: Landmark estimated position error for full SLAM and the compressed filter (enhanced). This plot presents an enhanced view of the instant when the compressed algorithm performs a full update. At time 165 the full SLAM (solid line) and the compressed algorithm (solid line with dots) report the same estimated error, as predicted.


Figure 51: Estimated error differences between full SLAM and the compressed filter. The estimated error difference between both algorithms becomes identically zero when the full update is performed by the compressed algorithm.

a given landmark, the covariance error matrix of the states is updated according to

P = P − ΔP,  ΔP = W S W^T   (311)

The time sub-indexes are neglected when possible to simplify the equations. The state vector can be divided into two groups, the Preserved P and the Discarded D states:

X = [XP; XD],  XP ∈ R^{NP},  XD ∈ R^{ND},  X ∈ R^N,  N = NP + ND   (312)

With this partition it is possible to generate conservative estimates by updating the states XD but not updating the covariance and cross-covariance matrices corresponding to this sub-vector. The covariance matrix can then be written in the following form:

P = [PPP, PPD; PDP, PDD],  ΔP = [ΔPPP, ΔPPD; ΔPDP, ΔPDD] = W S W^T   (313)

Conservative updates are obtained if the nominal update matrix ΔP is replaced by the sub-optimal ΔP*:

ΔP* = [ΔPPP, ΔPPD; ΔPDP, 0],  P* = P − ΔP*

It can be shown that this simplification generates consistent error covariance estimates.

Demonstration: the covariance error matrix P(k+1) can be rewritten as follows:

P(k+1) = P(k) − ΔP* = P(k) − ΔP + Δ̃P   (314)

where

ΔP = [ΔPPP, ΔPPD; ΔPDP, ΔPDD] = W S W^T   (315)

Δ̃P = ΔP − ΔP* = [0, 0; 0, ΔPDD],  ΔPDD = WD SD WD^T ≥ 0   (316)

As given in Equation 314, the total update is formed by the optimal update plus an additional positive semi-definite noise matrix Δ̃P. The matrix Δ̃P will increase the covariance uncertainty:

P*(k+1) = P(k+1) + Δ̃P   (317)

then the sub-optimal update of P becomes more conservative than the full update:

P(k+1) ≤ P*(k+1) ≤ P(k)   (318)

Finally, the sub-matrices that need to be evaluated are ΔPPP, ΔPPD and ΔPDP. The significance of this result is that ΔPDD is not evaluated. In general this matrix will be of high order, since it includes most of the landmarks. The fundamental problem then becomes the selection of the partition P and D of the state vector. The diagonal of the matrix ΔP can be evaluated on-line with low computational cost. By inspecting the diagonal elements of ΔP we find that many terms are very small compared to the corresponding covariance values in the matrix P. This indicates that the new observation does not contribute significant information to this particular state, and is an indication to select that state as belonging to the subset D. The other criterion used is based on the value of the actual covariance of the state: if it is below a given threshold, it can be a candidate for the sub-vector D. In many practical situations a large number of landmarks can usually be associated with the sub-vector D. This will introduce significant computational savings, since PDD can potentially become larger than PPP. The cross-covariances PPD and PDP are still maintained, but are in general of lower order, as can be appreciated in Figure 52. Finally, the selection criteria to obtain the partition of the state vector can be given by the union of the following sets Ii:

I1 = {i : ΔP(i, i) < c1 P(i, i)},  I2 = {i : P(i, i) < c2},  I = I1 ∪ I2   (319)
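The conservative property of Equations 314-318 can be verified numerically: zeroing the Discarded block of the update leaves a covariance that is larger, in the positive semi-definite sense, than the optimal one. An illustrative sketch (sizes and matrices arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n_p, n_d = 3, 4                         # Preserved / Discarded partition sizes
n = n_p + n_d
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)             # prior covariance
H = rng.standard_normal((2, n))
R = np.eye(2)

S = H @ P @ H.T + R
W = P @ H.T @ np.linalg.inv(S)
dP = W @ S @ W.T                        # full update term (Equation 311)

dP_sub = dP.copy()
dP_sub[n_p:, n_p:] = 0.0                # skip the Discarded-Discarded block (Eq. 314)

P_opt = P - dP
P_sub = P - dP_sub

# P_sub - P_opt equals the skipped block dP_DD, a psd matrix, so the
# sub-optimal covariance is conservative (Equation 318, left inequality).
assert np.linalg.eigvalsh(P_sub - P_opt).min() >= -1e-9
```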

Figure 52: Full covariance matrix divided into the covariance blocks corresponding to the vehicle and preserved landmark states (X_P) and the discarded landmark states (X_D). The cross-correlation covariances between the preserved and discarded states are fully updated, as shown in grey. The cross-correlations between the elements of the states corresponding to the discarded landmarks are not updated, as shown in white.

The error covariance matrix is updated with the simplified matrix ΔP̃:

\tilde{P}(k+1, k+1) = P(k+1, k) - \Delta\tilde{P} \qquad (321)

The practical meaning of the set I_1 is that, with an appropriate selection of c_1, we can reject negligible covariance updates. As mentioned before, the selection of I_1 requires the evaluation of the diagonal elements of the matrix ΔP. The evaluation of the ΔP(i,i) elements requires a number of operations proportional to the number of states, instead of the quadratic cost required for the evaluation of the complete matrix ΔP. The second subset, defined by I_2, is related to the states whose covariances are small enough to be considered practically zero. In the case of natural landmarks, these become almost equivalent to beacons at known positions. The number of elements in the set I_2 will increase with time and can eventually make the computational requirements of SLAM algorithms comparable to those of standard beacon localisation algorithms. Finally, the magnitude of the computational saving depends on the size of the set I. With appropriate exploration policies and real-time mission planning, the computational requirements can be maintained within the bounds of the on-board resources.

Implementation Issues: Relative Map Representation

The sub-optimal approach presented becomes less conservative when the cross-correlation between the non-relevant landmarks becomes small. This is very unlikely if an absolute reference frame is used, that is, when the vehicle, landmarks and observations are represented with respect to

a single reference frame. The cross-correlations between landmarks of different regions can be substantially decreased by using a number of different bases and making the observations relative to those bases. With this representation the map becomes grouped in constellations. Each constellation has an associated frame based on two landmarks that belong to that constellation. The base landmarks that define the associated frame are represented in a global frame. All the other landmarks that belong to the constellation are defined in the local frame. For a particular constellation, the local frame is based on the two base landmarks

L_a = \begin{bmatrix} x_a \\ y_a \end{bmatrix}, \qquad L_b = \begin{bmatrix} x_b \\ y_b \end{bmatrix} \qquad (322)

It is possible to define two unit vectors that describe the orientation of the base frame:

v_1 = \frac{L_b - L_a}{\left\| L_b - L_a \right\|} = \frac{1}{\sqrt{(x_b - x_a)^2 + (y_b - y_a)^2}} \begin{bmatrix} x_b - x_a \\ y_b - y_a \end{bmatrix} = \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix}, \qquad v_2 = \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} \qquad (323)

with \langle v_2, v_1 \rangle = 0.

The rest of the landmarks in this particular constellation are represented using a local frame with origin at L_a and axes parallel to the vectors v_1 and v_2:

L_i = \begin{bmatrix} x_i \\ y_i \end{bmatrix} \;\rightarrow\; L_i^{a} = \begin{bmatrix} \varepsilon_i \\ \eta_i \end{bmatrix} \qquad (324)

with

\varepsilon_i = \langle L_i - L_a, v_1 \rangle = (L_i - L_a)^T v_1, \qquad \eta_i = \langle L_i - L_a, v_2 \rangle = (L_i - L_a)^T v_2 \qquad (325)

The following expression can be used to obtain the absolute coordinates from the relative coordinate representation:

L_i = L_a + \varepsilon_i v_1 + \eta_i v_2 \qquad (326)
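Equations (323)-(326) amount to a change of basis. A minimal sketch (hypothetical function names, NumPy used for illustration) with a round-trip check:

```python
import numpy as np

def base_frame(La, Lb):
    """Unit vectors of Eq. (323) defined by the base landmarks La, Lb."""
    v1 = (Lb - La) / np.linalg.norm(Lb - La)
    v2 = np.array([-v1[1], v1[0]])     # perpendicular, so <v2, v1> = 0
    return v1, v2

def to_relative(Li, La, v1, v2):
    """Relative coordinates (eps_i, eta_i) of Eq. (325)."""
    d = Li - La
    return np.array([d @ v1, d @ v2])

def to_absolute(rel, La, v1, v2):
    """Eq. (326): Li = La + eps_i * v1 + eta_i * v2."""
    return La + rel[0] * v1 + rel[1] * v2

# Hypothetical base landmarks and one constellation landmark
La = np.array([0.0, 0.0])
Lb = np.array([4.0, 3.0])
v1, v2 = base_frame(La, Lb)
Li = np.array([2.0, 5.0])

rel = to_relative(Li, La, v1, v2)
print(np.allclose(to_absolute(rel, La, v1, v2), Li))   # True: round trip recovers Li
```

The particular choice of v_2 as the 90-degree rotation of v_1 is one convenient way to satisfy the orthogonality condition of Equation (323).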

The reference frame is formed with two landmarks, as shown in Figure 53. The observations are then obtained relative to this frame. Assuming that the external sensor returns range and bearing, the observation functions are:


h_i = L_i - X_L - R_i \begin{bmatrix} \cos(\alpha_i) \\ \sin(\alpha_i) \end{bmatrix} = 0, \qquad \alpha_i = \theta_i + \phi - \pi/2 \qquad (327)

where θ_i is the object angle with respect to the laser frame, R_i is the object range with respect to the laser, and (X_L, φ) = (x_L, y_L, φ) are the vehicle states. Finally, substituting the relative representation of the landmark:

h_i = L_a + \varepsilon_i v_1 + \eta_i v_2 - X_L - R_i \begin{bmatrix} \cos(\alpha_i) \\ \sin(\alpha_i) \end{bmatrix} = 0 \qquad (328)

With this representation the landmarks defining the bases will become the natural preserved landmarks. The observations in each constellation will be evaluated with respect to the bases and can be considered, in the limit, as observations contaminated with white noise. This will make the relative elements of the constellation uncorrelated with the relative elements of the other constellations. The only landmarks that will maintain strong correlation will be the ones defining the bases, which are represented in absolute form.
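A sketch of how a range-and-bearing observation can be predicted for a landmark stored in relative form (all names are hypothetical, and the laser bearing convention theta_i = atan2(dy, dx) - phi + pi/2 is an assumption of this sketch, not a statement of the sensor mounting used in the experiments):

```python
import numpy as np

def predict_observation(XL, phi, La, rel, v1, v2):
    """Predicted range and laser bearing for a landmark in relative form.

    XL: vehicle position (xL, yL); phi: vehicle heading.
    La, v1, v2: base landmark and frame of the constellation.
    rel = (eps_i, eta_i): relative coordinates, Eq. (324)-(325).
    """
    Li = La + rel[0] * v1 + rel[1] * v2            # Eq. (326)
    dx, dy = Li - XL
    R = np.hypot(dx, dy)                           # predicted range R_i
    theta = np.arctan2(dy, dx) - phi + np.pi / 2   # assumed bearing convention
    return R, theta

# Hypothetical setup: vehicle at the origin with zero heading, and a
# constellation whose base frame coincides with the global axes.
XL = np.array([0.0, 0.0])
La = np.array([0.0, 0.0])
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
R, theta = predict_observation(XL, 0.0, La, np.array([0.0, 2.0]), v1, v2)
```

Comparing this prediction against the actual sensor return gives the innovation used by the filter; the only difference from the absolute formulation is the extra reconstruction of L_i from the base frame.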

Figure 53: Local constellation frame. The base landmarks L_a and L_b define the frame axes, and a landmark L_i is represented by its relative coordinates (ε_i, η_i).

Experimental Results

The next set of plots presents a comparison of the performance of the sub-optimal algorithm. Figures 54 and 55 present two runs, one using most of the states and the other with only 100 states. The plots show that the total number of states used by the system grows with time as the vehicle explores new areas. The number of states used by the system is shown in grey, and the number of states not updated with stars '*'. In the first run, very conservative values were selected for the constants c_1 and c_2 that define the sets I_1 and I_2, so most of the states were updated with each observation. The second run corresponds to a less conservative selection plus a limitation on the maximum number of states. Figure 55 shows that a large number of states are not updated at every time step, resulting in a significant reduction in the computational cost of the algorithm. From Figures 56 and 57 it can be seen that the accuracy of the SLAM algorithm has not been degraded by this simplification. These figures present the final estimated error of all the states for both runs. It is noteworthy that only the bases are represented in absolute form. The other states are represented in relative form and their standard deviations become much smaller. One important remark regarding the advantage of the relative representation with respect to the simplification presented: since the bases are in absolute form, they will maintain a strong correlation with the other bases and the vehicle states. They will be more likely to be chosen as preserved landmarks, since the observations will contribute more to them than to the relative states belonging to distant bases. In fact, the states chosen will most likely be the bases and the states associated with the landmarks in the local constellation. It is also important to remark that with this representation the simplification becomes less conservative than when using the absolute representation.
This can clearly be seen by looking at the correlation coefficients for all the states in each case. This is shown in Figures 58 and 59, where the correlations of the relative and absolute maps respectively are presented. In Figure 58 each block of the diagonal corresponds to a particular constellation, and the last block contains the vehicle states and the bases. The different constellations become de-correlated from each other and remain correlated only to that block, whose cross-correlations are updated by the sub-optimal algorithm presented. These results imply that with the relative representation the cross-correlation between constellations becomes zero, and the sub-optimal algorithm presented becomes close to optimal. This is not the case for the absolute representation, as shown in Figure 59, where all the states maintain strong cross-correlations. Finally, Figure 60 presents the results of a 4 km trajectory using the compressed algorithm in a large area. In this case there are approximately 500 states in the global map. The system created 19 different constellations to implement the relative map. The cross-correlation coefficients between the different constellations become very small, as shown in Figure 61. This demonstrates the advantages of the compressed algorithm, since the local areas are significantly smaller than the global map. When compared with the full SLAM implementation, the algorithm generated identical results (states and covariance) with the advantage of having very low computational requirements. For larger areas the algorithm becomes more efficient, since the cost is basically a function of the number of local


landmarks. These results are important since they demonstrate that, even in very large areas, the computational limitations of SLAM can be overcome with the compressed algorithm and an appropriate selection of local areas.

Figure 54: Total number of states, and states used and not updated. The figure presents the total number of states with a solid black line. This number is increasing because the vehicle is exploring new areas and validating new landmarks. The states used by the system are represented in grey. The number of states not used is represented with '*'. In this run the system used most of the states available.


Figure 55: Total number of states, and states used and not updated. In this run a maximum number of states was fixed as a constraint for the sub-optimal SLAM algorithm. This can be appreciated in the grey plot, where the number of states used remains below a given threshold. The number of states not updated increases with time.

Figure 56: Final estimation errors (1 sigma, in meters) for the relative and absolute states using most of the states.


Figure 57: Final estimated error (1 sigma, in meters) of the relative and absolute states using a reduced number of states. These results are similar to the ones obtained using most of the states. This shows that the algorithm is not only consistent but close to optimal when used with the appropriate map representation.


Figure 58: Correlation coefficients for the relative representation. Each block represents the cross-correlation coefficients of the elements of the different constellations. The block in the right corner contains the vehicle states and all the bases. It can be seen that the cross-correlation between different constellations is very small. The non-zero cross-correlation between the bases and the different constellations is also clear. These correlations are updated by the sub-optimal filter.

Figure 59: Correlation coefficients for the absolute representation. In this case the map appears completely correlated, and the sub-optimal algorithm will generate consistent but more conservative results.


Figure 60: Constellation map and vehicle trajectory. 19 constellations were formed by the algorithm. The origin of each base is presented with a '+' and the other end of the base segment with an 'o'. The relative landmarks are represented with '*', and their association with a base is represented by a line joining each landmark with the origin of its relative coordinate system.

Figure 61: Cross-correlation coefficients, with states ordered by constellation. The plot shows the 19 constellations and a block in the right-hand corner containing the correlation coefficients for the bases and the vehicle states. It can be appreciated that the cross-correlation between the relative states of the different bases is very small.


