
Project 2: Global Positioning System (GPS) Algorithm

Samuel Vineyard
Professor Ken Kreutz-Delgado
ECE 174: Linear and Nonlinear Optimization
12/1/16

Contents
Objective
Derivation of Pseudorange Equation
Linearization of Pseudorange Equation
Generalized Gradient Descent Algorithms
   Standard Gradient Descent Algorithm
   Gauss-Newton Algorithm
   Algorithm Termination Criteria
Algorithm Simulation
   Synthetic Data Generation
   Application of Gradient Descent Algorithm
   Application of Gauss-Newton Algorithm
Data Analysis and Discussion

Objective

The purpose of this project is to utilize the pseudorange measurements between our vehicle and four Earth-orbiting GPS satellites to accurately locate the vehicle in 3D space. The pseudorange is an error-laden range measurement derived from timing information and knowledge of the satellites' orbits. These errors originate from a multitude of factors. First, our receiver has an inaccurate clock that is not synchronized to the atomic clocks aboard the satellites. Second, transmission of signals through the Earth's atmosphere can cause distortion. Lastly, noise in both our communication channel and our sensors degrades positioning accuracy. By linearizing the pseudorange equation and applying either the standard gradient descent or the Gauss-Newton algorithm, we can approximate the vehicle's position through some number of iterations in discrete time.

Figure 1: Four satellites transmitting to GPS Receiver

Derivation of Pseudorange Equation



This section provides a concrete mathematical description of how the pseudorange equation comes about. For this derivation, we consider only one satellite, $\ell$, to avoid matrices and maintain simplicity. Using the atomic clock of the satellite, we have the true time of transmission $t_s^\ell$ as well as the satellite's precise location in space, $s^\ell$. The signal is received at true time $t$, and for the average speed of light $\bar{c}$ (accounting for the effects of atmospheric propagation), we have the true range equation:

$$ R^\ell = \bar{c}\,(t - t_s^\ell) = \| s - s^\ell \| $$

where $s$ denotes the position of our receiver.
However, since our GPS receiver has a relatively inaccurate clock, the measured time of reception $\hat{t}$ is really:

$$ \hat{t} = t + \delta t $$
We can see here that our synchronization error is $\delta t$. In addition, our receiver does not use the adjusted value for the speed of light, $\bar{c}$, but instead the speed of light in a vacuum, $c$. This leads to the erroneous approximation known as the pseudorange:

$$ y^\ell = c\,(\hat{t} - t_s^\ell) $$
As can clearly be seen, we have two errorful values: $\hat{t}$ and $c$. We now formulate a mathematical relationship between the pseudorange and the true range. We define our clock bias error as:

$$ b = c\,\delta t $$

which is in units of distance: the distance light travels in a vacuum during $\delta t$. Our pseudorange equation then becomes:

$$ y^\ell = c\,(\hat{t} - t_s^\ell) = c\,(t - t_s^\ell) + b $$
Since $c$ is an idealized value in this case, we write it as the sum of the average speed of light $\bar{c}$ and a perturbation $\delta c$, i.e., $c = \bar{c} + \delta c$. This yields the complete formulation of the pseudorange, with its components, true range and noise, made evident in their nature:

$$ y^\ell = \bar{c}\,(t - t_s^\ell) + \delta c\,(t - t_s^\ell) + b = \| s - s^\ell \| + b + \nu^\ell, \qquad \nu^\ell = \delta c\,(t - t_s^\ell) $$
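For a sense of scale (an illustrative number of our own, not from the project data): since light travels at roughly $3 \times 10^8$ m/s, a receiver clock error of only $\delta t = 1\,\mu\text{s}$ produces a clock bias of $b = c\,\delta t \approx 300$ m, which is why the bias must be estimated jointly with the position rather than ignored.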

Linearization of Pseudorange Equation




Unfortunately, the pseudorange equation we have derived is a nonlinear function, which increases the difficulty of obtaining accurate location measurements. It is imperative to linearize the pseudorange to allow for the implementation of the approximation algorithms that follow. We begin by defining our vector $x$ and function $h(x)$ so that $y = h(x) + \nu$:

$$ x = \begin{bmatrix} s \\ b \end{bmatrix}, \qquad h(x) = \begin{bmatrix} \| s - s^1 \| \\ \vdots \\ \| s - s^4 \| \end{bmatrix} + b\,e $$

Here $s$ is the location of our receiver and $e$ is simply a column vector of ones, since the receiver clock bias error is identical for all four satellites. We then utilize the Taylor series expansion of our vector-valued function $h(x)$ about a point $\bar{x}$:

$$ h(x) = h(\bar{x}) + \frac{\partial h}{\partial x}(\bar{x})\,(x - \bar{x}) + \text{higher-order terms} $$
In the general case of a nonlinear function $y(x)$, this approximation relates a change in $x$ to the resulting change in $y$:

$$ \delta y \approx \frac{\partial y}{\partial x}(\bar{x})\,\delta x $$
Our algorithms of choice, gradient descent and Gauss-Newton, are first-order methods, allowing us to ignore the higher-order terms and avoid the challenges of second-order derivatives. At this point we have derived a critical component of our linearization process, the Jacobian matrix:

$$ H(x) = \frac{\partial h}{\partial x}(x) = \begin{bmatrix} (s - s^1)^T / \| s - s^1 \| & 1 \\ \vdots & \vdots \\ (s - s^4)^T / \| s - s^4 \| & 1 \end{bmatrix} $$

which can be defined as the matrix of all first-order partial derivatives of a vector-valued function. Our final product of linearizing the pseudorange equation can then be shown in all its glory:

$$ \delta y = y - h(\bar{x}) \approx H(\bar{x})\,(x - \bar{x}) = H(\bar{x})\,\delta x $$

Note that the accuracy of this linear approximation decreases as the deviation $\delta x$ from the linearization point increases.
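As a concrete illustration, here is a minimal MATLAB sketch of this Jacobian (the function and variable names are our own assumptions rather than the project's code; S is taken to be a cell array holding the four satellite positions as column vectors):

    % Jacobian of h at x = [s; b]: row l is [(s - s^l)' / ||s - s^l||, 1].
    % The last column is all ones because dh_l/db = 1 for every satellite.
    function H = jacobian_h(s, S)
    m = numel(S);                        % number of satellites (four here)
    H = zeros(m, numel(s) + 1);          % preallocate m x 4
    for l = 1:m
        d = s - S{l};                    % vector from satellite l to receiver
        H(l, 1:numel(s)) = d' / norm(d); % unit line-of-sight row vector
        H(l, end) = 1;                   % clock bias column
    end
    end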

Generalized Gradient Descent Algorithms



Now that we have formulated a linearization of the pseudorange equation, it is time to develop our two methods of location approximation in discrete time. These algorithms fall under the family of generalized gradient descent algorithms for minimizing the loss function, i.e., the error in our approximation. We define this nonlinear weighted least-squares loss function as:

$$ l(x) = \tfrac{1}{2}\,\| y - h(x) \|_W^2 = \tfrac{1}{2}\,(y - h(x))^T\,W\,(y - h(x)) $$
for the $y$ and $h(x)$ we have seen before, where $W$ is a positive-definite weighting matrix. Moving forward with the linearization of such a function, we consider its derivative (16) and the resulting Taylor series approximation (17):

$$ \nabla_x l(x) = -H^T(x)\,W\,(y - h(x)) \tag{16} $$

$$ l(x + \delta x) \approx l(x) + \nabla_x l(x)^T\,\delta x \tag{17} $$
This approximation can have a high degree of accuracy provided that $\| \delta x \|$ is small enough. To enforce this condition, we introduce the step size $\alpha > 0$ as a design parameter to guarantee stability and convergence. The importance of this step size can be seen from the following relation:

$$ l(x_{k+1}) = l(x_k + \delta x_k) \approx l(x_k) + \nabla_x l(x_k)^T\,\delta x_k $$

We then expect each iteration of the algorithm to produce a strictly decreasing loss function:

$$ l(x_{k+1}) < l(x_k), \qquad k = 0, 1, 2, \ldots $$
However, if our step size is too large, our approximation (17) proves to be invalid. Provided that our step size is small enough, we can proceed to define the correction of $x$ as:

$$ \delta x_k = x_{k+1} - x_k = -\alpha\,Q(x_k)\,\nabla_x l(x_k) \tag{18} $$

where $Q(x)$ is an arbitrary positive-definite, symmetric matrix-valued function of $x$. From (18) we define $Q(x)\,\nabla_x l(x)$ as the generalized gradient and the term $-\alpha\,Q(x)\,\nabla_x l(x)$ as the generalized gradient-descent correction. Utilizing these components, we can finally formulate the generalized gradient descent algorithm:

$$ x_{k+1} = x_k + \alpha\,Q(x_k)\,H^T(x_k)\,W\,(y - h(x_k)) $$
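Before specializing, a minimal MATLAB sketch of one update step may help make this structure concrete (the function and variable names are our own assumptions, not the project's code, and the weighting W is taken to be the identity):

    % One step of the generalized gradient descent update:
    %   x_{k+1} = x_k + alpha * Q(x_k) * H'(x_k) * (y - h(x_k)),  with W = I.
    % Q_fun returns the chosen Q(x) (it may also use the Jacobian H);
    % h_fun and jac_fun evaluate the measurement model h and its Jacobian.
    function x_next = ggd_step(x, y, alpha, Q_fun, h_fun, jac_fun)
    H = jac_fun(x);              % Jacobian H(x) of the measurement model
    r = y - h_fun(x);            % residual y - h(x)
    x_next = x + alpha * Q_fun(x, H) * (H' * r);
    end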

Standard Gradient Descent Algorithm




In choosing $Q(x) = I$ in the generalized gradient descent algorithm, we arrive at a special case known as the standard gradient descent algorithm:

$$ x_{k+1} = x_k + \alpha\,H^T(x_k)\,W\,(y - h(x_k)) $$

In this subset of the generalized algorithm, we find that our step size must be significantly smaller than that of our second special-case algorithm, the Gauss-Newton method. For our application, the step size is chosen to be $\alpha = 0.1$. As a result, our rate of convergence should prove to be significantly slower.
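Reusing the ggd_step sketch above (same assumed names), the standard choice amounts to passing the identity for Q:

    alpha = 0.1;                         % small step size required for stability
    Q_identity = @(z, H) eye(numel(z));  % Q(x) = I gives standard gradient descent
    x_next = ggd_step(x, y, alpha, Q_identity, h_fun, jac_fun);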

Gauss-Newton Algorithm


In choosing $Q(x) = \left( H^T(x)\,W\,H(x) \right)^{-1}$ in the generalized gradient descent algorithm, we arrive at our second special case, known as the Gauss-Newton algorithm:

$$ x_{k+1} = x_k + \alpha\,\left( H^T(x_k)\,W\,H(x_k) \right)^{-1} H^T(x_k)\,W\,(y - h(x_k)) $$

This algorithm is less sensitive to the step-size design parameter, and good performance can result from a step size of $\alpha = 1$. The only drawback of the Gauss-Newton method is that its per-iteration computational cost is significantly higher.
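Under the same assumed naming (and W = I), the Gauss-Newton choice of Q can be sketched as:

    alpha = 1;                   % Gauss-Newton tolerates a unit step size
    Q_gn = @(z, H) inv(H' * H);  % Q(x) = (H'*H)^(-1); in production code,
                                 % solving with backslash is numerically safer
    x_next = ggd_step(x, y, alpha, Q_gn, h_fun, jac_fun);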

Algorithm Termination Criteria


In order to implement these algorithms on a computer, namely in Matlab, we must think in terms of discrete time. Both methods rely on a certain number of iterations to reach a final approximation of $x$ (location coordinates and clock bias error). During this process we must define termination criteria so that our conditional loop in Matlab knows when the loss function has converged and the loop can exit. For both the gradient-descent and Gauss-Newton methods, the termination criterion is defined to be:

$$ \| x_{k+1} - x_k \| < \epsilon $$

In other words, when the norm of the difference between our current estimate of $x$ and the previous iteration's estimate of $x$ is less than some defined parameter $\epsilon$, the conditional loop should exit. In this case, $\epsilon$ is chosen to be 10 centimeters.
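A minimal Matlab loop skeleton implementing this criterion (names reused from the earlier sketches and assumed rather than taken from the project code; the threshold assumes x is expressed in meters):

    epsilon = 0.10;              % 10 cm threshold (assuming distances in meters)
    delta = inf;                 % ensures at least one iteration runs
    while delta >= epsilon
        x_next = ggd_step(x, y, alpha, Q_fun, h_fun, jac_fun);  % either method
        delta = norm(x_next - x);                               % ||x_{k+1} - x_k||
        x = x_next;
    end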

Algorithm Simulation



To begin the Matlab simulations for both algorithms, we must first generate synthetic data and formulate a pseudorange equation for all four satellites. In order to compare against the approximations produced by our algorithms, we are given the actual receiver position, assumed to be at mean sea level, along with all four satellite positions $s^1, \ldots, s^4$. The clock bias error $b$ is specified in the project data in units of Earth radii (ER), where 1 ER = 6370 km. To begin our iterations for approximating the receiver's position, we are given an initial estimate of the vehicle's location and clock bias error. This estimate is an extremely crude guess that is 0.5 km higher in elevation and 2,330 km off in position from our actual vehicle. It is then our approximation algorithms' job to bring this back to a correct estimate. Note that for this simulation, we are generating synthetic data for the zero-noise case.

Synthetic Data Generation


In Matlab, the cell array data structure is utilized to store each satellite's precise location. A simple for-loop then iterates over the satellites and computes each one's true range:

$$ R^\ell = \| s - s^\ell \|, \qquad \ell = 1, \ldots, 4 $$

Adding the clock bias error $b$ to the true range of each satellite yields the synthetic pseudorange required for the simulation efforts that follow.
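A minimal sketch of this generation step (variable names are our own; s_true and b_true denote the given receiver position and clock bias, and S the cell array of satellite positions):

    % Synthetic pseudoranges for the zero-noise case: y_l = ||s - s^l|| + b.
    m = numel(S);
    y = zeros(m, 1);                     % preallocate the measurement vector
    for l = 1:m
        y(l) = norm(s_true - S{l}) + b_true;
    end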

Application of Gradient Descent Algorithm


Before any iterations are performed using the gradient-descent algorithm, all arrays/matrices are initialized using the zeros() function to avoid the runtime cost of dynamically growing each array's memory allocation.

A while-loop with the conditional termination criterion specified earlier is utilized to perform enough iterations to locate our vehicle and approximate our clock bias error. With each iteration $k$, a new estimate $x_{k+1}$ is formulated directly from the gradient descent equation. This equation requires (1) the synthetic data we generated earlier, (2) the $h(x)$ function (true range + clock bias error), and (3) the Jacobian matrix for linearization.

[Flowchart of the gradient-descent loop: check termination criteria; compute the Jacobian for linearization; compute the location/clock bias error estimate for the next iteration using gradient descent; update the current estimate variables (s and b); compute h(x) = true range + b.]

During this while-loop, a few quantities are calculated for data analysis and visualization via 2D and 3D line plots in a later section of the Matlab code: (1) the loss function, (2) the receiver position estimate error, and (3) the clock bias estimate error.
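Putting the earlier sketches together, an end-to-end sketch of this loop could look as follows (again with our own assumed names, W = I, distances in meters, and the helper functions ggd_step and jacobian_h defined above):

    % Gradient-descent localization loop with per-iteration diagnostics.
    max_iter = 100000;                   % generous preallocation bound
    loss = zeros(max_iter, 1);           % loss function per iteration
    pos_err = zeros(max_iter, 1);        % receiver position estimate error
    bias_err = zeros(max_iter, 1);       % clock bias estimate error
    h_fun   = @(x) cellfun(@(sl) norm(x(1:3) - sl), S(:)) + x(4);
    jac_fun = @(x) jacobian_h(x(1:3), S);
    alpha = 0.1; epsilon = 0.10;         % step size and 10 cm threshold
    x = x0;                              % crude initial guess [s0; b0]
    delta = inf; k = 0;
    while delta >= epsilon && k < max_iter
        k = k + 1;
        x_next = ggd_step(x, y, alpha, @(z, H) eye(4), h_fun, jac_fun);
        delta = norm(x_next - x);
        x = x_next;
        loss(k) = 0.5 * norm(y - h_fun(x))^2;
        pos_err(k) = norm(x(1:3) - s_true);
        bias_err(k) = abs(x(4) - b_true);
    end
    loss = loss(1:k); pos_err = pos_err(1:k); bias_err = bias_err(1:k);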

Application of Gauss-Newton Algorithm


The process for applying the Gauss-Newton algorithm is almost identical to that of the gradient-descent method. The main differences lie in our choice of step size, which is set to 1, and, of course, in the formula for the next iteration's estimate of receiver location and clock bias error. As before, all arrays/matrices are initialized using the zeros() function before any iterations are performed, to avoid the runtime cost of dynamically growing each array's memory allocation.

A while-loop with the conditional termination criterion specified earlier is utilized to perform enough iterations to locate our vehicle and approximate our clock bias error. With each iteration $k$, a new estimate $x_{k+1}$ is formulated directly from the Gauss-Newton equation. This equation requires (1) the synthetic data we generated earlier, (2) the $h(x)$ function (true range + clock bias error), and (3) the Jacobian matrix for linearization.


[Flowchart of the Gauss-Newton loop: check termination criteria; compute the Jacobian for linearization; compute the location/clock bias error estimate for the next iteration using Gauss-Newton; update the current estimate variables (s and b); compute h(x) = true range + b.]

Data Analysis and Discussion



Despite the increase in per-iteration computational workload for the Gauss-Newton method, the improvement in the number of iterations is astounding. The gradient descent algorithm required 49,244 iterations to converge given our termination criterion, while the Gauss-Newton algorithm took only 3. Below are plots visualizing the distinct difference in behavior between gradient descent (GDA) and Gauss-Newton (GNA). Notice that for the gradient descent algorithm, the loss function is minimized very early on; however, it takes thousands of iterations to eliminate the very last bit of norm difference between successive estimates of position and clock bias error. For both the receiver position error and the clock bias error, the GDA produced a roughly exponential decay toward an accurate approximation, whereas the GNA drove the error essentially straight to zero in both cases.
