
COOPERATIVE TRACKING OF MOVING TARGETS BY A TEAM OF AUTONOMOUS UAVS

Matt Wheeler, Brad Schrick, The Insitu Group, Bingen, WA; William Whitacre, Mark Campbell, Cornell University, Ithaca, NY; Rolf Rysdyk, Richard Wise, University of Washington, Seattle, WA

Abstract
This paper summarizes current work on theoretical and experimental cooperative tracking of moving targets by a team of UAVs. The Insitu Group is leading a diverse group of researchers to develop building block foundations for cooperative tracking. The building block algorithms have been maturing within the partner organizations, and the team led by Insitu is now pulling the technologies together for demonstration and commercialization. The work reported here focuses on cooperative tracking using multiple UAVs, with the ability for one operator to control many UAVs which are tasked to 1) provide autonomous tracking of moving and evading targets, and 2) report to a centralized database (without operator attention) the position, position history, and velocity vector of the target being tracked. Flock guidance algorithms have been developed and simulated to enable a flock of UAVs to track an evading vehicle. Algorithms have been demonstrated in simulation that: dynamically allocate tasks and compute near-optimal paths in real-time; minimize the probability that vehicles are destroyed due to collision or damage from threat; and accommodate moving targets, time-on-targets, and sequencing, as well as the effects of weather (especially wind) and terrain. Additionally, Geolocation estimation algorithms and software have been developed which exchange information among vehicles and process the information robustly and in real time, and which have demonstrated that the joint accuracy is improved. Work has also focused on accurate probabilistic analysis of the estimates, especially considering variations across multiple vehicle sets of ScanEagle UAVs.

Introduction

Recent military operations and search and rescue operations have demonstrated the presence, need, and usefulness of uninhabited aerial vehicles (UAVs). For many of these missions, UAVs with on-board camera systems are very important elements for operations, with their ability to both track and identify objects on the ground. With a single UAV and camera as a building block, current research is now focused on developing teams of these vehicles for cooperative missions. These "teams of vehicles" open up very important research questions, such as: What information is shared between the vehicles? How do the vehicles cooperatively plan trajectories and tasks for each mission type? How can the impact of communications constraints, such as outages and dropped packets, be minimized? The Insitu Group, Cornell University, and the University of Washington have teamed up to conduct research in order to answer many of these questions. The Insitu Group develops a variety of production UAVs, including the ScanEagle/SeaScan system.

Figure 1. Multiple aircraft tracking a moving target are more likely to maintain lock.

The ScanEagle is currently the smallest UAV with an inertially-stabilized turret, enabling the persistent tracking of coordinates/landmarks with limited input from users. Using the ScanEagle as a basis, the goal of the proposed work is to rapidly evaluate, and demonstrate in flight, algorithms for cooperative tracking of moving targets using multiple UAVs, as shown in Figure 1.

The work reported here focuses on cooperative tracking using multiple UAVs, with the ability for one operator to control many UAVs which are tasked to 1) provide autonomous tracking of moving and evading targets, and 2) report to a centralized database (without operator attention) the position, position history, and velocity vector of the target. Reporting the expected position of the assigned target enables Intelligence, Surveillance, and Reconnaissance (ISR) missions to be more cost effective and efficient. ISR resources have generally been regarded as the scarcest resource during recent operations in Iraq and Afghanistan, and they will continue to be limited by personnel constraints unless the vehicles' autonomy and cooperation can be increased. This work is intended to lead to a demonstration of the ability of multiple UAVs to self-organize and cooperate for the tracking of a moving target (SUV, vessel, tank, etc.). The UAVs will demonstrate autonomous behaviors to modify strategy based upon the number of UAVs available for cooperation, and will help each other regain image lock in the event that the image is temporarily lost by one or more UAVs.

Figure 2. Multiple UAVs could cooperatively track a target in urban environments, keeping the vehicle in view much more of the time.

The prime benefit for intelligence operations is that UAVs will be able to track evading targets with minimal interaction from operators. This means that one operator could control many UAVs and many targets, thereby gaining more time. Eventually, it will be quite possible to cue multiple ScanEagle UAVs to a target, and then the ScanEagles could send information (target position, velocity, etc.) directly to the battlefield computers.

This paper summarizes current work on theoretical and experimental cooperative tracking of moving targets by a team of UAVs. The Insitu Group is leading a diverse group of researchers to develop building block foundations for cooperative tracking. Cornell University specializes in single vehicle tracking estimation [1,2], while the University of Washington specializes in cooperative planning for tracking missions under environmental constraints such as wind [3,4]. The Insitu Group develops camera-based object tracking at the pixel level in order to inertially point the camera. Finally, the Insitu Group integrates these technologies, first in simulation and then in flight test.

This paper is organized as follows. First, the cooperative planning and tracking problem is introduced, followed by a summary description of the technologies, ranging from video tracking to cooperative planning to cooperative estimation. Finally, a high fidelity distributed simulation testbed is described along with simulation results showing the integrated system. Flight testing of the algorithms is planned for late 2006 or early 2007.

Cooperative Tracking Architecture


Figure 3 shows a block diagram of the high level solution proposed. The key elements include:

- ScanEagle UAV. The UAV includes all traditional elements: the airframe, engine, control actuators and sensors, and avionics with embedded processing and the potential for auxiliary processing boards. It also includes a set of basic trajectory algorithms, such as orbiting about a point at a given altitude.

- ObjectTracker. This system includes a variety of image processing algorithms, or pixel image trackers, in conjunction with a two-axis gimbaled payload with camera. The feedback loop rejects engine vibrations while also tracking an approximate GPS location on the ground.

- Flock Guidance. This system optimizes either the UAV orbiting parameters or specific UAV flight commands in order to improve Geolocation. These algorithms include specific approaches to handle the presence of wind, which is a large factor for these small UAVs. Extensions include dynamically allocating tasks and computing near-optimal paths in real time.

- Geolocation. This system takes real-time screen coordinate measurements from the ObjectTracker loop and fuses them with the UAV attitude, position, and gimbal positions in order to develop a statistically accurate estimate of the target location. In addition, an information form is used in order to fuse measurements from the other UAVs in a distributed fashion, and to handle measurement delays.

In addition to these elements, additional trajectory optimization algorithms could be designed in order to maximize the information content of the collected sensor data. The visual sensors, with image processing, report bearing information only. Cooperative planning of the vehicle trajectories improves the convergence time of the Geolocation accuracy. A summary of the ScanEagle UAV can be found in [5,6], while a summary of the other blocks is given here.
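To make the data flow between these blocks concrete, the following minimal Python sketch shows one processing cycle on a single vehicle; every class and method name here is an invented placeholder, not the actual flight software interface.

```python
# Hypothetical sketch of one UAV's processing cycle through the
# Figure 3 blocks; names are illustrative placeholders only.

class CooperativeTrackerNode:
    def __init__(self, object_tracker, geolocation, flock_guidance, comm_link):
        self.object_tracker = object_tracker   # pixel-level image tracker
        self.geolocation = geolocation         # information-form estimator
        self.flock_guidance = flock_guidance   # orbit/clock-angle guidance
        self.comm_link = comm_link             # link to teammate UAVs

    def step(self, frame, nav_state, gimbal_angles):
        # 1. Track the target in the image; yields screen coordinates.
        screen_xy = self.object_tracker.update(frame)

        # 2. Fuse the screen coordinates with UAV attitude/position and
        #    gimbal angles, plus any information received from
        #    teammates (which may be delayed or missing).
        teammate_info = self.comm_link.receive()
        target_estimate = self.geolocation.update(
            screen_xy, nav_state, gimbal_angles, teammate_info)

        # 3. Share the local information contribution with the flock.
        self.comm_link.send(self.geolocation.local_information())

        # 4. Convert the target estimate into orbit center, radius, and
        #    speed commands for the inner-loop autopilot.
        return self.flock_guidance.commands(target_estimate, nav_state)
```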

[Figure 3 residue: the diagram shows pixels and user input feeding ObjectTracker; UAV attitude/position and gimbal angles feeding Geolocation; target position/velocity and phasing commands linking the blocks; and information exchanged to and from the other aircraft.]
Figure 3. Block diagram of the integrated solution for cooperative planning and tracking using multiple ScanEagle UAVs.

Object Tracker
The camera on-board the ScanEagle UAV is steered by a multi-layered control system. There are three control loops. Listed in order of fastest to slowest time scale, the loops are:

1. Rate gyro feedback
2. GPS feed-forward
3. Video image based feedback

The inner loop uses rate gyro feedback to hold the camera line-of-sight fixed with respect to an inertial reference. This loop rejects aircraft motion to a bandwidth of 5 Hz. The GPS loop uses UAV-target relative position and velocity to dynamically steer the line of sight onto the target coordinates. The outer image feedback loop compensates for low frequency bias errors in the inner loops, to keep the imaged scene stationary in the video window.
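A minimal sketch of how the three loops might compose into a single gimbal rate command follows; the gains and signal interfaces are illustrative assumptions, not the actual turret software.

```python
import numpy as np

# Sketch of the three-layer steering cascade. All quantities are
# 3-vectors in an inertial frame; the video error is assumed already
# converted from pixels to an angular error about the camera axes.

def camera_rate_command(gyro_rate, rel_pos, rel_vel, angular_img_error,
                        k_gyro=1.0, k_img=0.05):
    """Return a gimbal angular-rate command (rad/s)."""
    # Middle loop: GPS feed-forward -- the line-of-sight angular rate
    # implied by UAV-target relative motion, omega = (r x v) / |r|^2.
    los_rate_ff = np.cross(rel_pos, rel_vel) / float(np.dot(rel_pos, rel_pos))

    # Outer (slowest) loop: video feedback trims the low-frequency bias
    # left by the inner loops, keeping the scene fixed in the window.
    desired_rate = los_rate_ff + k_img * angular_img_error

    # Inner (fastest) loop: rate-gyro feedback drives the sensed camera
    # rate toward the desired rate, rejecting airframe motion (engine
    # vibration, turbulence) up to roughly 5 Hz.
    return desired_rate + k_gyro * (desired_rate - gyro_rate)
```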

Payload operator steering setpoints are provided at each level. Using the joystick, the operator may steer:

- the line-of-sight rate,
- the target GPS coordinates, or
- the image patch rate in the video window.

The effects of vibration and turbulence on image quality become severe at high zoom on the ScanEagle platform. A zoom of 25x corresponds to a 1.8 degree field of view. At maximum zoom, the image can shift up to 50 pixels between frame exposures, both vertically and horizontally. ObjectTracker has dramatically reduced operator workload for high zoom imaging, extending the effective zoom range by a factor of three.

Presently, ObjectTracker uses "scene" based image processing to steer the camera. A box on the scene image is designated by the user, and this box is then pulled to the center of the screen by ObjectTracker's uplinked turret steering commands. For observation of static targets, hands-off operation is possible for minutes at a time. However, tracking moving vehicles is still a fully interactive task: the user must nudge the joystick every few seconds to keep the box on the target. We have tracked moving vehicles hands-off for tens of seconds by adjusting the target box to tightly enclose a moving vehicle in the video. However, this is not a significant operator aid.

Current research has focused on developing a blob analysis algorithm, which is designed to automatically center the target box on a designated target, relieving the operator of this continuous task. The blob analysis algorithm detects a region of the scene patch in the box which is different from the surrounding image. The "blob" is then isolated and used to center the box on the target. The blob analysis algorithm can detect objects which are:

- lighter than the scene,
- darker than the scene,
- of uniform color,
- moving on the scene, or
- strongly edged.

This algorithm works well in uncluttered, unoccluded environments. It can extend the period of hands-off operation for observing distinct targets indefinitely. However, it is presently not very discriminating: when the tracked vehicle moves near a similar looking "blob," the blob tracker may switch targets. This characteristic will be improved somewhat by including a Kalman filter to allow only realistic target motion. Figure 4 illustrates the behavior of the blob tracker.

Figure 4. Blob tracking
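A minimal sketch of the blob-centering idea, assuming grayscale frames as NumPy arrays; the threshold and box conventions are illustrative, not ObjectTracker's actual implementation.

```python
import numpy as np

def recenter_box_on_blob(frame, box, diff_thresh=25.0):
    """Shift the target box toward the centroid of the 'blob' --
    the region inside the box that differs from its surroundings.

    frame: 2-D grayscale image as a NumPy array.
    box:   (row, col, height, width) of the current target box.
    Returns an updated (row, col, height, width).
    """
    r, c, h, w = box
    patch = frame[r:r + h, c:c + w].astype(float)

    # Estimate the surrounding background from a border ring around
    # the box, and flag pixels that differ strongly from it (lighter,
    # darker, or uniformly colored targets all show up this way).
    ring = np.concatenate([
        frame[max(r - h, 0):r, c:c + w].ravel(),        # above
        frame[r + h:r + 2 * h, c:c + w].ravel(),        # below
        frame[r:r + h, max(c - w, 0):c].ravel(),        # left
        frame[r:r + h, c + w:c + 2 * w].ravel(),        # right
    ]).astype(float)
    background = np.median(ring) if ring.size else patch.mean()
    blob_mask = np.abs(patch - background) > diff_thresh

    if not blob_mask.any():
        return box  # nothing distinct found; leave the box alone

    # Center the box on the blob centroid. A Kalman filter on the
    # centroid track (as noted above) would reject jumps to nearby
    # look-alike blobs by gating on realistic target motion.
    rows, cols = np.nonzero(blob_mask)
    dr = int(rows.mean() - h / 2.0)
    dc = int(cols.mean() - w / 2.0)
    return (r + dr, c + dc, h, w)
```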

Flock Guidance
The guidance algorithm must accomplish two goals simultaneously to support Geolocation. First, each UAV's flight path must be such that the target is kept in range of the camera and within the camera's line-of-sight. Second, the commanded clock angle separation between the two UAVs must be tracked for improved Geolocation performance; for example, with two UAVs a clock angle offset of 90° minimizes Geolocation uncertainty. Figures 5 and 6 show the desired motion of the UAVs corresponding to the target motion.
[Figure 5 residue: the plot shows UAV1 and UAV2 on a stand-off circle of radius Rdes about the target, separated by the commanded clock angle, with the desired velocity Vdes and the wind direction indicated.]
Figure 5. Desired UAV positions relative to target



[Figure 6 residue: the plot shows position North versus position East (m) for the trajectories of UAV1, UAV2, and the target.]

Figure 6. Absolute motion of UAVs and target

If the UAV's ground speed exceeds that of the target then, due to its constrained stand-off distance, the UAV will orbit about the target's position with a clock angle rate given by the ratio of the tangential component of the UAV's speed relative to the target and the radius to the target:

\dot{\lambda}(t) = \frac{\sqrt{V_g^2(t) + V_t^2(t) + 2\,V_g(t)\,V_t(t)\,\sin\big(\lambda(t) - \psi_t(t)\big)}}{r(t)}

where \lambda is the clock angle, V_g is the UAV ground speed, V_t is the target speed, \psi_t is the target heading, and r is the range to the target.

Note that clock angle rate is affected by UAV ground speed (and thus wind) as well as target speed, target heading, and current clock angle. Therefore, clock angle rate is only constant when the target is stationary and there is no wind.
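As a numerical illustration (a sketch based on the clock angle rate relation as reconstructed above):

```python
import math

def clock_angle_rate(v_g, v_t, clock_angle, target_heading, r):
    """Clock angle rate (rad/s) of a UAV orbiting a moving target.

    The tangential component of the UAV's speed relative to the
    target, divided by the stand-off radius r. v_g: UAV ground speed,
    v_t: target speed (m/s); angles in radians.
    """
    rel_speed = math.sqrt(
        v_g**2 + v_t**2
        + 2.0 * v_g * v_t * math.sin(clock_angle - target_heading))
    return rel_speed / r

# A stationary target (v_t = 0) in still air gives the constant rate
# v_g / r noted above, e.g. 25 m/s on a 500 m stand-off orbit:
print(clock_angle_rate(25.0, 0.0, 0.0, 0.0, 500.0))  # 0.05 rad/s
```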

Guidance commands available to the controller are the UAV speed, within limits, and an orbit radius and orbit center. By default, the guidance passes the estimated target location as the orbit center. A nominal target stand-off distance is given as a constant, based on the camera capabilities; the radius adjustment is then controlled, again within limits. Each UAV has an inner-loop autopilot that converts these commands to actual deflections of the rudder, ailerons, throttle, and elevator, thereby controlling airspeed, altitude, heading rate, and sideslip angle.

'Good Helmsman' Tracking


The guidance algorithm used was developed by Rysdyk [7]. In [7], cross track error and heading error, based on a desired path, are brought simultaneously to zero by the use of a 'Good Helmsman' function. In this case, similar 'Good Helmsman' functions are used to convert the clock angle separation error between the two UAVs into radius and speed adjustments. In addition, the speed adjustment is filtered by a first order lead compensator to account for engine dynamics. The 'virtual leader' principle is employed so that the UAVs adjust speed and radius equally but oppositely (one increases, one decreases) based on the sign of the clock angle separation error. The UAV that leads the other in clock angle is determined only by the sign of the clock angle separation command. Inputs required for Flock Guidance are the estimated target position (given by the estimation algorithm), the host UAV position (given by the onboard GPS/INS), and the teammate UAV position (communicated between UAVs).
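The following sketch illustrates the equal-and-opposite adjustment logic described above; the saturating shaping function, gains, and limits are illustrative assumptions rather than the published design of [7].

```python
import math

def helmsman(error, scale=math.radians(45.0)):
    """Smooth, saturating mapping of an angular error to [-1, 1]."""
    return math.tanh(error / scale)

def flock_adjustments(clock_angle_error, dv_max=3.0, dr_max=50.0):
    """Map the clock-angle separation error (rad) to equal-and-opposite
    speed (m/s) and orbit-radius (m) adjustments for the two UAVs."""
    u = helmsman(clock_angle_error)
    # 'Virtual leader': one UAV speeds up and tightens its orbit while
    # the other slows and widens, rotating the pair toward the
    # commanded separation. In the full design the speed adjustment is
    # additionally passed through a first-order lead compensator to
    # account for engine dynamics.
    uav1 = (+dv_max * u, -dr_max * u)   # (speed adjust, radius adjust)
    uav2 = (-dv_max * u, +dr_max * u)
    return uav1, uav2

# Example: the pair lags the commanded 90 deg separation by 20 deg.
print(flock_adjustments(math.radians(20.0)))
```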

Geolocation

The Geolocation estimator was developed under the following constraints:

1. Must be able to run in real-time.
2. Must be able to handle significantly nonlinear dynamics, particularly the measurement equations.
3. Must be computationally efficient for cooperative estimation.
4. Must be dynamically and numerically stable.

The details of the Geolocation tracking estimator can be found in [2]. Key attributes of the cooperative Geolocation tracking estimator are:

- Decentralized. Each vehicle has its own Geolocation estimator, and communicates only the necessary information to the other vehicles. This minimizes memory and communication, and enables robustness.

- Simplified Prediction. Only the target dynamics are used in the prediction portion of the estimator, allowing half of the estimator to scale very well with the number of vehicles.

- Information Form. An information form is used in order to 1) minimize the amount of information shared between vehicles, 2) simplify the multiple vehicle fusion problem, and 3) simplify the problem of delayed data (from communication drop-outs).

- Sigma Points. Sigma points are used to develop statistical linearizations of the dynamics, which have been shown to be more accurate than the traditional Extended Kalman Filter (or the Extended Information Filter).

- Square Root. A square root version of the estimator is used, both for its numerical accuracy in real time implementation and its ease of implementation with the sigma points.
[Figure 7 residue: the diagram shows, for each aircraft, the SR-SPIF prediction step; the aircraft navigation states (position x_POS, attitude x_ATT, gimbal angles x_GIM); the camera screen coordinate measurement z_SCRN; the SR-SPIF update step; and the cooperative fusion block, with (i_POI, I_POI) communicated to and from the other UAVs, iterated over time.]

Figure 7. Block diagram of the Geolocation solution

Figure 7 shows a block diagram of the Geolocation estimator. The novel feature of the estimator is the update step, which is a square root, information form with a reduced number of states for the update. Only a basic sketch of the update is provided here; those not already familiar with Sigma Point filtering should consult [8,9], since the preliminary definitions are not given here. The Geolocation estimator estimates the state x_{POI} of a target on the ground, where the subscript POI stands for Point of Interest (target); the assumed target dynamics drive the prediction step. The measurement of the POI is the pair of camera screen coordinates,

z_{SCRN} = \begin{bmatrix} x_{SCRN}(x_{POI}, x_{POS}, x_{ATT}, x_{GIM}) \\ y_{SCRN}(x_{POI}, x_{POS}, x_{ATT}, x_{GIM}) \end{bmatrix} + v_{SCRN}

where g_{SCR} = (x_{SCRN}, y_{SCRN}) is a significantly nonlinear function of the POI state as well as the aircraft states. The aircraft states have errors, and these errors are already characterized by the onboard Inertial Navigation System (INS) and therefore do not need to be "re-estimated"; therein lies the key to the Geolocation estimator. We use the square root of the aircraft state error covariance, S_{x2x2}, to do a reduced order update using orthogonalizations. The information to be sent between aircraft is the local information contribution,

i_{POI,k+1} = C_{k+1}^T R^{-1} \nu_{k+1}, \qquad I_{POI,k+1} = C_{k+1}^T R^{-1} C_{k+1}

where C_{k+1} is the stochastic linearization given by the Sigma Point filter and \nu_{k+1} is the measurement innovation. The fusion of information from the other UAVs and the final update step of the local estimator are given in information form as

\hat{y}_{POI,k+1|k+1} = \hat{y}_{POI,k+1|k} + \sum_{j=1}^{N} i^{(j)}_{POI,k+1}, \qquad Y_{POI,k+1|k+1} = Y_{POI,k+1|k} + \sum_{j=1}^{N} I^{(j)}_{POI,k+1}

where the sum runs over the N UAVs whose contributions arrive, and (\hat{y}, Y) are the information-form state and matrix for the POI.
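A minimal sketch of the additive information-form fusion above (a plain information filter for illustration; the actual estimator is a reduced-order, square-root variant detailed in [2]):

```python
import numpy as np

def fuse_information(Y_pred, y_pred, contributions):
    """Fuse local and teammate information contributions.

    Y_pred, y_pred: predicted information matrix and vector for the
    target (POI) state. contributions: iterable of (I_j, i_j) pairs,
    one per UAV; pairs lost to communication drop-outs are simply
    absent, which degrades accuracy gracefully rather than failing.
    Returns the fused state estimate and covariance.
    """
    Y = Y_pred.copy()
    y = y_pred.copy()
    for I_j, i_j in contributions:
        Y += I_j          # information adds across vehicles
        y += i_j
    P = np.linalg.inv(Y)  # covariance of the fused estimate
    x = P @ y             # fused target state estimate
    return x, P

# Example with two UAVs observing a 2-state (east/north) target:
Y0 = np.eye(2) * 0.01                 # weak prior information
y0 = Y0 @ np.array([100.0, 200.0])    # prior estimate at (100, 200) m
I1 = np.diag([0.25, 0.01]); i1 = I1 @ np.array([110.0, 205.0])
I2 = np.diag([0.01, 0.25]); i2 = I2 @ np.array([104.0, 196.0])
x, P = fuse_information(Y0, y0, [(I1, i1), (I2, i2)])
print(x)  # fused estimate, tighter than either UAV alone
```

Note how the two UAVs' complementary information directions in this example (one strong in east, the other in north, as with a 90° clock angle separation) combine to constrain the target in both axes.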


Simulation
Since this work is intended to lead to the use of multiple real UAVs cooperatively tracking a moving target, all components must be thoroughly tested. As a step in that direction, a high fidelity distributed simulation has been developed. This simulation testbed utilizes TCP/IP connections and can therefore be used with components in different locations, provided high bandwidth loops do not extend over too great a distance.

Setup

The simulation testbed consists of the Flock Guidance (FG) and Geolocation (Geo) algorithms, implemented in Simulink and Matlab respectively, as well as the following software components.

Datahub (DH): The Datahub can be thought of as a distributed data server. Data published on a local Datahub is mirrored to the Datahubs on all other connected computers (a toy sketch of this publish/mirror idea follows this list). The Datahub facilitates development by providing a means for data exchange between components written in different languages, allowing each research group to work in whatever environment is most comfortable. The Datahub also has its own scripting language.

FlightSim (FS): This is a realistic flight simulator of the ScanEagle UAV which includes all core components of the UAV, including the aircraft flight dynamics with flight control loops, the camera turret, and environmental aspects such as wind. It also has the capability to interact with hardware-in-the-loop (HIL) components.

Trajectory Generator (TG): This creates realistic random vehicle trajectories to model a car driving through a city or out on the highway. The two modes can also be combined in a random sequence, like a target driving from one town to the next along a highway. It was implemented using the scripting language provided with the Datahub.

User Interface (UI): This is an interface created in Matlab to give the user visual awareness of the state of the flock and to allow the user to send commands to the flock through the Datahub. Commands include such things as clock angle offsets and stopping cooperation between the aircraft (simulating a communication loss).

SceneGenerator (SG): The SceneGenerator creates digital images for ObjectTracker based on aircraft position and attitude, camera attitude and settings, and the target location. The SceneGenerator was running in the simulation for the tests performed, but ObjectTracker was not yet in the loop.

Figure 8. Software/hardware architecture for the simulation

Figure 8 shows the hardware and software connections for the simulation testbed. The small blocks represent the various software components, with descriptions and abbreviations given previously. The larger blocks represent the computers. The arrows between computers indicate TCP/IP connections between Datahubs, and the arrows within each computer represent local connections to the onboard Datahub. Computers one and two can be loosely thought of as simulating UAVs one and two, while computer three can be thought of as the ground station.

Results

Cooperative tracking has been tested for targets moving in both city and highway driving modes. In the city driving mode the random trajectory is characterized by straight lines, sharp turns, low speeds, and frequent stops. In the highway driving mode the speeds are higher, stops are less frequent, and turns tend to be more gradual.
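A minimal sketch of generating such random driving modes (speeds, turn rates, and stop probabilities are invented for illustration; the actual Trajectory Generator was written in the Datahub scripting language):

```python
import math
import random

MODES = {
    #          speed (m/s), max turn (rad/s), stop probability per step
    "city":    (12.0,       math.radians(45), 0.05),
    "highway": (28.0,       math.radians(5),  0.005),
}

def generate_trajectory(mode, steps=600, dt=1.0, seed=None):
    """Return a list of (east, north) positions for a random drive."""
    rng = random.Random(seed)
    speed, max_turn, p_stop = MODES[mode]
    x = y = heading = 0.0
    path = [(x, y)]
    for _ in range(steps):
        if rng.random() < p_stop:
            v = 0.0                          # stopped, e.g. at a light
        else:
            v = speed * rng.uniform(0.8, 1.0)
        # Mostly straight segments with occasional turns; city turns
        # are sharp, highway turns gradual.
        if rng.random() < 0.1:
            heading += rng.uniform(-max_turn, max_turn) * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        path.append((x, y))
    return path

city_path = generate_trajectory("city", seed=1)
```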


Figure 9 shows a typical result of the cooperative tracking of a car in a city driving mode. The aircraft trajectories show the aircraft orbiting the target and maintaining the commanded clock angle offset of 90°. Since the car is moving at a low percentage of the maximum UAV speed, we see frequent, rounded orbits, as described in the Flock Guidance section.


Figure 9. Cooperative tracking of a target moving in a city driving mode

The estimates and 3-sigma bounds are also shown in Figure 9 and are indicative of the level of performance for Geolocation. The labels have been intentionally left off of the axes due to ITAR restrictions; however, since the target is drawn at approximately life size (the UAVs are scaled up for visual effect), it is clear that the estimator is performing well. The estimates and bounds are not perfectly aligned because some of the information being sent between the UAVs was dropped. That is, we did not allow the UAVs to have perfect communication, since in real life they will not always be able to communicate. Geolocation was tested for both periodic dropped information and long term communication drops, and performed well through both tests, although it is far better with full communication. It should be noted that a more effective means of dealing with delayed information has been developed [2], but was not implemented for this simulation.

When the target is moving in the highway driving mode we see a somewhat different behavior from Flock Guidance. We see the UAVs rarely actually orbiting around the target, but more often just following the target while maintaining the proper separation from each other. This is the desired and expected behavior.

Figure 10. Cooperative tracking of a target moving in a highway driving mode

Figure 10 shows an example where the target is in the highway driving mode. The UAVs still maintain the commanded clock angle separation of 90° but rarely circle around the target. Geolocation performs quite well, as shown in the zoomed-in view in Figure 10. The uncertainty bounds are aligned since the information sent between UAVs reached its destination consistently for at least a few update steps. That is, even with dropped information the estimates and bounds converge to each other very quickly once communication is restored. Again the axis labels have been left off, but the performance can be qualitatively assessed from the size of the target as drawn.


References
[1] Campbell, M. E. and Wheeler, M., "A Vision Based Geolocation Tracking System for UAVs," AIAA Guidance, Navigation and Control Conference, 2006.

[2] Campbell, M. E. and Whitacre, W. W., "Cooperative Tracking using Vision Measurements on SeaScan UAVs," IEEE Transactions on Control Systems Technology, accepted for publication.

[3] Wise, R. and Rysdyk, R., "UAV Coordination for Autonomous Target Tracking," AIAA Guidance, Navigation and Control Conference, 2006.

[4] Osborne, J. and Rysdyk, R., "Waypoint Guidance for Small UAVs in Wind," AIAA Infotech@Aerospace, Arlington, VA, 26-29 Sept. 2005, pp. 1-12.

[5] The Insitu Group, "The SeaScan UAV."

[6] McGeer, T. and Vagners, J., "Historic Crossing: an Unmanned Aircraft's Atlantic Flight," GPS World, Vol. 10, No. 2, Feb. 1999, pp. 24-30.

[7] Rysdyk, R., "UAV Path Following for Constant Line-of-Sight Target Observation," Journal of Guidance, Control, and Dynamics, in press.

[8] Julier, S., Uhlmann, J., and Durrant-Whyte, H. F., "A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators," IEEE Transactions on Automatic Control, Vol. 45, No. 3, 2000, pp. 477-482.

[9] Brunke, S. and Campbell, M. E., "Square Root Sigma Point Filtering for Aerodynamic Model Estimation," AIAA Journal of Guidance, Control, and Dynamics, Vol. 27, No. 2, 2004, pp. 314-317.

