
Automatic Flight Control System

Classical approach and modern control perspective

Said D. Jenie and Agus Budiyono, Department of Aeronautics and Astronautics, ITB, Jl. Ganesha 10, Bandung 40132, Indonesia. Phone: +62-22-250-4529, Fax: +62-22-253-4164, Email: agus.budiyono@ae.itb.ac.id. © Bandung Institute of Technology, January 12, 2006

Abstract

The document is used as a lecture note for the graduate course on flight control systems at the Malaysian Institute of Aviation Technology (MIAT), Kuala Lumpur, Malaysia. Parts of the document are an enhanced version of material given in an elective undergraduate course and a graduate course in optimal control engineering in the Department of Aeronautics and Astronautics at ITB, Bandung, Indonesia. Singgih S. Wibowo helped prepare the typesetting of the formulas and check the MATLAB programs. The constructive input and feedback from students are also gratefully acknowledged.

Contents

I Classical Approach

1 Introduction
   1.1 Types of Automatic Control System
      1.1.1 AFCS as the trimmed flight holding system
      1.1.2 AFCS as the stability augmentation system of the aircraft
      1.1.3 AFCS as the command augmentation system of the aircraft
      1.1.4 AFCS as the stability provider and command optimizer
   1.2 Elements of Automatic Flight Control System
      1.2.1 Front-end interface of flight control system
      1.2.2 Back-end interface of flight control system
      1.2.3 Information processing system
      1.2.4 Control Mechanism System

2 Autopilot System
   2.1 Introduction
   2.2 Working Principle of Autopilot System
   2.3 Longitudinal Autopilot
      2.3.1 Pitch Attitude Hold System
      2.3.2 Speed Hold System
      2.3.3 Altitude Hold System
   2.4 Lateral-Directional Autopilot
      2.4.1 Bank Angle Hold (Wing Leveler) System
      2.4.2 Heading Hold System
      2.4.3 VOR-Hold System

3 Stability Augmentation System
   3.1 Introduction
   3.2 Working Principle of Stability Augmentation System
   3.3 Longitudinal Stability Augmentation System
      3.3.1 Pitch Damper System
      3.3.2 Phugoid Damper
   3.4 Lateral-Directional Stability Augmentation System
      3.4.1 The Dutch-roll stability augmentation: Yaw Damper

II Modern Approach

4 Introduction to Optimal Control
   4.1 Some Preliminaries
   4.2 Linear Systems
      4.2.1 Controllability
      4.2.2 Conventions for Derivatives
      4.2.3 Function Minimization
   4.3 Constrained Problems
      4.3.1 Elimination
      4.3.2 Method of Lagrange
   4.4 Inequality Constraints
   4.5 Sensitivity of Cost to Constraint Variations
   4.6 Dynamic Programming

5 Discrete-time Optimal Control
   5.1 Higher Dimension Control Problems
   5.2 Discrete-time optimal control problem

List of Figures

1.1 Autopilot Control System Diagram: Example of Pitch Channel
1.2 Stability Augmentation System Diagram: Example of Pitch Channel
1.3 Command Augmentation System Diagram: Example of Pitch Channel
1.4 The Stability Provider and Control Power Optimizer Control System Diagram
1.5 The automatic and manual flight control loop
1.6 The aircraft cockpit as the interface between controller and controlled systems
1.7 Mechanical Control System: example of longitudinal channel
1.8 Hydraulically power-assisted control system: example of longitudinal channel
1.9 Hydromechanical Control System
1.10 Electrohydromechanical Control System
1.11 Electrohydraulic Control System
2.1 SAS as an inner loop of aircraft autopilot system
2.2 CN235-100 Autopilot control system APS-65
2.3 CN235-100 Autopilot control system APS-65
2.4 The location of autopilot system components in the standard control diagram
2.5 Functional diagram of pitch attitude hold system
2.6 Pitch attitude hold system
2.7 N250-100 aircraft prototype 2 "Krincing Wesi" manufactured by the Indonesian Aerospace Inc.
2.8 Root locus of pitch attitude hold system for N250-100 PA-2 aircraft
2.9 Root locus of pitch attitude hold system for N250-100 PA-2 aircraft, enlarged to show the phugoid mode
2.10 Time response of θ(t) due to step θref = 5° with and without pitch attitude hold system
2.11 Time response for u(t) and θ(t) for Kct = 8.9695
2.12 Speed hold system functional diagram
2.13 Mathematical diagram of speed hold system
2.14 Root locus diagram for the speed hold system of N250-100 PA-2
2.15 Root locus diagram for the speed hold system of N250-100 PA-2, zoomed around the pitch oscillation and phugoid modes
2.16 Time response of u(t) to maintain uref = 1 with and without the speed hold system, K = 17.3838
2.17 Time response of the other longitudinal states to maintain uref = 1, with the speed hold gain K = 17.3838
2.18 Tail air brake (Fokker F-100/70)
2.19 Outer-wing air brake (Airbus A-320/319/321)
2.20 Functional diagram of altitude hold system
2.21 Kinematic diagram of aircraft rate of climb
2.22 Mathematical diagram of the altitude hold system
2.23 Root locus for the altitude hold system of the N250-100 with h → δe feedback with gain Kct < 0
2.24 Root locus for the altitude hold system of the N250-100 with h → δe feedback with gain Kct < 0, zoomed to show the phugoid mode
2.25 Altitude hold system with an attitude hold system as the inner loop
2.26 Altitude hold system with inner loop having gain Kct
2.27 Root locus diagram of the outer loop of the altitude hold system for N250-100 PA-2 aircraft
2.28 Root locus diagram of the outer loop of the altitude hold system for N250-100 PA-2 aircraft, zoomed to show the phugoid mode
2.29 Time response of h(t) for an input of href = 1 from a system with and without altitude hold for N250-100
2.30 Altitude hold system with forward acceleration ax as the inner loop feedback
2.31 Altitude hold system with inner loop using forward acceleration feedback ax
2.32 Root locus diagram of inner control loop u → δe for N250-100 altitude hold system
2.33 Root locus diagram of inner control loop u → δe for N250-100 altitude hold system, zoomed around the phugoid mode
2.34 Root locus diagram of outer control loop u → δe for N250-100 altitude hold system
2.35 Time response of h(t) for N250-100 altitude hold system with the inner loop of ax → δe feedback
2.36 Functional diagram of bank angle hold system
2.37 Mathematical diagram of bank hold system
2.38 Root locus diagram of the bank hold system of N250-100 aircraft for a number of τs values
2.39 Time response of φ(t) for a = 1/τs = 10, 5, 2 due to an impulse function input
2.40 Time response of φ(t) for a = 1/τs = 10, 5, 2 due to a step function input
2.41 Force equilibrium during the turn maneuver
2.42 Functional diagram of heading hold system
2.43 Mathematical diagram of heading hold system
2.44 Heading hold system with bank angle hold as an inner loop
2.45 Root locus diagram of the heading hold system of N250-100 aircraft for a number of a = 1/τs values
2.46 Time response of ψ(t) and φ(t) of the heading hold system of N250-100 aircraft
2.47 Effect of wind on the aircraft flight path
2.48 VOR guidance path geometry
2.49 The functional diagram of the VOR-hold guidance-control system
2.50 The mathematical diagram of the VOR-hold guidance-control system
2.51 Navigation and guidance system: VOR hold
2.52 Root locus diagram for the VOR offset with respect to the reference bearing of the N250-100 aircraft
2.53 Time response of the VOR-hold system of the N250-100 aircraft with gains 1.1152 (guidance loop), 4.5943 (outer control loop) and 8.9344 (inner control loop)
3.1 The functional diagram of the pitch damper system
3.2 The mathematical diagram of the pitch damper system
3.3 Root locus of the inner control loop: pitch damper system of N250-100 PA-2 at cruise condition with V = 250 KIAS and h = 15,000 ft
3.4 Root locus of the inner control loop: pitch damper system of N250-100 PA-2 at cruise condition with V = 250 KIAS and h = 15,000 ft, enlarged around the phugoid mode
3.5 Time response of θ(t) and q(t) with and without pitch damper of N250-100 aircraft
3.6 The mathematical diagram of the pitch attitude hold with the pitch damper as the inner loop
3.7 Root locus of outer control loop: phugoid damper of N250-100 PA-2 at cruise condition
3.8 Root locus of outer control loop: phugoid damper of N250-100 PA-2 at cruise condition, enlarged around the phugoid mode
3.9 Time response of the phugoid damper with K = 3.7979 and Kq = 0.2432
3.10 The functional diagram of the yaw damper system with r → δr feedback
3.11 The mathematical diagram of the yaw damper system with r → δr feedback
3.12 Root locus of the yaw damper system with r → δr feedback of N250-100 aircraft
3.13 Root locus of the yaw damper system with r → δr feedback of N250-100 aircraft, enlarged around the dutch-roll mode
3.14 …
3.15 Time response of the lateral-directional states of the N250-100 PA2 equipped with the yaw damper
3.16 Phase portrait vs p(t) for three cases: no yaw damper, with yaw damper, and yaw damper + wash-out
4.1 Minimum of a cost function J(x) at a stationary point
4.2 Minimum of J(x) at the boundary
4.3 Minimum of J(x) at a corner
4.4 Constrained minimum vs unconstrained minimum
4.5 Inequality constraints
4.6 Principle of Optimality
4.7 Multistage decision example
4.8 Flight Planning application
4.9 One-dimensional scalar state problem
4.10 Discrete grid of x and t
5.1 Two-dimensional array of states
5.2 Double interpolation of cost function
5.3 Interpolation in the grid of u's
5.4 …

Part I

Classical Approach

Chapter 1

Introduction
The content of the book is centered on the discussion of automatic control without the pilot in the control loop. The role of the automatic flight control system is to support the pilot in performing his job as the steerer and the mission executor, so as to reduce the pilot's load. As a steerer, the pilot has two main tasks: controlling and guiding the aircraft. Control is performed to maintain the aircraft at the desired equilibrium flight attitude, whereas guidance is the task of bringing the aircraft from one equilibrium state to another. As a mission executor, the pilot's task depends on the type of the aircraft's mission. For instance, the pilot of a fighter aircraft has the tasks of finding and investigating the target and then aiming and shooting at it once the search and investigation process is complete. The Automatic Flight Control System (AFCS) is designed to ease the pilot in performing the above tasks in such a way that his physical as well as psychological load is reduced.

1.1

Types of Automatic Control System

Based on the level of difficulty of these tasks, the Automatic Flight Control System can be categorized into four different types:

1. AFCS as the trimmed flight holding system
2. AFCS as the stability augmentation system
3. AFCS as the command augmentation system
4. AFCS as the stability provider and command optimizer

1.1.1 AFCS as the trimmed flight holding system

This type of automatic control system is commonly known as the autopilot, which is the abbreviation of automatic pilot (AP). The AP system has the task of taking over some of the pilot's routine tasks; the tasks delegated to the AP system are typically easy and repetitive. Examples of this type of AP system are:

- Flight condition and configuration holding systems, also known as hold systems, such as speed hold, altitude hold, attitude hold and directional hold
- Flight trajectory holding systems, usually known as guidance systems, such as the autoland system

The autopilot system does not work continuously but only during certain periods of time. The AP system can be activated by turning the AP switch and deactivated by overriding the system through movement of the control manipulator. The AP system therefore has limited authority. Fig. 1.1 shows the functional diagram of the autopilot system used in the pitch longitudinal channel of an aircraft.

[Block diagram: autopilot switch and AP motor acting on the basic control system; the autopilot loop closes from the motion sensor through the autopilot computer (APC) back to the AP motor]

Figure 1.1: Autopilot Control System Diagram: Example of Pitch Channel

Note that after setting the desired attitude, the pilot can press the autopilot switch so that the reference attitude is acquired and maintained. While the AP is working, the pilot does not need to grasp the controller stick. To deactivate the autopilot, the pilot need only move the control stick slightly to cut the AP control circuit. The AP system can work well only if the aircraft has good stability characteristics.
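The hold behaviour described above can be sketched numerically. The second-order pitch model, the gain value and the function name below are illustrative assumptions, not N250-100 data; the point is only that a proportional attitude-error feedback, standing in for the AP computer, drives the pitch angle toward the reference set by the pilot.

```python
# Illustrative second-order pitch-attitude plant (assumed numbers):
#   theta_ddot = -2*zeta*wn*theta_dot - wn**2*theta + k_e*delta_e
zeta, wn, k_e = 0.3, 2.0, 4.0   # lightly damped bare airframe (assumed)

def attitude_hold(theta_ref, Kp, t_end=20.0, dt=0.001):
    """Proportional pitch attitude hold: delta_e = Kp*(theta_ref - theta)."""
    theta, theta_dot = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        delta_e = Kp * (theta_ref - theta)            # AP computer output
        theta_ddot = -2*zeta*wn*theta_dot - wn**2*theta + k_e*delta_e
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
    return theta

# Steady state of the closed loop is Kp*k_e/(wn**2 + Kp*k_e) * theta_ref,
# so a larger AP gain holds the attitude closer to the reference:
print(round(attitude_hold(5.0, Kp=5.0), 2))    # near 4.17 deg
print(round(attitude_hold(5.0, Kp=50.0), 2))   # near 4.90 deg
```

With pure proportional feedback the hold settles slightly below the reference; the residual offset shrinks as the gain grows, which is why practical autopilots add integral action.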


1.1.2 AFCS as the stability augmentation system of the aircraft

The type of automatic flight control system that adds stability to the aircraft is usually called the Stability Augmentation System, or SAS. This type of automatic flight control system improves the stability of an aircraft at certain flight configurations and conditions within the flight envelope. For conventional aircraft, stability augmentation is needed during flight at low speed and low altitude, for instance during approach or landing. The control optimization of a typical aircraft is conducted only at a certain flight configuration, such as the cruise configuration. The aircraft's stability at other flight configurations, namely approach, landing or other special configurations, therefore tends to deteriorate, and stability augmentation is necessary for those configurations. The stability augmentation is achieved by increasing the existing aerodynamic damping ratio through the application of a feedback control system.

[Block diagram: the SAS loop closes from the motion sensor through the SAS computer into the basic control system]
Figure 1.2: Stability Augmentation System Diagram: Example of Pitch Channel

Examples of SAS types are:

- Damping-ratio augmentation systems such as the pitch damper, yaw damper and roll damper
- Dynamic compensation systems such as the wing leveler and turn coordinator

Fig. 1.2 shows the example of a pitch damper SAS implemented in the aircraft pitch longitudinal channel. Note that the SAS signal comes out of the FCC (Flight Control Computer), which processes the stability augmentation logic. This signal directly enters the ECU and is combined with the command signal from the pilot to move the elevator. The SAS signal, coupled with the aircraft dynamics, improves the pitch damping ratio such that the aircraft dynamics is more stable. The SAS differs from the AP in several ways. In the AP system, the output from the AP computer is used to move the control stick in lieu of the pilot input. In the SAS system, the output from the SAS computer is fed into the ECU, forming a closed loop that increases the stability of the aircraft. Thus, the SAS keeps working even when there is an input command from the pilot, whereas the AP loop is automatically switched off once the pilot moves the controller stick. The SAS therefore has a higher level of authority than the AP system; it is called a flight control system with partial authority. To deactivate the SAS, the pilot turns the SAS switch off.
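The damping-ratio increase that a pitch damper provides can be seen directly in a reduced model. The short-period coefficients below are assumptions for illustration, not N250-100 derivatives; the point is that feeding pitch rate q back to the elevator adds directly to the velocity (damping) term of the characteristic equation.

```python
# Reduced short-period model (assumed illustrative coefficients):
#   theta_ddot + 2*zeta*wn*theta_dot + wn**2*theta = k_e*delta_e
# Pitch damper law: delta_e = -Kq*q with q = theta_dot, so the closed loop is
#   theta_ddot + (2*zeta*wn + k_e*Kq)*theta_dot + wn**2*theta = 0
zeta, wn, k_e = 0.2, 3.0, 4.0

def augmented_damping(Kq):
    """Damping ratio of the short-period mode with rate-feedback gain Kq."""
    return (2*zeta*wn + k_e*Kq) / (2*wn)

print(round(augmented_damping(0.0), 3))    # bare airframe: 0.2
print(round(augmented_damping(0.45), 3))   # with pitch damper: 0.5
```

The natural frequency wn is untouched by pure rate feedback; only the damping ratio moves, which is exactly the "augmentation" the SAS is meant to supply.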

1.1.3 AFCS as the command augmentation system of the aircraft

The automatic flight control system with this type of task is commonly called a Command Augmentation System, or CAS. This system augments the pilot's input command by processing the input command together with the resulting aircraft motion, so as to optimize the command sent to the aerodynamic control surfaces. The working principle of this system can be likened to that of the power steering of a ground vehicle. Examples of CAS types are:

- Pitch-oriented flight control system
- Roll-oriented flight control system
- Yaw-oriented flight control system

Fig. 1.3 shows the example of a CAS for pitch-oriented column steering. From the diagram, it is evident that the input command from the controller stick is processed to follow the pilot's desired pitch angle. The command signal is corrected by the actual pitch angle and is processed and sent through the ECU to the ECHP (electronically controlled, hydraulically powered) actuator. It can also be inferred from the diagram that the pilot's desired pitch angle can effectively be achieved by an appropriate control stick input. In summary, the differentiating features of the CAS, SAS and AP are:

- CAS reacts to control stick input and produces the desired orientation. If the pilot does not move the control stick, the CAS is not operating.


[Block diagram: the commanded attitude is compared with the sensed aircraft motion; the CAS loop closes through the comparator and the CAS computer into the basic control system]
Figure 1.3: Command Augmentation System Diagram: Example of Pitch Channel

- SAS reacts continuously regardless of the motion of the controller stick. When the SAS is operating, the stability of the aircraft is increased.
- AP operates on the condition that the control stick is not moved. When the AP is working, the aircraft maintains its trimmed condition as desired by the pilot.

From the perspective of the control circuit, the following features distinguish the CAS, SAS and AP:

- CAS: the circuit is closed through the Flight Control Computer at the junction of the controller stick and the output from the aircraft motion sensor
- SAS: the circuit is closed through the Flight Control Computer directly to the actuator
- AP: the circuit is closed by the motion of the AP electromotor at the controller stick

From the above comparison, it is clear that the CAS has higher authority than the SAS, because it always reacts to follow the desired attitude set by the pilot.


[Block diagram: stick motion sensor and aircraft motion sensor feed the FCC, which drives the actuator computer (ACT) and the basic control system]

Figure 1.4: The Stability Provider and Control Power Optimizer Control System Diagram

1.1.4 AFCS as the stability provider and command optimizer

This kind of automatic flight control system is commonly called a Super-Augmentation Flight Control System. It is typically used to create artificial stability for classes of aircraft that are statically unstable. The same system is simultaneously used to optimize the control power through the application of control laws provided by the Flight Control Computer. The physical domain of this type of control system is electronic and hydraulic. The super-augmented control system is often called an electro- (opto-) hydraulic flight control system, or Fly-by-Wire (Fly-by-Light) flight control system, abbreviated FbW or FbL. Fig. 1.4 shows the example of the FbW control system diagram used by the F-16 fighters of the Indonesian Air Force. From the diagram, it can be observed that the function of the FCC is a combination of three activities, namely:

- Superaugmentation: providing artificial stability and optimizing the control power of the aircraft. This subsystem works continuously and cannot be overridden by the pilot.
- Autopilot: taking over some of the pilot's routine tasks. If this system is in operation, the pilot does not need to hold the control stick. This subsystem can be overruled by the pilot by moving the controller stick.
- Control Law: governing the optimization of the aircraft motion output according to the desired mission. Using the control law, the aircraft motion is optimized in such a way that it will not always be the same as the motion due solely to the input command from the pilot. The control law is also used for protection, or limiting, of the state variables of the aircraft at a certain flight configuration.

The artificial stability provided by the superaugmentation system is longitudinal and/or lateral-directional static stability. This static stability is created through a continuous feedback process in such a way that the trimmed condition of the aircraft is maintained. This type of flight control system is thus a control system with full authority. Without such a system, aircraft that are statically unstable would not be able to fly. The characteristic of this control system is therefore flight critical.
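The artificial-stability idea can be made concrete with a toy model. The unstable pitch dynamics below (a positive M_alpha gives a divergent pole) and the chosen closed-loop pole location are assumptions for illustration, not F-16 data; full-state feedback of the kind computed in an FCC moves the unstable open-loop pole into the left half-plane.

```python
import numpy as np

# Toy statically unstable pitch dynamics (assumed numbers):
#   x = [theta, q],  theta_ddot = M_alpha*theta + M_d*delta
M_alpha, M_d = 2.0, 5.0
A = np.array([[0.0, 1.0],
              [M_alpha, 0.0]])
B = np.array([[0.0],
              [M_d]])

print(np.linalg.eigvals(A))   # one pole at +sqrt(2): divergent bare airframe

# Continuous feedback delta = -K @ x, with K chosen by hand so that
#   det(sI - A + B K) = s**2 + M_d*K2*s + (M_d*K1 - M_alpha) = (s + 2)**2
K1 = (4.0 + M_alpha) / M_d
K2 = 4.0 / M_d
K = np.array([[K1, K2]])

print(np.linalg.eigvals(A - B @ K))   # both closed-loop poles near s = -2
```

Switching the feedback off restores the unstable pole, which is why this loop can never be overridden by the pilot: it is flight critical in exactly the sense described above.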

1.2 Elements of Automatic Flight Control System

The basic elements in the control information loop are the plant (the controlled system) and the controller. For an aircraft, the controlled system consists of the control apparatus, the control surfaces and the aircraft itself, whereas the controller consists of three subsystems, namely the aircraft motion sensor, the aircraft motion information processor and the control command generator. Fig. 1.5 shows the functional diagram of the manual and automatic control system for an aircraft. The diagram shows that the primary interface between the controlled and controller systems can be divided into two parts: the front-end interface, which is the aircraft sensory system, and the back-end interface, which is the control command generator.

1.2.1 Front-end interface of flight control system

The front-end interface of the flight control system is the part where the motion of the aircraft is observed, recorded and displayed on the presentation map, or transmitted in the form of information signals to the aircraft motion information processing system. In control engineering, the ability to observe the aircraft motion and to reconstruct it as a motion information signal is called observability. In the aircraft flight control system, the front-end interface element is located inside the cockpit (flight deck) and consists of the front window, side windows, instrumentation display dashboard and pilot vision, or of aircraft motion sensors such as the pitot-static system, vanes and the inertial platform, to name a few.
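The observability notion mentioned here has a standard numerical test: the rank of the stacked matrix [C; CA; …; CA^(n-1)]. The two-state system below uses made-up numbers purely to illustrate the check, not an aircraft model.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; C@A; C@A@A; ...] up to n-1 powers of A."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Illustrative two-state system measured by a single sensor on state 1:
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

rank = np.linalg.matrix_rank(observability_matrix(A, C))
print(rank)   # 2: the full motion can be reconstructed from this one output
```

Full rank means the sensor suite in the front-end interface carries enough information to reconstruct every state; a rank deficit flags states that no amount of processing can recover from those outputs.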


[Block diagram: controller motion → control apparatus → control surfaces → aircraft → aircraft motion. Human controller row: command generator (pilot's hands, feet, voice), information processor (pilot's brain), motion sensor (cockpit display, window view, pilot's eyes and ears). Automatic controller row: command generator (actuator), information processor (FCC), motion sensors (ring laser gyros, accelerometers, pitot; fluid-dynamic and inertial sensors)]

Figure 1.5: The automatic and manual flight control loop


[Cockpit view annotated with the front-end and back-end interfaces]

Figure 1.6: The aircraft cockpit as the interface between controller and controlled systems

1.2.2 Back-end interface of flight control system

In the back-end interface, the control command generator is the part of the system through which the pilot command is inputted, namely the controller manipulator (stick, steering wheel or pedals) and the propulsion controller manipulator (power lever and condition lever). In control engineering, the ability to steer the system by moving the control manipulator is called controllability. In the aircraft flight control system, the back-end interface element is located inside the cockpit (flight deck) and is composed of, among others, the controller manipulator and the autopilot actuator. See Fig. 1.6. It can be concluded that the cockpit, or flight deck, is the front end of the flight and the most important part of the aircraft, since it is here that the two main elements of the control system, the controlled and the controller parts, are connected. For unmanned aerial vehicles such as drones, missiles or satellites, the two interfaces are combined into the ground control station. The ground control station is typically composed of a display system, comparable to a conventional aircraft cockpit, and a control and navigation interface through which the ground pilot can enter control commands or navigation waypoints.
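Controllability, invoked here for the ability to steer the aircraft through the manipulator, likewise has a standard numerical test: the rank of [B, AB, …, A^(n-1)B]. The numbers below are again illustrative only.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, A@B, A@A@B, ...] side by side up to n-1 powers of A."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative two-state system with one control input entering state 2:
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

rank = np.linalg.matrix_rank(controllability_matrix(A, B))
print(rank)   # 2: the single input can drive both states
```

Full rank means the back-end interface commands can move every state of the model; a rank deficit would mark motions the manipulator simply cannot excite.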


1.2.3 Information processing system

Another very critical element of the controller is the information processing system. In the manual (human) controller, this system is represented by the pilot's brain, supported by basic information processing computers to speed up the decision making process. In the automatic controller, the information processing element is represented by a Flight Control Computer (FCC). The FCC works continuously in real time, depending on the authority level of the implemented automatic control system. The software inside the FCC that manipulates the FCC input and converts it to the control signal desired by the control system is called the control law. The control law can take the form of simple instructions, as typically used by the autopilot. Some examples of control laws are:
- Constant Gain. The FCC represents a multiplier or an amplifier only: y_out = K u_in, with K constant.

- Variable Gain (gain scheduling). The FCC works as a modulated transformer; the gain is regulated as a function of one or more scheduling parameters p_i: K = f(p_i), so y_out = f(p_i) u_in.

- Robust Gain. The FCC gives the value of the gain K within the admissible control region. The robust property means that the control law still works when there is some level of uncertainty or parameter change in the plant.

- Optimal Gain. The FCC calculates the optimal gain based on a certain predetermined optimization criterion, such as minimum control power, minimum time or minimum fuel; the optimal gain acts on states obtained by reconstruction.

- Adaptive Gain. The FCC determines a varying gain that adjusts to the most suitable model of a certain configuration.

Besides the above control laws, there are many other approaches that are finding more applications in automatic flight control design, namely: neural networks, fuzzy logic, H2 and H∞ control, and passivity-based control.
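The constant-gain and gain-scheduled control laws above can be sketched in a few lines. The scheduling variable (dynamic pressure) and the table values are assumptions chosen for illustration, not data from any FCC.

```python
import numpy as np

def constant_gain(u_in, K=2.0):
    """Constant-gain law: the FCC is a pure multiplier, y_out = K*u_in."""
    return K * u_in

# Gain scheduling: K interpolated against a flight-condition parameter,
# here (assumed) dynamic pressure qbar in Pa; lower gain at high qbar.
qbar_pts = [1000.0, 5000.0, 15000.0]
K_pts    = [3.0,    2.0,    1.0]

def scheduled_gain(u_in, qbar):
    K = np.interp(qbar, qbar_pts, K_pts)   # piecewise-linear schedule
    return K * u_in

print(constant_gain(1.5))             # 3.0
print(scheduled_gain(1.5, 1000.0))    # 4.5  (K = 3 at low qbar)
print(scheduled_gain(1.5, 15000.0))   # 1.5  (K = 1 at high qbar)
```

Reducing the gain as dynamic pressure grows is a common motivation for scheduling, since control surfaces become more effective at high speed; the exact breakpoints here are invented for the sketch.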

1.2.4 Control Mechanism System

The control mechanism is another important element of the overall control system: it transmits the control commands to the aircraft control surfaces. Based on the physical domain of the control command transmission, the control mechanism is categorized into the following types:

1. Mechanical Control System
2. Hydro-mechanical Assisted Control System
3. Hydro-mechanical Powered Control System
4. Electro-hydro-mechanical Control System
5. Electrohydraulic Control System (Fly-by-Wire)
6. Opto-hydraulic Control System (Fly-by-Light)

The following figures illustrate the physical diagram of each control mechanism type.

Mechanical Control System

A mechanical control system is composed of physical components in the translational and rotational mechanical domain. Fig. 1.7 shows an example of a mechanical control system in the longitudinal channel. The general components of this type of control system are, among others:

- push-pull rods
- pulleys for rotational transmission
- cable tension regulators and springs
- pulley rollers
- cables
- component mountings
- mechanical dampers

[Schematic: controller stick linked through push-pull rods, cams, pulleys, cables, a tension regulator and spring/dampers to the elevator mounting]

Figure 1.7: Mechanical Control System: example of longitudinal channel

The dynamics of a mechanical control system are influenced by its physical properties such as mass, inertia, damping, friction and stiffness. To some extent, these properties reduce the performance of the control system as a transmission of control commands from the pilot to the aircraft. On the other hand, the greatest advantage of the mechanical control system is that, through the mechanical linkage, the pilot retains the feel of a direct connection with the aircraft he is controlling. Another important property of the mechanical control system is that the motion transmission from the controller stick to the control surface is mirrored by the control surface motion back to the controller stick. Hence, the mechanical control system is referred to as a reversible control system.

Hydro-mechanical Assisted Control System

A hydromechanically assisted control system is a control system in the mechanical domain in which some parts of the subsystem are strengthened by hydraulic actuators. See Fig. 1.8. The typical components of this control system include:

- mechanical components (as in the mechanical control system)

Figure 1.8: Hydraulically power-assisted control system: example of the longitudinal channel

hydraulic actuators mounted at a number of locations to strengthen mechanical parts such as the push-pull rods, cams and the elevator. Note that the hydraulic actuators are employed to assist the performance of the existing mechanical components; they do not directly connect the pilot and the aircraft. Thus, if these actuators fail or are out of order, the pilot can still control the aircraft through the mechanical linkage, which maintains a direct connection between the pilot and the aircraft. This type of control system is commonly called a Power Assisted Flight Control System. Like a fully mechanical control system, it is reversible, meaning the motion transmission from the controller stick to the control surface can be reversed.

Hydro-mechanical Powered Control System

The hydromechanically powered control system consists of components in the mechanical and hydraulic domains. In this control system, the hydraulic component directly connects the pilot and the aircraft. Consequently, the pilot does not have the feel of directly controlling the aircraft. This detachment of the pilot from the aircraft increases the risk of aircraft operation. To overcome this problem, an artificial feel system is introduced to give the pilot the feel of directly controlling the aircraft. As a result, the integration of the pilot and the aircraft can be maintained. The primary components of this type of control system are:


Figure 1.9: Hydromechanical control system

mechanical components, hydraulic actuators and an artificial feel system; see Fig.1.9. From direct observation it is clear that the system is irreversible: the motion from the controller stick to the control surface cannot be returned by the elevator motion to the controller stick.

Electro-hydro-mechanical Control System

The electrohydromechanical control system consists of components in the electrical, mechanical and hydraulic domains. The automatic control systems discussed earlier belong to this class of control system if the basic control system domain is hydromechanical. A specific feature of this control system is the closed loop between the control system and the aircraft through the aircraft motion sensor, as shown in Fig.1.10. The electronic circuit closes the control loop through the aircraft motion sensor, the control stick motion sensor and the information processing computer that implements the control strategy. Since a hydraulic actuator directly links the controller stick to the control surface, an artificial feel system is introduced to give the pilot the feel of directly controlling the aircraft. As a result, the system is irreversible: in the longitudinal channel, for instance, the motion from the controller stick to the control surface cannot be returned by the elevator motion to the controller stick.


Figure 1.10: Electrohydromechanical Control System

Electrohydraulic Control System (Fly-By-Wire)

The electrohydraulic control system consists of components in the electrical and hydraulic domains. In this control system, all mechanical components are eliminated. The basic philosophy behind this type of control system is the reduction of weight and space while simplifying the installation mechanism. Fig.1.11 shows the schematic diagram of the electrohydraulic control system, popularly known as the Fly-By-Wire control system. A distinct feature of this control system is the closed loop through the aircraft motion sensor, the controller stick sensor and the information processing computer. The Fly-By-Wire control system is irreversible. In this electrohydraulic control system, the primary components consist of processor or instrument blocks, namely a computer block, a sensor block, a hydraulic actuator block and a power supply block. Each block is connected by electrical transmission cables; the system built from these blocks is thus modular. This modular feature allows for flexible and simple installation and repair, which makes maintenance easier and less time consuming, and it also enables the design of a control system with minimum weight and space requirements.


Figure 1.11: Electrohydraulic control system

Opto-hydraulic Control System (Fly-By-Light)

The optohydraulic control system consists of components in the optronic (opto-electronic) and hydraulic domains. In this control system, all mechanical components are eliminated. The working principle and basic philosophy behind this type of control system are similar to those of the Fly-By-Wire control system. The distinction is that the data transmission is done in the optical domain.

Chapter 2

Autopilot System
2.1 Introduction

An autopilot is a flight condition/attitude holding system of an aircraft. In a number of textbooks, this flight condition/attitude holding system is referred to as a displacement autopilot, due to its task of restoring the state variable that it maintains to the original desired value. The autopilot will work well for an aircraft with good stability characteristics. For an aircraft with marginal stability, the autopilot can achieve better performance if a stability augmentation system is installed onboard as an inner loop of the autopilot system, as illustrated in Fig.2.1. Some types of autopilot system that are commonly used for conventional transport aircraft are:

1. Longitudinal mode
   (a) Pitch attitude hold
   (b) Speed/Mach number hold
   (c) Altitude hold
   (d) Glide-slope hold

2. Lateral-directional mode
   (a) Heading hold
   (b) Bank angle hold or wing leveler
   (c) VOR-hold
   (d) Turn coordinator

2.2 Working Principle of Autopilot System

Fig.2.2 shows an example of the AP system used for the Indonesian Aerospace transport aircraft CN-235. The system is the APS-65 type produced by the avionics manufacturer Collins. The figure specifically shows the panels inside the cockpit associated with the operation of an AP system. The associated panels, indicators and control manipulator are listed in the following figures. Fig.2.3 further shows the components of an AP system mounted on a number of parts of the aircraft. Refer to Fig.2.2 and 2.3 for the explanation of the working principle of the autopilot system. Fig.2.4 shows the location of the autopilot components in the standard control diagram.

Figure 2.1: SAS as an inner loop of the aircraft autopilot system

Figure 2.2: CN235-100 autopilot control system APS-65. Panels shown: (B) circuit breaker; (D) AP switch panel; (E) status of the autopilot and automatic control system; (G) attitude direction indicator; (K) control manipulator.


Figure 2.3: CN235-100 autopilot control system APS-65. Components shown: (1) air data sensor; (2) sideslip sensor; (3) normal accelerometer; (4) avionics rack; (5) autopilot computer; (6) aileron servo; (7) elevator servo; (8) rudder servo; (9) elevator trim servo.


Basically, an AP system is easy to operate. The pilot selects the state variable to be maintained as the reference by bringing the aircraft to the trimmed condition at the value of the selected reference variable. The pilot executes this trimming process by using the control manipulator (K), wheels, control stick or pedals, guided by the information from the attitude direction indicator instrument (G). Once the trim condition is achieved, the pilot can press the AP knob (D) so that the AP system is activated and works to maintain the trim condition. The trim condition is maintained by the AP system through the AP actuators (6-9) and the AP data processing system (5), which continuously keep the aircraft trim condition at the value of the state variable selected as the reference. While the AP system is working, the pilot can release his hands and feet from the control manipulator, wheels, controller stick and pedals. All he needs to do is occasionally check the aircraft trim condition as displayed by the Attitude Direction Indicator (ADI). To disengage the AP system, the pilot just needs to grasp the controller stick/wheel/pedal and displace it a little; the AP control loop is then automatically disengaged and the pilot regains full control of the aircraft. With this characteristic, the autopilot is often referred to as a low authority system.

Figure 2.4: The location of autopilot system components in the standard control diagram

2.3 Longitudinal Autopilot

The following subsections focus on the longitudinal autopilot design for a transport aircraft. The discussion covers the design of the pitch attitude hold, speed hold and altitude hold systems. The elaboration of the altitude hold system includes a number of inner-loop feedback designs, including pitch attitude hold, forward acceleration feedback and compensator integration.

2.3.1 Pitch Attitude Hold System

The block diagram of a pitch attitude hold system is shown in Fig.2.5. Note that the aircraft is modeled by block (1) with the output of motion states in the longitudinal mode, x = {u, α, θ, q}, where the pitch angle θ is sensed by the vertical gyro, represented by block (2). The output of the vertical gyro is the signal θ_m(t), which is then fed to and processed by the autopilot computer, block (3). The computer receives an input from the pilot in the form of the desired value of the pitch angle as the reference angle to be maintained, θ_ref(t). In the autopilot computer (APC) the signal θ_ref(t) is compared to θ_m(t) using a comparator circuit, and the result is amplified by an amplifier circuit which yields the output command signal. This signal is in turn sent to the autopilot servo motor that moves the steering stick and then conveys the signal to the elevator through the control mechanism, block (4). The elevator deflection δ_e changes the aircraft attitude and the new pitch angle θ(t) is sensed by the vertical gyro. The whole process in the autopilot loop is repeated until the error signal is zero. This condition means that θ_m(t) = θ_ref(t), i.e., the aircraft pitch angle θ(t) is the same as the reference pitch angle θ_ref desired by the pilot.


Figure 2.5: Functional diagram of the pitch attitude hold system

Since the process from block (2) to block (4) is performed in the electrical domain, it can be considered very fast. The net result is the maintenance of the aircraft pitch angle θ(t) at the value of the reference pitch angle θ_ref desired by the pilot. The above functional diagram can be described by the mathematical diagram shown in Fig.2.6. In this model, the transfer functions of the pitch attitude hold closed loop system consist of:

1. Aircraft transfer function matrix, for δ_T = 0:

$$\mathbf{G}_{A/C}(s)\big|_{\delta_T=0} \triangleq \frac{\mathbf{N}_{long}(s)\big|_{\delta_T=0}}{\Delta_{long}(s)} = \frac{\mathbf{x}(s)}{\delta_e(s)} \tag{2.1}$$


Figure 2.6: Pitch attitude hold system

2. Vertical gyro transfer function. The time response of the gyro in sensing the pitch angle θ(t) is considered very fast, thus:

$$G_{vg}(s) = S_{vg} \approx \mathrm{constant} \tag{2.2}$$

3. Autopilot computer transfer function:

$$\hat{\theta}_\epsilon(s) = \hat{\theta}_{ref} - \hat{\theta}_m(s) \tag{2.3}$$

4. Autopilot servo and control mechanism transfer function. This block is modeled as a system with a time constant τ_s, representing the first order lag in the transmission of the signal to the aircraft elevator:

$$G_{ct}(s) = \frac{K_{ct}}{s + 1/\tau_s} \tag{2.4}$$
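Eq.2.4 is a plain first-order lag whose DC gain is K_ct·τ_s. As an illustrative aside (written in Python rather than the MATLAB used elsewhere in these notes, with a unit input assumed purely for demonstration), a forward-Euler integration of the servo state shows the elevator deflection settling at that DC gain:

```python
# Forward-Euler simulation of the AP servo lag G_ct(s) = K_ct / (s + 1/tau_s).
# Illustrative sketch only; the unit input is a placeholder value.
def servo_step_response(K_ct, tau_s, eps, t_end, dt=1e-3):
    delta_e = 0.0  # elevator deflection state
    t = 0.0
    while t < t_end:
        # state equation of Eq.2.4: delta_e' = -delta_e/tau_s + K_ct * eps
        delta_e += dt * (-delta_e / tau_s + K_ct * eps)
        t += dt
    return delta_e

K_ct, tau_s = 8.9695, 0.1          # values used later in the N250-100 case study
dc_gain = K_ct * tau_s             # steady state per unit of constant input
final = servo_step_response(K_ct, tau_s, eps=1.0, t_end=2.0)
print(final, dc_gain)              # final value approaches K_ct*tau_s
```

After twenty time constants the transient has died out and the deflection equals the DC gain to within numerical precision.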

The characteristic polynomial of the pitch attitude hold closed loop system is given by:

$$\Delta_{cl}(s) = \Delta_{long}(s) + k_e\,\frac{N_{\theta\delta_e}(s)}{s + 1/\tau_s} \tag{2.5a}$$

where

$$k_e \triangleq S_{vg} K_{ct} \tag{2.5b}$$

Therefore, the characteristic equation of the closed loop system can be written as

$$1 + k_e\,\frac{N_{\theta\delta_e}(s)}{\Delta_{long}(s)\,[s + 1/\tau_s]} = 0 \tag{2.6}$$

The above equation can be written in a simpler form as

$$1 + \bar{k}_e\,\bar{G}_{ol}(s) = 0 \tag{2.7}$$

where, with the numerator and characteristic polynomial written in time-constant form,

$$\bar{G}_{ol}(s) = \frac{\bar{N}_{\theta\delta_e}(s)}{\bar{\Delta}_{long}(s)\,[\tau_s s + 1]} \tag{2.8a}$$

$$\bar{k}_e = S_{vg} K_{ct} \tau_s S_{\theta\delta_e} \tag{2.8b}$$
In the case of a conventional aircraft, Ḡ_ol(s) will have two real zeros and five poles: two pairs of complex poles of the aircraft and one real pole associated with the autopilot control mechanism servo. Observing the closed loop characteristic polynomial, Eq.2.7, note that K_ct (the servo gain) is the only gain that can still be varied; the vertical gyro sensitivity S_vg and the servo lag factor 1/τ_s are typically constant. As a case study, the pitch attitude hold system of the N250-100 aircraft prototype 2 Krincing Wesi manufactured by the Indonesian Aerospace Inc. is presented. See Fig.2.7. The longitudinal mode transfer functions of the N250-100 during cruise at an altitude of h = 15000 ft and a speed of Vc = 250 kts are given as follows (for the elevator control channel):

$$N^u_{\delta_e}(s) = S_{u\delta_e}\left(-\frac{s}{37.6823} + 1\right)\left(\frac{s}{4.2634} + 1\right)$$

$$N^\alpha_{\delta_e}(s) = S_{\alpha\delta_e}\left(\frac{s}{91.1168} + 1\right)\left(\frac{s}{0.009 + j\,0.0831} + 1\right)\left(\frac{s}{0.009 - j\,0.0831} + 1\right) \tag{2.9a}$$

$$N^\theta_{\delta_e}(s) = S_{\theta\delta_e}\left(\frac{s}{1.2860} + 1\right)\left(\frac{s}{0.0222} + 1\right)$$

$$N^q_{\delta_e}(s) = s\,N^\theta_{\delta_e}(s)$$

with the static sensitivity coefficients

$$S_{u\delta_e} = 3598.817, \qquad S_{\alpha\delta_e} = -1.992, \qquad S_{\theta\delta_e} = -8.1443 \tag{2.9b}$$

and the characteristic polynomial

$$\Delta_{long}(s) = s^4 + 3.3115\,s^3 + 8.1448\,s^2 + 0.1604\,s + 0.0554 \tag{2.10a}$$

having the characteristic roots

$$p_{1,2} = -1.6472 \pm j\,2.317, \qquad p_{3,4} = -0.0085 \pm j\,0.0824 \tag{2.10b}$$

Figure 2.7: N250-100 aircraft prototype 2 Krincing Wesi manufactured by the Indonesian Aerospace Inc.

The N250-100 utilizes a ring laser gyroscope (RLG) as its Inertial Navigation System (INS); therefore the time response can be considered very fast. Hence, the gain of the vertical gyro, S_vg, in sensing the changes of the pitch angle θ(t), can be taken as:

$$S_{vg} = 1 \tag{2.11}$$

Also, since the N250-100 employs a Fly-By-Wire control system in its longitudinal and lateral/directional channels, the domain of the autopilot servo is electromechanical, with a relatively fast time response. The time constant of the AP servo is taken as

$$\tau_s = 0.1\ \mathrm{sec} \tag{2.12}$$

Substituting Eqs.2.9b-2.12 into Eqs.2.7 and 2.8a, the following closed loop characteristic equation of the pitch attitude hold system is obtained:

$$1 + \bar{k}_e\,\bar{G}_{ol}(s) = 0 \tag{2.13a}$$

where

$$\bar{k}_e = S_{vg} K_{ct} \tau_s S_{\theta\delta_e} = -0.81443\,K_{ct}$$

and
and

$$\bar{G}_{ol}(s) = \frac{\left(\dfrac{s}{1.286}+1\right)\left(\dfrac{s}{0.0222}+1\right)}{\left(\dfrac{s}{p_1}+1\right)\left(\dfrac{s}{p_2}+1\right)\left(\dfrac{s}{p_3}+1\right)\left(\dfrac{s}{p_4}+1\right)\left(\tau_s s + 1\right)} \tag{2.13b}$$

The control algorithm is implemented in MATLAB. The program for generating the root locus diagram for the design of the pitch attitude hold system is presented, with commentary, below.

% Pitch Attitude Hold -- Root locus drawing
close all
% Static sensitivity coefficients:
Sude = 3598.817; Sade = -1.9920; Stde = -8.1443;
% Coefficients of the characteristic polynomial:
D_long = [1 3.3115 8.1448 0.1604 0.0554];
% Finding the roots:
p = roots(D_long);
% Time constant of the Auto Pilot (AP) servo
ts = 0.1;
% Open loop zeros
zero = [-1.286 -0.0222];
% Open loop poles
pole = [p(1) p(2) p(3) p(4) -1/ts];
% Open loop gain
Pipole = pole(1)*pole(2)*pole(3)*pole(4)*pole(5);
kol = real(Stde*Pipole/(zero(1)*zero(2)));
% Open loop transfer function
[N_ol,D_ol] = zp2tf(zero,pole,kol);
tf(N_ol,D_ol)
% Vertical gyro gain
Svg = 1;
% Drawing the root locus
sys_ol = zpk(zero,pole,kol);
figure(1); set(1,'Name','Open Loop Root Locus');
rlocus(sys_ol); grid on

For a positive value of the gain K_ct, the feedback gain k_e will be negative. Fig.2.8 shows the root locus diagram of the pitch attitude hold system of the N250-100 aircraft. From the root locus it is evident that, for a negative gain, the natural frequency of the pitch oscillation mode tends to increase while its damping ratio ζ decreases. Nevertheless, this is still acceptable to the extent that the damping ratio remains greater than 0.35. It is interesting to note that the phugoid mode tends to become more damped and, for an even higher value of negative k_e, this mode breaks up into two phugoid subsidence modes. This condition is advantageous for maintaining the pitch angle θ at its reference value θ_ref, since the aircraft speed is also maintained due to the overdamped phugoid. As an example, the pitch oscillation damping ratio is taken to be

$$\zeta_{PO} = 0.35, \qquad K_{ct} = 8.9695 \tag{2.14}$$

for which the roots of the closed loop characteristic polynomial are

$$p_{sv} = -11.2777, \qquad p_{po1,2} = -0.5390 \pm j\,4.1205, \qquad p_{ph1} = -0.9305, \qquad p_{ph2} = -0.0254 \tag{2.15}$$
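The gain and roots of Eqs.2.14-2.15 can be cross-checked numerically. The following Python sketch (an independent check, not part of the original text, mirroring the gain bookkeeping of the MATLAB program above) forms the closed-loop characteristic polynomial Δ_long(s)(s + 1/τ_s) + k(s − z_1)(s − z_2) and evaluates it at the quoted roots; the residuals are small relative to the individual terms, and nonzero only because the printed roots are rounded:

```python
# Check that the roots quoted in Eq.2.15 satisfy the closed-loop
# characteristic equation of the pitch attitude hold loop.
D_long = [1, 3.3115, 8.1448, 0.1604, 0.0554]    # Eq.2.10a, highest power first
z1, z2 = -1.286, -0.0222                        # zeros of N_theta_de (Eq.2.9a)
ts, Svg, Kct, Stde = 0.1, 1.0, 8.9695, -8.1443

def polyval(coeffs, s):
    r = 0.0
    for c in coeffs:
        r = r * s + c                           # Horner evaluation
    return r

# Open-loop zpk gain, mirroring the MATLAB program: the product of the five
# open-loop poles equals (constant coefficient of D_long) * (-1/ts).
Pipole = D_long[-1] * (-1.0 / ts)
kol = Stde * Pipole / (z1 * z2)
k = ts * Svg * Kct * kol                        # loop gain on the monic numerator

def Delta_cl(s):
    return polyval(D_long, s) * (s + 1.0 / ts) + k * (s - z1) * (s - z2)

roots = [-11.2777, -0.5390 + 4.1205j, -0.5390 - 4.1205j, -0.9305, -0.0254]
residuals = []
for p in roots:
    scale = abs(polyval(D_long, p) * (p + 1.0 / ts)) + abs(k * (p - z1) * (p - z2))
    residuals.append(abs(Delta_cl(p)) / scale)
print(max(residuals))   # small relative residuals
```

As a further consistency check, the sum of the five quoted roots matches the s⁴ coefficient of Δ_long(s)(s + 1/τ_s), namely −(3.3115 + 10).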


Figure 2.8: Root locus of the pitch attitude hold system of the N250-100 PA-2 aircraft

The following figure shows the θ_ref step response for two cases:

Case 1: without feedback

Case 2: with θ feedback to the elevator, K_ct = 8.9695

% Pitch attitude hold -- Closed loop time response analysis
Kct = 8.9695;
Ktde = ts*Svg*Kct;
% Closed loop
N_cl = Ktde*N_ol;
D_cl = D_ol + Ktde*N_ol;
[zero_cl,pole_cl,Kcl] = tf2zp(N_cl,D_cl);
pole_cl;
tf(N_cl,D_cl);
sys_cl = zpk(zero_cl,pole_cl,Kcl);
% Time response for step input
t = 0:0.2:700;
u = t*0;
u(find(t>1)) = 5/57.3;
% Open loop time response
y_ol = lsim(sys_ol,u,t);
% Closed loop time response


Figure 2.9: Root locus of the pitch attitude hold system for the N250-100 PA-2 aircraft, enlarged to show the phugoid mode

y_cl = lsim(sys_cl,u,t);
figure('Name','Time Respons Theta');
subplot(211); plot(t,u*57.3,t,y_ol*57.3,'-m');
ylabel('open loop teta'); grid on
subplot(212); plot(t,u*57.3,t,y_cl*57.3,'-r'); axis([0 35 0 7]);
ylabel('closed loop teta'); grid on
% u response
zero_u = [37.6823 -4.2634];
kcl_u = Kcl*Sude/Stde*zero(1)*zero(2)/(zero_u(1)*zero_u(2));
[Nu_cl,Du_cl] = zp2tf(zero_u,pole_cl,kcl_u);
[zero_ucl,pole_ucl,Kucl] = tf2zp(Nu_cl,Du_cl);
pole_ucl;
tf(Nu_cl,Du_cl); sysu_cl = tf(Nu_cl,Du_cl);
yu_cl = lsim(sysu_cl,u,t);
% alpha response
zero_a = [-91.1168 -0.009+0.0831j -0.009-0.0831j];
kcl_a = -Kcl*Sade/Stde*zero(1)*zero(2)/real(zero_a(1)*zero_a(2)*zero_a(3));
[Na_cl,Da_cl] = zp2tf(zero_a,pole_cl,kcl_a); tf(Na_cl,Da_cl);
sysa_cl = tf(Na_cl,Da_cl); ya_cl = lsim(sysa_cl,u,t);

figure('Name','Close Loop Time Respons')
subplot(211); plot(t,yu_cl,'-g'); ylabel('closed loop u [m/s]');
axis([0 250 -35 0]); grid on
subplot(212); plot(t,ya_cl*57.3,'-b'); grid on
axis([0 25 -1 4]); ylabel('closed loop \alpha [deg]'); grid on

It is apparent that, with the pitch attitude hold, the value of θ is maintained quickly, in less than 15 time units, whereas without the pitch attitude hold (k_e = 0) it takes more than 500 time units to reach the steady level corresponding to θ_ref, roughly 35 times as long as with the pitch attitude hold. Also note that when the pitch attitude hold is off there exists an offset angle of about 15°, whereas with the pitch attitude hold the offset angle is only approximately 0.25°. Fig.2.11 shows the time response of the velocity, u(t), and the angle of attack, α(t). The bleed-off speed to hold the pitch angle θ = 5° is about 35 kts, while the angle of attack increases by 0.5°.

Figure 2.10: Time response of θ(t) due to a step θ_ref = 5° with and without the pitch attitude hold system

Figure 2.11: Time response of u(t) and α(t) for K_ct = 8.9695

This pitch attitude hold system can be further improved by a pitch oscillation stability augmentation system (SAS), which will be discussed in more detail in the next chapter.

2.3.2 Speed Hold System

In aircraft applications, the speed hold system can be categorized into two types:

Speed hold system: for a low speed aircraft (low subsonic)

Mach hold system: for a high speed aircraft (high subsonic to transonic)

Only the speed hold system will be covered in further detail in this course. Refer to Fig.2.12, which illustrates the speed hold system typically used in an aircraft. The aircraft's airspeed is sensed by the pitot static system (2) and the result is sent to the autopilot computer (3) to be compared with the reference flight speed (the speed which will be maintained by the speed hold system). The speed difference û_ε is sent by the AP computer to the engine control system (power lever to propulsion control mechanism) (4). The result is a throttle deflection δ_th applied to the aircraft engine (5). The aircraft engine in turn changes the thrust of the aircraft by ΔT. The aircraft (1) reacts to the thrust input ΔT and its velocity u(t) changes accordingly. The process continues until the aircraft velocity u(t) has reached the value of the reference speed u_ref(t). In this condition the speed difference signal û_ε from the computer will be zero. When the autopilot is working, the pilot can release the power lever and let the engine control manipulator move by itself, following the closed loop process of the speed hold autopilot system. Hence this type of flight speed holding system is commonly called an Auto Throttle system.


Figure 2.12: Speed hold system functional diagram

The following figure shows the mathematical diagram of the aircraft speed hold system. This speed hold system has the following transfer functions:

1. Aircraft transfer function matrix, for δ_e = 0:

$$\mathbf{G}_{A/C}(s)\big|_{\delta_e=0} = \frac{\mathbf{N}_{long}(s)\big|_{\delta_e=0}}{\Delta_{long}(s)} = \frac{\mathbf{x}(s)}{\delta_T(s)} \tag{2.16}$$

2. Pitot static transfer function. In measuring the flight speed, the pitot is modeled by a first order lag system, called the pitot lag:

$$G_{ps}(s) = \frac{1/\tau_{ps}}{s + 1/\tau_{ps}} \tag{2.17}$$

3. Autopilot computer transfer function. The comparator is modeled by:

$$\hat{u}_\epsilon(s) = \hat{u}_{ref} - \hat{u}_m(s) \tag{2.18}$$


Figure 2.13: Mathematical diagram of the speed hold system

4. Propulsion control transfer function. For current advanced propulsion technology, the transfer function can be modeled by a first order lag:

$$G_{pc}(s) = K_{pc}\,\frac{1/\tau_{pc}}{s + 1/\tau_{pc}} \tag{2.19}$$

5. Engine (and propeller) transfer function. For present technology, compared to the aircraft dynamics, the corresponding transfer function can also be modeled as a first order lag:

$$G_{eng}(s) = K_e\,\frac{1/\tau_e}{s + 1/\tau_e} \tag{2.20}$$
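The pitot, propulsion control and engine lags above act in cascade around the speed loop. The following Python sketch (illustrative only; it assumes unit gains K_pc = K_e = 1 and borrows the time constants used later in the N250-100 case study) integrates the three first-order lags in series with forward Euler and shows that the cascade settles at the product of the DC gains:

```python
# Forward-Euler simulation of the lag cascade G_ps * G_pc * G_eng
# (Eqs.2.17, 2.19, 2.20) with unit gains K_pc = K_e = 1 (assumed values).
def cascade_step(taus, t_end, dt=1e-3):
    x = [0.0] * len(taus)                   # one state per first-order lag
    t = 0.0
    while t < t_end:
        u = 1.0                             # unit step input to the first lag
        for i, tau in enumerate(taus):
            x[i] += dt * (u - x[i]) / tau   # unit-DC-gain lag: x' = (u - x)/tau
            u = x[i]                        # output of this lag feeds the next
        t += dt
    return x[-1]

taus = [0.2, 0.1333, 1.0]                   # pitot, propulsion control, engine (sec)
final = cascade_step(taus, t_end=12.0)
print(final)                                # approaches the cascade DC gain of 1
```

The slowest lag (the engine, τ = 1 s) dominates the settling time, which is why the engine/propeller dynamics set the pace of the speed hold loop in the analysis that follows.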

The closed loop characteristic polynomial of the speed hold system can be expressed as follows:

$$\Delta_{cl}(s) = \Delta_{long}(s) + k_{uT}\,\frac{N^u_{\delta_T}(s)}{[s + 1/\tau_{ps}]\,[s + 1/\tau_{pc}]\,[s + 1/\tau_e]} \tag{2.21a}$$

where

$$k_{uT} \triangleq K_{pc} K_e / [\tau_{ps}\,\tau_{pc}\,\tau_e] \tag{2.21b}$$

Thus, the closed loop characteristic equation is given by:

$$1 + \bar{K}_{uT}\,\bar{G}_{ol}(s) = 0 \tag{2.22}$$

where

$$\bar{K}_{uT} = K_{pc} K_e S_{u\delta_T} \tag{2.23a}$$

$$\bar{G}_{ol}(s) = \frac{\bar{N}^u_{\delta_T}(s)}{[\tau_{ps} s + 1]\,[\tau_{pc} s + 1]\,[\tau_e s + 1]\,\bar{\Delta}_{long}(s)} \tag{2.23b}$$

The speed hold system is usually used during the approach and landing in order to reduce the work load of the pilot, who is primarily occupied by the aircraft guidance task. As a case study, the speed hold system of the N250-100 aircraft manufactured by the Indonesian Aerospace in the landing configuration will be analyzed. In this configuration, the aircraft velocity is 125 kias at an altitude of h = 0 (sea level). The transfer functions associated with this configuration are given by:

$$N^u_{\delta_T}(s) = S_{u\delta_T}\left(\frac{s}{0.3519} + 1\right)\left(\frac{s}{0.8348 + j\,0.809} + 1\right)\left(\frac{s}{0.8348 - j\,0.809} + 1\right)$$

$$N^\alpha_{\delta_T}(s) = S_{\alpha\delta_T}\left(\frac{s}{0.0342 + j\,0.1992} + 1\right)\left(\frac{s}{0.0342 - j\,0.1992} + 1\right) \tag{2.24a}$$

$$N^\theta_{\delta_T}(s) = S_{\theta\delta_T}\left(\frac{s}{0.9093} + 1\right)\left(1 - \frac{s}{0.0706}\right)$$

$$N^q_{\delta_T}(s) = s\,N^\theta_{\delta_T}(s)$$

with the static sensitivity coefficients

$$S_{u\delta_T} = 25.9337, \qquad S_{\alpha\delta_T} = -0.1276, \qquad S_{\theta\delta_T} = 0.1912 \tag{2.24b}$$

and the characteristic polynomial

$$\Delta_{long}(s) = \frac{s^4}{0.0588} + \frac{s^3}{0.02876} + \frac{s^2}{0.03296} + \frac{s}{0.6164} + 1 \tag{2.25a}$$

with the characteristic roots

$$p_{1,2} = -1.0147 \pm j\,0.8304, \qquad p_{3,4} = -0.0076 \pm j\,0.1848 \tag{2.25b}$$
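The quoted roots can be checked directly against the characteristic polynomial. The following Python sketch (an independent numerical check, not part of the original MATLAB listings) evaluates the polynomial of Eq.2.25a at the roots of Eq.2.25b; the residuals are tiny compared with the size of the individual terms, and nonzero only because the printed roots are rounded to four decimals:

```python
# Verify Eq.2.25b: the quoted roots should (nearly) zero out Eq.2.25a.
coeffs = [1/0.0588, 1/0.02876, 1/0.03296, 1/0.6164, 1.0]  # highest power first

def polyval(coeffs, s):
    r = 0.0
    for c in coeffs:
        r = r * s + c          # Horner evaluation
    return r

roots = [-1.0147 + 0.8304j, -1.0147 - 0.8304j,   # short-period pair
         -0.0076 + 0.1848j, -0.0076 - 0.1848j]   # phugoid pair
residuals = [abs(polyval(coeffs, p)) for p in roots]
print(residuals)               # small: the printed roots are rounded
```

A quick sanity check on the same data: the sum of the four roots equals minus the ratio of the s³ and s⁴ coefficients, −0.0588/0.02876 ≈ −2.045, which the quoted values reproduce.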

The aircraft velocity is sensed by the pitot static system, whose transfer function is modeled by a first order lag with a time constant of:

$$\tau_{ps} = 0.2\ \mathrm{sec} \tag{2.26}$$

The N250-100 propulsion control system uses Full Authority Digital Engine Control (FADEC) technology and thus its time response is fast. The time constant is assumed to be

$$\tau_{pc} = 0.1333\ \mathrm{sec} \tag{2.27}$$

To generate thrust, the engine and propeller system of the N250-100 aircraft employs an advanced six-bladed propeller, yielding a relatively fast time response. The time constant is taken to be

$$\tau_e = 1\ \mathrm{sec} \tag{2.28}$$

Substituting Eqs.2.24b-2.28 into Eqs.2.22-2.23b, the following equation is obtained:

$$1 + \bar{K}_{uT}\,\bar{G}_{ol}(s) = 0 \tag{2.29}$$

where

$$\bar{K}_{uT} = K_{pc} K_e S_{u\delta_T} \tag{2.30a}$$

$$\bar{G}_{ol}(s) = \frac{\left(\dfrac{s}{0.3519}+1\right)\left(\dfrac{s}{0.8348 + j\,0.809}+1\right)\left(\dfrac{s}{0.8348 - j\,0.809}+1\right)}{\left(\dfrac{s}{5}+1\right)\left(\dfrac{s}{7.5}+1\right)\left(s+1\right)\left(\dfrac{s}{p_1}+1\right)\left(\dfrac{s}{p_2}+1\right)\left(\dfrac{s}{p_3}+1\right)\left(\dfrac{s}{p_4}+1\right)} \tag{2.30b}$$

Fig.2.14 shows the root locus diagram of the speed hold system of the N250-100 aircraft for positive K̄_uT gain. The root locus is drawn as a function of the gain K = K_pc K_e. Notice that the pitch oscillation mode hardly changes at all, since it is constrained by the two complex zeros of N^u_δT(s) = 0. Nonetheless, the damping ratio of the phugoid mode increases up to a maximum value before the branch moves toward the unstable region. The MATLAB code for drawing the root locus of the speed hold system is presented below:

% Static sensitivity coefficients:
Sudt = 25.9337; Sadt = -0.1276; Stdt = 0.1912;
% Coefficients of the characteristic polynomial:


D_long = [1/0.0588 1/0.02876 1/0.03296 1/0.6164 1];
% Finding the roots:
p = roots(D_long);
% Time constants
tps = 0.2; tpc = 0.13333; te = 1;
% Open loop zeros
zero = [-0.3519 -0.8348-0.809i -0.8348+0.809i];
% Open loop poles
pole = [p(1) p(2) p(3) p(4) -1/tps -1/tpc -1/te];
% Open loop gain
Pipole = pole(1)*pole(2)*pole(3)*pole(4)*pole(5)*pole(6)*pole(7);
kol = real(Pipole/(zero(1)*zero(2)*zero(3)));
% Open loop transfer function
[N_ol,D_ol] = zp2tf(zero,pole,kol); tf(N_ol,D_ol)
% Drawing the root locus
sys_ol = zpk(zero,pole,kol);
figure(1)
set(1,'Name','Open Loop Root Locus');
rlocus(sys_ol); zoom(3); grid on

To achieve the most favorable performance, the point on the root locus associated with the highest phugoid damping ratio is chosen. Referring to the figure, this point is given by

$$\zeta_{max} = 0.0898, \qquad K = 17.3838$$

and the corresponding roots of the characteristic polynomial are

$$p_{sv1,2} = -6.5126 \pm j\,0.2513, \quad p_{po1,2} = -0.8657 \pm j\,0.9481, \quad p_{ph1,2} = -0.1729 \pm j\,0.9474, \quad p_{sv3} = -0.4422 \tag{2.31}$$

Using the above values of the poles and the gain K, the closed loop characteristic polynomial of this speed hold system can be written as

$$\Delta_{cl}(s, K) = s^7 + 15.5445\,s^6 + 79.3848\,s^5 + 163.9051\,s^4 + 222.9647\,s^3 + 185.1533\,s^2 + 114.5985\,s + 28.7110 \tag{2.32}$$

The MATLAB code implementation for the gain selection process and the associated time response analysis is given below:

% Selecting the gain
[Ku_dt,pcl] = rlocfind(sys_ol); D_cl = poly(pcl);
D_ps = [1 5]; N_cl = Ku_dt/5*conv(N_ol,D_ps);
% Preserve the dimension of N_cl
N_cl = N_cl(2:length(N_cl)); D_cl = D_ol + Ku_dt*N_ol;
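The coefficients of Eq.2.32 follow from the roots of Eq.2.31, since poly(pcl) in the MATLAB listing simply expands the product ∏(s − p_i). The following Python sketch (an independent check, not part of the original listings) performs the same expansion and compares the result with the printed coefficients; the small discrepancies come only from the rounding of the printed roots:

```python
# Rebuild Eq.2.32 from the closed-loop roots of Eq.2.31.
roots = [-6.5126 + 0.2513j, -6.5126 - 0.2513j,
         -0.8657 + 0.9481j, -0.8657 - 0.9481j,
         -0.1729 + 0.9474j, -0.1729 - 0.9474j,
         -0.4422]

# Expand prod (s - p_i) into polynomial coefficients (highest power first).
coeffs = [1.0]
for p in roots:
    nxt = [0.0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        nxt[i] += c             # multiply existing polynomial by s
        nxt[i + 1] -= c * p     # ... and by (-p)
    coeffs = nxt
coeffs = [c.real for c in coeffs]   # imaginary parts cancel in conjugate pairs

printed = [1, 15.5445, 79.3848, 163.9051, 222.9647, 185.1533, 114.5985, 28.7110]
errors = [abs(a - b) / max(abs(b), 1.0) for a, b in zip(coeffs, printed)]
print(max(errors))   # small relative mismatch from rounding
```

This is the same bookkeeping MATLAB's poly performs, so it also confirms that Eq.2.31 and Eq.2.32 describe the same closed-loop system.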


Figure 2.14: Root locus diagram for the speed hold system of N250-100 PA-2

[zero_cl,pcl,Kcl] = tf2zp(N_cl,D_cl); pcl
tf(N_cl,D_cl)
sys_cl = zpk(zero_cl,pcl,Kcl);
% Time response
t = 0:0.2:700; u = t*0; u(find(t>1)) = 1;
% Open loop time response
y_ol = lsim(sys_ol,u,t);
% Closed loop time response
V = 250*1.8*1000/(60*60); y_cl = lsim(sys_cl,u,t);
figure('Name','Time Respons')
subplot(211); plot(t,u,t,y_ol,'-m'); ylabel('open loop u'); grid on
subplot(212); plot(t,u,t,y_cl,'-r'); axis([0 50 0 2]);
ylabel('closed loop u'); grid on
% Pitch angle response
zero_theta = [-0.9093 0.0706 -5];
Pipcl = pcl(1)*pcl(2)*pcl(3)*pcl(4)*pcl(5)*pcl(6)*pcl(7);
kcl_theta = real(Stdt*Pipcl/(zero_theta(1)*zero_theta(2)*zero_theta(3)));
[Ntheta_cl,Dtheta_cl] = zp2tf(zero_theta,pcl,kcl_theta);


Figure 2.15: Root locus diagram for the speed hold system of N250-100 PA-2, zoomed around the pitch oscillation and phugoid modes

tf(Ntheta_cl,Dtheta_cl)
systheta_cl = zpk(zero_theta,pcl,kcl_theta);
ytheta_cl = lsim(systheta_cl,u,t);
% Angle of attack response
zero_a = [-0.0342-0.1992j -0.0342+0.1992j -5];
kcl_a = real(Sadt*Pipcl/(zero_a(1)*zero_a(2)*zero_a(3)));
[Na_cl,Da_cl] = zp2tf(zero_a,pcl,kcl_a); tf(Na_cl,Da_cl)
% Closed loop time response
sysa_cl = zpk(zero_a,pcl,kcl_a);
ya_cl = lsim(sysa_cl,u,t);
V = 250*1.8*1000/(60*60);
figure('Name','Close Loop Time Respons')
subplot(211); plot(t,ytheta_cl*57.3,'-g'); axis([0 50 -60 60]);
ylabel('closed loop \theta [deg]'); grid on


subplot(212); plot(t,ya_cl*57.3,'-b'); grid on
axis([0 50 -40 40]); ylabel('closed loop \alpha [deg]'); grid on

The time response due to a step input, u(t), can be obtained through the inverse Laplace transformation of the closed loop transfer function as follows:

$$u(t) = \mathcal{L}^{-1}\left[G_{cl}(s)\,u_{ref}(s)\right] \tag{2.33a}$$

where

$$G_{cl}(s) = \frac{K\,(s + 1/\tau_{ps})\,N^u_{\delta_T}(s)}{\Delta_{cl}(s, K)} \tag{2.33b}$$
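By the final value theorem, the steady-state response of u(t) to a unit step u_ref is G_cl(0), which for a pure-gain feedback loop is generally not exactly 1; this is the origin of the steady-state offset discussed in the text. The Python fragment below illustrates the computation, using Δ_cl(0) = 28.7110 from Eq.2.32; the closed-loop numerator DC value N_cl(0) = 25.0 is a hypothetical number chosen purely for illustration, not taken from the text:

```python
# Final value theorem: u_ss = lim_{s->0} s * G_cl(s) * (1/s) = G_cl(0).
Delta_cl_0 = 28.7110               # constant term of Eq.2.32
N_cl_0 = 25.0                      # hypothetical numerator DC value (assumed)
u_ss = N_cl_0 / Delta_cl_0         # steady-state response to a unit step
offset = 1.0 - u_ss                # residual error a gain adjuster would remove
print(u_ss, offset)
```

Whenever N_cl(0) differs from Δ_cl(0), a nonzero offset remains, which is why the text introduces a gain adjuster in the autopilot computer to trim it out.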

The following figures show the comparison of the time response u(t) due to a unit step u_ref = 1 between the aircraft without the speed hold system and the one with the speed hold system operated at gain K. The effectiveness of the speed hold system can be observed from the comparison: with the speed hold system, the steady state value of u(t) is reached at about 35 time units, while without it the steady state condition has still not been reached after 350 time units. The offset from the steady state value u_ref = 1 can be compensated through the use of a gain adjuster governed by the autopilot computer of the speed hold system. The corresponding expressions for θ(t) and α(t) can be derived as follows:

$$\theta(t) = \mathcal{L}^{-1}\left[\frac{K\,(s + 1/\tau_{ps})\,N^\theta_{\delta_T}(s)}{\Delta_{cl}(s, K)}\,u_{ref}\right] \tag{2.34}$$

$$\alpha(t) = \mathcal{L}^{-1}\left[\frac{K\,(s + 1/\tau_{ps})\,N^\alpha_{\delta_T}(s)}{\Delta_{cl}(s, K)}\,u_{ref}\right]$$

The time responses of θ(t) and α(t) are shown in the following figures. In the above analysis, the gain used as the variable parameter in determining the root locus is K = K_pc K_e. Typically, for a throttle lever system, the gain K_pc cannot be altered; the only factor that can be varied is therefore the gain associated with the engine and propeller, K_e. It is evident from the figures that the faster the time response of the engine and propeller, the higher the phugoid damping ratio and the faster the time response of the speed hold system. A number of jet transport aircraft, among others the Fokker 100, Airbus A320/330/340 and Lockheed L-1011 TriStar, provide an in-flight air brake to govern the speed during the approach and landing phases. The following figure describes the speed brake system as the flight speed regulator of the Fokker F-100 and Airbus A319/321. Through the use of the air speed brake, the aircraft flight speed can be controlled by modulating the brake between its open and closed positions.

2.3. LONGITUDINAL AUTOPILOT


Figure 2.16: Time response of u(t) to maintain uref = 1 with and without the speed hold system, K = 17.3838

2.3.3

Altitude Hold System

The altitude hold system is a standard system for medium and long range transport aircraft. This system maintains the cruise altitude selected by the pilot, and clearly reduces the pilot's workload significantly. The basic principle of the altitude hold system is to feed a signal proportional to the measured aircraft altitude back to the elevator, in such a way that the elevator motion enables the aircraft to maintain its prescribed altitude. Fig.2.20 shows the functional diagram of the system. The flight altitude is measured by a pitot static system and the elevator is moved by the basic control mechanism through a servo motor. At a glance this system looks similar to the flight attitude hold system; the difference lies in the fact that the flight altitude is not one of the aircraft state variables. Since the flight altitude h(t) is not part of the motion state vector x, it first has to be modeled mathematically. The variable h(t) is


CHAPTER 2. AUTOPILOT SYSTEM

Figure 2.17: Time response of θ(t) and α(t) to maintain u_ref = 1, with the speed hold gain K* = 17.3838

categorized as an output variable y(t) of the aircraft. From the flight performance analysis, the model of the rate of climb can be given as follows (see Fig.2.21):

    dH/dt = V_ss sin γ                                                  (2.35)

where H is the aircraft altitude with respect to sea level, V_ss the aircraft steady state velocity, γ the flight path angle, and d(.)/dt the rate of change with respect to time. If this steady state condition is perturbed by a small disturbance, then the following relations apply:

    H = H_o + h,    γ = γ_o + γ̂                                        (2.36)



Figure 2.18: Tail air brake (Fokker F-100/70)

The equation of the rate of change of flight altitude can then be obtained as:

    dh/dt = V_ss γ̂                                                     (2.37)

In the non-dimensional form, it can be rewritten as:

    dĥ/dt = (1/V_ss) dh/dt                                              (2.38)

From the kinematic diagram given in Fig.2.21, it can be deduced that:

    γ̂ = θ − α                                                          (2.39)

therefore,

    ĥ(t) = ∫₀ᵗ (θ − α) dτ                                               (2.40)
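The kinematic altitude model of Eqs. 2.35-2.40 can be exercised numerically with a short Euler integration. All numbers below are illustrative assumptions, not N250-100 data.

```python
import math

Vss, dt = 120.0, 0.01                        # steady state speed (m/s), step (s)
H = 1000.0                                   # initial altitude, m
theta, alpha = math.radians(3.0), math.radians(1.0)  # pitch and AoA, held constant
for _ in range(1000):                        # 10 s with gamma = theta - alpha
    H += dt * Vss * math.sin(theta - alpha)  # Eq. 2.35 with gamma from Eq. 2.39
print(round(H, 1))   # → 1041.9
```

With a constant 2-degree flight path angle the aircraft climbs roughly 42 m in 10 s, as expected from dH/dt = V_ss sin γ.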



Figure 2.19: Outer-wing air brake (Airbus A-320/319/321)

In the Laplace domain, the flight altitude can then be obtained as:

    h(s) = (1/s) [θ(s) − α(s)]                                          (2.41)

This equation for h(s) is called the output equation of the altitude hold system. Based on Eqs.2.4 and 2.17, the transfer functions of the control mechanism/servo and of the pitot static can be given respectively as:

    G_ct(s) = K_ct / (s + 1/τ_s),    G_ps(s) = (1/τ_ps) / (s + 1/τ_ps)  (2.42)

In this model, τ_s and τ_ps have been specified using the instrument data, while the gain K_ct can be varied to satisfy the control criterion of the altitude hold system. Referring to Eq.2.41, the transfer function of the flight altitude h(s) with respect to the elevator deflection δ_e(s) can be obtained as follows:


Figure 2.20: Functional diagram of altitude hold system

    h(s)/δ_e(s) = (1/s) [θ(s) − α(s)] / δ_e(s)
                = (1/s) [θ(s)/δ_e(s) − α(s)/δ_e(s)]
                = (1/s) [N^θ_δe(s) − N^α_δe(s)] / Δ_long(s)
                ≡ G_hδe(s)                                              (2.43)

Therefore,

    h(s) = G_hδe(s) δ_e(s)                                              (2.44)

where

    G_hδe(s) = N^h_δe(s) / (s Δ_long(s))                                (2.45a)
    N^h_δe(s) = N^θ_δe(s) − N^α_δe(s)                                   (2.45b)


Figure 2.21: Kinematic diagram of aircraft rate of climb

Based on Fig.2.22, the closed loop transfer function h(s)/h_ref(s) can be written as:

    h(s) = G_cl(s) h_ref                                                (2.46)

where

    G_cl(s) = N_cl(s) / Δ_cl(s)                                         (2.47a)
    N_cl(s) = G_ct(s) G_hδe(s) = K_ct N^h_δe(s) / [s (s + 1/τ_s) Δ_long(s)]              (2.47b)
    Δ_cl(s) = 1 + (K_ct/τ_ps) N^h_δe(s) / [s (s + 1/τ_s)(s + 1/τ_ps) Δ_long(s)]          (2.47c)
To better understand the working principle of the altitude hold system, a case study is taken for the Indonesian Aerospace N250-100 PA2 Krincing Wesi in the cruise flight configuration at a speed of V = 250 KIAS and an altitude of h = 15000 ft. From Eqs. 2.9b-2.10a, the data of the altitude hold system of the N250-100 PA2 can be obtained as follows. For τ_s = 0.1 and τ_ps = 0.2, the numerator polynomial for h due to δ_e can be given as


Figure 2.22: Mathematical diagram of the altitude hold system

    N^h_δe(s) = S_hδe (s/10.8454 − 1)(s/10.8389 + 1)(s/0.0167 + 1)      (2.48a)

and the denominator polynomial is:

    Δ_h(s) = s (s + 1/τ_s)(s + 1/τ_ps) Δ_long(s)                        (2.48b)
    Δ_h(s) = s (s⁶ + 18.3115s⁵ + 107.8173s⁴ + 287.9076s³ + 409.7023s² + 8.8517s + 2.7708)    (2.48c)

where S_hδe = 0.1230                                                    (2.48d)
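Assembling a denominator such as Eq. 2.48b, or a closed loop characteristic polynomial such as Eq. 2.47c, is plain coefficient arithmetic. The sketch below illustrates it with small helper routines; the quartic Δ_long and the numerator N used here are illustrative placeholders, not the actual N250-100 polynomials.

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (highest power first)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    """Add two coefficient lists, right-aligned so constant terms line up."""
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + b for a, b in zip(p, q)]

tau_s, tau_ps, Kct = 0.1, 0.2, 8.0       # assumed servo/pitot-static lags and gain
Dlong = [1.0, 3.3, 41.0, 5.0, 2.5]       # placeholder longitudinal quartic
N = [1.5, 0.1, 180.0, 3.0]               # placeholder numerator N^h_de(s)

# Delta(s) = s (s + 1/tau_s)(s + 1/tau_ps) Delta_long(s) + (Kct/tau_ps) N(s)
open_loop = polymul([1.0, 0.0],
            polymul([1.0, 1.0 / tau_s],
            polymul([1.0, 1.0 / tau_ps], Dlong)))
Dcl = polyadd(open_loop, [Kct / tau_ps * v for v in N])
print(len(Dcl) - 1)   # → 7 (a seventh-degree polynomial, as in Eq. 2.48)
```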

The following figure illustrates the root locus of this altitude hold system. It can be observed that for a negative gain, K_ct < 0, the system quickly becomes unstable through its phugoid mode, and the damping ratio of the pitch oscillation mode is not substantially increased either. For a positive gain, K_ct > 0, the condition is worse, since the integrator pole associated with ḣ moves directly into the right half plane of the root locus. It can thus be concluded that the altitude hold system using the h → δ_e feedback alone fails to give a stable solution for maintaining the flight altitude.
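A stability conclusion of this kind can be checked numerically by finding all roots of the characteristic polynomial and testing whether any lies in the right half plane. The sketch below uses the Durand-Kerner simultaneous iteration in plain Python; the cubic is an illustrative stand-in, not the Eq. 2.48 polynomial.

```python
def poly_roots(c, iters=500):
    """All complex roots of a monic polynomial (coefficients highest power
    first), found with the Durand-Kerner simultaneous iteration."""
    n = len(c) - 1
    def p(x):
        v = 0j
        for a in c:
            v = v * x + a                 # Horner evaluation
        return v
    z = [(0.4 + 0.9j) ** k for k in range(n)]   # distinct starting points
    for _ in range(iters):
        new = []
        for i, zi in enumerate(z):
            denom = 1 + 0j
            for j, zj in enumerate(z):
                if i != j:
                    denom *= zi - zj
            new.append(zi - p(zi) / denom)
        z = new
    return z

# (s + 1)(s + 2)(s - 0.5) = s^3 + 2.5 s^2 + 0.5 s - 1: one RHP root
r = poly_roots([1.0, 2.5, 0.5, -1.0])
unstable = any(root.real > 1e-6 for root in r)
print(unstable)   # → True
```

Applied to the seventh-degree Δ_h-based closed loop polynomial, the same test reproduces the instability observed in the root locus.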


Therefore a different strategy needs to be devised so that the instability of the phugoid mode can be delayed and the damping ratio of the pitch oscillation can be increased.

Figure 2.23: Root locus for the altitude hold system of the N250-100 with h → δ_e feedback, gain K_ct < 0

To achieve the above objective, a number of methods based on adding an inner feedback loop are available, namely:

1. pitch attitude feedback, θ
2. forward acceleration feedback, a_x
3. compensator addition, G_c(s)

Altitude hold system using an attitude hold as its inner loop

Since the first method has been discussed in Sec.2.3.1, the design of the pitch attitude hold system is readily implementable as the inner loop of the altitude hold system of the aircraft. See the corresponding mathematical diagram in Fig.2.25.


Figure 2.24: Root locus for the altitude hold system of the N250-100 with h → δ_e feedback, gain K_ct < 0, zoomed to show the phugoid mode

Note that S_hθ is the conversion gain from signal ĥ to signal θ_ref, which is a constant. From the elaboration given in Sec.2.3.1, a good pitch attitude hold control loop has been obtained with K*_ct = 8.9695. The corresponding roots of the closed loop polynomial have been given in Eq.2.15. Thus the inner closed loop characteristic polynomial can be expressed as:

    Δ_cli(s, K*_ct) = s⁵ + 13.3115s⁴ + 41.2598s³ + 223.1724s² + 186.8562s + 4.6024    (2.49)

By using this inner loop, the mathematical diagram of the altitude hold system can be simplified as illustrated in Fig.2.26. In the diagram, the numerator polynomial N̄^h_δe can be calculated as follows:

    N̄^h_δe = K*_ct (N^θ_δe − N^α_δe)                                   (2.50)

Thus, the result is:

    N̄^h_δe(s) = −1.5535s³ − 0.0159s² + 182.6204s + 3.0580              (2.51)


Figure 2.25: Altitude hold system with an attitude hold system as the inner loop

From the above diagram, the equation for h(s) can be expressed as:

    h(s) = G_clo(s) h_ref                                               (2.52a)

where

    G_clo(s) = K_hθ τ_ps (s + 1/τ_ps) N̄^h_δe(s) / [τ_ps s (s + 1/τ_ps) Δ_cli(s) + K_hθ N̄^h_δe(s)]    (2.52b)

In this case, the root locus equation for G_clo(s) is Δ_clo(s) = 0, or

    1 + K_hθ N̄^h_δe(s) / [τ_ps s (s + 1/τ_ps) Δ_cli(s)] = 0            (2.53)

Fig.2.27 shows the root locus of the altitude hold system using the attitude hold inner loop. From the root locus diagram, it is evident that the damping ratio of the pitch oscillation mode increases for low values of the gain K_hθ, whereas the phugoid mode moves from a subsidence to a long period oscillation before tending toward an unstable oscillatory condition.



Figure 2.26: Altitude hold system with inner loop having gain Kct

If the gain is chosen such that:

    K_hθ = 1.7372,    K_hθ τ_s = 0.17372                                (2.54)

the following characteristic roots are obtained:

    p1 = −11.2729
    p2 = −5.0514
    p3 = −0.0163
    p4,5 = −0.5727 ± j4.1047
    p6,7 = −0.4102 ± j0.4051                                            (2.55)

In this case, the outer closed loop gives the damping ratios of the pitch oscillation and phugoid modes as follows: pitch oscillation ζ_po = 0.098; phugoid ζ_ph = 0.356. If this damping can be accepted, then h(t) can be calculated by using Eq.2.52a. Fig.2.29 illustrates the time response of h(t) for h_ref = 1, with and without the altitude hold system.

Case 3: for the case without the altitude hold system, the step input h_ref = 1 cannot be maintained. As a result, the aircraft will keep losing altitude unless the pilot intervenes with corrective actions.



Figure 2.27: Root locus diagram of the outer loop of the altitude hold system for the N250-100 PA-2 aircraft

Case 4: for the case with the altitude hold system, where a pitch attitude hold system is employed as an inner loop, the value h_ref = 1 is reached and held in no more than 10 time units. This system works perfectly and yields no offset at all. The altitude hold system employing an attitude hold system as the inner loop thus demonstrates reliable performance; even better performance can be achieved if a pitch stability augmentation system is also employed.

Altitude hold system using a forward acceleration feedback as its inner loop

A second method that can be used for the inner loop of the altitude hold system is the feedback of the forward acceleration a_x = du/dt. For an illustration of the inner control loop with a_x feedback, see the diagram given in Fig.2.30. From that diagram, the following expression for u(s) can be derived:



Figure 2.28: Root locus diagram of the outer loop of the altitude hold system for the N250-100 PA-2 aircraft, zoomed to show the phugoid mode

    u(s) = [ N^u_δe(s) G_ct(s) / ( Δ_long(s) + s G_ct(s) S_acc N^u_δe(s) ) ] u^a_ref(s)    (2.56)

Using the model of the control mechanism and elevator servo as given in Eq.2.4 with τ_s = 0.1 sec, the characteristic polynomial of the inner control loop using the forward acceleration feedback can be expressed as:

    1 + K_i s N^u_δe(s) / [(s + 1/τ_s) Δ_long(s)] = 0                   (2.57)

where,

K_i = S_acc / τ_s. Fig.2.32 shows the root locus of the inner control loop. It is clear that the phugoid mode becomes more stable as the gain K_i goes up, due to the increasing damping



Figure 2.29: Time response of h(t) for an input of h_ref = 1 from a system with and without altitude hold for the N250-100

ratio. However, the pitch oscillation mode becomes oscillatorily unstable when the gain K_i increases. As an example, the following design point is taken:
    K*_i = 0.0409                                                       (2.58)

The poles associated with this design point are:

    p1 = −10.0185
    p2,3 = −1.5856 ± j2.3045
    p4,5 = −0.0609 ± j0.0580

The characteristic polynomial for the inner loop can then be obtained as:

    Δ_i(s, K*_i) = s⁵ + 13.3115s⁴ + 41.2090s³ + 83.3069s² + 9.8237s + 0.5542    (2.59)

With this choice of the inner loop gain K*_i, the mathematical diagram for the altitude hold system described in Fig.2.30 can be simplified as shown in Fig.2.31.



Figure 2.30: Altitude hold system with forward acceleration a_x as the inner loop feedback

The outer closed loop transfer function relating h(s) and h_ref(s) can then be derived as follows:

    h(s)/h_ref(s) = K_o τ_ps (s + 1/τ_ps) N^h_δe(s) / [s (s + 1/τ_ps) Δ_i(s, K*_i) + K_o N^h_δe(s)]    (2.60)

where N^h_δe(s) is given by Eq.2.45a and the model for the pitot static G_ps(s) is given by Eq.2.42. The outer loop gain is given by:

    K_o = S_hax / (τ_s τ_ps)                                            (2.61)

As a result, the outer loop characteristic polynomial of the altitude hold system can be expressed as:

    1 + K_o N^h_δe(s) / [s (s + 1/τ_ps) Δ_i(s, K*_i)] = 0               (2.62)

For the value of τ_ps = 0.2, the root locus of the outer control loop is given by Fig.2.32. The root locus shows that the pitch oscillation mode remains oscillatorily stable but with rising natural frequency and slightly increasing damping ratio, whereas the stability of the phugoid mode decreases as the gain K_o increases. A working point of the outer root locus is selected as



Figure 2.31: Altitude hold system with inner loop using forward acceleration feedback ax

    K*_o = 0.1752                                                       (2.63)

The corresponding poles are given by:

    p1 = −10.0185
    p2 = −0.0102
    p3 = −5.0013
    p4,5 = −1.5883 ± j2.3056
    p6,7 = −0.0524 ± j0.1105

The outer loop characteristic polynomial can then be found as:

    Δ_o(s, K*_o) = s⁷ + 18.3115s⁶ + 107.77s⁵ + 289.35s⁴ + 426.31s³ + 49.62s² + 6.34s + 0.06    (2.64)

Hence, the time response h(t) due to the step input h_ref = 1 for this altitude hold system can be expressed using Eq.2.60, rewritten for K_o = K*_o as:

    h(t) = L^{-1}[ K*_o τ_ps (s + 1/τ_ps) N^h_δe(s) / Δ_o(s, K*_o) · h_ref ]    (2.65)



Figure 2.32: Root locus diagram of the inner control loop u → δ_e for the N250-100 altitude hold system

The following figure shows the response of h(t) for the altitude hold system using the forward acceleration a_x feedback. Compared to the same system using the pitch attitude hold as an inner loop, this system exhibits a slower response in reaching the steady state value h_ref = 1.

2.4

Lateral-Directional Autopilot

The following subsections focus on the lateral-directional autopilot design for a transport aircraft. The discussion covers the design of the bank angle hold, heading hold and VOR-hold systems.

2.4.1

Bank Angle Hold (Wing Leveler System)

The bank angle hold system is designed to maintain the magnitude of the aircraft bank angle. When the reference bank angle is zero, the system is also called a wing leveler. The diagram given in Fig.2.36 shows the functional diagram



Figure 2.33: Root locus diagram of the inner control loop u → δ_e for the N250-100 altitude hold system, zoomed around the phugoid mode

of the bank angle hold. The step-by-step working procedure of this automatic pilot can be elaborated as follows. First, the aircraft (1) motion is modeled by the state variables x = {β, p, r, φ} and the control variables {δ_a, δ_r}. The rolling motion of the aircraft, φ(t), is sensed by the roll gyroscope (2) and the measured signal, in the form of the roll angle signal φ̂_m, is input to the autopilot computer (3). Within the AP computer, the measured signal φ̂_m is compared to the reference bank angle determined by the pilot. The pilot acquires the reference signal φ̂_ref by performing a maneuver such that the aircraft plane of symmetry coincides with the local vertical plane; in this attitude, the roll angle φ(t) is the same as the bank angle. The result of the comparison between φ̂_m(t) and φ̂_ref is amplified and converted into the control command signal ε_φ. The signal ε_φ is fed to the AP servo system (4) to move the steering wheel, and the displacement of the steering wheel is transmitted to the aileron control surface. The aileron is deflected by an angle δ_a(t), which in turn changes the aircraft attitude. This process is repeated in the bank angle hold closed loop until the measured roll angle φ̂_m equals the reference roll angle φ̂_ref desired by the pilot; in this situation, the value of the command signal is ε_φ = 0. Since the process from block (2) to block (4)




Figure 2.34: Root locus diagram of the outer control loop u → δ_e for the N250-100 altitude hold system

is performed in the electrical domain, it can be considered very fast, while the process from block (4) to block (2) is executed in the hydro-aero-mechanical domain, so its time response depends strongly on the mass and inertia of the associated system. Mathematically, the functional diagram of such a system can be described as in Fig.2.37. In the above model, the transfer functions of the bank angle hold closed loop system consist of:

1. The aircraft transfer function matrix G_A/C(s), for δ_r = 0:

    x(s) = [N_ld(s)]|_δr=0 δ_a(s) / Δ_ld(s)                             (2.66)

2. The roll gyro transfer function. Due to the availability of commercial laser and fiber-optic gyroscopes, the time response of the gyro in sensing the roll angle φ(t) can be considered very fast; the transfer function can thus be modeled as a constant:

    G_rg(s) = S_rg = constant                                           (2.67)



Figure 2.35: Time response of h(t) for the N250-100 altitude hold system with the inner loop of a_x → δ_e feedback

3. The comparator in the autopilot computer:

    ε_φ(s) = φ̂_ref − φ̂_m(s)                                           (2.68)

4. The signal amplification system. The transfer function of the amplifier can be modeled as a constant gain, since it operates in the electrical domain:

    G_amp = S_amp = constant                                            (2.69)

5. The lateral autopilot servo and control mechanism. This system can be considered to have a relatively fast time response compared to the dynamics of the aircraft. The assumption is especially justified if the control system operates in the electro-hydro-mechanical domain,


Figure 2.36: Functional diagram of the bank angle hold system

namely fly-by-wire or fly-by-light. The block of the AP servo and control mechanism can then be modeled as a system with time constant τ_s, representing a first order lag:

    G_act(s) = (1/τ_s) / (s + 1/τ_s)                                    (2.70)

The characteristic polynomial of the bank angle hold closed loop system is given by:

    Δ_cl(s) = Δ_ld(s) + k_φδa N^φ_δa(s) / (s + 1/τ_s)                   (2.71a)

where:

    k_φδa ≡ S_amp S_rg / τ_s                                            (2.71b)

Therefore the closed loop characteristic equation of the bank angle hold system can be given as:

    1 + k_φδa G_ol(s) = 0                                               (2.72a)



Figure 2.37: Mathematical diagram of bank hold system


    G_ol(s) ≡ N^φ_δa(s) / [(s + 1/τ_s) Δ_ld(s)]                         (2.72b)

For a conventional aircraft, G_ol(s) typically has a pair of complex zeros and five poles: a pair of complex poles and three real poles, one of which is associated with the control mechanism and servo system. To gain a more elaborate understanding of the working principle of the bank angle hold system, a case study is taken for the Indonesian Aerospace N250-100 PA2 Krincing Wesi in the cruise flight configuration at a low speed of V = 180 KEAS and an altitude of h = 15000 ft. The lateral directional transfer functions for aileron input are given as follows:
    N^β_δa(s) = S_βδa (s/0.1232 + 1)(s/(7.7628 + j14.9198) + 1)(s/(7.7628 − j14.9198) + 1)

    N^φ_δa(s) = S_φδa (s/(0.2278 + j1.2565) + 1)(s/(0.2278 − j1.2565) + 1)                 (2.73)

    N^r_δa(s) = S_rδa (s/0.6081 + 1)(s/(0.8112 + j1.2543) − 1)(s/(0.8112 − j1.2543) − 1)

    N^p_δa(s) = s N^φ_δa(s)

with static sensitivity coefficients given by:

    S_βδa = 3.01743,    S_φδa = 176.40281,    S_rδa = 14.54464          (2.74)

and the characteristic polynomial:

    Δ_ld(s) = s⁴/0.0381 + s³/0.01706 + s²/0.01544 + s/0.01004 + 1       (2.75)

with the following characteristic roots:

    p1 = −1.9577, associated with the roll subsidence                   (2.76)
    p2 = −0.0101, associated with the spiral
    p3,4 = −0.1325 ± j1.3817, associated with the dutch roll, where ω_n,DR = 1.39 rad/s and ζ_DR = 0.0955
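The dutch roll natural frequency and damping ratio quoted in Eq. 2.76 follow directly from the complex pole pair through ω_n = |p| and ζ = −Re(p)/|p|, as the small helper below confirms.

```python
import math

def mode_properties(pole_real, pole_imag):
    """Natural frequency and damping ratio of the mode of a complex pole pair."""
    wn = math.hypot(pole_real, pole_imag)   # omega_n = |p|
    zeta = -pole_real / wn                  # stable pole (Re p < 0) gives zeta > 0
    return wn, zeta

# Dutch roll pole pair of Eq. 2.76: p = -0.1325 +/- j 1.3817
wn, zeta = mode_properties(-0.1325, 1.3817)
print(round(wn, 2), round(zeta, 4))   # → 1.39 0.0955
```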

Due to the use of a ring laser gyroscope, the value of the gain S_rg can be taken as unity. The root locus associated with the characteristic equation Eq.2.72a is given in the following figure for a number of servo time constants, τ_s = 0.1, 0.2 and 0.5. It is evident that for a sufficiently large value of the gain k_φδa, the dutch roll mode moves to the right-hand side of the imaginary axis, leading to oscillatory instability; the larger the value of τ_s, the sooner the dutch roll mode becomes unstable. In contrast, the roll subsidence and spiral modes merge into a stable oscillatory mode. In each root locus, the following design points can be taken:

    i = 1: k*_φδa = 8.9344, τ_s = 0.1, roots: −10.4101; −0.4014 ± j1.7353; −0.51 ± j1.2529
    i = 2: k*_φδa = 3.8243, τ_s = 0.2, roots: −5.707; −0.2215 ± j1.6754; −0.5414 ± j1.1382
    i = 3: k*_φδa = 0.7711, τ_s = 0.5, roots: −2.9695; −0.0969 ± j1.4637; −0.5348 ± j0.7328

For each design point in the root locus, the associated closed loop characteristic polynomial is:

    τ_s1 = 0.1:  Δ_cl1 = s⁵ + 12.2329s⁴ + 24.797s³ + 65.3037s² + 54.7837s + 60.4316    (2.77)
    τ_s2 = 0.2:  Δ_cl2 = s⁵ + 7.2329s⁴ + 13.6325s³ + 31.8998s² + 26.2032s + 25.8943



Figure 2.38: Root locus diagram of the bank hold system of the N250-100 aircraft for a number of τ_s values

    τ_s3 = 0.5:  Δ_cl3 = s⁵ + 4.2329s⁴ + 6.9338s³ + 11.9107s² + 9.0791s + 5.2588

The time response of φ(t) for each case τ_si can be calculated as follows:

    φ_i(t) = L^{-1}[ k*_φδa,i N^φ_δa(s) / Δ_cli(s, k*_φδa,i) · φ_ref ],   i = 1, 2, 3    (2.78)

The following figure illustrates the time response of φ(t) for the three different cases of τ_s. It can be observed that for τ_s = 0.1 (a = 10), the roll response is quickly damped even though the overshoot peak is fairly large. The larger the value of τ_s, the lower the damping of the time response φ(t), even though the overshoot peak decreases. Fig.2.40 shows the roll angle response due to a unit step input. The trend for the damping and the overshoot peak is similar to that of the impulse input. Note, however, that with the gain selected for the case τ_s = 0.5, the time response shows no overshoot or oscillation.



Figure 2.39: Time response of φ(t) for a = 1/τ_s = 10, 5, 2 due to an impulse function input

2.4.2

Heading Hold System

The majority of heading hold systems are designed as an extension of the bank angle hold system. This is due to the fact that there exists a relation between the heading rate ψ̇ and the roll angle φ, as shown in Fig.2.41. The horizontal and vertical force equilibrium during a constant-altitude turn maneuver, as illustrated by the diagram, can be expressed as follows:

    L cos φ = W                                                         (2.79a)
    L sin φ = (W/g) ψ̇² R                                               (2.79b)



Figure 2.40: Time response of φ(t) for a = 1/τ_s = 10, 5, 2 due to a step function input

where: L is the lift, W the weight, g the gravitational acceleration, φ the roll angle, R the turning radius, and ψ̇ the heading rate (turn rate). Substituting Eq.2.79a into Eq.2.79b, the following relation is obtained:

    ψ̇² R = g tan φ                                                     (2.80)

Meanwhile, the heading rate follows from the kinematic relation of the velocities:

    ψ̇ = V / R                                                          (2.81)

Substituting Eq.2.81 into Eq.2.80, the heading rate can be expressed as:


R W R
Figure 2.41: Forces equilibrium during the turn maneuver

    ψ̇ = (g/V) tan φ ≈ (g/V) φ                                          (2.82)

or, in the Laplace domain:

    ψ(s) = [g/(V s)] φ(s)                                               (2.83)

From the above relations, it is clear that the heading angle ψ(s) can be obtained from the integration of the roll angle φ(s). Therefore a bank hold system represents an inner loop of a heading hold system. The functional and mathematical diagrams of the heading hold system are illustrated in Fig.2.42 and Fig.2.43, respectively. It is assumed that the heading gyro has high performance, so that its time response can be considered very fast. As a result, the transfer function of the heading gyro can be represented by a constant gain:

    G_hg(s) = S_hg = constant                                           (2.84)
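Eq. 2.82 ties the bank angle, flight speed and turn rate together; a two-line numerical check (the speed and bank angle below are assumed illustrative values):

```python
import math

V, g, phi = 100.0, 9.81, math.radians(30.0)  # speed (m/s), gravity, bank (assumed)
psi_dot = g * math.tan(phi) / V              # heading rate, Eq. 2.82
R = V / psi_dot                              # turn radius, from Eq. 2.81
print(round(psi_dot, 4), round(R, 1))        # → 0.0566 1765.6
```

At 100 m/s and a 30-degree bank, the aircraft turns at about 3.2 deg/s on a radius of roughly 1.77 km.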

In the following analysis, it will be assumed that a bank hold system has been designed as the inner loop of the heading hold system. Its characteristic is



Figure 2.42: Functional diagram of heading hold system


specified by appropriately selecting the gain k*_φδa,i. Hence the transfer function of the inner loop can be expressed as:

    x(s)/φ_c(s) = G_cli(s, k*_φδa) = k*_φδa [N_ld(s)] / Δ_cli(s, k*_φδa)    (2.85)

where

    Δ_cli(s, k*_φδa) = Δ_ld(s) + k*_φδa N^φ_δa(s) / (s + 1/τ_s)         (2.86a)

and

    x = {β, p, r, φ}                                                    (2.86b)

Using the above inner loop transfer function, the mathematical diagram of the heading hold system described in Fig.2.43 can be simplified as shown in Fig.2.44. The outer closed loop transfer function can then be expressed as:

    ψ(s) = G_clo(s, k_ψ) ψ_ref                                          (2.87)

where



Figure 2.43: Mathematical diagram of heading hold system

    G_clo(s, k_ψ) = k_ψ N^φ_δa(s) / Δ_clo(s, k_ψ)                       (2.88a)
    k_ψ = (g/V) S_amp S_hg k*_φδa                                       (2.88b)

The characteristic polynomial of the outer closed loop, Δ_clo(s, k_ψ), can be expressed as:

    Δ_clo(s, k_ψ) = s Δ_cli(s, k*_φδa) + k_ψ N^φ_δa(s)                  (2.89)

Thus the characteristic equation of the heading hold system can be given as:

    1 + k_ψ N^φ_δa(s) / [s Δ_cli(s, k*_φδa)] = 0                        (2.90)

The root locus of the heading hold system for the N250-100 is presented in Fig.2.45. In this case, the inner loop characteristic polynomial Δ_cli(s, k*_φδa,i) is taken from the example of the bank angle hold system given in Sec.2.4.1, for three different cases: τ_s = 0.1, 0.2 and 0.5, or a = 10, 5 and 2. The three root locus diagrams associated with Eq.2.90 are shown in the following figure for each value



Figure 2.44: Heading hold system with bank angle hold as an inner loop

of τ_s (or a). From the above root locus diagrams, the following working points can be taken:

    i = 1: k*_ψ = 4.5943, a = 1/τ = 10, roots: −10.3913; −0.1744 ± j1.8127; −0.3930 ± j1.0553; −0.7066
    i = 2: k*_ψ = 2.2746, a = 1/τ = 5,  roots: −5.6489; −0.8521; −0.0429 ± j1.7072; −0.3230 ± j0.9923
    i = 3: k*_ψ = 0.1606, a = 1/τ = 2,  roots: −2.9384; −0.3055; −0.0816 ± j1.4565; −0.4129 ± j0.6280

In this case, the characteristic polynomial for each value of a is:

    a1 = 10:  Δ_clo1 = s⁶ + 12.2329s⁵ + 24.7970s⁴ + 65.3037s³ + 73.7215s² + 69.0615s + 30.8796    (2.91)
    a2 = 5:   Δ_clo2 = s⁶ + 7.2329s⁵ + 13.6375s⁴ + 31.8998s³ + 35.5789s² + 30.1668s + 15.2879
    a3 = 2:   Δ_clo3 = s⁶ + 4.2329s⁵ + 6.9379s⁴ + 11.9107s³ + 9.7410s² + 5.5604s + 1.0793

Using the above data, the time responses of ψ(t) and φ(t) can be calculated from the following equations.



Figure 2.45: Root locus diagram of the heading hold system of the N250-100 aircraft for a number of a = 1/τ_s values

    ψ_i(t) = L^{-1}[ (g/V) S_hg k_i N^φ_δa(s) / Δ_clo(s, k_i) · ψ_ref ]    (2.92)
    φ_i(t) = L^{-1}[ (V/g) s ψ_i(s) ]

The time responses are presented in Fig.2.46. They clearly indicate that the smaller the value of τ_s (or the larger the value of a), the faster the time response of the actuator, which in turn yields a higher damping ratio of the dutch roll mode. For a fairly large value of τ_s (namely for a = 2), adequate damping can only be obtained with a small value of the gain k_ψ. For a small value of τ_s, the high overshoot peak is due to the associated high gain. The heading hold system is capable of maintaining the direction of flight selected by the pilot. However, this system is not able to maintain the flight path of the aircraft along a desired reference; see the sketch presented in Fig.2.47. If the aircraft is subjected to a side wind, it will



Figure 2.46: Time response of ψ(t) and φ(t) of the heading hold system of the N250-100 aircraft

be drifted from its desired flight path even though the aircraft heading angle is maintained. Considering this kind of situation, a system that is capable of simultaneously maintaining the flight direction and the flight path of the aircraft is desired. This system is referred to as a VOR hold or lateral beam hold.

2.4.3

VOR-Hold System

The VOR-hold system acts as an automatic guidance system combined with the flight direction hold control system. It keeps the flight path aligned with the reference direction transmitted by the VOR terrestrial guidance system. In principle, this system automates the work of the pilot in keeping the CDI (Course Deviation Indicator) needle centered. To understand the automation process associated with this system, see Fig.2.48. An aircraft flies under the coverage of the wave transmitted from a VOR station, with a certain bearing reference wave and a certain wave width. The transmission bearing of the VOR main wave with respect to magnetic north is represented by ψ_ref; this reference angle has to be followed by the aircraft flight angle. The width of the transmission wave is shown in Fig.2.48.



Figure 2.47: Eect of wind to the aircraft ight path The actual position of the aircraft has an oset angle with respect to the main radial line and a radial distance of R with respect of VOR station. The aircraft position has a bearing with respect to magnetic north and a distance d with respect to the VOR main bearing. From the geometry of the VOR guidance path the following relations can be readily obtained: d R For a small , the above relation can be approximated by: tan = (s) = 57.3 d (s) R (2.93)

(2.94)

(t) can be The rate of change of distance with respect to the main radial, d obtained through the following relation: (t) d (d) = Vp sin (2.95) d ref dt where Vp is the aircraft ight velocity. The above relation can be further simplied for a small angle case as follows: (t) d = Vp ref or sd (s) = Vp (s) ref (2.96)
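The kinematics of Eqs.2.95–2.96 are easy to check numerically. The sketch below is a minimal simulation with illustrative values for V_p, R and a constant heading error (none of these numbers come from the text): it Euler-integrates ḋ = V_p sin(ψ − ψ_ref) and compares the resulting offset λ = 57.3 d/R with the small-angle growth implied by Eq.2.96.

```python
import math

def simulate_offset(vp, r_radial, dpsi_rad, t_end, dt=0.01):
    """Euler-integrate d_dot = Vp*sin(psi - psi_ref) for a constant heading
    error, then return the offset angle lambda = 57.3*d/R in degrees."""
    d = 0.0
    for _ in range(int(round(t_end / dt))):
        d += vp * math.sin(dpsi_rad) * dt
    return 57.3 * d / r_radial

# Illustrative numbers (assumptions): Vp = 80 m/s, R = 20 km, 2 deg heading error.
lam = simulate_offset(vp=80.0, r_radial=20000.0,
                      dpsi_rad=math.radians(2.0), t_end=60.0)
# Small-angle prediction from Eq.2.96: lambda ~ 57.3*Vp*(psi - psi_ref)*t/R
lam_lin = 57.3 * 80.0 * math.radians(2.0) * 60.0 / 20000.0
```

For a 2 deg heading error the nonlinear and linearized offsets agree to well under 1%, which is why the small-angle form of Eq.2.96 is adequate for the guidance loop.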

[Figure 2.48: VOR guidance path geometry, showing the VOR station, the main wave with bearing ψ_ref, the transmission width, and the aircraft velocity V_p]

Substituting Eq.2.96 into Eq.2.94, the offset angle can be expressed as:

λ(s) = 57.3 (V_p/R) [ψ(s) − ψ_ref]/s   (2.97)

In the process of guidance, in order that the aircraft position be fixed to the main bearing line, the offset angle λ has to be driven to zero. Therefore the task of the automatic guidance system is to make λ(s) zero. The following figures illustrate the functional and mathematical diagrams of the VOR-hold system. Refer to Fig.2.50: the system inside the dashed box represents the heading hold control system which was discussed in Sec.2.4.2; outside the box is the flight guidance and navigation system. For this guidance and navigation system, the transfer functions are given as follows:

1. Comparator of aircraft heading angle and VOR bearing:

Δψ(s) = ψ_ref − ψ(s)   (2.98)

The reference angle ψ_ref is the bearing of the wave transmitted from the VOR ground station, which is nominally received by the aircraft.

[Figure 2.49: The functional diagram of the VOR-hold guidance-control system — the control loop (AP computer, AP servo and control mechanism, aircraft with outputs β(t), φ(t), p(t), r(t), ψ(t), roll gyroscope and heading gyroscope) and the guidance loop (guidance geometry, navigation and control computer, main radial VOR, command signal ψ_comm)]

2. Guidance geometry, G_ng(s)

From Fig.2.48, the aircraft flight path geometry with respect to the VOR bearing has been derived. Eq.2.97 gives:

λ(s) = G_ng(s) Δψ(s)   (2.99a)

G_ng(s) = 57.3 (V_p/R) (1/s)   (2.99b)

3. Offset comparator:

Δλ(s) = λ_ref − λ(s)   (2.100)

It is clear that λ_ref = 0, since the objective of the system is to make the offset angle λ = 0. Therefore:

Δλ(s) = −λ(s)   (2.101)

4. Coupler, G_cop(s)

The coupler represents the conversion from the output of the flight guidance system to the command signal ψ_comm(s). This conversion system can be expressed as a combination of an integrator and a first order proportional term:

ψ_com(s) = G_cop(s) Δλ(s)   (2.102)

G_cop(s) = K_c (s + 0.1)/s   (2.103)

[Figure 2.50: The mathematical diagram of the VOR-hold guidance-control system — the coupler G_cop(s), amplifier, servo G_act(s), aircraft [N_ld(s)]/Δ_ld(s) with outputs β(s), φ(s), p(s), r(s), ψ(s), roll gyro, heading gyro G_hg(s), and the guidance geometry 57.3 (V_p/R)(1/s) driven by ψ_ref from the VOR]

5. Aircraft and heading hold system

As explained in Sec.2.4.2, the transfer function of the aircraft and heading hold system is given by Eq.2.92 as follows:

ψ(s) = G_clo(s, k*_i) ψ_com(s)   (2.104a)

G_clo(s, k*_i) = k*_i S_hg N^ψ_δa(s) / Δ_clo(s, k*_i)   (2.104b)

Here the characteristic polynomial Δ_clo(s, k*_i) is given by Eq.2.89. From the above transfer functions, the mathematical diagram of the VOR hold can be simplified as shown in Fig.2.51. Because the controlled variable is the offset angle λ(s), the transfer function between λ(s) and ψ_ref needs to be expressed as follows:


[Figure 2.51: Navigation and guidance system, VOR hold — λ_ref = 0 enters the offset comparator; G_cop produces ψ_comm; G_clo(s, k*_i) is the aircraft and heading hold block; G_ng is the guidance geometry closing the loop from ψ(s) back to λ(s)]

λ(s) = [N̄_λ(s) / Δ_ng(s)] ψ_ref   (2.105)

where:

N̄_λ(s) = 57.3 (V_p/R) s Δ_cl(s, k*_i)   (2.106a)

Δ_ng(s) = s² Δ_cl(s, k*_i) + k̄ (s + 0.1) N^ψ_δa(s)   (2.106b)

k̄ = 57.3 (V_p/R) k*_i S_hg K_c   (2.106c)
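The coupler of Eq.2.103 splits by partial fractions into G_cop(s) = K_c + 0.1 K_c/s, i.e. a proportional plus an integral term. The sketch below (K_c = 1 and the step size are illustrative assumptions, not values from the text) implements that discretely and checks the unit-step response against the analytic K_c(1 + 0.1 t).

```python
def coupler_step_response(kc, t_end, dt=0.001):
    """Discrete realization of G_cop(s) = Kc*(s + 0.1)/s = Kc + 0.1*Kc/s:
    a proportional path plus an Euler-integrated path, driven by a unit step."""
    integ = 0.0
    out = 0.0
    for _ in range(int(round(t_end / dt))):
        u = 1.0                 # unit step input
        integ += u * dt         # integrator state, approximates 1/s
        out = kc * u + 0.1 * kc * integ
    return out

y = coupler_step_response(kc=1.0, t_end=10.0)
y_expected = 1.0 * (1.0 + 0.1 * 10.0)   # Kc*(1 + 0.1*t) at t = 10
```

The unbounded ramp in the step response is exactly the integral action that forces the steady-state offset λ to zero.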

The characteristic equation for λ(s)/ψ_ref(s) is given by:

1 + k̄ (s + 0.1) N^ψ_δa(s) / [s² Δ_clo(s, k*_i)] = 0   (2.107)

The root locus of the above characteristic equation is illustrated in Fig.2.52 for gain values k̄ < 0. From the root locus, if the working point associated with k̄ = −1.1152 is taken, the following characteristic polynomial is obtained:

Δ(s, k̄) = s⁸ + 12.2329s⁷ + 24.797s⁶ + 65.3037s⁵ + 73.7215s⁴ + 73.6583s³ + 33.434s² + 7.7049s + 0.7495
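Stability of this eighth-order closed loop can be confirmed without extracting the roots. The helper below is a generic sketch (not from the text) that builds the Routh array from the printed coefficients; all first-column entries positive means every root lies strictly in the left half plane.

```python
def routh_hurwitz_stable(coeffs):
    """Return True when all roots of the polynomial (coefficients in
    descending powers of s) lie strictly in the left half plane,
    using the Routh array test."""
    n = len(coeffs)
    row0 = list(coeffs[0::2])
    row1 = list(coeffs[1::2])
    while len(row1) < len(row0):
        row1.append(0.0)
    rows = [row0, row1]
    for _ in range(n - 2):
        prev2, prev = rows[-2], rows[-1]
        if abs(prev[0]) < 1e-12:        # zero pivot: not strictly stable
            return False
        new = [(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
               for j in range(len(prev) - 1)]
        new.append(0.0)
        rows.append(new)
    return all(r[0] > 0 for r in rows)

# Coefficients of the VOR-hold closed-loop polynomial at the design point.
vor_poly = [1.0, 12.2329, 24.797, 65.3037, 73.7215,
            73.6583, 33.434, 7.7049, 0.7495]
stable = routh_hurwitz_stable(vor_poly)
```

The result agrees with the root list quoted below Fig.2.52, all of which sit in the left half plane.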


[Figure 2.52: Root locus diagram for the VOR offset λ(s) with respect to the bearing ψ_ref of the N250-100 aircraft]

The roots at the selected working point are:

−0.2436; −10.3918; −0.169 ± j1.79; −0.43 ± j1.01; −0.1998 ± j0.1897

The time response of the offset angle λ(t) can be obtained using Eq.2.105:

λ(t) = L⁻¹[ 57.3 (V_p/R) · s Δ_clo(s, k*_i)/Δ_ng(s, k̄) · (ψ_ref/s) ]   (2.108)


[Figure 2.53: Time response of λ(t) of the VOR-hold system of the N250-100 aircraft with gains k̄ = −1.1152 (guidance loop), k*_i = 4.5943 (outer control loop) and k_i = 8.9344 (inner control loop)]


Chapter 3

Stability Augmentation System

3.1 Introduction

A stability augmentation system for an aircraft is designed to improve the dynamic characteristics of the aircraft in general. The improvement is often also necessary for certain critical flight conditions such as take-off, landing or flying in bad weather. A number of stability augmentation systems (SAS) typically used for conventional transport aircraft are:

1. Longitudinal mode
(a) Pitch damper
(b) Phugoid damper

2. Lateral-directional mode
(a) Yaw damper
(b) Roll damper
(c) Spiral damper

3.2 Working Principle of Stability Augmentation System

A stability augmentation system is a built-in system operated only by an on/off button. Fig.?? shows the yaw damper type stability augmentation system of a twin turboprop transport aircraft, the Indonesian Aerospace N250-100 Krincing Wesi. To activate the SAS, the pilot simply presses the ON button. Once the SAS is engaged, the aircraft stability in a certain mode


will be improved, for instance its damping or natural frequency. While the SAS is working, the pilot can freely move the control wheel/stick/pedal without disturbing the SAS. Thus pilot input can co-exist with SAS operation.

[Figure 3.1: The functional diagram of the pitch damper system — θ_ref enters the AP + SAS computer, which drives the SAS servo and the elevator δ_e; the aircraft outputs θ(t), q(t) and u(t), measured by the rate gyroscope (pitch damper loop) and the vertical gyroscope]

For statically stable aircraft, the SAS is typically used only in certain flight regions or configurations, such as approach and landing or high-speed cruise. Outside these flight zones the SAS is typically turned off. Therefore, for statically stable conventional aircraft, the SAS is referred to as a limited authority system. For non-conventional aircraft where the static stability is relaxed or completely removed, the stability augmentation system is commonly called a Super Stability Augmentation System. Such a system necessarily works continuously, full-time, since without it the aircraft would not be able to fly. The system provides basic stability to the aircraft and thus cannot be overridden by the pilot; therefore, this system is referred to as having full authority. This kind of Super SAS is used, for instance, in fighter aircraft such as the Lockheed Martin F-16 Fighting Falcon, Sukhoi Su-27/35, Rafale and Saab Gripen, among others.

3.3 Longitudinal Stability Augmentation System

The longitudinal stability augmentation system has the goal of increasing the damping ratio of the pitch oscillation mode while maintaining the value of the damping ratio of the phugoid mode, and vice versa. Table 3.1 lists possible applications of feedback to increase the longitudinal stability, based on experience.

[Figure 3.2: The mathematical diagram of the pitch damper system — the altitude hold loop through G_AP(s) and the pitch damper loop through G_SAS(s) command the elevator δ_e; the aircraft longitudinal dynamics [N_long(s)]/Δ_long(s) output θ(s), α(s), q(s) and u(s), measured by the rate gyro S_rg and the vertical gyro S_vg]

In the previous chapter, the root locus configuration for longitudinal feedback with positive/negative gain was shown for an ideal condition. The ideal condition assumed that the dynamics of the control mechanism and sensor instruments can be represented by a constant gain. If the dynamics of the control mechanism and sensors are taken into consideration, the root locus configuration can be different.

3.3.1 Pitch Damper System

The pitch damper system is designed to increase the damping ratio of the pitch oscillation mode. The pitch rate is sensed by the rate gyro and the measured signal is sent to the SAS computer in order to improve the pitch oscillation mode. Refer to Fig.3.1, which shows the functional diagram describing the use of the pitch damper system as an inner control loop of the pitch attitude hold autopilot. The pitch damper measures the pitch rate using the rate gyro. The measured signal is then sent to the SAS computer, which in turn outputs the control command signal to the elevator servo. The elevator is moved in a way that stabilizes the aircraft in the pitch direction. In this arrangement, the elevator servo receives commands from the SAS computer only. In a number of large aircraft, the AP and SAS servos are usually separated.

Table 3.1: Feedback for improving longitudinal stability

No | Objective | Feedback | Side effect | Stability derivative improved
1 | Increasing the pitch oscillation damping (pitch damper) | q → δe | the pitch oscillation natural frequency slightly increases | m_q
2 | Increasing the phugoid damping (phugoid damper) | u → δe | the pitch oscillation damping decreases | m_u
3 | Increasing the pitch oscillation natural frequency | n_z → δe | — | —

The mathematical diagram of the pitch damper system is illustrated in Fig.3.2. If the SAS servo can be assumed to have a fast time response, it can be modeled as a constant gain:

G_SAS(s) = K_q, constant   (3.1)

The AP servo typically has a slower time response, thus it can be assumed to follow a first order lag model:

G_AP(s) = K_θ (1/τ_s)/(s + 1/τ_s)   (3.2)

The inner loop of the pitch damper system can be expressed as:

x(s) = [N_i^x(s)]/Δ_i(s, K_q) · q_ref   (3.3)

where,

Δ_i(s, K_q) = Δ_long(s) + K_q s N^θ_δe(s)   (3.4a)

[N_i^x(s)] = K_q [N^x_δe(s)]   (3.4b)

[Figure 3.3: Root locus of the inner control loop: pitch damper system of the N250-100 PA-2 at cruise condition with V = 250 KIAS and h = 150000, showing the pitch oscillation and phugoid branches]

From the above equation, the pitch damper loop characteristic equation can be written as:

1 + K_q · s N^θ_δe(s)/Δ_long(s) = 0   (3.5)

As a study case, the pitch damper system design for the N250-100 aircraft during cruise at V = 250 KIAS and altitude h = 150000 will be investigated. The modeling data were presented in Eqs.2.9b–2.12 in Sec.2.3.1. The root locus diagram for the pitch damper system of the N250-100 is presented in Fig.3.3 for negative values of the gain. To analyze the effectiveness of the pitch damper, the following gain value is selected and the associated poles are presented:

K*_q = −0.2432
p1,2 = −3.5657 ± j0.5627 ; ζ_po = 0.988, ω_n,po = 3.61
p3,4 = −0.0092 ± j0.0646 ; ζ_ph = 0.141, ω_n,ph = 0.0652   (3.6)
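The damping ratios and natural frequencies in Eq.3.6 follow directly from the pole locations: for a pole pair −σ ± jω_d, the natural frequency is ω_n = √(σ² + ω_d²) and the damping ratio is ζ = σ/ω_n. A quick arithmetic check on the printed values:

```python
import math

def zeta_omega(sigma, omega_d):
    """Damping ratio and natural frequency of the pole pair -sigma +/- j*omega_d."""
    omega_n = math.hypot(sigma, omega_d)
    return sigma / omega_n, omega_n

zeta_po, wn_po = zeta_omega(3.5657, 0.5627)   # pitch oscillation pair
zeta_ph, wn_ph = zeta_omega(0.0092, 0.0646)   # phugoid pair
```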

The inner loop characteristic polynomial in conjunction with the selected gain is:

Δ_i(s, K*_q) = s⁴ + 7.1497s³ + 13.1661s² + 0.2702s + 0.0554   (3.7)

[Figure 3.4: Root locus of the inner control loop: pitch damper system of the N250-100 PA-2 at cruise condition with V = 250 KIAS and h = 150000, enlarged around the phugoid mode]

With the above data, the time response for θ(t) and q(t) with and without the pitch damper can be calculated and compared. The response of θ(t) and q(t) with the pitch damper is given by:

θ(t) = L⁻¹[ K*_q N^θ_δe(s) / Δ_i(s, K*_q) · q_ref ]   (3.8)

q(t) = L⁻¹[ K*_q N^q_δe(s) / Δ_i(s, K*_q) · q_ref ]   (3.9)

Fig.3.5 demonstrates the effectiveness of the pitch damper system. The longitudinal damping ratios for the system with and without the pitch damper are:

ζ_ph|without = 0.0513    ζ_ph|with = 0.1410
ζ_po|without = 0.2897    ζ_po|with = 0.988

[Figure 3.5: Time response of θ(t) and q(t) with and without the pitch damper for the N250-100 aircraft]

It is evident from the time response that in the short period the effect of ζ_po is significant, both in damping the pitch motion and in reducing the peak of the response. If these values are acceptable and used for the analysis of the outer loop, the pitch attitude hold, the diagram in Fig.3.2 can be simplified as shown in Fig.3.6. However, as alluded to previously, this feedback also represents the phugoid damping augmentation; thus the diagram in Fig.3.6 can be regarded as the diagram of the phugoid damper, which is discussed next.

3.3.2 Phugoid Damper

The outer loop of the functional diagram given in Fig.3.1 shows the diagram for the phugoid damper. The mathematical diagram of the phugoid damper is presented in Fig.3.6.

[Figure 3.6: The mathematical diagram of the pitch attitude hold with the pitch damper as the inner loop — θ_ref drives the AP servo G_AP(s), followed by the aircraft-plus-pitch-damper block G_cli(s, k*_i) with outputs θ(s), α(s), q(s) and u(s); θ is fed back through the vertical gyro S_vg]

With the model of the autopilot G_AP(s) given in Eq.3.2, the outer characteristic equation of the phugoid damper is given by:

1 + K̄ · N^θ_δe(s) / [Δ_i(s, K*_q)(s + 1/τ_s)] = 0   (3.10)

where

K̄ = K_θ K*_q   (3.11)

The root locus of the phugoid damper system is presented in Fig.3.7. As an example the following design point is taken:

K̄ = 3.7979   (3.12)

The poles are obtained as:

p1 = −10.2807
p2,3 = −3.3549 ± j1.3043 ; ζ_po = 0.932
p4,5 = −0.0796 ± j0.0307 ; ζ_ph = 0.933   (3.13)

With the above design point, the phugoid damper characteristic polynomial can be written as:

[Figure 3.7: Root locus of the outer control loop: phugoid damper of the N250-100 PA-2 at cruise condition, showing the autopilot servo, pitch oscillation and phugoid branches]

Δ_o(s, K̄) = s⁵ + 17.1497s⁴ + 84.6632s³ + 146.5079s² + 21.8273s + 0.9710   (3.14)
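The leading coefficients of Eq.3.14 can be traced back to the open-loop denominator Δ_i(s, K*_q)(s + 1/τ_s). The sketch below multiplies Eq.3.7 by an assumed AP-servo pole at s = −10 (an assumption consistent with the s⁴ coefficient of Eq.3.14, since 7.1497 + 10 = 17.1497): the s⁵, s⁴ and s³ terms are reproduced, while the lower-order terms pick up the K̄ N^θ_δe(s) contribution.

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (descending powers)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

delta_i = [1.0, 7.1497, 13.1661, 0.2702, 0.0554]   # Eq. 3.7
servo = [1.0, 10.0]                                # s + 1/tau_s, assumed 1/tau_s = 10
open_loop_den = polymul(delta_i, servo)
# Closed-loop polynomial of Eq. 3.14 for comparison:
delta_o = [1.0, 17.1497, 84.6632, 146.5079, 21.8273, 0.9710]
```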

The time response θ(t) can be calculated using the following equation:

θ(t) = L⁻¹[ K̄ N^θ_δe(s) / Δ_o(s, K̄) · θ_ref ]   (3.15)

The following figure describes the time response of θ(t) of the phugoid damper system. Compare the result to Fig.2.10, the pitch attitude hold without the phugoid damper. For the pitch attitude hold system without the pitch damper (see Sec.2.3.1), there exists an oscillation at the early stage of the time response. This oscillation does not occur in the system with the pitch damper as the inner loop. However, the settling time gets longer: the settling time for the system without the pitch damper is 12 time units, whereas for the system with the pitch damper it is 65 time units.

[Figure 3.8: Root locus of the outer control loop: phugoid damper of the N250-100 PA-2 at cruise condition, enlarged around the phugoid mode]

From the flight test experience, it can be deduced that the N250-100 PA2 has very good longitudinal dynamic characteristics; thus the pitch damper system is not needed.

3.4 Lateral-Directional Stability Augmentation System

The stability augmentation system in the lateral-directional mode is designed primarily to increase the damping ratio of the dutch roll and to shorten the time responses of the roll subsidence and spiral modes, without degrading the stability of the other modes. Examples of lateral-directional stability augmentation systems are presented in the following table.

3.4.1 The Dutch-Roll Stability Augmentation: Yaw Damper

The damper system is designed to increase the damping ratio of the dutch roll mode, within which the dominant motion variables are the side slip angle β(t) and the

Table 3.2: Feedback for improving lateral-directional stability

No | Objective | Feedback | Side effect | Stability derivative improved
1 | Increasing the dutch roll damping (yaw damper) | r → δr | the spiral mode is also stabilized | n_r
2 | Lowering the roll subsidence time response (roll damper) | p → δa | the dutch roll time response increases | l_p
3 | Stabilizing the spiral mode | φ → δa | the spiral is stabilized but the dutch roll is sustained by the aileron | —
4 | Providing coordination to the rudder | β̇ → δr | the dutch roll time response decreases; for large gains the dutch roll can become unstable | —
5 | Reducing the adverse yaw (ARI) | δa → δr | the spiral is destabilized and the dutch roll frequency increases | —

[Figure 3.9: Time response of θ(t) of the phugoid damper with K̄ = 3.7979 and K*_q = −0.2432]

yawing angle ψ(t). In operation, this system is commonly known as the yaw damper. There are two methods to increase the dutch-roll damping, namely the feedback of the yaw rate r to the rudder deflection δr, and the feedback of the sideslip rate β̇ to δr.

The yaw damper through the feedback r → δr

In this type of yaw damper, the yaw rate is measured by the directional rate gyro and the measurement output signal is processed in the SAS computer and sent out as the control command signal to the rudder actuator. Physically, the rudder deflection continuously corrects or counters the dutch roll motion (side-slip/yawing), thus providing the damping. Figs.3.10 and 3.11 show the functional and mathematical diagrams of the yaw damper with yaw rate feedback, respectively. Note that an electric circuit component called the wash-out circuit is installed between the rate gyro and the SAS computer. The wash-out circuit is a first order lead-lag designed to make the measured yaw rate signal transient, which means that it only acts shortly after the start of a maneuver that produces r(t). After the steady state is reached, the signal r(t) is removed by the wash-out circuit. The objective of eliminating r̄_m(t) in the steady state is to make the yaw damper not oppose the pilot input during a stationary turning maneuver. If r̄_m(t) still existed, it would work against the pilot and make a coordinated turn difficult to achieve. Therefore, this yaw damper is designed only to enhance the dutch-roll damping in transient conditions, such as when entering a gust or turbulence, or during the landing phase with ever-decreasing speed.

[Figure 3.10: The functional diagram of the yaw damper system with r → δr feedback — the pilot input r_ref and the wash-out-filtered yaw rate enter the SAS computer, which drives the rudder servo δr; the aircraft outputs β(t), φ(t), p(t), r(t) and ψ(t); r(t) is measured by the rate gyroscope and passed through the wash-out circuit]

The components of the yaw damper system transfer function are:

1. Aircraft, G_A/C(s), for the r → δr loop:

G_A/C(s) = x(s)/δr(s) = [N^x_δr(s)]/Δ_ld(s)   (3.16)

x = {β φ p r ψ}^T   (3.17)

2. Rate gyro, G_rg(s):

G_rg(s) = r_m(s)/r(s) = S_rg, constant

[Figure 3.11: The mathematical diagram of the yaw damper system with r → δr feedback — the pilot input δr_ref, the rudder servo and control mechanism G_act(s), the aircraft [N_ld(s)]/Δ_ld(s) with outputs β(s), φ(s), p(s), r(s) and ψ(s), and the feedback path through the rate gyro G_rg(s) and the wash-out circuit G_wo(s)]

The rate gyro is assumed to have a very fast time response, especially with the availability of advanced technology such as the ring laser gyro or the fiber optic gyro. Thus it can be modeled by a constant gain.

3. Wash-out circuit, G_wo(s), to eliminate the stationary condition. The transfer function can be given as:

G_wo(s) = r̄_m(s)/r_m(s) = k_wo n_wo(s)/d_wo(s) = k_wo s/(s + 1/τ_wo)   (3.18)

By using the lead-lag term above, in the steady state condition r̄_m(s) → 0.

4. Control mechanism and yaw damper servo

Similar to previous cases, the yaw damper control mechanism is modeled

as a first order system:

G_act(s) = K_ct (1/τ_s)/(s + 1/τ_s) = K_ct/d_act(s), with d_act(s) = τ_s s + 1   (3.19)

The characteristic polynomial of the yaw damper system can be written as:

Δ_cl(s) = d_act(s) d_wo(s) Δ_ld(s) + k̄ N^r_δr(s) n_wo(s)   (3.20)

where the loop gain k̄ collects the SAS computer gain, the actuator gain K_ct, the wash-out gain k_wo and the rate gyro sensitivity S_rg. Therefore, the closed loop characteristic equation of the yaw damper system is given as:

1 + k̄ · N^r_δr(s) n_wo(s) / [d_act(s) d_wo(s) Δ_ld(s)] = 0   (3.21)
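The wash-out behaviour of Eq.3.18 — pass the transient, block the steady state — can be seen with a discrete first-order high-pass equivalent to s/(s + 1/τ_wo). The sketch below uses τ_wo = 4, as in the N250-100 case study further on; the step size is an assumption. Driven by a constant yaw rate, the filter output jumps at first and then decays toward zero.

```python
def washout_response(u_const, tau, t_end, dt=0.001):
    """First-order wash-out s/(s + 1/tau), discretized as the standard
    high-pass recursion y[k] = a*(y[k-1] + u[k] - u[k-1]), a = tau/(tau + dt).
    Returns (first output sample, final output) for a constant input."""
    a = tau / (tau + dt)
    y = 0.0
    u_prev = 0.0
    first = None
    for _ in range(int(round(t_end / dt))):
        u = u_const
        y = a * (y + u - u_prev)
        u_prev = u
        if first is None:
            first = y
    return first, y

y0, y_final = washout_response(u_const=1.0, tau=4.0, t_end=40.0)
```

The initial sample is essentially the full input (the transient passes through), while after ten time constants the output is negligible — exactly why a steady coordinated turn is not opposed by the damper.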

The locus of the roots of the above equation represents the root locus of the yaw damper system. As a case study, the application of the yaw damper to the N250-100 PA-2 Krincing Wesi aircraft will be presented. The aircraft is equipped with the yaw damper system to increase the ride quality, particularly during flight in turbulence and in the low-speed landing phase. The transfer functions of the N250-100 due to the rudder input δr are given as follows:
N^β_δr(s) = S^β_δr (s/0.0193 − 1)(s/1.8834 + 1)(s/38.6989 + 1)

N^φ_δr(s) = S^φ_δr (s/4.3266 − 1)(s/2.6781 + 1)

N^r_δr(s) = S^r_δr (s/1.8552 + 1)(s/(0.0781 + j0.3926) + 1)(s/(0.0781 − j0.3926) + 1)

N^p_δr(s) = s N^φ_δr(s)   (3.22)

where the static sensitivity coefficients are given by:


[Figure 3.12: Root locus of the yaw damper system with r → δr feedback of the N250-100 aircraft, showing the dutch roll, spiral, actuator and roll subsidence branches]

S^β_δr = 2.44906   (3.23)
S^φ_δr = 235.6588   (3.24)
S^r_δr = 19.560105   (3.25)

and the characteristic polynomial is:

Δ_ld(s) = s⁴/0.0381 + s³/0.01706 + s²/0.01544 + s/0.01004 + 1   (3.26)

with the following characteristic roots:


[Figure 3.13: Root locus of the yaw damper system with r → δr feedback of the N250-100 aircraft, enlarged around the dutch-roll mode, showing the wash-out zero and spiral pole, the wash-out pole, and the roll subsidence branch]

p1 = −1.9577, associated with the roll subsidence mode
p2 = −0.0101, associated with the spiral mode
p3,4 = −0.1325 ± j1.3817, associated with the dutch roll mode, where:
ω_n,DR = 1.39 rad/s, ζ_DR = 0.0955   (3.27)
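The printed roots can be verified against Eq.3.26 by direct evaluation with complex arithmetic. The residual tolerance below is an assumption reflecting the four-digit rounding of the roots; the residuals are small compared with the polynomial's coefficient scale (the largest coefficient is about 100).

```python
def eval_poly(coeffs, s):
    """Horner evaluation of a polynomial (descending powers) at complex s."""
    acc = 0.0 + 0.0j
    for c in coeffs:
        acc = acc * s + c
    return acc

# Eq. 3.26 written out in descending powers of s:
delta_ld = [1 / 0.0381, 1 / 0.01706, 1 / 0.01544, 1 / 0.01004, 1.0]
roots = [-1.9577, -0.0101,
         complex(-0.1325, 1.3817), complex(-0.1325, -1.3817)]
residuals = [abs(eval_poly(delta_ld, p)) for p in roots]
```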

Similar to the previous case study, the rate gyro model is taken as S_rg = 1. This assumed model is considered valid as the N250-100 PA2 uses a ring laser gyro. The actuator model has the time constant τ_s = 0.1, and the wash-out circuit transfer function has the time constant τ_wo = 4. Fig.3.12 presents the root locus of the yaw damper system. Note that the dutch-roll mode becomes more stable and the roll subsidence acquires a faster time response. Fig.?? shows the root locus when the wash-out circuit is omitted.

[Figure 3.14: Root locus of the yaw damper system of the N250-100 aircraft without the wash-out circuit]

It is evident from the root locus that the damping of the dutch roll mode is improved while the time responses of the roll subsidence and spiral modes become faster. For a sufficiently large gain k̄, the dutch roll mode can become overdamped and break into two real dutch roll subsidence modes. The performance of the yaw damper with and without the wash-out circuit can be compared by taking the following design points in the corresponding root locus diagrams.

Yaw damper system with wash-out

The selected gain and the associated poles are:


k̄ = −7.6364
p1 = −7.2651
p2 = −0.0063
p3 = −0.01083
p4 = −2.7346
p5,6 = −0.8319 ± j0.5053   (3.28)

The closed-loop characteristic polynomial can then be obtained as:

Δ_cl(s, k̄) = s⁶ + 12.4829s⁵ + 47.0047s⁴ + 73.1907s³ + 53.7354s² + 15.5727s + 0.0953   (3.29)

The dutch roll damping ratio becomes ζ_DR = 0.4273, an increase of 347.7%.

Yaw damper system without wash-out

The selected gain and the associated closed-loop poles without the wash-out are:

k̄ = −8.6359
p1 = −6.8279
p2 = −2.3636
p3 = −0.1917
p4,5 = −1.1749 ± j0.6620   (3.30)

with the closed-loop characteristic polynomial obtained as:

Δ_cl(s, k̄) = s⁵ + 12.2329s⁴ + 46.4516s³ + 72.03s² + 47.7434s + 6.8169   (3.31)

The dutch roll damping ratio becomes ζ_DR = 0.4356, an increase of 350%. From the above data, the time responses for β(t) and ψ(t) can be calculated

by the following relations:

β(t) = L⁻¹[ k̄ d_wo(s) N^β_δr(s) / Δ_cl(s, k̄) · δr_p ]   (3.32a)

p(t) = L⁻¹[ k̄ d_wo(s) N^p_δr(s) / Δ_cl(s, k̄) · δr_p ]   (3.32b)

r(t) = L⁻¹[ k̄ d_wo(s) N^r_δr(s) / Δ_cl(s, k̄) · δr_p ]   (3.32c)

ψ(t) = L⁻¹[ (1/s) · k̄ d_wo(s) N^r_δr(s) / Δ_cl(s, k̄) · δr_p ]   (3.32d)

where δr_p is the pilot rudder input.

The results of the calculation are presented in the following figure, where three cases are compared. Note from the side slip angle response that the application of the yaw damper enables a very effective suppression of the response. In the yaw response, it is evident that for the damper system without the wash-out, the steady state ψ(t) is held constant at ψ_ss(t) = 1 rad. Note, however, that in the no-yaw-damper case the yaw angle ψ(t) can freely increase. The inclusion of the wash-out circuit enables the yaw damper to work only during the transient period. After the r̄_m signal is phased out, the yaw angle ψ(t) can drift and increase without being inhibited by the yaw damper system. It can be concluded that, with the wash-out circuit installed in the yaw damper, the maneuvers executed by the pilot during a steady state turn will not be opposed by the yaw damper system. Fig.3.16 shows the phase portrait of the yaw angle with respect to the yaw rate. The effectiveness of the yaw damper system in damping the dutch roll mode, and of the wash-out circuit in transmitting the yaw rate only transiently, is evident. The N250-100 PA2 and N250-100 PA1 both implement the yaw damper with the wash-out to reduce the pilot load during turn maneuvers and flight into turbulence.

[Figure 3.15: Time response of β(t), r(t) and ψ(t) of the N250-100 PA2 equipped with the yaw damper, comparing the cases without yaw damper, with yaw damper, and with yaw damper + wash-out]

[Figure 3.16: Phase portrait of the yaw angle vs yaw rate for three cases: no yaw damper, with yaw damper, and yaw damper + wash-out]

Part II

Modern Approach


Chapter 4

Introduction to optimal control

This part of the book focuses on the discussion of optimal control, mainly from a theoretical point of view, and a little from the standpoint of computer-based algorithms for actually solving such problems. A typical problem of interest would be, for example: how do we bring the space shuttle back into the atmosphere in such a way that we arrive lined up with the runway at Edwards, while optimizing something (e.g. minimizing the total heat load on the body) subject to some constraints (e.g. a 4g maximum for normal or total acceleration)? In a case like that we formulate the problem with the dynamics of the aircraft; we might choose the angle of attack and bank angle as control variables, so that we can deal with just the point-mass dynamics of the vehicle. We do not have to worry about the attitude. The solution that we get would be just a time history of the optimal angle of attack and bank angle. This does not constitute a control system in the feedback sense. Most of the results that we derive will be of that sort: open loop control histories that optimize some criterion.

4.1 Some Preliminaries

There are only a few cases in which the form of the solution comes out naturally in terms of a feedback law, which then gives rise to a control system. The cases in which that happens are those where the system dynamics are linear and the criterion is quadratic. We are going to formulate these problems in two basically different ways:

1. The classical variational calculus
2. Dynamic programming


We will deal with both of these. When an analytical solution exists, both will give rise to the same answer, as they must. First, let us consider some preliminary material. For a problem to be completely defined we need to have the following:

1. Cost function. The criterion that you wish to minimize or maximize. We will be dealing with the classical and most common form of optimization, where we have a single scalar criterion to minimize or maximize. When we are presented with a multi-criteria problem, we will treat it by weighting the concerns and adding them up into a single criterion that may reflect all of them in some way. In an atmospheric re-entry problem, for instance, we might be concerned with heat load, acceleration history and control activity.

2. Model of the system to be controlled.

3. Description of constraints. Constraints come in many different forms. We might have limits on our control variables. There may also be constraints of a more difficult form, such as bounds on system state variables. In the re-entry problem, for example, we may want to restrict the normal acceleration to a maximum of 4g; this is a more difficult kind of constraint to deal with. Very typically, there will be terminal constraints, which are not so hard to deal with.

Those are all the elements of the description of the problem. We will first be dealing with the classical variational problem, meaning there is nothing to be controlled. The typical variational problem would be formulated as follows:

J = J(x, ẋ, t)   (4.33)
We simply want to find the histories of these functions which minimize or maximize some criterion. A classical problem of this sort would be: find the curve of minimum length on the surface of a sphere that joins two points on the surface. If, however, we pose a control problem, then typically we will have a criterion which depends upon the state variables, the controls, and maybe time (time is usually the independent variable). In this case:

J = J(x, u, t)   (4.34)

But in addition to that, we will have a constraint in the form of the dynamics of the system to be controlled:

ẋ = f(x, u, t)   (4.35)

In other words, the system dynamics constitute a set of constraints that relate the control variables and state variables. There might be other constraints too. Also in the classical variational problems, a set of constraints can be present.

4.2. LINEAR SYSTEMS.

119

So historically, the variational calculus was devised to treat problems of type (4.33), and we will look at those. But we will also consider control problems of type (4.34), where we do have a set of system dynamics (4.35) to deal with. And we will also look at control problems via the method of dynamic programming. If there are limits on the controls in a control problem, then any history of control which satisfies the constraints will be called an admissible control. Likewise, any trajectory that satisfies all the constraints will be called a feasible trajectory. In terms of constraints that are very often given, we need to think about initial and terminal conditions. When we formulate an optimal control problem, the initial conditions are usually given: we are starting from some initial state, and we ask how to go from there to some terminal state. The final state is very often either constrained or penalized in the cost function. A typical form of cost function is the following:

J = h[x(t_f), t_f] + ∫_{t0}^{t_f} g[x(t), u(t), t] dt  (4.36)

It is a function of the terminal states and an integral, over the interval of the solution, of some function of the states, the controls and time itself. Alternatively, we could have the following form:

J = ∫_{t0}^{t_f} g[x(t), u(t), t] dt  (4.37)

m[x(t_f), t_f] = 0  (4.38)

In this case we do not have a terminal term, but we have the constraints (4.38). These two formulations might give rise to almost the same solution. We can either constrain the terminal conditions if we really want to hold them exactly (4.37 and 4.38), or, in many cases, we do not really have to hold the terminal conditions exactly; we just want them to be near some values, and in that case we can give up the constraints and instead penalize departures from the desired terminal conditions in the cost function (4.36). This is the nature of a soft constraint, where the penalty in the cost function means you have to come close to those conditions but you do not have to meet them exactly. In many cases this is a better formulation. We will be dealing specifically with system dynamics models, which are universally treated as a set of ordinary differential equations. We will take linear systems as a special case of general systems, and the results that we get will simplify considerably in that case. In what follows, we will review some properties of linear systems.

4.2 Linear Systems

ẋ = A(t)x(t) + B(t)u(t)  (4.39)

CHAPTER 4. INTRODUCTION TO OPTIMAL CONTROL

One of the main advantages of dealing with linear systems rather than a general system is that we can write down the form of the solution immediately. The solution form is in this case:

x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ  (4.40)

The first term represents the transition of the initial conditions from the initial time to time t. The second term represents the effect of the control over that interval. Φ is called the transition matrix, which satisfies the differential equation:

dΦ(t, τ)/dt = A(t)Φ(t, τ)  (4.41)

and the boundary condition at the beginning of the interval:

Φ(τ, τ) = I  (4.42)

We will be able to take advantage of that solution form, and we will do so. Linearizing along a dynamic trajectory automatically leads to a time-varying linear description, so that will be the most common form of linearization we consider. If we are dealing with a plant that truly is linear and time-invariant, then:

ẋ = Ax(t) + Bu(t)  (4.43)

where A and B are now constant. In this case the differential equation is a set of first-order differential equations with constant coefficients, and the solution of (4.41) is an exponential. It has the form of the matrix exponential:

Φ(t, τ) = e^{A(t−τ)} = I + A(t−τ) + (1/2)A²(t−τ)² + ...  (4.44)
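As a quick numerical illustration of Eq. 4.44, the sketch below (a toy check, not from the text; the harmonic-oscillator A matrix and the step of 0.5 are assumed examples) builds Φ(t, τ) from the truncated power series and compares it against the known closed-form rotation-matrix solution and the boundary condition Φ(τ, τ) = I of Eq. 4.42.

```python
import numpy as np

def transition_matrix(A, dt, terms=20):
    """Phi(t, tau) = exp(A*(t - tau)) via the truncated power series of
    Eq. 4.44: I + A*dt + A^2 dt^2/2! + ..."""
    Phi = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A * dt / k      # accumulates A^k dt^k / k!
        Phi = Phi + term
    return Phi

# Harmonic oscillator x1' = x2, x2' = -x1: the exact Phi is a rotation matrix.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
dt = 0.5
Phi = transition_matrix(A, dt)
exact = np.array([[np.cos(dt), np.sin(dt)],
                  [-np.sin(dt), np.cos(dt)]])
assert np.allclose(Phi, exact)
assert np.allclose(transition_matrix(A, 0.0), np.eye(2))   # Eq. 4.42
```

In practice one would use a library matrix exponential; the truncated series is shown only because it mirrors Eq. 4.44 term by term.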

4.2.1 Controllability

Definition 5 A plant is controllable if it is possible to drive any initial state to the origin in a finite time with a finite control.

This definition applies to linear systems, nonlinear systems and time-varying systems. For a linear system, being able to drive from a general initial point in the state space to the origin is equivalent to being able to drive from any one initial point in the state space to any other final point. This equivalence does not hold for nonlinear systems. For the time-invariant case in particular, an equivalent test is in terms of the controllability matrix:

M = [B | AB | A²B | ... | A^{n−1}B]  (4.45)

This is a matrix which has n rows, where n is the dimension of the state, and n or more columns, depending on whether there is more than one control. The system is controllable if the rank of the matrix is n, which is the highest possible rank.
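The rank test on Eq. 4.45 is easy to mechanize. The sketch below (an illustrative check, not from the text; the double-integrator and decoupled-mode examples are assumed) builds M and compares its rank with n.

```python
import numpy as np

def controllability_matrix(A, B):
    """M = [B | AB | A^2 B | ... | A^(n-1) B], Eq. 4.45."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Double integrator (position/velocity) driven by acceleration: controllable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
assert is_controllable(A, B)

# Two decoupled modes with the control entering only one: not controllable.
A2 = np.diag([1.0, 2.0])
B2 = np.array([[1.0],
               [0.0]])
assert not is_controllable(A2, B2)
```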


4.2.2 Conventions for Derivatives

Let us define H as a scalar function of the vector x. Then the derivative of the scalar H with respect to the vector x is:

dH/dx = H_x  (4.46)

which can be written out as the row vector:

H_x = [∂H/∂x1, ∂H/∂x2, ..., ∂H/∂xn]  (4.47)

The first variation of the scalar function H can be written as:

δH = (∂H/∂x1)δx1 + (∂H/∂x2)δx2 + ... + (∂H/∂xn)δxn  (4.48)

or:

δH = (∂H/∂x)δx = H_x δx  (4.49)

where:

δx = [δx1, δx2, ..., δxn]^T  (4.50)

For a function of more than one variable, we can write:

H = H(x, u)  (4.51)

where H is a function of both the states and the controls. In this case, we can define both partial derivatives as:

∂H/∂x = H_x = [∂H/∂x1, ∂H/∂x2, ..., ∂H/∂xn]  (4.52)

∂H/∂u = H_u = [∂H/∂u1, ∂H/∂u2, ..., ∂H/∂um]  (4.53)

The second mixed derivative can be stated as:

∂²H/∂x∂u = H_xu = [∂²H/∂xi∂uj]  (4.54)

The derivatives can also be taken in the other order, so:

∂²H/∂u∂x = H_ux = [∂²H/∂ui∂xj] = (H_xu)^T  (4.55)


Note that the above expression is true only if the function H is continuously differentiable to second order. If we take the derivative of a vector-valued quantity with respect to a vector-valued quantity, we get a two-dimensional array whose rows are the derivatives of the components:

dz/dx = [∂z1/∂x ; ∂z2/∂x ; ... ; ∂zm/∂x]  (4.56)

Now let us take an example of a particular function that we will use frequently:

H(x, u) = x^T A u  (4.57)

The first derivative with respect to the vector u is a row vector with the following elements:

H_u = x^T A  (4.58)

The derivative with respect to the vector x can be found by writing the transpose of H, which changes nothing since H is a scalar:

H_x = ∂/∂x (u^T A^T x) = u^T A^T  (4.59)

The second-order derivatives are:

H_xu = ∂/∂x [∂/∂u (x^T A u)] = ∂/∂x (x^T A) = A  (4.60)

and

H_ux = ∂/∂u [∂/∂x (x^T A u)] = ∂/∂u (u^T A^T) = A^T  (4.61)

We will also deal frequently with the quadratic form:

H(x) = x^T A x  (4.62)

Taking the derivative with respect to x we have:

H_x = d/dx (x^T A x) = x^T A + x^T A^T  (4.63)

If A is symmetric then:

H_x = 2 x^T A  (4.64)
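These conventions can be sanity-checked numerically. The sketch below (an illustrative check with randomly chosen A, Q, x and u — all assumed, not from the text) compares central finite differences of H against the closed-form rows H_u = x^T A of Eq. 4.58 and H_x = x^T Q + x^T Q^T of Eq. 4.63.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
x = rng.standard_normal(3)
u = rng.standard_normal(2)
eps = 1e-6

# H(x, u) = x^T A u; central finite differences in u vs. H_u = x^T A (Eq. 4.58).
H = lambda x, u: x @ A @ u
e2 = np.eye(2)
Hu_fd = np.array([(H(x, u + eps * e2[j]) - H(x, u - eps * e2[j])) / (2 * eps)
                  for j in range(2)])
assert np.allclose(Hu_fd, x @ A, atol=1e-6)

# Quadratic form x^T Q x vs. H_x = x^T Q + x^T Q^T (Eq. 4.63).
Q = rng.standard_normal((3, 3))
G = lambda x: x @ Q @ x
e3 = np.eye(3)
Hx_fd = np.array([(G(x + eps * e3[i]) - G(x - eps * e3[i])) / (2 * eps)
                  for i in range(3)])
assert np.allclose(Hx_fd, x @ Q + x @ Q.T, atol=1e-5)
```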


Figure 4.1: Minimum of a cost function J(x) at a stationary point

4.2.3 Function Minimization

We are talking about finding the minimum value of a function J:

J = J(x)  (4.65)

We do not have a dynamic situation; J is a static function and x is a vector of parameters. We want to find the values of those parameters that give the minimum value of J. Let us take the case of a scalar parameter with a bounded range of x. In general, a minimum of J within a bounded range can occur under any one of three conditions:

1. The minimum can occur within the admissible range (Fig. 4.1), at a stationary point.

2. It can occur at the boundary (Fig. 4.2). In this case there need not be a stationary point; there might be stationary points, but they are not the minimum.

3. It can occur at a corner (Fig. 4.3).

In general, minima can be local or global. A global minimum is a point x* where:

J(x*) ≤ J(x) for all admissible x  (4.66)


Figure 4.2: Minimum of J(x) at the boundary

Figure 4.3: Minimum of J(x) at a corner


This is the solution we would like to have. However, the necessary conditions that we derive, and the algorithms that we define for searching for minima, can almost always only assure a local minimum. A local minimum is a point x* where:

J(x*) ≤ J(x) for all x in the vicinity of x*  (4.67)

That only means that there is some region around x* — we do not know whether it is large or small — where the function is larger than at x*. For now we want to concentrate on stationary conditions, which is the most common case for aerospace problems. We will eventually look at problems where we have bounded controls and the solution may lie on the boundaries. Now let us concentrate on stationary points.

Definition 6 A stationary point is a point where the function is stationary (constant) to first order.

That does not mean that the function is constant everywhere, but that there is some range around x* where an expansion of the function is dominated by the first-order terms, and the first-order terms are zero. If x is a scalar:

dJ/dx |_{x=x*} = 0  (4.68)

Now we can write the difference:

J(x* + Δx) − J(x*) = (1/2) d²J/dx² |_{x=x*} Δx² + ...  (4.69)

which is dominated by the second derivative. Thus there will be some region around x* where the local behavior of the function is described in terms of the second derivative. Eq. 4.68 is called the first-order necessary condition. For a minimum, it is necessary that the second derivative be non-negative:

d²J/dx² |_{x=x*} ≥ 0  (4.70)

This is called the second-order necessary condition. If the second derivative is not zero but strictly positive, then we are assured that the point is a local minimum. That is the second-order sufficient condition:

d²J/dx² |_{x=x*} > 0  (4.71)

For n dimensions, we can extend those ideas by writing the variation of J, to first order, as:

δJ = (dJ/dx) δx  (4.72)


The first-order necessary condition is:

dJ/dx |_{x=x*} = 0  (n conditions)  (4.73)

The second-order term can be written as:

J(x* + Δx) − J(x*) = (1/2) Δx^T J_xx Δx  (4.74)

If J_xx is positive semi-definite then the quadratic form on the right side of Eq. 4.74 is positive or zero:

J_xx ≥ 0  (4.75)

This is a second-order necessary condition for a local minimum. A test for semi-definiteness like Eq. 4.75 is that all the eigenvalues of J_xx must be greater than or equal to zero. Again, if J_xx is strictly positive definite,

J_xx > 0  (4.76)

then we have a sufficient condition for a local minimum. That is really the definition of positive definiteness: a matrix such that if you form the quadratic form, the result is positive for all nonzero Δx.
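The first- and second-order conditions are easy to verify on a small example. Below is a minimal sketch (the quadratic cost with its Q and b is an illustrative choice, not from the text): for J(x) = ½x^T Q x + b^T x with Q > 0, the stationary point is x* = −Q⁻¹b, and the eigenvalue test of Eqs. 4.75-4.76 is applied to J_xx = Q.

```python
import numpy as np

# J(x) = 0.5 x^T Q x + b^T x with Q symmetric positive definite.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -2.0])

J = lambda x: 0.5 * x @ Q @ x + b @ x
grad = lambda x: Q @ x + b                 # transpose of the row vector J_x

x_star = -np.linalg.solve(Q, b)            # unique stationary point

assert np.allclose(grad(x_star), 0.0)      # first-order condition  (4.73)
assert np.all(np.linalg.eigvalsh(Q) > 0)   # eigenvalue test of J_xx (4.76)

# Any perturbation raises the cost, consistent with Eq. 4.74.
dx = np.array([0.1, -0.2])
assert J(x_star + dx) > J(x_star)
```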

4.3 Constrained Problems

We left the discussion in the previous session concentrating on stationary points, which are governed by the necessary condition:

dJ/dx |_{x=x*} = 0  (4.77)

That is for the case where there are no constraints in the problem. Now we want to look at constrained problems, that is, problems of the following form:

min J(x)  (x has dimension n)
subject to: f(x) = 0  (f has dimension m, m < n)  (4.78)

Thus the constraints have the form f(x) = 0. In this case m must be less than n, because each scalar constraint removes one degree of freedom from the problem, and if we removed n degrees of freedom we would completely specify the problem (if the constraints are independent). Now, how do we go about handling constraints? First, we will look into the possibility of elimination.


4.3.1 Elimination

What we mean by elimination is the possibility that you can actually solve the constraints for m of the x's in terms of the remaining n − m x's, thereby reducing the dimension of the problem and automatically satisfying the constraints. Suppose we partition x:

x = [x1 ; x2],  x1 of dimension m, x2 of dimension n − m  (4.79)

Then if it is possible to take the m equations

f(x) = 0  (4.80)

and solve them for

x1 = g(x2)  (4.81)

then we can write the cost function as

J(x) = J(x1, x2) = J[g(x2), x2] = J′(x2)  (4.82)

so J′(x2) is just another function, of x2 alone. This is now an unconstrained minimization problem in a lower dimension. For every value of x2 we can find the value of x1 from Eq. 4.81, and therefore a complete solution that satisfies the constraints. We can always use this approach if the constraints are linear and independent. However, for some classes of problems we may not be able to eliminate the constraints explicitly. We are going to need a more general approach than this, and that is the method of Lagrange.
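A minimal sketch of the elimination approach (the problem min x1² + x2² subject to x1 + x2 = 1 is an illustrative choice, not from the text): solving the constraint for x1 gives the reduced unconstrained cost J′(x2) of Eq. 4.82, which can then be minimized directly.

```python
import numpy as np

# Illustrative problem: min x1^2 + x2^2  s.t.  f(x) = x1 + x2 - 1 = 0.
# Eliminate: x1 = g(x2) = 1 - x2, giving the reduced unconstrained cost
# J'(x2) = (1 - x2)^2 + x2^2 as in Eq. 4.82.
J_red = lambda x2: (1.0 - x2) ** 2 + x2 ** 2

# dJ'/dx2 = -2(1 - x2) + 2 x2 = 0  gives  x2* = 1/2, hence x1* = 1/2.
x2_star = 0.5
x1_star = 1.0 - x2_star

# Crude numerical confirmation on a grid of x2 values.
grid = np.linspace(-2.0, 2.0, 40001)
x2_num = grid[np.argmin(J_red(grid))]
assert abs(x2_num - x2_star) < 1e-3
assert abs(x1_star + x2_star - 1.0) < 1e-12    # constraint satisfied exactly
```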

4.3.2 Method of Lagrange

Let us look at a single function J of two variables, J(x1, x2). Suppose that the unconstrained minimum occurs at the point A in Fig. 4.4. The level curves represent contours of constant value of J. The first-order change in the function J is given by:

δJ = (dJ/dx) δx  (4.83)

Since these are level curves, the gradient J_x must be orthogonal to any δx along them; thus the gradient of the function J is everywhere orthogonal to the level curves. The constraint is shown by the curve f(x) = 0, which is generally nonlinear. In this case point B will be the point of the constrained minimum. Right at the solution the two curves are tangent, which means that the two gradients are collinear.

Figure 4.4: Constrained minimum vs unconstrained minimum

That property, which is obvious here in two dimensions, needs to be generalized: in higher dimensions the cost gradient at a constrained minimum must lie in the subspace that is spanned by all of the constraint gradients. So the fundamental property of a constrained minimum is that the cost gradient lies in the space spanned by the constraint gradients. In this case, we can express it as a linear combination of those constraint gradients:

dJ/dx = −λ1 df1/dx − ... − λm dfm/dx = −λ^T df/dx  (4.84)

The combination coefficients are the λ's. So this is a necessary condition at a constrained minimum. Together with the constraint itself, it constitutes a complete set of necessary conditions. The way we actually use this is to define a new scalar function L:

L = J + λ^T f  (4.85)

Then condition Eq. 4.84 can be written simply as:

dL/dx = 0  (4.86)

And here we see the standard interpretation: by introducing some new parameters, as many as there are constraints, we define a new function, an augmented cost function, which we can treat as if it were unconstrained. Eq. 4.86 together with the constraints gives a total of m + n conditions to define the m + n parameters x and λ:

dL/dx = 0,  f(x) = 0  (4.87)

The function L is called the Lagrangian function and the λ's are called the Lagrange multipliers. We can also write:

f = dL/dλ  (4.88)

which means that the necessary conditions can be written as:

dL/dx = 0
dL/dλ = 0  (4.89)

If we define a new augmented parameter y:

y = [x ; λ]  (4.90)

then both of the statements in Eq. 4.89 are included in a single statement:

dL/dy = 0  (4.91)

And this is a complete set of necessary conditions for a constrained minimum.
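As a concrete sketch (the problem min x1² + x2² subject to x1 + x2 − 1 = 0 is an illustrative choice, not from the text), the necessary conditions (4.89) are linear here, so the single statement dL/dy = 0 of Eq. 4.91 becomes one linear system in the augmented parameter y = [x; λ].

```python
import numpy as np

# Illustrative problem: min J = x1^2 + x2^2   s.t.   f(x) = x1 + x2 - 1 = 0.
# Lagrangian (4.85): L = x1^2 + x2^2 + lam * (x1 + x2 - 1).
# Necessary conditions (4.89): 2*x1 + lam = 0, 2*x2 + lam = 0, x1 + x2 = 1.
# They are linear, so dL/dy = 0 (Eq. 4.91) is one system in y = [x1, x2, lam].
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(K, rhs)

assert np.allclose([x1, x2], [0.5, 0.5]) and np.isclose(lam, -1.0)
# Check the gradient condition (4.84): dJ/dx = -lam * df/dx.
assert np.allclose([2 * x1, 2 * x2], [-lam, -lam])
```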

4.4 Inequality Constraints

Now the problem can be stated as:

min J(x)  (x has dimension n)
subject to: f(x) ≤ 0  (f has dimension m)  (4.92)

In this case we can have m ≤ n or m ≥ n and still have a perfectly well-defined minimization problem. That is because, with inequality constraints, each constraint does not necessarily remove a degree of freedom from the problem; it depends on whether the constraint is active or not. We have a similar picture in two dimensions (Fig. 4.5). From the picture, we can see that the constraint f1 is not active; it does not take a degree of freedom out of the situation. The constraint f2 does constrain the problem, because with respect to J we cannot go further to the left. The cost gradient lies in the space spanned by the gradients of the active constraints:

dJ/dx = −Σ_i λ_i df_i/dx,  with λ_i ≠ 0 possible only for the active constraints and λ_i = 0 for the inactive constraints  (4.93)


Figure 4.5: Inequality constraints

There is another extra condition, related to the directionality of the gradient. This puts a sign condition on the λ's:

λ_i ≥ 0  (4.94)

With this, we define the same Lagrangian as before:

L = J + λ^T f,  with dL/dx = 0 and λ_i ≥ 0  (4.95)

The rest of the conditions can be summarized in different ways. The usual way of writing them is:

λ_i (dL/dλ_i) = 0  (4.96)

Finally we need to state that the constraints must be satisfied, thus:

dL/dλ_i ≤ 0  (4.97)

Eqs. 4.95, 4.96 and 4.97 summarize the first-order necessary conditions for an inequality-constrained problem. In order for the λ's to be well defined, the constraints have to have a property of regularity, which means that the gradients of the active constraints must be linearly independent.
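The conditions (4.94)-(4.97) can be checked mechanically at a candidate point. In the sketch below, the two-variable problem and its constraints are illustrative choices of my own, not from the text; f1 is active and f2 inactive at the candidate solution, mirroring the discussion of Fig. 4.5.

```python
import numpy as np

# Illustrative problem: min x1^2 + x2^2
#   s.t.  f1 = 1 - x1 - x2 <= 0   (active at the solution)
#         f2 = x1 - 2      <= 0   (inactive)
x = np.array([0.5, 0.5])            # candidate constrained minimum
lam = np.array([1.0, 0.0])          # multipliers: lam1 > 0, lam2 = 0

f = np.array([1 - x[0] - x[1], x[0] - 2.0])    # constraint values (= dL/dlam)
dJ = 2.0 * x                                    # cost gradient
df = np.array([[-1.0, -1.0],                    # rows are df_i/dx
               [ 1.0,  0.0]])

assert np.allclose(dJ + lam @ df, 0.0)   # dL/dx = 0            (Eq. 4.95)
assert np.all(lam >= 0)                  # sign condition       (Eq. 4.94)
assert np.allclose(lam * f, 0.0)         # lam_i dL/dlam_i = 0  (Eq. 4.96)
assert np.all(f <= 0.0)                  # feasibility          (Eq. 4.97)
```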


4.5 Sensitivity of Cost to Constraint Variations

It is common that the constraints are not really precisely defined. It is good to know whether a little adjustment to the constraint values would have a big effect or a small effect on the cost.

Figure 4.6

We know that, to first order:

δJ = (dJ/dx) δx
δf = (df/dx) δx  (4.98)

At the solution point we have the necessary condition, and thus we can write:

dJ/dx = −λ^T df/dx  (4.99)

Substituting into Eq. 4.98, we have:

δJ = −λ^T (df/dx) δx = −λ^T δf  (4.100)

Thus we have our sensitivity:

dJ/df = −λ^T  (4.101)

The sensitivity is given by the Lagrange multipliers; the sign is negative just because of the convention with which we started. So the λ's with large magnitude are the ones that correspond to constraints to which the cost is highly sensitive.
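The sensitivity result of Eq. 4.101 can be illustrated on a small equality-constrained problem (an assumed toy example, not from the text): for min x1² + x2² subject to x1 + x2 − 1 = 0, the necessary conditions give x* = (0.5, 0.5) and λ = −1, so relaxing the constraint to f(x) = ε should change the optimal cost by about −λε = ε.

```python
# Illustrative problem: min x1^2 + x2^2  s.t.  f(x) = x1 + x2 - 1 = 0.
# The necessary conditions give x* = (0.5, 0.5), lam = -1, J* = 0.5.
lam = -1.0
J_star = 0.5

def J_perturbed(eps):
    """Optimal cost when the constraint is relaxed to f(x) = eps,
    i.e. x1 + x2 = 1 + eps; by symmetry x1 = x2 = (1 + eps) / 2."""
    x = (1.0 + eps) / 2.0
    return 2.0 * x ** 2

eps = 1e-4
dJ_actual = J_perturbed(eps) - J_star
dJ_predicted = -lam * eps            # Eq. 4.101: dJ/df = -lam^T
assert abs(dJ_actual - dJ_predicted) < 1e-6   # agreement to first order
```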


Figure 4.7: Principle of Optimality

4.6 Dynamic Programming

Dynamic programming derives from a basic principle known as the principle of optimality, which was first stated by Bellman. The principle of optimality can be described by the following example. Suppose we have a control problem starting from a given state, and the problem is to drive our system to a set of terminal conditions. Suppose that the optimal solution passes through some intermediate point (x1, t1). The principle of optimality states that if the optimal solution from the first point passes through (x1, t1), then it must be true that the optimal solution to the problem starting from (x1, t1) is the continuation of the same path. This principle leads to two things: a numerical solution procedure and theoretical results. The numerical procedure is called dynamic programming. Dynamic programming is a method for solving multistage decision problems. To illustrate the procedure, we will use the graph shown in Fig. 4.8. We start at a certain point, and at that stage there are several decisions we can make. Suppose that we always start at the end of the problem. Each terminal condition has its associated cost value. Now we work our way back; suppose we have gone back to the third stage. We have made decisions to get to the end point, we have found the optimal return from there, and we have stored these values. Now the principle of optimality helps us back up one stage and choose the best branch of the path. However, we do it systematically by considering the following sums:

J1 + J°21,  J2 + J°22  (4.102)

All we have to do is compare the sum J1 + J°21 against J2 + J°22 — the branch cost plus the stored optimal return of the node that branch leads to — and whichever of these is better gives the better choice, and hence J°2. Thus we only consider the optimal return from each of those nodes.

Figure 4.8: Multistage decision example

Example.

Figure 4.9: Flight Planning application

Suppose we want to plan flight paths across the Pacific. The purpose of the flight plans is to take advantage of the winds. We can solve this problem using the dynamic programming approach. The variable along the path is a range variable, and at each of these stages what we have is a two-dimensional grid of values in altitude and cross-range. The idea is that, with the power setting fixed, we find the path that minimizes time; and with fixed power, minimizing time also minimizes fuel. The time taken depends upon the wind direction and velocity. So we start from the end, take each one of the previous nodes


and figure out the time that it will take to go from that node to Jakarta. Then we back up one stage and use the process that we described. We will now illustrate the case of a control problem. For a control problem, we will choose a one-dimensional, scalar-state problem so that we can draw the complete picture, given by Fig. 4.10.

Figure 4.10: One-dimensional scalar state problem

We begin by defining the grid consisting of time points and state points on which we are going to solve the problem. In this case, we wish to minimize:

min h[x(t_f), t_f] + ∫_{t0}^{t_f} g[x(t), u(t), t] dt  (4.103)

subject to:

ẋ = a(x, u, t),  x0 fixed,  t_f fixed  (4.104)


and there may be other constraints too. These other constraints may affect the control (bounded control, for example) or could affect the state (there would be some allowed region). We put down the optimal value of the cost at each of the grid points at t_f. The integral part vanishes at this time, so we have:

J*(xi) = h(xi)  (4.105)

Now we back up one stage and consider all the possible ways of completing the problem from there. We have to consider all of them as long as there are no constraints. Normally, we approximate the integral term in the cost function and the differential equation by rectangular integration:

J[(xi, ti), (xj, tj)] = g[xi, ui, ti] Δt  (4.106)

where ui is the control that actually takes you from xi at ti to xj at tj. In this case we can solve for it directly, because we have only a scalar control and a scalar state:

xj(tj) = xi(ti) + a(xi, ui, ti) Δt  (4.107)

And for any xi and xj:

a(xi, ui, ti) = (xj − xi)/Δt  (4.108)

We can solve the above expression to find ui, giving us the value of ui for any xi and xj. Then we can put this into the expression for J, Eq. 4.106; that is the cost incurred in making that transition. Now, in order to find the optimum path to take from this point, we simply find:

J*(xi, ti) = min over xj of [J(xi, xj) + J*(xj)]  (4.109)

Then we label this node with this value, and so we work our way to the left. If we have constraints, they simplify the problem in general. The great advantage of this, if we can solve the whole problem, is that once we have found the optimal path from the first stage, it is defined throughout the whole space. If at some later stage we find ourselves at another node, not the optimal one, we also know how to go in an optimal path from this other node to the end; we have all those data stored. This gives us a kind of feedback solution, in that we can store in a table the optimal control action to take from every node. This is for one dimension; in the next session we will expand this to higher dimensions.
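The backward sweep of Eqs. 4.105-4.109 can be sketched in a few lines. All the numbers below are assumed for illustration (dynamics ẋ = u, stage cost g = x² + u², terminal cost h = 5x², and the grids); they are not from the text. The stored per-stage controls form the look-up-table feedback law just described.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 21)            # state grid
N, dt = 10, 0.1                            # stages and time step (assumed)

J = 5.0 * xs ** 2                          # terminal cost h(x), Eq. 4.105
policy = []
for k in range(N - 1, 0, -1):              # back up from t_f toward t_0
    J_new = np.empty_like(J)
    u_best = np.empty_like(J)
    for i, xi in enumerate(xs):
        # With dynamics x' = u, the control moving xi to grid point xj is
        # u = (xj - xi) / dt (Eq. 4.108); transition cost g*dt (Eq. 4.106).
        u = (xs - xi) / dt
        cost = (xi ** 2 + u ** 2) * dt + J        # Eq. 4.109
        j = int(np.argmin(cost))
        J_new[i], u_best[i] = cost[j], u[j]
    J = J_new
    policy.insert(0, u_best)               # stored feedback table for stage k

assert np.argmin(J) == 10                  # cheapest starting state is x = 0
assert np.allclose(J, J[::-1])             # symmetric problem, symmetric cost
assert len(policy) == N - 1
```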

Chapter 5

Discrete time Optimal Control


5.1 Higher Dimension Control Problems

We had just begun to discuss dynamic programming as a numerical method for solving scalar control problems. The problem was to minimize a cost function of the form:

min J = h[x(t_f)] + ∫_{t0}^{t_f} g[x(t), u(t), t] dt  (5.1)

We defined a grid in x and t, and saw how we could use the principle of optimality to reduce the total number of computations required to search for an optimal solution. Note that this was an approximation in two senses: first, because we defined a discrete grid of values of x and t, which are normally continuous; and second, because the dynamics and the cost integral were approximated by a rectangular integration rule. Now we want to generalize the problem in a couple of steps. The first generalization is the case of a free terminal time. The first thing that happens when the terminal time is free is that the problem statement may have a terminal function that depends on that time:

J = h[x(t_f), t_f] + ∫_{t0}^{t_f} g[x(t), u(t), t] dt  (5.2)

For example, for a minimum-time problem, let h = t_f (g = 0), which can also be done with h = 0, g = 1. Once we have allowed the terminal time to be free, then we almost certainly have to specify some terminal constraints, or else the


problem will not be well posed. For example, if t_f is free and we do not specify terminal constraints on the states, then the obvious trivial solution is to make:

t_f = t0,  x(t_f) = x(t0),  with J = 0  (5.3)

which is not a well-posed solution. So, in addition, we have a function m[x(t_f), t_f] = 0, which specifies the terminal conditions that we have to meet. So we also have:

t_f is free,  x(0) is given  (5.4)

Figure 5.1: Discrete grid of x and t

Now in this approximate procedure we must approximate the acceptable terminal points. At those points, the optimal cost is just the terminal cost, since the integral does not contribute at t_f:

J*(x, t) = h(x, t) on m(x, t) = 0  (5.5)

From here on the procedure is the same as before:

J*(xi, tk) = min over xj(tk+1) of {g(xi, uj, tk)Δt + J*(xj, tk+1)}  (5.6)

where uj, the control to go from xi to xj, is found by satisfying the dynamics:

xj(tk+1) = xi(tk) + a(xi, uj, tk)Δt  (5.7)


So far, so that we could draw the pictures, we have been looking at a scalar state and control. Now let us generalize to higher-dimensional problems, both in x and u. The problem is restated with vector x and u:

min J = h[x(t_f), t_f] + ∫_{t0}^{t_f} g[x(t), u(t), t] dt  (5.8)

subject to:

ẋ = a(x, u, t),  m[x(t_f), t_f] = 0,  x(0) = x0,  t_f free  (5.9)

We initialize the solution with the same condition we formed previously, in Eq. 5.5. Now we have to use a different procedure to work our way backward from those terminal values of optimal cost. The main difference comes from the fact that before, we could perform our comparison from a node in terms of the x values at the next stage in time, because for each xi and xj we could solve for the control that would make that transition, so the control was defined. Now we cannot do that. Normally the dimension of u will be less than that of x, meaning that if we specify an xi and xj, we normally cannot solve for a u that makes that transition; in fact, not all such transitions are possible. Consider the case shown in Fig. 5.2. On this grid, at every time we actually have a two-dimensional array of x's, which can be expanded into a 2-D grid of x1 and x2, all at a particular time. In this case, given a value of x at the previous stage, there is really only a one-dimensional surface of x's that can be achieved at the next stage, because there is only one degree of freedom in u. So it is not good to pick x's and try to solve for u's. Instead, we really have to pick u's and solve for the corresponding x's. The way this is done is to also define a grid of points on u, in addition to x and t. In other words, we will test discrete points along the curve of achievable values of x(tk+1) corresponding to discrete values of uj. We can now write down the recursion:

J*(xi, tk) = min over uj(tk) of {g(xi, uj, tk)Δt + J*(xj, tk+1)}  (5.10)

where

xj = xi(tk) + a(xi, uj, tk)Δt  (5.11)

Referring to Fig. 5.3, starting from a node xi at tk, we try all possible trajectories to the next stage by testing not over all values of x but of u. We will end up somewhere in the space of the 2-D grid of x at tk+1, which generally requires interpolation among the known costs at the grid points. Any constraints on x and u reduce the amount of calculation and simplify the problem in general. If we have solved this problem, then in effect we have solved all the problems


Figure 5.2: Two-dimensional array of states


Figure 5.3: Double interpolation of the cost function

that appear on the grid, meaning that we can save the optimal control to use from any one of these grid points. That represents a feedback control law, and it holds for nonlinear problems, something that generally cannot be achieved by other approaches to the optimal control problem. The feedback law is just a look-up table. If we have several u's and several x's, then the number of points in the solution grid is:

Ntime · Nx^n  (5.12)

which, for a hundred time steps, a state consisting of 3 positions and 3 velocities (n = 6), and 100 grid points per state, gives:

Ntime · Nx^n = 100 · 100^6 = 10^14  (5.13)

namely, we would need 100 million megawords just to store the values. Bellman referred to this as the curse of dimensionality. He stressed that there is nevertheless a reduction in the number of computations, because we only ever consider the optimal paths. In interpreting the final solution: once we have solved the problem on the grid, we will have stored at every grid point the optimal value of the cost and the control to use there. But as soon as we apply u starting from x0, we will arrive at a place most likely not on a grid point. So it really is more accurate to interpolate in the grid to get the u to use at this point; we can interpolate to get both the cost J and the control u to use at that point. Using that control, we can then solve the approximated differential equation to obtain the next point.


It is true that the optimal cost and control were found by starting only at the gridded values of x at each stage, and now we are at some other value of x. But this is acceptable within the accuracy with which the whole problem is being solved, as long as we do the interpolation at each stage. So we have now covered the numerical procedure that the principle of optimality leads to. We will cover the theoretical results that it gives in the next session.

5.2 Discrete time optimal control problem

The theoretical part gives us a direct way of stating a recursion to solve discrete optimal control problems with no approximations. The problem will be stated as a regulator problem, meaning that we have to get from some given initial condition to some stated set of terminal conditions in an optimal fashion. So we have:

x(k+1) = a_Dk[x(k), u(k)]

J = h[x(N), N] + Σ_{k=0}^{N−1} g_Dk[x(k), u(k)]  (5.14)

m[x(N), N] = 0,  N is free

We will not be concerned with how we will solve it numerically for now. We only want to fi nd a recursion formula for the problem solution. Just as before, for higher dimension of x: J (x, N ) = h(x, N ) on m(x, N ) = 0 And now, the principle of optimality says: J (x, k) = min{gDk (x, u) + J [aDk (x, u), k + 1]} allowable u(k) (5.16) (5.15)

But notice that for any x, the function on the RHS is only a function of u. In principle, we can fi nd the u that minimizes the function in the brackets; but with things being non-linear, a numerical search might be required. But there would be no approximation involved. We would only have to minimize RHS for every x we were interested in. So unless we can functionalize this J as a function of x(k + 1), then we are still stuck with having to solve the problem for a bunch of choices of x so that we can store them for later interpolation to solve for J . Thus, we have stated an exact recursion for a discrete time optimal control problem, but attempting to apply it leads to dynamic programming again. The only way it appears that we can avoid the problem of a very highdimension grid to solve the problem happens when we have the discrete-time,


Figure 5.4: Interpolation in the grid of u's

linear, quadratic regulator, i.e. the dynamics are linear and the cost is quadratic:

x(k+1) = A(k)x(k) + B(k)u(k)

J = (1/2)x(N)^T H x(N) + (1/2) Σ_{k=0}^{N−1} [x(k)^T Q(k)x(k) + u(k)^T R(k)u(k)]  (5.17)

with no other constraints; N is fixed and the terminal state is free (it is penalized through H). For a well-posed problem we need:

H = H^T ≥ 0,  Q(k) = Q(k)^T ≥ 0,  R(k) = R(k)^T > 0  (5.18)

Our boundary condition is the same as before:

J*_N[x(N)] = (1/2) x(N)^T H x(N)  (5.19)

Now, using the recursion from the previous page, we can back up one stage and try to find the optimal control to use for a given x:

J*_{N−1}[x(N−1)] = min over u(N−1) of { (1/2)x(N−1)^T Q(N−1)x(N−1) + (1/2)u(N−1)^T R(N−1)u(N−1) + (1/2)x(N)^T H x(N) }  (5.20)


We would like to minimize this, and since it is quadratic in u, we take its first derivative with respect to u(N−1) and set it to zero. Using the dynamics, we first write:

J*_{N−1}[x(N−1)] = min over u(N−1) of { (1/2)x(N−1)^T Q(N−1)x(N−1) + (1/2)u(N−1)^T R(N−1)u(N−1) + (1/2)[A(N−1)x(N−1) + B(N−1)u(N−1)]^T H [A(N−1)x(N−1) + B(N−1)u(N−1)] }  (5.21)

Then:

∂J_{N−1}/∂u(N−1) = u(N−1)^T R(N−1) + [A(N−1)x(N−1) + B(N−1)u(N−1)]^T H B(N−1) = 0  (5.22)

Collecting the two terms in u together (and transposing):

[∂J_{N−1}/∂u(N−1)]^T = [R(N−1) + B(N−1)^T H B(N−1)] u(N−1) + B(N−1)^T H A(N−1) x(N−1) = 0  (5.23)

The second derivative can readily be written as:

∂²J_{N−1}/∂u(N−1)² = R(N−1) + B(N−1)^T H B(N−1) > 0  (5.24)

which defines an upward-curving bowl, i.e. a global minimum at the stationary point. Now, solving Eq. 5.23 for u(N−1) = u*(N−1), we have:

u*(N−1) = −[R(N−1) + B(N−1)^T H B(N−1)]^{−1} B(N−1)^T H A(N−1) x(N−1)  (5.25)

= F(N−1) x(N−1)  (5.26)

where

F(N−1) = −[R(N−1) + B(N−1)^T H B(N−1)]^{−1} B(N−1)^T H A(N−1)  (5.27)

We see here that we have an optimal control which is a direct feedback of the current state, i.e. simply a state-feedback optimal control law. So now we can calculate the optimal cost:

J*_{N−1}[x(N−1)] = (1/2)x(N−1)^T Q(N−1)x(N−1) + (1/2)x(N−1)^T F(N−1)^T R(N−1) F(N−1) x(N−1) + (1/2)x(N−1)^T [A(N−1) + B(N−1)F(N−1)]^T H [A(N−1) + B(N−1)F(N−1)] x(N−1)  (5.28)


Combining the terms using the common factor, we can rewrite the above equation as:

J*_{N−1}[x(N−1)] = (1/2)x(N−1)^T [ Q(N−1) + F(N−1)^T R(N−1)F(N−1) + [A(N−1) + B(N−1)F(N−1)]^T H [A(N−1) + B(N−1)F(N−1)] ] x(N−1)  (5.29)

Or, simply:

J*_{N−1}[x(N−1)] = (1/2)x(N−1)^T P(N−1) x(N−1)  (5.30)

which is quadratic. Note that at stage N, the cost J*_N was also a quadratic function of x(N):

J*_N[x(N)] = (1/2)x(N)^T H x(N)  (5.31)

Eq. 5.30 and Eq. 5.31 suggest that the cost-to-go is always quadratic. We can show this inductively.
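Continuing the induction numerically: the sketch below implements the backward recursion of Eqs. 5.25-5.30 for the special case of time-invariant A, B, Q, R (the text allows them to vary with k; the scalar numbers are illustrative assumptions). Each returned P(k) defines the quadratic cost-to-go J*_k = ½x^T P(k)x, and the gains give the state-feedback law u*(k) = F(k)x(k).

```python
import numpy as np

def lqr_backward(A, B, Q, R, H, N):
    """Backward sweep: F(k) from Eq. 5.27 (with the minus sign, so that
    u*(k) = F(k) x(k)) and P(k) from Eq. 5.29, starting at P(N) = H."""
    P = H
    F_list, P_list = [], [P]
    for _ in range(N):
        F = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        Acl = A + B @ F
        P = Q + F.T @ R @ F + Acl.T @ P @ Acl
        F_list.insert(0, F)
        P_list.insert(0, P)
    return F_list, P_list

# Scalar illustration: x(k+1) = x(k) + u(k), with Q = R = H = 1.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = R = H = np.array([[1.0]])
F, P = lqr_backward(A, B, Q, R, H, N=50)

assert all(p[0, 0] > 0 for p in P)             # cost-to-go stays quadratic, P > 0
assert abs(P[0][0, 0] - P[1][0, 0]) < 1e-12    # recursion has converged
assert -0.62 < F[0][0, 0] < -0.61              # steady-state gain near -0.618
```

For this scalar case the recursion converges toward the steady-state Riccati solution P ≈ 1.618, confirming that the quadratic form is preserved at every stage.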
