
ADAPTIVE AND OPTIMAL CONTROL
CONCEPTUAL DESIGN OF CONTROL SYSTEM

[Figure: adaptive control structure]

[Figure: model-based control design]
LECTURE TOPICS ON OPTIMAL CONTROL

1. Optimal Control: Introduction
2. Optimization
3. Calculus of Variations
4. Hamilton-Jacobi-Bellman Equation
5. Variations of the Optimal Control Problem
6. Optimal State Feedback Control Design
7. Optimal Output Feedback Control Design
LECTURE TOPICS ON ADAPTIVE CONTROL

1. Introduction to Adaptive Control
2. Discrete-Time System Models for Control: Review
3. Parameter Adaptation Algorithms
4. Recursive Plant Model Identification in Open Loop
5. Recursive Plant Model Identification in Closed Loop
6. Direct Adaptive Control
7. Indirect Adaptive Control
ASSESSMENT
Assignments: 30%
Midterm exam (project): 35%
Final exam (project): 35%
REFERENCES

Ioan Doré Landau et al., Adaptive Control: Algorithms, Analysis and Applications, Second Edition, Springer, 2011.
Desineni Subbaram Naidu, Optimal Control Systems, CRC Press, 2003.
OPTIMAL CONTROL: INTRODUCTION

Reference: Desineni Subbaram Naidu, Optimal Control Systems, CRC Press, 2003


Outline

1. Difference between Classical & Modern Control
2. Components of Modern Control
3. Optimization Problem
4. Optimal Control
1. DIFFERENCE BETWEEN CLASSICAL & MODERN CONTROL
a. Classical Control Configuration

Classical control theory, concerned with single-input single-output (SISO) systems, is mainly based on Laplace transform theory and its use in representing systems in block-diagram form, which leads to the closed-loop transfer function

\frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)}

where

G(s) = G_c(s) G_p(s)

Note:
1. The input u(t) to the plant is determined by the error e(t) and the compensator.
2. Not all the variables are readily available for feedback; in most cases only one output variable is available.
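
As a quick numerical check, the closed-loop transfer function can be formed by polynomial arithmetic. The sketch below is not from the reference text; the compensator G_c(s) = 5, plant G_p(s) = 1/(s(s+2)), and unity feedback H(s) = 1 are illustrative assumptions:

```python
import numpy as np

# Assumed example: Gc(s) = 5, Gp(s) = 1/(s(s+2)), H(s) = 1.
num_Gc, den_Gc = [5.0], [1.0]
num_Gp, den_Gp = [1.0], [1.0, 2.0, 0.0]   # 1 / (s^2 + 2s)
num_H,  den_H  = [1.0], [1.0]

# Forward path G(s) = Gc(s) Gp(s): multiply numerators and denominators.
num_G = np.polymul(num_Gc, num_Gp)
den_G = np.polymul(den_Gc, den_Gp)

# Closed loop Y/R = G / (1 + G H) = (num_G den_H) / (den_G den_H + num_G num_H).
num_cl = np.polymul(num_G, den_H)
den_cl = np.polyadd(np.polymul(den_G, den_H), np.polymul(num_G, num_H))

print("closed-loop numerator:  ", num_cl)   # [5.]
print("closed-loop denominator:", den_cl)   # [1. 2. 5.]
```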
B. MODERN CONTROL CONFIGURATION

Modern control theory, concerned with multiple-input multiple-output (MIMO) systems, is based on a state-variable representation in terms of a set of first-order differential (or difference) equations.

Linear State Space

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

General Form

\dot{x}(t) = f(x(t), u(t), t)
y(t) = g(x(t), u(t), t)
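
A minimal simulation sketch of the linear state-space form; the double-integrator matrices A, B, C, D and the unit-step input below are illustrative assumptions, not from the lecture:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed plant: double integrator x1' = x2, x2' = u, output y = x1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def u(t):
    return np.array([1.0])                  # unit-step control input

def xdot(t, x):
    return A @ x + B @ u(t)                 # x'(t) = A x(t) + B u(t)

sol = solve_ivp(xdot, (0.0, 2.0), [0.0, 0.0], t_eval=np.linspace(0.0, 2.0, 21))
y = C @ sol.y + D @ np.vstack([u(t) for t in sol.t]).T   # y(t) = C x(t) + D u(t)
print(y[0, -1])   # position at t = 2 is t^2/2 = 2.0
```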
2. COMPONENTS OF MODERN CONTROL
3. OPTIMIZATION
OPTIMIZATION PROBLEM
Optimization is a very desirable feature in day-to-day life. We like to work and use our time in an optimum manner, use resources optimally, and so on.
Optimization:
Static optimization is concerned with controlling a plant under steady-state conditions, i.e., the system variables are not changing with respect to time. The plant is then described by algebraic equations.
Dynamic optimization is concerned with the optimal control of plants under dynamic conditions, i.e., the system variables are changing with respect to time and thus time is involved in the system description. The plant is then described by differential (or difference) equations.
Optimization Problem

The general form of an optimization model:

min or max f(x_1, \ldots, x_n)          (objective function)
subject to g_i(x_1, \ldots, x_n) \le 0  (functional constraints)
x_1, \ldots, x_n \in S                  (set constraints)

x_1, \ldots, x_n are called decision variables.

In words, the goal of optimization is to find x_1, \ldots, x_n that
satisfy the constraints, and
achieve the minimum (maximum) objective function value.
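
A small worked instance of this general form, under assumed data: minimize f(x_1, x_2) = (x_1 - 1)^2 + (x_2 - 2)^2 subject to the functional constraint x_1 + x_2 - 2 <= 0 and the set constraint x_1, x_2 >= 0, using scipy.optimize.minimize:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy instance (for illustration only).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# SciPy's "ineq" constraints require fun(x) >= 0, so g(x) <= 0 becomes 2 - x1 - x2 >= 0.
constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]
bounds = [(0.0, None), (0.0, None)]   # set constraint x1, x2 >= 0

result = minimize(f, x0=[0.5, 0.5], bounds=bounds, constraints=constraints)
print(result.x)   # approximately [0.5, 1.5], the feasible point closest to (1, 2)
```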
PERFORMANCE INDEX OF OPTIMAL CONTROL

Performance index for a time-optimal control system: we try to transfer a system from an arbitrary initial state x(t_0) to a specified final state x(t_f) in minimum time:

J = \int_{t_0}^{t_f} dt = t_f - t_0 = t^*

Performance index for a fuel-optimal control system: consider a spacecraft problem. Let u(t) be the thrust of a rocket engine, and assume that the magnitude |u(t)| of the thrust is proportional to the rate of fuel consumption. To minimize the total fuel expenditure, we take

J = \int_{t_0}^{t_f} |u(t)| \, dt

or, for several controls,

J = \int_{t_0}^{t_f} \sum_{i=1}^{m} R_i |u_i(t)| \, dt

where R_i is a weighting factor.
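
The fuel cost is easy to evaluate numerically on a sampled trajectory. In the sketch below, the time grid, the two thrust profiles u_i(t), and the weights R_i are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0.0, 10.0, 501)                 # grid over [t0, tf] (assumed)
u = np.vstack([np.sin(t), 0.5 * np.cos(t)])     # u_i(t), shape (m, N) (assumed signals)
R = np.array([1.0, 2.0])                        # weighting factors R_i (assumed)

integrand = R @ np.abs(u)                       # sum_i R_i |u_i(t)| at each grid point
J = trapezoid(integrand, t)                     # J = int sum_i R_i |u_i(t)| dt
print(J)
```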
PERFORMANCE INDEX OF OPTIMAL CONTROL

Performance index for a minimum-energy control system: consider u_i(t) as the current in the i-th loop of an electric network. Then \sum_{i=1}^{m} u_i^2(t) r_i (where r_i is the resistance of the i-th loop) is the total power, or the total rate of energy expenditure, of the network. For minimization of the total expended energy, we have the performance criterion

J = \int_{t_0}^{t_f} \sum_{i=1}^{m} u_i^2(t) r_i \, dt

or, in general,

J = \int_{t_0}^{t_f} u^T(t) R u(t) \, dt

where R is a positive definite matrix.
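
A sketch of evaluating the quadratic integrand u^T(t) R u(t) on a time grid; the signals and the weighting matrix R below are assumed for illustration:

```python
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0.0, 5.0, 251)                # grid over [t0, tf] (assumed)
u = np.vstack([np.exp(-t), np.sin(t)])        # u(t), shape (m, N) (assumed signals)
R = np.diag([1.0, 0.5])                       # positive definite weighting matrix (assumed)

# u^T(t) R u(t) at every grid point, via an einsum contraction over the channel index.
integrand = np.einsum('it,ij,jt->t', u, R, u)
J = trapezoid(integrand, t)                   # J = int u^T R u dt
print(J)
```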


PERFORMANCE INDEX OF OPTIMAL CONTROL

Similarly, we can think of minimization of the integral of the squared error of a tracking system. We then have

J = \int_{t_0}^{t_f} x^T(t) Q x(t) \, dt

where x(t) = x_a(t) - x_d(t)
x_a(t): actual value
x_d(t): desired value
Q: positive semi-definite weighting matrix

Performance index for a terminal control system: in a terminal target problem, we are interested in minimizing the error between the desired target position x_d(t_f) and the actual target position x_a(t_f) at the end of the maneuver, i.e., at the final time t_f. The terminal (final) error is x(t_f) = x_a(t_f) - x_d(t_f). Taking care of positive and negative values of the error and of weighting factors, we structure the cost function as

J = x^T(t_f) F x(t_f)

which is also called the terminal cost function. Here F is a positive semi-definite matrix.

Performance index for a general optimal control system: combining the above formulations, we have a performance index in general form as

J = x^T(t_f) F x(t_f) + \int_{t_0}^{t_f} \left[ x^T(t) Q x(t) + u^T(t) R u(t) \right] dt

or

J = S(x(t_f), t_f) + \int_{t_0}^{t_f} V(x(t), u(t), t) \, dt

Problems arising in optimal control are classified based on the structure of the performance index J. If the PI contains the terminal cost function S(x(t_f), t_f) only, it is called the Mayer problem; if the PI has only the integral term, it is called the Lagrange problem; and the problem is of the Bolza type if the PI contains both the terminal cost term and the integral cost term.
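
The Bolza-type performance index can be evaluated directly from sampled trajectories. The helper below is a sketch; the trajectory and the matrices F, Q, R are chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import trapezoid

def bolza_cost(t, x, u, F, Q, R):
    """J = x^T(tf) F x(tf) + int_{t0}^{tf} [x^T Q x + u^T R u] dt.

    t: (N,) time grid; x: (n, N) state trajectory; u: (m, N) control trajectory.
    """
    terminal = x[:, -1] @ F @ x[:, -1]                 # Mayer (terminal cost) term
    running = (np.einsum('it,ij,jt->t', x, Q, x)
               + np.einsum('it,ij,jt->t', u, R, u))    # Lagrange (integral cost) term
    return terminal + trapezoid(running, t)

# Illustrative trajectory (assumed): n = 2 states, m = 1 control on [0, 5].
t = np.linspace(0.0, 5.0, 251)
x = np.vstack([np.exp(-t), -np.exp(-t)])
u = np.exp(-t)[None, :]
print(bolza_cost(t, x, u, F=np.eye(2), Q=np.eye(2), R=np.eye(1)))
```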
FORMAL STATEMENT OF THE OPTIMAL CONTROL PROBLEM

The optimal control problem is to find the optimal control u*(t) (* indicates the optimal value) which causes the linear time-invariant plant (system)

\dot{x}(t) = A x(t) + B u(t)

to give the trajectory x*(t) that optimizes or extremizes (minimizes or maximizes) the performance index

J = x^T(t_f) F x(t_f) + \int_{t_0}^{t_f} \left[ x^T(t) Q x(t) + u^T(t) R u(t) \right] dt

or which causes the nonlinear system

\dot{x}(t) = f(x(t), u(t), t)

to give the trajectory x*(t) that optimizes or extremizes the performance index

J = S(x(t_f), t_f) + \int_{t_0}^{t_f} V(x(t), u(t), t) \, dt

with some constraints on the control variables and/or the state variables.
OPTIMAL CONTROL PROBLEM
The final time t_f may be fixed or free, and the final (target) state may be fully or partially fixed or free.
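
For the linear plant with the quadratic performance index above, the infinite-horizon special case (F = 0, t_f -> infinity) reduces to the algebraic Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0, which SciPy can solve directly. A minimal sketch, assuming a double-integrator plant and identity weights:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed plant and weights (for illustration): double integrator, Q = I, R = I.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting (positive semi-definite)
R = np.eye(1)            # control weighting (positive definite)

P = solve_continuous_are(A, B, Q, R)   # stabilizing solution of the Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback gain, u*(t) = -K x(t)
print("P =", P)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```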
4. OPTIMAL CONTROL
The main objective: to determine control signals that will cause a process (plant) to satisfy some physical constraints and at the same time extremize (maximize or minimize) a chosen performance criterion (performance index or cost function).
The formulation of an optimal control problem requires:
a) Plant: a mathematical description (or model) of the process to be controlled, in differential-equation form, such as a state space model.
b) Performance index: for classical control, typical performance criteria are the system time response to step or ramp inputs, characterized by rise time, settling time, peak overshoot, and steady-state error, and the frequency response of the system, characterized by gain and phase margins and bandwidth. The optimal control problem is to find a control which causes the dynamical system to reach a target state or follow a state trajectory and at the same time extremize a performance index.
c) Constraints: a statement of boundary conditions and the physical constraints on the states and controls.
5. HISTORY: CALCULUS OF VARIATIONS
Calculus of variations: the branch of mathematics that deals with finding a function which is an extremum (maximum or minimum) of a functional.
John Bernoulli (1667-1748) posed the brachistochrone problem: the problem of finding the path of quickest descent between two points not in the same horizontal or vertical line. This problem, first posed by Galileo (1564-1642) in 1638, was solved by John, by Gottfried Leibniz (1646-1716), and anonymously by Isaac Newton (1642-1727).
Leonard Euler (1707-1783) joined John Bernoulli and made some remarkable contributions, which influenced Joseph-Louis Lagrange (1736-1813).
Joseph-Louis Lagrange (1736-1813) gave an elegant way of solving these types of problems by using the method of (first) variations.
The sufficient condition was given by Adrien-Marie Legendre (1752-1833) in 1786 by considering additionally the second variation.
Carl Gustav Jacob Jacobi (1804-1851) in 1836 came up with a more rigorous analysis of the sufficient conditions. This sufficient condition was later termed the Legendre-Jacobi condition.
At about the same time, Sir William Rowan Hamilton (1788-1856) did some remarkable work on mechanics, showing that the motion of a particle in space, acted upon by various external forces, could be represented by a single function which satisfies two first-order partial differential equations. In 1838 Jacobi had some objections to this work and showed the need for only one partial differential equation. This equation, called the Hamilton-Jacobi equation, later had a profound influence on the calculus of variations, dynamic programming, and optimal control, as well as on mechanics.
5. HISTORY: OPTIMAL CONTROL THEORY
The linear quadratic control problem has its origins in the celebrated work of N. Wiener on mean-square filtering for weapon fire control during World War II (1940-45).
Wiener solved the problem of designing filters that minimize a mean-square-error criterion (performance measure).
R. Bellman in 1957 introduced the technique of dynamic programming to solve discrete-time optimal control problems.
The most important contribution to optimal control systems was made in 1956 by L. S. Pontryagin (formerly of the Union of Soviet Socialist Republics (USSR)) and his associates, in the development of his celebrated maximum principle.
In the United States, R. E. Kalman in 1960 provided linear quadratic regulator (LQR) and linear quadratic Gaussian (LQG) theory to design optimal feedback controls.
The matrix Riccati equation that appears in all the Kalman filtering techniques and many other fields was provided by C. J. Riccati in 1724, without ever knowing that the Riccati equation would become so famous after more than two centuries.
