
OPTIMAL CONTROL
Linear Quadratic Regulator
Abdul Monem Alqzeeri 744

Seraj Mohamed 745

Ehab Ali Buaoud 746

Dr. Ibrahim Ighneiwa


CONTENTS

Motivation for Control Engineering

Control Theory

Optimal Control

Linear Quadratic Regulator

Examples

Application

Conclusion
Motivation for Control Engineering

• Control has a long history that began with the early desire of humans to harness the materials and forces of nature to their advantage (e.g. mechanisms for keeping windmills pointed into the wind).

• Key steps forward in the development of control occurred during the Industrial Revolution and the Second World War.
Control Theory

• Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and with how their behavior is modified by feedback.
Requirements of a Good Control System

• Accuracy
• Sensitivity
• Noise
• Stability
• Speed
• Bandwidth
Branches of Control Engineering

• Classical Control: control methodologies where the ODEs that describe a system are transformed using the Laplace, Fourier, or Z transforms and manipulated in the transform domain.

• Stochastic Control: deals with control design under uncertainty in the model. In typical stochastic control problems, it is assumed that random noise and disturbances exist in the model and the controller, and the control design must take these random deviations into account.

• Modern Control: methods where high-order differential equations are broken into a system of first-order equations. The input, output, and internal states of the system are described by vectors called "state variables".
Branches of Control Engineering

• Intelligent Control: uses various AI computing approaches such as neural networks, fuzzy logic, machine learning, evolutionary computation, and genetic algorithms to control a dynamic system.

• Adaptive Control: the controller changes its response characteristics over time to better control the system.

• Optimal Control: performance metrics of a system are identified and arranged into a "cost function". The cost function is minimized to create an operational system with the lowest cost.
Optimal Control
Aim of Optimal Control
The main objective of optimal control is to determine control signals that will cause a process (plant) to satisfy some physical constraints and, at the same time, extremize (maximize or minimize) a chosen performance criterion (performance index or cost function).
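The slide's equation is not reproduced in the extracted text; in the standard general form, the performance index to be extremized can be written as:

```latex
J = \phi\big(x(t_f), t_f\big) + \int_{t_0}^{t_f} L\big(x(t), u(t), t\big)\, dt,
\qquad \text{subject to } \dot{x} = f(x, u, t),
```

where \(\phi\) is a terminal cost and \(L\) is a running cost.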

Pontryagin (1908–1988)   Bellman (1920–1984)   Kalman (1930–2016)

By 1958, L. S. Pontryagin had developed his maximum principle, which solved optimal control problems relying on the calculus of variations developed by L. Euler (1707–1783). R. Bellman [1957] applied dynamic programming to the optimal control of discrete-time systems. In 1960 three major papers were published by R. Kalman and coworkers; one of these papers discussed the optimal control of systems, providing the design equations for the linear quadratic regulator (LQR).
LINEAR QUADRATIC REGULATOR
An automated algorithm for finding optimal feedback for a linear system in state-space form.

LQR determines the feedback law that drives the state to zero in the least time with the least control effort.

If our objective is to keep the state x(t) near zero, we call it a state regulator system. If we try to keep the output or state near a desired state or output, then we are dealing with a tracking system.
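In standard LQR notation (not reproduced from the slides), the resulting feedback law is linear in the state:

```latex
u(t) = -K\, x(t),
```

where the gain matrix \(K\) is computed from the solution of a Riccati equation, as derived in the design sections below.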
WHY LQR
• Different techniques (such as full-state feedback, PID, and root locus) are used in the design of control systems (i.e., obtaining control gains that achieve desired closed-loop characteristics) irrespective of the magnitude of the gains.

• A higher gain implies larger power amplification, which may not be possible to realize in practice.

• Hence, there is a requirement to obtain reasonable closed-loop performance using optimal control effort. A quadratic performance index may be developed in this direction, minimization of which leads to the optimal control gain.
WHY NOT
• LQR has the drawback that it assumes all the states of the system are measurable.

• LQR requires an analytical model of the system.

• If the system model is not linear, LQR design usually requires model linearization, and the design may be quite complex.
LQR DESIGN:
PROBLEM OBJECTIVE
• To drive the state of a linear (or linearized) system to the origin by minimizing a quadratic performance index (cost function), which can be written in either of two forms.
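The two cost functions appear as images in the slides and are not reproduced in the text; in standard LQR notation they are:

```latex
% 1st function: infinite-horizon cost
J = \frac{1}{2} \int_{0}^{\infty} \left( x^{T} Q x + u^{T} R u \right) dt

% 2nd function: finite-horizon cost with terminal penalty
J = \frac{1}{2}\, x^{T}(t_f)\, S\, x(t_f)
  + \frac{1}{2} \int_{0}^{t_f} \left( x^{T} Q x + u^{T} R u \right) dt
```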
LQR DESIGN:
PROBLEM OBJECTIVE

• The difference between the two cost functions is as follows:

• The 1st function is used for the infinite-horizon control problem.

• The 2nd function is used for the finite-horizon control problem.
LQR Requirements

• S must be positive semi-definite.
• Q must be positive semi-definite.
• R must be positive definite.
LQR DESIGN:
PROBLEM STATEMENT
• Performance Index (to minimize):

• Path Constraint:

• Boundary Conditions:
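The equations for this slide are not reproduced in the text; the standard finite-horizon LQR problem statement is:

```latex
% Performance index (to minimize)
J = \frac{1}{2}\, x^{T}(t_f)\, S\, x(t_f)
  + \frac{1}{2} \int_{0}^{t_f} \left( x^{T} Q x + u^{T} R u \right) dt

% Path constraint: linear system dynamics
\dot{x}(t) = A\, x(t) + B\, u(t)

% Boundary conditions
x(t_0) = x_0, \qquad t_f \text{ fixed}, \quad x(t_f) \text{ free}
```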
LQR DESIGN:
NECESSARY CONDITIONS OF
OPTIMALITY
• Hamiltonian:

• Optimal Control Eq.:

• State Equation:
LQR DESIGN:
NECESSARY CONDITIONS OF
OPTIMALITY

• Costate Equation:

• Terminal penalty:

• Boundary Condition:
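The conditions listed on these two slides appear as images; in standard LQR notation they are:

```latex
% Hamiltonian
H = \frac{1}{2}\left( x^{T} Q x + u^{T} R u \right) + \lambda^{T}\left( A x + B u \right)

% Optimal control equation
\frac{\partial H}{\partial u} = R u + B^{T} \lambda = 0
\;\Rightarrow\; u = -R^{-1} B^{T} \lambda

% State equation
\dot{x} = \frac{\partial H}{\partial \lambda} = A x + B u

% Costate equation
\dot{\lambda} = -\frac{\partial H}{\partial x} = -Q x - A^{T} \lambda

% Boundary condition from the terminal penalty
\lambda(t_f) = S\, x(t_f)
```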
LQR DESIGN:
DERIVATION OF RICCATI
EQUATION
LQR DESIGN:
DERIVATION OF RICCATI
EQUATION

• Riccati equation:

• Boundary condition:
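The derivation itself is not reproduced in the text; in the standard approach, substituting the sweep assumption \(\lambda(t) = P(t)\,x(t)\) into the costate equation yields the matrix Riccati equation:

```latex
-\dot{P} = P A + A^{T} P - P B R^{-1} B^{T} P + Q,
\qquad P(t_f) = S,
```

from which the optimal feedback is \(u = -R^{-1} B^{T} P\, x = -K x\). For the infinite-horizon problem, \(\dot{P} = 0\) and this reduces to the algebraic Riccati equation.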
EXAMPLE - INVERTED PENDULUM
EXAMPLE - MATLAB
clear; clc
alpha = 0.5;
A = [0 1 0; 0 0 1; -35 -27 -5];
B = [0; 0; 1];
Q = [1 0 0; 0 1 0; 0 0 1];
R = alpha*1;
% Compute the optimal gain K, Riccati solution, and closed-loop eigenvalues
[K, P, E] = lqr(A, B, Q, R);
% Closed-loop system under u = -K*x
sys = ss(A - B*K, eye(3), eye(3), zeros(3,3));
t = 0:0.01:4;
x = initial(sys, [1; 1; 1], t);
x1 = [1 0 0]*x';
x2 = [0 1 0]*x';
x3 = [0 0 1]*x';
u = -(K*x')';
subplot(2,2,1); plot(t, x1); grid
xlabel('t (sec)'); ylabel('x1')
subplot(2,2,2); plot(t, x2); grid
xlabel('t (sec)'); ylabel('x2')
subplot(2,2,3); plot(t, x3); grid
xlabel('t (sec)'); ylabel('x3')
subplot(2,2,4); plot(t, u); grid
xlabel('t (sec)'); ylabel('u')
EXAMPLE - ALGEBRAIC RICCATI EQUATION
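The worked numbers from this slide are not reproduced in the text. As a minimal illustrative sketch, the algebraic Riccati equation can be solved in closed form for a scalar system (the values below are hypothetical, chosen only for illustration):

```python
import math

# Scalar LQR: dynamics x' = a*x + b*u, cost J = integral of (q*x^2 + r*u^2) dt.
# The algebraic Riccati equation 2*a*p - (b^2/r)*p^2 + q = 0 is a quadratic
# in p; the positive root is the stabilizing solution.

def solve_scalar_are(a, b, q, r):
    """Return (p, k): ARE solution p and optimal feedback gain k = b*p/r."""
    p = r * (a + math.sqrt(a * a + (b * b) * q / r)) / (b * b)
    k = b * p / r
    return p, k

a, b, q, r = 1.0, 1.0, 1.0, 1.0
p, k = solve_scalar_are(a, b, q, r)
print(p, k)            # p = 1 + sqrt(2), k = p since b = r = 1
print(a - b * k < 0)   # closed-loop pole a - b*k is stable
```

For this choice, p = 1 + √2 ≈ 2.414 and the closed-loop pole a − bk ≈ −1.414, confirming that the optimal feedback stabilizes the plant.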
APPLICATION
Current research suggests the use of a linear quadratic performance index for optimal control of regulators in various applications. Examples include correcting the trajectory of rockets and air vehicles, airplane stability, Segways, and robotics.
CONCLUSIONS

Optimal solution to control problem

Automated (and very fast) method

Only has as much complexity as the plant

Identifying the best weighting matrices is an art


