
Magnus Åkerblad

Licentiate Thesis
Department of Signals, Sensors and Systems
Royal Institute of Technology
Stockholm, Sweden

Submitted to the School of Electrical Engineering, Royal Institute of Technology, in partial fulfillment of the requirements for the degree of Technical Licentiate.

Abstract

In Model Predictive Control (MPC) an optimal control problem has to be solved at each sampling instant. The objective of this thesis is to derive efficient methods to solve the MPC optimization problem. The approach is based on ideas from Interior Point (IP) optimization methods and Riccati recursions. The MPC problem considered here has a quadratic objective and constraints which can be both linear and quadratic. The key to an efficient implementation is to rewrite the optimization problem as a Second Order Cone Program (SOCP). To solve the SOCP, a feasible primal-dual IP method is employed. By using a feasible IP method it is possible to determine whether the problem is feasible, by formalizing the search for strictly feasible initial points as a primal-dual IP problem. There are several different ways to rewrite the optimization problem as an SOCP. However, done carefully, it is possible to use very efficient scalings as well as Riccati recursions for computing the search directions. The use of Riccati recursions makes the computational complexity grow at most quadratically with the time horizon, compared to cubically for more standard implementations.

Acknowledgments

First of all I would like to thank my supervisor Dr. Anders Hansson. He always had time for my questions and to help me find the answers. This work could not have been done without him. I would also like to thank Professor Bo Wahlberg and Professor Lennart Ljung for letting me join the Automatic Control groups at KTH and Linköping University, respectively. It has been a privilege to work at two different departments. Several people have helped me to improve this thesis. Jonas Gillberg, Johan Löfberg and Ragnar Wallin read early versions of the manuscript and gave me valuable comments. Moreover, special thanks to Ulla Salaneck, who proofread the final version of this manuscript.

Notation

Symbols

R            The set of real numbers
R^n          The set of real-valued column vectors of dimension n
R^{m x n}    The set of real-valued matrices of dimension m x n
grad f       Gradient of f
grad^2 f     Hessian of f
||x||        Euclidean norm of a vector
A subset B   A is a subset of B
>=_K         Inequality with respect to a proper cone K
K1 x K2      The Cartesian product of K1 and K2
diag(A, B)   Block diagonal matrix with blocks A and B
arrow(u)     Arrow matrix of u = (u_0, u_1), i.e., [u_0, u_1^T; u_1, u_0 I]

Abbreviations

IP      Interior Point
KKT     Karush-Kuhn-Tucker
LP      Linear Programming
MPC     Model Predictive Control
NT      Nesterov-Todd
QP      Quadratic Programming
QCQP    Quadratically Constrained Quadratic Programming
SOCP    Second Order Cone Programming

Contents

1 Introduction 1
  1.1 MPC 1
  1.2 IP 2
  1.3 Outline 2
  1.4 Contributions 3

2 Model Predictive Control 5
  2.1 History of MPC 5
  2.2 The MPC Setup 6
    2.2.1 The Predictive Model 7
    2.2.2 The Objective Function 7
    2.2.3 The Constraints 8
    2.2.4 The Optimization Problem and the MPC Algorithm 8
  2.3 Stability of MPC 9
  2.4 MPC and IP 10

3 Conic Convex Programming 11
  3.1 The Nonnegative Orthant and the Second Order Cone 11
  3.2 Second Order Cone Programming 14
  3.3 Karush-Kuhn-Tucker Conditions 16

4 Interior-Point Methods 17
  4.1 Newton's Method 17
  4.2 Central Path 18
  4.3 Barrier Function 20
  4.4 Potential-Reduction Methods 22
  4.5 Nesterov-Todd Search Direction 25

5 Efficient Computation of Search Direction 29
  5.1 Control Problem 29
  5.2 Search Direction 31
  5.3 Efficient Solution of Equations for Search Direction 32

6 Strictly Feasible Initial Points 37
  6.1 Primal Initial Point 37
  6.2 Dual Initial Point 41

7 Computational Results 43
  7.1 Complexity Analysis 43
    7.1.1 Flop Count 43
    7.1.2 Complexity of the Algorithm 44
  7.2 Example 45
    7.2.1 The Double-Tank Process 46
    7.2.2 Problem Formulation 47
    7.2.3 Computational Results 49

8 Conclusions and Future Work 55
  8.1 Conclusions 55
  8.2 Future Work 56

A NT Search Direction 57
B Riccati Recursion 63

Chapter 1

Introduction

In this thesis we will combine ideas from Interior Point (IP) optimization methods and linear quadratic control in order to efficiently solve a Model Predictive Control (MPC) problem. First, we will give a background to MPC and IP methods. Then the outline of the thesis is given, followed by a short summary of the main contributions of this work.

1.1 MPC

The core idea of Model Predictive Control (MPC) is to use a dynamical model of the system to predict its future behavior as a function of the control inputs. The optimal future input sequence is then calculated, i.e., the input which achieves the control objectives in an optimal way. A new optimal input sequence is determined at each sampling instant, given the current state estimate. The basic idea of MPC can be seen in the following analogy: You are trying to walk across a street. First you look right and left to estimate whether you can safely make it across. In other words, you are trying to predict if you can walk fast enough to make it across the street without getting hit by a car. You come to the conclusion that it is safe and start walking. Then something unforeseen happens: a car comes towards you at great speed. You then have to make a new decision, to either walk back to the sidewalk or to increase your speed to make it across the street. MPC also explicitly takes care of constraints. In this example it is quite easy to see what kinds of constraints there could be; one example is that your walking speed is limited.

1.2 IP

Interior Point (IP) methods are a class of algorithms for solving certain optimization problems. A simple interpretation of an IP method is given by the following analogy: You are standing on an island, looking for the highest point on the island. To find this place you look at the surroundings to find out in which direction the ascent is steepest. You then start walking in that direction. After walking a bit you start looking for a new steepest ascent, and this is repeated until you find yourself in a place where all directions lead down, which means you are at the highest point of the island. "Interior" reflects the fact that you have to stay on the island! There are of course many technical details that are not dealt with in this simple example. For example, what happens if the island has a local hilltop? This is a question of whether the island is concave or not. In this thesis we will only deal with concave maximization problems or, equivalently, convex minimization problems.

1.3 Outline

A short introduction to MPC is presented in Chapter 2. There the history and background of MPC will be discussed, followed by a presentation of the standard formulation of MPC. The chapter will conclude with a discussion of the stability of MPC and of how IP methods can be used to efficiently solve the resulting optimization problem. In Chapter 3 two kinds of convex cones will be studied: the nonnegative orthant and the second order cone. We will formalize optimization problems over these cones and present optimality conditions. How to solve the optimization problem using an IP method will be discussed in Chapter 4. First we will study Newton's method and how to modify it. Then the potential reduction method will be introduced. The chapter will conclude by investigating how to obtain the so-called Nesterov-Todd search direction. In Chapter 5 an efficient method to solve the optimization problem, using a Riccati recursion to calculate the search directions, will be presented.


To be able to start the optimization algorithm, a strictly feasible initial point is needed. How this can be obtained and how one can determine if the optimization problem is feasible at all, is discussed in Chapter 6. A complexity analysis of the proposed method together with computational results are presented in Chapter 7. Finally, in Chapter 8, a short summary of the results is given together with some ideas for future research.

1.4 Contributions

The main contributions of this thesis are:

- To show how the Riccati recursion approach can be used to find the search direction when the MPC problem is formulated as a Second Order Cone Program (SOCP).

- The application of the matrix inversion lemma, which makes it possible to calculate the search direction using a number of floating point operations that grows linearly with the time horizon.

- An efficient method for determining feasibility and, in case the problem is feasible, computing strictly feasible initial points.

These results have previously been reported in (Åkerblad and Hansson, 2002). Other work by the author, not presented in this thesis, has been published in (Åkerblad et al., 2000a; Åkerblad et al., 2000b).

Chapter 2

Model Predictive Control

Model Predictive Control is a control strategy which takes care of constraints in a straightforward way. MPC is based on predictions of the future of the measured output when a certain control sequence is applied to a process. We will start our review of MPC in Section 2.1 by looking at the history of MPC. In Section 2.2 we will define the MPC setup. In Section 2.3 we will discuss stability of MPC. Finally, in Section 2.4 we will look at how Interior Point methods can be applied to efficiently solve the optimization problem of MPC.

2.1 History of MPC

The ideas of MPC can be traced back to the 1960s, when research on open-loop optimal control was a topic of significant interest. The idea of a moving horizon, which is the core of all MPC algorithms, was proposed by (Propoi, 1963). Another early work that relates to MPC can be found in (Lee and Markus, 1967, p. 423), where the following statement appears:

"One technique for obtaining a feedback controller synthesis from knowledge of open-loop controllers is to measure the current control process state and then compute very rapidly for the open-loop control function. The first portion of this function is then used during a short time interval, after which a new measurement of the process state is made and a new open-loop control function is computed for this new measurement. The procedure is then repeated."

This statement captures the essence of MPC, i.e., obtaining an optimal control sequence and then applying only the first part of it. The true birth of MPC was in industry, with the publications of Richalet et al. (Richalet et al., 1976; Richalet et al., 1978), in which Model Predictive Heuristic Control (MPHC) was presented, and the publication of Cutler and Ramaker (Cutler and Ramaker, 1979), which introduced Dynamic Matrix Control (DMC). Both these algorithms used an explicit dynamical model of the plant to predict the effect of future control actions. The future control actions were determined by minimizing the predicted error subject to operating constraints. The difference between the two algorithms was that MPHC used an impulse response model, whereas DMC used a step response model. In the eighties MPC became popular within the chemical process industry, mainly due to the simplicity of the algorithm and the simple models it required. A good report on this can be found in (Garcia et al., 1989), and a good survey of how MPC is used in industry can be found in (Qin and Badgwell, 1996). In this period a multitude of algorithms were created under a multitude of names. The main differences between these algorithms were the process model they used and how they dealt with noise. More about these algorithms can be found in e.g. (Camacho and Bordons, 1998). One thing to remember is that despite the great success of MPC in industry, no stability theory or robustness results were available; those came later. In the late eighties and early nineties the use of state-space models in MPC became popular, and it soon became the most used MPC formulation in the research literature (Morari and Lee, 1999). The use of state-space models led to some advances towards a stability theory for MPC. Stability will be discussed in Section 2.3.

2.2 The MPC Setup

In this section the key elements of the MPC algorithm will be presented. The key elements in an MPC algorithm are:

- the predictive model,
- the objective function,
- the constraints.

We will now study these elements more closely.

2.2.1 The Predictive Model

In this thesis a state-space model is the only model discussed. The reason for this is, as mentioned earlier, that most of the recent MPC formulations are in state-space form:

    x(k+1) = A x(k) + B u(k),    (2.1)
    z(k)   = C x(k),             (2.2)

where x(k) is the state, u(k) is the control signal and z(k) is a performance-related auxiliary variable used in the objective function, which is defined next.

2.2.2 The Objective Function

The purpose of the objective function is to get a measure of how far away future outputs are from the desired output, and to weight that against the amount of control effort. These two objectives often contradict each other, so in choosing the weights Q and R one has to decide which is more important: obtaining the desired output or using little control power. Define the objective function as

    J = sum_{k=0}^{N-1} [ z(k)^T Q z(k) + u(k)^T R u(k) ] + Psi(x(N)),    (2.3)

where N is the time horizon and Psi(x(N)) is the terminal cost. The performance criterion can easily be extended to handle piecewise quadratic end-point penalties by replacing the terminal cost. This can be used to show stability for larger sets of initial values of the state, see (Löfberg, 2001a). This issue will not be discussed further in this thesis. It is also possible to write (2.3) such that the output is made to follow a desired trajectory, a reference signal. The ability to use information about future reference signals is one of the greatest advantages of MPC. This means that the process can react before the change is made and thus avoid the effects of delay in the system. This can lead to great improvements in performance, especially in industry where the evolution of the reference signal is known beforehand (robotics, servos or batch processes) (Camacho and Bordons, 1998). However, in this thesis we will only discuss the case when the reference signal is zero.
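The model and objective above can be made concrete with a small numerical sketch. This is illustrative Python, not code from the thesis; the scalar system parameters, the weights and the quadratic terminal cost are assumed numbers chosen for the example.

```python
# Sketch: simulate the model (2.1)-(2.2) and evaluate the objective (2.3)
# for a scalar system. All numbers below are illustrative assumptions.

def simulate(A, B, C, x0, u_seq):
    """Roll the model x(k+1) = A x(k) + B u(k), z(k) = C x(k) forward."""
    xs, zs = [x0], [C * x0]
    for u in u_seq:
        xs.append(A * xs[-1] + B * u)
        zs.append(C * xs[-1])
    return xs, zs

def objective(A, B, C, Q, R, x0, u_seq, terminal_weight):
    """J = sum_k z(k) Q z(k) + u(k) R u(k), plus a quadratic terminal cost."""
    xs, zs = simulate(A, B, C, x0, u_seq)
    stage = sum(Q * z * z + R * u * u for z, u in zip(zs[:-1], u_seq))
    return stage + terminal_weight * xs[-1] ** 2

J = objective(A=0.9, B=1.0, C=1.0, Q=1.0, R=0.1, x0=1.0,
              u_seq=[0.0, 0.0, 0.0], terminal_weight=1.0)
```

With zero input the state decays as 0.9^k, so the cost is just the sum of the squared outputs plus the terminal term.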

2.2.3 The Constraints

All processes are limited in some way. It may be that the process actuators are limited in how large signals they can put out, or that the actuators have slew rate constraints, i.e., they cannot change arbitrarily fast. The process may also have constraints which are binary; one example is a valve that can be either open or closed. There are also other reasons for constraints than process limitations. Safety is one such reason: some process variables may not violate certain bounds, the violation of which could mean overflow in a tank or other situations hazardous for personnel and equipment. Another reason is environmental: for example, a gas let out into the environment should not contain too high a concentration of a certain compound. A general way of writing the constraints is

    u(k) in U,    x(k) in X,    (2.4)

where U and X are non-empty convex sets. A common example is the saturation constraint |u(k)| <= u_max.

2.2.4 The Optimization Problem and the MPC Algorithm

We are now ready to state the optimization problem which is used to compute the control signal:

    minimize (2.3) over u(t|t), ..., u(t+N-1|t),
    subject to the model (2.1)-(2.2) and the constraints (2.4),    (2.5)

where the notation x(k|t) means the state at time k given the state at time t. In Chapter 5 this notation will be dropped, assuming that the starting time is always zero. Notice that if U and X are described by linear constraints, then (2.5) is a quadratic problem. If X is an ellipsoid we have a second order cone problem, which will be discussed in Chapter 3. Now the MPC algorithm can be stated as:

1. Measure x(t).

2. Obtain u(t|t), ..., u(t+N-1|t) by solving (2.5).

3. Apply the first element u(t|t).

4. Update time and return to step 1.
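The four-step loop above can be sketched in a few lines. This is an illustrative Python sketch, not thesis code: the scalar plant, horizon, weights and input bound are assumed numbers, and the optimization (2.5) is replaced by a crude grid search over a bounded input sequence rather than a proper QP/SOCP solver.

```python
# Sketch of the receding-horizon MPC loop: measure, solve, apply the first
# input, advance time. Grid search stands in for the real optimizer.
import itertools

A, B, Q, R = 0.9, 1.0, 1.0, 0.1            # illustrative scalar plant and weights
N, U_MAX = 2, 1.0                          # short horizon, saturation bound
GRID = [U_MAX * (i / 10.0 - 1.0) for i in range(21)]  # inputs in [-u_max, u_max]

def cost(x0, u_seq):
    x, total = x0, 0.0
    for u in u_seq:
        total += Q * x * x + R * u * u
        x = A * x + B * u
    return total + Q * x * x               # simple quadratic terminal cost

def solve_ocp(x0):
    """Step 2: find the best bounded input sequence for the measured state."""
    return min(itertools.product(GRID, repeat=N), key=lambda u: cost(x0, u))

def mpc_loop(x0, steps):
    x, applied = x0, []
    for _ in range(steps):
        u_seq = solve_ocp(x)               # steps 1-2: measure x, solve (2.5)
        u = u_seq[0]                       # step 3: apply only the first input
        applied.append(u)
        x = A * x + B * u                  # step 4: time advances, repeat
    return x, applied

x_final, inputs = mpc_loop(x0=2.0, steps=5)
```

Only the first input of each optimal sequence is ever applied; re-solving at every step is what gives MPC its feedback character.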

2.3 Stability of MPC

Stability of MPC was an unsolved problem for a long time. The first case solved was the unconstrained one, where stability is achieved by choosing the terminal cost large enough, or by having a sufficiently long time horizon (Garcia et al., 1989). The unconstrained problem is not very interesting, since it can be solved using LQ techniques (Bitmead et al., 1990). The stability theory for the constrained case was mainly developed in the nineties. A good survey of the stability theory can be found in (Mayne et al., 2000). Assume that all states are measured, that the model is time-invariant, and that the system is controllable. The MPC algorithm will guarantee asymptotic stability of the closed-loop system if certain assumptions are satisfied, provided that the optimization problem (2.5) is feasible at the initial time.

The assumptions involve a terminal state weight and a nominal controller which maps the terminal state onto a control input. The proof of stability can be found in e.g. (Löfberg, 2001a; Lee, 2000). There are many choices of terminal cost and nominal controller which satisfy these assumptions, and there are also many methods to produce them. In (Keerthi and Gilbert, 1988) the terminal state is constrained to the origin and the terminal cost is zero. This simple choice leads to feasibility problems unless the horizon is large. Another choice, presented in (Rawlings and Muske, 1993) and applicable to stable systems, is to take the terminal cost as the infinite-horizon cost of the uncontrolled system. A third choice is to use


e.g. an ellipsoidal terminal state constraint in order to establish stability. This is less conservative, i.e., stability can be proven for a larger set of initial values, but it leads to a Quadratically Constrained Quadratic Program (QCQP) that has to be solved at each sampling instant. This method was investigated in (Lee, 2000; Lee and Kouvaritakis, 1999; Scokaert and Rawlings, 1998).

2.4 MPC and IP

The need to solve (2.5) fast is of great importance in MPC, since faster solvers mean that MPC can be used on processes with faster sampling times, or that larger optimization problems can be solved. As we saw in the previous section, stability can be achieved by different choices of terminal costs and terminal state constraints. The first two choices lead to a Quadratic Program (QP), whereas the third choice leads to a QCQP. Recently, specially tailored IP methods applicable to MPC have appeared (Gopal and Biegler, 1998). These algorithms solve the resulting QP by utilizing the special structure of the control problem. By ordering the equations and variables in a certain way, the linear system of equations that has to be solved for the search direction becomes block-diagonal (Wright, 1993; Wright, 1996). By further examining this structure it is possible to solve the equations using a Riccati recursion. This makes the computational burden grow only linearly with the time horizon (Rao et al., 1997; Hansson, 2000; Vandenberghe et al., 2002). A similar approach is used in (Steinbach, 1994; Blomvall, 2001). Riccati recursions have also been used together with active set methods for solving the optimal control problem (Arnold and Puta, 1994; Glad and Jonson, 1984). Comparisons between active set methods and IP methods have been made by several authors (Albuquerque et al., 1997; Biegler, 1997; Wright, 1996). The idea of using Riccati recursions also works for QCQPs. However, it is then not possible to use feasible IP methods (Wright, 1997), and because of this no proof of polynomial complexity is available. A way to overcome this is to reformulate the QCQPs as SOCPs (Lobo et al., 1998). The objective of this thesis is to show how Riccati recursions can be used also in this context. SOCPs in the context of MPC are also used in (Wills and Heath, 2002).
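The Riccati recursion referred to above can be illustrated on a plain LQ problem: one backward sweep of length N produces the feedback gains, so the work grows linearly with the horizon. This is an illustrative scalar sketch with assumed numbers, not the thesis algorithm (which applies the recursion inside an IP method).

```python
# Backward Riccati recursion for a scalar LQ problem:
# P(k) = Q + A P A - (A P B)^2 / (R + B P B), one pass over the horizon.

def riccati_gains(A, B, Q, R, P_terminal, N):
    P, gains = P_terminal, []
    for _ in range(N):
        K = (B * P * A) / (R + B * P * B)   # optimal feedback gain at this stage
        P = Q + A * P * A - (A * P * B) * K
        gains.append(K)
    gains.reverse()                          # gains[k] acts at time k
    return P, gains

P0, Ks = riccati_gains(A=0.9, B=1.0, Q=1.0, R=0.1, P_terminal=1.0, N=20)
```

For a fixed state dimension the sweep costs a constant amount per stage, which is the structural fact the thesis exploits when computing IP search directions.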

Chapter 3

Conic Convex Programming

Conic convex programming is a class of optimization problems where the objective is linear and the constraint set is the intersection of an affine space with a proper cone. In this chapter two kinds of proper cones will be studied: the nonnegative orthant and the second order cone. We will also formalize optimization problems for these cones and present optimality conditions.

3.1 The Nonnegative Orthant and the Second Order Cone

Linear programming is by far the most well-known problem in optimization. A common way to write a Linear Program (LP) is

    minimize    c^T x
    subject to  Ax = b,    x >= 0,    (3.1)

where x, c in R^n, A in R^{m x n} and b in R^m. Here the relation >= denotes component-wise inequality, i.e., the set of x satisfying x >= 0 is the nonnegative orthant. The nonnegative orthant can be seen graphically in Figure 3.1. With the notation x >=_K 0 we generally mean inequality with respect to a proper cone K. A set K is called a cone if for any u in K and lambda >= 0 we have lambda*u in K. A cone K is called a proper cone if

1. K is convex,
2. K is closed,
3. K is solid, i.e., it has nonempty interior,
4. K is pointed, i.e., it contains no line.

The nonnegative orthant trivially satisfies the conditions for a proper cone. An extension of LP is what is called an SOCP. In an SOCP a linear function is minimized over a convex set described as the intersection of one or several second order cones with an affine space. A common way to write an SOCP is as (3.1) with a second order cone inequality instead of the nonnegative orthant inequality. In order to define the second order cone, introduce the partitioning of a vector u in R^n given by u = (u_0, u_1), where u_0 in R and u_1 in R^{n-1}. The second order cone is the set of u such that ||u_1|| <= u_0. This cone also satisfies the conditions for a proper cone. A graphical interpretation of a second order cone can be seen in Figure 3.2. Many problems can
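The two cone definitions above translate directly into membership tests. A minimal illustrative sketch, not thesis code:

```python
# Membership tests for the nonnegative orthant and the second order cone
# { u : ||u_1|| <= u_0 }, plus the defining cone property lambda*u in K.
import math

def in_nonneg_orthant(x):
    return all(xi >= 0 for xi in x)

def in_second_order_cone(u):
    """u = (u0, u1, ...): true iff the Euclidean norm of the tail is <= u0."""
    u0, u1 = u[0], u[1:]
    return math.sqrt(sum(v * v for v in u1)) <= u0

# A cone is closed under nonnegative scaling: lambda * u stays inside.
u = (2.0, 1.0, 1.0)
scaled = tuple(3.0 * v for v in u)
```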


be formulated as an SOCP, see (Lobo et al., 1998). For instance, consider the following QCQP

    minimize    || A_0 x + b_0 ||^2
    subject to  || A_1 x + b_1 ||^2 <= d_1,

which is equivalent to

    minimize    t
    subject to  || A_0 x + b_0 || <= t    (3.2)
                || A_1 x + b_1 || <= sqrt(d_1),

where the two constraints describe second order cones. Notice that this problem can be written as (3.1) by stacking the variables and defining the cone as the Cartesian product of the two second order cones. Through the use of the Cartesian product it is easy to get a unified treatment of different conic convex optimization problems, see e.g. (Alizadeh and Schmieta, 1997).

3.2 Second Order Cone Programming

As briefly mentioned in the previous section, one can combine different cones in a single optimization problem. From now on we will call the mix between an LP and an SOCP just an SOCP, since an LP can be written as an SOCP. To be more precise, we will from now on consider the following SOCP:

    minimize    c^T x
    subject to  Ax = b
                Cx + d = s    (3.3)
                s in K,

where x, c in R^n, b in R^m, and K is the Cartesian product of N cones K_i of dimension n_i, i.e., K = K_1 x ... x K_N, where each K_i is either a nonnegative orthant or a second order cone. There exists an SOCP associated with (3.3), called the dual SOCP. In order to derive the dual SOCP, define the dual cone as

    K* = { z : z^T s >= 0 for all s in K }.

It can be shown that for the cones considered here K* = K, i.e., K is self-dual. The dual problem can be derived from the Lagrangian

    L(x, s, y, z) = c^T x - y^T (Ax - b) - z^T (Cx + d - s),

where z is constrained to be in the dual cone K*. The dual function is defined as the infimum of the Lagrangian over x and s. A property of the dual function is that it is always less than the optimal value of (3.3), i.e., it provides a lower bound. The largest lower bound is obtained by maximizing the dual function with respect to y and z. To get a nontrivial lower bound we need to add the constraint A^T y + C^T z = c. This results in the following SOCP, called the dual SOCP:

    maximize    b^T y - d^T z
    subject to  A^T y + C^T z = c    (3.4)
                z in K*.

Problem (3.3) is often called the primal SOCP, and problems (3.3) and (3.4) together are called the primal-dual pair. The following relations hold between the primal and the dual:

1. Weak duality: The dual objective is less than or equal to the primal objective at optimum.

2. Strong duality: If either problem (3.3) or (3.4) has a strictly feasible point, i.e., either there exist feasible x and s such that s is in the interior of K, or there exist feasible y and z such that z is in the interior of K*, then the objectives of the primal and the dual are equal at optimum.

The proof of the first statement is obvious from what has been said above. The proof of the second statement is given in (Luo et al., 1996; Nesterov and Nemirovsky, 1994, pp. 105-109). There are several reasons for considering the dual problem. One reason is that so-called primal-dual algorithms, which jointly solve both the primal and the dual problems, are very efficient for SOCPs. Another reason is that the dual problem provides non-heuristic stopping criteria for algorithms. To this end, let us introduce the duality gap, which is defined as the difference between the primal and dual objectives for feasible x, s, y and z, i.e., points which satisfy the constraints of the primal and dual programs, respectively. The duality gap, denoted eta, is given by

    eta = c^T x - (b^T y - d^T z) = s^T z.

Notice that the duality gap is always nonnegative, which follows from weak duality. Moreover, for any feasible points it provides an upper bound on the distance from the primal objective to its optimal value, and hence is useful as a stopping criterion.
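The duality gap can be computed for any pair of feasible points, which is what makes it a practical stopping criterion. The following illustrative sketch (not thesis code) uses a tiny LP, i.e., an SOCP whose cone is the nonnegative orthant, with assumed problem data: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0.

```python
# The duality gap for a tiny LP. Primal: min c'x s.t. x1 + x2 = 1, x >= 0.
# Dual: max y s.t. y + z_i = c_i, z >= 0. The gap equals the
# complementarity term z'x, mirroring eta = s'z above.

c, b = (1.0, 2.0), 1.0

def duality_gap(x, y):
    """Gap between primal objective c'x and dual objective b*y for feasible points."""
    z = tuple(ci - y for ci in c)          # dual slack for the all-ones constraint row
    assert all(xi >= 0 for xi in x) and abs(sum(x) - 1.0) < 1e-9
    assert all(zi >= 0 for zi in z)        # dual feasibility
    primal = sum(ci * xi for ci, xi in zip(c, x))
    dual = b * y
    gap = primal - dual
    # the gap also equals the complementarity term z'x
    assert abs(gap - sum(zi * xi for zi, xi in zip(z, x))) < 1e-9
    return gap
```

At the optimum x = (1, 0), y = 1 the gap is zero; away from it the gap bounds the distance to the optimal value.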

3.3 Karush-Kuhn-Tucker Conditions

The Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient optimality conditions for a general convex optimization problem, assuming that strong duality holds. The KKT conditions for the SOCP are as follows:

    Ax = b                 (3.6)
    Cx + d = s             (3.7)
    A^T y + C^T z = c      (3.8)
    SZe = 0                (3.9)
    s in K                 (3.10)
    z in K*.               (3.11)

Condition (3.9) is called the complementary slackness condition. The matrices S and Z are block diagonal, where each block corresponds to one of the cone constraints: if K_i is a nonnegative orthant, then S_i = diag(s_i) and Z_i = diag(z_i); if K_i is a second order cone, then S_i = arrow(s_i) and Z_i = arrow(z_i). The definitions of diag(.) and arrow(.) are given in the Notation section. Moreover e = (e_1, ..., e_N), where each e_i corresponds to one of the cone constraints. If K_i is a nonnegative orthant, then e_i = (1, ..., 1). If K_i is a second order cone, then e_i is given by the first unit vector, i.e., e_i = (1, 0, ..., 0).
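The diag(.) and arrow(.) blocks can be built explicitly. An illustrative sketch with assumed numbers, not thesis code; it also checks the symmetry arrow(u) v = arrow(v) u, which is what makes the block products in (3.9) well defined in either order.

```python
# diag(.) and arrow(.) blocks from the complementary slackness condition,
# as plain nested lists.

def diag_block(v):
    """diag(v): diagonal block used for a nonnegative orthant cone."""
    n = len(v)
    return [[v[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def arrow_block(u):
    """arrow(u) = [u0, u1^T; u1, u0*I]: block used for a second order cone."""
    n = len(u)
    M = [[0.0] * n for _ in range(n)]
    M[0][0] = u[0]
    for i in range(1, n):
        M[0][i] = u[i]
        M[i][0] = u[i]
        M[i][i] = u[0]
    return M

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# arrow(u) v = arrow(v) u: the arrow operator is symmetric in its arguments.
u, v = (2.0, 1.0, 0.5), (3.0, -1.0, 0.25)
lhs = matvec(arrow_block(u), v)
rhs = matvec(arrow_block(v), u)
```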

05 '

' 5

$

5$ $

75 $ 5 $

If

1 90)( 1 ( 1 0 0 1 () 4 %% 4 1 ( 4 %%

5 ) % 5% )

1 @ ( 5 ) 1 % % @ ( 5% ) 5 ) 1 5 ) % 5% 0)( @ 2) 5 @ % ) 5% @ ( @ 5 1

'

'

' )

%) @ @

$ )5 % ) 5%

%

%) @ @

%) @ @

(3.5)

Chapter 4

Interior-Point Methods

Interior-point methods for solving optimization problems were first introduced in 1984 by Karmarkar in his famous paper (Karmarkar, 1984). Karmarkar's algorithm has polynomial complexity, which means that the problem can be solved in polynomial time. IP methods were first developed for LPs. LPs have played an important role in optimization since their formulation in the 1930s and 1940s (von Neumann, 1937; Kantorovich, 1939; Dantzig, 1963). One of the advantages of IP methods is that they can easily be extended from the LP case to other optimization problems, such as second order cone programming and semidefinite programming. In this chapter the IP framework will be introduced by applying a modification of Newton's method to the KKT conditions.

4.1 Newton's Method

Let us consider the problem of solving the KKT conditions (3.6)-(3.11). One direct approach is to apply Newton's method to the equations (3.6)-(3.9) to obtain search directions, and then choose a step length so that the inequalities (3.10)-(3.11) are satisfied. With v = (x, s, y, z), write (3.6)-(3.9) as

    F(v) = ( Ax - b, Cx + d - s, A^T y + C^T z - c, SZe ) = 0.    (4.1)

It is important to note that the last equality condition is not linear. Newton's method linearizes (4.1) around the current point to obtain the search direction Delta v by solving

    J(v) Delta v = -F(v),    (4.2)

where J(v) is the Jacobian of F, which becomes

    J(v) = [ A    0    0    0
             C   -I    0    0
             0    0   A^T  C^T
             0    Z    0    S ].    (4.3)

To avoid violating the inequality constraints, a line search is performed to calculate the maximum step length that is allowed. Unfortunately, the search direction generated by Newton's method is aggressive in the sense that it decreases the cost function without any consideration of the inequality constraints, and hence only a small step can be used if feasibility is to be maintained. Therefore a large number of iterations is needed to reach the optimum, if it is reached at all. A less aggressive search direction is obtained if one modifies Newton's method in the following way, see (Wright, 1997, p. 6):

1. Alter the search direction towards the interior of the feasible region.

2. Keep the variables from moving too close to the boundary of the feasible region.

These modifications are discussed in the following sections.
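The step-length safeguard above can be shown on a one-dimensional toy problem. This is an illustrative sketch, not the thesis algorithm: plain Newton iteration on f(x) = x^2 - 2, with the step scaled back whenever it would leave the region x > 0.

```python
# Newton's method with a step-length safeguard: take the Newton direction,
# but halve the step until the iterate stays strictly inside x > 0.

def safeguarded_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        f, df = x * x - 2.0, 2.0 * x
        step = -f / df                      # pure Newton direction
        alpha = 1.0
        while x + alpha * step <= 0.0:      # stay strictly feasible
            alpha *= 0.5
        x += alpha * step
        if abs(x * x - 2.0) < tol:
            break
    return x
```

For this particular f the safeguard rarely activates, but the pattern (direction from the linearization, step length from a feasibility test) is exactly the split described in the text.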

4.2 Central Path

The central path describes an arc in the interior of the feasible set and is obtained by relaxing the complementary slackness condition (3.9):

    SZe = mu e,    (4.4)

where mu > 0. It can be shown that the arc is uniquely defined for each mu if and only if the strictly feasible set is nonempty, see (Wright, 1997; Wolkowicz et al., 2000, Theorem 9.2.1). As mu approaches zero, the solution of the relaxed KKT conditions approaches the optimal solution. By applying Newton's method to the relaxed system, the search direction is biased toward the interior, and hence a longer step can be applied before violating the inequality constraints. The idea is then to gradually decrease mu towards zero. Notice


that for each fixed value of mu, the solution of the relaxed KKT conditions will result in a duality gap equal to mu*(p + N_s), where p is the number of nonnegative orthant cones of dimension one and N_s is the number of second order cones. Therefore, choosing mu = eta/(p + N_s), where eta is the duality gap for a given strictly feasible point, and applying Newton's method will result in a solution on the central path with the same duality gap eta. Then reducing mu and applying Newton's method again will result in a new point on the central path with a lower duality gap. By taking mu = sigma * eta/(p + N_s), where sigma lies in [0, 1] and eta is equal to the duality gap of the current iterate, steps can be obtained which are directed towards the central path if sigma = 1, and towards decreasing the duality gap to zero if sigma = 0. Intermediate choices of sigma can be seen as a trade-off between reducing mu and improving centrality. The equations for the search direction become

    J(v) Delta v = -F(v) + ( 0, 0, 0, sigma*mu*e ).    (4.5)

Many methods for how to follow the central path exist. The method that will be used in this thesis is a so-called potential reduction method, which will be described in Section 4.4. Other methods are so-called path-following algorithms, which explicitly restrict the iterates to a neighborhood of the central path, see (Wolkowicz et al., 2000, Chapter 10). Figure 4.1 shows the central path parameterized by mu and how the iterates follow the central path.
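To make the central path concrete, here is a one-variable example, not from the thesis, where the path can be written in closed form: minimize x subject to x >= 1. With a log barrier the relaxed problem is min x - mu*log(x - 1), whose stationarity condition 1 - mu/(x - 1) = 0 gives x(mu) = 1 + mu, so the path approaches the optimum x* = 1 as mu tends to zero.

```python
# Closed-form central path for: minimize x subject to x >= 1.
# The minimizer of x - mu*log(x - 1) over x > 1 is x(mu) = 1 + mu.

def central_path_point(mu):
    return 1.0 + mu

# Points on the path for a decreasing sequence of mu values.
path = [central_path_point(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```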

@ ' B

"!!

$

'

'

$

"!! '

7 1( '

'

1 0

'

'

(4.5)

20

4 Interior-Point Methods

Another way to derive IP methods is to remove the inequality constraints by introducing a barrier term added to the primal objective function. The main idea behind the barrier is to keep the variables from becoming infeasible: the barrier goes to infinity as the variables approach the boundary of the feasible region, as seen in Figure 4.2. The barriers that will be used in this


Figure 4.2: A contour plot of a barrier function for the nonnegative orthant.

thesis are logarithmic barrier functions. For the nonnegative orthant the barrier is defined as

and for the second order cone it is defined as

Both barriers are smooth and convex in the interior. The first derivatives of the barrier functions are given by
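As a concrete illustration, the two logarithmic barriers can be written in a few lines (a sketch with our own function names; the thesis states them in matrix notation):

```python
import math

def barrier_orthant(x):
    """Logarithmic barrier for the nonnegative orthant: -sum(log x_i).
    Finite only for strictly positive x; grows without bound at the boundary."""
    return -sum(math.log(xi) for xi in x)

def barrier_soc(x0, x1):
    """Logarithmic barrier for the second order cone x0 > ||x1||:
    -log(x0^2 - ||x1||^2)."""
    return -math.log(x0 * x0 - sum(v * v for v in x1))
```

Both functions are zero at the "unit" points (the all-ones vector for the orthant, (1, 0) for the second order cone) and tend to infinity as the boundary is approached, which is exactly the behavior shown in Figure 4.2.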

Introduce the convention that for a cone K = K_1 × ⋯ × K_{p+q}, the cones K_1, …, K_p are nonnegative orthants of dimension one and the cones K_{p+1}, …, K_{p+q} are second order cones.

(4.6)

(4.7)

This function is a barrier for the cone under consideration. Now add it to the objective function in (3.3) and keep the cone constraint only implicitly. The primal optimization problem will then be:

Notice that the last equation in the optimality conditions (4.10), multiplied by μ, is equal to the relaxed complementary slackness condition (4.4). To see this, look at each cone separately. For a nonnegative orthant the last equation in (4.10) reads row by row; multiplying each row by μ gives the relaxed condition. For a second order cone, multiplying from the left by the corresponding arrow matrix, with e_1 denoting the first unit vector, gives the same result. To conclude, we notice that the optimality conditions in (4.10) are the same as the relaxed KKT conditions, except for the cone constraints, which are kept implicit.

(4.8)

(4.9)

(4.10)


In Section 4.2 the central path was introduced, but the strategy for how to keep the iterates close to the central path was not presented in detail. To this end introduce the proximity measure

(4.11)

where η is the duality gap. The proximity measure is nonnegative, and it is zero if and only if the iterate lies on the central path, see (Wolkowicz et al., 2000, pp. 241–242). Now define the primal-dual potential function as

(4.12)

where the weighting parameter determines how much weight is put on centrality and how much on decreasing the duality gap. The potential function will be used to obtain equations for the search direction. This is done by applying the steepest descent algorithm to the potential function in the primal variable. Steepest descent can be obtained from a first order Taylor series approximation of the potential function:

where d is called a descent direction if the corresponding directional derivative is negative. Just minimizing the second term in the approximation with respect to d makes no sense, since that term is unbounded from below. This is overcome in the steepest descent algorithm by either introducing a bound on the norm of d or adding a term proportional to its squared norm to the objective function. The two approaches are closely related, see e.g., (Boyd and Vandenberghe, 2001). Here we will use the latter approach:

where H is a positive definite matrix. The equality constraints of (3.3) must also be satisfied by the search direction; this is needed to maintain feasibility of (3.3). From this the descent direction can be found as the solution of the following optimization problem
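In generic notation (our symbols: g the gradient of the potential at the current point, H a positive definite matrix, t > 0 a scaling, and the equality constraints written as Ad = 0), this subproblem has the form:

```latex
\begin{aligned}
\min_{d}\quad & g^{T} d + \tfrac{t}{2}\, d^{T} H d \\
\text{s.t.}\quad & A d = 0
\end{aligned}
```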

(4.13)

Notice that the last expression on the right hand side can be written as

Hence (4.18) only differs from (4.5) by a multiplication of the right hand side with a scalar, so the resulting search direction will just be scaled differently. Since the search directions are multiplied with a step length that optimizes a criterion, and since the solutions of (4.5) and (4.18) only differ by a constant factor, it makes no difference whatsoever whether (4.5) or (4.18) is used for computing the search direction. Most primal-dual algorithms can be summarized as follows:

1. find a search direction by solving (4.5)

2. take a step in this direction

3. choose the step length such that the new iterate does not violate the inequality constraints.

The potential function measures how good a given point is, i.e., it weighs the distance to the central path against the value of the objective function. After obtaining the search direction, the step length can be computed as the minimizer of the potential function along the search direction. It is possible to use different step

where the additional variables are Lagrange multipliers for the equality constraints. Multiplying (4.17) appropriately, (4.14)–(4.17) can be written as

(4.18)


lengths for the primal and dual variables. This leads to a two-dimensional optimization problem: one step length for the primal direction and one for the dual direction. The potential function can be expanded as

(4.19)

where

The second constant is calculated in a similar way as the first, by exchanging the primal variable for the dual variable. Now the minimizer of (4.19) can be obtained using standard methods, e.g., damped Newton. This plane search algorithm is similar to the one in (Vandenberghe and Boyd, 1995). Convergence of the potential reduction method can be shown as in (Wolkowicz et al., 2000, Theorem 9.3.1; Vandenberghe and Boyd, 1996). Since (4.11) is nonnegative, the following inequality is true

Assume that the potential function is reduced by at least a fixed constant in each iteration, which is equivalent to

(4.20)

To summarize, if the potential function can be reduced by a constant in each iteration, then the duality gap can be reduced by a factor ε in a number of iterations proportional to log(1/ε).
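The step length selection described in this section — decrease the potential along the search direction while staying strictly feasible — can be sketched with a simple backtracking rule (illustrative only; all names are ours, and the thesis instead uses a plane search over separate primal and dual step lengths):

```python
def backtrack(phi, slope0, alpha=1.0, shrink=0.5, c=1e-4, max_halvings=60):
    """Backtracking line search on a merit function phi(alpha).

    slope0 is the (negative) directional derivative of phi at alpha = 0; the
    step is halved until the sufficient decrease condition holds.  If phi
    returns float('inf') outside the feasible region, steps that cross the
    cone boundary are rejected automatically."""
    phi0 = phi(0.0)
    for _ in range(max_halvings):
        if phi(alpha) <= phi0 + c * alpha * slope0:
            return alpha
        alpha *= shrink
    return alpha
```

Returning infinity outside the cone is a common trick to combine the feasibility check of step 3 with the sufficient decrease test in one loop.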

In this section the potential function will be used to derive the Nesterov–Todd (NT) search direction for the SOCP. It has been shown that primal-dual algorithms using the NT search direction have polynomial complexity, see (Tsuchiya, 1998). Specifically, it has been shown that the potential function can be reduced by a positive constant in each iteration using the NT direction. It was shown in Section 4.4 how a search direction could be found from the potential function by applying the steepest descent algorithm. From this we are now going to derive the NT direction, following the motivation given in (Todd, 1999; Wolkowicz et al., 2000, Section 9.5). In Section 4.4 the derivation of the equation for the search direction was done using a Taylor series expansion with respect to the primal variable. It is also possible to derive equations for search directions from an expansion with respect to the dual variable. This results in (4.21)


It is possible to choose the scaling matrices such that (4.14)–(4.17) and (4.22)–(4.25) result in the same search direction. First note that equations (4.14)–(4.16) from the primal are the same as equations (4.22)–(4.24) from the dual. That leaves equations (4.17) and (4.25) to determine the conditions for the NT search direction, i.e., it should be possible to transform

(4.26)

into

(4.27)

If this holds, then equations (4.26) and (4.27) are the same. In order to find matrices which satisfy these conditions, a scaling matrix is defined in Appendix A. There we show that a particular choice satisfies the three conditions for the NT search direction. This matrix can also be seen as a scaling, or equivalently as a change of variables. Let the scaling matrix operate on the vectors and matrices of the optimization problem as

By multiplying (4.29) and (4.31) with suitable scaling matrices, and by using an identity shown in Appendix A, the equations for the NT search direction can be written as

where

(4.32)


Chapter 5

In this chapter the optimization problem presented in Chapter 2, which has a quadratic objective function and linear and quadratic constraints, will be solved using the IP method discussed in Chapter 4. The key to an efficient implementation of a solver for this problem is to rewrite it as an SOCP. This can be done in many different ways. However, done carefully, it is possible to use Riccati recursions for computing the search directions, which greatly improves speed.

In this section the control problem is described. First the model is presented. Then the performance measure is introduced. The optimization problem is then reformulated as an SOCP. Consider the following model for

(5.1)


inequality (5.1) should be interpreted component-wise. With abuse of notation we will denote both the given initial value and the state variable at time zero with the same symbol. The performance criterion to minimize is defined as

The reason for having one quadratic constraint for each time step instead of a single one for all is explained in Section 5.3. To be able to solve this optimization problem efficiently we will formulate it as an SOCP. To this end define

(5.2)

(5.3)

where e_1 is the first unit vector. Then the optimization problem can be written as the SOCP

(5.4)

where the modified matrices are versions of the original ones with zero columns inserted where the new variables enter. Here the quadratic constraints appear as second order cones, and the remaining inequalities should be interpreted component-wise.

In this section the equations for the NT search direction will be stated. Following the method described in Chapter 4, the NT search direction can be found by adapting the notation in (4.32). In order to do that, define the matrix and vector as

Partition the inverse of the scaling matrix for the second order cones as

where the blocks are defined in Appendix A. Notice that this equation also defines the scaled quantities. For the nonnegative orthant the scaling matrix is given by

The NT scaling matrix maps the primal and dual variable to the same vector , which is given by

With this scaling the following equations for the search direction are obtained

(5.5)

Now that the equations for the search direction have been presented, the question arises how to solve this linear system of equations. A simple way is to apply an LU factorization to (5.5) and then use forward and backward substitutions. A more efficient way, using a Riccati recursion, is described in the next section.

In this section it is shown how a Riccati recursion can be used to efficiently solve the equations for the search direction obtained in Section 5.2. From the two last


block rows of (5.5) it follows that some variables can be expressed in terms of the others; substituting these back into (5.5), the following equation is obtained


where

is a block matrix which can easily be inverted because of the block diagonal structure of its blocks. Also notice that (5.11) can be solved via

Since the matrix has a block-diagonal structure and is built from the dynamic system with a very specific structure, the system can be solved efficiently using a Riccati recursion approach as in (Wright, 1993). Had we replaced the quadratic constraints in (5.2) with one single constraint, then the matrices would not have had the desired structure and the Riccati recursion approach could not have been applied. By using the matrix inversion lemma it holds that (5.13)
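The matrix inversion lemma referred to here is presumably applied in its standard Sherman–Morrison–Woodbury form (stated generically; the thesis instantiates it with the specific block matrices of the search direction equations):

```latex
(A + UCV)^{-1} \;=\; A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}
```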

It can be shown that the resulting matrix has the same structure as the original one, see Appendix B, and thus it can be factorized by a Riccati recursion as well. Substituting (5.13) into (5.12), we realize that the algorithm for efficiently computing the search direction can be divided into the following steps


Notice that in Step 6 parts of the Riccati recursion used in Step 1 can be reused. Detailed information on the Riccati factorization approach is given in Appendix B, see also (Rao et al., 1997). The computational complexity grows linearly with the time horizon, as will be discussed further in Chapter 7.
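To see why the recursion scales linearly in the horizon, consider the classical backward Riccati sweep for a scalar unconstrained LQ problem (a minimal sketch with hypothetical names; the thesis applies the same idea to the full block-structured KKT system of the SOCP):

```python
def lqr_riccati(a, b, q, r, qf, N):
    """Backward Riccati recursion for x_{k+1} = a*x_k + b*u_k with stage cost
    q*x^2 + r*u^2 and terminal cost qf*x^2.  The work per stage is constant,
    so the total cost grows linearly with the horizon N."""
    P = qf
    gains = [0.0] * N
    for k in reversed(range(N)):
        K = a * b * P / (r + b * b * P)      # feedback gain, u_k = -K * x_k
        P = q + a * a * P - a * b * P * K    # cost-to-go update
        gains[k] = K
    return gains, P

def rollout(a, b, x0, gains):
    """Forward simulation of the closed loop under the computed gains."""
    xs, x = [x0], x0
    for K in gains:
        x = (a - b * K) * x
        xs.append(x)
    return xs
```

In the thesis the recursion runs over block matrices instead of scalars and the right hand sides change between the three sweeps, but the per-stage cost is constant in the same way.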


Chapter 6

So far we have assumed that the starting point is strictly feasible. It is, however, not a trivial task to find a strictly feasible initial point. In this chapter it will be shown how to obtain strictly feasible initial primal and dual points for the optimization problem (5.4). Existence of solutions will also be discussed.

When looking for a strictly feasible primal point for problem (5.4), one realizes that the inequalities involving the relaxation variable are easily satisfied by choosing it large enough. However, the remaining inequalities are more difficult. Therefore consider the following relaxed problem:

(6.1)

When (6.1) has a solution for which the optimal value is negative, it provides a strictly feasible primal point for (5.4). The corresponding dual problem can


be stated as follows

where

and where

A primal-dual algorithm can be used to determine whether the original problem is feasible or not. More precisely, three different cases of solutions to (6.1) and (6.2) exist. If the dual objective is greater than zero, then no strictly feasible solution to the original problem exists. If the primal objective is less than zero, then a strictly feasible solution exists.

If the primal and dual objectives are both equal to zero, then a feasible, but not strictly feasible, solution to the original problem exists. These three cases are illustrated in Figures 6.1–6.3. Notice that this so-called Phase 1 problem can also be reformulated as an SOCP as in Section 5.1, and thus it can be solved using a similar approach. However, we now also have to find a strictly feasible point for the Phase 1 problem. A strictly feasible primal point is trivial to find: just set the control inputs to zero and calculate the states by using the dynamic equation, i.e., recursively compute each state from the previous one. Then choose the relaxation variable large enough to satisfy the inequalities. A strictly feasible dual point has to satisfy


Figure 6.1: The dual objective is greater than zero. No strictly feasible solution to the original problem exists.

Figure 6.2: The primal objective is less than zero. A strictly feasible solution to the original problem exists.


Figure 6.3: The primal and dual objectives are equal to zero. A feasible solution to the original problem exists; however, it is not strictly feasible.
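The construction described above for a strictly feasible primal Phase 1 point — zero inputs, forward simulation of the dynamics, then a relaxation variable above the worst constraint residual — can be sketched as follows (all names hypothetical; constraints are given as residual functions c(x) that must satisfy c(x) < t):

```python
def mat_vec(A, x):
    """Multiply a matrix (list of rows) with a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def phase1_start(A, x0, N, constraints, margin=1.0):
    """Strictly feasible primal start for the Phase 1 problem: simulate
    x_{k+1} = A x_k (inputs set to zero) and choose the relaxation variable
    t strictly above the largest constraint residual along the trajectory."""
    xs, x = [list(x0)], list(x0)
    for _ in range(N):
        x = mat_vec(A, x)
        xs.append(x)
    t = max(c(x) for x in xs for c in constraints) + margin
    return xs, t
```

Because t exceeds every residual by the margin, all relaxed inequalities hold strictly, which is exactly what a feasible IP method needs to start.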

If we let the dual variables be chosen as


Equations (6.7)–(6.8) can be viewed as a linear system evolving backwards in time. Therefore we can solve (6.7)–(6.9) recursively.

After solving (6.7)–(6.9), constraint (6.6) can be satisfied by choosing the remaining variable large enough. Conditions (6.11) and (6.12) can be formulated as a small linear feasibility problem for each time step, more precisely as finding strictly feasible variables satisfying


Then

is bounded and nonempty, see (Wright, 1997). A sufficient condition for this is that the constraints define a nonempty polytope. This is true for the important special case

In the previous section we showed how to find a strictly feasible primal point by solving a so-called Phase 1 problem. It now remains to show how to find a strictly feasible dual point satisfying the dual constraints of (5.4), i.e.

where the matrices and variables are defined in Sections 5.1 and 5.2. By choosing some of the dual variables appropriately, a number of rows of (6.13) will be satisfied. The remaining problem can then be stated as

which is the same as (6.7)–(6.9). This means that solving the Phase 1 problem also yields a point which is strictly feasible for (6.13)–(6.15).

is equivalent to requiring that the matrix has full column rank, which means that a strictly feasible point exists.


is found from (6.10). The optimization problem has a solution if the set of solutions to the dual problem

Chapter 7

Computational Results

This chapter is divided into two parts. First the complexity of the algorithm will be investigated and then an example will be presented to show some computational results.

7.1.1 Flop Count

The complexity analysis will be done by computing how many flops are required to solve the equations for the search direction. A flop is a floating point operation, here defined as one addition, subtraction, multiplication or division of two floating-point numbers. When floating-point operations were relatively slow, the flop count gave a good estimate of the total computational time. Nowadays, issues such as cache boundaries and locality of reference can dramatically affect the computation time of a numerical algorithm (Boyd and Vandenberghe, 2001, p. 498). The flop count can still give a rough estimate of the computational time. Since the flop count only gives an approximation of the computational time, it is enough to check the order, i.e., the largest exponents. First we will describe some of the standard costs for the operations performed in the optimization algorithm.

Matrix-vector multiplication of an m × n matrix with an n-vector costs about 2mn flops. An inner product of two n-vectors costs about 2n flops.

7 Computational Results

Solving a square linear system of dimension n by LU factorization costs on the order of n³ flops.

The cost involved in forming the product of several matrices can vary, since the multiplications can be carried out in different orders. Consider the product of three matrices. One way is to form the product of the first two and then multiply with the third; the second way is to form the product of the last two and then multiply from the left. Which order is cheaper depends on the dimensions, so the order should be chosen accordingly. So far no structure has been exploited, but if a matrix has structure, the flop count can be reduced considerably. For example, multiplication with a triangular matrix requires roughly half as many flops as with a full matrix. More about flop counts and numerical issues can be found in (Golub and van Loan, 1989).
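The two multiplication orders discussed above can be compared with a small flop-count helper (a sketch; the counting rule is the standard one, and the function names are ours):

```python
def matmul_flops(m, n, p):
    """Flops for C = A*B with A m-by-n and B n-by-p: each of the m*p entries
    needs n multiplications and n - 1 additions."""
    return m * p * (2 * n - 1)

def chain_left(m, n, p, q):
    """(A*B)*C with A m-by-n, B n-by-p, C p-by-q."""
    return matmul_flops(m, n, p) + matmul_flops(m, p, q)

def chain_right(m, n, p, q):
    """A*(B*C) for the same dimensions."""
    return matmul_flops(n, p, q) + matmul_flops(m, n, q)
```

For example, when the last factor is a single column (q = 1), multiplying from the right first turns the whole chain into cheap matrix-vector products.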

The main computational cost in the optimization algorithm is to solve Equation (5.5), which has to be solved in each iteration to obtain the search directions. Following the method described in Section 5.3, the most computationally intensive step is solving Equation (5.8), which is done using a Riccati recursion, as described in Appendix B. The complexity analysis of the Riccati recursion is divided into three steps; the steps and all the variables are described in Appendix B. First is the initialization, which is the cost of forming the matrices entering the recursion and consists of a number of matrix multiplications. The next step is the backward recursion, in which the recursion matrices are computed via some intermediate matrices; its cost is proportional to the time horizon. The forward recursion, which is the calculation of the search directions and consists only of matrix-vector computations, also has a cost proportional to the time horizon. A summary of the flop count for the Riccati recursion can be found in Table 7.1.



Table 7.1: The different steps in the Riccati recursion and their flop count.

To solve Equation (5.8), three Riccati recursions have to be performed. The first and third recursions differ only in the right hand side, which means that parts of the Riccati recursion used in the first can be reused in the third. In the second recursion, parts of the multiplications in the initialization of the first recursion can be reused. Another way of solving Equation (5.8) is to use an LU factorization. This would involve three steps. First, the initialization, which is the same as for the Riccati approach. Then comes the LU factorization itself, which requires many times more flops than the Riccati approach. The last step is the forward and backward substitution. Thus the total flop count for the LU factorization approach is considerably higher than for the Riccati recursion approach. To test the algorithm it was implemented in Matlab. The use of for loops in Matlab is not very efficient, so the matrix multiplications in the algorithm were done using sparse matrix multiplications. If the algorithm were to be implemented in a more efficient programming language such as C, the matrix multiplications should probably be performed so that the structure of the matrices is exploited.

7.2 Example

In this section the algorithm will be evaluated on a double-tank process. In the evaluation we will see that the algorithm has a linear growth in flop count with respect to the time horizon, just as expected.


The double-tank process, which was introduced in (Åström and Österberg, 1986), is a laboratory process. It is illustrated in Figure 7.1.


Figure 7.1: The tank system. The process dynamics can be described as

where the two states are the levels in the upper and lower tank, respectively. The pump generates a water flow into the upper tank. The hole in the bottom of the upper tank makes the flow into the lower tank dependent on the water level in the upper tank. The geometrical data for the tanks are given in Table 7.2. The tank data is taken from (Hansson, 2000). The levels in the tanks are measured with sensors that give an output voltage proportional to the level. The proportionality constant is 50 V/m. This means that the range of the sensor outputs is [0, 10] V. The flow [m³/s] generated by the pump is proportional to the supply voltage [V]. By linearizing around the steady

(7.1)

(7.2)

Table 7.2: The geometrical data: the cross sectional area of the tanks and the cross sectional area of the drilled hole.

state solution and sampling the system with zero order hold (sampling interval 2 s), the following discrete-time state equation is obtained

The MPC algorithm was discussed in Chapter 2. In this discussion we saw that the choice of cost function determines how aggressive the output becomes. The matrices in the cost function are here chosen to be

This means that the cost of the control signal is valued 10000 times less than the first state variable, and that the second state variable is valued 100 times more than the first state variable. The terminal states are valued equally to the first state.
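The zero order hold discretization step used above can be sketched in a few lines via the matrix exponential of an augmented matrix (a pure-Python sketch with a truncated Taylor series — adequate for small, well scaled systems like the tank model, not a production-quality expm; all names are ours):

```python
def zoh_discretize(A, B, h, terms=30):
    """Zero order hold: [Ad Bd; 0 I] = expm([[A, B], [0, 0]] * h)."""
    n, m = len(A), len(B[0])
    s = n + m
    # assemble the scaled augmented matrix
    M = [[0.0] * s for _ in range(s)]
    for i in range(n):
        for j in range(n):
            M[i][j] = A[i][j] * h
        for j in range(m):
            M[i][n + j] = B[i][j] * h
    # expm via Taylor series: E = I + M + M^2/2! + ...
    E = [[float(i == j) for j in range(s)] for i in range(s)]
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = [[sum(P[i][l] * M[l][j] for l in range(s)) / k
              for j in range(s)] for i in range(s)]
        E = [[E[i][j] + P[i][j] for j in range(s)] for i in range(s)]
    Ad = [row[:n] for row in E[:n]]
    Bd = [row[n:] for row in E[:n]]
    return Ad, Bd
```

The top blocks of the exponential give the discrete-time pair (Ad, Bd) exactly, because the input is held constant over each sampling interval.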

(7.3)

(7.4)

(7.5)

(7.6)


In the computations a fixed initial value was used. The optimal trajectories for the levels of the two tanks and the optimal trajectory of the pump voltage are shown in Figure 7.2. Notice how the different constraints become active. The control signal starts at its maximum level and then begins to decrease. When the level in the upper tank reaches its maximum value after 4 s, the control signal balances this level with a constant control signal of 0.36 V for about 10 s. Then the control signal decreases so that the level of the upper tank can decrease and approach the same level as the lower tank at time 40 s.

(7.7)

(7.8)

(7.9)

(7.10)


Figure 7.2: Optimal levels of the tanks (upper plot) and optimal control signal (lower plot) as functions of time.

To verify the flop count, computations were performed in MATLAB 5.3.1, where flop counting is still available. The result of these computations can be seen in Figure 7.3. Notice how the number of flops per iteration grows linearly with the time horizon, just as the complexity analysis suggested. Figures 7.4 and 7.5 show how the duality gap decreases exponentially. The rate of decrease seems to depend on the size of the problem, with a lower rate of decrease per iteration for larger problems. The number of iterations grows with the problem size, as can be expected since the rate of decrease of the duality gap becomes lower for larger problems. Section 4.4 showed that the number of iterations needed to reduce the duality gap by a fixed factor can be approximated in terms of the number of cones and a tuning parameter in the potential function, which has to satisfy a lower bound for convergence of the algorithm to hold. Notice that the number of cones grows linearly with the time horizon. Thus the number of iterations grows accordingly, which is shown in Figure 7.6. Computations were also performed to test how the overall computational time grows as the time horizon is increased. The result is presented as the solid line in Figure 7.7. The Riccati recursion based algorithm was also compared


Figure 7.3: Flop count per iteration vs. time horizon.

against standard optimization software, SeDuMi (SeDuMi, 1.3). SeDuMi is developed to solve semidefinite problems but also has a solver which can handle SOCPs. To be able to use MATLAB, an LMI parser called YALMIP (Löfberg, 2001b) was used. The result is presented as the dashed line in Figure 7.7. The dash-dotted line presents the computational time when SeDuMi was used to solve the problem with the state constraints eliminated using (2.1) recursively. The Riccati recursion based algorithm has quadratic complexity, whereas the SeDuMi based implementations have cubic complexity. According to the preceding analysis we would expect the computational time for the Riccati recursion based approach to grow more slowly than observed. The difference is probably explained by the fact that flop count analysis is not exact when computational time is considered.


Figure 7.4: Duality gap vs. iteration.

Figure 7.5: Duality gap vs. iteration.

Figure 7.6: Number of iterations vs. time horizon.


Figure 7.7: Elapsed computational time vs. time horizon. Solid line: Riccati recursion based algorithm; dashed line: SeDuMi without elimination of states; dash-dotted line: SeDuMi with elimination of states.

Chapter 8

8.1 Conclusions

The objective of this thesis has been to give insight into how the MPC problem with a quadratic cost function and an ellipsoidal terminal constraint can be solved using a feasible IP method. In Chapter 5 it was shown how to efficiently solve an optimal control problem with applications to model predictive control. Special attention was given to formulating the problem as an SOCP while still preserving certain structure under scalings. Preserving this structure makes it possible to solve the equations for the search directions efficiently using Riccati recursions. By using the matrix inversion lemma it was possible to simplify the computations: the equations for the search directions could be solved using three Riccati recursions instead of one Riccati recursion with a right hand side of large dimension. In Chapter 6 it was shown that the algorithm can easily determine whether a problem is feasible or not. This was done by formalizing the search for strictly feasible initial points as a primal-dual IP problem. The use of Riccati recursions makes the computational complexity grow at most quadratically with the time horizon. This is discussed and shown in an example in Chapter 7.


There are many uncovered topics in this thesis. Here are some suggestions for future research:

How to incorporate a so-called hot start in the algorithm, i.e., how to utilize the fact that the optimization problem at the next sampling time is likely to have a solution close to the one at the current sampling time.

Determine how premature termination of the algorithm affects the control performance, i.e., what happens if the algorithm is stopped before the optimum is reached?

Investigate whether a scaling matrix which does not mix up the time indices exists. Finding such a scaling matrix would imply that the approach of having only one second order cone instead of many could be employed.

Appendix A

NT Search Direction

Here we will show how to compute a matrix that satisfies the three conditions on the scalings stated in Section 4.5. Notice that the first condition is trivially met. Partition the scaling matrix according to the cones, where p is the number of nonnegative orthant cones of dimension one and q is the number of second order cones. In view of the second condition in Section 4.5 we may define

where the block is a scalar if the corresponding cone is a nonnegative orthant of dimension one, and an arrow matrix if the corresponding cone is a second order cone. From now on we will consider the different cones separately and drop the cone index for simplicity. First we will look at the case when the cone is a nonnegative orthant of dimension one. Let the scaling matrix be defined as

Now we will look at the two remaining conditions in Section 4.5. The second condition is trivially satisfied, since


A NT Search Direction

Since, by the definitions, the required identity holds. We will now look at the case when the cone is the second order cone. Let the scaling matrix be defined as

where

(A.1)

(A.2)

(A.3)

First we will state some relations which will be used later on.

The other relations can be trivially verified. Now we are ready to look at the two remaining conditions in Section 4.5.

The third condition follows by direct substitution. Hence the third condition is also satisfied.


The second condition is . By using the denitions of of and , (A.2) and (A.3) respectively, and by using relation 1 in forming , the second condition can be written as

)

8 7 3 8 7 )) 3 8 5 5 5 3 % %

By substituting and given by (A.1) and moving side (A.7) can be written as

7 7) 5 7) %)

% 7 ) 5

@ 1 % ) % ( % ) % % % )

% )1 5 (

)

which of course simplies to . Now let us check the second block row of (A.4) which is equivalent to

Substituting and given by (A.1), and performing the multiplication the left hand side of (A.5) can be expressed as

, the left hand side reads to the right hand (A.8) (A.7) (A.6) (A.5) (A.4) 59

87 @3 7 %

5

%@7 5 @ %

5

% @@7 5 @ %

1 7 7 5 @ % ( 1 7 57 @

% &$ 7

87 @3 1 ( % % ( 8 7 @ 3 % $ 7 % % 1 ( @

1 7 75

@

% ( 8 7

@3

1 7 75 1 7 75

% ( 8 7 3 7 % @ %

% ( 8 7 ) ) C7 8 5 5 3 5 % @3 %

%

1 7 ) 7 5 ) @ % ) ( 8 7 ) ) @ 3 7 % 8 5 5 3 % 5 %

60

Multiply with from the left and use the denition of . Then the right hand side of condition three can be written as

7 7 7 5 @ % 8 7 ) 7 ) 7 5 ) @ % ) 8%@3 % ) @3

, where

A NT Search Direction

arrow

1 (

and

7 @ 8 % 3

that

. Notice

(A.10) (A.9)

1 7 @ % ( 7%

@

A NT Search Direction

61

where the second equality follows from (A.10). This means that the right hand side of (A.9) can be written as

where the last equality follows from relation 1. This together with the denition of proves that . For more information on this scaling matrix for the second order cone see (Tsuchiya, 1998).
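The explicit formulas of this appendix are hard to recover from the scanned source. As a point of comparison, a standard closed form for the NT scaling of a second order cone, taken from the SOCP literature, can be stated as follows. The symbols here are generic (s and z are the interior primal and dual points, J = diag(1, -I)) and need not match the thesis's notation:

```latex
\[
\gamma(u) = \sqrt{u_0^{\,2} - \|u_{1:}\|^2}, \qquad
\bar{s} = s/\gamma(s), \qquad \bar{z} = z/\gamma(z),
\]
\[
w = \frac{\bar{s} + J\bar{z}}{\sqrt{2\left(1 + \bar{s}^{T}\bar{z}\right)}}, \qquad
v = \sqrt{\gamma(s)/\gamma(z)}\; w, \qquad
Q_v = 2\,v v^{T} - \left(v_0^{\,2} - \|v_{1:}\|^2\right) J .
\]
```

With these definitions one can verify that Q_v z = s, so the symmetric positive definite square root W of Q_v satisfies the NT scaling property W z = W^{-1} s.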

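A small numerical check of the scaling property is easy to set up. The sketch below (plain NumPy; all function and variable names are illustrative, and the closed form is the standard one from the SOCP literature rather than necessarily the thesis's parametrization) builds the scaling matrix for one second order cone and verifies W z = W^{-1} s:

```python
import numpy as np

def gamma(u):
    # hyperbolic norm sqrt(u0^2 - ||u_rest||^2); positive iff u is interior to the cone
    return np.sqrt(u[0] ** 2 - u[1:] @ u[1:])

def quad_rep(v):
    # quadratic representation Q_v = 2 v v' - (v0^2 - ||v_rest||^2) J, with J = diag(1, -I)
    J = np.diag([1.0] + [-1.0] * (len(v) - 1))
    return 2.0 * np.outer(v, v) - (v[0] ** 2 - v[1:] @ v[1:]) * J

def nt_scaling_matrix(s, z):
    # NT scaling point v satisfies Q_v z = s; W = Q_v^{1/2} then gives W z = W^{-1} s
    J = np.diag([1.0] + [-1.0] * (len(s) - 1))
    sb, zb = s / gamma(s), z / gamma(z)
    w = (sb + J @ zb) / np.sqrt(2.0 * (1.0 + sb @ zb))
    v = np.sqrt(gamma(s) / gamma(z)) * w
    lam, U = np.linalg.eigh(quad_rep(v))    # Q_v is positive definite on the interior
    return U @ np.diag(np.sqrt(lam)) @ U.T  # symmetric square root

if __name__ == "__main__":
    s = np.array([2.0, 0.5, 0.3])           # interior points of a 3-dim second order cone
    z = np.array([3.0, -1.0, 0.5])
    W = nt_scaling_matrix(s, z)
    print(np.allclose(W @ z, np.linalg.solve(W, s)))  # True: W z = W^{-1} s
```

For the nonnegative orthant of dimension one the analogous scaling is simply the scalar sqrt(s/z), for which the property holds by inspection.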

Appendix B

Riccati Recursion

In this appendix the Riccati recursion approach for solving (5.8) will be derived. The first step is to calculate the search direction, which is equivalent to solving the linear system (B.1), where the coefficient matrix and the right hand side are defined in (5.9) and (5.10), respectively, and where the remaining data is defined in (5.3). First we partition the solution into its stagewise block components, so that (B.1) can be written equivalently as the coupled stage equations (B.2) and (B.3).

If the backward recursions for the Riccati variables are defined with the appropriate final values, then (B.3) holds. The solution can thus be obtained by first recursively computing the Riccati variables backwards in time from the two recursions, started at their final values. Then, starting from the given initial value, a forward recursion is used to compute the primal variables, and finally the remaining variable is obtained from (B.3). To summarize, the algorithm for the recursion can be written as:

1. Initialization: compute the quantities that define the recursions.
2. Backward Recursion 1.
3. Backward Recursion 2.
4. Forward Recursion.
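The backward/forward pattern of the algorithm can be sketched for the unconstrained LQ core of the problem. The following is a minimal NumPy illustration; the names and the plain LQR setting are assumptions for the sketch, while the thesis applies the same pattern to the full interior-point search-direction system:

```python
import numpy as np

def riccati_solve(A, B, Q, R, Qf, x0, N):
    # Backward Riccati recursion + forward rollout for
    #   min sum_{t<N} (x_t' Q x_t + u_t' R u_t) + x_N' Qf x_N,  s.t. x_{t+1} = A x_t + B u_t.
    # The work per horizon step is independent of N, so the total grows linearly with N.
    P, gains = Qf, []
    for _ in range(N):                         # backward recursion for the cost-to-go P_t
        H = R + B.T @ P @ B                    # must be invertible (R > 0 is sufficient)
        K = np.linalg.solve(H, B.T @ P @ A)    # stage feedback gain
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()
    xs, us = [x0], []
    for t in range(N):                         # forward rollout of the optimal trajectory
        us.append(-gains[t] @ xs[-1])
        xs.append(A @ xs[-1] + B @ us[-1])
    return xs, us, P                           # P is P_0: the optimal cost equals x0' P x0

if __name__ == "__main__":
    A = np.array([[1.0, 1.0], [0.0, 1.0]])     # double integrator, illustrative data
    B = np.array([[0.0], [1.0]])
    x0 = np.array([1.0, 0.0])
    xs, us, P0 = riccati_solve(A, B, np.eye(2), np.eye(1), np.eye(2), x0, 20)
    cost = sum(x @ x + u @ u for x, u in zip(xs[:-1], us)) + xs[-1] @ xs[-1]
    print(np.isclose(cost, x0 @ P0 @ x0))      # True: trajectory cost matches x0' P0 x0
```

The self-consistency check at the end (trajectory cost equals the quadratic cost-to-go at the initial state) is a convenient way to validate such an implementation.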

Notice that the Riccati recursion is well defined if the matrix inverted at each step is invertible, see e.g., (Åström and Wittenmark, 1984, p. 261). A sufficient condition for this is that the corresponding stacked matrix has full column rank. To see this, notice that the relevant weighting terms are positive definite at any interior point. This condition means that the control signal is either visible in the objective function or in the inequality constraints. The only difference between the first and the second Riccati recursion is the initialization step.

Next we will show that the matrix in the second system has the same structure as the first one. The structure of the matrices in this messy expression can be seen in Figure B.1, and the structure of the resulting matrix is shown in Figure B.2. The structure is the same as that of (5.9). This leads to the conclusion that the Riccati recursion approach can be applied here as well. The only difference is the initialization, where a modified quantity has to be computed. The flop count of this is small, since many of the matrix multiplications are already done in the initialization in step 1.



[Figures B.1 and B.2: sparsity structure plots of the matrices discussed above, drawn as row index versus column index.]

Bibliography

Åkerblad, M., A. Hansson and B. Wahlberg (2000a). Automatic tuning for classical step-response specification using iterative feedback tuning. In: Proc. 39th IEEE Conference on Decision and Control.

Åkerblad, M., A. Horch and A. Hansson (2000b). A controller implementation with graphical user interface in Matlab. In: Reglermöte 2000.

Åkerblad, M. and A. Hansson (2002). Efficient solution of second order cone program for model predictive control. In: Proc. of IFAC 2002 World Congress. Barcelona, Spain.

Albuquerque, J. S., V. Gopal, G. H. Staus, L. T. Biegler and B. E. Ydstie (1997). Interior point SQP strategies for structured process optimization problems. Computers Chem. Engng. 21, Suppl., S853–S859.

Alizadeh, F. and S. Schmieta (1997). Optimization with semidefinite, quadratic and linear constraints. Technical report RRR 23-97, RUTCOR Research Report. Rutgers University, NJ, USA.

Arnold, E. and H. Puta (1994). An SQP-type solution method for constrained discrete-time optimal control problems. In: Computational Optimal Control (R. Bulirsch and D. Kraft, Eds.). Vol. 115 of International Series of Numerical Mathematics. pp. 127–136. Birkhäuser Verlag. Basel.

Åström, K. J. and A.-B. Österberg (1986). A modern teaching laboratory for process control. IEEE Contr. Syst. Mag. (5), 37–42.

Åström, K. J. and B. Wittenmark (1984). Computer Controlled Systems: Theory and Design. Prentice Hall.

Biegler, L. T. (1997). Advances in nonlinear programming concepts for process control. Journal of Process Control.

Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control: The Thinking Man's GPC. Prentice Hall.

Blomvall, J. (2001). Optimization of Financial Decisions using a new Stochastic Programming Method. PhD thesis. Linköpings Universitet.

Boyd, S. and L. Vandenberghe (2001). Convex Optimization. Manuscript, to be published in 2003.

Camacho, E. F. and C. Bordons (1998). Model Predictive Control. Springer.

Cutler, C. R. and B. L. Ramaker (1979). Dynamic matrix control: a computer control algorithm. In: Proceedings of the AIChE National Meeting. Houston, Texas.

Dantzig, G. B. (1963). Linear Programming and Extensions. Princeton University Press.

Garcia, C. E., D. M. Prett and M. Morari (1989). Model predictive control: Theory and practice. A survey. Automatica 3, 335–348.

Glad, T. and H. Jonson (1984). A method for state and control constrained linear quadratic control problems. In: Proceedings of the 9th IFAC World Congress. Budapest, Hungary.

Golub, G. H. and C. F. van Loan (1989). Matrix Computations. Second edition. Johns Hopkins.

Gopal, V. and L. T. Biegler (1998). Large scale inequality constrained optimization and control. IEEE Control Systems Magazine 18(6), 59–68.

Hansson, A. (2000). A primal-dual interior-point method for robust optimal control of linear discrete-time systems. IEEE Transactions on Automatic Control 45(9), 1639–1655.

Kantorovich, L. V. (1939). Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad State University. In Russian; translated in Management Science, pp. 366–422 (1960).

Karmarkar, N. K. (1984). A new polynomial-time algorithm for linear programming. Combinatorica.

Keerthi, S. S. and E. G. Gilbert (1988). Optimal, infinite horizon feedback laws for a general class of constrained discrete time systems: Stability and moving-horizon approximation. Journal of Optimization Theory and Application 57, 265–293.

Lee, J. H. and L. Markus (1967). Foundations of Optimal Control Theory. New York: Wiley.

Lee, J.-W. (2000). Exponential stability of constrained receding horizon control with terminal ellipsoid constraints. IEEE Transactions on Automatic Control 45(1), 83–88.

Lee, Y. I. and B. Kouvaritakis (1999). Stabilizable regions of receding horizon predictive control with input constraints. Systems and Control Letters 38, 13–20.

Lobo, M., L. Vandenberghe, S. Boyd and H. Lebret (1998). Applications of second-order cone programming. Linear Algebra and its Applications 284, 193–228.

Löfberg, J. (2001a). Linear model predictive control: stability and robustness. Licentiate Thesis No. 866, Linköpings universitet.

Löfberg, J. (2001b). A Matlab interface to sp, maxdet and socp. Technical report LiTH-ISY-R-2328. Department of Electrical Engineering, Linköping University. SE-581 83 Linköping, Sweden.

Luo, Z., J. F. Sturm and S. Zhang (1996). Duality and self-duality for conic convex programming. Technical report. Econometric Institute, Erasmus University Rotterdam.

Mayne, D. Q., J. B. Rawlings, C. V. Rao and P. O. M. Scokaert (2000). Constrained model predictive control: Stability and optimality. Automatica 36, 789–814.

Morari, M. and J. H. Lee (1999). Model predictive control: Past, present and future. Computers and Chemical Engineering 23, 667–682.

Nesterov, Y. and A. Nemirovsky (1994). Interior Point Polynomial Methods in Convex Programming. SIAM.

Propoi, A. I. (1963). Use of LP methods for synthesizing sampled-data automatic systems. Automation and Remote Control.

Qin, S. J. and T. A. Badgwell (1996). An overview of industrial predictive control technology. In: Chemical Process Control-V, Assessment and New Directions for Research. Tahoe City, CA.

Rao, C. V., S. J. Wright and J. B. Rawlings (1997). Application of interior-point methods to model predictive control. Preprint ANL/MCS-P664-0597. Mathematics and Computer Science Division, Argonne National Laboratory.

Rawlings, J. B. and K. R. Muske (1993). Stability of constrained receding horizon control. IEEE Transactions on Automatic Control 38(10), 1512–1516.

Richalet, J. A., A. Rault, J. L. Testud and J. Papon (1976). Algorithmic control of industrial processes. In: 4th IFAC Symposium on Identification and System Parameter Estimation. Tbilisi, USSR.

Richalet, J. A., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to an industrial process. Automatica 14, 413–428.

Scokaert, P. O. M. and J. B. Rawlings (1998). Constrained linear quadratic control. IEEE Transactions on Automatic Control 43(8), 1163–1169.

SeDuMi (1.3). http://fewcal.kub.nl/sturm/software/sedumi.html.

Steinbach, M. C. (1994). A structured interior point SQP method for nonlinear optimal control problems. In: Computational Optimal Control (R. Bulirsch and D. Kraft, Eds.). Vol. 115 of International Series of Numerical Mathematics. pp. 213–222. Birkhäuser Verlag. Basel.

Todd, M. J. (1999). On search directions in interior-point methods for semidefinite programming. Optim. Methods Softw.

Tsuchiya, T. (1998). A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming. Technical report. The Institute of Statistical Mathematics, Tokyo, Japan.

Vandenberghe, L. and S. Boyd (1995). A primal-dual potential reduction method for problems involving matrix inequalities. Mathematical Programming 69, 205–236.

Vandenberghe, L. and S. Boyd (1996). Semidefinite programming. SIAM Review 38, 49–95.

Vandenberghe, L., S. Boyd and M. Nouralishahi (2002). Robust linear programming and optimal control. In: Proc. of IFAC 2002 World Congress. Barcelona, Spain.

von Neumann, J. (1937). Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes. Ergebnisse eines mathematischen Kolloquiums. Translated in The Review of Economic Studies, 13, pp. 1–9 (1945–1946).

Wills, A. G. and W. P. Heath (2002). Using a modified predictor-corrector algorithm for model predictive control. In: Proc. of IFAC 2002 World Congress. Barcelona, Spain.

Wolkowicz, H., R. Saigal and L. Vandenberghe (Eds.) (2000). Handbook of Semidefinite Programming: Theory, Algorithms and Applications. Kluwer Academic Publishers.

Wright, S. J. (1993). Interior-point methods for optimal control of discrete-time systems. J. Optim. Theory Appls. 77, 161–187.

Wright, S. J. (1996). Applying new optimization algorithms to model predictive control. Chemical Process Control-V.

Wright, S. J. (1997). Primal-Dual Interior-Point Methods. SIAM.
