
Modern Control Theory

Third Edition

WILLIAM L. BROGAN, Ph.D.


Professor of Electrical Engineering, University of Nevada, Las Vegas

PRENTICE HALL, Upper Saddle River, NJ 07458

Contents

PREFACE xvii

1 BACKGROUND AND PREVIEW 1

1.1 Introduction 1
1.2 Systems, Systems Theory and Control Theory 2
1.3 Modeling 4
    1.3.1 Analytical Modeling, 4
    1.3.2 Experimental Modeling, 8
1.4 Classification of Systems 12
1.5 Mathematical Representation of Systems 14
1.6 Modern Control Theory: The Perspective of This Book 15
References 16
Illustrative Problems 17
Problems 29

2 HIGHLIGHTS OF CLASSICAL CONTROL THEORY 31

2.1 Introduction 31
2.2 System Representation 31
2.3 Feedback 31
2.4 Measures of Performance and Methods of Analysis in Classical Control Theory 37
2.5 Methods of Improving System Performance 41
2.6 Extension of Classical Techniques to More Complex Systems 48
References 49
Illustrative Problems 49
Problems 68

3 STATE VARIABLES AND THE STATE SPACE DESCRIPTION OF DYNAMIC SYSTEMS 72

3.1 Introduction 72
3.2 The Concept of State 74
3.3 State Space Representation of Dynamic Systems 75
3.4 Obtaining the State Equations 80
    3.4.1 From Input-Output Differential or Difference Equations, 80
    3.4.2 Simultaneous Differential Equations, 81
    3.4.3 Using Simulation Diagrams, 83
    3.4.4 State Equations From Transfer Functions, 88
    3.4.5 State Equations Directly From the System's Linear Graph, 98
3.5 Interconnection of Subsystems 101
3.6 Comments on the State Space Representation 102
References 103
Illustrative Problems 104
Problems 118

4 FUNDAMENTALS OF MATRIX ALGEBRA 121

4.1 Introduction 121
4.2 Notation 121
4.3 Algebraic Operations with Matrices 123
4.4 The Associative, Commutative and Distributive Laws of Matrix Algebra 125
4.5 Matrix Transpose, Conjugate and the Associate Matrix 126
4.6 Determinants, Minors and Cofactors 126
4.7 Rank and Trace of a Matrix 128
4.8 Matrix Inversion 129
4.9 Partitioned Matrices 131
4.10 Elementary Operations and Elementary Matrices 132
4.11 Differentiation and Integration of Matrices 133
4.12 Additional Matrix Calculus 134
    4.12.1 The Gradient Vector and Differentiation with Respect to a Vector, 134
    4.12.2 Generalized Taylor Series, 137
    4.12.3 Vectorizing a Matrix, 138
References 140
Illustrative Problems 140
Problems 155


5 VECTORS AND LINEAR VECTOR SPACES 157

5.1 Introduction 157
5.2 Planar and Three-Dimensional Real Vector Spaces 158
5.3 Axiomatic Definition of a Linear Vector Space 159
5.4 Linear Dependence and Independence 161
5.5 Vectors Which Span a Space; Basis Vectors and Dimensionality 164
5.6 Special Operations and Definitions in Vector Spaces 166
5.7 Orthogonal Vectors and Their Construction 168
5.8 Vector Expansions and the Reciprocal Basis Vectors 172
5.9 Linear Manifolds, Subspaces and Projections 177
5.10 Product Spaces 178
5.11 Transformations or Mappings 179
5.12 Adjoint Transformations 183
5.13 Some Finite Dimensional Transformations 184
5.14 Some Transformations on Infinite Dimensional Spaces 189
References 190
Illustrative Problems 191
Problems 204

6 SIMULTANEOUS LINEAR EQUATIONS 207

6.1 Introduction 207
6.2 Statement of the Problem and Conditions for Solutions 207
6.3 The Row-Reduced Echelon Form of a Matrix 209
    6.3.1 Applications to Polynomial Matrices, 212
    6.3.2 Application to Matrix Fraction Description of Systems, 214
6.4 Solution by Partitioning 214
6.5 A Gram-Schmidt Expansion Method of Solution 215
6.6 Homogeneous Linear Equations 218
6.7 The Underdetermined Case 219
6.8 The Overdetermined Case 221
6.9 Two Basic Problems in Control Theory 226
    6.9.1 A Control Problem, 226
    6.9.2 A State Estimation Problem, 228
6.10 Lyapunov Equations 229
References 231
Illustrative Problems 231
Problems 242

7 EIGENVALUES AND EIGENVECTORS 245

7.1 Introduction 245
7.2 Definition of the Eigenvalue-Eigenvector Problem 245
7.3 Eigenvalues 246
7.4 Determination of Eigenvectors 247
7.5 Determination of Generalized Eigenvectors 256
7.6 Iterative Computer Methods for Determining Eigenvalues and Eigenvectors 260
7.7 Spectral Decomposition and Invariance Properties 263
7.8 Bilinear and Quadratic Forms 264
7.9 Miscellaneous Uses of Eigenvalues and Eigenvectors 265
References 266
Illustrative Problems 267
Problems 280

8 FUNCTIONS OF SQUARE MATRICES AND THE CAYLEY-HAMILTON THEOREM 282

8.1 Introduction 282
8.2 Powers of a Matrix and Matrix Polynomials 282
8.3 Infinite Series and Analytic Functions of a Matrix 283
8.4 The Characteristic Polynomial and Cayley-Hamilton Theorem 286
8.5 Some Uses of the Cayley-Hamilton Theorem 287
8.6 Solution of the Unforced State Equations 291
References 292
Illustrative Problems 292
Problems 306

9 ANALYSIS OF CONTINUOUS- AND DISCRETE-TIME LINEAR STATE EQUATIONS 308

9.1 Introduction 308
9.2 First-Order Scalar Differential Equations 309
9.3 The Constant Coefficient Matrix Case 310
9.4 System Modes and Modal Decomposition 312
9.5 The Time-Varying Matrix Case 314
9.6 The Transition Matrix 316
9.7 Summary of Continuous-Time Linear System Solutions 318
9.8 Discrete-Time Models of Continuous-Time Systems 319
9.9 Analysis of Constant Coefficient Discrete-Time State Equations 322
9.10 Modal Decomposition 323
9.11 Time-Variable Coefficients 324
9.12 The Discrete-Time Transition Matrix 324
9.13 Summary of Discrete-Time Linear System Solutions 325
References 325
Illustrative Problems 325
Problems 340

10 STABILITY 342

10.1 Introduction 342
10.2 Equilibrium Points and Stability Concepts 343
10.3 Stability Definitions 344
10.4 Linear System Stability 346
10.5 Linear Constant Systems 348
10.6 The Direct Method of Lyapunov 349
10.7 A Cautionary Note on Time-Varying Systems 358
10.8 Use of Lyapunov's Method in Feedback Design 361
References 364
Illustrative Problems 365
Problems 370
11 CONTROLLABILITY AND OBSERVABILITY FOR LINEAR SYSTEMS 373

11.1 Introduction 373
11.2 Definitions 373
    11.2.1 Controllability, 374
    11.2.2 Observability, 375
    11.2.3 Dependence on the Model, 375
11.3 Time-Invariant Systems with Distinct Eigenvalues 376
11.4 Time-Invariant Systems with Arbitrary Eigenvalues 377
11.5 Yet Another Controllability/Observability Condition 379
11.6 Time-Varying Linear Systems 380
    11.6.1 Controllability of Continuous-Time Systems, 380
    11.6.2 Observability of Continuous-Time Systems, 382
    11.6.3 Discrete-Time Systems, 383
11.7 Kalman Canonical Forms 383
11.8 Stabilizability and Detectability 386
References 387
Illustrative Problems 387
Problems 401

12 THE RELATIONSHIP BETWEEN STATE VARIABLE AND TRANSFER FUNCTION DESCRIPTIONS OF SYSTEMS 404

12.1 Introduction 404
12.2 Transfer Function Matrices From State Equations 404
12.3 State Equations From Transfer Matrices: Realizations 406
12.4 Definition and Implication of Irreducible Realizations 408
12.5 The Determination of Irreducible Realizations 411
    12.5.1 Jordan Canonical Form Approach, 411
    12.5.2 Kalman Canonical Form Approach to Minimal Realizations, 419
12.6 Minimal Realizations From Matrix Fraction Description 422
12.7 Concluding Comments 425
References 425
Illustrative Problems 426
Problems 440

13 DESIGN OF LINEAR FEEDBACK CONTROL SYSTEMS 443

13.1 Introduction 443
13.2 State Feedback and Output Feedback 443
13.3 The Effect of Feedback on System Properties 446
13.4 Pole Assignment Using State Feedback 448
13.5 Partial Pole Placement Using Static Output Feedback 457
13.6 Observers: Reconstructing the State From Available Outputs 461
    13.6.1 Continuous-Time Full-State Observers, 461
    13.6.2 Discrete-Time Full-State Observers, 464
    13.6.3 Continuous-Time Reduced-Order Observers, 470
    13.6.4 Discrete-Time Reduced-Order Observers, 471
13.7 A Separation Principle For Feedback Controllers 474
13.8 Transfer Function Version of Pole-Placement/Observer Design 475
    13.8.1 The Full-State Observer, 478
    13.8.2 The Reduced-Order Observer, 482
    13.8.3 The Discrete-Time Pole-Placement/Observer Problem, 484
13.9 The Design of Decoupled or Noninteracting Systems 486
References 488
Illustrative Problems 489
Problems 498

14 AN INTRODUCTION TO OPTIMAL CONTROL THEORY 501

14.1 Introduction 501
14.2 Statement of the Optimal Control Problem 501
14.3 Dynamic Programming 503
    14.3.1 General Introduction to the Principle of Optimality, 503
    14.3.2 Application to Discrete-Time Optimal Control, 505
    14.3.3 The Discrete-Time Linear Quadratic Problem, 507
    14.3.4 The Infinite Horizon, Constant Gain Solution, 512
14.4 Dynamic Programming Approach to Continuous-Time Optimal Control 515
    14.4.1 Linear-Quadratic (LQ) Problem: The Continuous Riccati Equation, 517
    14.4.2 Infinite Time-To-Go Problem; The Algebraic Riccati Equation, 522
14.5 Pontryagin's Minimum Principle 523
14.6 Separation Theorem 525
14.7 Robustness Issues 527
14.8 Extensions 533
14.9 Concluding Comments 539
References 540
Illustrative Problems 541
Problems 559

15 AN INTRODUCTION TO NONLINEAR CONTROL SYSTEMS 563

15.1 Introduction 563
15.2 Linearization: Analysis of Small Deviations from Nominal 565
15.3 Dynamic Linearization Using State Feedback 570
15.4 Harmonic Linearization: Describing Functions 573
15.5 Applications of Describing Functions 578
15.6 Lyapunov Stability Theory and Related Frequency Domain Results 582
References 588
Illustrative Problems 589
Problems 614

ANSWERS TO PROBLEMS 618

INDEX 639
