STATISTICAL DIGITAL

SIGNAL PROCESSING
AND MODELING

MONSON H. HAYES
Georgia Institute of Technology

JOHN WILEY & SONS, INC.


CONTENTS

Preface xi

1 INTRODUCTION 1

2 BACKGROUND 7
2.1 Introduction 7
2.2 Discrete-Time Signal Processing 7
2.2.1 Discrete-Time Signals 8
2.2.2 Discrete-Time Systems
2.2.3 Time-Domain Descriptions of LSI Filters 12
2.2.4 The Discrete-Time Fourier Transform 12
2.2.5 The z-Transform 14
2.2.6 Special Classes of Filters 16
2.2.7 Filter Flowgraphs 18
2.2.8 The DFT and FFT 18
2.3 Linear Algebra 20
2.3.1 Vectors 21
2.3.2 Linear Independence, Vector Spaces, and Basis Vectors 24
2.3.3 Matrices 25
2.3.4 Matrix Inverse 27
2.3.5 The Determinant and the Trace 29
2.3.6 Linear Equations 30
2.3.7 Special Matrix Forms 35
2.3.8 Quadratic and Hermitian Forms 39
2.3.9 Eigenvalues and Eigenvectors 40
2.3.10 Optimization Theory 48
2.4 Summary 52
2.5 Problems 52

3 DISCRETE-TIME RANDOM PROCESSES 57


3.1 Introduction 57
3.2 Random Variables 58
3.2.1 Definitions 58
3.2.2 Ensemble Averages 62
3.2.3 Jointly Distributed Random Variables 65
3.2.4 Joint Moments 66
3.2.5 Independent, Uncorrelated and Orthogonal Random Variables 67
3.2.6 Linear Mean Square Estimation 68
3.2.7 Gaussian Random Variables 71
3.2.8 Parameter Estimation: Bias and Consistency 72
3.3 Random Processes 74
3.3.1 Definitions 74
3.3.2 Ensemble Averages 77
3.3.3 Gaussian Processes 81
3.3.4 Stationary Processes 81
3.3.5 The Autocovariance and Autocorrelation Matrices 85
3.3.6 Ergodicity 88
3.3.7 White Noise 93
3.3.8 The Power Spectrum 94
3.4 Filtering Random Processes 99
3.5 Spectral Factorization 104
3.6 Special Types of Random Processes 108
3.6.1 Autoregressive Moving Average Processes 108
3.6.2 Autoregressive Processes 111
3.6.3 Moving Average Processes 115
3.6.4 Harmonic Processes 116
3.7 Summary 118
3.8 Problems 120

4 SIGNAL MODELING 129


4.1 Introduction 129
4.2 The Least Squares (Direct) Method 131
4.3 The Padé Approximation 133
4.4 Prony's Method 144
4.4.1 Pole-Zero Modeling 144
4.4.2 Shanks' Method 154
4.4.3 All-Pole Modeling 160
4.4.4 Linear Prediction 165
4.4.5 Application: FIR Least Squares Inverse Filters 166
4.5 Iterative Prefiltering* 174
4.6 Finite Data Records 177
4.6.1 The Autocorrelation Method 178
4.6.2 The Covariance Method 182
4.7 Stochastic Models 188
4.7.1 Autoregressive Moving Average Models 189
4.7.2 Autoregressive Models 194
4.7.3 Moving Average Models 195
4.7.4 Application: Power Spectrum Estimation 198
4.8 Summary 201
4.9 Problems 203

5 THE LEVINSON RECURSION 215


5.1 Introduction 215
5.2 The Levinson-Durbin Recursion 216
5.2.1 Development of the Recursion 216
5.2.2 The Lattice Filter 223
5.2.3 Properties 225
5.2.4 The Step-Up and Step-Down Recursions 232
5.2.5 The Inverse Levinson-Durbin Recursion 238
5.2.6 The Schur Recursion* 240
5.2.7 The Cholesky Decomposition 250
5.2.8 The Autocorrelation Extension Problem 254
5.2.9 Inverting a Toeplitz Matrix 256
5.3 The Levinson Recursion 264
5.4 The Split Levinson Recursion* 268
5.5 Summary 276
5.6 Problems 279

6 LATTICE FILTERS 289


6.1 Introduction 289
6.2 The FIR Lattice Filter 289
6.3 Split Lattice Filter 294
6.4 IIR Lattice Filters 297
6.4.1 All-pole Filter 297
6.4.2 Other All-pole Lattice Structures 299
6.4.3 Lattice Filters Having Poles and Zeros 304
6.5 Lattice Methods for All-Pole Signal Modeling 307
6.5.1 The Forward Covariance Method 308
6.5.2 The Backward Covariance Method 313
6.5.3 Variations 315
6.5.4 Burg's Method 316
6.5.5 Modified Covariance Method 322
6.6 Stochastic Modeling 325
6.7 Summary 327
6.8 Problems 329

7 WIENER FILTERING 335


7.1 Introduction 335
7.2 The FIR Wiener Filter 337


7.2.1 Filtering 339
7.2.2 Linear Prediction 342
7.2.3 Noise Cancellation 349
7.2.4 Lattice Representation for the FIR Wiener Filter 352
7.3 The IIR Wiener Filter 353
7.3.1 Noncausal IIR Wiener Filter 353
7.3.2 The Causal IIR Wiener Filter 358
7.3.3 Causal Wiener Filtering 361
7.3.4 Causal Linear Prediction 365
7.3.5 Wiener Deconvolution 369
7.4 Discrete Kalman Filter 371
7.5 Summary 379
7.6 Problems 380

8 SPECTRUM ESTIMATION 391


8.1 Introduction 391
8.2 Nonparametric Methods 393
8.2.1 The Periodogram 393
8.2.2 Performance of the Periodogram 398
8.2.3 The Modified Periodogram 408
8.2.4 Bartlett's Method: Periodogram Averaging 412
8.2.5 Welch's Method: Averaging Modified Periodograms 415
8.2.6 Blackman-Tukey Approach: Periodogram Smoothing 420
8.2.7 Performance Comparisons 424
8.3 Minimum Variance Spectrum Estimation 426
8.4 The Maximum Entropy Method 433
8.5 Parametric Methods 440
8.5.1 Autoregressive Spectrum Estimation 441
8.5.2 Moving Average Spectrum Estimation 448
8.5.3 Autoregressive Moving Average Spectrum Estimation 449
8.6 Frequency Estimation 451
8.6.1 Eigendecomposition of the Autocorrelation Matrix 451
8.6.2 Pisarenko Harmonic Decomposition 459
8.6.3 MUSIC 463
8.6.4 Other Eigenvector Methods 465
8.7 Principal Components Spectrum Estimation 469
8.7.1 Bartlett Frequency Estimation 470
8.7.2 Minimum Variance Frequency Estimation 471
8.7.3 Autoregressive Frequency Estimation 472
8.8 Summary 473
8.9 Problems 477

9 ADAPTIVE FILTERING 493


9.1 Introduction 493
9.2 FIR Adaptive Filters 497
9.2.1 The Steepest Descent Adaptive Filter 499
9.2.2 The LMS Algorithm 505
9.2.3 Convergence of the LMS Algorithm 506
9.2.4 Normalized LMS 514
9.2.5 Application: Noise Cancellation 516
9.2.6 Other LMS-Based Adaptive Filters 521
9.2.7 Gradient Adaptive Lattice Filter 526
9.2.8 Joint Process Estimator 528
9.2.9 Application: Channel Equalization 530
9.3 Adaptive Recursive Filters 534
9.4 Recursive Least Squares 541
9.4.1 Exponentially Weighted RLS 541
9.4.2 Sliding Window RLS 548
9.4.3 Summary 551
9.5 Problems 554

Appendix USE OF MATLAB PROGRAMS 571


A.1 Introduction 571
A.2 General Information 572
A.3 Random Processes 574
A.4 Signal Modeling 574
A.5 Levinson Recursion 578
A.6 Lattice Filters 582
A.7 Optimum Filters 583
A.8 Spectrum Estimation 584
A.9 Adaptive Filtering 592

Table of Symbols 595

Index 599