Available:
Library | Item Barcode | Call Number | Material Type | Item Category 1 | Status |
---|---|---|---|---|---|
 | 30000010341949 | TK7872.F5 P683 2015 | Open Access Book | Book | On Order |
Summary
Adaptive filters are used in many diverse applications, appearing in everything from military instruments to cellphones and home appliances. Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB® covers the core concepts of this important field, focusing on a vital part of the statistical signal processing area--the least mean square (LMS) adaptive filter.
This largely self-contained text:

- Discusses random variables, stochastic processes, vectors, matrices, determinants, discrete random signals, and probability distributions
- Explains how to find the eigenvalues and eigenvectors of a matrix and the properties of the error surfaces
- Explores the Wiener filter and its practical uses, details the steepest descent method, and develops Newton's algorithm
- Addresses the basics of the LMS adaptive filter algorithm, considers LMS adaptive filter variants, and provides numerous examples
- Delivers a concise introduction to MATLAB®, supplying problems, computer experiments, and more than 110 functions and script files

Featuring robust appendices complete with mathematical tables and formulas, Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB® clearly describes the key principles of adaptive filtering and effectively demonstrates how to apply them to solve real-world problems.
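The core of the book is the LMS update described above. The book's own examples use MATLAB, but as a quick orientation, here is a minimal NumPy sketch of LMS system identification (the unknown filter `h_true`, the step size `mu`, and the noise level are hypothetical values chosen for illustration, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system to identify: a short FIR filter
h_true = np.array([0.5, -0.3, 0.2])
M = len(h_true)   # adaptive filter length
mu = 0.05         # step size (must satisfy the LMS stability bound)
N = 5000          # number of samples

x = rng.standard_normal(N)            # white input signal
d = np.convolve(x, h_true)[:N]        # desired signal = unknown system's output
d += 0.01 * rng.standard_normal(N)    # small observation noise

w = np.zeros(M)                       # adaptive weight vector
for n in range(M, N):
    u = x[n:n - M:-1]                 # M most recent input samples, newest first
    e = d[n] - w @ u                  # a priori estimation error e(n)
    w = w + mu * e * u                # LMS update: w(n+1) = w(n) + mu * e(n) * x(n)

# After adaptation, w should closely approximate h_true
```

The update costs only O(M) operations per sample, which is why LMS and its variants (Chapters 8 and 9) dominate practical adaptive filtering.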
Author Notes
Alexander D. Poularikas is chairman of the electrical and computer engineering department at the University of Alabama in Huntsville, USA. He previously held positions at the University of Rhode Island, Kingston, USA, and the University of Denver, Colorado, USA. He has published, coauthored, and edited 14 books and served as editor-in-chief of numerous book series. A Fulbright scholar, Life Senior Member of the IEEE, and member of Tau Beta Pi, Sigma Nu, and Sigma Pi, he received the IEEE Outstanding Educators Award, Huntsville Section, in 1990 and 1996. Dr. Poularikas holds a Ph.D. from the University of Arkansas, Fayetteville, USA.
Table of Contents
Preface | p. xi |
Author | p. xiii |
Abbreviations | p. xv |
MATLAB® Functions | p. xvii |
Chapter 1 Vectors | p. 1 |
1.1 Introduction | p. 1 |
1.1.1 Multiplication by a Constant and Addition and Subtraction | p. 1 |
1.1.1.1 Multiplication by a Constant | p. 1 |
1.1.1.2 Addition and Subtraction | p. 2 |
1.1.2 Unit Coordinate Vectors | p. 3 |
1.1.3 Inner Product | p. 3 |
1.1.4 Distance between Two Vectors | p. 5 |
1.1.5 Mean Value of a Vector | p. 5 |
1.1.6 Direction Cosines | p. 7 |
1.1.7 The Projection of a Vector | p. 9 |
1.1.8 Linear Transformations | p. 10 |
1.2 Linear Independence, Vector Spaces, and Basis Vectors | p. 11 |
1.2.1 Orthogonal Basis Vectors | p. 13 |
Problems | p. 13 |
Hints-Suggestions-Solutions | p. 14 |
Chapter 2 Matrices | p. 17 |
2.1 Introduction | p. 17 |
2.2 General Types of Matrices | p. 17 |
2.2.1 Diagonal, Identity, and Scalar Matrices | p. 17 |
2.2.2 Upper and Lower Triangular Matrices | p. 17 |
2.2.3 Symmetric and Exchange Matrices | p. 18 |
2.2.4 Toeplitz Matrix | p. 18 |
2.2.5 Hankel and Hermitian Matrices | p. 18 |
2.3 Matrix Operations | p. 18 |
2.4 Determinant of a Matrix | p. 21 |
2.4.1 Definition and Expansion of a Matrix | p. 21 |
2.4.2 Trace of a Matrix | p. 22 |
2.4.3 Inverse of a Matrix | p. 22 |
2.5 Linear Equations | p. 24 |
2.5.1 Square Matrices (n × n) | p. 24 |
2.5.2 Rectangular Matrices (n < m) | p. 26 |
2.5.3 Rectangular Matrices (m < n) | p. 27 |
2.5.4 Quadratic and Hermitian Forms | p. 29 |
2.6 Eigenvalues and Eigenvectors | p. 31 |
2.6.1 Eigenvectors | p. 32 |
2.6.2 Properties of Eigenvalues and Eigenvectors | p. 33 |
Problems | p. 36 |
Hints-Suggestions-Solutions | p. 37 |
Chapter 3 Processing of Discrete Deterministic Signals: Discrete Systems | p. 41 |
3.1 Discrete-Time Signals | p. 41 |
3.1.1 Time-Domain Representation of Basic Continuous and Discrete Signals | p. 41 |
3.2 Transform-Domain Representation of Discrete Signals | p. 42 |
3.2.1 Discrete-Time Fourier Transform | p. 42 |
3.2.2 The Discrete FT | p. 44 |
3.2.3 Properties of DFT | p. 46 |
3.3 The z-Transform | p. 48 |
3.4 Discrete-Time Systems | p. 52 |
3.4.1 Linearity and Shift Invariance | p. 52 |
3.4.2 Causality | p. 52 |
3.4.3 Stability | p. 52 |
3.4.4 Transform-Domain Representation | p. 57 |
Problems | p. 60 |
Hints-Suggestions-Solutions | p. 61 |
Chapter 4 Discrete-Time Random Processes | p. 63 |
4.1 Discrete Random Signals, Probability Distributions, and Averages of Random Variables | p. 63 |
4.1.1 Stationary and Ergodic Processes | p. 65 |
4.1.2 Averages of RV | p. 66 |
4.1.2.1 Mean Value | p. 66 |
4.1.2.2 Correlation | p. 67 |
4.1.2.3 Covariance | p. 69 |
4.2 Stationary Processes | p. 71 |
4.2.1 Autocorrelation Matrix | p. 71 |
4.2.2 Purely Random Process (White Noise) | p. 74 |
4.2.3 Random Walk | p. 74 |
4.3 Special Random Signals and pdf's | p. 75 |
4.3.1 White Noise | p. 75 |
4.3.2 Gaussian Distribution (Normal Distribution) | p. 75 |
4.3.3 Exponential Distribution | p. 78 |
4.3.4 Lognormal Distribution | p. 79 |
4.3.5 Chi-Square Distribution | p. 80 |
4.4 Wiener-Khinchin Relations | p. 80 |
4.5 Filtering Random Processes | p. 83 |
4.6 Special Types of Random Processes | p. 85 |
4.6.1 Autoregressive Process | p. 85 |
4.7 Nonparametric Spectra Estimation | p. 88 |
4.7.1 Periodogram | p. 88 |
4.7.2 Correlogram | p. 90 |
4.7.3 Computation of Periodogram and Correlogram Using FFT | p. 90 |
4.7.4 General Remarks on the Periodogram | p. 91 |
4.7.4.1 Windowed Periodogram | p. 93 |
4.7.5 Proposed Book Modified Method for Better Frequency Resolution | p. 95 |
4.7.5.1 Using Transformation of the rv's | p. 95 |
4.7.5.2 Blackman-Tukey Method | p. 96 |
4.7.6 Bartlett Periodogram | p. 100 |
4.7.7 The Welch Method | p. 106 |
4.7.8 Proposed Modified Welch Methods | p. 109 |
4.7.8.1 Modified Method Using Different Types of Overlapping | p. 109 |
4.7.8.2 Modified Welch Method Using Transformation of rv's | p. 111 |
Problems | p. 113 |
Hints-Solutions-Suggestions | p. 114 |
Chapter 5 The Wiener Filter | p. 121 |
5.1 Introduction | p. 121 |
5.2 The LS Technique | p. 121 |
5.2.1 Linear LS | p. 122 |
5.2.2 LS Formulation | p. 125 |
5.2.3 Statistical Properties of LSEs | p. 130 |
5.2.4 The LS Approach | p. 132 |
5.2.5 Orthogonality Principle | p. 135 |
5.2.6 Corollary | p. 135 |
5.2.7 Projection Operator | p. 136 |
5.2.8 LS Finite Impulse Response Filter | p. 138 |
5.3 The Mean-Square Error | p. 140 |
5.3.1 The FIR Wiener Filter | p. 142 |
5.4 The Wiener Solution | p. 146 |
5.4.1 Orthogonality Condition | p. 148 |
5.4.2 Normalized Performance Equation | p. 149 |
5.4.3 Canonical Form of the Error-Performance Surface | p. 150 |
5.5 Wiener Filtering Examples | p. 151 |
5.5.1 Minimum MSE | p. 154 |
5.5.2 Optimum Filter (w₀) | p. 154 |
5.5.3 Linear Prediction | p. 161 |
Problems | p. 162 |
Additional Problems | p. 164 |
Hints-Solutions-Suggestions | p. 16 |
Additional Problems | p. 16 |
Chapter 6 Eigenvalues of Rx: Properties of the Error Surface | p. 171 |
6.1 The Eigenvalues of the Correlation Matrix | p. 171 |
6.1.1 Karhunen-Loeve Transformation | p. 173 |
6.2 Geometrical Properties of the Error Surface | p. 174 |
Problems | p. 178 |
Hints-Solutions-Suggestions | p. 178 |
Chapter 7 Newton's and Steepest Descent Methods | p. 183 |
7.1 One-Dimensional Gradient Search Method | p. 183 |
7.1.1 Gradient Search Algorithm | p. 183 |
7.1.2 Newton's Method in Gradient Search | p. 185 |
7.2 Steepest Descent Algorithm | p. 186 |
7.2.1 Steepest Descent Algorithm Applied to Wiener Filter | p. 187 |
7.2.2 Stability (Convergence) of the Algorithm | p. 188 |
7.2.3 Transient Behavior of MSE | p. 190 |
7.2.4 Learning Curve | p. 191 |
7.3 Newton's Method | p. 192 |
7.4 Solution of the Vector Difference Equation | p. 194 |
Problems | p. 197 |
Additional Problems | p. 197 |
Hints-Solutions-Suggestions | p. 198 |
Additional Problems | p. 200 |
Chapter 8 The Least Mean-Square Algorithm | p. 203 |
8.1 Introduction | p. 203 |
8.2 The LMS Algorithm | p. 203 |
8.3 Examples Using the LMS Algorithm | p. 206 |
8.4 Performance Analysis of the LMS Algorithm | p. 219 |
8.4.1 Learning Curve | p. 221 |
8.4.2 The Coefficient-Error or Weighted-Error Correlation Matrix | p. 224 |
8.4.3 Excess MSE and Misadjustment | p. 225 |
8.4.4 Stability | p. 227 |
8.4.5 The LMS and Steepest Descent Methods | p. 228 |
8.5 Complex Representation of the LMS Algorithm | p. 228 |
Problems | p. 231 |
Hints-Solutions-Suggestions | p. 232 |
Chapter 9 Variants of Least Mean-Square Algorithm | p. 239 |
9.1 The Normalized Least Mean-Square Algorithm | p. 239 |
9.2 Power Normalized LMS | p. 244 |
9.3 Self-Correcting LMS Filter | p. 248 |
9.4 The Sign-Error LMS Algorithm | p. 250 |
9.5 The NLMS Sign-Error Algorithm | p. 250 |
9.6 The Sign-Regressor LMS Algorithm | p. 252 |
9.7 Self-Correcting Sign-Regressor LMS Algorithm | p. 253 |
9.8 The Normalized Sign-Regressor LMS Algorithm | p. 253 |
9.9 The Sign-Sign LMS Algorithm | p. 254 |
9.10 The Normalized Sign-Sign LMS Algorithm | p. 255 |
9.11 Variable Step-Size LMS | p. 257 |
9.12 The Leaky LMS Algorithm | p. 259 |
9.13 The Linearly Constrained LMS Algorithm | p. 262 |
9.14 The Least Mean Fourth Algorithm | p. 264 |
9.15 The Least Mean Mixed Norm LMS Algorithm | p. 265 |
9.16 Short-Length Signal of the LMS Algorithm | p. 266 |
9.17 The Transform Domain LMS Algorithm | p. 267 |
9.17.1 Convergence | p. 271 |
9.18 The Error Normalized Step-Size LMS Algorithm | p. 272 |
9.19 The Robust Variable Step-Size LMS Algorithm | p. 276 |
9.20 The Modified LMS Algorithm | p. 282 |
9.21 Momentum LMS | p. 283 |
9.22 The Block LMS Algorithm | p. 285 |
9.23 The Complex LMS Algorithm | p. 286 |
9.24 The Affine LMS Algorithm | p. 288 |
9.25 The Complex Affine LMS Algorithm | p. 290 |
Problems | p. 291 |
Hints-Solutions-Suggestions | p. 293 |
Appendix 1 Suggestions and Explanations for MATLAB Use | p. 301 |
A1.1 Suggestions and Explanations for MATLAB Use | p. 301 |
A1.1.1 Creating a Directory | p. 301 |
A1.1.2 Help | p. 301 |
A1.1.3 Save and Load | p. 302 |
A1.1.4 MATLAB as Calculator | p. 302 |
A1.1.5 Variable Names | p. 302 |
A1.1.6 Complex Numbers | p. 302 |
A1.1.7 Array Indexing | p. 302 |
A1.1.8 Extracting and Inserting Numbers in Arrays | p. 303 |
A1.1.9 Vectorization | p. 303 |
A1.1.10 Windowing | p. 304 |
A1.1.11 Matrices | p. 304 |
A1.1.12 Producing a Periodic Function | p. 305 |
A1.1.13 Script Files | p. 305 |
A1.1.14 Functions | p. 305 |
A1.1.15 Complex Expressions | p. 306 |
A1.1.16 Axes | p. 306 |
A1.1.17 2D Graphics | p. 306 |
A1.1.18 3D Plots | p. 308 |
A1.1.18.1 Mesh-Type Figures | p. 308 |
A1.2 General Purpose Commands | p. 309 |
A1.2.1 Managing Commands and Functions | p. 309 |
A1.2.2 Managing Variables and Workspace | p. 309 |
A1.2.3 Operators and Special Characters | p. 309 |
A1.2.4 Control Flow | p. 310 |
A1.3 Elementary Matrices and Matrix Manipulation | p. 311 |
A1.3.1 Elementary Matrices and Arrays | p. 311 |
A1.3.2 Matrix Manipulation | p. 311 |
A1.4 Elementary Mathematical Functions | p. 312 |
A1.4.1 Elementary Functions | p. 312 |
A1.5 Numerical Linear Algebra | p. 313 |
A1.5.1 Matrix Analysis | p. 313 |
A1.6 Data Analysis | p. 313 |
A1.6.1 Basic Operations | p. 313 |
A1.6.2 Filtering and Convolution | p. 313 |
A1.6.3 Fourier Transforms | p. 314 |
A1.7 2D Plotting | p. 314 |
A1.7.1 2D Plots | p. 314 |
Appendix 2 Matrix Analysis | p. 317 |
A2.1 Definitions | p. 317 |
A2.2 Special Matrices | p. 319 |
A2.3 Matrix Operation and Formulas | p. 322 |
A2.4 Eigendecomposition of Matrices | p. 325 |
A2.5 Matrix Expectations | p. 326 |
A2.6 Differentiation of a Scalar Function with Respect to a Vector | p. 327 |
Appendix 3 Mathematical Formulas | p. 329 |
A3.1 Trigonometric Identities | p. 329 |
A3.2 Orthogonality | p. 330 |
A3.3 Summation of Trigonometric Forms | p. 331 |
A3.4 Summation Formulas | p. 331 |
A3.4.1 Finite Summation Formulas | p. 331 |
A3.4.2 Infinite Summation Formulas | p. 331 |
A3.5 Series Expansions | p. 332 |
A3.6 Logarithms | p. 332 |
A3.7 Some Definite Integrals | p. 332 |
Appendix 4 Lagrange Multiplier Method | p. 335 |
Bibliography | p. 337 |
Index | p. 339 |