Kernel Adaptive Filtering: A Comprehensive Introduction

Hardcover

Authors: Weifeng Liu, Jose C. Principe, Simon Haykin

ISBN-10: 0470447532

ISBN-13: 9780470447536

Category: Signal Processing - General & Miscellaneous


Online learning from a signal processing perspective.

There is increased interest in kernel learning algorithms in neural networks and a growing need for nonlinear adaptive algorithms in advanced signal processing, communications, and controls. Kernel Adaptive Filtering is the first book to present a comprehensive, unifying introduction to online learning algorithms in reproducing kernel Hilbert spaces. Based on research conducted in the Computational NeuroEngineering Laboratory at the University of Florida and in the Cognitive Systems Laboratory at McMaster University, Ontario, Canada, this unique resource elevates adaptive filtering theory to a new level, presenting a novel design methodology for nonlinear adaptive filters. The book:

- Covers the kernel least-mean-square algorithm, kernel affine projection algorithms, the kernel recursive least-squares algorithm, the theory of Gaussian process regression, and the extended kernel recursive least-squares algorithm
- Presents a powerful model-selection method called maximum marginal likelihood
- Addresses the principal bottleneck of kernel adaptive filters: their growing structure
- Features twelve computer-oriented experiments to reinforce the concepts, with MATLAB code downloadable from the authors' Web site
- Concludes each chapter with a summary of the state of the art and potential future directions for original research

Kernel Adaptive Filtering is ideal for engineers, computer scientists, and graduate students interested in nonlinear adaptive systems for online applications (applications where the data stream arrives one sample at a time and incremental optimal solutions are desirable). It is also a useful guide for those looking for nonlinear adaptive filtering methodologies to solve practical problems.
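To give a flavor of the book's subject, here is a minimal Python/NumPy sketch of the kernel least-mean-square (KLMS) idea described above: each incoming sample becomes a kernel center, and its coefficient is the step size times the prediction error at that step. This is an illustrative sketch, not the book's MATLAB code; the Gaussian kernel width, step size, and the toy sine-learning task are assumptions chosen for demonstration.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two input vectors
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def klms(inputs, desired, eta=0.5, sigma=1.0):
    """Kernel least-mean-square: online learning in an RKHS.

    Every input is stored as a center (the "growing structure"
    the book addresses); the coefficient for center i is
    eta * e_i, where e_i is the prediction error at step i.
    Returns the per-step prediction errors.
    """
    centers, coeffs, errors = [], [], []
    for x, d in zip(inputs, desired):
        # prediction = weighted sum of kernels over stored centers
        y = sum(a * gauss_kernel(c, x, sigma) for c, a in zip(centers, coeffs))
        e = d - y                  # instantaneous error
        centers.append(x)          # network grows by one unit per sample
        coeffs.append(eta * e)     # new coefficient is eta * error
        errors.append(e)
    return np.array(errors)

# Toy demo: learn a static nonlinearity online, one sample at a time.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
D = np.sin(3 * X[:, 0]) + 0.01 * rng.standard_normal(500)
err = klms(X, D, eta=0.5, sigma=0.5)
# Late-stage squared error should be well below the early-stage error.
print(np.mean(err[:50] ** 2), np.mean(err[-50:] ** 2))
```

Because the filter adds one unit per sample, practical variants prune or gate new centers (e.g., the novelty and surprise criteria covered in chapters 2 and 6).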

Table of Contents:

Preface xi
Acknowledgments xv
Notation xvii
Abbreviations and Symbols xix
1 Background and Preview 1
1.1 Supervised, Sequential, and Active Learning 1
1.2 Linear Adaptive Filters 3
1.3 Nonlinear Adaptive Filters 10
1.4 Reproducing Kernel Hilbert Spaces 12
1.5 Kernel Adaptive Filters 16
1.6 Summarizing Remarks 20
Endnotes 21
2 Kernel Least-Mean-Square Algorithm 27
2.1 Least-Mean-Square Algorithm 28
2.2 Kernel Least-Mean-Square Algorithm 31
2.3 Kernel and Parameter Selection 34
2.4 Step-Size Parameter 37
2.5 Novelty Criterion 38
2.6 Self-Regularization Property of KLMS 40
2.7 Leaky Kernel Least-Mean-Square Algorithm 48
2.8 Normalized Kernel Least-Mean-Square Algorithm 48
2.9 Kernel ADALINE 49
2.10 Resource Allocating Networks 53
2.11 Computer Experiments 55
2.12 Conclusion 63
Endnotes 65
3 Kernel Affine Projection Algorithms 69
3.1 Affine Projection Algorithms 70
3.2 Kernel Affine Projection Algorithms 72
3.3 Error Reusing 77
3.4 Sliding Window Gram Matrix Inversion 78
3.5 Taxonomy for Related Algorithms 78
3.6 Computer Experiments 80
3.7 Conclusion 89
Endnotes 91
4 Kernel Recursive Least-Squares Algorithm 94
4.1 Recursive Least-Squares Algorithm 94
4.2 Exponentially Weighted Recursive Least-Squares Algorithm 97
4.3 Kernel Recursive Least-Squares Algorithm 98
4.4 Approximate Linear Dependency 102
4.5 Exponentially Weighted Kernel Recursive Least-Squares Algorithm 103
4.6 Gaussian Processes for Linear Regression 105
4.7 Gaussian Processes for Nonlinear Regression 108
4.8 Bayesian Model Selection 111
4.9 Computer Experiments 114
4.10 Conclusion 119
Endnotes 120
5 Extended Kernel Recursive Least-Squares Algorithm 124
5.1 Extended Recursive Least Squares Algorithm 125
5.2 Exponentially Weighted Extended Recursive Least Squares Algorithm 128
5.3 Extended Kernel Recursive Least Squares Algorithm 129
5.4 EX-KRLS for Tracking Models 131
5.5 EX-KRLS with Finite Rank Assumption 137
5.6 Computer Experiments 141
5.7 Conclusion 150
Endnotes 151
6 Designing Sparse Kernel Adaptive Filters 152
6.1 Definition of Surprise 152
6.2 A Review of Gaussian Process Regression 154
6.3 Computing Surprise 156
6.4 Kernel Recursive Least Squares with Surprise Criterion 159
6.5 Kernel Least Mean Square with Surprise Criterion 160
6.6 Kernel Affine Projection Algorithms with Surprise Criterion 161
6.7 Computer Experiments 162
6.8 Conclusion 173
Endnotes 174
Epilogue 175
Appendix 177
A Mathematical Background 177
A.1 Singular Value Decomposition 177
A.2 Positive-Definite Matrix 179
A.3 Eigenvalue Decomposition 179
A.4 Schur Complement 181
A.5 Block Matrix Inverse 181
A.6 Matrix Inversion Lemma 182
A.7 Joint, Marginal, and Conditional Probability 182
A.8 Normal Distribution 183
A.9 Gradient Descent 184
A.10 Newton's Method 184
B Approximate Linear Dependency and System Stability 186
References 193
Index 204