Machine Learning: A Bayesian and Optimization Perspective (English Edition)

Read a classic machine learning work in its original English edition, and see traditional and innovative algorithms brought together in one panoramic body of knowledge.
  • Word count: 650 (thousand characters)
  • Binding: Hardcover
  • Publisher: China Machine Press (机械工业出版社)
  • Author: Sergios Theodoridis (Greece)
  • Publication date: 2017-04-01
  • ISBN (barcode): 9787111565260
  • Edition: 1
  • Format: 16开 (16mo)
  • Pages: 1050
  • Publication year: 2017
List price: ¥269
About the Book
This book offers an in-depth exploration of all the major machine learning methods and new research trends, covering probabilistic and deterministic approaches as well as Bayesian inference. Classical methods covered include mean/least-squares filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting; newer trends include sparsity-aware learning, convex analysis and optimization, online distributed algorithms, learning in RKH (reproducing kernel Hilbert) spaces, Bayesian inference, probabilistic graphical models and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. The book builds a clear and coherent body of machine learning knowledge; its chapters are largely self-contained, its physical reasoning, mathematical modeling, and algorithm implementations are precise and detailed, and the material is complemented by application examples and exercises. It is suitable for researchers and engineers in the field, as well as for students of pattern recognition, statistical/adaptive signal processing, and deep learning.
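Among the classical methods listed above, the least-mean-squares (LMS) adaptive filter (developed in Chapter 5 of the book) illustrates the optimization thread particularly compactly: it minimizes a mean-square error cost by replacing the exact gradient with an instantaneous estimate. The following is a minimal Python/NumPy sketch of LMS in a system-identification setting; the filter length, step size, noise level, and signal model are illustrative assumptions of ours, not material from the book (whose exercises are in Matlab).

import numpy as np

rng = np.random.default_rng(0)

# Unknown FIR system to be identified, observed through additive noise.
w_true = np.array([0.5, -0.3, 0.1])   # assumed "true" weights (illustrative)
L = len(w_true)                        # filter length, assumed known here
mu = 0.01                              # LMS step size (illustrative choice)
n_samples = 5000

x = rng.standard_normal(n_samples)     # input signal
w = np.zeros(L)                        # adaptive weight estimate

for n in range(L - 1, n_samples):
    x_n = x[n - L + 1 : n + 1][::-1]   # regressor vector, most recent sample first
    d_n = w_true @ x_n + 0.01 * rng.standard_normal()  # desired (noisy) response
    e_n = d_n - w @ x_n                # a priori estimation error
    w = w + mu * e_n * x_n             # LMS update: w <- w + mu * e * x

print("estimated weights:", w)         # converges toward w_true

Because each update uses only the current sample pair (x_n, d_n), the algorithm is an online, stochastic-approximation method, which is exactly the family the description above groups under "stochastic approximation and online learning".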
About the Author
Sergios Theodoridis is a professor at the University of Athens. His research interests include machine learning, pattern recognition, and signal processing. He is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers) and of EURASIP (the European Association for Signal Processing), and has served as Editor-in-Chief of the IEEE Transactions on Signal Processing. His honors include the 2014 IEEE Signal Processing Magazine Best Paper Award, the 2009 IEEE Computational Intelligence Society Outstanding Paper Award, and the 2014 EURASIP Meritorious Service Award. He is also the first author of the classic best seller Pattern Recognition.
Contents
Preface
Acknowledgments
Notation

CHAPTER 1 Introduction .... 1
  1.1 What Machine Learning is About .... 1
    1.1.1 Classification .... 2
    1.1.2 Regression .... 3
  1.2 Structure and a Road Map of the Book .... 5
  References .... 8

CHAPTER 2 Probability and Stochastic Processes .... 9
  2.1 Introduction .... 10
  2.2 Probability and Random Variables .... 10
    2.2.1 Probability .... 11
    2.2.2 Discrete Random Variables .... 12
    2.2.3 Continuous Random Variables .... 14
    2.2.4 Mean and Variance .... 15
    2.2.5 Transformation of Random Variables .... 17
  2.3 Examples of Distributions .... 18
    2.3.1 Discrete Variables .... 18
    2.3.2 Continuous Variables .... 20
  2.4 Stochastic Processes .... 29
    2.4.1 First and Second Order Statistics .... 30
    2.4.2 Stationarity and Ergodicity .... 30
    2.4.3 Power Spectral Density .... 33
    2.4.4 Autoregressive Models .... 38
  2.5 Information Theory .... 41
    2.5.1 Discrete Random Variables .... 42
    2.5.2 Continuous Random Variables .... 45
  2.6 Stochastic Convergence .... 48
  Problems .... 49
  References .... 51

CHAPTER 3 Learning in Parametric Modeling: Basic Concepts and Directions .... 53
  3.1 Introduction .... 53
  3.2 Parameter Estimation: The Deterministic Point of View .... 54
  3.3 Linear Regression .... 57
  3.4 Classification .... 60
  3.5 Biased Versus Unbiased Estimation .... 64
    3.5.1 Biased or Unbiased Estimation? .... 65
  3.6 The Cramér-Rao Lower Bound .... 67
  3.7 Sufficient Statistic .... 70
  3.8 Regularization .... 72
  3.9 The Bias-Variance Dilemma .... 77
    3.9.1 Mean-Square Error Estimation .... 77
    3.9.2 Bias-Variance Tradeoff .... 78
  3.10 Maximum Likelihood Method .... 82
    3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case .... 84
  3.11 Bayesian Inference .... 84
    3.11.1 The Maximum a Posteriori Probability Estimation Method .... 88
  3.12 Curse of Dimensionality .... 89
  3.13 Validation .... 91
  3.14 Expected and Empirical Loss Functions .... 93
  3.15 Nonparametric Modeling and Estimation .... 95
  Problems .... 97
  References .... 102

CHAPTER 4 Mean-Square Error Linear Estimation .... 105
  4.1 Introduction .... 105
  4.2 Mean-Square Error Linear Estimation: The Normal Equations .... 106
    4.2.1 The Cost Function Surface .... 107
  4.3 A Geometric Viewpoint: Orthogonality Condition .... 109
  4.4 Extension to Complex-Valued Variables .... 111
    4.4.1 Widely Linear Complex-Valued Estimation .... 113
    4.4.2 Optimizing with Respect to Complex-Valued Variables: Wirtinger Calculus .... 116
  4.5 Linear Filtering .... 118
  4.6 MSE Linear Filtering: A Frequency Domain Point of View .... 120
  4.7 Some Typical Applications .... 124
    4.7.1 Interference Cancellation .... 124
    4.7.2 System Identification .... 125
    4.7.3 Deconvolution: Channel Equalization .... 126
  4.8 Algorithmic Aspects: The Levinson and the Lattice-Ladder Algorithms .... 132
    4.8.1 The Lattice-Ladder Scheme .... 137
  4.9 Mean-Square Error Estimation of Linear Models .... 140
    4.9.1 The Gauss-Markov Theorem .... 143
    4.9.2 Constrained Linear Estimation: The Beamforming Case .... 145
  4.10 Time-Varying Statistics: Kalman Filtering .... 148
  Problems .... 154
  References .... 158

CHAPTER 5 Stochastic Gradient Descent: The LMS Algorithm and its Family .... 161
  5.1 Introduction .... 162
  5.2 The Steepest Descent Method .... 163
  5.3 Application to the Mean-Square Error Cost Function .... 167
    5.3.1 The Complex-Valued Case .... 175
  5.4 Stochastic Approximation .... 177
  5.5 The Least-Mean-Squares Adaptive Algorithm .... 179
    5.5.1 Convergence and Steady-State Performance of the LMS in Stationary Environments .... 181
    5.5.2 Cumulative Loss Bounds .... 186
  5.6 The Affine Projection Algorithm .... 188
    5.6.1 The Normalized LMS .... 193
  5.7 The Complex-Valued Case .... 194
  5.8 Relatives of the LMS .... 196
  5.9 Simulation Examples .... 199
  5.10 Adaptive Decision Feedback Equalization .... 202
  5.11 The Linearly Constrained LMS .... 204
  5.12 Tracking Performance of the LMS in Nonstationary Environments .... 206
  5.13 Distributed Learning: The Distributed LMS .... 208
    5.13.1 Cooperation Strategies .... 209
    5.13.2 The Diffusion LMS .... 211
    5.13.3 Convergence and Steady-State Performance: Some Highlights .... 218
    5.13.4 Consensus-Based Distributed Schemes .... 220
  5.14 A Case Study: Target Localization .... 222
  5.15 Some Concluding Remarks: Consensus Matrix .... 223
  Problems .... 224
  References .... 227

CHAPTER 6 The Least-Squares Family .... 233
  6.1 Introduction .... 234
  6.2 Least-Squares Linear Regression: A Geometric Perspective .... 234
  6.3 Statistical Properties of the LS Estimator .... 236
  6.4 …
Excerpt
Preface

Machine Learning is a name that is gaining popularity as an umbrella for methods that have been studied and developed for many decades in different scientific communities and under different names, such as Statistical Learning, Statistical Signal Processing, Pattern Recognition, Adaptive Signal Processing, Image Processing and Analysis, System Identification and Control, Data Mining and Information Retrieval, Computer Vision, and Computational Learning. The name "Machine Learning" indicates what all these disciplines have in common, that is, to learn from data, and then make predictions. What one tries to learn from data is their underlying structure and regularities, via the development of a model, which can then be used to provide predictions. To this end, a number of diverse approaches have been developed, ranging from optimization of cost functions, whose goal is to optimize the deviation between what one observes from data and what the model predicts, to probabilistic models that attempt to model the statistical properties of the observed data. The goal of this book is to approach the machine learning discipline in a unifying context, by presenting the major paths and approaches that have been followed over the years, without giving preference to a specific one. It is the author's belief that all of them are valuable to the newcomer who wants to learn the secrets of this topic, from the applications as well as from the pedagogic point of view. As the title of the book indicates, the emphasis is on the processing and analysis front of machine learning and not on topics concerning the theory of learning itself and related performance bounds. In other words, the focus is on methods and algorithms closer to the application level. The book is the outgrowth of more than three decades of the author's experience on research and teaching various related courses. The book is written in such a way that individual (or pairs of) chapters are as self-contained as possible. So, one can select and combine chapters according to the focus he/she wants to give to the course he/she teaches, or to the topics he/she wants to grasp in a first reading. Some guidelines on how one can use the book for different courses are provided in the introductory chapter. Each chapter grows by starting from the basics and evolving to embrace the more recent advances. Some of the topics had to be split into two chapters, such as sparsity-aware learning, Bayesian learning, probabilistic graphical models, and Monte Carlo methods. The book addresses the needs of advanced graduate, postgraduate, and research students as well as of practicing scientists and engineers whose interests lie beyond black-box solutions. Also, the book can serve the needs of short courses on specific topics, e.g., sparse modeling, Bayesian learning, probabilistic graphical models, neural networks and deep learning. Most of the chapters include Matlab exercises, and the related code is available from the book's website.
The solutions manual as well as PowerPoint lectures are also available from the book's website.

Acknowledgments

Writing a book is an effort on top of everything else that must keep running in parallel. Thus, writing is basically an early morning, after five, and over the weekends and holidays activity. It is a big effort that requires dedication and persistence. This would not be possible without the support of a number of people: people who helped in the simulations, in the making of the figures, in reading chapters, and in discussing various issues concerning all aspects, from proofs to the structure and the layout of the book. First, I would like to express my gratitude to my mentor, friend, and colleague Nicholas Kalouptsidis, for this long-lasting and fruitful collaboration. The cooperation with Kostas Slavakis over the last six years has been a major source of inspiration and learning and has played a decisive role for me in writing this book. I am indebted to the members of my group, and in particular to Yannis Kopsinis, Pantelis Bouboulis, Simos Chouvardas, Kostas Themelis, George Papageorgiou, and Charis Georgiou. They were beside me the whole time, especially during the difficult…
