Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control (original textbook from an internationally renowned university, English edition) / Information Science and Technology


  • Word count: 347 (thousand characters)
  • Publisher: Tsinghua University Press
  • Author: Dimitri P. Bertsekas (USA)
  • ISBN: 9787302684718
  • Suitable ages: 12+
  • Edition: 1
  • Format: 16mo (16开)
  • Pages: 227
  • Publication year: 2025
  • Printing: 1
List price: ¥79
About the Book
This book constructs a new conceptual framework for approximate dynamic programming and reinforcement learning that is simple yet ambitious. The framework centers on two algorithmic processes, off-line training and on-line play, which operate independently of each other yet are organically linked through Newton's method. Today's new generation of artificial intelligence is developing in dazzling variety, but beneath the seemingly complex surface of data and algorithms lie simple and elegant principles. Through this book, readers will come to appreciate the power of classical optimal control theory for analyzing and understanding the performance of modern reinforcement learning algorithms, and will see how the new wave of algorithms represented by AlphaZero opens fresh opportunities for developing the classical theory. The book is suitable as a textbook for graduate students and senior undergraduates in information science and technology, and for self-study by researchers in the field.
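The central claim above — that on-line play (one-step lookahead) relates to off-line training quantities through Newton's method — can be illustrated numerically on a toy scalar linear-quadratic problem of the kind the book uses in its Chapter 4. The sketch below is not from the book; the parameter values are hypothetical, chosen so the exact optimal cost coefficient is known in closed form. It contrasts value iteration (linear convergence) with the one-step-lookahead policy-cost map, which behaves as a Newton step on the Riccati equation (quadratic convergence).

```python
# Scalar linear-quadratic problem: dynamics x' = a*x + b*u,
# stage cost q*x^2 + r*u^2.  Hypothetical parameters (a > 1: unstable plant).
a, b, q, r = 2.0, 1.0, 1.0, 1.0

def riccati(K):
    """One value-iteration (Bellman) step on the scalar Riccati equation."""
    return q + a * a * r * K / (r + b * b * K)

def newton_step(K):
    """Cost coefficient of the one-step-lookahead policy built from
    terminal cost approximation K*x^2 -- a Newton step on the Riccati map."""
    L = a * b * K / (r + b * b * K)   # lookahead-optimal linear gain u = -L*x
    closed = a - b * L                # closed-loop coefficient
    assert abs(closed) < 1, "K lies outside the region of stability"
    # Policy cost solves K_mu = q + r*L^2 + closed^2 * K_mu.
    return (q + r * L * L) / (1 - closed * closed)

# Exact optimal cost: positive root of K^2 - 4K - 1 = 0.
K_star = 2 + 5 ** 0.5

K_vi = K_newton = 10.0                # common starting guess
for _ in range(2):
    K_vi = riccati(K_vi)
    K_newton = newton_step(K_newton)

print(abs(K_vi - K_star), abs(K_newton - K_star))
```

After only two iterations, the Newton (lookahead) iterate is within about 10^-3 of the optimal coefficient, while value iteration is still off by a few times 10^-2 — a small-scale instance of the quadratic-versus-linear convergence contrast the book develops. Note the stability check: starting from K = 0 the closed loop would be unstable, which mirrors the book's "region of stability" discussion.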
About the Author
Dimitri P. Bertsekas (USA) is a tenured professor at MIT, a member of the U.S. National Academy of Engineering, and a visiting professor at the Center for Complex and Networked Systems Research, Tsinghua University. An internationally renowned author in electrical engineering and computer science, he has written more than a dozen best-selling textbooks and monographs, including Nonlinear Programming, Network Optimization, Dynamic Programming, Convex Optimization, and Reinforcement Learning and Optimal Control.
Contents
1. AlphaZero, Off-Line Training, and On-Line Play
  1.1. Off-Line Training and Policy Iteration
  1.2. On-Line Play and Approximation in Value Space - Truncated Rollout
  1.3. The Lessons of AlphaZero
  1.4. A New Conceptual Framework for Reinforcement Learning
  1.5. Notes and Sources
2. Deterministic and Stochastic Dynamic Programming
  2.1. Optimal Control Over an Infinite Horizon
  2.2. Approximation in Value Space
  2.3. Notes and Sources
3. An Abstract View of Reinforcement Learning
  3.1. Bellman Operators
  3.2. Approximation in Value Space and Newton's Method
  3.3. Region of Stability
  3.4. Policy Iteration, Rollout, and Newton's Method
  3.5. How Sensitive is On-Line Play to the Off-Line Training Process?
  3.6. Why Not Just Train a Policy Network and Use it Without On-Line Play?
  3.7. Multiagent Problems and Multiagent Rollout
  3.8. On-Line Simplified Policy Iteration
  3.9. Exceptional Cases
  3.10. Notes and Sources
4. The Linear Quadratic Case - Illustrations
  4.1. Optimal Solution
  4.2. Cost Functions of Stable Linear Policies
  4.3. Value Iteration
  4.4. One-Step and Multistep Lookahead - Newton Step Interpretations
  4.5. Sensitivity Issues
  4.6. Rollout and Policy Iteration
  4.7. Truncated Rollout - Length of Lookahead Issues
  4.8. Exceptional Behavior in Linear Quadratic Problems
  4.9. Notes and Sources
5. Adaptive and Model Predictive Control
  5.1. Systems with Unknown Parameters - Robust and PID Control
  5.2. Approximation in Value Space, Rollout, and Adaptive Control
  5.3. Approximation in Value Space, Rollout, and Model Predictive Control
  5.4. Terminal Cost Approximation - Stability Issues
  5.5. Notes and Sources
6. Finite Horizon Deterministic Problems - Discrete Optimization
  6.1. Deterministic Discrete Spaces Finite Horizon Problems
  6.2. General Discrete Optimization Problems
  6.3. Approximation in Value Space
  6.4. Rollout Algorithms for Discrete Optimization
  6.5. Rollout and Approximation in Value Space with Multistep Lookahead
    6.5.1. Simplified Multistep Rollout - Double Rollout
    6.5.2. Incremental Rollout for Multistep Approximation in Value Space
  6.6. Constrained Forms of Rollout Algorithms
  6.7. Adaptive Control by Rollout with a POMDP Formulation
  6.8. Rollout for Minimax Control
  6.9. Small Stage Costs and Long Horizon - Continuous-Time Rollout
  6.10. Epilogue
Appendix A: Newton's Method and Error Bounds
  A.1. Newton's Method for Differentiable Fixed Point Problems
  A.2. Newton's Method Without Differentiability of the Bellman Operator
  A.3. Local and Global Error Bounds for Approximation in Value Space
  A.4. Local and Global Error Bounds for Approximate Policy Iteration
References


Copyright © jvwen.com 聚文网 (Juwen). All rights reserved.