Mathematical Foundations of Reinforcement Learning (强化学习的数学原理, English Edition)
Word count: 384,000
Binding: Paperback
Publisher: Tsinghua University Press
Author: Zhao Shiyu (赵世钰)
Publication date: 2024-07-01
Barcode (ISBN): 9787302658528
Edition: 1
Format: 16开 (16mo)
Pages: 312
Year of publication: 2024
List price: ¥118
Featured
世界图书名著昆虫记绿野仙踪木偶奇遇记儿童书籍彩图注音版 (¥5.83)
正版世界名著文学小说名家名译中学生课外阅读书籍图书批发 70册 (¥5.39)
简笔画10000例加厚版2-6岁幼儿童涂色本涂鸦本绘画本填色书正版 (¥8.58)
世界文学名著全49册中小学生青少年课外书籍文学小说批发正版 (¥5.83)
全优冲刺100分测试卷一二三四五六年级上下册语文数学英语模拟卷 (¥4.95)
父与子彩图注音完整版小学生图书批发儿童课外阅读书籍正版1册 (¥8.69)
好玩的洞洞拉拉书0-3岁宝宝早教益智游戏书机关立体翻翻书4册 (¥24.2)
幼儿认字识字大王3000字幼儿园中班大班学前班宝宝早教启蒙书 (¥7.15)
用思维导图读懂儿童心理学培养情绪管理与性格培养故事指导书 (¥11.55)
少年读漫画鬼谷子全6册在漫画中学国学小学生课外阅读书籍正版 (¥19.8)
科学真好玩 (¥64)
一年级下4册·读读童谣和儿歌 (¥12.7)
原生态新生代(传统木版年画的当代传承国际研讨会论文集) (¥38.4)
法国经典中篇小说 (¥11.14)
上海的狐步舞--穆时英(中国现代文学馆馆藏初版本经典) (¥11.32)
猫的摇篮(精) (¥21.56)
幼儿园特色课程实施方案/幼儿园生命成长启蒙教育课程丛书 (¥30.72)
旧时风物(精) (¥24.94)
三希堂三帖/墨林珍赏 (¥12.04)
寒山子庞居士诗帖/墨林珍赏 (¥6.88)
苕溪帖/墨林珍赏 (¥6.88)
楷书王维诗卷/墨林珍赏 (¥6.88)
兰亭序/墨林珍赏 (¥9.46)
祭侄文稿/墨林珍赏 (¥7.74)
蜀素帖/墨林珍赏 (¥7.74)
真草千字文/墨林珍赏 (¥12.04)
进宴仪轨(精)/中国古代舞乐域外图书 (¥114.4)
舞蹈音乐的基础理论与应用 (¥24.94)
About the Book
This book starts from the most basic concepts of reinforcement learning. It first introduces the fundamental analytical tools, including the Bellman equation and the Bellman optimality equation, then extends to model-based and model-free reinforcement learning algorithms, and finally to reinforcement learning methods based on function approximation. The book emphasizes introducing concepts, framing problems, and analyzing algorithms from a mathematical perspective; it does not emphasize programming implementations of the algorithms. No prior background in reinforcement learning is assumed; readers only need basic knowledge of probability theory and linear algebra. For readers who already have some grounding in reinforcement learning, the book can deepen their understanding of certain topics and offer new perspectives. It is intended for undergraduate and graduate students, researchers, and practitioners in industry and research institutes who are interested in reinforcement learning.
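Since the description highlights the Bellman equation and the Bellman optimality equation as the book's core analytical tools, the two equations are reproduced below in their standard discounted-MDP forms as a quick reference; the notation ($r(s,a)$ for the expected immediate reward, $\gamma \in [0,1)$ for the discount factor, $p(s' \mid s,a)$ for the transition probabilities) follows common RL conventions and may differ from the book's own.

$$ v_\pi(s) = \sum_{a} \pi(a \mid s) \Big[ r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, v_\pi(s') \Big] $$

$$ v_*(s) = \max_{a} \Big[ r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, v_*(s') \Big] $$

The first equation characterizes the state values of a given policy $\pi$ as a fixed point; the second characterizes the optimal state values. Chapters 2 and 3 of the book are devoted to solving exactly these two equations.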
Contents
Overview of this Book 1
Chapter 1 Basic Concepts 6
1.1 A grid world example 7
1.2 State and action 8
1.3 State transition 9
1.4 Policy 11
1.5 Reward 13
1.6 Trajectories, returns, and episodes 15
1.7 Markov decision processes 18
1.8 Summary 20
1.9 Q&A 20
Chapter 2 State Values and the Bellman Equation 21
2.1 Motivating example 1: Why are returns important? 23
2.2 Motivating example 2: How to calculate returns? 24
2.3 State values 26
2.4 The Bellman equation 27
2.5 Examples for illustrating the Bellman equation 30
2.6 Matrix-vector form of the Bellman equation 33
2.7 Solving state values from the Bellman equation 35
2.7.1 Closed-form solution 35
2.7.2 Iterative solution 35
2.7.3 Illustrative examples 36
2.8 From state value to action value 38
2.8.1 Illustrative examples 39
2.8.2 The Bellman equation in terms of action values 40
2.9 Summary 41
2.10 Q&A 42
Chapter 3 Optimal State Values and the Bellman Optimality Equation 43
3.1 Motivating example: How to improve policies? 45
3.2 Optimal state values and optimal policies 46
3.3 The Bellman optimality equation 47
3.3.1 Maximization of the right-hand side of the BOE 48
3.3.2 Matrix-vector form of the BOE 49
3.3.3 Contraction mapping theorem 50
3.3.4 Contraction property of the right-hand side of the BOE 53
3.4 Solving an optimal policy from the BOE 55
3.5 Factors that influence optimal policies 58
3.6 Summary 63
3.7 Q&A 63
Chapter 4 Value Iteration and Policy Iteration 66
4.1 Value iteration 68
4.1.1 Elementwise form and implementation 68
4.1.2 Illustrative examples 70
4.2 Policy iteration 72
4.2.1 Algorithm analysis 73
4.2.2 Elementwise form and implementation 76
4.2.3 Illustrative examples 77
4.3 Truncated policy iteration 81
4.3.1 Comparing value iteration and policy iteration 81
4.3.2 Truncated policy iteration algorithm 83
4.4 Summary 85
4.5 Q&A 86
Chapter 5 Monte Carlo Methods 89
5.1 Motivating example: Mean estimation 91
5.2 MC Basic: The simplest MC-based algorithm 93
5.2.1 Converting policy iteration to be model-free 93
5.2.2 The MC Basic algorithm 94
5.2.3 Illustrative examples 96
5.3 MC Exploring Starts 99
5.3.1 Utilizing samples more efficiently 100
5.3.2 Updating policies more efficiently 101
5.3.3 Algorithm description 101
5.4 MC ε-Greedy: Learning without exploring starts 102
5.4.1 ε-greedy policies 103
5.4.2 Algorithm description 103
5.4.3 Illustrative examples 105
5.5 Exploration and exploitation of ε-greedy policies 106
5.6 Summary 111
5.7 Q&A 111
Chapter 6 Stochastic Approximation 114
6.1 Motivating example: Mean estimation 116
6.2 Robbins-Monro algorithm 117
6.2.1 Convergence properties 119
6.2.2 Application to mean estimation 123
6.3 Dvoretzky's convergence theorem 124
6.3.1 Proof of Dvoretzky's theorem 125
6.3.2 Application to mean estimation 126
6.3.3 Application to the Robbins-Monro theorem 127
6.3.4 An extension of Dvoretzky's theorem 127
6.4 Stochastic gradient descent 128
6.4.1 Application to mean estimation 130
6.4.2 Convergence pattern of SGD 131
6.4.3 A deterministic formulation of SGD 133
6.4.4 BGD, SGD, and mini-batch GD 134
6.4.5 Convergence of SGD 136
6.5 Summary 138
6.6 Q&A 138
Chapter 7 Temporal-Difference Methods 140
7.1 TD learning of state values 142
7.1.1 Algorithm description 142
7.1.2 Property analysis 144
7.1.3 Convergence analysis 146
7.2 TD learning of action values: Sarsa 149
7.2.1 Algorithm description 149
7.2.2 Optimal policy learning via Sarsa 151
7.3 TD learning of action values: n-step Sarsa 154
7.4 TD learning of optimal action values: Q-learning 156
7.4.1 Algorithm description 156
7.4.2 Off-policy vs. on-policy 158
7.4.3 Implementation 160
7.4.4 Illustrative examples 161
7.5 A unified viewpoint 165
7.6 Summary 165
7.7 Q&A 166
Chapter 8 Value Function Approximation 168
8.1 Value representation: From table to function 170
8.2 TD learning of state values with function approximation 174
8.2.1 Objective function 174
8.2.2 Optimization algorithms 180
8.2.3 Selection of function approximators 182
8.2.4 Illustrative examples 183
8.2.5 Theoretical analysis 187
8.3 TD learning of action values with function approximation 198
8.3.1 Sarsa with function approximation 198
8.3.2 Q-learning with function approximation 200
8.4 Deep Q-learning 201
8.4.1 Algorithm description 202
8.4.2 Illustrative examples 204
8.5 Summary 207
8.6 Q&A 207
Chapter 9 Policy Gradient Methods 211
9.1 Policy representation: From table to function 213
9.2 Metrics for defining optimal policies 214
9.3 Gradients of the metrics 219
9.3.1 Derivation of the gradients in the discounted case 221
9.3.2 Derivation of the gradients in the undiscounted case 226
9.4 Monte Carlo policy gradient (REINFORCE) 232
9.5 Summary 235
9.6 Q&A 235
Chapter 10 Actor-Critic Methods 237
10.1 The simplest actor-critic algorithm (QAC) 239
10.2 Advantage actor-critic (A2C) 240
10.2.1 Baseline invariance 240
10.2.2 Algorithm description 243
10.3 Off-policy actor-critic 244
10.3.1 Importance sampling 245
10.3.2 The off-policy policy gradient theorem 247
10.3.3 Algorithm description 249
10.4 Deterministic actor-critic 251
10.4.1 The deterministic policy gradient theorem 251
10.4.2 Algorithm description 258
10.5 Summary 259
10.6 Q&A 260
Appendix A Preliminaries for Probability Theory 262
Appendix B Measure-Theoretic Probability Theory 268
Appendix C Convergence of Sequences 276
C.1 Convergence of deterministic sequences 277
C.2 Convergence of stochastic sequences 280
Appendix D Preliminaries for Gradient Descent 284
Bibliography 290
Symbols 297
Index 299