『Simplified Chinese Book』Mathematical Foundations of Reinforcement Learning (强化学习的数学原理, English Edition)

Store catalog code: 4023508
Category: Simplified Chinese Books → Mainland Books → Computers/Networks → Programming
Author: Shiyu Zhao (赵世钰)
ISBN: 9787302658528
Publisher: Tsinghua University Press (清华大学出版社)
Publication date: 2024-07-01
Pages/word count: /
Format: 16开 (16mo)    Binding: paperback

Price: HK$ 135.70

Editorial Recommendations:
· Builds from zero background to thorough understanding, so readers know not only what the algorithms do but also why;
· The book's GitHub repository has received 2,000+ stars;
· The accompanying course videos have been viewed more than 800,000 times online;
· Excellent word-of-mouth feedback from readers in China and abroad;
· Textbook, videos, and lecture slides form an integrated, three-in-one package.
About the Book:
This book starts from the most basic concepts of reinforcement learning and introduces the fundamental analytical tools, including the Bellman equation and the Bellman optimality equation; it then extends to model-based and model-free reinforcement learning algorithms and, finally, to reinforcement learning methods based on function approximation. The book emphasizes introducing concepts, analyzing problems, and analyzing algorithms from a mathematical point of view, rather than the programming implementation of the algorithms. It does not require any prior background in reinforcement learning; readers only need some knowledge of probability theory and linear algebra. Readers who already have some background in reinforcement learning will find that the book helps them understand certain topics more deeply and offers new perspectives.
The book is intended for undergraduate students, graduate students, researchers, and practitioners in industry or research institutes who are interested in reinforcement learning.
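For orientation, the two analytical tools named above can be written, in one common textbook form, roughly as follows; this is a generic sketch using standard notation (state s, action a, reward r, policy \pi, discount rate \gamma, model p), and the book's own notation may differ slightly:

    % Bellman equation for the state value of a policy \pi (holds for every state s)
    v_\pi(s) = \sum_{a} \pi(a \mid s) \Big[ \sum_{r} p(r \mid s, a)\, r + \gamma \sum_{s'} p(s' \mid s, a)\, v_\pi(s') \Big]

    % Bellman optimality equation for the optimal state value (holds for every state s)
    v^{*}(s) = \max_{a} \Big[ \sum_{r} p(r \mid s, a)\, r + \gamma \sum_{s'} p(s' \mid s, a)\, v^{*}(s') \Big]

Model-based algorithms work directly with the model p, model-free algorithms estimate the same quantities from sampled data, and function approximation replaces the table of values v(s) with a parameterized function.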
Table of Contents
Overview of this Book 1
Chapter 1 Basic Concepts 6
1.1 A grid world example 7
1.2 State and action 8
1.3 State transition 9
1.4 Policy 11
1.5 Reward 13
1.6 Trajectories, returns, and episodes 15
1.7 Markov decision processes 18
1.8 Summary 20
1.9 Q&A 20
Chapter 2 State Values and the Bellman Equation 21
2.1 Motivating example 1: Why are returns important 23
2.2 Motivating example 2: How to calculate returns 24
2.3 State values 26
2.4 The Bellman equation 27
2.5 Examples for illustrating the Bellman equation 30
2.6 Matrix-vector form of the Bellman equation 33
2.7 Solving state values from the Bellman equation 35
2.7.1 Closed-form solution 35
2.7.2 Iterative solution 35
2.7.3 Illustrative examples 36
2.8 From state value to action value 38
2.8.1 Illustrative examples 39
2.8.2 The Bellman equation in terms of action values 40
2.9 Summary 41
2.10 Q&A 42
Chapter 3 Optimal State Values and the Bellman Optimality Equation 43
3.1 Motivating example: How to improve policies 45
3.2 Optimal state values and optimal policies 46
3.3 The Bellman optimality equation 47
3.3.1 Maximization of the right-hand side of the BOE 48
3.3.2 Matrix-vector form of the BOE 49
3.3.3 Contraction mapping theorem 50
3.3.4 Contraction property of the right-hand side of the BOE 53
3.4 Solving an optimal policy from the BOE 55
3.5 Factors that influence optimal policies 58
3.6 Summary 63
3.7 Q&A 63
Chapter 4 Value Iteration and Policy Iteration 66
4.1 Value iteration 68
4.1.1 Elementwise form and implementation 68
4.1.2 Illustrative examples 70
4.2 Policy iteration 72
4.2.1 Algorithm analysis 73
4.2.2 Elementwise form and implementation 76
4.2.3 Illustrative examples 77
4.3 Truncated policy iteration 81
4.3.1 Comparing value iteration and policy iteration 81
4.3.2 Truncated policy iteration algorithm 83
4.4 Summary 85
4.5 Q&A 86
Chapter 5 Monte Carlo Methods 89
5.1 Motivating example: Mean estimation 91
5.2 MC Basic: The simplest MC-based algorithm 93
5.2.1 Converting policy iteration to be model-free 93
5.2.2 The MC Basic algorithm 94
5.2.3 Illustrative examples 96
5.3 MC Exploring Starts 99
5.3.1 Utilizing samples more efficiently 100
5.3.2 Updating policies more efficiently 101
5.3.3 Algorithm description 101
5.4 MC ε-Greedy: Learning without exploring starts 102
5.4.1 ε-greedy policies 103
5.4.2 Algorithm description 103
5.4.3 Illustrative examples 105
5.5 Exploration and exploitation of ε-greedy policies 106
5.6 Summary 111
5.7 Q&A 111
Chapter 6 Stochastic Approximation 114
6.1 Motivating example: Mean estimation 116
6.2 Robbins-Monro algorithm 117
6.2.1 Convergence properties 119
6.2.2 Application to mean estimation 123
6.3 Dvoretzky's convergence theorem 124
6.3.1 Proof of Dvoretzky's theorem 125
6.3.2 Application to mean estimation 126
6.3.3 Application to the Robbins-Monro theorem 127
6.3.4 An extension of Dvoretzky's theorem 127
6.4 Stochastic gradient descent 128
6.4.1 Application to mean estimation 130
6.4.2 Convergence pattern of SGD 131
6.4.3 A deterministic formulation of SGD 133
6.4.4 BGD, SGD, and mini-batch GD 134
6.4.5 Convergence of SGD 136
6.5 Summary 138
6.6 Q&A 138
Chapter 7 Temporal-Difference Methods 140
7.1 TD learning of state values 142
7.1.1 Algorithm description 142
7.1.2 Property analysis 144
7.1.3 Convergence analysis 146
7.2 TD learning of action values: Sarsa 149
7.2.1 Algorithm description 149
7.2.2 Optimal policy learning via Sarsa 151
7.3 TD learning of action values: n-step Sarsa 154
7.4 TD learning of optimal action values: Q-learning 156
7.4.1 Algorithm description 156
7.4.2 Off-policy vs. on-policy 158
7.4.3 Implementation 160
7.4.4 Illustrative examples 161
7.5 A unified viewpoint 165
7.6 Summary 165
7.7 Q&A 166
Chapter 8 Value Function Approximation 168
8.1 Value representation: From table to function 170
8.2 TD learning of state values with function approximation 174
8.2.1 Objective function 174
8.2.2 Optimization algorithms 180
8.2.3 Selection of function approximators 182
8.2.4 Illustrative examples 183
8.2.5 Theoretical analysis 187
8.3 TD learning of action values with function approximation 198
8.3.1 Sarsa with function approximation 198
8.3.2 Q-learning with function approximation 200
8.4 Deep Q-learning 201
8.4.1 Algorithm description 202
8.4.2 Illustrative examples 204
8.5 Summary 207
8.6 Q&A 207
Chapter 9 Policy Gradient Methods 211
9.1 Policy representation: From table to function 213
9.2 Metrics for defining optimal policies 214
9.3 Gradients of the metrics 219
9.3.1 Derivation of the gradients in the discounted case 221
9.3.2 Derivation of the gradients in the undiscounted case 226
9.4 Monte Carlo policy gradient (REINFORCE) 232
9.5 Summary 235
9.6 Q&A 235
Chapter 10 Actor-Critic Methods 237
10.1 The simplest actor-critic algorithm (QAC) 239
10.2 Advantage actor-critic (A2C) 240
10.2.1 Baseline invariance 240
10.2.2 Algorithm description 243
10.3 Off-policy actor-critic 244
10.3.1 Importance sampling 245
10.3.2 The off-policy policy gradient theorem 247
10.3.3 Algorithm description 249
10.4 Deterministic actor-critic 251
10.4.1 The deterministic policy gradient theorem 251
10.4.2 Algorithm description 258
10.5 Summary 259
10.6 Q&A 260
Appendix A Preliminaries for Probability Theory 262
Appendix B Measure-Theoretic Probability Theory 268
Appendix C Convergence of Sequences 276
C.1 Convergence of deterministic sequences 277
C.2 Convergence of stochastic sequences 280
Appendix D Preliminaries for Gradient Descent 284
Bibliography 290
Symbols 297
Index 299
Excerpt
This book aims to provide a mathematical but friendly introduction to the fundamental concepts, basic problems, and classic algorithms in reinforcement learning. Some essential features of this book are highlighted as follows.
* The book introduces reinforcement learning from a mathematical point of view. Hopefully, readers will not only know the procedure of an algorithm but also understand why the algorithm was designed in the first place and why it works effectively.
* The depth of the mathematics is carefully controlled to an adequate level. The mathematics is also presented in a carefully designed manner to ensure that the book is friendly to read. Readers can selectively read the materials presented in gray boxes according to their interests.
* Many illustrative examples are given to help readers understand the topics better. All the examples in this book are based on a grid world task, which is easy to understand and helpful for illustrating concepts and algorithms.
* When introducing an algorithm, the book aims to separate its core idea from complications that may be distracting. In this way, readers can easily grasp the core idea of an algorithm.
* The contents of the book are coherently organized. Each chapter is built based on the preceding chapter and lays a necessary foundation for the subsequent one.
This book is designed for senior undergraduate students, graduate students, researchers, and practitioners interested in reinforcement learning. It does not require readers to have any background in reinforcement learning because it starts by introducing the most basic concepts. If readers already have some background in reinforcement learning, I believe the book can help them understand some topics more deeply or provide different perspectives. This book, however, requires readers to have some knowledge of probability theory and linear algebra. Some basics of the required mathematics are also included in the appendix of this book.
I have been teaching a graduate-level course on reinforcement learning since 2019, and I want to thank the students in my class for their feedback on my teaching. I put the draft of this book online in August 2022 and have since received valuable feedback from many readers; I want to express my gratitude to them. Moreover, I would like to thank my research assistant, Jialing Lv, for her excellent support in editing the book and my lecture videos; my teaching assistants, Jianan Li and Yize Mi, for their help with my teaching; my Ph.D. student Canlun Zheng for his help in designing a picture in the book; and my family for their wonderful support. Finally, I would like to thank the editors of this book, Mr. Sai Guo from Tsinghua University Press and Dr. Lanlan Chang from Springer Nature Press, for their great support.
I sincerely hope this book can help readers enter the exciting field of reinforcement learning smoothly.
Shiyu Zhao
May 2024