
Statistical Policy Search Reinforcement Learning Methods and Applications

Item No.: wx1202488458
Price: ¥68.73 (list price: ¥79.00)
Product Description

The victory of the intelligent agent AlphaGo over top human Go players reshaped our understanding of artificial intelligence and brought its core technology, reinforcement learning, to the broad attention of the academic community. Against this background, this book presents the author's years of research on reinforcement learning theory and applications, together with the latest developments in the field at home and abroad, making it one of the few specialized monographs on reinforcement learning. The book focuses on reinforcement learning methods based on direct policy search, drawing on many techniques from statistical learning to analyze, improve, and apply the relevant methods. It describes policy search reinforcement learning algorithms from a fresh, modern perspective. Starting from different reinforcement learning scenarios, it discusses the many difficulties reinforcement learning faces in practical applications; for each scenario, it gives a concrete policy search algorithm, analyzes the statistical properties of the algorithm's estimators and learned parameters, and demonstrates and quantitatively compares the algorithms on application examples. In particular, combining frontier reinforcement learning techniques, the book applies policy search algorithms to robot control and to digital art rendering, offering a refreshing perspective. Finally, drawing on the author's long-term research experience, it briefly reviews and summarizes trends in the development of reinforcement learning. The material is classic and comprehensive, the concepts clear, and the derivations rigorous, aiming to form a complete body of knowledge integrating fundamental theory, algorithms, and applications.
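To give a flavor of the policy-gradient family the book builds on, the following minimal sketch (a hypothetical toy example of my own, not code from the book) shows the REINFORCE likelihood-ratio estimator with a mean-reward baseline on a one-step problem: a Gaussian policy samples an action, receives a quadratic reward, and the policy mean is updated by stochastic gradient ascent.

```python
import numpy as np

# Toy one-step REINFORCE (illustrative, not from the book): a Gaussian policy
# N(mu, sigma^2) draws actions a, receives reward r(a) = -(a - 3)^2, and mu is
# updated with the score-function (likelihood-ratio) gradient estimator,
# using the batch-mean reward as a baseline to reduce variance.
rng = np.random.default_rng(0)
mu, sigma, lr, target = 0.0, 1.0, 0.05, 3.0

for _ in range(2000):
    actions = mu + sigma * rng.standard_normal(32)   # sample a batch of actions
    rewards = -(actions - target) ** 2               # one-step returns
    baseline = rewards.mean()                        # constant baseline (variance reduction)
    # score function of N(mu, sigma^2) w.r.t. mu is (a - mu) / sigma^2
    grad_mu = np.mean((rewards - baseline) * (actions - mu) / sigma**2)
    mu += lr * grad_mu                               # gradient ascent on expected reward

print(round(mu, 2))  # mu should approach the reward-maximizing action, 3.0
```

Subtracting the baseline leaves the gradient estimate unbiased while shrinking its variance, which is exactly the kind of estimator analysis the book carries out for its policy search algorithms.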

Chapter 1  Overview of Reinforcement Learning ···· 1
1.1 Reinforcement Learning in Machine Learning ···· 1
1.2 Reinforcement Learning in Intelligent Control ···· 4
1.3 Branches of Reinforcement Learning ···· 8
1.4 Contributions of This Book ···· 11
1.5 Structure of This Book ···· 12
References ···· 14
Chapter 2  Related Work and Background ···· 19
2.1 Markov Decision Processes ···· 19
2.2 Value-Function-Based Policy Learning Algorithms ···· 21
2.2.1 Value Functions ···· 21
2.2.2 Policy Iteration and Value Iteration ···· 23
2.2.3 Q-learning ···· 25
2.2.4 Least-Squares Policy Iteration ···· 27
2.2.5 Value-Based Deep Reinforcement Learning ···· 29
2.3 Policy Search Algorithms ···· 30
2.3.1 Modeling Policy Search ···· 31
2.3.2 Classical Policy Gradients (the REINFORCE Algorithm) ···· 32
2.3.3 Natural Policy Gradient Methods ···· 33
2.3.4 Expectation-Maximization-Based Policy Search ···· 35
2.3.5 Policy-Based Deep Reinforcement Learning ···· 37
2.4 Chapter Summary ···· 38
References ···· 39
Chapter 3  Analysis and Improvement of Policy Gradient Estimation ···· 42
3.1 Background ···· 42
3.2 Policy Gradients with Parameter-Based Exploration (the PGPE Algorithm) ···· 44
3.3 Variance Analysis of Gradient Estimates ···· 46
3.4 Improvement and Analysis with Optimal Baselines ···· 48
3.4.1 The Idea of the Optimal Baseline ···· 48
3.4.2 The Optimal Baseline for PGPE ···· 49
3.5 Experiments ···· 51
3.5.1 Illustrative Examples ···· 51
3.5.2 Inverted Pendulum Balancing ···· 57
3.6 Summary and Discussion ···· 58
References ···· 60
Chapter 4  Importance-Sampling-Based Parameter-Exploring Policy Gradients ···· 63
4.1 Background ···· 63
4.2 PGPE in the Off-Policy Setting ···· 64
4.2.1 Importance-Weighted PGPE ···· 65
4.2.2 Variance Reduction in IW-PGPE via Baseline Subtraction ···· 66
4.3 Experimental Results ···· 68
4.3.1 Illustrative Examples ···· 69
4.3.2 Mountain-Car Task ···· 78
4.3.3 Simulated Robot Control Task ···· 81
4.4 Summary and Discussion ···· 88
References ···· 89
Chapter 5  Variance-Regularized Policy Gradient Algorithms ···· 91
5.1 Background ···· 91
5.2 The Regularized Policy Gradient Algorithm ···· 92
5.2.1 Objective Function ···· 92
5.2.2 Gradient Computation ···· 94
5.3 Experimental Results ···· 95
5.3.1 Numerical Examples ···· 95
5.3.2 Mountain-Car Task ···· 101
5.4 Summary and Discussion ···· 102
References ···· 103
Chapter 6  Sampling Techniques for Parameter-Exploring Policy Gradients ···· 105
6.1 Background ···· 105
6.2 Sampling Techniques in PGPE ···· 107
6.2.1 Baseline Sampling ···· 108
6.2.2 Optimal-Baseline Sampling ···· 109
6.2.3 Symmetric Sampling ···· 109
6.2.4 Super-Symmetric Sampling ···· 111
6.2.5 Multimodal Super-Symmetric Sampling ···· 116
6.2.6 Reward Normalization in SupSymPGPE ···· 117
6.3 Numerical Experiments ···· 119
6.3.1 Quadratic Function ···· 120
6.3.2 Rastrigin Function ···· 120
6.4 Chapter Summary ···· 124
References ···· 125
Chapter 7  Motor Skill Learning for Humanoid Robots with Efficient Sample Reuse ···· 127
7.1 Background: Motor Skill Learning in Real Environments ···· 127
7.2 The Motor Skill Learning Framework ···· 128
7.2.1 Robot Motion Trajectories and Rewards ···· 128
7.2.2 Policy Model ···· 129
7.2.3 PGPE-Based Policy Learning ···· 129
7.3 Efficient Reuse of Past Experience ···· 130
7.3.1 Importance-Weighted Parameter-Exploring Policy Gradients (the IW-PGPE Algorithm) ···· 130
7.3.2 Motor Skill Learning with IW-PGPE ···· 131
7.3.3 Recursive IW-PGPE ···· 132
7.4 Cart-Pole Swing-Up in a Virtual Environment ···· 133
7.5 Basketball Shooting Task ···· 137
7.6 Discussion and Conclusions ···· 140
References ···· 142
Chapter 8  Inverse-RL-Based Artistic Style Learning and Ink-Wash Painting Rendering ···· 145
8.1 Background ···· 145
8.1.1 Computer Graphics Background ···· 146
8.1.2 Artificial Intelligence Background ···· 147
8.1.3 Rendering Systems for Artistic Stylization ···· 148
8.2 RL-Based Modeling of a Brush Agent ···· 148
8.2.1 Action Design ···· 149
8.2.2 State Design ···· 150
8.3 The Offline Artistic Style Learning Stage ···· 151
8.3.1 Data Collection ···· 152
8.3.2 Reward Function Learning via Inverse Reinforcement Learning ···· 153
8.3.3 Rendering Policy Learning with the R-PGPE Algorithm ···· 154
8.4 The A4 System User Interface ···· 155
8.5 Experiments and Results ···· 157
8.5.1 Rendering Policy Learning Results ···· 157
8.5.2 Stroke-Rendering Results Using IRL ···· 160
8.6 Chapter Summary ···· 162
References ···· 163
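Several chapters of the table of contents revolve around PGPE and its symmetric-sampling variant. The sketch below (again a hypothetical toy of my own, not the book's implementation) illustrates the core idea of parameter-based exploration: rather than perturbing actions, the policy parameter itself is perturbed, symmetric pairs of rollouts are evaluated, and their return difference drives a gradient step in parameter space.

```python
import numpy as np

# Toy PGPE-style update with symmetric sampling (illustrative, not from the
# book): perturb the policy parameter rho by +/- eps, run a deterministic
# rollout for each perturbation, and ascend a likelihood-ratio gradient
# estimate built from the symmetric return differences.
rng = np.random.default_rng(1)
rho, sigma, lr = 0.0, 1.0, 0.05

def episode_return(theta):
    # Stand-in for rolling out a deterministic policy with parameter theta;
    # the return is maximized at theta = 2.
    return -(theta - 2.0) ** 2

for _ in range(2000):
    eps = sigma * rng.standard_normal(16)        # parameter-space perturbations
    r_plus = episode_return(rho + eps)           # returns of the +eps rollouts
    r_minus = episode_return(rho - eps)          # returns of the -eps rollouts
    grad_rho = np.mean(eps * (r_plus - r_minus)) / (2 * sigma**2)
    rho += lr * grad_rho                         # gradient ascent on rho

print(round(rho, 2))  # rho should approach the optimal parameter, 2.0
```

Because each perturbation is evaluated with its mirror image, any baseline term cancels in the difference r_plus - r_minus, which is the variance-reduction effect that motivates symmetric sampling in the first place.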

Product Details
Basic Information
Publisher: Publishing House of Electronics Industry
ISBN: 9787121419591
Barcode: 9787121419591
Author: Zhao Tingting
Translator: --
Publication date: 2021-09-01
Format: Other
Binding: Paperback
Pages: 180
Edition: 1
Printing: 1