Reinforcement Learning

  • What is Reinforcement Learning
  • Why Reinforcement Learning
  • Basics of Reinforcement Learning
  • Inside an RL agent

What is Reinforcement Learning

RL is an agent learning to interact with an environment based on a feedback signal (reward) it receives from the environment, in order to achieve a goal.

In other words, the agent learns from the environment's feedback until it achieves its goal, much like training a dog: when it does the right thing it gets a treat, and when it does the wrong thing it gets a smack.

Reinforcement learning is a branch of machine learning; alongside supervised learning and unsupervised learning, it is one of the parts that together make up machine learning.

  • Data: sparse and time-delayed rewards
  • Way to learn: learn through interaction with the environment, from scratch
  • Goal: Maximise future rewards

Why Reinforcement Learning

  • Learn from scratch, no need for training data
  • Can go beyond human-level performance

Basics of Reinforcement Learning: reward

  • A reward R_t is an immediate feedback signal
  • Indicates how well the agent is doing at step t
  • The agent's job is to maximise cumulative reward

To obtain the maximum reward

  • Reward may be delayed
  • Actions may have long term consequences
  • It may be better to sacrifice immediate reward to gain more long-term reward

Agent, environment and state

  • At each step t the agent:
    • Receives observation O_t
    • Receives reward R_t
    • Executes action A_t
  • The environment:
    • Receives action A_t
    • Emits observation O_{t+1}
    • Emits reward R_{t+1}
  • State S_t
    • Summary information used to determine what happens next
    • Markov: the future depends only on the present, independent of the past
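
A minimal sketch of this interaction loop (my illustration, not from the original notes), assuming a hypothetical environment object whose reset() returns the first observation and whose step(action) returns the next observation, the reward, and a done flag:

```python
# Sketch of the agent-environment loop. `env` and `choose_action` are
# hypothetical placeholders: env.reset() -> first observation,
# env.step(action) -> (next observation, reward, done).
def run_episode(env, choose_action, max_steps=100):
    obs = env.reset()                      # agent receives observation O_0
    total_reward = 0.0
    for t in range(max_steps):
        action = choose_action(obs)        # agent executes action A_t
        obs, reward, done = env.step(action)  # env emits O_{t+1} and R_{t+1}
        total_reward += reward
        if done:
            break
    return total_reward
```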

Partial observability

  • Partial observability: agent indirectly observes environment
    • A robot with camera vision is not told its absolute location
    • A poker-playing agent only observes the public cards

Inside an RL agent

  • Policy: agent's behaviour function
  • Value function: how good is a state

Policy

  • A policy is a map from state to action

    • Deterministic policy: a = \pi(s)
    • Stochastic policy: \pi(a|s) = P[A = a | S = s] (the probability of taking action a in state s)
  • Example:

    • Arrows represent the policy \pi(s) for each state s
(Figure: grid-world example with arrows showing the policy \pi(s) in each state)
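
As a small illustration (mine, not from the notes), a deterministic policy over a handful of discrete states can be stored as a state-to-action table, and a stochastic policy as a state-to-probability table; the state and action names here are made up:

```python
import random

# Deterministic policy: a = pi(s), a simple lookup table.
pi_det = {"s1": "up", "s2": "left"}

# Stochastic policy: pi(a|s) = P[A = a | S = s], one distribution per state.
pi_sto = {"s1": {"up": 0.8, "left": 0.2},
          "s2": {"up": 0.1, "left": 0.9}}

def sample_action(policy, state):
    """Draw an action from a stochastic policy."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs, k=1)[0]

print(pi_det["s1"], sample_action(pi_sto, "s2"))
```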

Value Function

  • Value function is a prediction of future reward

  • Used to evaluate the goodness of a state

    V_\pi(s) = E_\pi[R_t + \gamma R_{t+1} + \gamma^2 R_{t+2}+...|S_t=s]

    Where \gamma (0 \leq \gamma \leq 1) is a discount factor that down-weights future rewards

  • Example

    • Numbers represent the value V(s) for each state s.
(Figure: grid-world example with the value V(s) shown for each state)
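
A minimal sketch (mine) of the discounted sum inside the expectation above, evaluated for one made-up reward sequence:

```python
def discounted_return(rewards, gamma=0.9):
    """Compute R_t + gamma*R_{t+1} + gamma^2*R_{t+2} + ..."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Example: rewards observed from step t onwards in one episode.
print(discounted_return([0.0, 0.0, 1.0], gamma=0.9))  # 0 + 0 + 0.81 = 0.81
```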

Different RL agents

  • Value-based agent
    • Value function
    • Implicit policy
  • Policy-based agent
    • Policy
    • No value function

Q-Learning

  • The value function is the expected future reward at a given state

  • It is used to evaluate the goodness of a state

    V_\pi(s_t)=E_\pi[R_t+\gamma R_{t+1} + \gamma^2 R_{t+2}+...|s_t]

  • Q-learning is to learn a particular function: the Q-function (i.e. the action-value function)

  • The Q-function is the expected future reward at a given state when taking a particular action

  • It is used to evaluate the goodness of a (state, action) pair:

    Q_\pi(s_t,a_t)=E_\pi[R_t+\gamma R_{t+1} + \gamma^2 R_{t+2}+...|s_t,a_t]

Q-Table

  • Q-learning builds a score table that records the Q value for each action at each state (a small sketch of such a table follows below)
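
For a small discrete problem this table is just a 2-D array with one row per state and one column per action, initialised to the same fixed value; the sizes below are hypothetical:

```python
import numpy as np

n_states, n_actions = 6, 4           # hypothetical problem size
Q = np.zeros((n_states, n_actions))  # Q[s, a] starts at the same value (0)
print(Q[2, 1])                       # Q value of taking action 1 in state 2
```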

Bellman Equation

Q(s_t,a_t)=(1-\alpha)Q(s_t,a_t)+\alpha[R(s_t,a_t)+\gamma \mathop{max}\limits_{a}Q(s_{t+1},a)]

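A single worked update of the equation above, with made-up numbers (learning rate α = 0.1, discount γ = 0.9), showing how one table entry moves toward its target:

```python
alpha, gamma = 0.1, 0.9
q_sa       = 0.0   # current Q(s_t, a_t)
reward     = 1.0   # R(s_t, a_t)
max_q_next = 0.5   # max_a Q(s_{t+1}, a)

q_sa = (1 - alpha) * q_sa + alpha * (reward + gamma * max_q_next)
print(q_sa)  # 0.9*0.0 + 0.1*(1.0 + 0.9*0.5) = 0.145
```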

Q-Learning Algorithm

  1. Initialise the Q table

  2. For each episode
    a. Select a random initial state
    b. Do
      • Select an action (e.g. randomly)
      • Perform that action and move to the next state
      • Get the reward
      • Update Q(s_t,a_t)=(1-\alpha)Q(s_t,a_t)+\alpha[R(s_t,a_t)+\gamma \mathop{max}\limits_{a}Q(s_{t+1},a)]
    c. End Do when the goal state is reached

  End For
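
A self-contained sketch of this loop on a toy environment (my own example: a 1-D chain of 5 states where only reaching the rightmost goal state gives reward 1); the environment, the constants, and the number of episodes are all made up for illustration:

```python
import numpy as np

# Toy chain environment: states 0..4 in a row, state 4 is the goal;
# action 0 = move left, action 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

alpha, gamma = 0.1, 0.9

Q = np.zeros((N_STATES, 2))                  # 1. initialise Q table

for episode in range(500):                   # 2. for each episode
    state = np.random.randint(0, GOAL)       # a. random (non-goal) initial state
    done = False
    while not done:                          # b. do ... until goal reached
        action = np.random.randint(2)        # select an action (here: randomly)
        next_state, reward, done = step(state, action)  # perform it, get reward
        # Bellman update of the table entry
        Q[state, action] = (1 - alpha) * Q[state, action] + \
            alpha * (reward + gamma * np.max(Q[next_state]))
        state = next_state

print(Q)  # learned Q values; the greedy policy should prefer "right" in every state
```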

Summary: Q-Learning

  • Q-learning evaluates which action to take based on a Q table that records the value of being in a certain state and taking a certain action in that state.
  • The Q table is updated iteratively, as the agent plays, using the Bellman Equation
  • Before exploring the environment the Q table gives the same arbitrary fixed value (e.g. zero), but as time goes by it gives a better and better approximation.

Deep Q Network

Used to solve problems with a huge number of states, where the Q-table of Q-learning can no longer be built or updated.

For deep Q-learning:

  • Input: state

  • Output: a Q-value for each action (see the sketch below)
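
A minimal sketch of such a network, assuming PyTorch is available; the layer sizes and variable names are arbitrary choices, and this is only the state-in / Q-values-out mapping, not a full DQN (no replay buffer or target network):

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return self.net(state)              # shape: (batch, n_actions)

q_net = QNetwork(state_dim=4, n_actions=2)  # hypothetical sizes
q_values = q_net(torch.randn(1, 4))         # input: a state
action = q_values.argmax(dim=1).item()      # greedy action from the Q-values
```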

Drawbacks of deep Q-learning

  • Cannot handle continuous action spaces

  • Cannot learn stochastic policies, since the policy is computed deterministically from the Q function

Learning the policy directly

  • Value function: how good is an action at a given state

  • Policy: the agent's behaviour function

Policy Gradient

  • Deep Q-learning: approximate Q(s,a) and infer the policy from it

  • Policy Gradient: estimate the policy directly

Basic idea

  1. Start out with an arbitrary random policy

  2. Play the game for a while and sample some actions

  3. Increase the probability of actions that lead to high reward, and decrease the probability of actions that lead to low reward

Finding the best policy: two steps

  • Measure the quality of the policy \pi_\theta (\theta represents the network parameters) by defining a score function J(\theta)

  • Use policy gradient ascent to update the parameters \theta to improve J(\theta)

    Step 1: Measure the policy

    • Store all transitions from all the episodes played by the agent under the current policy \pi_\theta


    Step 2: Policy gradient ascent

    • The idea is to compute the gradient of the score of the current policy and update the parameters in the direction of the greatest increase.

      • Policy: \pi_\theta(a|s)

      • Objective (score) function: J(\theta) = E_{\pi_\theta}[R_t + \gamma R_{t+1} + \gamma^2 R_{t+2} + ...]

      • Gradient: \nabla_\theta J(\theta)

      • Update: \theta \leftarrow \theta + \alpha \nabla_\theta J(\theta)
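
A minimal REINFORCE-style sketch of this ascent step, assuming PyTorch; in practice the ascent on J(\theta) is implemented as descent on its negative. The network size, the stored states/actions, and the returns below are all made up for illustration:

```python
import torch
import torch.nn as nn

# Policy network pi_theta(a|s): state in, action probabilities out.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                       nn.Linear(32, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

# Hypothetical batch of stored transitions (Step 1): states, the actions
# actually taken, and the discounted return that followed each action.
states  = torch.randn(8, 4)
actions = torch.randint(0, 2, (8,))
returns = torch.randn(8)

# Score: mean of log pi(a|s) weighted by the return; ascend its gradient.
probs    = policy(states)
log_prob = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))
loss     = -(log_prob * returns).mean()  # minimising -J == ascending J

optimizer.zero_grad()
loss.backward()
optimizer.step()                         # theta <- theta + alpha * grad J
```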

Important issues: Model-based RL

  • Deep Q-learning and Policy Gradient are model-free algorithms

  • A model predicts what the environment will do next

  • A model-based RL agent has two parts:

    1. one part to predict the next state

    2. one part to predict the next immediate reward
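
As a small illustration (mine, not from the notes), for a deterministic environment such a model can be learned simply by remembering observed transitions; predict() then returns exactly the two quantities above:

```python
class TabularModel:
    """Learns (state, action) -> (next state, immediate reward) from experience."""
    def __init__(self):
        self.table = {}

    def update(self, state, action, next_state, reward):
        self.table[(state, action)] = (next_state, reward)

    def predict(self, state, action):
        # Part 1: the next state; Part 2: the next immediate reward.
        return self.table.get((state, action))

model = TabularModel()
model.update("s1", "right", "s2", 0.0)
print(model.predict("s1", "right"))  # ('s2', 0.0)
```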

Actor Critic

  • Policy Gradient may converge more slowly than Deep Q-learning; it can take longer to train and need more data.

  • Actor Critic: a hybrid between value-based learning and policy-based learning

Exploration vs Exploitation

Reinforcement learning is trial-and-error learning; the agent should discover a good policy from its experience of interacting with the environment.

  • Exploration finds more information about the environment

  • Exploitation exploits known information to maximise the reward

ε-greedy exploration

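A minimal sketch of ε-greedy action selection over a Q table (mine, not from the notes): with probability ε take a random action to explore, otherwise take the currently best action to exploit:

```python
import numpy as np

def epsilon_greedy(Q, state, epsilon=0.1):
    """Random action with probability epsilon, otherwise the greedy one."""
    n_actions = Q.shape[1]
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)   # explore
    return int(np.argmax(Q[state]))           # exploit
```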

Credit assignment problem

  • In RL we assume that, because we lost the episode, all of the actions we took there must have been bad, and the likelihood of those actions will be reduced in the future

  • However, for most of the episode we may actually have been playing very well

  • Credit assignment problem: which actions are responsible for the reward we get in the future?

  • Sparse reward setting: we only get a reward after the entire episode

Style of play

Learning outcome

  • What is policy gradient

  • Why do we need policy gradient (vs Q-learning)

  • How to find the best policy with policy gradient

  • Important issues in RL

    • Model-based RL

    • Actor Critic

    • Exploration and exploitation

    • Credit assignment problem

    • Style of play in RL
