# Cross Entropy Method

How do we solve the policy optimization problem of **maximizing** the total reward given some parametrized policy?

## Discounted future reward

To begin with, for an episode the total reward is simply the sum of all the rewards. If our environment is stochastic, we can never be sure we will get the same rewards the next time we perform the same actions, so the further we look into the future the more the total future reward may diverge. For that reason it is common to use the **discounted future reward**, where the discount factor `discount` is a number between 0 and 1.

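As a minimal sketch (the function name is illustrative, not from the original text), the discounted return of a finished episode could be computed like this:

```python
def discounted_return(rewards, discount):
    # Sum the per-step rewards, weighting the reward at step t by discount**t.
    total = 0.0
    for t, reward in enumerate(rewards):
        total += reward * discount**t
    return total
```
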
A good strategy for an agent would be to always choose an action that maximizes the (discounted) future reward. In other words we want to maximize the expected reward per episode.

## Parametrized policy

A stochastic policy is defined as a conditional probability of some action given a state. A family of policies indexed by a parameter vector `theta` is called a parametrized policy. These policies are defined analogously to supervised learning classification or regression models: for discrete actions we output a vector of probabilities over the possible actions, and for continuous actions we output the mean and diagonal covariance of a Gaussian distribution from which we then sample our continuous actions.

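As an illustration (a sketch only; the names `W`, `b`, and `log_std` are assumptions, not from the original text), the two cases could look like this for a linear parametrization:

```python
import numpy as np

def discrete_policy_probs(ob, W, b):
    # Discrete case: one score per action, turned into probabilities via softmax.
    scores = ob.dot(W) + b
    exp_scores = np.exp(scores - scores.max())
    return exp_scores / exp_scores.sum()

def continuous_policy_sample(ob, W, b, log_std):
    # Continuous case: sample from a Gaussian with diagonal covariance.
    mean = ob.dot(W) + b
    return mean + np.exp(log_std) * np.random.randn(*mean.shape)
```
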
## Cross entropy method (CEM)

So how do we solve the policy optimization problem of maximizing the total (discounted) reward given some parametrized policy? The simplest approach is derivative-free optimization (DFO), which treats the problem as a black box with respect to the parameter `theta`. We try out many different `theta` and store the rewards for each episode. The main idea then is to move towards good `theta`.

One particular DFO approach is called the CEM. Here, at any point in time, you maintain a distribution over parameter vectors and move the distribution towards parameters with higher reward. This works surprisingly well, even if it is not that effective when `theta` is a high-dimensional vector.

## Algorithm

The idea is to initialize the `mean` and `sigma` of a Gaussian and then for `n_iter` iterations we (see the sketch after the list):

1. collect `batch_size` samples of `theta` from a Gaussian with the current `mean` and `sigma`
2. perform a noisy evaluation to get the total rewards with these `theta`s
3. select the `n_elite` best `theta`s into an elite set
4. update our `mean` and `sigma` to be those of the elite set

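A minimal sketch of this loop (the function and parameter names are assumptions, not from the original text; `noisy_evaluation` should run one or more episodes with the given `theta` and return the total reward):

```python
import numpy as np

def cem(noisy_evaluation, dim_theta, n_iter=100, batch_size=50, n_elite=10):
    # Start with an isotropic Gaussian over parameter vectors.
    mean = np.zeros(dim_theta)
    sigma = np.ones(dim_theta)
    for it in range(n_iter):
        # 1. sample a batch of parameter vectors
        thetas = mean + sigma * np.random.randn(batch_size, dim_theta)
        # 2. noisy evaluation of each theta
        rewards = np.array([noisy_evaluation(theta) for theta in thetas])
        # 3. keep the n_elite best parameter vectors
        elite_thetas = thetas[rewards.argsort()[-n_elite:]]
        # 4. refit the Gaussian to the elite set
        mean = elite_thetas.mean(axis=0)
        sigma = elite_thetas.std(axis=0)
    return mean
```
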
## Discrete linear policy

For the `CartPole-v0` case, let us define the linear parametrized policy as in the following diagram:

```
                       ┌────── theta ~ N(mean, std) ──────┐
  4 observations             W (4x2)              b (2)
[-0.1 -0.4 0.06 0.5]  *  [[ 2.2  4.5 ]
                          [ 3.4  0.2 ]      +   [[ 0.2 ]
                          [ 4.2  3.4 ]           [ 1.1 ]]
                          [ 0.1  9.0 ]]

  = [-0.4 0.1]  ──argmax()──>  action 1        (2 actions: 0 and 1)
```

This means we can use the `Space` introspection of the `env` to create an appropriately sized `theta` parameter vector, of which we use one part as the matrix `W` and the rest as the bias vector `b`, so that the number of outputs corresponds to the number of actions of our particular `env`.

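A minimal sketch of such a policy (the class name is illustrative; it assumes `theta` and `ob` are flat NumPy arrays, a `Box` observation space, and a `Discrete` action space):

```python
class LinearDiscretePolicy(object):
    def __init__(self, theta, ob_space, ac_space):
        # Split theta into a weight matrix W and a bias vector b,
        # sized from the env's observation and action spaces.
        dim_ob = ob_space.shape[0]
        n_actions = ac_space.n
        assert len(theta) == (dim_ob + 1) * n_actions
        self.W = theta[: dim_ob * n_actions].reshape(dim_ob, n_actions)
        self.b = theta[dim_ob * n_actions :].reshape(1, n_actions)

    def act(self, ob):
        # One linear score per action, then pick the argmax.
        y = ob.dot(self.W) + self.b
        return int(y.argmax())
```
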
## Extra noise

We can also add extra decayed noise to our distribution in the form of `extra_cov` which decays after `extra_decay_time` iterations.

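One way the sampling step could incorporate this (a sketch; the linear decay to zero over `extra_decay_time` iterations is an assumption, as are the parameter names; `mean` and `sigma` are assumed to be 1-D NumPy arrays):

```python
import numpy as np

def sample_thetas(mean, sigma, it, batch_size, extra_std=2.0, extra_decay_time=10):
    # Extra exploration noise on the sampling std, decayed linearly to zero
    # over extra_decay_time iterations (assumed schedule).
    extra_cov = max(1.0 - it / float(extra_decay_time), 0.0) * extra_std**2
    sample_std = np.sqrt(sigma**2 + extra_cov)
    return mean + sample_std * np.random.randn(batch_size, len(mean))
```
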
## Discounted total reward

We can also return the discounted total reward per episode via the `discount` parameter of the `do_episode` function:

```python
...
for t in xrange(num_steps):
    ...
    # weight the reward at step t by discount**t
    disc_total_rew += reward * discount**t
    ...
```
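
For context, a complete `do_episode` might look roughly like this (a sketch only; the actual signature and details are assumptions):

```python
def do_episode(policy, env, num_steps, discount=1.0):
    # Run one episode with the given policy and return the discounted total reward.
    disc_total_rew = 0.0
    ob = env.reset()
    for t in xrange(num_steps):
        action = policy.act(ob)
        ob, reward, done, _info = env.step(action)
        disc_total_rew += reward * discount**t
        if done:
            break
    return disc_total_rew
```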