Output range for continuous control policy network

I tried to implement simple vanilla policy gradient (REINFORCE) for a continuous control problem by adapting this PyTorch implementation to the continuous case, and I stumbled upon the following issue.
Usually, when the action space is discrete, the output of the policy network is bounded in (0,1)^n by the softmax function, which gives the probability that the agent picks a certain action given the state (the input to the network). However, when the action space is continuous, for example if we have K actions such that each action a_k has lower bound l_k and upper bound u_k, I haven't found a way (empirical or theoretical) to limit the output of the network (usually the means and standard deviations of the action distribution given the state) using l_k and u_k.
From the few trials I made, without constraining the output of the policy network it was very hard, if not impossible, to learn a good policy, but I might be doing something wrong since I am new to reinforcement learning.
My intuition suggests limiting the means and standard deviations output by the policy network using, for example, a sigmoid, and then scaling them by the absolute difference between l_k and u_k. I'm not quite sure how to do it properly, though, especially since the sampled action could exceed whatever bound you impose on the distribution parameters when using, for example, a Gaussian distribution.
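Here is a minimal PyTorch sketch of what I have in mind (the class and all hyperparameters are placeholders of mine, not taken from the linked implementation): the mean is squashed into (l_k, u_k) with a sigmoid and rescaled, the standard deviation is kept positive by exponentiating a learned log-std, and the sampled action is clipped back into the bounds.

    import torch
    import torch.nn as nn

    class BoundedGaussianPolicy(nn.Module):
        """Hypothetical policy head for K bounded continuous actions."""
        def __init__(self, obs_dim, act_dim, low, high, hidden=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
            self.mu_head = nn.Linear(hidden, act_dim)
            self.log_std = nn.Parameter(torch.zeros(act_dim))   # state-independent std
            self.register_buffer("low", torch.as_tensor(low, dtype=torch.float32))
            self.register_buffer("high", torch.as_tensor(high, dtype=torch.float32))

        def forward(self, obs):
            h = self.body(obs)
            # squash the mean into (l_k, u_k): sigmoid gives (0, 1), then rescale
            mu = self.low + (self.high - self.low) * torch.sigmoid(self.mu_head(h))
            return torch.distributions.Normal(mu, torch.exp(self.log_std))

        def act(self, obs):
            dist = self.forward(obs)
            a = dist.sample()
            log_prob = dist.log_prob(a).sum(-1)
            # the Gaussian still has unbounded support, so the sample itself is clipped;
            # a more principled fix is a tanh-squashed Gaussian with a log-prob correction
            a = torch.max(torch.min(a, self.high), self.low)
            return a, log_prob

I am aware that a more principled alternative (used, for example, in soft actor-critic) is to sample from an unbounded Gaussian and pass the sample through tanh, correcting the log-probability for the change of variables, but I am unsure which approach is standard.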
Am I missing something? Are there established ways to limit the output of the policy network for continuous action spaces, or is there no need to do that at all?
I am not sure this is the right place for this question; if not, I will be glad if you point me to a better place.

Related

REINFORCE algorithm for a continuous action space

I have recently started exploring and playing around with reinforcement learning, and have managed to wrap my head around discrete action spaces, and have working implementations of a few environments in OpenAI Gym using Q-learning and Expected SARSA. However, I am running into some trouble understanding the handling of continuous action spaces.
From what I have understood so far, I have constructed a neural network that outputs the mean of a Gaussian distribution, with the standard deviation being fixed for now. Using the output of the neural network, I then sample an action from the Gaussian distribution and perform this action in the environment. For each step in an episode I save the starting state, action and reward. Once the episode is over I am supposed to train the network, but this is where I am struggling.
From what I understand, the loss of the policy network is the log-probability of the chosen action multiplied by the discounted reward of that action. For discrete actions this seems straightforward enough: have a softmax layer as your final layer and define a custom loss function whose value is the logarithm of the softmax output multiplied by the target value, which we set to be the discounted reward.
But how do you do this for a continuous action? The neural network outputs the mean, not the probability of an action or even the action itself, so how do I define a loss function to pass to Keras to perform the learning step in TensorFlow for the continuous case?
I have read through a variety of articles on policy optimization, and while an article might mention the continuous case, all of the associated code focuses on the discrete action space case, which is starting to become fairly disheartening. Can someone help me understand how to implement the continuous case in TensorFlow 2.0?
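The closest I have come to a sketch of the continuous loss in TensorFlow 2 is the following (one action dimension, fixed standard deviation; the network shape, SIGMA and all names are placeholders of mine, and I am not sure it is correct). The idea is to write the Gaussian log-density of the sampled action with plain TF ops, multiply it by the discounted return, and minimize the negative of that inside a GradientTape.

    import numpy as np
    import tensorflow as tf

    # Hypothetical policy network: state -> mean of a Gaussian (std fixed for now).
    policy = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="tanh"),
        tf.keras.layers.Dense(1)                    # one continuous action dimension
    ])
    optimizer = tf.keras.optimizers.Adam(1e-3)
    SIGMA = 0.5                                     # fixed standard deviation

    def gaussian_log_prob(a, mu, sigma):
        # log N(a | mu, sigma), written with plain TF ops
        return (-0.5 * tf.square((a - mu) / sigma)
                - tf.math.log(sigma)
                - 0.5 * np.log(2.0 * np.pi))

    def train_on_episode(states, actions, returns):
        """states: [T, obs_dim]; actions, returns: [T, 1] (discounted returns)."""
        with tf.GradientTape() as tape:
            mu = policy(states)
            logp = gaussian_log_prob(actions, mu, SIGMA)
            loss = -tf.reduce_mean(logp * returns)  # REINFORCE: ascend E[logp * G]
        grads = tape.gradient(loss, policy.trainable_variables)
        optimizer.apply_gradients(zip(grads, policy.trainable_variables))

I believe the same structure would work with tfp.distributions.Normal(mu, SIGMA).log_prob(actions) if one prefers TensorFlow Probability over the hand-written log-density.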

Episodic Semi-gradient Sarsa with Neural Network

While trying to implement Episodic Semi-gradient Sarsa with a neural network as the approximator, I wondered how to choose the optimal action based on the currently learned weights of the network. If the action space is discrete, I can just calculate the estimated value of the different actions in the current state and choose the one which gives the maximum. But this does not seem to be the best way of solving the problem, and it does not work if the action space is continuous (like the acceleration of a self-driving car, for example).
So, basically I am wondering how to implement the 10th line, "Choose A' as a function of q̂(S', ·, w)", in Sutton's pseudo-code for this algorithm.
How are these problems typically solved? Can anyone recommend a good example of this algorithm using Keras?
Edit: Do I need to modify the pseudo-code when using a network as the approximator, so that I simply minimize the MSE between the network's prediction and the reward R, for example?
I wondered how I choose the optimal action based on the currently learned weights of the network
You have three basic choices:
Run the network multiple times, once for each possible value of A' to go with the S' value that you are considering. Take the maximum value as the predicted optimum action (with probability 1-ε; otherwise choose randomly, for the ε-greedy policy typically used in SARSA).
Design the network to estimate all action values at once - i.e. to have |A(s)| outputs (perhaps padded to cover "impossible" actions that you need to filter out). This alters the gradient calculations slightly: zero gradient should be applied to the inactive outputs of the last layer (i.e. anything not matching the A of (S,A)). Again, just take the maximum valid output as the estimated optimum action. This can be more efficient than running the network multiple times, and it is also the approach used by the recent DQN Atari-playing bot and by AlphaGo's policy networks (see the sketch below).
Use a policy-gradient method, which works by using samples to estimate a gradient that would improve a policy estimator. See chapter 13 of Sutton and Barto's second edition of Reinforcement Learning: An Introduction for more details. Policy-gradient methods become attractive when there are large numbers of possible actions, and they can cope with continuous action spaces (by estimating the parameters of a distribution over optimal actions - e.g. the mean and standard deviation of a normal distribution, which you sample from to take your action). You can also combine policy gradients with a state-value approach in actor-critic methods, which can be more efficient learners than pure policy-gradient approaches.
Note that if your action space is continuous, you don't have to use a policy-gradient method; you could just quantise the action. Also, in some cases, even when actions are in theory continuous, you may find the optimal policy only ever uses extreme values (the classic mountain car example falls into this category: the only useful actions are maximum forward acceleration and maximum backwards acceleration).
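As a rough illustration of the second option (all layer sizes and names here are placeholder choices, written with Keras since that was asked about): one forward pass yields a value per discrete action, and ε-greedy selection masks out invalid actions before taking the argmax.

    import random
    import numpy as np
    import tensorflow as tf

    # Hypothetical q(s, ., w) network: one output per discrete action.
    n_actions = 4
    qnet = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_actions)
    ])

    def epsilon_greedy(state, valid_mask, eps=0.1):
        """state: shape (obs_dim,); valid_mask: 1 for allowed actions, 0 for 'impossible' ones."""
        valid = np.flatnonzero(valid_mask)
        if random.random() < eps:
            return int(random.choice(valid))        # explore among valid actions
        q = qnet(state[None, :]).numpy()[0]
        q[valid_mask == 0] = -np.inf                # filter out impossible actions
        return int(np.argmax(q))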
Do I need to modify the pseudo-code when using a network as the approximator? So, that I simply minimize the MSE of the prediction of the network and the reward R for example?
No. There is no separate loss function in the pseudocode, such as the MSE you would see used in supervised learning. The error term (often called the TD error) is given by the part in square brackets, and achieves a similar effect. The term ∇q̂(S,A,w) literally means the gradient of the estimator itself - not the gradient of any loss function.
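To make the "gradient of the estimator, not of a loss" point concrete, here is a hedged sketch of one semi-gradient Sarsa update, reusing the qnet from the sketch above. Wrapping the TD error in tf.stop_gradient ensures that only ∇q̂(S,A,w) receives gradient, which reproduces the update w ← w + α[R + γq̂(S',A',w) − q̂(S,A,w)]∇q̂(S,A,w).

    optimizer = tf.keras.optimizers.SGD(learning_rate=0.05)
    gamma = 0.99

    def semi_gradient_sarsa_step(s, a, r, s_next, a_next, done):
        """One update: w <- w + alpha*[R + gamma*q(S',A',w) - q(S,A,w)]*grad q(S,A,w)."""
        q_next = 0.0 if done else qnet(s_next[None, :])[0, a_next]
        target = r + gamma * q_next                 # bracketed target, held constant
        with tf.GradientTape() as tape:
            q_sa = qnet(s[None, :])[0, a]           # q(S, A, w)
            # surrogate whose gradient is -td_error * grad q(S,A,w)
            loss = -tf.stop_gradient(target - q_sa) * q_sa
        grads = tape.gradient(loss, qnet.trainable_variables)
        optimizer.apply_gradients(zip(grads, qnet.trainable_variables))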

How can I improve the performance of a feedforward network as a q-value function approximator?

I'm trying to navigate an agent in an n×n gridworld domain using Q-learning plus a feedforward neural network as a Q-function approximator. Basically, the agent should find the best/shortest way to reach a certain terminal goal position (+10 reward). Every step the agent takes it gets -1 reward. The gridworld also contains some positions the agent should avoid (-10 reward, terminal states too).
So far I have implemented a Q-learning algorithm that saves all Q-values in a Q-table, and the agent performs well.
In the next step, I want to replace the Q-table with a neural network, trained online after every step of the agent. I tried a feedforward NN with one hidden layer and four outputs, representing the Q-values for the possible actions in the gridworld (north, south, east, west).
As input I used an n×n zero matrix that has a "1" at the current position of the agent.
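For concreteness, a small NumPy sketch of that input encoding (the function name and the flattening are my own choices):

    import numpy as np

    def one_hot_grid(pos, n):
        """n x n zero matrix with a 1 at the agent's (row, col) position, flattened."""
        grid = np.zeros((n, n), dtype=np.float32)
        grid[pos] = 1.0
        return grid.ravel()          # length n*n input vector for the network

    # one_hot_grid((2, 3), 4) -> 16-dimensional vector for the 4 x 4 map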
To reach my goal I tried to solve the problem from the ground up:
Explore the gridworld with standard Q-learning and use the Q-map as training data for the network once Q-learning is finished
--> worked fine
Use Q-learning and provide the updates of the Q-map as training data for the NN (batch size = 1)
--> worked well
Replace the Q-map completely with the NN. (This is the point where it gets interesting!)
-> FIRST MAP: 4 x 4
As described above, I have 16 "discrete" inputs and 4 outputs, and it works fine with 8 neurons (ReLU) in the hidden layer (learning rate: 0.05). I used a greedy policy with an epsilon that decreases from 1 to 0.1 within 60 episodes.
The test scenario is shown here. Performance is compared between standard Q-learning with a Q-map and "neural" Q-learning (in this case I used 8 neurons and different dropout rates).
To sum it up: neural Q-learning works well for small grids, and the performance is okay and reliable.
-> Bigger MAP: 10 x 10
Now I tried to use the neural network for bigger maps.
At first I tried this simple case.
In my case the neural net looks as follows: 100 inputs, 4 outputs, and about 30 neurons (ReLU) in one hidden layer; again I used a decreasing exploration factor for the greedy policy; over 200 episodes the learning rate decreases from 0.1 to 0.015 to increase stability.
At first I had problems with convergence and interpolation between single positions, caused by the discrete input vector.
To solve this I added some neighbouring positions to the vector, with values depending on their distance to the current position. This improved the learning a lot and the policy got better. Performance with 24 neurons is shown in the picture above.
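As a rough sketch of that transformation in NumPy (the exact fall-off I used is not important here, so an exponential decay with Euclidean distance stands in for it):

    import numpy as np

    def smoothed_grid(pos, n, scale=1.0):
        """Like the one-hot input, but neighbouring cells get values that
        decay with their distance to the agent's position."""
        rows, cols = np.indices((n, n))
        dist = np.sqrt((rows - pos[0]) ** 2 + (cols - pos[1]) ** 2)
        return np.exp(-dist / scale).astype(np.float32).ravel()

    # smoothed_grid((5, 5), 10) -> 100-dimensional input with a soft bump at (5, 5)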
Summary: the simple case is solved by the network, but only with a lot of parameter tuning (number of neurons, exploration factor, learning rate) and a special input transformation.
Now here are my questions/problems I still haven't solved:
(1) My network is able to solve really simple cases and examples on a 10 x 10 map, but it fails as the problem gets a bit more complex. In cases where failing is very likely, the network has no chance of finding a correct policy.
I'm open to any idea that could improve performance in these cases.
(2) Is there a smarter way to transform the input vector for the network? I'm sure that adding the neighbouring positions to the input vector improves the interpolation of the Q-values over the map, but on the other hand it makes it harder to train special/important positions into the network. I already tried a standard Cartesian two-dimensional input (x/y) at an early stage, but that failed.
(3) Is there another network type, besides a feedforward network with backpropagation, that generally produces better results for Q-function approximation? Have you seen projects where a FF-NN performs well with bigger maps?
It's known that Q-Learning + a feedforward neural network as a q-function approximator can fail even in simple problems [Boyan & Moore, 1995].
Rich Sutton has a question in the FAQ of his web site related to this.
A possible explanation is the phenomenon known as interference, described in [Barreto & Anderson, 2008]:
Interference happens when the update of one state–action pair changes the Q-values of other pairs, possibly in the wrong direction.
Interference is naturally associated with generalization, and also happens in conventional supervised learning. Nevertheless, in the reinforcement learning paradigm its effects tend to be much more harmful. The reason for this is twofold. First, the combination of interference and bootstrapping can easily become unstable, since the updates are no longer strictly local. The convergence proofs for the algorithms derived from (4) and (5) are based on the fact that these operators are contraction mappings, that is, their successive application results in a sequence converging to a fixed point which is the solution for the Bellman equation [14,36]. When using approximators, however, this asymptotic convergence is lost, [...]
Another source of instability is a consequence of the fact that in on-line reinforcement learning the distribution of the incoming data depends on the current policy. Depending on the dynamics of the system, the agent can remain for some time in a region of the state space which is not representative of the entire domain. In this situation, the learning algorithm may allocate excessive resources of the function approximator to represent that region, possibly “forgetting” the previous stored information.
One way to alleviate the interference problem is to use a local function approximator. The more independent each basis function is from each other, the less severe this problem is (in the limit, one has one basis function for each state, which corresponds to the lookup-table case) [86]. A class of local functions that have been widely used for approximation is the radial basis functions (RBFs) [52].
So, in your kind of problem (an n×n gridworld), an RBF neural network should produce better results.
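To make the RBF suggestion concrete, here is a hedged sketch of Gaussian radial basis features for grid positions (the number of centres and the width are arbitrary choices); these features would replace the one-hot input and feed a linear layer or a small network:

    import numpy as np

    def rbf_features(pos, n, n_centres=5, width=1.5):
        """Gaussian RBF encoding of an (x, y) grid position.

        Centres are laid out on an n_centres x n_centres grid over the n x n world;
        each basis function responds mostly to states near its centre, which keeps
        updates local and limits interference."""
        cx = np.linspace(0, n - 1, n_centres)
        centres = np.array([(x, y) for x in cx for y in cx])      # (n_centres^2, 2)
        d2 = np.sum((centres - np.asarray(pos, dtype=float)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * width ** 2)).astype(np.float32)

    # rbf_features((3, 7), 10) -> 25 local features instead of a 100-dim one-hot vector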
References
Boyan, J. A. & Moore, A. W. (1995). Generalization in reinforcement learning: Safely approximating the value function. In Advances in Neural Information Processing Systems 7 (NIPS-7). San Mateo, CA: Morgan Kaufmann.
Barreto, A. M. S. & Anderson, C. W. (2008). Restricted gradient-descent algorithm for value-function approximation in reinforcement learning. Artificial Intelligence, 172, 454–482.

"Ringing Artifacts" and box filters

I want to use a box filter, but I think it's causing "ringing artifacts" (it could be something else). I think I remember my teacher mentioning a connection between the two, though I'm not totally sure that what I'm seeing is ringing; that's just the term he used. Is there a connection between box filters and ringing artifacts, or am I just witnessing the result of something else?
Ringing is an artifact that occurs when the kernel has oscillations in the spatial domain. Although the box filter oscillates a lot in the Fourier domain, it does not in the spatial/temporal domain, so you should be fine if you convolve directly in the spatial domain.
For instance, if you take a Dirac impulse and convolve it with your box filter, you obtain exactly a box, which is the expected result. Note that, due to the infinite extent of the box kernel's spectrum in the Fourier domain, this will not remove all the high frequencies (i.e., you will still have high frequencies in your final signal, as illustrated by the box example).
However, if you filter with a box in the frequency domain, this corresponds to filtering with a sinc kernel in the spatial domain, which will produce ringing artifacts, but will perfectly remove high frequencies.
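A small NumPy experiment illustrates the difference (the signal and cutoff are arbitrary): convolving a step edge with a box in the spatial domain merely smooths it, whereas applying an ideal (box) low-pass in the frequency domain, i.e. convolving with a sinc, produces the overshoot and oscillations near the edge that are called ringing.

    import numpy as np

    n = 256
    step = np.zeros(n)
    step[n // 2:] = 1.0                            # a sharp edge

    # 1) Box filter applied in the spatial domain: a moving average, no ringing.
    box = np.ones(9) / 9.0
    smoothed = np.convolve(step, box, mode="same")

    # 2) Ideal ("box") low-pass applied in the frequency domain: ringing appears.
    spectrum = np.fft.fft(step)
    freqs = np.fft.fftfreq(n)
    spectrum[np.abs(freqs) > 0.05] = 0.0           # brick-wall cutoff
    lowpassed = np.real(np.fft.ifft(spectrum))

    # 'smoothed' stays monotone around the edge; 'lowpassed' overshoots and oscillates.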
For this reason, people tend to make a compromise between suppressing as many high frequencies as possible and not having oscillations. In any case, you cannot achieve both at the same time (think of the Heisenberg uncertainty principle: the product of the variance in the spatial domain and the variance in the frequency domain is bounded from below).
Such a tradeoff can be achieved with one of the following kernels (non-exhaustively):
the Prolate kernel: it is optimal in a certain sense with respect to minimizing ringing while minimizing the amount of high frequencies. However, it's not easy to compute.
the Gabor kernel: it is also optimal in some other sense (a somewhat different criterion), but much easier to compute.
the Gaussian / Hanning / Hamming kernels: they are not optimal, but they are the most commonly used as they are pretty cheap and easy to analyze.
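For reference, the common compromise kernels from the last item can each be generated in a line or two with NumPy/SciPy (the lengths and the Gaussian width below are arbitrary):

    import numpy as np
    from scipy.signal import windows

    N = 51
    hann = np.hanning(N)                           # Hanning window
    hamming = np.hamming(N)
    gauss = windows.gaussian(N, std=7)             # Gaussian kernel, std in samples
    # Normalise each kernel to sum to 1 before using it as a smoothing filter.
    hann, hamming, gauss = (k / k.sum() for k in (hann, hamming, gauss))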
"Ringing artifacts" is indeed a well-known expression.
Since there is a direct connection between filtering in the spatial domain and filtering in the frequency domain, you should start by considering how a box is represented in the latter. That is what causes the artifacts you see. So there is in fact a direct connection between the two (box filter and ringing artifacts).

Neural network for approximation function for board game

I am trying to make a neural network to approximate some unknown function (for my neural network course). The problem is that this function has very many variables, but many of them are not important (for example, in f(x,y,z) = x + y, z is not important). How could I design (and train) a network for this kind of problem?
To be more specific, the function is an evaluation function for some board game with unknown rules, and I need to somehow learn these rules from the agent's experience. After each move a score is given to the agent, so it actually needs to find out how to get the maximum score.
I tried to pass the agent's neighborhood to the network, but there are too many variables which are not important for the score, and the agent keeps finding very local solutions.
If you have a sufficient amount of data, your ANN should be able to ignore the noisy inputs. You may also want to try other learning approaches like scaled conjugate gradient, or simple heuristics like momentum or early stopping, so your ANN isn't over-learning the training data.
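If you happen to be training with Keras, momentum and early stopping are both readily available; here is a minimal sketch, with synthetic placeholder data standing in for your (board features, score) pairs:

    import numpy as np
    import tensorflow as tf

    # Toy data standing in for (board features, observed score) pairs.
    x_train = np.random.rand(1000, 20).astype("float32")
    y_train = np.random.rand(1000, 1).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
                  loss="mse")

    # Stop when the validation loss stops improving instead of over-learning.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=10, restore_best_weights=True)

    model.fit(x_train, y_train, validation_split=0.2, epochs=500,
              callbacks=[early_stop], verbose=0)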
If you think there may be multiple local solutions and you think you can get enough training data, then you could try a "mixture of experts" approach. If you go with a mixture of experts, you should use ANNs that are too "small" to solve the entire problem on their own, to force the model to use multiple experts.
So, you are given a set of states and actions and your target values are the score after the action is applied to the state? If this problem gets any hairier, it will sound like a reinforcement learning problem.
Does this game have discrete actions? Does it have a discrete state space? If so, maybe a decision tree would be worth trying?