Q-learning using neural networks

I'm trying to implement the deep Q-learning algorithm for a Pong game.
I've already implemented Q-learning with a table as the Q-function. It works very well and learns to beat the naive AI within 10 minutes. But I can't make it work
with a neural network as the Q-function approximator.
I want to know if I am on the right track, so here is a summary of what I am doing:
I'm storing the current state, the action taken and the reward as the current Experience in the replay memory.
I'm using a multilayer perceptron as the Q-function, with 1 hidden layer of 512 hidden units. For the input -> hidden layer I use a sigmoid activation function; for the hidden -> output layer I use a linear activation function.
A state is represented by the positions of both players and the ball, as well as the velocity of the ball. Positions are remapped to a much smaller state space.
I am using an epsilon-greedy approach for exploring the state space where epsilon gradually goes down to 0.
When learning, a random batch of 32 subsequent experiences is selected. Then I
compute the target Q-values for each state and action Q(s, a) in the batch:
for (Experience e : batch) {
    double target;
    if (e.isEndOfEpisode()) {
        target = e.getReward();
    } else {
        // qMaxPostState = max over all actions a of Q(nextState, a)
        target = e.getReward() + discountFactor * qMaxPostState;
    }
    targets.add(target);
}
Now I have a set of 32 target Q-values, and I train the neural network on them using batch gradient descent. I am doing just 1 training step. How many should I do?
I am programming in Java and using Encog for the multilayer perceptron implementation. The problem is that training is very slow and the resulting play is very weak. I think I am missing something, but I can't figure out what. I would expect at least a somewhat decent result, since the table approach has no problems.
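For reference, the training part amounts to roughly this (a simplified sketch assuming the Encog 3.x API; qNet, inputs (the 32 state vectors) and ideals (the corresponding target Q-vectors) are stand-ins for my own variables):

import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.training.propagation.back.Backpropagation;

// One replay update: wrap the 32 (state, target Q-vector) pairs in a data set
// and perform a single batch gradient-descent step on the network.
MLDataSet trainingSet = new BasicMLDataSet(inputs, ideals);
Backpropagation train = new Backpropagation(qNet, trainingSet, learningRate, 0.0);
train.iteration();   // this is the single training step mentioned above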

I'm using a multilayer perceptron as the Q-function, with 1 hidden layer of 512 hidden units.
Might be too big. Depends on your input / output dimensionality and the problem. Did you try fewer?
Sanity checks
Can the network possibly learn the necessary function?
Collect ground truth input/output. Fit the network in a supervised way. Does it give the desired output?
A common error is to use the wrong activation function on the output layer. Most of the time you want a linear output activation (as you have). You also want the network to be as small as possible, because RL is pretty unstable: you can have 99 runs where it doesn't work and 1 where it does.
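For example, a supervised sanity check could look roughly like this with Encog 3.x (a sketch only; qNet is your network, and stateVectors / tableQValues are assumed to come from your working table-based agent):

import java.util.Arrays;
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLData;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

// Fit the network on fixed ground-truth Q-values taken from the table-based agent.
MLDataSet groundTruth = new BasicMLDataSet(stateVectors, tableQValues);
ResilientPropagation train = new ResilientPropagation(qNet, groundTruth);
int epoch = 0;
do {
    train.iteration();
    epoch++;
} while (train.getError() > 0.001 && epoch < 2000);
System.out.println("supervised fit error: " + train.getError());

// Spot-check: the network output should now be close to the table's Q-values.
MLData out = qNet.compute(new BasicMLData(stateVectors[0]));
System.out.println(Arrays.toString(out.getData()) + " vs " + Arrays.toString(tableQValues[0]));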
Do I explore enough?
Check how much you explore. Maybe you need more exploration, especially in the beginning?
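A common pattern is something like this (sketch; the decay schedule, the floor value and argmaxQ are placeholders):

// Epsilon-greedy action selection with a linear decay and a floor,
// so some exploration remains even late in training (illustrative sketch).
double epsilon = Math.max(0.1, 1.0 - step / (double) decaySteps);
int action;
if (random.nextDouble() < epsilon) {
    action = random.nextInt(numActions);      // explore: random action
} else {
    action = argmaxQ(qNet, stateVector);      // exploit: action with the highest predicted Q-value
}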
See also
My DQN agent
keras-rl

Try ReLU (or better, leaky ReLU) units in the hidden layer and a linear activation for the output.
Try changing the optimizer; sometimes SGD with proper learning-rate decay helps.
Sometimes Adam works fine.
Reduce the number of hidden units. It might just be too many.
Adjust the learning rate. The more units you have, the more impact the learning rate has, since the output is a weighted sum over all the neurons of the previous layer.
Try using the local position of the ball, i.e. ballY - paddleY. This can help drastically, because it reduces the input to "above or below the paddle", distinguished by the sign (see the sketch below). Remember: if you use the local position, you won't need your own paddle position, and the opponent's paddle position must be local too.
Instead of the velocity, you can give it the previous state as an additional input.
The network can compute the difference between those 2 frames itself.
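As a rough illustration of the local-position idea (variable names are made up):

// Paddle-local state instead of absolute positions (illustrative sketch).
// The sign of relativeBallY already encodes "ball above or below my paddle".
double relativeBallY     = ballY - myPaddleY;
double prevRelativeBallY = prevBallY - myPaddleY;    // previous frame, instead of an explicit velocity
double relativeEnemyY    = enemyPaddleY - myPaddleY; // opponent's paddle, local as well
double[] state = {
    relativeBallY,
    ballX,               // horizontal distance along the field
    relativeEnemyY,
    prevRelativeBallY,   // the network can infer the ball's motion from the difference
    prevBallX
};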

Related

Reinforcement Learning - Won't Converge

I'm working on my bachelor thesis.
My topic is reinforcement learning. The setup:
Unity3D (C#)
My own neural network framework
I confirmed that the network works by training it to approximate a sine function.
It can approximate it reasonably well; there are some values that don't quite reach their target, but it's good enough.
When trained on single values it always converges.
Here is my problem:
I'm trying to teach my network the Q-value function of a simple game,
catching balls:
In this game it just has to catch a ball dropping from a random position and at a random angle.
+1 if catch
-1 if failed
My network model has 1 hidden layer, with the number of neurons ranging from 45 to 180 (I tested these numbers with no success).
It uses experience replay with 32 samples from a 100k memory, with a learning rate of 0.0001.
It learns for 50,000 frames, then tests for 10,000 frames. This happens 10 times.
Inputs are PlatformPosX, BallPosX, BallPosY from the last 4 frames.
Pseudocode:
Choose an action (epsilon-greedy)
Execute the action
Store (state, action, currentReward, done) in the replay memory
If in the learning phase: replay a batch
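Conceptually the replay memory is just a fixed-size buffer with uniform random sampling, along these lines (a simplified sketch written in Java rather than my C#; Transition is a placeholder container for state, action, reward, done):

import java.util.Random;

// Fixed-size replay memory with uniform random sampling (illustrative sketch).
class Transition {
    double[] state; int action; double reward; boolean done;
}

class ReplayMemory {
    private final Transition[] buffer;
    private final Random rng = new Random();
    private int size = 0, next = 0;

    ReplayMemory(int capacity) { buffer = new Transition[capacity]; }

    void store(Transition t) {                // overwrite the oldest entry once the buffer is full
        buffer[next] = t;
        next = (next + 1) % buffer.length;
        size = Math.min(size + 1, buffer.length);
    }

    Transition[] sampleBatch(int n) {         // uniform sampling (with replacement) for the replay step
        Transition[] batch = new Transition[n];
        for (int i = 0; i < n; i++) {
            batch[i] = buffer[rng.nextInt(size)];
        }
        return batch;
    }
}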
My problem is:
Its actions start saturating to either 0 or 1, with some variance, sometimes.
It never reaches an ideal policy, such as simply having the platform follow the ball.
EDIT:
Sorry for the sparse info...
My Q-value function is trained with the target:
reward + gamma * estimatedNextValue
So it does use discounting.
Why would you possibly expect that to work?
Your training can barely approximate a 1-dimensional function, and now you expect it to solve a 12-dimensional function that involves a differential equation? You should first have verified whether your training even converges for a multi-dimensional function at all, with the chosen training parameters.
Your training, given the little detail you provided, also appears unsuitable. There is hardly a chance it ever successfully catches the ball, and even when it does, you are mostly rewarding it for random outputs. The only correlation between input and output arises in the last few frames, when the pad can only reach the target in time by a limited set of possible actions.
Then there is the choice of inputs. Don't require your model to do the differentiation by itself. Relevant inputs would have been x, y, dx, dy, preferably with x and y relative to the pad position rather than world coordinates. That would have a much better chance to converge, even if it only learned to keep x minimal.
Working with absolute world coordinates is pretty much bound to fail, as it requires the training to cover the entire range of possible input combinations, and the network to be big enough to even store all of those combinations. Be aware that the network isn't learning the actual function; it's learning an approximation for every possible set of inputs. Even if the ideal solution is actually just a linear equation, the non-linear properties of the activation function make it impossible to learn it in a generalized form for unbounded inputs.
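Concretely, the suggested inputs could be built like this (sketch; the variable names are made up):

// Pad-relative inputs x, y, dx, dy instead of raw world coordinates (illustrative sketch).
double x  = ballX - padX;          // horizontal offset to the pad; a good policy keeps this near zero
double y  = ballY - padY;          // height above the pad
double dx = ballX - prevBallX;     // velocity estimated from two consecutive frames
double dy = ballY - prevBallY;
double[] input = { x, y, dx, dy };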

Reinforcement Learning training

I am new to reinforcement learning and am doing a project on chess. I use a neural network and temporal-difference learning to train the engine to learn the game.
The neural network has one input layer (of 385 features), two hidden layers and one output layer, whose range is [-1, 1], where -1 means a loss, 1 a win and 0 a draw. I use TD(lambda) to learn through self-play, and by default only the next 10 moves are considered. All the weights are initialized in the range [-1, 1].
I use forward propagation to estimate the value of a state, but most of the values end up very close to either 1 or -1, even when the result is a draw, which makes me think the engine isn't learning well. I think some values grow large and dominate the result, so that changing the small weights does not help. I have changed the size of the two hidden layers, but it doesn't work (however, I have tried a toy example with a small size and dimension, and it converges: the estimated value is very close to the target after dozens of iterations). I don't know how to fix this; could someone give me some advice?
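For reference, the standard TD(lambda) weight update I follow has this form (sketched here, not my exact code; value() and gradient() stand for my forward pass and the derivative of the output with respect to each weight):

// Textbook TD(lambda) update with one eligibility trace per weight (illustrative sketch).
double delta = reward + gamma * value(nextState) - value(state);   // TD error
double[] grad = gradient(state);                                   // dV/dw at the current state
for (int i = 0; i < weights.length; i++) {
    trace[i] = gamma * lambda * trace[i] + grad[i];                 // decay old credit, add new
    weights[i] += alpha * delta * trace[i];
}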
Thank you.
Some references are listed below
Giraffe: Using Deep Reinforcement Learning to Play Chess
KnightCap
Web Stanford

How can I improve the performance of a feedforward network as a q-value function approximator?

I'm trying to navigate an agent through an n*n gridworld domain using Q-learning plus a feedforward neural network as the Q-function approximator. Basically the agent should find the best/shortest way to reach a certain terminal goal position (+10 reward). Every step the agent takes gives -1 reward. In the gridworld there are also some positions the agent should avoid (-10 reward, terminal states too).
So far I have implemented a Q-learning algorithm that saves all Q-values in a Q-table, and the agent performs well.
In the next step, I want to replace the Q-table with a neural network, trained online after every step of the agent. I tried a feedforward NN with one hidden layer and four outputs, representing the Q-values for the possible actions in the gridworld (north, south, east, west).
As input I used an n x n zero matrix that has a 1 at the current position of the agent.
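In code, that encoding is simply (sketch):

// n*n one-hot input: all zeros except a 1.0 at the agent's current cell.
double[] input = new double[n * n];
input[agentRow * n + agentCol] = 1.0;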
To reach my goal I tried to solve the problem from the ground up:
Explore the gridworld with standard Q-learning and use the Q-map as training data for the network once Q-learning is finished
--> worked fine
Use Q-learning and provide the updates of the Q-map as training data for the NN (batchSize = 1)
--> worked well
Replace the Q-map completely with the NN. (This is the point where it gets interesting!)
-> FIRST MAP: 4 x 4
As described above, I have 16 "discrete" inputs and 4 outputs, and it works fine with 8 neurons (ReLU) in the hidden layer (learning rate: 0.05). I used a greedy policy with an epsilon that decreases from 1 to 0.1 within 60 episodes.
The test scenario is shown here. Performance is compared between standard Q-learning with a Q-map and "neural" Q-learning (in this case I used 8 neurons and different dropout rates).
To sum it up: neural Q-learning works well for small grids, and the performance is okay and reliable.
-> Bigger MAP: 10 x 10
Now I tried to use the neural network for bigger maps.
At first I tried this simple case.
In my case the neural net looks as follows: 100 inputs; 4 outputs; about 30 neurons (ReLU) in one hidden layer; again I used a decreasing exploration factor for the greedy policy; over 200 episodes the learning rate decreases from 0.1 to 0.015 to increase stability.
At first I had problems with convergence and with interpolation between single positions, caused by the discrete input vector.
To solve this I added some neighbouring positions to the vector, with values depending on their distance to the current position. This improved the learning a lot and the policy got better. Performance with 24 neurons is shown in the picture above.
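Roughly, the smoothed encoding looks like this (a sketch with an exponential fall-off; the exact distance weighting is a design choice):

// Smoothed input: neighbouring cells get a value that decreases with their
// distance to the agent (1.0 at the agent's cell, decaying outwards).
double[] input = new double[n * n];
for (int r = 0; r < n; r++) {
    for (int c = 0; c < n; c++) {
        double dist = Math.hypot(r - agentRow, c - agentCol);
        input[r * n + c] = Math.exp(-dist);
    }
}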
Summary: the simple case is solved by the network, but only with a lot of parameter tuning (number of neurons, exploration factor, learning rate) and a special input transformation.
Now here are the questions/problems I still haven't solved:
(1) My network is able to solve really simple cases and examples on a 10 x 10 map, but it fails as the problem gets a bit more complex. In cases where failing is very likely, the network has no chance to find a correct policy.
I'm open to any idea that could improve performance in these cases.
(2) Is there a smarter way to transform the input vector for the network? I'm sure that adding the neighbouring positions to the input vector improves the interpolation of the Q-values over the map, but on the other hand it makes it harder to train special/important positions into the network. I already tried a standard Cartesian two-dimensional input (x/y) at an early stage, but failed.
(3) Is there another network type besides a feedforward network with backpropagation that generally produces better results for Q-function approximation? Have you seen projects where a FF-NN performs well with bigger maps?
It's known that Q-Learning + a feedforward neural network as a q-function approximator can fail even in simple problems [Boyan & Moore, 1995].
Rich Sutton has a question related to this in the FAQ of his web site.
A possible explanation is the phenomenon known as interference, described in [Barreto & Anderson, 2008]:
Interference happens when the update of one state–action pair changes the Q-values of other pairs, possibly in the wrong direction.
Interference is naturally associated with generalization, and also happens in conventional supervised learning. Nevertheless, in the reinforcement learning paradigm its effects tend to be much more harmful. The reason for this is twofold. First, the combination of interference and bootstrapping can easily become unstable, since the updates are no longer strictly local. The convergence proofs for the algorithms derived from (4) and (5) are based on the fact that these operators are contraction mappings, that is, their successive application results in a sequence converging to a fixed point which is the solution for the Bellman equation [14,36]. When using approximators, however, this asymptotic convergence is lost, [...]
Another source of instability is a consequence of the fact that in on-line reinforcement learning the distribution of the incoming data depends on the current policy. Depending on the dynamics of the system, the agent can remain for some time in a region of the state space which is not representative of the entire domain. In this situation, the learning algorithm may allocate excessive resources of the function approximator to represent that region, possibly “forgetting” the previous stored information.
One way to alleviate the interference problem is to use a local function approximator. The more independent each basis function is from each other, the less severe this problem is (in the limit, one has one basis function for each state, which corresponds to the lookup-table case) [86]. A class of local functions that have been widely used for approximation is the radial basis functions (RBFs) [52].
So, for your kind of problem (an n*n gridworld), an RBF neural network should produce better results.
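For instance, Gaussian RBF features over the grid can be computed like this and fed to a linear (or shallow) Q-approximator (sketch; the centers and sigma are free design choices):

// Gaussian RBF features phi_i(s) over the gridworld. Each basis function is local,
// so an update in one region interferes little with distant regions (illustrative sketch).
double[] phi = new double[centers.length];
for (int i = 0; i < centers.length; i++) {
    double dr = row - centers[i][0];
    double dc = col - centers[i][1];
    phi[i] = Math.exp(-(dr * dr + dc * dc) / (2.0 * sigma * sigma));
}
// Q(s, a) is then approximated as a weighted sum: sum_i w[a][i] * phi[i].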
References
Boyan, J. A. & Moore, A. W. (1995). Generalization in reinforcement learning: Safely approximating the value function. NIPS-7. San Mateo, CA: Morgan Kaufmann.
Barreto, A. M. S. & Anderson, C. W. (2008). Restricted gradient-descent algorithm for value-function approximation in reinforcement learning. Artificial Intelligence, 172, 454–482.

Encog - Using Hybrid Neural Networks

How is using simulated annealing in conjunction with a feedforward neural network different from simply resetting the weights (and placing the hidden layer into a new error valley) when a local minimum is reached? Is simulated annealing used by the FFNN as a more systematic way of moving the weights around to find a global minimum, so that only one iteration is performed each time the validation error begins to increase relative to the training error, slowly moving the current position across the error function? In that case the simulated annealing would be independent of the feedforward network, and the feedforward network would depend on the simulated annealing output. If not, and the simulated annealing depends directly on results from the FFNN, I don't see how the simulated annealing trainer would receive this information in terms of how to update its own weights (if that makes sense). One of the examples mentions a cycle (multiple iterations), which doesn't fit my first assumption.
I have looked at different examples where network.fromArray() and network.toArray() are used, but I only see network.encodeToArray() and network.decodeFromArray(). What is the most current way (v3.2) to transfer weights from one type of network to another? Is this the same when using genetic algorithms, etc.?
Neural network training algorithms such as simulated annealing are essentially searches. The weights of the neural network are essentially vector coordinates that specify a location in a high-dimensional space.
Consider hill climbing, possibly the simplest training algorithm. You adjust one weight, thus moving in one dimension, and see if it improves your score. If the score improves, great: stay there and try a different dimension next iteration. If your score does NOT improve, retreat and try a different dimension next time. Think of a person looking at every point they can reach in one step and choosing the step that lowers their altitude the most (altitude being the error). If no step lowers the altitude (you are standing at the bottom of a valley), you're stuck. This is a local minimum.
Simulated annealing adds one critical component to hill climbing: we might move to a worse location (it is not greedy). The probability that we move to a worse location is governed by the decreasing temperature.
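The core of an annealing step is the acceptance rule (a generic sketch; Encog's internal bookkeeping differs in detail):

// Simulated-annealing move: always accept an improvement, and accept a worse candidate
// with probability exp(-deltaError / temperature); the temperature shrinks over time.
double deltaError = candidateError - currentError;
if (deltaError < 0 || Math.random() < Math.exp(-deltaError / temperature)) {
    currentWeights = candidateWeights;   // move to the new point in weight space
    currentError = candidateError;
}
temperature *= coolingRate;              // e.g. 0.99 per cycle, so worse moves become rarer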
If you look inside of the NeuralSimulatedAnnealing classes you will see calls to NetworkCODEC.NetworkToArray() and NetworkCODEC.ArrayToNetwork(). These are how the weight vector is directly updated.

Replicator Neural Network for outlier detection, Step-wise function causing same prediction

In my project, one of my objectives is to find outliers in aeronautical engine data. I chose to use a replicator neural network for this and read the following report on it (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.3366&rep=rep1&type=pdf), and I'm having a slight understanding issue with the step-wise function (page 4, figure 3) and the prediction values it produces.
The replicator neural network is best described in the report above, but as background: the replicator neural network I have built has the same number of outputs as inputs and 3 hidden layers with the following activation functions:
Hidden layer 1: tanh sigmoid, S1(θ) = tanh(θ)
Hidden layer 2: step-wise, S2(θ) = 1/2 + 1/(2(k − 1)) · Σ_j tanh[a3(θ − j/N)]
Hidden layer 3: tanh sigmoid, S1(θ) = tanh(θ)
Output layer 4: standard sigmoid, S3(θ) = 1/(1 + e^(−θ))
I have implemented the algorithm and it seems to be training (the mean squared error decreases steadily during training). The only thing I don't understand is how predictions are made once the middle layer with the step-wise activation function is applied, since it causes the 3 middle nodes' activations to become specific discrete values (e.g. my last activations on the 3 middle nodes were 1.0, -1.0, 2.0). These values are then forward-propagated and I get very similar or exactly the same predictions every time.
The section on pages 3-4 of the report describes the algorithm best, but I have no idea what I have to do to fix this, and I don't have much time either :(
Any help would be greatly appreciated.
Thank you
I'm facing the problem of implementing this algorithm myself, and here is my insight into the problem you might have had: the middle layer, by using a step-wise function, is essentially performing clustering on the data. Each neuron in that layer maps the data to a discrete number, which can be interpreted as a coordinate in a grid system. Imagine we use two neurons in the middle layer with step-wise values ranging from -2 to +2 in increments of 1. This defines a 5x5 grid in which each set of features is placed. The more steps you allow, the more grid cells; the more grid cells, the more "clusters" you have.
This all sounds fine: after all, we are compressing the data into a smaller (lower-dimensional) representation which is then used to try to reconstruct the original input.
This step-wise function, however, has a big problem in itself: back-propagation does not work (in theory) with step-wise functions. You can find more about this in this paper. In that paper they suggest replacing the step-wise function with a ramp-like function, that is, having an effectively infinite number of clusters.
Your problem might be directly related to this. Try switching the step-wise function to a ramp-like one and measure how the error changes throughout the learning phase.
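For illustration, the two activations can be written side by side like this (a sketch: I take the step-wise sum over j = 1..k-1, since the question's formula doesn't spell out the bounds, and the ramp is just one possible smooth replacement):

// Step-wise (staircase) activation as given in the question, versus a simple ramp
// that back-propagation can handle everywhere (illustrative sketch).
static double stepwise(double theta, int k, double a3, double N) {
    double sum = 0.0;
    for (int j = 1; j <= k - 1; j++) {
        sum += Math.tanh(a3 * (theta - j / N));
    }
    return 0.5 + sum / (2.0 * (k - 1));
}

static double ramp(double theta) {
    return Math.max(0.0, Math.min(1.0, theta));   // clipped linear, non-zero gradient on (0, 1)
}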
By the way, do you have any of this code available anywhere for other researchers to use?