My project is to build an AI that plays 2048 in Python, using a genetic algorithm and neural networks.
I'm using a neural network as the structure that computes the best move given the current grid, and the genetic algorithm to adjust the weights and biases. So I'm not using back-propagation.
For now, the best score is only 512, which is just above what a fully random game produces (on average a random run reaches 128, and the luckiest reach 256).
A lot of artificial-intelligence tutorials say that finding the right adjustments is a matter of experience. Since this is my first project in this domain, I have no such experience.
My question is: given all the information below, what can be adjusted to improve the results?
Bold numbers are parameters that I think may have to be adjusted.
I'm following the main steps of a genetic algorithm:
generation:
I'm generating a population of 600 networks.
The structure of the networks is:
16 neurons in the input layer (one for each square of the grid)
3 hidden layers with 10 neurons each
4 neurons in the output layer (one for each move)
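For reference, here is a minimal sketch of how such a network could turn a grid into move scores, using NumPy (the tanh activation and the weight layout are assumptions, not necessarily what the repository does):

```python
import numpy as np

LAYER_SIZES = [16, 10, 10, 10, 4]   # input, three hidden layers, output

def random_network(rng):
    """One individual: a (weights, biases) pair per layer transition."""
    return [(rng.uniform(-1, 1, (n_out, n_in)), rng.uniform(-1, 1, n_out))
            for n_in, n_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def forward(network, grid):
    """grid: the 16 cell values, flattened; returns 4 scores, one per move."""
    x = np.asarray(grid, dtype=float)
    for weights, biases in network:
        x = np.tanh(weights @ x + biases)   # tanh assumed as the activation
    return x                                # play the move with the highest score
```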
evaluation
I test the whole population by having each network play a full game of 2048 (a non-graphical simulation, of course) and collecting its score and the value of its max tile. (I use multiprocessing to run several games in parallel.)
I calculate the fitness as score + (maxTileValue)².
selection
I simply sort them by fitness and kill the worse half.
reproduction
For every network I need to recreate, I pick one of the survivors and add to each weight a random value between -0.5 and 0.5, multiplied by 100 / (average fitness of the generation).
I'm repeating that for 200 generations.
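Put together, one generation could look roughly like the sketch below (the play_game function and the network representation from the earlier sketch are assumptions, not the actual repository code):

```python
import copy
import numpy as np

def evolve(population, play_game, generations=200, seed=0):
    """population: a list of networks; play_game(net) -> (score, max_tile)."""
    rng = np.random.default_rng(seed)
    for _ in range(generations):
        # evaluation: fitness = score + maxTileValue^2
        results = [play_game(net) for net in population]
        fitness = [score + max_tile ** 2 for score, max_tile in results]
        avg_fitness = sum(fitness) / len(fitness)

        # selection: sort by fitness and keep the better half
        order = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
        survivors = [population[i] for i in order[:len(population) // 2]]

        # reproduction: copy a survivor and perturb every weight and bias
        scale = 100.0 / avg_fitness
        children = []
        while len(survivors) + len(children) < len(population):
            child = copy.deepcopy(survivors[rng.integers(len(survivors))])
            for weights, biases in child:
                weights += rng.uniform(-0.5, 0.5, weights.shape) * scale
                biases += rng.uniform(-0.5, 0.5, biases.shape) * scale
            children.append(child)
        population = survivors + children
    return population
```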
I'm not using TensorFlow or any other library.
Complete code is on GitHub: https://github.com/Nonoreve/2048Solver/tree/master/solver/AIsolver
(I'm a French student, so apologies for syntax and grammar.)
I'm working on my bachelor thesis.
My topic is reinforcement learning. The Setup:
Unity3D (C#)
Own neural network framework
I confirmed that the network works by training it on a sine function.
It can approximate it. Well, there are some values that won't reach their desired value, but it's good enough.
When training it on single values, it always converges.
Here is my problem:
I'm trying to teach my network the Q-value function of a simple game,
catch balls:
In this game it just has to catch a ball dropping from a random position and at a random angle.
+1 if it catches the ball
-1 if it misses
My network model has 1 hidden layer with 45-180 neurons (I tested these numbers with no success).
It uses experience replay with 32 samples drawn from a 100k memory, with a learning rate of 0.0001.
It learns for 50,000 frames, then tests for 10,000 frames. This happens 10 times.
Inputs are PlatformPosX, BallPosX, BallPosY from the last 4 frames
Pseudocode:
Choose an action (ε-greedy)
Do the action
Store (state, action, currentReward, done) in memory
If in the learn phase: replay
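In Python terms (the actual project is in C#), those steps could look roughly like the sketch below; q_values stands for whatever the network outputs for the current state:

```python
import random
from collections import deque

memory = deque(maxlen=100_000)   # replay memory, 100k entries as described above

def choose_action(q_values, epsilon):
    """Epsilon-greedy: a random action with probability epsilon, otherwise greedy."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def store(state, action, reward, done):
    """Store one transition; old entries are dropped automatically by the deque."""
    memory.append((state, action, reward, done))
```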
My problem is:
Its actions start clipping to either 0 or 1, sometimes with some variance.
It never reaches an ideal policy, such as the platform simply following the ball.
EDIT:
Sorry for the sparse info...
My Q-function is trained with the target:
Reward + Gamma * (nextEstimated_Reward)
So it's discounted.
Why would you possibly expect that to work?
Your training can barely approximate a 1-dimensional function, and now you expect it to solve a 12-dimensional function that involves a differential equation? You should first have verified whether your training even converges for a multi-dimensional function at all, with the chosen training parameters.
Your training, given the little detail you provided, also appears to be unsuitable. There is hardly a chance it ever successfully catches the ball, and even when it does, you are rewarding it mostly for random outputs. The only correlation between input and output is in the last few frames, when the pad can only reach the target in time through a limited set of possible actions.
Then there is the choice of inputs. Don't require your model to do the differentiation by itself. Relevant inputs would have been x, y, dx, dy; preferably even x, y relative to the pad position, not the world. That should have a much better chance of converging, even if it only learned to keep x minimal.
Working with absolute world coordinates is pretty much bound to fail, as it would require the training to cover the entire range of possible input combinations, and the network to be big enough to store all those combinations. Be aware that the network isn't learning the actual function; it's learning an approximation for every single possible set of inputs. Even if the ideal solution is actually just a linear equation, the non-linear properties of the activation function make it impossible to learn it in a generalized form for unbounded inputs.
I am new to reinforcement learning and doing a project about chess. I use a neural network and temporal-difference learning to train the engine to learn the game.
The neural network has one input layer (of 385 features), two hidden layers and one output layer, whose range is [-1, 1], where -1 means a loss and 1 a win (0 a draw). I use TD(λ) to learn chess by self-play, and the default case is to only consider the next 10 moves. All the weights are initialized in the range [-1, 1].
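For reference, a minimal sketch of the kind of TD(λ) update usually used in this setting (the value and grad callables are placeholders for your own forward pass and gradient, not your actual code):

```python
def td_lambda_step(weights, trace, state, next_state, reward,
                   value, grad, alpha=0.01, gamma=1.0, lam=0.7):
    """One TD(lambda) update: w += alpha * delta * e, with e the eligibility trace.

    weights and trace are NumPy arrays of the same shape; value(weights, state)
    is the network's scalar evaluation of a position and grad(weights, state)
    is its gradient with respect to the weights.
    """
    delta = reward + gamma * value(weights, next_state) - value(weights, state)
    trace = gamma * lam * trace + grad(weights, state)   # decaying eligibility trace
    weights = weights + alpha * delta * trace
    return weights, trace
```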
I use forward propagation to estimate the value of a state, but most of the values are very close to either 1 or -1, even when the result is a draw, so I think the engine isn't learning well. I suspect some values are large and dominate the result, so changing small weights does not help. I changed the size of the two hidden layers, but it doesn't work. (However, I have tried a toy example with a small size and dimension; it converges, and the estimated value is very close to the target one after dozens of iterations.) I don't know how to fix this; could someone give me some advice?
Thank you.
Some references are listed below
Giraffe: Using Deep Reinforcement Learning to Play Chess
KnightCap
Web Stanford
I'm trying to implement the deep Q-learning algorithm for a Pong game.
I've already implemented Q-learning using a table as the Q-function. It works very well and learns how to beat the naive AI within 10 minutes. But I can't make it work using a neural network as a Q-function approximator.
I want to know if I am on the right track, so here is a summary of what I am doing:
I'm storing the current state, the action taken and the reward as the current experience in the replay memory.
I'm using a multi-layer perceptron as the Q-function, with 1 hidden layer of 512 hidden units. For the input -> hidden layer I am using a sigmoid activation function; for the hidden -> output layer I'm using a linear activation function.
A state is represented by the positions of both players and the ball, as well as the velocity of the ball. Positions are remapped to a much smaller state space.
I am using an epsilon-greedy approach for exploring the state space, where epsilon gradually goes down to 0.
When learning, a random batch of 32 subsequent experiences is selected. Then I compute the target Q-values for all the current states and actions Q(s, a).
forall Experience e in batch
    if e == endOfEpisode
        target = e.getReward
    else
        target = e.getReward + discountFactor * qMaxPostState
    end
Now I have a set of 32 target Q-values, and I am training the neural network on them using batch gradient descent. I am doing just 1 training step. How many should I do?
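For reference, a minimal NumPy sketch of that target construction (the Experience fields and the q_values batch-forward function are assumptions; the Encog code will look different):

```python
import numpy as np

def build_targets(batch, q_values, discount=0.99):
    """batch: experiences with .state, .action, .reward, .next_state, .done.

    Returns (inputs, targets) for one supervised training step: the target for
    the taken action is r (terminal) or r + gamma * max_a' Q(s', a'); the other
    actions keep the network's current prediction so their error is zero.
    """
    states = np.array([e.state for e in batch])
    targets = q_values(states)                                   # current predictions
    next_q = q_values(np.array([e.next_state for e in batch]))
    for i, e in enumerate(batch):
        targets[i, e.action] = e.reward if e.done else e.reward + discount * next_q[i].max()
    return states, targets
```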
I am programming in Java and using Encog for the multilayer-perceptron implementation. The problem is that training is very slow and performance is very weak. I think I am missing something, but can't figure out what. I would expect at least a somewhat decent result, since the table approach has no problems.
I'm using a multi layer perceptron as Q-function with 1 hidden layer with 512 hidden units.
Might be too big. Depends on your input / output dimensionality and the problem. Did you try fewer?
Sanity checks
Can the network possibly learn the necessary function?
Collect ground truth input/output. Fit the network in a supervised way. Does it give the desired output?
A common error is getting the last activation function wrong. Most of the time you will want a linear activation function (as you have). Then you want the network to be as small as possible, because RL is pretty unstable: you can have 99 runs where it doesn't work and 1 where it does.
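A minimal sketch of that supervised check, using the tabular Q-values that already work as ground truth (q_table, encode_state, fit and predict are assumed names standing in for your own code):

```python
import numpy as np

def sanity_check(q_table, encode_state, fit, predict, epochs=100):
    """Fit the network on (state, Q-row) pairs taken from a working Q-table.

    If this supervised fit cannot reproduce the table, the RL loop has no
    chance; fit and predict stand in for the network's own training API.
    """
    states = np.array([encode_state(s) for s in q_table])
    targets = np.array([q_table[s] for s in q_table])
    fit(states, targets, epochs=epochs)
    mse = np.mean((predict(states) - targets) ** 2)
    return mse   # should end up close to zero if the network can represent the table
```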
Do I explore enough?
Check how much you explore. Maybe you need more exploration, especially in the beginning?
See also
My DQN agent
keras-rl
Try using ReLU (or better, leaky ReLU) units in the hidden layer and a linear activation for the output.
Try changing the optimizer; sometimes SGD with proper learning-rate decay helps.
Sometimes Adam works fine.
Reduce the number of hidden units. It might just be too many.
Adjust the learning rate. The more units you have, the more impact the learning rate has, since the output is the weighted sum of all the neurons before it.
Try using the local position of the ball, meaning ballY - paddleY. This can help drastically, as it reduces the data to "above or below the paddle", distinguished by the sign. Remember: if you use the local position, you won't need the player's paddle position, and the enemy's paddle position must be local too.
Instead of the velocity, you can give it the previous state as an additional input.
The network can calculate the difference between those two steps.
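To make the last two points concrete, a minimal sketch of such an input vector (all names are assumptions about the state representation):

```python
def pong_features(ball_x, ball_y, prev_ball_x, prev_ball_y,
                  my_paddle_y, enemy_paddle_y):
    """Ball and enemy paddle expressed relative to the agent's own paddle.

    The previous frame is passed instead of an explicit velocity, so the
    network can recover the direction of movement as a difference of inputs.
    """
    return [
        ball_y - my_paddle_y,          # the sign says above or below the paddle
        ball_x,
        prev_ball_y - my_paddle_y,     # previous frame, in the same local frame
        prev_ball_x,
        enemy_paddle_y - my_paddle_y,  # enemy paddle made local as well
    ]
```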
I'm trying to navigate an agent in an n*n gridworld domain using Q-learning + a feedforward neural network as a Q-function approximator. Basically the agent should find the best/shortest way to reach a certain terminal goal position (+10 reward). Every step the agent takes, it gets a -1 reward. In the gridworld there are also some positions the agent should avoid (-10 reward, terminal states too).
So far I have implemented a Q-learning algorithm that saves all Q-values in a Q-table, and the agent performs well.
In the next step, I want to replace the Q-table with a neural network, trained online after every step of the agent. I tried a feedforward NN with one hidden layer and four outputs, representing the Q-values for the possible actions in the gridworld (north, south, east, west).
As input I used an n x n zero matrix that has a 1 at the current position of the agent.
To reach my goal I tried to solve the problem from the ground up:
Explore the gridworld with standard Q-learning and use the Q-map as training data for the network once Q-learning is finished
--> worked fine
Use Q-learning and provide the updates of the Q-map as training data for the NN (batchSize = 1)
--> worked well
Replace the Q-map completely by the NN. (This is the point where it gets interesting!)
-> FIRST MAP: 4 x 4
As described above, I have 16 "discrete" inputs and 4 outputs, and it works fine with 8 neurons (ReLU) in the hidden layer (learning rate: 0.05). I used a greedy policy with an epsilon that decreases from 1 to 0.1 within 60 episodes.
The test scenario is shown here. Performance is compared between standard Q-learning with a Q-map and "neural" Q-learning (in this case I used 8 neurons and different dropout rates).
To sum it up: neural Q-learning works well for small grids, and the performance is okay and reliable.
-> Bigger MAP: 10 x 10
Now I tried to use the neural network for bigger maps.
At first I tried this simple case.
In my case the neural net looks as follows: 100 inputs; 4 outputs; about 30 neurons (ReLU) in one hidden layer; again I used a decreasing exploration factor for the greedy policy; over 200 episodes the learning rate decreases from 0.1 to 0.015 to increase stability.
At first I had problems with convergence and interpolation between single positions, caused by the discrete input vector.
To solve this I added some neighbouring positions to the vector, with values depending on their distance to the current position (see the sketch below). This improved the learning a lot and the policy got better. Performance with 24 neurons is shown in the picture above.
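A minimal sketch of that kind of input transformation (the exact decay with distance is an assumption):

```python
import numpy as np

def encode_position(row, col, n, radius=2):
    """n x n input vector: 1 at the agent's cell, smaller values on neighbours.

    Values fall off with Manhattan distance, so nearby positions produce
    similar input vectors and the network can interpolate between them.
    """
    grid = np.zeros((n, n))
    for r in range(max(0, row - radius), min(n, row + radius + 1)):
        for c in range(max(0, col - radius), min(n, col + radius + 1)):
            dist = abs(r - row) + abs(c - col)
            if dist <= radius:
                grid[r, c] = 1.0 / (1.0 + dist)
    return grid.flatten()
```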
Summary: the simple case is solved by the network, but only with a lot of parameter tuning (number of neurons, exploration factor, learning rate) and a special input transformation.
Now here are my questions/problems I still haven't solved:
(1) My network is able to solve really simple cases and examples in a 10 x 10 map, but it fails as the problem gets a bit more complex. In cases where failing is very likely, the network has no chance to find a correct policy.
I'm open to any idea that could improve performance in these cases.
(2) Is there a smarter way to transform the input vector for the network? I'm sure that adding the neighbouring positions to the input vector improves the interpolation of the Q-values over the map, but on the other hand it makes it harder to train special/important positions into the network. I already tried a standard Cartesian two-dimensional input (x/y) at an early stage, but failed.
(3) Is there another network type, besides a feedforward network with backpropagation, that generally produces better results for Q-function approximation? Have you seen projects where a feedforward NN performs well with bigger maps?
It's known that Q-learning with a feedforward neural network as a Q-function approximator can fail even in simple problems [Boyan & Moore, 1995].
Rich Sutton has a question in the FAQ of his website related to this.
A possible explanation is the phenomenon known as interference, described in [Barreto & Anderson, 2008]:
Interference happens when the update of one state–action pair changes the Q-values of other pairs, possibly in the wrong direction.
Interference is naturally associated with generalization, and also happens in conventional supervised learning. Nevertheless, in the reinforcement learning paradigm its effects tend to be much more harmful. The reason for this is twofold. First, the combination of interference and bootstrapping can easily become unstable, since the updates are no longer strictly local. The convergence proofs for the algorithms derived from (4) and (5) are based on the fact that these operators are contraction mappings, that is, their successive application results in a sequence converging to a fixed point which is the solution for the Bellman equation [14,36]. When using approximators, however, this asymptotic convergence is lost, [...]
Another source of instability is a consequence of the fact that in on-line reinforcement learning the distribution of the incoming data depends on the current policy. Depending on the dynamics of the system, the agent can remain for some time in a region of the state space which is not representative of the entire domain. In this situation, the learning algorithm may allocate excessive resources of the function approximator to represent that region, possibly “forgetting” the previous stored information.
One way to alleviate the interference problem is to use a local function approximator. The more independent each basis function is from each other, the less severe this problem is (in the limit, one has one basis function for each state, which corresponds to the lookup-table case) [86]. A class of local functions that have been widely used for approximation is the radial basis functions (RBFs) [52].
So, for your kind of problem (an n*n gridworld), an RBF neural network should produce better results.
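As an illustration, a minimal sketch of such a local approximator for the gridworld, with Gaussian RBFs over the (x, y) position (the number of centres and the width are assumptions to be tuned):

```python
import numpy as np

def rbf_features(x, y, n, centres_per_axis=5, width=1.0):
    """Gaussian RBF activations for a position in an n x n gridworld.

    Centres lie on a regular grid; each feature only responds to states near
    its centre, which keeps the Q-value updates approximately local.
    """
    centres = np.linspace(0, n - 1, centres_per_axis)
    cx, cy = np.meshgrid(centres, centres)
    dist_sq = (cx - x) ** 2 + (cy - y) ** 2
    return np.exp(-dist_sq / (2 * width ** 2)).flatten()

# Q(s, a) can then be a linear function of these features, one weight vector
# per action: q[a] = weights[a] @ rbf_features(x, y, n)
```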
References
Boyan, J. A. & Moore, A. W. (1995). Generalization in reinforcement learning: Safely approximating the value function. NIPS-7. San Mateo, CA: Morgan Kaufmann.
Barreto, A. M. S. & Anderson, C. W. (2008). Restricted gradient-descent algorithm for value-function approximation in reinforcement learning. Artificial Intelligence, 172, 454–482.
I have a 4-input, 3-output neural network trained by particle swarm optimization (PSO), with mean squared error (MSE) as the fitness function, using the Iris dataset provided by MATLAB. The fitness function is evaluated 50 times. The experiment is to classify features. I have a few doubts:
(1) Do the PSO iterations/generations equal the number of times the fitness function is evaluated?
(2) In many papers I have seen the training curve of MSE vs. generations being plotted. In the picture, graph (a) on the left side is a model similar to an NN: a 4-input, 0-hidden-layer, 3-output cognitive map. Graph (b) is an NN trained by the same PSO. The purpose of that paper was to show the effectiveness of the new model in (a) over the NN.
But they mention that the experiment is conducted with, say, Cycles = 100 and Generations = 300. In that case, shouldn't the training curves for (a) and (b) have been MSE vs. cycles and not MSE vs. PSO generations? For example, Cycle 1: PSO iterations 1-50 --> result (Weights_1, Bias_1, MSE_1, Classification Rate_1). Cycle 2: PSO iterations 1-50 --> result (Weights_2, Bias_2, MSE_2, Classification Rate_2), and so on for 100 cycles. Why is the X-axis in (a) and (b) different, and what does it mean?
(3) Lastly, for every independent run of the program (running the .m file several times independently through the console), I never get the same classification rate (CR) or the same set of weights. Concretely, when I first run the program I get weights W and CR = 100%. When I run the MATLAB code again, I may get CR = 50% and another set of weights! An example is shown below:
%Run1 (PSO generations 1-50)
>>PSO_NN.m
Correlation =
0
Classification rate = 25
FinalWeightsBias =
-0.1156 0.2487 2.2868 0.4460 0.3013 2.5761
%Run2 (PSO generations 1-50)
>>PSO_NN.m
Correlation =
1
Classification rate = 100
%Run3 (PSO generations 1-50)
>>PSO_NN.m
Correlation =
-0.1260
Classification rate = 37.5
FinalWeightsBias =
-0.1726 0.3468 0.6298 -0.0373 0.2954 -0.3254
What is the correct method? Which weight set should I finally take, and how do I say that the network has been trained? I am aware that evolutionary algorithms, due to their randomness, will never give the same answer, but then how do I ensure that the network has been trained?
I would be obliged for any clarification.
As in most machine-learning methods, the number of iterations in PSO is the number of times the solution is updated; in the case of PSO, this is the number of update rounds over all particles. The cost function is evaluated after every particle update, so it is called more often than the number of iterations: approximately, (# cost function calls) = (# iterations) * (# particles).
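A minimal skeleton of the loop, just to show where the evaluations happen (this is not your actual MATLAB code):

```python
def count_cost_calls(cost, particles, n_iterations):
    """Skeleton of a PSO loop showing where the cost function is called."""
    n_calls = 0
    for _ in range(n_iterations):
        for particle in particles:
            mse = cost(particle)   # one evaluation per particle, per iteration
            n_calls += 1
            # ... update personal best, global best, velocity and position here ...
    return n_calls                 # = n_iterations * len(particles)
```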
The graphs are comparing different classifiers: fuzzy cognitive maps for graph (a) and a neural network for graph (b). So the X-axis displays the relevant measure of learning iterations for each.
Every time you run your NN you initialize it with different random values, so the results are never the same. The fact that the results vary greatly from one run to the next means you have a convergence problem. The first thing to do in this case is to try running more iterations. In general, convergence is a rather complicated issue and the solutions vary greatly with the application (also read carefully through the answer and comments Isaac gave you on your other question). If your problem persists after increasing the number of iterations, you can post it as a new question, providing a sample of your data and the actual code you use to construct and train the network.