I'm a newbie with neural networks, so I'm a little confused about the Adam optimizer. For example, I use an MLP with an architecture like this:
I've used SGD before, so I want to ask: does Adam update the weights on every layer in the same way SGD does? In the example above, does that mean there will be 2 weight changes from the output to hidden layer 2, 8 weight changes from hidden layer 2 to hidden layer 1, and finally 4 weight changes from hidden layer 1 to the input?
I ask because in the example I've seen, they only update the weights from the output to hidden layer 2.
You can use both SGD and Adam to calculate updates for every weight in your network (as long as the loss is differentiable w.r.t. the weight). If you use TensorFlow or PyTorch and build the model from your sketch, by default all weights will be updated when you perform an optimizer step. (If you really want to, you can also limit the optimizer to only a subset of parameters.)
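As a rough PyTorch sketch (the layer sizes here are made up for illustration, not taken from your MLP):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A small MLP; the layer sizes are made up for illustration.
    model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 2))

    # By default the optimizer is given *all* parameters, so every weight and
    # bias in every layer is updated on each optimizer.step().
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # If you really only want to train a subset (e.g. the last layer),
    # pass only those parameters instead:
    # optimizer = torch.optim.Adam(model[2].parameters(), lr=1e-3)

    x, y = torch.randn(16, 4), torch.randn(16, 2)
    loss = F.mse_loss(model(x), y)

    optimizer.zero_grad()
    loss.backward()   # gradients are computed for every parameter
    optimizer.step()  # every parameter registered with the optimizer is updated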
The difference between SGD and Adam is that with SGD the weight updates are simple steps in the direction of the (negative) gradient, while with Adam the gradient steps are scaled using running statistics of the previous gradients.
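Schematically, the two update rules look like this (a NumPy sketch using the usual default hyperparameters, not tied to any particular library):

    import numpy as np

    def sgd_step(w, grad, lr=0.01):
        # Plain SGD: a simple step in the direction of the negative gradient.
        return w - lr * grad

    def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # Adam: keep running averages of the gradient (m) and of the squared
        # gradient (v), correct their startup bias, and scale the step by them.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)   # t = 1, 2, 3, ... is the step count
        v_hat = v / (1 - beta2 ** t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v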
My project is to build an AI that plays 2048 in Python, using a genetic algorithm and neural networks.
I'm using a neural network as the structure that computes the best move given the current grid, and the genetic algorithm to adjust the weights and biases, so I'm not using back-propagation.
For now, the best score is only 512, which is just above what a fully random game produces (on average a random run reaches 128, and the luckiest ones reach 256).
In a lot of artificial intelligence tutorials, they say that finding the right adjustments is a matter of experience. Since this is my first project in this domain, I have no such experience.
My question is: given all the information below, what can be adjusted to improve the results?
Bold numbers are the parameters that I think need to be adjusted.
I'm following the main steps of a genetic algorithm:
generation:
I'm generating a population of 600 networks.
The structure of the networks is:
16 neurons for input layer (one for each square of the grid)
3 hidden layers with 10 neurons each
4 neurons for output
evaluation
I test the whole population by having each network play a full game of 2048 (a non-graphical simulation, of course) and collecting its score and the value of its highest tile. (I use multiprocessing to do several runs in parallel.)
I calculate the fitness as score + (maxTileValue)².
selection
I simply sort them by fitness and kill the half with the worst fitness.
reproduction
For every network I need to recreate, I pick one of the survivors and add to each of its weights a random value between -0.5 and 0.5, multiplied by 100/(average fitness of the generation) (see the sketch below).
I'm repeating that for 200 generations.
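For concreteness, here is a rough NumPy sketch of the selection and mutation scheme described above (hypothetical names, not the actual code from the repository):

    import numpy as np

    def next_generation(population, fitnesses, rng=np.random.default_rng()):
        # Sort by fitness (best first) and keep the better half as survivors.
        order = np.argsort(fitnesses)[::-1]
        survivors = [population[i] for i in order[:len(population) // 2]]

        # Mutation size shrinks as the average fitness of the generation grows.
        scale = 100.0 / np.mean(fitnesses)

        children = []
        for _ in range(len(population) - len(survivors)):
            parent = survivors[rng.integers(len(survivors))]
            noise = rng.uniform(-0.5, 0.5, size=parent.shape) * scale
            children.append(parent + noise)
        return survivors + children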
I'm not using Tensorflow or any library.
Complete code on GitHub: https://github.com/Nonoreve/2048Solver/tree/master/solver/AIsolver
(I'm a French student so apologies for syntax and grammar).
I've seen in multiple places that you should disable dropout during the validation and testing stages and only keep it during the training phase. Is there a reason why? I haven't been able to find a good reason and was just wondering.
One reason I'm asking is that I trained a model with dropout and the results turned out well - about 80% accuracy. Then I went on to validate the model but forgot to set the keep probability to 1, and the model's accuracy went down to about 70%. Is it supposed to be that drastic? And is it as simple as setting the keep probability to 1 in each dropout layer?
Thanks in advance!
Dropout is a random process of disabling neurons in a layer with probability p. This makes certain neurons 'wrong' in each iteration - basically, you are making neurons 'wrong' about their output so that the following layer relies less on the outputs of any particular node in the previous layer. This is a regularization method and reduces overfitting.
However, there are two main reasons why you should not use dropout on test data:
Dropout makes neurons output 'wrong' values on purpose
Because you disable neurons randomly, your network will produce different outputs for the same input on every forward pass. This undermines consistency.
However, you might want to read some more on what validation/testing exactly is:
Training set: a set of examples used for learning, i.e. to fit the parameters of the classifier. In the MLP case, we would use the training set to find the “optimal” weights with the back-prop rule.
Validation set: a set of examples used to tune the parameters of a classifier. In the MLP case, we would use the validation set to find the “optimal” number of hidden units or determine a stopping point for the back-propagation algorithm.
Test set: a set of examples used only to assess the performance of a fully-trained classifier. In the MLP case, we would use the test set to estimate the error rate after we have chosen the final model (MLP size and actual weights). After assessing the final model on the test set, YOU MUST NOT tune the model any further!
Why separate test and validation sets? The error rate estimate of the final model on validation data would be biased (smaller than the true error rate), since the validation set is used to select the final model. After assessing the final model on the test set, YOU MUST NOT tune the model any further!
Source: Introduction to Pattern Analysis, Ricardo Gutierrez-Osuna, Texas A&M University.
So even for validation, how would you determine which nodes to remove if the nodes have a random probability of being deactivated?
Dropout is a method of making bagging practical for ensembles of very many large neural networks.
Along the same lines, we may recall the following (false) explanation that is often given:
For the new data, we can predict their classes by taking the average of the results from all N learners: p(y|x) ≈ (1/N) Σ_{i=1}^{N} p_i(y|x), where p_i is the prediction of the i-th sub-network.
Since N is a constant we can just ignore it and the result remains the same, so we should disable dropout during validation and testing.
The true reason is much more complex. It is because of the weight scaling inference rule:
We can approximate p_{ensemble} by evaluating p(y|x) in one model: the model with all units, but with the weights going out of unit i multiplied by the probability of including unit i. The motivation for this modification is to capture the right expected value of the output from that unit. There is not yet any theoretical argument for the accuracy of this approximate inference rule in deep nonlinear networks, but empirically it performs very well.
When we train the model using dropout (for example, for one layer), we zero out some of the neurons' outputs and scale the others up by 1/keep_prob, so that the expectation of the layer stays almost the same as before. In the prediction process we could keep dropout, but then we would get different predictions each time because we drop values out randomly, and we would need to run the prediction many times to estimate the expected output. Such a process is time-consuming, so instead we remove the dropout, and the expectation of the layer remains the same.
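As a minimal, framework-agnostic sketch of that "inverted dropout" behaviour (NumPy, with keep_prob as in the text above):

    import numpy as np

    def dropout_layer(x, keep_prob, training, rng=np.random.default_rng()):
        # During training: zero out units with probability (1 - keep_prob) and
        # scale the survivors by 1/keep_prob so the expected activation is unchanged.
        # During validation/testing: do nothing, i.e. use every neuron.
        if not training:
            return x
        mask = rng.random(x.shape) < keep_prob
        return x * mask / keep_prob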
Reference:
Difference between Bagging and Boosting?
Section 7.12 of Deep Learning (Goodfellow et al.)
The simplest reason may be that during prediction (test, validation, or after production deployment) you want to use the capability of every learned neuron, and you really don't want to skip some of them randomly.
That's the only reason we set the keep probability to 1 during testing.
There is a Bayesian technique called Monte Carlo dropout in which dropout is not disabled during testing. The model is run several times with the same dropout rate (or in one go as a batch), and the mean and variance of the results are calculated to determine the uncertainty.
Uber has applied this technique to quantify uncertainty in its time-series predictions.
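Schematically, the Monte Carlo dropout prediction loop looks like this (model_with_dropout is a placeholder for any forward pass that keeps dropout active):

    import numpy as np

    def mc_dropout_predict(model_with_dropout, x, n_samples=100):
        # Keep dropout ON at test time, run the same input many times,
        # and summarise the spread of the predictions.
        preds = np.stack([model_with_dropout(x) for _ in range(n_samples)])
        return preds.mean(axis=0), preds.var(axis=0)  # prediction and uncertainty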
Short answer:
Dropout brings down overfitting on the training data; it is used as a regularization technique. If you have high variance (i.e. look at the gap between training-set and validation-set accuracy), apply dropout on the training data. It is not a good idea to apply dropout on the test and validation data, because you cannot be sure which neurons will be shut off, and randomly dropping neurons at evaluation time may remove neurons that are actually important.
In the context of binary classification, I use a neural network with 1 hidden layer and a tanh activation function. The input comes from a word2vec model and is normalized.
The classifier accuracy is between 49% and 54%.
I used a confusion matrix to get a better understanding of what's going on. I am studying the impact of the number of input features and the number of neurons in the hidden layer on the accuracy.
What I can observe from the confusion matrix is that, depending on the parameters, the model sometimes predicts most of the examples as positive and sometimes most of them as negative.
Any suggestion as to why this happens? And which other points (other than input size and hidden layer size) might impact the accuracy of the classification?
Thanks
It's a bit hard to guess given the information you provide.
Are the labels balanced (50% positives, 50% negatives)? If so, your network is not training at all, since your accuracy roughly corresponds to random performance. Is there maybe a bug in the preprocessing? Or is the task too difficult? What is the training set size?
I don't believe that the number of neurons is the issue, as long as it's reasonable, i.e. hundreds or a few thousand.
Alternatively, you can try another loss function, namely cross entropy, which is standard for multi-class classification and can also be used for binary classification:
https://www.tensorflow.org/api_docs/python/nn/classification#softmax_cross_entropy_with_logits
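As a library-agnostic illustration, a numerically stable binary cross-entropy computed directly from the logits can be written like this (a NumPy sketch, not the TensorFlow op linked above):

    import numpy as np

    def binary_cross_entropy_with_logits(logits, labels):
        # labels are 0/1; this is the standard stable formulation
        # max(x, 0) - x*z + log(1 + exp(-|x|)), averaged over the batch.
        return np.mean(np.maximum(logits, 0) - logits * labels
                       + np.log1p(np.exp(-np.abs(logits))))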
Hope this helps.
The data set is well balanced, 50% positive and negative.
The training set shape is (411426,X)
The test set shape is (68572, X)
X is the number of features coming from word2vec; I tried values in the range [100, 300].
I have 1 hidden layer, and the number of neurons I tested varied between [100, 300].
I also tested with much smaller feature/neuron sizes: 2-20 features and 10 neurons in the hidden layer.
I also use cross-entropy as the cost function.
I'm trying to implement the Deep q-learning algorithm for a pong game.
I've already implemented Q-learning using a table as the Q-function. It works very well and learns how to beat the naive AI within 10 minutes. But I can't make it work using a neural network as a Q-function approximator.
I want to know if I am on the right track, so here is a summary of what I am doing:
I'm storing the current state, the action taken, and the reward as the current experience in the replay memory.
I'm using a multilayer perceptron as the Q-function, with 1 hidden layer of 512 hidden units. For the input -> hidden layer I am using a sigmoid activation function; for the hidden -> output layer I'm using a linear activation function.
A state is represented by the position of both players and the ball, as well as the velocity of the ball. Positions are remapped, to a much smaller state space.
I am using an epsilon-greedy approach for exploring the state space where epsilon gradually goes down to 0.
When learning, a random batch of 32 subsequent experiences is selected. Then I compute the target Q-values for each current state and action Q(s, a):
forall Experience e in batch
    if e.isEndOfEpisode
        target = e.getReward
    else
        target = e.getReward + discountFactor * qMaxPostState
end
Now that I have a set of 32 target Q-values, I train the neural network on them using batch gradient descent. I am just doing 1 training step. How many should I do?
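For reference, here is a small runnable Python sketch of the per-batch target computation described above (the names are hypothetical; my actual implementation is in Java/Encog):

    import numpy as np

    def compute_targets(batch, q_values_next, discount=0.99):
        # batch: list of (reward, end_of_episode) pairs.
        # q_values_next: predicted Q-values for each next state (one row per experience).
        targets = []
        for (reward, done), q_next in zip(batch, q_values_next):
            if done:
                targets.append(reward)                               # terminal transition
            else:
                targets.append(reward + discount * np.max(q_next))   # bootstrap from best next action
        return np.array(targets)

    # Example: two transitions with 4 possible actions each.
    batch = [(1.0, False), (-1.0, True)]
    q_next = np.array([[0.1, 0.4, 0.2, 0.0], [0.0, 0.0, 0.0, 0.0]])
    print(compute_targets(batch, q_next))  # [1.396, -1.0]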
I am programming in Java and using Encog for the multilayer perceptron implementation. The problem is that training is very slow and performance is very weak. I think I am missing something, but can't figure out what. I would expect at least a somewhat decent result as the table approach has no problems.
I'm using a multi layer perceptron as Q-function with 1 hidden layer with 512 hidden units.
Might be too big. Depends on your input / output dimensionality and the problem. Did you try fewer?
Sanity checks
Can the network possibly learn the necessary function?
Collect ground truth input/output. Fit the network in a supervised way. Does it give the desired output?
A common error is to use the wrong activation function on the last layer. Most of the time you will want a linear activation function (as you have). Then you want the network to be as small as possible, because RL is pretty unstable: you can have 99 runs where it doesn't work and 1 where it works.
Do I explore enough?
Check how much you explore. Maybe you need more exploration, especially in the beginning?
See also
My DQN agent
keras-rl
Try using ReLU (or better, Leaky ReLU) units in the hidden layer and a linear activation for the output.
Try changing the optimizer; sometimes SGD with proper learning-rate decay helps.
Sometimes Adam works fine.
Reduce the number of hidden units. It might be just too much.
Adjust the learning rate. The more units you have, the more impact the learning rate has, since the output is the weighted sum of all the neurons before it.
Try using the local position of the ball, i.e. ballY - paddleY. This can help drastically, as it reduces the data to "above or below the paddle", distinguished by the sign. Remember: if you use the local position, you won't need the player's paddle position, and the enemy's paddle position must be local too.
Instead of the velocity, you can give it the previous state as an additional input; the network can compute the difference between those two steps itself (see the small sketch below).
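A tiny sketch of what such a state encoding could look like (hypothetical names, two features only for illustration):

    import numpy as np

    def make_state(ball_y, paddle_y, prev_ball_y, prev_paddle_y):
        # Ball position relative to the paddle (sign = above/below),
        # plus the previous relative position instead of an explicit velocity.
        return np.array([ball_y - paddle_y, prev_ball_y - prev_paddle_y])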
In my project, one of my objectives is to find outliers in aeronautical engine data. I chose to use a Replicator Neural Network to do so and read the following report on it (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.3366&rep=rep1&type=pdf), but I'm having a slight understanding issue with the step-wise function (page 4, figure 3) and the prediction values it produces.
The replicator neural network is best described in the report above, but as background: the one I have built has the same number of outputs as inputs and 3 hidden layers with the following activation functions:
Hidden layer 1: tanh sigmoid, S1(θ) = tanh(θ)
Hidden layer 2: step-wise, S2(θ) = 1/2 + 1/(2(k − 1)) · Σ_{j=1}^{k−1} tanh[a3(θ − j/N)] (sketched in code below)
Hidden layer 3: tanh sigmoid, S1(θ) = tanh(θ)
Output layer 4: standard sigmoid, S3(θ) = 1/(1 + e^(−θ))
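For intuition, here is a small NumPy sketch of that step-wise (staircase) activation, assuming k = N is the number of steps and a3 controls how sharp the steps are:

    import numpy as np

    def staircase(theta, k=4, a3=100.0):
        # S2(theta) = 1/2 + 1/(2(k-1)) * sum_{j=1}^{k-1} tanh(a3 * (theta - j/k))
        j = np.arange(1, k)
        return 0.5 + np.sum(np.tanh(a3 * (theta - j / k)), axis=-1) / (2 * (k - 1))

    # With a large a3 the output snaps to one of k discrete levels in [0, 1],
    # which is why the middle-layer activations look quantised.
    print(staircase(np.array([[0.10], [0.35], [0.90]])))  # roughly [0., 0.33, 1.]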
I have implemented the algorithm and it seems to be training (the mean squared error decreases steadily during training). The only thing I don't understand is how predictions are made when the middle layer with the step-wise activation function is applied, since it causes the 3 middle nodes' activations to become specific discrete values (e.g. my last activations on the 3 middle nodes were 1.0, -1.0, 2.0). These values are forward-propagated, and I get very similar or exactly the same predictions every time.
The section in the report on pages 3-4 best describes the algorithm, but I have no idea what I have to do to fix this, and I don't have much time either :(
Any help would be greatly appreciated.
Thank you
I'm facing the problem of implementing this algorithm myself, and here is my insight into the problem you might have had: the middle layer, by utilizing a step-wise function, is essentially performing clustering on the data. Each neuron transforms the data into a discrete number, which can be interpreted as a coordinate in a grid system. Imagine we use two neurons in the middle layer with step-wise values ranging from -2 to +2 in increments of 1; this way we define a 5x5 grid into which each set of features is placed. The more steps you allow, the more grid cells; the more grid cells, the more "clusters" you have.
This all sounds good: after all, we are compressing the data into a smaller (dimensional) representation, which is then used to try to reconstruct the original input.
This step-wise function, however, has a big problem in itself: back-propagation does not work (in theory) with step-wise functions. You can find more about this in this paper, where they suggest switching the step-wise function for a ramp-like function - that is, having an almost infinite number of clusters.
Your problem might be directly related to this. Try switching the step-wise function for a ramp-like one and measure how the error changes throughout the learning phase.
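One possible ramp-like replacement (a clipped linear function; the paper may define it slightly differently) would be:

    import numpy as np

    def ramp(theta, lower=0.0, upper=1.0):
        # Linear between `lower` and `upper`, flat outside: unlike the staircase,
        # it has a non-zero gradient almost everywhere, so back-propagation can work.
        return np.clip((theta - lower) / (upper - lower), 0.0, 1.0)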
By the way, do you have any of this code available anywhere for other researchers to use?