Dropout in a regression task for a neural network

I have a neural network for regression prediction, meaning that the output is a real-valued number in the range 0 to 1.
I used dropout for all layers and the error suddenly increased and never converged.
Is dropout usable for a regression task? Because if we disregard some nodes, the last layer will have fewer nodes and the predicted value will definitely be very different from the actual value, so the backpropagated error will be large and the model will be destroyed. Then why should we use dropout for regression tasks in neural networks?

Because if we disregard some nodes, the last layer will have fewer nodes and the predicted value will definitely be very different from the actual value.
You are correct. This is why most frameworks use inverted dropout: during training they scale the surviving activations up by 1/keep_prob (and do nothing at prediction time). This simple hack is effective and works well in most cases. However, it doesn't work quite as well for a regression task. It works well where the outputs only need to be correct relative to each other (as with a softmax), but in regression the values are absolute, and the small differences between the "train" and "prediction" setups do cause mild instabilities on occasion.
It is always best to start with a dropout rate of 0 and then increase it slowly to observe which value gives the best result.
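To make that concrete, here is a minimal NumPy sketch of inverted dropout (the layer size and keep probability are arbitrary choices for the example): the expectation matches between the two modes, but any single training pass differs from the deterministic prediction pass, and in regression that difference lands directly on the predicted value.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(x, keep_prob, training):
        """Inverted dropout: scale surviving activations by 1/keep_prob during training."""
        if not training:
            return x  # at prediction time the layer is an identity
        mask = rng.random(x.shape) < keep_prob
        return x * mask / keep_prob

    activations = rng.random((4, 8))          # a fake batch of layer outputs
    train_out = dropout(activations, 0.8, training=True)
    test_out = dropout(activations, 0.8, training=False)

    # The scaling keeps the *expected* value the same, but any single training
    # pass differs from the deterministic prediction pass.
    print(train_out.mean(), test_out.mean())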

I used dropout for all layers and the error suddenly increased and never converged.
This also happens when you use too much dropout, especially in regression tasks. Did you try reducing the dropout rate? Dropout is mainly recommended for layers that have a very high number of trainable parameters. Also consider removing dropout from the last layer and checking again.
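For illustration, a hedged Keras sketch of that advice (the layer sizes, dropout rate, and input dimension are made up, not taken from the question): dropout only after the widest hidden layer and none right before the regression output.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Hypothetical regression model: dropout only after the large hidden layer,
    # a modest rate, and no dropout directly before the output.
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.2),                    # widest layer -> most parameters
        layers.Dense(64, activation="relu"),    # smaller layer, dropout omitted
        layers.Dense(1, activation="sigmoid"),  # output in [0, 1], no dropout here
    ])
    model.compile(optimizer="adam", loss="mse")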

Related

Neural Network learns better at high output values

I'm training a feed-forward neural network (stochastic gradient descent, 3 small hidden layers, ELU activation, inputs scaled between 0 and 1, weights initialized according to TiRune from https://stats.stackexchange.com/questions/229885/whats-the-recommended-weight-initialization-strategy-when-using-the-elu-activat) on a function that outputs values from about 0 to 55.000. I'm satisfied with the result; it learns to approximate the function pretty well.
But when I scale the outputs to be between 0 and 1 (just the outputs divided by 55.000), it stops learning pretty early and performs much worse. I tried various learning rates, of course.
Is there a reason it learns much better when the output values are between 0 and 55.000 than when they are between 0 and 1? Or does it not make any sense and my problem is somewhere else?
If I understand correctly, the only difference between the networks is the output scaling (target scaling). For a complete answer, I will give a list of possible reasons, including the learning rate you mentioned.
How can scaling the outputs affect learning?
1. You may have a bug. If you scale the network's output, make sure you scale both the predictions and the real targets that you feed in during training, validation, and testing (a small sketch follows this list).
2. Your output activation function may not be able to produce values in the target range. For instance, sigmoids output values between 0 and 1, so scaling targets to lie between 0 and 10 will hurt performance since those targets cannot be produced.
3. Make sure you use correct data types. Normalization can be good, but if it causes loss of information due to the data types, you should normalize to a larger range; truncation and rounding errors cause information loss.
4. Adjust the learning rate: normalization changes the derivative values, and therefore the gradients propagated all the way to the weight updates.
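As a small sketch of points 1 and 2 above (the scaling constant and the sigmoid-range outputs are assumptions for the example, not the asker's actual setup): scale the targets once, keep an output activation whose range matches them, and invert the scaling before reporting predictions.

    import numpy as np

    SCALE = 55_000.0              # assumed maximum of the raw targets

    def to_model_space(y_raw):
        return y_raw / SCALE      # targets now in [0, 1], matching a sigmoid output

    def to_original_space(y_scaled):
        return y_scaled * SCALE   # undo the scaling before computing real-world errors

    y_train_raw = np.array([120.0, 30_500.0, 54_900.0])
    y_train = to_model_space(y_train_raw)      # feed these to the network

    y_pred = np.array([0.002, 0.55, 0.99])     # pretend network outputs (sigmoid range)
    print(to_original_space(y_pred))           # compare to y_train_raw, not y_train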
Good luck!

Why disable dropout during validation and testing?

I've seen in multiple places that you should disable dropout during the validation and testing stages and only keep it during the training phase. Is there a reason why that should happen? I haven't been able to find a good reason for it and was just wondering.
One reason I'm asking is that I trained a model with dropout, and the results turned out well (about 80% accuracy). Then I went on to validate the model but forgot to set the dropout probability to 1, and the model's accuracy went down to about 70%. Is it supposed to be that drastic? And is it as simple as setting the probability to 1 in each dropout layer?
Thanks in advance!
Dropout is a random process of disabling neurons in a layer with probability p. This makes certain neurons 'wrong' in each iteration; basically, you are making neurons 'wrong' about their output so that they rely less on the outputs of particular nodes in the previous layer. It is a method of regularization and reduces overfitting.
However, there are two main reasons you should not use dropout on test data:
1. Dropout makes neurons output 'wrong' values on purpose.
2. Because you disable neurons randomly, your network will produce different outputs on every forward pass, which undermines consistency (see the sketch below).
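A minimal PyTorch sketch of point 2 (the tiny model is made up for illustration): with dropout active, repeated forward passes on the same input disagree; in eval mode they are identical.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(50, 1))
    x = torch.randn(1, 10)

    model.train()                             # dropout active
    print(model(x).item(), model(x).item())   # two different predictions for the same input

    model.eval()                              # dropout disabled at inference
    print(model(x).item(), model(x).item())   # identical predictions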
However, you might want to read some more on what validation/testing exactly is:
Training set: a set of examples used for learning, i.e. to fit the parameters of the classifier. In the MLP case, we would use the training set to find the “optimal” weights with the back-prop rule.
Validation set: a set of examples used to tune the parameters of a classifier. In the MLP case, we would use the validation set to find the “optimal” number of hidden units or determine a stopping point for the back-propagation algorithm.
Test set: a set of examples used only to assess the performance of a fully-trained classifier. In the MLP case, we would use the test set to estimate the error rate after we have chosen the final model (MLP size and actual weights). After assessing the final model on the test set, YOU MUST NOT tune the model any further!
Why separate test and validation sets? The error rate estimate of the final model on validation data will be biased (smaller than the true error rate) since the validation set is used to select the final model. After assessing the final model on the test set, YOU MUST NOT tune the model any further!
Source: Introduction to Pattern Analysis, Ricardo Gutierrez-Osuna, Texas A&M University (answer)
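As a quick illustration of that three-way split (the proportions and random data are just an example, not part of the quote):

    import numpy as np
    from sklearn.model_selection import train_test_split

    X, y = np.random.rand(1000, 10), np.random.rand(1000)

    # 70% train, then split the remaining 30% evenly into validation and test
    X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=42)
    X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=42)

    print(len(X_train), len(X_val), len(X_test))   # 700 150 150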
So even for validation, how would you determine which nodes to remove if the nodes have a random probability of being deactivated?
Dropout is a method of making bagging practical for ensembles of very many large neural networks.
Along the same line, we may recall the following false explanation:
For new data, we can predict their classes by taking the average of the results from all N learners, p(y|x) ≈ (1/N) * sum_i p_i(y|x). Since N is a constant we can just ignore it and the result remains the same, so we should disable dropout during validation and testing.
The true reason is much more complex. It is because of the weight scaling inference rule:
We can approximate p_{ensemble} by evaluating p(y|x) in one model: the model with all units, but with the weights going out of unit i multiplied by the probability of including unit i. The motivation for this modification is to capture the right expected value of the output from that unit. There is not yet any theoretical argument for the accuracy of this approximate inference rule in deep nonlinear networks, but empirically it performs very well.
When we train the model using dropout (for example, for one layer), we zero out some outputs of some neurons and scale the others up by 1/keep_prob to keep the expectation of the layer almost the same as before. In the prediction process we could still use dropout, but we would get a different prediction each time because we drop values out randomly, so we would need to run the prediction many times to get the expected output. Such a process is time-consuming, so instead we remove the dropout, and the expectation of the layer remains the same.
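Here is a small NumPy sketch of that argument (the weights and activations are random placeholders): the average over many stochastic inverted-dropout passes approaches the single deterministic pass that the weight scaling rule justifies.

    import numpy as np

    rng = np.random.default_rng(0)
    keep_prob = 0.8
    W = rng.normal(size=(16, 1))      # placeholder weights of the next layer
    h = rng.random(16)                # placeholder activations of the dropped-out layer

    # Many stochastic passes with inverted dropout, then averaged
    samples = []
    for _ in range(20000):
        mask = rng.random(16) < keep_prob
        samples.append((h * mask / keep_prob) @ W)
    mc_estimate = np.mean(samples)

    deterministic = h @ W             # single pass with dropout removed
    print(mc_estimate, deterministic.item())   # the two values are close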
References:
Difference between Bagging and Boosting?
Section 7.12 of Deep Learning
The simplest reason may be that during prediction (test, validation, or after production deployment) you want to use the capability of each and every learned neuron, and you really don't want to skip some of them randomly.
That's the only reason we set the keep probability to 1 during testing.
There is a Bayesian technique called Monte Carlo dropout in which dropout is not disabled during testing. The model is run several times with the same dropout rate (or in one go as a batch), and the mean and variance of the results are calculated to estimate the uncertainty.
Uber, for example, has applied this approach to quantify the uncertainty of its predictions.
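A hedged PyTorch sketch of Monte Carlo dropout (the model, dropout rate, and number of passes are arbitrary choices): keep dropout active at prediction time, run T stochastic forward passes, and report their mean and variance.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))
    x = torch.randn(1, 10)

    model.train()                     # leave dropout enabled on purpose
    T = 100                           # number of stochastic forward passes
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(T)])

    mean = preds.mean(dim=0)          # the prediction
    var = preds.var(dim=0)            # a rough uncertainty estimate
    print(mean.item(), var.item())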
Short answer:
Dropout is used to bring down overfitting on the training data; it acts as a regularizer. So if you have high variance (i.e. a large gap between your training set and validation set accuracy), use dropout during training. It is not a good idea to apply dropout on test and validation data, because you cannot be sure which neurons will be shut off, and you would be discarding random neurons that may well be important.

Backpropagation neural network, too many neurons in layer causing output to be too high

Having a neural network with a lot of inputs causes problems like these:
The neural network gets stuck, and the feed-forward calculation always gives an output of 1.0 because the output sum is too big; during backpropagation, the sum of the gradients is too high, which makes the learning steps too dramatic.
The neural network uses tanh as the activation function in all layers.
After giving it a lot of thought, I came up with the following solutions:
1. Initializing smaller random weight values (WeightRandom / PreviousLayerNeuronCount), or
2. After calculating the sum of either outputs or gradients, dividing the sum by the number of neurons in the previous layer (for the output sum) or by the number of neurons in the next layer (for the gradient sum), and only then passing the sum into the activation/derivative function.
I don't feel comfortable with the solutions I came up with.
Solution 1 does not solve the problem entirely; the possibility of the gradient or output sum getting too high is still there. Solution 2 seems to solve the problem, but I fear it changes the network's behavior in a way that might prevent it from solving some problems anymore.
What would you suggest in this situation, keeping in mind that reducing the neuron count in the layers is not an option?
Thanks in advance!
General things that affect backpropagation's output include the initial selection of weights and biases, the number of hidden units, the number of training patterns, and the number of iterations. As for selecting the initial weights and biases, there are several algorithms that can be used; one of them is the Nguyen-Widrow algorithm. You can use it to initialize the weights and biases; I've tried it and it gives good results.
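For reference, a rough NumPy sketch of Nguyen-Widrow initialization as it is commonly described (the exact constants and ranges vary between references, so treat this as an assumption-laden outline): draw small random weights, then rescale each unit's weight vector to a norm of beta = 0.7 * H^(1/n).

    import numpy as np

    def nguyen_widrow_init(n_inputs, n_hidden, rng=np.random.default_rng(0)):
        """Rough Nguyen-Widrow initialization as commonly described:
        small random weights, rescaled so each hidden unit covers part of the input range."""
        beta = 0.7 * n_hidden ** (1.0 / n_inputs)          # scaling factor
        W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
        W = beta * W / np.linalg.norm(W, axis=1, keepdims=True)
        b = rng.uniform(-beta, beta, size=n_hidden)
        return W, b

    W, b = nguyen_widrow_init(n_inputs=10, n_hidden=20)
    print(np.linalg.norm(W, axis=1)[:3])   # each row norm equals beta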

Is there any standard rule for choosing the best result, or at which stage to stop training with minimum error?

I have a dataset containing 1100 samples, of which I have used 75% for training, 15% for testing, and 15% for validation. The problem is that every time I train the network on the same training set I get very different results. Is there any standard rule for choosing the best result, or for deciding at which point to stop training to get the minimum error?
Normally, if you are using a neural network, you should not get very different results between different runs on the same training set. So, first of all, check that your algorithm is working correctly on some standard benchmark problems (like the iris or Wisconsin datasets from the UCI repository).
Regarding when to stop the training, there are two options:
1. When the training set error falls below a threshold
2. When the validation set error starts increasing
Case (1) is clear, as the training error always decreases. For case (2), however, there is no absolute criterion, as the validation error may fluctuate during training. So just plot it to see how it behaves, and then set a threshold based on your observations (for example, stop when its value becomes 10% larger than the minimum value it reached during training).
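A minimal sketch of that second rule (the error values and the 10% tolerance are made-up numbers matching the example above):

    # Hypothetical validation errors recorded after each epoch (made-up numbers)
    val_errors = [0.40, 0.31, 0.25, 0.21, 0.19, 0.18, 0.18, 0.19, 0.21, 0.24]

    best = float("inf")
    for epoch, err in enumerate(val_errors):
        best = min(best, err)
        if err > 1.10 * best:          # 10% above the best value seen so far
            print(f"Stop at epoch {epoch} (error {err:.2f}, best {best:.2f})")
            break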

ANN different results for same train-test sets

I'm implementing a neural network for a supervised classification task in MATLAB.
I have a training set and a test set to evaluate the results.
The problem is that every time I train the network on the same training set I get very different results (sometimes I get 95% classification accuracy and sometimes around 60%) for the same test set.
Now I know this is because I get different initial weights, and I know I can use a seed to fix the initial weights, but the question is: what does this say about my data, and what is the right way to look at it? How do I define the accuracy I'm getting with my designed ANN? Is there a protocol for this (like running the ANN 50 times and taking the average accuracy, or something)?
Thanks
Make sure your test set is large enough compared to the training set (e.g. 10% of the overall data) and check it for diversity. If your test set only covers very specific cases, this could be the reason. Also make sure you always use the same test set. Alternatively, you should google the term cross-validation.
Furthermore, good training set accuracy combined with bad test set accuracy is a sign of overfitting. Try applying regularization, such as simple L2 weight decay (multiply your weight matrices by e.g. 0.999 after each weight update). Depending on your data, dropout or L1 regularization could also help (especially if you have a lot of redundancy in your input data). Also try a smaller network topology (fewer layers and/or fewer neurons per layer).
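As a tiny sketch of that weight-decay trick (the decay factor, learning rate, and shapes are placeholders): shrink the weight matrix slightly after each update.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(20, 10))       # placeholder weight matrix
    decay = 0.999                       # weight decay factor from the suggestion above
    learning_rate = 0.01

    for step in range(100):
        grad = rng.normal(size=W.shape)     # placeholder gradient from backpropagation
        W -= learning_rate * grad           # usual update
        W *= decay                          # L2-style weight decay: shrink weights slightly

    print(np.abs(W).mean())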
To speed up training, you could also try alternative learning algorithms like RPROP+, RPROP- or RMSProp instead of plain backpropagation.
Looks like your ANN is not converging to the optimal set of weights. Without further details of the ANN model, I cannot pinpoint the problem, but I would try increasing the number of iterations.