Caffe: why does the Dropout layer also exist in Deploy (testing)? - neural-network

I understand that Dropout is for efficient training, avoiding over-fitting and speeding up learning. However, I do not understand why I also see it in deploy (testing).
Should I set dropout_ratio: 1.0 when testing?

TL;DR
Don't touch the dropout layer. Caffe knows it should do nothing during inference.
"Dropout" is indeed a very powerful addition to the learning process, and it seemingly has no impact at inference time.
However, if you consider a naive implementation where at train time one only sets some of the neurons to zero, then at test time you must compensate for the fact that all neurons are active by scaling the activations (to get the same overall "strength" of the signal). In this case the inference-time "Dropout" becomes a simple scale layer (with a known and fixed scale factor).
Fortunately, the more thoughtful implementation does this scaling as part of training (that is, setting some of the neurons to zero and simultaneously scaling up the rest of the neurons by a predefined scale factor). This way, at inference time the "Dropout" layer does absolutely nothing.
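A minimal numpy sketch of this "inverted dropout" idea (the names and the keep_prob value are illustrative, not Caffe's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.5  # probability of keeping a neuron (illustrative value)

def dropout_train(x):
    # Zero out neurons at random and scale the survivors by 1/keep_prob,
    # so the expected activation matches the no-dropout signal strength.
    mask = rng.binomial(1, keep_prob, size=x.shape)
    return x * mask / keep_prob

def dropout_inference(x):
    # The scaling already happened at train time, so inference is a no-op.
    return x
```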
To learn more about "Dropout"'s contribution to the stability of training and its impact on the generalization capacity of the net, you can read Sec. 7.12 of the Deep Learning book (Goodfellow, Bengio and Courville).

Related

Can dropout increase training data performance?

I am training a neural network with dropout. It happens that as I decrease dropout from 0.9 to 0.7, the loss (cross-validation error) also decreases for the training data. I also noticed that accuracy increases as I reduce the dropout parameter.
It seems odd to me. Does it make sense?
Dropout is a regularization technique. You should use it only to reduce variance (the gap between validation performance and training performance). It is not intended to reduce the bias, and you should not use it in this way; it is very misleading.
Probably the reason you see this behavior is that you use a very high value for dropout. 0.9 means you neutralize too many neurons. It makes sense that once you put 0.7 there instead, the network has more neurons to use while learning on the training set. So the performance will increase for lower values.
You should usually see the training performance drop a bit, while the performance on the validation set increases (if you do not have one, at least on the test set). This is the desired behavior you are looking for when using dropout. The current behavior you get is because of the very high value for dropout.
Start with 0.2 or 0.3 and compare the bias vs. variance in order to get a good value for dropout.
My clear recommendation: don't use it to improve bias, but to reduce variance (error on validation set).
In order to fit the training set better, I recommend:

- find a better architecture (or change the number of neurons per layer)
- try different optimizers
- hyperparameter tuning
- maybe train the network a bit longer
Hopefully this helps!
Dropout works by probabilistically removing, or “dropping out,” inputs to a layer, which may be input variables in the data sample or activations from a previous layer. It has the effect of simulating a large number of networks with a very different network structure and, in turn, making nodes in the network generally more robust to the inputs.
With dropout (at a rate below some threshold), the accuracy will gradually increase and the loss will gradually decrease at first (that is what is happening in your case).
When you increase dropout beyond a certain threshold, it results in the model not being able to fit properly. Intuitively, a higher dropout rate would result in a higher variance to some of the layers, which also degrades training.
What you should always remember is that Dropout, like all other forms of regularization, reduces model capacity. If you reduce the capacity too much, you are sure to get bad results.
Hope this helps.

Is it good practice to apply batch normalization on frozen weights?

I'm wondering if it's common to use a batch norm layer on layers that have frozen weights and biases. In my particular case, I have a pretrained VGG19 with weights trained on ImageNet. What I'm trying to do now is to train this network, starting from the pretrained weights, with a batch norm layer added before every single nonlinearity. My question is: is it good practice to use a batch norm layer on every layer even though the earlier layers' weights are frozen, i.e. not trainable? My first thought was that it is good practice, because even though those earlier layers are not trainable, the batch norm parameters are still trainable, so the input to those frozen layers would be normalized and have a good distribution. If you have any idea about this, please give me an intuitive explanation. Sorry for my poor English and thank you for your attention.
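For reference, a minimal sketch of the setup being described, assuming PyTorch and torchvision (the question does not state the framework): the pretrained weights are frozen, and a trainable BatchNorm2d is inserted before each ReLU.

```python
import torch.nn as nn
from torchvision import models

# Load VGG19 with ImageNet weights and freeze all pretrained parameters.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in vgg.parameters():
    p.requires_grad = False

# Rebuild the feature extractor, inserting a BatchNorm2d before every
# ReLU nonlinearity; only these new BN parameters remain trainable.
layers, channels = [], 3
for layer in vgg.features:
    if isinstance(layer, nn.Conv2d):
        channels = layer.out_channels  # track the current channel count
    if isinstance(layer, nn.ReLU):
        layers.append(nn.BatchNorm2d(channels))  # fresh BN params, trainable
    layers.append(layer)
vgg.features = nn.Sequential(*layers)
```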

Why disable dropout during validation and testing?

I've seen in multiple places that you should disable dropout during validation and testing stages and only keep it during the training phase. Is there a reason why that should happen? I haven't been able to find a good reason for that and was just wondering.
One reason I'm asking is because I trained a model with dropout, and the results turned out well - about 80% accuracy. Then, I went on to validate the model but forgot to set the prob to 1 and the model's accuracy went down to about 70%. Is it supposed to be that drastic? And is it as simple as setting the prob to 1 in each dropout layer?
Thanks in advance!
Dropout is a random process of disabling neurons in a layer with probability p. This makes certain neurons feel they are 'wrong' in each iteration - basically, you are making neurons feel 'wrong' about their output so that they rely less on the outputs of the nodes in the previous layer. This is a method of regularization and reduces overfitting.
However, there are two main reasons you should not use dropout on test data:

- Dropout makes neurons output 'wrong' values on purpose.
- Because you disable neurons randomly, your network will produce different outputs on every forward pass. This undermines consistency.
However, you might want to read some more on what validation/testing exactly is:
Training set: a set of examples used for learning, i.e. to fit the parameters of the classifier. In the MLP case, we would use the training set to find the "optimal" weights with the back-prop rule.

Validation set: a set of examples used to tune the parameters of a classifier. In the MLP case, we would use the validation set to find the "optimal" number of hidden units or determine a stopping point for the back-propagation algorithm.

Test set: a set of examples used only to assess the performance of a fully-trained classifier. In the MLP case, we would use the test set to estimate the error rate after we have chosen the final model (MLP size and actual weights). After assessing the final model on the test set, YOU MUST NOT tune the model any further!

Why separate test and validation sets? The error rate estimate of the final model on validation data will be biased (smaller than the true error rate), since the validation set is used to select the final model.
Source: Introduction to Pattern Analysis, Ricardo Gutierrez-Osuna, Texas A&M University.
So even for validation, how would you determine which nodes to remove if the nodes have a random probability of being deactivated?
Dropout is a method of making bagging practical for ensembles of very many large neural networks.
Along the same lines, we may be tempted by the following false explanation: for new data, we can predict their classes by taking the average of the results from all N learners; since N is a constant, we can just ignore it and the result remains the same, so we should disable dropout during validation and testing.
The true reason is much more complex. It is because of the weight scaling inference rule:
We can approximate p_{ensemble} by evaluating p(y|x) in one model: the model with all units, but with the weights going out of unit i multiplied by the probability of including unit i. The motivation for this modification is to capture the right expected value of the output from that unit. There is not yet any theoretical argument for the accuracy of this approximate inference rule in deep nonlinear networks, but empirically it performs very well.
When we train the model using dropout (for example, for one layer), we zero out some outputs of some neurons and scale the others up by 1/keep_prob to keep the expectation of the layer almost the same as before. In the prediction process we could use dropout, but we would get a different prediction each time because we drop values out randomly; we would then need to run the prediction many times to get the expected output. Such a process is time-consuming, so instead we remove the dropout, and the expectation of the layer remains the same.
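A small numpy experiment (illustrative only) showing that the weight-scaling rule matches the average over many dropout passes for a single linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)          # input activations
W = rng.normal(size=(50, 100))    # weights of one linear layer
keep_prob = 0.7

# Monte Carlo estimate: drop input units at random, many times, and average.
runs = [W @ (x * rng.binomial(1, keep_prob, size=x.shape))
        for _ in range(10_000)]
mc_mean = np.mean(runs, axis=0)

# Weight-scaling inference rule: one deterministic pass with scaled weights.
scaled = (keep_prob * W) @ x

print(np.allclose(mc_mean, scaled, atol=0.5))  # True, up to MC noise
```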
References:
Difference between Bagging and Boosting?
Section 7.12 of Deep Learning (Goodfellow, Bengio and Courville)
The simplest reason may be that during prediction (test, validation, or after production deployment) you want to use the capability of each and every learned neuron, and you really don't want to skip some of them randomly.
That's the only reason we set the probability to 1 during testing.
There is a Bayesian technique called Monte Carlo dropout in which dropout is not disabled during testing. The model is run several times with the same dropout rate (or in one go, as a batch), and the mean and variance of the results are calculated to determine the uncertainty.
Uber, for example, has applied this technique to quantify uncertainty in its time-series predictions.
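A minimal sketch of Monte Carlo dropout (assuming PyTorch; the model architecture and T are placeholders): keep the dropout layers stochastic at test time and aggregate repeated forward passes.

```python
import torch
import torch.nn as nn

# Placeholder model with a dropout layer.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(64, 1))

def mc_dropout_predict(model, x, T=100):
    model.train()  # train mode keeps nn.Dropout stochastic at "test" time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(T)])
    return preds.mean(dim=0), preds.var(dim=0)  # predictive mean and variance

mean, var = mc_dropout_predict(model, torch.randn(8, 10))
```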
Short answer:
Dropout is there to bring down overfitting on the training data; it is used as a regularization technique. If you have high variance (i.e. look at the difference between training-set and validation-set accuracy), apply dropout on the training data. It makes no sense to apply dropout on test or validation data, because you cannot be sure which neurons are going to be shut off, and you would be randomly discarding the contribution of neurons that may be important.

Neural Network: validation accuracy constant, training accuracy decreasing

I have a neural network which does image segmentation. I trained it for ~100 epochs. The current effect is that the validation loss is constant (0.2 +/- 0.03) and the training accuracy is still decreasing (currently 0.07), but very slowly.
The results of the neural network are quite good.
What does this mean? Is it overfitting? Should I stop the training?
I currently use dropout in the first layer (50%). Would it make sense to add dropout to every layer (there are about ~15 layers)? Or should I also add L2 regularization? Does it make sense to use both L2 and dropout?
Thank you very much
It is recommended to use L2 when you use dropout. I think your dropout rate of 50% is a little too high. People usually use around 20%, depending on the operations.
Moreover, 100 epochs may not be enough; it depends on the size of your training set and the size of your neural network.
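As an illustration of combining the two (a hedged PyTorch sketch; the layer sizes and the weight_decay value are placeholders, not tuned recommendations):

```python
import torch
import torch.nn as nn

# Placeholder network with the milder ~20% dropout rate suggested above.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                      nn.Dropout(p=0.2),
                      nn.Linear(64, 10))

# weight_decay applies an L2 penalty to the weights during optimization.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```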
What do you mean by "quite good"? Please quantify it and share an example. The validation loss and accuracy are just "indicators"; their values also depend on the NN and the training set, so 0.2 can be either bad or good depending on your problem.

Backpropagation makes network worse

I am experimenting with neural networks. I have a network with 8 input neurons, 5 hidden and 2 output neurons. When I let the network learn with backpropagation, it sometimes produces worse results between single training iterations. What can be the cause? It should not be an implementation error, because I even tried the implementation from Introduction to Neural Networks for Java and it does exactly the same.
Nothing is wrong. Backpropagation is just gradient optimization, and gradient methods do not guarantee that the error gets smaller in each iteration (you do have a guarantee that there exists a sufficiently small step size/learning rate with this property, but in practice there is no way of finding it). Furthermore, you are probably updating the weights after each sample, making your training stochastic, which is even more "unstable" in this respect (as you do not calculate the true gradient). However, if because of this your method is not converging, think about properly scaling your data, reducing the learning rate, and perhaps adding a momentum term. These are gradient-based optimization issues, not backpropagation as such.
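A small numpy sketch of the suggested remedies (illustrative helper names, not a library API): scale the inputs, lower the learning rate, and add a momentum term to plain SGD.

```python
import numpy as np

def standardize(X):
    # Proper scaling of the data: zero mean, unit variance per feature.
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    # One update with a momentum term; smooths the noisy per-sample
    # gradients that cause the error to jump up between iterations.
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```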