I am using a neural network for sentiment analysis and I'm getting good results, but my validation accuracy and loss are not changing much even though the training accuracy increases or decreases.
Related
According to ML.NET, the training accuracy is 97%. But when I try to predict the class, it always returns the same value, no matter what input data is provided. That doesn't make much sense, because the real accuracy is clearly not 97% but 0%. So I wanted to ask: is this normal, or do I maybe need to leave it training for 10 hours so it reaches higher than 97%?
The training data is the Parkinson's Disease (PD) classification dataset from Kaggle.
I am training a DNN in MATLAB. While optimizing my network, I am observing a decrease in accuracy as I increase the number of epochs. Is that possible?
The loss values, on the other hand, decrease during training as the epochs increase. Please guide.
tl;dr: absolutely.
When the entire training dataset has been seen once by the model (fed forward once), that is called 1 epoch.
The graph below shows the general behaviour of accuracy as the number of epochs grows. Training for more epochs can result in lower accuracy on the validation set, even though the training loss continues to decrease (and training accuracy stays high). This is called overfitting.
The number of epochs to train for is itself a hyperparameter that needs tuning.
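As a minimal sketch of how you might watch for this (assuming a Keras model named model compiled with metrics=['accuracy'] and data arrays x, y, all of which are placeholders and not from the original post), you can record training and validation accuracy per epoch and look for the point where the validation curve levels off or turns down:

import matplotlib.pyplot as plt

# Assumed placeholders: `model` is a compiled Keras classifier, `x`/`y` is the data.
history = model.fit(x, y, validation_split=0.2, epochs=100, verbose=0)

# Plot training vs. validation accuracy per epoch; a widening gap or a
# validation curve that turns downward is the usual sign of overfitting.
plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()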
It is absolutely possible:
Especially when you are training in batches
When your learning rate is too high
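To make the learning-rate point concrete with a toy example (a self-contained sketch with made-up numbers, not code from the answer): plain gradient descent on a simple quadratic loss converges with a small step size, but once the learning rate is too large each update overshoots and the error grows steadily:

import numpy as np

def loss(w):
    return (w - 3.0) ** 2          # simple quadratic with minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)

for lr in (0.1, 1.1):              # 1.1 is deliberately too large for this loss
    w = 0.0
    for step in range(10):
        w -= lr * grad(w)
    print(f"lr={lr}: final loss = {loss(w):.3f}")
# With lr=0.1 the loss shrinks towards 0; with lr=1.1 every step overshoots
# and the loss grows, the same symptom as accuracy getting worse over epochs.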
I am learning and experimenting with neural networks and would like the opinion of someone more experienced on the following issue:
When I train an autoencoder in Keras ('mean_squared_error' loss function and SGD optimizer), the validation loss gradually goes down and the validation accuracy goes up. So far so good.
However, after a while, the loss keeps decreasing but the accuracy suddenly falls back to a much lower level.
Is it 'normal' or expected behaviour that the accuracy goes up very fast, stays high, and then suddenly falls back?
Should I stop training at the maximum accuracy even if the validation loss is still decreasing? In other words, should I use val_acc or val_loss as the metric to monitor for early stopping?
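For reference, a minimal Keras sketch of what monitoring either quantity looks like (the model and data names here are placeholders, not from this post, and restore_best_weights requires a reasonably recent Keras version); which signal is "right" depends on what you ultimately care about:

from tensorflow.keras.callbacks import EarlyStopping

# Assumed placeholders: `autoencoder` and `x_train` are not defined in the post.
stop_on_loss = EarlyStopping(monitor="val_loss", patience=20,
                             restore_best_weights=True)
# To stop on accuracy instead, use monitor="val_accuracy" (or "val_acc" in
# older Keras versions); this requires the model to be compiled with
# metrics=["accuracy"].

autoencoder.fit(x_train, x_train,
                validation_split=0.2,
                epochs=20000,
                callbacks=[stop_on_loss])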
See images:
Loss: (green = val, blue = train)
Accuracy: (green = val, blue = train)
UPDATE:
The comments below pointed me in the right direction and I think I understand it better now. It would be nice if someone could confirm that the following is correct:
The accuracy metric measures the percentage of predictions where y_pred == y_true and thus only makes sense for classification.
My data is a combination of real-valued and binary features. The reason the accuracy graph rises very steeply and then falls back, while the loss continues to decrease, is that around epoch 5000 the network probably predicted roughly 50% of the binary features correctly. As training continues, around epoch 12000, the prediction of the real-valued and binary features together improves, hence the decreasing loss, but the predictions of the binary features alone become slightly less correct. Therefore the accuracy falls while the loss decreases.
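A small self-contained sketch (made-up numbers, just to make this mechanism concrete): with mixed real-valued and binary targets, a later set of predictions can have a lower mean squared error overall while getting fewer of the binary entries right, so the loss improves while the accuracy drops:

import numpy as np

# Targets: two real-valued entries followed by two binary entries.
y_true = np.array([2.0, -1.0, 1.0, 0.0])

# "Earlier" predictions: real parts are poor, both binary parts round correctly.
pred_early = np.array([3.5, 0.5, 0.8, 0.2])
# "Later" predictions: real parts much better, one binary entry now rounds wrong.
pred_late = np.array([2.1, -0.9, 0.45, 0.1])

def mse(p):
    return np.mean((p - y_true) ** 2)

def binary_accuracy(p):
    # Accuracy only over the binary entries (the last two), thresholded at 0.5.
    return np.mean((p[2:] > 0.5) == (y_true[2:] > 0.5))

print(mse(pred_early), binary_accuracy(pred_early))   # higher loss, accuracy 1.0
print(mse(pred_late), binary_accuracy(pred_late))     # lower loss, accuracy 0.5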
If the prediction target is continuous (real-valued) rather than discrete, use MSE (mean squared error), because the values are real numbers.
In the case of discrete values, i.e. classification or clustering, use accuracy, because the target values are only 0 or 1. The concept of MSE is not really applicable there; instead use accuracy = (number of correct predictions / total number of predictions) * 100.
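As a tiny worked illustration of the two formulas (made-up numbers, not from the answer):

import numpy as np

# Continuous targets: mean squared error.
y_true_cont = np.array([1.2, 0.7, 3.4])
y_pred_cont = np.array([1.0, 0.9, 3.0])
mse = np.mean((y_pred_cont - y_true_cont) ** 2)       # 0.08

# Discrete (0/1) targets: accuracy as percentage of correct predictions.
y_true_cls = np.array([1, 0, 1, 1])
y_pred_cls = np.array([1, 0, 0, 1])
accuracy = np.mean(y_pred_cls == y_true_cls) * 100    # 75.0

print(mse, accuracy)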
I'm using a stochastic (incremental, as opposed to batch) approach to training my neural net, and after every 1,000,000 iterations I print the sum of the errors of the neurons in the net. For a while I can see this overall error decreasing steadily, but as progress begins to slow, it then reverses completely and the overall error begins increasing steadily. This cannot be normal behaviour, and I'm not sure what could be causing it. My learning rate is set very low, at 0.0001.
I am training the neural network with an input vector of 85*650 and a target vector of 26*650. Here is the list of parameters that I have used:
net.trainParam.max_fail = 6;    % maximum validation failures before stopping
net.trainParam.min_grad = 1e-5; % minimum performance gradient
net.trainParam.show = 10;       % epochs between training status displays
net.trainParam.lr = 0.9;        % learning rate
net.trainParam.epochs = 13500;  % maximum number of epochs to train
net.trainParam.goal = 0.001;    % performance (error) goal
Number of hidden nodes = 76
As you can see, I have set the number of epochs to 13500. Is it OK to set the number of epochs to such a large number? The performance goal is not reached if the number of epochs is decreased, and I am getting bad classification results while testing.
Try not to focus on the number of epochs. Instead, you should have at least two sets of data: one for training and another for testing. Use the testing set to get a feel for how well your ANN is performing and how many epochs are needed to get a decent ANN.
For example, you want to stop training when performance on your testing set has levelled off or has begun to decrease (get worse). This would be evidence of over-learning, which is the reason why more epochs is not always better.
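As a hedged sketch of this idea in Python (scikit-learn here rather than the MATLAB toolbox from the question, and the dataset is synthetic since the question's data is not available): set a large epoch cap, hold out part of the data, and stop once the held-out score stops improving:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real data.
X, y = make_classification(n_samples=650, n_features=85, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Large epoch cap, but training stops when the held-out validation score
# has not improved for n_iter_no_change consecutive epochs.
clf = MLPClassifier(hidden_layer_sizes=(76,),
                    max_iter=13500,
                    early_stopping=True,
                    validation_fraction=0.1,
                    n_iter_no_change=10,
                    random_state=0)
clf.fit(X_train, y_train)
print("epochs actually run:", clf.n_iter_)
print("test accuracy:", clf.score(X_test, y_test))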