I use tflearn.DNN to build a deep neural network:
import tflearn

# Build the neural network
net = tflearn.input_data(shape=[None, 5], name='input')
net = tflearn.fully_connected(net, 64, activation='sigmoid')
net = tflearn.batch_normalization(net)  # assign back so the batch-norm layer is actually part of the graph
net = tflearn.fully_connected(net, 32, activation='sigmoid')
net = tflearn.batch_normalization(net)
net = tflearn.fully_connected(net, 16, activation='sigmoid')
net = tflearn.batch_normalization(net)
net = tflearn.fully_connected(net, 8, activation='sigmoid')
net = tflearn.batch_normalization(net)
# activation needs to be softmax for classification.
# default loss is cross-entropy and the default metric is accuracy,
# so cross-entropy + accuracy = categorical network
net = tflearn.fully_connected(net, 2, activation='softmax')
sgd = tflearn.optimizers.SGD(learning_rate=0.01, lr_decay=0.96, decay_step=100)
net = tflearn.regression(net, optimizer=sgd, loss='categorical_crossentropy')
model = tflearn.DNN(net, tensorboard_verbose=0)
I have tried many things, but the total loss always stays around this value:
Training Step: 95 | total loss: 0.68445 | time: 1.436s
| SGD | epoch: 001 | loss: 0.68445 - acc: 0.5670 | val_loss: 0.68363 - val_acc: 0.5714 -- iter: 9415/9415
What can I do to decrease the total loss and make the accuracy get higher?
Many aspects can be considered when trying to improve a network's performance, covering both the dataset and the network itself.
From the network structure you pasted alone, it is difficult to give a definite way to increase accuracy without more information about the dataset and the target you are trying to predict. However, the following practices may help you debug / improve the network:
1. About the dataset
Is the dataset balanced, or is it skewed towards some classes?
Get more training data.
Add data augmentation if possible.
Normalise the data (see the sketch after this list).
Do some feature engineering.
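As a concrete example of the normalisation point, here is a minimal NumPy sketch; the array names and shapes are placeholders, not from the question:
import numpy as np

# Placeholder feature matrices, shaped (n_samples, n_features).
X_train = np.random.rand(1000, 5)
X_val = np.random.rand(200, 5)

# Compute the statistics on the training data only, then apply them to every
# split, so that no information from validation/test leaks into training.
mean = X_train.mean(axis=0)
std = X_train.std(axis=0) + 1e-8   # avoid division by zero
X_train = (X_train - mean) / std
X_val = (X_val - mean) / std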
2. About the network
Is the network size too small / too large?
Check for overfitting or underfitting in the training history, then choose the best number of epochs.
Try initialising the weights with a different initialization scheme.
Try different activation functions, loss functions and optimizers (see the sketch after this list).
Change the number of layers and the number of units per layer.
Change the batch size.
Add dropout layers.
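To make the last few points concrete, here is a minimal sketch of one possible variation of the posted network (ReLU activations, Xavier initialisation, dropout, and the Adam optimizer); whether any of these changes actually helps depends entirely on your data:
import tflearn

net = tflearn.input_data(shape=[None, 5], name='input')
net = tflearn.fully_connected(net, 64, activation='relu', weights_init='xavier')
net = tflearn.dropout(net, 0.8)   # keep probability of 0.8
net = tflearn.fully_connected(net, 32, activation='relu', weights_init='xavier')
net = tflearn.dropout(net, 0.8)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy')
model = tflearn.DNN(net, tensorboard_verbose=0)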
For a more in-depth analysis, the following articles may be helpful:
How To Improve Deep Learning Performance
How to debug neural networks. Manual
Related
I have a sample neural network and am trying to see how much it would cost me to run it on a server and how long it would take to train if, for example, I add 3 more layers with around 4000, 3000, and 2000 nodes respectively.
I understand that, from a high-level perspective, the network needs to:
Feed the inputs through the network and get the results (which in turn runs the sigmoid), which I guess happens in constant time (even though the output may not be constant or even linear!)
Run Adam to optimize the weights/biases, which I guess also happens in linear time, since it is like gradient descent and differs only in how it manages the learning rate!
Update the weights/biases, which is constant!
I can't find a calculator to estimate the computation needed, and I'm thinking of making one if I can get a good understanding of the different variables in a neural network!
This is the code for my TensorFlow model:
const model = tf.sequential();
// Flatten each 4317x5 input into a single vector of 21585 values.
model.add(tf.layers.flatten({inputShape: [4317, 5]}));
model.add(tf.layers.dense({units: 1000, activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 4316, activation: 'sigmoid'}));

const optimizer = tf.train.adam();
model.compile({
  optimizer: optimizer,
  loss: 'meanSquaredError'
});
And here is the network summary printed by TensorFlow:
_________________________________________________________________
Layer (type) Output shape Param #
=================================================================
flatten_Flatten1 (Flatten) [null,21585] 0
_________________________________________________________________
dense_Dense1 (Dense) [null,1000] 21586000
_________________________________________________________________
dense_Dense2 (Dense) [null,4316] 4320316
=================================================================
Total params: 25906316
Trainable params: 25906316
Non-trainable params: 0
What if I change the activation functions to linear or ReLU?
I have a laptop with 16 GB of memory and a 3.2 GHz 8-core ARMv8-A CPU (M1 chip), and it looks like it takes about a minute to train a batch of 32 inputs.
With N training examples, each weight is used O(N) times per round of training, so with M weights you get roughly O(N*M) training time per round. It doesn't really matter where those weights sit in your network; even for recurrent layers (GRU, RNN, LSTM) this stays true.
Where things break down is that you can't let M go to infinity (which is how big-O notation works), because in that case your network training won't converge anymore. Effectively, it would be O(infinity).
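As a rough illustration of that N*M scaling for the model above, here is a back-of-the-envelope sketch; the parameter count comes from the layer sizes in the summary, while the FLOP constant and the dataset size are assumptions, not measurements:
# Dense layers from the summary: flatten (21585) -> dense (1000) -> dense (4316)
layer_sizes = [21585, 1000, 4316]

# Weights + biases per dense layer.
params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(params)                      # 25,906,316 -- matches "Total params" above

# Very rough cost model: ~2 FLOPs per weight for the forward pass and ~2x that
# again for the backward pass, per training example.
n_examples = 10_000                # hypothetical dataset size
flops_per_epoch = 6 * params * n_examples
print(f"{flops_per_epoch:.2e} FLOPs per epoch (order-of-magnitude only)")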
I have 7 classes of inputs related to brain signal activity (EEG).
When the number of classes is large, the performance of classification algorithms may be affected.
As you can see in the following code, I extracted features for them. In the first phase I trained my model with 70% of my data and got 100% accuracy, but in the testing phase with the remaining 30% I did not get more than 42.5% accuracy. What would you suggest to improve the accuracy of my model?
% Feature extraction
for i=1:7
    [A D]=dwt2(segment_train(i).train,'db1');   % 2-D discrete wavelet transform
    wave_train(i).A=A;
    wave_train(i).D=D;
    f1=mean(A);
    f2=median(A);
    f3=max(D);
    f4=abs(fft(D));
    f4=mean(f4);
    f5=var(D);
    f6=skewness(D);
    f7=entropy(D);
    f8=var(A);
    f9=mean(D);
    f(i,:)=[f1 f2 f3 f4 f5 f6 f7 f8 f9];
end

% Classifier
nOfSamples=7;
nOfClassInstance=10;
Sample=f;
class=[1 2 3 4 5 6 7]';

% SVM (multiclass, error-correcting output codes)
Model=fitcecoc(Sample,class);
predictt=predict(Model,Sample);   % note: predictions are made on the training samples
disp('class predict')
disp([class predictt])

% Accuracy
Accuracy=mean(class==predictt)*100;
fprintf('\nAccuracy =%d\n',Accuracy)
The question is a tad broad. However, it's a good idea to explore the distribution of the class labels.
Perhaps the distribution of the classes is skewed? It may be that some classes show up a lot more than others. There are various ways to counteract this, such as up/down-sampling, weighting the error of under-represented classes with a larger factor, etc. It would be a good idea to explore this further.
That being said, it certainly sounds like you're overfitting the model. You may also want to explore regularisation to combat the low test score.
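For example, a minimal Python sketch (with dummy data standing in for your features and 7 class labels) for checking the class distribution and making a stratified split:
import numpy as np
from collections import Counter
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the real feature matrix and the 7-class labels.
X = np.random.rand(70, 9)            # e.g. 9 features per sample
y = np.repeat(np.arange(1, 8), 10)   # 7 classes, 10 samples each

print(Counter(y))                    # how many samples does each class have?

# A stratified split keeps the class proportions equal in train and test,
# which makes the test accuracy easier to interpret.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)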
I designed a CNN to detect motor movements from EEG.
Input Size (EEG data): 18x64 - 18 electrodes and 64 samples per epoch.
layers = [
    imageInputLayer([18 64 1])                 % assumed single-channel input: 18 electrodes x 64 samples (not in the original snippet)
    convolution2dLayer([1 4],10)               % convlayer1: 10 filters of size 1x4
    reluLayer()
    maxPooling2dLayer([1 2],'Stride',[1 2])
    dropoutLayer(0.1)
    convolution2dLayer([4 1],20)               % convlayer2: 20 filters of size 4x1
    reluLayer()
    maxPooling2dLayer([2 1],'Stride',[2 1])
    dropoutLayer(0.1)
    fullyConnectedLayer(2)
    dropoutLayer(0.2)
    softmaxLayer()
    classificationLayer()];
I have data from 8 subjects. I trained the network using 7 subjects and tested it on the left-out subject, and did the same for all 8 subjects (basically leave-one-subject-out). Training accuracy was 96-98% and so was validation accuracy. For some subjects the testing accuracy was 100%, and for a few others it was 98-99%. Is this a case of overfitting, or is this result reliable?
Thanks for your time and help.
Venkat
If the testing performance is that good, then overfitting is not the issue. Overfitting hurts generalization, but if your model performs well on the test data, it is handling unseen data and generalizing well.
I made a neural network which I want to use to classify the input data (400 features per sample) as one of five Arabic dialects. I divide the data into "train", "validation" and "test" sets with net.divideFcn = 'dividerand';. I use trainbr as the training function, which results in a long training, because I have 9000 elements in the training data.
For the network architecture I used two layers, the first with 10 perceptrons and the second with 5, because I use a one-vs-all strategy.
The network training usually ends with the minimum gradient reached, rather than the minimum error.
How can I make the network predict better? Could it be a problem with generalization (the network learns the training data very well, but tends to fail on new data)?
Should I add more perceptrons to the first layer? I'm asking because it takes me about an hour to train the network with 10 perceptrons in the first layer, so the time would only increase.
This is the code for my network:
[Test] = load('testData.mat');
[Ex] = load('trainData.mat');
Ex.trainVectors = Ex.trainVectors';
Ex.trainLabels = Ex.trainLabels';
net = newff(minmax(Ex.trainVectors),[10 5] ,{'logsig','logsig'},'trainlm','learngdm','sse');
net.performFcn = 'mse';
net.trainParam.lr = 0.01;
net.trainParam.mc = 0.95;
net.trainParam.epochs = 1000;
net.trainParam.goal = 0;
net.trainParam.max_fail = 50;
net.trainFcn = 'trainbr';
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.7;
net.divideParam.valRatio = 0.15;
net.divideParam.testRatio = 0.15;
net = init(net);
net = train(net,Ex.trainVectors,Ex.trainLabels);
Thanks !
Working with neural networks is a somewhat creative kind of work, so no one can give you the single true answer. But I can give some advice based on my own experience.
First of all, check the network error when training ends (on the training and validation data sets, before you start to use the test data set). You said it is at a minimum, but what is its actual value? If it is around 50% too, then we have bad data or the wrong network architecture.
If the error on the training data set is OK, the next step is to check how much the coefficients of your net change at the validation step, and what happens to the error there. If they change dramatically, that is a sign the architecture is wrong: the network does not have the ability to generalize and will have to retrain on every new data set.
What else can we do before changing the architecture? We can change the number of epochs. Sometimes this gives good results, but it is somewhat random; we must be sure the change in the coefficients is small at the final steps of training. But as I remember, nntool checks this automatically, so maybe we can skip this step.
One more thing I want to recommend: change the training data set. As you may know, rand always gives you the same numbers at the start of MATLAB, so if you create your data sets only once, you will always work with the same sets. This issue also matters for non-homogeneous data: some parts of your data may be more important than others. So if several different random splits give about the same error, the data is OK and we can go further. If not, we need to work with the data and split it more carefully. Sometimes I avoid using dividerand and divide the data manually (see the sketch below).
Sometimes I try changing the type of activation function. But here you use perceptrons... so the idea is to try sigmoid or linear neurons instead of perceptrons. This rarely leads to significant improvements, but it can help.
If all these steps do not give you enough, you have to change the network architecture, and the number of neurons in the first layer is the first thing to change. Usually when I work on a neural network I spend a lot of time trying not only different numbers of neurons but different types of networks too.
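As an illustration of splitting the data manually and trying several random splits (in Python rather than MATLAB, with placeholder arrays):
import numpy as np

X = np.random.rand(9000, 400)             # placeholder for the 9000 x 400 inputs
y = np.random.randint(0, 5, size=9000)    # placeholder labels (5 dialects)

def split(X, y, seed, train=0.7, val=0.15):
    # Shuffle once with a given seed, then cut into train / validation / test.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr, n_va = int(train * len(X)), int(val * len(X))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

# Try several seeds; if the resulting errors differ a lot, the split matters.
for seed in (0, 1, 2):
    (X_tr, y_tr), (X_va, y_va), (X_te, y_te) = split(X, y, seed)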
For example, I found an interesting article about your topic: a link to an article by Alberto Simões. This is what they say:
Regarding the number of units in the hidden layers, there are some rules of thumb: use the same number of units in all hidden layers, and use at least the same number of units as the maximum between the number of classes and the number of features. But there can be up to three times that value. Given the high number of features we opted to keep that same number of units in the hidden layer.
Some advice from the comments:
The data split method (for train and test data sets) depends on your data. For example, I worked on industrial data and found that in the last part of the data set a technological parameter (the pressure for some equipment) had changed, so I had to get data for both operating modes into the training data set. For your case I don't think there is the same problem... I recommend you try several random splits (just check that they are really different!).
For measuring network error I usually calculate the full vector of errors: I train the net and then check its output for all samples to get the whole error vector. It is useful to produce views like histograms so I can see where my net goes wrong (see the sketch below). It is not necessary, and even harmful, to get the SSE (or MSE) close to zero; usually that means you have already overtrained the net. As a first approximation I usually try to get 80-95% of values correct on the training data set and then try the net on the test data set.
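A small sketch of that error-vector / histogram idea (in Python, with made-up predictions):
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical targets and network outputs for the whole training set.
y_true = np.random.rand(9000)
y_pred = y_true + 0.1 * np.random.randn(9000)

errors = y_pred - y_true        # full error vector, one value per sample
plt.hist(errors, bins=50)       # look for heavy tails or clusters of bad cases
plt.xlabel('error')
plt.ylabel('count')
plt.show()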
I've read a few ideas on the correct sample size for feed-forward neural networks: 5x, 10x, and 30x the number of weights. That part I'm not overly concerned about; what I am concerned about is whether I can reuse my training data (randomly).
My data is broken up like so:
5 independent vars and 1 dependent var per sample.
I was planning on feeding 6 samples in (6x5 = 30 input neurons) and confirming the 7th sample's dependent variable (1 output neuron).
I would train the neural network by running, say, 6 or 7 iterations before trying to predict the next iteration outside of my training data.
Say I have
each sample = 5 independent variables & 1 dependent variables (6 vars total per sample)
output = just the 1 dependent variable
sample:sample:sample:sample:sample:sample->output(dependent var)
Training sliding window 1:
Set 1: 1:2:3:4:5:6->7
Set 2: 2:3:4:5:6:7->8
Set 3: 3:4:5:6:7:8->9
Set 4: 4:5:6:7:8:9->10
Set 5: 5:6:7:8:9:10->11
Set 6: 6:7:8:9:10:11->12
Non training test:
7:8:9:10:11:12 -> 13
Training Sliding Window 2:
Set 1: 2:3:4:5:6:7->8
Set 2: 3:4:5:6:7:8->9
...
Set 6: 7:8:9:10:11:12->13
Non Training test: 8:9:10:11:12:13->14
I figured I would randomly run through my sets per training iteration, say 30 times the number of my weights. I believe my network has about 6 hidden neurons (i.e. sqrt(inputs*outputs)). So 36 + 6 + 1 + 2 bias = 45 weights, so 45 x 30 = 1350 runs?
So I would do a randomization of the 6 sets 1350 times per training sliding window.
I figured due to the small # of data, I was going to do simulation runs (i.e. rerun over the same problem with new weights). So say 1000 times, of which I do 1140 runs over the sliding window using randomization.
I have 113 samples, and this results in 101 training "sliding windows".
Another question: since I'm trying to predict up or down movement (i.e. the dependent variable), should I train against an actual number, or just against whether I guessed the up/down movement correctly? I'm thinking I should shoot for an actual number, but as part of my analysis do a percentage check on whether this number is guessed correctly as up/down.
If you have a small amount of data and a comparatively large number of training iterations, you run the risk of "overtraining": creating a function which works very well on your training data but does not generalize.
The best way to avoid this is to acquire more training data! But if you cannot, there are two things you can do. One is to split the training data into training and verification data, using say 85% to train and 15% to verify (a minimal sketch of this follows below). Verification means computing the fitness of the learner on the held-out data without adjusting the weights / training on it. When the verification fitness (which you are not training on) stops improving (in general it will be noisy) while your training fitness keeps improving, stop training. If on the other hand you use a "sliding window", you may not have a good criterion for when to stop training: the fitness function will bounce around in unpredictable ways. (You might slowly reduce the effect of each training iteration on the parameters to force convergence; maybe not the best approach, but some training regimes do this.) The other thing you can do is normalize your nodes' weights via some metric to enforce a notion of 'smoothness': if you visualize overfitting for a second, you'll see that in the extreme case the fitness function curves sharply around your dataset positives...
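A minimal sketch of that 85/15 train/verification idea, using scikit-learn's built-in early stopping on toy data (the network and data here are placeholders, not yours):
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy data standing in for the real problem.
X = np.random.rand(500, 30)
y = np.random.randint(0, 2, size=500)

# early_stopping=True holds out 15% of the training data as a validation set
# and stops once the validation score has not improved for n_iter_no_change epochs.
clf = MLPClassifier(hidden_layer_sizes=(6,), early_stopping=True,
                    validation_fraction=0.15, n_iter_no_change=10,
                    max_iter=1000, random_state=0)
clf.fit(X, y)
print(len(clf.loss_curve_), "epochs run before stopping")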
As for the latter question: for the training to converge, your fitness function needs to be smooth. If you were to use a binary all-or-nothing fitness term, most likely whatever algorithm you use to train (backprop, BFGS, etc.) would not converge. In practice, the classification criterion should be an activation that is above a threshold for a positive result, less than or equal to it for a negative result, and that varies smoothly in your weight/parameter space. You can think of 0 as "I am certain the answer is up" and 1 as "I am certain the answer is down", and then use a fitness function that has a higher cost for incorrect guesses that were more certain... There are subtleties in how the function is shaped (for example you might have different views on how acceptable a false negative and a false positive are), and you may also introduce an "uncertain" region where the result is closer to zero weight, but it should certainly be continuous/smooth.
You can re-use sliding windows.
It is basically the same concept as bootstrapping (your training set), which in itself reduces training time, but I don't know whether it is really helpful in making the net more adaptive to anything other than the training data (a tiny sketch of the idea follows).
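Here is what bootstrapping the sets inside one window could look like (indices only, everything here is a placeholder):
import numpy as np

window_sets = np.arange(6)   # the 6 training sets of one sliding window

rng = np.random.default_rng(0)
# Bootstrap: draw the sets with replacement, so some repeat and some drop out,
# giving a slightly different training run each time.
resampled = rng.choice(window_sets, size=len(window_sets), replace=True)
print(resampled)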
Below is an example of a sliding window in pictorial format (using spreadsheet magic)
http://i.imgur.com/nxhtgaQ.png
https://github.com/thistleknot/FredAPI/blob/05f74faf85d15f6898aa05b9b08d5363fe27c473/FredAPI/Program.cs
Line 294 shows how the code is run using randomization; it resets the randomization at position 353 so the rest flows as normal.
I was also able to use a 1 (up) or 0 (down) as my target values and the network did converge.