I'm trying to use a neural network for a classification problem, but the trained network performs very badly. The classification problem:
I have more than 300,000 training samples
Each input is a vector of 32 values (real values)
Each output is a vector of 32 values (0 or 1)
This is how I train the network:
DNN_SIZE = [1000, 1000];
% Initialize DNN
net = feedforwardnet(DNN_SIZE, 'traingda');
net.performParam.regularization = 0.2;
%Set activation functions
for i=1:length(DNN_SIZE)
net.layers{i}.transferFcn = 'poslin';
end
net.layers{end}.transferFcn = 'logsig';
net = train(net, train_inputs, train_outputs);
Note: I have tried different values for DNN_SIZE, including larger and smaller layer sizes as well as more and fewer hidden layers, but it didn't make a difference.
Note 2: I have tried training the same network using a data set from Matlab's examples (simpleclass_dataset) and I still got bad performance.
The performance of the trained network is very bad: its output is essentially 0.5 at every output node for every input vector (while the target outputs during training are always 0 or 1). What am I doing wrong, and how can I fix it?
Thanks.
Encouraged by some success in MNIST classification I wanted to solve a "real" problem with some neural networks.
The task seems quite easy:
We have:
some x-values (e.g. 1:1:100)
some y-values (e.g. x^2)
I want to train a network with 1 input (for 1 x-value) and one output (for 1 y-value). One hidden layer.
Here is my basic procedure:
Slicing my x-values into different batches (e.g. 10 elements per batch)
In each batch, calculating the outputs of the net, then applying backpropagation to compute the weight and bias updates
After each batch, averaging the calculated weight and bias updates and applying them to the actual weights and biases
Repeating steps 1-3 multiple times
This procedure worked fine for MNIST, but for the regression it totally fails.
I am wondering if I am doing something fundamentally wrong.
I tried different batch sizes, up to averaging over ALL x-values.
Basically the network does not train well. After manually tweaking the weights and biases (with 2 hidden neurons) I could approximate my y=f(x) quite well, but when the network is supposed to learn the parameters itself, it fails.
When I have just one element for x and one for y and I train the network, it trains well for this one specific pair.
Maybe somebody has a hint for me. Am I misunderstanding regression with neural networks?
So far I assume the code itself is okay, as it worked for MNIST and it works for the "one x/y pair" example. I rather think my overall approach (see above) may not be suitable for regression.
Thanks,
Jim
ps: I will post some code tomorrow...
Here comes the code (MATLAB). As I said, it's one hidden layer with two hidden neurons:
% init hyper-parameters
hidden_neurons=2;
input_neurons=1;
output_neurons=1;
learning_rate=0.5;
batchsize=50;
% load data
training_data=d(1:100)/100;
training_labels=v_start(1:100)/255;
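% (d and v_start are my raw data vectors; they are not included in this post.
% The inputs are scaled by 1/100 and the labels by 1/255.)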
% init weights
init_randomly=1;
if init_randomly
% initialize weights and bias with random numbers between -0.5 and +0.5
w1=rand(hidden_neurons,input_neurons)-0.5;
b1=rand(hidden_neurons,1)-0.5;
w2=rand(output_neurons,hidden_neurons)-0.5;
b2=rand(output_neurons,1)-0.5;
else
% initialize with manually determined values
w1=[10;-10];
b1=[-3;-0.5];
w2=[0.2 0.2];
b2=0;
end
for epochs =1:2000 % looping over some epochs
for i = 1:batchsize:length(training_data) % slice training data into batches
batch_data=training_data(i:min(i+batchsize-1,length(training_data))); % generating training batch (the -1 keeps consecutive batches from overlapping)
batch_labels=training_labels(i:min(i+batchsize-1,length(training_data))); % generating training label batch
% initialize weight updates for next batch
w2_update=0;
b2_update =0;
w1_update =0;
b1_update =0;
for k = 1: length(batch_data) % looping over one single batch
% extract training sample
x=batch_data(k); % extracting one single training sample
y=batch_labels(k); % extracting expected output of training sample
% forward pass
z1 = w1*x+b1; % sum of first layer
a1 = sigmoid(z1); % activation of first layer (sigmoid)
z2 = w2*a1+b2; % sum of second layer
a2=z2; %activation of second layer (linear)
% backward pass
delta_2=(a2-y); %calculating delta of second layer assuming quadratic cost; derivative of linear unit is equal to 1 for all x.
delta_1=(w2'*delta_2).* (a1.*(1-a1)); % calculating delta of first layer
% calculating the weight and bias updates averaging over one
% batch
w2_update = w2_update +(delta_2*a1') * (1/length(batch_data));
b2_update = b2_update + delta_2 * (1/length(batch_data));
w1_update = w1_update + (delta_1*x') * (1/length(batch_data));
b1_update = b1_update + delta_1 * (1/length(batch_data));
end
% actually updating the weights. Updated weights will be used in
% next batch
w2 = w2 - learning_rate * w2_update;
b2 = b2 - learning_rate * b2_update;
w1 = w1 - learning_rate * w1_update;
b1 = b1 - learning_rate * b1_update;
end
end
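The forward pass calls a sigmoid helper that is not included in the post. A minimal stand-in, assuming the usual element-wise logistic function, would be:
function s = sigmoid(z)
% element-wise logistic function
s = 1 ./ (1 + exp(-z));
end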
Here is the outcome with random initialization, showing the expected output, the output before training, and the output after training:
[Figure: training with random init]
One can argue that the blue line is already closer than the black one; in that sense the network has already improved the result. But I am not satisfied.
Here is the result with my manually tweaked values:
[Figure: training with pre-init]
The black line is not bad for just two hidden neurons, but my expectation was rather that such a black line would be the outcome of training starting from a random init.
Any suggestions what I am doing wrong?
Thanks!
Ok, after some research I found some interesting points:
The function I tried to learn seems particularly hard to learn (not sure why)
With the same setup I tried to learn some 3rd degree polynomials which was successful (cost <1e-6)
Shuffling the training samples seems to improve learning (for the polynomial and my initial function); see the sketch below. I know this is well known in the literature, but I had always skipped that part in my implementations, so I learned for myself how important it is.
For learning "curvy/wiggly" functions, I found that sigmoid works better than ReLU (the output layer is still linear, as suggested for regression)
A learning rate of 0.1 worked fine for the curve fitting I finally wanted to perform
A larger batch size smooths the cost-vs-epochs plot (surprise...)
Initializing weights between -5 and +5 worked better than -0.5 and +0.5 for my application
In the end I got quite convincing results for what I intended to learn with the network :)
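Regarding the shuffling point above, here is a minimal sketch of how the epoch loop in the posted code could permute the sample order once per epoch (variable names reused from the code above):
for epochs = 1:2000
% shuffle the sample order at the start of every epoch
perm = randperm(length(training_data));
shuffled_data = training_data(perm);
shuffled_labels = training_labels(perm);
for i = 1:batchsize:length(shuffled_data)
% ... slice shuffled_data / shuffled_labels into batches exactly as before ...
end
end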
Have you tried with a much smaller learning rate? Generally, learning rates around 0.001 are a good starting point; 0.5 is in most cases way too large.
Also note that your predefined weights put the units in an extremely flat region of the sigmoid function (sigmoid(10) = 1, sigmoid(-10) = 0), with the derivative at both positions close to 0. That means that backpropagating from such a position (or getting to such a position) is extremely difficult. For exactly that reason, some people prefer ReLUs over sigmoids, since a ReLU only has a "dead" region for negative activations.
Also, am I correct in seeing that you only have 100 training samples? You could maybe try a smaller batch size, or increase the number of samples you take. Also, don't forget to shuffle your samples after each epoch; there are plenty of reasons for this, explained for example here.
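To make the saturation argument concrete, here is a quick check of the sigmoid and its derivative at the pre-activation values mentioned above:
s = 1 ./ (1 + exp(-10)); % sigmoid(10), approx. 0.99995
d = s .* (1 - s); % derivative of the sigmoid there, approx. 4.5e-5
% a gradient scaled by ~4.5e-5 barely moves the weights, so training
% stalls whenever units sit in this flat region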
I wrote this script (MATLAB) for classification using a softmax output layer. Now I want to use the same script for regression by replacing the softmax output layer with a sigmoid or ReLU activation function, but I wasn't able to do that.
X=houseInputs ;
T=houseTargets;
%Train an autoencoder with a hidden layer of size 10 and a linear transfer function for the decoder. Set the L2 weight regularizer to 0.001, sparsity regularizer to 4 and sparsity proportion to 0.05.
hiddenSize = 10;
autoenc1 = trainAutoencoder(X,hiddenSize,...
'L2WeightRegularization',0.001,...
'SparsityRegularization',4,...
'SparsityProportion',0.05,...
'DecoderTransferFunction','purelin');
%%
%Extract the features in the hidden layer.
features1 = encode(autoenc1,X);
%Train a second autoencoder using the features from the first autoencoder. Do not scale the data.
hiddenSize = 10;
autoenc2 = trainAutoencoder(features1,hiddenSize,...
'L2WeightRegularization',0.001,...
'SparsityRegularization',4,...
'SparsityProportion',0.05,...
'DecoderTransferFunction','purelin',...
'ScaleData',false);
features2 = encode(autoenc2,features1);
%%
softnet = trainSoftmaxLayer(features2,T,'LossFunction','crossentropy');
%Stack the encoders and the softmax layer to form a deep network.
deepnet = stack(autoenc1,autoenc2,softnet);
%Train the deep network on the house data.
deepnet = train(deepnet,X,T);
%Estimate the deep network, deepnet.
y = deepnet(X);
Regression is a different problem from classification. You have to change your loss function to something that fits a regression, e.g. mean squared error, and of course change the number of output neurons to one (you will only output one value in your last layer).
It is possible to use a neural network to perform a regression task, but it might be overkill for many tasks. True regression means mapping one set of continuous inputs to another set of continuous outputs:
f: x -> ý
Changing the architecture of a neural network to make it perform a regression task is usually fairly simple. Instead of mapping the continuous input data to a specific class as it is done using the Softmax function as in your case, you have to make the network use only a single output node.
This node will just sum the outputs of the previous layer (the last hidden layer) and multiply the summed activations by 1. During training, this output ý is compared to the correct ground-truth value y that comes with your dataset. As a loss function you may use the root-mean-squared error (RMSE).
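For a batch of N predictions ý_i and targets y_i that loss reads RMSE = sqrt( (1/N) * sum_i (ý_i - y_i)^2 ); minimizing it is equivalent to minimizing the plain mean squared error, which is what MATLAB's default 'mse' performance function computes.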
Training such a network will result in a model that maps an arbitrary number of independent variables x to a dependent variable ý, which basically is a regression task.
To come back to your MATLAB implementation, it would be incorrect to change the current softmax output layer to an activation function such as a sigmoid or ReLU. Instead you would have to implement a custom RMSE output layer for your network, which is fed with the sum of activations coming from the last hidden layer of your network.
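One hedged way to do this with the toolbox functions already used in the question is to keep the two trained encoders as a fixed feature extractor and fit a small single-output network with a linear output and MSE loss on the encoded features, instead of stacking a softmax layer. A rough sketch (reusing autoenc1, autoenc2 and X from the question, and assuming T holds continuous targets as a 1-by-N row vector):
% encode the inputs with the two autoencoders trained above
features1 = encode(autoenc1, X);
features2 = encode(autoenc2, features1);
% single hidden layer, one linear output neuron; fitnet uses the 'purelin'
% output transfer function and the 'mse' performance function by default
regnet = fitnet(10, 'trainlm');
regnet = train(regnet, features2, T);
% regression predictions
y_pred = regnet(features2);
This sidesteps writing a custom RMSE output layer; if the stacked-network form is required, the custom output layer route described above is the way to go.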
I am training a neural network to learn a function. Everything is going great so far.
I have an input matrix of 4x10000 and an output matrix of 3x10000. I have many more data points than 10000, but not all of them fit at once, so I have decided to feed packs of 10000 data points at a time and train the same neural network on each pack.
There are three layers and 7 units in the hidden layer.
So what I do is train the network on 10000 randomly chosen data points, then train again on another random 10000 data points, and so on.
For this I store checkpoints (a built-in feature of the Neural Network Toolbox). But the network being trained is stored in the checkpoint as a struct rather than as a network object, so when I load the checkpoint the next time I run the program, it shows an error like the one below.
Undefined function 'train' for input arguments of type 'struct'
I am using a fitnet network.
% Create a Fitting Network
hiddenLayerSize = 7;
net = fitnet(hiddenLayerSize,'trainlm');
% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 60/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 20/100;
load('Highlights_Checkpoint.mat');
existanceOfCheckpoint = exist('checkpoint', 'var');
if existanceOfCheckpoint ~= 0
net = checkpoint.net;
end
% Train the Network
[net,tr] = train(net,x,t,'useParallel', 'yes','showResources','yes', 'CheckpointFile','Highlights_Checkpoint.mat');
Well, the solution to this problem was quite easy.
All I had to do was the following:
net = network(checkpoint.net);
And all was set. :D
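Put together with the snippet from the question, the resume logic would then look roughly like this (a sketch; the file name and the checkpoint variable are the ones from the question):
net = fitnet(7, 'trainlm');
if exist('Highlights_Checkpoint.mat', 'file')
load('Highlights_Checkpoint.mat'); % loads the saved 'checkpoint' struct
net = network(checkpoint.net); % convert the struct back into a network object
end
[net, tr] = train(net, x, t, 'useParallel', 'yes', 'showResources', 'yes', 'CheckpointFile', 'Highlights_Checkpoint.mat');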
I'm trying to train a neural net that is 289x300x1, i.e. the input vector has 289 elements, there are 300 hidden neurons, and one class output.
So
net = feedforwardnet(300);
net = train(net,X,y,'useParallel','yes','showResources','yes');
gives error
Error using nn7/perfsJEJJ>calc_Y_trainPerfJeJJ (line 37) Error
detected on worker 2. Requested 87301x87301 (56.8GB) array exceeds
maximum array size preference.
X is an array of size 289x2040 with elements of type double.
y is an array of size 1x2040 with elements of type double.
I don't understand why MATLAB wants so much memory for such a small task. Storing the weights should only take about 289 * 300 * 8 bytes, which is under 1 MB.
And how can I solve it?
It is probably due to a combination of a few things:
The number of neurons in your hidden layer is rather large... are you sure 300 hidden neurons is what you need? Consider breaking the problem down to fewer features; a dimensionality reduction may be fruitful, but I'm just speculating. From experience, a neural network with 300 hidden neurons should be fine; I only bring this point up because that hidden layer size is rather large.
You have too many training samples going in at once. You have 2040 points going in and that's perhaps why it's breaking. Try breaking up the dataset into chunks of a given size, then incrementally train the network on each chunk.
Let's assume that point #1 you can't fix, but you can address point #2, something like this comes to mind:
chunk_size = 200; %// Declare chunk size
num_chunks = ceil(size(X,2)/chunk_size); %// Get total number of chunks
net = feedforwardnet(300); %// Initialize NN
%// For each chunk, extract out a section of the data, then train the
%// network. Retrain on original network until we run out of data to train
for ii = 1 : num_chunks
%// Cap the end index if the last chunk isn't evenly
%// divisible by the chunk size
if ii*chunk_size > size(X,2)
max_val = size(X,2);
else
max_val = ii*chunk_size;
end
%// Specify portion of data to extract
interval = (ii-1)*chunk_size + 1 : max_val;
%// Train the NN on this data
net = train(net, X(:,interval), y(interval),'useParallel','yes','showResources','yes');
end
As such, break up your data into chunks, train your neural network on each chunk separately, and keep updating the same network as you go. You can do this because neural network training essentially performs stochastic gradient descent, where the parameters are updated each time new input samples are provided.
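As a side note, the 87301 in the error message is exactly the number of learnable parameters of a 289-300-1 feedforwardnet, and the default 'trainlm' training function forms matrices of size nWeights-by-nWeights, which matches the 56.8 GB request; a quick sanity check:
nWeights = (289 + 1) * 300 + (300 + 1) * 1; % weights + biases = 87301
bytes = nWeights^2 * 8; % one double-precision 87301x87301 matrix
fprintf('%d parameters -> %.1f GB\n', nWeights, bytes / 2^30); % ~56.8 GB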
I have a dataset of 43 examples (data points) and 70'000 features, which means my dataset matrix is 43 x 70'000. The labels contain 4 different values (1-4), i.e. there are 4 classes.
Now, I have done classification with a Deep Belief Network / Neural Network, but I'm getting only an accuracy of around 25% (chance level) with leave-one-out cross-validation. If I use kNN, SVM etc., I get >80% accuracy.
I have used the DeepLearnToolbox for Matlab (https://github.com/rasmusbergpalm/DeepLearnToolbox) and just adapted the Deep Belief Network example from the readme of the toolbox. I have tried different numbers of hidden layers (1-3) and different numbers of hidden nodes (100, 500, ...) as well as different learning rates, momentum values etc., but the accuracy is still very bad. The feature vectors are scaled to the range [0,1] because this is required by the toolbox.
In detail I have done the following code (only showing one run of cross-validation):
% Indices of training and test set
train = training(c,m);
test = ~train;
% Train DBN
opts = [];
dbn = [];
dbn.sizes = [500 500 500];
opts.numepochs = 50;
opts.batchsize = 1;
opts.momentum = 0.001;
opts.alpha = 0.15;
dbn = dbnsetup(dbn, feature_vectors_std(train,:), opts);
dbn = dbntrain(dbn, feature_vectors_std(train,:), opts);
%unfold dbn to nn
nn = dbnunfoldtonn(dbn, 4);
nn.activation_function = 'sigm';
nn.learningRate = 0.15;
nn.momentum = 0.001;
%train nn
opts.numepochs = 50;
opts.batchsize = 1;
train_labels = labels(train);
nClass = length(unique(train_labels));
L = zeros(length(train_labels),nClass);
for i = 1:nClass
L(train_labels == i,i) = 1;
end
nn = nntrain(nn, feature_vectors_std(train,:), L, opts);
class = nnpredict(nn, feature_vectors_std(test,:));
feature_vectors_std is the (43 x 70'000) matrix with values scaled to [0,1].
Can somebody infer why I'm getting such bad accuracy?
Because you have far more features than examples in the dataset. In other words: you have a huge number of weights and you need to estimate all of them, but you can't, because an NN with such a huge structure cannot generalize well on so small a dataset; you need more data to learn such a large number of hidden weights (in fact the NN may memorize your training set, but cannot transfer its "knowledge" to the test set). At the same time, 80% accuracy with methods as simple as SVM and kNN indicates that you can describe your data with much simpler rules, because, for example, an SVM will have only about 70k weights (instead of 70k*first_layer_size + first_layer_size*second_layer_size + ... in an NN), and kNN will not use weights at all.
A complex model is not a silver bullet: the more complex the model you are trying to fit, the more data you need.
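To put rough numbers on that for the configuration in the question (70'000 input features, dbn.sizes = [500 500 500], 4 classes), ignoring biases:
layer_sizes = [70000 500 500 500 4];
nWeights = sum(layer_sizes(1:end-1) .* layer_sizes(2:end)); % about 35.5 million weights
% ...to be estimated from only 43 training examples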
Obviously, your dataset is too small for the complexity of your network. Quoting from a reference:
The complexity of a neural network can be expressed through the number of parameters. In the case of deep neural networks, this number can be in the range of millions, tens of millions and in some cases even hundreds of millions. Let's call this number P. Since you want to be sure of the model's ability to generalize, a good rule of a thumb for the number of data points is at least P*P.
While kNN and SVM are simpler, they don't need that much data, so they can work better.