Neural network in MATLAB fails in training

I am in the process of learning neural networks using MATLAB, and I'm trying to implement a face recognition program using PCA for feature extraction and a feedforward neural network for classification.
I have 3 people in my training set; the images are stored in the 'data' directory.
I am using one network per individual, and I train each network with all the images of my training set. The code for my project is presented below:
dirs = dir('data');
numDirs = numel(dirs); % avoid calling this "size", which would shadow the built-in
eigenVecs = {};
% a neural network for each individual
net1 = feedforwardnet(10);
net2 = feedforwardnet(10);
net3 = feedforwardnet(10);
% extract eigenvectors and prepare the input of the NN
for i = 3:numDirs % start at 3 to skip the '.' and '..' entries
    eigenVecs{i-2} = eigenFaces(dirs(i).name);
end
trainSet = cell2mat(eigenVecs'); % 27x1024 double
% set the target for each NN, and then train it.
T = [1 1 1 1 1 1 1 1 1 ...
0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0];
train(net1, trainSet', T);
T = [0 0 0 0 0 0 0 0 0 ...
1 1 1 1 1 1 1 1 1 ...
0 0 0 0 0 0 0 0 0];
train(net2, trainSet', T);
T = [0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0 0 ...
1 1 1 1 1 1 1 1 1];
train(net3, trainSet', T);
After training finishes, I get this panel:
nntraintool panel
**If anyone could explain the Progress section of the panel, that would help; I could not understand what those numbers mean.**
After training the networks, I try to test one using the following:
sim(net1, L)
where L is a sample from my set, a 1x1024 vector. The result I get is this:
Empty matrix: 0-by-1024
Is my approach to training the neural networks wrong? What can I do to fix this program?
Thank you

The code
train(net1, trainSet', T);
does not save the trained network into the net1 variable (it saves it into the ans variable). This is why the result of sim is empty: net1 still holds an untrained, unconfigured network. You have to save the trained network yourself:
net1 = train(net1, trainSet', T);
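A minimal sketch of the corrected training-and-testing flow, assuming trainSet and L are as described in the question (note that sim, like train, expects one sample per column):

```matlab
% Assign the output of train; otherwise the trained network is lost in ans
net1 = train(net1, trainSet', T);

% L is a 1x1024 row vector, so transpose it to a 1024x1 column for sim
score = sim(net1, L');
```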

Related

Why doesn't the DNN model optimize the loss despite the noiseless input?

I need to build this model:
I used data without noise, meaning the testing data is identical to the training data (the testing samples are selected randomly from the training data, to be sure the data itself is fine). This is how I create the DNN:
options1 = trainingOptions('adam', 'MaxEpochs', 1000, ...
    'InitialLearnRate', 0.001, 'MiniBatchSize', 10, 'Shuffle', 'every-epoch', ...
    'L2Regularization', 0, 'Plots', 'training-progress', 'ValidationData', ...
    {IN_V1,OUT_V1}); % IN_V1 is validation input data, OUT_V1 is validation output data
layers = [sequenceInputLayer(8), fullyConnectedLayer(32), ...
    clippedReluLayer(1), fullyConnectedLayer(6), ...
    regressionLayer];
Net1 = trainNetwork(train_in, train_ou, layers, options1); % train the network
% predict output of the net on test data
pred = predict(Net1, train_in(:,1:10));
% classification confusion matrix
[err, cm] = confusion(train_ou(:,1:10), pred);
The issue I'm facing is that the validation loss never goes below 0.7. I think the issue is in the DNN model itself; here is the training plot:
[training-progress plot]
And the results of the test (the output of this command):
[err, cm] = confusion(train_ou(:,1:10), pred);
are as follows:
err =
1.4500
cm =
4 1 2 0 0 0
0 0 0 0 0 1
0 0 0 0 0 0
0 0 0 0 0 0
1 0 0 0 0 0
0 0 0 0 0 1
Could you please advise what the mistake in the DNN model is? When I change the size of the hidden layer, I get the same performance!

Why does my transfer function keep turning back into 'logsig'?

I am trying to build a basic feedforward network, using the patternnet command, that can recognise the digits in the MNIST dataset. Here is my code:
one = [1];
one = repelem(one,100);
%%%%%%%%%%%%%%%Create Neural network%%%%%%%%%%%%%%%%%%%%%
nn = patternnet([100 100]);
nn.numInputs = 1;
nn.inputs{1}.size = 784;
nn.layers{1}.transferFcn = 'logsig';
nn.layers{2}.transferFcn = 'logsig';
nn.layers{3}.transferFcn = 'softmax';
nn.trainFcn = 'trainscg';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
%%%%%%%%%%%%%%%%Dealing with data%%%%%%%%%%%%%%%%%%%%%%%%%%
mnist_in = csvread('mnist_train_100.csv');
mnist_test_in = csvread('mnist_test_10.csv');
[i,j] = size(mnist_in);
data_in = mnist_in(:,2:785);
data_in = data_in';
target_in = mnist_in(:,1);
target_in = target_in';
nn = train(nn,data_in,target_in);
The problem is that when I build this network, the transfer function of the output layer is set to the softmax function. Somehow, when I train the network, the transfer function turns into the 'logsig' function and stays that way until I clear my workspace. I even tried setting the transfer function of the output layer explicitly in the code, and the program still finds a way to change it to logsig. Is there anything I can do?
PS. I even tried building this network with network() to set everything up from scratch, and the program still changes my transfer function from softmax back to logsig.
As I see it, there is a mistake in the divideParam part: you created the neural network as nn, but the parameters you changed belong to a variable called net. Other than that, the network-creation part is normal.
I think the real problem lies in the data-preparation part.
Your training target, target_in, has dimensions 1 x <number of samples>. Because of that, the train function replaces 'softmax' with 'logsig' to fit the output.
The target data for softmax should be of shape <number of classes> x <number of samples>.
For example, suppose the output is either 1, 2 or 3. Then the target array shouldn't be
[1 2 1 3 3 1 ...]
but rather
[1 0 1 0 0 1 ...;
 0 1 0 0 0 0 ...;
 0 0 0 1 1 0 ...]
Hope this helps.
EDIT: To turn the single-row array (1 x <number of samples>) into the multi-row array (<number of classes> x <number of samples>), each value in the single-row array is mapped to an index. For example, take 11 samples in a single array:
[-1 -5.5 4 0 3.3 4 -1 0 0 0 -1]
Collect all the unique numbers and sort them; now every number has its own index:
[-5.5 -1 0 3.3 4] % index table
Going through the single array, for each number, place a 1 in the row of its index. For example, -1 has index 2, so I tick a 1 in the second row of every column where -1 appears. Finally:
[0 1 0 0 0 0 0 0 0 0 0;
 1 0 0 0 0 0 1 0 0 0 1; % there are three -1s in the single array
 0 0 0 1 0 0 0 1 1 1 0;
 0 0 0 0 1 0 0 0 0 0 0;
 0 0 1 0 0 1 0 0 0 0 0]
Here is the code for it:
idx = sort(unique(target_in));
number_of_result = size(idx, 2);
number_of_sample = size(target_in, 2);
target_softmax = zeros(number_of_result, number_of_sample);
for i = 1:number_of_sample
    place = find(idx == target_in(i)); % find the index of the value
    target_softmax(place, i) = 1;      % tick 1 at that row
end
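If the Neural Network Toolbox is available and the targets are already positive integers (as MNIST labels are once the digits 0-9 are shifted to 1-10), the built-in ind2vec produces the same one-hot matrix. A hedged sketch, assuming target_in is a 1 x N row vector of digits 0-9 as in the question:

```matlab
% ind2vec returns a sparse <classes> x <samples> matrix; full() densifies it.
target_softmax = full(ind2vec(target_in + 1)); % +1 because ind2vec needs indices >= 1
```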

Matlab PRBS 4 waveform generation

I have a variable that holds the values of a PRBS-4 sequence:
Output = [0 0 0 1 0 0 1 1 0 1 0 1 1 1 1];
I want to plot this in MATLAB. I know the idinput() function can generate PRBS sequences, but I am using an old version of MATLAB and this function is not available to me. Simply using plot(Output) will not give me a PRBS signal, since every transition from 0 to 1 or 1 to 0 is drawn as a ramp, which looks like a triangle. I need a square waveform for the PRBS.
Also, I want to make this a 1 Gbps signal. Is this possible to do?
Best regards,
nkp.
You can repeat each output bit some number of times and then plot.
For example, take output = [0 0 1 0];
If you repeat each bit some number of times (say 4), the output vector becomes
[0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0].
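A minimal sketch of this idea. The kron trick repeats bits without repelem, which older MATLAB versions lack; the 1 Gbps rate is just a matter of scaling the time axis, which I assume here with a 1 ns bit period:

```matlab
Output = [0 0 0 1 0 0 1 1 0 1 0 1 1 1 1];
N  = 8;                          % samples per bit
sq = kron(Output, ones(1, N));   % repeat each bit N times -> square edges
Tb = 1e-9;                       % bit period: 1 ns corresponds to 1 Gbps
t  = (0:numel(sq)-1) * Tb / N;   % time axis in seconds
plot(t, sq); ylim([-0.2 1.2]);
xlabel('time (s)'); ylabel('amplitude');
```

On versions that have it, stairs(Output) is another quick way to get a square-edged plot without resampling.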

How to use train in neural networks for Matlab R2009b

I have an input matrix:
input =
1 0 0 1 1
1 0 0 0 1
1 0 0 0 1
1 0 0 0 1
0 0 1 0 0
0 1 1 1 0
0 1 1 1 0
and
T = [eye(10) eye(10) eye(10) eye(10)];
The neural network that I created is:
net = newff(input,T,[35], {'logsig'})
%net.performFcn = 'sse';
net.divideParam.trainRatio = 1; % training set [%]
net.divideParam.valRatio = 0; % validation set [%]
net.divideParam.testRatio = 0; % test set [%]
net.trainParam.goal = 0.001;
It works fine up to this point, but when I use the train function the problem arises:
[net tr] = train(net,input,T);
and the following error shows up in the MATLAB window:
??? Error using ==> network.train at 145
Targets are incorrectly sized for network.
Matrix must have 5 columns.
Error in ==> test at 103
[net tr] = train(net,input,T);
I've also tried input' and T' as well. Any help is appreciated in advance.
If you look at MATLAB's official documentation of train, you'll notice that T must have the same number of columns as the input matrix, which is 5 in your case. Instead, try:
T = ones(1, size(input, 2));
or
T = [1, zeros(1, size(input, 2) - 1)];
and see if this works.
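A small sketch of consistent shapes, assuming the 7x5 input matrix from the question (the rule is that columns are samples, for both inputs and targets):

```matlab
% X: 7 features x 5 samples; T must also have 5 columns (one per sample)
X = input;
T = ones(1, size(X, 2));            % 1 output x 5 samples
net = newff(X, T, 35, {'logsig'});  % one hidden layer of 35 neurons
[net, tr] = train(net, X, T);       % shapes now agree, so train succeeds
```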

Matlab - Image Moment Calculation

Is there a function or a toolbox that allows for the computation of image moments?
http://en.wikipedia.org/wiki/Image_moment
The data I want to apply this function to is binary: it is basically a matrix filled with 0s and 1s.
Data =
1 0 0 0 0 0
1 1 1 0 1 1
0 1 1 1 1 0
1 0 1 1 0 0
0 1 1 0 0 0
1 1 0 0 0 0
0 0 0 0 0 0
1 0 0 1 0 0
And I want to compute image moments on this type of data. Is there an efficient MATLAB implementation for it?
In a previous answer of mine, I wrote an implementation of a subset of the regionprops function. The goal was to find image orientation, which is derived from the image moments. Here is the part relevant to you:
function outmom = raw_moments(im, i, j)
    % raw moment M_ij: sum over all pixels of x^i * y^j * im(y,x)
    outmom = sum(sum( ((1:size(im,1))'.^j * (1:size(im,2)).^i) .* im ));
end

function cmom = central_moments(im, i, j)
    % central moment mu_ij, taken about the image centroid
    rawm00 = raw_moments(im, 0, 0);
    centroids = [raw_moments(im,1,0)/rawm00, raw_moments(im,0,1)/rawm00];
    cmom = sum(sum( (((1:size(im,1)) - centroids(2))'.^j * ...
                     ((1:size(im,2)) - centroids(1)).^i) .* im ));
end
The code follows the equations from the Wikipedia article, so no additional explanation is needed.
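A short usage sketch on the binary matrix from the question, assuming the two functions above are saved on the path (e.g. in raw_moments.m and central_moments.m):

```matlab
Data = [1 0 0 0 0 0;
        1 1 1 0 1 1;
        0 1 1 1 1 0;
        1 0 1 1 0 0;
        0 1 1 0 0 0;
        1 1 0 0 0 0;
        0 0 0 0 0 0;
        1 0 0 1 0 0];

m00  = raw_moments(Data, 0, 0);     % for a binary image, m00 is the area (pixel count)
mu20 = central_moments(Data, 2, 0); % second-order central moment about the centroid
```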