How to set the output size in MATLAB's newff method

Summary:
I'm trying to classify some images depending on the angles between body parts.
I assume that the human body consists of 10 parts (as rectangles), find the center of each part, and calculate the angle of each part with reference to the torso.
I have three action categories: Handwave, Walking, Running.
My goal is to find which test images fall into which action category.
Facts:
TrainSet: 1057x10 feature set, where 1057 is the number of images.
TestSet: 821x10
I want my output to be a 3x1 matrix, each row showing the classification percentage for one action category:
row1: Handwave
row2: Walking
row3: Running
Code:
actionCat='H';
[train_data_hw train_label_hw] = tugrul_traindata(TrainData,actionCat);
[test_data_hw test_label_hw] = tugrul_testdata(TestData,actionCat);
actionCat='W';
[train_data_w train_label_w] = tugrul_traindata(TrainData,actionCat);
[test_data_w test_label_w] = tugrul_testdata(TestData,actionCat);
actionCat='R';
[train_data_r train_label_r] = tugrul_traindata(TrainData,actionCat);
[test_data_r test_label_r] = tugrul_testdata(TestData,actionCat);
Train=[train_data_hw;train_data_w;train_data_r];
Test=[test_data_hw;test_data_w;test_data_r];
Target=eye(3,1);
net=newff(minmax(Train),[10 3],{'logsig' 'logsig'},'trainscg');
net.trainParam.perf='sse';
net.trainParam.epochs=50;
net.trainParam.goal=1e-5;
net=train(net,Train);
trainSize=size(Train,1);
testSize=size(Test,1);
if(trainSize > testSize)
pend=-1*ones(trainSize-testSize,size(Test,2));
Test=[Test;pend];
end
x=sim(net,Test);
Question:
I'm using the MATLAB newff method, but my output is always an Nx10 matrix, not 3x1.
My input set should be grouped into 3 classes, but it is grouped into 10.
Thanks

%% Load data : I generated some random data instead
Train = rand(1057,10);
Test = rand(821,10);
TrainLabels = randi([1 3], [1057 1]);
TestLabels = randi([1 3], [821 1]);
trainSize = size(Train,1);
testSize = size(Test,1);
%% prepare the input/output vectors (1-of-N output encoding)
input = Train'; % matrix of size numFeatures-by-numImages
output = zeros(3,trainSize); % matrix of size numCategories-by-numImages
for i=1:trainSize
output(TrainLabels(i), i) = 1;
end
%% create net: one hidden layer with 10 nodes (output layer size is inferred: 3)
net = newff(input, output, 10, {'logsig' 'logsig'}, 'trainscg');
net.trainParam.perf = 'sse';
net.trainParam.epochs = 50;
net.trainParam.goal = 1e-5;
view(net)
%% training
net = init(net); % initialize
[net,tr] = train(net, input, output); % train
%% performance (on Training data)
y = sim(net, input); % predict
%[err cm ind per] = confusion(output, y);
[maxVals, predicted] = max(y); % predicted class = row index of the max output
cm = confusionmat(predicted, TrainLabels);
acc = sum(diag(cm))/sum(cm(:));
fprintf('Accuracy = %.2f%%\n', 100*acc);
fprintf('Confusion Matrix:\n');
disp(cm)
%% Testing (on Test data)
y = sim(net, Test');
Note how I converted the category label of each instance (1/2/3) into a 1-of-N encoding vector ([1 0 0]' for class 1, [0 1 0]' for class 2, [0 0 1]' for class 3).
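The toolbox's ind2vec/vec2ind helpers perform the same conversion; a minimal sketch using the variables above:
output = full(ind2vec(TrainLabels'));   % same 3-by-trainSize 1-of-N target matrix (ind2vec returns a sparse matrix)
predicted = vec2ind(y);                 % back from network scores to labels 1/2/3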
Also note that the test set is currently not being used, since by default the input data is divided into train/test/validation subsets. You could achieve your manual division by setting net.divideFcn to 'divideind' and setting the corresponding net.divideParam indices, as sketched below.
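For example, a minimal sketch of that manual division (assuming Train and Test are concatenated into one input matrix; the variable names are illustrative):
allInput = [Train; Test]';              % 10-by-(1057+821) input matrix
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:1057;      % columns used for training
net.divideParam.valInd = [];            % no validation subset
net.divideParam.testInd = 1058:1878;    % columns used for testing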
I showed the testing on the same training data, but you could do the same for the Test data.

Related

Separate matrix of column vectors into training and testing data in MATLAB

I'm currently doing a project in MATLAB using the MNIST data set. I have a training data set of n = 50000, represented by a matrix of 784 x 50000 (50000 column vectors of size 784).
I am trying to separate my training and testing data (70-30, respectively), but the method I am using is a bit wordy and brute force for my liking. Being that this is MATLAB, I'm sure there has got to be a better way. The code I have been using is listed below. I'm brand new to MATLAB so please help! Thanks :)
% MNIST - data loads into trn and trnAns, representing
% the input vectors and the desired output vectors, respectively
load('Data/mnistTrn.mat');
mnist_train = zeros(784, 35000);
mnist_train_ans = zeros(10, 35000);
mnist_test = zeros(784, 15000);
mnist_test_ans = zeros(10, 15000);
indexes = zeros(1,50000);
for i = 1:50000
indexes(i) = i;
end
indexes(randperm(length(indexes)));
for i = 1:50000
if i <= 35000
mnist_train (:,i) = trn(:,indexes(i));
mnist_train_ans(:,i) = trnAns(:,indexes(i));
else
mnist_test(:,i-35000) = trn(:,indexes(i));
mnist_test_ans(:,i-35000) = trnAns(:,indexes(i));
end
end
Note that in your original code the line indexes(randperm(length(indexes))); discards its result, so the indexes are never actually shuffled. A shorter approach that fixes this:
% MNIST - data loads into trn and trnAns, representing
% the input vectors and the desired output vectors, respectively
load('Data/mnistTrn.mat');
% Generating a random permutation for both trn and trnAns:
perm = randperm(50000);
% Shuffling both trn and trnAns columns using a single random permutations:
trn = trn(:, perm);
trnAns = trnAns(:, perm);
mnist_train = trn(:, 1:35000);
mnist_train_ans = trnAns(:, 1:35000);
mnist_test = trn(:, 35001:50000);
mnist_test_ans = trnAns(:, 35001:50000);
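If you would rather derive the split sizes from the data instead of hard-coding 35000/15000, a small variation of the same idea (assuming the 70/30 ratio you mentioned):
nTotal = size(trn, 2);
nTrain = round(0.7 * nTotal);           % 35000 when nTotal is 50000
perm = randperm(nTotal);
mnist_train = trn(:, perm(1:nTrain));
mnist_train_ans = trnAns(:, perm(1:nTrain));
mnist_test = trn(:, perm(nTrain+1:end));
mnist_test_ans = trnAns(:, perm(nTrain+1:end));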

Chance level accuracy for clearly separable data

I have written what I believe to be quite a simple SVM-classifier [SVM = Support Vector Machine]. "Testing" it with normally distributed data with different parameters, the classifier is returning me a 50% accuracy. What is wrong?
Here is the code, the results should be reproducible:
features1 = normrnd(1,5,[100,5]);
features2 = normrnd(50,5,[100,5]);
features = [features1;features2];
labels = [zeros(100,1);ones(100,1)];
%% SVM-Classification
nrFolds = 10; %number of folds of crossvalidation
kernel = 'linear'; % 'linear', 'rbf' or 'polynomial'
C = 1; % C is the 'boxconstraint' parameter.
cvFolds = crossvalind('Kfold', labels, nrFolds);
for i = 1:nrFolds % iterate through each fold
testIdx = (cvFolds == i); % indices test instances
trainIdx = ~testIdx; % indices training instances
% train the SVM
cl = fitcsvm(features(trainIdx,:), labels(trainIdx),'KernelFunction',kernel,'Standardize',true,...
'BoxConstraint',C,'ClassNames',[0,1]);
[label,scores] = predict(cl, features(testIdx,:));
eq = sum(labels(testIdx));
accuracy(i) = eq/numel(labels(testIdx));
end
crossValAcc = mean(accuracy)
You are not computing the accuracy correctly. You need to determine how many predictions match the original data. You are simply summing up the total number of 1s in the test set, not the actual number of correct predictions.
Therefore you must change your eq statement to this:
eq = sum(labels(testIdx) == label);
Recall that labels(testIdx) extracts the true labels from your test set and label contains the predicted results from your SVM model. The comparison generates a vector of 0s and 1s, where 0 means the prediction does not match the actual label and 1 means they agree. Summing the number of times they agree and dividing by the number of test instances gives the accuracy.
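Since the comparison already yields a logical vector, you can equivalently let mean do the division; a minimal form of the per-fold computation:
accuracy(i) = mean(labels(testIdx) == label);   % fraction of correct predictions in this fold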

Split dataset to test and train MATLAB [duplicate]

This question already has an answer here:
Matlab: How can I split my data matrix into two random subsets of column vectors while keeping the label information?
I want to split a very large dataset that I have (over one million observations) into test and train sets. As you can see, I have already managed to perform something similar in the code below with the use of dividerand.
The code takes a very large set X; on every iteration it selects N = 1700 observations and splits them 7/3 into train/test. What I would further like to do is use specific counts with dividerand instead of percentages. For instance, split the data into mini-batches of size 2000, then use 500 for test and 1500 for training; in the next loop, select the data (2001:4000) and again split them into 500 test and 1500 train, etc.
Again, dividerand allows doing that with ratios, but I would like to use actual counts.
X = randn(10000,9);
mu_6 = zeros(510,613); % 390/802 - 450/695 - 510/613 - Test/Iterations
s2_6 = zeros(510,613);
nl6 = zeros(613,1);
RSME6 = zeros(613,1);
prev_batch = 0;
inf = @infGaussLik;
meanfunc = []; % empty: don't use a mean function
covfunc = @covSEiso; % Squared Exponential covariance
likfunc = @likGauss; % Gaussian likelihood
for k=1:613
new_batch = k*1700;
X_batch = X(1+prev_batch:new_batch,:);
[train,~,test] = dividerand(transpose(X_batch),0.7,0,0.3);
train = transpose(train);
test = transpose(test);
x_t = train(:,1:8); % Train batch we get 910 values
y_t = train(:,9);
x_z = test(:,1:8); % Test batch we get 390 values
y_z = test(:,9);
% Calculations for Gaussian process regression
if k==1
hyp = struct('mean', [], 'cov', [0 0], 'lik', -1);
else
hyp = hyp2;
end
hyp2 = minimize(hyp, @gp, -100, inf, meanfunc, covfunc, likfunc, x_t, y_t);
[m4 s4] = gp(hyp2, inf, meanfunc, covfunc, likfunc, x_t, y_t, x_z);
[nlZ4,dnlZ4] = gp(hyp2, inf, meanfunc, covfunc, likfunc, x_t, y_t);
RSME6(k,1) = sqrt(sum(((m4-y_z).^2))/450);
nl6(k,1) = nlZ4;
mu_6(:,k) = m4;
s2_6(:,k) = s4;
% End of calculations
prev_batch = new_batch;
disp(k);
end
How about:
[~, idx] = sort(randn(2000,1));
group1_idx = idx(1:1500);
group2_idx = idx(1501:end);
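(Sorting random values is effectively randperm.) Applied to your batching loop, a minimal sketch with the exact counts you asked for (1500 train / 500 test per 2000-row batch; variable names follow your code):
batchSize = 2000; nTrain = 1500;                 % 1500 train / 500 test, as requested
X_batch = X(prev_batch+1 : prev_batch+batchSize, :);
idx = randperm(batchSize);                       % shuffle the batch rows
train = X_batch(idx(1:nTrain), :);               % first 1500 shuffled rows
test = X_batch(idx(nTrain+1:end), :);            % remaining 500 rows
prev_batch = prev_batch + batchSize;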

Export a neural network trained with MATLAB in other programming languages

I trained a neural network using the MATLAB Neural Network Toolbox, in particular using the command nprtool, which provides a simple GUI for the toolbox features and exports a net object containing the information about the generated NN.
In this way, I created a working neural network that I can use as a classifier. It has 200 inputs, 20 neurons in the first hidden layer, and 2 neurons in the last layer that provide a two-dimensional output.
What I want to do is to use the network in some other programming language (C#, Java, ...).
In order to solve this problem, I tried the following code in MATLAB:
y1 = tansig(net.IW{1} * input + net.b{1});
Results = tansig(net.LW{2} * y1 + net.b{2});
Assuming that input is a one-dimensional array of 200 elements, the previous code would work if net.IW{1} were a 20x200 matrix (20 neurons, 200 weights).
The problem is that I noticed that size(net.IW{1}) returns unexpected values:
>> size(net.IW{1})
ans =
20 199
I got the same problem with a network with 10000 inputs. In that case, the result wasn't 20x10000, but something like 20x9384 (I don't remember the exact value).
So, the question is: how can I obtain the weights of each neuron? And after that, can someone explain how I can use them to produce the same output as MATLAB?
I solved the problems described above, and I think it is useful to share what I've learned.
Premises
First of all, we need some definitions. In the toolbox's notation [1], IW stands for input weights: these are the weights of the neurons in Layer 1, each of which is connected to every input.
All the other weights are called layer weights (LW), and each of them is connected to the outputs of the previous layer. In our case of study, we use a network with only two layers, so we need only one LW array to solve our problem.
Solution of the problem
After the above introduction, we can proceed by dividing the issue in two steps:
Force the number of initial weights to match with the input array length
Use the weights to implement and use the neural network just trained in other programming languages
A - Force the number of initial weights to match with the input array length
Using nprtool, we can train our network, and at the end of the process we can also export to the workspace some information about the entire training process. In particular, we need to export:
a MATLAB network object that represents the neural network created
the input array used to train the network
the target array used to train the network
Also, we need to generate an M-file that contains the code used by MATLAB to create the neural network, because we need to modify it and change some training options.
The M-code generated will be similar to the following one:
function net = create_pr_net(inputs,targets)
%CREATE_PR_NET Creates and trains a pattern recognition neural network.
%
%   NET = CREATE_PR_NET(INPUTS,TARGETS) takes these arguments:
%     INPUTS  - RxQ matrix of Q R-element input samples
%     TARGETS - SxQ matrix of Q S-element associated target samples, where
%               each column contains a single 1, with all other elements set to 0.
%   and returns these results:
%     NET - The trained neural network
%
%   For example, to solve the Iris dataset problem with this function:
%
%     load iris_dataset
%     net = create_pr_net(irisInputs,irisTargets);
%     irisOutputs = sim(net,irisInputs);
%
%   To reproduce the results you obtained in NPRTOOL:
%
%     net = create_pr_net(trainingSetInput,trainingSetOutput);

% Create Network
numHiddenNeurons = 20;                % Adjust as desired
net = newpr(inputs,targets,numHiddenNeurons);
net.divideParam.trainRatio = 75/100;  % Adjust as desired
net.divideParam.valRatio = 15/100;    % Adjust as desired
net.divideParam.testRatio = 10/100;   % Adjust as desired

% Train and Apply Network
[net,tr] = train(net,inputs,targets);
outputs = sim(net,inputs);

% Plot
plotperf(tr)
plotconfusion(targets,outputs)
Before starting the training process, we need to remove all preprocessing and postprocessing functions that MATLAB executes on inputs and outputs. This can be done by adding the following lines just before the % Train and Apply Network line:
net.inputs{1}.processFcns = {};
net.outputs{2}.processFcns = {};
After these changes to the create_pr_net() function, we can simply use it to create our final neural network:
net = create_pr_net(input, target);
where input and target are the values we exported through nprtool.
In this way, we are sure that the number of weights equals the length of the input array. This process is also useful to simplify porting to other programming languages.
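As a quick sanity check after retraining (assuming the 200-input network from the question):
size(net.IW{1})   % expected: 20 200 (20 hidden neurons, 200 inputs)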
B - Implement and use the neural network just trained in other programming languages
With these changes, we can define a function like this:
function [ Results ] = classify( net, input )
y1 = tansig(net.IW{1} * input + net.b{1});
Results = tansig(net.LW{2} * y1 + net.b{2});
end
In this code, we use the IW and LW arrays mentioned above, as well as the biases b used in the network schema by nprtool. In this context, we don't care about the role of the biases; we simply need to use them because nprtool does.
Now, we can use the classify() function defined above, or the built-in sim() function; both give the same results, as shown in the following example:
>> sim(net, input(:, 1))
ans =
0.9759
-0.1867
-0.1891
>> classify(net, input(:, 1))
ans =
0.9759
-0.1867
-0.1891
Obviously, the classify() function can be interpreted as pseudocode and then implemented in any programming language in which it is possible to define the MATLAB tansig() function [2] and the basic operations between arrays.
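For reference, tansig is mathematically the hyperbolic tangent, which the documentation [2] defines as:
a = 2 ./ (1 + exp(-2*n)) - 1;   % tansig(n), numerically equivalent to tanh(n)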
References
[1] Howard Demuth, Mark Beale, Martin Hagan: Neural Network Toolbox 6 - User Guide, MATLAB
[2] Mathworks, tansig - Hyperbolic tangent sigmoid transfer function, MATLAB Documentation center
Additional notes
Take a look at robott's answer and Sangeun Chi's answer below for more details.
Thanks to VitoShadow's and robott's answers, I could export MATLAB neural network values to other applications. I really appreciate them, but I found some small errors in their code and want to correct them.
1) In VitoShadow's code:
Results = tansig(net.LW{2} * y1 + net.b{2});
-> Results = net.LW{2} * y1 + net.b{2};
2) In robott's preprocessing code, it is easier to extract xmax and xmin from the net variable than to calculate them:
xmax = net.inputs{1}.processSettings{1}.xmax
xmin = net.inputs{1}.processSettings{1}.xmin
3) In robott's postprocessing code:
xmax = net.outputs{2}.processSettings{1}.xmax
xmin = net.outputs{2}.processSettings{1}.xmin
Results = (ymax-ymin)*(Results-xmin)/(xmax-xmin) + ymin;
-> Results = (Results-ymin)*(xmax-xmin)/(ymax-ymin) + xmin;
You can manually check and confirm the values as follows:
p2 = mapminmax('apply', input(:, 1), net.inputs{1}.processSettings{1})   % preprocessed data
y1 = purelin(net.LW{2} * tansig(net.IW{1} * p2 + net.b{1}) + net.b{2})   % neural network output
y2 = mapminmax('reverse', y1, net.outputs{2}.processSettings{1})         % postprocessed data
Reference:
http://www.mathworks.com/matlabcentral/answers/14517-processing-of-i-p-data
This is a small improvement to Vito Gentile's great answer.
If you want to use the preprocessing and postprocessing 'mapminmax' functions, pay attention: 'mapminmax' in MATLAB normalizes by ROW, not by column!
This is what you need to add to the classify function above to keep the pre/post-processing coherent:
[m n] = size(input);
ymax = 1;
ymin = -1;
for i=1:m
xmax = max(input(i,:));
xmin = min(input(i,:));
for j=1:n
input(i,j) = (ymax-ymin)*(input(i,j)-xmin)/(xmax-xmin) + ymin;
end
end
And this at the end of the function:
ymax = 1;
ymin = 0;
xmax = 1;
xmin = -1;
Results = (ymax-ymin)*(Results-xmin)/(xmax-xmin) + ymin;
This is MATLAB code, but it can easily be read as pseudocode.
Hope this will be helpful!
I implemented a simple 2-layer NN in C++ using OpenCV and then exported the weights to Android, which worked quite well. I wrote a small script that generates a header file with the learned weights, and it is used in the following code snippet.
// Map Minimum and Maximum Input Processing Function
Mat mapminmax_apply(Mat x, Mat settings_gain, Mat settings_xoffset, double settings_ymin){
Mat y;
subtract(x, settings_xoffset, y);
multiply(y, settings_gain, y);
add(y, settings_ymin, y);
return y;
/* MATLAB CODE
y = x - settings_xoffset;
y = y .* settings_gain;
y = y + settings_ymin;
*/
}
// Sigmoid Symmetric Transfer Function
Mat transig_apply(Mat n){
Mat tempexp;
exp(-2*n, tempexp);
Mat transig_apply_result = 2 /(1 + tempexp) - 1;
return transig_apply_result;
}
// Map Minimum and Maximum Output Reverse-Processing Function
Mat mapminmax_reverse(Mat y, Mat settings_gain, Mat settings_xoffset, double settings_ymin){
Mat x;
subtract(y, settings_ymin, x);
divide(x, settings_gain, x);
add(x, settings_xoffset, x);
return x;
/* MATLAB CODE
function x = mapminmax_reverse(y,settings_gain,settings_xoffset,settings_ymin)
x = y - settings_ymin;
x = x ./ settings_gain;
x = x + settings_xoffset;
end
*/
}
Mat getNNParameter (Mat x1)
{
// convert double array to MAT
// input 1
Mat x1_step1_xoffsetM = Mat(1, 48, CV_64FC1, x1_step1_xoffset).t();
Mat x1_step1_gainM = Mat(1, 48, CV_64FC1, x1_step1_gain).t();
double x1_step1_ymin = -1;
// Layer 1
Mat b1M = Mat(1, 25, CV_64FC1, b1).t();
Mat IW1_1M = Mat(48, 25, CV_64FC1, IW1_1).t();
// Layer 2
Mat b2M = Mat(1, 48, CV_64FC1, b2).t();
Mat LW2_1M = Mat(25, 48, CV_64FC1, LW2_1).t();
// input 1
Mat y1_step1_gainM = Mat(1, 48, CV_64FC1, y1_step1_gain).t();
Mat y1_step1_xoffsetM = Mat(1, 48, CV_64FC1, y1_step1_xoffset).t();
double y1_step1_ymin = -1;
// ===== SIMULATION ========
// Input 1
Mat xp1 = mapminmax_apply(x1, x1_step1_gainM, x1_step1_xoffsetM, x1_step1_ymin);
Mat temp = b1M + IW1_1M*xp1;
// Layer 1
Mat a1M = transig_apply(temp);
// Layer 2
Mat a2M = b2M + LW2_1M*a1M;
// Output 1
Mat y1M = mapminmax_reverse(a2M, y1_step1_gainM, y1_step1_xoffsetM, y1_step1_ymin);
return y1M;
}
An example of a bias in the header could be this:
static double b2[1][48] = {
{-0.19879, 0.78254, -0.87674, -0.5827, -0.017464, 0.13143, -0.74361, 0.4645, 0.25262, 0.54249, -0.22292, -0.35605, -0.42747, 0.044744, -0.14827, -0.27354, 0.77793, -0.4511, 0.059346, 0.29589, -0.65137, -0.51788, 0.38366, -0.030243, -0.57632, 0.76785, -0.36374, 0.19446, 0.10383, -0.57989, -0.82931, 0.15301, -0.89212, -0.17296, -0.16356, 0.18946, -1.0032, 0.48846, -0.78148, 0.66608, 0.14946, 0.1972, -0.93501, 0.42523, -0.37773, -0.068266, -0.27003, 0.1196}};
Now that Google has published TensorFlow, this has become obsolete.
Hence, the solution becomes (after correcting all parts):
Here I am giving a solution in MATLAB, but if you have a tanh() function, you can easily convert it to any programming language. It just shows the fields of the network object and the operations you need.
Assume you have a trained network object named trained_ann that you want to export.
Here is the script for exporting and testing. The testing script compares the original network's result with the my_ann_evaluation() result:
% Export IT
exported_ann_structure = my_ann_exporter(trained_ann);
% Run and Compare
% Works only for single INPUT vector
% Please extend it to MATRIX version by yourself
input = [12 3 5 100];
res1 = trained_ann(input')';
res2 = my_ann_evaluation(exported_ann_structure, input')';
where you need the following two functions.
First, my_ann_exporter:
function [ my_ann_structure ] = my_ann_exporter(trained_netw)
% Just for extracting as Structure object
my_ann_structure.input_ymax = trained_netw.inputs{1}.processSettings{1}.ymax;
my_ann_structure.input_ymin = trained_netw.inputs{1}.processSettings{1}.ymin;
my_ann_structure.input_xmax = trained_netw.inputs{1}.processSettings{1}.xmax;
my_ann_structure.input_xmin = trained_netw.inputs{1}.processSettings{1}.xmin;
my_ann_structure.IW = trained_netw.IW{1};
my_ann_structure.b1 = trained_netw.b{1};
my_ann_structure.LW = trained_netw.LW{2};
my_ann_structure.b2 = trained_netw.b{2};
my_ann_structure.output_ymax = trained_netw.outputs{2}.processSettings{1}.ymax;
my_ann_structure.output_ymin = trained_netw.outputs{2}.processSettings{1}.ymin;
my_ann_structure.output_xmax = trained_netw.outputs{2}.processSettings{1}.xmax;
my_ann_structure.output_xmin = trained_netw.outputs{2}.processSettings{1}.xmin;
end
Second, my_ann_evaluation:
function [ res ] = my_ann_evaluation(my_ann_structure, input)
% Works with only single INPUT vector
% Matrix version can be implemented
ymax = my_ann_structure.input_ymax;
ymin = my_ann_structure.input_ymin;
xmax = my_ann_structure.input_xmax;
xmin = my_ann_structure.input_xmin;
input_preprocessed = (ymax-ymin) * (input-xmin) ./ (xmax-xmin) + ymin;
% Pass it through the ANN matrix multiplication
y1 = tanh(my_ann_structure.IW * input_preprocessed + my_ann_structure.b1);
y2 = my_ann_structure.LW * y1 + my_ann_structure.b2;
ymax = my_ann_structure.output_ymax;
ymin = my_ann_structure.output_ymin;
xmax = my_ann_structure.output_xmax;
xmin = my_ann_structure.output_xmin;
res = (y2-ymin) .* (xmax-xmin) /(ymax-ymin) + xmin;
end

Question on neural network matlab code

Hi, I found this code somewhere with little info about it.
It's supposed to be backpropagation neural network code,
but it seems to be lacking things like weights and biases.
Is the code correct? Is it a test-while-train backpropagation neural network?
Thanks
% --- Executes on button press in pushbutton6.
%~~~~~~~~~~~[L1 L2 1];first hidden layer,second & output layer~~~~~
layer = [11 15 1];
myepochs = 30;
attemption = 1; %i;
mytfn = {'tansig' 'tansig' 'purelin'};
%~~~~~~load data~~~~~~~~~~~~~~~~~~~~~~~
m = xlsread('D:\MATLAB\datatrain.csv');
%~~~~~~convert the data in Matrix form~~~~
[row,col] = size(m);
P = m(1:row,1:10)';
T1 = m(1:row, col)'; % target data for training...last column
net = newff([minmax(P)],layer,mytfn,'trainlm'); %nnet
net.trainParam.epochs = myepochs; % how many time newff will repeat the training
net.trainParam.showWindow = true;
net.trainParam.showCommandLine = true;
net = train(net,P,T1); % start training newff with input P and target T1
Y = sim(net,P); % training
save 'net7' net;
% --- Executes on button press in pushbutton4.
%~~~~~~load data~~~~~~~~~~~~~~~~~~~~~~~
mt = xlsread('D:\MATLAB\datatest.csv');
%~~~~~~convert the data in Matrix form~~~~
[row1,col1] = size(mt);
Pt= mt(1:row1,1:10)';
Tt = mt(1:row1, col1)';
load 'net7' -mat;
Yt= sim(net,Pt);
%~~~~~~~final result of the neural network~~~~~~~~
[r,c]=size(Yt);
result=Yt(c);
if result>0.7
error=1-result;
set(handles.edit39,'String','yes')
set(handles.edit40,'String',num2str(error))
set(handles.edit41,'String','Completed')
data1=[num2str(result) ];
fid = fopen('D:\MATLAB\record.csv','a+');
fprintf(fid,[data1,'\n']);
fclose(fid);
else
set(handles.edit39,'String','no')
set(handles.edit40,'String',num2str(result))
set(handles.edit41,'String','Completed')
data1=[num2str(result) ];
fid = fopen('D:\MATLAB\record.csv','a+');
fprintf(fid,[data1,'\n']);
fclose(fid);
end
The code is correct. Neural network weights and biases are stored inside the net structure: you can access the weights via net.IW and net.LW, and the biases via net.b. This code trains a network using inputs P and targets T1, splitting them into training, testing, and validation subsets used during training. Check the documentation for further information about the training procedure.
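For example, for the three-layer [11 15 1] network above, a minimal sketch of reading them out:
W1 = net.IW{1};     % input -> first hidden layer weights (11-by-10)
W2 = net.LW{2,1};   % first -> second hidden layer weights (15-by-11)
W3 = net.LW{3,2};   % second hidden -> output layer weights (1-by-15)
b1 = net.b{1}; b2 = net.b{2}; b3 = net.b{3};   % per-layer bias vectors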