MATLAB Neural Network pattern recognition - matlab

I've made a simple neural network for mouse gesture recognition (the inputs are angles), created with nprtool (which uses the patternnet function). I saved the weights and biases of the network:
W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};
and to calculate the result I used tansig(W2*(tansig(W1*in+b1))+b2);
where in is an input. But the result is awful (each number is approximately 0.99), while the output of the command net(in) is good. What am I doing wrong? It's very important for me to understand why the first method fails (I do the same thing in my C++ program). I'm asking for help :)
[edit]
Below is the code generated by the nprtool GUI. Maybe it will be helpful to someone, but I don't see a solution to my problem in it. The tansig activation function is used for the neurons in both the hidden and output layers (is there a parameter for this in a MATLAB network?).
% Solve a Pattern Recognition Problem with a Neural Network
% Script generated by NPRTOOL
% Created Tue May 22 22:05:57 CEST 2012
%
% This script assumes these variables are defined:
%
% input - input data.
% target - target data.
inputs = input;
targets = target;
% Create a Pattern Recognition Network
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% For help on training function 'trainlm' type: help trainlm
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression', 'plotfit'};
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotconfusion(targets,outputs)
%figure, ploterrhist(errors)

As can be seen in your code, the network applies automated preprocessing to the input and postprocessing to the targets; look for the lines that define processFcns. This means that the trained parameters are valid only for preprocessed input, and that the network's output is postprocessed (with the same parameters that were used on the targets). So in your line tansig(W2*(tansig(W1*in+b1))+b2); you can't use your original inputs. You have to preprocess the input, use the result as the network's input, and postprocess the output using the same parameters that were used to postprocess the targets. Only then will you get the same result as calling net(in).
You can read more here: http://www.mathworks.com/help/toolbox/nnet/rn/f0-81221.html#f0-81692
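If it helps, here is a minimal sketch of reproducing net(in) by hand. It assumes mapminmax is the second entry in processFcns and the only step that actually transformed the data (i.e. removeconstantrows removed no rows); check both assumptions against your own network:
inSettings = net.inputs{1}.processSettings{2};   % mapminmax settings for inputs (assumed index)
outSettings = net.outputs{2}.processSettings{2}; % mapminmax settings for targets (assumed index)
inNorm = mapminmax('apply', in, inSettings);     % preprocess the raw input
yNorm = tansig(W2*tansig(W1*inNorm + b1) + b2);  % your forward pass, on normalized data
out = mapminmax('reverse', yNorm, outSettings);  % undo the target scaling
out should now match net(in).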

Related

K-fold cross validation modification to generated ANN code?

My data set is basically a matrix of 3 variables (input), and a matrix of 1 variable (target). There are 50 total data sets for each of these (basically 50 samples of f(x,y,z) = t)
I have only done the ANN training using the GUI, never with the script/code directly.
My simplest objective now is to split the data manually for each train-test run, so I can painstakingly run the neural network 5 times; but I'm not even sure how to manually select one range of the data set for training and another for testing.
Here's the full exported script from MATLAB. The point of focus is shown below the wall of code.
% Solve an Input-Output Fitting problem with a Neural Network
% Script generated by NFTOOL
% Created Mon Jul 17 02:39:31 SGT 2017
%
% This script assumes these variables are defined:
%
% DEinp - input data.
% DEcgl - target data.
inputs = DEinp;
targets = DEcgl;
% Create a Fitting Network
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% For help on training function 'trainlm' type: help trainlm
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression', 'plotfit'};
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotfit(net,inputs,targets)
%figure, plotregression(targets,outputs)
%figure, ploterrhist(errors)
I figured that all I needed to do was mess with the net.divideMode section, but I really have no idea how to change the syntax to complete my objective.
Network Parameters
The process of splitting the data into training, validation and test sets happens in the section that you identified. I'm just going to break down each of the lines. Starting with:
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideMode = 'sample'; % Divide up every sample
The divideMode is well documented in Neural Network Object Properties
net.divideMode
This property defines the target data dimensions to divide up
when the data division function is called. Its default
value is 'sample' for static networks and 'time' for dynamic networks.
It may also be set to 'sampletime' to divide targets by both sample
and timestep, 'all' to divide up targets by every scalar value, or
'none' to not divide up data at all (in which case all data is used
for training, none for validation or testing).
So your network is a static network that divides the data up sample by sample. This will remain the same for your cross-validation. What you are interested in manipulating is the training, validation, and test split.
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
Okay, the variable names here seem promising, but you want a little more control than just choosing the ratio size.
Again the Neural Network Object Properties point us towards more information
net.divideParam
This property defines the parameters and values of the current
data-division function. To get a description of what each field means,
type the following command:
help(net.divideFcn)
This will print out information about how your dataset is partitioned into training, validation, and test splits. In your current configuration, the message reads
dividerand Partition indices into three sets using random indices.
[trainInd,valInd,testInd] = dividerand(Q,trainRatio,valRatio,testRatio) takes a number of
samples Q and divides up the sample indices 1:Q between training,
validation and test indices.
dividerand randomly assigns sample indices to the three sets according to the three ratios.
(...)
See also divideblock, divideind, divideint, dividetrain.
Since you want more control of the partitions, you should check out these additional options.
I think the most promising is divideind. This option allows you to specify the indices for each partition. You can calculate the indices for each fold in your k-fold cross validation and reassign the partitions in each iteration using this option.
To set this up, replace the net.divideParam lines above with something like:
net.divideFcn = 'divideind';
net.divideParam.trainInd = your_train_ind;
net.divideParam.valInd = your_val_ind;
net.divideParam.testInd = your_test_ind;
Note that assigning net.divideFcn resets net.divideParam to that function's defaults, so set the function first. Together, the three index vectors should cover the sample indices 1:Q, where Q is the total number of instances in your data.
K-folds
One last detail: how do you select the indices? First, a quick review of k-fold cross-validation.
The data is split into k equally sized subsamples.
In each iteration of cross-validation, we train on k-1 of the subsamples and test on the remaining subsample, rotating to a new testing subsample each time.
An implementation sketch might look like this:
k = 5; % As an example, let's let k = 5
sample_size = length(targets)/k;
% Make a vector of all the indices of your data from 1 to the total number of instances
indices = 1:length(targets);
% Optional: randomize the sample order
indices = randperm(length(targets));
% Iterate in steps of sample_size
for ii = 1:sample_size:length(targets)
    % Grab one subsample of indices for testing
    your_test_ind = indices(ii:ii + sample_size - 1);
    % Everything else goes into training
    your_train_ind = indices([1:ii - 1, ii + sample_size:end]);
    % Train and test your network here!
end
This is just an implementation sketch and assumes the number of instances divides evenly by k, but it should be enough to get you started.
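For concreteness, here is a hedged sketch of what the loop body might look like with divideind wired in. Leaving valInd empty is an assumption (no early stopping per fold); carve validation indices out of the training set if you want it:
net = fitnet(hiddenLayerSize); % fresh network so each fold trains from scratch
net.divideFcn = 'divideind';
net.divideParam.trainInd = your_train_ind;
net.divideParam.valInd = []; % assumption: no validation set per fold
net.divideParam.testInd = your_test_ind;
[net,tr] = train(net,inputs,targets);
foldOutputs = net(inputs(:,your_test_ind)); % evaluate on this fold's test set
foldPerformance = perform(net,targets(:,your_test_ind),foldOutputs)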

Function approximation using Autoencoder in MATLAB

I have a simple non-linear function y=x.^2, where x and y are n-dimensional vectors and the square is component-wise. I want to approximate y with a low-dimensional vector using an autoencoder in Matlab. The problem is that I get a distorted reconstruction of y even when the low-dimensional space is set to n-1. My training data looks like
this and here is a typical result reconstructed from the low-dimensional space. My Matlab code is given below.
%% Training data
inputSize=100;
hiddenSize1 = 80;
epo=1000;
dataNum=6000;
rng(123);
y=rand(2,dataNum);
xTrain=zeros(inputSize,dataNum);
for i=1:dataNum
xTrain(:,i)=linspace(y(1,i),y(2,i),inputSize).^2;
end
%scaling the data to [-1,1]
for i=1:inputSize
meanX=0.5; %mean(xTrain(i,:));
sd=max(xTrain(i,:))-min(xTrain(i,:));
xTrain(i,:) = (xTrain(i,:)- meanX)./sd;
end
%% Training the first Autoencoder
% Create the network.
autoenc1 = feedforwardnet(hiddenSize1);
autoenc1.trainFcn = 'trainscg';
autoenc1.trainParam.epochs = epo;
% Do not use process functions at the input or output
autoenc1.inputs{1}.processFcns = {};
autoenc1.outputs{2}.processFcns = {};
% Set the transfer function for both layers to the hyperbolic tangent sigmoid
autoenc1.layers{1}.transferFcn = 'tansig';
autoenc1.layers{2}.transferFcn = 'tansig';
% Use all of the data for training
autoenc1.divideFcn = 'dividetrain';
autoenc1.performFcn = 'mae';
%% Train the autoencoder
autoenc1 = train(autoenc1,xTrain,xTrain);
%%
% Create an empty network
autoEncoder = network;
% Set the number of inputs and layers
autoEncoder.numInputs = 1;
autoEncoder.numLayers = 1;
% Connect the 1st (and only) layer to the 1st input, and also connect the
% 1st layer to the output
autoEncoder.inputConnect(1,1) = 1;
autoEncoder.outputConnect = 1;
% Add a connection for a bias term to the first layer
autoEncoder.biasConnect = 1;
% Set the size of the input and the 1st layer
autoEncoder.inputs{1}.size = inputSize;
autoEncoder.layers{1}.size = hiddenSize1;
% Use the hyperbolic tangent sigmoid transfer function for the first layer
autoEncoder.layers{1}.transferFcn = 'tansig';
% Copy the weights and biases from the first layer of the trained
% autoencoder to this network
autoEncoder.IW{1,1} = autoenc1.IW{1,1};
autoEncoder.b{1,1} = autoenc1.b{1,1};
%%
% generate the features
feat1 = autoEncoder(xTrain);
%%
% Create an empty network
autoDecoder = network;
% Set the number of inputs and layers
autoDecoder.numInputs = 1;
autoDecoder.numLayers = 1;
% Connect the 1st (and only) layer to the 1st input, and also connect the
% 1st layer to the output
autoDecoder.inputConnect(1,1) = 1;
autoDecoder.outputConnect(1) = 1;
% Add a connection for a bias term to the first layer
autoDecoder.biasConnect(1) = 1;
% Set the size of the input and the 1st layer
autoDecoder.inputs{1}.size = hiddenSize1;
autoDecoder.layers{1}.size = inputSize;
% Use the hyperbolic tangent sigmoid transfer function for the first layer
autoDecoder.layers{1}.transferFcn = 'tansig';
% Copy the weights and biases from the first layer of the trained
% autoencoder to this network
autoDecoder.IW{1,1} = autoenc1.LW{2,1};
autoDecoder.b{1,1} = autoenc1.b{2,1};
%% Reconstruction
desired=xTrain(:,50);
input=feat1(:,50);
output = autoDecoder(input);
figure
plot(output)
hold on
plot(desired,'r')
I'm not a Matlab user, but your code makes me think you have a standard shallow autoencoder. You can't really approximate a nonlinearity well with a single shallow autoencoder, because it won't be much better than a purely linear PCA reconstruction (I can provide more elaborate mathematical reasoning if you need it, though this is not math.stackexchange). You need to build a deep network to approximate your nonlinearity with several layers of transformations. Even then, a plain autoencoder is a poor model to choose (hardly anyone uses plain ones in practice today) when denoising autoencoders are available, which tend to learn more robust representations by trying to reconstruct the input from a noisy version of it. Try building a deep denoising autoencoder. This video introduces the concept of denoising autoencoders, and the same course has a video about deep denoising autoencoders as well.
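If you want to try the denoising idea with your existing code, a minimal sketch is to corrupt the inputs during training while keeping the clean data as the target; noiseStd is a made-up tuning parameter, and this replaces the train call above:
noiseStd = 0.05; % assumed noise level, tune as needed
xNoisy = xTrain + noiseStd*randn(size(xTrain)); % corrupt the inputs
autoenc1 = train(autoenc1,xNoisy,xTrain); % learn to reconstruct clean from noisy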

What are the Inputs, Outputs and Target in ANN

I am getting confused about the inputs data set, outputs, and target. I am studying Artificial Neural Networks in Matlab; my purpose is to use historical data (I have rainfall and water levels for the past 20 years) to predict the water level in the future (for example 2014). So where are my inputs, targets, and output? For example, I have an Excel sheet with data as [Column1-Date | Column2-Rainfall | Column3-Water level]
I am using this code for prediction, but it could not predict into the future. Can anyone help me fix it? Thank you.
%% 1. Importing data
Data_Inputs=xlsread('demo.xls'); % Import file
Training_Set=Data_Inputs(1:end,2);%specific training set
Target_Set=Data_Inputs(1:end,3); %specific target set
Input=Training_Set'; %Convert to row
Target=Target_Set'; %Convert to row
X = con2seq(Input); %Convert to cell
T = con2seq(Target); %Convert to cell
%% 2. Data preparation
N = 365; % Multi-step ahead prediction
% Input and target series are divided in two groups of data:
% 1st group: used to train the network
inputSeries = X(1:end-N);
targetSeries = T(1:end-N);
inputSeriesVal = X(end-N+1:end);
targetSeriesVal = T(end-N+1:end);
% Create a Nonlinear Autoregressive Network with External Input
delay = 2;
inputDelays = 1:2;
feedbackDelays = 1:2;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotregression(targets,outputs)
%figure, plotresponse(targets,outputs)
%figure, ploterrcorr(errors)
%figure, plotinerrcorr(inputs,errors)
% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
yc = netc(xc,xic,aic);
closedLoopPerformance = perform(netc,tc,yc)
% Early Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is given y(t+1).
% For some applications such as decision making, it would help to have predicted
% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
% The network can be made to return its output a timestep early by removing one delay
% so that its minimal tap delay is now 0 instead of 1. The new network returns the
% same outputs as the original network, but outputs are shifted left one timestep.
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
ys = nets(xs,xis,ais);
earlyPredictPerformance = perform(nets,ts,ys)
%% 5. Multi-step ahead prediction
inputSeriesPred = [inputSeries(end-delay+1:end),inputSeriesVal];
targetSeriesPred = [targetSeries(end-delay+1:end), con2seq(nan(1,N))];
[Xs,Xi,Ai,Ts] = preparets(netc,inputSeriesPred,{},targetSeriesPred);
yPred = netc(Xs,Xi,Ai);
perf = perform(netc,targetSeriesVal,yPred);
figure;
plot([cell2mat(targetSeries),nan(1,N);
      nan(1,length(targetSeries)),cell2mat(yPred);
      nan(1,length(targetSeries)),cell2mat(targetSeriesVal)]')
legend('Original Targets','Network Predictions','Expected Outputs');
Inputs and targets are the data you use to train the net.
Inputs and targets are correct data that is already known. After you have trained the net, you send only inputs again, and the output is predicted based on the inputs and targets you supplied during the training session. So your targets are the correct outputs for the data you already know.
As I understand it, you are trying to predict the future, and for the future you only have the date? Correct me if I am wrong. So in this case:
Before training:
input1 = date; input2 = rainFall;
input = [input1; input2];
target = waterLevel;
Because you want the net to give you back the water level, your targets should also be the water level.
Now you train the net:
net = train(net, input, target);
After training
Now, as you said, you want to predict the water level, but you only give a date, for example 2015-11-11. In this case it's impossible, because you also need the rainfall info. So if you still want to predict the water level based on the date, you need to predict the rainfall too, or eliminate it from the model, because it doesn't help once you no longer know it.
I'd say your inputs are both the rainfall and the water level, the target is the water level for the next year, and the output is the predicted water level.
In other words, when training, your inputs should be rainfall(k-2:k-1) (direct input) and waterlevel(k-2:k-1) (as feedback). Your target is waterlevel(k). That should output an estimation of the water level for year k (waterlevel_hat(k)). You can compute the error e = waterlevel_hat(k) - waterlevel(k) and use it to train the network. You should repeat the same process for all k > 2 (the reason is that you have 2 input delays and 2 feedback delays).
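To make that lagged structure concrete, here is a hedged sketch of building it by hand; this is only for illustration, since preparets builds exactly this for narxnet from the delay settings:
rain = Data_Inputs(:,2)'; % rainfall series as a row
level = Data_Inputs(:,3)'; % water level series as a row
k = 3:length(level); % k > 2 because of the two delays
X = [rain(k-2); rain(k-1); level(k-2); level(k-1)]; % lagged inputs and feedback
T = level(k); % target: current water level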

Matlab: neural network time series prediction?

Background: I am trying to use MATLAB's Neural Network toolbox to predict future values of data. I run it from the GUI, but I have also included the output code below.
Problem: My predicted values lag behind the actual values by 2 time periods, and I do not know how to actually see a "t+1" (predicted) value.
Code:
% Solve an Autoregression Time-Series Problem with a NAR Neural Network
% Script generated by NTSTOOL
% Created Tue Mar 05 22:09:39 EST 2013
%
% This script assumes this variable is defined:
%
% close_data - feedback time series.
targetSeries = tonndata(close_data_short,false,false);
% Create a Nonlinear Autoregressive Network
feedbackDelays = 1:3;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,hiddenLayerSize);
% Choose Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'time'; % Divide up every value
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Choose a Training Function
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
    'ploterrcorr', 'plotinerrcorr'};
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotresponse(targets,outputs)
%figure, ploterrcorr(errors)
%figure, plotinerrcorr(inputs,errors)
% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
% Early Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is given y(t+1).
% For some applications such as decision making, it would help to have predicted
% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
% The network can be made to return its output a timestep early by removing one delay
% so that its minimal tap delay is now 0 instead of 1. The new network returns the
% same outputs as the original network, but outputs are shifted left one timestep.
nets = removedelay(net);
[xs,xis,ais,ts] = preparets(nets,{},{},targetSeries);
ys = nets(xs,xis,ais);
earlyPredictPerformance = perform(nets,ts,ys)
Proposed Solution: I believe the answer lies in the last part of the code "Early Prediction Network". I'm just not sure how to remove 'one delay'.
Additional question: Is there a function that can be output from this so I can use it over and over? Or would I just have to keep retraining once I get the next time period of data?
To ensure that this question does not remain open while the answer is already present, I will post the comment that seems to address the issue.
Credit to @DanielTheRocketMan:
I believe that you should work in steps:
see if the data is stationary
if not, deal with it (for instance, differentiate the data)
test the simplest possible model, for instance, an AR model
try a nonlinear model, for instance, NAR
then move to a NN model.
Try a simpler version. I have tested this code and it works fine for me.
inputs = X; % define input and target
targets = y;
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
mse(errors)
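On the original questions: to see a t+1 value, a hedged sketch is to reuse the Early Prediction Network section of your generated script, remove one delay, and read off the last output (treat the indexing as an assumption to verify against your data):
nets = removedelay(net); % output now leads the input by one timestep
[xs,xis,ais,ts] = preparets(nets,{},{},targetSeries);
ys = nets(xs,xis,ais);
nextValue = ys{end} % assumed to be the prediction one step past the last known value
And for reuse: you do not need to retrain each period. Save the trained network and load it when new data arrives; in newer releases (genFunction appeared around R2013b, an assumption about your version) you can also generate a standalone function:
save('trainedNar.mat','net'); % persist the trained network
% later: s = load('trainedNar.mat'); net = s.net;
% genFunction(net,'myNarFcn'); % R2013b or later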

How to apply Back propagation for 3 class classification task in matlab 2012a?

I want to solve a classification problem with 3 classes using a multilayer neural network with the backpropagation algorithm. I'm using Matlab 2012a. I'm having trouble with the newff function. I want to build a network with one hidden layer, and there should be 3 neurons in the output layer, one for each class. Please advise me with an example.
Here is my code
clc
%parameters
nodesInHL=7;
nodesInOutput=3;
iteration=1000;
HLtransfer='tansig';
outputTransfer='tansig';
trainFunc='traingd';
learnRate=0.05;
performanceFunc='mse';
%rand('seed',0);
%randn('seed',0);
rng('shuffle');
net=newff(trainX,trainY,[nodesInHL],{HLtransfer,outputTransfer},trainFunc,'learngd',performanceFunc);
net=init(net);
%setting parameters
net.trainParam.epochs=iteration;
net.trainParam.lr=learnRate;
%training
[net,tr]=train(net,trainX,trainY);
Thanks.
The newff function is deprecated. The recommended replacement is feedforwardnet, or in your case (classification), patternnet.
You could also use the GUI of nprtool, which provides a wizard-like tool that guides you step-by-step to build your network. It even allows for code generation at the end of the experiment.
Here is an example:
%# load sample dataset
%# simpleclassInputs: 2x1000 matrix (1000 points of 2-dimensions)
%# simpleclassTargets: 4x1000 matrix (4 possible classes)
load simpleclass_dataset
%# create ANN of one hidden layer with 7 nodes
net = patternnet(7);
%# set params
net.trainFcn = 'traingd'; %# training function
net.trainParam.epochs = 1000; %# max number of iterations
net.trainParam.lr = 0.05; %# learning rate
net.performFcn = 'mse'; %# mean-squared error function
net.divideFcn = 'dividerand'; %# how to divide data
net.divideParam.trainRatio = 70/100; %# training set
net.divideParam.valRatio = 15/100; %# validation set
net.divideParam.testRatio = 15/100; %# testing set
%# training
net = init(net);
[net,tr] = train(net, simpleclassInputs, simpleclassTargets);
%# testing
y_hat = net(simpleclassInputs);
perf = perform(net, simpleclassTargets, y_hat);
err = gsubtract(simpleclassTargets, y_hat);
view(net)
Note that the toolbox automatically sets the number of nodes in the output layer, based on the number of rows of the target matrix.
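One practical detail, assuming your labels are stored as a 1xN vector of class ids (1 to 3) rather than a 3xN matrix: convert them with ind2vec so the output layer gets its 3 neurons:
labels = [1 3 2 2 1]; % hypothetical class ids for 5 samples
trainY = full(ind2vec(labels)); % 3x5 one-hot target matrix
% after training, recover predicted class ids:
% predictedLabels = vec2ind(net(trainX));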