Matlab neural network

I've seen the following instructions in one of my AI courses:
net = newp([-2 2;-2 2],2)
net.IW{1,1} = [-1 1; 3 4]
net.b{1} = [-2,3]
How does the neural network look? Does the perceptron have 2 neurons?

The easiest way to take a look at it is via:
view(net)
There you can see the number of inputs, outputs and layers. You can also check the documentation of the command with
help newp
which says:
NET = newp(P,T,TF,LF) takes these inputs,
P - RxQ matrix of Q1 representative input vectors.
T - SxQ matrix of Q2 representative target vectors.
TF - Transfer function, default = 'hardlim'.
LF - Learning function, default = 'learnp'.
net.IW{1,1} sets the input weights to the chosen numbers,
and net.b{1} sets the biases of the network to the vector [-2,3].
Did this clarify things for you?
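So yes, the perceptron has 2 neurons: the weight matrix has 2 rows and the bias vector has 2 elements, one per neuron, and each neuron sees 2 inputs. A minimal sketch of what the layer computes (the input p is an arbitrary illustration; hardlim is the default transfer function, per the help text above):
W = [-1 1; 3 4];     % 2 rows -> 2 neurons, each with 2 inputs
b = [-2; 3];         % one bias per neuron, as a column vector
p = [1; -1];         % an arbitrary input in the [-2,2] x [-2,2] range
a = hardlim(W*p + b) % 2-element output, one value per neuron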

Related

Why modifying the weights of a recurrent neural network in MATLAB does not cause the output to change when predicting on same data?

I consider the following recurrent neural network (RNN):
[Figure: the RNN under consideration; from the code below it is an Elman-style network, roughly h_t = tanh(U x_t + W h_{t-1}), y_t = V h_t]
where x is the input (a vector of reals), h the hidden state vector and y is the output vector. I trained the network on Matlab using some data x and obtained W, V, and U.
However, in MATLAB, after changing the matrix W to W' while keeping U and V the same, the output y of the RNN that uses W is identical to the output y' of the RNN that uses W' when both predict on the same data x. Judging by the equation above, those two outputs should be different, but I cannot reproduce that in MATLAB (when I modify V or U, the outputs do change). How can I fix the code so that the outputs y and y' differ, as they should?
The relevant code is shown below:
[x,t] = simplefit_dataset; % x: input data ; t: targets
net = newelm(x,t,5); % Recurrent neural net with 1 hidden layer (5 nodes) and 1 output layer (1 node)
net.layers{1}.transferFcn = 'tansig'; % tansig is equivalent to tanh; the activation function of the hidden layer
net.biasConnect = [0;0]; % biases set to zero for easier experimenting
net.derivFcn ='defaultderiv'; % defaultderiv: tells Matlab to pick whatever derivative scheme works best for this net
view(net) % displays the network topology
net = train(net,x,t); % trains the network
W = net.LW{1,1}; U = net.IW{1,1}; V = net.LW{2,1}; % network matrices
Y = net(x); % Y: output when predicting on data x using W
net.LW{1,1} = rand(5,5); % This is the modified matrix W, i.e. W'
Y_prime = net(x); % Y_prime: output when predicting on data x using W'
max(abs(Y - Y_prime)) % displays 0, when it probably shouldn't
This is the recursion in your first layer (from the docs):
The weight matrix for the weight going to the ith layer from the jth
layer (or a null matrix [ ]) is located at net.LW{i,j} if
net.layerConnect(i,j) is 1 (or 0).
So net.LW{1,1} holds the weights to the first layer from the first layer (i.e. the recursion), whereas net.LW{2,1} stores the weights to the second layer from the first layer. Now, what does it mean that one can change the recursion weights randomly without any effect? In fact, you can even set them to zero, net.LW{1,1} = zeros(size(W));, without any effect. Note that this is essentially the same as dropping the recursion and creating a simple feed-forward network:
Hypothesis: The recursion has no effect.
You will note that if you change the weights to the second layer (1 neuron) from the first layer (5 neurons), e.g. net.LW{2,1} = zeros(size(V));, it will affect your prediction (the same is of course true if you change the input weights net.IW).
Why does the recursion have no effect?
Well, that beats me. I have no idea where this particular glitch comes from, or what the theory behind the newelm network is.
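One way to test the hypothesis is to compute the pure feed-forward part by hand, without W, and compare it to the network's prediction. A minimal sketch, assuming the setup from the question (no biases, tansig hidden layer, linear output); the two preprocessing lines are my assumption, added so that the hand computation is directly comparable. A plausible explanation, by the way: a plain matrix input is treated as concurrent samples rather than as a time series, so the layer delay state keeps its zero initial value and W is only ever multiplied by a zero vector; passing the data as a sequence (cell array, e.g. via con2seq) would make the recursion matter.
[x,t] = simplefit_dataset;
net = newelm(x,t,5);
net.layers{1}.transferFcn = 'tansig';
net.biasConnect = [0;0];                % no biases, as in the question
net.inputs{1}.processFcns = {};         % assumption: disable default pre-
net.outputs{2}.processFcns = {};        % and post-processing for comparison
net = train(net,x,t);
U = net.IW{1,1}; V = net.LW{2,1};       % input and output weights; W unused
Y_net    = net(x);                      % the network's own prediction
Y_manual = V * tansig(U*x);             % feed-forward computation without W
max(abs(Y_net - Y_manual))              % ~0: the recursion never contributes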

Neural network: weights and biases convergence

I've been reading up on a few topics regarding machine learning, neural networks and deep learning, one of which is this (in my opinion) excellent online book: http://neuralnetworksanddeeplearning.com/chap1.html
For the most part I've come to understand the workings of a neural network, but there is one question, based on the example on the website, which still bugs me:
I consider a three-layer neural network with an input layer, hidden layer and output layer. Say these layers have 2, 3 and 1 neurons respectively (although the exact numbers don't really matter).
Now an input is given: x1 and x2. Because the network is [2, 3, 1], the weights are randomly generated the first time, as a list containing a 2x3 and a 3x1 matrix. The biases are a list containing a 3x1 and a 1x1 matrix.
Now the part I don't get:
The formula calculated in the hidden layer:
w · x - b = 0
On every iteration the weights and biases are changed slightly, based on the derivative, in order to find a global optimum. If this is the case, why don't the weights and biases of every neuron converge to the same values?
I think I found the answer by doing some tests, as well as by finding some information on the internet. The answer lies in the random initial weights and biases. If all "neurons" were equal, they would all come to the same result, since their weights, biases and inputs would be equal. Having random weights allows for different answers:
x1 = 1
x2 = 2
x3 = 3
w1 = [0, 0, 1], giving w · x = 3
w2 = [3, 0, 0], giving w · x = 3
If anyone can confirm, please do so.
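To confirm the symmetry argument, here is a small hand-rolled sketch (plain MATLAB, no toolbox; all names and values are illustrative): if every hidden neuron starts from identical weights, each receives the identical gradient on every iteration, so the rows of the weight matrix never diverge.
x = [1; 2]; t = 1;                    % one training sample and target
W1 = 0.5*ones(3,2); b1 = zeros(3,1);  % three identical hidden neurons
w2 = 0.5*ones(1,3); b2 = 0;           % linear output neuron
lr = 0.1;                             % learning rate
for k = 1:100
    a1 = tanh(W1*x + b1);             % identical activation per neuron
    e  = (w2*a1 + b2) - t;            % output error
    d1 = (w2' * e) .* (1 - a1.^2);    % backprop through tanh
    W1 = W1 - lr*d1*x';  b1 = b1 - lr*d1;
    w2 = w2 - lr*e*a1';  b2 = b2 - lr*e;
end
disp(W1) % all rows are still identical: the neurons never diverged
Replace the 0.5*ones initializations with randn and the rows separate immediately.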

neural network classification in matlab

My input data is a 101×22 array (101 samples and 22 features).
These 101 samples should be divided into 3 groups (L1, L2 and L3).
I want to use a MATLAB neural network as the classifier.
What should the target array be?
What other classifiers do you recommend?
Thanks
The target data should be the classes of the input data. In your case you have 3 classes, so you can use a binary (1-of-N) coding.
More details about the input and target data can be found at the end of the documentation page linked here.
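For example, a sketch of such a 1-of-N coding for the 3 classes (assuming a vector labels holding the class index 1, 2 or 3 of each of the 101 samples; ind2vec is the toolbox helper for this):
labels  = randi(3,1,101);        % placeholder; use your real class labels
targets = full(ind2vec(labels)); % 3x101 matrix with a single 1 per column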
A simple example can be the following:
% this is the INPUT data that you have (features x samples)
X = randi([0 10],22,101);
% this is the TARGET data (one row per class, coded as above)
y = full(ind2vec(randi(3,1,101)));
% define hidden layer size
hiddenLayerSize = 10;
% create the neural net
my_net = patternnet(hiddenLayerSize);
% use Rprop training, then run it
my_net.trainFcn = 'trainrp';
[my_net,tr] = train(my_net,X,y);
Training opens the nntraintool window (screenshot omitted). Explore it; e.g. select Confusion to inspect the classification results.
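The same confusion matrix can also be produced programmatically (a usage sketch with the my_net trained above):
plotconfusion(y, my_net(X))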

Matlab Neural Network training doesn't yield good results

I'm trying to use a neural network for a classification problem, but the training produces very bad performance. The classification problem:
I have more than 300,000 training samples
Each input is a vector of 32 values (real values)
Each output is a vector of 32 values (0 or 1)
This is how I train the network:
DNN_SIZE = [1000, 1000];
% Initialize DNN
net = feedforwardnet(DNN_SIZE, 'traingda');
net.performParam.regularization = 0.2;
%Set activation functions
for i=1:length(DNN_SIZE)
net.layers{i}.transferFcn = 'poslin'; % poslin = rectified linear (ReLU)
end
net.layers{end}.transferFcn = 'logsig';
net = train(net, train_inputs, train_outputs);
Note: I have tried different values for DNN_SIZE, including larger and smaller values and more or fewer hidden layers, but it didn't make a difference.
Note 2: I have tried training the same network using a data set from Matlab's examples (simpleclass_dataset) and I still got bad performance.
The performance of the trained network is very bad: its output is basically 0.5 for every element of every output vector (while the target outputs during training are always 0 or 1). What am I doing wrong, and how can I fix it?
Thanks.

Matlab Multilayer Perceptron Question

I need to classify a dataset using a MATLAB MLP and show the classification.
The dataset looks like this (screenshot omitted).
What I have done so far is:
I have created a neural network containing a hidden layer (two neurons; maybe someone could give me some suggestions on how many neurons are suitable for my example) and an output layer (one neuron).
I have used several different learning methods, such as Delta-bar-Delta and backpropagation (both with and without momentum), and Levenberg-Marquardt.
This is the code I used in MATLAB (Levenberg-Marquardt example):
net = newff(minmax(Input),[2 1],{'logsig' 'logsig'},'trainlm');
net.trainParam.epochs = 10000;
net.trainParam.goal = 0;
net.trainParam.lr = 0.1; % note: trainlm adapts mu and does not use lr
[net,tr,outputs] = train(net,Input,Target);
The following shows the hidden neurons' classification boundaries that MATLAB generated on the data (plot omitted). I am a little bit confused, because the network should produce a nonlinear result, yet the two boundary lines in the plot appear to be linear.
The code for generating above plot is:
figure(1)
plotpv(Input,Target);
hold on
plotpc(net.IW{1},net.b{1});
hold off
I also need to plot the output function of the output neuron, but I am stuck on this step. Can anyone give me some suggestions?
Thanks in advance.
Regarding the number of neurons in the hidden layer: for such a small example, two are more than enough. The only way to know the optimum for sure is to test with different numbers. In this FAQ you can find a rule of thumb that may be useful: http://www.faqs.org/faqs/ai-faq/neural-nets/
For the output function, it is often useful to divide the computation into steps:
First, given the input vector x, the pre-activation of the hidden layer is y = W x + b, where W is the weight matrix from the input neurons to the hidden layer and b is the bias vector.
Second, apply the activation function g of the network to the resulting vector: z = g(y).
Finally, the output is the dot product h(z) = v · z + n, where v is the weight vector from the hidden layer to the output neuron and n is its bias (followed by the output activation, logsig in your case). If there is more than one output neuron, repeat this for each one.
I've never used the MATLAB MLP functions, so I don't know how to get the weights in this case, but I'm sure the network stores them somewhere. Edit: searching the documentation, I found these properties:
net.IW numLayers-by-numInputs cell array of input weight values
net.LW numLayers-by-numLayers cell array of layer weight values
net.b numLayers-by-1 cell array of bias values
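Putting these properties together, a minimal sketch of the three steps above (assuming the two-layer logsig/logsig net created with the old-style newff call earlier, which applies no input/output preprocessing):
p = Input(:,1);                 % one sample column, for illustration
W = net.IW{1,1}; b = net.b{1};  % input -> hidden weights and biases
v = net.LW{2,1}; n = net.b{2};  % hidden -> output weights and bias
z = logsig(W*p + b);            % steps 1 and 2: hidden activations
h = logsig(v*z + n)             % step 3, plus the output logsig
% net(p) should give (almost) exactly the same value as h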