MATLAB neural network weight and bias initialization

I'm creating a neural network in one part of my program and using its weights and biases for another neural network in another part, so I have the following code:
net_b = patternnet(10);
net_b = configure(net_b,INPUT,Target);
Weights = getwb(net_b);
I will use these weights and biases to create another neural network, as below:
net = patternnet(10);
net = configure(net,INPUT,Target);
net = setwb(net,Weights);
Everything was fine up to this stage, but then I wanted to disable pre-processing in the neural network (because I had already done it earlier in the program, before feeding the data to the network), so I used these commands:
net.inputs{1}.processFcns={};
net.outputs{2}.processFcns={};
After running the above two commands, when I checked the weights of the input layer and the biases of the output layer, everything had been removed and I got empty matrices, while everything in the hidden layer remained normal. How can I do this without losing my weights and biases?
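One possible workaround (a sketch, not verified against the toolbox): clear the processing functions before calling configure, so the network dimensions are fixed with no pre-processing in place, and only then copy the weights:

```matlab
% Sketch: clear processFcns *before* configure, then copy the weights.
% Changing processFcns after configure can reinitialize the affected
% weight matrices, so this ordering avoids that (untested assumption).
net = patternnet(10);
net.inputs{1}.processFcns  = {};      % disable input pre-processing
net.outputs{2}.processFcns = {};      % disable output post-processing
net = configure(net, INPUT, Target);  % dimensions now match the raw data
net = setwb(net, Weights);            % weights copied from the first network
```

Note that the weight vector from the first network only fits if both networks end up with the same layer sizes; processing functions such as removeconstantrows can change the effective input size, in which case getwb/setwb dimensions will not match.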

Related

CNN feed forward or back propagation model

Is a convolutional neural network (CNN) a feed-forward model or a back-propagation model? I got confused comparing Dr. Yann LeCun's blog with the Wikipedia definition of a CNN.
A convolutional neural net is a structured neural net where the first several layers are sparsely connected in order to process information (usually visual).
A feed forward network is defined as having no cycles contained within it. If it has cycles, it is a recurrent neural network. For example, imagine a three layer net where layer 1 is the input layer and layer 3 the output layer. A feed forward network would be structured by layer 1 taking inputs, feeding them to layer 2, layer 2 feeding to layer 3, and layer 3 producing the outputs. A recurrent neural net would take inputs at layer 1 and feed them to layer 2, but then layer 2 might feed to both layer 1 and layer 3. Since the "lower" layer feeds its outputs into a "higher" layer, this creates a cycle inside the neural net.
Back propagation, however, is the method by which a neural net is trained. It doesn't have much to do with the structure of the net, but rather implies how input weights are updated.
When training a feed forward net, the info is passed into the net, and the resulting classification is compared to the known training sample. If the net's classification is incorrect, the weights are adjusted backward through the net in the direction that would give it the correct classification. This is the backward propagation portion of the training.
So a CNN is a feed-forward network, but is trained through back-propagation.
In short:
A CNN is a feed-forward neural network.
Back propagation is a technique used for training a neural network.
Similar to tswei's answer but perhaps more concise.
A convolutional neural network is a feed-forward NN architecture that uses multiple sets of weights (filters) that "slide" or convolve across the input space to analyze relationships between neighboring pixels, as opposed to individual node activations.
Backward propagation is a method to train neural networks by "back propagating" the error from the output layer to the input layer (including hidden layers).

How do I take a trained neural network and implement it in another system?

I have trained a feedforward neural network in Matlab. Now I have to implement this neural network in C (or simulate the model in Matlab using the mathematical equations, without calling the toolbox functions directly). How do I do that? I know that I have to take the weights, biases and activation functions. What else is required?
There is no point in representing it as a mathematical function because it won't save you any computations.
Indeed, all you need is the weights, biases, activation functions and your architecture. Assuming it is a simple feedforward network as you said, you need to implement some kind of matrix multiplication and addition in C, and you'll also need to implement the activation function. After that, your feed-forward NN is ready to go. If the C code will not be used for training, there is no need to implement the backpropagation algorithm in C.
A feedforward layer would be implemented as follows:
Output = Activation_function(Input * weights + bias)
Where,
Input: (1 x number_of_input_parameters_for_this_layer)
Weights: (number_of_input_parameters_for_this_layer x number_of_neurons_for_this_layer)
Bias: (1 x number_of_neurons_for_this_layer)
Output: (1 x number_of_neurons_for_this_layer)
The output of a layer is the input to the next layer.
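Using the row-vector convention above, one layer of the forward pass might be sketched in MATLAB as follows (the sizes and values here are hypothetical examples, not from the question):

```matlab
% Sketch of one feed-forward layer: Output = f(Input * Weights + Bias).
input   = [0.5 -1.2 0.3];   % 1 x 3: example input with 3 parameters
weights = rand(3, 4);       % 3 x 4: hypothetical 3-input, 4-neuron layer
bias    = rand(1, 4);       % 1 x 4: one bias per neuron
output  = tanh(input * weights + bias);   % tanh plays the role of tansig
% 'output' (1 x 4) becomes the input to the next layer.
```

The same two lines of arithmetic (matrix multiply, add, apply activation) are all that needs porting to C, repeated once per layer.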
After some days of searching, I found the following webpage very useful: http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/
That website shows a picture of a simple feedforward neural network, which the description below refers to.
In this figure, the circles denote the inputs to the network. The circles labeled “+1” are called bias units, and correspond to the intercept term. The leftmost layer of the network is called the input layer, and the rightmost layer the output layer (which, in this example, has only one node). The middle layer of nodes is called the hidden layer, because its values are not observed in the training set. In this example, the neural network has 3 input units (not counting the bias unit), 3 hidden units, and 1 output unit.
The mathematical equations representing this feedforward network are
a1(2) = f(W11(1) x1 + W12(1) x2 + W13(1) x3 + b1(1))
a2(2) = f(W21(1) x1 + W22(1) x2 + W23(1) x3 + b2(1))
a3(2) = f(W31(1) x1 + W32(1) x2 + W33(1) x3 + b3(1))
hW,b(x) = a1(3) = f(W11(2) a1(2) + W12(2) a2(2) + W13(2) a3(2) + b1(2))
where f is the activation (transfer) function and ai(l) denotes the activation of unit i in layer l.
This neural network has parameters (W,b) = (W(1),b(1),W(2),b(2)), where we write Wij(l) to denote the parameter (or weight) associated with the connection between unit j in layer l and unit i in layer l+1. (Note the order of the indices.) Also, bi(l) is the bias associated with unit i in layer l+1.
So, from the trained model, as Mido mentioned in his answer, we have to take the input weight matrix, which is W(1), the layer weight matrix, which is W(2), the biases, the hidden-layer transfer function and the output-layer transfer function. After this, use the above equations to compute the output hW,b(x). A popular choice for a regression problem is a tan-sigmoid transfer function in the hidden layer and a linear transfer function in the output layer.
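Putting this together for a MATLAB net with a tan-sigmoid hidden layer and a linear output layer, the manual simulation might be sketched as follows (assuming any mapminmax pre/post-processing is disabled or applied separately; net and x stand for an already-trained network and a column input vector):

```matlab
% Sketch: manually reproduce net(x) for a one-hidden-layer network.
W1 = net.IW{1,1};  b1 = net.b{1};   % input  -> hidden parameters
W2 = net.LW{2,1};  b2 = net.b{2};   % hidden -> output parameters
a1 = tanh(W1*x + b1);   % hidden layer; tansig is equivalent to tanh
y  = W2*a1 + b2;        % linear (purelin) output layer
```

If the processing functions are still configured, y will only match net(x) after applying the same mapminmax transforms to x and their inverse to y.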
For those who use Matlab, these links are highly useful:
try to simulate neural network in Matlab by myself
Neural network in MATLAB
Programming a Basic Neural Network from scratch in MATLAB

How to get the exact weights and threshold used by a Neural net (Matlab Neural Network)

After training and testing a neural net on Matlab, I got a satisfactory Net-output.
The problem I am facing now is how to get the weights/biases distributed by the network, as well as the threshold, as I intend to use them in a different program.
I just need a guide on how to retrieve these values from the network
Thanks for your suggestions.
The weights are saved in the network class. The values are contained in
net.IW
net.LW
net.b
where net.IW contains the input weight values, net.LW contains the layer weight values and net.b contains the bias values.
To help you with the implementation of the neural network, you could use genFunction to create a MATLAB function for your neural network.
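As a sketch (net stands for an already-trained network; the function name below is arbitrary):

```matlab
% Inspect the stored parameters of a trained network 'net':
inputWeights = net.IW;   % cell array: weights from inputs to layers
layerWeights = net.LW;   % cell array: weights between layers
biases       = net.b;    % cell array: bias vector per layer

% Generate a standalone MATLAB function that reproduces the network;
% it also serves as a readable reference when porting to another language.
genFunction(net, 'myNeuralNetworkFunction', 'MatrixOnly', 'yes');
y = myNeuralNetworkFunction(x);   % same result as net(x)
```

The 'MatrixOnly' option makes the generated function use only matrix operations, with all weights written out as literals, which is convenient when translating to C.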

Disconnect some input-hidden layer connections in MLP neural network in MATLAB

I am using the Neural Network (NN) wizard in MATLAB for some implementations. I can also use the code-based version of the NN in MATLAB, which is available after constructing the NN with the wizard.
When we build an NN with MATLAB, the input-hidden layer is fully connected. For example, if you have 4 inputs in the input layer and 2 neurons in the hidden layer, there is a fully connected relation between the 4 inputs and the 2 neurons in the hidden layer. I want to manipulate these connections, for example, disconnect the 3rd input's connection to the 1st neuron and the 2nd input's connection to the 2nd neuron in the hidden layer. How is this possible in MATLAB?
Thank you in advance for any guidance.
I read the documentation of NN in MATLAB completely. With the following commands we can access each connection and change its weights and biases, so that the desired connection is taken out of duty.
For a NN with one hidden layer:
Network.IW{1,1} = the matrix of input weights to the hidden layer.
Network.LW{2,1} = the matrix of hidden-layer weights to the output layer.
Network.b{1,1} = the bias vector of the hidden layer.
Network.b{2,1} = the bias vector of the output layer.
Then we can set the desired connections (both weights and biases) between the input and hidden layer to 0. With this type of configuration, we can re-construct the neural network infrastructure.
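As a sketch, disconnecting the 3rd input from the 1st hidden neuron and the 2nd input from the 2nd hidden neuron from the question would look like this (Network stands for an already-configured net):

```matlab
% Sketch: zero out selected input-to-hidden connections.
% In IW{1,1}, rows index hidden neurons and columns index inputs.
Network.IW{1,1}(1,3) = 0;   % 3rd input -> 1st hidden neuron: off
Network.IW{1,1}(2,2) = 0;   % 2nd input -> 2nd hidden neuron: off
```

Note that subsequent training may make these weights non-zero again unless the zeros are re-applied after each training run.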
If you want to randomize this switching of some nodes on and off, you can also use the dropoutLayer in Matlab. This works best for deep NNs.
https://in.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.dropoutlayer.html

Matlab neural network simulate up to hidden layer

I have trained a 3-layer (input, hidden and output) feedforward neural network in Matlab. After training, I would like to simulate the trained network with an input test vector and obtain the response of the neurons of the hidden layer (not the final output layer). How can I go about doing this?
Additionally, after training a neural network, is it possible to "cut away" the final output layer and make the current hidden layer as the new output layer (for any future use)?
Extra-info: I'm building an autoencoder network.
The trained weights of the network are available in the net.LW property (with the input weights in net.IW and the biases in net.b). You can use these to compute the hidden-layer outputs.
From Matlab Documentation
nnproperty.net_LW
Neural network LW property.
NET.LW
This property defines the weight matrices of weights going to layers
from other layers. It is always an Nl x Nl cell array, where Nl is the
number of network layers (net.numLayers).
The weight matrix for the weight going to the ith layer from the jth
layer (or a null matrix []) is located at net.LW{i,j} if
net.layerConnect(i,j) is 1 (or 0).
The weight matrix has as many rows as the size of the layer it goes to
(net.layers{i}.size). It has as many columns as the product of the size
of the layer it comes from with the number of delays associated with the
weight:
net.layers{j}.size * length(net.layerWeights{i,j}.delays)
In addition to using the input and layer weights and biases, you can add an output connection from the desired layer (after training the network). I found this possible and easy, but I did not examine its correctness.
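A sketch of extracting the hidden-layer response directly from the stored weights (assuming a tansig hidden layer and input processing already applied or disabled; net and xTest stand for the trained network and a column test vector):

```matlab
% Sketch: get hidden-layer activations for a test vector xTest.
W1 = net.IW{1,1};   % input -> hidden weights
b1 = net.b{1};      % hidden-layer biases
hidden = tanh(W1*xTest + b1);   % tansig hidden-layer output
% For an autoencoder, 'hidden' is the encoding of xTest; the decoder
% parameters (net.LW{2,1}, net.b{2}) can simply be ignored.
```

This effectively "cuts away" the output layer without modifying the network object at all.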