I am using the Deep Learning Toolbox and have imported an ONNX model. Now I have a MATLAB variable net which is a 1x1 LayerGraph. Here is the information:
net =
  LayerGraph with properties:
         Layers: [33×1 nnet.cnn.layer.Layer]
    Connections: [35×2 table]
     InputNames: {'Input_0'}
    OutputNames: {'RegressionLayer_Gemm_28_Flatten14RegressionLayer_Gemm_28'}
My question is: how can I access the weights of this network and store them in matrices? The data structure I am thinking of is a 1xn cell array, where n is the number of layers, and each cell holds an a-by-b matrix, where a and b are the numbers of neurons in the two layers the weights connect.
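A minimal sketch of one way to do this, assuming the imported layers expose their learnable parameters through a Weights property (as fully connected and convolution layers in the Deep Learning Toolbox do); layers without weights, such as ReLU or flatten layers, are skipped, so n will be the number of parameterized layers rather than 33:
layers = net.Layers;          % 33x1 array of layer objects
W = {};                       % 1xn cell array of weight matrices
for i = 1:numel(layers)
    if isprop(layers(i), 'Weights') && ~isempty(layers(i).Weights)
        W{end+1} = layers(i).Weights;   %#ok<SAGROW> collect this layer's weight matrix
    end
end
The Bias vectors can be collected the same way via the layers' Bias property.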
I have a task where I need to train a machine learning model to predict a set of outputs from multiple inputs. My inputs are 1000 iterations of a set of 3x1 vectors, a set of 3x3 covariance matrices, and a set of scalars, while my output is just a set of scalars. I cannot use the Regression Learner app because it requires the inputs to have the same dimensions; any idea on how to unify them?
One possible way to solve this is to flatten the covariance matrix into a vector. Once you have done that, you can construct a 1000xN matrix, where 1000 is the number of samples in your dataset and N is the number of features. For example, if your features consist of a 3x1 vector, a 3x3 covariance matrix, and, let's say, 5 other scalars, then N = 3 + 3*3 + 5 = 17. You then use this matrix to train an arbitrary model, such as a linear regressor or more advanced models like a regression tree.
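A rough sketch under assumed variable names: suppose vecs is 1000x3 (one 3x1 vector per row), covs is 3x3x1000, and scal is 1000x5. The design matrix could then be built like this:
N = size(vecs, 1);                % number of samples, here 1000
X = zeros(N, 3 + 9 + 5);          % preallocate the 1000x17 feature matrix
for i = 1:N
    X(i, :) = [vecs(i, :), reshape(covs(:, :, i), 1, 9), scal(i, :)];
end
X can then be passed, together with the vector of output scalars as the response, to any regression function such as fitlm or fitrtree.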
When training machine learning models it is important to understand your data and exploit its structure to help the learning algorithm. For example, we could use the fact that a covariance matrix is symmetric and positive semi-definite, and thus lives in a closed convex cone. Symmetry implies that it lives in a subspace of the set of all 3x3 matrices: the space of symmetric 3x3 matrices has dimension only 6 (the 3 diagonal entries plus the 3 entries above the diagonal). You can use that knowledge to reduce redundancy in your data.
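Concretely, instead of all 9 entries you could keep only the 6 unique entries of each covariance matrix (the upper triangle including the diagonal); a sketch:
mask = triu(true(3));     % logical mask selecting the upper triangle incl. diagonal
c6 = C(mask).';           % 1x6 feature vector for one symmetric 3x3 matrix C
With this, the covariance contribution to each row of the feature matrix drops from 9 to 6 entries, i.e. N = 3 + 6 + 5 = 14 in the example above.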
I want to use the RBM pretraining weights from the code accompanying Hinton's paper as the weights of MATLAB's native feedforwardnet.
Can anyone help me set or arrange the pre-trained weights for feedforwardnet?
For instance, I used the Hinton code from http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html and want to use the pre-trained weights in a MATLAB feedforwardnet:
W = hintonRBMpretrained;                             % pre-trained weights from Hinton's code
net = feedforwardnet([700 300 200 30 200 300 700]);
net = setwb(net, W);                                 % setwb returns the modified network
How do I set up or arrange W so that it matches the feedforwardnet structure? I know how to use a single weight vector, but I am afraid that the ordering of the weights will be incorrect.
The MATLAB feedforwardnet function returns a Neural Network object with the properties described in the documentation. The workflow for creating a neural network with pre-trained weights is as follows:
Load data
Create the network
Configure the network
Initialize the weights and biases
Train the network
The steps 1, 2, 3, and 5 are exactly as they would be when creating a neural network from scratch. Let's look at a simple example:
% 1. Load data
load fisheriris
meas = meas.';                               % 4x150: features x samples
targets = dummyvar(categorical(species)).';  % 3x150: one-hot encoded class labels
% 2. Create network
net = feedforwardnet([16, 16]);
% 3. Configure the network
net = configure(net, meas, targets);         % configure returns the configured network
Now, we have a neural network net with 4 inputs (sepal and petal length and width), and 3 outputs ('setosa', 'versicolor', and 'virginica'). We have two hidden layers with 16 nodes each. The weights are stored in the two fields net.IW and net.LW, where IW are the input weights, and LW are the layer weights:
>> net.IW
ans =
3×1 cell array
[16×4 double]
[]
[]
>> net.LW
ans =
3×3 cell array
[] [] []
[16×16 double] [] []
[] [3×16 double] []
This is confusing at first, but makes sense: each row in both these cell arrays corresponds to one of the layers we have.
In the IW array, we have the weights between the input and each of the layers. Obviously, we only have weights between the input and the first layer. The shape of this weight matrix is 16x4, as we have 4 inputs and 16 hidden units.
In the LW array, we have the weights from each layer (the rows) to each layer (the columns). In our case, we have a 16x16 weight matrix from the first to the second layer, and a 3x16 weight matrix from the second to the third layer. Makes perfect sense, right?
With that, we know how to initialize the network with the weights we got from the RBM code:
net.IW{1,1} = weights_input;    % 16x4:  input -> first hidden layer
net.LW{2,1} = weights_hidden;   % 16x16: first -> second hidden layer
net.LW{3,2} = weights_output;   % 3x16:  second hidden layer -> output
The biases are stored in net.b in the same fashion, one cell per layer, and can be set analogously. With that, you can continue with step 5, i.e. training the network in a supervised fashion.
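For completeness, a sketch of the bias initialization and the training step; the bias_* names are placeholders for whatever bias vectors your RBM code returns:
net.b{1} = bias_hidden1;            % 16x1 biases of the first hidden layer
net.b{2} = bias_hidden2;            % 16x1 biases of the second hidden layer
net.b{3} = bias_output;             % 3x1  biases of the output layer
% 5. Train (fine-tune) the pre-initialized network
net = train(net, meas, targets);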
I am trying to train a linear SVM on data which has 100 dimensions. I have 80 instances for training. I train the SVM using the fitcsvm function in MATLAB and check it using predict on the training data. When I classify the training data with the SVM, all the data points are assigned to only one class.
SVM = fitcsvm(votes, b, 'ClassNames', unique(b)');
labels = predict(SVM, votes);
This outputs all 0's, which corresponds to class 0. b contains 1's and 0's indicating the class to which each data point belongs.
The data used, i.e. the matrix votes and the vector b, are given at the following link
Make sure you use a non-linear kernel, such as a Gaussian (RBF) kernel, and that its parameters are tuned. Just as a starting point:
SVM = fitcsvm(votes,b,'KernelFunction','RBF', 'KernelScale','auto');
bp = predict(SVM,votes);
That said, you should split your data into a training set and a testing set; otherwise you risk overfitting.
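A sketch of such a split using cvpartition (assuming votes is the 80x100 data matrix and b the 80x1 label vector):
cv = cvpartition(b, 'HoldOut', 0.3);    % stratified 70/30 split on the labels
idxTrain = training(cv);
idxTest = test(cv);
SVM = fitcsvm(votes(idxTrain,:), b(idxTrain), 'KernelFunction','RBF', 'KernelScale','auto');
pred = predict(SVM, votes(idxTest,:));
testError = mean(pred ~= b(idxTest))    % misclassification rate on unseen data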
I'm a complete newbie to neural networks. I generated a NN in MATLAB. Now I need to know the exact structure of this NN, because I need to implement it in Java (static connections and weights, no learning). Can you explain how the neurons are connected and what math operations are performed in each element?
The NN parameters are as follows (taken from MATLAB):
iw{1,1} - Weights to layer 1 from input 1
[2.8574 -1.9207;
1.7582 -1.2549;
-4.5925 0.23236;
12.0861 12.3701;
2.503 -1.9321;
-2.1422 2.6928]
lw{2,1} - Weights to layer 2 from layer 1
[-0.51977 5.3993 3.4349 5.2863 3.1976 -0.67102]
b{1} - Bias to layer 1
[-3.2811;
-6.956;
-3.0943;
11.1103;
0.14842;
-3.3705]
b{2} - Bias to layer 2
[1.4657]
Transfer function TANSIG
I'd greatly appreciate your help.
You have a NN with 2 inputs, then a hidden layer of 6 neurons and an output layer of 1 neuron.
Each neuron in a layer takes all outputs of the previous layer, multiplies each by a weight, sums the products, and offsets the result by a bias.
The numbers you show are exactly those weights and biases.
For example, neuron 1 of the hidden layer computes hidden1 = 2.8574*in1 - 1.9207*in2 - 3.2811, and then applies the transfer function: hidden1 = tansig(hidden1) (tansig is mathematically the same as tanh).
The output neuron then computes out = -0.51977*hidden1 + 5.3993*hidden2 + ... - 0.67102*hidden6 + 1.4657, with no transfer function applied if the output layer is linear (MATLAB's default for fitting networks).
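Putting it together, a sketch of the full forward pass in MATLAB, to be translated one-to-one to Java; it assumes a linear output layer as in the formula above, and ignores any input/output normalization the MATLAB network might apply:
x = [0.5; -1.2];                       % example 2x1 input vector
hidden = tanh(iw{1,1} * x + b{1});     % 6x1 hidden activations; tansig(n) == tanh(n)
out = lw{2,1} * hidden + b{2};         % 1x1 network output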
I want to feed a cell array of two-dimensional inputs to a neural network in MATLAB. Each input is a graph represented as a two-dimensional matrix (n*n). How can I do that?
I guess that your neural network expects one-dimensional input vectors, which means you will have to flatten each matrix into a vector. In your case, this vector would be n²-sized.
Just string your whole input out as a one-dimensional vector:
input_vec = reshape(input_mat, [], 1); % flatten the n-by-n matrix column-wise into an n^2-by-1 vector
So rather than feeding a 10x10 (or whatever) matrix into your network, you would feed a 100x1 vector. Then train your network on these vectors. This approach is commonly used in textbook character recognition networks; for example, read the section titled The MNIST Data in this tutorial.