I have a 2x147 matrix as input and a 3x147 matrix as output, and I trained the NN pattern recognition tool on the input and output matrices. I then generated a Simulink model of the trained NN, and now I want to test a new dataset of the same size (2x147).
I am getting the following errors:
Error in port widths or dimensions. Output port 1 of NN_Trail/Constant is a [2x147] matrix.
Error in port widths or dimensions. Input port 1 of NN_Trail/Pattern Recognition Neural Network is a one dimensional vector with 2 elements.
If I feed a constant with just 2 elements, Simulink runs for the specified time and gives the desired output. How can I get it to work with the data I've described?
My idea for the future is to connect the trained neural network to a simulated plant and detect abnormal data from the plant.
So your model has an input of dimension 2 and an output of dimension 3, and you have a calculated signal of 147 timesteps that you want to run through the inputs.
To import that signal into your model you can use a MATLAB timeseries object:
http://ch.mathworks.com/help/simulink/ug/importing-matlab-timeseries-data-to-a-root-level-input-port.html
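For example, a minimal sketch, assuming your 2x147 data sits in a workspace variable named inputs and the model uses a sample time of one second per step (both are assumptions; NN_Trail is the model name from your error messages):

t = (0:146)';                  % 147 time steps, 1 s apart (assumed sample time)
ts = timeseries(inputs', t);   % timeseries wants time along rows: 147x2
% Point the model's root-level input port at the timeseries variable.
set_param('NN_Trail', 'LoadExternalInput', 'on', 'ExternalInput', 'ts');

With that in place the Inport feeds a 2-element signal at each of the 147 steps, which matches the one-dimensional 2-element input the network block expects.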
My goal is to train an autoencoder in MATLAB. I am using the Deep Learning Toolbox. I am new to both autoencoders and MATLAB, so please bear with me if the question is trivial.
My input dataset is a list of 2000 time series, each with 501 time components. It is stored in an array called inputdata, which has dimensions 2000x501.
The autoencoder should reproduce the time series. This means the output should be 2000 time series of 501 components each. So my understanding is that there should be 501 input nodes, and the same should be true for the output nodes.
However, if I do:
hiddenSize = 100;
autoenc = trainAutoencoder(y_sorted,hiddenSize);
to train an autoencoder with 100 nodes in the hidden layer, I think the Autoencoder automatically chooses to have 2000 input nodes. What is the correct way of training this Autoencoder?
Hi, I haven't trained an autoencoder myself with the Deep Learning Toolbox, but as far as I can read here (https://www.mathworks.com/help/deeplearning/ref/trainautoencoder.html?s_tid=doc_ta), your input matrix should have the samples as columns and the features/values of your time series as rows. You can do this easily by transposing your input matrix. In MATLAB this is done by:
inputdata = inputdata.'
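A minimal sketch of the full flow under that assumption (using the inputdata name from the question; predict is the Autoencoder's reconstruction method):

inputdata = inputdata.';                        % now 501x2000: one sample per column
hiddenSize = 100;
autoenc = trainAutoencoder(inputdata, hiddenSize);
reconstructed = predict(autoenc, inputdata);    % 501x2000 reconstruction

With the transposed layout the network gets 501 input nodes and 501 output nodes, one per time component, as you expected.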
This is a very simple question as I am new to the concepts.
I have a 4-4-1 neural network that I am running on 16x4 binary data to predict a 16x1 column of outputs.
I have utilized random weights and biases to generate a rough predicted output vector. I then calculate a vector of errors (actual-output) which is 16x1.
When back propagating, I am trying to update my weights. But how do I update the single value of a weight if my error is a 16x1 list of errors? That is, how do I implement:
weight = weight + learning_rate * error * input
if 'error' is 16x1 and input is 16x4?
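One common resolution, as a hedged sketch rather than the one true answer: sum (or average) the per-sample contributions over the batch, which collapses the 16x1 error and the 16x4 input into a single 4x1 weight update:

% X: 16x4 batch of inputs, err: 16x1 errors, W: 4x1 weights (assumed shapes)
grad = X' * err / size(X, 1);     % 4x1: average update over the 16 samples
W = W + learning_rate * grad;     % same rule as above, applied once per batch

Equivalently, you can loop over the 16 samples and apply the scalar rule once per sample (stochastic updates); the matrix form just does the whole batch at once.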
I want to create a neural network with three 2D matrices as inputs; the output is one 2D matrix. The three inputs are:
1. A 2D matrix containing (X, Y) coordinates from a device
2. A 2D matrix containing (X, Y) coordinates from another, different device
3. A 2D matrix containing the true, exact (X, Y) coordinates that I have already measured (I don't know whether these exact coordinates belong with the inputs or not)
Note that each input has its own error, and I want the neural network to minimize that error and choose the best result based on the true exact (X, Y).
Notice that I am working on object tracking: I extract (x, y) coordinates from a camera, and the other device provides the same kind of data. For example, I will simulate the coordinates as follows:
{ (1,2), (1,3), (1,4), (1,5), (1,6), ... }
and so on.
For sure the output is one 2D matrix: the best, or true exact, (x, y). I'm a beginner, and I want to understand how to create this network with these different inputs and how to choose the best training method to get the best results.
Thanks in advance.
It sounds like what you want is an HxWx2 input where the first channel (depthwise layer) is your first input and the second channel is your second input. Your "true exact" coordinates would be the target that the net's output is compared to, rather than being an input.
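For instance, a one-line sketch, assuming both device matrices have the same HxW size (deviceA and deviceB are illustrative names):

X = cat(3, deviceA, deviceB);   % stack along the channel dimension: HxWx2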
Note that neural nets don't really handle regression (real-valued outputs) very well - you may get better results dividing your coordinate range into buckets and then treating it as a classification problem instead (use softmax loss vs regression mean-squared error loss).
Expanding on the regression vs classification point:
A regression problem is one where you want the net to output a real value, such as a coordinate value in the range 0-100. A classification problem is one where you want the net to output a set of probabilities that your input belongs to a given class it was trained on (e.g. you train a net on images belonging to the classes "cat", "dog" and "rabbit").
It turns out that modern neural nets are much better at classification than regression, because the way they work is basically by subdividing the N-dimensional input space into sub-regions corresponding to the outputs they are being trained to produce. What they are naturally doing is classifying.
The obvious way to turn a regression problem into a classification problem, which may work better, is to divide your desired output range into sub-ranges (aka buckets) which you treat as classes. E.g. instead of training your net to output a single (or multiple) value in the range 0-100, train it to output class probabilities representing, for example, each of 10 separate sub-ranges (classes): 0-9, 10-19, 20-29, etc.
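A hedged sketch of that bucketing in MATLAB (the 0-100 range and 10 buckets come from the example above; y is an assumed vector of coordinate values):

numBuckets = 10;
edges = linspace(0, 100, numBuckets + 1);
classIdx = discretize(y, edges);        % bucket index 1..10 for each sample
targets = full(ind2vec(classIdx'));     % 10xN one-hot matrix for training

discretize returns the sub-range each value falls into, and ind2vec turns those indices into the one-hot target matrix a classification net trains against.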
I am trying to train a linear SVM on data with 100 dimensions. I have 80 instances for training. I train the SVM using the fitcsvm function in MATLAB and check it by calling predict on the training data. When I classify the training data with the SVM, all the data points are classified into a single class.
SVM = fitcsvm(votes,b,'ClassNames',unique(b)');
predict(SVM,votes);
This outputs all 0's, which corresponds to class 0. b contains 1's and 0's indicating the class to which each data point belongs.
The data used, i.e. the matrix votes and the vector b, are given at the following link
Make sure you use a non-linear kernel, such as a Gaussian (RBF) kernel, and that the kernel's parameters are tuned. Just as a starting point:
SVM = fitcsvm(votes,b,'KernelFunction','RBF', 'KernelScale','auto');
bp = predict(SVM,votes);
That said, you should split your set into a training set and a testing set; otherwise you risk overfitting.
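For example, a simple holdout split, as a sketch (assuming votes is the 80x100 matrix and b the 80x1 label vector from the question):

cv = cvpartition(b, 'HoldOut', 0.3);       % stratified 70/30 train/test split
SVM = fitcsvm(votes(training(cv),:), b(training(cv)), ...
    'KernelFunction', 'RBF', 'KernelScale', 'auto');
testPred = predict(SVM, votes(test(cv),:));
testAcc = mean(testPred == b(test(cv)));   % accuracy on held-out data

Evaluating on the held-out 30% gives a much more honest picture than classifying the data the SVM was trained on.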
I have to use a NAR network to train a time series for my project. To get an idea of how the time-series tool (ntstool) works in MATLAB, I used the ntstool GUI with a dataset containing 427 timesteps of one element. While training, I used a neural network with 10 hidden neurons and a delay value of 5.
Now I have the following questions:
What does the **delay value (d)** in the GUI mean? Does it mean that, while training, the network assumes each timestep's value depends on the values of the last d timesteps?
How do I predict the values at future timesteps in ntstool?
The delay value means that the neural network's inputs are the current input value plus the last N delayed values of the input signal; in your case N = 5. Hope this helps.
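On predicting future timesteps: a hedged command-line sketch of the same workflow (narnet, preparets, and closeloop are the documented functions behind ntstool; the series name and the 20-step horizon are assumptions):

T = num2cell(mySeries);                 % mySeries: 1x427 row vector (assumed)
net = narnet(1:5, 10);                  % feedback delays 1:5, 10 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);
net = train(net, Xs, Ts, Xi, Ai);
[Y, Xf, Af] = net(Xs, Xi, Ai);               % simulate to get final states
[netc, Xic, Aic] = closeloop(net, Xf, Af);   % feed predictions back as inputs
yFuture = netc(cell(0, 20), Xic, Aic);       % predict 20 steps past the data

Closing the loop replaces the true past values with the network's own predictions, which is what lets it run beyond the end of the recorded series.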