Matlab neural networks - bad results

I've got a problem implementing a multilayer perceptron with the Matlab Neural Network Toolbox.
I'm trying to implement a neural network that will recognize a single character stored as a binary image (size 40x50).
The image is transformed into a binary vector. The output is encoded in 6 bits. I use the simple newff function in the following way (with 30 neurons in the hidden layer):
net = newff(P, [30, 6], {'tansig' 'tansig'}, 'traingd', 'learngdm', 'mse');
Then I train my network with a dozen characters in 3 different fonts, with the following training parameters:
net.trainParam.epochs=1000000;
net.trainParam.goal = 0.00001;
net.trainParam.lr = 0.01;
After training, the net recognizes all characters from the training set correctly, but...
it cannot recognize more than a couple of characters from other fonts.
How could I improve that simple network?

You can try adding random elastic distortions to your training set (in order to expand it and make the network generalize better).
You can see the details in this nice article from Microsoft Research:
http://research.microsoft.com/pubs/68920/icdar03.pdf
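Here is a minimal sketch of that idea in Matlab, assuming the character is stored in a binary matrix img (a hypothetical name) and that the Image Processing Toolbox is available; the alpha and sigma values are illustrative, not taken from the paper:
% Random elastic distortion (after Simard et al.): smooth a random
% displacement field, then resample the image through it.
img = rand(40, 50) > 0.5;                          % placeholder binary character
alpha = 8;                                         % displacement magnitude (illustrative)
sigma = 4;                                         % smoothing of the random field (illustrative)
[h, w] = size(img);
dx = imgaussfilt(2*rand(h, w) - 1, sigma) * alpha; % smoothed random x-displacements
dy = imgaussfilt(2*rand(h, w) - 1, sigma) * alpha; % smoothed random y-displacements
[X, Y] = meshgrid(1:w, 1:h);
distorted = interp2(X, Y, double(img), X + dx, Y + dy, 'linear', 0);
distorted = distorted > 0.5;                       % back to a binary image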

You have a very large number of input variables (2,000, if I understand your description correctly). My first suggestion is to reduce this number if possible. Some possible techniques include subsampling the input variables or calculating informative features (such as row and column totals, which would reduce the input vector to 90 = 40 + 50).
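For instance, a minimal sketch of the row/column-total features, assuming the 40x50 character is in a binary matrix img (a hypothetical name):
img = rand(40, 50) > 0.5;        % placeholder binary character image
rowSums = sum(img, 2);           % 40x1: ink count per row
colSums = sum(img, 1).';         % 50x1: ink count per column
features = [rowSums; colSums];   % 90x1 input vector instead of 2000x1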
Also, your output is coded as 6 bits, which provides 64 possible combined values, so I assume that you are using these to represent the 26 letters. If so, then you may fare better with another output representation. Consider that various letters which look nothing alike will, for instance, share the value of 1 on bit 1, complicating the mapping from inputs to outputs. An output representation with 1 bit for each class would simplify things.

You could use patternnet instead of newff; it creates a network more suitable for pattern recognition. As the target, use a 26-element vector with a 1 at the right letter's position (0 elsewhere). The output of the recognition will be a vector of 26 real values between 0 and 1, where the recognized letter is the one with the highest value.
Make sure to use data from all fonts for the training.
Give all the data sets as input; train will automatically divide them into training, validation and test sets according to the specified ratios:
net.divideParam.trainRatio = .70;
net.divideParam.valRatio = .15;
net.divideParam.testRatio = .15;
(choose your own ratios).
Then test using only the test set; you can find its indices in tr after training:
[net, tr] = train(net,inputs,targets);
tr.testInd
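Putting it together, a minimal sketch of this workflow with synthetic placeholder data (the variable names and sizes are mine, not from the question):
N = 120;                                % placeholder number of samples
inputs = double(rand(2000, N) > 0.5);   % one 40x50 image vector per column
labels = randi(26, 1, N);               % placeholder class indices 1..26
targets = zeros(26, N);                 % 26xN one-hot target matrix
targets(sub2ind(size(targets), labels, 1:N)) = 1;

net = patternnet(30);                   % 30 hidden neurons
net.divideParam.trainRatio = .70;
net.divideParam.valRatio = .15;
net.divideParam.testRatio = .15;
[net, tr] = train(net, inputs, targets);

testOut = net(inputs(:, tr.testInd));   % evaluate on the test set only
[~, predicted] = max(testOut, [], 1);   % highest output = recognized letter
accuracy = mean(predicted == labels(tr.testInd));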

Related

Matlab Testing a trained neural network

I have a training sample set and the corresponding targets. The targets are 7 different classes.
net = patternnet(10);
[net,tr] = train(net,samples.',targets.');
%samples is 200x7 and targets is 200x1
test = net([0;1;1;1;1;0;1])
Now when I execute the last line, I would expect test to hold the value of the class that the net classifies the input [0;1;1;1;1;0;1] as. But all I get is a weird decimal value, not one of the values entered in "targets" (which are 1, 2, 3, 4, 5, 6 or 7). What am I doing wrong?
The patternnet documentation says that
The target data for pattern recognition networks should consist of vectors of all zero values except for a 1 in element i, where i is the class they are to represent.
Thus, targets should be an array of size 200x7, where 7 is the number of classes. Each row should consist of zeros except for a 1 in column i, where i is the index of that sample's class (after the transpose in train(net,samples.',targets.'), each column then has the form the documentation requires).
You can transform targets to the required form by using the ind2vec function.
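For example, a hedged sketch of the fix, assuming targets is the 200x1 vector of class indices from the question:
[net, tr] = train(net, samples.', full(ind2vec(targets.')));  % 7x200 one-hot targets
scores = net([0;1;1;1;1;0;1]);   % 7x1 vector of class scores
predicted = vec2ind(scores);     % index of the largest score = the class (1..7)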

Multi-Output Multi-Class Keras Model

For each input I have, I have an associated 49x2 matrix. Here's what one input-output pair looks like:
input :
[Car1, Car2, Car3 ..., Car118]
output :
[[Label1 Label2]
[Label1 Label2]
...
[Label1 Label2]]
where both Label1 and Label2 are label-encoded and have respectively 1200 and 1300 different classes.
Just to make sure: is this what we call a multi-output multi-class problem?
I tried to flatten the output, but I feared the model wouldn't understand that all similar labels share the same classes.
Is there a Keras layer that can handle this peculiar output array shape?
Generally, multi-class problems correspond to models outputting a probability distribution over the set of classes (typically scored against the one-hot encoding of the actual class through cross-entropy). Now, independently of whether you structure it as one single output, two outputs, 49 outputs or 49 x 2 = 98 outputs, that would mean having 1,200 x 49 + 1,300 x 49 = 122,500 output units - which a computer can certainly handle, but is maybe not the most convenient thing to have. You could try having each class output be a single (e.g. linear) unit and round its value to choose the label but, unless the labels have some numerical meaning (e.g. order, sizes, etc.), that is not likely to work.
If the order of the elements in the input has some meaning (that is, shuffling it would affect the output), I would approach the problem with an RNN, like an LSTM or a bidirectional LSTM model, with two outputs. Use return_sequences=True and TimeDistributed Dense softmax layers for the outputs; for each 118-long input you'd have 118 pairs of outputs. Then you can use temporal sample weighting to drop, for example, the first 69 (or maybe the first 35 and the last 34 if you're using a bidirectional model) and compute the loss with the remaining 49 pairs of labellings. Or, if that makes sense for your data (maybe it doesn't), you could go with something more advanced like CTC, which is implemented in Keras (thanks #indraforyou)!
If the order in the input has no meaning but the order of the outputs does, then you could have an RNN where your input is the original 118-long vector plus a pair of labels (each one-hot encoded), and the output is again a pair of labels (again two softmax layers). The idea would be that you get one "row" of the 49x2 output on each frame, and then you feed it back to the network along with the initial input to get the next one; at training time, you would have the input repeated 49 times along with the "previous" label (an empty label for the first one).
If there are no sequential relationships to exploit (i.e. the order of the input and the output have no special meaning), then the problem would only be truly represented by the initial 122,500 output units (plus all the hidden units you may need to get those right). You could also try some kind of middle ground between a regular network and an RNN, where you have the two softmax outputs and, along with the 118-long vector, you include the "id" of the output that you want (e.g. as a 49-long one-hot encoded vector); if the "meaning" of each label at each of the 49 outputs is similar, or comparable, it may work.

How to calculate the number of parameters for GoogLeNet?

I have a pretty good understanding of AlexNet and VGG: I could verify the number of parameters used in each layer against what is reported in their respective papers.
However, when I try to do the same on the GoogLeNet paper "Going Deeper with Convolutions", even after many iterations I am NOT able to verify the numbers they have in Table 1 of the paper.
For example, the first layer is the good old plain convolution layer with kernel size 7x7, 3 input maps and 64 output maps. Based on this, the number of parameters needed would be (3 * 49 * 64) + 64 (bias), which is around 9.5k, but they say they use 2.7k. I did the math for the other layers as well and I am always off by a few percent from what they report. Any idea?
Thanks
I think the first line (2.7k) is wrong, but the rest of the lines of the table are correct.
Here is my computation:
http://i.stack.imgur.com/4bDo9.jpg
Be careful to check which input is connected to which layer;
e.g. for the layer "inception_3a/5x5_reduce":
input = "pool2/3x3_s2" with 192 channels
dims_kernel = C*S*S =192x1x1
num_kernel = 16
Hence parameter size for that layer = 16*192*1*1 = 3072
It looks like they divided the numbers by 1024^n to produce the K/M labels on the number of parameters in Table 1 of the paper. That feels wrong: we're not talking about actual storage (as in "bytes") here, but the plain number of parameters. They should have just divided by 1000^n instead.
Maybe the 7*7 conv layer is actually the combination of a 7*1 conv layer and a 1*7 conv layer; then the number of params could be ((7+7)*64*3 + 64*2) / 1024 = 2.75k, which approaches 2.7k (or you can omit the 128 biases).
As we know, Google introduced asymmetric convolutions while doing spatial factorization, in the paper "Spatial Factorization into Asymmetric Convolutions".
(1x7 + 7x1) x 3 x 64 = 2688 ≈ 2.7k - this is just my guess, I am a new student.
The number of parameters in a CONV layer would be ((m * n * d) + 1) * k, where the 1 accounts for the bias term of each filter. The same expression can be written as follows: (width of the filter * height of the filter * number of filters in the previous layer + 1) * number of filters.
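As a quick check, the formula as a small Matlab sketch (the helper name is mine):
convParams = @(m, n, d, k) ((m * n * d) + 1) * k;  % +1 per filter for the bias

convParams(7, 7, 3, 64)     % first layer: 9472, the ~9.5k from the question
convParams(1, 1, 192, 16)   % inception_3a/5x5_reduce: 3088 (3072 without biases)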

Matlab - neural networks - How to use different datasets for training, validation and testing?

Best
I've a question about Neural networks in Matlab.
First of all, I have a small NN: 2 inputs, 1 hidden layer with 10 neurons, and one output, and this works fine. But my question is: can I choose my training data, validation data and test data myself?
I know that if I use e.g. net = feedforwardnet(10); I can divide my overall dataset into e.g. 70/100, 15/100 and 15/100. But I don't want to do this, because I want to train my NN with 1000 data points, validate it with another set of data points, and use another independent set of 1000 data points to test it. In other words, I want to control these 3 independent data sets.
Thus, can someone help me?
Kind regards
Edit: I don't want to use a data set with 3000 data points and set the divideParam ratios to 1/3, 1/3 and 1/3.
Best myself
When you use a feedforwardnet you can define your divide parameters:
net.divideParam.trainRatio = 1/3;
net.divideParam.valRatio = 1/3;
net.divideParam.testRatio = 1/3;
You know that your data will be divided into 3 pieces,
but you (I) don't know which data ends up in which piece.
However, when you (and thus I) train the network via the following command line:
[net,tr]=train(net,x,t);
then tr will contain all the necessary info, for example:
tr.trainInd 1x1000 double,
tr.valInd 1x1000 double,
tr.testInd 1x1000 double,
Thus, e.g. tr.trainInd will contain all the indices of our data set which were used for training. Also, in tr we can see that tr.divideFcn is set to dividerand, which means the indices are picked at random. It would be logical, then, that the indices could also be picked non-randomly, which means that, combining both things, it should be possible to use a separate test set (net.divideParam.testRatio = 0) and two different training and validation sets (net.divideParam.trainRatio = 1/2 and net.divideParam.valRatio = 1/2) - if you can set the divide function to something chronological, sequential. Last but not least, if this is possible, then all that remains is to put the training and validation sets into one data set, etc.
Kind regards to myself
By default it will use random indices for train, validation and test. This is set manually with the following, though since it's the default it's usually not needed:
net.divideFcn = 'dividerand';
and then you use the commands you noted above:
net.divideParam.trainRatio = 1/3;
net.divideParam.valRatio = 1/3;
net.divideParam.testRatio = 1/3;
To do what you want and set the index of each you can do the following:
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:1000;
net.divideParam.valInd = 1001:2000;
net.divideParam.testInd = 2001:3000;
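A minimal end-to-end sketch of this approach, with placeholder data and variable names of my own choosing (xTrain, tTrain, etc.):
% Placeholder data: 2 inputs, 1 output, 1000 samples per set
xTrain = rand(2, 1000); tTrain = sum(xTrain, 1);
xVal = rand(2, 1000); tVal = sum(xVal, 1);
xTest = rand(2, 1000); tTest = sum(xTest, 1);

x = [xTrain, xVal, xTest];              % samples in columns, 2x3000
t = [tTrain, tVal, tTest];              % 1x3000 targets

net = feedforwardnet(10);
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:1000;      % exactly your training set
net.divideParam.valInd = 1001:2000;     % exactly your validation set
net.divideParam.testInd = 2001:3000;    % exactly your test set
[net, tr] = train(net, x, t);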

Using SURF algorithm to match objects on MATLAB

The objective is to see if two images, which have one object captured in each image, matches.
The object/image that I have stored will be used as a baseline:
item1 (this is what is being matched against in the code)
The object/image that needs to be matched with it:
input (need to see if this matches what is stored)
My method:
Convert the images to grayscale.
Extract SURF interest points.
Obtain features.
Match features.
Get the 50 strongest features.
Match the number of strongest features with each image.
Take the ratio: number of features matched / number of strongest
features (which is 50).
If I have two images of the same object (two images taken separately on a camera), ideally the ratio should be near 1 or near 100%.
However, this is not the case; the best ratio I am getting is near 0.5, or even worse, 0.3.
I am aware that SURF detectors and features can be used in neural networks, or in a statistics-based approach. I believe I have taken the statistics-based approach to some extent by using the 50 strongest features.
Is there something I am missing? What do I add to this, or how do I improve it? Please give me a point to start from.
%Clearing the workspace and all variables
clc;
clear;
%ITEM 1
item1 = imread('Loreal.jpg');%retrieve item 1 and digitize it
item1Grey = rgb2gray(item1);%convert to grayscale, 2-dimensional matrix
item1KP = detectSURFFeatures(item1Grey,'MetricThreshold',600);%get SURF detectors or interest points
strong1 = item1KP.selectStrongest(50);
[item1Features, item1Points] = extractFeatures(item1Grey, strong1,'SURFSize',128); % using SURFSize of 128
%INPUT : Acquire image
input= imread('MakeUp1.jpg');%retrieve input and digitize it
inputGrey = rgb2gray(input);%convert to grayscale, 2-dimensional matrix
inputKP = detectSURFFeatures(inputGrey,'MetricThreshold',600);%get SURF detectors or interest points
strongInput = inputKP.selectStrongest(50);
[inputFeatures, inputPoints] = extractFeatures(inputGrey, strongInput,'SURFSize',128); % using SURFSize of 128
pairs = matchFeatures(item1Features, inputFeatures, 'MaxRatio',1); %matching SURF Features
totalFeatures = size(item1Features, 1); %baseline number of features
numPairs = size(pairs, 1); %the number of matched pairs
percentage = numPairs/50;
if percentage >= 0.49
disp('We have this');
else
disp('We do not have this');
disp(percentage);
end
The baseline image
The input image
I would try not using selectStrongest and not setting MaxRatio. Just call matchFeatures with the default options and compare the number of resulting matches.
The default behavior of matchFeatures is to use the ratio test to exclude ambiguous matches. So the number of matches it returns may be a good indicator of the presence or absence of the object in the scene.
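A hedged sketch of that suggestion, reusing the grayscale images from the question's code:
item1KP = detectSURFFeatures(item1Grey);   % default MetricThreshold
inputKP = detectSURFFeatures(inputGrey);
item1Features = extractFeatures(item1Grey, item1KP);
inputFeatures = extractFeatures(inputGrey, inputKP);
pairs = matchFeatures(item1Features, inputFeatures);  % default ratio test rejects ambiguous matches
numPairs = size(pairs, 1);   % more matches suggest the object is present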
If you want to try something more sophisticated, take a look at this example.