I am working on neural networks, and I am currently creating a perceptron that will work as a classifier for a data set of face images. I am required to perform PCA (principal component analysis) on my data set before dividing the samples into two different sets for training and testing. By doing this, I lower the dimensionality of the data and at the same time compress the size of the images.
However, I am not a statistician, and I have trouble deciding on the number of principal components to use for PCA without any specific formula. My data set is a 4096x400 array, 400 being the number of sample images and 4096 being their dimension. Is there a way to be more precise and accurate about the number of principal components to use during PCA?
I am working in MATLAB, so I am using princomp. Thank you in advance; any help will be highly appreciated.
Question: How many principal components should I use in pattern classification?
Answer: As low as possible.
When you apply PCA, you get a number of principal components determined by your data. Let's say you get 10 principal components. You can then check how much of the variance is explained by each principal component.
For example
component    variance explained
1            0.40
2            0.25
3            0.15
4            0.10
5            0.05
6            0.01
7            0.01
8            0.01
9            0.01
10           0.01
With this, you decide on a cutoff and train your classifier. In this example, as you can see, the first 4 principal components hold 90% of the information. Your results may be good enough with only those 4 principal components.
If you add the 5th principal component, the first 5 will hold 95% of your information, and so on.
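For concreteness, here is a small MATLAB sketch of that cutoff rule. All variable names here are illustrative, and note that princomp expects observations in rows, so a 4096x400 matrix of images would be transposed first:

% X: samples in rows; latent holds the variance of each principal component
[coeff, score, latent] = princomp(X);
explained = cumsum(latent) ./ sum(latent);   % cumulative fraction of variance
k = find(explained >= 0.90, 1, 'first');     % smallest k reaching a 90% cutoff
X_reduced = score(:, 1:k);                   % data projected onto the first k PCs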
See an example with PCA and image data here
Related
I'm a newbie with neural networks, so I'm a little confused about the Adam optimizer. For example, I use an MLP with an architecture like this:
I've used SGD before, so I want to ask whether changing the weights with Adam optimization is the same as SGD updating the weights on each layer. In the example above, does that mean there will be 2 weight changes from the output to hidden layer 2, 8 weight changes from hidden layer 2 to hidden layer 1, and finally 4 weight changes from hidden layer 1 to the input?
I ask because in the example I've seen, they only update the weights from the output to hidden layer 2.
You can use both SGD and Adam to calculate updates for every weight in your network (as long as the loss is differentiable w.r.t. the weight). If you use TensorFlow or PyTorch and build the model in your sketch, by default all weights will be updated when you perform an optimizer step. (If you really want to, you can also limit the optimizer to only a subset of parameters.)
The difference between SGD and Adam is that with SGD the weight updates are simple steps in the direction of the (negative) gradient, while with Adam the gradient steps are scaled using running statistics of the previous gradients.
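To make the contrast concrete, here is a rough MATLAB sketch of the two update rules for one weight vector w with gradient g. The names lr, beta1, beta2, eps_, and t are illustrative; this follows the standard Adam formulation, not any particular library's API:

% SGD: a plain step along the negative gradient
w = w - lr * g;

% Adam: running estimates of the gradient mean (m) and uncentered variance (v)
m = beta1 * m + (1 - beta1) * g;
v = beta2 * v + (1 - beta2) * g.^2;
m_hat = m / (1 - beta1^t);                  % bias correction at step t
v_hat = v / (1 - beta2^t);
w = w - lr * m_hat ./ (sqrt(v_hat) + eps_); % per-weight scaled step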
I'm new to neural networks and I have this problem:
I have a dataset with 300 rows and 33 columns. Each row has 3 more columns for the results.
I'm trying to use an MLP to train a model so that when I have a new row, it estimates those 3 result columns.
I can easily reduce the error during training to 0.001, but when I use cross-validation it keeps estimating very poorly.
It estimates correctly if I use the same entries it was trained on, but if I use values that weren't used for training, the results are very wrong.
I'm using two hidden layers with 20 neurons each, so my architecture is [33 20 20 3].
For the activation function I'm using the bipolar sigmoid.
Do you have any suggestions on what I could change to improve this?
Overfitting
As mentioned in the comments, this perfectly describes overfitting.
I strongly suggest reading the Wikipedia article on overfitting, as it describes the causes well, but I'll summarize some key points here.
Model complexity
Overfitting often happens when your model is needlessly complex for the problem. I don't know anything about your dataset, but I'm guessing [33 20 20 3] has more parameters than necessary for the prediction.
Try running your cross-validation methods again, this time with either fewer layers or fewer nodes per layer. Right now you are using 33*20 + 20*20 + 20*3 = 1120 parameters (weights) to make your prediction. Is that necessary?
Regularization
A common solution to overfitting is regularization. The driving principle is KISS (keep it simple, stupid).
By applying an L1 regularizer to your weights, you express a preference for solving your problem with the smallest number of weights. The network will pull many weights to 0, as they aren't needed.
By applying an L2 regularizer to your weights, you express a preference for lower-rank solutions to your problem; the network will prefer weight matrices that span lower dimensions. Practically, this means your weights will be smaller numbers and are less likely to be able to "memorize" the data.
What are L1 and L2? These are types of vector norms. L1 is the sum of the absolute values of your weights. L2 is the square root of the sum of squares of your weights. (L3 is the cube root of the sum of cubes of the weights, L4 ...).
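In MATLAB terms, with w a weight vector, lambda a regularization strength, and data_loss standing in for the unregularized training loss (all names here are just for illustration):

l1 = sum(abs(w));                        % L1 norm: drives many weights to exactly 0
l2 = sqrt(sum(w.^2));                    % L2 norm: drives all weights to be small
loss_l1 = data_loss + lambda * l1;       % L1-regularized objective
loss_l2 = data_loss + lambda * sum(w.^2);   % L2 penalty is usually the squared norm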
Distortions
Another commonly used technique is to augment your training data with distorted versions of your training samples. This only makes sense with certain types of data. For instance, images can be rotated, scaled, shifted, have Gaussian noise added, etc. without dramatically changing their content.
By adding distortions, your network will no longer memorize your data, but will instead learn that things similar to your data belong together. The digit 1 rotated by 2 degrees still looks like a 1, so the network should be able to learn from both.
Only you know your data. If this is something that can be done with your data (even just adding a little Gaussian noise to each feature), then maybe it is worth looking into. But do not use this blindly without considering the implications it may have for your dataset.
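For example, a minimal MATLAB sketch of the Gaussian-noise case. Assumes train_X has samples in rows and train_Y holds the targets; sigma is a small noise level that you would tune relative to your feature scale:

sigma = 0.01;                                % noise level, relative to feature scale
noisy_X = train_X + sigma * randn(size(train_X));
aug_X = [train_X; noisy_X];                  % original plus distorted copies
aug_Y = [train_Y; train_Y];                  % targets are unchanged by the noise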
Careful analysis of data
I put this last because it is an indirect response to the overfitting problem. Check your data before pumping it through a black-box algorithm (like a neural network). Here are a few questions worth answering if your network doesn't work (a couple of them are sketched in code after the list):
Are any of my features strongly correlated with each other?
How do baseline algorithms perform? (Linear regression, logistic regression, etc.)
How are my training samples distributed among classes? Do I have 298 samples of one class and 1 sample of the other two?
How similar are my samples within a class? Maybe I have 100 samples for this class, but all of them are the same (or nearly the same).
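The first and third checks are easy to script. A MATLAB sketch, with train_X samples in rows and train_Y a vector of class labels (tabulate requires the Statistics Toolbox):

R = corrcoef(train_X);                       % feature-feature correlation matrix
strong = abs(R) > 0.9 & ~eye(size(R));       % flag strongly correlated feature pairs
tabulate(train_Y);                           % class distribution of the labels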
I have trained a feed-forward NN using the MATLAB Neural Network Toolbox on a dataset containing speech features and accelerometer measurements. The target set contains two target classes for the dataset: 0 and 1. The training, validation, and performance are all fine, and I have generated code for this network.
Now I need to use this neural network in real time to recognize patterns as they occur and output 0 or 1 when I test a new dataset against the previously trained NN. But when I issue the command:
c = sim(net, j)
Where "j" is a new dataset[24x11]; instead 0 or 1 i get this as an output (I assume I get percent of correct classification but there is no classification result itself):
c =
Columns 1 through 9
0.6274 0.6248 0.9993 0.9991 0.9994 0.9999 0.9998 0.9934 0.9996
Columns 10 through 11
0.9966 0.9963
So is there any command or way that I can actually see the classification results? Any help highly appreciated! Thanks.
I'm not a MATLAB user, but from a logical point of view, you are missing an important point:
The input to a neural network is a single vector; you are passing a matrix. Thus MATLAB thinks that you want to classify a bunch of vectors (11 in your case). So the vector that you get back holds the output activation for each of these 11 vectors.
The output activation is a value between 0 and 1 (I guess you are using the sigmoid), so this is perfectly normal. Your job is to find a threshold that fits your data best. You can get this threshold with cross-validation on your training/test data, or by just choosing one (0.5?), seeing if the results are "good", and modifying it if needed.
NNs normally convert their output to a value within (0,1), using for example the logistic function. It's not a percentage or probability, just a relative measure of certainty. In any case, this means that you have to manually apply a threshold (such as 0.5) to discriminate between the two classes. Which threshold is best is tough to find, because you must select the optimum trade-off between precision and recall.
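In MATLAB this is a one-liner after the simulation (0.5 is just a starting point, to be tuned on validation data):

c = sim(net, j);             % continuous activations in (0,1), one per input vector
labels = double(c >= 0.5);   % hard 0/1 decisions from a threshold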
I have a big data set (time series, about 50 parameters/values). I want to use a Kohonen network to group similar data rows. I've read a bit about Kohonen neural networks and I understand the idea of a Kohonen network, but:
I don't know how to implement a Kohonen network with so many dimensions. I found an example on CodeProject, but only with a 2- or 3-dimensional input vector. When I have 50 parameters, should I create 50 weights in my neurons?
I don't know how to update the weights of the winning neuron (how do I calculate the new weights?).
My English is not perfect and I don't understand everything I read about Kohonen networks, especially the descriptions of variables in formulas; that's why I'm asking.
One should distinguish the dimensionality of the map, which is usually low (e.g. 2 in the common case of a rectangular grid), from the dimensionality of the reference vectors, which can be arbitrarily high without problems.
Look at http://www.psychology.mcmaster.ca/4i03/demos/competitive-demo.html for a nice example with 49-dimensional input vectors (7x7 pixel images). The Kohonen map in this case has the form of a one-dimensional ring of 8 units.
See also http://www.demogng.de for a Java simulator for various Kohonen-like networks, including ring-shaped ones like the one at McMaster. The reference vectors there are all 2-dimensional, but only for easier display; they could have arbitrarily high dimensions without any change to the algorithms.
Yes, each neuron would need 50 weights, one per input dimension. The map itself, however, is usually low dimensional, as described in this self-organizing map article; I have never seen more than a few map dimensions used.
You have to use an update formula. From the same article: W_v(s+1) = W_v(s) + Θ(u, v, s) · α(s) · (D(t) − W_v(s)).
Yes, you'll need 50 inputs (and hence 50 weights) for each neuron.
You basically do a linear interpolation between each neuron's weight vector and the target (input) vector, using W(s+1) = W(s) + Θ() * α(s) * (Input(t) − W(s)), with Θ being your neighbourhood function.
And you should update all your neurons, not only the winner.
Which function you use as the neighbourhood function depends on your actual problem.
A common property of such a function is that it has the value 1 when i = k (i.e. for the winner) and falls off with the Euclidean distance. Additionally, it shrinks with time (in order to localize clusters).
Simple neighbourhood functions include linear interpolation (up to a "maximum distance") or a Gaussian function.
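Putting the pieces together, a minimal MATLAB sketch of one update step on a 1-D map. Assumes W is an N x 50 matrix of reference vectors (N neurons, 50 input dimensions), x is a 1 x 50 input row, and alpha and sigma are the learning rate and neighbourhood width, both decaying over time; all names are illustrative:

diffs = repmat(x, size(W,1), 1) - W;           % input minus every reference vector
[~, winner] = min(sum(diffs.^2, 2));           % best matching unit (the winner)
d = abs((1:size(W,1))' - winner);              % grid distance to the winner
theta = exp(-d.^2 ./ (2*sigma^2));             % gaussian neighbourhood function
W = W + alpha * repmat(theta, 1, size(W,2)) .* diffs;   % update ALL neurons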
I have to write a classifier (Gaussian mixture model) that I will use for human action recognition.
I have 4 datasets of video. I chose 3 of them as the training set and 1 of them as the testing set.
Before I apply the GM model to the training set, I run PCA on it.
pca_coeff = princomp(training_data);
score = training_data * pca_coeff;
training_data = score(:,1:min(size(score,2),numDimension));
During the testing step, what should I do? Should I run a new princomp on the testing data,
new_pca_coeff=princomp(testing_data);
score = testing_data * new_pca_coeff;
testing_data = score(:,1:min(size(score,2),numDimension));
or should I use the pca_coeff that I computed for the training data?
score = testing_data * pca_coeff;
testing_data = score(:,1:min(size(score,2),numDimension));
The classifier is being trained on data in the space defined by the principal components of the training data. It doesn't make sense to evaluate it in a different space; therefore, you should apply the same transformation to the testing data as you did to the training data, so don't compute a different pca_coeff.
Incidentally, if your testing data are drawn independently from the same distribution as the training data, then for large enough training and test sets the principal components should be approximately the same.
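For concreteness, a sketch of that pipeline in MATLAB. One detail worth noting: princomp centers its input internally (its score equals (X - mean(X)) * coeff), so to reproduce the same projection on test data you would subtract the training mean before multiplying by pca_coeff:

mu = mean(training_data);                      % training-set mean, reused at test time
[pca_coeff, score] = princomp(training_data);
training_data = score(:, 1:numDimension);
test_score = (testing_data - repmat(mu, size(testing_data,1), 1)) * pca_coeff;
testing_data = test_score(:, 1:numDimension);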
One method for choosing how many principal components to use involves examining the eigenvalues from the PCA decomposition. You can get these from the princomp function like this:
[pca_coeff score eigenvalues] = princomp(data);
The eigenvalues variable will then be an array where each element describes the amount of variance accounted for by the corresponding principal component. If you do:
plot(eigenvalues);
you should see that the first eigenvalue is the largest, and that they rapidly decrease (this is called a "scree plot", and should look like this: http://www.ats.ucla.edu/stat/SPSS/output/spss_output_pca_5.gif, though yours may have up to 800 points instead of 12).
Principal components with small corresponding eigenvalues are unlikely to be useful, since the variance of the data in those dimensions is so small. Many people choose a threshold value and then select all principal components whose eigenvalue is above that threshold. An informal way of picking the threshold is to look at the scree plot and choose the threshold to be just after the line 'levels out'; in the image I linked earlier, a good value might be ~0.8, selecting 3 or 4 principal components.
IIRC, you could do something like:
proportion_of_variance = sum(eigenvalues(1:k)) ./ sum(eigenvalues);
to calculate "the proportion of variance described by the low dimensional data".
However, since you are using the principal components for a classification task, you can't really be sure that any particular number of PCs is optimal; the variance of a feature doesn't necessarily tell you anything about how useful it will be for classification. An alternative to choosing PCs with the scree plot is simply to try classification with various numbers of principal components and see which number works best empirically.
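A rough MATLAB sketch of that empirical search. Here train_and_test is a hypothetical stand-in for training your classifier and returning a validation accuracy, and test_score is the test data projected with the training pca_coeff as discussed above:

[pca_coeff, score, eigenvalues] = princomp(train_data);
max_k = 50;                                  % range of component counts to try
acc = zeros(1, max_k);
for k = 1:max_k
    acc(k) = train_and_test(score(:,1:k), train_labels, ...
                            test_score(:,1:k), test_labels);
end
[best_acc, best_k] = max(acc);               % pick the empirically best count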