I'm a final-year student working on my major project. The project is to extract text from a natural scene, recognize it, and then display it in Notepad, etc.
I have already extracted the text from the images and have also obtained 85 features for each extracted character.
However, for the recognition part, I have no clue how to train or use an SVM (support vector machine) in MATLAB so I can get a match.
Please help me out, as this is turning out to be painstakingly difficult.
If you're happy with using an existing SVM implementation, then you should either use the bioinformatics toolbox svmtrain or download the MATLAB version of libsvm. If you want to implement an SVM yourself, then you should understand SVM theory, and you can use quadprog to solve the appropriate optimisation problem.
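To make the quadprog route concrete, here is a minimal sketch of the soft-margin dual for a linear kernel; X (N-by-D features), y (N-by-1 labels in {-1,+1}) and the box constraint C are placeholder names, not anything from a toolbox:

n = size(X, 1);
H = (y * y') .* (X * X');          % dual Hessian for a linear kernel
f = -ones(n, 1);                   % quadprog minimises 0.5*a'*H*a + f'*a
alpha = quadprog(H, f, [], [], y', 0, zeros(n, 1), C * ones(n, 1));
w = X' * (alpha .* y);             % recover the primal weight vector
sv = (alpha > 1e-6) & (alpha < C - 1e-6);   % margin support vectors
b = mean(y(sv) - X(sv, :) * w);    % bias estimated from the margin support vectors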
With your data, you will need an N-by-85 feature matrix, where N is the number of characters, and an N-by-1 array of 'true labels' which you provide manually. Depending on which tool you use to train an SVM, the parameters to svmtrain are slightly different - check the documentation.
If you want to evaluate your SVM to show that it works, you may need to organise your data so that you can estimate the generalisation error of the classifier - see cross-validation.
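As a rough sketch of the whole workflow with the bioinformatics toolbox, where features (N-by-85) and labels (N-by-1, numeric) are placeholder names for your data:

[trainIdx, testIdx] = crossvalind('HoldOut', size(features, 1), 0.3);  % hold out 30% for testing
model = svmtrain(features(trainIdx, :), labels(trainIdx));   % train on the remaining 70%
predicted = svmclassify(model, features(testIdx, :));        % classify the held-out characters
accuracy = mean(predicted == labels(testIdx))

Note that this svmtrain handles two classes; for a full character set you would train one-vs-rest models or use libsvm, which supports multi-class directly.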
I was reading this particular paper http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf and I find the Fisher Vector with GMM vocabulary approach very interesting, and I would like to test it myself.
However, it is totally unclear (to me) how they apply PCA dimensionality reduction to the data. Do they calculate the feature space first and then perform PCA on it? Or do they perform PCA on each image's SIFT descriptors and then create the feature space?
Is this supposed to be done for both the training and test sets? To me the answer is 'obviously yes', but the paper doesn't make it clear.
I was thinking of creating the feature space from the training set and then running PCA on it. Then I could use the PCA coefficients from the training set to reduce each image's SIFT descriptors before they are encoded into a Fisher Vector for later classification, whether it is a test or a train image.
EDIT 1:
Simplistic example:

[coef, reduced_feat_space] = pca(Feat_Space', 'NumComponents', 80);

and then (for both test and train images):

reduced_test_img = test_img * coef;  % keeps 80 dimensions, since coef has 80 columns
What do you think? Cheers
It looks to me like they do SIFT first and then do PCA. The article states in section 2.1: "The local descriptors are fixed in all experiments to be SIFT descriptors..."
The introduction also describes "the following three steps: (i) extraction of local image features (e.g., SIFT descriptors), (ii) encoding of the local features in an image descriptor (e.g., a histogram of the quantized local features), and (iii) classification ... Recently several authors have focused on improving the second component", so it looks to me that the dimensionality reduction occurs after SIFT, and the paper is simply talking about a few different methods of doing this and the performance of each.
I would also guess (as you did) that you would have to run it on both sets of images. Otherwise you would be using two different metrics to classify the images - it really would be like comparing apples to oranges. Comparing a reduced-dimensional representation to the full one (even for the exact same image) will show some variation. In fact, that is the whole premise of PCA: you give up some smaller features (usually) in exchange for computational efficiency. The real question with PCA, or any dimensionality reduction algorithm, is how much information you can give up and still reliably classify/segment different data sets.
And as a last point, you would have to treat both images the same way, because your end goal is to use the Fisher Vector for classification, whether an image is test or training. Now imagine you decided training images don't get PCA and test images do. If I give you some image X, what would you do with it? How could you treat one set of images differently from another BEFORE you've classified them? Using the same technique on both sets means you'd process my image X and then decide where to put it.
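In code, "the same technique on both sets" just means learning the projection on the training descriptors and reusing it. A minimal sketch with MATLAB's pca, where train_desc and test_desc are placeholder matrices with one descriptor per row (pca centres the data internally, so new data must be centred with the training mean before projecting):

mu = mean(train_desc, 1);                     % mean of the TRAINING descriptors only
coef = pca(train_desc, 'NumComponents', 80);  % projection learned on training data only
reduced_train = bsxfun(@minus, train_desc, mu) * coef;
reduced_test  = bsxfun(@minus, test_desc, mu) * coef;   % same mu, same coef for test data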
Anyway, I hope that helped and wasn't too rant-like. Good luck :-)
When I try to train an SVM (the svmtrain function) with an RBF kernel, the libSVM library outputs "Line search fails in two-class probability estimates" during training.
After training, the training accuracy of the model is just 20%.
I think I might be missing something, and that it is related to the message.
For more information about my project: I'm dealing with the PASCAL VOC action classification problem, and I'm trying to follow this method:
http://www.ifp.illinois.edu/~jyang29/papers/CVPR09-ScSPM.pdf
There are 1300 training images and 11 classes. After making codebooks and sparse coding, the dimension of the feature vector is 2688 and the number of training examples is 1370.
You need to do a grid search, either using cross-validation or a separate validation data set, to get good values for C and gamma. Libsvm has a script called grid.py that is useful for this. I noticed you tagged this with matlab; using grid.py needs command-line tools and a Python installation (IMO this generally works out better than doing it in MATLAB, especially if you have some big machines to run many jobs in parallel).
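If you would rather stay in MATLAB, the same search is easy to write by hand, since passing -v to libsvm's svmtrain makes it return cross-validation accuracy instead of a model. A sketch over grid.py's default ranges, with labels and features as placeholder names:

best_acc = 0;
for log2c = -5:2:15                               % same default grid that grid.py uses
    for log2g = -15:2:3
        opts = sprintf('-t 2 -c %g -g %g -v 5', 2^log2c, 2^log2g);
        acc = svmtrain(labels, features, opts);   % with -v 5: 5-fold CV accuracy
        if acc > best_acc
            best_acc = acc; best_c = 2^log2c; best_g = 2^log2g;
        end
    end
end
model = svmtrain(labels, features, sprintf('-t 2 -c %g -g %g', best_c, best_g));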
I recommend that you read the libsvm guide if you haven't already done so: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf.
I also suggest you initially use the same dataset as the paper, since occasionally published algorithms only work well on the dataset chosen for the paper.
Lastly, you could contact the authors of the paper.
I asked the author of LIBSVM about this warning, and he replied that it can be ignored.
This question is about LibSVM, or SVMs in general.
I wonder if it is possible to classify feature vectors of different lengths with the same SVM model.
Let's say we train the SVM with about 1000 instances of the following feature vector:
[feature1 feature2 feature3 feature4 feature5]
Now I want to predict a test vector which has the same length of 5.
If the probability I receive is too poor, I then want to check a subset of my test vector containing columns 2-5. That is, I want to dismiss the first feature.
My question now is: is it possible to tell the SVM to check only features 2-5 for prediction (e.g. with weights), or do I have to train different SVM models - one for 5 features, another for 4 features, and so on?
Thanks in advance...
marcus
You can always remove features from your test points by fiddling with the file, but I highly recommend against such an approach. An SVM model is valid only when all features are present. If you are using the linear kernel, simply setting a given feature to 0 will implicitly cause it to be ignored (though you should not do this). When using other kernels, this is very much a no-no.
Using a different set of features for predictions than the set you used for training is not a good approach.
I strongly suggest training a new model for each subset of features you wish to use in prediction.
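A minimal sketch of that approach with libsvm's MATLAB interface (X, y, X_test and y_test are placeholder names; -b 1 enables the probability estimates you mentioned):

model_full = svmtrain(y, X, '-t 2 -b 1');           % model trained on all 5 features
model_sub  = svmtrain(y, X(:, 2:5), '-t 2 -b 1');   % separate model on features 2-5 only
% at prediction time, use whichever model matches the features you actually have
[pred, acc, prob] = svmpredict(y_test, X_test(:, 2:5), model_sub, '-b 1');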
Hi, I am doing my final-year M.E. project on Tamil character recognition. I have completed everything up to the feature extraction step, so I now have features for each image in the dataset (HP Labs). How do I feed these features to train an SVM, and how do I perform class labeling? I am new to this neural network area, so please help me.
In Training
In MATLAB a neural network has two inputs:
Input vector
Target vector
Example:
net = newff(input, target);
net = train(net, input, target);
You give the features as the input vector; the target is the corresponding feature ID (char ID).
In Testing
Extract the features from the image, then test them in the neural network using the sim function:
sim(net, features)
It returns the corresponding char ID.
Open MATLAB, type nftool, and study that toolbox.
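A hedged end-to-end sketch of the above, assuming features is D-by-N (one column per sample, as the Neural Network Toolbox expects) and charIDs is a 1-by-N vector of class numbers; I use one-hot targets via ind2vec rather than raw IDs, which tends to work better for classification, and 20 hidden neurons is an arbitrary choice:

targets = full(ind2vec(charIDs));     % one-hot target matrix, one column per sample
net = newff(features, targets, 20);   % feed-forward net with 20 hidden neurons
net = train(net, features, targets);
outputs = sim(net, testFeatures);     % network outputs for unseen features
predictedIDs = vec2ind(outputs);      % index of the largest output = char ID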
The same idea applies to an SVM.
Training
model = svmtrain(input, label);
input is the feature matrix; label is the char ID of each feature vector.
Testing
Use the svmclassify() method; it returns the char ID.
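One caveat: MATLAB's svmtrain/svmclassify are two-class only, so for a full character set you would either train one-vs-rest models or use libsvm, which handles multi-class itself (note that libsvm's svmtrain takes the labels first, the opposite order from MATLAB's). A sketch with libsvm, using placeholder names:

model = svmtrain(double(trainIDs), double(trainFeatures), '-t 2');  % RBF kernel, one-vs-one multi-class
predictedIDs = svmpredict(double(testIDs), double(testFeatures), model);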
You may want to look at the svmclassify and svmtrain methods in the bioinformatics toolbox in MATLAB.
By the way, do you really want support vector machines, or neural networks? They are very different from each other. Please be clear which classifier you want to use for your problem before deciding on a particular implementation.
If you are new to the field of machine learning and want to try out a couple of algorithms, I would suggest you try Weka first.
I have a dataset for text classification ready to be used in MATLAB. Each document is a vector in this dataset, and the dimensionality of these vectors is extremely high. In such cases people usually do some feature selection on the vectors, like the methods you find in the WEKA toolkit. Is there anything like that in MATLAB? If not, can you suggest an algorithm for me to do it?
thanks
MATLAB (and its toolboxes) includes a number of functions that deal with feature selection:
RANDFEATURES (Bioinformatics Toolbox): Generate randomized subset of features directed by a classifier
RANKFEATURES (Bioinformatics Toolbox): Rank features by class separability criteria
SEQUENTIALFS (Statistics Toolbox): Sequential feature selection (see the sketch after this list)
RELIEFF (Statistics Toolbox): Relief-F algorithm
TREEBAGGER.OOBPermutedVarDeltaError, predictorImportance (Statistics Toolbox): Using ensemble methods (bagged decision trees)
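For example, SEQUENTIALFS expects a criterion function that it evaluates on held-out folds while greedily adding features; a minimal sketch using the Statistics Toolbox classify function as the classifier, with X and y as placeholder data:

% criterion: misclassification count of a discriminant classifier on the held-out fold
crit = @(Xtr, ytr, Xte, yte) sum(yte ~= classify(Xte, Xtr, ytr));
selected = sequentialfs(crit, X, y, 'cv', 5);   % logical mask of the selected features
Xreduced = X(:, selected);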
You can also find examples that demonstrate usage on real datasets:
Identifying Significant Features and Classifying Protein Profiles
Genetic Algorithm Search for Features in Mass Spectrometry Data
In addition, there exist third-party toolboxes:
Matlab Toolbox for Dimensionality Reduction
LIBGS: A MATLAB Package for Gene Selection
Otherwise you can always call your favorite functions from WEKA directly from MATLAB, since it includes a JVM...
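For instance, a rough sketch of instantiating one of Weka's attribute evaluators from MATLAB (the jar path is a placeholder; the class names are from Weka's attributeSelection package):

javaaddpath('/path/to/weka.jar');   % put Weka on MATLAB's dynamic Java classpath
evaluator = weka.attributeSelection.InfoGainAttributeEval();
ranker = weka.attributeSelection.Ranker();
% your data must first be wrapped in a weka.core.Instances object before
% calling evaluator.buildEvaluator(...) and ranker.search(...)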
Feature selection depends on the specific task you want to do on the text data.
One of the simplest and crudest methods is to use Principal Component Analysis (PCA) to reduce the dimensionality of the data. The reduced-dimensional data can then be used directly as features for classification.
See the tutorial on using PCA here:
http://matlabdatamining.blogspot.com/2010/02/principal-components-analysis.html
Here is the link to the MATLAB PCA command help:
http://www.mathworks.com/help/toolbox/stats/princomp.html
Using the obtained features, the well-known Support Vector Machine (SVM) can be used for classification.
http://www.mathworks.com/help/toolbox/bioinfo/ref/svmclassify.html
http://www.autonlab.org/tutorials/svm.html
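Putting the two pieces together, a minimal sketch (princomp is the older function name used in the linked help page; docs, labels, trainIdx and testIdx are placeholder names, and the svmtrain here is the two-class Bioinformatics Toolbox function):

[coef, score] = princomp(docs);      % principal components of the document-term matrix
reducedDocs = score(:, 1:100);       % keep the first 100 components (arbitrary choice)
model = svmtrain(reducedDocs(trainIdx, :), labels(trainIdx));
predicted = svmclassify(model, reducedDocs(testIdx, :));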
You might consider using the independent features technique of Weiss and Kulikowski to quickly eliminate variables which are obviously uninformative:
http://matlabdatamining.blogspot.com/2006/12/feature-selection-phase-1-eliminate.html