How to extract useful features from a graph? - matlab

Things are like this:
I have some graphs like the pictures above, and I am trying to classify them into different kinds so that the shape of a character can be recognized. Here is what I've done:
I apply a 2-D FFT to the graphs, so I can get a spectral analysis of them. Here are some results:
[Image: S after 2-D FFT]
[Image: T after 2-D FFT]
I have found that the same letter shares the same pattern in the magnitude graph after the FFT, and I want to use this feature to cluster the letters. But there is a problem: I want the features of interest to be representable on a 2-D plane, i.e. in the form (x, y), but the feature here is actually a whole graph of about 600x400 elements, and the only thing I am interested in is its shape (S is a dot in the middle, and T is like a cross). So what can I do to reduce the dimension of the magnitude graph?
I am not sure my question is clear here, but thanks in advance.

You can use dimensionality reduction and related clustering/classification methods such as
k-means clustering
SVM
PCA
MDS
Each of these methods can take 2-dimensional arrays and work out a coordinate frame that best distinguishes or represents your letters.
One way to start would be to reduce your 240,000-dimensional space to a 26-dimensional space using one of these methods (PCA or MDS are the natural fits here); see the sketch below.
That would give you an 'amplitude' for each of the possible letters.
But as @jucestain says, neural network classifiers are great for letter recognition.
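Whichever route you take, a minimal PCA sketch in MATLAB might look like this; it assumes fftMags is a hypothetical K x 240000 matrix in which each row is one flattened 600x400 magnitude graph (variable names are made up):

[coeff, score] = pca(fftMags);           % principal component analysis (Statistics Toolbox)
xy = score(:, 1:2);                      % first 2 components: one (x, y) point per graph
scatter(xy(:,1), xy(:,2));               % similar letter shapes should land near each other
feat26 = score(:, 1:26);                 % or keep 26 components, as suggested above

Plotting the first two principal components is often enough to see whether the FFT magnitudes of the same letter really cluster together before committing to a classifier.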

Related

How to convert the result of Monte-Carlo Flooding algorithm to set of polygons?

I am trying to solve this problem using the Monte-Carlo Flooding algorithm. As a result I receive a set of semicircles (the picture below), but the requested solution is trapezoid-like polygons. Can you please suggest an algorithm by which I can transform these semicircles into polygons?
First, you extract your pieces as contours using the Suzuki-Abe algorithm (Suzuki, S. and Abe, K., Topological Structural Analysis of Digitized Binary Images by Border Following. CVGIP 30(1), pp. 32-46 (1985)). You'll get all the contours out of your image as they are produced.
Then, you approximate the contours into polygons using the Ramer-Douglas-Peucker algorithm.
There is a well-known library which does all of this - OpenCV; see this link for details: https://docs.opencv.org/2.4.13.2/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
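The same two-step pipeline can also be sketched in MATLAB, assuming a binary mask BW of the flooded regions (bwboundaries does border following in the spirit of Suzuki-Abe; reducepoly is MATLAB's Ramer-Douglas-Peucker implementation):

boundaries = bwboundaries(BW);              % one [row col] contour per region
polygons = cell(size(boundaries));
for k = 1:numel(boundaries)
    % Ramer-Douglas-Peucker simplification; the tolerance is a tuning knob
    polygons{k} = reducepoly(boundaries{k}, 0.02);
end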

Combine detectors in Bag of visual words

I am using Bag of visual words for classification.
I have quantized the SIFT descriptors into 100 words for each image, encoded the histogram of each image, and completed the classification.
Now I want to try combining two different descriptors and detectors, i.e. SIFT and SURF, which means neither the number of key points nor the descriptor dimensionality will be the same (SIFT is 128-D and SURF is 64-D).
What will be the easiest way to combine them?
If, for each image, I encode one histogram for SIFT (a 100x1 histogram) and another for SURF (another 100x1) and then stack them together into a 200x1 histogram, will that be correct?
Any other way?
Thanks a lot in advance.
In bag of words, the number of key points and the descriptor size are irrelevant: once you generate the codebook, you get a histogram whose dimension depends only on the codebook size. Moreover, the histogram is normalized, so it does not depend on the number of features detected per image. If you have SIFT and SURF features, all you need to do is generate two codebooks (one per descriptor type), build one histogram against each, and concatenate the histograms to get your feature vector.
A brief overview of the method is mentioned here:
http://en.wikipedia.org/wiki/Bag-of-words_model_in_computer_vision
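A minimal MATLAB sketch of that concatenation for one image, assuming siftDesc (N x 128) and surfDesc (M x 64) hold its descriptors and siftBook / surfBook are 100 x 128 and 100 x 64 codebooks produced earlier with kmeans (all variable names are made up):

k = 100;
idxSift = knnsearch(siftBook, siftDesc);   % nearest codebook word per SIFT descriptor
idxSurf = knnsearch(surfBook, surfDesc);   % same for SURF, against its own codebook
hSift = histcounts(idxSift, 1:k+1);        % 100-bin word histogram
hSurf = histcounts(idxSurf, 1:k+1);
hSift = hSift / sum(hSift);                % normalize each part separately
hSurf = hSurf / sum(hSurf);
feature = [hSift, hSurf];                  % the 200-D stacked feature vector

Normalizing each half before stacking keeps one detector from dominating just because it fired more key points.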

Bayes classification in matlab

I have 50 images and created a database from the green channel of each image by separating the pixels into two classes (skin and wound) and storing their respective green-channel values.
Also, I have 1600 wound pixel values and 3000 skin pixel values.
Now I have to use Bayes classification in MATLAB to classify the skin and wound pixels in a new (test) image using the database that I have. I have tried the built-in 'diaglinear' option (of the classify function), but the results are poor, with a lot of misclassification.
Also, I don't know whether the data follows a normal distribution or not, so I can't just use a Gaussian estimate for the class-conditional probability density functions.
Is there any way to perform pixel wise classification?
If there is any part of the question that is unclear, please ask.
I'm looking for help. Thanks in advance.
If you really want to use pixel-wise classification (quite simple, but why not?), start by exploring the pixel value distributions with hist()/imhist(). That might give you a clue about Gaussianity...
Second, you might fit your values to some appropriate curves (Gaussians?) with fit() if you have the Curve Fitting Toolbox (or again, do it manually). Then multiply the curves by the prior probabilities of wound/skin if you want a MAP classifier, and finally find their intersection. Voila! You have your decision value V:
if Xi < V -> skin
else -> wound
(or the reverse, depending on which side of V the skin distribution lies)
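A minimal sketch of that pixel-wise MAP rule using fitdist from the Statistics Toolbox instead of the Curve Fitting Toolbox; skinVals, woundVals and testImage are assumed variable names, and swapping 'Normal' for 'Kernel' covers the case where the histograms do not look Gaussian:

pdSkin  = fitdist(skinVals(:),  'Normal');    % class-conditional density of skin pixels
pdWound = fitdist(woundVals(:), 'Normal');    % ... and of wound pixels
pSkin  = numel(skinVals) / (numel(skinVals) + numel(woundVals));   % prior from 3000 vs 1600
pWound = 1 - pSkin;
G = double(testImage(:,:,2));                 % green channel of the test image
postSkin  = pdf(pdSkin,  G(:)) * pSkin;       % unnormalized posterior per pixel
postWound = pdf(pdWound, G(:)) * pWound;
woundMask = reshape(postWound > postSkin, size(G));   % pixel-wise decision map

Comparing the two posteriors directly is equivalent to thresholding at the intersection point V described above.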

Pattern recognition in Neural Network using matlab simulation

I am new to neural networks in MATLAB. I want to create a neural network using MATLAB simulation.
This MATLAB simulation uses pattern recognition.
I am running on a Windows XP platform.
For example, I have sets of waveforms of circular shape.
I have extracted the poles.
These poles will teach my neural network that the shape is circular, so whenever I input another, slightly different circular waveform, the neural network should be able to distinguish the shape.
Currently, I have extracted the poles of these 3 shapes: cylinder, circle and rectangle.
But I am clueless about how I should go about creating my neural network.
I'd recommend using a SOM (self-organizing map) for pattern recognition since it's really robust. Also, there's a SOM Toolbox for Matlab you might be interested in. However, to make it learn waves while neglecting their offsets, you'd need to make some changes to the "similarity function". These changes will affect the SOM's training time quite a lot, but if that's not a problem, keep reading.
For the SOM you'll have to sample your waves into constant-sized vectors, say:
sin x -> sin_vector = (a1, a2, a3, ..., aN)
cos x -> cos_vector = (b1, b2, b3, ..., bN)
Usually, similarity of "SOM vectors" is calculated with the Euclidean distance. The Euclidean distance between those two vectors is huge since they have different offsets; in your case they should be considered similar, i.e. the distance should be small. So... if you don't sample all the similar waves from the same starting point, they will be classified into different classes. That is probably a problem. But! Similarity of vectors in a SOM is calculated in order to find the BMU (best-matching unit) of the map and pull the BMU's and its neighborhood's vectors towards the values of the given sample. So all you need to change is the way those vectors are compared and the way the vectors' values are pulled towards the sample, so that both become "offset-tolerant".
A slow but working solution is to first find the best offset index for each vector. The best offset index is the one that produces the smallest Euclidean distance to the sample. The node of the net with the smallest such distance is then the BMU. The BMU's and its neighborhood's vectors are then pulled towards the given sample using the offset index calculated for each node just before; see the sketch below. Everything else should work out of the box.
This solution is relatively slow but should work great. I'd recommend studying the concept of SOMs thoroughly and then reading this post (and the angry comments) again :)
PLEASE comment if you know a mathematical solution that would be better than the previous one!
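A minimal MATLAB sketch of that brute-force offset search, as a hypothetical helper (sample and a node's weight vector w are assumed to be 1 x N row vectors of equal length):

function [d, bestShift] = offsetDistance(sample, w)
    % Try every circular offset of the node vector and keep the best one.
    N = numel(w);
    d = inf;
    bestShift = 0;
    for s = 0:N-1
        dist = norm(sample - circshift(w, [0 s]));   % Euclidean distance at this offset
        if dist < d
            d = dist;
            bestShift = s;
        end
    end
end

During training you would use d to pick the BMU and bestShift to align each node's vector with the sample before pulling it, exactly as described above.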
You can also try Matlab's neural network pattern recognition tool nprtool, as it is specialized for training and testing neural networks for pattern recognition.
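The same workflow can be scripted with patternnet, the function behind nprtool. A minimal sketch, assuming poleFeatures is a (feature dimension) x (number of examples) matrix of your extracted poles and targets is a 3 x (number of examples) one-hot matrix for the three shapes (both names are made up):

net = patternnet(10);                     % one hidden layer with 10 neurons
net = train(net, poleFeatures, targets);  % opens the usual training GUI
scores = net(poleFeatures);               % per-class scores for each example
[~, predictedShape] = max(scores);        % index of the winning shape

nprtool essentially generates this kind of script for you once you have walked through its wizard.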

Homography to Projective transform

I've been attempting to figure out how to take a homography between two planes and convert it into a projective transform. Matlab does this automatically, but I've been trying to figure out how Matlab implements the conversion.
You can look at the source code in toolbox\images\images\maketform.m
At least within the editor you can get to this by hitting F4 on the function name.
A homography is a projective transform that maps lines to lines and preserves the cross ratio, but does not preserve parallelism or other similarity magnitudes (angles, distances, etc.).
A homography can be expressed as a homogeneous 3x3 matrix and computed in many (really, many) different ways according to your problem.
The most typical one is to determine 4 point correspondences between the two planes and use the Direct Linear Transform (DLT); there are many implementations of the DLT as well. If you are familiar with OpenCV, you can easily obtain such a homography matrix using cv::findHomography (http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=findhomography#findhomography).
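In MATLAB itself, a minimal sketch of both directions - estimating the homography from correspondences and wrapping an existing 3x3 matrix as a transform object. The point values below are made up, and note that maketform/fitgeotrans use the row-vector convention [u v 1] = [x y 1] * T, i.e. the transpose of the usual column-vector H:

movingPts = [0 0; 1 0; 1 1; 0 1];                        % 4 correspondences (made-up)
fixedPts  = [10 20; 55 22; 52 61; 8 57];
tform = fitgeotrans(movingPts, fixedPts, 'projective');  % DLT-style estimation
T = tform.T;                                             % the homography, 3x3
tformOld = maketform('projective', T);                   % the older API from maketform.m
[u, v] = tformfwd(tformOld, 0.5, 0.5);                   % map a point through it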
In general, I recommend you take a look at the "Multiple View Geometry" book by Hartley & Zisserman, which explains the concept of homographies in the context of computer vision in detail.