t-test in image segmentation - Matlab

How do I use a t-test in image processing? I am working on image segmentation with a split-and-merge algorithm in Matlab. To merge adjacent regions I need a t-test to compare the region means.

You probably need an F-test if you want to check whether several samples have the same mean. The exact formulas depend on the assumptions you can make about your data; see the http://en.wikipedia.org/wiki/F-test article. The F distribution can be evaluated in Matlab using fcdf.
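For the two-region merge decision in the question, a two-sample t-test is enough; for several regions at once, a one-way ANOVA F-test (anova1) applies, with fcdf giving the F distribution if you compute the statistic by hand. A minimal sketch, assuming maskA and maskB are hypothetical logical masks of two adjacent regions in a grayscale image img (ttest2 is from the Statistics Toolbox):

regionA = double(img(maskA));        % pixel intensities of the first region
regionB = double(img(maskB));        % pixel intensities of the second region
[h, p] = ttest2(regionA, regionB);   % h == 0: equal means not rejected
if h == 0
    % merge the two regions
end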

Related

Understanding Feature Extraction and Feature Vectors in Image Processing?

I am working on a small project in Matlab purely out of interest in image processing; I have not taken a degree or a course related to it.
I want to understand a small concept about feature extraction and feature vectors. I have read some articles about it and understand it in general terms, but my question is:
For example, I want to extract some information from the objects in a binary image: their length, width, and the distance between them. In one application I want to extract features from which some algorithms compute the width of all the objects, ignoring the length and distance. Can this be called feature extraction with respect to the width? And can the different vectors in which I store the widths be called feature vectors?
It makes me think that I might be complicating simple things. Should I use some other terminology for this instead of feature extraction and feature vectors?
Please tell me whether I am going in the right direction.
Thank you!
Feature extraction is the process of computing numerical values on regions/objects/shapes/blobs detected in an image. (Sometimes the detection itself is called extraction, and the features need not be numbers.)
The feature values can indeed be stored in vectors; usually they fill a table. Sometimes they are structured in a more complicated way (such as a graph, for instance). Most of the time they are used for classification/recognition purposes, or they can simply be the output of the process at hand.
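A small sketch of what this looks like in practice, assuming BW is a hypothetical binary image like the one in the question; regionprops computes per-object measurements, and the resulting numeric vectors are exactly what would be called feature vectors (pdist is from the Statistics Toolbox):

BW = imread('objects.png') > 0;                    % hypothetical binary image
stats = regionprops(BW, 'BoundingBox', 'Centroid');
widths = arrayfun(@(s) s.BoundingBox(3), stats);   % feature vector: width of each object
dists = pdist(cat(1, stats.Centroid));             % pairwise distances between object centroids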

Matlab Image Histogram Analysis: how do I test for an underlying bimodal distribution?

I am working with image processing in MATLAB. I have two different images whose histogram plots are as shown below.
[Histogram plots of Image 1 and Image 2 not shown.]
I have multiple images like those, and the only distinguishing (separating) feature is that some have a single peak while others have two peaks.
In other words, some can be thresholded (to generate good results) while others cannot. Is there any way I can separate the two kinds of images? Are there any MATLAB functions that do so, or any reference code that would help?
The function used is imhist().
If by "separate" you mean "distinguish", then yes: the property you describe is called bimodality, i.e. you have two peaks that can be separated by one threshold. So your question is actually "how do I test for an underlying bimodal distribution?"
One option to do this programmatically is binning. It is not the most robust method, but it is the easiest. It might work, it might not.
Kernel smoothing is probably the more robust solution: you shift and scale a kernel function (e.g. a Gaussian) to fit the data. This can be done with histfit in Matlab.
There are more solutions to this problem, which you can research yourself now that you know the terms involved. Be aware, though, that this is not a trivial problem if you want to do it properly.
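One rough way to code the test, shown as a sketch: smooth the histogram and count prominent peaks. Here img is an assumed grayscale image, findpeaks is from the Signal Processing Toolbox, and the prominence threshold of 0.05 is an arbitrary choice you would need to tune:

counts = imhist(img);                          % 256-bin grayscale histogram
smoothed = conv(counts, ones(9,1)/9, 'same');  % crude smoothing (moving average)
pks = findpeaks(smoothed, 'MinPeakProminence', 0.05 * max(smoothed));
isBimodal = numel(pks) >= 2;                   % two prominent peaks -> likely thresholdable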

PCA on SIFT descriptors and Fisher Vectors

I was reading the paper http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf and I find the Fisher Vector with GMM vocabulary approach very interesting, so I would like to test it myself.
However, it is totally unclear to me how they apply PCA dimensionality reduction to the data. Do they compute the feature space first and then perform PCA on it? Or do they perform PCA on each image's SIFT descriptors and then create the feature space?
Is this supposed to be done for both the training and the test sets? To me the answer is obviously yes, but it is not stated clearly.
I was thinking of creating the feature space from the training set and then running PCA on it. Then I could use the PCA coefficients from the training set to reduce each image's SIFT descriptors before they are encoded into a Fisher Vector for later classification, whether the image is a test or a training image.
EDIT 1:
Simplistic example:
[coef, reduced_feat_space] = pca(Feat_Space', 'NumComponents', 80);
and then (for both test and train images)
reduced_test_img = test_img * coef; % coef already has only 80 columns, so no further selection is needed
What do you think? Cheers
It looks to me like they do SIFT first and then PCA. The article states in Section 2.1 that "The local descriptors are fixed in all experiments to be SIFT descriptors...", and the introduction describes "the following three steps: (i) extraction of local image features (e.g., SIFT descriptors), (ii) encoding of the local features in an image descriptor (e.g., a histogram of the quantized local features), and (iii) classification ... Recently several authors have focused on improving the second component". So it looks to me like the dimensionality reduction occurs after SIFT, and the paper is simply discussing a few different methods of doing this and the performance of each.
I would also guess (as you did) that you have to run it on both sets of images. Otherwise you would be using two different metrics to classify the images, which really is like comparing apples to oranges. Comparing a reduced-dimensional representation to the full one (even for the exact same image) will show some variation. In fact, that is the whole premise of PCA: you give up some of the smaller features (usually) for computational efficiency. The real question with PCA, or any dimensionality reduction algorithm, is how much information you can give up and still reliably classify/segment different data sets.
And as a last point, you have to treat both image sets the same way, because your end goal is to use the Fisher vectors for classification. Now imagine you decided that training images don't get PCA and test images do. If I give you some image X, what would you do with it? How could you treat one set of images differently from another before you've classified them? Using the same technique on both sets means you can process my image X and then decide where to put it.
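A sketch of that consistent treatment, assuming train_desc and test_desc are hypothetical matrices with one SIFT descriptor per row: Matlab's pca centers its input, so the training mean (its sixth output) has to be subtracted from the test data before projecting, which the code in EDIT 1 currently skips.

[coef, train_reduced, ~, ~, ~, mu] = pca(train_desc, 'NumComponents', 80);
test_reduced = bsxfun(@minus, test_desc, mu) * coef;   % same mean and basis as for training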
Anyway, I hope that helped and wasn't too rant-like. Good luck :-)

Multiscale search for HOG+SVM in Matlab

First of all, this is my first question here, so I hope I can explain it clearly.
My goal is to detect different classes of traffic signs in images. For that purpose I have trained binary SVMs following these steps:
First I got a database of cropped traffic signs like the one in the link below. I considered different classes (prohibition, danger, etc.) and negative images. All of them were scaled to 40x40 pixels.
http://i.imgur.com/Hm9YyZT.jpg
I trained linear SVM models for each class (one-vs-all), using HOG as the feature. Each image is described by a 1728-dimensional feature vector (I concatenate the feature vectors of the three image planes). I did cross-validation to set the parameter C and tested on previously unseen 40x40 images, and I got very accurate results (F1 score over 0.9 for all classes). I used libsvm for training and testing.
Now I want to detect signs in full road images by sliding a window over different image scales. The problem I'm facing is that I couldn't find any function that does this for me (like detectMultiScale in OpenCV), and my solution is very slow and rudimentary: a triple for loop that, for each scale, crops consecutive overlapping 40x40 windows, obtains HOG features, and applies svmpredict to each one.
Can someone give me a clue to a faster way to do it? I also thought about computing the HOG feature vector of the whole input image and then reordering that vector into a matrix where each row holds the features of one 40x40 window, but I couldn't find a straightforward way of doing that.
Thanks,
I would suggest using SURF feature detection; however, I don't know whether this would also be too slow for your needs.
See http://morf.lv/modules.php?name=tutorials&lasit=2 for more information on how to implement it and whether it is a viable solution for you.
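If you stay with the sliding-window HOG+SVM approach, one easy speed-up is to batch the classifier calls: collect all window features at a given scale into one matrix and call svmpredict once per scale instead of once per window. A rough sketch under assumptions: extractHOGFeatures (Computer Vision Toolbox) with default parameters, which gives 576 values per 40x40 plane and hence the 1728-dimensional vector from the question, a trained libsvm model in model, and a made-up stride and scale pyramid:

win = 40; stride = 8;            % assumed window size and step
scales = [1.0 0.8 0.6 0.4];      % assumed scale pyramid
detections = [];
for s = scales
    I = imresize(img, s);
    [h, w, ~] = size(I);
    feats = []; boxes = [];
    for r = 1:stride:(h - win + 1)
        for c = 1:stride:(w - win + 1)
            patch = I(r:r+win-1, c:c+win-1, :);
            f = [extractHOGFeatures(patch(:,:,1)), ...   % one HOG vector per image
                 extractHOGFeatures(patch(:,:,2)), ...   % plane, concatenated to
                 extractHOGFeatures(patch(:,:,3))];      % 1728 dimensions
            feats = [feats; f]; boxes = [boxes; c r win win]; %#ok<AGROW>
        end
    end
    if isempty(feats), continue; end
    labels = svmpredict(zeros(size(feats, 1), 1), double(feats), model); % one call per scale
    detections = [detections; boxes(labels == 1, :) / s]; %#ok<AGROW>
end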

Estimating an image deblurring/denoising technique

I'm new to image processing and want to deblur/denoise some images using Matlab. Example:
[Input and output images not shown.]
I don't know the exact blurring/noising process that produced the second image. At first I proceeded by trial and error with the Wiener deconvolution method, but I was not able to reach the best results.
So my question is: is there a cleverer method than trial and error?
(Note: the output image was obtained from the Robot36 radio transmission decoder.)
I'd suggest trying the Richardson-Lucy deblurring algorithm. It is available as a built-in function in the Image Processing Toolbox. It uses multiple iterations, which you can set to control the degree to which the image is deblurred. I've always found it a very useful method. Here's the doc link: deconvlucy
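A minimal sketch of deconvlucy usage; since the true PSF is unknown in this case, the Gaussian below is only a guessed starting point (the file name, kernel size, sigma, and iteration count are all placeholders to tune, and deconvblind can estimate the PSF instead):

I = im2double(imread('decoded.png'));   % hypothetical file name
psf = fspecial('gaussian', 7, 2);       % guessed blur kernel: tune size and sigma
J = deconvlucy(I, psf, 20);             % 20 iterations; more sharpens further but amplifies noise
imshowpair(I, J, 'montage');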