Given a saliency map of an image (values between 0 and 1), my aim is to compute a global saliency score for it. I'm a bit confused: I don't know whether to use the mean or the median.
The problem with the mean is that low saliency values will pull down the global saliency score.
What kind of summary statistic could I use for this?
Thanks in advance.
I think you want to compare if one image is more salient than another. Is that right?
First, please check whether the saliency map is already normalized. Many algorithms do this so that all images have the same mean (e.g., 0.5).
If there is no per-image normalization, and you do not want to use the mean or median, perhaps you can use the mode.
Two sample outputs would be helpful. :)
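To make the three candidate statistics concrete, here is a minimal sketch in Python/NumPy (rather than MATLAB) computing the mean, median, and a histogram-based mode of a hypothetical saliency map; the random map and the 10-bin histogram are my own assumptions for illustration:

```python
import numpy as np

# Hypothetical saliency map with values in [0, 1]
rng = np.random.default_rng(0)
saliency = rng.random((4, 4))

mean_score = saliency.mean()
median_score = np.median(saliency)

# Histogram-based mode: centre of the most populated of 10 equal-width bins
counts, edges = np.histogram(saliency, bins=10, range=(0.0, 1.0))
mode_bin = counts.argmax()
mode_score = (edges[mode_bin] + edges[mode_bin + 1]) / 2
```

For a continuous-valued map the exact mode is ill-defined, which is why the sketch bins the values first; the bin count controls how smooth the estimate is.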
I am working on a small project in MATLAB purely out of interest in image processing; I have not taken a degree or a course related to it.
I want to understand a small concept about feature extraction and feature vectors. I have read some articles on the subject and broadly understand it, but my question is:
For example, I want to extract some information about the different objects in a binary image: their length, width, and the distance between them. In one application I only care about width, so I apply some algorithms to compute the width of all the objects and ignore the length and distance. Can this be called feature extraction with respect to the width? And is storing the results in different vectors what is meant by feature vectors?
This makes me think I might be over-complicating simple things. Should I use some other terminology for this instead of "feature extraction" and "feature vectors"?
Please tell me whether or not I am going in the right direction.
Thank you!
Feature extraction is the process of computing numerical values on regions/objects/shapes/blobs detected in an image. (Sometimes the detection itself is called extraction, and the features need not be numbers.)
The feature values can indeed be stored in vectors; usually they fill a table. Sometimes they are structured in a more complicated way (such as a graph, for instance). Most of the time they are used for classification/recognition purposes, or they may simply be the output of the process at hand.
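As a concrete sketch of "one feature vector per object", here is a Python example (assuming NumPy and SciPy are available, rather than MATLAB) that labels the objects in a tiny hand-made binary image and stores a [height, width] vector for each; the image contents are invented for illustration:

```python
import numpy as np
from scipy import ndimage

# Tiny binary image with two rectangular objects (hypothetical data)
img = np.zeros((10, 10), dtype=int)
img[1:4, 1:6] = 1   # object 1: height 3, width 5
img[6:9, 2:5] = 1   # object 2: height 3, width 3

labels, n_objects = ndimage.label(img)

# One feature vector per detected object: [height, width] of its bounding box
feature_vectors = []
for sl in ndimage.find_objects(labels):
    height = sl[0].stop - sl[0].start   # extent along rows ("length")
    width = sl[1].stop - sl[1].start    # extent along columns
    feature_vectors.append([height, width])
```

If you only cared about width, you would keep just the second element of each vector; that is still feature extraction, only with a smaller feature vector.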
My question is regarding image processing techniques in MATLAB. I am designing a proof of concept which will discriminate the digit "4" from a large set of digit images.
I have used many image processing techniques, such as edge detection. I am also using one technique where I take the mean pixel value of each column and row of the image. However, I am unsure what this feature extraction method does exactly. Can someone clarify why this is a type of feature extraction, and does this method have a particular name?
Yes, you can call this kind of feature a statistical feature. They can be useful in some problems, though I am not sure about digits. In my opinion, you should also use the variance as a feature. You can check this paper on statistical features:
https://pdfs.semanticscholar.org/9a0d/c802a6e6f7b193e2b90cb84ca119ebb1e705.pdf
Image moments are also useful, you can explore the use of them:
http://www.sci.utah.edu/~gerig/CS7960-S2010/handouts/CS7960-AdvImProc-MomentInvariants.pdf
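To illustrate what an image moment is, here is a small Python/NumPy sketch (not MATLAB) of a central moment computed directly from its definition; the single-bright-pixel test image is my own toy example:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale image, per the standard definition."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00   # centroid x
    ybar = (y * img).sum() / m00   # centroid y
    return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()

# Toy image: a single bright pixel, so the centroid sits on that pixel
img = np.zeros((5, 5))
img[2, 2] = 1.0
```

By construction mu_00 is the total mass and mu_10 = mu_01 = 0; higher-order central moments (and their invariant combinations, as in the linked handout) describe the shape's spread and asymmetry independently of its position.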
Yes, in a way it is a type of feature extraction. The mean of each row is the sum of the pixel values divided by the number of pixels. The sum of the pixels can be interpreted as a projection of the image onto the y axis; the same goes for the columns as a projection onto the x axis.
Whether that type of feature extraction helps you depends on your problem.
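As a sketch of these projection features, here is a Python/NumPy version (the question is about MATLAB, so treat this as a translation); the 5×5 digit-like image is invented for illustration:

```python
import numpy as np

# Toy 5x5 binary image of a "4"-like shape (hypothetical data)
img = np.array([
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
], dtype=float)

row_profile = img.mean(axis=1)  # projection onto the y axis
col_profile = img.mean(axis=0)  # projection onto the x axis

# Concatenating both profiles gives a fixed-length feature vector
features = np.concatenate([row_profile, col_profile])
```

For same-sized digit images this yields a feature vector of fixed length (rows + columns), which is what makes it usable for classification.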
I have 50 images and created a database from the green channel of each image by separating the pixels into two classes (skin and wound) and storing their respective green-channel values.
In total, I have 1600 wound pixel values and 3000 skin pixel values.
Now I have to use Bayes classification in MATLAB to classify the skin and wound pixels in a new (test) image using the database that I have. I have tried the built-in 'diaglinear' option, but the results are poor, with a lot of misclassification.
Also, I don't know whether the data follow a normal distribution, so I can't simply assume a Gaussian estimate for the class-conditional probability density functions.
Is there any way to perform pixel-wise classification?
If there is any part of the question that is unclear, please ask.
I'm looking for help. Thanks in advance.
If you really want to use pixel-wise classification (quite simple, but why not?), try exploring the pixel value distributions with hist()/imhist(). It might give you a clue about Gaussianity...
Second, you might fit your values to some appropriate curves (Gaussians?) with fit() if you have the Curve Fitting Toolbox (or do it manually). Then multiply the curves by the prior probabilities of wound/skin if you want a MAP classifier, and finally find their intersection. Voilà! You have your decision value V:
if Xi is on the skin side of V -> skin
else -> wound
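The fit-then-threshold recipe above can be sketched in Python/NumPy (instead of MATLAB's fit()); the Gaussian parameters and sample counts below are invented stand-ins for the real green-channel data, and the class likelihoods are estimated simply by each sample's mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical green-channel samples for the two classes
skin = rng.normal(loc=120.0, scale=10.0, size=3000)
wound = rng.normal(loc=60.0, scale=15.0, size=1600)

# Fit a Gaussian to each class by estimating mean and standard deviation
mu_s, sd_s = skin.mean(), skin.std()
mu_w, sd_w = wound.mean(), wound.std()

# Class priors from the sample counts (3000 vs 1600)
p_s = len(skin) / (len(skin) + len(wound))
p_w = 1.0 - p_s

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def classify(pixel_value):
    # MAP rule: pick the class with the higher prior * likelihood
    if p_s * gauss(pixel_value, mu_s, sd_s) >= p_w * gauss(pixel_value, mu_w, sd_w):
        return "skin"
    return "wound"
```

Applying classify() to every pixel of the test image's green channel gives the pixel-wise classification; the intersection of the two weighted curves is exactly where the MAP rule switches class.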
I have applied two different image-enhancement algorithms to a particular image and obtained two resulting images. Now I want to compare the quality of those two images in order to judge the effectiveness of the two algorithms, and to pick the more appropriate one based on a comparison of the feature vectors of the two images. What feature vectors would be suitable to compare in this case?
I am asking in the context of comparing the texture features of the images, and which feature vector would be most suitable.
I need mathematical support for verifying the effectiveness of either algorithm based on an evaluation of the images, for example using contrast and variance. Are there any other approaches to do that?
Would a better approach be to compute some noise/signal ratio by comparing the image spectra?
Slayton is right: you need a metric and a way to measure against it, which can be an academic project in itself. However, I can think of one approach straight away; not sure if it makes sense for your specific task at hand:
Metric:
The sum of abs(colour difference) across all pixels. The lower the sum, the more similar the images are.
Method:
For each pixel, get the absolute colour difference (or distance, to be precise) in LAB space between the original and the processed image, and sum that up. Don't ruin your day trying to understand the full Wikipedia article and coding it yourself; this has been done before. Try reusing the methods getDistanceLabFrom(Color color) or getDistanceRgbFrom(Color color) from this PHP implementation. It worked like a charm for me when I needed a way to match the colour of pixels in a JPG picture, which is basically the same principle.
The theory behind it (as far as my limited understanding goes): it treats RGB or (better) LAB colour space as a three-dimensional room and then calculates distances in it. That's why it works well, whereas looking at a colour code from a one-dimensional perspective hardly worked for me at all.
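The metric itself fits in a few lines; here is a Python/NumPy sketch using plain RGB Euclidean distance (the getDistanceRgbFrom variant mentioned above; a proper LAB conversion would be more perceptually accurate, and the tiny test images are invented):

```python
import numpy as np

def total_color_distance(img_a, img_b):
    """Sum of per-pixel Euclidean colour distances between two images.

    Uses RGB as a rough approximation; converting both images to LAB
    first gives a more perceptually uniform distance.
    """
    diff = img_a.astype(float) - img_b.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1)).sum()

# Two 2x2 RGB images that differ in exactly one pixel
a = np.zeros((2, 2, 3))
b = np.zeros((2, 2, 3))
b[0, 0] = [3.0, 4.0, 0.0]  # this pixel differs by distance sqrt(3^2 + 4^2) = 5
```

Running the metric on each enhanced image against the original gives one number per algorithm; the lower one is the more faithful enhancement by this measure.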
The usual way is to start with a reference image (a good one), then add some noise to it (in a controlled way).
Then your algorithm should remove as much of the added noise as possible. The results are easy to compare with a signal-to-noise ratio (see Wikipedia).
This approach is easy to apply with simple noise models, but if you aim to improve more complex appearance issues, you must devise a way to apply the corresponding degradation, which is not easy.
Another quite common way is the one recommended by slayton: gather all your colleagues to judge the output of your algorithm, then average their impressions.
If you have only the 2 images and no reference (highest-quality) image, then you can see my crude solution/bash script here: https://photo.stackexchange.com/questions/75995/how-do-i-compare-two-similar-images-sharpness/117823#117823
It gets the 2 filenames and outputs the higher quality filename. It assumes the content of the images is identical (same source image).
It can be fooled though.
I am currently asked to compare certain images with each other (using nested for loops) and determine which two images are closest to each other (not necessarily exactly the same) and which two are the most different, using either linear correlation or convolution.
As all images are 2D matrices with exactly the same dimensions, the only thing I can come up with so far (if using correlation) is the following:
a = imread('image_1.jpg');
b = imread('image_2.jpg');
c = corr2(a, b);
if c == 1
    disp('The images are the same')
end
The problem is that the above only works when comparing an image with itself; similar-looking images don't match. How can I solve this problem? Thanks.
You need to use the function xcorr2.
corr2 is a correlation function and returns a single value; the larger the value, the better the match. You could store the values of c in a separate matrix C and pick the pair with the largest c to find the two images that are closest to each other.
Normalize your images, then use conv2 instead and find the maximum. It will be more forgiving of any registration problem you might have. If you upload sample images that you consider similar and not similar, we might be able to help you better.
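The core of both answers is to rank pairs by a correlation score instead of testing for exact equality. Here is a Python/NumPy sketch of a corr2-style coefficient (a hypothetical port of MATLAB's corr2, not its actual source) applied to invented test images:

```python
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient between two same-sized images,
    analogous to MATLAB's corr2 (this port is an assumption)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

rng = np.random.default_rng(3)
img1 = rng.random((8, 8))
img2 = img1 + rng.normal(scale=0.05, size=img1.shape)  # similar image
img3 = rng.random((8, 8))                              # unrelated image
```

In the nested loops, store corr2(img_i, img_j) for every pair and take the argmax/argmin: the pair with the highest coefficient is the closest, the lowest the most different, with no need for an exact c == 1 test.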