I want to compute a histogram of oriented gradients (HOG) on my image, but I don't want to divide the image into regular square blocks. Instead, I'm going to divide it into uniform log-polar bins (like the bins in shape context, or the bins shown here), and then compute a gradient histogram with 8 orientations on each bin (block).
But:
1) I don't know how to divide the image into log-polar bins. Can I use shape context, or the above link, for partitioning into these bins?
2) How can I compute HOG on these bins, since the available implementations (in MATLAB, OpenCV and EmguCV) use square bins? I have no idea.
What you are describing sounds pretty much like the C-HOG (circular HOG) features in the original HOG paper. The only difference with respect to typical HOG is the shape of the bins. I think it would be best to:
iterate over the pixels
calculate the circular bin number for each pixel
add the contribution of the gradient at the pixel to the histogram corresponding to the bin number
A good starting point would be the pseudo-MATLAB code in this answer: https://stackoverflow.com/a/10115112/1576602
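For reference, here is a minimal MATLAB sketch of that accumulation loop. The input patch, the 2 log-radial rings, the 8 angular sectors and the 8 orientation bins are assumptions for illustration, not values from the question:
% Minimal sketch of C-HOG-style accumulation over log-polar spatial bins.
img = im2double(imread('patch.png'));      % hypothetical input patch
if size(img,3) == 3, img = rgb2gray(img); end

[gx, gy] = gradient(img);                  % image gradients
mag   = hypot(gx, gy);                     % gradient magnitude
theta = mod(atan2(gy, gx), pi);            % unsigned orientation in [0, pi)

[rows, cols] = size(img);
cx = (cols + 1)/2;  cy = (rows + 1)/2;     % centre of the patch
[X, Y] = meshgrid(1:cols, 1:rows);
r   = hypot(X - cx, Y - cy);               % radius of each pixel
phi = mod(atan2(Y - cy, X - cx), 2*pi);    % polar angle of each pixel

nRad = 2;  nAng = 8;  nOri = 8;            % assumed bin counts
rMax = max(r(:)) + eps;
radBin = min(floor(nRad*log1p(r)/log1p(rMax)) + 1, nRad);  % log-radial bin
angBin = min(floor(nAng*phi/(2*pi)) + 1, nAng);            % angular sector
oriBin = min(floor(nOri*theta/pi) + 1, nOri);              % orientation bin

H = zeros(nRad, nAng, nOri);               % one histogram per spatial bin
for k = 1:numel(img)                       % iterate over the pixels
    H(radBin(k), angBin(k), oriBin(k)) = H(radBin(k), angBin(k), oriBin(k)) + mag(k);
end
descriptor = H(:) / (norm(H(:)) + eps);    % L2-normalised feature vector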
I am building a pyramid of images: I start with a large image and build a smaller one from it, then an even smaller one from that, and so on. I use interpolation to reduce the image, and I need to understand which interpolation method loses the least information between levels. That is what I mean by interpolation quality.
I am looking at horizontal gradients. Please tell me how good this criterion is, or whether there is something better.
Blurred = imfilter(img, PSF);                                  % img and PSF assumed defined earlier
Blurred = im2double(Blurred);
Blurred2 = imresize(Blurred, [300 300], "Method", "bicubic");  % downsample with imresize

[x0, y0] = meshgrid(1:360, 1:360);                             % original 360x360 grid
[x, y]   = meshgrid(1:1.2:360, 1:1.2:360);                     % coarser sampling grid
Blurred3 = interp2(x0, y0, Blurred, x, y, "spline");           % downsample with interp2

gradX = diff(Blurred, 1, 1);                                   % finite-difference gradients
gradY = diff(Blurred, 1, 2);
gradX2 = diff(Blurred2, 1, 1);
gradY2 = diff(Blurred2, 1, 2);
gradX3 = diff(Blurred3, 1, 1);
gradY3 = diff(Blurred3, 1, 2);

[h,  cx]  = imhist(gradX);                                     % histograms of the gradients
[h2, cx2] = imhist(gradX2);
[h3, cx3] = imhist(gradX3);
h  = log10(h);                                                 % log scale for plotting
h2 = log10(h2);
h3 = log10(h3);

figure, plot(cx, h)
hold on
plot(cx2, h2);
plot(cx3, h3);
hold off
You're using the finite difference approximation to the derivative. The units in gradX are intensity units/pixel, with "pixel" the distance between pixels (which is assumed to be 1). When you rescale your image, you increase the pixel size, but in the derivative you're still assuming the distance between pixels is 1. Thus, the values in gradX2 are larger than those in gradX. You'd have to normalize by the image width to correct for this effect.
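As a rough sketch of that normalisation, dividing each finite difference by its sampling step expresses all gradients in intensity per original pixel. The step factors below are assumptions based on the resizes used in the question (360 to 300 pixels, and sampling every 1.2 pixels):
gradX_n  = gradX;               % original image: sampling step of 1 pixel
gradX2_n = gradX2 / (360/300);  % Blurred2 pixels are 360/300 original pixels apart
gradX3_n = gradX3 / 1.2;        % Blurred3 was sampled every 1.2 original pixels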
But still, after normalization, I don't see how this is a measure of quality of the interpolation. The right question would be: how well can I reconstruct Blurred from Blurred2? I'm assuming here that Blurred has been blurred just sufficiently to avoid aliasing when resampling the image.
I would apply a 2nd round of interpolation to Blurred2 to recover an image of the same size as Blurred, then compare the two images using MSE or similar error measure.
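A minimal sketch of that comparison, assuming Blurred and Blurred2 are the variables from the question and that intensities are in [0, 1]:
Restored = imresize(Blurred2, size(Blurred), "bicubic");   % upsample back to the original size
err  = Restored - Blurred;
mse  = mean(err(:).^2);                                    % mean squared error
psnr_dB = 10*log10(1/mse);                                 % PSNR for images in [0, 1]
fprintf('MSE: %.6f   PSNR: %.2f dB\n', mse, psnr_dB);
The lower the MSE (or the higher the PSNR), the better the downsampling method preserved the information in Blurred.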
I am new to MATLAB. I created a triangle, and now I need to gather all the x,y coordinates on its three sides in order to find the angle each point makes with the starting point (0,0).
My approach is:
side_1=[linspace(0,2,100),linspace(1,1,100)]
side_2=[linspace(2,0,100),linspace(1,5,100)]
side_3=[linspace(0,0,100),linspace(5,1,100)]
all_coordinates=(side_1, side_2, side_3)
The 4th line of the above code failed. I need to make a matrix that contains all the x,y coordinates, so I can calculate the angle that every point makes with (0,0) using atan. Looking for advice.
The first three lines of your code create three 1x200 row vectors. I presume you are trying to combine them into a single matrix with dimensions 3x200. In that case, use:
all_coordinates = [side_1; side_2; side_3]
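To then get the angles, a minimal follow-up sketch, assuming (as in the question) that the first 100 entries of each side are the x coordinates and the last 100 are the matching y coordinates:
all_coordinates = [side_1; side_2; side_3];   % 3x200 matrix
x = all_coordinates(:, 1:100);                % x values of all three sides
y = all_coordinates(:, 101:200);              % matching y values
angles = atan2(y, x);                         % angle of every point w.r.t. (0,0), in radians
atan2 is used here instead of atan so that points with x = 0 (the third side) don't cause a division by zero.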
I'm trying to make a confusion matrix for some classification problem.
So far, I've managed to make a 10 by 10 matrix that stores the accuracy of my estimates for the classification problem.
Now I want to make a 10 by 10 square image that has a darker color where the value at that location of the matrix is higher (values range from 0 to 100).
I've never done graphics with Matlab before.
Any help would be greatly appreciated.
Thanks.
I would use image. For example:
img = 100*rand(4,4); % Your data (values from 0 to 100)
img = img./100*64;   % The `image` function indexes into the default 64-entry colormap
image(img);          % or `image(img')` depending on how you want the axes
colormap('gray');    % For grayscale images (note MATLAB spells it 'gray')
axis equal;          % For square pixels
Or for inverted colours you can change the one line to:
img = 64-img./100*64;
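As a side note, a sketch of the same idea using imagesc, which scales the data onto the colormap automatically so the manual 0-to-64 rescaling isn't needed (the random matrix below is just a placeholder for the real 10 by 10 confusion matrix):
confMat = 100*rand(10,10);     % placeholder for the real confusion matrix
imagesc(confMat, [0 100]);     % map the range 0..100 onto the colormap
colormap(flipud(gray));        % flipped grayscale: higher values appear darker
colorbar;                      % show the value scale
axis equal tight;              % square cells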
I need to show this image in 3 colors, by giving each region of similar color a single color. The result should use only 3 colors, with no region of the image left out.
How can I combine the small regions that have the same color into one region, as far as possible?
I need to reduce the number of regions.
Thanks.
I hope you already have the regions themselves; otherwise you'd be asking too broad a question, as those superpixels are hard to get.
While I won't write the code for you, I'll give you the steps needed.
Find the average color of each region. Remember to work in HSV, not RGB. Also remember that H is circular: H = 1 and H = 0 represent the same hue.
Cluster those colors, for example with k-means. Create 3 clusters and compute their centroids; those will be your 3 colors.
Convert the 3 centroids and the mean color of each superpixel to the L*a*b* color space. This space is designed so that the closest color is the most similar color. Compute the Euclidean distance from each superpixel's mean color to the 3 "class colors"; the class at minimum distance is the one assigned to that superpixel.
You can find help on each of these steps easily if you Google / search Stack Overflow.
The nice thing about coding it properly is that you can try more colors, say 5 or 6, to see if the image/classification is better. 3 seems like too few colors.
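A rough MATLAB sketch of those steps, under some assumptions: the region labels come from superpixels (your own segmentation would be used instead), the per-region means are computed in RGB rather than HSV for brevity, and regions are assigned directly to their k-means cluster in L*a*b* (Image Processing and Statistics toolboxes required):
img    = im2double(imread('regions.png'));   % hypothetical input image
labels = superpixels(img, 200);              % hypothetical pre-segmentation into regions

nRegions = max(labels(:));
meanRGB  = zeros(nRegions, 3);
for k = 1:nRegions                           % average color of each region
    for c = 1:3
        chan = img(:,:,c);
        meanRGB(k, c) = mean(chan(labels == k));
    end
end

meanLab = rgb2lab(meanRGB);                  % L*a*b*: Euclidean distance ~ perceptual difference
[cls, centroidLab] = kmeans(meanLab, 3);     % 3 clusters; the centroids are the 3 colors

centroidRGB = min(max(lab2rgb(centroidLab), 0), 1);   % back to RGB, clamped to [0,1]
out = label2rgb(labels, centroidRGB(cls, :));         % paint every region with its cluster color
imshow(out)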
I want to remove the blood vessels in these images and then detect the microaneurysms (the small red dots). Please suggest a method. Below is my image after enhancement:
You could do something like this:
A = imread('tFKeD.jpg');           % the enhanced input image
if size(A,3) == 3, A = rgb2gray(A); end
BW = imbinarize(A);                % binarize the image
IM2 = imcomplement(BW);            % invert so that your objects are 1
se = strel('diamond',3);           % create a morphological structuring element
BW2 = imdilate(IM2,se);            % dilate to connect vessel fragments
L = bwlabel(BW2);                  % label your objects
E = regionprops(L,'Area');         % get the respective areas
Area = [E.Area];                   % convert to a vector
[~,largestObject] = max(Area);     % find the one with the largest area
vessel = (L == largestObject);     % the largest object is the vessel tree
imshow(vessel)
I suggest the following approach:
1) Take a threshold on the input image and keep all the values below it. This will generate a mask in which the blood vessels and the microaneurysms are marked in white and the rest is black. The threshold value can be determined from the image histogram; from looking at the image, it should be low, because the blood vessels and the microaneurysms are relatively dark.
2) Calculate the connected components in the mask using bwconncomp.
3) Perform noise cleaning on the mask from the previous stage. This can be done with morphological operations (such as imclose), or by zeroing out connected components that are too small to be classified as microaneurysms.
4) The biggest connected component should represent the blood vessels; remove it from your mask. The output of this stage will be the desired result.
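A minimal MATLAB sketch of that approach; the threshold value and the minimum component size below are assumptions that would need tuning on the actual image:
A = im2double(imread('enhanced.png'));   % hypothetical enhanced input image
if size(A,3) == 3, A = rgb2gray(A); end

mask = A < 0.3;                          % keep dark pixels (vessels + microaneurysms); threshold is a guess
mask = imclose(mask, strel('disk', 2));  % morphological noise cleaning
mask = bwareaopen(mask, 10);             % drop components too small to be microaneurysms

CC = bwconncomp(mask);                   % connected components
numPix = cellfun(@numel, CC.PixelIdxList);
[~, biggest] = max(numPix);              % the largest component should be the vessel tree
mask(CC.PixelIdxList{biggest}) = false;  % remove the vessels

microaneurysms = mask;                   % what remains are candidate microaneurysms
imshow(microaneurysms)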