Detected SURF Feature location - matlab

I am using SURF on an image of size 60*83, with varying scale levels and MetricThreshold, to generate more blobs. But the locations in the points2 vector show coordinates that seem to be beyond the dimensions of the input image, and I really wonder why that is. I need to obtain the exact coordinates of the detected key-points.
I2 = rgb2gray(Temp); %I2= 60*83 uint8
points2 = detectSURFFeatures(I2,'NumScaleLevels',6,'MetricThreshold',600);
When I display the locations of the detected points in the command window, some of the x coordinates exceed what I expected the image dimension to be.
But if I use the following code instead, all of the coordinates fall inside the image dimensions.
points2 = detectSURFFeatures(I2);
I need to do this using varying scale levels and MetricThreshold. Thanks in advance.

MATLAB stores a matrix as nOfRows x nOfCols, while detectSURFFeatures returns positions as [x, y]:
http://www.mathworks.com/help/vision/ref/surfpoints-class.html
So your results are in range: x can be as large as the image width (83 columns), even though size(I2,1) is only 60.
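A quick way to check this, using the points2 from the question (a sketch; the variable names are taken from the code above):
loc = points2.Location;                        % N-by-2 matrix, columns are [x, y]
[h, w] = size(I2);                             % h = number of rows (60), w = number of columns (83)
inRange = all(loc(:,1) <= w & loc(:,2) <= h)   % should print 1 (true)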

What does size(I2) return? From what you wrote, I would expect it to return [60, 83], where 60 is the height of the image (number of rows), and 83 is the width (number of columns). If so, then your results make perfect sense, because the SURFPoints locations are [x,y].
You can also see if your points make sense by visualizing them:
imshow(I2)
hold on
plot(points2)
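If the goal is to read the image at the detected key-point locations, here is a minimal sketch; remember to swap [x, y] into (row, col) order and round, since the locations have sub-pixel accuracy:
loc = round(points2.Location);       % N-by-2, columns are [x, y], rounded to whole pixels
pixVals = zeros(size(loc, 1), 1);
for i = 1:size(loc, 1)
    pixVals(i) = I2(loc(i,2), loc(i,1));   % image is indexed as (row, col), i.e. (y, x)
end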

Related

How to programmatically process the image to black and white and separate out the polygon

I have an image that represents a polygon.
I want to process it in MATLAB and generate the image below.
Basically, I am asking how to separate the polygon out from the rest of the image. This question was inspired here.
Since we are only interested in the red pixels, we can use the first channel (red) to extract the centroid coordinates of each scaled pixel. Because there may be slight differences between coordinates that should be the same, we can use the third output of the uniquetol function to convert the absolute coordinates to relative coordinates, and then use accumarray to convert those coordinates into a binary image.
[a,m]=imread('KfXkR.png'); %read the indexed image
rgb = ind2rgb(a,m); %convert it to rgb
region = rgb(:,:,1)>.5; %extract the red channel and convert to binary to pick out the red pixels
cen = regionprops(region,'Centroid'); %find absolute coordinates of the centroid of each scaled pixel
colrow = reshape([cen.Centroid],2,[]); %reformat/reshape
[~,~,col] = uniquetol(colrow(1,:),0.1,'DataScale',1); %convert absolute coordinates to relative coordinates, correcting possible slight variations
[~,~,row] = uniquetol(colrow(2,:),0.1,'DataScale',1);
result = accumarray([row col],1); %make the binary image from coordinates of pixels
imwrite(result,'result.png')
Scaled result:
Unscaled:
I think the function contourc will get the polygon:
C = contourc(img, [1 1]); % img is 2-D double in range [0 1]
The format of the output C is a little tricky, but for a single contour level it is easy enough to handle. You can read the documentation for contourc to construct the polygon; a sketch is below.
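For reference, contourc packs each contour segment into C as a header column [level; numberOfVertices] followed by that many [x; y] columns, so a short loop (a sketch, using the C from above) can pull the polygon(s) out:
k = 1;
polys = {};                          % one cell per contour segment
while k < size(C, 2)
    n = C(2, k);                     % number of vertices in this segment
    polys{end+1} = C(:, k+1:k+n);    % 2-by-n block of [x; y] vertices
    k = k + n + 1;                   % skip over this header column and its vertices
end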

Get coordinates of inlier points in Matlab

I need to find the pixel values of the inlier points obtained in object detection, using impixel(). I am using the same code as provided in the example at the link.
How can I get the x,y coordinates of the inlier points with respect to the image dimensions (top-left corner of the image considered as row 0, column 0), so that I can use the coordinates to find their respective pixel values? I couldn't find anything in MATLAB like the KeyPoint object in C++ that gives the coordinate values easily.
You do not need impixel here. impixel lets you get the pixel value from an image displayed in a figure, which is not what you are trying to do.
In the example you are using, inlierBoxPoints and inlierScenePoints are SURFPoints objects. You can get the (x,y) locations of the points as inlierBoxPoints.Location. Then you can get the pixel value for the i-th point as follows:
loc = round(inlierBoxPoints.Location(i, :));
pixVal = boxImage(loc(2), loc(1), :);
Keep in mind that in MATLAB the images are indexed as (row, col), and that the top-left corner pixel is (1,1), not (0,0). You have to round off the coordinates, because the points are detected with sub-pixel accuracy.
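To pull out the values for all inliers at once, one option (a sketch, assuming boxImage is the grayscale image from that example) is to convert the locations to linear indices with sub2ind:
locs = round(inlierBoxPoints.Location);               % N-by-2, columns are [x, y]
idx  = sub2ind(size(boxImage), locs(:,2), locs(:,1)); % (row, col) -> linear indices
pixVals = boxImage(idx);                              % pixel value at each inlier point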

3D scatter plot with 4D data

I need to plot a 3D figure with each data point colored by the value of a 4th variable using a colormap. Let's say I have 4 variables X, Y, Z and W, where W = f(X,Y,Z). I want a 3D plot with X, Y, and Z as the three axes. The statement scatter3(X,Y,Z,'filled','b') gives me a scatter plot in 3D, but I want to incorporate the value of W by representing it as an extra parameter of the points (either with different areas: bigger circles for data points with a high value of W and small circles for data points with a low value of W, or by plotting the data points in different colors using a colormap). However, I am a novice in MATLAB and don't really know how to proceed. Any help will be highly appreciated.
Thanks in advance!
So just use z for the size vector (4th input) as well as the color vector (5th input):
z = 10*(1:pi/50:10*pi);
y = z.*sin(z/10);
x = z.*cos(z/10);
figure(1)
scatter3(x,y,z,z,z)
view(45,10)
colorbar
The size vector needs to be greater than 0, so you may need to adjust your z accordingly.
You are already nearly there... simply use
scatter3(X,Y,Z,s,W);
where s is the point size (scalar, e.g. 3) and W is a vector with your W values.
You might also want to issue an
set(gcf, 'Renderer','OpenGL');
where gcf gets the current figure you are plotting into; this can significantly increase responsiveness when scattering a lot of data.
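Putting it together, a minimal sketch with made-up data (X, Y, Z, W here are just placeholders for your own vectors):
X = rand(200,1); Y = rand(200,1); Z = rand(200,1);   % made-up data for illustration
W = X.^2 + Y.^2 + Z.^2;                              % W = f(X,Y,Z)
figure
scatter3(X, Y, Z, 30, W, 'filled')                   % 4th arg = marker size, 5th arg = color data
colormap(jet)
colorbar                                             % shows how the colors map to W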

Calculate the average of part of the image

How can I calculate the average of a certain area in an image using MATLAB?
For example, if I have an intensity image with an area that is brighter than the rest and I want to know the average intensity there, how do I calculate it?
I think I can find the coordinates of the bright area by using the 'impixelinfo' command.
If there is another, more efficient way to find the coordinates I will also be glad to know it.
Once I know the coordinates, how do I calculate the average over that part of the image?
You could use one of the imroi-type functions in MATLAB, such as imfreehand:
I = imread('cameraman.tif');
h = imshow(I);
e = imfreehand;
% now select area on image - do not close image
% this makes a mask from the area you just drew
BW = createMask(e);
% this takes the mean of pixel values in that area
I_mean = mean(I(BW));
Alternatively, look into using regionprops, especially if there's likely to be more than one of these features in the image. Here, I'm finding points in the image above some threshold intensity and then using imdilate to pick out a small area around each of those points (presuming the points above the threshold are well separated, which may not be the case - if they are too close then imdilate will merge them into one area).
se = strel('disk',5);
BW = imdilate(I>thresh,se);
s = regionprops(BW, I, 'MeanIntensity');
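A fuller sketch of that approach; the threshold value here is purely illustrative, and you would pick one that suits your image:
I = imread('cameraman.tif');
thresh = 200;                              % illustrative threshold, not a recommended value
se = strel('disk',5);
BW = imdilate(I > thresh, se);             % small disk-shaped area around each bright point
s = regionprops(BW, I, 'MeanIntensity');   % mean intensity of I within each region of BW
regionMeans = [s.MeanIntensity]            % one mean value per connected region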

MATLAB: Return array of values between two co-ordinates in a large matrix (diagonally)

If I explain why, this might make more sense.
I have a logical matrix (103x3488), the output of running a photo of a measuring staff through edge detection (1 = edge, 0 = no edge). Aim: to calculate the distance in pixels between the graduations on the staff. Problem: the staff sags in the middle.
Idea: the user inputs the co-ordinates (using ginput or something) of each end of the staff and of the midpoint of the sag; then, if the edges between these points can be extracted into arrays, I can easily find the locations of the edges.
Any way of extracting an array from a matrix in this manner?
I'm also open to other ideas; I've only been using MATLAB for a month, so most functions are unknown to me.
edit:
Link to image
It shows a small area of the matrix, so in this example 1 and 2 are the points I want to sample between, and I'd want to return the points that occur along the red line.
Cheers
Try this
dat=imread('83zlP.png');
figure(1)
pcolor(double(dat))
shading flat
axis equal
% get the line ends
gi=floor(ginput(2))
x=gi(:,1);
y=gi(:,2);
xl=min(x):max(x); % line pixel x coords
yl=floor(interp1(x,y,xl)); % line pixel y coords
pdat=nan(length(xl),1);
for i=1:length(xl)
pdat(i)=dat(yl(i),xl(i));
end
figure(2)
plot(1:length(xl),pdat)
peaks=find(pdat>40); % threshold for peak detection
bigpeak=peaks(diff(peaks)>10); % threshold for selecting only edge of peak
hold all
plot(xl(bigpeak),pdat(bigpeak),'x')
meanspacex=mean(diff(xl(bigpeak)));
meanspacey=mean(diff(yl(bigpeak)));
meanspace=sqrt(meanspacex^2+meanspacey^2);
The vector pdat gives the pixels along the line you have selected, and meanspace is the edge spacing in pixel units. The thresholds might need fiddling with, depending on the image.
After seeing the image, I'm not sure where the "sagging" you're referring to is taking place. The image is rotated, but you can fix that using imrotate. Finding the angle it needs to be rotated by should be easy enough; just take the coordinates of A and B and use the inverse tangent to find the offset from 0 degrees.
Regarding the points, once it's aligned straight, all you need to do is specify a row in the image matrix (it would be a 1 x 3488 vector) and use find to get the non-zero indices. As the rotate function may have interpolated the pixels somewhat, you may get more than one index per "line", but they'll be identifiable as consecutive numbers, and you can just average them to get an approximate value.
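A rough sketch of that approach; A and B are assumed to be the two staff end points (for example picked with ginput), and img is the logical edge matrix:
% A = [xA yA], B = [xB yB]: staff end points, e.g. from ginput (assumption)
ang = atan2d(B(2) - A(2), B(1) - A(1));   % staff angle relative to horizontal
imgR = imrotate(img, ang);                % rotate so the staff runs (roughly) horizontally
row = round(size(imgR,1)/2);              % pick a row that crosses the graduations
cols = find(imgR(row,:));                 % column indices of edge pixels on that row
spacing = mean(diff(cols));               % rough graduation spacing in pixels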