I need to find the pixel values of the inlier points obtained during object detection using impixel(). I am using the same code as in the example at the link.
How can I get the (x,y) coordinates of the inlier points with respect to the image dimensions (top-left corner of the image taken as row 0, col 0), so that I can use the coordinates to look up their pixel values? I couldn't find anything in MATLAB equivalent to the KeyPoint object in C++, which gives the coordinate values easily.
You do not need impixel here. impixel lets you get the pixel value from an image displayed in a figure, which is not what you are trying to do.
In the example you are using, inlierBoxPoints and inlierScenePoints are SURFPoints objects. You can get the (x,y) locations of the points as inlierBoxPoints.Location. Then you can get the pixel value for the i-th point as follows:
loc = round(inlierBoxPoints.Location(i, :));
pixVal = boxImage(loc(2), loc(1), :);
Keep in mind that in MATLAB the images are indexed as (row, col), and that the top-left corner pixel is (1,1), not (0,0). You have to round off the coordinates, because the points are detected with sub-pixel accuracy.
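For example, to collect the pixel values of all inlier points, here is a minimal sketch; boxImage and inlierBoxPoints are the variables from the MathWorks example, while locs, nPts, and pixVals are names I made up:
locs = round(inlierBoxPoints.Location);                       % N-by-2 matrix of [x y] locations
nPts = size(locs, 1);
pixVals = zeros(nPts, size(boxImage, 3), 'like', boxImage);   % one row per point
for i = 1:nPts
    pixVals(i, :) = reshape(boxImage(locs(i, 2), locs(i, 1), :), 1, []);   % (row, col) indexing
end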
I have an image that represents a polygon.
I want to process it in MATLAB and generate the image below.
Basically, I am asking how to separate the polygon out from the rest of the image. This question was inspired by the one here.
Since we are only interested in the red pixels, we can use the first (red) channel to extract the centroid coordinates of each scaled pixel. Because there may be slight differences between coordinates that should be the same, we can use the third output of the uniquetol function to convert absolute coordinates into relative coordinates, and then use accumarray to turn those coordinates into a binary image.
[a,m]=imread('KfXkR.png'); %read the indexed image
rgb = ind2rgb(a,m); %convert it to rgb
region = rgb(:,:,1)>.5; %extract red channel and threshold it to isolate the red pixels
cen = regionprops(region,'Centroid'); %find absolute coordinates of the centroid of each pixel
colrow = reshape([cen.Centroid],2,[]); %reformat/reshape
[~,~,col] = uniquetol(colrow(1,:),0.1,'DataScale',1); %convert absolute coordinates to relative coordinates, correcting possible slight variations
[~,~,row] = uniquetol(colrow(2,:),0.1,'DataScale',1);
result = accumarray([row col],1); %make the binary image from coordinates of pixels
imwrite(result,'result.png')
Scaled result:
Unscaled:
I think the function contourc will get the polygon:
C = contourc(img, [1 1]); % img is 2-D double in range [0 1]
The format of the output C is a little tricky, but for a single contour level it is easy enough to parse. You can read the documentation for contourc to see how to reconstruct the polygon.
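If it helps, here is a minimal sketch of walking the contour matrix; it only assumes the C returned above and the column layout documented for contourc (each segment starts with a column [level; nVertices], followed by nVertices columns of [x; y]); polys is my own name:
polys = {};                            % one cell per contour segment
k = 1;
while k < size(C, 2)
    n = C(2, k);                       % number of vertices in this segment
    polys{end+1} = C(:, k+1 : k+n);    % 2-by-n block of [x; y] polygon vertices
    k = k + n + 1;                     % skip past this segment's header and vertices
end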
I am using SURF on an image of size 60*83 with varying scale levels and MetricThreshold to generate more blobs. But the Location of the points2 vector shows coordinates that are beyond the dimensions of the input image. I really wonder why that is. I need to obtain the exact coordinates of the detected key-points.
I2 = rgb2gray(Temp); %I2= 60*83 uint8
points2 = detectSURFFeatures(I2,'NumScaleLevels',6,'MetricThreshold',600);
I am printing the locations of the detected points in the command window, and it shows the following coordinates (see the highlighted x coordinate exceeding the image dimension).
But if I use the following code instead, then all the coordinates are inside the image dimensions.
points2 = detectSURFFeatures(I2);
I need to do this using varying scale levels and MetricThreshold. Thanks in advance.
MATLAB stores a matrix as nOfRows x nOfCols, whereas detectSURFFeatures returns positions as [x,y] (see http://www.mathworks.com/help/vision/ref/surfpoints-class.html), so the results are in range.
What does size(I2) return? From what you wrote, I would expect it to return [60, 83], where 60 is the height of the image (number of rows), and 83 is the width (number of columns). If so, then your results make perfect sense, because the SURFPoints locations are [x,y].
You can also see if your points make sense by visualizing them:
imshow(I2)
hold on
plot(points2)
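To double-check that the points are in range, you can compare the maximum x and y against the image width and height; a small sketch using I2 and points2 from above:
[h, w] = size(I2);              % h = number of rows (60), w = number of columns (83)
xy = points2.Location;          % N-by-2, columns are [x y]
fprintf('max x = %.1f, width  = %d\n', max(xy(:, 1)), w);
fprintf('max y = %.1f, height = %d\n', max(xy(:, 2)), h);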
I want to extract the (x,y) pixel coordinates of the SURF points returned, as in the example provided here, using MATLAB. It is clear that using 'ptsIn(1).Location' I can return the (x,y) coordinates of a point. But the points returned include decimal fractions, for example (102.9268, 51.7285). Is there any way to convert these to pixel positions in the image plane, or will just averaging these values give the pixel positions? Thank you.
To understand it further, I tried the following code from this link.
% Extract SURF features
I = imread('cameraman.tif');
points = detectSURFFeatures(I);
[features, valid_points] = extractFeatures(I, points);
% Visualize 10 strongest SURF features, including their
% scales and orientation which were determined during the
% descriptor extraction process.
imshow(I); hold on;
strongestPoints = valid_points.selectStrongest(10);
strongestPoints.plot('showOrientation',true);
Then I tried the command strongestPoints.Location in the MATLAB console, which returned the following (x,y) coordinates.
139.7482 95.9542
107.4502 232.0347
116.6112 138.2446
105.5152 172.1816
113.6975 48.7220
104.4210 75.7348
111.3914 154.4597
106.2879 175.2709
131.1298 98.3900
124.2933 64.4942
Since there is a coordinate (107.4502, 232.0347), I set row 232 to black (I(232,:)=0;) to see whether it matches the y coordinate 232.0347 of that SURF point, and got the following figure. So it seems the rounded values of the SURF point locations give the (x,y) pixel coordinates in the image.
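In other words, to read the pixel values at the detected points you can round the locations and index the image as (row, col); a minimal sketch using the variables from the snippet above (vals is my own name):
locs = double(round(strongestPoints.Location));   % [x y] rounded to whole pixels
rows = locs(:, 2);                                % y -> row
cols = locs(:, 1);                                % x -> column
vals = I(sub2ind(size(I), rows, cols));           % pixel values at the 10 points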
I have these 3 points (x,y) and I need to obtain a mask containing a triangle whose vertices are those points. I have to respect some parameters, like the pixel pitch, and I need a grid that runs from the minimum x coordinate to the maximum x coordinate (and the same for y).
I tried to do this in MATLAB with the function poly2mask, but the problem is the resulting image: when I have negative coordinates, I cannot see the polygon.
So I tried to center the polygon, but then I lose the original coordinates and cannot get them back, and I need them for some further processing on the image.
How can I obtain a triangle mask from 3 points without modifying the points, while respecting the parameters?
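One possible workaround (not from the original post, just a hedged sketch) is to shift the vertices onto a positive grid for poly2mask while keeping the offsets, so every mask pixel can still be mapped back to the original coordinates; the vertex values and pitch below are made up for illustration:
x = [-2.5  4.0  1.0];                       % hypothetical vertex x coordinates
y = [ 3.0 -1.5  5.5];                       % hypothetical vertex y coordinates
pitch = 0.5;                                % hypothetical pixel pitch
xOffset = min(x);  yOffset = min(y);        % remember the shift applied to the grid
cols = (x - xOffset) / pitch + 1;           % vertex positions in grid (column) units
rows = (y - yOffset) / pitch + 1;           % vertex positions in grid (row) units
nCols = ceil((max(x) - xOffset) / pitch) + 1;
nRows = ceil((max(y) - yOffset) / pitch) + 1;
mask = poly2mask(cols, rows, nRows, nCols); % triangle mask on the shifted grid
% a mask pixel (r, c) maps back to the original coordinates as
% xOrig = (c - 1) * pitch + xOffset;  yOrig = (r - 1) * pitch + yOffset;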
If I explain why, this might make more sense
I have a logical matrix (103x3488), the output of a photo of a measuring staff that has been run through edge detection (1 = edge, 0 = no edge). Aim: to calculate the distance in pixels between the graduations on the staff. Problem: the staff sags in the middle.
Idea: the user inputs the coordinates (using ginput or something) of each end of the staff and the midpoint of the sag; then, if the pixels between these points can be extracted into arrays, I can easily find the locations of the edges.
Any way of extracting an array from a matrix in this manner?
Also open to other ideas; I have only been using MATLAB for a month, so most functions are unknown to me.
edit:
Link to image
It shows a small area of the matrix, so in this example 1 and 2 are the points I want to sample between, and I'd want to return the points that occur along the red line.
Cheers
Try this
dat=imread('83zlP.png');
figure(1)
pcolor(double(dat))
shading flat
axis equal
% get the line ends
gi=floor(ginput(2))
x=gi(:,1);
y=gi(:,2);
xl=min(x):max(x); % line pixel x coords
yl=floor(interp1(x,y,xl)); % line pixel y coords
pdat=nan(length(xl),1);
for i=1:length(xl)
    pdat(i)=dat(yl(i),xl(i));
end
figure(2)
plot(1:length(xl),pdat)
peaks=find(pdat>40); % threshold for peak detection
bigpeak=peaks(diff(peaks)>10); % threshold for selecting only edge of peak
hold all
plot(xl(bigpeak),pdat(bigpeak),'x')
meanspacex=mean(diff(xl(bigpeak)));
meanspacey=mean(diff(yl(bigpeak)));
meanspace=sqrt(meanspacex^2+meanspacey^2);
The vector pdat gives the pixel values along the line you selected, and meanspace is the edge spacing in pixel units. The thresholds might need fiddling with, depending on the image.
After seeing the image, I'm not sure where the "sagging" you're referring to is taking place. The image is rotated, but you can fix that using imrotate. The angle by which it needs to be rotated should be easy enough to find; just take the coordinates of A and B and use the inverse tangent to get the angular offset from 0 degrees.
Regarding the points, once the image is aligned straight, all you need to do is pick a row of the image matrix (it would be a 1 x 3488 vector) and use find to get the indexes of its non-zero entries. As the rotate function may have interpolated the pixels somewhat, you may get more than one index per "line", but they will be identifiable as consecutive numbers, and you can just average them to get an approximate value.
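A rough sketch of that approach (BW stands for the logical edge matrix from the question; the endpoint picking via ginput, the choice of row, the variable names, and the sign of the angle are my own assumptions):
imshow(BW)                                         % BW: 103x3488 logical edge image
[xEnds, yEnds] = ginput(2);                        % click the two ends of the staff (A and B)
ang = atand((yEnds(2) - yEnds(1)) / (xEnds(2) - xEnds(1)));
BWrot = imrotate(BW, ang);                         % rotate to straighten (sign may need flipping)
rowIdx = round(size(BWrot, 1) / 2);                % pick a row that crosses the staff
cols = find(BWrot(rowIdx, :));                     % column indexes of edge pixels in that row
grp = cumsum([1, diff(cols) > 1]);                 % group consecutive indexes (same graduation)
grad = accumarray(grp', cols', [], @mean);         % average each group to one position
spacing = mean(diff(grad));                        % mean graduation spacing in pixels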