Geometrical transformation of a polygon to a higher resolution image - matlab

I'm trying to resize and reposition a ROI (region of interest) correctly from a low-resolution image (256x256) to a higher-resolution image (512x512). It should also be mentioned that the two images cover different fields of view: the low- and high-resolution images have 330mm x 330mm and 180mm x 180mm FoVs, respectively.
What I've got at my disposal are:
Physical reference point (in mm) in the 256x256 and 512x512 image, which are refpoint_lowres=(-164.424,-194.462) and refpoint_highres=(-94.3052,-110.923). The reference points are located in the top left pixel (1,1) in their respective images.
Pixel coordinates of the ROI in the 256x256 image (named pxX and pxY). These coordinates are positioned relative to the reference point of the lower resolution image, refpoint_lowres=(-164.424,-194.462).
Pixel spacing for the 256x256 and 512x512 image, which are 0.7757 pixel/mm and 2.8444 pixel/mm respectively.
How can I rescale and reposition the ROI (the binary mask) to correct pixel location in the 512x512 image? Many thanks in advance!!
Attempt
% This gives correctly placed and scaled binary array in the 256x256 image
mask_lowres = double(poly2mask(pxX, pxY, 256, 256));
% Compute translational shift in pixels
mmShift = refpoint_lowres - refpoint_highres;
pxShift = abs(mmShift./pixspacing_highres);
% This produces a binary array that is only positioned correctly in the
% 512x512 image, but it is not upscaled correctly...(?)
mask_highres = double(poly2mask(pxX + pxShift(1), pxY + pxShift(2), 512, 512));

So you have coordinates pxX and pxY in pixels with respect to the low-resolution image. You can transform these coordinates to real-world coordinates:
pxX_rw = pxX / 0.7757 - 164.424;
pxY_rw = pxY / 0.7757 - 194.462;
Next you can transform these coordinates to high-res coordinates:
pxX_hr = (pxX_rw + 94.3052) * 2.8444;
pxY_hr = (pxY_rw + 110.923) * 2.8444;
Since the original coordinates fit in the low-res image, but the high-res image covers a smaller physical area than the low-res one, it is possible that these new coordinates do not fit in the high-res image. If so, cropping the polygon is a non-trivial exercise: it cannot be done by simply moving the vertices to be inside the field of view. MATLAB R2017b introduced the polyshape object type, which lets you intersect polygons:
bbox = polyshape([0 0 180 180] - 94.3052, [180 0 0 180] - 110.923);
poly = polyshape(pxX_rw, pxY_rw);
poly = intersect([poly bbox]);
pxX_rw = poly.Vertices(:,1);
pxY_rw = poly.Vertices(:,2);
If you have an earlier version of MATLAB, maybe the easiest solution is to make the field of view larger to draw the polygon, then crop the resulting image to the right size. But this does require some proper calculation to get it right.
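Putting the pieces together, here is a minimal sketch of the whole pipeline using the numbers from the question (the spacings and reference points are taken from above; note that poly2mask needs at least three vertices, and the intersection may be empty if the ROI falls entirely outside the high-res FoV):

```matlab
% Pixel -> world (mm), using the low-res spacing and reference point
pxX_rw = pxX / 0.7757 - 164.424;
pxY_rw = pxY / 0.7757 - 194.462;

% Clip the polygon to the high-res field of view (requires R2017b+)
bbox = polyshape([0 0 180 180] - 94.3052, [180 0 0 180] - 110.923);
poly = polyshape(pxX_rw, pxY_rw);
poly = intersect([poly bbox]);

% World (mm) -> high-res pixels: subtract the high-res reference point,
% then multiply by the high-res spacing
pxX_hr = (poly.Vertices(:,1) + 94.3052) * 2.8444;
pxY_hr = (poly.Vertices(:,2) + 110.923) * 2.8444;

% Rasterize the clipped polygon in the 512x512 grid
mask_highres = poly2mask(pxX_hr, pxY_hr, 512, 512);
```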

Related

How to preserve spatial reference using Imcrop with Matlab

I have an image and the spatial reference object of that image.
Now I want to crop the image by coordinates according to the spatial reference object.
The function imcrop can only crop according to pixel coordinates. Is there a way to crop based on world coordinates?
I tried to use imcrop and compute the new reference object, but I got lost in the coordinate transformation.
An example of the reference object after warping an image:
imref2d with properties:
XWorldLimits: [-775.4357 555.5643]
YWorldLimits: [-488.3694 523.6306]
ImageSize: [1012 1331]
PixelExtentInWorldX: 1
PixelExtentInWorldY: 1
ImageExtentInWorldX: 1331
ImageExtentInWorldY: 1012
XIntrinsicLimits: [0.5000 1.3315e+03]
YIntrinsicLimits: [0.5000 1.0125e+03]
What I actually want to do is to crop the image such that the point (0,0) is the center of the cropped image.
According to your spatial reference, each pixel has dimensions of 1 x 1 in world coordinates. Therefore, to convert between a world coordinate (Xw,Yw) and an image coordinate (Xi,Yi), do the following:
Xi = round(abs(-775.4357 - Xw))
Yi = round(abs(-488.3694 - Yw))
So if you want to crop the image such that the real-world coordinate (0,0) is the center of the new cropped image, and the size of the new image is width by height, then the rectangle for imcrop will be
[(775 - width/2) (488 - height/2) width height]
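If you have the Image Processing Toolbox's spatial referencing at hand, the hard-coded limits can be avoided with imref2d's worldToIntrinsic method; a sketch, assuming R is the imref2d object shown above, I is the warped image, and width/height are example sizes:

```matlab
% World point (0,0) -> intrinsic (pixel) coordinates
[xi, yi] = worldToIntrinsic(R, 0, 0);

% Crop a width-by-height window centered on that pixel
width  = 400;
height = 300;
I2 = imcrop(I, [xi - width/2, yi - height/2, width, height]);
```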

Quantifying pixels from a list of coordinates

I have a list of coordinates, which are generated from another program, and I have an image.
I'd like to load those coordinates (making circular regions of interest (ROIs) with a diameter of 3 pixels) onto my image, and extract the intensity of those pixels.
I can load/impose the coordinates onto the image by using:
imshow(file);
hold on
scatter(xCoords, yCoords, 'g')
But I cannot extract the intensities.
Can you guys point me in the right direction?
I am not sure what you mean by a circle with a 3-pixel diameter since you are on a square grid (as mentioned by Ander Biguri). But you could use fspecial to create a disk filter and then normalize it. Something like this:
r = 1.5; % for diameter = 3
h = fspecial('disk', r);
h = h/h(ceil(r),ceil(r));
You can use it as a mask to get the intensities at the given region of the image.
im = double(imread(file));
ROI = im(yCoord-1:yCoord+1, xCoord-1:xCoord+1);  % rows are y, columns are x
I = ROI.*h;
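To process the whole coordinate list, you can loop over the points and sum the weighted intensities; a sketch, assuming h is the normalized 3x3 disk from above and every point lies at least one pixel from the image border:

```matlab
im = double(imread(file));
n = numel(xCoords);
intensity = zeros(n, 1);
for k = 1:n
    x = round(xCoords(k));
    y = round(yCoords(k));
    ROI = im(y-1:y+1, x-1:x+1);      % rows are y, columns are x
    intensity(k) = sum(ROI(:) .* h(:));
end
```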

How to crop face section from an image with given corner points. MATLAB

I want to crop a face section from an image, but the face is not straight/vertically aligned. I have four pixel points to crop it.
The problem is that:
If I transform the image first, the pixel points can no longer be used to crop the facial section out of it.
Alternatively, I do not have an exact bounding box to crop the image directly using imcrop, as the facial sections are tilted left or right.
The four pixel points are at the forehead, chin, and ears of the face to be cropped.
You should look at poly2mask. This function produces a mask image from your given x and y coordinates:
BW = poly2mask(x,y,m,n);
where x and y are your coordinates, and the produced BW image is m by n. You can then use this BW image to mask your original image I by doing
I(~BW) = 0;
If you actually want to crop, then you could get the bounding box (either through the regionprops function or the code below):
x1 = round(min(x));
y1 = round(min(y));
x2 = round(max(x));
y2 = round(max(y));
and then crop the image after you have used the BW as a mask.
I2 = I(y1:y2, x1:x2);  % rows are y, columns are x
Hope that helps.
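For completeness, a sketch combining the two steps, assuming x and y hold the four corner points and I is a grayscale image:

```matlab
% Build the mask and zero out everything outside the face polygon
BW = poly2mask(x, y, size(I,1), size(I,2));
I(~BW) = 0;

% Crop to the polygon's bounding box (rows are y, columns are x)
x1 = round(min(x));  x2 = round(max(x));
y1 = round(min(y));  y2 = round(max(y));
I2 = I(y1:y2, x1:x2);
```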

How to change a pixel distance to meters?

I have a .bmp image with a map. What I know:
Height and width of the bmp image
dpi
Map scale
The image center's coordinates in meters.
What I want:
How can I calculate the coordinates of some image points (for example, the corners) in meters?
Or how can I convert a pixel distance to meters?
What I did before:
I know the image center coordinates in pixels for sure:
CenterXpix = Width/2;
CenterYpix = Height/2;
But what should I do to find the corner coordinates? I don't think that:
metersDistance = pixelDistance*Scale;
is a correct equation.
Any advice?
If you know the height or width in both meters and pixels, you can calculate the scale in meters/pixel. Your equation:
metersDistance = pixelDistance*Scale;
is correct, but only if your two points are on the same axis. If your two points are diagonal from each other, you have to use good old Pythagoras (in pseudocode):
X = XdistancePix*scale;
Y = YdistancePix*scale;
Distance_in_m = sqrt(X*X+Y*Y);
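Concretely, the physical size of one pixel follows from the dpi and the map scale, since one inch is 0.0254 m; a sketch with assumed example values for the dpi and the map scale:

```matlab
dpi      = 300;          % scan resolution (assumption)
mapScale = 25000;        % a 1:25000 map (assumption)
metersPerPixel = (0.0254 / dpi) * mapScale;

% Distance from the center to a corner, via Pythagoras
dx = (Width/2)  * metersPerPixel;
dy = (Height/2) * metersPerPixel;
cornerDistance_m = sqrt(dx^2 + dy^2);
```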

Will two different cameras give me different results? - matlab

The following code takes an image of a grape that I photographed (called 'full_img') and calculates the area of the grape:
RGB = imread(full_img);
GRAY = rgb2gray(RGB);
threshold = graythresh(GRAY);
originalImage = im2bw(GRAY, threshold);
originalImage = bwareaopen(originalImage,250);
SE = strel('disk',10);
IM2 = imclose(originalImage,SE);
originalImage = IM2;
labeledImage = bwlabel(originalImage, 8); % Label each blob so we can make measurements of it
blobMeasurements = regionprops(labeledImage, originalImage, 'all');
numberOfBlobs = length(blobMeasurements);
pixperinch=get(0,'ScreenPixelsPerInch'); %# find resolution of your display
dpix=blobMeasurements(numberOfBlobs).Area; %# blob area in pixels
dinch=dpix/pixperinch; %# convert to inches from pixels
dcm=dinch*2.54; %# convert to cm from inches
blobArea = dcm; % Get area.
If I photograph the same grape under the same conditions with different cameras (from the same distance and with the same lighting), will I get the same results? (What if I have a 5-megapixel camera and a 12-megapixel one?)
No, it won't. You go from image coordinates to world coordinates using dpix/pixperinch. In general this is wrong; it will only work for a specific image (and that alone) if you know the pixperinch. In order to get the geometric characteristics of an object in an image (e.g. length, area, etc.), you must back-project the image pixels into Cartesian space using the camera matrix and the inverse projective transformation in order to get Cartesian coordinates (let alone calibrating the camera for lens distortion, which is a nonlinear problem). Then you can perform the calculations. Your code won't work even for the same camera.
See this for more.
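If full calibration is overkill, a common shortcut is to place an object of known size in the same plane as the grape and derive a per-image scale; a sketch with assumed reference values:

```matlab
refWidthCm  = 5.0;      % known width of a reference object (assumption)
refWidthPix = 150;      % its measured width in this image, in pixels
cmPerPix    = refWidthCm / refWidthPix;

% Area scales with the square of the linear factor
blobArea_cm2 = blobMeasurements(numberOfBlobs).Area * cmPerPix^2;
```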