My binary image has rectangular rotated objects of known size on it. I'd like to get the object inclination using the axis-aligned bounding box that MATLAB's regionprops returns. My reasoning is as follows:
Let the bounding box width be W, the side of the square be C, and the inclination alpha. Then

W = C*(cos(alpha) + sin(alpha))

Using the Weierstrass substitution t = tan(alpha/2), so that sin(alpha) = 2t/(1+t^2) and cos(alpha) = (1-t^2)/(1+t^2):

W/C = (2t + 1 - t^2)/(1 + t^2)

After some simplification this becomes a quadratic in t:

-(1 + W/C)*t^2 + 2t + (1 - W/C) = 0

Solving the equation for t = tan(alpha/2) with

a = -(1 + W/C),  b = 2,  c = 1 - W/C,  D = b^2 - 4ac,  tan(alpha/2) = (-b + sqrt(D))/(2a)

For any inclination below 45 degrees the discriminant is positive.
The logic seems to be OK, and so does the math. Could you please point out where I am making a mistake, or suggest a better way to get the inclination?
Here is the corresponding MATLAB code:
img = false(25,25);
img(5:16,5:16) = true;                  % 12x12 axis-aligned square
rot_img = imrotate(img, 30, 'crop');
props = regionprops(bwlabel(rot_img),'BoundingBox');
bbox = cat(1,props.BoundingBox);
w = bbox(3);                            % bounding box width W
h = 12;                                 % known side of the square (C)
a = -1*(1+w/h); b = 2; c = 1 - w/h;     % quadratic coefficients from the derivation above
D = b^2 - 4*a*c;
alpha = 2*atand((-b + sqrt(D))/(2*a));
%alpha = 25.5288 (expected 30)
EDIT Thank you for the trigonometry hints. They significantly simplify the calculations, but they give the wrong answer. As I now understand, the question was asked in the wrong way. What I really need is to find the inclination of short lines (10-50 pixels) with high accuracy (+/- 0.5 deg); the lines' position is of no interest.
The approach used in the question and the answers shows better accuracy for long lines; for C = 100 the error is less than 0.1 degrees. That means we are dealing with rasterization error here and need subpixel accuracy. At the moment I have only one algorithm that solves the problem, the Radon transform, but I hope you can recommend something else.
p = bwperim(rot_img);
theta=0:0.1:179.9;
[R,xp] = radon(p,theta); %Radon transform of contours
a=imregionalmax(R,true(3,3)); %Regional maxima of the transform
[r,c]=find(a); idx=sub2ind(size(a),r,c); maxvals=R(idx);
[val,midx]=sort(maxvals,'descend'); %Choose 4 highest maxima
mean(rem(theta(c(midx(1:4))),90)) %And average corresponding angles
%29.85
If the rectangle is a square:
w/c=sin(a)+cos(a)
(w/c)^2=1+sin(2a)
sin(2a)=(w/c)^2-1
a=0.5*arcsin((w/c)^2-1)
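As a quick numeric check (a sketch reusing w from the question's code and the known side of 12):

c = 12;                             % known side of the square
alpha = 0.5*asind((w/c)^2 - 1)      % inclination in degrees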
Maybe use the regionprops function with the 'Orientation' option...
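For instance (a sketch reusing rot_img from the question; note that the fitted-ellipse orientation can be unreliable for nearly square regions):

props = regionprops(bwlabel(rot_img), 'Orientation');
alpha = props(1).Orientation    % angle of the region's major axis, in degrees CCW from the x-axis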
Can someone please explain to me why the 'Circularity' in MATLAB is calculated as (4*Area*pi)/(Perimeter^2), while in Podczeck Shape it is Area/(Pi/4*sp^2) (see https://qiftp.tudelft.nl/dipref/FeatureShape.html)? Or is it simply defined differently?
I tried to write Podczeck Shape circularity code in MATLAB, and I assume that 'MaxFeretDiameter' is perpendicular to 'MinFeretDiameter'; am I correct?
Code:
clc;
clear all;
close all;
Pi=pi;
Image = rgb2gray(imread('pillsetc.png'));
BW = imbinarize(Image);
BW = imfill(BW,'holes');
BW = bwareaopen(BW, 100);    % remove objects smaller than 100 pixels
imshow(BW);
[B,L] = bwboundaries(BW,'noholes');
i=2;    % index of the object to measure
stat = regionprops(BW, 'Area', 'Circularity', 'MaxFeretProperties', 'MinFeretProperties');
OArea = stat(i).Area;
OMaxFeretProperties = stat(i).MaxFeretDiameter;
OMinFeretProperties = stat(i).MinFeretDiameter;
OCircularityPodzeck = OArea/(Pi/4 * (OMaxFeretProperties^2))
OCircularityMatlab = stat(i).Circularity
The 'Circularity' measure in regionprops is defined as
Circularity = (4 Area π)/(Perimeter²)
For a circle, where Area = π r² and Perimeter = 2 π r, this comes out to:
Circularity = (4 π r² π)/((2 π r)²) = (4 π² r²)/(4 π² r²) = 1
For any other shape, the perimeter will be relatively longer (the circle is the shape with the smallest perimeter for a given area), and so the 'Circularity' measure will be smaller.
Podczeck's Circularity is a different measure. It is defined as
Podczeck Circularity = Area/(π/4 Height²)
The documentation you link to refers to Height as sp, defining it as "Feret diameter perpendicular to s", and defines s as "the shortest Feret diameter". Thus, sp is the larger of the two sides of the minimal bounding box.
For a circle, the minimal bounding box has Height equal to the diameter. We substitute again:
Podczeck Circularity = (π r²)/(π/4 (2 r)²) = (π r²)/(π/4 4 r²) = 1
For any other shape, the height will be relatively larger, and so the Podczeck Circularity measure will be smaller.
Do note that the max and min Feret diameters are not necessarily perpendicular. A simple example is a square: the largest diameter is the diagonal of the square; the smallest diameter is the height or width; these two are at 45 degrees from each other. The Podczeck Circularity measure uses the size of the projection perpendicular to the smallest projection, which for a square is equal to the smallest projection, and smaller than the largest projection. The smallest projection and its perpendicular projection form the minimal bounding rectangle (typically, though apparently this is not necessarily the case?). However, regionprops has a 'BoundingBox' that is axis-aligned, and therefore not suitable. I don't know how to get the required value out of regionprops directly.
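A quick way to see this (a sketch; requires a MATLAB release where regionprops supports the Feret properties):

sq = false(100); sq(30:70,30:70) = true;    % a 41x41 axis-aligned square
f = regionprops(sq, 'MaxFeretProperties', 'MinFeretProperties');
% f.MaxFeretDiameter is roughly 41*sqrt(2) (the diagonal), f.MinFeretDiameter roughly 41,
% and f.MaxFeretAngle and f.MinFeretAngle differ by about 45 degrees, not 90.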
The approach you would have to follow is to use the 'PixelList' output of regionprops, together with the 'MinFeretAngle'. 'PixelList' is a list of pixel coordinates that belong to the object. You would rotate these coordinates according to 'MinFeretAngle', such that the axis-aligned bounding rectangle now corresponds to the minimal bounding rectangle. You can then determine the size of the box by taking the minimum and maximum values of the rotated coordinates.
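A rough sketch of that approach (untested; the sign convention of 'MinFeretAngle' with respect to the image axes should be double-checked):

stat = regionprops(BW, 'Area', 'PixelList', 'MinFeretProperties');
podczeck = zeros(numel(stat),1);
for k = 1:numel(stat)
    ang = deg2rad(stat(k).MinFeretAngle);
    R = [cos(ang) -sin(ang); sin(ang) cos(ang)];    % rotate so the min Feret direction is axis-aligned
    xy = stat(k).PixelList * R.';
    boxSides = max(xy,[],1) - min(xy,[],1);         % sides of the (approximately) minimal bounding box
    sp = max(boxSides);                             % Feret diameter perpendicular to the minimum one
    podczeck(k) = stat(k).Area / (pi/4 * sp^2);
end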
I have this 3D image generated from the simple code below.
% Input Image size
imageSizeY = 200;
imageSizeX = 120;
imageSizeZ = 100;
%# create coordinates
[rowsInImage, columnsInImage, pagesInImage] = meshgrid(1:imageSizeY, 1:imageSizeX, 1:imageSizeZ);
%# get coordinate array of vertices
vertexCoords = [rowsInImage(:), columnsInImage(:), pagesInImage(:)];
centerY = imageSizeY/2;
centerX = imageSizeX/2;
centerZ = imageSizeZ/2;
radius = 28;
%# calculate distance from center of the cube
sphereVoxels = (rowsInImage - centerY).^2 + (columnsInImage - centerX).^2 + (pagesInImage - centerZ).^2 <= radius.^2;
%# Now, display it using an isosurface and a patch
fv = isosurface(sphereVoxels,0);
patch(fv,'FaceColor',[0 0 .7],'EdgeColor',[0 0 1]); title('Binary volume of a sphere');
view(45,45);
axis equal;
grid on;
xlabel('x-axis [pixels]'); ylabel('y-axis [pixels]'); zlabel('z-axis [pixels]')
I have tried plotting the image with isosurface and some other volume visualization tools, but there remain quite a few surprises for me in the plots.
The code has been written to conform to the image coordinate system (e.g. see vertexCoords), which I presume is a left-handed coordinate system. Nonetheless, the image is displayed in the Cartesian (right-handed) coordinate system. I have tried to get it displayed as in the figure below, but that's simply not happening.
I am wondering if the visualization functions have been written to display the image the way they do.
Image coordinate system:
Going forward, there are other aspects of the code I still have to write. For example, given an input image sphereVoxels as above, in addition to visualizing it, I would want to find the north, south, east, west, top and bottom locations in the image, as well as count the vertices and record their coordinates, plus more.
I foresee this would likely become confusing for me if I don't stick to one coordinate system, and considering that the visualization tools predominantly use the right-handed coordinate system, I would want to stick with that from the onset. However, I really do not know how to go about this.
Right-hand coordinate system:
Any suggestions to get through this?
When you call meshgrid, the x and y dimensions are switched in the output (contrary to ndgrid). In your case it means that rowsInImage is a [120x200x100] = [x,y,z]-sized array and not a [200x120x100] = [y,x,z]-sized array, even though meshgrid was called with its arguments in y,x,z order. I would change those two lines to use the classical x,y,z order:
[columnsInImage, rowsInImage, pagesInImage] = meshgrid(1:imageSizeX, 1:imageSizeY, 1:imageSizeZ);
vertexCoords = [columnsInImage(:), rowsInImage(:), pagesInImage(:)];
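Alternatively (a sketch, not part of the original answer): ndgrid keeps the argument order in the output dimensions, so the [y,x,z] layout that matches array indexing can be built directly:

[rowsInImage, columnsInImage, pagesInImage] = ndgrid(1:imageSizeY, 1:imageSizeX, 1:imageSizeZ);
% size(rowsInImage) is [200 120 100], i.e. rows-by-columns-by-pages.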
I am trying to rotate the image manually using the following code.
clc;
m1 = imread('owl','pgm'); % a simple gray scale image of order 260 X 200
newImg = zeros(500,500);
newImg = int16(newImg);
rotationMatrix45 = [cos((pi/4)) -sin((pi/4)); sin((pi/4)) cos((pi/4))];
for x = 1:size(m1,1)
    for y = 1:size(m1,2)
        point = [x; y];
        product = rotationMatrix45 * point;
        product = int16(product);
        newx = product(1,1);
        newy = product(2,1);
        newImg(newx,newy) = m1(x,y);
    end
end
imshow(newImg);
Simply put, I am iterating through every pixel of image m1, multiplying each coordinate [x;y] by the rotation matrix to get x', y', and storing the value of m1(x,y) into newImg(x',y'), BUT it is giving the following error:
??? Attempted to access newImg(0,1); index must be a positive integer or logical.
Error in ==> at 18
newImg(newx,newy) = m1(x,y);
I don't know what I am doing wrong.
Part of the rotated image will get negative (or zero) newx and newy values, since the corners rotate out of the original image coordinates. You can't assign a value to newImg if newx or newy is nonpositive; those aren't valid matrix indices. One solution would be to check for this situation and skip such pixels (with continue).
Another solution would be to enlarge the newImg sufficiently, but that will require a slightly more complicated transformation.
This is assuming that you can't just use imrotate because this is homework?
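A minimal sketch of the first suggestion (skip out-of-range pixels), reusing the variable names from the question:

for x = 1:size(m1,1)
    for y = 1:size(m1,2)
        product = int16(rotationMatrix45 * [x; y]);
        newx = product(1);
        newy = product(2);
        if newx < 1 || newy < 1 || newx > size(newImg,1) || newy > size(newImg,2)
            continue;    % rotated coordinates fall outside newImg, skip this pixel
        end
        newImg(newx,newy) = m1(x,y);
    end
end
imshow(newImg, []);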
The problem is simple, the answer maybe not: MATLAB arrays are indexed from 1 to N (whereas in many programming languages it's from 0 to N-1).
You could try clamping the indices, e.g. newImg( min(max(1,newx), size(newImg,1)), min(max(1,newy), size(newImg,2)) ) = m1(x,y); (I don't have MATLAB at work, so I can't tell if it's going to work), but the resulting image will be cropped.
This is an old post, so I guess it won't help the OP, but as I was helped by his attempt, I post my corrected code here.
Basically you have some freedom in the implementation regarding how you deal with unassigned pixels, as well as whether you wish to keep the original size of the picture, which forces you to crop areas falling "outside" of it.
The following function rotates the image around its center, leaves unassigned pixels "burned" (white), and crops the edges.
function [h] = rot(A,ang)
% Rotates image A by ang degrees around its center; unassigned pixels are
% left "burned" (white) and areas falling outside the original size are cropped.
rotMat = [cos(pi.*ang/180) sin(pi.*ang/180); -sin(pi.*ang/180) cos(pi.*ang/180)];
centerW = round(size(A,1)/2);
centerH = round(size(A,2)/2);
h = 255 .* uint8(ones(size(A)));            % start from an all-white image
for x = 1:size(A,1)
    for y = 1:size(A,2)
        point = [x-centerW; y-centerH];     % coordinates relative to the center
        product = rotMat * point;
        product = int16(product);
        newx = product(1,1);
        newy = product(2,1);
        if newx+centerW <= size(A,1) && newx+centerW > 0 && newy+centerH <= size(A,2) && newy+centerH > 0
            h(newx+centerW, newy+centerH) = A(x,y);
        end
    end
end
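For example (assuming the same grayscale owl image from the earlier question):

A = imread('owl','pgm');
h = rot(A, 30);    % rotate by 30 degrees around the image center
imshow(h);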
Below is an arbitrary hand-drawn intensity profile of a line in an image:
The task is to draw the line. The profile can be approximated by an arc of a circle or ellipse.
I am doing this for camera calibration. Since I do not have the actual industrial camera, I am trying to simulate the correction needed for calibration.
The question can be rephrased as: I want pixel values which will follow a plot similar to the one above. I want to do this programmatically (preferably using OpenCV) and not enter the values manually, because I have thousands of pixels in the line.
An algorithm/pseudocode will suffice. Also, please note that I do not have an actual intensity profile; otherwise I would have read those values.
When would you encounter such a situation?
Suppose you take a picture (assume a completely white scene) with a camera, your object placed on a table and the camera just above it, pointing vertically down. The light coming back at the center of the picture will be stronger in intensity than the light reflected at the edges. If you measure pixel values across any line in the image, you will find an intensity curve like the one shown above. Since I don't have the camera for the time being, I want to emulate this situation. How do I achieve this?
This is not exactly image processing, rather image generation... but anyway.
Since you want an arc, we still need three points on that arc; let's take the first, middle, and last point (the key characteristics in my opinion):
N = 100; % number of pixels
x1 = 1;
x2 = floor(N/2);
x3 = N;
y1 = 242;
y2 = 255;
y3 = 242;
and now draw a circular arc through these points.
This problem has already been discussed for MATLAB here: http://www.mathworks.nl/matlabcentral/newsreader/view_thread/297070
x21 = x2-x1; y21 = y2-y1;
x31 = x3-x1; y31 = y3-y1;
h21 = x21^2+y21^2; h31 = x31^2+y31^2;
d = 2*(x21*y31-x31*y21);
a = x1+(h21*y31-h31*y21)/d; % circle center x
b = y1-(h21*x31-h31*x21)/d; % circle center y
r = sqrt(h21*h31*((x3-x2)^2+(y3-y2)^2))/abs(d); % circle radius
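(With the example values above, N = 100, this gives a ≈ 50.5, b ≈ 154.3 and r ≈ 100.7, so b + r ≈ 255, i.e. the arc indeed peaks at the middle value.)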
If you assume the middle value is always larger (and thus it's the upper part of the circle you'll have to plot), you can draw this with:
x = x1:x3;
y = b+sqrt(r^2-(x-a).^2);
plot(x,y);
You can adjust the visible window with
xlim([1 N]);
ylim([200 260]);
which gives me the following result:
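If you then need the actual pixel values along the line (as asked) rather than just the plot, one simple option (a sketch) is to round the curve samples to integer intensities:

intensity = uint8(round(y));    % uint8 saturates values outside [0,255]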
I am working on video stabilisation (making shaky videos non-shaky) using MATLAB.
One of the steps is to find a smooth camera path given the unstable camera path.
The unstable camera path is one which gives the jittering or shake to the video.
I have the camera path specified using the camera position, which is 3D data.
camera path - (cx,cy,cz);
As I plot it in MATLAB, I can visually see the shakiness of the camera motion.
So now I require a least-squares fit to be done on the camera path specified by (cx,cy,cz).
I came across polyfit(), which does fitting for 2-dimensional data.
But what I need is a smooth 3D curve fit to the shaky curve.
Thanks in advance.
Couldn't you just fit three separate 1D curves for cx(t), cy(t), cz(t)?
BTW: I think what you need is a Kalman filter, not a polynomial fit to the camera path. But I'm not sure if MATLAB has built-in support for that.
Approach using least square fit:
t = (1:0.1:5)';
% model
px = [ 5 2 1 ];
x = polyval(px,t);
py = [ -2 1 1 ];
y = polyval(py,t);
pz = [ 1 20 1 ];
z = polyval(pz,t);
% plot model
figure
plot3(x,y,z)
hold all
% simulate measurement
xMeasured = x+2*(rand(length(x),1)-0.5);
yMeasured = y+2*(rand(length(y),1)-0.5);
zMeasured = z+2*(rand(length(z),1)-0.5);
% plot simulated measurements
plot3(xMeasured, yMeasured, zMeasured,'or')
hold off
grid on
% least squares fit
A = [t.^2, t, t./t];    % design matrix [t^2, t, 1] (t./t is just a column of ones)
pxEstimated = A\xMeasured;
pyEstimated = A\yMeasured;
pzEstimated = A\zMeasured;
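Since A = [t.^2, t, t./t] orders the columns by descending powers of t, the estimated coefficient vectors can go straight into polyval; a short sketch to overlay the fitted curve on the measurements:

hold on
plot3(polyval(pxEstimated,t), polyval(pyEstimated,t), polyval(pzEstimated,t), '-g')
hold off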
Let me be grateful to stackoverflow.com first of all, and then my thanks to zellus and nikie, who got me thinking about the problem more. I have now reached a solution which follows zellus's approach, and, as nikie pointed out, I used the parameter t.
cx, cy, cz are the coordinates in 3D space, and in my case they are all 343x1 doubles.
My final code is shown below which fits the 3d data set:
t = linspace(1,343,343)';
load cx.mat;
load cy.mat;
load cz.mat;
plot3(cx, cy, cz,'r'),title('source Camera Path');
hold all
A = [t.^2, t, t./t];    % same quadratic design matrix as above
fx = A\cx;
fy = A\cy;
fz = A\cz;
Xev = polyval(fx,t);
Yev = polyval(fy,t);
Zev = polyval(fz,t);
plot3(Xev,Yev,Zev,'+b'),title('Fitting Line');
I look forward to more interesting discussions on StackOverflow with great helpful people.