I have to draw a curve captured on an image using screen pixels (mouse clicks) into a coordinate system. E.g.: pixels on the screen from left to right (130 px to 970 px) correspond to the x-axis of my coordinate system (1000 to 6000). Pixels from bottom to top (670 to 99) correspond to the y-axis of the coordinate system (0 to 1.2). How can this be done? Maybe there's a function in MATLAB doing something like that?
Some more explanation:
I have a JPEG image of a curve on a coordinate system. I've got the pixel positions (x,y) of several points on that curve. Now I want to plot the same curve in a MATLAB figure with the same x- and y-axes as in the JPEG image.
Not sure if there is a MATLAB function/command to do this, but it may not be too difficult to come up with something.
Suppose that xPixDiff = 970-130 and xAxisDiff = 6000-1000. Then the xPixel value from any (xPixel,yPixel) pair can be translated into an x-axis coordinate via
xAxisCoord = (xPixel-130)*xAxisDiff/xPixDiff + 1000
It is clear from the above that xPixel=130 maps to 1000 and xPixel=970 maps to 6000.
The yAxisCoord calculation is similar; we just have to remember that the y pixel positions run in the opposite direction to the y-axis coordinates.
Let yPixDiff = 99-670 and yAxisDiff = 1.2-0. Then the yPixel value from any (xPixel,yPixel) pair can be translated into a y-axis coordinate via
yAxisCoord = (yPixel-670)*yAxisDiff/yPixDiff + 0
It is clear from the above that yPixel=670 maps to 0 and yPixel=99 maps to 1.2.
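Putting both mappings together, a minimal MATLAB sketch (the clicked pixel values here are just example inputs):

xPixDiff  = 970 - 130;  xAxisDiff = 6000 - 1000;
yPixDiff  = 99 - 670;   yAxisDiff = 1.2 - 0;
xPixel = [130 400 700 970];            % example mouse-click positions
yPixel = [670 420 250 99];
xAxisCoord = (xPixel - 130)*xAxisDiff/xPixDiff + 1000;
yAxisCoord = (yPixel - 670)*yAxisDiff/yPixDiff + 0;
plot(xAxisCoord, yAxisCoord, '-o');    % curve in axis coordinates
axis([1000 6000 0 1.2]);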
Hope that the above helps!
I need to calculate the X,Y coordinates in the world with respect to the camera using u,v coordinates in the 2D image. I am using an S7 edge camera to send a 720x480 video feed to MATLAB.
What I know: Z, i.e. the depth of the object from the camera; the camera's pixel size (1.4 µm); and the focal length (4.2 mm).
Let's say the image point is at (u,v) = (400,400).
My approach is as follows:
Subtract the pixel values of the center point (240,360) from the u,v pixel coordinates of the point in the image. This should give us the pixel coordinates relative to the camera's optical axis (z-axis). The origin is now at the center of the image, so the new coordinates are (160, -40).
Multiply the new u,v pixel values by the pixel size to obtain the distance of the point from the origin in physical units. Let's call it (x,y). We get (x,y) = (0.224, -0.056) in mm.
Use the formulas X = xZ/f and Y = yZ/f to calculate the X,Y coordinates in the real world with respect to the camera's optical axis.
Is my approach correct?
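In MATLAB, a minimal sketch of those three steps (Z here is an assumed example value; the center point and pixel size are taken from the question):

u = 400; v = 400;                 % image point
du = u - 240;                     % step 1: offset from the center point -> 160
dv = 360 - v;                     % image v grows downward, camera y upward -> -40
pixelSize = 1.4e-3;               % 1.4 um expressed in mm
x = du*pixelSize;                 % step 2: sensor coordinates -> 0.224 mm
y = dv*pixelSize;                 %                            -> -0.056 mm
f = 4.2;                          % focal length in mm
Z = 1000;                         % assumed known depth in mm
X = x*Z/f;                        % step 3: back-project via similar triangles
Y = y*Z/f;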
Your approach is going in the right direction, but it would be easier to use a more standardized one. What we usually do is use the pinhole camera model, which gives you a transformation from world coordinates [X, Y, Z] to pixel coordinates [x, y]. Take a look at this guide, which describes step by step the process of building the transformation.
Basically, you have to define your internal camera matrix to do the transformation. Without skew, it looks like this:

K = [ fx   0  u0
       0  fy  v0
       0   0   1 ]
fx and fy are your focal lengths expressed in pixel units. You can calculate them from your FOV and the total number of pixels in each direction. Take a look here and here for more info.
u0 and v0 are the principal point (the intersection of the optical axis with the image plane, given in pixel coordinates). Since our pixel coordinates are not centered at [0, 0], these parameters represent a translation to the center of the image.
If you need, you can also add the skew factor a, which you can use to correct shear effects of your camera. The internal camera matrix then becomes:

K = [ fx   a  u0
       0  fy  v0
       0   0   1 ]
Since your depth is fixed, just fix your Z and continue the transformation without a problem.
Remember: if you want the inverse transformation (camera to world), just invert your camera matrix and be happy!
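As a rough sketch of both directions in MATLAB (the intrinsic values here are placeholders, not a real calibration):

fx = 600; fy = 600;               % focal lengths in pixels (placeholders)
u0 = 360; v0 = 240;               % principal point (placeholders)
K  = [fx 0 u0; 0 fy v0; 0 0 1];   % internal camera matrix (no skew)
P  = [100; 50; 1000];             % example point [X; Y; Z] in camera coordinates
p  = K*P;                         % forward: world -> homogeneous pixel
uv = p(1:2)/p(3);                 % divide by Z to get [u; v]
Z  = P(3);                        % depth is fixed/known
Pback = (K \ [uv; 1])*Z;          % inverse: pixel -> world, recovers P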
MATLAB also has a very good guide for this transformation. Take a look.
I captured a depth image of a human body in the room and I collected and saved Skeletal data related to that depth image (the joints of wrists, elbows, ...).
Considering that the joints' coordinates are in camera space and the depth image is in depth space, I was able to show the location of the right hand wrist joint on the depth image using this code:
depthJointIndices = metadata.DepthJointIndices(:, :, trackedBodies);  % joint positions in depth-image coordinates
plot(depthJointIndices(11,1), depthJointIndices(11,2), '*');          % joint 11 is the right wrist
Now I want to know which pixel EXACTLY contains the right hand wrist joint. How can I do this properly?
I thought I could get the x,y coordinates of that joint using the same code I used to show the right hand wrist joint, as follows:
depthJointIndices = metadata.DepthJointIndices(:, :, trackedBodies);
x = depthJointIndices(11, 1)  % x position of the right wrist in the depth image
y = depthJointIndices(11, 2)  % y position of the right wrist in the depth image
But x,y are calculated as follows:
x = 303.5220
y = 185.6131
As you can see, x and y are floating-point numbers, but pixel coordinates can't be floating-point numbers.
So can anyone help me with this problem? How can I get the coordinates of the pixel containing the right hand wrist joint in the depth image, using the Kinect?
You can use the following two equations to derive the coordinates:

U = fx * X / Z + Cx
V = fy * Y / Z + Cy

Here, (U,V) and Z denote the screen coordinates and the depth value, respectively, (X,Y) are the camera-space coordinates of the joint, Cx and Cy denote the center of the depth map, and fx and fy are the focal lengths of the camera. For Kinect v1 cameras, fx = fy = 580.
You can refer to the paper Action Recognition From Depth Maps Using Deep Convolutional Neural Networks by P. Wang et al. for more information.
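For illustration, a small MATLAB sketch of those equations (the joint position and the depth-map center for a 640x480 map are example values); since the projection is subpixel, rounding gives the integer pixel asked about above:

X = 0.1; Y = -0.2; Z = 1.5;       % example camera-space joint position
fx = 580; fy = 580;               % Kinect v1 focal lengths
Cx = 320; Cy = 240;               % center of a 640x480 depth map
U = fx*X/Z + Cx;                  % project onto the depth image
V = fy*Y/Z + Cy;
col = round(U);                   % nearest integer pixel containing the joint
row = round(V);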
I have three (x,y) points and I need to obtain a mask with a triangle whose vertices are those points. I have to respect some parameters, like the pixel pitch, and I need a grid from the minimum x coordinate to the maximum x coordinate (and the same for y).
I tried to do this in MATLAB with the function poly2mask, but the problem is the resulting image: when I have negative coordinates, I cannot see the polygon.
So I tried to center the polygon, but then I lose the original coordinates and cannot get them back, and I need them later for further processing on the image.
How can I obtain a triangle mask from the 3 points without modifying them, while respecting the parameters?
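One possible sketch (not a definitive answer): translate the vertices into positive pixel coordinates using the pixel pitch, remember the offset, and map back when needed. The pitch and vertex values below are example assumptions:

xv = [-2.0 3.5 0.5];              % example triangle vertices (original coordinates)
yv = [-1.0 0.0 4.0];
pitch = 0.1;                      % assumed pixel pitch
x0 = min(xv); y0 = min(yv);       % grid origin at the minimum coordinates
nCols = ceil((max(xv) - x0)/pitch) + 1;
nRows = ceil((max(yv) - y0)/pitch) + 1;
xPix = (xv - x0)/pitch + 1;       % vertices in pixel coordinates; xv, yv stay untouched
yPix = (yv - y0)/pitch + 1;
mask = poly2mask(xPix, yPix, nRows, nCols);
% any pixel (r, c) maps back via x = (c-1)*pitch + x0, y = (r-1)*pitch + y0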
I'd like to be able to calculate the surface area of objects seen by the depth camera. Is there an easy way to do this? For example, if the Kinect sees a player, I need to calculate how much surface area the player covers.
If no such function exists, I can calculate it by creating multiple squares with corners (x,y), (x+1,y), (x,y+1), (x+1,y+1) and taking the z value into consideration. But I'm not sure how to get the distance in mm or cm between adjacent pixels along the x or y axis.
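I'm not aware of a built-in function for this, but under a pinhole model a pixel at depth Z spans roughly Z/fx by Z/fy in metric units, so the per-pixel footprint can be summed over the player's pixels. A sketch, assuming a depth map in mm and placeholder focal lengths in pixels:

fx = 580; fy = 580;                              % placeholder depth-camera focal lengths (pixels)
depthMM = double(depthImage);                    % depth map in mm (assumed available)
playerMask = depthMM > 0;                        % placeholder segmentation of the player
dx = depthMM/fx;                                 % metric width of each pixel at its depth
dy = depthMM/fy;                                 % metric height of each pixel at its depth
areaMM2 = sum(dx(playerMask).*dy(playerMask));   % approximate covered area

Note that this ignores surface slant between neighboring pixels, so it approximates the frontal area rather than the true surface area.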
Thanks
I know MATLAB has a function called cylinder that creates the points for a cylinder given the number of points along the circumference and the radius. What if I don't want a unit cylinder, and also don't want it centered on the default axis (for example, along the z-axis)? What would be the easiest approach to create such a cylinder? Thanks in advance.
The previous answer is fine, but you can get MATLAB to do more of the work for you (because cylinder returns separate x, y, z components, you need to work a little to do the matrix multiplication for the rotation). To place the center of the base of the cylinder at [x0 y0 z0], scaled by [xf yf zf] (use xf = yf unless you want an elliptic cylinder), use:
[x, y, z] = cylinder;                        % unit cylinder along the z-axis
h = mesh(x*xf + x0, y*yf + y0, z*zf + z0);   % scale each component, then translate
If you also want to rotate it so it isn't aligned along the z-axis, use rotate. For example, to rotate about the x-axis by 90 degrees, so it's aligned along the y-axis, use:
rotate(h,[1 0 0],90)
Multiply the points by your favourite combination of a scaling matrix, a translation matrix, and a rotation matrix.
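For example, a sketch of that approach applied to the output of cylinder (the scale, angle, and offset values are arbitrary):

[x, y, z] = cylinder;             % unit cylinder points
pts = [x(:) y(:) z(:)]';          % 3 x N matrix of points
S = diag([2 2 5]);                % scale: radius 2, height 5
th = pi/2;                        % rotate 90 degrees about the x-axis
R = [1 0 0; 0 cos(th) -sin(th); 0 sin(th) cos(th)];
t = [1; -2; 3];                   % translation of the base
pts = R*S*pts + t;                % scale, rotate, then translate every point
X = reshape(pts(1,:), size(x));   % back to the grid shape mesh expects
Y = reshape(pts(2,:), size(y));
Z = reshape(pts(3,:), size(z));
mesh(X, Y, Z);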