Mapping Depth onto RGB Kinect using MATLAB

I am trying to map the depth map onto the RGB image of the Kinect using MATLAB.
So here are the steps that I took:
(1) Obtain the images using a C++ program.
(2) Using the depth value of each pixel in MATLAB, I was able to obtain the XYZ distances of all the pixels in mm.
(3) Then using some equations, I was able to obtain the XY pixel coordinates of those depth pixels on the RGB image.
So I am left with a huge cell array containing the locations of all the depth pixels w.r.t. the color image.
My question is: if I now want to overlay the depth image on the color image, how can I do that?
Can anyone help me?
Thanks!
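A minimal sketch of one way to do the overlay, assuming the mapped locations are collected into an N-by-2 array uv of [x y] pixel coordinates in the RGB frame with matching depth values z (both names hypothetical):
imshow(rgb); hold on;
scatter(uv(:,1), uv(:,2), 4, z, 'filled');  % color-code each depth sample by its depth value
colormap(jet); colorbar;
hold off;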

Related

Crop an image using multiple coordinates in Matlab

I used the "imfreehand" to crop an irregular shape and save its positions into a variable. This position variable is a 85*2 double matrix (85 points, X and Y coordinates). Now, I want to use the same position to crop another image (different layer of the image, but the location of the objects is the same). The functions I can find all requires rectangle positions (X1,X2,Y1,Y2). In my situation, I have 82 different (X,Y) coordinates, how can I use the position information to crop a new image?
From what I understand, you want to take the coordinates created by imfreehand(...) and use them to create a croppable object on another image. You can use the function impoly(hparent,position) for this purpose.
The MathWorks page provides an example to guide you on its usage.
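A minimal sketch of that workflow, assuming both images have the same size (variable names such as img2 are illustrative):
h = imfreehand;            % draw the region on the first image
pos = getPosition(h);      % 85x2 matrix of [x y] vertices

figure, imshow(img2);      % second image, same size as the first
hp = impoly(gca, pos);     % place the same polygon on the new image
mask = createMask(hp);     % logical mask of the pixels inside the polygon

cropped = img2;
cropped(repmat(~mask, [1 1 size(img2,3)])) = 0;  % blank everything outside the region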

Kinect v2 sensor Color-Depth Mapping misalignment using matlab [duplicate]

I have collected data using Kinect v2 sensor and I have a depth map together with its corresponding RGB image. I also calibrated the sensor and obtained the rotation and translation matrix between the Depth camera and RGB camera.
So I was able to reproject the depth values on the RGB image and they match. However, since the RGB image and the depth image are of different resolutions, there are a lot of holes in the resulting image.
So I am trying to move the other way, i.e. mapping the color onto the depth instead of depth to color.
The first problem I am having is that the RGB image has 3 channels, so I have to convert it to grayscale to do the mapping, and I am not getting the correct results.
Can this be done?
Has anyone tried this before?
Why can't you fit the Z-depth to the RGB?
Fitting the low-res image to the high-res one should be easy, as long as both represent the same data (i.e. the corners of both images are the same points).
It should be as easy as:
Z_interp = imresize(Zimg, [size(RGB,1) size(RGB,2)]);
Now Z_interp has the same number of pixels as RGB.
If you still want to do it the other way around, well, use the same approach:
RGB_interp = imresize(RGB, [size(Zimg,1) size(Zimg,2)]);
The Image Acquisition Toolbox now officially supports Kinect v2 for Windows. You can get a point cloud out of the Kinect using the pcfromkinect function in the Computer Vision System Toolbox.
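A short sketch of that route, following the documented pcfromkinect workflow (assumes the Kinect v2 support package for the Image Acquisition Toolbox is installed):
colorDevice = imaq.VideoDevice('kinect', 1);  % RGB stream
depthDevice = imaq.VideoDevice('kinect', 2);  % depth stream

colorImage = step(colorDevice);
depthImage = step(depthDevice);

% pcfromkinect registers depth to color internally and returns a colored point cloud
ptCloud = pcfromkinect(depthDevice, depthImage, colorImage);
pcshow(ptCloud);

release(colorDevice);
release(depthDevice);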

3D reconstruction based on stereo rectified edge images

I have two stereo-rectified edge images of a closed curve. Is it possible to find the disparity (along the x-axis in image coordinates) between the edge images and do a 3D reconstruction, since I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, because these are binary images and window-based matching requires texture. The question is: how do I compute the disparity between the edge images? The images are available at the following links.
Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0
Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of images, you can easily map each edge pixel from the left image to its counterpart in the right image, and therefore calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are; for example, a DTW-like approach can be used to match the curvatures.
For all other pixels in the image, you just don't have any information.
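As a starting point, here is a minimal sketch of the per-row matching idea, assuming rectified binary edge images edgeL and edgeR (names hypothetical) and that edge pixels within a row correspond in their order of appearance; a DTW-style matcher would replace the naive in-order pairing:
disparity = nan(size(edgeL));
for r = 1:size(edgeL, 1)
    xL = find(edgeL(r, :));           % edge columns in this row of the left image
    xR = find(edgeR(r, :));           % edge columns in this row of the right image
    n = min(numel(xL), numel(xR));    % pair as many as both rows allow
    for k = 1:n
        disparity(r, xL(k)) = xL(k) - xR(k);
    end
end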
@Photon: Thanks for the suggestion. I did what you suggested and matched each edge pixel in the left and right images in a DTW-like fashion. But there are some pixels whose y-coordinate differs by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth by averaging those differing (up to a 2-pixel difference in y) edge pixels using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should have been like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I couldn't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.

Matlab: apply projection correction to an image subset

Following the question I posted here, I need to apply a projective transformation to an image given 4 points.
Say I successfully segmented the QR code from an image, and I have stored the coordinates of the QR vertices in an array of points. In one case I would only need a rotation to obtain the rectified image, but in other cases
I need to apply a projective correction to the image.
Is there a way of making these transformations knowing the coordinates of the said vertices?
EDIT
I solved it using @Xiang's suggestion and using the HSV components of the image.
If I understand the question correctly, you have the 4 corner points and you want to know which coordinates to map them to in the transformed image. Well, this is up to you: you know the target is a square, so just choose an arbitrary side length (or calculate one from some measurement of the original image) and generate the coordinates:
(0,0)
(0, size)
(size, 0)
(size, size)
Now you can compute the transform and apply it to the original image using maketform.
From Matlab docs http://www.mathworks.com/help/images/ref/maketform.html:
T = maketform('projective', U, X)
To apply the transform, use imtransform and set the parameters UData, VData, XData, and YData to specify your source coordinate system and the new sampling coordinates you wish to transform to.
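A minimal sketch of that recipe, assuming corners is a 4x2 [x y] array of the QR vertices ordered to match the target square below (the name and the side length are illustrative):
sz = 200;                               % arbitrary output side length
U = corners;                            % source quadrilateral, 4x2 [x y]
X = [0 0; 0 sz; sz sz; sz 0];           % target square, same vertex order as U

T = maketform('projective', U, X);
rectified = imtransform(img, T, ...
    'XData', [0 sz], 'YData', [0 sz]);  % sample onto the target square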

Disparity calculation of two similar images in matlab

I have two images (both exactly the same image) and I am trying to calculate the disparity between them using the sum of squared differences (SSD) and reconstruct the disparity in 3D space.
Do I need to rectify the image before calculating disparity?
The following are the steps that I have done so far for the disparity map computation (I have tried with and without rectification, but both return an all-zeros disparity matrix).
For each pixel X in the left image:
  Take the pixels in the same row in the right image.
  Divide that row of the right image into windows.
  For each window:
    Calculate the disparity of each pixel in that window with X.
    Select the pixel in the window which gives the minimum SSD with X.
  Find the pixel with the minimum disparity among all windows as the best match to X.
Am I doing it correctly?
How can I visualise the 3D reconstruction of the disparity as scatter plot in matlab?
Rectification guarantees that matches are found in the same row (for horizontally separated cameras). If you have doubts about the rectification of your images, you can compare rows by drawing horizontal lines across the horizontally separated images. If the lines hit the same features, you are fine; see the picture below, where the images are NOT rectified. The fact that they are distorted means lens distortion correction was applied, and rectification was attempted but not actually performed correctly.
Now, let's consider what you meant by "the same images." Did you mean images of the same object taken from different viewpoints? Note that if the images are literally the same (the same viewpoint), the disparity will be zero, as was noted in the other answer. The definition of disparity (for horizontally separated cameras) is the shift (within the same row) between matching features. Disparity is related to depth (if the optical axes of the cameras are parallel) by d = f*B/z, where z is the depth, B is the baseline (the separation between the cameras), and f is the focal length. You can rearrange this as d/B = f/z, which says that disparity relates to camera separation as focal length relates to distance; in other words, the ratios of the horizontal and distance measures are equal.
If your images are taken with cameras shifted horizontally, the disparity (in a simple correlation algorithm) is typically calculated in five nested loops:
loop over image1 y
  loop over image1 x
    loop over disparity d
      loop over correlation window y
        loop over correlation window x
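For illustration, a minimal (unoptimized) sketch of those loops, assuming rectified grayscale double images I1 and I2, a window half-size w, and a maximum disparity dmax (all names and values illustrative); the two inner window loops are vectorized into a single SSD over the window:
[h, width] = size(I1);
w = 3;  dmax = 32;
D = zeros(h, width);
for y = 1+w : h-w                              % loop over image1 y
    for x = 1+w+dmax : width-w                 % loop over image1 x
        best = inf;  bestD = 0;
        win1 = I1(y-w:y+w, x-w:x+w);
        for d = 0:dmax                         % loop over disparity d
            win2 = I2(y-w:y+w, x-d-w:x-d+w);   % window shifted left by d
            ssd = sum((win1(:) - win2(:)).^2); % correlation window loops, vectorized
            if ssd < best
                best = ssd;  bestD = d;
            end
        end
        D(y, x) = bestD;                       % best disparity for this pixel
    end
end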
Disparity, or D_best, gives you the best-matching window between image1 and image2 across all possible values of d. Finally, scatter plots are for 3D point clouds; a disparity map is better visualized as a heat map. If you need to visualize a 3D reconstruction (simply put, a 3D point cloud), calculate X, Y, Z as:
Z = f*B/D, X = u*Z/f, Y = v*Z/f, where u and v are related to the column and row of a w-by-h image as
u = col - w/2 and v = h/2 - row; that is, u and v form an image-centered coordinate system.
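A minimal sketch of that back-projection, assuming a disparity map D in pixels and calibration values f (focal length in pixels) and B (baseline):
[h, w] = size(D);
[col, row] = meshgrid(1:w, 1:h);
u = col - w/2;                 % image-centered coordinates
v = h/2 - row;
valid = D > 0;                 % skip pixels with no disparity
Z = f*B ./ D(valid);
X = u(valid) .* Z / f;
Y = v(valid) .* Z / f;
scatter3(X, Y, Z, 1, Z, '.');  % 3D point cloud as a scatter plot, colored by depth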
If your two images are exactly the same, then the disparity would be 0 for every pixel. You either have to use two separate cameras to take the images, or take them with a single camera from two different locations. The best way to do 3D reconstruction is to use a calibrated stereo pair of cameras. Here is an example of how to do that using the Computer Vision System Toolbox for MATLAB.
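For reference, a compact sketch of that calibrated pipeline, assuming stereoParams comes from a prior calibration with the Stereo Camera Calibrator app:
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
D = disparity(rgb2gray(J1), rgb2gray(J2));  % disparity map in pixels
xyz = reconstructScene(D, stereoParams);    % HxWx3 array of 3D points in world units
pcshow(pointCloud(xyz));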