I am trying to align two images - one RGB and the other depth - using MATLAB. Please note that I have checked several places for this - like here, here (which requires a Kinect device), and here, here (which say that camera parameters are required for calibration). I was also advised to use epipolar geometry to match the two images, though I do not know how. The dataset I am referring to is the rgb-d-t face dataset. One such example is illustrated below:
The ground truth, i.e. the bounding boxes that specify the face region of interest, is already provided, and I use it to crop the face regions only. The MATLAB code is shown below:
I = imread('1.jpg');
I1 = imcrop(I,[218,198,158,122]);
I2 = imcrop(I,[243,209,140,108]);
figure, subplot(1,2,1),imshow(I1);
subplot(1,2,2),imshow(I2);
The two cropped images (RGB and depth) are shown below:
Is there any way by which we can register/align the images? I took the hint from here, where a basic Sobel operator is applied to both the RGB and depth images to generate edge maps, after which keypoints would need to be generated for matching. The edge maps for both images are shown here.
However, they are so noisy that I do not think we will be able to do keypoint matching on these images.
Can anybody suggest some algorithms in MATLAB to do the same?
prologue
This answer is based on my previous answer:
Does Kinect Infrared View Have an offset with the Kinect Depth View
I manually cropped your input image to separate the color and depth images (my program needs them separated); this may introduce a minor offset of a few pixels. Also, as I do not have the real depths (the depth image is only 8-bit because it is a grayscale RGB), the depth accuracy I work with is very poor. See:
So my results are negatively affected by all of this. Anyway, here is what you need to do:
determine FOV for both images
So find some measurable feature visible in both images. The bigger it is, the more accurate the result. For example, I chose these:
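A minimal MATLAB sketch of this step (the measurements here are hypothetical and only reproduce the ratios used in the code further down; measure your own feature, e.g. with imtool or ginput):
w_depth = 108; w_color = 98;    % feature width in pixels in the depth / color image (example values)
h_depth = 119; h_color = 106;   % feature height in pixels (example values)
sx = w_color / w_depth;         % scale that maps an x offset from the depth FOV to the color FOV
sy = h_color / h_depth;         % same for y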
form a point cloud or mesh
I use the depth image as the reference, so my point cloud is in its FOV. As I do not have distances but 8-bit values instead, I convert them to distances by multiplying by a constant. I scan the whole depth image and, for every pixel, create a point in my point cloud array. Then I convert the depth pixel coordinates to the color image FOV and copy its color too. Something like this (in C++):
picture rgb,zed; // your input images

struct pnt3d
    {
    float pos[3];
    DWORD rgb;
    pnt3d(){};
    pnt3d(pnt3d& a){ *this=a; };
    ~pnt3d(){};
    pnt3d* operator = (const pnt3d *a) { *this=*a; return this; };
    /*pnt3d* operator = (const pnt3d &a) { ...copy... return this; };*/
    };

pnt3d **xyz=NULL; int xs,ys,ofsx=0,ofsy=0;

void copy_images()
    {
    int x,y,x0,y0;
    float xx,yy;
    pnt3d *p;
    for (y=0;y<ys;y++)
     for (x=0;x<xs;x++)
        {
        p=&xyz[y][x];
        // copy point from depth image
        p->pos[0]=2.000*((float(x)/float(xs))-0.5);
        p->pos[1]=2.000*((float(y)/float(ys))-0.5)*(float(ys)/float(xs));
        p->pos[2]=10.0*float(DWORD(zed.p[y][x].db[0]))/255.0;
        // convert depth image x,y to color image space (FOV correction)
        xx=float(x)-(0.5*float(xs));
        yy=float(y)-(0.5*float(ys));
        xx*=98.0/108.0;
        yy*=106.0/119.0;
        xx+=0.5*float(rgb.xs);
        yy+=0.5*float(rgb.ys);
        x0=xx; x0+=ofsx;
        y0=yy; y0+=ofsy;
        // copy color from rgb image if in range
        p->rgb=0x00000000; // black
        if ((x0>=0)&&(x0<rgb.xs))
         if ((y0>=0)&&(y0<rgb.ys))
          p->rgb=rgb2bgr(rgb.p[y0][x0].dd); // OpenGL has reversed RGB order compared to my image class
        }
    }
where **xyz is my point cloud 2D array, allocated at the depth image resolution. picture is my image class for DIP, so here are the relevant members:
xs,ys is the image resolution in pixels
p[ys][xs] is direct pixel access; each pixel is a union of DWORD dd; BYTE db[4]; so I can access the color as a single 32-bit variable or each color channel separately.
rgb2bgr(DWORD col) just reorders the color channels from RGB to BGR.
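For the MATLAB side of the question, a rough sketch of the same per-pixel mapping is below. The file names depth.png and rgb.png are placeholders for your separated crops, and the scale ratios and offsets are just the example values from this answer - measure your own:
D   = im2double(imread('depth.png'));       % 8-bit single-channel depth image rescaled to [0,1]
RGB = imread('rgb.png');                    % color image
[ys, xs]      = size(D);
[rys, rxs, ~] = size(RGB);
sx = 98/108;  sy = 106/119;                 % FOV correction ratios (example values)
ofsx = 17;    ofsy = 4;                     % camera offset in pixels (found manually, see below)
aligned = zeros(ys, xs, 3, 'uint8');        % color re-sampled onto the depth image grid
for y = 1:ys
    for x = 1:xs
        % scale about the image centers, then shift by the offset
        x0 = round((x - xs/2)*sx + rxs/2 + ofsx);
        y0 = round((y - ys/2)*sy + rys/2 + ofsy);
        if x0 >= 1 && x0 <= rxs && y0 >= 1 && y0 <= rys
            aligned(y, x, :) = RGB(y0, x0, :);
        end
    end
end
figure, imshowpair(aligned, D);             % overlay to judge the alignment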
render it
I use OpenGL for this, so here is the code:
glBegin(GL_QUADS);
for (int y0=0,y1=1;y1<ys;y0++,y1++)
 for (int x0=0,x1=1;x1<xs;x0++,x1++)
    {
    float z,z0,z1;
    z=xyz[y0][x0].pos[2]; z0=z; z1=z0;
    z=xyz[y0][x1].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
    z=xyz[y1][x0].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
    z=xyz[y1][x1].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
    if (z0<=0.01) continue;
    if (z1>=3.90) continue; // 3.972 for everything above ~3.95 m, and 4.000 if nothing was detected at all
    if (z1-z0>=0.10) continue;
    glColor4ubv((BYTE* )&xyz[y0][x0].rgb);
    glVertex3fv((float*)&xyz[y0][x0].pos);
    glColor4ubv((BYTE* )&xyz[y0][x1].rgb);
    glVertex3fv((float*)&xyz[y0][x1].pos);
    glColor4ubv((BYTE* )&xyz[y1][x1].rgb);
    glVertex3fv((float*)&xyz[y1][x1].pos);
    glColor4ubv((BYTE* )&xyz[y1][x0].rgb);
    glVertex3fv((float*)&xyz[y1][x0].pos);
    }
glEnd();
You need to add the OpenGL initialization, camera settings, etc., of course. Here is the unaligned result:
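If you prefer to stay in MATLAB rather than OpenGL, a rough alternative is to plot the colored point cloud directly. This is only a sketch (not the author's code), reusing the D, aligned, xs, ys variables from the mapping sketch above:
[X, Y] = meshgrid(1:xs, 1:ys);
Z = 10 * D;                                  % same ad-hoc 8-bit -> distance scaling as above
C = reshape(im2double(aligned), [], 3);      % one RGB triple per point
valid = Z(:) > 0.01 & Z(:) < 3.9;            % drop background, similar to the quad filter above
figure, scatter3(X(valid), Y(valid), Z(valid), 2, C(valid,:), '.');
axis equal tight; view(3);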
align it
If you noticed, I added the ofsx,ofsy variables to copy_images(). This is the offset between the cameras. I change them by 1 pixel on arrow keystrokes, then call copy_images and render the result. This way I manually found the offset very quickly:
As you can see, the offset is +17 pixels along the x axis and +4 pixels along the y axis. Here is a side view to better see the depths:
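The same manual search can be mimicked in MATLAB by re-running the mapping for a grid of candidate offsets and inspecting the overlay; map_depth_to_rgb below is a hypothetical helper wrapping the mapping loop from the earlier sketch:
for ofsx = 10:20
    for ofsy = 0:8
        aligned = map_depth_to_rgb(D, RGB, sx, sy, ofsx, ofsy);   % hypothetical wrapper
        imshowpair(aligned, D); title(sprintf('ofs = (%d, %d)', ofsx, ofsy));
        drawnow; pause(0.2);
    end
end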
Hope it helps a bit.
Well, I have tried doing it after reading lots of blogs. I am still not sure whether I am doing it correctly or not; please feel free to comment if something is amiss. For this I used a MathWorks File Exchange submission that can be found here: ginputc function.
The MATLAB code is as follows:
clc; clear all; close all;

N = 7;                                  % number of keypoints to mark manually

% Gradient magnitude (Sobel) of the RGB image
I = imread('2.jpg');
I = rgb2gray(I);
[Gx, Gy] = imgradientxy(I, 'Sobel');
[Gmag, ~] = imgradient(Gx, Gy);
figure, imshow(Gmag, []), title('Gradient magnitude')
I = Gmag;
[x, y] = ginputc(N, 'Color', 'r');      % mark keypoints on the RGB gradient image
matchedpoint1 = [x y];

% Gradient magnitude (Sobel) of the depth image
J = imread('2.png');
[Gx, Gy] = imgradientxy(J, 'Sobel');
[Gmag, ~] = imgradient(Gx, Gy);
figure, imshow(Gmag, []), title('Gradient magnitude')
J = Gmag;
[x, y] = ginputc(N, 'Color', 'r');      % mark the corresponding keypoints on the depth gradient image
matchedpoint2 = [x y];

% Estimate a similarity transform from the manually matched points
[tform, inlierPtsDistorted, inlierPtsOriginal] = ...
    estimateGeometricTransform(matchedpoint2, matchedpoint1, 'similarity');
figure; showMatchedFeatures(J, I, inlierPtsOriginal, inlierPtsDistorted);
title('Matched inlier points');

% Warp the depth image into the RGB image frame
I = imread('2.jpg'); J = imread('2.png');
I = rgb2gray(I);
outputView = imref2d(size(I));
Ir = imwarp(J, tform, 'OutputView', outputView);
figure; imshow(Ir, []);
title('Recovered image');

figure, imshowpair(I, J, 'diff'), title('Difference with original');
figure, imshowpair(I, Ir, 'diff'), title('Difference with restored');
Step 1
I used the Sobel operator to extract the edges of both the depth and RGB images and then applied a threshold to get the edge maps. I work primarily with the gradient magnitude only. This gives me two images like this:
Step 2
Next I use the ginput or ginputc function to mark keypoints on both images. The correspondence between the points is established by me beforehand. I tried using SURF features, but they do not work well on depth images.
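For reference, the automatic SURF-based matching that was tried (and did not work well on the depth image) would look roughly like this sketch, applied to the two gradient-magnitude images I and J from the code above:
ptsI = detectSURFFeatures(I);
ptsJ = detectSURFFeatures(J);
[fI, vI] = extractFeatures(I, ptsI);
[fJ, vJ] = extractFeatures(J, ptsJ);
pairs = matchFeatures(fI, fJ);
figure, showMatchedFeatures(I, J, vI(pairs(:,1)), vJ(pairs(:,2)));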
Step 3
Use estimateGeometricTransform to get the transformation matrix tform, and then use this matrix to recover the original position of the moved image. The next set of images tells this story.
Granted, I still believe the results can be improved further if the keypoints in either image are selected more judiciously. I also think @Specktre's method is better. I just noticed that I used a different image pair in my answer from the one in the question; both images come from the same dataset, which can be found here: vap rgb-d-t dataset.
Related
I'm working with the Kinect v2 and I have to map the depth information onto the RGB images in order to process them: in particular, I need to know which pixels in the RGB images are within a certain range of distance (depth) along the Z axis. I'm acquiring all the data with a C# program and saving it as images (RGB) and txt files (depth).
I've followed the instructions from here and here (and I thank them for sharing), but I still have some problems I don't know how to solve.
I have calculated the rotation matrix (R) and translation vector (T) between the depth sensor and the RGB camera, as well as their intrinsic parameters.
I have created P3D_d (depth pixels in world coordinates relative to the depth sensor) and P3D_rgb (depth pixels in world coordinates relative to the RGB camera).
row_num = 424;
col_num = 512;
P3D_d   = zeros(row_num, col_num, 3);
P3D_rgb = zeros(row_num, col_num, 3);
cont = 1;
for row = 1:row_num
    for col = 1:col_num
        P3D_d(row,col,1) = (row - cx_d) * depth(row,col) / fx_d;
        P3D_d(row,col,2) = (col - cy_d) * depth(row,col) / fy_d;
        P3D_d(row,col,3) = depth(row,col);
        temp = [P3D_d(row,col,1); P3D_d(row,col,2); P3D_d(row,col,3)];
        P3D_rgb(row,col,:) = R*temp + T;
    end
end
I have created P2D_rgb_x and P2D_rgb_y:
P2D_rgb_x(:,:,1) = (P3D_rgb(:,:,1)./P3D_rgb(:,:,3))*fx_rgb+cx_rgb;
P2D_rgb_y(:,:,2) = (P3D_rgb(:,:,2)./P3D_rgb(:,:,3))*fy_rgb+cy_rgb;
but now I don't understand how to continue.
Assuming that the calibration parameters are correct, I've tried clicking on the same physical point in both the depth image (coordinates: row_d, col_d) and the RGB image (coordinates: row_rgb, col_rgb), but P2D_rgb_x(row_d, col_d) is totally different from row_rgb, and P2D_rgb_y(row_d, col_d) is totally different from col_rgb.
So, what exactly do P2D_rgb_x and P2D_rgb_y mean? How can I use them to map depth values onto the RGB images, or simply to get the depth of a certain RGB pixel?
I'll appreciate any suggestion or help!
PS: I also have a related post on MathWorks at this link
For exactly the same image:
OpenCV code:
Mat img = imread("testImg.png", 0);
Mat img_bw;
threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Mat tmp;
img_bw.copyTo(tmp);     // findContours modifies its input, so work on a copy
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

// Get the moments
vector<Moments> mu(contours.size());
for (size_t i = 0; i < contours.size(); i++)
    mu[i] = moments(contours[i], false);

// Display the area (m00)
for (size_t i = 0; i < contours.size(); i++)
{
    cout << mu[i].m00 << endl;
    // I also tried contourArea(contours.at(i)), but the result is the same
}
MATLAB code:
Img = imread('testImg.png');
lvl = graythresh(Img);
bw = im2bw(Img,lvl);
stats = regionprops(bw,'Area');
for k = 1:length(stats)
    Area = stats(k).Area;   % m00
end
Does anyone have any thoughts on this? How can I unify them? I think they use different methods to find contours.
I uploaded the test image at the link below so that anyone interested can reproduce the procedure.
It is a small 100 by 100, 8-bit grayscale image with only 0 and 255 pixel intensities. For simplicity, it has only one blob on it.
For OpenCV, the area of the contour (image moment m00) is 609.5 (a very odd value).
For MATLAB, the area of the contour (image moment m00) is 763.
Thanks
Many different definitions exist of how contours should be extracted from a binary image. For example, a contour can be the polygon that forms the perimeter of a white object in a binary image. If this definition were used by OpenCV, then the areas of the contours would be the same as the areas of the connected components found by MATLAB. But this is not the case. The contour found by the findContours() function is the polygon that connects the centers of neighboring "edge pixels", where an edge pixel is a white pixel that has a black neighbor in its N4 neighborhood.
Example: suppose you have an image of size 100x100 pixels. Every pixel above the diagonal is black; every pixel below or on the diagonal is white (a black triangle and a white triangle). The exact separation polygon will have almost 200 vertices at a distance of 1 pixel: (0,0), (1,0), (1,1), (2,1), (2,2), ..., (100,99), (100,100), (0,100). As you can see, this definition is not very good from a practical point of view. The polygon returned by OpenCV will have exactly the 3 vertices needed to define the triangle: (0,0), (99,99), (0,99). Its area is (99 x 99 / 2) pixels. It is not equal to the number of white pixels, and it is not even an integer. But this polygon is more practical than the previous one.
Those are not the only possible definitions for polygon extraction. Many others exist. Some of them (in my opinion) may be better than the one used by OpenCV, but this is the one that was implemented and is used by a lot of people.
Currently there is no effective workaround for your problem. If you want to get exactly the same numbers from MATLAB and OpenCV, you will have to draw the contours found by findContours on a black image and use the moments() function on that image. I know that the upcoming OpenCV 3 has a function that finds connected components, but I haven't tried it myself.
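If all you need is to reproduce OpenCV's number on the MATLAB side, and you can export the contour vertices from OpenCV (e.g. to a text file), the polygon area can be computed directly. This is only a sketch with a hypothetical file name, not part of the original answer:
P = dlmread('contour.txt');          % Nx2 matrix of [x y] contour vertices exported from OpenCV
area = polyarea(P(:,1), P(:,2));     % shoelace formula, the same quantity contourArea / m00 reports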
My primary motive is to detect a hand in simple RGB images (images from my webcam).
I found a sample code, find_hand_point:
function [result, depth] = find_hand_point(depth_frame)
% function result = find_hand_point(depth_frame)
%
% returns the coordinate of a pixel that we expect to belong to the hand.
% very simple implementation, we assume that the hand is the closest object
% to the sensor.
max_value = max(depth_frame(:));
current2 = depth_frame;
current2(depth_frame == 0) = max_value;
blurred = imfilter(current2, ones(5, 5)/25, 'symmetric', 'same');
minimum = min(blurred(:));
[is, js] = find(blurred == minimum);
result = [is(1), js(1)];
depth = minimum;
The result variable is the coordinate of the thing nearest to the camera (the hand).
A depth image from a Kinect device was passed to this function, and the result is:
http://img839.imageshack.us/img839/5562/testcs.jpg
The green rectangle shows the closest thing to the camera (the hand).
PROBLEM:
The images that my laptop camera captures are not depth images but simple RGB images.
Is there a way to convert my RGB images to such depth images?
Is there a simple alternative technique to detect the hand?
The Kinect uses extra sensors to retrieve the depth data. There is not enough information in a single webcam image to reconstruct a 3D picture, but it is possible to make estimates based on a series of images. This is the principle behind XTR-3D and similar solutions.
A much simpler approach can be found in http://www.andol.info/hci/830.htm
There the author converts the RGB image to HSV and keeps only specific ranges of the H, S and V values that he assumes correspond to hand-like (skin) colors.
In MATLAB:
function [hand] = get_hand(rgb_image)
    % Note: MATLAB's rgb2hsv returns values in [0,1], so the channels are
    % rescaled here so that the 0-255 style thresholds below make sense.
    hsv_image = rgb2hsv(rgb_image) * 255;
    hand = (hsv_image(:,:,1) >= 0)  & (hsv_image(:,:,1) < 20)  & ...
           (hsv_image(:,:,2) >= 30) & (hsv_image(:,:,2) < 150) & ...
           (hsv_image(:,:,3) >= 80) & (hsv_image(:,:,3) < 255);
end
The hand = ... line will give you a matrix that has 1s at the pixels where
0 <= H < 20 AND 30 <= S < 150 AND 80 <= V < 255
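A minimal usage sketch, with a hypothetical file name, assuming the rescaling noted in the function above:
rgb_image = imread('webcam_frame.png');     % hypothetical webcam capture
mask = get_hand(rgb_image);
figure, subplot(1,2,1), imshow(rgb_image), subplot(1,2,2), imshow(mask);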
A better technique I found to detect the hand via skin color :)
http://www.edaboard.com/thread200673.html
The following code takes an image of a grape that I photographed (called 'full_img') and calculates the area of the grape:
RGB = imread(full_img);
GRAY = rgb2gray(RGB);
threshold = graythresh(GRAY);
originalImage = im2bw(GRAY, threshold);
originalImage = bwareaopen(originalImage,250);
SE = strel('disk',10);
IM2 = imclose(originalImage,SE);
originalImage = IM2;
labeledImage = bwlabel(originalImage, 8); % Label each blob so we can make measurements of it
blobMeasurements = regionprops(labeledImage, originalImage, 'all');
numberOfBlobs = length(blobMeasurements);
pixperinch = get(0,'ScreenPixelsPerInch');       %# resolution of your display
dpix  = blobMeasurements(numberOfBlobs).Area;    %# blob area in pixels
dinch = dpix/pixperinch;                         %# convert from pixels to inches
dcm   = dinch*2.54;                              %# convert from inches to cm
blobArea = dcm;                                  % Get area.
If I photograph the same grape under the same conditions with different cameras (from the same distance and with the same lighting), will I get the same results? (What if I have a 5-megapixel camera and a 12-megapixel camera?)
No, it won't. You go from image coordinates to world coordinates using dpix/pixperinch. In general this is wrong; it will only work for a specific image (and that alone), and only if you know the pixperinch. In order to get the geometric characteristics of an object in an image (e.g. length, area, etc.), you must back-project the image pixels into Cartesian space using the camera matrix and the inverse projective transformation, in order to get Cartesian coordinates (let alone calibrating the camera for lens distortion, which is a nonlinear problem). Then you can perform the calculations. Your code won't work even for the same camera.
See this for more.
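To make the back-projection idea concrete, here is a minimal sketch for the simplest case of a fronto-parallel, planar object at a known distance from a calibrated pinhole camera; the focal lengths and distance below are placeholders, and lens distortion is ignored:
fx = 1200; fy = 1200;       % focal lengths in pixels, from camera calibration (placeholders)
Z  = 0.5;                   % distance from the camera to the grape in metres (placeholder)
area_px  = blobMeasurements(numberOfBlobs).Area;   % pixel area from the code above
area_m2  = area_px * (Z/fx) * (Z/fy);              % one pixel covers (Z/fx) x (Z/fy) square metres on the object plane
area_cm2 = area_m2 * 1e4;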