Plot depth image in MATLAB

I have an RGB-D image and am trying to get a 3D visualization in MATLAB. Currently I am doing:
depth = imread('img_031_depth.png');
depth = double(depth);
img = imread('img_031.png');
surf(depth, img, 'FaceColor', 'texturemap', 'EdgeColor', 'none' )
view(158, 38)
Which gives me an image like:
I have two questions:
1) How can I save the image without it blurring as above?
2) As you can see, some edges show lines going to zero (e.g. the top of the coffee cup); I would like to remove these.
What I'm trying to produce is a 3D-looking point cloud; as these are only 2.5D, I must show them from the right angle.
Any help is appreciated
EDIT: added images (note depth image needs to be normalized for visualization)

If you are only interested in a point cloud, you might want to consider scatter3.
You can select which points to plot (discard those with depth == 0).
You need to have explicit x-y coordinates though.
[y x] = ndgrid( 1:size(img,1), 1:size(img,2) );
sel = depth > 0 ; % which points to plot
% "flatten" the matrices for scatter plot
x = x(:);
y = y(:);
img = reshape( im2double(img), [], 3 ); % scatter3 expects RGB colors in [0,1]
depth = depth(:);
scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );
view(158, 38)
Edit: sampled version
[y x] = ndgrid( 1:2:size(img,1), 1:2:size(img,2) );
sel = depth( 1:2:end, 1:2:end ) > 0;
x = x(:);
y = y(:);
img = reshape( im2double( img( 1:2:end, 1:2:end, : ) ), [], 3 );
depth = depth( 1:2:end, 1:2:end );
scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );
view( 158, 38 );
Alternatively, you can directly manipulate the sel mask.

I suggest you first restore x = zu/f and y = zv/f to obtain x, y, z, where f is your camera focal length;
then apply whatever rotation and translation you want before displaying the points: [x', y', z'] = R[x, y, z] + t;
then project them back using col = x'f/z' + w/2 and row = h/2 - y'f/z' to get a simple image that you can display fast. You can add a depth buffer to the last operation to guarantee
proper occlusions: write the depth at each pixel there, and allow a pixel to be overwritten only if the new z is smaller (that is, the new point is closer to the viewer). The resulting image will still have holes due to the nature of point clouds. You can interpolate in those holes, but this means you have to trace a ray from every pixel in the image to your point cloud and find the nearest neighbor to the ray, which would probably take forever in MATLAB.
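To make that concrete, here is a minimal MATLAB sketch of that pipeline, under assumptions of mine: a single focal length f (roughly 580 pixels for a Kinect-style sensor, but check your device), the principal point at the image center, and placeholder R and t:
% Back-project, rigidly transform, and re-project a depth image with a
% z-buffer. f, R and t are placeholders -- substitute your own values.
f = 580;                       % focal length in pixels (assumed)
R = eye(3);                    % rotation to apply
t = [0; 0; 0];                 % translation to apply
[h, w] = size(depth);
[r, c] = ndgrid(1:h, 1:w);     % pixel (row, col) coordinates
sel = depth > 0;               % skip missing measurements
z = depth(sel);
u = c(sel) - w/2;              % centered image coordinates
v = h/2 - r(sel);
P = [z.*u/f, z.*v/f, z]';      % 3xN points: x = z*u/f, y = z*v/f
P = R*P + t;                   % rigid transform (implicit expansion, R2016b+)
col = round(P(1,:)*f./P(3,:) + w/2);   % project back
row = round(h/2 - P(2,:)*f./P(3,:));
zbuf = inf(h, w);              % depth buffer
in = col >= 1 & col <= w & row >= 1 & row <= h;
idx = sub2ind([h w], row(in), col(in));
zn = P(3,in);
for k = 1:numel(idx)           % keep only the closest point per pixel
    if zn(k) < zbuf(idx(k))
        zbuf(idx(k)) = zn(k);
    end
end
figure; imagesc(zbuf, [min(z) max(z)]); axis image;  % holes remain Inf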

I am also doing some 3D image restoration and reconstruction. The first question is easy: your photo is taken by a camera, so you need to transform the positions into the camera coordinate system. In other words, you need to know the intrinsic parameters of your camera; without them you can never recover the geometry from a single image. Google 'kinect intrinsic value' to get the focal length etc.
Also, change your view.
Try this! And if it's not working, ask again.

Related

Pupil detection with Hough transform

So I am working on my dissertation thesis and I have to detect the pupil in images using the Hough transform. So far I have written code that identifies 2 circles in my image, but right now I have to keep only the black circle of the pupil.
When I run the code, it identifies the pupil, but also a random circle on the cheek. My professor said that I should calculate the mean of the pixels and, considering the fact that the pupil is black, keep only the pixels from that region. I don't know how to do this.
I will leave my code here for you to have a look at, and if someone has an idea of how I should write this to keep only the black pixels, that would be great. I have also attached the final image to show what I obtained.
close all
clear all
path='C:\Users\Ioana PMEC\OneDrive\Ioana personal\Disertatie\test.jpg';
% Read the initial image
xx = imread(path);
figure
imshow(xx)
title('Initial image'); % Binarize the initial image
yy = rgb2gray(xx);
figure
imshow(yy);
title('Binarized image');
e = edge(yy, 'canny');
imshow(e);
radii = 11:1:30;
h = circle_hough(e, radii, 'same', 'normalise');
peaks = circle_houghpeaks(h, radii, 'nhoodxy', 15, 'nhoodr', 21, 'npeaks', 2);
imshow(yy);
hold on;
for peak = peaks
    [x, y] = circlepoints(peak(3));
    plot(x+peak(1), y+peak(2), 'r-');
end
hold off
[test image]
[final image]
I implemented something which should get the task done for you. The example is done with the image you provided.
Step 1: Read the file(s) and convert them to grayscale.
path = 'test.jpg'; % user input: path to your image
RGB = imread(path);
lab = rgb2lab(RGB); % (computed but not used below)
grayscale_image = rgb2gray(RGB);
Step 2: Do the Hough transformation with given parameters.
These, and also the sensitivity, can be adapted to your task. Hint: play around in the Image Segmenter app for fast parameter finding. Next, the detected circle parameters are converted to integer values, since these are required for indexing.
min_radius = 10;
max_radius = 50;
% Find circles
[centers,radii,~] = imfindcircles(RGB,[min_radius max_radius],'ObjectPolarity','dark','Sensitivity',0.95);
centers = uint16(centers);
radii = uint16(radii);
The annotated image appears as follows:
Step 3: Get the brightness values of the circles.
From the circle center and radius values, we infer their respective brightness. It is sufficient to only check for the x and y pixel values left/right and above/below the center. (-1 is just a safety margin to completely stay within the circles.)
brightness_checker = zeros(size(centers,1), max(radii), 2);
for i = 1:size(centers,1)
    current_radii = radii(i)-1;
    for j = 1:current_radii
        % X-center minus radius, step along x-axis
        brightness_checker(i, j, 1) = grayscale_image((centers(i,2) - current_radii/2) + j,...
            (centers(i,1) - radii(i)/2) + j);
        % Y-center minus radius, step along y-axis
        brightness_checker(i, j, 2) = grayscale_image((centers(i,2) - current_radii/2) + j,...
            (centers(i,1) - current_radii/2) + j);
    end
end
Step 4: Check which circle is a pupil.
The threshold value of 30 was determined empirically and could potentially be improved.
median_x = median(brightness_checker(:,:,1),2);
median_y = median(brightness_checker(:,:,2),2);
is_pupil = (median_x<30)&(median_y<30);
pupils_center = centers(is_pupil == true,:);
Step 5: Draw the pupils.
The marker can be changed. Refer to:
https://de.mathworks.com/help/matlab/ref/matlab.graphics.chart.primitive.line-properties.html
figure
imshow(grayscale_image);
hold on
plot(centers(:,1), centers(:,2), 'r+', 'MarkerSize', 20, 'LineWidth', 2);
hold on
plot(pupils_center(:,1), pupils_center(:,2), 'b+', 'MarkerSize', 20, 'LineWidth', 2);
This is the final output:

How to convert from the image coordinates to Cartesian coordinates

I have this 3D image generated from the simple code below.
% Input Image size
imageSizeY = 200;
imageSizeX = 120;
imageSizeZ = 100;
%# create coordinates
[rowsInImage, columnsInImage, pagesInImage] = meshgrid(1:imageSizeY, 1:imageSizeX, 1:imageSizeZ);
%# get coordinate array of vertices
vertexCoords = [rowsInImage(:), columnsInImage(:), pagesInImage(:)];
centerY = imageSizeY/2;
centerX = imageSizeX/2;
centerZ = imageSizeZ/2;
radius = 28;
%# calculate distance from center of the cube
sphereVoxels = (rowsInImage - centerY).^2 + (columnsInImage - centerX).^2 + (pagesInImage - centerZ).^2 <= radius.^2;
%# Now, display it using an isosurface and a patch
fv = isosurface(sphereVoxels,0);
patch(fv,'FaceColor',[0 0 .7],'EdgeColor',[0 0 1]); title('Binary volume of a sphere');
view(45,45);
axis equal;
grid on;
xlabel('x-axis [pixels]'); ylabel('y-axis [pixels]'); zlabel('z-axis [pixels]')
I have tried plotting the image with isosurface and some other volume visualization tools, but there remain quite a few surprises for me in the plots.
The code has been written to conform to the image coordinate system (e.g. see vertexCoords), which is a left-handed coordinate system, I presume. Nonetheless, the image is displayed in the Cartesian (right-handed) coordinate system. I have tried to get it displayed as in the figure below, but that's simply not happening.
I am wondering if the visualization functions have been written to display the image the way they do.
Image coordinate system:
Going forward, there are other aspects of the code I am to write. For example, given an input image sphereVoxels as above, in addition to visualizing it, I would want to find the north, south, east, west, top, and bottom locations in the image, as well as count the vertices and record their coordinates, plus more.
I foresee this would likely become confusing for me if I don't stick to one coordinate system, and considering that the visualization tools predominantly use the right-handed coordinate system, I would want to stick with that from the onset. However, I really do not know how to go about this.
Right-hand coordinate system:
Any suggestions to get through this?
When you call meshgrid, the x and y dimensions of the output are switched (contrary to ndgrid). In your case, it means that rowsInImage is a [120x200x100] array, indexed as [x,y,z], and not a [200x120x100] array indexed as [y,x,z], even though meshgrid was called with its arguments in the y, x, z order. I would change those two lines to use the classical x, y, z argument order:
[columnsInImage, rowsInImage, pagesInImage] = meshgrid(1:imageSizeX, 1:imageSizeY, 1:imageSizeZ);
vertexCoords = [columnsInImage(:), rowsInImage(:), pagesInImage(:)];
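A quick check of the dimension ordering (my own addition, not part of the original answer):
% meshgrid swaps the first two dimensions; ndgrid keeps argument order:
[X, Y, Z] = meshgrid(1:200, 1:120, 1:100);
size(X)    % ans = [120 200 100], i.e. (y, x, z)
[R, C, P] = ndgrid(1:200, 1:120, 1:100);
size(R)    % ans = [200 120 100], i.e. arguments in order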

How to crop and rotate an image to bounding box?

I have a dataset of thousands of images containing hands.
I also have .mat files which contain the coordinates of the 4 corners of the bounding box.
However, the edges of these bounding boxes are at an angle to the x and y axes. For example:
I want to crop out the hands using the bounding box coordinates & then rotate the hands such that they are aligned with the x or y axis.
EDIT:
The hand is represented as follows:
However, please keep in mind that the rectangle is NOT straight. So, I'll have to rotate it to straighten it out.
Good one!
First step:
Compute the size of the rectangle
width = sqrt( sum( (b-a).^2 ) );
height = sqrt( sum( (c-b).^2 ) );
Second step:
Compute an affine transformation from a...d to an upright image
Xin = [a(2) b(2) c(2) d(2)];
Yin = [a(1) b(1) c(1) d(1)];
Xout = [width 1 1 width];
Yout = [1 1 height height];
A = [Xin;Yin;ones(1,4)]';
B = [Xout; Yout]';
H = A \ B; % affine transformation: maps [Xin Yin 1] to [Xout Yout]
Note that despite the fact that we allow H to be a general affine transformation, the choice of corners (consistent with width and height) ensures that H will not distort the cropped rectangle.
optionally use cp2tform:
H2 = cp2tform( [Xin;Yin]', [Xout;Yout]', 'nonreflective similarity' );
Third step
Use the transformation to get the relevant image part
thumb = tformarray( img, maketform( 'affine', H ), ...
    makeresampler( 'cubic', 'fill' ), ...
    1:2, 1:2, ceil( [height width] ), [], 0 );
optionally use imtransform:
thumb = imtransform( img, H2, 'bicubic' );
A note regarding vectorization:
Depending on how the coordinates of the corners a...d are stored, the first two steps can easily be vectorized, as sketched below.
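For example, a minimal sketch assuming the four corners are stacked as rows of a 4x2 matrix P = [a; b; c; d] in (row, col) order (that storage layout is my assumption):
P = [a; b; c; d];                                   % one corner per row (assumed layout)
sides = sqrt( sum( diff( P([1:4 1], :) ).^2, 2 ) ); % lengths of sides a-b, b-c, c-d, d-a
width  = sides(1);                                  % |b - a|
height = sides(2);                                  % |c - b|
Xin = P(:,2)';                                      % x (column) coordinates of a..d
Yin = P(:,1)';                                      % y (row) coordinates of a..d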
You can rotate images using the imrotate command.
You can crop images (once they are rotated properly) by using indexing. i.e.
subimg = img( c(1):b(1), c(2):d(2) )
(Note: the above line assumes you've tracked the corners through the imrotate command, so that c(2) == b(2), c(1) == d(1), etc.)
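As a rough sketch of that route (my assumptions: corners stored as (row, col) pairs, with a->b being the top edge of the box):
% Undo the box rotation, then crop. The sign of the angle may need
% flipping depending on how your corners are ordered.
theta = atan2d( b(1) - a(1), b(2) - a(2) );  % angle of edge a->b against the x-axis
rot = imrotate( img, theta, 'bilinear' );    % positive angle = counter-clockwise
% After mapping the corners into 'rot' (rotation about the image center,
% plus the offset introduced by the enlarged canvas), crop by indexing:
% subimg = rot( top:bottom, left:right, : );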

OpenCV MATLAB: How to draw a line having a particular Intensity profile?

Below is an arbitrary hand-drawn Intensity profile of a line in an image:
The task is to draw the line. The profile can be approximated by an arc of a circle or an ellipse.
I am doing this for camera calibration. Since I do not have the actual industrial camera, I am trying to simulate the correction needed for calibration.
The question can be rephrased as: I want pixel values which follow a plot similar to the one above. I want to do this programmatically (preferably using OpenCV) and not enter these values manually, because I have thousands of pixels in the line.
An algorithm/pseudocode will suffice. Also, please note that I do not have an actual intensity profile; otherwise I would simply have read those values.
When would you encounter such a situation?
Suppose you take a picture (assume it is completely white) with a camera, your object being placed on a table and the camera just above it, pointing vertically down. The light arriving at the center of the picture, vertically below the camera, will be stronger in intensity than the light reflected at the edges. If you measure pixel values across any line in the image, you will find an intensity curve like the one shown above. Since I don't have the camera for the time being, I want to emulate this situation. How can I achieve this?
This is not exactly image processing, rather image generation... but anyway.
Since you want an arc, we need three points on that arc; let's take the first, middle and last point (key characteristics in my opinion):
N = 100; % number of pixels
x1 = 1;
x2 = floor(N/2);
x3 = N;
y1 = 242;
y2 = 255;
y3 = 242;
and now draw a circular arc that contains these points.
This problem has already been discussed for MATLAB here: http://www.mathworks.nl/matlabcentral/newsreader/view_thread/297070
x21 = x2-x1; y21 = y2-y1;
x31 = x3-x1; y31 = y3-y1;
h21 = x21^2+y21^2; h31 = x31^2+y31^2;
d = 2*(x21*y31-x31*y21);
a = x1+(h21*y31-h31*y21)/d; % circle center x
b = y1-(h21*x31-h31*x21)/d; % circle center y
r = sqrt(h21*h31*((x3-x2)^2+(y3-y2)^2))/abs(d); % circle radius
If you assume the middle value is always larger (and thus it's the upper part of the circle you'll have to plot), you can draw this with:
x = x1:x3;
y = b+sqrt(r^2-(x-a).^2);
plot(x,y);
You can adjust the visible window with
xlim([1 N]);
ylim([200 260]);
which gives me the following result:
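If you then need the profile as actual pixel data rather than a plot (my extrapolation of the question, not part of the original answer), the sampled arc can be written into an image row:
% Turn the arc into a grayscale strip: one intensity value per pixel.
vals = uint8( round( b + sqrt( r^2 - ((1:N) - a).^2 ) ) ); % profile values, ~242..255
strip = repmat( vals, 20, 1 );   % replicate into a 20-pixel-high band for display
figure; imshow( strip );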

How to measure the rotation of an image in MATLAB?

I have two images. One is the original, and the other is a rotated version.
Now I need to discover the angle by which the image was rotated. Until now, I thought about finding the centroids of each color (as every image I will use has colored squares in it) and using them to discover how much the image was rotated, but I failed.
I'm using this to find the centroids and the color of the topmost square in the image:
i = rgb2gray(img);
bw = im2bw(i,0.01);
s = regionprops(bw,'Centroid');
centroids = cat(1, s.Centroid);
colors = impixel(img,centroids(1),centroids(2));
top = max(centroids);
topcolor = impixel(img,top(1),top(2));
You can detect the corners of one of the colored rectangles in both the image and the rotated version, and use these as control points to infer the transformation between the two images (like in image registration) using the CP2TFORM function. We can then compute the angle of rotation from the affine transformation matrix:
Here is some example code:
%# read first image (indexed color image)
[I1 map1] = imread('http://i.stack.imgur.com/LwuW3.png');
%# constructed rotated image
deg = -15;
I2 = imrotate(I1, deg, 'bilinear', 'crop');
%# find blue rectangle
BW1 = (I1==2);
BW2 = imrotate(BW1, deg, 'bilinear', 'crop');
%# detect corners in both
p1 = corner(BW1, 'QualityLevel',0.5);
p2 = corner(BW2, 'QualityLevel',0.5);
%# sort corners coordinates in a consistent way (counter-clockwise)
p1 = sortrows(p1,[2 1]);
p2 = sortrows(p2,[2 1]);
idx = convhull(p1(:,1), p1(:,2)); p1 = p1(idx(1:end-1),:);
idx = convhull(p2(:,1), p2(:,2)); p2 = p2(idx(1:end-1),:);
%# make sure we have the same number of corner points
sz = min(size(p1,1),size(p2,1));
p1 = p1(1:sz,:); p2 = p2(1:sz,:);
%# infer transformation from corner points
t = cp2tform(p2,p1,'nonreflective similarity'); %# 'affine'
%# rotate image to match the other
II2 = imtransform(I2, t, 'XData',[1 size(I1,2)], 'YData',[1 size(I1,1)]);
%# recover affine transformation params (translation, rotation, scale)
ss = t.tdata.Tinv(2,1);
sc = t.tdata.Tinv(1,1);
tx = t.tdata.Tinv(3,1);
ty = t.tdata.Tinv(3,2);
translation = [tx ty];
scale = sqrt(ss*ss + sc*sc);
rotation = atan2(ss,sc)*180/pi;
%# plot the results
subplot(311), imshow(I1,map1), title('I1')
hold on, plot(p1(:,1),p1(:,2),'go')
subplot(312), imshow(I2,map1), title('I2')
hold on, plot(p2(:,1),p2(:,2),'go')
subplot(313), imshow(II2,map1)
title(sprintf('recovered angle = %g',rotation))
If you can identify a color corresponding to only one component, it is easier to (a rough MATLAB sketch follows below):
1. Calculate the centroids for each image.
2. Calculate the mean of the centroids (in x and y) for each image. This is the "center" of each image.
3. Get the centroid of the red component (in your example) for each image.
4. Subtract each image's mean of centroids from that image's red-component centroid.
5. Calculate the ArcTan2 of each of the vectors computed in 4), and subtract the two angles. That is your result.
If you have more than one figure of each color, you need to calculate all possible combinations for the rotation and then select the one that is compatible with the other possible rotations.
I could post the code in Mathematica, if you think it is useful.
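In the meantime, here is a rough MATLAB translation of those five steps (my own sketch; it assumes exactly one red square per image and uses a crude fixed-threshold red mask):
function angle = rotation_angle( img1, img2 )
% Estimate the rotation between two images from the red square's position
% relative to the mean of all centroids (steps 1-5 above).
    c1 = all_centroids( img1 );  c2 = all_centroids( img2 );   % step 1
    m1 = mean( c1, 1 );          m2 = mean( c2, 1 );           % step 2
    r1 = red_centroid( img1 );   r2 = red_centroid( img2 );    % step 3
    v1 = r1 - m1;                v2 = r2 - m2;                 % step 4
    angle = ( atan2( v2(2), v2(1) ) - atan2( v1(2), v1(1) ) ) * 180/pi; % step 5
end
function c = all_centroids( img )
    bw = im2bw( rgb2gray( img ), 0.01 );
    s = regionprops( bw, 'Centroid' );
    c = cat( 1, s.Centroid );
end
function c = red_centroid( img )
    % crude red mask -- the thresholds are assumptions, tune for your images
    mask = img(:,:,1) > 128 & img(:,:,2) < 100 & img(:,:,3) < 100;
    s = regionprops( mask, 'Centroid' );
    c = s(1).Centroid;
end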
I would take a variant of the above-mentioned approach:
% Crude binarization method to knock out the background and retain the
% foreground features. Note that one loses the cube in the middle.
im = im > 1;
Then I would get the 2D autocorrelation:
acf = normxcorr2(im, im);
From this result, one can easily detect the peaks, and since rotation carries over into the autocorrelation function (ACF) domain, one can determine the rotation by matching the peaks between the original ACF and the ACF of the rotated image, for example using the so-called Hungarian algorithm.
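A partial sketch of that matching idea (my own addition; im1 and im2 stand for the binarized original and rotated images, and the assignment step is only indicated -- matchpairs, available since R2019a, solves the underlying linear assignment problem):
% Extract strong local maxima of each ACF as candidate peaks.
acf1 = normxcorr2(double(im1), double(im1));
acf2 = normxcorr2(double(im2), double(im2));
p1 = acf_peaks(acf1);
p2 = acf_peaks(acf2);
% Matching p1 to p2 (e.g. with matchpairs on a pairwise-distance cost
% matrix) and comparing the angles of matched peak vectors relative to
% the ACF center then yields the rotation estimate.
function p = acf_peaks( acf )
    bw = imregionalmax( acf ) & ( acf > 0.5 );  % keep pronounced peaks only
    s = regionprops( bw, 'Centroid' );
    p = cat( 1, s.Centroid );
end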