Delaunay command gives fewer triangles than expected in MATLAB

I have two unregistered images and a base image that I use as the reference for registration. The registration is performed as demonstrated in the MATLAB SURF example. All three images are 100x100, so after applying the transformation matrices and saving the coordinates of all three images in a matrix named registeredPts, I apply the delaunay command to this 30,000x2 matrix and get only about 20,000 triangles, whereas as far as I know I should get approximately 60,000.
I have to use the Delaunay triangulation for image interpolation, and I cannot figure out why so few triangles are formed; I cannot find any fault in the feature-based registration.
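For reference, a Delaunay triangulation of n points in general position contains roughly 2n triangles, so about 60,000 is indeed the right order of magnitude for 30,000 points. A quick sanity check, using random points as a stand-in for the registered coordinates:
n = 30000;
pts = rand(n, 2);              % random stand-in for registeredPts
tri = delaunay(pts(:,1), pts(:,2));
size(tri, 1)                   % prints roughly 2*n, i.e. about 60,000 triangles
Here is my code: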
close all
clear all
K = 2;
P1 = imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
% if the image is RGB, convert it to grayscale here:
% P1=rgb2gray(P1);
%reads the image to be registered
P2 = imread('C:\Users\Javeria Farooq\Desktop\project images\b.pgm');
% P2=rgb2gray(P2);
P3 = imread('C:\Users\Javeria Farooq\Desktop\project images\c.pgm');
% P3=rgb2gray(P3);
%reads the base image
image1_gray = makelr(P1, 1, 100, 1/2);
%image1_gray = P1;
% makes lr image of first
image2_gray= makelr(P2, 1, 100, 1/2);
image3_gray= makelr(P3, 1, 100, 1/2);
%image2_gray= P2;
%makes lr image of second
figure(1),imshow(image1_gray)
axis on;
grid on;
title('Unregistered image');
figure(2),imshow(image3_gray)
axis on;
grid on;
title('Unregistered image2');
figure(3),imshow(image2_gray)
axis on;
grid on;
title('Base image ');
impixelinfo
% all images displayed with pixel info
hold on
points_image1= detectSURFFeatures(image1_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
%detects surf features of first image
points_image2 = detectSURFFeatures(image2_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
points_image3 = detectSURFFeatures(image3_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
%detects SURF features of the other two images
[features_image1, validPoints_image1] = extractFeatures(image1_gray, points_image1);
[features_image2, validPoints_image2] = extractFeatures(image2_gray, points_image2);
[features_image3, validPoints_image3] = extractFeatures(image3_gray, points_image3);
%extracts features of all three images
indexPairs = matchFeatures(features_image1, features_image2, 'Prenormalized', true) ;
indexPairs1 = matchFeatures(features_image3, features_image2, 'Prenormalized', true) ;
% get matching points
matched_pts1 = validPoints_image1(indexPairs(:, 1));
matched_pts2 = validPoints_image2(indexPairs(:, 2));
matched_pts3 = validPoints_image3(indexPairs1(:, 1));
matched_pts4=validPoints_image2(indexPairs1(:, 2));
figure(4); showMatchedFeatures(image1_gray,image2_gray,matched_pts1,matched_pts2,'montage');
legend('matched points 1','matched points 2');
figure(5); showMatchedFeatures(image3_gray,image2_gray,matched_pts3,matched_pts4,'montage');
legend('matched points 3','matched points 2');
%matched features of both image pairs are displayed
% Compute the transformation matrix using RANSAC
[tform, inlierFramePoints, inlierPanoPoints, status] = estimateGeometricTransform(matched_pts1, matched_pts2, 'projective');
[tform1, inlierFramePoints, inlierPanoPoints, status] = estimateGeometricTransform(matched_pts3, matched_pts4, 'projective');
%figure(6); showMatchedFeatures(image1_gray,image2_gray,inlierPanoPoints,inlierFramePoints,'montage');
%tform = estimateGeometricTransform(matched_pts1,matched_pts2,'projective')
%calculate transformation matrix using projective transform
T=tform.T;
r=[];
A=[];
l = 1;
[N1, N2] = size(image2_gray);
registeredPts = zeros(N1*N2,2);
pixelVals = zeros(N1*N2,1);
for row = 1:N1
    for col = 1:N2
        pixNum = (row-1)*N2 + col;
        pixelVals(pixNum,1) = image2_gray(row,col);
        registeredPts(pixNum,:) = [col,row];
    end
end
[r]=transformPointsForward(tform,registeredPts(:,:));
[q]=transformPointsForward(tform1,registeredPts(:,:));
%coordinates of base image
image2_gray=double(image2_gray);
R=2;
r1=r(:,1);
r2=r(:,2);
for row = 1:N1
    for col = 1:N2
        pixNum = N1*N2 + (row-1)*N2 + col;
        pixelVals(pixNum,1) = image1_gray(row,col);
        registeredPts(pixNum,:) = [r1(col,:),r2(col,:)];
    end
end
q1=q(:,1);
q2=q(:,2);
for row = 1:N1
    for col = 1:N2
        pixNum = N1*N2 + N1*N2 + (row-1)*N2 + col;
        pixelVals(pixNum,1) = image3_gray(row,col);
        registeredPts(pixNum,:) = [q1(col,:),q2(col,:)];
    end
end
tri = delaunayTriangulation();
tri.Points=[registeredPts(:,1),registeredPts(:,2)];

I figured out the problem in the code. In this loop:
for row = 1:N1
    for col = 1:N2
        pixNum = N1*N2 + (row-1)*N2 + col;
        pixelVals(pixNum,1) = image1_gray(row,col);
        registeredPts(pixNum,:) = [r1(col,:),r2(col,:)];
    end
end
here the same registered points are reassigned over and over: the index into r1 and r2 uses only col, so every pass through the outer loop rewrites the same N2 transformed points instead of walking through all N1*N2 of them. I corrected it by assigning the points outside the for loop using
registeredPts = [registeredPts; r];
Now I am getting over 60,000 triangles, which works fine.
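In other words, the transformed coordinates should be appended as whole blocks rather than rebuilt pixel by pixel. A minimal sketch of the corrected assembly (same variable names as above; hedged, since the exact pixel-value ordering depends on how pixelVals was filled):
% base-image coordinates first, then both blocks of transformed points
registeredPts = [registeredPts(1:N1*N2, :); r; q];
tri = delaunayTriangulation(registeredPts(:,1), registeredPts(:,2));
size(tri.ConnectivityList, 1)   % now on the order of 2*(3*N1*N2) triangles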

Related

How to find the corners of a rotated object in MATLAB?

I want to find the corners of objects.
I tried the following code:
Vstats = regionprops(BW2,'Centroid','MajorAxisLength','MinorAxisLength','Orientation');
u = [Vstats.Centroid];
VcX = u(1:2:end);
VcY = u(2:2:end);
[VcY id] = sort(VcY); % sorting regions by vertical position
VcX = VcX(id);
Vstats = Vstats(id); % permute according sort
Bv = Bv(id);
Vori = [Vstats.Orientation];
VRmaj = [Vstats.MajorAxisLength]/2;
VRmin = [Vstats.MinorAxisLength]/2;
% find corners of vertebrae
figure,imshow(BW2)
hold on
% C = corner(VER);
% plot(C(:,1), C(:,2), 'or');
C = cell(size(Bv));
Anterior = zeros(2*length(C),2);
Posterior = zeros(2*length(C),2);
for i = 1:length(C) % for each region
    cx = VcX(i); % centroid coordinates
    cy = VcY(i);
    bx = Bv{i}(:,2); % edge point coordinates
    by = Bv{i}(:,1);
    ux = bx-cx; % move to the origin
    uy = by-cy;
    [t, r] = cart2pol(ux,uy); % convert to polar coordinates
    t = t - deg2rad(Vori(i)); % unrotate
    for k = 1:4 % find corners (look in each quadrant)
        fi = t( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
        ri = r( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
        [rp, ip] = max(ri); % find farthest point
        tc(k) = fi(ip); % save coordinates
        rc(k) = rp;
    end
    [xc,yc] = pol2cart(tc+1*deg2rad(Vori(i)) ,rc); % re-rotate, convert back to Cartesian
    C{i}(:,1) = xc + cx; % return to previous place
    C{i}(:,2) = yc + cy;
    plot(C{i}([1,4],1),C{i}([1,4],2),'or',C{i}([2,3],1),C{i}([2,3],2),'og')
    % save coordinates:
    Anterior([2*i-1,2*i],:) = [C{i}([1,4],1), C{i}([1,4],2)];
    Posterior([2*i-1,2*i],:) = [C{i}([2,3],1), C{i}([2,3],2)];
end
My input image is:
[input image]
I got the following output image:
[output image]
The bottommost object in the image is not detected properly. How can I correct the code? It also fails to work for a rotated image.
You can get all the points from the image and use kmeans clustering to partition them into 8 groups. Once the partitioning is done, you have the points in hand and you can pick whatever points you want.
rgbImage = imread('your image') ;
%% crop out the unwanted white background from the image
grayImage = min(rgbImage, [], 3);
binaryImage = grayImage < 200;
binaryImage = bwareafilt(binaryImage, 1);
[rows, columns] = find(binaryImage);
row1 = min(rows);
row2 = max(rows);
col1 = min(columns);
col2 = max(columns);
% Crop
croppedImage = rgbImage(row1:row2, col1:col2, :);
I = rgb2gray(croppedImage) ;
%% Get the white regions
[y,x,val] = find(I) ;
%% use kmeans clustering
[idx,C] = kmeans([x,y],8) ;
%%
figure
imshow(I) ;
hold on
for i = 1:8
    xi = x(idx==i) ; yi = y(idx==i) ;
    id1 = convhull(xi,yi) ;
    coor = [xi(id1) yi(id1)] ;
    [id,c] = kmeans(coor,4) ;
    plot(coor(:,1),coor(:,2),'r','linewidth',3) ;
    plot(c(:,1),c(:,2),'*b')
end
Now we are able to capture the regions; the boundary/convex hull points are in hand. You can do whatever math you want with the points.
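For example, to put the four corner estimates of one region (the c returned by the inner kmeans call) into a consistent order, one option is to sort them by angle around their mean; a small sketch of that idea, using the variables from the loop above:
% sort the 4 corner estimates of one region clockwise around their centroid
ctr = mean(c, 1);
ang = atan2(c(:,2) - ctr(2), c(:,1) - ctr(1));
[~, order] = sort(ang);
orderedCorners = c(order, :);   % consistent ordering for downstream use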
Did you solve the problem? I looked into it, and it seems that the rotation given by regionprops is off. To fix that I've prepared a quick solution: I dilated the image to close the gaps, found the 4 most distant peaks of each spine, and then checked whether each peak lies to the left or to the right of the centerline (which I obtained by extrapolating from the sorted centroids). This method seems to work for this particular problem.
BW2 = rgb2gray(Image);
BW2 = imbinarize(BW2);
%dilate and erode will help to remove extra features of the vertebra
se = strel('disk',4,4);
BW2_dilate = imdilate(BW2,se);
BW2_erode = imerode(BW2_dilate,se);
sb = bwboundaries(BW2_erode);
figure
imshow(BW2)
hold on
centerLine = [];
corners = [];
for bone = 1:length(sb)
    x0 = sb{bone}(:,2) - mean(sb{bone}(:,2));
    y0 = sb{bone}(:,1) - mean(sb{bone}(:,1));
    %save the position of the centroid
    centerLine = [centerLine; [mean(sb{bone}(:,1)) mean(sb{bone}(:,2))]];
    [th0,rho0] = cart2pol(x0,y0);
    %make sure that the indexing starts at the dip, not at the corner
    lowest_val = find(rho0==min(rho0));
    rho1 = [rho0(lowest_val:end); rho0(1:lowest_val-1)];
    th00 = [th0(lowest_val:end); th0(1:lowest_val-1)];
    y1 = [y0(lowest_val:end); y0(1:lowest_val-1)];
    x1 = [x0(lowest_val:end); x0(1:lowest_val-1)];
    %detect corners, using smoothed data to remove noise
    [pks,locs] = findpeaks(smooth(rho1));
    [pksS,idS] = sort(pks,'descend');
    %the 4 most pronounced peaks are where the corners are
    edgesFndCx = x1(locs(idS(1:4)));
    edgesFndCy = y1(locs(idS(1:4)));
    edgesFndCx = edgesFndCx + mean(sb{bone}(:,2));
    edgesFndCy = edgesFndCy + mean(sb{bone}(:,1));
    corners{bone} = [edgesFndCy edgesFndCx];
end
[~,idCL] = sort(centerLine(:,1),'descend');
centerLine = centerLine(idCL,:);
%extrapolate the spine centerline
yDatExt= 1:size(BW2_erode,1);
extrpLine = interp1(centerLine(:,1),centerLine(:,2),yDatExt,'spline','extrap');
plot(centerLine(:,2),centerLine(:,1),'r')
plot(extrpLine,yDatExt,'r')
%find edges to the left, and to the right of the centerline
for bone = 1:length(corners)
    x0 = corners{bone}(:,2);
    y0 = corners{bone}(:,1);
    for crn = 1:4
        xCompare = extrpLine(y0(crn));
        if x0(crn) < xCompare
            plot(x0(crn),y0(crn),'go','LineWidth',2)
        else
            plot(x0(crn),y0(crn),'ro','LineWidth',2)
        end
    end
end

How to find Orientation of axis of contour in matlab?

I want to find the Orientation, MajorAxisLength and MinorAxisLength of the contour that is plotted with the code below.
clear
[x1 , x2] = meshgrid(linspace(-10,10,100),linspace(-10,10,100));
mu = [1,3];
sigm = [2,0;0,2];
xx_size = length(mu);
tem_matrix = ones(size(x1));
x_mesh= cell(1,xx_size);
for i = 1 : xx_size
    x_mesh{i} = tem_matrix * mu(i);
end
x_mesh= {x1,x2};
temp_mesh = [];
for i = 1 : xx_size
    temp_mesh = [temp_mesh x_mesh{i}(:)];
end
Z = mvnpdf(temp_mesh,mu,sigm);
z_plat = reshape(Z,size(x1));
figure;contour(x1, x2, z_plat,3, 'LineWidth', 2,'color','m');
% regionprops(z_plat,'Centroid','Orientation','MajorAxisLength','MinorAxisLength');
In my opinion I may have to use the regionprops command, but I don't know how to apply it here. I want to find the direction of the axes of the contour and plot something like this:
How can I do this task? Thanks very much for your help.
Rather than trying to process the graphical output of contour, I would instead recommend using contourc to compute the ContourMatrix and then use the x/y points to estimate the major and minor axes lengths as well as the orientation (for this I used this file exchange submission)
That would look something like the following. Note that I have modified the inputs to contourc as the first two inputs should be the vector form and not the output of meshgrid.
% Compute the three contours for your data
contourmatrix = contourc(linspace(-10,10,100), linspace(-10,10,100), z_plat, 3);
% Create a "pointer" to keep track of where we are in the output
start = 1;
count = 1;
% Now loop through each contour
while start < size(contourmatrix, 2)
    value = contourmatrix(1, start);
    nPoints = contourmatrix(2, start);
    contour_points = contourmatrix(:, start + (1:nPoints));
    % Now fit an ellipse using the file exchange
    ellipsedata(count) = fit_ellipse(contour_points(1,:), contour_points(2,:));
    % Increment the start pointer
    start = start + nPoints + 1;
    count = count + 1;
end
orientations = [ellipsedata.phi];
% 0 0 0
major_length = [ellipsedata.long_axis];
% 4.7175 3.3380 2.1539
minor_length = [ellipsedata.short_axis];
% 4.7172 3.3378 2.1532
As you can see, the contours are actually basically circles, and therefore the orientation is zero and the major and minor axis lengths are almost equal. The reason they look like ellipses in your post is that your x and y axes are scaled differently. To fix this, you can call axis equal:
figure;contour(x1, x2, z_plat,3, 'LineWidth', 2,'color','m');
axis equal
Thank you @Suever. It helped me implement my idea.
I added some lines to the code:
clear
[X1 , X2] = meshgrid(linspace(-10,10,100),linspace(-10,10,100));
mu = [-1,0];
a = [3,2;1,4];
a = a * a';
sigm = a;
xx_size = length(mu);
tem_matrix = ones(size(X1));
x_mesh= cell(1,xx_size);
for i = 1 : xx_size
    x_mesh{i} = tem_matrix * mu(i);
end
x_mesh= {X1,X2};
temp_mesh = [];
for i = 1 : xx_size
    temp_mesh = [temp_mesh x_mesh{i}(:)];
end
Z = mvnpdf(temp_mesh,mu,sigm);
z_plat = reshape(Z,size(X1));
figure;contour(X1, X2, z_plat,3, 'LineWidth', 2,'color','m');
hold on;
% Compute the three contours for your data
contourmatrix = contourc(linspace(-10,10,100), linspace(-10,10,100), z_plat, 3);
% Create a "pointer" to keep track of where we are in the output
start = 1;
count = 1;
% Now loop through each contour
while start < size(contourmatrix, 2)
    value = contourmatrix(1, start);
    nPoints = contourmatrix(2, start);
    contour_points = contourmatrix(:, start + (1:nPoints));
    % Now fit an ellipse using the file exchange
    ellipsedata(count) = fit_ellipse(contour_points(1,:), contour_points(2,:));
    % Increment the start pointer
    start = start + nPoints + 1;
    count = count + 1;
end
orientations = [ellipsedata.phi];
major_length = [ellipsedata.long_axis];
minor_length = [ellipsedata.short_axis];
tet = orientations(1);
x1 = mu(1);
y1 = mu(2);
a = sin(tet) * sqrt(major_length(1));
b = cos(tet) * sqrt(major_length(1));
x2 = x1 + a;
y2 = y1 + b;
line([x1, x2], [y1, y2],'linewidth',2);
tet = ( pi/2 + orientations(1) );
a = sin(tet) * sqrt(minor_length(1));
b = cos(tet) * sqrt(minor_length(1));
x2 = x1 + a;
y2 = y1 + b;
line([x1, x2], [y1, y2],'linewidth',2);

How to project Velodyne point clouds onto images? (KITTI Dataset)

Here is my code to project Velodyne points into the images:
cam = 2;
frame = 20;
% compute projection matrix velodyne->image plane
R_cam_to_rect = eye(4);
[P, Tr_velo_to_cam, R] = readCalibration('D:/Shared/training/calib/',frame,cam)
R_cam_to_rect(1:3,1:3) = R;
P_velo_to_img = P*R_cam_to_rect*Tr_velo_to_cam;
% load and display image
img = imread(sprintf('D:/Shared/training/image_2/%06d.png',frame));
fig = figure('Position',[20 100 size(img,2) size(img,1)]); axes('Position',[0 0 1 1]);
imshow(img); hold on;
% load velodyne points
fid = fopen(sprintf('D:/Shared/training/velodyne/%06d.bin',frame),'rb');
velo = fread(fid,[4 inf],'single')';
% remove every 5th point for display speed
velo = velo(1:5:end,:);
fclose(fid);
% remove all points behind image plane (approximation)
idx = velo(:,1)<5;
velo(idx,:) = [];
% project to image plane (exclude luminance)
velo_img = project(velo(:,1:3),P_velo_to_img);
% plot points
cols = jet;
for i = 1:size(velo_img,1)
    col_idx = round(64*5/velo(i,1));
    plot(velo_img(i,1),velo_img(i,2),'o','LineWidth',4,'MarkerSize',1,'Color',cols(col_idx,:));
end
where the readCalibration function is defined as:
function [P, Tr_velo_to_cam, R_cam_to_rect] = readCalibration(calib_dir,img_idx,cam)
    % load 3x4 projection matrix
    P = dlmread(sprintf('%s/%06d.txt',calib_dir,img_idx),' ',0,1);
    Tr_velo_to_cam = P(6,:);
    R_cam_to_rect = P(5,1:9);
    P = P(cam+1,:);
    P = reshape(P ,[4,3])';
    Tr_velo_to_cam = reshape(Tr_velo_to_cam ,[3,4])';
    R_cam_to_rect = reshape(R_cam_to_rect ,[3,3])';
end
But here is the result:
What is wrong with my code? I changed the cam variable from 0 to 3 and none of them worked. You can find a sample calibration file at this link:
How to understand KITTI camera calibration files
I fixed it myself. Here is the modification in the readCalibration function:
Tr_velo_to_cam = P(6,:);
Tr_velo_to_cam = reshape(Tr_velo_to_cam ,[4,3])';
Tr_velo_to_cam = [Tr_velo_to_cam;0 0 0 1];
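For reference, the complete corrected function would then look like this (a sketch that assumes the KITTI object-detection calib layout, with R0_rect on line 5 and Tr_velo_to_cam on line 6 of the file; the key change is reshaping the 12-value Tr_velo_to_cam row as [4,3]' and padding it to a homogeneous 4x4 matrix):
function [P, Tr_velo_to_cam, R_cam_to_rect] = readCalibration(calib_dir,img_idx,cam)
    % read the calibration file, skipping the leading text column
    C = dlmread(sprintf('%s/%06d.txt',calib_dir,img_idx),' ',0,1);
    % 3x4 projection matrix of the requested camera
    P = reshape(C(cam+1,:),[4,3])';
    % 3x3 rectifying rotation
    R_cam_to_rect = reshape(C(5,1:9),[3,3])';
    % 3x4 velodyne-to-camera transform, padded to homogeneous 4x4
    Tr_velo_to_cam = reshape(C(6,:),[4,3])';
    Tr_velo_to_cam = [Tr_velo_to_cam; 0 0 0 1];
end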

Extraction of Plankton using Segmentation with Matlab

I am trying to extract plankton from a scanned image.
I segmented the plankton using the technique I found here: http://www.mathworks.com/help/images/examples/detecting-a-cell-using-image-segmentation.html
The outline is not bad; however, I am now not sure how to extract the objects so each individual plankton can be saved separately. I tried to use labels, but there is a lot of noise and it labels every single speck. I am wondering if there is a better way to do this.
Here is my code:
I = imread('plankton_2.jpg');
figure, imshow(I), title('original image');
[~, threshold] = edge(I, 'sobel');
fudgeFactor = .5;
BWs = edge(I,'sobel', threshold * fudgeFactor);
figure, imshow(BWs), title('binary gradient mask');
se90 = strel('line', 3, 90);
se0 = strel('line', 3, 0);
BWsdil = imdilate(BWs, [se90 se0]);
figure, imshow(BWsdil), title('dilated gradient mask');
BWdfill = imfill(BWsdil, 'holes');
figure, imshow(BWdfill);
title('binary image with filled holes');
BWnobord = imclearborder(BWdfill,1);
figure, imshow(BWnobord), title('cleared border image');
seD = strel('diamond',1);
BWfinal = imerode(BWnobord,seD);
BWfinal = imerode(BWfinal,seD);
figure, imshow(BWfinal), title('segmented image');
BWoutline = bwperim(BWfinal);
Segout = I;
Segout(BWoutline) = 0;
figure, imshow(Segout), title('outlined original image');
label = bwlabel(BWfinal);
max(max(label))
for j = 1:max(max(label))
    [row, col] = find(label == j);
    len = max(row) - min(row) + 2;
    breadth = max(col) - min(col) + 2;
    target = uint8(zeros([len breadth]));
    sy = min(col) - 1;
    sx = min(row) - 1;
    for i = 1:size(row,1)
        x = row(i,1) - sx;
        y = col(i,1) - sy;
        target(x,y) = I(row(i,1),col(i,1));
    end
    mytitle = strcat('Object Number:',num2str(j));
    figure, imshow(target); title(mytitle);
end
You should use the regionprops function to filter the detected objects by size and/or shape characteristics.
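For instance, a minimal sketch of that idea (the 200-pixel area threshold is an assumed value to tune for your scans):
% keep only connected components large enough to be plankton
stats = regionprops(label, 'Area', 'BoundingBox');
keep = find([stats.Area] >= 200);   % assumed noise threshold
for k = keep
    % crop each surviving object from the original image and save it
    target = imcrop(I, stats(k).BoundingBox);
    imwrite(target, sprintf('plankton_object_%02d.png', k));
end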

How to update the (x, y) coordinates for each frame?

I want the (x, y) coordinates to update for each frame, but they don't: the inserted text only ever shows the coordinates of the first frame.
The code is:
EyeDetect = vision.CascadeObjectDetector('EyePairBig');
vidD = imaq.VideoDevice('winvideo',1,'MJPG_640x480');
EFrame = step(vidD);
bboxe = step(EyeDetect, EFrame);
x = bboxe(1, 1); y = bboxe(1, 2); w = bboxe(1, 3); h = bboxe(1, 4);
bboxPolygon = [x, y, x+w, y, x+w, y+h, x, y+h];
textColor = [255, 0, 0];
textLocation = [1 1];
text = ['x: ',num2str(bboxe(1)),' y: ',num2str(bboxe(2))];
textInserter = vision.TextInserter(text,'Color', textColor, 'FontSize', 12, 'Location', textLocation);
vido = step(textInserter, EFrame);
EFrame = insertShape(vido, 'Polygon', bboxPolygon);
figure; imshow(EFrame); title('Eyes Detection');
% Detect feature points in the eye region.
points = detectMinEigenFeatures(rgb2gray(EFrame), 'ROI', bboxe);
% Display the detected points.
figure, imshow(EFrame), hold on, title('Detected features');
plot(points);
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
points = points.Location;
initialize(pointTracker, points, EFrame);
videoPlayer = vision.VideoPlayer('Position', [100 100 [size(EFrame, 2), size(EFrame, 1)]+30]);
oldPoints = points;
nFrames=0;
while (nFrames < 100)
    % get the next frame
    EFrame = step(vidD);
    % Track the points. Note that some points may be lost.
    [points, isFound] = step(pointTracker, EFrame);
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);
    if size(visiblePoints, 1) >= 2 % need at least 2 points
        [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
            oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);
        [bboxPolygon(1:2:end), bboxPolygon(2:2:end)] ...
            = transformPointsForward(xform, bboxPolygon(1:2:end), bboxPolygon(2:2:end));
        EFrame = insertShape(EFrame, 'Polygon', bboxPolygon);
        % Display tracked points
        %EFrame = insertMarker(EFrame, visiblePoints, '+','Color', 'white');
        % Reset the points
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);
        text = ['x: ',num2str(bboxe(1)),' y: ',num2str(bboxe(2))];
        textInserter = vision.TextInserter(text,'Color', textColor, 'FontSize', 12, 'Location', textLocation);
    end
    % Display the annotated video frame using the video player object
    EFrame = step(textInserter,EFrame);
    step(videoPlayer, EFrame);
    nFrames = nFrames + 1;
end
release(vidD); release(videoPlayer); release(pointTracker);
Replace the line
text = ['x: ',num2str(bboxe(1)),' y: ',num2str(bboxe(2))];
with this:
text = ['x: ',num2str(bboxPolygon(1)),' y: ',num2str(bboxPolygon(2))];
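This works because bboxPolygon is updated by transformPointsForward on every pass through the loop, whereas bboxe only ever holds the detection from the first frame. As a side note (an alternative, not part of the original fix), the same annotation can be done with insertText, which avoids recreating a vision.TextInserter on every frame; a sketch:
% inside the while loop, after bboxPolygon has been updated
labelStr = ['x: ', num2str(bboxPolygon(1)), ' y: ', num2str(bboxPolygon(2))];
EFrame = insertText(EFrame, [1 1], labelStr, 'TextColor', 'red', 'FontSize', 12);
step(videoPlayer, EFrame);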