Can anyone suggest an alternative means of detecting the centre of each of the targets in the following image using MATLAB:
My current approach uses regionprops and centroid detection.
clc, clear all, close all
format long
beep off
rng('default')
I=imread('WP_20160811_13_38_26_Pro.jpg');
BW=im2bw(I);
BW=imcomplement(BW);
s = regionprops(BW, 'area','Centroid');
centroids = cat(1, s.Centroid);
imshow(BW)
hold on
plot(centroids(:,1), centroids(:,2), 'b*')
hold off
Is there a more precise way of detecting the centre, as this approach seems sensitive to noise, perspective distortion, etc.? Is there a way to find the intersection of the two quarter circles in each target?
Another type of target I am considering is:
Can anyone suggest a way for detecting the centre of a crosshair? Thanks
My modification works with 100% efficiency on your image:
I = imadjust(imcomplement(rgb2gray(imread('WP_20160811_13_38_26_Pro.jpg'))));
filtered_BW = bwareaopen(im2bw(I), 500, 4);
% 500 is the area of ignored objects
final_BW = imdilate(filtered_BW, strel('disk', 5));
s = regionprops(final_BW, 'area','Centroid');
centroids = cat(1, s([s.Area] < 10000).Centroid);
% the condition leaves out the big areas on both sides
figure; imshow(final_BW)
hold on
plot(centroids(:,1), centroids(:,2), 'b*')
hold off
The functions I am adding:
rgb2gray to reduce the image to a single channel of intensity values,
imadjust to automatically optimize the brightness and contrast,
bwareaopen to get rid of small islands,
imdilate and strel to grow the regions and connect disconnected regions.
Hi everyone. I am trying to get the boundary dimensions of a bubble inside water using MATLAB. The code and result are shown below.
clear;
clc;
i1=imread('1.jpg');
i2=imread('14.jpg');
% i1=rgb2gray(i1);
% i2=rgb2gray(i2);
[m,n]=size(i1);
im1=double(i1);
im2=double(i2);
i3=zeros(size(i1));
threshold=29;
for i=1:m
    for j=1:n
        if abs(im2(i,j)-im1(i,j)) > threshold
            i3(i,j)=1;
        else
            i3(i,j)=0;
        end
    end
end
se = strel('square', 5);
filteredForeground = imopen(i3, se);
figure; imshow(filteredForeground); title('Clean Foreground');
BW1 = edge(filteredForeground,'sobel');
subplot(2,2,1);imshow(i1);title('BackGround');
subplot(2,2,2);imshow(i2);title('Current Frame');
subplot(2,2,3);imshow(filteredForeground);title('Clean Foreground');
subplot(2,2,4);imshow(BW1);title('Edge');
As the figure shows, the result is not very satisfactory. So can anyone give me some advice to improve my result? And how can I output the boundary coordinates to a file and get the real dimensions of the bubble? Thank you very much!
First note that your background removal is almost useless.
If we plot diffI=i2-i1; imshow(diffI,[]); colorbar, we can see that the difference is almost as big as the image itself. You need to understand that what is visually similar to you is not necessarily similar numerically, and this is a great example of that.
Therefore you don't have what you think you have: the background is still there after your thresholding. Then, note that the object you want to segment is not simply whiter; it is definitely as dark as the background in some areas. This means that simple segmentation by value thresholding will not work. You need better segmentation techniques.
I happen to have a copy of this level set algorithm in my MATLAB path: the "Distance Regularized Level Set Evolution" (DRLSE).
When I run the code demo_1 with your image, I get the following (nice gif!):
Full code of the demo:
% This Matlab code demonstrates an edge-based active contour model as an application of
% the Distance Regularized Level Set Evolution (DRLSE) formulation in the following paper:
%
% C. Li, C. Xu, C. Gui, M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation",
% IEEE Trans. Image Processing, vol. 19 (12), pp. 3243-3254, 2010.
%
% Author: Chunming Li, all rights reserved
% E-mail: lchunming@gmail.com
% li_chunming@hotmail.com
% URL: http://www.imagecomputing.org/~cmli//
clear all;
close all;
Img=imread('https://i.stack.imgur.com/Wt9be.jpg');
Img=double(Img(:,:,1));
%% parameter setting
timestep=1; % time step
mu=0.2/timestep; % coefficient of the distance regularization term R(phi)
iter_inner=5;
iter_outer=300;
lambda=5; % coefficient of the weighted length term L(phi)
alfa=-3; % coefficient of the weighted area term A(phi)
epsilon=1.5; % parameter that specifies the width of the Dirac delta function
sigma=.8; % scale parameter in Gaussian kernel
G=fspecial('gaussian',15,sigma); % Gaussian kernel
Img_smooth=conv2(Img,G,'same'); % smooth image by Gaussian convolution
[Ix,Iy]=gradient(Img_smooth);
f=Ix.^2+Iy.^2;
g=1./(1+f); % edge indicator function.
% initialize LSF as binary step function
c0=2;
initialLSF = c0*ones(size(Img));
% generate the initial region R0 as two rectangles
initialLSF(size(Img,1)/2-5:size(Img,1)/2+5,size(Img,2)/2-5:size(Img,2)/2+5)=-c0;
% initialLSF(25:35,40:50)=-c0;
phi=initialLSF;
potential=2;
if potential ==1
potentialFunction = 'single-well'; % use single well potential p1(s)=0.5*(s-1)^2, which is good for region-based model
elseif potential == 2
potentialFunction = 'double-well'; % use double-well potential in Eq. (16), which is good for both edge and region based models
else
potentialFunction = 'double-well'; % default choice of potential function
end
% start level set evolution
for n=1:iter_outer
phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);
if mod(n,2)==0
figure(2);
imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on; contour(phi, [0,0], 'r');
drawnow
end
end
% refine the zero level contour by further level set evolution with alfa=0
alfa=0;
iter_refine = 10;
phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);
finalLSF=phi;
figure(2);
imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on; contour(phi, [0,0], 'r');
hold on; contour(phi, [0,0], 'r');
str=['Final zero level contour, ', num2str(iter_outer*iter_inner+iter_refine), ' iterations'];
title(str);
Ander pointed out in his answer that the background image doesn't match the background of the bubble image. My very best advice to you is not to try to fix this in code, but to fix your experimental setup. If you fix this in software, you'll get a complicated program with lots of "magic numbers" that nobody will be able to maintain after you graduate and leave. Anybody wanting to continue your work will have a hard time adjusting the program to match some new experimental conditions. Fixing the setup will lead to an experiment that is much easier to reproduce and to build on.
So what is wrong with the background picture? First of all, make sure the illumination hasn't changed since you took it. Let's assume you took the pictures in succession, and the change in background illumination is due to shadows of the bubble on the background.
In your previous question about this topic you got some so-so advice about your experimental setup. This picture is from that question:
This looks really great, you have a transparent tank, and a big white surface behind it. I recommend that you take out the reticulated sheet from behind it, and put all your lights on the white background. The goal is to get back-illuminated bubbles. The bubbles will cast a shadow, but it will be towards the camera, not the background -- they will darken the image, making detection really simple. But you need to make sure there is no direct light falling on the bubbles, since the reflection of that light towards the camera will cause highlights (as you see in your picture) that could be brighter than the background, or at least will reduce contrast.
If you keep some distance between the tank and the white background, then when focusing the camera on the bubbles that background will be out of focus and blurred, meaning that it will be fairly uniform. The less detail in the background, the easier the detection of bubbles is.
If you need the markings from the reticulated sheet, then I recommend you use a transparent sheet for that, on which you can draw lines with a permanent marker.
Sorry, this was not at all a programming answer... :)
So here is what this could look like. An example image with bubbles that we've used in Delft for many decades as an exercise:
I actually don't know what it is from, but they seem to be small bubbles in liquid. Some are out of focus; you won't have this problem. Segmentation is quite simple (this uses MATLAB with DIPimage):
img = readim('bubbles.tif');
background = closing(img,25); % estimate of background
out = threshold(background - img);
out = fillholes(out);
traces = traceobjects(out);
If you have a background image (which of course you'll have), then you don't need to estimate it. What the code then does is simply threshold the difference between the background and the image (since the bubbles are darker, I subtract the image from the background instead of the other way around), and a very simple post-processing to fill up the holes in the objects. Depending on what your images look like, you might need a bit more preprocessing or postprocessing... Think about noise removal in the input image!
The last line traces the object boundaries, yielding a polygon for each bubble (this last command is only in DIPimage 3.0, which isn't officially released yet, but you can compile it yourself if you're adventurous). Alternatively, use the bwboundaries function from the Image Processing Toolbox:
traces = bwboundaries(dip_array(out));
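For reference, a rough equivalent using only the Image Processing Toolbox could look like the sketch below. The file names, smoothing sigma, and minimum object area are assumptions, not values from your setup, and the frames are assumed to already be grayscale (use rgb2gray otherwise).
% Rough IPT-only sketch: threshold the background-minus-image difference,
% fill holes, drop specks, and trace the boundaries (assumed file names).
bg  = im2double(imread('background.jpg'));   % background frame (assumption)
img = im2double(imread('bubbles.jpg'));      % frame with bubbles (assumption)
d   = max(bg - img, 0);                      % bubbles are darker, keep the positive difference
d   = imgaussfilt(d, 2);                     % light smoothing against noise
out = imbinarize(d);                         % Otsu threshold on the difference
out = imfill(out, 'holes');                  % fill holes inside the bubbles
out = bwareaopen(out, 50);                   % remove small specks (50 px is a guess)
traces = bwboundaries(out);                  % one boundary polygon per bubble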
I have this BW image:
And using the function regionprops, it shows that some objects are connected:
So I used morphological operations like imerode to separate the objects and get their centroids:
Now I have the centroid of each object separated, but in doing so I lost a lot of information when eroding the regions, as you can see in picture 3 compared with picture 1.
So I was wondering if there is any way to "dilate" picture 3 until it gets closer to picture 1, but without connecting the objects again.
You might want to take a look at bwmorph(). With the 'thicken' operation and n set to Inf it will thicken the labels until they would overlap. It's a neat tool for segmentation. We can use it to create segmentation borders for the original image.
bw is the original image.
labels is the image of the eroded labels.
lines = bwmorph(labels, 'thicken', inf);
segmented_bw = bw & lines;
You could also skip a few phases and achieve similar results with a marker-based watershed. It might even work better, as the morphological seesaw has destroyed some information, as seen in the poorly segmented cluster on the lower right.
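This is not the code above, but a minimal marker-controlled watershed sketch, assuming bw is the original binary image and labels holds the eroded markers from the erosion step:
% Minimal marker-controlled watershed sketch (bw = original mask,
% labels = eroded markers from the erosion step).
D = -bwdist(~bw);                 % negative distance transform inside the objects
D = imimposemin(D, labels > 0);   % force regional minima at the marker locations
L = watershed(D);
L(~bw) = 0;                       % keep labels only inside the original mask
separated = L > 0;                % objects separated along the watershed lines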
You can assign each white pixel in the mask to the closest centroid and work with the resulting label map:
[y x]= find(bw); % get coordinates of mask pixels
D = pdist2([x(:), y(:)], [cx(:), cy(:)]); % assuming cx, cy are centers' coordinates
[~, lb] = min(D, [], 2); % find index of closest center
lb_map = 0*bw;
lb_map(bw) = lb; % should give you the map.
See pdist2 for more information.
The task is to connect, horizontally in rows, the centroids that I have obtained using regionprops, and then predict missing objects.
Here is the image that I have:
This is what I want to achieve :
All centroids within a certain y-coordinate range should be connected. After that I want to predict the missing objects. For example, there should be more objects/centroids present on the green line in the image above.
My code so far :
BW = rgb2gray(imread('noise_removal_single_25_cropped.png'));
props = regionprops(im2bw(BW), 'Centroid');
centroids = cat(1, props.Centroid);
[B,L] = bwboundaries(BW,'noholes');
imshow(label2rgb(L, @jet, [.5 .5 .5]))
hold on
for k = 1:length(B)
boundary = B{k};
plot(boundary(:,2), boundary(:,1), 'w', 'LineWidth', 2)
end
plot(centroids(:,1),centroids(:,2), 'b*')
plot(centroids(:,1),centroids(:,2), 'k-')
The code connects all centroids vertically and I have no idea how to detect missing objects/centroids (maybe based on length of line)?
Let us assume that the rows are perfectly horizontal. It seems that you can easily cluster the points by ordinate, either knowing row separators in advance, or by analysis of the point density.
Take the median ordinate of every cluster and discard the outliers (those farther than a defined tolerance from the median).
Sort the inliers by abscissa. The gap lengths (or point count in a sliding window) will tell you about missing points.
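A minimal sketch of the horizontal-row case is below. It assumes centroids is the N-by-2 [x y] array from regionprops; the row tolerance and the gap handling are made-up choices, not values taken from your image.
% Sketch: cluster centroids into rows by ordinate, then look for gaps in abscissa.
rowTol = 10;                                    % max y-gap within one row (a guess)
[ys, order] = sort(centroids(:,2));
rowIdSorted = cumsum([1; diff(ys) > rowTol]);   % start a new row when the y-gap is large
rowId = zeros(size(rowIdSorted));
rowId(order) = rowIdSorted;                     % row index for each original centroid
for r = 1:max(rowId)
    pts = sortrows(centroids(rowId == r, :), 1);      % sort this row by abscissa
    gaps = diff(pts(:,1));
    spacing = median(gaps);                           % typical horizontal spacing
    nMissing = sum(max(round(gaps/spacing) - 1, 0));  % unusually large gaps imply missing points
    fprintf('Row %d: %d points, about %d missing\n', r, size(pts,1), nMissing);
end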
If the rows aren't perfectly horizontal, it remains likely that you can cluster by ordinates and obtain good horizontal separators. In every cluster, use a robust line fitting algorithm that will perform outlier detection, and sort horizontally as before. You can also deskew (using the line equation), but given the small slope, this will make little difference.
Final remark: if all lines are parallel, you can perform skew detection collectively by finding the centers of gravity (or medoids) of the clusters and translating the clusters to a common center, giving a single thick line.
I am trying to use stereo imaging for 3D reconstruction, however when I use the tutorials and tools in matlab for stereo vision, I get erroneous results. I use a Loreo 3D macro lens to take images of small instruments at a distance of around 23mm. Then after cropping the images to create left and right images I use the stereo calibration app (I have also used code from the matlab tutorial which does pretty much the same thing). These are the kinds of results I get.
Stereo calibration using matlab's app
I am aware that the reprojection errors are quite high, but have tried a lot of things like changes in image quantity, illumination, checkerboard size and the skew, tangential distortion, and coefficients in the app to reduce this value without any luck. At first glance, the extrinsic reconstruction in the bottom right looks accurate, as the dimensions are quite correct. Therefore, when I use the exported stereoParameters with a new image and the following code:
Isv = imread('IMG_0036.JPG');
I1 = imcrop(Isv, [0 0 2592 3456]);
I2 = imcrop(Isv, [2593 0 2592 3456]);
% Rectify the images.
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams, 'OutputView', 'valid');
% Display the images before rectification.
figure;
imshow(stereoAnaglyph(I1, I2), 'InitialMagnification', 30);
title('Before Rectification');
% Display the images after rectification.
figure;
imshow(stereoAnaglyph(J1, J2), 'InitialMagnification', 30);
title('After Rectification');
disparityRange = [0, 64];
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2), 'DisparityRange', ...
disparityRange);
figure;
imshow(disparityMap, disparityRange, 'InitialMagnification', 30);
colormap('jet');
colorbar;
title('Disparity Map');
point3D = reconstructScene(disparityMap, stereoParams);
% Convert from millimeters to meters.
point3D = point3D / 1000;
% Plot points between 3 and 7 meters away from the camera.
z = point3D(:, :, 3);
maxZ = 7;
minZ = 3;
zdisp = z;
zdisp(z < minZ | z > maxZ) = NaN;
point3Ddisp = point3D;
point3Ddisp(:,:,3) = zdisp;
figure
pcshow(point3Ddisp, J1, 'VerticalAxis', 'Y', 'VerticalAxisDir', 'Down' );
xlabel('X');
ylabel('Y');
zlabel('Z');
I get these erroneous rectification, disparity and 3D reconstruction.
Rectification, disparity and erroneous 3D reconstruction
As can be seen, the rectification looks bad, as the objects are too far apart in my opinion; the disparity results also look very random, and the 3D reconstruction simply has no discernible outcome.
Please, I ask for any possible help, comments or recommendations regarding this issue.
Your reprojection errors are indeed high... Leaving that aside for the moment, your most immediate problem is that the disparityRange is too small.
The rectified images look fine. Corresponding points appear to be on the same pixel rows in both images, which is what you want. Display the anaglyph of the rectified images using imtool, and use the ruler widget to measure the distances between some of the corresponding points. That should give you an idea of what your disparity range should be. [0 64] is definitely too small.
To improve the reprojection errors, normally, I would say get more images. But you already have 30 pairs, which is a good number. You can specify the initial intrinsics and distortion, if you can get them off the camera manufacturer's spec, but I doubt they would help here. Try turning on tangential distortion estimation, and try using 3 radial distortion coefficients instead of two.
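If you calibrate in code rather than in the app, those two options map to parameters of estimateCameraParameters. A sketch, assuming imagePoints and worldPoints come from detectCheckerboardPoints and generateCheckerboardPoints as in the calibration tutorial:
% Sketch only: programmatic calibration with the two options suggested above
% (imagePoints/worldPoints assumed to come from the checkerboard detection step).
stereoParams = estimateCameraParameters(imagePoints, worldPoints, ...
    'EstimateTangentialDistortion', true, ...
    'NumRadialDistortionCoefficients', 3);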
Also, see if you can move your cameras farther away from the scene. It may be that at such a short distance the pinhole camera model starts to break down.
There are also things you can do to improve your disparity and 3D reconstruction (a rough sketch follows this list):
Try varying the BlockSize parameter of the disparity function
Try applying histogram equalization and/or low-pass filtering to your rectified images before computing disparity
Try median filtering the resulting disparity map to reduce the noise
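A rough sketch of those three suggestions combined, where J1 and J2 are the rectified images from your code; the disparity range, block size, and filter size are only guesses for your scene:
% Rough sketch of the three suggestions above (the numbers are guesses).
g1 = histeq(rgb2gray(J1));                      % histogram equalization
g2 = histeq(rgb2gray(J2));
dmap = disparity(g1, g2, 'DisparityRange', [0 128], 'BlockSize', 15);
dmap = medfilt2(dmap, [5 5]);                   % median filter to suppress noise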
Another tip: since you know approximately where the objects of interest are relative to the cameras, you can simply exclude the 3D points whose z-coordinate is too big or too small (or negative). This should give you a much cleaner 3D plot. You already have this in your code, but you should modify it to have appropriate units and thresholds for Z.
Hello, I have an image as shown above. Is it possible for me to detect the center point of the cross and output the result using Matlab? Thanks.
Here you go. I'm assuming that you have the Image Processing Toolbox because if you don't then you probably shouldn't be trying to do this sort of thing. However, all of these functions can be implemented with convolutions I believe. I did this process on the image you presented above and obtained the point (139, 286), where 139 is the row and 286 is the column.
1.Convert the image to a binary image:
bw = im2bw(img, 0.25);
where img is the original image. Depending on the image you might have to adjust the second parameter (which ranges from 0 to 1) so that you only get the cross. Don't worry about the cross not being fully connected because we'll remedy that in the next step.
2.Dilate the image to join the parts. I had to do this twice because I had to set the threshold so low on the binary image conversion (some parts of your image were pretty dark). Dilation essentially just adds pixels around existing white pixels (I'll also be inverting the binary image as I send it into bwmorph because the operations are made to act on white pixels which are the ones that have a value of 1).
bw2 = bwmorph(~bw, 'dilate', 2);
The last parameter says how many times to do the dilation operation.
3.Shrink the image to a point.
bw3 = bwmorph(bw2, 'shrink',Inf);
Again, the last parameter says how many times to perform the operation. In this case I put in Inf which shrinks until there is only one pixel that is white (in other words a 1).
4.Find the pixel that is still a 1.
[i,j] = find(bw3);
Here, i is the row and j is the column of the pixel in bw3 such that bw3(i,j) is equal to 1. All the other pixels should be 0 in bw3.
There might be other ways to do this with bwmorph, but I think that this way works pretty well. You might have to adjust it depending on the picture too. I can include images of each step if desired.
I just encountered the same kind of problem, and I found other solutions that I would like to share:
Assume image file name is pict1.jpg.
1.Read input image, crop relevant part and convert to grayscale:
origI = imread('pict1.jpg'); %Read input image
I = origI(32:304, 83:532, :); %Crop relevant part
I = im2double(rgb2gray(I)); %Convert to grayscale and to double (set pixel range [0, 1]).
2.Convert image to binary image in robust approach:
%Subtract from each pixel the median of its 21x21 neighbors
%Emphasize pixels that are deviated from surrounding neighbors
medD = abs(I - medfilt2(I, [21, 21], 'symmetric'));
%Set threshold to 5 sigma of medD
thresh = std2(medD(:))*5;
%Convert image to binary image using above threshold
BW = im2bw(medD, thresh);
BW Image:
3.Now I suggest two approaches for finding the center:
Find the centroid (the center of mass of the white cluster)
Find two lines using Hough transform, and find the intersection point
Both solutions return a sub-pixel result.
3.1.Find cross center using regionprops (find centroid):
%Find centroid of the cross (centroid of the cluster)
s = regionprops(BW, 'centroid');
centroids = cat(1, s.Centroid);
figure;imshow(BW);
hold on, plot(centroids(:,1), centroids(:,2), 'b*', 'MarkerSize', 15), hold off
%Display cross center in original image
figure;imshow(origI), hold on, plot(82+centroids(:,1), 31+centroids(:,2), 'b*', 'MarkerSize', 15), hold off
Centroid result (BW image):
Centroid result (original image):
3.2 Find cross center by intersection of two lines (using Hough transform):
%Create the Hough transform using the binary image.
[H,T,R] = hough(BW);
%Find peaks in the Hough transform of the image.
P = houghpeaks(H,2,'threshold',ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
%Find lines and plot them.
lines = houghlines(BW,T,R,P,'FillGap',5,'MinLength',7);
figure, imshow(BW), hold on
L = cell(1, length(lines));
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
% Plot beginnings and ends of lines
plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
%http://robotics.stanford.edu/~birch/projective/node4.html
%Find lines in homogeneous coordinates (using cross product):
L{k} = cross([xy(1,1); xy(1,2); 1], [xy(2,1); xy(2,2); 1]);
end
%https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection
%Lines intersection in homogeneous coordinates (using cross product):
p = cross(L{1}, L{2});
%Convert from homogeneous coordinate to euclidean coordinate (divide by last element).
p = p./p(end);
plot(p(1), p(2), 'x', 'LineWidth', 1, 'Color', 'white', 'MarkerSize', 15)
Hough transform result:
I think that there is a far simpler way of solving this. The lines which form the cross-hair are of equal length, so the cross-hair will be symmetric in all orientations. If we do a simple line scan, horizontally as well as vertically, to find the extremities of the lines forming the cross-hair, the midpoint of these extremities will give the x and y coordinates of the center. Simple geometry.
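A minimal sketch of that scan, assuming BW is a binary image in which only the cross-hair pixels are 1:
% Minimal sketch of the scan idea (BW assumed binary, cross-hair pixels = 1).
[rows, cols] = find(BW);                   % all cross-hair pixel coordinates
centerRow = (min(rows) + max(rows)) / 2;   % midpoint of the vertical extent
centerCol = (min(cols) + max(cols)) / 2;   % midpoint of the horizontal extent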
I just love these discussions of how to find something without first defining what that something is! But, if I had to guess, I'd suggest the center of mass of the original grayscale image.
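One reading of that suggestion, as a sketch: the cross is assumed darker than the background, so the intensities are inverted before being used as weights (img is an assumed grayscale input).
% Sketch of a grey-level centre of mass (one interpretation: invert the
% intensities so the dark cross carries the weight).
g = im2double(img);                         % img: grayscale image (assumed)
w = max(g(:)) - g;                          % dark cross -> large weight
[cols, rows] = meshgrid(1:size(g,2), 1:size(g,1));
cx = sum(cols(:) .* w(:)) / sum(w(:));      % x (column) of the weighted centroid
cy = sum(rows(:) .* w(:)) / sum(w(:));      % y (row) of the weighted centroid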
What about this (a related code sketch follows the list):
a) convert to binary just to make the algorithm faster.
b) Perform a find on the resulting array
c) choose the element which has either the lowest or highest row/column index (you would then have four points to choose from)
d) now keep searching neighbours
have a global criterion for the search: if the search does not last more than a few iterations, the point selected is false, so choose another extreme point
e) going along the neighbouring points, you will end up at a point where you have three possible neighbours. That is your intersection
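This is not the neighbour walk described above, but a sketch of a related shortcut to the same end: skeletonize the cross and ask for the branch point (BW is assumed to be a binary image with the cross as foreground).
% Not the neighbour walk above, but the same goal: the intersection is the
% branch point of the skeleton (BW assumed binary, cross = foreground).
skel = bwmorph(BW, 'skel', Inf);           % thin the cross to one-pixel lines
bp   = bwmorph(skel, 'branchpoints');      % pixels with three or more skeleton neighbours
[r, c] = find(bp);                         % row/column of the intersection point(s)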
I would start by using the grayscale image map. The darkest points are on the cross, so discriminating on the pixel values is a starting point. After discrimination, set all the points off the cross to white and leave the rest as they are. This maximizes the contrast between points on the cross and the rest of the image. Next, come up with a filter for determining the position with the strongest average response. I would step through the entire image with an NxM window and take the mean value at its center point. Create a new array of these means and the extremum should be at the intersection. I'm curious to see how someone else might try this!
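A loose sketch of that idea, under one interpretation: invert the grayscale so the dark cross becomes bright, smooth with an averaging window, and take the location of the maximum response. The 31x31 window size is a guess and img is an assumed RGB input.
% Loose sketch of the mean-filter idea (one interpretation of the answer above).
g = im2double(rgb2gray(img));            % img: original RGB image (assumed)
w = 1 - g;                               % invert: the dark cross becomes bright
m = imfilter(w, fspecial('average', [31 31]), 'replicate');  % local mean response
[~, idx] = max(m(:));
[row, col] = ind2sub(size(m), idx);      % approximate intersection location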