I have a PNG image of the digit ‘6’, and I want to determine the position of the stem with respect to the blob using morphological operations. I have detected the blob of the ‘6’ using the code below, but I don't know how to detect the stem. I tried the Hough transform and edge detection algorithms, but they didn't help.
Here is my code for detecting the blob:
img=imread('six.png');
img=rgb2gray(img);
figure,imshow(img);
i1=im2bw(img);
st=strel('square',20);
imdilate(i1,st); % note: the result is not assigned, so this dilation has no effect on i1
figure,imshow(i1);
i2=imfill(i1,'holes');
figure,imshow(i2);
i1=imsubtract(i2,i1);
B = bwboundaries(i1);
figure,imshow(i1)
i2=i2-i1;
figure,imshow(i2);
text(10,10,strcat('\color{green}Objects Found:',num2str(length(B))))
hold on
for k = 1:length(B)
boundary = B{k};
plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 0.2)
end
if length(B) == 1
h=msgbox('the number is 6');
else
h=msgbox('unknown number');
end
Here's the original six image and my current output
If you want to stick to morphological operations, you can simply find the pixels that are closest to the hole that you have already detected and remove them.
I start with the same morphological operations that you do, and add an extra step that removes pixels within a distance threshold of the detected hole.
img=imread('six.png');
img=im2bw(img);
figure,imshow(img);
filled_img=imfill(img,'holes');
figure; imshow(filled_img);
filled_boundary= bwmorph(filled_img,'remove');
figure
imshow(filled_boundary)
hole = ~img & filled_img;
figure; imshow(hole);
hole_boundary = bwmorph(hole, 'remove');
figure; imshow(hole_boundary);
%Remove points on the boundary that are close to the hole
[hole_x, hole_y] = find(hole_boundary);
[fill_x, fill_y] = find(filled_boundary);
D = pdist2([hole_x, hole_y], [fill_x, fill_y]);
[distance, ~] = min(D, [], 1);
distance_threshold = 10;
top_edges = filled_boundary;
idx = distance < distance_threshold;
top_edges(sub2ind(size(top_edges), fill_x(idx), fill_y(idx))) = 0; % zero only the matching pixels (sub2ind avoids the full Cartesian product)
figure; imshow(top_edges);
This is what my output image looks like
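As a rough follow-up toward the original question (the position of the stem with respect to the blob), one option is to compare the centroids of the remaining edge pixels and of the detected hole. This is only a sketch, reusing the top_edges and hole images from the code above:
[stem_r, ~] = find(top_edges); % rows of the remaining (stem-side) edge pixels
[hole_r, ~] = find(hole); % rows of the hole pixels
if mean(stem_r) < mean(hole_r) % a smaller mean row index means higher up in the image
disp('the stem lies above the blob');
else
disp('the stem lies below the blob');
end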
I have a polygon which intersects itself multiple times. I try to create a mask from this polygon, i.e., to find all point/pixel locations within the polygon. I use the Matlab function poly2mask for this. However, due to the multiple self-intersections, this is the result I obtain:
Resulting mask from poly2mask for multi-self-intersecting polygon
So, some areas remain unmasked, because of the intersections. I think Matlab sees this as some sort of inclusions. The Matlab help for poly2mask doesn't mention anything about this. Does anyone have an idea how to also include these regions in the mask?
I obtain good results by combining a small dilation/erosion (closing) step with imfill, as follows:
data = load('polygon_edge.mat');
x = data.polygon_edge(:, 1);
y = data.polygon_edge(:, 2);
bw1 = poly2mask(x,y,ceil(max(y)),ceil(max(x)));
se = strel('sphere',1);
bw2 = imerode(imdilate(bw1,se), se);
bw3 = imfill(bw2, 'holes');
figure
imshow(bw3)
hold on
plot(x(:, 1),y(:, 1),'g','LineWidth',2)
The small dilation and erosion step is needed to make sure that all the regions are connected, even at places where the polygon is only connected through a single point; otherwise imfill may see some non-existing holes.
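As a note on the design choice: the dilate-then-erode pair above is exactly a morphological closing, so (with the same structuring element se) the bw2 line could equivalently be written with imclose:
bw2 = imclose(bw1, se); % closing = imdilate followed by imerode with the same strel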
You can use inpolygon:
bw1 = poly2mask(x,y,1000,1000);
subplot(131)
imshow(bw1)
hold on
plot(x([1:end 1]),y([1:end 1]),'g','LineWidth',2)
title('using poly2mask')
[xq,yq] = meshgrid(1:1000);
[IN,ON] = inpolygon(xq,yq,x,y);
bw2 = IN | ON;
subplot(132)
imshow(bw2)
hold on
plot(x([1:end 1]),y([1:end 1]),'g','LineWidth',2)
title('using inpolygon')
% boundary - suggested by another answer
k = boundary(x, y, 1); % 1 == tightest single-region boundary
bw3 = poly2mask(x(k), y(k), 1000, 1000);
subplot(133)
imshow(bw3)
hold on
plot(x([1:end 1]),y([1:end 1]),'g','LineWidth',2)
title('using boundary')
Update: I updated my answer to include boundary; it does not seem to work well in my case.
You should first calculate the boundary of your polygon and use this to create your mask.
k = boundary(x, y, 0.99); % a shrink factor of 1 gives the tightest single-region boundary; 0.99 is used here (see below)
BW = poly2mask(x(k), y(k), m, n)
Using a shrink factor of 0.99 instead of 1 avoids undercutting, but sharp non-convex corners are still not fitted correctly.
I am making a script in Matlab that takes in an image of the rear of a car. After some image processing I would like to output the original image of the car with a rectangle around the license plate of the car. Here is what I have written so far:
origImg = imread('CAR_IMAGE.jpg');
I = imresize(origImg, [500, NaN]); % easier viewing and edge connecting
G = rgb2gray(I);
M = imgaussfilt(G); % blur to remove some noise
E = edge(M, 'Canny', 0.4);
% I can assume all letters are somewhat upright
RP = regionprops(E, 'PixelIdxList', 'BoundingBox');
W = vertcat(RP.BoundingBox); W = W(:,3); % get the widths of the BBs
H = vertcat(RP.BoundingBox); H = H(:,4); % get the heights of the BBs
FATTIES = W > H; % find the BBs that are more wide than tall
RP = RP(FATTIES);
E(vertcat(RP.PixelIdxList)) = false; % remove more wide than tall regions
D = imdilate(E, strel('disk', 1)); % dilate for easier viewing
figure();
imshowpair(I, D, 'montage'); % display original image and processed image
Here are some examples:
From here I am unsure how to isolate the letters of the license plate, particularly in cases like the second example above, where each letter has a decreased area due to the perspective of the image. My first idea was to get the bounding box of all regions and keep only the regions where the perimeter-to-area ratio is "similar", but this ended up removing the letters of the plate that were connected after dilation, like the K and V in the fourth example above.
I would appreciate some suggestions on how I should go about isolating these letters. No code is necessary, and any advice is appreciated.
So I continued to work despite not receiving any answers here on SO and managed to get a working version through trial and error. All of the following code comes after the code in my original question and all plots below are from the first example image above. First, I found the variance for every single pixel row of the image and plotted them like so:
V = var(D, 0, 2);
X = 1:length(V);
figure();
hold on;
scatter(X, V);
I then fit a very high order polynomial to this scatter plot and saved the values where the slope of the polynomial was nearly zero and the variance value was very low (i.e. the dark row of pixels immediately before or after a row with some white):
P = polyfit(X', V, 25);
PV = polyval(P, X);
Z = X(find(PV < 0.03 & abs(gradient(PV)) < 0.0001));
plot(X, PV); % red curve on plot
scatter(Z, zeros(1,length(Z))); % orange circles on x-axis
I then calculate the integral of the polynomial between any consecutive Z values (my dark rows), and save the two Z values between which the integral is the largest, which I mark with lines on the plot:
MAX_INTEG = -1;
MIN_ROW = -1;
MAX_ROW = -1;
for i = 1:(length(Z)-1)
TEMP_MIN = Z(i);
TEMP_MAX = Z(i+1);
Q = polyint(P);
TEMP_INTEG = diff(polyval(Q, [TEMP_MIN, TEMP_MAX]));
if (TEMP_INTEG > MAX_INTEG)
MAX_INTEG = TEMP_INTEG;
MIN_ROW = TEMP_MIN;
MAX_ROW = TEMP_MAX;
end
end
line([MIN_ROW, MIN_ROW], [-0.1, max(V)+0.1]);
line([MAX_ROW, MAX_ROW], [-0.1, max(V)+0.1]);
hold off;
Since the X-values of these lines correspond to row numbers in the original image, I can crop my image between MIN_ROW and MAX_ROW:
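In code, that cropping step is just row indexing (a minimal sketch, reusing I, MIN_ROW and MAX_ROW from above):
I_rows = I(MIN_ROW:MAX_ROW, :, :); % keep only the rows between the two lines
figure();
imshow(I_rows);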
I repeat the above steps now for the columns of pixels, crop, and remove any excess black rows or columns to arrive at the identified plate:
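A hedged sketch of that column pass (the names D_rows, Vc, etc. are illustrative, not from my original code):
D_rows = D(MIN_ROW:MAX_ROW, :); % edge image cropped to the detected rows
Vc = var(double(D_rows), 0, 1); % per-column variance this time (along dimension 1)
Xc = (1:length(Vc))';
Pc = polyfit(Xc, Vc', 25);
PVc = polyval(Pc, Xc);
Zc = Xc(PVc < 0.03 & abs(gradient(PVc)) < 0.0001);
% ... then keep the pair of consecutive Zc values with the largest integral of the
% polynomial, exactly as was done for the rows, and crop the columns between them.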
I then perform 2D cross correlation between this cropped image and the edge image D using Matlab's xcorr2 to locate the plate in the original image. After finding the location, I just draw a rectangle around the discovered plate like so:
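Here is a rough sketch of that localization step (plate_edges stands for the cropped, edged plate from the previous steps; the name is illustrative, and xcorr2 requires the Signal Processing Toolbox):
C = xcorr2(double(D), double(plate_edges)); % 2D cross correlation
[~, idx] = max(C(:)); % strongest correlation peak
[peak_r, peak_c] = ind2sub(size(C), idx);
top_left = [peak_c, peak_r] - fliplr(size(plate_edges)) + 1; % [x y] of the matched region
figure();
imshow(I);
hold on;
rectangle('Position', [top_left, fliplr(size(plate_edges))], 'EdgeColor', 'r', 'LineWidth', 2);
hold off;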
How to count circle objects in a bright image using MATLAB?
The input image is:
The imfindcircles function can't find any circles in this image.
Based on well-known image processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg'); % read the image
imgGray = rgb2gray(img); % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray) % show the image
[gx, gy] = gradient(double(imgGray)); % take the first derivative
[gxx, gxy] = gradient(gx); % take the second derivatives
[gxy, gyy] = gradient(gy); % take the second derivatives
k = 0.04; %0.04-0.15 (see wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris corner detector (high second derivatives in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0); % keep only intensity peaks (negative second derivative in both directions)
figure
imshow(blob, []) % show the blobs (scaled to the data range for display)
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the position of the circles
plot(Y, X, 'w.'); % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not so bad) overestimation.
The following figure shows the original image with the circle positions indicated as white dots. Some wrong detections are made at the border of the object. You can try to make some adjustments to the constants sigma, k and blobThreshold to obtain better results. In particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.
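If you want to explore that, a small sweep over k is easy to add (a sketch reusing gxx, gxy, gyy and blobThreshold from the code above):
for k = [0.04 0.08 0.12 0.15]
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2);
blob = blob .* (gxx < 0 & gyy < 0);
circles = imregionalmax(blob) & blob > blobThreshold;
fprintf('k = %.2f -> %d circles\n', k, nnz(circles));
end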
To be exact, I need the four end points of the road in the image below.
I used find to get [x y]; it does not provide a satisfying result in real time.
I'm assuming the images are already annotated. In this case we just find the marked points and extract coordinates (if you need to find the red points dynamically through code, this won't work at all)
The first thing you have to do is find a good feature to use for segmentation. See my SO answer here what-should-i-use-hsv-hsb-or-rgb-and-why for code and details. That produces the following image:
We can see that saturation (and a few others) are good candidate color spaces. So now you must convert your image to the new color space and apply thresholding to find your points.
The points are obtained using Matlab's regionprops, looking specifically for the centroid. At that point you are done.
Here is the complete code and the results:
im = imread('http://i.stack.imgur.com/eajRb.jpg');
HUE = 1;
SATURATION = 2;
BRIGHTNESS = 3;
%see https://stackoverflow.com/questions/30022377/what-should-i-use-hsv-hsb-or-rgb-and-why/30036455#30036455
ViewColoredSpaces(im)
%convert image to hsv
him = rgb2hsv(im);
%threshold, all rows, all columns,
my_threshold = 0.8; %determined empirically
thresh_sat = him(:,:,SATURATION) > my_threshold;
%remove small blobs using a 3 pixel disk
se = strel('disk',3);
cleaned_sat = imopen(thresh_sat, se);% imopen = imdilate(imerode(im,se),se)
%find the centroids of the remaining blobs
s = regionprops(cleaned_sat, 'centroid');
centroids = cat(1, s.Centroid);
%plot the results
figure();
subplot(2,2,1) ;imshow(thresh_sat) ;title('Thresholded saturation channel')
subplot(2,2,2) ;imshow(cleaned_sat);title('After morphological opening')
subplot(2,2,3:4);imshow(im) ;title('Annotated img')
hold on
for curr_centroid = 1:size(centroids,1)
%prints coordinate
x = round(centroids(curr_centroid,1));
y = round(centroids(curr_centroid,2));
text(x,y,sprintf('[%d,%d]',x,y),'Color','y');
end
%plots centroids
scatter(centroids(:,1),centroids(:,2),[],'y')
hold off
%prints out centroids
centroids
centroids =
7.4593 143.0000
383.0000 87.9911
435.3106 355.9255
494.6491 91.1491
Some sample code would make it much easier to tailor a specific solution to your problem.
One solution to this general problem is using impoint.
Something like
h = figure();
ax = gca;
% ... drawing your image
points = {};
points{end+1} = impoint(ax,initialX,initialY);
% ... generate more points
indx = 1; % or whatever point you care about
[currentX,currentY] = getPosition(points{indx});
should do the trick.
Edit: The first argument of impoint is an axes object, not a figure object.
I'm trying to measure the areas of each particle shown in this image:
I managed to get the general shape of each particle using MSER shown here:
but I'm having trouble removing the background. I tried using MATLAB's imfill, but it doesn't fill all the particles because some are cut off at the edges. Any tips on how to get rid of the background or find the areas of the particles some other way?
Cheers.
Edit: This is what imfill looks like:
Edit 2: Here is the code used to get the outline. I used this for the MSER.
%Compute region seeds and elliptical frames.
%MinDiversity = how similar to its parent MSER the region is
%MaxVariation = stability of the region
%BrightOnDark is used as the void is primarily dark. It also prevents dark
%patches in the void being detected.
[r,f] = vl_mser(I,'MinDiversity',0.7,...
'MaxVariation',0.2,...
'Delta',10,...
'BrightOnDark',1,'DarkOnBright',0) ;
%Plot region frames, but not used right now
%f = vl_ertr(f) ;
%vl_plotframe(f) ;
%Plot MSERs
M = zeros(size(I)) ; %M = no of overlapping extremal regions
for x=r'
s = vl_erfill(I,x) ;
M(s) = M(s) + 1;
end
%Display region boundaries
figure(1) ;
clf ; imagesc(I) ; hold on ; axis equal off; colormap gray ;
%Create contour plot using the values
%0:max(M(:))+.5 would be the full set of contour levels. Only level 0 is
%needed, so [0 0] is used.
[c,h] = contour(M,[0 0]);
set(h,'color','r','linewidth',1) ;
%Retrieve the image data from the contour image
f = getframe;
I2 = f.cdata;
%Convert the image into binary; the red outlines are white while the rest
%is black.
I2 = all(bsxfun(@eq,I2,reshape([255 0 0],[1 1 3])),3);
I2 = imcrop(I2,[20 1 395 343]);
imshow(~I2);
Proposed solution / trick and code
It seems you can work with M here. One trick you can employ would be to pad zeros all around the boundaries of the image M and then fill its holes. This takes care of filling the blobs that were touching the boundaries before, because after the zero padding no blob touches the image boundary anymore.
Thus, after you have M, you can add this code -
%// Get a binary version of M
M_bw = im2bw(M);
%// Pad zeros all across the grayscale image
padlen = 2; %// length of zeros padding
M_pad = padarray(M_bw,[padlen padlen],0);
%// Fill the holes
M_pad_filled = imfill(M_pad,'holes');
%// Get the background mask after the holes are gone
background_mask = ~M_pad_filled(padlen+1:end-padlen,padlen+1:end-padlen);
%// Overlay the background mask on the original image to show that you have
%// a working background mask for use
I(background_mask) = 0;
figure,imshow(I)
Results
Input image -
Foreground mask (this would be ~background_mask) -
Output image -
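Since the original goal was to measure the particle areas, here is a hedged follow-up sketch: with the background mask available, the foreground is simply its complement, and regionprops can report the area of each connected particle in pixels.
particles = ~background_mask; % foreground mask from the code above
stats = regionprops(particles, 'Area');
areas = [stats.Area]; % area of each connected component, in pixels
fprintf('found %d particles, mean area %.1f px\n', numel(areas), mean(areas));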