I am currently working on a project for my university: detection of pollen in an RGB-image.
The orange stuff is what I want to have marked.
I managed to detect the circular-shaped ones using imfindcircles() but for the triangular shapes I did not find a function.
Im = imread('pollen10.jpg');
image_gray = rgb2gray(Im);
%% Detect circles with given radii
Rmin = 30;
Rmax = 160;
[centersDark, radiiDark] = imfindcircles(image_gray,[Rmin Rmax],'ObjectPolarity','dark','sensitivity',0.90);
%% Checking results
imshow(image_gray);
hold on
viscircles(centersDark, radiiDark,'LineStyle','-','Color','b','Linewidth',1);
hold off
How can I detect the triangular shapes?
I managed to get a BW image in which I want to count the white polygons. I have not found anything straightforward for this yet, but I will invest more time in searching soon.
im = imread('pollen10.jpg');
BW = im(:,:,1) < 140;                          % threshold the red channel
cc = bwconncomp(BW);                           % label connected components
stats = regionprops(cc, 'Centroid','Area');
centroids = cat(1,stats.Centroid);
idx = find([stats.Area] > 100);                % keep only blobs larger than 100 px
BW2 = ismember(labelmatrix(cc), idx);
CC2 = bwconncomp(BW2);
stats2 = regionprops(CC2, 'Centroid','Area','BoundingBox','Eccentricity');
centroids2 = cat(1,stats2.Centroid);
imshow(BW2)
hold on
plot(centroids2(:,1),centroids2(:,2),'b*');    % mark each remaining blob
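For the triangle question, one option (my own sketch, not from the original post) is to classify the blobs extracted above by shape instead of searching for triangles directly: the circularity measure 4*pi*Area/Perimeter^2 is close to 1 for circles and about 0.6 for triangles, so thresholding it separates the two. A minimal sketch reusing BW2, with a guessed cutoff:
s = regionprops(bwconncomp(BW2), 'Area', 'Perimeter', 'Centroid');
circ = 4*pi*[s.Area] ./ ([s.Perimeter].^2);   % ~1 for circles, ~0.6 for triangles
isTri = circ < 0.75;                          % cutoff is a guess; tune it on your images
cen = cat(1, s.Centroid);
plot(cen(isTri,1), cen(isTri,2), 'r^');       % mark the triangular blobs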
I want to automate the process of classifying the squares of chessboard images as black or white. The next step would be to distinguish whether a square is empty or contains a piece. So far I get close to classifying every square as white or black using the average intensity of the center of the square, but it is difficult to set a threshold. For the next step (empty square or square with a piece) I tried std2, but there it was also difficult.
Here is my original image, and what I got close to so far
Here is my script:
image = imerode(original,strel('disk',3));
image = imadjust(image,[],[],2);
figure,imshow(image),hold on;
for i = 1:length(cells)
TL = cells(i).TL; %Cell's corner top left
TR = cells(i).TR; %Cell's corner top right
BL = cells(i).BL; %Cell's corner bottom left
BR = cells(i).BR; %Cell's corner bottom right
x = [TL(1) TR(1) BR(1) BL(1)];
y = [TL(2) TR(2) BR(2) BL(2)];
bw = poly2mask(x,y,size(image,1),size(image,2));
measurements = regionprops(bw,'BoundingBox');
cropped = imcrop(image, measurements.BoundingBox);
m = regionprops(bw,'Centroid');
x = m.Centroid(1);
y = m.Centroid(2);
w = 25; %width
h = 25; %height
tl = [round(x-w/2) round(y-h/2)];
center = image(tl(2):tl(2)+h, tl(1):tl(1)+w, :); % rows are indexed by y, columns by x
%stds(i) = std2(center);
avgs(i) = mean2(center);
if(avgs(i) > 55)
str = "W";
else
str = "B";
end
text(x,y,str,'Color','red','FontSize',16);
end
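Coming back to the thresholding difficulty: instead of the hard-coded value 55, the black/white cut could be derived from the per-square averages themselves, for example with Otsu's method. A minimal sketch of that idea (my own suggestion, assuming avgs has been filled by the loop above):
avgsNorm = mat2gray(avgs);    % rescale the per-square means to [0,1]
t = graythresh(avgsNorm);     % Otsu's threshold on that distribution
isWhite = avgsNorm > t;       % one logical flag per square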
EDIT: The image below is the new result after
image = imerode(image,strel('disk',4));
image = image>160;
You can use MATLAB's built-in checkerboard functions detectCheckerboardPoints and checkerboard to find the size of the checkerboard and construct a new one of the appropriate size. Since only two checkerboards are possible (depending on the colour of the top-left square), construct both and check which one matches best.
img = imread('PNWSv.jpg'); %Load image
%Prepare image
I = rgb2gray(img);
I2 = imerode(I,strel('square',10));
bw = imbinarize(I2,'adaptive');
%Find checkerboard points
[imagePoints,boardSize] = detectCheckerboardPoints(bw);
%Find the size of the checkerboard fields
x = boardSize(2)-1;
y = boardSize(1)-1;
fields = cell(y,x);
for k = 1:length(imagePoints)
[i,j] = ind2sub([y,x],k);
fields{i,j} = imagePoints(k,:);
end
avgDx = mean(mean(diff(cellfun(@(x) x(1),fields),1,2)));
avgDy = mean(mean(diff(cellfun(@(x) x(2),fields),1,1)));
%Construct the two possibilities
ref1 = imresize(imbinarize(checkerboard),[avgDx,avgDy]*8);
ref2 = imcomplement(ref1);
%Check which ones fits the better
c1 = normxcorr2(ref1,I);
c2 = normxcorr2(ref2,I);
if max(c2(:))<max(c1(:))
ref=ref1;
c = c1;
else
ref=ref2;
c = c2;
end
%Plot the checkerboard bounding box on top of the original image
[ypeak, xpeak] = find(c==max(c(:)));
yoffSet = ypeak-size(ref,1);
xoffSet = xpeak-size(ref,2);
imshow(img);
imrect(gca, [xoffSet+1, yoffSet+1, size(ref1,2), size(ref1,1)]);
Erosion followed by binarization will help you find the empty white squares.
From these, you can reconstruct the whole chessboard grid more easily and estimate the occupied squares.
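A minimal sketch of that idea (the erosion radius and blob-size range are guesses and would need tuning; original is assumed to be the RGB chessboard image from the question):
I = rgb2gray(original);                        % grayscale copy of the board image
bw = imbinarize(imerode(I, strel('disk', 4))); % erode to suppress pieces, then binarize
bw = bwareafilt(bw, [500 5000]);               % keep blobs of plausible square size
s = regionprops(bw, 'Centroid');
cen = cat(1, s.Centroid);                      % centroids lie on the white-square grid
From the surviving centroids the full 8x8 lattice can be reconstructed, for example from their median horizontal/vertical spacing, and occupied squares can then be flagged by a higher intensity variance (std2) inside each reconstructed cell.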
I want to find the corners of objects.
I tried the following code:
Vstats = regionprops(BW2,'Centroid','MajorAxisLength','MinorAxisLength',...
'Orientation');
u = [Vstats.Centroid];
VcX = u(1:2:end);
VcY = u(2:2:end);
[VcY id] = sort(VcY); % sorting regions by vertical position
VcX = VcX(id);
Vstats = Vstats(id); % permute to match the sorted order
Bv = Bv(id); % Bv: cell array of region boundary points (presumably from bwboundaries), reordered the same way
Vori = [Vstats.Orientation];
VRmaj = [Vstats.MajorAxisLength]/2;
VRmin = [Vstats.MinorAxisLength]/2;
% find corners of vertebrae
figure,imshow(BW2)
hold on
% C = corner(VER);
% plot(C(:,1), C(:,2), 'or');
C = cell(size(Bv));
Anterior = zeros(2*length(C),2);
Posterior = zeros(2*length(C),2);
for i = 1:length(C) % for each region
cx = VcX(i); % centroid coordinates
cy = VcY(i);
bx = Bv{i}(:,2); % edge points coordinates
by = Bv{i}(:,1);
ux = bx-cx; % move to the origin
uy = by-cy;
[t, r] = cart2pol(ux,uy); % convert to polar coordinates
t = t - deg2rad(Vori(i)); % unrotate
for k = 1:4 % find corners (look each quadrant)
fi = t( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
ri = r( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
[rp, ip] = max(ri); % find farthest point
tc(k) = fi(ip); % save coordinates
rc(k) = rp;
end
[xc,yc] = pol2cart(tc+1*deg2rad(Vori(i)) ,rc); % re-rotate and convert back to Cartesian coordinates
C{i}(:,1) = xc + cx; % return to previous place
C{i}(:,2) = yc + cy;
plot(C{i}([1,4],1),C{i}([1,4],2),'or',C{i}([2,3],1),C{i}([2,3],2),'og')
% save coordinates :
Anterior([2*i-1,2*i],:) = [C{i}([1,4],1), C{i}([1,4],2)];
Posterior([2*i-1,2*i],:) = [C{i}([2,3],1), C{i}([2,3],2)];
end
My input image is:
I got the following output image:
The bottommost object in the image is not detected properly. How can I correct the code? It fails to work for a rotated image.
You can get all the points from the image and use k-means clustering to partition them into 8 groups. Once the partition is done, you have the points of each region in hand and can pick whichever points you want.
rgbImage = imread('your image') ;
%% crop out the unwanted white background from the image
grayImage = min(rgbImage, [], 3);
binaryImage = grayImage < 200;
binaryImage = bwareafilt(binaryImage, 1);
[rows, columns] = find(binaryImage);
row1 = min(rows);
row2 = max(rows);
col1 = min(columns);
col2 = max(columns);
% Crop
croppedImage = rgbImage(row1:row2, col1:col2, :);
I = rgb2gray(croppedImage) ;
%% Get the white regions
[y,x,val] = find(I) ;
%% use kmeans clustering
[idx,C] = kmeans([x,y],8) ;
%%
figure
imshow(I) ;
hold on
for i = 1:8
xi = x(idx==i) ; yi = y(idx==i) ;
id1=convhull(xi,yi) ;
coor = [xi(id1) yi(id1)] ;
[id,c] = kmeans(coor,4) ;
plot(coor(:,1),coor(:,2),'r','linewidth',3) ;
plot(c(:,1),c(:,2),'*b')
end
Now we are able to capture the regions; the boundary/convex-hull points are in hand, and you can do whatever math you want with them.
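For example, the four cluster centres c estimated inside the loop can be put into a consistent angular order around their mean, so that corner 1 to 4 means the same thing for every region (a small sketch of my own, meant to go inside the loop above, after the inner kmeans call):
mx = mean(c(:,1));  my = mean(c(:,2));
[~, order] = sort(atan2(c(:,2) - my, c(:,1) - mx));  % angular order around the mean
cOrdered = c(order, :);                              % corners in a consistent order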
Did you solve the problem? I looked into it, and it seems that the rotation given by regionprops ('Orientation') is off. To fix that I prepared a quick solution: I dilated the image to close the gaps, found the 4 most distant peaks of each vertebra's boundary, and then checked whether each peak lies to the left or to the right of the centerline (obtained by extrapolating from the sorted centroids). This method seems to work for this particular problem.
BW2 = rgb2gray(Image);
BW2 = imbinarize(BW2);
%dilate and erode will help to remove extra features of the vertebra
se = strel('disk',4,4);
BW2_dilate = imdilate(BW2,se);
BW2_erode = imerode(BW2_dilate,se);
sb = bwboundaries(BW2_erode);
figure
imshow(BW2)
hold on
centerLine = [];
corners = [];
for bone = 1:length(sb)
x0 = sb{bone}(:,2) - mean(sb{bone}(:,2));
y0 = sb{bone}(:,1) - mean(sb{bone}(:,1));
%save the position of the centroid
centerLine = [centerLine; [mean(sb{bone}(:,1)) mean(sb{bone}(:,2))]];
[th0,rho0] = cart2pol(x0,y0);
%make sure that the indexing starts at the dip, not at the corner
lowest_val = find(rho0==min(rho0));
rho1 = [rho0(lowest_val:end); rho0(1:lowest_val-1)];
th00 = [th0(lowest_val:end); th0(1:lowest_val-1)];
y1 = [y0(lowest_val:end); y0(1:lowest_val-1)];
x1 = [x0(lowest_val:end); x0(1:lowest_val-1)];
%detect corners, using smooth data to remove noise
[pks,locs] = findpeaks(smooth(rho1));
[pksS,idS] = sort(pks,'descend');
%4 most pronounced peaks are where the corners are
edgesFndCx = x1(locs(idS(1:4)));
edgesFndCy = y1(locs(idS(1:4)));
edgesFndCx = edgesFndCx + mean(sb{bone}(:,2));
edgesFndCy = edgesFndCy + mean(sb{bone}(:,1));
corners{bone} = [edgesFndCy edgesFndCx];
end
[~,idCL] = sort(centerLine(:,1),'descend');
centerLine = centerLine(idCL,:);
%extrapolate the spine centerline
yDatExt= 1:size(BW2_erode,1);
extrpLine = interp1(centerLine(:,1),centerLine(:,2),yDatExt,'spline','extrap');
plot(centerLine(:,2),centerLine(:,1),'r')
plot(extrpLine,yDatExt,'r')
%find edges to the left, and to the right of the centerline
for bone = 1:length(corners)
x0 = corners{bone}(:,2);
y0 = corners{bone}(:,1);
for crn = 1:4
xCompare = extrpLine(y0(crn));
if x0(crn) < xCompare
plot(x0(crn),y0(crn),'go','LineWidth',2)
else
plot(x0(crn),y0(crn),'ro','LineWidth',2)
end
end
end
I would like to find the relative position between the circle (red in the image) and the door (or the door arrow on the door). I was able to detect the window (red circle in the image) and would now like to know the relative position between the circle and the door (door arrow), as well as the minimal distance between them: the minimal distance to the vertical door arrow and the minimal distance to the horizontal door arrow.
I posted the code to find the window (red circle) and two images (the original one and the result of finding the window). Is there any code or function that gives me that relative position and those distances?
Here is the code to find the window (MATLAB), from Amitay Nachmani's answer to this post:
How do I find an object in image/video knowing its real physical dimension?
clear all
% Parameters
minValueWindow = 90;
maxValueWindow = 110;
% Read file
I = imread('image1.jpg');
Igray = rgb2gray(I);
[row,col] = size(Igray);
% Edge detection
Iedge = edge(Igray,'canny',[0 0.3]);
% Hough circle transform
rad = 40:80; % The approximate radius in pixels
detectedCircle = {};
detectedCircleIndex = 1;
for radIndex=1:1:length(rad)
[y0detect,x0detect,Accumulator] = houghcircle(Iedge,rad(1,radIndex),rad(1,radIndex)*pi/2);
if ~isempty(y0detect)
circles = struct;
circles.X = x0detect;
circles.Y = y0detect;
circles.Rad = rad(1,radIndex);
detectedCircle{detectedCircleIndex} = circles;
detectedCircleIndex = detectedCircleIndex + 1;
end
end
% For each detection run a color filter
ang=0:0.01:2*pi;
finalCircles = {};
finalCircleIndex = 1;
for i=1:1:detectedCircleIndex-1
rad = detectedCircle{i}.Rad;
xp = rad*cos(ang);
yp = rad*sin(ang);
for detectedPointIndex=1:1:length(detectedCircle{i}.X)
% Take each detected center and sample the gray image
samplePointsX = round(detectedCircle{i}.X(detectedPointIndex) + xp);
samplePointsY = round(detectedCircle{i}.Y(detectedPointIndex) + yp);
sampleValueInd = sub2ind([row,col],samplePointsY,samplePointsX);
sampleValueMean = mean(Igray(sampleValueInd));
% Check if the circle color is good
if(sampleValueMean > minValueWindow && sampleValueMean < maxValueWindow)
circle = struct();
circle.X = detectedCircle{i}.X(detectedPointIndex);
circle.Y = detectedCircle{i}.Y(detectedPointIndex);
circle.Rad = rad;
finalCircles{finalCircleIndex} = circle;
finalCircleIndex = finalCircleIndex + 1;
end
end
end
% Find the main circle by merging close hypotheses together
for finaCircleInd=1:1:length(finalCircles)
circleCenter(finaCircleInd,1) = finalCircles{finaCircleInd}.X;
circleCenter(finaCircleInd,2) = finalCircles{finaCircleInd}.Y;
circleCenter(finaCircleInd,3) = finalCircles{finaCircleInd}.Rad;
end
[ind,C] = kmeans(circleCenter,2);
c = [length(find(ind==1));length(find(ind==2))];
[~,maxInd] = max(c);
xCircle = median(circleCenter(ind==maxInd,1));
yCircle = median(circleCenter(ind==maxInd,2));
radCircle = median(circleCenter(ind==maxInd,3));
% Plot circle
imshow(Igray);
hold on
ang=0:0.01:2*pi;
xp=radCircle*cos(ang);
yp=radCircle*sin(ang);
plot(xCircle+xp,yCircle+yp,'Color','red', 'LineWidth',5);
And the two images:
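As a hedged sketch towards the distance question (not part of the original post): assuming the door arrow has been segmented into a binary mask doorMask of the same size as Igray, and using xCircle, yCircle and radCircle from the code above, the minimal circle-to-arrow distance and the relative direction could be computed as:
[rArrow, cArrow] = find(doorMask);                    % door-arrow pixel coordinates
dCenter = hypot(cArrow - xCircle, rArrow - yCircle);  % distances to the circle centre
[dMin, k] = min(dCenter);
minDistToArrow = dMin - radCircle;                    % distance from the circle edge
relVec = [cArrow(k) - xCircle, rArrow(k) - yCircle];  % relative position (dx, dy)
Running this once with a mask of the vertical door arrow and once with a mask of the horizontal one gives the two minimal distances asked for.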
I am doing real-time people detection using a HOG-LBP descriptor, a sliding-window approach for the detector, and LibSVM for the classifier. However, after classification I never get multiple detected people; sometimes there is only one, or none at all. I suspect there is a problem in my classification step. Here is my classification code:
label = ones(length(featureVector),1);
P = cell2mat(featureVector);
% each row of P' correspond to a window
% classifying each window
[~, predictions] = svmclassify(P', label,model);
% set the threshold for keeping multiple detections
% (here the threshold value is 0.6)
get_detect = predictions.*[predictions>0.6];
% indices of the windows that passed the threshold
[r,c,v]= find(get_detect);
%% Creating the bounding box for detection
for ix=1:length(r)
rects{ix}= boxPoint{r(ix)};
end
if (isempty(rects))
rects2=[];
else
rects2 = cv.groupRectangles(rects,3,'EPS',0.35);
end
for i = 1:numel(rects2)
rectangle('Position',[rects2{i}(1),rects2{i}(2),64,128], 'LineWidth',2,'EdgeColor','y');
end
I have posted my whole code here: [HOG with SVM] (sliding window technique for multiple people detection).
I really need help with this. Thanks.
If you have problems with the sliding window, you can use this code:
topLeftRow = 1;
topLeftCol = 1;
[bottomRightCol bottomRightRow d] = size(im);
fcount = 1;
% this for loop scan the entire image and extract features for each sliding window
for y = topLeftCol:bottomRightCol-wSize(2)
for x = topLeftRow:bottomRightRow-wSize(1)
p1 = [x,y];
p2 = [x+(wSize(1)-1), y+(wSize(2)-1)];
po = [p1; p2];
img = imcut(po,im);
featureVector{fcount} = HOG(double(img));
boxPoint{fcount} = [x,y];
fcount = fcount+1;
x = x+1; % note: this has no effect; MATLAB resets the loop variable on each iteration
end
end
label = ones(length(featureVector),1);
P = cell2mat(featureVector);
% each row of P' correspond to a window
[~, predictions] = svmclassify(P',label,model); % classifying each window
[a, indx]= max(predictions);
My idea is simple. I am using mexopencv and trying to see whether any object present in my current frame matches any image stored in my database. I am using OpenCV's DescriptorMatcher to train on my images.
Here is a snippet I want to build on top of; it does one-to-one image matching using mexopencv and can also be extended to an image stream.
function hello
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
train = [];
for i=1:3
train(i).img = [];
train(i).points = [];
train(i).features = [];
end;
train(1).img = imread('D:\test\1.jpg');
train(2).img = imread('D:\test\2.png');
train(3).img = imread('D:\test\3.jpg');
for i=1:3
frameImage = train(i).img;
framePoints = detector.detect(frameImage);
frameFeatures = extractor.compute(frameImage , framePoints);
train(i).points = framePoints;
train(i).features = frameFeatures;
end;
for i = 1:3
boxfeatures = train(i).features;
matcher.add(boxfeatures);
end;
matcher.train();
camera = cv.VideoCapture;
pause(3);%Sometimes necessary
window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
setappdata(window,'flag',false);
while(true)
sceneImage = camera.read;
sceneImage = rgb2gray(sceneImage);
scenePoints = detector.detect(sceneImage);
sceneFeatures = extractor.compute(sceneImage,scenePoints);
m = matcher.match(sceneFeatures);
%{
%Comments in
img_no = m.imgIdx;
img_no = img_no(1);
%I am planning to do this based on the fact that
%on a perfect match imgIdx a 1xN will be filled
%with the index of the training
%example 1,2 or 3
objPoints = train(img_no+1).points;
boxImage = train(img_no+1).img;
ptsScene = cat(1,scenePoints([m.queryIdx]+1).pt);
ptsScene = num2cell(ptsScene,2);
ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);
%This is where the problem starts; assuming the
%above is correct, MATLAB yells at me:
%"Index exceeds matrix dimensions."
[H,inliers] = cv.findHomography(ptsScene,ptsObj,'Method','Ransac');
m = m(inliers);
imgMatches = cv.drawMatches(sceneImage,scenePoints,boxImage,objPoints,m,...
'NotDrawSinglePoints',true);
imshow(imgMatches);
%Comment out
%}
flag = getappdata(window,'flag');
if isempty(flag) || flag, break; end
pause(0.0001);
end
Now the issue is that imgIdx is a 1xN matrix containing the indices of the different training images, which is expected. Only on a perfect match is imgIdx filled entirely with the index of the matched training image. So how do I use this matrix to pick the right image index? Also,
in these two lines I get an "index exceeds matrix dimensions" error:
ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);
This makes sense: while debugging I saw clearly that m.trainIdx is larger than the number of entries in objPoints, i.e. I am accessing points that I should not, hence the index exceeds the matrix dimensions.
There is scant documentation on the use of imgIdx, so if anybody has knowledge of this subject, I need help.
These are the images I used.
Image1
Image2
Image3
1st update after @Amro's response:
With the ratio of min distance to distance at 3.6, I get the following response.
With the ratio of min distance to distance at 1.6, I get the following response.
I think it is easier to explain with code, so here it goes :)
%% init
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
urls = {
'http://i.imgur.com/8Pz4M9q.jpg?1'
'http://i.imgur.com/1aZj0MI.png?1'
'http://i.imgur.com/pYepuzd.jpg?1'
};
N = numel(urls);
train = struct('img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));
%% training
for i=1:N
% read image
train(i).img = imread(urls{i});
if ~ismatrix(train(i).img)
train(i).img = rgb2gray(train(i).img);
end
% extract keypoints and compute features
train(i).pts = detector.detect(train(i).img);
train(i).feat = extractor.compute(train(i).img, train(i).pts);
% add to training set to match against
matcher.add(train(i).feat);
end
% build index
matcher.train();
%% testing
% lets create a distorted query image from one of the training images
% (rotation+shear transformations)
t = -pi/3; % -60 degrees angle
tform = [cos(t) -sin(t) 0; 0.5*sin(t) cos(t) 0; 0 0 1];
img = imwarp(train(3).img, affine2d(tform)); % try all three images here!
% detect fetures in query image
pts = detector.detect(img);
feat = extractor.compute(img, pts);
% match against training images
m = matcher.match(feat);
% keep only good matches
%hist([m.distance])
m = m([m.distance] < 3.6*min([m.distance]));
% sort by distances, and keep at most the first/best 200 matches
[~,ord] = sort([m.distance]);
m = m(ord);
m = m(1:min(200,numel(m)));
% naive classification (majority vote)
tabulate([m.imgIdx]) % how many matches each training image received
idx = mode([m.imgIdx]);
% matches with keypoints belonging to chosen training image
mm = m([m.imgIdx] == idx);
% estimate homography (used to locate object in query image)
ptsQuery = num2cell(cat(1, pts([mm.queryIdx]+1).pt), 2);
ptsTrain = num2cell(cat(1, train(idx+1).pts([mm.trainIdx]+1).pt), 2);
[H,inliers] = cv.findHomography(ptsTrain, ptsQuery, 'Method','Ransac');
% show final matches
imgMatches = cv.drawMatches(img, pts, ...
train(idx+1).img, train(idx+1).pts, ...
mm(logical(inliers)), 'NotDrawSinglePoints',true);
% apply the homography to the corner points of the training image
[h,w] = size(train(idx+1).img);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
% show where the training object is located in the query image
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
The result:
Note that since you did not post any test images (in your code you take input from the webcam), I created one by distorting one of the training images and using it as a query image. I am using functions from certain MATLAB toolboxes (imwarp and such), but those are non-essential to the demo and you could replace them with equivalent OpenCV ones...
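For instance, the imwarp/affine2d step could presumably be replaced with mexopencv's cv.warpAffine (a sketch under that assumption, untested; note that OpenCV expects a 2x3 matrix in the transposed convention relative to affine2d, and the output defaults to the input size, so the warped image may be cropped):
t = -pi/3;
A = [cos(t) -sin(t); 0.5*sin(t) cos(t)];  % same rotation+shear as above
M = [A', [0; 0]];                         % 2x3 affine matrix for cv.warpAffine
img = cv.warpAffine(train(3).img, M);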
I must say that this approach is not the most robust one. Consider using other techniques such as the bag-of-words model, which OpenCV already implements.