I have a final project about face detection. I decided to do this project using Matlab and the Computer Vision Toolbox because as you know, this toolbox uses Viola Jones Algorithm for object detection.
I wrote the code below, but it also matches a non-face object as a face.
Question
How can I change the code so that it matches faces only?
clear all
clc
% Read input image
I = imread('C:\imageprocessingwithMatlab\Image001.jpg');
figure,imshow(I);
%% Detect Faces in the image
% Create a detector object
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART');
% Detect faces
bbox = step(faceDetector, I);
% Draw boxes around detected faces and display results
IFaces = insertObjectAnnotation(I, 'rectangle', bbox, 'Face');
figure, imshow(IFaces), title('Detected Faces');
Unfortunately, there is no guaranteed way to eliminate all false detections. However, you may be able to tweak some parameters to make the face detection work better on your particular image.
The first thing I would do is look at your false detections. If they tend to be larger or smaller than a typical face in your image, then you can try to adjust the MinSize and MaxSize parameters to get rid of them.
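For example, a minimal sketch (the size bounds here are hypothetical; measure a typical face in your image to pick real ones):
% Hypothetical bounds: faces assumed to be between 60x60 and 200x200 pixels
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART');
faceDetector.MinSize = [60 60];   % [height width]
faceDetector.MaxSize = [200 200];
bbox = step(faceDetector, I);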
You can also try a different classification model, e.g. 'FrontalFaceLBP' instead of 'FrontalFaceCART'.
If that doesn't work, you can try a more clever trick. First detect the upper bodies of people using the 'UpperBody' classification model. Then detect the faces, and only keep the faces that are contained within upper bodies. This is likely to cut down on false detections, but you are also running a risk of missing real faces.
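A rough sketch of that idea (the box-in-box containment test below is one simple choice, not the only one):
% Detect upper bodies and faces, then keep only faces inside a body box
bodyDetector = vision.CascadeObjectDetector('UpperBody');
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART');
bodyBoxes = step(bodyDetector, I); % each row is [x y w h]
faceBoxes = step(faceDetector, I);
keep = false(size(faceBoxes, 1), 1);
for k = 1:size(faceBoxes, 1)
    f = faceBoxes(k, :);
    for j = 1:size(bodyBoxes, 1)
        b = bodyBoxes(j, :);
        if f(1) >= b(1) && f(2) >= b(2) && ...
           f(1)+f(3) <= b(1)+b(3) && f(2)+f(4) <= b(2)+b(4)
            keep(k) = true;
        end
    end
end
faceBoxes = faceBoxes(keep, :);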
Finally, you can train your own face detector using the trainCascadeObjectDetector function. But that is definitely beyond the scope of your project.
Related
I'm using the function regionprops to detect the number of trees in an image taken by a drone.
First I removed the ground using Blue NDVI:
Image with threshold:
Then I used the function regionprops to detect the number of trees on image:
But there is a problem in region 15: all the trees in that region are connected, so they are detected as a single tree.
I tried to separate the trees in that region using watershed segmentation, but it's not working:
Am I doing this the wrong way?
Is there a better method to separate the trees?
I would appreciate any help with this problem. Here is region 15 without the ground:
If it helps, here is the Gradient Magnitude image:
It has been some time since this question was asked; I hope it is not too late for an answer. I see a general problem with applying watershed segmentation in similar questions. Sometimes the objects are apart and do not touch each other, as in this example. In such cases, simply blurring the image before the watershed is enough. Sometimes the objects are located close together and touch each other, so their boundaries are not clear, as in this example. In such cases, the pipeline distance transform --> blur --> watershed helps. In this question, the logical approach is to use the distance transform. However, this time the boundaries are not clear because of shadows on and near the trees. In such cases, it helps to use any information that separates the objects, as here, or that emphasises the objects themselves.
In this question, I suggest using colour information to emphasise tree pixels.
Here is the MATLAB code and the results.
im=imread('https://i.stack.imgur.com/aBHUL.jpg');
im=im(58:500,86:585,:);
imOrig=im;
%% Emphasize trees
im=double(im);
r=im(:,:,1);
g=im(:,:,2);
b=im(:,:,3);
tmp=((g-r)./(r-b));
figure
subplot(121);imagesc(tmp),axis image;colorbar
subplot(122);imagesc(tmp>0),axis image;colorbar
%% Transforms
% Distance transform
im_dist=bwdist(tmp<0);
% Blur
sigma=10;
kernel = fspecial('gaussian',4*sigma+1,sigma);
im_blured=imfilter(im_dist,kernel,'symmetric');
figure
subplot(121);imagesc(im_dist),axis image;colorbar
subplot(122);imagesc(im_blured),axis image;colorbar
% Watershed
L = watershed(max(im_blured(:))-im_blured);
[x,y]=find(L==0);
figure
subplot(121);
imagesc(imOrig),axis image
hold on, plot(y,x,'r.','MarkerSize',3)
%% Local thresholding
trees = zeros(size(im_dist));
centers = [];
for i=1:max(L(:))
    ind = find(L==i & im_blured>1);
    thr = multithresh(g(ind),1);        % one threshold per watershed region
    trees(ind) = g(ind) > thr;
    trees_individual = trees*0;
    trees_individual(ind) = g(ind) > thr;
    % trees_individual is treated by regionprops as a label matrix (values 0/1)
    s = regionprops(trees_individual,'Centroid');
    centers = [centers; cat(1,[],s.Centroid)];
end
subplot(122);
imagesc(trees),axis image
hold on, plot(y,x,'r.','MarkerSize',3)
subplot(121);
hold on, plot(centers(:,1),centers(:,2),'k^','MarkerFaceColor','r','MarkerSize',8)
You could try out a marker-based watershed; vanilla watershed transforms never work out of the box in my experience. One way to perform one is to first create a distance map of the segmented area using bwdist(). Then suppress shallow local minima by calling imhmin() (or local maxima with imhmax(), depending on your sign convention). Calling watershed() after that usually performs noticeably better.
Here's a sample script on how to do it:
bwTrees = imopen(bwTrees, strel('disk', 10)); % stabilize the borders to lessen oversegmentation
distTrees = -bwdist(~bwTrees);      % distance transform, negated so tree centres become basins
distTrees(~bwTrees) = -Inf;         % set background to -Inf
distTrees = imhmin(distTrees, 3);   % suppress shallow local minima
basins = watershed(distTrees);
ridges = basins == 0;
segmentedTrees = bwTrees & ~ridges; % segment
segmentedTrees = imopen(segmentedTrees, strel('disk', 2)); % remove 'segmentation trash' caused by oversegmentation near the borders
I fiddled around with the parameters for ~10min but got fairly poor results:
You'd need to pour work into this, mostly in pre- and post-processing via morphology. More curvature would help the segmentation if you could lower the sensitivity of the first stage. The height of the h-minima transform is also a parameter of interest. You can probably get adequate results this way.
Probably a better approach would come from the world of clustering techniques. If you have, or can find, a way to estimate the number of trees in the forest, you should be able to use traditional clustering methods to segment them out. A Gaussian mixture model or k-means with k set to the number of trees would probably work much better than a marker-based watershed, provided you get even nearly the right number of trees. Normally I'd estimate the number of trees from the number of suppressed maxima in an h-maxima transform, but your labels might be a bit too sausagey for that. It's worth a try, though.
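As a minimal sketch of the clustering idea (bwTrees and the tree count below are assumptions: bwTrees is your binary tree mask, and kmeans needs the Statistics and Machine Learning Toolbox):
[r, c] = find(bwTrees);                  % coordinates of tree pixels
nTrees = 12;                             % hypothetical estimate of the tree count
labels = kmeans([r c], nTrees, 'Replicates', 5);
clusterMap = zeros(size(bwTrees));
clusterMap(sub2ind(size(bwTrees), r, c)) = labels;
figure, imagesc(clusterMap), axis image  % one colour per putative tree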
I am working on a project to automatically land a quadrotor by visual recognition of a target. I have the code to detect the target through HOG features. The idea now is to find the triangle, which is isosceles, and measure its sides so that I can determine the orientation from them. I have tried the Hough transform, but I cannot manage to make it work.
The target is a proposed one, and it consists of an isosceles triangle inside a circle. But if you can think of a better one, please let me know.
Please ask if anything is unclear. Thank you very much.
Update 1:
@McMa's idea works well when I deal only with the target as an image. This is the code:
clc; close all;
im=imread('target.bmp');
im=rgb2gray(im);
im2=imcrop(im,[467.51 385.51 148.98 61.98]);
im2=imcomplement(im2);
im2=imrotate(im2,0);
s=regionprops(im2,'Area','Centroid','Extrema','Orientation');
[imH,imW]=size(im2);
if imH-s(end).Centroid(2) < imH/2
    state=1; % Upright
else
    state=2; % Upside down
end
imshow(im2);hold on
plot(s(end).Centroid(1), s(end).Centroid(2), 'b*')
if s(end).Orientation>0
    degrees=s(end).Orientation;
else
    degrees=s(end).Orientation+180;
end
if (0<degrees)&&(degrees<89.99) && state==2
    degrees=degrees+180;
elseif (90<degrees) && (degrees<179) && state==1
    degrees=degrees+180;
end
fprintf('The orientation is %g degrees\n',degrees)
Update 2:
Now I have another problem: I need to know somehow whether the camera is seeing the whole target or only the small circle+triangle. I need this before computing the orientation.
I have tried many options. For example, I wanted to count the number of circles: if there are 2, the camera is seeing the big target; if there is 1, just the small one. But the circles are not detected reliably, and even if I play with the sensitivity it is not going to be a robust method.
Image: https://www.dropbox.com/s/7mbpna3xfquq5n7/P0016.bmp?dl=0
Classifer: https://www.dropbox.com/s/236vm3romw56983/Cascade1Matlab.xml?dl=0
im=imread('P0016.bmp');
detector = vision.CascadeObjectDetector('Cascade1Matlab.xml');
bbox = step(detector, im); % Detect the target.
detectedImg = insertObjectAnnotation(im, 'rectangle', bbox, 'target'); % Insert bounding boxes and return marked image.
imshow(detectedImg)
BW=rgb2gray(im);
BW=imcrop(BW,bbox(1,:) +[0 0 10 10]);
[imH,imW,~]=size(im); % im is RGB: without the third output, imW would be 3x the true width
centers = imfindcircles(im,[1 round(imH)]);
figure;hold on;
imshow(im);
plot(centers(:,1),centers(:,2),'r*','LineWidth',4)
I also tried other approaches, such as the Euler number, but with no success; I can't find anything that works properly.
I think the easiest and fastest way would be to find your target and binarize the image. Afterwards, use regionprops() and query the 'Orientation' property.
If you can't use that toolbox, the function is very easy to implement by calculating the covariance matrix of your region. Let me know if you need some tips on this.
Edit:
I just so happened to have some nicely vectorized functions lying around ;) so if speed is a top priority, you can easily write your own regionprops() trimmed to the bare minimum, like this:
function M=ImMoment(Image,ii,jj)
    ImSize=size(Image);
    K=repmat((1:ImSize(1))',1,ImSize(2)).^ii; % row indices raised to the power ii
    J=repmat(1:ImSize(2),ImSize(1),1).^jj;    % column indices raised to the power jj
    M=K.*J.*Image;
    M=sum(M(:));
end
for the image moments and
function [Matrix,Centroid,Angle]=CovMat(Image)
    Centroid=[ImMoment(Image,0,1)/ImMoment(Image,0,0),...
              ImMoment(Image,1,0)/ImMoment(Image,0,0)];
    Miu20=ImMoment(Image,0,2)/ImMoment(Image,0,0)-Centroid(1)^2;
    Miu02=ImMoment(Image,2,0)/ImMoment(Image,0,0)-Centroid(2)^2;
    Miu11=ImMoment(Image,1,1)/ImMoment(Image,0,0)-Centroid(1)*Centroid(2);
    Matrix=[Miu20,Miu11   % covariance matrix, in case you need it for anything...
            Miu11,Miu02];
    Angle=1/2*atand(2*Miu11/(Miu20-Miu02)); % your orientation
end
for your orientation and covariance matrix. More about image moments here.
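A hypothetical usage, assuming bw is a double 0/1 mask of the target:
[C, centroid, angleDeg] = CovMat(bw);
fprintf('The orientation is %g degrees\n', angleDeg);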
Image moments are very mighty; have fun!
I have a very blunt solution in mind. It may work. I have not actually tried it, since there is no image to work on, so if it fails, post the error.
Assumption: you have filtered the image and obtained a binary image that contains only the triangle (possibly with uniform noise).
Now:
1. Take a 0-degree image (image1). Filter it and obtain the binary image (bw1).
2. When you are trying to land your quadrotor, take an image (image2) and convert it to binary (bw2).
3. Find the correlation between these two images, corr2(bw1, bw2), and store it in a variable.
4. Rotate bw2 by a step angle; let the angle be 5 degrees: imrotate(bw2, 5).
5. Again find the correlation between the two images.
6. Do this for all angles. The orientation is the angle (number of rotations * 5) at which the correlation is maximum.
By 'maximum' I mean that you may not find the correlation to be exactly 1, since that depends heavily on how well your filtering produces a clean binary image.
I also accept that computing the correlation for all angles is computationally expensive and slow; it would be hard to achieve in real time without fast hardware (in that case you can look into the Parallel Computing Toolbox, especially parfor).
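A minimal sketch of the search loop, assuming bw1 and bw2 are binary images of the same size (swap the for for parfor if you have the toolbox):
stepAngle = 5;                            % step angle in degrees
angles = 0:stepAngle:360-stepAngle;
scores = zeros(size(angles));
for k = 1:numel(angles)
    rotated = imrotate(bw2, angles(k), 'nearest', 'crop'); % 'crop' keeps the size fixed
    scores(k) = corr2(double(bw1), double(rotated));
end
[~, best] = max(scores);
orientation = angles(best);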
Hope this was useful to you. Post a comment if you face any error.
Finally, good luck. Nice project.
P.S. Pad with white or black pixels (depending on your binary image) when rotating the image.
The image below shows a cow whose boundary has been detected using a combination of thresholding and subtracting a background from a 3D depth image.
My goal is to perform feature extraction on the area INSIDE the boundary. I have read the other questions and have struggled to implement the steps referred to in them. I do not want to extract the area inside the boundary; I simply want to use it for feature extraction.
Could someone please offer a simpler solution? For example, is there a way to give detectSURFFeatures the boundary coordinates within which to work?
Below is my boundary code, which receives my processed thresholded image (BW1).
figure(1);
imshow(ImageCell_int{i-269});
%title('Outlines, from bwboundaries()'); axis square;
hold on;
boundaries = bwboundaries(BW1);
numberOfBoundaries = size(boundaries, 1); % size() alone would return a [n 1] vector here
for k = 1 : numberOfBoundaries
    thisBoundary = boundaries{k};
    plot(thisBoundary(:,2), thisBoundary(:,1), 'g', 'LineWidth', 2);
end
hold off;
I would be extremely grateful for any assistance on this.
Great, now I see the cow! :)
You cannot specify an irregularly shaped region of interest for the detectSURFFeatures function. However, you can detect the features in the whole image, then create a binary mask of the region of interest and use it to exclude keypoints that fall outside it.
Edit: if your boundary is represented as a polygon, you can use the roipoly function to create a binary mask from it.
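A minimal sketch of that approach, assuming I is the original image and thisBoundary is one boundary returned by bwboundaries (rows are [row col]):
gray = rgb2gray(I);
points = detectSURFFeatures(gray);
% Build a mask from the boundary polygon (x = columns, y = rows)
mask = roipoly(gray, thisBoundary(:,2), thisBoundary(:,1));
loc = round(points.Location);             % keypoint [x y] locations
inside = mask(sub2ind(size(mask), loc(:,2), loc(:,1)));
points = points(inside);                  % keep only keypoints inside the boundary
[features, validPoints] = extractFeatures(gray, points);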
Having said that, features that are outside your object's boundary can actually be useful, because they capture information about the shape of the object.
Also, what is your final goal? If you want to recognize individual cows, then local features may not be the best approach. You may do better with a global HOG descriptor (extractHOGFeatures), with a color histogram, or with both.
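For instance, a global HOG descriptor can be this simple (cowImage and the resize target are hypothetical; crop around the cow first):
cow = imresize(cowImage, [128 128]); % fixed size so every cow yields a descriptor of the same length
hog = extractHOGFeatures(cow);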
This answer was discovered on MATLAB Central and completely solves the problem above, for anyone struggling with a similar issue.
Start with a greyscale outline of the object of interest (BW1).
% Make the mask logical (black and white)
BW2 = logical(BW1);
Next, the mask is cast to the same class as the normal image (normalImage, the original image) so the two can be multiplied:
mask = cast(BW2, class(normalImage)); % match the mask's class to the image's
maskedImage = normalImage .* mask;    % zero out everything outside the mask
imshow(maskedImage);
This yields the following result:
It is now possible to perform feature extraction on the object of interest.
I want a metric of the straightness of contours in my binary image (one that is relatively fast to compute). The image looks as follows:
The contours in the red box are the ones I would preferably like removed, since they are not straight. Here is what I have tried so far (I am implementing in MATLAB):
1. Collect the row and column coordinates of each contour and then take the derivative. For straight objects (such as a rectangle), the derivative will be mostly low, with a few spikes (at the corners of the rectangle).
Problem: the coordinates are not collected in order, i.e. not in the order in which the contour would be traversed if we imagine it as a path. Therefore the derivative sometimes gives absurdly high values. Also, the contour is not absolutely straight; it is the output of an edge detection algorithm, so you can imagine there might be discontinuities (see the rectangle at the bottom: the human eye can tell it is a rectangle even though it is not absolutely straight).
2. I thought about polyfit, but the same ordering issue comes up. Since it's a rectangle, I don't know how to apply polyfit to that point set.
I would also like to remove contours that run vertically/horizontally. Basically, this is a lane detection algorithm, so lanes cannot be absolutely vertical/horizontal.
Any ideas?
You should look more into the features of regionprops. To be fair, I stole the script from this answer, but here it is:
BW = imread('lanes.png');
BW = im2bw(BW);
figure(1),
subplot(1,2,1);
imshow(BW);
cc = bwconncomp(BW);
l = labelmatrix(cc);
a_rp = regionprops(cc,'Area','MajorAxisLength','MinorAxisLength','Orientation','PixelList','Eccentricity');
idx = ([a_rp.Eccentricity] > 0.99 & [a_rp.Area] > 100 & [a_rp.Orientation] < 70 & [a_rp.Orientation] > -90);
BW2 = ismember(l,find(idx));
subplot(1,2,2);
imshow(BW2);
You can mess around with the properties; 'Orientation', 'Eccentricity', and 'Area' are probably the ones you want to tune. I also played with the ratio of the major/minor axis lengths, but eccentricity basically captures that (eccentricity is a measure of how 'circular' an ellipse is). Here's the output:
I actually saw a good video from MATLAB specifically on lane detection using regionprops. I'll see if I can find it and link it.
You can segment your image using bwlabel, then work separately on each labelled connected object, using find. This should solve your ordering problem.
As for a metric, the only thing that comes to mind at the moment is to fit an ellipse and use the a/b (major axis / minor axis) ratio (basically the eccentricity) as the parameter. For example, a straight line (even an imperfect one) will be fitted by an ellipse with a very large major axis and a very small minor axis, so you could set a ratio threshold of, say, >10. Fitting an ellipse can be done using this FEX submission, for example.
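If you would rather stay within the toolbox, regionprops gives the axis lengths directly; a minimal sketch, with a hypothetical threshold:
cc = bwconncomp(BW);
stats = regionprops(cc, 'MajorAxisLength', 'MinorAxisLength');
ratio = [stats.MajorAxisLength] ./ [stats.MinorAxisLength];
straight = ratio > 10;               % hypothetical straightness threshold
BW2 = ismember(labelmatrix(cc), find(straight));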