Here is my model for face detection. The bounding box detects and tracks the face. Active contours need to be added to get the exact shape of the face, not just its features. To do this I need to segment the frames within the video. I have applied segmentation to a still image, where you can choose where to initialize it; with video, however, the initialization needs to be dynamic and fast at the same time, since the loop runs over live frames that are not stored anywhere. I want both the bounding box and the active contours. Can anyone guide me on how to achieve this? Here is the code so far:
tic;
clear all
close all
clc
% Create the face detector object.
faceDetector = vision.CascadeObjectDetector();
% Get the input device using the Image Acquisition Toolbox;
% resolution = 320x240 to improve performance.
obj = imaq.VideoDevice('winvideo', 1, 'YUY2_320x240', 'ROI', [1 1 320 240]);
set(obj, 'ReturnedColorSpace', 'rgb'); % Set object to RGB colours
figure('menubar', 'none', 'tag', 'webcam');
% preview(obj)
while (true)
    frame = step(obj);
    % ---- IT IS TO DO WITH THIS PART OF THE CODE ----
    % m = zeros(size(frame,1), size(frame,2)); %-- create initial mask
    % m(20:222, 20:250) = 1; % initialize the mask over a fixed region
    % m = imresize(m, .5);   % downsample for faster computation
    % seg = region_seg(frame, m, 300); %-- run segmentation
    bbox = step(faceDetector, frame);
    boxInserter = insertObjectAnnotation(frame, 'rectangle', bbox, 'Face Detected');
    imshow(boxInserter, 'border', 'tight');
    f = findobj('tag', 'webcam');
    if (isempty(f))
        close(gcf)
        break
    end
    pause(0.05)
end
release(obj)
toc;
Some example of what I want to do: http://groups.inf.ed.ac.uk/calvin/FastVideoSegmentation/
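One way to make the initialization dynamic is to seed the active-contour mask from the detected bounding box on every frame. A minimal sketch of that idea, assuming the region_seg function from the commented-out code above (the padding and the iteration count are arbitrary starting points to tune):
bbox = step(faceDetector, frame);
if ~isempty(bbox)
    % Build an initial mask from the first detected face, padded slightly
    % so the contour can evolve onto the actual face outline.
    m  = zeros(size(frame,1), size(frame,2));
    x1 = max(1, bbox(1,1) - 10);
    y1 = max(1, bbox(1,2) - 10);
    x2 = min(size(frame,2), bbox(1,1) + bbox(1,3) + 10);
    y2 = min(size(frame,1), bbox(1,2) + bbox(1,4) + 10);
    m(y1:y2, x1:x2) = 1;
    % Downsample image and mask together so region_seg runs fast enough
    % for a live loop.
    seg = region_seg(imresize(frame, 0.5), imresize(m, 0.5), 120);
end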
I have a problem in image processing and I couldn't find an algorithm that performs well under this condition. It's simple to understand, but I don't know how to implement it in OpenCV or in MATLAB, so any algorithm or function in either of them would be helpful.
1. Let's suppose the image and the background of the scene look like the picture below.
2. We apply an edge detector to the image; the current image then looks like the picture below.
Now my problem is: how can we find the biggest contour or area (like the one shown below) in the edge image?
If you want the original picture, it is shown below as well.
In MATLAB you can get the edge image with the code below:
clc
clear
img = imread('1.png'); % read Image
gray = rgb2gray(img); % Convert RGB to gray-scale
edgeImage = edge(gray,'canny',0.09); % apply canny to gray-scale image
imshow(edgeImage) % display the result in a figure
In OpenCV you can use the code below:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    Mat img = imread("1.png");
    Mat gray;
    cvtColor(img,          // BGR input image
             gray,         // destination Mat for grayscale
             CV_BGR2GRAY); // type of transform (here BGR -> gray)
    Mat edgeImage;
    Canny(gray,      // input array
          edgeImage, // output array
          40,        // lower threshold
          120);      // upper threshold
    namedWindow("Edge-Image");       // create a window to display the image
    imshow("Edge-Image", edgeImage); // display edgeImage in that window
    waitKey(0);                      // keep the window open until a key is pressed
    return 0;
}
Here is a solution in MATLAB using imdilate to close the contours and regionprops to get the areas of the closed objects:
% Your code to start
img = imread('Image.png');           % read image
gray = rgb2gray(img);                % convert RGB to grayscale
edgeImage = edge(gray,'canny',0.09); % apply Canny to the grayscale image
% First dilate to close the contours
BW = imdilate(edgeImage, strel('disk',4,8));
% Then find the regions
R = regionprops(~BW, {'Area', 'PixelIdxList'});
% Find the second biggest region (the biggest is the background)
[~, I] = sort([R(:).Area], 'descend');
Mask = zeros(size(BW)); % 2-D mask (size(img) would be 3-D for an RGB image)
Mask(R(I(2)).PixelIdxList) = 1;
% Display
clf
imshow(Mask)
And the result is:
Best,
First close the contour with a morphological closing, since you can't find it as it is now: it is not a distinct contour, but part of the larger one.
After closing, just use the findContours() function, use its output to get the area of each contour via the contourArea() function, and keep the maximum.
How to remove skin parts in segmentation?
First, I made the first picture smaller, since it is somewhat intimidating; I'll give the image in the final section.
I'm using RGB and YCbCr segmentation, but the segmentation doesn't seem to work well.
clear all;
close all;
clc;
img = imread('acne.jpg');
% YCbCr segmentation
img_ycbcr = img; % working copy for this segmentation stage
ycbcr = rgb2ycbcr(img_ycbcr);
cb = ycbcr(:,:,2);
cr = ycbcr(:,:,3);
% Detect skin
%[r, c, v] = find(cb>=77 & cb<=127 & cr>=133 & cr<=173);
[r, c, v] = find(cb<=77 | cb>=127 | cr<=133 | cr>=173);
numid = size(r,1);
% Zero out the non-skin pixels found above
for i = 1:numid
    img_ycbcr(r(i), c(i), :) = 0;
    % bin(r(i), c(i)) = 1;
end
figure
imshow(img_ycbcr);
title('YCbCr segmentation'); % title after imshow, otherwise imshow clears it
%==============================================================
% RGB segmentation
img_rgb = img_ycbcr;
r = img_rgb(:,:,1);
g = img_rgb(:,:,2);
b = img_rgb(:,:,3);
[row, col, v] = find(b>0.79*g-67 & b<0.78*g+42 & b>0.836*g-14 & b<0.836*g+44); % non-skin pixels
numid = size(row,1);
for i = 1:numid
    img_rgb(row(i), col(i), :) = 0;
end
figure
imshow(img_rgb);
Here is my sample:
I agree with Adriaan. Don't do it with just colour, use additional information such as the shape and the edges.
The last two colour planes seem to have the most contrast, so let's use one of them:
Nipple = imread('N8y6Q.jpg');
Nipple = imadjust(Nipple(:,:,2)); % take the green plane and stretch its contrast
[centers, radii] = imfindcircles(Nipple, [30,60]);
imshow(Nipple);
hold on
viscircles(centers, radii);
The circular Hough transform is a robust way to find circular objects if you know the approximate radius range and are satisfied with the approximate location and size of the object.
If not, you can try other classical methods, e.g. (Canny) edge detection, region growing using the Hough centre point as a marker, fitting a snake, and so on.
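For instance, a minimal sketch of the snake option using MATLAB's activecontour, seeded with the circle found above (the iteration count is an arbitrary choice, and at least one detected circle is assumed):
% Refine the Hough circle into an exact outline with an active contour.
mask = false(size(Nipple));
[X, Y] = meshgrid(1:size(Nipple,2), 1:size(Nipple,1));
mask((X - centers(1,1)).^2 + (Y - centers(1,2)).^2 <= radii(1)^2) = true;
outline = activecontour(Nipple, mask, 200); % evolve the contour for 200 iterations
figure, imshow(outline)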
I have written computer vision code that uses MSER to detect features in MATLAB. I used the built-in detectMSERFeatures function to process a locally saved video. Now I want to port it to C using MATLAB Coder, but MATLAB Coder doesn't support this function. I've attached a screenshot of the output. Any help would be appreciated.
My code is as follows:
function count = master
% Clear workspace and initialize frame count
clear all;
F = 0;
count = 0;
% 1. Input video object handler definition
inputVideo = vision.VideoFileReader('video.mp4');
videoPlayer = vision.VideoPlayer;
% 2. Cropping height and width of the frame. Subject to convenient
%    adjustment according to the position of the camera. Mainly used to
%    crop out street lights from the top of the frame.
height = floor(inputVideo.info().VideoSize(2)*0.7);
width = inputVideo.info().VideoSize(1);
crop = vision.ImagePadder(...
    'SizeMethod','Output size', ...
    'NumOutputRowsSource','Property', ...
    'RowPaddingLocation','Top', ...
    'NumOutputRows', height, ...
    'NumOutputColumns', width);
% 3. Frame conversion from true colour to grayscale
gray = vision.ColorSpaceConverter;
gray.Conversion = 'RGB to intensity';
% 4. Process individual frames until the end of the video.
while(~isDone(inputVideo))
    % Current frame number
    F = F + 1;
    %flag=0
    % Current frame
    currentFrame = step(inputVideo);
    % Crop
    currentFrame = step(crop, currentFrame);
    % Convert to grayscale
    currentFrame = step(gray, currentFrame);
    % Threshold
    currentFrame(currentFrame < 0.7843) = 0;
    % Detect MSER regions
    regions = detectMSERFeatures(currentFrame, ...
        'RegionAreaRange', [800 3000], ...
        'ThresholdDelta', 4);
    % Check for 'big bright blob(s)', i.e. a high incoming beam, and
    % output the detected blob count and the corresponding frame
    if(regions.Count >= 2 && regions.Count <= 6)
        disp([regions.Count, F]);
        %flag=1;
        count = count + 1;
    end
    % Port frame to player
    step(videoPlayer, currentFrame);
end
% 5. Release both player and video file instances
release(inputVideo);
release(videoPlayer);
I'm using MATLAB R2013a.
The only way to fix this is to upgrade to a more recent version of MATLAB: code generation support for detectMSERFeatures was added in the R2013b release.
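For reference, once on R2013b or later, a minimal codegen-ready wrapper around the MSER step might look like the sketch below (the function name and the fixed frame size in the codegen command are illustrative assumptions, not from the question):
function n = countRegions(grayFrame) %#codegen
% Count MSER regions in one grayscale frame (parameters from the question).
regions = detectMSERFeatures(grayFrame, ...
    'RegionAreaRange', [800 3000], ...
    'ThresholdDelta', 4);
n = regions.Count;
end
It could then be compiled with, e.g., codegen countRegions -args {zeros(240,320,'single')}, matching the class and size of the frames you feed it.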
I have the following code:
a = imaqhwinfo;
%[camera_name, camera_id, format] = getCameraInfo(a);
% Capture the video frames using the videoinput function
% You have to replace the resolution & your installed adaptor name.
% vid = videoinput('winvideo', 1);
vid = VideoReader('C:\VHDL Project\VHDL Course\Logtel\Image processing\Sacramento Kings vs Golden State Warriors.mp4')
% Set the properties of the video object
set(vid, 'FramesPerTrigger', Inf);
set(vid, 'ReturnedColorspace', 'rgb')
vid.FrameGrabInterval = 5;
% Start the video acquisition here
start(vid)
% Set a loop that stops after 200 frames of acquisition
while(vid.FramesAcquired <= 200)
    % Get the snapshot of the current frame
    data = getsnapshot(vid);
    % Now to track red objects in real time we have to subtract the red
    % component from the grayscale image to extract the red parts of the image.
    diff_im = imsubtract(data(:,:,1), rgb2gray(data));
    % Use a median filter to filter out noise
    diff_im = medfilt2(diff_im, [3 3]);
    % Convert the resulting grayscale image into a binary image.
    diff_im = im2bw(diff_im, 0.18);
    % Remove all objects smaller than 300 pixels
    diff_im = bwareaopen(diff_im, 300);
    % Label all the connected components in the image.
    bw = bwlabel(diff_im, 8);
    % Here we do the image blob analysis: we get a set of properties
    % for each labelled region.
    stats = regionprops(bw, 'BoundingBox', 'Centroid');
    % Display the image
    imshow(data)
    hold on
    % Bound the red objects in rectangular boxes.
    for object = 1:length(stats)
        bb = stats(object).BoundingBox;
        bc = stats(object).Centroid;
        rectangle('Position', bb, 'EdgeColor', 'r', 'LineWidth', 2)
        plot(bc(1), bc(2), '-m+')
        a = text(bc(1)+15, bc(2), strcat('X: ', num2str(round(bc(1))), ' Y: ', num2str(round(bc(2)))));
        set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow');
    end
    hold off
end
% Both loops end here.
% Stop the video acquisition.
stop(vid);
% Flush all the image data stored in the memory buffer.
flushdata(vid);
% Clear all variables
clear all
sprintf('%s','That was all about image tracking. Guess that was pretty easy :) ')
I'm trying to read an MP4 file, run over it, and find red objects. Unfortunately the tool pops up the following error:
Error using VideoReader/set
The name 'FramesPerTrigger' is not an accessible property for an instance of class 'VideoReader'.
Error in RedObjectTracking (line 11)
set(vid, 'FramesPerTrigger', Inf);
I would be happy if someone could tell me where my mistake is.
Thanks for the help.
If you look at the documentation for VideoReader you will see that (as your error says) FramesPerTrigger is not listed as a valid property. The documentation for FramesPerTrigger shows that it is a property used only for video devices from the Image Acquisition Toolbox (similar to the videoinput call that you have commented out on the previous line). So the line that attempts to set the FramesPerTrigger value shouldn't be there when your only input is a video file read via VideoReader.
This also makes sense: an image acquisition device has triggers, and MATLAB needs to know how many frames to grab for each trigger. A video file, on the other hand (which VideoReader is intended to handle), has no triggers, and the frame information is readily available from the file itself.
Based on your code, it looks like you're trying to simply substitute VideoReader in for videoinput, but this will not work, and there are many points throughout your code that will produce errors.
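A minimal sketch of the frame loop rewritten for VideoReader, reusing the detection steps from the question (hasFrame/readFrame assume R2014b or later; older releases can use read(vid, k) instead):
vid = VideoReader('Sacramento Kings vs Golden State Warriors.mp4');
while hasFrame(vid)
    data = readFrame(vid);                             % next frame from the file
    diff_im = imsubtract(data(:,:,1), rgb2gray(data)); % isolate the red component
    diff_im = medfilt2(diff_im, [3 3]);                % median filter to remove noise
    diff_im = im2bw(diff_im, 0.18);                    % binarize
    diff_im = bwareaopen(diff_im, 300);                % drop blobs under 300 px
    stats = regionprops(diff_im, 'BoundingBox', 'Centroid');
    imshow(data)
    hold on
    for object = 1:length(stats)                       % box each red object
        rectangle('Position', stats(object).BoundingBox, ...
            'EdgeColor', 'r', 'LineWidth', 2)
    end
    hold off
    drawnow
end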
I have a real-time skin detection algorithm that gives me a bounding box around the skin region, drawn with rectangle('Position',bb,'EdgeColor','r','LineWidth',2) on the original image. I want to first detect the skin region in the original image, then use Viola-Jones to detect the face within the cropped skin region. My question: after I crop the skin region and run the face detection algorithm on it, how can I map the face's bounding box back to the original image?
function cameraon_Callback(hObject, eventdata, handles)
% hObject    handle to cameraon (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global video;
global videoFrame;
axes(handles.axes1);
video = videoinput('winvideo', 1, 'YUY2_320x240');
set(video, 'ReturnedColorSpace', 'rgb');
handles.video = video;
triggerconfig(video, 'manual');
video.FramesPerTrigger = 1;
guidata(hObject, handles);
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART');
faceDetector.MinSize = [20 20];
faceDetector.MergeThreshold = 20;
videoFrame = getsnapshot(video);
bbox = step(faceDetector, videoFrame);
if numel(bbox) == 0
    errordlg('Face not detected. Please try again.');
    set(handles.cameraon, 'String', 'Start Camera')
    stop(video);
    delete(video);
    clear;
else
    axes(handles.axes1);
    start(video);
end
while(true)
    frame = getsnapshot(video);
    % Detect faces.
    data = frame; % use the current frame directly (originally read from file)
    diff_im = imsubtract(data(:,:,1), rgb2gray(data)); % subtract grayscale to keep red
    diff_im = medfilt2(diff_im, [3 3]);                % median filter to reduce noise
    diff_im = imadjust(diff_im);                       % stretch the intensity range
    level = graythresh(diff_im);                       % Otsu threshold level
    bw = im2bw(diff_im, level);
    BW5 = imfill(bw, 'holes');
    bw6 = bwlabel(BW5, 8);
    stats = regionprops(bw6, 'basic');                 % 'basic' properties
    measurements = regionprops(bw6, 'boundingbox');
    BB1 = struct2cell(measurements);
    BB2 = cell2mat(BB1);
    a = BB2(1);
    b = BB2(2);
    c = BB2(3);
    d = BB2(4);
    [N, M] = size(stats);
    if (bw == 0) % exit if there is no skin colour at all
        break;
    else
        tmp = stats(1);
        for i = 2:N % find the biggest blob and mark it as the face
            if stats(i).Area > tmp.Area
                tmp = stats(i);
            end
        end
        bb = tmp.BoundingBox; % bounding box of the skin-colour region
        bc = tmp.Centroid;
        videoFrame = getsnapshot(video);
This is the place where I cannot map the bounding box back onto the original image:
        skinImage = imcrop(videoFrame, bb(1,:));
        bbox = step(faceDetector, skinImage);
        bbox(1,1:2) = bbox(1,1:2) + bb(1,1:2);
        videoOut = insertObjectAnnotation(videoFrame, 'rectangle', bbox, 'Face');
        cla;
        imshow(videoOut, []);
        drawnow;
        pause(0.0001);
    end
end
guidata(hObject, handles);
I want to put the rectangle I got from the face detector onto the full-size image, at the original location that the cropped image came from.
You simply add the coordinates of the top-left corner of the cropped region to the top-left corners of the detected bounding boxes.
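In terms of the variables in the question, that is just an offset of the x and y columns. A minimal sketch (the -1 accounts for 1-based pixel coordinates; the exact offset can differ by a pixel depending on how imcrop rounds the rectangle):
% bb   : crop rectangle [x y w h] passed to imcrop
% bbox : detections in cropped-image coordinates, one [x y w h] per row
bboxFull = bbox;
bboxFull(:,1:2) = bsxfun(@plus, bbox(:,1:2), bb(1,1:2) - 1); % shift by the crop origin
videoOut = insertObjectAnnotation(videoFrame, 'rectangle', bboxFull, 'Face');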
Also, in more recent versions of MATLAB, vision.CascadeObjectDetector supports passing in the region of interest where you want to detect objects, so you do not need to crop; the detector then adjusts the coordinates for you. Check the documentation for the step() method of vision.CascadeObjectDetector.
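A sketch of that variant, assuming a release in which the detector's UseROI property is available:
% Let the detector search only inside the skin bounding box; the returned
% bbox is then already in full-image coordinates.
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART', 'UseROI', true);
bbox = step(faceDetector, videoFrame, round(bb(1,:)));
videoOut = insertObjectAnnotation(videoFrame, 'rectangle', bbox, 'Face');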