Is it possible to track a face in video by subtracting frames, without using face recognition?
What happens if the face changes in the next frame? Is there any way to detect this change by subtraction?
Try this example, which uses the Viola-Jones face detection algorithm and the KLT (Kanade-Lucas-Tomasi) algorithm for tracking.
Face tracking is different from face recognition. Simply put:
Face tracking means tracking an object that has the features of a face.
Face recognition means detecting and recognizing a face among a set of already known faces.
To track a face, you first need to detect it. For detecting a face there are simple techniques such as Haar feature-based cascade classifiers and the LBP cascade classifier. You can google them and read about them.
After the face is detected, you can try to solve the problem of face tracking. But tracking the face through different frames means repeating the face detection process for each frame. The question then becomes how to make detection fast enough for a normal frame rate like 30 FPS.
A simple solution is to decrease the search area. In other words, if the face was detected in the first frame, there is no need to search the whole area of the second frame; the optimal approach is to start the search from the position of the face in the previous frame.
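The idea of shrinking the search area can be sketched as follows. This is a minimal Python illustration, not code from any particular library: the function name, the (x, y, w, h) box convention, and the margin parameter are my own choices.

```python
def search_region(prev_box, frame_w, frame_h, margin=0.5):
    """Expand the previous face bounding box by a margin and clamp it
    to the frame, giving a reduced search area for the detector.
    prev_box is (x, y, w, h) with the top-left corner at (x, y)."""
    x, y, w, h = prev_box
    dx, dy = int(w * margin), int(h * margin)
    x0 = max(0, x - dx)
    y0 = max(0, y - dy)
    x1 = min(frame_w, x + w + dx)
    y1 = min(frame_h, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)
```

On the next frame you run the detector only inside this region, and fall back to a full-frame search if nothing is found there.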
A simple face detection and tracking tutorial can be found here.
Related
I am trying to use a KLT tracker for human tracking in CCTV footage. The people are very close to the camera. I noticed that people sometimes change the orientation of their heads, and the frame rate is also slightly low. I have read in Section 3.4 of the Rodriguez et al. paper that:
"This simple procedure (KLT tracking procedure) is extremely robust and can establish matches between head detections where the head HAS NOT BEEN continuously detected due to pose variation or partial occlusions due to other members of the crowd".
The paper can be found at this link: Rodriguez et al.
1) I understand that the KLT tracker is robust to pose variations and occlusions. Am I right?
Until now I have been trying to track a single person in the footage using the MATLAB KLT example:
MATLAB KLT
However, the points were lost after just 3 frames.
2) Can someone explain why this is happening, or suggest a better solution? Would a particle/Kalman filter work better?
I do not recommend using a KLT tracker for close CCTV cameras due to the following reasons:
1. CCTV frame rate is typically low, so people change their appearance significantly between frames
2. Since the camera is close to the people, they also change their appearance over time due to perspective effects (e.g. the face is visible while a person is far from the camera, but as he/she gets closer, only the top of the head is seen).
3. Due to the closeness, people also change significantly in scale and aspect ratio, which is a challenge for some head detectors.
KLT only works well when the neighborhood of the pixel, including both foreground and background, remains similar. The above properties make this unlikely for most pixels. I can only recommend KLT as an additional motion-based hint for tracking, as a vector field of part motions.
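To make the pixel-neighborhood point concrete, here is a small Python sketch of the sum-of-squared-differences residual that KLT-style tracking minimizes over candidate displacements. Frames are plain nested lists of grayscale values; all names here are illustrative.

```python
def patch_ssd(frame_a, frame_b, top_left_a, top_left_b, size):
    """Sum of squared differences between two same-sized patches, the
    residual a KLT-style tracker minimizes when estimating a point's
    motion. If the neighborhood changes appearance between frames,
    this residual stays high for every candidate displacement and the
    point is declared lost."""
    (ya, xa), (yb, xb) = top_left_a, top_left_b
    ssd = 0
    for dy in range(size):
        for dx in range(size):
            diff = frame_a[ya + dy][xa + dx] - frame_b[yb + dy][xb + dx]
            ssd += diff * diff
    return ssd
```

When appearance changes drastically between frames (low frame rate, perspective change), no displacement gives a low residual, which is exactly why the points disappear.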
Most single person trackers do not adapt well to scale change. I suggest you start with some state of the art tracker, like Struck (C++ code by Sam Hare available here), and modify the search routine to work with scale change.
KLT by itself only works for short-term tracking. The problem is that you lose points because of tracking errors, 3D rotation, occlusion, or objects leaving the field of view. For long-term tracking you need some way of replenishing the points. In the multiple face tracking example the new points are acquired by periodically re-detecting the faces.
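The replenishment idea can be sketched as a simple loop. Here detect_points and track_points are hypothetical callbacks standing in for a real detector and a real KLT tracker, and the thresholds are arbitrary:

```python
def track_with_redetection(frames, detect_points, track_points,
                           min_points=10, redetect_every=15):
    """Long-term tracking loop: KLT-style tracking loses points over
    time, so re-run the detector periodically, or whenever too few
    points survive. detect_points(frame) returns a fresh point set;
    track_points(prev, frame) returns the surviving subset."""
    points = detect_points(frames[0])
    history = [points]
    for i, frame in enumerate(frames[1:], start=1):
        points = track_points(points, frame)
        if len(points) < min_points or i % redetect_every == 0:
            points = detect_points(frame)  # replenish lost points
        history.append(points)
    return history
```

This mirrors the structure of the multiple face tracking example: track cheaply between detections, and re-detect to recover from drift and occlusion.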
Your particular case sounds a little strange. You should not be losing all the points after just 3 frames. If this happens, then either the object is moving too fast or your frame rate is too low.
My project is to design a system that analyzes soccer videos. In one part of this project I need to detect the contours of the players and everyone else on the playing field. For players who are not occluded by the advertisement billboards, I have used the color of the playing field (green) to detect contours and extract the players. But I have a problem with situations where players or the referee are occluded by the advertisement billboards. Consider the situation where the advertisements on the billboards are dynamic (LED billboards). As you know, in this situation finding the contours is more difficult because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from the player motion.
Use frame differencing to detect moving objects. The good news is that it will give you the contours of the people. The bad news is that it will give you many spurious contours as well.
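Frame differencing itself is simple. Here is a pure-Python sketch of the idea (in practice you would use something like cv2.absdiff on whole image arrays, but the thresholding logic is the same; the threshold value is arbitrary):

```python
def frame_difference(prev, curr, threshold=25):
    """Binary motion mask by absolute frame differencing: a pixel is
    marked as moving (1) when the grayscale intensity change between
    two consecutive frames exceeds the threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

The spurious contours mentioned above come from noise, shadows, and the dynamic LED content, so the mask usually needs morphological cleanup afterwards.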
Optical flow would work for such problems, as it captures motion information. Foreground extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (the background was static). The magnitude and orientation of the optical flow seem to be an effective method. I have read papers on segmentation using optical flow. I hope this helps you.
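Converting a flow vector's (u, v) components into the magnitude and orientation features mentioned above is straightforward; a small Python sketch (the function name is mine):

```python
import math

def flow_magnitude_orientation(u, v):
    """Convert the (u, v) components of an optical flow vector into
    magnitude and orientation (radians). Histograms of these two
    quantities are common features for separating player motion from
    camera/background motion."""
    magnitude = math.hypot(u, v)
    orientation = math.atan2(v, u)
    return magnitude, orientation
```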
I have face recognition code which recognizes a face and shows its name once I have generated the database for it. It works well when only one face is in front of the camera. However, I am using "FaceDetect = vision.CascadeObjectDetector;" for face detection, which detects all the faces in front of the camera, and I get bounding boxes around all of them.
Is there any way to track a face of my choice among all the detected faces?
Or can I process each bounding box face separately?
Please give me some guidance. Thanks.
Here's an example of how you can track one face. However, if you detect multiple faces, you need some way for your program to decide which face to track.
Alternatively, you can track all faces.
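One common way to let the program decide is to pick the largest detected bounding box, typically the face closest to the camera. A minimal Python sketch, assuming (x, y, w, h) boxes as returned by cascade detectors:

```python
def pick_face_to_track(boxes):
    """Given bounding boxes (x, y, w, h) from a cascade detector,
    pick one face to hand to the tracker. Largest area is a common
    heuristic (usually the face closest to the camera); alternatives
    are the most central box, or matching against a known face."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])
```

The chosen box is then used to initialize the tracker, while the remaining detections are ignored or tracked separately.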
I want to recognize humans in images or video. I have used OpenCVSharp for face detection; it works fine for frontal faces but has low accuracy for profile faces. What I want is human detection (face detection won't work because the face might be facing away from the camera).
Can anyone suggest a library or reference link for human detection in either images or video? Also, is it possible to identify gender from it? Is there any way to track humans in video?
First you need to investigate either Haar or HOG detection and decide which best suits your problem. You will then need to follow the same steps that you conducted for face recognition, but with a dataset that contains people instead.
Use this link, which has a long list of free-to-use (non-commercial) datasets, to find one to use.
Then use opencv_traincascade to generate your cascade.xml file.
Is it possible to detect an eye in video in MATLAB? I am trying to detect the eye and make some predictions based on its movement, but I am not sure how to do that. Can someone help me with how to get started? Thanks in advance.
You could take a look at this set of functions on The MathWorks File Exchange: Fast Eyetracking by Peter Aldrian.
Quoting from the description of the post (to give a little more detail here):
This project handles the question of how to extract fixed feature points from a given face in a real-time environment. It is based on the idea that a face is given by the Viola-Jones algorithm for face detection and processed to track pupil movement in relation to the face, without using infrared light.
My MATLAB is incredibly rusty, but this site appears to have some scripts to help track eye movement.
Eye detection is possible in MATLAB, and you can accomplish it. But there is a difference between recognition and detection that you need to consider carefully: detection is checking whether an object is present in the image, whereas recognition is determining what the different objects in the image are.
I hope this increases someone's knowledge.