Currently I am using the dlib correlation tracker method, but it only tracks the bounding box of the face, not the alignment itself, and it does not work very well. So I wonder: is there any way to track all of the facial landmarks, such as the eyes, nose and mouth?
I'm trying to use Vuforia in Unity to see a model in AR. It works properly when I'm in a room with lots of different colors, but if I go into a room with a single color (for example: white floor, white walls, no furniture), the model keeps disappearing. I'm using Extended Tracking with Prediction enabled.
Is there a way to keep the model on screen regardless of the background seen by the webcam?
I am afraid this is not possible. Since Vuforia uses markerless tracking, it requires high-contrast feature points.
Since most AR SDKs only use a monocular RGB camera (not RGB-Depth), they rely on computer vision techniques to recover the missing depth information. This means extracting visually distinct feature points and locating the device using the estimated distances to those feature points over several frames as you move.
However, they also leverage sensor fusion, i.e. they combine the data gathered from the camera with data from the device's IMU (inertial sensors). Unfortunately, the IMU data is mainly used as a complement when motion tracking fails, for example during excessive motion (when the camera image is blurred). Therefore, the sensor data on its own is not reliable, which is exactly the situation when you walk into a room with no distinctive points to extract.
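As a rough illustration (not Vuforia's internal pipeline, just a Python/OpenCV sketch with an assumed camera index), you can count how many distinctive feature points a frame actually offers; on a blank white wall the count collapses, and with it the pose estimate:

```python
import cv2

# Count distinctive feature points in a single camera frame.  A textured
# scene yields hundreds of ORB keypoints; a blank white wall yields almost
# none, which is why markerless tracking loses the pose there.
cap = cv2.VideoCapture(0)          # camera index 0 is an assumption
orb = cv2.ORB_create(nfeatures=500)

ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints = orb.detect(gray, None)
    print(f"detected {len(keypoints)} feature points")

cap.release()
```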
The only way you can work around this is by placing several image targets in that room. That will allow Vuforia to calculate the device position in 3D space. Otherwise it is not possible.
You can also refer to SLAM for more information.
Detecting vertical planes is now possible with iOS 11.3 and Apple ARKit 1.5 (a good example: ARKit Vertical Plane Detection).
But there is one condition: you need some color differences or structure on your wall for it to be detected.
Is it also possible to detect blank walls, or walls that have a single color?
There is a natural tension here that imposes some inherent design constraints.
For ARKit to even “see” a surface for purposes of world tracking — before even detecting it as a plane — the surface needs to have some texture. Variation in color, relief, points of high contrast, something that causes it to have some visual features.
That’s okay for a lot of horizontal plane detection use cases, since people like to buy tables made of wood, install floors made of tile, choose countertops made of granite, etc. But a lot of walls in home and office environments are lightly textured or featureless. You probably can’t get your customers to change their walls. (If you do, though, I can refer the guy who did great textured paint on my house...)
So instead you need to think about how this fits into your AR experience at a basic design level...
For horizontal planes, you could make experiences where a small stretch of floor/table near the viewer becomes the play field for a game or whatever, but you can’t just flip that on its side for a vertical plane experience.
Vertical planes detect better at larger distances — you can find a wall when you see its edges, or the furniture backed against it, etc.
Use estimated plane hit tests to place content on a wall, and refine your placement when plane detection kicks in later.
Don’t use vertical planes the same way you would horizontal planes. They can be boundaries or background scenery instead of the focus of an experience.
Is it possible to track a face in a video by subtracting frames, without using face recognition?
What happens if the face changes in the next frame? Is there any way to detect this change by subtraction?
Try this example, which uses the Viola-Jones face detection algorithm and the KLT (Kanade-Lucas-Tomasi) algorithm for tracking.
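As a rough Python/OpenCV analogue of that detect-then-track idea (my own sketch, not the linked example; the video path is a placeholder), you can detect the face once with a Viola-Jones cascade and then track corner points inside it with pyramidal Lucas-Kanade:

```python
import cv2

# Detect the face once (Viola-Jones cascade), then follow corner points
# inside the face box with pyramidal Lucas-Kanade (KLT) in later frames.
cap = cv2.VideoCapture("video.mp4")            # placeholder file name
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
x, y, w, h = face_cascade.detectMultiScale(prev_gray, 1.1, 5)[0]  # assumes a face is visible

# Pick good corners inside the detected face region only.
mask = prev_gray * 0
mask[y:y + h, x:x + w] = 255
pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 5, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = pts[status.ravel() == 1].reshape(-1, 1, 2)
    for px, py in pts.reshape(-1, 2):
        cv2.circle(frame, (int(px), int(py)), 2, (0, 255, 0), -1)
    cv2.imshow("KLT face tracking", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
    prev_gray = gray
```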
Face tracking is different from face recognition. Simply put:
Face tracking means tracking an object that has the features of a face.
Face recognition means detecting and recognizing a face among a set of already known faces.
For tracking a face you firstly need to detect it. For detecting a face there are simple techniques such as Haar feature-based cascade classifiers and LBP cascade classifiers. You can google them and read about them.
After the face is detected, you can try to solve the problem of face tracking. Tracking the face through different frames means repeating the face detection process for each frame, so the question becomes how to make detection fast enough for a normal frame rate like 30 FPS.
A simple solution is to decrease the search area. In other words, if the face is detected in the first frame, there is no need to search the whole area of the second frame. The optimal approach is to start the search from the position of the face in the previous frame.
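A minimal Python/OpenCV sketch of this restricted-search idea (the cascade, margin size and camera index are my own assumptions):

```python
import cv2

# Re-detect the face in every frame, but only inside a window around the
# previous detection, which keeps the per-frame cost low enough for ~30 FPS.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
margin = 40          # search-window padding in pixels (assumed value)
prev_box = None      # (x, y, w, h) of the last detection, in frame coordinates

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if prev_box is None:
        roi, ox, oy = gray, 0, 0                       # full-frame search
    else:
        x, y, w, h = prev_box
        ox, oy = max(x - margin, 0), max(y - margin, 0)
        roi = gray[oy:y + h + margin, ox:x + w + margin]

    faces = cascade.detectMultiScale(roi, 1.1, 5)
    if len(faces):
        fx, fy, fw, fh = faces[0]
        prev_box = (fx + ox, fy + oy, fw, fh)          # back to frame coordinates
        cv2.rectangle(frame, (prev_box[0], prev_box[1]),
                      (prev_box[0] + fw, prev_box[1] + fh), (0, 255, 0), 2)
    else:
        prev_box = None                                # lost: search the whole frame next time

    cv2.imshow("restricted-search face tracking", frame)
    if cv2.waitKey(1) == 27:
        break
```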
A simple face detection and tracking tutorial can be found here.
My project is to design a system that analyzes soccer videos. In one part of this project I need to detect the contours of the players and everyone else on the playing field. For all players who are not occluded by the advertisement billboards, I have used the color of the playing field (green) to detect contours and extract the players. But I have a problem when players or the referee overlap the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). In this situation finding the contours is more difficult, because there is no static background color or texture. You can see two examples of this condition in the following images.
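For reference, a rough Python/OpenCV sketch of this green-based extraction (the HSV bounds, file names and area threshold are placeholder values that depend on the footage):

```python
import cv2
import numpy as np

# Segment the green playing field in HSV, treat everything that is not
# green as a candidate player region, and extract its contours.
frame = cv2.imread("soccer_frame.jpg")         # placeholder file name
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

field_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # assumed green range
players_mask = cv2.bitwise_not(field_mask)

# Remove small noise before extracting contours.
kernel = np.ones((5, 5), np.uint8)
players_mask = cv2.morphologyEx(players_mask, cv2.MORPH_OPEN, kernel)

contours, _ = cv2.findContours(players_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 200]
cv2.drawContours(frame, contours, -1, (0, 0, 255), 2)
cv2.imwrite("players_contours.jpg", frame)
```

This works away from the billboards, but fails exactly in the occlusion regions described above.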
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use the vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use the vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from player motion.
Use frame differencing to detect moving objects. The good news is that this will give you the contours of the people. The bad news is that it will give you many spurious contours as well.
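Those suggestions refer to MATLAB objects; as a rough sketch of the frame-differencing option in Python/OpenCV (the threshold, blur size and file name are assumed values):

```python
import cv2

# Frame differencing: pixels that change between consecutive frames are
# treated as moving objects, and their outlines are extracted as contours.
cap = cv2.VideoCapture("match.mp4")            # placeholder file name
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)

    diff = cv2.absdiff(prev, gray)
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(moving, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Players show up as contours, but so do the animated LED ads and any
    # camera motion, so the result still needs filtering.
    cv2.drawContours(frame, contours, -1, (0, 0, 255), 2)
    cv2.imshow("frame differencing", frame)
    if cv2.waitKey(1) == 27:
        break
    prev = gray

cap.release()
```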
Optical flow would work for such problems, as it captures motion information. Foreground extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (the background was static). The magnitude and orientation of the optical flow seem to be an effective cue, and I have read papers on segmentation using optical flow. I hope this helps you.
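A minimal Python/OpenCV sketch of getting that magnitude and orientation with dense optical flow (the Farneback method and the motion threshold are my own choices):

```python
import cv2

# Dense optical flow between two consecutive frames, converted into
# per-pixel magnitude and orientation for motion-based segmentation.
cap = cv2.VideoCapture("match.mp4")            # placeholder file name
ok, frame1 = cap.read()
ok, frame2 = cap.read()
g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude, orientation = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Pixels moving faster than the (assumed) threshold are kept as foreground.
_, moving = cv2.threshold(magnitude, 2.0, 255, cv2.THRESH_BINARY)
cv2.imwrite("motion_mask.png", moving.astype("uint8"))
cap.release()
```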
I have face recognition code which recognizes a face and shows its name once I have generated the database for it. It works well when only one face is in front of the camera, but I am using "FaceDetect = vision.CascadeObjectDetector;" for face detection, which detects all the faces in front of the camera, so I get a bounding box around every face.
Is there any way to track a face of my choice among all the detected faces?
Or can I process each bounding box (face) separately?
Please give me some guidance. Thanks.
Here's an example of how you can track one face. However, if you detect multiple faces, you would need some way for your program to decide which face to track.
Alternatively, you can track all faces.
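As a sketch of that selection step (a Python/OpenCV analogue, since the question uses MATLAB's vision.CascadeObjectDetector; the file names and the largest-box rule are my own assumptions), you can either process every bounding box separately or keep just one detection to follow:

```python
import cv2

# Detect every face, then either handle each bounding box separately or
# keep only one of them (here: the largest box, an arbitrary choice).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("group_photo.jpg")          # placeholder file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, 1.1, 5)

# Handle every face independently, e.g. crop it and pass it to recognition.
for i, (x, y, w, h) in enumerate(faces):
    face_crop = frame[y:y + h, x:x + w]
    cv2.imwrite(f"face_{i}.png", face_crop)

# Or pick a single face to follow, here simply the largest detection.
if len(faces):
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("chosen_face.png", frame)
```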