How can I track only one face when I have detected all the faces in front of the camera? - matlab

I have face recognition code which recognizes a face and shows its name once I have generated the database for it. It works well when only one face is in front of the camera, but I am using "FaceDetect = vision.CascadeObjectDetector;" for face detection, which detects all the faces in front of the camera, so I get a bounding box around every face.
Is there any way to track a face of my choice among all the detected faces?
Or can I handle each bounding-box face separately?
Please give me some guidance. Thanks

Here's an example of how you can track one face. However, if you detect multiple faces, you will need some way for your program to decide which face to track (for instance, the largest detection, as in the sketch below).
Alternatively, you can track all of the faces.
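Below is a minimal MATLAB sketch of that idea: it detects every face in the first frame, keeps only the largest bounding box, and then follows it with a KLT point tracker instead of re-detecting every frame. It assumes a webcam (MATLAB Support Package for USB Webcams) and at least one visible face; swap in VideoReader to read from a file instead.

    % Detect all faces once, keep the largest box, then track it with KLT points.
    cam      = webcam;                                  % requires the USB webcam support package
    detector = vision.CascadeObjectDetector;            % Viola-Jones face detector
    tracker  = vision.PointTracker('MaxBidirectionalError', 2);

    frame = snapshot(cam);
    boxes = detector(frame);                            % one row [x y w h] per detected face
    [~, idx] = max(boxes(:, 3) .* boxes(:, 4));         % choose the largest face by area
    faceBox  = boxes(idx, :);

    % Seed the tracker with corner features inside the chosen box only.
    points = detectMinEigenFeatures(rgb2gray(frame), 'ROI', faceBox);
    initialize(tracker, points.Location, frame);

    player = vision.VideoPlayer;
    while true
        frame = snapshot(cam);
        [pts, valid] = tracker(frame);                  % KLT update for this frame
        frame = insertMarker(frame, pts(valid, :), '+');
        player(frame);
    end

From there you can run your recognition code on just that tracked face, and re-run the detector only if too many tracked points are lost.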

Related

How to generate surface/plane around a real world Object (Like bottle) using Unity & ARCore?

I built an APK using the HelloAR scene (which is provided with the ARCore package). The app only detects horizontal surfaces, like a table, and creates its own semi-transparent plane over them. When I moved my phone around a bottle, the app again only created a horizontal plane cutting through the bottle. I expected ARCore to create planes along the bottle as I move my phone around it, like polygons in a mesh.
In another scenario, I placed 2 books on the floor, each with a different thickness. But the HelloAR app creates only one semi-transparent horizontal surface over the thicker book, instead of creating two surfaces (one for each book).
What is going wrong here? How can I fix it and make the HelloAR app work more precisely? Please help.
Software: Unity v2018.2,
ARCore v1.11.0
ARCore generates an approximate point cloud as you move the device slowly; the feature points are detected by contrast between the different shapes. If you run your application in test mode in Unity, you can see how the points are placed in your empty scene.
Once the program has enough points at the "same height" (I don't know the exact precision), it generates the plane that you can see, but it won't detect separate planes whose heights differ by 5 cm or even more.
If you want to know the approximate accuracy of the app, test it with Unity and write a script to capture the points that were used to generate the planes, then check the Y difference to see what the tolerance distance is.
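If you export those captured points (for example to a hypothetical CSV file with one x,y,z row per point), you can check the height spread offline; a minimal MATLAB sketch of that check:

    % Rough check of the height (Y) spread of the feature points that ended
    % up in one detected plane. 'plane_points.csv' is a hypothetical export
    % with one "x,y,z" row per point (Y is the up axis in Unity).
    pts = readmatrix('plane_points.csv');   % N-by-3 matrix of positions
    y   = pts(:, 2);
    tolerance = max(y) - min(y);            % height difference absorbed into one plane
    fprintf('Plane height tolerance: %.3f m\n', tolerance);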
Okay, so Vuforia is currently one of the leading SDKs for augmented reality, providing a wide range of detection options (images, ground, point, 3D objects, ...).
Regarding your question about detecting a bottle, I would most certainly use the 3D model detection feature. You can read the official docs here.
You first need to generate an approximation of the object in 3D modeling software and then use their program to generate the detection model. Then you put this in Unity and set up the detection (no coding needed).
I have some experience with this kind of detection. I used it to detect a large 2 m x 2 m scale model of an electric vehicle. It works great; you can walk around it and it tracks it through and through. You can see a short official demo here.
Hope this short explanation helps!

How to fix objects in the indoor environment with ArCore?

I need to insert some virtual objects in an indoor environment, but the position of these objects has to stay fixed. I have already tried using markers with Vuforia, but it is complicated and takes time to recognize them. I'm thinking of using Google's ARCore. Does anyone know if this is possible and, if so, how to do it?
I'm using Unity to do this. Can someone help me?
ARCore places the camera relative to the detected plane, so you will need a plane at some point so the application can locate the camera in the scene.
HelloAR shows how this works; you can test it in the Unity editor and see how the camera moves around the points and the detected plane.
One solution for your problem may be to combine ARCore's image detection with plane detection: you place the image on the floor, and once the image is detected your objects stay in place while you move around. But you still need a detected plane to move against, not only the image detection, because otherwise you will lose the objects once the camera loses sight of the image.

Face tracking with subtracting frames in video

Is it possible to track a face in a video by subtracting frames, without using face recognition?
What happens if the face changes in the next frame? Is there any way to detect this change with subtraction?
Try this example, which uses the Viola-Jones face detection algorithm and the KLT (Kanade-Lucas-Tomasi) algorithm for tracking.
Face tracking is different from face recognition. Simply put:
Face tracking means tracking an object that has the features of a face.
Face recognition means detecting and recognizing a face among a set of already known faces.
To track a face, you first need to detect it. For detection there are simple techniques such as Haar feature-based cascade classifiers and LBP cascade classifiers; you can look them up and read about them.
After the face is detected, you can try to solve the problem of face tracking. But tracking the face through different frames means repeating the face detection process for each frame, so the question becomes how to make detection fast enough for a normal frame rate like 30 FPS.
A simple solution is to decrease the search area. In other words, if the face is detected in the first frame, there is no need to search the whole area of the second frame; the better approach is to start the search from the position of the face in the previous frame, as in the sketch below.
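A minimal MATLAB sketch of that idea, assuming the Computer Vision Toolbox and a placeholder video file: after the first full-frame detection, the Viola-Jones detector is run only on a padded window around the previous face box.

    % Restrict the search area to a padded region around the previous face.
    detector = vision.CascadeObjectDetector;
    vr  = VideoReader('video.mp4');              % hypothetical input video
    pad = 40;                                    % search margin in pixels (tuning assumption)

    frame   = readFrame(vr);
    boxes   = detector(frame);                   % full-frame search once
    prevBox = boxes(1, :);                       % assume the first hit is the face to follow

    while hasFrame(vr)
        frame = readFrame(vr);
        % Padded search window, clipped to the frame borders.
        roi = [max(prevBox(1) - pad, 1), max(prevBox(2) - pad, 1), ...
               prevBox(3) + 2*pad, prevBox(4) + 2*pad];
        roi(3) = min(roi(3), size(frame, 2) - roi(1) + 1);
        roi(4) = min(roi(4), size(frame, 1) - roi(2) + 1);

        boxes = detector(imcrop(frame, roi));    % detect only inside the window
        if ~isempty(boxes)
            boxes(:, 1:2) = boxes(:, 1:2) + roi(1:2) - 1;   % back to full-frame coordinates
            prevBox = boxes(1, :);
        end
    end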
A simple face detection and tracking tutorial can be found here.

How to find contours of soccer player in dynamic background

My project is to design a system which analyzes soccer videos. In one part of this project I need to detect the contours of the players and everyone else on the playing field. For all players who are not occluded by the advertisement billboards, I have used the color of the playing field (green) to detect contours and extract the players. But I have a problem when players or the referee are occluded by the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). As you know, in this situation finding the contours is more difficult because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use the vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use the vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from player motion.
Use frame differencing to detect moving objects. The good news is that this will give you the contours of the people; the bad news is that it will give you many spurious contours as well. A sketch of this is shown below.
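A minimal MATLAB sketch of the frame-differencing option, assuming a placeholder video file and a roughly static camera segment; the threshold and blob size are rough assumptions:

    % Frame differencing: contours of regions that changed between frames.
    vr   = VideoReader('match.mp4');            % hypothetical input clip
    prev = rgb2gray(readFrame(vr));
    while hasFrame(vr)
        curr = rgb2gray(readFrame(vr));
        d    = imabsdiff(curr, prev);           % per-pixel frame difference
        mask = d > 25;                          % keep strong changes only (assumed threshold)
        mask = bwareaopen(mask, 50);            % drop small spurious blobs
        B    = bwboundaries(mask);              % cell array of contours of moving regions
        prev = curr;
    end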
Optical flow would work for such problems, as it captures motion information. Foreground extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (with a static background). The magnitude and orientation of the optical flow seem to be an effective cue, and I have read papers on segmentation using optical flow. I hope this helps.
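A minimal MATLAB sketch of that direction, assuming a placeholder video file; the magnitude threshold is an assumption, and the orientation field could be analyzed further to separate camera pan from player motion:

    % Dense optical flow (Farneback) with a threshold on the flow magnitude.
    vr = VideoReader('match.mp4');              % hypothetical input clip
    of = opticalFlowFarneback;
    while hasFrame(vr)
        gray = rgb2gray(readFrame(vr));
        flow = estimateFlow(of, gray);          % fields: Vx, Vy, Magnitude, Orientation
        moving = flow.Magnitude > 2;            % mask of fast-moving pixels (assumed threshold)
        % flow.Orientation can help separate global camera motion from players
    end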

How to create a level with curved lines with cocos2d + Box2d on the iphone?

I'd like to create a game that has levels such as this: http://img169.imageshack.us/img169/7294/picdq.png
The player "flies" through the level and mustn't collide with the walls. How can I create such levels?
I found this piece of software: http://www.sapusmedia.com/levelsvg/
It's not that cheap, so I wonder whether there is another way to create a level like the one shown in the picture above.
You can do that pretty easily by reading the color values of pixels at specific places in the level. Say, for instance, that your level background is white and the walls are black. To check whether your character has hit a wall, you would do the following:
-take your character's position
-look at the color values of the pixels of your map that overlap with the character's bounding box or sphere at that position
-if any of those contain black, you have yourself a collision :)
Now, if your level is all colourful, you would want to build a black-and-white mask texture that reflects the wall surfaces of your actual map. Then use the coloured map for drawing and the black-and-white mask for collision detection, as sketched below.
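Not cocos2d/iPhone code, but a quick MATLAB prototype of that mask lookup, assuming the collision mask is saved as a hypothetical mask.png with black walls on a white background and the character box is in pixel coordinates:

    % Mask-based collision check: any dark pixel under the character's box
    % counts as a wall hit. 'mask.png' and the box values are placeholders.
    mask    = im2gray(imread('mask.png'));
    charBox = [120 80 32 32];                    % hypothetical [x y w h] of the character
    region  = imcrop(mask, charBox);             % pixels under the character's box
    if any(region(:) < 128)                      % dark pixel -> wall
        disp('Collision with a wall');
    end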
I'd spend a good solid couple of weeks getting caught up on Objective-C, Xcode, Interface Builder, and Apple's iOS documentation. There are many good tutorials out there and sample Xcode projects to download and run in the iPhone/iPad simulator.
If you're just starting out, some of those quick startup libraries can rob you of the intimate knowledge you'll need to create the intricacies and nuances your application will need once it starts to reach outside the boundaries of the code sandbox. They're not bad as learning tools or to speed up development time, but I'd advise against using them as a crutch until you strengthen your developer legs. Crawl. Walk. Run!