Detecting a real face (vs. a printed photo) with the Vision framework - Swift

I am trying to detect whether the face in front of the camera is a real face, using the Vision framework.
I have followed the Apple sample below:
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
Using the demo code from that link, I can see that the camera detects a face even from a printed photo or a passport photo, which is not a real face. How can I tell, using the Vision framework, that the face in front of the camera is not a real one?

You can use ARFaceGeometry: https://developer.apple.com/documentation/arkit/arfacegeometry
It creates a 3D mesh of a human face. The mesh of a real face will have different values (e.g. vertices and triangleIndices) in its topology than what you get from a flat 2D picture.
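For what it's worth, here is a minimal sketch of inspecting that mesh with ARKit face tracking. It assumes a device with a TrueDepth camera; the depth-range check and its interpretation are my own illustration, not an Apple-documented liveness API:

import ARKit

// Runs a face-tracking session and inspects the 3D face mesh ARKit reconstructs.
// A real head produces a mesh with noticeable depth variation; a flat printed
// photo generally does not (TrueDepth camera required).
final class FaceMeshChecker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let geometry: ARFaceGeometry = faceAnchor.geometry
            // Spread of the mesh vertices along the Z axis (in metres).
            let zValues = geometry.vertices.map { $0.z }
            let depthRange = (zValues.max() ?? 0) - (zValues.min() ?? 0)
            print("vertices:", geometry.vertices.count,
                  "triangles:", geometry.triangleCount,
                  "depth range:", depthRange)
        }
    }
}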

Here is a project link.
Here I have used the camera API for face detection and eye-blink detection. You can check it and customize it according to your requirements.
Update: here is another project for a liveness check using ML Kit: link
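As a rough illustration of the blink idea (not the code from the linked projects), an ARKit-based check could look like this; the 0.6 threshold is just an assumed value:

import ARKit

// Very rough liveness heuristic: a printed photo cannot blink.
// Reads the eye-blink blend shapes from an ARFaceAnchor delivered by a
// TrueDepth face-tracking session; call this from your ARSessionDelegate.
func isBlinking(_ faceAnchor: ARFaceAnchor, threshold: Float = 0.6) -> Bool {
    let left = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    let right = faceAnchor.blendShapes[.eyeBlinkRight]?.floatValue ?? 0
    // Consider the face "live" once both eyes close past the threshold
    // at least once while the session is running.
    return left > threshold && right > threshold
}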

Vision + RealityKit
The Apple Vision framework processes "2D requests": it works only with the RGB channels of an image. If you need to process 3D surfaces, you have to use the LiDAR scanner API, which is based on depth. That will allow you to distinguish between a photo and a real face. I think Vision + RealityKit is the best choice for you, because you can first detect a face (2D or 3D) with Vision, and then, using LiDAR, it is quite easy to find out whether the normals of the polygonal faces all point in the same direction (a flat 2D surface) or in different directions (a 3D head).
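A rough sketch of that normal check, assuming a LiDAR device running scene reconstruction (ARWorldTrackingConfiguration with sceneReconstruction = .mesh); the spread threshold is an illustrative value, not a tuned one:

import ARKit
import simd

// Measures how much the per-vertex normals of an ARMeshAnchor disagree with
// their mean direction. A near-zero spread suggests a flat surface (a photo);
// a larger spread suggests a genuinely 3D shape such as a head.
func looksFlat(_ meshAnchor: ARMeshAnchor, spreadThreshold: Float = 0.1) -> Bool {
    let normals = meshAnchor.geometry.normals             // ARGeometrySource
    let base = normals.buffer.contents() + normals.offset

    var directions: [SIMD3<Float>] = []
    for i in 0..<normals.count {
        let p = base + i * normals.stride                 // assumes three packed floats per normal
        let x = p.assumingMemoryBound(to: Float.self).pointee
        let y = (p + 4).assumingMemoryBound(to: Float.self).pointee
        let z = (p + 8).assumingMemoryBound(to: Float.self).pointee
        directions.append(simd_normalize(SIMD3<Float>(x, y, z)))
    }
    guard !directions.isEmpty else { return true }

    // Average deviation (1 - cosine similarity) from the mean normal direction.
    let mean = simd_normalize(directions.reduce(SIMD3<Float>.zero, +))
    let spread = directions.map { 1 - simd_dot($0, mean) }.reduce(0, +) / Float(directions.count)
    return spread < spreadThreshold
}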


ARKit – 3d transform of face

On this page I can read:
Live camera video texture-mapped onto the ARKit face mesh, with which you can create effects that appear to distort the user’s real face in 3D.
I haven't found many samples or much extra documentation about this.
Is it possible to distort the face itself in 3D with ARKit, like changing the chin shape and such?
If yes, how could this be achieved, briefly and on a conceptual level?
You shouldn't distort the canonical face mesh (ARSCNFaceGeometry) itself. However, if you intend to dynamically distort a facial mask, you have two options:
create an animated face model in a 3D authoring app
use SceneKit's SCNMorpher with its morph targets (see the sketch below)
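A minimal sketch of the SCNMorpher option, assuming you already have the base face node and a pre-authored, same-topology distorted copy of the mesh to use as a morph target (the asset itself is up to you):

import SceneKit

// Attaches a pre-authored morph target (e.g. a "bigger chin" version of the
// same face mesh) to a face node and lets you blend towards it at runtime.
func attachChinMorph(to faceNode: SCNNode, target distortedChinGeometry: SCNGeometry) {
    let morpher = SCNMorpher()
    morpher.targets = [distortedChinGeometry]   // must share topology with the base geometry
    morpher.calculationMode = .additive
    faceNode.morpher = morpher
}

// Later, drive the distortion strength (0 = original face, 1 = fully morphed chin).
// The weights can also be animated, e.g. with a CAAnimation on the morpher's
// "weights[0]" key path.
func setChinDistortion(_ amount: CGFloat, on faceNode: SCNNode) {
    faceNode.morpher?.setWeight(amount, forTargetAt: 0)
}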

Stereo Vision for Obstacle Detection

I'm working on a stereo vision project with HALCON/.NET. My project is to scan the surface of a metal plate. Is it possible to detect small holes (1-3 mm) on it with stereo vision?
If you are somewhat familiar with epipolar geometry and MRF optimization, you can have a look at this classic paper on 'Depth Estimation from Video'.
http://www.cad.zju.edu.cn/home/bao/pub/Consistent_Depth_Maps_Recovery_from_a_Video_Sequence.pdf
For camera calibration, you can use their ACTS software from here -
http://www.zjucvg.net/acts/acts.html
It accepts a video sequence and generates camera parameters and depth maps.
I hope it helps!
Yes, it is definitely possible to detect it - but I doubt you need stereo vision for it. Stereo vision is only useful when you want to recover 3D information (depth) from a scene.
Detection and classification can also be achieved with deep learning methods, and it will probably be more intuitive that way, but it depends on how distinctive your 'hole' is compared to the background of your scene. A problem of a similar nature has been discussed in this paper.
The same issue exists for stereo vision: if the background of your scene has features similar to what you are trying to detect, it will cause problems during stereo matching.
Even a simple edge detector on a monocular vision system will run into the same problem.
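For a sense of what stereo actually buys you here: with a calibrated, rectified pair, depth is recovered from disparity as Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras and d the disparity in pixels. With illustrative numbers, f = 1000 px, B = 0.1 m and d = 20 px give Z = 1000 * 0.1 / 20 = 5 m. Whether a 1-3 mm hole is even resolvable then comes down to whether it produces a measurable disparity (and enough pixels) at your working distance, which is worth checking before committing to a stereo setup.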

Live Texturing using Vuforia

I wanted to know whether it is possible to do live texturing of a 3D model using Vuforia.
If not live, can we at least pick the color from a drawing book and apply the same color to our augmented 3D model,
like in the following video?
https://www.youtube.com/watch?v=tmfXgvT9h3s
The answer is: yes, it is very possible with Vuforia. But that's not how you ask questions on Stack Overflow...

How to find contours of soccer player in dynamic background

My project is to design a system which analyzes soccer videos. In one part of this project I need to detect the contours of the players and everybody else on the play field. For all players who are not occluded by the advertisement billboards, I have used the color of the play field (green) to detect the contours and extract the players. But I have a problem with the situation where players or the referee overlap the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). As you know, in this situation finding the contours is more difficult because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use the vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use the vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate the camera motion from the player motion.
Use frame differencing to detect moving objects. The good news is that it will give you the contours of the people; the bad news is that it will give you many spurious contours as well.
Optical flow would work for such problems as it captures motion information. Foreground-extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (where the background was static). The magnitude and orientation of the optical flow seem to be an effective cue. I have read papers on segmentation using optical flow. I hope this helps.
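Since the rest of this thread is iOS/Vision-centric, here is a minimal sketch of the same idea, dense optical flow between two consecutive frames, using Vision's VNGenerateOpticalFlowRequest (iOS 14+). Which of the two frames should play the "targeted" role is worth double-checking against the docs; analysing the resulting flow field is left out:

import Vision
import CoreVideo

// Computes dense optical flow between two consecutive video frames.
// The resulting pixel buffer holds two 32-bit float channels per pixel
// (the x and y displacement), which you can threshold by magnitude to
// separate moving players from the (mostly uniform) camera motion.
func opticalFlow(between previousFrame: CVPixelBuffer,
                 and currentFrame: CVPixelBuffer) throws -> CVPixelBuffer? {
    let request = VNGenerateOpticalFlowRequest(targetedCVPixelBuffer: currentFrame,
                                               options: [:])
    request.computationAccuracy = .medium

    let handler = VNImageRequestHandler(cvPixelBuffer: previousFrame, options: [:])
    try handler.perform([request])

    return (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
}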

recognize the face & distort it

We have to create an application which can take a photograph, recognize the face, and distort it in a certain way. Below is an example:
http://itunes.apple.com/us/app/fatbooth/id372268904?mt=8
Any ideas? Is it possible using only the OpenCV library?
Yes, you can do that using OpenCV. First you need to detect the face; there is a Haar cascade based face detector in OpenCV. After that you can detect the facial landmark points using the same interface, but you need to train the landmark detectors for points like the eye corners, eye centers, mouth, chin, etc. Once you have those points you can apply different kinds of warps by displacing them.
Update for iOS 5: if you are targeting iOS 5+, you don't need OpenCV; in iOS 5 you can do simple face detection using the Apple-provided methods.
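For reference, the Apple-provided API being referred to is Core Image's CIDetector (available since iOS 5); in today's Swift the detection step looks roughly like this:

import CoreImage

// Simple face detection with Core Image's CIDetector.
// Returns the bounding boxes of the detected faces in the image's coordinate space.
func detectFaces(in image: CIImage) -> [CGRect] {
    let options: [String: Any] = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    guard let detector = CIDetector(ofType: CIDetectorTypeFace,
                                    context: nil,
                                    options: options) else { return [] }

    return detector.features(in: image).compactMap { feature -> CGRect? in
        guard let face = feature as? CIFaceFeature else { return nil }
        // CIFaceFeature also exposes eye and mouth positions, which you would
        // need as anchor points for a FatBooth-style warp.
        return face.bounds
    }
}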