recognize the face & distort it - iphone

We have to create an application that can take a photograph, recognize the face, and distort it in a certain way. Below is an example:
http://itunes.apple.com/us/app/fatbooth/id372268904?mt=8
Any ideas? Is this possible using only the OpenCV library?

Yes, you can do that using OpenCV. First you need to detect the face; there is a Haar cascade based face detector in OpenCV. After that you can detect the facial landmark points using the same interface, but you need to train the landmark detectors for points like eye corners, eye centers, mouth, chin, etc. Once you have those points you can do different kinds of warps by displacing them.
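For illustration only, here is a minimal Python/OpenCV sketch (not from the original answer) of the first and last steps: Haar cascade face detection followed by a crude radial "bulge" warp of the detected region. The image path and warp strength are placeholders, and a proper FatBooth-style effect would displace landmark points instead of warping the whole box.

```python
# Minimal sketch: Haar cascade face detection + a crude radial bulge warp.
# "photo.jpg" and the 0.3 bulge strength are placeholder assumptions.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = img[y:y + h, x:x + w]
    # Remap grid that magnifies pixels near the face centre (bulge effect).
    ys, xs = np.indices((h, w), dtype=np.float32)
    cx, cy = w / 2.0, h / 2.0
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx ** 2 + dy ** 2) / (min(w, h) / 2.0)
    factor = 1.0 - 0.3 * np.exp(-r ** 2 * 4.0)   # stronger pull near the centre
    map_x = (cx + dx * factor).astype(np.float32)
    map_y = (cy + dy * factor).astype(np.float32)
    img[y:y + h, x:x + w] = cv2.remap(roi, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("distorted.jpg", img)
```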

Update for iOS 5: if you are targeting iOS 5+, you don't need OpenCV - in iOS 5 you can do simple face detection using the Apple-provided methods (the Core Image CIDetector face detector).

Related

Facing issue of Real face detection in Vision Framework

I am facing an issue with real (live) face detection using the Vision framework.
I referred to the Apple link below.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
I used the demo code provided at the above link. I see that the camera can detect a face from a printed photo or a passport photo, which is not a real face. How can I know, using the Vision framework, that the face in front of the camera is not a real one?
You can use https://developer.apple.com/documentation/arkit/arfacegeometry
This creates a 3D mesh of a human face. A 3D mesh has different values in its topology (e.g. vertices, triangleIndices) compared to a flat 2D picture.
Here is a project link where I have used the camera API for face detection and eye-blink detection. You can check it and customize it according to your requirements.
Update: here is another project for a liveness check using ML Kit: link
Vision + RealityKit
The Apple Vision framework processes "2D requests": it works only with RGB channels. If you need to process 3D surfaces, you have to use the LiDAR scanner API, which is based on depth data. That will allow you to distinguish between a photo and a real face. I think Vision + RealityKit is the best choice for you, because you can detect a face (2D or 3D) in a first stage with Vision, and then, using LiDAR, it is quite easy to find out whether the normals of the polygonal faces all point in the same direction (a flat 2D surface) or in different directions (a 3D head).

How to generate surface/plane around a real world Object (Like bottle) using Unity & ARCore?

I built an APK using the HelloAR scene (which is provided with the ARCore package). The app only detects horizontal surfaces like a table and creates its own semi-transparent plane over them. When I moved my phone around a bottle, the app again only created a horizontal plane cutting through the bottle. I expected ARCore to create planes along the bottle as I moved my phone around it, like the polygons of a mesh.
Another scenario: I placed 2 books on the floor, each with a different thickness. But the HelloAR app creates only one semi-transparent horizontal surface over the thicker book, instead of creating two surfaces (one for each book).
What is going wrong here? How can I fix it and make the HelloAR app work more precisely? Please help.
Software: Unity v2018.2,
ARCore v1.11.0
ARCore generates an approximate point cloud as you move the device slowly, identifying feature points. These points are detected by contrast in the different shapes; if you run your application in test mode in Unity you can see how the points are placed in your otherwise empty scene.
Once the program has enough points at the "same height" (I don't know the exact precision), it generates the plane that you can see, but it won't detect planes separated by a height difference of 5 cm or even more.
If you want to know the approximate accuracy of the app, test it with Unity and write a script that captures the points used to generate the planes, then check their Y differences to see what the tolerance distance is.
Okay, so Vuforia is currently one of the leading SDKs for augmented reality, providing a wide array of detection options (Images, Ground, Point, 3D objects, ...).
So regarding your question about detecting a bottle, I would most certainly use the 3D model detection feature. You can read the official docs here.
You first need to generate an approximation of the object in 3D modeling software and then use their program to generate the detection model. Then you put this in Unity and set up the detection (no coding needed).
I have some experience with this kind of detection. I used it to detect a large 2 m x 2 m scale model of an electric vehicle. It works great: you can walk around it and it tracks it through and through. You can see a short official demo here.
Hope this helps explain it in short!

Stereo Vision for Obstacle Detection

I'm working on a stereo vision project with HALCON/.NET. My project is to scan the surface of a metal plate. Is it possible to detect small holes (1-3 mm) on it with stereo vision?
If you are somewhat familiar with epipolar geometry and MRF optimization, you can have a look at this classic paper on 'Depth Estimation from Video'.
http://www.cad.zju.edu.cn/home/bao/pub/Consistent_Depth_Maps_Recovery_from_a_Video_Sequence.pdf
For camera calibration, you can use their ACTS software from here -
http://www.zjucvg.net/acts/acts.html
It accepts a video sequence and generates camera parameters and depth maps.
I hope it helps!
Yes, it is definitely possible to detect it - but I doubt you need stereo vision for it. Stereo vision is only useful when you want to recover 3D information (depth) from a scene.
Detection and classification can be achieved through deep learning methods too, and it will probably also be more intuitive that way - but it depends on how distinct your 'hole' is compared to the background of your scene. A problem of a similar nature has been discussed in this paper.
The same problem persists for stereo vision: if the background of your scene has features similar to what you are trying to 'detect', it will create problems during stereo matching.
Even if you use a simple 'edge' detector with a monocular vision system, it will still be a problem.
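For illustration, here is a minimal Python/OpenCV sketch (not HALCON) of the stereo-matching step being discussed: computing a disparity map from a rectified stereo pair with semi-global block matching, then flagging pixels that deviate from a local median as candidate defects. The file paths, matcher parameters, and the deviation threshold are placeholder assumptions and would need careful calibration and tuning for 1-3 mm holes.

```python
# Minimal sketch: disparity map from a rectified stereo pair (SGBM), plus a
# crude depth-discontinuity check. Paths and parameters are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,             # smoothness penalties
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# SGBM returns fixed-point disparities with 4 fractional bits.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# A small hole shows up as a local depth discontinuity; flag pixels whose
# disparity deviates strongly from the local median.
median = cv2.medianBlur(disparity, 5)
defects = np.abs(disparity - median) > 3.0
print("candidate defect pixels:", int(defects.sum()))
```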

How to find contours of soccer player in dynamic background

My project is to design a system which analyzes soccer videos. In one part of this project I need to detect the contours of the players and everyone else on the field. For all players who are not occluded by the advertisement billboards, I have used the color of the field (green) to detect contours and extract the players. But I have a problem when players or the referee overlap the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). In this situation finding the contours is more difficult because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use the vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use the vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from the player motion.
Use frame differencing to detect moving objects (see the sketch below). The good news is that this will give you the contours of the people. The bad news is that it will give you many spurious contours as well.
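The answer above refers to MATLAB's Computer Vision System Toolbox; as a language-neutral illustration of the frame-differencing idea, here is a minimal Python/OpenCV sketch. The video path and thresholds are placeholder assumptions.

```python
# Minimal sketch of frame differencing to get contours of moving objects.
# "match.mp4" and the threshold values are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("match.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Absolute difference between consecutive frames highlights moving pixels.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

    # Contours of the moving regions (players, but also spurious motion
    # such as the animated LED billboards and camera movement).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)

    cv2.imshow("moving regions", frame)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```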
Optical flow would work for such problems, as it captures motion information. Foreground extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (with a static background). The magnitude and orientation of optical flow seem to be an effective cue, and there are papers on segmentation using optical flow. I hope this helps.
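As an illustration of the optical-flow and GMM cues mentioned above, here is a minimal Python/OpenCV sketch combining dense Farneback flow with a MOG2 background subtractor. The video path, history/variance settings, and the motion threshold are placeholder assumptions.

```python
# Minimal sketch: dense optical flow (magnitude/orientation) + GMM-based
# background subtraction (MOG2). "match.mp4" and thresholds are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("match.mp4")
bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: per-pixel motion vectors between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, orientation = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Pixels that both move fast and are flagged as foreground by the GMM
    # model are likely players (billboard animation may still leak through).
    fg_mask = bg.apply(frame)
    moving = (magnitude > 2.0) & (fg_mask > 0)
    print("moving foreground pixels:", int(moving.sum()))

    prev_gray = gray

cap.release()
```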

Human detection from the image or video library

I want to recognize humans in images or video. I have used OpenCvSharp for face detection; it works fine for frontal faces but has low accuracy for side faces. What I want is human (whole-body) detection, since face detection won't work when the face is turned away from the camera.
Can anyone suggest a library or reference link for human detection in either images or video? Also, is it possible to identify the gender from it? Is there any way we can track a human through the video?
First you need to investigate either Haar or HOG detection and decide which best suits your problem. You will then need to follow the same steps that you conducted for face recognition, but with a dataset that contains people instead.
Use this link, which has a long list of free-to-use (non-commercial) datasets from which you can pick one,
then use opencv_traincascade to get your cascade.xml file.
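As a starting point, here is a minimal sketch of the HOG route using OpenCV's built-in pedestrian detector (shown in Python; OpenCvSharp exposes an equivalent HOGDescriptor class). The image path is a placeholder, and a custom cascade.xml produced by opencv_traincascade would instead be loaded with CascadeClassifier.

```python
# Minimal sketch: whole-body person detection with the built-in HOG + linear
# SVM people detector. "frame.jpg" is a placeholder.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("frame.jpg")
# Returns one rectangle per detected person, regardless of face orientation.
rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("people.jpg", img)
```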