I want to implement surface detection like in this video, but not in real time: I want to detect surfaces from a single image, like this. An example implementation can be checked here. Do you have a tutorial, or can you give me some advice? Thanks in advance.
ARCore doesn't detect surfaces based on single images. It uses a stream of images, plus information from other sensors, like the gyroscope.
Some information here.
For surface detection in a single image, you can look at other libraries, such as OpenCV.
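As a rough starting point, here is a minimal OpenCV sketch that assumes the surface shows up as a large, roughly quadrilateral region with visible edges (e.g. a table top or floor tile); the filename and thresholds are assumptions to tune, not a general-purpose solution:

```python
# Hedged sketch: find large quadrilateral regions in a single image,
# a crude stand-in for plane detection when there is no motion data.
import cv2
import numpy as np

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
min_area = 0.05 * img.shape[0] * img.shape[1]   # ignore small regions

for c in contours:
    if cv2.contourArea(c) < min_area:
        continue
    # A 4-vertex approximation of a large contour is a plausible surface.
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.polylines(img, [approx], True, (0, 255, 0), 3)

cv2.imwrite("surfaces.jpg", img)
```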
I am facing the problem of real-face (liveness) detection using the Vision framework.
I have referred to the Apple link below.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
I used the demo code provided at the link above. I see that the camera detects a face even from a printed photo or a passport photo, which is not a real face. How can I tell, using the Vision framework, whether the face in front of the camera is real?
You can use https://developer.apple.com/documentation/arkit/arfacegeometry
This creates a 3D mesh of a human face. A real face produces a mesh whose topology (e.g. vertices, triangleIndices) has different values compared to a flat 2D picture.
Here is a project link where I used the camera API for face detection and eye-blink detection. You can check it and customize it according to your requirements.
Update: here is another project for a liveness check using ML Kit: link
Vision + RealityKit
The Apple Vision framework processes "2D requests": it works only with RGB channels. If you need to process 3D surfaces, you have to use the LiDAR scanner API, which is based on depth. That is what lets you distinguish between a photo and a real face. I think Vision + RealityKit is the best choice for you, because you can first detect a face (2D or 3D) with Vision, and then, using the LiDAR data, it's quite easy to find out whether the normals of the polygonal faces all point in the same direction (a flat 2D surface, i.e. a photo) or in different directions (a 3D head).
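A minimal sketch of that normal-spread test, shown with NumPy on a toy mesh rather than the actual ARKit/RealityKit API (the mesh data and any threshold are assumptions; with a real LiDAR scan you would feed in the reconstructed face mesh's vertices and triangle indices):

```python
# Hedged sketch (not ARKit API): measure how much the per-face normals
# of a mesh disagree. Near-zero spread -> flat surface (photo);
# large spread -> curved surface (real head).
import numpy as np

def normal_spread(vertices, triangles):
    """Mean angle (radians) between each face normal and the average normal."""
    v = vertices[triangles]                       # (n_tris, 3, 3)
    n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    mean = n.mean(axis=0)
    mean /= np.linalg.norm(mean)
    cos = np.clip(n @ mean, -1.0, 1.0)
    return np.arccos(cos).mean()

# Flat 2x2 patch (photo-like): all normals identical -> spread ~ 0
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [1, 3, 2]])
print(normal_spread(flat, tris))   # ~0.0

# Bend one vertex out of the plane (head-like curvature) -> spread > 0
bent = flat.copy(); bent[3, 2] = 0.5
print(normal_spread(bent, tris))   # noticeably > 0
```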
I'm new to AR development. I want to create an AR demo app, but I face some problems.
Could anyone help me solve the problems below?
1. Is it possible to recognize the floor if I want to place a big 3D object (around 3 m x 1.5 m)?
2. How can I tap the screen to place just one object on the floor? After that, can I disable or enable plane detection (with buttons) while the added 3D object stays visible, so I can still interact with it?
3. After adding a 3D object, how can I interact with it, e.g. rotate or scale it?
Could you share tutorials or other links for solving these problems?
Thank you very much.
You're in luck: there is a video that shows how to build almost everything exactly the way you want it. And if you want to read about the components that make this possible, here is the link to the official Vuforia documentation, where each component and how it works is covered.
Video link: https://www.youtube.com/watch?v=0O6VxnNRFyg
Vuforia link: https://library.vuforia.com/content/vuforia-library/en/features/overview.html
I am new to Unity and Vuforia. I would like to know whether, instead of using a predefined image target in Vuforia, I can do something like augmenting a hand-drawn illustration in real time. For instance, when I start to draw something while the AR app is running, could a 3D object representing what I am drawing appear?
Vuforia can recognize predefined images that have enough features to be detected. If you draw such an image by hand, that's fine; otherwise the answer is no. Just an FYI: Vuforia also has text recognition; if you'd like, take a look here: Vuforia's Text Recognition
Of course, once a detection is made, what happens and what is drawn is up to you, so a 3D object is certainly an option.
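If you want a rough feel for whether a hand drawing has "enough features" before trying it as a target, here is a hedged sketch that uses OpenCV's ORB detector as a stand-in for Vuforia's internal target rating (the filename and threshold are assumptions, and ORB is not what Vuforia actually uses internally):

```python
# Hedged sketch: count ORB keypoints as a crude proxy for how well a
# hand-drawn image might track as an image target.
import cv2

drawing = cv2.imread("hand_drawing.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=1000)
keypoints = orb.detect(drawing, None)

# Rough heuristic: sparse line drawings with long smooth strokes yield
# few corners and track poorly; dense, high-contrast detail tracks well.
print(f"{len(keypoints)} keypoints detected")
if len(keypoints) < 200:    # assumed threshold, tune for your targets
    print("probably too few features for a reliable image target")
```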
All the Tango apps and demos I have seen so far have one major limitation: 3D objects are always drawn "on top" of the real-world camera image. They are placed correctly in 3D space, but a real object in front of a virtual object will not occlude it!
Question:
Is it possible to mask 3D objects, or parts of them, in real time with real-world objects in front of them?
In theory, the 3D data delivered by the Tango sensors should be sufficient to do this. But I wonder whether anyone has done it before, or whether there might be performance limitations that make this impossible. Thanks for your advice!
One approach is to use the 3D Reconstruction library (search "Unity How-to Guide: Meshing with Color") to pre-scan the environment, and then use this model to provide depth data when rendering the AR scene. Here's a video of an AR game that appears to use this technique. It's not perfect for sure, but it does sorta work.
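The underlying idea is just a per-pixel depth test between the pre-scanned real geometry and the virtual object. A minimal sketch of that test with NumPy (the buffers here are made up; in practice the real-depth buffer would come from rendering the reconstruction mesh into the depth buffer before drawing the virtual objects):

```python
# Hedged sketch (not the Tango SDK): depth-based occlusion, per pixel.
import numpy as np

H, W = 4, 4
real_depth = np.full((H, W), 2.0)      # meters to the real surface
real_depth[:, 2:] = 0.8                # a real object close to the camera

virt_depth = np.full((H, W), 1.5)      # virtual object 1.5 m away
virt_rgb   = np.ones((H, W, 3))        # white virtual pixels
cam_rgb    = np.zeros((H, W, 3))       # camera image (black for clarity)

# Standard depth test: show the virtual pixel only where it is closer
# to the camera than the real geometry.
visible = virt_depth < real_depth
out = np.where(visible[..., None], virt_rgb, cam_rgb)

print(visible.astype(int))
# Columns 0-1: virtual object visible; columns 2-3: occluded by the real
# object at 0.8 m -- exactly the overlap behavior the question asks for.
```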
This question has been asked before.
I'm working on a stereo vision project with HALCON/.NET. My project is to scan the surface of a metal plate. Is it possible to detect small holes (1-3 mm) on it with stereo vision?
If you are somewhat familiar with epipolar geometry and MRF optimization, you can have a look at this classic paper on 'Depth Estimation from Video'.
http://www.cad.zju.edu.cn/home/bao/pub/Consistent_Depth_Maps_Recovery_from_a_Video_Sequence.pdf
For camera calibration, you can use their ACTS software from here -
http://www.zjucvg.net/acts/acts.html
It accepts a video sequence and generates camera parameters and depth maps.
I hope it helps!
Yes, it is definitely possible to detect it - but I doubt you need stereo vision for it. Stereo vision is only useful when you want to recover 3D information (depth) from a scene.
Detection and classification can also be achieved through deep learning methods, and that will probably be more straightforward; but it depends on how distinct your 'hole' is from the background of your scene. A problem of a similar nature has been discussed in this paper.
The same problem persists for stereo vision: if the background of your scene has features similar to what you are trying to detect, it will create problems during stereo matching.
Even a simple edge detector on a monocular vision system will run into the same issue.
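To illustrate the monocular route, here is a hedged OpenCV sketch for small circular holes that image darker than the plate (the image file, the mm-per-pixel calibration, and the Hough parameters are all assumptions you would tune for your setup):

```python
# Hedged sketch: detect 1-3 mm holes on a metal plate with a single
# calibrated camera instead of stereo vision.
import cv2
import numpy as np

MM_PER_PX = 0.1                      # assumed calibration: 0.1 mm per pixel

img = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (5, 5), 0)

# Holes appear as small dark blobs; Hough circle detection with radius
# limits derived from the expected 1-3 mm hole diameter.
r_min = int(0.5 / MM_PER_PX)         # 1 mm diameter -> 0.5 mm radius
r_max = int(1.5 / MM_PER_PX)         # 3 mm diameter -> 1.5 mm radius
circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * r_max,
    param1=100, param2=15, minRadius=r_min, maxRadius=r_max)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"hole at ({x}, {y}), diameter ~ {2 * r * MM_PER_PX:.1f} mm")
```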