I'm new to AR development. I want to create an AR demo app, but I'm facing some problems.
Could anyone help me solve the problems below?
1. Is it possible to recognize the floor if I want to place a big 3D object (around 3 m x 1.5 m)?
2. How can I tap the screen to place only one object on the floor? After that, can I disable or enable plane detection (via buttons) while the 3D object we added stays visible, so we can still interact with it?
3. After adding one 3D object, how can we interact with it, e.g. rotate or scale it?
Could you share tutorials or other links for solving these problems?
Thank you very much.
You're in luck: there is a video that shows how to build almost everything exactly as you describe. If you also want to read up on the components that enable this, here is the link to the official Vuforia documentation, which goes over each component and how it works.
Video link: https://www.youtube.com/watch?v=0O6VxnNRFyg
Vuforia link: https://library.vuforia.com/content/vuforia-library/en/features/overview.html
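Independent of the video, here is a minimal sketch of the tap-to-place and pinch-to-scale pattern that steps 2 and 3 of the question ask about. It is framework-agnostic Unity C#: it assumes the detected floor plane carries a Collider so Physics.Raycast can hit it, and objectPrefab is a placeholder for your own model.

```csharp
using UnityEngine;

public class TapToPlace : MonoBehaviour
{
    public GameObject objectPrefab;   // placeholder: the model to place
    GameObject placed;                // only one instance is ever created

    void Update()
    {
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            // Place once: raycast from the tap against the detected floor.
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            if (placed == null && Physics.Raycast(ray, out RaycastHit hit))
                placed = Instantiate(objectPrefab, hit.point, Quaternion.identity);
        }
        else if (Input.touchCount == 2 && placed != null)
        {
            // Pinch to scale: ratio of current to previous finger distance.
            Touch t0 = Input.GetTouch(0), t1 = Input.GetTouch(1);
            float prev = ((t0.position - t0.deltaPosition) -
                          (t1.position - t1.deltaPosition)).magnitude;
            float curr = (t0.position - t1.position).magnitude;
            placed.transform.localScale *= curr / Mathf.Max(prev, 1f);
        }
    }
}
```

Rotation can be handled the same way, e.g. by mapping the change in angle between the two touch vectors to a rotation around the object's up axis.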
I'm finding it difficult to find information about detecting vertical planes and placing objects on walls.
I see a lot about ARCore and the HelloAR example app, but I get loads of compile errors, since I'm mainly building the app for iOS, although I will build it for Android too at some point.
Although I don't mind editing C#, I can't actually read/write C#, so the simpler the resource/answer the better.
I also wouldn't mind being able to design/change the detection indicator, i.e. the visual that shows up when a surface is detected.
On the horizontal one there's just a simple square/crosshair, and I love that.
Thanks in advance.
ARFoundation 1.0.0 preview 22 will surface the ability to select your plane detection mode: horizontal, vertical, or both. To use this mode, you will have to upgrade to the newly released Unity 2018.3: https://blogs.unity3d.com/2018/12/13/introducing-unity-2018-3/
For more about plane detection in ARFoundation, refer to the following link:
AR Plane Manager
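As a minimal sketch, assuming a recent ARFoundation where ARPlaneManager exposes requestedDetectionMode (the 1.0 previews used different property names, so check your version's docs), the three modes can be wired to UI buttons like this:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaneModeSwitcher : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager; // assign in the Inspector

    // Hook each method up to a UI button.
    public void UseHorizontal()
    {
        planeManager.requestedDetectionMode = PlaneDetectionMode.Horizontal;
    }

    public void UseVertical()
    {
        planeManager.requestedDetectionMode = PlaneDetectionMode.Vertical;
    }

    public void UseBoth()
    {
        // The mode is a flags enum, so both can be combined.
        planeManager.requestedDetectionMode =
            PlaneDetectionMode.Horizontal | PlaneDetectionMode.Vertical;
    }
}
```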
I need to insert some virtual objects in an indoor environment, but the positions of these objects must stay fixed. I have already tried using markers with Vuforia, but it is complicated and recognition takes time. I'm thinking of using Google's ARCore. Does anyone know if this is possible and, if so, how to do it?
I'm using Unity to do this. Can someone help me?
ARCore places the camera relative to the detected plane, so you will need a plane at some point so the application can locate the camera in the scene.
HelloAR shows how this works; you can test in the Unity editor and watch how the camera moves around the feature points and the detected plane.
One solution for your problem may be ARCore's image detection combined with plane detection: you place the image on the floor, and when the image is detected your objects appear in place while you move around. But you will still need a detected plane to move against, not only image detection; otherwise you will lose the objects once the camera loses sight of the image.
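A hedged sketch of that combination, based on the classic GoogleARCore SDK for Unity (the same calls appear in its AugmentedImage sample; contentPrefab is a placeholder name):

```csharp
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class ImageAnchoredContent : MonoBehaviour
{
    public GameObject contentPrefab;  // placeholder: the virtual objects to pin
    readonly List<AugmentedImage> images = new List<AugmentedImage>();
    Anchor anchor;

    void Update()
    {
        // Collect images whose tracking state changed this frame.
        Session.GetTrackables(images, TrackableQueryFilter.Updated);
        foreach (var image in images)
        {
            if (anchor == null && image.TrackingState == TrackingState.Tracking)
            {
                // Anchor once at the image's center; ARCore's plane/motion
                // tracking keeps the content in place after the camera
                // loses sight of the image.
                anchor = image.CreateAnchor(image.CenterPose);
                Instantiate(contentPrefab, anchor.transform);
            }
        }
    }
}
```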
I am developing a marker-based AR application with Vuforia in Unity 3D. I want to lay a coordinate system in a room with 4 unique markers on the walls so that I can place 3D objects on the floor of the room.
The camera may not be able to see the markers all the time. But the coordinate system should be persistent and should use motion/orientation sensors of the device to offer the user uninterrupted AR experience. As soon as the camera recognizes a marker, the coordinate system should be recalibrated.
I'm new to Vuforia, so can you please suggest a way I can achieve this kind of behavior? Does Vuforia support this kind of behavior out of the box?
Thank you.
Edit:
After reading the comment from Evert, I realized I can use markerless AR to fill the gaps between marker-based detections.
Now I'm curious to know how I can achieve that. Please help :)
Thank you.
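For what it's worth, a heavily hedged sketch of the usual Vuforia approach: enable the positional device tracker so tracking continues between marker sightings. This is the Vuforia 7.x API; the names have changed across versions, so verify them against your SDK's documentation.

```csharp
using UnityEngine;
using Vuforia;

public class EnableDeviceTracking : MonoBehaviour
{
    void Start()
    {
        // Wait until Vuforia has initialized before starting trackers.
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
    }

    void OnVuforiaStarted()
    {
        // The positional device tracker fuses IMU and camera motion,
        // keeping the coordinate system alive while no marker is visible;
        // each marker sighting then re-calibrates the pose.
        var tracker = TrackerManager.Instance.InitTracker<PositionalDeviceTracker>();
        if (tracker != null)
            tracker.Start();
    }
}
```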
I'm working on a little AR coloring book application using Unity and Vuforia. I did something similar a few years back, but now, with the new updates, they changed a lot of things (I'm using Unity 2017.3, Vuforia 7 and Texture Region Capture 2.0.6 available here https://github.com/maximrouf/RegionCapture).
When the Image Target is shown, a 3D model of that image appears and you should be able to color it. The problem is that on the 3D model I can see all the things captured by the camera, not only the texture, as shown in the image below.
Now, I don't know the reason for this; I tried looking at other tutorials, but even the scripts for this version of Region Capture differ. Below are some pictures showing how I attached the cameras and the game object to the scripts.
Please help me find a solution.
I faced the same problem today, so I'm posting the solution here in the hope that it helps people who encounter the same problem.
To solve it, I had to link the Region Capture to my Image Target and resize it to match the Image Target's size.
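If you prefer to enforce that setup from code rather than in the editor, here is a small sketch (object names are placeholders; it assumes the Image Target's transform encodes the target's physical size, so a unit local scale under it matches its footprint):

```csharp
using UnityEngine;

public class FitRegionCapture : MonoBehaviour
{
    public Transform imageTarget;    // the Vuforia Image Target
    public Transform regionCapture;  // the Region Capture plane

    void Start()
    {
        // Parent the capture region under the target and reset its local
        // transform, so it inherits the target's pose and size exactly.
        regionCapture.SetParent(imageTarget, worldPositionStays: false);
        regionCapture.localPosition = Vector3.zero;
        regionCapture.localRotation = Quaternion.identity;
        regionCapture.localScale = Vector3.one;
    }
}
```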
All the Tango apps and demos I have seen so far share one major limitation: 3D objects are always rendered "on top of" the real-world camera image. They are placed correctly in 3D space, but a real object in front of a virtual object will not occlude it!
Question:
Is it possible to mask 3D objects, or parts of them, in real time with real-world objects in front of them?
In theory, the 3D data delivered by the Tango sensors should be sufficient to do this. But I wonder whether anyone has done it before, or whether there are performance limitations that make it impossible. Thanks for your advice!
One approach is to use the 3D Reconstruction library (search "Unity How-to Guide: Meshing with Color") to pre-scan the environment, and then use this model to provide depth data when rendering the AR scene. Here's a video of an AR game that appears to use this technique. It's not perfect for sure, but it does sorta work.
This question has been asked before.
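The core of the technique described above can be sketched in Unity C#. It assumes you already have the pre-scanned environment mesh and a depth-only material (i.e. a shader with ColorMask 0 and ZWrite On); both names below are placeholders.

```csharp
using UnityEngine;

public class EnvironmentOccluder : MonoBehaviour
{
    public MeshRenderer scannedEnvironment; // mesh from the 3D Reconstruction scan
    public Material depthOnlyMaterial;      // writes depth only, draws no color

    void Start()
    {
        // Render the scanned mesh invisibly but let it fill the depth buffer
        // before ordinary geometry draws, so virtual objects behind real
        // surfaces fail the depth test and are hidden.
        scannedEnvironment.sharedMaterial = depthOnlyMaterial;
        depthOnlyMaterial.renderQueue = 1990; // just before Geometry (2000)
    }
}
```

The quality of the occlusion is only as good as the pre-scanned mesh, which matches the "not perfect, but it sorta works" impression from the video.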