Implementing Android AR with a UVC / external camera (Unity)

I'm working on an AR project (Unity) and I want to use an external camera instead of my Android device's built-in one. I saw that Vuforia has such a feature, but it claims that by using it, Ground Plane detection won't work at all and Model Target performance takes a hit.
I also saw that EasyAR has CustomCamera, and that ARCore can work with the Camera2 library.
The question is: what's the best way to approach this? Has anyone had experience using an external camera, and with which AR solution (ARFoundation / Vuforia / EasyAR...)?
Second question: what should I look for when buying such a UVC camera? Any examples of a suitable one?
I'd also like to hear about experiences with these AR solutions regardless of the external camera question.
Thanks in advance!

Unfortunately, this is unlikely to work with an external camera.
A key part of AR is having a precise calibration of the camera's optics. Without that, it's not possible to accurately analyze the world in order to draw new objects into it or apply other AR effects.
A UVC webcam doesn't come with any such calibration information, so it would have to be calibrated somehow and the calibration data passed to Unity's AR engine. I don't know whether Unity supports that in some way.
Note that not all internal cameras on Android devices are calibrated well enough for AR either, but the ARCore team certifies devices that have sufficient calibration in place.
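If you do go down the calibration route, here is a minimal sketch of one piece of the puzzle: making Unity's virtual camera match calibrated intrinsics (e.g. from an offline OpenCV checkerboard calibration). The fx/fy/cx/cy values are assumptions standing in for your own calibration output, lens distortion is ignored, and this does nothing about tracking; it only aligns the virtual optics with the physical ones.

```csharp
using UnityEngine;

// Hypothetical sketch: apply offline-calibrated intrinsics (fx, fy, cx, cy)
// to a Unity camera by building an OpenGL-style projection matrix from them.
// Lens distortion is ignored; the defaults below are placeholder values.
[RequireComponent(typeof(Camera))]
public class CalibratedProjection : MonoBehaviour
{
    public float fx = 1000f, fy = 1000f;   // focal lengths in pixels (from calibration)
    public float cx = 640f, cy = 360f;     // principal point in pixels (from calibration)
    public int imageWidth = 1280, imageHeight = 720;

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        float near = cam.nearClipPlane, far = cam.farClipPlane;

        var p = new Matrix4x4();
        p[0, 0] = 2f * fx / imageWidth;
        p[1, 1] = 2f * fy / imageHeight;
        // Principal-point offset; the sign of the y term depends on whether
        // the image origin is top-left (OpenCV) or bottom-left (OpenGL/Unity).
        p[0, 2] = 1f - 2f * cx / imageWidth;
        p[1, 2] = 2f * cy / imageHeight - 1f;
        p[2, 2] = -(far + near) / (far - near);
        p[2, 3] = -2f * far * near / (far - near);
        p[3, 2] = -1f;

        cam.projectionMatrix = p;
    }
}
```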

Related

How do I track the Unity position of physical objects the player is interacting with using Hololens2 hand tracking data?

Basically, I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
I am also looking into using spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to the HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020 :
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes, and that we want orientation data as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation, since it can be found quickly with the HL2 device. I have also seen the QR approach in multiple venues for VR LBE experiences like the one described here, with the QR code simply sitting on top of the device.
Otherwise, if the controller in question supports Bluetooth, you may be able to pair it and, if the device knows its own location, have it transmit that location. Based on the above, if QR codes are out of the equation this would be a custom solution, highly dependent on the controller's capabilities. I have seen some controller solutions start the user experience with something like touching the floor to get an initial reference point, or always picking the gun up from a specific spot in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Can you attach multiple QR codes to the controller? If so, we recommend using QR code tracking to assist in locating it. If you prefer to use image recognition, object detection, or other techniques, you will need an Azure service or a third-party library; for more information, see the Computer Vision documentation.
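As a rough illustration of the bounding-box idea from the question, a plain Unity trigger volume can record where the controller was picked up. This assumes the hand-tracking layer (e.g. MRTK) puts colliders with kinematic Rigidbodies on the tracked hands and that those objects are tagged "Hand"; the tag and the component name are hypothetical, not a HoloLens API.

```csharp
using UnityEngine;

// Sketch: a trigger volume placed where the physical gun rests. When a hand
// collider enters it, the zone's pose is used as the gun's initial reference
// pose; from there the controller's IMU can integrate relative changes.
[RequireComponent(typeof(Collider))]
public class GunPickupZone : MonoBehaviour
{
    public Transform gunModel;   // virtual stand-in for the physical gun

    void Reset()
    {
        // The zone itself is a trigger, not solid geometry.
        GetComponent<Collider>().isTrigger = true;
    }

    void OnTriggerEnter(Collider other)
    {
        // "Hand" is an assumed tag on the hand-tracking colliders.
        if (!other.CompareTag("Hand")) return;

        // Anchor the gun's starting pose to the pickup zone.
        gunModel.SetPositionAndRotation(transform.position, transform.rotation);
        Debug.Log("Gun picked up; initial pose anchored to the pickup zone.");
    }
}
```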

Can you use the headset camera on the HTC Vive in Unity?

I'm working in Unity (2018) and building for the HTC Vive VR headset. I had an idea to use the small camera on the front of the headset to make an AR system, i.e., feed the video from the headset's camera into the headset view and then overlay things from a Unity environment on top of it. Unfortunately, I can't seem to find any examples of others doing this (other than the Tron-style blue outline system that the Vive comes with), though perhaps I'm not looking with the right keywords.
If anyone has seen something like this or knows whether it can be done, I'd greatly appreciate it.
It is registered as a standard webcam, so you should be able to use Unity's WebCamTexture.
But the resolution of the cameras is very low.
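A minimal sketch of that approach, assuming the Vive camera shows up as a standard webcam; picking device index 0 is an assumption, so choose the device whose name matches the Vive camera on your system:

```csharp
using UnityEngine;

// Render the first available webcam feed onto this object's material.
public class ViveCameraFeed : MonoBehaviour
{
    WebCamTexture camTex;

    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0) { Debug.LogWarning("No webcam found"); return; }

        // Device 0 is an assumption; inspect devices[i].name to find the Vive camera.
        camTex = new WebCamTexture(devices[0].name);
        GetComponent<Renderer>().material.mainTexture = camTex;
        camTex.Play();
    }

    void OnDisable()
    {
        if (camTex != null) camTex.Stop();
    }
}
```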

How to do occlusion with Google Tango in Unity?

I am trying to do occlusion with Google Tango in Unity.
What I want is pretty simple to understand: when there is a real object in front of a virtual object, the virtual object is hidden (or rendered differently).
The perfect result would be like the one in this impressive video I found: https://www.youtube.com/watch?v=EpDhaM7ZhZs.
I already tried the "Enable occlusion" option of the Tango Camera and I am not so happy with the results (it is not accurate and not real-time, since it is based on mesh reconstruction from the point cloud).
If you have hints, tips or ideas about how to achieve this (like in the video), that would be awesome!
Occlusion is still a very experimental feature on Tango. The problem is that it's very hard to do occlusion with both high fidelity and high performance. Here are a couple of ideas on how to achieve it using different methods:
Use 3D reconstruction.
Tango does provide functionality to construct 3D meshes from the point cloud; you can find sample code in the Tango sample code repository (C, Java, Unity). If you have a world that is pre-scanned, you can essentially use that mesh data to occlude virtual objects, as sketched below.
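One common way to use such a pre-scanned mesh for occlusion (a general Unity technique, not anything Tango-specific) is to render it with a depth-only material: the mesh writes to the depth buffer but draws no color, so virtual objects behind it are culled while the camera feed shows through. A minimal sketch:

```shaderlab
// Depth-mask shader: assign this to the reconstructed environment mesh.
// It fills the depth buffer before normal geometry is drawn, so virtual
// objects behind the real-world mesh are hidden.
Shader "Custom/DepthMask"
{
    SubShader
    {
        Tags { "Queue" = "Geometry-10" }  // render before regular opaque geometry
        ColorMask 0                       // write nothing to the color buffer
        ZWrite On                         // but do write depth
        Pass { }
    }
}
```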
Up-sample the depth image at run time.
You can also project all the point clouds onto an image plane, up-sample it, and use the image as a depth buffer for rendering. This is what the ARScreen occlusion uses in the Tango Unity SDK. Due to the limitations of Tango's depth-sensing hardware, the result quality is not ideal, and it will not work if the physical objects are far away (beyond 4 meters) from the device.
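To illustrate just the projection step of that second idea (up-sampling / hole-filling left out), here is a hedged sketch that splats a point cloud into a small single-channel depth texture a shader could then sample. The points are assumed to be in Unity camera space, and the method name and texture size are arbitrary:

```csharp
using UnityEngine;

// Splat camera-space points into a nearest-depth image for occlusion tests.
public static class PointCloudDepthImage
{
    public static Texture2D Splat(Vector3[] points, Camera cam, int width, int height)
    {
        var depth = new float[width * height];
        for (int i = 0; i < depth.Length; i++) depth[i] = float.MaxValue;

        foreach (var p in points)
        {
            // Convert the camera-space point to world space, then to viewport coords.
            Vector3 vp = cam.WorldToViewportPoint(cam.transform.TransformPoint(p));
            if (vp.z <= 0f || vp.x < 0f || vp.x >= 1f || vp.y < 0f || vp.y >= 1f) continue;

            int x = (int)(vp.x * width), y = (int)(vp.y * height);
            int idx = y * width + x;
            if (vp.z < depth[idx]) depth[idx] = vp.z;  // keep the nearest point per pixel
        }

        // Pack depths into a single-channel float texture for a shader to sample.
        var tex = new Texture2D(width, height, TextureFormat.RFloat, false);
        var colors = new Color[depth.Length];
        for (int i = 0; i < depth.Length; i++)
            colors[i] = new Color(depth[i] == float.MaxValue ? 0f : depth[i], 0f, 0f);
        tex.SetPixels(colors);
        tex.Apply();
        return tex;
    }
}
```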

Can Vuforia track spatial location when using targetless device tracking?

I am trying to wrap my head around Vuforia's capabilities. I want to make an app which lets me place a 3D object into a camera view and have that 3D object stick to the world. I've been learning how to use Vuforia in Unity3D, and Vuforia seems to be slightly capable of this, but is severely limited by its craving for "Targets". It doesn't seem to be able to do much if I don't give it some sort of target.
One workaround I've found is to set the ARCamera's World Center Mode to DEVICE_TRACKING. This seems to let me place a 3D object into the world (in Unity) and have it overlaid onto the camera feed, almost making it seem anchored to the real world. This doesn't work perfectly, though: it tracks properly when I angle the device up/down/left/right (rotation), but it does not seem to track the device's translational motion; that is, when I move the device forward/back/left/right, the overlaid object doesn't get closer/farther, nor does it rotate as I move around it.
Is it possible to get this sort of tracking out of Vuforia, or am I better off switching to something like Google Tango?
The difficulty with setting World Center Mode to CAMERA in Vuforia is that apparently 3D objects rotate around the camera based on its accelerometer/gyroscope changes. This doesn't allow objects to be anchored to the environment; instead, they follow the camera.
Kudan is a good markerless tracking option.

Making my Unity Game with Stereoscopic View (VR)

I have built a Unity3D + Google Tango based game on the NVidia Dev. device. Everything seems to work fine, but now I would like to play this game in stereoscopic view (For Dive Goggles). I looked at the ExperimentalVirtualReality example (https://github.com/googlesamples/tango-examples-unity/tree/master/UnityExamples/Assets/TangoExamples/ExperimentalVirtualReality) and was successfully able to port all the prefabs into my game, but for some reason the experience is not satisfactory.
The stereoscopic views of my game tend to overlap with each other when I look through the Dive goggles, and the experience is quite off.
I noticed that there are some public parameters on the TangoVR Player object in the Unity project for 'IPD in MM', 'Screen Width in MM', 'Eye Offset in MM', etc. Do I have to play around with any of these? What do these values even represent?
Any help or pointers will be greatly helpful and appreciated.
IPD would be Inter-Pupillary Distance, while offset is the distance from your eye to the 'point of articulation' when you move your head.
This describes it (with pictures!): http://gamasutra.com/blogs/NickWhiting/20130611/194007/Integrating_the_Oculus_Rift_into_Unreal_Engine_4.php
I've found that when trying to use Cardboard lenses on devices whose displays are wider than the field of view of the lenses, you get an unsatisfactory experience.
This has to do with the lenses not being centered on the frame when focused at the display.
To work around this on larger devices, you can push in the margins of the stereoscopic views. For the Tango, testing with standard Cardboard lenses, I found that things work nicely if they are pushed in about an inch. The Play Store apps Tango Mini Town and Tango Mini Village do a nice job of demonstrating this workaround.
The ideal way to get this working would be with Google Cardboard and a proper Tango tablet 7-inch view controller, but currently the Cardboard app is incompatible with the Tango. Fingers crossed for Cardboard support.
As for simply tweaking toward an optimal viewpoint in Unity, you can modify the viewport rect in the stereo camera's inspector to get the ideal experience for a specific device with whatever controller you choose.
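The same viewport-rect adjustment can also be done from code. A minimal sketch, assuming two separate eye cameras; the component name and the inset value are placeholders you would tune per device and lens combination:

```csharp
using UnityEngine;

// Push the left/right eye views inward so the image centers line up with
// the lenses on devices wider than the lens field of view.
public class StereoViewportInset : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;
    [Range(0f, 0.2f)] public float inset = 0.05f; // fraction of screen width, tune per device

    void Update()  // kept in Update so the inset can be tuned live in the editor
    {
        // Left eye: left half of the screen, pushed in from the outer (left) edge.
        leftEye.rect  = new Rect(inset, 0f, 0.5f - inset, 1f);
        // Right eye: right half, mirrored, pushed in from the outer (right) edge.
        rightEye.rect = new Rect(0.5f, 0f, 0.5f - inset, 1f);
    }
}
```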
Thanks to all those who helped answer this. Many of my concepts definitely got cleared up, but nothing got me close to an actual solution. After researching a lot, I finally found this article (http://www.talkingquickly.co.uk/2014/11/google-cardboard-unity-tutorial/) super useful. It basically tells me to implement the Durovis SDK (https://www.durovis.com/sdk.html) with its Unity package.
Everything was pretty straightforward, and the experience I got from it was by far the best.