Turning off HoloLens infrared sensors? - unity3d

Is it possible to turn off the HoloLens infrared sensors in my Unity application? I am using another external infrared sensor, and the HoloLens infrared lights are interfering with my other device.

That is a great question; so far, Microsoft has not exposed an API to turn the infrared lights off. From my experiments, those LEDs play two critical roles. First, they illuminate the environment so that HoloLens can create a 3D mesh (Spatial Mapping) of the surroundings, similar to what Microsoft Kinect did in Kinect Fusion. Second, they might help with gesture recognition.
If you are not using Spatial Mapping, and most of your interaction will happen through means other than air-taps, you can safely cover those LEDs with tape. Luckily, they are not a crucial part of HoloLens' implementation of SLAM (Simultaneous Localization and Mapping), so holograms will stay anchored wherever you pinned them.
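If you do cover them and want to be sure your app is not consuming spatial mapping data either, you can also suspend the spatial awareness observers in software. A minimal sketch, assuming MRTK v2 (this only stops the mesh observers; it does not power down the hardware illuminators):

```csharp
using Microsoft.MixedReality.Toolkit;
using UnityEngine;

public class DisableSpatialAwareness : MonoBehaviour
{
    void Start()
    {
        // Suspend all spatial awareness observers so no new surface
        // meshes are requested; ClearObservations drops existing ones.
        var spatial = CoreServices.SpatialAwarenessSystem;
        if (spatial != null)
        {
            spatial.SuspendObservers();
            spatial.ClearObservations();
        }
    }
}
```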

Related

Implementing Android AR with a UVC / external camera (Unity)

I'm working on an AR project (Unity) and I want to use an external camera instead of my Android's built-in one. I saw that Vuforia has such a feature, but it claims that by using it, Ground Plane detection won't work at all and Model Targets performance takes a hit.
I also saw that EasyAR has CustomCamera, and that ARCore has the Camera2 library.
My question is: what's the best way to approach this? Has anyone had experience using an external camera, and with which AR solution (ARFoundation / Vuforia / EasyAR...)?
Second question: what should I look for when buying such a UVC camera? Any examples of one?
I'd also like to hear about experiences with AR solutions regardless of the external camera question.
Thanks in advance!
Unfortunately, this is unlikely to work with an external camera.
A key part of AR is having a precise calibration of the camera's optics. Without that, it's not possible to accurately analyze the world in order to draw new objects into it or apply other AR effects.
A UVC webcam doesn't come with any such calibration information. So the camera would have to be calibrated somehow, and the calibration information passed to Unity's AR engine. I don't know whether that's possible in Unity in some way.
Note that not all internal cameras on Android devices are calibrated well enough for AR either, but the ARCore team certifies devices that have sufficient calibration in place.
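For context, the calibration data an AR engine needs boils down to the camera's intrinsic parameters. A hedged sketch of what that information contains; the WebcamIntrinsics type below is a hypothetical illustration (AR Foundation exposes the same idea as UnityEngine.XR.ARSubsystems.XRCameraIntrinsics, but only for provider-calibrated device cameras):

```csharp
using UnityEngine;

// Hypothetical container for the per-camera intrinsics an AR engine needs.
public struct WebcamIntrinsics
{
    public Vector2 FocalLength;     // fx, fy in pixels
    public Vector2 PrincipalPoint;  // cx, cy in pixels (optical center)
    public Vector2Int Resolution;   // resolution the values were calibrated at

    // Project a camera-space point into pixel coordinates
    // (pinhole model, ignoring lens distortion).
    public Vector2 Project(Vector3 p)
    {
        return new Vector2(
            FocalLength.x * p.x / p.z + PrincipalPoint.x,
            FocalLength.y * p.y / p.z + PrincipalPoint.y);
    }
}
```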

How do I track the Unity position of physical objects the player is interacting with, using HoloLens 2 hand tracking data?

Basically, I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller with an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? A rough sketch of what I have in mind is below. (see attached image)
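Something like this, assuming MRTK v2 hand tracking (since MRTK hands don't carry physics colliders by default, polling joint poses against the box's bounds may be more reliable than OnCollisionEnter; controllerProxy is a hypothetical stand-in transform for the gun):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Rough sketch: detect when either palm enters the calibration box, then
// assume the physical controller is being held and drive its virtual proxy
// from the hand pose.
public class ControllerPickupDetector : MonoBehaviour
{
    public BoxCollider calibrationBox;   // the virtual bounding box
    public Transform controllerProxy;    // hypothetical virtual stand-in for the gun

    void Update()
    {
        foreach (var hand in new[] { Handedness.Left, Handedness.Right })
        {
            if (HandJointUtils.TryGetJointPose(TrackedHandJoint.Palm, hand, out MixedRealityPose pose)
                && calibrationBox.bounds.Contains(pose.Position))
            {
                // From here, combine the hand pose with the IMU orientation.
                controllerProxy.SetPositionAndRotation(pose.Position, pose.Rotation);
            }
        }
    }
}
```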
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to HTC Vive / Oculus Rift ).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes, and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation for finding it quickly with the HL2 device. I have seen the QR approach in multiple venues for VR LBE (location-based entertainment) experiences like the one described here; the QR code just sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you can possibly pair the device, and if the device has location information, it can transmit where it is. Based on the above, if QR codes are out of the equation, this would be a custom solution that depends heavily on the controller's capabilities. I have seen some controller solutions start the user experience by having the user touch the floor to get an initial reference point, or alternatively by always picking the gun up from a specific location in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Are you allowed to paste multiple QR codes on the controller? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or a third-party library; for more information, see the Computer Vision documentation. A minimal sketch of QR code detection follows.
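This sketch assumes the Microsoft.MixedReality.QR NuGet package and shows the event wiring only; converting a detected code's spatial node into a Unity pose depends on which XR plugin you use, so that part is omitted:

```csharp
using Microsoft.MixedReality.QR;
using UnityEngine;

public class QRLocator : MonoBehaviour
{
    private QRCodeWatcher watcher;

    async void Start()
    {
        // Access must be granted before the watcher can run.
        var status = await QRCodeWatcher.RequestAccessAsync();
        if (status != QRCodeWatcherAccessStatus.Allowed) return;

        watcher = new QRCodeWatcher();
        // Note: these events arrive on a background thread, so don't
        // touch Unity scene objects directly from the handlers.
        watcher.Added += (sender, args) =>
            Debug.Log($"QR found: {args.Code.Data}, side {args.Code.PhysicalSideLength} m");
        watcher.Updated += (sender, args) =>
            Debug.Log($"QR updated: {args.Code.Data}");
        watcher.Start();
    }

    void OnDestroy() => watcher?.Stop();
}
```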

Model lost on uniform background surface with ARCamera (Vuforia, Unity)

I'm trying to use Vuforia in Unity to see a model in AR. It works properly when I'm in a room with lots of different colors, but if I go into a room with a single color (for example: white floor, white walls, no furniture), the model keeps disappearing. I'm using Extended Tracking with Prediction enabled.
Is there a way to keep the model on screen regardless of the background seen by the webcam?
Is there a way to keep the model on screen regardless of the background seen by the webcam?
I am afraid this is not possible. Since Vuforia uses markerless tracking, it requires high-contrast feature points.
Since most AR SDKs use only a monocular RGB camera (not RGB-depth), they rely on computer vision techniques to recover the missing depth information. This means extracting visually distinct feature points and locating the device using the estimated distances to these feature points over several frames as you move.
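To make that concrete, here is a toy illustration (not any SDK's actual code) of why motion yields depth: with two views separated by a known baseline, a feature's depth follows from its pixel disparity, and a featureless white wall gives no disparity to measure:

```csharp
// Toy triangulation approximation: depth = baseline * focal / disparity.
// Real SLAM solves for many points and camera poses jointly, but the
// failure mode is the same: no features means no disparity, hence no depth.
static float DepthFromDisparity(float baselineMeters, float focalPixels, float disparityPixels)
{
    if (disparityPixels <= 0f)
        return float.PositiveInfinity; // textureless surface: depth unrecoverable
    return baselineMeters * focalPixels / disparityPixels;
}
```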
These SDKs also leverage sensor fusion, meaning they combine the data gathered from the camera with data from the device's IMU (inertial sensors). Unfortunately, that data is mainly used to complement visual tracking when it fails in situations like excessive motion (when the camera image is blurred). IMU data alone is not reliable over time, which is exactly your situation when you walk into a room with no distinctive points to extract.
The only way to solve this is to place several image targets in that room; they give Vuforia enough features to calculate the device's position in 3D space. Otherwise, this is not possible.
You can also refer to SLAM for more information.
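If the main annoyance is the model vanishing the moment tracking drops, one partial workaround (a sketch, not a Vuforia API; it assumes the usual pattern where your trackable event handler toggles a flag on tracking found/lost) is to keep the content at its last known pose instead of hiding it:

```csharp
using UnityEngine;

// Sketch: put the model under this holder instead of under the target.
// It mirrors the target's pose while tracked and freezes at the last
// known pose when tracking drops, so the model never disappears.
public class LastKnownPoseHolder : MonoBehaviour
{
    public Transform target;   // the Vuforia target the model normally follows
    public bool isTracked;     // set from your OnTrackableStateChanged handler

    void LateUpdate()
    {
        if (isTracked)
        {
            // Follow the target while tracking is good...
            transform.SetPositionAndRotation(target.position, target.rotation);
        }
        // ...and simply keep the last pose when tracking is lost.
    }
}
```

Note the frozen pose is only correct while the device itself stays still; without features, Vuforia cannot update the camera pose either, so this hides the symptom rather than fixing the tracking.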

How to use spatial mapping on an immersive headset with MixedRealityToolkit

I'm trying to use MixedRealityToolkit in Unity to render spatial mapping data the same way a HoloLens does, but using a Windows Mixed Reality immersive headset rather than a HoloLens. The HoloLens prefabs run fine on the immersive headset, but the spatial mapping prefab, used as I would on HoloLens, does nothing.
Spatial Mapping is not supported on WMR Immersive headsets, sorry.
You can also find which parts of the MixedRealityToolkit support which devices here; under "Feature areas" there are icons showing compatibility. If you want to check at runtime instead, see the sketch below.
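A hedged sketch of a runtime check using the UWP perception API (it compiles only in UWP player builds, hence the ENABLE_WINMD_SUPPORT guard):

```csharp
using UnityEngine;

public class SpatialMappingSupportCheck : MonoBehaviour
{
    void Start()
    {
#if ENABLE_WINMD_SUPPORT
        // SpatialSurfaceObserver.IsSupported() returns false on WMR immersive
        // headsets, which lack environment-scanning depth cameras.
        bool supported = Windows.Perception.Spatial.Surfaces.SpatialSurfaceObserver.IsSupported();
        Debug.Log($"Spatial mapping supported: {supported}");
#else
        Debug.Log("Spatial mapping check is only available in UWP builds.");
#endif
    }
}
```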

Treasure hunt in Augmented Reality

I'm looking for an augmented reality browser/toolkit/api that supports the following:
Adding fixed 3D models such as a treasure chest.
Possible image recognition of this treasure chest so the iPhone knows when you're looking at it.
Specify altitude on a 3D model so it can be positioned on the ground or the second floor in an apartment building, for example.
It must have support for "migrating" it to a standalone app that can be published on the app store.
The ability to customize the camera overlay with own buttons, huds, text and other UIViews.
Support for both iPhone and Android.
I have tried Wikitude, which doesn't support 3D models on iPhone.
I have tried Junaio, which doesn't support creating a standalone app from their browser.
I have tried the Layar Player SDK, and asked on their community whether I can customize the interface with my own buttons etc.
I have tried ARToolKit on GitHub.
None of the libraries I've tried have support for all my demands.
Am I looking for too much here?
Is there something I've missed using Layar, Wikitude and Junaio?
Specify altitude on a 3D model so it can be positioned on the ground or the second floor in an apartment building, for example.
Can you break this down? Do you want the phone to recognize that it's on the second floor, at a particular location within the building? In general, altitude is surprisingly tricky, and indoor positioning is very approximate. In the absence of indoor GPS repeaters or other indoor positioning mechanisms that would probably require a lot of additional effort (Bluetooth beacons, Wi-Fi triangulation, etc.), this might be infeasible in general, and not just with a particular AR library.
I think the Junaio libraries cover the other bases: CV recognition of a (prepared) object, standalone application packaging, customizable UI, and iPhone and Android support.