How do I track the Unity position of physical objects the player is interacting with using HoloLens 2 hand tracking data?

I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller with an IMU that reports acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, so I can accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands?
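To make the idea concrete, here is roughly what I have in mind, assuming MRTK hand tracking (the use of HandJointUtils and the palm-in-bounds check are my guesses at how to wire this up, not tested code):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Attached to the virtual bounding box placed around the physical gun.
// Instead of relying on physics collisions with the hand, poll the tracked
// palm joint each frame and check whether it has entered the box.
public class GunPickupDetector : MonoBehaviour
{
    public Collider pickupVolume;           // the virtual bounding box
    public bool GunPickedUp { get; private set; }

    void Update()
    {
        if (GunPickedUp) return;
        if (HandJointUtils.TryGetJointPose(TrackedHandJoint.Palm,
                Handedness.Right, out MixedRealityPose palm)
            && pickupVolume.bounds.Contains(palm.Position))
        {
            GunPickedUp = true;
            // From here, parent a proxy transform to the hand and blend in
            // the IMU orientation from the gun controller for aiming.
        }
    }
}
```

I believe MRTK can also attach physics colliders to the hand joints, in which case OnTriggerEnter on the box might fire directly, but I haven't verified that.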
I am also looking into using spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine the object's position (similar to the HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes. Assume for this project that we are not allowed to use QR codes, and that we want orientation data as precise as possible. Thanks Hernando!

For locating the object, a QR code would definitely be my recommendation for quick detection with the HL2 device. I have seen the QR approach in multiple venues for VR LBE (location-based entertainment) experiences like the one being described here, with the QR code simply sitting on top of the device.
Otherwise, if the controller in question supports Bluetooth, you could possibly pair the device, and if the device has location information, it could transmit where it is. Based on everything above, this would be a custom solution and highly dependent on the controller's capabilities if QR codes are out of the equation. I have seen some controller solutions start the user experience with something like touching the floor to get an initial reference point, or always picking the gun up from a specific location in the real world, the way some location-based experiences do before starting.
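For illustration, a minimal Unity-flavoured sketch of that known-starting-pose calibration (all names here are mine, and how you read the IMU depends entirely on your controller's SDK):

```csharp
using UnityEngine;

// Capture a reference while the gun rests in a known real-world spot,
// then derive its world orientation from subsequent IMU readings.
public class GunCalibration : MonoBehaviour
{
    public Transform knownMountPoint;   // measured position/rotation of the cradle
    Quaternion imuToWorld = Quaternion.identity;

    // Call once while the physical gun is sitting in the cradle.
    public void Calibrate(Quaternion imuNow)
    {
        // Map whatever the IMU reports in the cradle onto the cradle's rotation.
        imuToWorld = knownMountPoint.rotation * Quaternion.Inverse(imuNow);
    }

    // Afterwards, convert any IMU reading into a Unity world rotation.
    public Quaternion WorldRotation(Quaternion imuNow) => imuToWorld * imuNow;
}
```

Position still has to come from somewhere else (hands, markers, base station); the IMU alone only gives you orientation plus drift-prone acceleration.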
Good luck with the project; this is just my advice from working with VR systems.

Is the controller allowed to have multiple QR codes pasted on it? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other such technologies, you will need an Azure service or a third-party library; for more information, please see the Computer Vision documentation.
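If QR codes do end up being allowed, a minimal sketch of watching for them on HL2 could look like this (assuming the Microsoft.MixedReality.QR package plus the OpenXR plugin's SpatialGraphNode for the pose lookup; treat the exact calls as a starting point rather than verified code):

```csharp
using System;
using Microsoft.MixedReality.OpenXR;
using Microsoft.MixedReality.QR;
using UnityEngine;

// Sketch: watch for QR codes and resolve the latest one to a Unity pose.
public class QrControllerLocator : MonoBehaviour
{
    QRCodeWatcher watcher;
    Guid latestNodeId;

    async void Start()
    {
        await QRCodeWatcher.RequestAccessAsync();   // user must grant access
        watcher = new QRCodeWatcher();
        watcher.Added += (s, e) => latestNodeId = e.Code.SpatialGraphNodeId;
        watcher.Updated += (s, e) => latestNodeId = e.Code.SpatialGraphNodeId;
        watcher.Start();
    }

    void Update()
    {
        if (latestNodeId == Guid.Empty) return;
        // Resolve the code's spatial graph node to a pose in Unity world space.
        var node = SpatialGraphNode.FromStaticNodeId(latestNodeId);
        if (node != null && node.TryLocate(FrameTime.OnUpdate, out Pose pose))
        {
            transform.SetPositionAndRotation(pose.position, pose.rotation);
        }
    }
}
```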

Related

Is ARKit usable for outdoor navigation if we don't use ARGeoTrackingConfiguration?

My question is: I would like to create an app where the user sets a starting point at their current location and places, e.g., a 3D navigation arrow that leads to another arrow or to an object of interest, so that when another user arrives at that place, the navigation arrow is loaded with the correct orientation. Can this be done with any usable accuracy?
ARGeoTrackingConfiguration is only usable in mapped cities, but I don't know whether something like this can be achieved with, for example, WorldMaps by combining GPS navigation and the compass (including Google Maps). What tools could be used?
ARKit and RealityKit were created for indoor/outdoor world tracking. It isn't necessary to use geo tracking if you are on a street; the purpose of geo tracking is to place models with the help of a "visual variant" of GPS data.
The only unpleasant drawback of using world tracking outdoors is the accumulation of tracking errors when the distance from the starting point is considerable (this is a problem for any AR framework, not only ARKit or RealityKit). It's up to you what tools to use to keep tracking and object placement precise. You are the "artist" of your AR app.
As a starting point, use this GitHub project.
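To make the shared-orientation part concrete, one approach that doesn't need geo tracking is to rotate your content root to true north once at startup, so arrows authored relative to north load correctly for every user. A rough Unity-flavoured sketch (the one-second settle delay and the field names are illustrative, and the compass requires location permission):

```csharp
using System.Collections;
using UnityEngine;

// Align a content root to true north using the compass, so content saved
// relative to north is reloaded with the same real-world orientation.
public class NorthAlignedOrigin : MonoBehaviour
{
    public Transform contentRoot;   // all navigation arrows live under this

    IEnumerator Start()
    {
        Input.location.Start();               // heading requires location services
        Input.compass.enabled = true;
        yield return new WaitForSeconds(1f);  // crude: let the heading settle

        float heading = Input.compass.trueHeading;  // device facing, degrees clockwise from north
        float cameraYaw = Camera.main.transform.eulerAngles.y;

        // North sits at (cameraYaw - heading) degrees of world yaw.
        contentRoot.rotation = Quaternion.Euler(0f, cameraYaw - heading, 0f);
    }
}
```

Consumer compasses are only accurate to a few degrees, so over tens of meters the arrow placement will still need correction from the world-tracking side.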

UE4 - Changing ADS Camera when using a different weapon

I'm very new to Unreal Engine 4 and have been following an FPS guide online!
I currently have an AK and an M4 in the game and can switch between the two using 1 / 2 on the keypad. I had to set up the first aim-down-sights (ADS) camera for the AK, and it works well! However, if I equip the M4 and aim down sights, the camera is no longer in the correct spot and doesn't line up at all with the iron sights. So I added another camera called M4A1 ADS Camera, but I can't figure out how to switch to that camera when aiming down sights, and back to the AK camera when using that weapon.
Is there a better way of doing this or any tutorials / tips to help with the process for the future?
To answer your question as asked: you could add a switch case or branches to check which weapon is equipped at the time.
But I'd say a better way would be to add a camera to your weapon blueprint, so you can access the camera from the weapon directly (assuming you have a master weapon class). This way you would configure one ADS camera per weapon and align it properly in its own blueprint.
You can use the "Set View Target With Blend" function to change your cameras; it gives you good control over the blend speed and other blending behaviour.
I know this is old, but even cleaner than Deimos's suggestion would be to have an ADS camera component on your character and attach it to a socket you create on each of your weapons. You can adjust the socket position and rotation on each weapon's skeleton, and then all you do from the character side is attach the camera to the weapon any time you equip one.

How to have a reference frame for markerless inside out tracking in VR, to achieve absolute positional tracking and prevent drift

We have the new Vive Focus headset, which has markerless inside-out tracking. By default these headsets can only do relative tracking from their initial position, and this isn't totally accurate: you get some drift, and the positions in the virtual world and the real world go out of sync. For the y position, this can mean ending up at the wrong height in the virtual world as well. You also don't know the user's absolute position, which you would need for a multiplayer game, for instance, so your players don't run into each other.
Now, the ZED camera from Stereolabs has an option to use a reference frame (which I assume is a point map), which it will then use to do absolute positional tracking by calculating your position relative to the reference frame instead of relative to the last frame (which I assume is what normal markerless inside-out tracking does). Of course the ZED code is in a DLL, so my question is: how difficult is it to code this reference-frame system for the Vive Focus or another markerless inside-out tracked headset? Preferably in C#, preferably using the Unity plugin, but any example would help.
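Roughly, I picture the correction step like this (pure Unity math, no SDK calls; `OnReferenceRecognised` is a hypothetical callback for whenever the headset re-finds the stored reference):

```csharp
using UnityEngine;

// When the stored reference frame is recognised again, compare where it
// appears to be against where we know it is, and shift the rig to compensate.
public class ReferenceFrameCorrector : MonoBehaviour
{
    public Transform cameraRig;   // parent of the tracked HMD camera

    public void OnReferenceRecognised(Pose knownPose, Pose observedPose)
    {
        // Accumulated drift: position, plus yaw only (IMU pitch/roll barely drift).
        Vector3 positionError = observedPose.position - knownPose.position;
        float yawError = Mathf.DeltaAngle(knownPose.rotation.eulerAngles.y,
                                          observedPose.rotation.eulerAngles.y);

        // Undo the drift. A more careful version would rotate around the camera
        // position and blend the correction over a second or two to hide the jump.
        cameraRig.Rotate(0f, -yawError, 0f, Space.World);
        cameraRig.position -= positionError;
    }
}
```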
And what I'm wondering about this reference-frame system is: would one reference frame be enough? The ZED documentation says you need to look at approximately the same scene as when you first made the reference frame. This makes sense: otherwise, how would the system find its reference? But doesn't that mean you would need more references for the other sides of your room as well? The ZED documentation also says that using the reference-frame setting can cause jumps in VR when syncing to the reference. Would this be a big problem? Because if it jumped all the time, that would only increase motion sickness, which is a big enough problem as it is in VR. And finally, would tracking against a reference frame require a lot of processing power? We're dealing with standalone headsets powered by mobile processors here; they have a hard enough time of it as it is.
Or would it be feasible to make something using markers and maybe Vuforia and do absolute positional tracking that way?
Thanks for any input!

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR Tags and QR codes.
It also doesn't cope with the area-learnt scene moving (much). If your 3D-printed objects stayed stationary, you could scan an ADF and should get good-quality tracking; with everything still, you should see a little drift, but not too much.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR Marker detection (unsure: is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
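Whichever system detects the marker, the re-alignment math at acquisition time is generic. A minimal sketch (the names are mine; `markerInModel` is the marker's authored pose relative to the scene root):

```csharp
using UnityEngine;

// Given the marker's detected pose in world space and its authored pose in
// the scene model, move the whole scene root so the two coincide.
public static class MarkerAlignment
{
    public static void AlignSceneToMarker(Transform sceneRoot,
                                          Pose markerInModel,   // authored, relative to sceneRoot
                                          Pose markerDetected)  // reported by the tracker
    {
        // World transform T for the root such that T * markerInModel = markerDetected.
        Quaternion rot = markerDetected.rotation * Quaternion.Inverse(markerInModel.rotation);
        Vector3 pos = markerDetected.position - rot * markerInModel.position;
        sceneRoot.SetPositionAndRotation(pos, rot);
    }
}
```

Calling this each time the marker is re-acquired gives the drift correction described above.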
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, since users will mostly be looking at the AR Tag / NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

How to detect height of iPhone (for use in augmented reality game)?

I'm working on locating an iPhone device in 3D space.
I can use lat/long to detect physical location, I can use the magnetometer to figure out the direction the user is facing, and I might be able to use the accelerometer to figure out how the device is oriented, but I can't figure out a way to get the height of the device off the floor.
Specifically, I need to know if the user is squatting down or raising their hand toward the ceiling (a difference of about 2 meters / 6 feet).
I posted a more detailed description of what I'm trying to do on my blog: http://pushplay.net/blog_detail.php?id=36
I would love any suggestions as to how to even fake this sort of info. I really want the sort of interactivity and movement that would require ducking and bobbing, versus just letting someone sit back and angle the phone -- kind of the way people can "cheat" playing with a Wii...
The closest I can see you getting to what you're looking for is using the accelerometer/magnetometer as an inertial tracker. You'd have to calibrate the user's initial position on startup to a "base" position, then continuously sample the sensors on a background thread to build a movement model. This post talks about boosting the default sample rate of the accelerometer functions so that you can get a pretty fine-grained picture of the user's movements.
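To sketch the integration step in a platform-agnostic way (the class is illustrative, not an iPhone API; gravity removal and "user is still" detection are left to the caller):

```csharp
// Double-integrate vertical acceleration to estimate height change.
// Drift accumulates quickly, so zero the velocity whenever the device is still.
public class VerticalTracker
{
    float velocity;                             // m/s, vertical
    public float Height { get; private set; }   // metres relative to start

    // Call once per accelerometer sample. verticalAccel is in m/s^2 with
    // gravity already removed (e.g. using the device attitude to find "up").
    public void AddSample(float verticalAccel, float dt)
    {
        velocity += verticalAccel * dt;
        Height += velocity * dt;
    }

    // Re-zero when motion stops to keep the drift from running away.
    public void NotifyStationary() => velocity = 0f;
}
```

Even with re-zeroing, this is only good for detecting short gestures like a squat or a reach, not an absolute height held over minutes, which actually matches the ducking-and-bobbing use case well.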
I'm not sure this will solve your concern about people simply angling the device to produce the desired action, but you will have to strike a balance between being too strict in interpreting movements and allowing for individual differences in movement.
CoreLocation gives you elevation as well as lat/long, so you could potentially use that, although there are some significant problems with it:
Won't work well indoors (not a problem for sat nav; very much a problem for games).
Your users would have to "calibrate" (probably by placing the phone on the floor) each location they use!
In fact, you'd need to start keeping a list of "previously calibrated locations", which could vary hugely just in one house (e.g. multiple rooms and floors). This could get in the way of the game.
Can't be used on moving transport (trains, planes, automobiles... even walking) because the elevation changes so frequently.
Therefore I'd have thought that using the accelerometer as a proxy for height is a substantially preferable route to determining absolute elevation.
I am not intimately familiar with the iPhone, but this might require a hardware add-on (which you probably don't want to do). After thinking about it, the only way I know of is through light, or more specifically a laser: you shoot a laser at the floor and record the time it takes to get back. It's actually not a lot of hardware to put together, and I am sure the iPhone has connections for peripherals. Unless someone can trump me, I say there is no way to do this with an image alone.