MRTK: can InputSimulation also simulate hand mesh input? - unity3d

I'm trying to record and replay hand animations on the HoloLens 2. I managed to record the tracked Transforms of the hand joints and use those recordings to animate hand rigs. Now I'm trying to also record the tracked hand mesh. I'm aware of OnHandMeshUpdated in the IMixedRealityHandMeshHandler interface, and the following post guided me in this direction (very helpful):
How to get hand mesh data from Hololens2 without turning on Hand Mesh Visualization option
My question is: Is there a simple way to simulate hand mesh data in the Unity Editor? At the moment I don't have access to my team's HoloLens, so I'm trying to figure out how to develop this feature directly in Unity.
AFAIK the OnHandMeshUpdated event is only raised when there is actual mesh data on the HoloLens; in the Editor the input simulation only provides the simulated joints of the controller, not a hand mesh.
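For reference, this is roughly how I'm listening for the mesh updates at the moment (a minimal sketch against MRTK 2.x; the HandMeshRecorder class and its frame list are just my own illustration):

```csharp
using System.Collections.Generic;
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Registers as a global hand mesh handler and stores each mesh update it receives.
public class HandMeshRecorder : MonoBehaviour, IMixedRealityHandMeshHandler
{
    private readonly List<HandMeshInfo> recordedFrames = new List<HandMeshInfo>();

    private void OnEnable()
    {
        // Register globally so the handler fires without needing input focus.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealityHandMeshHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealityHandMeshHandler>(this);
    }

    public void OnHandMeshUpdated(InputEventData<HandMeshInfo> eventData)
    {
        // On the device this delivers vertices/normals/triangles of the tracked hand.
        // With the editor InputSimulation I never see this callback, only joint updates.
        recordedFrames.Add(eventData.InputData);
        Debug.Log($"Hand mesh frame: {eventData.InputData.vertices?.Length ?? 0} vertices");
    }
}
```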
Any suggestions are welcome!

To simulate hand mesh input, you can use the RiggedHandVisualizer, which drives a skinned hand mesh from hand joint data to visualize the hands, and it works with InputSimulation in the Unity Editor. You can find an example in the RiggedHandVisualizer scene under MRTK/Examples/Experimental/RiggedHandVisualizer/Scenes; for more detail, see Rigged Hand Visualizer [Experimental].
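If all you need in the Editor is something to drive your recorded rigs, the joint poses that InputSimulation publishes can also be polled directly. A minimal sketch, assuming MRTK 2.x (the SimulatedJointDriver class and its bone field are illustrative, not part of MRTK):

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Drives one bone of a rigged hand from the tracked (or simulated) joint pose each frame.
public class SimulatedJointDriver : MonoBehaviour
{
    [SerializeField] private Handedness handedness = Handedness.Right;
    [SerializeField] private TrackedHandJoint joint = TrackedHandJoint.IndexTip;
    [SerializeField] private Transform bone = null; // the corresponding bone of your hand rig

    private void Update()
    {
        // Works both on device and with editor InputSimulation, because the
        // simulated hands publish the same joint poses as the real ones.
        if (HandJointUtils.TryGetJointPose(joint, handedness, out MixedRealityPose pose))
        {
            bone.SetPositionAndRotation(pose.Position, pose.Rotation);
        }
    }
}
```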

Related

Rendering Mediapipe Hands in Three JS for a Physics Simulation

I'm looking to use Handsfree.js and Three to create a game where you can pick up and move around blocks with your hand. I'm wondering where to start: I have a basic understanding of Three but don't know how I should approach this project. I'm trying to render and animate a hand dynamically based on the coordinates that Mediapipe outputs.
I've looked into Three's Skinned Mesh, but I'm not sure if it would be feasible to control an entire hand using it. I've also tried rigging a hand in Blender and then importing it into Three.js, but I couldn't find any documentation on how to control imported rigs. How should I go about dynamically animating a hand? Also, how would I add physics to a hand that I import? Is Three the right tool for the job?
I've also tried using Unity, but can't find any libraries that output 3D coordinates for the hands.
(for reference, mediapipe outputs an array of Vector3s with the positions of each joint on the hand)

How to add 3D elements into the Hololens 2 field of view

I'm trying to build a Remote Assistance solution using the HoloLens 2 for university. I already set up the MRTK WebRTC example with Unity. Now I want to add the functionality for the desktop counterpart to add annotations in the field of view of the HoloLens to support remote guidance, but I have no idea how to achieve that. I was considering Azure Spatial Anchors, but I haven't found a good example of adding 3D elements into the remote field of view of the HoloLens from a 2D desktop environment. I'm also not sure if Spatial Anchors is the right framework, as it is mostly for persistent markers in the AR environment, and I'm rather looking for a temporary visual indicator.
Did anyone already work on such a solution and can give me a few frameworks/hints where to start?
To find the actual world location of a point from a 2D image, you can refer to this answer: https://stackoverflow.com/a/63225342/11502506
In short, the cameraToWorldMatrix and projectionMatrix transforms define, for each pixel, a ray in 3D space representing the path taken by the photons that produced the pixel. But anything along that ray shows up on the same pixel, so to find the actual world location of a point you'll need to use the Physics.Raycast method to calculate the impact point in world space where the ray hits the spatial mapping.
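A rough sketch of that idea, assuming you already have the cameraToWorldMatrix and projectionMatrix for the frame and that the spatial mapping meshes carry colliders (the PixelToWorld helper and its parameters are illustrative, not a HoloLens API):

```csharp
using UnityEngine;

// Maps a pixel from the 2D view back to a 3D point on the spatial mapping.
public static class PixelToWorld
{
    public static bool TryGetWorldPoint(
        Vector2 pixelUV,                 // pixel position normalized to 0..1 (flip V if needed)
        Matrix4x4 cameraToWorldMatrix,   // from the locatable camera frame
        Matrix4x4 projectionMatrix,      // from the same frame
        LayerMask spatialMappingMask,    // layer used by the spatial mapping colliders
        out Vector3 worldPoint)
    {
        // Pixel -> clip space (-1..1), then un-project into camera space.
        Vector2 ndc = pixelUV * 2f - Vector2.one;
        Vector3 camSpace = projectionMatrix.inverse.MultiplyPoint(new Vector3(ndc.x, ndc.y, 1f));

        // Two points define the ray: the camera position and the un-projected pixel.
        Vector3 rayOrigin = cameraToWorldMatrix.MultiplyPoint(Vector3.zero);
        Vector3 rayThrough = cameraToWorldMatrix.MultiplyPoint(camSpace);
        Vector3 rayDirection = (rayThrough - rayOrigin).normalized;

        // Every point along this ray maps to the same pixel; the raycast against
        // the spatial mapping picks out the actual surface point.
        if (Physics.Raycast(rayOrigin, rayDirection, out RaycastHit hit, 15f, spatialMappingMask))
        {
            worldPoint = hit.point;
            return true;
        }

        worldPoint = Vector3.zero;
        return false;
    }
}
```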

Implementing different objects for each side of the cube in multi-image tracking Vuforia - unity

I am trying to build an AR application with a cube using Vuforia and the Multi-Image Target system. I am able to make an object appear in place of the cube, but I would like to show different objects when the camera scans different sides of the cube. Any suggestions or ideas about how I can do that?
I have tried using the angle of the object with respect to the camera.
After trying different methods, I decided to use raycasters from each face of the cube and trigger different objects on the same multi-image target. This method is better than using extended tracking and is more stable.
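A minimal sketch of that idea (the FaceContentSwitcher class, the per-face colliders, and the content objects are illustrative; it assumes each face of the multi-target cube has its own thin collider):

```csharp
using UnityEngine;

// Casts a ray from the AR camera at the tracked cube and enables only the
// content assigned to the face that the ray hits first.
public class FaceContentSwitcher : MonoBehaviour
{
    [System.Serializable]
    public class Face
    {
        public Collider faceCollider;   // thin box collider placed on one cube face
        public GameObject content;      // object to show while that face is visible
    }

    [SerializeField] private Camera arCamera = null;
    [SerializeField] private Transform cubeTarget = null;   // the multi-image target
    [SerializeField] private Face[] faces = new Face[6];

    private void Update()
    {
        Vector3 toCube = (cubeTarget.position - arCamera.transform.position).normalized;

        Collider hitCollider = null;
        if (Physics.Raycast(arCamera.transform.position, toCube, out RaycastHit hit, 10f))
        {
            hitCollider = hit.collider;
        }

        // Show only the content whose face collider was hit.
        foreach (Face face in faces)
        {
            face.content.SetActive(face.faceCollider == hitCollider);
        }
    }
}
```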

How to prevent the car from going off the road in a Unity3D driving simulation that uses Mapbox maps?

I'm trying to make a car simulation that uses a real-world map. I'm currently using Mapbox for the map features, and for the car I'm using Unity's Standard Assets.
My question is: how can I prevent the car from getting off the road? There are many other features like parks, lakes, etc., and I want the driver to be able to drive only on the roads.
Is there anything I can do? I thought about adding colliders to all the other features (parks, gardens, ...), but there is a large number of features to cover. Is there any other solution?
If you can get the road information as coordinates from Mapbox (which I don't know), you could write a script which automatically creates a mesh with a MeshCollider on each side of the road (see the sketch below).
You can also create a collision mesh in software like Blender, Maya, 3ds Max, or another tool and import it into Unity3D. You could then use the imported model with a MeshCollider.
Here you can see one of many tutorials on Creating Custom Collision for your Unity Scenes.
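As a rough illustration of the scripted approach, using a chain of simple box colliders along each road edge instead of one generated mesh (which keeps the sketch short); the RoadBarrierBuilder class and the edge-point list are hypothetical, and it assumes you can extract the road edge as world-space points from the Mapbox feature:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Builds invisible wall colliders along a polyline of road-edge points so the
// car physically cannot leave the road.
public class RoadBarrierBuilder : MonoBehaviour
{
    [SerializeField] private float barrierHeight = 2f;
    [SerializeField] private float barrierThickness = 0.3f;

    // Call this with the edge points extracted from a Mapbox road feature.
    public void BuildBarrier(IList<Vector3> edgePoints)
    {
        for (int i = 0; i < edgePoints.Count - 1; i++)
        {
            Vector3 start = edgePoints[i];
            Vector3 end = edgePoints[i + 1];
            Vector3 segment = end - start;

            // One thin, invisible box collider per segment of the edge polyline.
            GameObject wall = new GameObject($"RoadBarrier_{i}");
            wall.transform.SetParent(transform, worldPositionStays: true);
            wall.transform.position = (start + end) * 0.5f + Vector3.up * (barrierHeight * 0.5f);
            wall.transform.rotation = Quaternion.LookRotation(segment.normalized, Vector3.up);

            BoxCollider box = wall.AddComponent<BoxCollider>();
            box.size = new Vector3(barrierThickness, barrierHeight, segment.magnitude);
        }
    }
}
```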

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location, however the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images the way Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
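For reference, the usual pattern for keeping the scene visible while the target is only extended-tracked looks roughly like the sketch below. It is modelled on Vuforia's DefaultTrackableEventHandler from that era; exact class and status names can differ between Vuforia versions, and extended tracking still has to be enabled on the target itself:

```csharp
using UnityEngine;
using Vuforia;

// Keeps the content visible while the target is extended-tracked,
// instead of hiding it the moment the marker leaves the camera view.
public class KeepContentOnExtendedTracking : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackableBehaviour;

    private void Start()
    {
        trackableBehaviour = GetComponent<TrackableBehaviour>();
        if (trackableBehaviour != null)
        {
            trackableBehaviour.RegisterTrackableEventHandler(this);
        }
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        // EXTENDED_TRACKED is reported while the marker itself is lost but the
        // pose is still being estimated from the surrounding environment.
        bool visible = newStatus == TrackableBehaviour.Status.DETECTED ||
                       newStatus == TrackableBehaviour.Status.TRACKED ||
                       newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        foreach (Renderer childRenderer in GetComponentsInChildren<Renderer>(true))
        {
            childRenderer.enabled = visible;
        }
    }
}
```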
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't cope well with the area-learnt scene moving. If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking; with everything still you should see a little, but not too much, drift.
However, if you are moving those 3D-printed objects, it will definitely throw that tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with ARMarkers using Tango's ARMarker detection (unsure: is that what you already tried?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, Natural Feature tracking by Vuforia (or marker tracking for robustness) sounds more suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.