How to mimic HoloLens 2 hand tracking with Windows Mixed Reality controllers [MRTK2]? - unity3d

The HoloLens 2 will feature hand tracking and the ability to reach out and poke UI elements. With Unity and the Mixed Reality Toolkit V2, the input for hand-tracked near interactions (i.e. poking) comes from the PokePointer class, which generates events for GameObjects that have BaseNearInteractionTouchable components.
My question is: how can we get the same PokePointer events from virtual reality controllers such as the Windows Mixed Reality controllers? This would make it possible to prototype on the desktop using a VR headset, and even to use the same near interactions of the Mixed Reality Toolkit directly within VR applications.
Can the PokePointer component be attached to a hand GameObject that is a controller model? Or is there a better way to do this through the MRTK profiles system?

Actually, it's possible to add a poke pointer and a grab pointer to a VR device. In fact, adding basic functionality without visualization can be done without even writing any code!
Getting the existing grab and poke pointers to work with VR
Open your current pointer configuration profile by selecting the MixedRealityToolkit object in the scene view, going to the inspector window, then navigating to Input -> Pointers.
Under Pointer Options, set the controller type for the PokePointer and the Grab Pointer to include your VR controller type (in my case it was Windows Mixed Reality, though you may wish to use OpenVR).
The poke pointer is configured to follow the Index Finger Pose, which does not exist for VR controllers. So you will need to open the PokePointer.prefab file and, in the inspector, under Poke Pointer -> Pose Action, set the value to "Pointer Pose".
Hit play. The grab pointer will be slightly below and to the right of the motion controller gizmo, and the poke pointer will appear right at the origin.
Bonus: Improving the grab and poke pointers by using custom pointers
You can greatly improve the pointers you have by using custom pointers instead of the default pointers. For example, you can:
have the poke pointer be offset from the gizmo origin by setting the PokePointer's raycastOrigin field to a custom transform
Add visuals to actually show where the pointers are
I've created an example that demonstrates a custom grab and poke pointer which visualizes the grab and poke locations, and also offsets the poke position to be more convenient. You can download a unitypackage of the sample here, or just clone the mrtktips repository and look at the VRGrabPokePointers scene.
Note: to get the visuals to actually show up, use the following script (pointers currently disable all renderers on startup to avoid flickering).
using UnityEngine;

public class EnableRenderers : MonoBehaviour
{
    void Start()
    {
        foreach (var renderer in GetComponentsInChildren<Renderer>())
        {
            renderer.enabled = true;
        }
    }
}
You can see an example of a custom MRTK and pointer profile in the example here, and also in the VRGrabPokePointersUnity scene.

Related

How to modify behaviour of a VR controller's pointer

My university colleagues and I are trying to develop a Virtual Reality project for university where we use an Oculus headset and create a scene with a mouse-like cursor that lets you select, click, and drag different objects in the scene. You are supposed to be stationary and move one of the controllers as if it were a mouse. However, we want to modify the behaviour of the controller to better fit the 3D environment: when an object is not selected, we want to interpolate the depth of the cursor based on the nearest objects. There is a paper we were shown in class that we are supposed to draw inspiration from, and it achieved this kind of cursor behaviour with a normal mouse, but I can't seem to find any information on how they did it. Our final goal is to compare both ways of managing the scene and assess which one is better. We are using Unity with VRTK, as suggested by our professor, but we can't really seem to access the mouse pointer's code for how it moves or behaves, and we are kind of lost on where to go. Could someone help with this?
Here is the paper where they talk about it:
https://dl.acm.org/doi/pdf/10.1145/3491102.3501884
We so far have tried creating a simple scene and adding objects with different behaviours as well as a controller instance, but we seem to only be able to modify the events of the mouse and not its specific behaviour.
Kind regards and thanks

How to animate grabbing in Unity VR?

I've been googling this for 10 hours and I'm running out of ideas. I found several video tutorials and written tutorials, but none of them really works, or they're overkill.
I have a Unity (2020.3.16f) VR project designed to run on the Quest 2. I'm not using OpenXR. I have already created a hand, created one simple grabbing animation, added the animation to an Animator, created transitions, and set an "IsGrabbed" parameter. Now I'm looking for a simple way to set "IsGrabbed" to true/false whenever I grab/release anything. I'm expecting something like this:
public class grabber : MonoBehaviour
{
    Animator animator;
    ???

    // Start is called before the first frame update
    void Start()
    {
        ???
    }

    // Update is called once per frame
    void Update()
    {
        if (???)
        {
            animator.SetBool("IsGrabbing", true);
        }
        else if (???)
        {
            animator.SetBool("IsGrabbing", false);
        }
    }
}
Please help. We're talking about VR here, so I'm sure a grabbing animation is the very basics of the very basics of the very basics. It can't be that difficult.
Best regards
First of all, I highly recommend watching this video by Justin P Barnett to get a much better overview of how this all works.
If you don't want to go that route for some reason, there are other options available to you. One such option is the "Player Input" component, which can act as a bridge between your input devices and your code. Most XR packages these days use the new Input System package, and it makes life easier, so I will assume you have that installed.
First, you will need to create an Input Actions asset, which can be done in the project pane: right-click -> Create -> Input Actions. There are many tutorials which explain this asset in detail, but here is a simple setup to get you started. Double click on the new asset to open the editing window, and create a new Action Map. In the "Actions" list, create a new action with action type Value, Control Type Axis, and in the dropdown arrow on your new action set the path to the input source. As an example source path, I will use XR Controller -> XR Controller -> XR Controller (Left Hand) -> Optional Controls -> grip. Make sure to click Save Asset before closing the window.
Create a script similar to this:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

public class ControllerInputReceiver : MonoBehaviour {
    public void FloatToScale(InputAction.CallbackContext context) {
        float val = 0.1f + 0.1f * context.ReadValue<float>();
        transform.localScale = new Vector3(val, val, val);
    }
}
Create a cube somewhere visible in your scene, and add the Input Action Manager component to it, and drag your created Input Actions asset to its list of Action Assets. Then add the ControllerInputReceiver script. Also on this cube, create a Player Input component and drag your Input Actions asset to its Actions element. Choose your map as the default map and change behavior to Invoke Unity Events. Under the events drop down, you should see an element for the Action you created earlier. Drop your Controller Input Receiver component into this Action and select the FloatToScale function.
In theory it should work at this point. Build the game to your device and see if pulling the grip causes the cube to resize. If it does, then you can replace your Update function with:
void SetGrabbing(InputAction.CallbackContext context) {
    animator.SetBool("IsGrabbing", context.ReadValue<float>() > cutoff);
}
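For reference, here is a minimal sketch of what the whole grabbing script could end up looking like. It assumes you wire SetGrabbing up to your grip action through the Player Input component's Unity Events, exactly like FloatToScale above; the cutoff field and its default value are my own additions, not part of the original setup.

using UnityEngine;
using UnityEngine.InputSystem;

public class Grabber : MonoBehaviour
{
    // Animator that contains the "IsGrabbing" bool parameter.
    [SerializeField] Animator animator;

    // Grip value above which we consider the hand to be grabbing (assumed threshold).
    [SerializeField] float cutoff = 0.5f;

    // Hook this up to the grip action via the Player Input component's Unity Events.
    public void SetGrabbing(InputAction.CallbackContext context)
    {
        animator.SetBool("IsGrabbing", context.ReadValue<float>() > cutoff);
    }
}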
If you are still having issues at this point, I really recommend checking out these YouTube channels. I only started VR a couple of months ago and learned everything I know so far from these people: JustinPBarnett, VRwithAndrew, ValemVR, LevelUp2020. (Links removed because it kept screwing up my post.)
Note, the new input system has button options instead of value/axis options for VR devices. These may be closer to what you want, but I had no luck getting them to work today.
Also note, depending on how you organize your code, you may or may not need the "Input Action Manager" component somewhere in your scene with your input actions asset in its list. It enables your actions for you, without you needing to do this programmatically.
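If you do end up enabling the actions from code instead of using that component, a minimal sketch (the field and class names are my own; it assumes a reference to your Input Actions asset) would be:

using UnityEngine;
using UnityEngine.InputSystem;

public class EnableActions : MonoBehaviour
{
    // Drag your Input Actions asset here in the inspector.
    [SerializeField] InputActionAsset actions;

    void OnEnable()
    {
        // Enables every action map in the asset, like Input Action Manager does.
        actions.Enable();
    }

    void OnDisable()
    {
        actions.Disable();
    }
}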
Another solution would be:
Using the OVR plugin and the Hands prefab (for left and right), check whether the rotation of each of the fingers on a specific axis (e.g. the Z axis) falls within a specific range, meaning the fingers are grasping/flexed.
By referencing, for example, the Transform of b_l_index1, which is the corresponding part of the index finger within the model, you can check its local rotation every frame and trigger the grasping event when all fingers are rotated to a specific angle, and subsequently trigger the animation you need.
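A rough sketch of that idea using plain Transform checks is below; the bone list, axis, and angle threshold are assumptions you would tune against your own hand model:

using UnityEngine;

public class FingerFlexGrabDetector : MonoBehaviour
{
    // Assign the finger bone transforms from the OVR hand model (e.g. b_l_index1) in the inspector.
    [SerializeField] Transform[] fingerBones;

    // Local Z rotation (in degrees) above which a finger counts as flexed (assumed value).
    [SerializeField] float flexAngle = 40f;

    [SerializeField] Animator animator;

    void Update()
    {
        bool allFlexed = true;
        foreach (var bone in fingerBones)
        {
            // DeltaAngle keeps the comparison sane when eulerAngles wraps past 360.
            float z = Mathf.Abs(Mathf.DeltaAngle(0f, bone.localEulerAngles.z));
            if (z < flexAngle)
            {
                allFlexed = false;
                break;
            }
        }

        animator.SetBool("IsGrabbing", allFlexed);
    }
}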

How much effort does it take to replace a VR headset with three monitors?

I have a Unity project. It was developed for VR headset training. However, users feel very dizzy after playing the game. Now I want to use 3 monitors to replace the VR headset, so that users can look at the 3 monitors while driving. Is it a big effort to change the code to achieve this? What would I need to do so that the software can run on the monitors?
Actually it is quite simple:
See Unity Manual Multi-Display
In your Scene, have 3 Camera objects and set their corresponding Camera.targetDisplay via the Inspector (the Inspector labels displays starting at 1; see the sketch at the end of this answer if you prefer to do this from code).
To make them follow the vehicle correctly, simply make them children of the vehicle object; then they are always rotated and moved along with it. Now position and rotate them relative to the vehicle according to your needs.
In PlayerSettings → XR Settings (at the bottom), disable Virtual Reality Supported, since you do not want a VR HMD to move the Camera; it should be controlled only by the vehicle transform.
Then you also have to activate the corresponding Displays (0-indexed, where 0 is the default monitor, which is always enabled), in your case e.g.:
private void Start()
{
    Display.displays[1].Activate();
    Display.displays[2].Activate();
}
I don't know exactly how the "second" or "third" connected monitor is defined, but I guess it should match the monitor numbering in the system display settings.
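If you prefer to assign the target displays from code rather than through the Inspector, a minimal sketch could look like the following (the cameras array is an assumed field; Camera.targetDisplay is 0-indexed even though the Inspector labels start at 1):

using UnityEngine;

public class AssignDisplays : MonoBehaviour
{
    // The three child cameras of the vehicle, in left/center/right order (assumed setup).
    [SerializeField] Camera[] cameras;

    void Awake()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            // targetDisplay is 0-indexed even though the Inspector shows "Display 1", "Display 2", ...
            cameras[i].targetDisplay = i;
        }
    }
}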

Cannot Animate Interactable Gameobject using Mixed Reality Toolkit

I am a bit of a novice with the Unity Engine and Mixed Reality App development so please bear with me.
I have been working with the Microsoft Mixed Reality Toolkit for Unity to try and animate a game object and move it to the side. A simple action, very similar to an example scene provided by Microsoft with the toolkit called "InteractableObject" (Information links provided below)
Interactable Object - Mixed Reality (Microsoft Docs)
Mixed Reality Toolkit-Unity Interactable Objects and Receivers (Github)
This example scene in Unity has multiple objects to be used as "buttons". With the Mixed Reality Toolkit, even an object that you simply want the user to interact with to perform some sort of action when selected is considered a button, at least according to the documentation I have been able to find on the subject. Here is a series of screenshots depicting the inspector panels for my GameObject and the container for my object:
GameObject Inspector Panel
GameObject Container Inspector Panel (Part 1)
GameObject Container Inspector Panel (Part 2)
I am trying to make a single game object move to the side when I place the standard cursor on it. The same action is done with a balloon object in the example scene I mentioned. I have created the animator and the state machine the same way they did in their example, and set up my game object in an almost identical format. The only real difference is that they created a balloon object themselves, while I am using a different set of custom models from my company.
When I attempt to play the app in the Unity Editor, the state does not change when I place the cursor on the object. I can force the state to change using the editor and the required animation engages, but it will not change the state on its own. I configured my state machine the same as the Microsoft example and set up my state variable the same way as well. It should move from an "Observation" state to a "Targeted" or "ObservationTargeted" state when the cursor moves onto the object. A screenshot of the GameObject state machine and the inspector panel of the specific transition in question are provided below:
GameObject Animator State Machine Setup
Observation to ObservationTargeted Transition Inspector Panel
I went through and verified that all the components added by the Mixed Reality Toolkit are the same, and they are. This includes the DefaultCursor, InputManager, MixedRealityCameraParent and Directional Light. I also checked that all the scripts are coded the same, and they are. I am running out of places to look. I attached the Visual Studio debugger to the project in Unity and verified that it just isn't changing the state on its own, but I cannot figure out why. I believe the problem has something to do with the setup of the transition, but I haven't been able to find the issue. All of the other mentioned components are provided by Microsoft and are not changed by me, nor are they changed in the sample scene.
If anyone else has had a similar problem or may know where I can look to find the problem, please let me know. I haven't even built the project into a UWP application yet.
I know it's been a few months, but are you still looking for a solution?
With the newest version of the Mixed Reality Toolkit you can make any GameObject act as a button. Simply read this documentation. I have some cubes as buttons in my Unity project, and the only extra component I added to make it work was Interactable, which comes with the Mixed Reality Toolkit.
If you want to trigger some animation when you place the cursor on the object (or look at it, if you're going to use it with HoloLens), you can add it in the Interactable component by adding a new Event (for example, an OnFocus() event).
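If you would rather do it from a script than through the Interactable events, one option is MRTK2's IMixedRealityFocusHandler interface; a minimal sketch (the "Targeted" parameter name is just an example, match it to your own state machine) could be:

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

public class FocusAnimationTrigger : MonoBehaviour, IMixedRealityFocusHandler
{
    // Animator driving your Observation/Targeted states.
    [SerializeField] Animator animator;

    // Note: the object also needs a collider so MRTK's focus/gaze can hit it.
    public void OnFocusEnter(FocusEventData eventData)
    {
        animator.SetBool("Targeted", true);
    }

    public void OnFocusExit(FocusEventData eventData)
    {
        animator.SetBool("Targeted", false);
    }
}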
Hope this helps in any way.

Positioning 3d objects for AR in Unity3d

I'm experimenting with an AR experience in Unity3D. I'd like to place models in my Unity scene and have them show up on top of real-world objects using Tango. I'm using Tango's augmentedReality scene as a starting point.
Say there is a table in a room and I want a 3D cube to sit on top of it when it is in Tango's view. Do I need to be using an .adf file to solve this problem, or is there something else I should be looking into?
Is there some way to test an .adf file locally in my Unity scene? This would be ideal for establishing and debugging the correct positions to place models in my scene.
Just trying to sort everything out.
If you want to keep your virtual objects' positions persistent between different runs of the application, you will need an ADF file to relocalize. Unfortunately, there are no in-editor debug functions for ADF at the moment, so you will need to create a program to place the objects.
You could take a look at the Experiments/PersistentState example for reference. This example is not using AR; however, it saves object positions with respect to your ADF's origin and keeps them persistent.
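The core idea of that example, independent of any Tango-specific API, is storing each object's pose relative to an origin transform (the ADF origin) rather than in world space. A generic sketch of that pattern (the class and field names are my own illustration, not the sample's code):

using UnityEngine;

public class RelativePoseSaver : MonoBehaviour
{
    // Transform representing the ADF origin (or any persistent anchor).
    [SerializeField] Transform adfOrigin;

    // Convert a world-space position into the anchor's local space before saving it.
    public Vector3 ToAnchorSpace(Vector3 worldPosition)
    {
        return adfOrigin.InverseTransformPoint(worldPosition);
    }

    // After relocalizing against the same ADF, convert the saved position back to world space.
    public Vector3 ToWorldSpace(Vector3 anchorLocalPosition)
    {
        return adfOrigin.TransformPoint(anchorLocalPosition);
    }
}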