How to animate grabbing in Unity VR?

I've been googling this for 10 hours and I'm running out of ideas. I found several video tutorials and written tutorials, but none of them really work, or they're overkill.
I have a VR Unity (2020.3.16f) project designed to run on the Quest 2. I'm not using OpenXR. I already created a hand, created one simple grabbing animation, added the animation to an Animator, created transitions, and set an "IsGrabbed" parameter. Now I'm looking for a simple way to set "IsGrabbed" to true/false whenever I grab/release anything. I'm expecting something like this:
public class grabber : MonoBehaviour
{
    Animator animator;
    ???

    // Start is called before the first frame update
    void Start()
    {
        ???
    }

    // Update is called once per frame
    void Update()
    {
        if (???)
        {
            animator.SetBool("IsGrabbing", true);
        }
        else if (???)
        {
            animator.SetBool("IsGrabbing", false);
        }
    }
}
Please help. We're talking about VR here, so I'm sure a grabbing animation is the very basics of the very basics of the very basics. It can't be that difficult.
Best regards

First of all, I highly recommend watching this video by Justin P Barnett to get a much better overview of how this all works.
If you don't want to go that route for some reason, there are other options available to you. One such option is the "Player Input" component, which can act as a bridge between your input devices and your code. Most XR packages these days use the new Input System package, and it makes life easier, so I will assume you have that installed.
First, you will need to create an Input Actions asset, which can be done in the project pane: right-click -> Create -> Input Actions. There are many tutorials which explain this asset in detail, but here is a simple setup to get you started. Double click on the new asset to open the editing window, and create a new Action Map. In the "Actions" list, create a new action with action type Value, Control Type Axis, and in the dropdown arrow on your new action set the path to the input source. As an example source path, I will use XR Controller -> XR Controller -> XR Controller (Left Hand) -> Optional Controls -> grip. Make sure to click Save Asset before closing the window.
Create a script similar to this:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

public class ControllerInputReceiver : MonoBehaviour {

    public void FloatToScale(InputAction.CallbackContext context) {
        float val = 0.1f + 0.1f * context.ReadValue<float>();
        transform.localScale = new Vector3(val, val, val);
    }
}
Create a cube somewhere visible in your scene, add the Input Action Manager component to it, and drag your created Input Actions asset into its list of Action Assets. Then add the ControllerInputReceiver script. Also on this cube, create a Player Input component and drag your Input Actions asset to its Actions element. Choose your map as the default map and change the behavior to Invoke Unity Events. Under the events dropdown, you should see an element for the action you created earlier. Drop your ControllerInputReceiver component into this action and select the FloatToScale function.
In theory it should work at this point. Build the game to your device and see if pulling the grip causes the cube to resize. If it does, then you can replace your Update function with:
void SetGrabbing(InputAction.CallbackContext context) {
    animator.SetBool("IsGrabbing", context.ReadValue<float>() > cutoff);
}
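Putting the pieces together, a sketch of the full grabber from the question might look like this (cutoff is an assumed threshold field, and the method must be public so the Player Input Unity Events can call it, the same way FloatToScale was wired up above):

using UnityEngine;
using UnityEngine.InputSystem;

public class Grabber : MonoBehaviour
{
    Animator animator;

    // Assumed threshold: grip values above this count as grabbing.
    [SerializeField] float cutoff = 0.5f;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Wire this to your grip action's Unity Event on the Player Input component.
    public void SetGrabbing(InputAction.CallbackContext context)
    {
        animator.SetBool("IsGrabbing", context.ReadValue<float>() > cutoff);
    }
}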
If you are still having issues at this point, I really recommend checking out these YouTube channels: JustinPBarnett, VRwithAndrew, ValemVR, LevelUp2020. I only started VR a couple of months ago and learned everything I know so far from these people. (Links removed because they kept screwing up my post.)
Note, the new input system has button options instead of value/axis options for VR devices. These may be closer to what you want, but I had no luck getting them to work today.
Also note, depending on how you organize your code, you may or may not need the "Input Action Manager" component somewhere in your scene with your input actions asset in its list. It enables your actions for you, without you needing to do this programmatically.
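If you ever do want to enable an action programmatically instead, a minimal sketch might look like this (the "XRLeftHand" map name and "Grip" action name are assumptions; use whatever names you created in your asset):

using UnityEngine;
using UnityEngine.InputSystem;

public class ActionEnabler : MonoBehaviour
{
    [SerializeField] InputActionAsset inputActions; // drag your Input Actions asset here
    InputAction grip;

    void OnEnable()
    {
        // Look up the action by map and action name, then enable it.
        grip = inputActions.FindActionMap("XRLeftHand").FindAction("Grip");
        grip.Enable();
    }

    void OnDisable()
    {
        grip.Disable();
    }
}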

Another solution would be:
Using the OVR plugin and the Hands prefab (for left and right), check whether the rotation of each of the fingers on a specific axis (e.g. the Z-axis) falls within a specific range, meaning the fingers are grasping/flexed.
By referencing, for example, the Transform of b_l_index1, which is the corresponding part of the index finger within the hand model, you can check its local rotation every frame and trigger the grasping event when all fingers are rotated past a specific angle, subsequently triggering the animation you need.
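A minimal sketch of that idea might look like this (the bone list, the Z-axis, and the 45-degree threshold are all assumptions; check your Hands prefab's actual hierarchy and rotation axes):

using UnityEngine;

public class FingerCurlGrabDetector : MonoBehaviour
{
    // Drag the finger bone transforms from the OVR hand model here,
    // e.g. b_l_index1, b_l_middle1, b_l_ring1 (names depend on the prefab).
    public Transform[] fingerBones;
    public Animator animator;

    // Assumed threshold: local Z rotation beyond which a finger counts as flexed.
    public float flexAngle = 45f;

    void Update()
    {
        bool allFlexed = true;
        foreach (Transform bone in fingerBones)
        {
            // localEulerAngles.z is 0..360; DeltaAngle maps it back to -180..180.
            float z = Mathf.Abs(Mathf.DeltaAngle(0f, bone.localEulerAngles.z));
            if (z < flexAngle)
            {
                allFlexed = false;
                break;
            }
        }
        animator.SetBool("IsGrabbing", allFlexed);
    }
}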

Related

How to enable/disable Locomotion System components via button click

Pretty much what the title says.
I'm creating a VR game in Unity and I want the player to be able to change their movement and camera settings via an in-game menu. What's giving me trouble at the moment is figuring out how to retrieve each (or just one) component (Action-Based Continuous Move, Action-Based Snap Turn, etc.) so that I can set them to true/false.
My idea was to set up a Canvas/Panel/Button on click event that passed a string into a function, and based on that string value, used if-statements to determine which components to enable/disable.
The script itself grabs the game object using: GameObject locomotion = GameObject.Find("Locomotion System");.
I tried using GetComponent<ActionBasedContinuousMoveProvider>().enabled = false; however, it didn't work like I'd hoped.
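For reference, a minimal sketch of the approach described above, assuming the XR Interaction Toolkit's action-based providers (the "move"/"snapTurn" string values are hypothetical):

using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

public class LocomotionSettingsMenu : MonoBehaviour
{
    ActionBasedContinuousMoveProvider moveProvider;
    ActionBasedSnapTurnProvider snapTurnProvider;

    void Start()
    {
        GameObject locomotion = GameObject.Find("Locomotion System");
        moveProvider = locomotion.GetComponent<ActionBasedContinuousMoveProvider>();
        snapTurnProvider = locomotion.GetComponent<ActionBasedSnapTurnProvider>();
    }

    // Hook this to the button's OnClick event, passing a string such as "move" or "snapTurn".
    public void Toggle(string setting)
    {
        if (setting == "move")
            moveProvider.enabled = !moveProvider.enabled;
        else if (setting == "snapTurn")
            snapTurnProvider.enabled = !snapTurnProvider.enabled;
    }
}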

Unity Navmesh bake multiple maps in the same scene

I have a scene with multiple maps, and when the game starts I randomly activate one of the maps and keep the others inactive.
My problem is that I can only have one of the maps baked, because when I bake another map, the previous bake is overwritten.
I searched other posts and tried baking at runtime, but it seems it's not possible.
Is there a way to have multiple bakes, or to bake only the active map at runtime?
To solve the problem, you can call the bake navigation code after the level changes.
Bake Navigation Inside Unity Editor
This code works similarly to what the Unity Editor does. Just make sure your Navigation Static objects are enabled before using it.
The NavMeshBuilder class allows this, as in the code below.
using UnityEditor.AI;
...
public void Generate()
{
    GenerateLevel(); // for e.g.
    NavMeshBuilder.BuildNavMesh();
}
Bake Navigation at Runtime
To bake at runtime, you need to download the NavMeshComponents package.
The package gives you a NavMeshSurface component. It does not require static navmesh and works locally. Add the component to all of your ground objects, then put them in a list, as in the code below. On each run of the game, it is enough to call BuildNavMesh on all or some of them.
public List<NavMeshSurface> surfaces; // NavMeshSurface lives in UnityEngine.AI once NavMeshComponents is installed

public void Start()
{
    GenerateLevel(); // for e.g.
    surfaces.ForEach(s => s.BuildNavMesh());
}
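Since only one map is active at a time in this question, a variation might bake just that map's surface (activeMap is an assumed reference to the randomly chosen map root):

using UnityEngine;
using UnityEngine.AI;

public class ActiveMapBaker : MonoBehaviour
{
    public GameObject activeMap; // assumed: set this to the randomly activated map

    public void Start()
    {
        // Bake only the surface that belongs to the active map.
        activeMap.GetComponent<NavMeshSurface>().BuildNavMesh();
    }
}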
This tutorial from Brackeys will also help a lot.

Unity MRTK with HoloLens 2: How to detect whether a hand touches a GameObject in code?

I have been working with Unity 2020.3 LTS, the Windows XR Plugin, and the amazing MRTK 2.7.0 to port an existing application to HoloLens 2.
In this application I have a scene with several GameObjects in it, and I need to detect whether a hand touches a GameObject (either with the index fingertip near interaction or the pinch gesture far interaction). The important part here is that this detection needs to happen in a central script in the scene (i.e. maybe have the hand as an object in the code) and not from the view of the touched GameObject itself.
I have successfully implemented the latter using this example with the two code examples below on that page, but the touched GameObject itself firing events via a listener does not work well with my use case. I need to detect the touch from the hand's perspective, so to speak.
I have searched the web and the Microsoft MRTK documentation several times for this and unfortunately I could not find anything remotely helpful. For head-gaze the documentation has a super simple code example that works beautifully: Head-gaze in Unity. I need the exact same thing for detecting when a hand touches a GameObject.
Eventually I will also need the same thing for eye-tracking when looking at a GameObject, but I have not looked into this yet and right now the hand interaction is giving me headaches. I hope someone can help me with this. Thanks in advance :).
but the touched GameObject itself firing events via a listener does not work well with my use case.
Why does the event not work? Could you provide more detail about it?
In addition to NearInteractionTouchable, have you tried the Interactable component? It's usually attached to the touched GameObject and fires the event receiver when it catches input actions. In the event receiver (in the component UI), you can add any function attached to any object as the listener, such as a central script in the scene. It should be an effortless way to meet your request. For more information, please see: Event
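As a rough sketch, registering a central script as a listener from code might look like this (assuming MRTK 2.x's Interactable; the Debug.Log is just a placeholder for your own handler):

using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class CentralInteractableListener : MonoBehaviour
{
    [SerializeField] Interactable interactable; // the Interactable on the touched GameObject

    void Start()
    {
        // Register the central script as a listener for the object's input actions.
        interactable.OnClick.AddListener(() => Debug.Log($"{interactable.name} was selected"));
    }
}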
After some additional fiddling around, I was able to get it to work the way I want/need with the Touch Code Example. The solution was to create an empty GameObject variable in the central script that is continuously checked for null. The touch handler on the GameObject binds the object to that variable for as long as it is touched, and sets it back to null once it is no longer touched. This allows the central script to work with the touched GameObject for as long as it is touched.
CentralScript centralScript;
PointerHandler pointerHandler;

void Start()
{
    centralScript = GameObject.Find("Scripts").GetComponent<CentralScript>();

    NearInteractionTouchableVolume touchable = gameObject.AddComponent<NearInteractionTouchableVolume>();
    touchable.EventsToReceive = TouchableEventType.Pointer;

    pointerHandler = gameObject.AddComponent<PointerHandler>();
    pointerHandler.OnPointerDown.AddListener((e) =>
    {
        centralScript.handTouchGameObject = gameObject;
    });
    pointerHandler.OnPointerUp.AddListener((e) =>
    {
        centralScript.handTouchGameObject = null;
    });
}
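For completeness, the central script's side of this pattern might look like the following sketch (the CentralScript name and handTouchGameObject field come from the code above; the Update body is illustrative):

using UnityEngine;

public class CentralScript : MonoBehaviour
{
    // Set by the touched object's PointerHandler while it is being touched, null otherwise.
    public GameObject handTouchGameObject;

    void Update()
    {
        if (handTouchGameObject != null)
        {
            // Work with the currently touched object here.
            Debug.Log($"Hand is touching {handTouchGameObject.name}");
        }
    }
}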

How to mimic HoloLens 2 hand tracking with Windows Mixed Reality controllers [MRTK2]?

The HoloLens 2 will feature hand tracking and the ability to reach out and poke UI elements. With Unity and the Mixed Reality Toolkit V2, the input for hand-tracked near interactions (i.e. poking) comes from the PokePointer class, which generates events for GameObjects that have BaseNearInteractionTouchable components.
My question is how can we get the same PokePointer events from virtual reality controllers such as the Windows Mixed Reality controllers? This would make it possible to prototype on the desktop using a VR headset and even directly use the same near interactions of the Mixed Reality Toolkit within VR applications.
Can the PokePointer component be attached to a hand GameObject that is a controller model? Or is there a better way to do this through the MRTK profiles system?
Actually, it's possible to add a poke pointer and a grab pointer to a VR device. In fact, adding basic functionality without visualization can be done without even writing any code!
Getting the existing grab and poke pointers to work with VR
Open your current pointer configuration profile by selecting the MixedRealityToolkit object in the scene view, going to the inspector window, then navigating to Input -> Pointers.
Under pointer options, set the controller type for the PokePointer and the Grab Pointer to include your VR controller type (in my case it was Windows Mixed Reality, though you may wish to use OpenVR).
The poke pointer is configured to follow the Index Finger Pose, which does not exist for VR. So you will need to open the PokePointer.prefab file and, in the inspector, under Poke Pointer -> Pose Action, set the value to "Pointer Pose".
Hit play. The grab pointer will be slightly below and to the right of the motion controller gizmo, and the poke pointer will appear right at the origin.
Bonus: Improving the grab, poke pointers by using custom pointer
You can greatly improve the pointers you have by using custom pointers instead of the default pointers. For example, you can:
Have the poke pointer offset from the gizmo origin by setting the PokePointer's raycastorigin field to a custom transform
Add visuals to actually show where the pointers are
I've created an example that demonstrates a custom grab and poke pointer which visualizes the grab and poke locations, and also offsets the poke position to be more convenient. You can download a unitypackage of the sample here, or just clone the mrtktips repository and look at the VRGrabPokePointers scene.
Note: to get the visuals to actually show up, use the following script (pointers currently disable all renderers on startup to avoid flickering).
using UnityEngine;
public class EnableRenderers : MonoBehaviour
{
    void Start()
    {
        foreach (var renderer in GetComponentsInChildren<Renderer>())
        {
            renderer.enabled = true;
        }
    }
}
You can see an example of a custom MRTK and pointer profile in the example here, and also in the VRGrabPokePointersUnity scene.

Unity3d 3d skybox not quite right

I'm messing around in Unity3D in order to learn it.
I had a crack at making my own 3D skybox, like in the Source engine, for example. I'm using the standard 1st person controller.
I made another camera with the same FOV for my skybox, and slaved it to the camera in the 1st person controller using the script below, which I put on my skybox camera.
(The Maincam field has the 1st person controller camera component in it.)
using UnityEngine;
using System.Collections;
public class CameraSlave : MonoBehaviour {

    public Component Maincam;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        transform.rotation = Maincam.transform.rotation;
    }
}
You can see the result here. It's a bit funny. (The big tetrahedron shape in the background is in my skybox; everything else is normal.)
As far as I understand it, as long as the camera FOV is the same, it doesn't matter what size my skybox objects are.
I think the problem is that there is some lag, maybe? Like the Update in the code above is being called one frame too late? I tried calling that update from the 1st person controller's mouse look script, but as well as getting loads of errors, the result was the same.
I can't visualize your example, btw:
I think the problem is that there is some lag, maybe? Like the Update in the code above is being called one frame too late? I tried calling that update from the 1st person controller's mouse look script, but as well as getting loads of errors, the result was the same.
You can't rely on the order in which Update methods are called by the engine (unless you force a particular order, but that isn't generally a good choice). For camera update operations it's better to use LateUpdate, which is guaranteed to run after all Update methods.
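For example, the CameraSlave script from the question only needs its Update renamed to LateUpdate:

using UnityEngine;

public class CameraSlave : MonoBehaviour {

    public Component Maincam;

    // LateUpdate is guaranteed to run after all Update calls, so the main
    // camera has already rotated this frame before we copy its rotation.
    void LateUpdate () {
        transform.rotation = Maincam.transform.rotation;
    }
}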