Cannot Animate Interactable Gameobject using Mixed Reality Toolkit - unity3d

I am a bit of a novice with the Unity Engine and Mixed Reality App development so please bear with me.
I have been working with the Microsoft Mixed Reality Toolkit for Unity to try to animate a game object and move it to the side. It is a simple action, very similar to one in an example scene provided by Microsoft with the toolkit called "InteractableObject" (information links provided below):
Interactable Object - Mixed Reality (Microsoft Docs)
Mixed Reality Toolkit-Unity Interactable Objects and Receivers (Github)
This example scene in Unity has multiple objects to be used as "buttons". With the Mixed Reality Toolkit, even an object that you want the user to interact with to perform some sort of action when selected is considered a button, at least according to the documentation I have been able to find on the subject. Here is a series of screenshots depicting the inspector panels for my GameObject and the container for my object:
GameObject Inspector Panel
GameObject Container Inspector Panel (Part 1)
GameObject Container Inspector Panel (Part 2)
I am trying to make a single game object move to the side when I place the standard cursor on it. The same action is done with a balloon object in the example scene I mentioned. I have created the animator and the state machine the same way they did in their example and set up my game object in an almost identical format. The only real difference is that they created a balloon object themselves, while I am using a different set of custom models from my company.
When I attempt to play the app back in the Unity Editor, the state does not change when I place the cursor on the object. I can force the state to change using the editor and the required animation plays, but it will not change state on its own. I configured my state machine the same as the Microsoft example and set up my state variable the same way as well. It should move from an "Observation" state to a "Targeted" or "ObservationTargeted" state when the cursor moves onto the object. A screenshot of the GameObject state machine and the inspector panel of the specific transition in question are provided below:
GameObject Animator State Machine Setup
Observation to ObservationTargeted Transition Inspector Panel
I went through and verified that all components added by the Mixed Reality Toolkit are the same, and they are. This includes the DefaultCursor, InputManager, MixedRealityCameraParent and Directional Light. I also checked that all the scripts are coded the same, and they are. I am running out of places to look. I attached the Visual Studio debugger to the project in Unity and verified that it just isn't changing the state on its own, but I cannot figure out why. I believe the problem has something to do with the setup of the transition, but I haven't been able to find the issue. All of the other mentioned components are provided by Microsoft and are not changed by me, nor are they changed in the sample scene.
If anyone else has had a similar problem or may know where I can look to find the problem, please let me know. I haven't even built the project into a UWP application yet.

I know it's been a few months, but are you still looking for a solution?
With the newest version of the Mixed Reality Toolkit you can make any GameObject act as a button. Simply read this documentation. I have some cubes as buttons in my Unity project, and the only extra component I had to add to make them work was Interactable, which comes with the Mixed Reality Toolkit.
If you want to trigger an animation when you place the cursor on the object (or look at it, if you're going to use it with HoloLens), you can add it to the Interactable component by adding a new event (for example, an OnFocus() event).
Hope this helps in some way.
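For example, here is a minimal sketch of a script whose public method you could wire to the Interactable's OnFocus() event in the inspector. The Animator reference and the trigger name "Targeted" are assumptions; match them to your own state machine:

using UnityEngine;

// Sketch: fire an Animator trigger when the object gains focus.
// Wire OnFocusEnter() to the Interactable's OnFocus() event in the inspector.
// The trigger name "Targeted" is an assumption; use whatever your state machine expects.
public class FocusAnimation : MonoBehaviour
{
    public Animator animator;

    public void OnFocusEnter()
    {
        animator.SetTrigger("Targeted");
    }
}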

Related

MRTK 2.7.3 - Outline Shader is not visible on HoloLens

I saw the scene OutlineExamples in the MRTK examples package and recreated it in my own project.
The outlining works while I stay in Unity in play mode, but if I deploy it to the HoloLens the object does not get an outline effect.
The OutlineExamples scene from the MRTKHub project works as expected on the HoloLens!
So I guess I missed something in my own project, but I can't find it. I compared the setup multiple times but can't find a difference, and I also used the simplest object (the cube) from the example scene.
Setup for the cube
(the screenshot shows my project on the left side and the mrtkhub project on the right side):
Mesh Filter (standard)
Mesh Renderer (standard)
Box Collider (standard)
MeshOutline with the Material "OutlineOrange" or "OutlineGreen" (added)
Object Manipulator (added)
Constraint Manager (added)
The only thing I had to set up after adding the components marked as "added" was the material for the MeshOutline component.
Is there something else someone has to setup to see the outline shader on the HoloLens?
My Setup:
Unity 2020.3.30
MRTK 2.7.3
Visual Studio 2019
What else did I check?
The XR Plug-in Management is set up the same way
--EDIT
I noticed something strange and I guess this will help someone who knows more about shaders!
I launched my application on the HoloLens, grabbed the cube and put it in front of a window in my room. While the cube was in front of the window, I saw the outline! But as soon as I moved it outside the window area, the outline disappeared! Another aspect is that I'm using the spatial mapping from MRTK. That means the window does not get meshed, only the walls. And I guess the walls have their own shader on them, right?
So the spatial mesh shader and the outline shader "don't like each other". Is this possible?
The user derHugo gave me a hint that led to the solution! I went to the material that I use on the cube and changed the Render Queue Override property under Advanced Options to a higher value than that of the MRTK_Occlusion material, which is used for the spatial mapping.
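If you ever need to apply the same fix from code at runtime, a minimal sketch could look like the following; the material reference and the queue value of 3000 are assumptions, the value just has to be higher than the queue used by MRTK_Occlusion:

using UnityEngine;

// Sketch: raise the outline material's render queue at runtime so it draws
// after the spatial-mapping occlusion material. The value 3000 is only an
// example; it must be higher than the queue of MRTK_Occlusion.
public class RaiseOutlineRenderQueue : MonoBehaviour
{
    public Material outlineMaterial;
    public int renderQueue = 3000;

    void Start()
    {
        if (outlineMaterial != null)
        {
            outlineMaterial.renderQueue = renderQueue;
        }
    }
}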

Hololens + Unity: GameObjects are invisible

After I build my Unity project and send it to the Hololens, I have the following problem:
The splash screen appears, followed by a debugging window at the bottom. In the background is a white net. However, you can't see any game objects. I've tested a lot but haven't found a solution. Visual Studio does not display any error messages. Here is roughly what I've looked at:
These are my modules. I'm using Unity version 2019.4.22f1 and MRTK Foundation Toolkit 2.7.2.
My build settings
My project settings
I tried to place the objects in the middle of the camera and changed the colors.
MRTK settings (I haven't changed anything for the most part)
Main camera settings
My scene
When I start the scene I get this error in the console. I don't know if this has anything to do with my problem.
I have two possible solutions (no guarantee):
You could spawn the objects on input directly in front of the camera and add a Debug.Log("object in front of you"); so you can find the issue (see the sketch below this list).
If this doesn't work, I would try to test different types of materials, like you do with HDRP.
If this does not work either, I probably can't help you out for now.
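A minimal sketch of the first suggestion, assuming you can assign some simple test prefab (e.g. a cube) in the inspector; the key binding is just for testing in the editor:

using UnityEngine;

// Sketch: spawn a test object directly in front of the camera and log it,
// so you can tell whether anything renders at all. "testPrefab" is a
// placeholder for any simple prefab, e.g. a cube.
public class SpawnInFront : MonoBehaviour
{
    public GameObject testPrefab;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            var cam = Camera.main.transform;
            Instantiate(testPrefab, cam.position + cam.forward * 1.5f, Quaternion.identity);
            Debug.Log("object in front of you");
        }
    }
}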
It seems like your GameObject is far enough away that it is hidden behind the spatial mesh. Please make the spatial mesh invisible by setting the Display Option property of the Spatial Mesh Observer Setting to None; this item can be found under the Spatial Awareness profile of the MRTK profile.
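If you prefer to hide the spatial mesh from code at runtime instead of editing the profile, a minimal sketch based on the MRTK 2.x spatial awareness API looks roughly like this:

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;
using UnityEngine;

// Sketch: turn off spatial mesh rendering at runtime so it cannot occlude
// your GameObjects. Assumes a single registered mesh observer.
public class HideSpatialMesh : MonoBehaviour
{
    void Start()
    {
        var observer = CoreServices.GetSpatialAwarenessSystemDataProvider<IMixedRealitySpatialAwarenessMeshObserver>();
        if (observer != null)
        {
            observer.DisplayOption = SpatialAwarenessMeshDisplayOptions.None;
        }
    }
}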

How to mimic HoloLens 2 hand tracking with Windows Mixed Reality controllers [MRTK2]?

The HoloLens 2 will feature hand tracking and the ability to reach out and poke UI elements. With Unity and the Mixed Reality Toolkit V2, the input for hand-tracked near interactions (i.e. poking) comes from the PokePointer class, which generates events for GameObjects that have BaseNearInteractionTouchable components.
My question is how can we get the same PokePointer events from virtual reality controllers such as the Windows Mixed Reality controllers? This would make it possible to prototype on the desktop using a VR headset and even directly use the same near interactions of the Mixed Reality Toolkit within VR applications.
Can the PokePointer component be attached to a hand GameObject that is a controller model? Or is there a better way to do this through the MRTK profiles system?
Actually, it's possible to add a poke pointer and a grab pointer to a VR device. In fact, adding basic functionality without visualization can be done without even writing any code!
Getting the existing grab and poke pointers to work with VR
Open your current pointer configuration profile by selecting the MixedRealityToolkit object in the scene view, going to the inspector window, then navigating to Input -> Pointers.
Under pointer options, set the controller type for the PokePointer and the Grab Pointer to include your VR Controller type (in my case, it was Windows Mixed Reality, though you may wish to use OpenVR)
The poke pointer is configured to follow the Index Finger Pose, which does not exist for VR. So you will need to open the PokePointer.prefab file and, in the inspector under Poke Pointer -> Pose Action, set the value to "Pointer Pose".
Hit play. The grab pointer will be slightly below and to the right of the motion controller gizmo, and the poke pointer will appear right at the origin.
Bonus: Improving the grab and poke pointers by using custom pointers
You can greatly improve the pointers you have by using custom pointers instead of the default pointers. For example, you can:
Have the poke pointer be offset from the gizmo origin by setting the PokePointer's raycastorigin field to a custom transform
Add visuals to actually show where the pointers are
I've created an example that demonstrates a custom grab and poke pointer which visualizes the grab and poke locations, and also offsets the poke position to be more convenient. You can download a unitypackage of the sample here, or just clone the mrtktips repository and look at the VRGrabPokePointers scene.
Note: to get the visuals to actually show up, use the following script (pointers currently disable all renderers on startup to avoid flickering).
using UnityEngine;

// Re-enables every Renderer under this GameObject. MRTK pointers disable
// their renderers on startup to avoid flickering, so this turns the visuals back on.
public class EnableRenderers : MonoBehaviour
{
    void Start()
    {
        foreach (var renderer in GetComponentsInChildren<Renderer>())
        {
            renderer.enabled = true;
        }
    }
}
You can see an example of a custom MRTK and pointer profile in the example here, and also in the VRGrabPokePointersUnity scene

Unity Editor: ExecuteInEditMode vs Editor scripts

What are the use cases for ExecuteInEditMode, and what are the use cases for Editor scripts? When should you use one instead of the other?
ExecuteInEditMode - This is an attribute for scripts, denoted as [ExecuteInEditMode]. By default, MonoBehaviours are only executed in play mode. By adding this attribute, any instance of the MonoBehaviour will have its callback functions executed while the Editor is not in playmode. Use-Cases for this include, but are not limited to:
Position constraining – your script may help you position your game objects by constraining object positions using a custom algorithm.
Connecting objects – if ComponentA of ObjectA requires an instance of ComponentB that is somewhere in the scene, and you are able to find it using your code, then you can do it automatically instead of making the designer do it manually later.
In-Editor warnings and errors – Unity will generate a standard set of errors if something is wrong with your code or the initial Unity components setup. Still, you won’t receive any warning if you create a room without doors or windows by mistake. This can be done by a custom script, so the designer is warned when editing.
Management scripts – These are scripts that are keeping things in order. Let’s say that your scene has 5 cameras, but all these cameras should have the same FOV parameters. Instead of changing those manually (yes, you can select all of them, but there’s no guarantee that someone else will do the same) you can make your management script adjust all values using one master value. As a result all single camera parameters will be locked and can be changed only by using the management script.
The Knights of Unity have released a tutorial on some sample functionality for ExecuteInEditMode that expands on this.
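As a concrete illustration of the management-script case above, here is a minimal sketch using a hypothetical CameraFovManager component; the class and field names are made up for the example:

using UnityEngine;

// Sketch: keep the FOV of every camera in the scene locked to one master value.
// Because of [ExecuteInEditMode], this also runs while editing, not only in play mode.
[ExecuteInEditMode]
public class CameraFovManager : MonoBehaviour
{
    [Range(1f, 179f)]
    public float masterFieldOfView = 60f;

    void Update()
    {
        // In edit mode, Update is only called when something in the scene changes.
        foreach (var cam in FindObjectsOfType<Camera>())
        {
            cam.fieldOfView = masterFieldOfView;
        }
    }
}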
Editor Scripts - This is a collection of scripts that extend the Editor class, a base class to derive custom Editors from. This can be used to create your own custom inspector GUIs and editors for your objects. For more information, check out this video on Editor Scripting. Since Editor scripts are written in code, you may also want to look at the Scripting API.
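For comparison, here is a minimal custom inspector sketch for the hypothetical CameraFovManager from the previous example; note that Editor scripts like this must live in an Editor folder:

using UnityEngine;
using UnityEditor;

// Sketch: a custom inspector that draws the default fields and adds a button
// to apply the master FOV to all cameras on demand.
[CustomEditor(typeof(CameraFovManager))]
public class CameraFovManagerEditor : Editor
{
    public override void OnInspectorGUI()
    {
        DrawDefaultInspector();

        if (GUILayout.Button("Apply FOV To All Cameras"))
        {
            var manager = (CameraFovManager)target;
            foreach (var cam in FindObjectsOfType<Camera>())
            {
                cam.fieldOfView = manager.masterFieldOfView;
            }
        }
    }
}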
With an Editor script you customize how a MonoBehaviour appears in the Inspector.
For ExecuteInEditMode, most use cases involve modifying the Scene view. For example, the Mesh Renderer (I do not know whether Mesh Renderer actually uses ExecuteInEditMode, but it could) renders the mesh in game, but it also renders that mesh in the Scene view.
Some other use cases: validation, communication with other components, modifying other components, modifying GameObjects; basically you can do most things in the editor that you could do in-game.

How can I instantiate Vuforia's virtual button and assign a handler to it?

I am currently working on a project that requires creating virtual buttons while the app is running and, naturally, assigning an event handler to each one to detect it being pressed/released.
I have tried all the solutions I found on Vuforia's forums and Stack Overflow, but the virtual buttons never worked. They get instantiated and a clone is made as I want, but apparently the event handler is not assigned properly.
So my question is, is it even possible to create a virtual button after the app starts and assign a handler to it?
Either the button is a GUI button, in which case it would be a basic UI button from Unity's uGUI framework:
https://unity3d.com/learn/tutorials/modules/beginner/ui/ui-button
The second is a button that is positioned in the scene, most likely alongside the model that you show when the tracker is found.
In this case, either use a world canvas button, similar to the previous one but with a world canvas, or use a quad/box object with a basic raycast as you would in a normal game.
If you need them to show when tracking is found, set them on and off just like you do with the model by listening to OnTrackFound/Lost.
Registering a listener is explained in the video.
After much research, I found out that creating custom virtual buttons after tracking has started is not possible with the current Vuforia version.
The alternative I found is creating a Unity GameObject (a Cube in my case) with a box collider and a tag, which can be pressed through the mobile screen of the app using raycasting.
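A minimal sketch of that raycast approach, assuming the cube has a BoxCollider and a hypothetical tag "VirtualButton":

using UnityEngine;

// Sketch: detect a tap on the tagged "button" cube via a raycast from the touch position.
// The tag "VirtualButton" is an assumption; use whatever tag you assigned.
public class TapButtonDetector : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider.CompareTag("VirtualButton"))
            {
                Debug.Log("Virtual button pressed: " + hit.collider.name);
                // React to the press here, e.g. play an animation or invoke an event.
            }
        }
    }
}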
I'd be happy to answer any questions you have.