Kudan in Unity: how to stop or reset markerless tracking?

I am creating an application with Kudan where a photograph (a 2D sprite) appears via markerless tracking. Based on the sample project, I've successfully made adjustments so that the 2D plane is always perpendicular to the camera and placed on the screen in the position I want. Really wonderful!
But I am unable to figure out how to restart/reset the tracking via a script. I can always force the tracking to restart by blocking the camera or shaking the phone, but I want to do it via a button. It is exactly the same behavior described in the "ArbiTrack Basics" guide for Android and iOS, but I am unable to reproduce it in Unity. To what script should I send a stop-tracking command to get the tracking instance to restart (with exactly the same effect as blocking the camera when running one of the sample Unity projects in Markerless Mode)?
The situation is described here for Android coding: https://wiki.kudan.eu/ArbiTrack_Basics#Stopping_ArbiTrack
where it says to call these three things:
// Stop ArbiTrack
arbiTrack.stop();
// Display target node
arbiTrack.getTargetNode().setVisible(true);
// Change enum and label to reflect ArbiTrack state
arbitrack_state = ARBITRACK_STATE.ARBI_PLACEMENT;

I have found one way to do this, though I'm not sure it's ideal.
Looking in the TrackingMethodMarkerless.cs script, it seems that StopTracking() does not work as expected: it disables the updating of the tracking but doesn't actually disable the detection instance. But taking a cue from it, I added an if statement to the ProcessFrame() function:
if (disableMarkerless == false)
    trackable.isDetected = _kudanTracker.ArbiTrackIsTracking();
else
    trackable.isDetected = false;
Now, toggling the disableMarkerless bool disables the tracking.
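For reference, here is a minimal sketch of how that flag could be driven from a UI button. It assumes disableMarkerless was added as a public field on the modified TrackingMethodMarkerless component; the component and button references are assigned in the Inspector:

using UnityEngine;
using UnityEngine.UI;

public class MarkerlessResetButton : MonoBehaviour
{
    // Assumes disableMarkerless is a public bool on the modified script.
    public TrackingMethodMarkerless markerlessTracking;
    public Button resetButton;

    void Start()
    {
        // Each press flips the flag, forcing trackable.isDetected to
        // false so the markerless tracking restarts.
        resetButton.onClick.AddListener(() =>
        {
            markerlessTracking.disableMarkerless = !markerlessTracking.disableMarkerless;
        });
    }
}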

Related

Unity MRTK with HoloLens 2: How to detect whether a hand touches a GameObject in code?

I have been working with Unity 2020.3 LTS, the Windows XR Plugin, and the amazing MRTK 2.7.0 to port an existing application to HoloLens 2.
In this application I have a scene with several GameObjects in it, and I need to detect whether a hand touches a GameObject (either with the index-finger-tip near interaction or the pinch-gesture far interaction). The important part here is that this detection needs to happen in a central script in the scene (i.e., one that maybe has the hand as an object in the code) and not from the view of the touched GameObject itself.
I have successfully implemented the latter using this example, with the two code examples below on that page, but the touched GameObject itself firing events via a listener does not work well with my use case. I need to detect the touch from the hand's perspective, so to speak.
I have searched the web and the Microsoft MRTK documentation several times for this and unfortunately I could not find anything remotely helpful. For head-gaze the documentation has a super simple code example that works beautifully: Head-gaze in Unity. I need the exact same thing for detecting when a hand touches a GameObject.
Eventually I will also need the same thing for eye-tracking when looking at a GameObject, but I have not looked into this yet and right now the hand interaction is giving me headaches. I hope someone can help me with this. Thanks in advance :).
but the touched GameObject itself firing events via a listener does not work well with my use case.
Why does the event not work? Could you provide more detail about it?
In addition to NearInteractionTouchable, have you tried the Interactable component? It is usually attached to the touched GameObject and fires its event receivers when it catches input actions. In the event receiver (in the component UI), you can add any function attached to any object as the listener, such as a central script in the scene. It should be an effortless way to meet your request. For more information, please see: Event
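A minimal sketch of that suggestion, attaching the Interactable at runtime and forwarding its OnClick event; the CentralScript type and its HandleTouch method are hypothetical:

using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class InteractableForwarder : MonoBehaviour
{
    public CentralScript centralScript; // hypothetical central script, assigned in Inspector

    void Start()
    {
        // Fire the central script's handler whenever this object is selected/touched.
        Interactable interactable = gameObject.AddComponent<Interactable>();
        interactable.OnClick.AddListener(() => centralScript.HandleTouch(gameObject));
    }
}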
After some additional fiddling around, I was able to get it to work the way I want/need with the Touch Code Example. The solution was to create an empty GameObject variable in the central script that is continuously checked for null. A touch on the GameObject itself then assigns that GameObject to the checked variable for as long as it is touched, and sets it back to null once it is no longer touched. This allows the central script to work with the touched GameObject while it is touched.
// At the top of the file:
// using Microsoft.MixedReality.Toolkit.Input;
// using UnityEngine;

private CentralScript centralScript;
private PointerHandler pointerHandler;

void Start()
{
    centralScript = GameObject.Find("Scripts").GetComponent<CentralScript>();

    // Make this GameObject touchable by hands and route touch input
    // through the pointer events.
    NearInteractionTouchableVolume touchable = gameObject.AddComponent<NearInteractionTouchableVolume>();
    touchable.EventsToReceive = TouchableEventType.Pointer;

    pointerHandler = gameObject.AddComponent<PointerHandler>();

    // While touched, register this object with the central script.
    pointerHandler.OnPointerDown.AddListener((e) =>
    {
        centralScript.handTouchGameObject = gameObject;
    });

    // Clear the registration when the touch ends.
    pointerHandler.OnPointerUp.AddListener((e) =>
    {
        centralScript.handTouchGameObject = null;
    });
}
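For completeness, the central script's side could then look roughly like this; the class and field names follow the snippet above, and the reaction code is only a placeholder:

using UnityEngine;

public class CentralScript : MonoBehaviour
{
    // Set by the touched object's PointerHandler listeners (see above);
    // null whenever no object is currently touched.
    public GameObject handTouchGameObject;

    void Update()
    {
        if (handTouchGameObject != null)
        {
            // A hand is currently touching this object; react here.
            Debug.Log("Hand is touching: " + handTouchGameObject.name);
        }
    }
}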

How much effort does it take to replace a VR headset with three monitors?

I have a Unity project developed for VR-headset training. However, users feel strongly dizzy after playing the game. Now I want to use 3 monitors instead of the VR headset, so users can look at the 3 monitors to drive. Is it a big effort to change the software to achieve this? What do I have to do so the software can run on monitors?
Actually it is quite simple:
See Unity Manual Multi-Display
In your Scene, have 3 Camera objects and set each one's Camera.targetDisplay via the Inspector (the Inspector labels are 1-indexed).
To make them follow the vehicle correctly, simply make them children of the vehicle object; then they are always rotated and moved along with it. Now position and rotate them according to your needs, relative to the vehicle.
In PlayerSettings → XR Settings (at the bottom), disable Virtual Reality Supported, since you do not want any VR HMD moving the Camera; it should be controlled only by the vehicle transform.
Then you also have to activate the corresponding Displays (0-indexed, where index 0 is the default monitor, which is always enabled); in your case, e.g.:
private void Start()
{
    // Display 0 (the primary monitor) is always active; additional
    // displays must be activated explicitly.
    if (Display.displays.Length > 1)
        Display.displays[1].Activate();
    if (Display.displays.Length > 2)
        Display.displays[2].Activate();
}
I don't know how exactly the "second" or "third" connected monitor is defined, but I guess it should match the monitor numbering in the system display settings.
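If you prefer to assign the target displays from code rather than the Inspector, here is a minimal sketch (the camera field names are hypothetical; note that Camera.targetDisplay is 0-indexed in code, even though the Inspector labels start at "Display 1"):

using UnityEngine;

public class MultiDisplayCameras : MonoBehaviour
{
    // Hypothetical references to the three vehicle cameras.
    public Camera leftCamera;
    public Camera centerCamera;
    public Camera rightCamera;

    void Awake()
    {
        leftCamera.targetDisplay = 0;   // "Display 1" in the Inspector
        centerCamera.targetDisplay = 1; // "Display 2"
        rightCamera.targetDisplay = 2;  // "Display 3"
    }
}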

Is there a way to reset ARCore detected surfaces without destroying and remaking the session script?

I'm making an application which uses optional ARCore, meaning I enable and disable the ARCore device at runtime. I noticed that the detected surfaces still exist even after you disable and re-enable the ARCore device.
Is there a way to reset detected surfaces data? I want the users to start fresh every time they open up the AR content.
I have found answers to this in other threads, but all of them involve forcefully destroying the ARCoreSession script on the ARCore device and then re-adding it. This seems... stupid and inefficient.
Deleting just the detected planes in your scene (without destroying and recreating an ARCore session) is not good practice in ARCore. An alternative is to delete all anchors, which are designed to hold Renderables. The most robust approach, though, is to destroy the current session and create a new one.
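As this thread suggests, there is no clean one-call reset. If you keep track of the anchor GameObjects you instantiate, though, you can at least clear the placed content yourself. A minimal bookkeeping sketch (plain Unity, not an ARCore API):

using System.Collections.Generic;
using UnityEngine;

public class AnchorRegistry : MonoBehaviour
{
    // Anchor GameObjects created while placing content.
    private readonly List<GameObject> anchoredObjects = new List<GameObject>();

    public void Register(GameObject anchoredObject)
    {
        anchoredObjects.Add(anchoredObject);
    }

    public void ClearAll()
    {
        // Destroying an anchor's GameObject also removes the content
        // parented under it.
        foreach (GameObject go in anchoredObjects)
        {
            Destroy(go);
        }
        anchoredObjects.Clear();
    }
}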
For further details, look at GitHub issue #253, Clear Planes and Anchors, and the Stack Overflow post How to remove all planes in ARCore.

Listen for GazeInput down event without selecting anything - Google VR Unity

I'm working with the Google VR Unity SDK, and I'm trying to create a VR application where users can switch between multiple environments (places). I want them to switch to a different environment just by pushing down the magnetic sensor of the Cardboard while pointing anywhere. My problem is that every link I've found (like this one) works with object selection. I've tried adding an Event Trigger to my Main Camera and adding a Mesh Collider to my building, but neither worked.
So, is it possible to listen for the magnetic sensor push-down in the full scene, without having to select an object?
Turns out it's simpler than I thought:
if (Input.GetButtonDown("Fire1"))
{
    // some code
}
The thing is, Google VR removed magnetic button support in version 0.9.0, and I was using 1.0.3. So if you want to implement a trigger for the Cardboard's magnetic button, you need to use v0.8.5.
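Put together, a minimal sketch of the polling approach; LoadNextEnvironment is a hypothetical method standing in for whatever scene or place switching you use:

using UnityEngine;
using UnityEngine.SceneManagement;

public class AmbientSwitcher : MonoBehaviour
{
    void Update()
    {
        // The Cardboard trigger registers as the "Fire1" button here.
        if (Input.GetButtonDown("Fire1"))
        {
            LoadNextEnvironment();
        }
    }

    void LoadNextEnvironment()
    {
        // Hypothetical: e.g. SceneManager.LoadScene("NextAmbient");
    }
}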
You could put up a Canvas attached to the camera in World Space, so that it always stays in the line of sight. Add a Button to the Canvas at the location of the gaze-input cursor, and you should always hit it when triggering.

Unity default ThirdPersonController error

I've got a player spawning in the scene upon its start. Using the Standard Assets, I've dragged in the default ThirdPersonController.
However, when I run it, although the player spawns correctly, this error comes up. This is the default script from the Standard Assets (which I've made no changes to).
The way I'm saving and loading the character is with two scripts, GameMaster and GameSettings (which are generated along with the player).
Please notice that the default camera that should follow the player is also not working. What am I missing?
Best,
Sporting