Cursor doesn't gaze at my UI Slider when deployed to the HoloLens - unity3d

This is based on this repo: https://github.com/qian256/ur5_unity. I am trying to get it working with some modifications that I need; you can also see the issue in that repo.
I am unable to get the sliders to move when I deploy to the HoloLens. I have a cursor that can gaze over the robot body, but it doesn't register on the slider bar or handle. I have tried most of the suggestions online, including setting the Canvas to World Space.
I have already tried this out: HoloLens - UI/Slider and Cursor do not intersect during gaze

Without knowing more: if you can debug, set a breakpoint and try to determine which object is being hit by your raycast. If it is an object behind your slider, you need to move your slider object to the topmost layer, one not shared by other game objects. If it is your slider, you might try using the pinch-and-hold event, detecting which direction the user is pulling, and then adjusting the slider value manually.
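As a starting point, here is a minimal debugging sketch (the component name is my own) that logs which collider the gaze ray hits each frame, assuming the gaze follows the main camera's forward direction:

```csharp
using UnityEngine;

public class GazeHitLogger : MonoBehaviour
{
    void Update()
    {
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(cam.position, cam.forward, out hit))
        {
            // If this prints an object behind the slider, the slider has no
            // collider in the ray's path and its layer/collider needs fixing.
            Debug.Log("Gaze hit: " + hit.collider.gameObject.name +
                      " on layer " + LayerMask.LayerToName(hit.collider.gameObject.layer));
        }
    }
}
```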

The gaze uses Physics.Raycast, so a Collider is required to gaze at something. You can place quad primitives in your UI (with the Canvas set to World Space and a proper camera assigned) and remove the MeshRenderer component while keeping the Collider. You would then need to handle gaze events for your UI yourself, e.g. when gazing at the upper side of a quad, manually move the slider up, and so on.
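A hedged sketch of that approach, assuming a default Unity Quad (whose local x runs from -0.5 to +0.5) placed over a horizontal slider; the component and field names are illustrative:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class QuadSliderGaze : MonoBehaviour
{
    [SerializeField] private Slider slider;  // world-space slider behind the quad
    [SerializeField] private Collider quad;  // the quad's collider (MeshRenderer removed)

    void Update()
    {
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(cam.position, cam.forward, out hit) && hit.collider == quad)
        {
            // A default Quad spans -0.5..+0.5 in local x, so remap the gaze
            // hit point to a 0..1 slider position. In practice you would gate
            // this on a select/air-tap event rather than gaze alone.
            float localX = quad.transform.InverseTransformPoint(hit.point).x;
            slider.normalizedValue = Mathf.Clamp01(localX + 0.5f);
        }
    }
}
```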

I don't have 50 rep, otherwise I would have commented, but I answered a very similar question here, dealing with the case where your cursor goes through your UI element:
In the canvas I loose the cursor
However, if the problem is just that you can't move the slider with the cursor, then you need to make sure you have subscribed to the scroll/manipulation events for your cursor. You can test whether this is the case by clicking on the slider in a different spot; the slider should jump to the spot you clicked.
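For the manipulation route, here is a rough sketch of driving the slider from a hold-and-drag gesture. It assumes the legacy HoloToolkit-Unity input module, so the IManipulationHandler interface and ManipulationEventData names may differ in your toolkit version:

```csharp
using UnityEngine;
using UnityEngine.UI;
using HoloToolkit.Unity.InputModule; // legacy HoloToolkit, assumed here

public class SliderManipulator : MonoBehaviour, IManipulationHandler
{
    [SerializeField] private Slider slider;          // the UI slider to drive
    [SerializeField] private float sensitivity = 2f; // hand movement to value scale

    private float valueAtGestureStart;

    public void OnManipulationStarted(ManipulationEventData eventData)
    {
        valueAtGestureStart = slider.normalizedValue;
    }

    public void OnManipulationUpdated(ManipulationEventData eventData)
    {
        // CumulativeDelta is the total hand displacement since the gesture
        // began; its x component moves the slider left/right.
        slider.normalizedValue =
            Mathf.Clamp01(valueAtGestureStart + eventData.CumulativeDelta.x * sensitivity);
    }

    public void OnManipulationCompleted(ManipulationEventData eventData) { }
    public void OnManipulationCanceled(ManipulationEventData eventData) { }
}
```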

Related

How to detect UI Objects from Main Camera object when using Google Cardboard VR SDK for Unity

When using the Google Cardboard VR SDK for Unity, how can I detect when the Main Camera object looks at UI objects inside a Canvas? OnPointerEnter() and OnPointerExit() fire when I look at 3D objects in the sample project that Google offers, but there is no way to do it for UI objects.
Based on your explanation, I guess you have set the Render Mode of your Canvas to Screen Space - Overlay or Screen Space - Camera. When you use one of these render modes, the position of your Canvas on the screen never changes, so you can never hit any UI element unless its bounds include the middle point of the screen. Why? Because you are using Google VR and have no joystick or anything similar to control the cursor's position, so the cursor position is always (0, 0).
Assume the cursor and the Canvas are both children of your camera. You can move the camera by moving your head; this way the Canvas and the cursor move relative to the other objects in your virtual world, but they never move relative to each other.
So what's the solution? I think you can set the Render Mode of your Canvas to World Space. This way your Canvas becomes an object in the world that the cursor can move across. I know that may seem weird, because you likely want the Canvas always in front of your eyes. In that case, I think you have just one solution:
Do not change the Render Mode of the Canvas. Place all UI elements on the Canvas, preferably near the edges of the screen and with adequate spacing between them, then write a script that does the following: when the cursor moves, calculate the direction of movement and work out which UI element could be the target of that movement, then move that element towards the middle point of the screen; when its bounds include the middle point, its OnClick is called. After that, return all UI elements to their original positions. Likewise, if the cursor stops moving before the UI element reaches the middle point, return all UI elements to their original positions.
I know this is hard to handle, but it's the only way I could propose.
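If you do switch the Canvas to World Space, a minimal sketch of detecting the UI element under the screen centre could look like this. The component name and the event forwarding are illustrative assumptions, not Cardboard SDK API; it assumes an EventSystem in the scene and a GraphicRaycaster on the Canvas:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

public class CenterGazeUI : MonoBehaviour
{
    private GameObject lastHovered;

    void Update()
    {
        // The gaze cursor is always at the screen centre in Cardboard.
        var pointerData = new PointerEventData(EventSystem.current)
        {
            position = new Vector2(Screen.width / 2f, Screen.height / 2f)
        };

        var results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(pointerData, results);

        GameObject hovered = results.Count > 0 ? results[0].gameObject : null;
        if (hovered != lastHovered)
        {
            // Forward enter/exit so OnPointerEnter()/OnPointerExit() fire on UI too.
            if (lastHovered != null)
                ExecuteEvents.Execute(lastHovered, pointerData, ExecuteEvents.pointerExitHandler);
            if (hovered != null)
                ExecuteEvents.Execute(hovered, pointerData, ExecuteEvents.pointerEnterHandler);
            lastHovered = hovered;
        }
    }
}
```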

World space button click event to take priority over the collider's click event it's inside?

Short story:
I have a button on a world-space canvas with a click event handler. This sits inside (in 3D space, not parenting) a 3D collider with its own click event.
The collider always gets the click event, as expected, since it's nearer the camera.
I want the button to get the event.
Long Story:
I have a person mesh with a collider. You can click on them and the OnPointerClick triggers to do something.
I have a button (a coin) that sits in a world-space canvas, which itself is located just above the mesh and pointed towards the orthographic camera. You click on the coin and the OnClick event triggers to do something else.
Both events work as expected until the coin is inside the mesh's collider (which it is a lot of the time). At that point ONLY the mesh collider's OnPointerClick event triggers, not the button's.
However, I ALWAYS want the button to take priority over that collider, and any other collider. This would be easier if the canvas were screen space, but it isn't (for good reason).
How do I do this?
NOTES:
The button never gets the OnClick event, so any filtering on the containing collider won't help.
I've fiddled with the world-space canvas and camera ray filtering settings to no effect.
The coin has to be a world-space canvas for automatic tracking and because I use text too.
IsPointerOverGameObject doesn't help, as it's true for any collider, not just UI elements. Not to mention it won't stop the collider consuming the click anyway. A custom version of this that works via a layer won't help either, because again the OnPointerClick on the collider GameObject still consumes the click.
I don't want either event to have to do any event filtering and passing on, if possible; they should be atomic. Any filtering should be done via setting properties on objects and inherent Unity functionality, if possible.
Just to reiterate: writing a function to find all objects in the ray and then selecting the ones on the UI layer first won't help, because it does not change the fact that the collider is still the only thing that gets the event, which you'd then have to manually propagate down to the button... which I don't want to do.
I've been able to fix this by putting the button's world-space canvas on a sorting layer of 1. That way, even if it's behind colliders, it registers first.
Nice and clear solution that I was hoping would exist.
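For reference, a minimal sketch of applying that sorting change from code rather than the inspector (the order value 1 mirrors the answer above; overrideSorting only matters for nested canvases):

```csharp
using UnityEngine;

[RequireComponent(typeof(Canvas))]
public class CanvasSortingFix : MonoBehaviour
{
    void Awake()
    {
        Canvas canvas = GetComponent<Canvas>();
        canvas.overrideSorting = true; // only has an effect on nested canvases
        canvas.sortingOrder = 1;       // registers ahead of everything at the default order 0
    }
}
```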

Unity - MRTK - HoloLens: Modify Collider of 2D-Buttons so the Cursor gets closer

My current problem is that the cursor appears too far away from a button; you can see in the screenshot what I mean. Hovering over a button from the list looks like this:
Question: what can I do so the cursor gets closer to the button? On the HoloLens you can see the distance.
Looking somewhere else on the canvas, away from the buttons, the cursor gets closer:
--Edit--
I should mention that the scene has a scaled cube (the gray thing in the screenshot) and, in front of that, a world canvas (the white thing) which contains the scroll view/list.
I saw the same behaviour for UI elements.
I can only offer you a workaround. It is a bit hacky, but it works:
Go through all UI elements, especially Text and Image, and disable the Raycast Target option.
This makes the cursor sit right on top of them... but you will notice your buttons are now non-responsive and you cannot interact with them anymore.
This happens because the event system requires either a Raycast Target (for graphics raycasting) or a Collider (for physics raycasting) in order to fire its pointer events, e.g. PointerEnter, PointerDown, etc.
Therefore, now add a BoxCollider (not a BoxCollider2D!) to your buttons and scale it to the correct size. It looks like you are using a VerticalLayoutGroup, so you can simply correct the positioning of the BoxCollider by setting the RectTransform to centered once (the VerticalLayoutGroup will re-enforce the top-left anchoring anyway). In my case the BoxCollider needs width 0.8 and height 0.1... and for z I chose 0.01, but it can be smaller if you wish.
Hurray, now the buttons are interactable again, and the cursor only has its usual distance plus half the chosen z thickness of the BoxColliders.
Since the background cube has its own BoxCollider anyway, we don't need to add further colliders for the ScrollView and UI panels.
You might have to add some for the ScrollBars as well, though, if you need them!
As said, this is more of a quick workaround and might not be a final solution, since whenever the size of the Button or the ScrollRect changes you have to rework those hardcoded BoxCollider dimensions as well...
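A hedged sketch automating that workaround, assuming the buttons live under one parent; it disables Raycast Target on each button's graphics and adds a thin BoxCollider sized from the RectTransform (the 0.01 z value mirrors the answer above):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ButtonColliderFixer : MonoBehaviour
{
    void Start()
    {
        foreach (Button button in GetComponentsInChildren<Button>())
        {
            // Stop Text/Image from being raycast targets so the cursor no
            // longer floats on the canvas plane in front of them.
            foreach (Graphic graphic in button.GetComponentsInChildren<Graphic>())
                graphic.raycastTarget = false;

            // Physics raycasts still need something to hit, so give the
            // button a thin 3D BoxCollider matching its RectTransform.
            Rect rect = button.GetComponent<RectTransform>().rect;
            BoxCollider box = button.gameObject.AddComponent<BoxCollider>();
            box.size = new Vector3(rect.width, rect.height, 0.01f);
        }
    }
}
```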
I had a similar issue with 3D objects. This can happen because of the object's collider definition; I mean, you can import a render mesh, but the mesh collider can be different (bigger, smaller, ...).
I hope this solves your problem ;)

How do you wrap a level around based on the players position

I want to create a circular room in a 2D level. How can I handle this problem?
My thought process was to break the level into chunks and move their positions depending on where the player currently is. This would let the level wrap around as the player travels. I can do this manually with each part, but I'm looking for a better solution that can handle it programmatically. I'm open to better ways to solve this problem as well.
Is the space 2D? If so, you could place two invisible colliders at the extremities of the room (one at the beginning and one at the end) and change the player's position when they collide with one. To ensure that the transition is smooth, place them a little outside of the camera's view: the player won't be rendered during the transition, and you get a teleport effect from side to side.
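A minimal sketch of that teleport idea, assuming 2D physics, a trigger collider on each end marker, and a "Player" tag; all names are illustrative:

```csharp
using UnityEngine;

public class WrapTrigger : MonoBehaviour
{
    [SerializeField] private Transform wrapTarget; // matching trigger on the far side

    // Requires a Collider2D with "Is Trigger" enabled on this object,
    // and a Rigidbody2D plus the "Player" tag on the player.
    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.CompareTag("Player"))
        {
            // Wrap horizontally only; keep the player's current y and z.
            Vector3 p = other.transform.position;
            other.transform.position = new Vector3(wrapTarget.position.x, p.y, p.z);
        }
    }
}
```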
As another suggestion, you can lock the player to the center chunk, with the camera showing just that chunk. Every time the player passes through a collider at the end or the start of the middle chunk, you move the chunk on the opposite side to the far end of the chunk the player is now in, effectively making the player's current chunk the middle one.
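And a rough sketch of that chunk-recycling idea, under the assumption of three equal-width chunks kept in a left/middle/right array; the numbers and names are illustrative:

```csharp
using UnityEngine;

public class ChunkRecycler : MonoBehaviour
{
    [SerializeField] private Transform[] chunks = new Transform[3]; // left, middle, right
    [SerializeField] private Transform player;
    [SerializeField] private float chunkWidth = 20f;

    void Update()
    {
        float offset = player.position.x - chunks[1].position.x;

        if (offset > chunkWidth / 2f)
        {
            // Player crossed into the right chunk: move the left chunk to the far right.
            chunks[0].position += Vector3.right * 3f * chunkWidth;
            Rotate(1);
        }
        else if (offset < -chunkWidth / 2f)
        {
            // Player crossed into the left chunk: move the right chunk to the far left.
            chunks[2].position -= Vector3.right * 3f * chunkWidth;
            Rotate(-1);
        }
    }

    // Re-label the array so the chunk the player occupies is always the middle one.
    void Rotate(int direction)
    {
        if (direction > 0)
        {
            Transform t = chunks[0];
            chunks[0] = chunks[1]; chunks[1] = chunks[2]; chunks[2] = t;
        }
        else
        {
            Transform t = chunks[2];
            chunks[2] = chunks[1]; chunks[1] = chunks[0]; chunks[0] = t;
        }
    }
}
```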

Detect hand swipe gesture in Unity using Kinect with OpenNI

I have a 3D model in my Unity project, and I have a JavaScript script that rotates the camera based on the keyboard arrow keys (left/right).
Now I need a script that detects a horizontal hand-swipe gesture and returns a vector that I can use to rotate the camera.
I am using the ZigFu SDK with PrimeSense OpenNI/NITE. The ZigFu SDK comes with sample scripts, one of which is SwipeDetector - I am wondering how it works.
My setup:
I have 3 GameObjects: a 3D model, a MainCamera, and a Directional Light.
So, how do I use the SwipeDetector script in my project? The way I do it right now is: 1) create an empty game object called "SwipeDetection", and 2) drag and drop the SwipeDetector script from ZigFu onto it. I've put logs in the SwipeDetector script, but I don't see them.
The Zigfu bindings (I'm assuming you're using version 1.4?) don't have a SwipeDetector sample, but they do include a SwipeDetector MonoBehaviour. The SwipeDetector detects vertical and horizontal swipes, but unfortunately doesn't detect the velocity of the swipe.
You have a few options:
Use the provided SwipeDetector and rotate the camera by a fixed amount every time you detect a horizontal swipe (the SwipeDetector_Left or SwipeDetector_Right events).
Use the provided SwipeDetector, start rotating on swipe, and stop rotating on the SwipeDetector_Release event. This would be similar to pressing the arrow keys (assuming you have the same behaviour on keydown/keyup events).
Keep track of the hand velocity and check its value when the swipe occurs; use this value to rotate the camera. You can track velocity by creating a new MonoBehaviour and implementing Hand_Create, Hand_Update, and Hand_Destroy (look at any of the scripts in the HandpointControls folder). Keep a queue of the hand points from the last n frames; the delta between the newest and oldest points is your velocity over those n frames (I recommend starting with 15 frames, or about half a second). A sketch of this is shown after this list.
(This will be included in a future Zigfu release :))
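A hedged sketch of that velocity-tracking option; it relies on the Zigfu hand messages named above (Hand_Create/Hand_Update/Hand_Destroy, delivered via SendMessage), whose exact signatures may vary between Zigfu versions:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class HandVelocityTracker : MonoBehaviour
{
    private const int FrameWindow = 15; // about half a second at 30 fps
    private readonly Queue<Vector3> recentPoints = new Queue<Vector3>();

    public Vector3 Velocity { get; private set; }

    // Assumed Zigfu callback signatures: each receives the hand position.
    void Hand_Create(Vector3 position)
    {
        recentPoints.Clear();
    }

    void Hand_Update(Vector3 position)
    {
        recentPoints.Enqueue(position);
        if (recentPoints.Count > FrameWindow)
            recentPoints.Dequeue();

        // Delta between the newest and oldest points in the window.
        Velocity = position - recentPoints.Peek();
    }

    void Hand_Destroy()
    {
        recentPoints.Clear();
        Velocity = Vector3.zero;
    }
}
```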
Your game object setup sounds right - if you don't see any logs, you may not be performing the 'focus gesture' correctly. Try waving or performing a tap towards the sensor; this should cause the Hand_Create event to be called. Once you have a valid hand point you should get the proper events from the SwipeDetector.
Also worth mentioning: your swipe-detection game object should have a HandPointControl component (added implicitly via RequireComponent), and 'ActiveOnStart' should be true.