Unity UI canvas not working with VR

I have been trying to get a very simple demo of a native Unity UI canvas working with VR.
I have read the Oculus blog post here: https://developer3.oculus.com/blog/unitys-ui-system-in-vr/ but I need to use the native Unity UI, as I want to redistribute the code without license worries. I followed this tutorial https://unity3d.com/learn/tutorials/topics/virtual-reality/interaction-vr?playlist=22946 and downloaded the Unity VR Samples project from the Asset Store. It provides some scripts to place on the camera (VRInput and VREyeRaycaster) and some scripts to place on the target object (VRInteractiveItem and ExampleInteractiveItem).
When I apply the target scripts to a regular GameObject in the scene (e.g. a cube), the raycast works fine and the appropriate calls are made when Fire1 is activated. When I try to do this for a canvas object (e.g. a button), no hit is detected. I have tried placing the two target scripts (VRInteractiveItem and ExampleInteractiveItem) on the canvas, on the image containing the button, and on the button itself; none of them work. What am I doing wrong? Why would it work on a regular GameObject and not on a UI canvas? I have made sure all my canvas elements have their Raycast Target property ticked.
EDIT:
It seems to work when I attach a box collider to the UI element. Is this required? I thought it should just work with a Graphic Raycaster attached, but the configuration below doesn't work (box collider disabled, Graphic Raycaster enabled).
This is what is on my player's camera:
I don't have a problem using box colliders if I have to, but I wanted to take advantage of the UI buttons' highlighted and pressed color transitions.

In Unity, raycasting works only with GameObjects that have colliders. Physics.Raycast returns true when it hits a collider; without a collider there is nothing for the ray to hit.
Unity Physics.Raycast documentation
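The VREyeRaycaster from the VR Samples relies on Physics.Raycast, which is why attaching a box collider makes the UI respond. If you keep that approach, here is a minimal sketch (the component name UIColliderFitter is hypothetical) that sizes a BoxCollider to match a UI element's RectTransform so you don't have to fit it by hand:

    using UnityEngine;

    // Hypothetical helper: gives a UI element a physics collider matching its
    // RectTransform, so a Physics.Raycast-based VR raycaster can hit it.
    [RequireComponent(typeof(RectTransform))]
    public class UIColliderFitter : MonoBehaviour
    {
        void Start()
        {
            var rt = GetComponent<RectTransform>();
            var box = gameObject.AddComponent<BoxCollider>();

            // rect is in the canvas's local units; keep the collider thin.
            box.size = new Vector3(rt.rect.width, rt.rect.height, 0.1f);
            box.center = rt.rect.center; // account for a non-centered pivot
        }
    }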

I believe, for anyone seeing this for the first time, a potential reason it is not working is that the canvas in the above picture is using a Graphic Raycaster component and not an OVR Raycaster. The OVRRaycaster is meant to replace the Graphic Raycaster to connect Oculus to the Unity UI.
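If you go that route, a minimal sketch of the swap (the OVRRaycaster class is assumed from the Oculus Integration package, where it derives from GraphicRaycaster):

    using UnityEngine;
    using UnityEngine.UI;

    // Hedged sketch: replace a canvas's plain GraphicRaycaster with the
    // OVRRaycaster from the Oculus Integration package.
    public class UseOvrRaycaster : MonoBehaviour
    {
        void Awake()
        {
            var graphic = GetComponent<GraphicRaycaster>();
            if (graphic != null && !(graphic is OVRRaycaster))
            {
                Destroy(graphic);
                gameObject.AddComponent<OVRRaycaster>();
            }
        }
    }

In practice this is normally done in the Inspector: remove the Graphic Raycaster component from the canvas and add the OVR Raycaster in its place.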

If you want to use Unity's UI in VR, you might want to take a look at this asset: VRTK.
It includes some examples of VR UI driven by controllers or camera targeting.

Go to your Canvas: there is a "Plane Distance" option that is set to 100 by default. I changed it to 0.5 and it works quite well.
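Note that Plane Distance only applies to canvases in Screen Space - Camera render mode. A minimal sketch of the same change from code:

    using UnityEngine;

    // Minimal sketch: lower the canvas plane distance from code. Plane
    // Distance only applies in Screen Space - Camera render mode.
    public class CanvasPlaneDistance : MonoBehaviour
    {
        void Start()
        {
            var canvas = GetComponent<Canvas>();
            canvas.planeDistance = 0.5f; // default is 100
        }
    }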

Related

Unity Performance - Referencing a camera from a Worldspace Canvas Prefab to eliminate "FindMainCamera" calls

Looking at the Profiler data, it seems that my WorldSpace canvases (I have a bunch of them) which need to support click events (they have buttons) are making a lot of calls to FindMainCamera through the EventSystem.Update() function. I assume this is because all of these canvases are prefabs, so it isn't trivial for them to get a reference to the active camera from within the editor in prefab mode.
In this project I'm not using dependency injection, and I only have one camera in the game, so I was thinking maybe I should just create a SingletonCamera component and have the prefab canvases reference it.
I do wonder, though, if there's a better solution, as this seems like a pretty common scenario: a WorldSpace canvas inside a prefab with no easy access to a camera.
Thanks!
You can use the FindObjectOfType function to get a reference to the active camera at runtime.
This function searches the scene for a component of the specified type and returns the first one it finds.
So, you could add a script to your WorldSpace canvases that uses FindObjectOfType to get a reference to the active camera when the canvas is enabled or when the canvas needs to interact with the camera.
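A minimal sketch of that suggestion (the component name AssignWorldCamera is hypothetical): caching the camera into Canvas.worldCamera once in OnEnable should mean the EventSystem no longer falls back to FindMainCamera every update.

    using UnityEngine;

    // Hypothetical helper for WorldSpace canvas prefabs: finds the active
    // camera once and caches it as the canvas's Event Camera.
    [RequireComponent(typeof(Canvas))]
    public class AssignWorldCamera : MonoBehaviour
    {
        void OnEnable()
        {
            var canvas = GetComponent<Canvas>();
            if (canvas.worldCamera == null)
            {
                // FindObjectOfType is slow, but one call in OnEnable is far
                // cheaper than a per-frame lookup in EventSystem.Update().
                canvas.worldCamera = FindObjectOfType<Camera>();
            }
        }
    }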

OVRCameraRig not displaying both camera eyes in Unity player?

Unity 5.5.2f1
I'm trying to get started with making an Oculus app, following the instructions here, by modifying a very simple scene I made. However, when I disable the MainCamera, drag in the OVRCameraRig, and run it in the Editor, I still only see a single view, not the two eye views shown in the picture. Player settings have "VR Supported" selected. Do I have something set up wrong, or is the tutorial out of date?

Pause/Freeze a scene with a trackable active in vuforia unity 3d

I am developing an app with Vuforia Cloud Recos. I want to add a feature that lets the user pause the page so she does not have to keep pointing the device at the target to view the trackable. This is pretty useful when I want to show text. Is there any way to achieve that in Unity3D? A good example is Microsoft's HERE City Lens app, which includes a button to pause the page, as the screenshot shows;
You could take a screenshot of the screen and apply it to a UI Image object, if you do not need the camera feed anymore.
If you need interaction with the elements, I would take a screenshot of the camera feed only, without the items: get the AR camera's transform, apply it to a new camera, and disable the AR camera. Then apply the screenshot to a background plane covering the whole screen. Keep the items enabled; they simply stop listening to Vuforia. You are pretty much recreating a basic Unity scene: the items were never moving with Vuforia, the camera was, so the items stay in place and all you need to know is where the camera was when you took the shot. Your scene is complete.
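A minimal sketch of the simple screenshot variant above (the PauseView component and its RawImage field are hypothetical; ReadPixels must run after the frame has finished rendering):

    using System.Collections;
    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical "pause" helper: grabs the current frame into a texture
    // and shows it on a full-screen RawImage, so the user can stop pointing
    // the device at the target.
    public class PauseView : MonoBehaviour
    {
        public RawImage freezeFrame; // full-screen RawImage, assigned in the Inspector

        public void Pause()
        {
            StartCoroutine(CaptureFrame());
        }

        IEnumerator CaptureFrame()
        {
            // ReadPixels is only valid at the end of the frame.
            yield return new WaitForEndOfFrame();

            var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
            tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
            tex.Apply();

            freezeFrame.texture = tex;
            freezeFrame.gameObject.SetActive(true);
        }
    }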

Develop 2D game Inside Canvas Scaler

I'm new to Unity and I've realized that it's difficult to do a multi-resolution 2D game in Unity without the paid third-party plugins available on the Asset Store.
I've made some tests and I'm able to do multi-resolution support this way:
1- Put everything from the UI (buttons etc.) inside a Canvas object in Render Mode Screen Space - Overlay, with a 16:9 reference resolution and fixed width.
2- Put the rest of the game objects inside a GameObject called GameManager with a Canvas Scaler component in Render Mode Screen Space - Camera, with a 16:9 reference resolution, fixed width, and the Main Camera attached. After that, all game objects like the player, platforms, etc. inside GameManager need to have a RectTransform component, a CanvasRenderer component, and an Image component, for example.
Can I continue developing the game this way, or is this the wrong way to do things?
Regards
Also, don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. It's not: the GameObject helpers for GUI were bad and are deprecated, but the API behind OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as the whole Unity UI is built on top of them anyway.
I have made a few games using just these, using Unity as a very over-engineered multi-platform API for drawing quads.
There is also GL if you want something more.
Just remember: there will be no built-in physics, particle effects, pathfinding or anything else, just a simple way to draw stuff on the screen. You will have total control over what is drawn, and this is both a good and a bad thing, depending on what you want to do.
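A minimal sketch of that immediate-mode style, just a texture and a label drawn straight to the screen:

    using UnityEngine;

    // Minimal sketch of the legacy immediate-mode GUI: draws a texture and a
    // label every GUI pass. No GameObjects or Canvas required.
    public class LegacyGuiExample : MonoBehaviour
    {
        public Texture2D sprite; // assign in the Inspector

        void OnGUI()
        {
            GUI.DrawTexture(new Rect(10, 10, 64, 64), sprite);
            GUI.Label(new Rect(10, 80, 200, 24), "Score: 42");
        }
    }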
I would not recommend using the Canvas Scaler to develop a complete game. The intended purpose of the Canvas Scaler is to create menus, and you should use it for menus only.
2D games created without the Canvas Scaler don't cause many problems (mostly they don't cause any) at multiple resolutions.
So your step 1 is correct, but for step 2 you don't need a Canvas Scaler component attached.
Remember to set your scene view to 2D (not necessary) and your camera to orthographic (necessary) while developing 2D games.
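The orthographic setting is normally done on the camera in the Inspector; for completeness, a minimal sketch of the same setup from code:

    using UnityEngine;

    // Minimal sketch: configure the main camera for 2D rendering from code.
    public class Setup2DCamera : MonoBehaviour
    {
        void Awake()
        {
            var cam = Camera.main;
            cam.orthographic = true;   // required for 2D
            cam.orthographicSize = 5f; // half the visible height in world units
        }
    }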

NGUI invisible after tracking with Vuforia

I am using Vuforia 4-2-3, the latest NGUI version, and Unity 5.0.1p3.
My GUI works fine until I track a target. After that, my GUI is invisible. However, collision still works, so buttons still respond; I just can't see sprites, textures or labels.
There is a 3D building that shows up while tracking. That 3D object uses the Standard shader. The NGUI atlas uses the Unlit/Transparent Colored shader.
I guess there is a conflict between those? Has anyone else had this problem before?
EDIT:
This is what my hierarchy looks like
I have an Image Target with several 3D objects.
The NGUI and the ARCamera are two separate objects as well.
This is what my NGUI looks like when I start tracking:
Where have you linked your NGUI to? The ARCamera or another GameObject?
I would suggest linking your script to the ARCamera at all times. That ensures it shows; a GameObject placed below the ARCamera might not show, given the hierarchy pattern Unity users generally follow.
EDIT: If you've used OnGUI() for your GUI needs, then the script that contains it should be attached to the ARCamera. Also, try putting the ARCamera above the Image Target in the Hierarchy.