Unity AR UI not showing up

I have created a simple Unity AR Foundation app which places objects on a plane whenever the screen is touched. I would like to add some UI so the user can press a button rather than anywhere on the screen.
I have followed several different tutorials, which all seem to do roughly the same thing. I created the button by right-clicking in the Hierarchy and choosing UI -> Button. I scaled it so it should fit my mobile screen and anchored it to the center, so it should be easy enough to find.
These are the canvas settings:
Might the UI somehow be hidden behind the camera feed from the AR Session Origin -> AR Camera? Am I missing any steps to anchor the UI to the screen?
As you can probably tell, I am very new to Unity but I feel like I have followed the tutorials for creating a UI, but it simply won't show. If you need more information, please just ask and I will provide.

Not sure, but it sounds like you might need the Canvas Scaler to scale with the screen size. Change the UI Scale Mode to Scale With Screen Size.
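In case it helps, here is a minimal sketch of the same Canvas Scaler settings applied from code rather than in the Inspector; the reference resolution is an assumption for a portrait phone, so adjust it to your target device:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: configure the Canvas Scaler at runtime. Attach to the Canvas object.
public class ConfigureScaler : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1080, 1920); // assumed portrait reference
        scaler.matchWidthOrHeight = 0.5f; // blend between matching width and height
    }
}
```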

I was building the wrong scene. I had two very similar scenes, so when I built the project I didn't realize that nothing had changed and that I was inspecting the wrong scene entirely.
Once I switched to the correct scene, the setup above worked as expected.
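For anyone after the button-driven placement described in the question, a sketch along these lines should work with AR Foundation. The component and field names here are made up for illustration; wire the Button, ARRaycastManager, and prefab in the Inspector:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: place the prefab on a detected plane when the UI button is pressed,
// instead of reacting to arbitrary screen touches.
public class ButtonPlacer : MonoBehaviour
{
    [SerializeField] Button placeButton;              // assign the UI Button
    [SerializeField] ARRaycastManager raycastManager; // on the AR Session Origin
    [SerializeField] GameObject prefab;

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Awake()
    {
        placeButton.onClick.AddListener(PlaceAtScreenCenter);
    }

    void PlaceAtScreenCenter()
    {
        // Raycast from the middle of the screen against detected planes.
        var center = new Vector2(Screen.width / 2f, Screen.height / 2f);
        if (raycastManager.Raycast(center, hits, TrackableType.PlaneWithinPolygon))
        {
            var pose = hits[0].pose;
            Instantiate(prefab, pose.position, pose.rotation);
        }
    }
}
```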

Related

Why aren't my Unity game's UI elements working on the Google Play Store?

I added some UI elements to my game and they work properly in Unity and on my device when connected through Unity Remote.
But after I uploaded the game to Google Play and downloaded it, some text doesn't appear, the buttons aren't pressable, and the background is red for some reason (it's supposed to be blue). Does anyone know what's going on?
It may be an aspect-ratio problem. To fix it, use anchors on your UI elements, either via the anchor presets or via the anchor min and max values in the Inspector. To make the UI scale with the screen, set the Canvas Scaler's UI Scale Mode to Scale With Screen Size (https://docs.unity3d.com/Packages/com.unity.ugui@1.0/manual/HOWTO-UIMultiResolution.html). Let me know if that helps.
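As a sketch of the anchor min/max approach mentioned above, this hypothetical component pins an element to the bottom-centre of its canvas from code (the same thing the anchor presets do in the Inspector):

```csharp
using UnityEngine;

// Sketch: anchor a UI element to the bottom-centre of its canvas so it keeps
// its position on any screen size. Attach to the UI element itself.
public class AnchorBottomCenter : MonoBehaviour
{
    void Awake()
    {
        var rt = GetComponent<RectTransform>();
        rt.anchorMin = new Vector2(0.5f, 0f); // both anchors at bottom-centre:
        rt.anchorMax = new Vector2(0.5f, 0f); // the element keeps a fixed size
        rt.pivot = new Vector2(0.5f, 0f);
        rt.anchoredPosition = new Vector2(0f, 40f); // 40 px above the bottom edge
    }
}
```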

Unity, at higher resolution (2560 x 1440) tester cannot click the navigation elements on the screen even though they are in position

Here is my issue: the way my game works, you click on the edges of the screen to navigate. This is done through an Overlay Canvas using OnPointerClick. I colored the navigation areas and scale the canvas according to screen size, and the navigation areas appear on the tester's screen as they should. Yet they CANNOT click any of the elements for some reason. They have another monitor that works without issue. They've noted that their larger monitor is AMD. I've also confirmed that the ability to navigate in game has not been hindered. I have never had this issue (nor have any other testers) on 1920x1080 monitors.
I have completely exhausted what the issue can be, they cannot even navigate on a bare bones tutorial. Any ideas would be GREATLY appreciated.
Here is an image of their screen and the respective clickable navigation areas.
[img]https://i.imgur.com/NVQkA8g.png[/img]
Unity 2019.2.14f1
Add an Image component without a sprite; without a Graphic component, UI raycasts will not detect any clicks. You can set the alpha to 0 once you see it working.
Also check the anchors of the navigation areas.
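A minimal sketch of that idea, with the component name made up for illustration: an invisible Image that still receives pointer events via the GraphicRaycaster:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: give a click area an invisible Image so the GraphicRaycaster can hit it.
// Alpha 0 keeps it invisible while raycastTarget keeps it clickable.
public class InvisibleClickArea : MonoBehaviour
{
    void Awake()
    {
        var image = gameObject.GetComponent<Image>();
        if (image == null)
            image = gameObject.AddComponent<Image>();

        image.color = new Color(0f, 0f, 0f, 0f); // fully transparent
        image.raycastTarget = true;              // still receives pointer events
    }
}
```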

Unity UI screen size issue

I'm quite new to Unity, so I'm sorry if this is a basic question. I've been trying to set up the UI for a mobile game, but I'm not quite sure how to make the UI lock its position no matter the screen size. I've tried using anchors (though I don't fully understand how to use them properly), I've tried using a Canvas Scaler, and I've looked at the Unity documentation, but I just can't seem to find an answer. The buttons end up off screen or half off screen when I build the game to my device or switch screen sizes in the Game view. Does anyone know how to fix this?
You can set your anchor point by selecting the UI object (such as a button) and then clicking the anchor preset selector in its Rect Transform component and choosing the right preset. You can also hold Shift to set the pivot and/or Alt to move the object to that point at the same time. The object should now be anchored to that point and keep its position even if the resolution changes. You can set a precise position from the Inspector, too: simply adjust the Pos X and Pos Y values, and the object will still adhere to its anchor point.
Note that you might have to play around with the Canvas object's UI Scale Mode and its settings to get the right setup.
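If it helps to see the anchor values directly, here is a hypothetical sketch of anchoring an element to the top-right corner from code, with a fixed offset from that corner (the same result as picking the top-right preset):

```csharp
using UnityEngine;

// Sketch: pin a UI element to the top-right corner of its canvas so it stays
// there at any resolution. Attach to the UI element.
public class AnchorTopRight : MonoBehaviour
{
    void Awake()
    {
        var rt = GetComponent<RectTransform>();
        rt.anchorMin = new Vector2(1f, 1f); // anchor to the top-right corner
        rt.anchorMax = new Vector2(1f, 1f);
        rt.pivot = new Vector2(1f, 1f);
        rt.anchoredPosition = new Vector2(-20f, -20f); // 20 px inset from the corner
    }
}
```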

UI Hololens - HandDraggable Issues

I've recently created a 2D app for the HoloLens. It is a UI panel with several buttons on it. To let the user drag the panel and position it as they want, I added the HandDraggable.cs script (from the HoloToolkit). However, whenever I try to move the panel it also rotates.
To change that, I switched the Rotation Mode from "Default" to "Orient Towards User" and "Orient Towards User and Keep Upright". But then it works even worse: with those settings, whenever I select the panel and try to drag it somewhere, it flies out of my field of view and suddenly disappears.
I wanted to ask if somebody has already implemented the HandDraggable option in a HoloLens UI app and knows how to fix this issue.
I'm currently working on a HoloLens UI for one of my projects, and to manipulate the UI I used the TwoHandManipulatable script, which is built into the MixedRealityToolkit. In that script's Manipulation Mode you can set only "Move" as an option, which lets you move a menu with two hands as well as one. (I wanted a menu that you can also rotate and scale, which works perfectly with this script; you can lock rotation around whichever axes you want to avoid unwanted manipulation.)
As for your HandDraggable script: did you try setting RotationMode to Lock Object Rotation? It sounds like that could solve the problem.
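Assuming the HoloToolkit version of HandDraggable (the exact enum name may differ between toolkit releases), locking the rotation from code would look roughly like this:

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Sketch: lock the panel's rotation while dragging so it only translates.
// Attach to the same object as the HandDraggable component.
public class LockPanelRotation : MonoBehaviour
{
    void Awake()
    {
        var draggable = GetComponent<HandDraggable>();
        draggable.RotationMode = HandDraggable.RotationModeEnum.LockObjectRotation;
    }
}
```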

Hololens: how to render element visible only in AR, but not in mixed reality capture

I'm preparing a presentation in which someone uses the HoloLens and their view is duplicated on a big screen. For duplication it uses the Device Portal's mixed reality capture option (live stream).
I need to render a tool tip to be visible only to the person with the Hololens - but invisible to the people watching it on the big screen.
From what I've seen, the only rendering difference between the two is that I can render black on the live stream (if I omit rendering the alpha channel) while it stays invisible on the HoloLens due to the way its screen works. This is unfortunately useless to me, as I need to show something extra to the HoloLens viewer, not to the big-screen viewers.
Any ideas on how I can make part of the content visible only to the HoloLens user?
I can't use Spectator View due to other constraints (I need the first-person view).
Found a solution, not the best one possible, but usable.
I render the tooltip objects only to the right eye, since only the contents of the left eye are included in the live stream.
For anyone wondering: in a shader there is a built-in value, unity_StereoEyeIndex, which is 0 or 1 depending on the eye. To use this value, it first needs to be set up.
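A minimal sketch of that setup, assuming single-pass instanced stereo rendering (the shader name and output color are placeholders); the stereo macros from UnityCG.cginc make unity_StereoEyeIndex valid in the fragment stage, where the left eye (index 0) is discarded:

```shaderlab
Shader "Custom/RightEyeOnly"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                UNITY_VERTEX_OUTPUT_STEREO
            };

            v2f vert(appdata v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                if (unity_StereoEyeIndex == 0) discard; // left eye: draw nothing
                return fixed4(1, 1, 1, 1); // placeholder color for the right eye
            }
            ENDCG
        }
    }
}
```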
If anyone has an idea how I can do this without sacrificing stereoscopy, I'll be happy to hear about it.