Unity3D: changing camera from perspective to orthographic hides UI elements

In my project I'm using 2 cameras:
the Main Camera for the scene's 3D elements;
a secondary camera for UI elements, which I need(?*) since I want to render 3D objects inside the UI (I followed this YouTube tutorial).
?* - I'm actually not 100% sure if that's the only and/or correct way to render 3D objects in UI, but I haven't found any other solution.
Unity Inspector
Main Camera
Here's the Inspector of my Main Camera (Perspective Projection):
Orthographic Camera
Inspector of one of the orthographic cameras:
UI Camera
Inspector of the UI camera (also orthographic):
Canvas Details
Inspector of the canvas I'm using for the UI:
I'm trying to build a switch between different orthographic views that uses separate cameras to change the projection.
To do that I made a dropdown UI element, which seems to work fine: I switch cameras with GameObject.SetActive(bool), even though the docs say to use the enabled property to switch cameras, because that approach wasn't working for me.
Switch Camera Script
using System.Collections.Generic;
using UnityEngine;

public class ChangeCamera : MonoBehaviour
{
    public List<Camera> cameras;

    // Activates the camera at the given index and deactivates all the others.
    public void SetCamera(int camera)
    {
        for (int i = 0; i < cameras.Count; i++)
        {
            cameras[i].gameObject.SetActive(i == camera);
        }
    }
}
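For context, this is roughly how the dropdown could be wired to that script in code (a minimal sketch with illustrative names, assuming the built-in UnityEngine.UI.Dropdown rather than TMP_Dropdown; the same hookup can also be done in the Inspector via the dropdown's On Value Changed event):

using UnityEngine;
using UnityEngine.UI;

// Sketch: forwards the dropdown's selected index to ChangeCamera.SetCamera.
public class CameraDropdownBinding : MonoBehaviour
{
    public Dropdown dropdown;         // the camera-switching dropdown
    public ChangeCamera changeCamera; // the script above

    void Start()
    {
        dropdown.onValueChanged.AddListener(changeCamera.SetCamera);
        // Keep the initially selected option and the active camera in sync.
        changeCamera.SetCamera(dropdown.value);
    }
}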
Problem
So far so good.
The problem is that, when I switch the camera in Play Mode, the UI disappears.
Demo
1. Perspective Camera
Before switching camera (UI correctly displayed):
2. Orthographic Camera
After switching to an orthographic camera (UI disappeared):
And that's what I see in the Scene window:
Update
Setting the Canvas "Render Mode" to Screen Space - Overlay seems to solve the problem, but then I'm not able to see the 3D objects in the UI:

Turns out it was easier than I thought. Big thanks to derHugo :)
Problem
The problem here is the camera depth:
the Main Camera (perspective) has a depth of -1;
the UI camera has a depth of 0;
the orthographic cameras have a depth of 0.
Since the camera depth controls the rendering order, a camera with a lower depth is rendered before a camera with a higher depth. Therefore, when I switched to one of the orthographic cameras, it was rendered at the same depth as the UI camera, masking the UI completely.
Solution
I changed the orthographic cameras' depth to -1 (the same as the Main Camera), and that solved the problem.
NB: this had nothing to do with the camera using perspective or orthographic projection.
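A minimal sketch of applying the same fix from code (assuming a reference to the UI camera; the field names are illustrative):

using System.Collections.Generic;
using UnityEngine;

// Sketch: keep every scene camera's depth below the UI camera's depth,
// so the UI camera always renders last no matter which scene camera is active.
public class CameraDepthFix : MonoBehaviour
{
    public Camera uiCamera;           // depth 0 in the setup above
    public List<Camera> sceneCameras; // the perspective and orthographic cameras

    void Awake()
    {
        foreach (Camera cam in sceneCameras)
        {
            cam.depth = uiCamera.depth - 1f; // lower depth renders first
        }
    }
}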
Demo
1. Before
2. After

Related

Pixelation of scroll view object, when grayscale mode of the AR camera is turned on

Hello everyone!
I have a grayscale shader/material on the AR camera. The grayscaling works for every object in the scene except the scroll view object, which can be turned on and off. When the scroll view object is in grayscale mode, you can see many pixels. The pixels look like they mirror different parts of the scroll view itself. From looking at the image, does someone have an idea why this could happen? Many thanks in advance~
At first, I thought it was two shaders that are not compatible (the Unity UI Default shader on the scroll view object and a grayscale shader on the AR camera).
I've tried to just return
float4 col = IN.color;
or
float4 col = float4(1, 1, 1, 1);
with a custom, simplified UI Default shader to check for incompatibilities, but the pixels still appear in grayscale mode.
Update: since it is probably not the shaders' fault, it could be the camera. I've found 2 cameras in the project: an AR camera (culling mask: everything except 'UI') and a UI camera (culling mask: 'UI'). When I deactivate the UI camera's Camera component, the pixelation no longer happens in grayscale mode, but then all UI elements are not shown anymore either.
Did not work:
changing the Layer of the scroll view from 'UI' to 'Default'
Worked:
deactivating the camera component of the UI Camera, but UI elements are not visible anymore...
After trying yet another grayscale shader, I found out that deleting the 'Blend SrcAlpha OneMinusSrcAlpha' line from the code helped :) although I still don't know why.

Unity XR Interaction Toolkit multiple cameras shows mirrored pixelated mess

New to Unity, so I'm hoping this is a dumb/quick fix.
I'm using XR Interaction Toolkit's XR Origin camera (device-based if it matters) and want to add a second camera to overlay my UI elements so the canvases appear on top of walls / elements instead of intersecting and being hidden by them.
I've duplicated my main camera and removed all scripts except the Camera component, set it as a child of the previous main camera, and adjusted the culling masks to be UI-only and everything-but-UI. I've also set the depth of the UI camera higher than the main camera's, and set Clear Flags to Depth Only. My UI elements are on the correct layer, and the canvas's camera is the child UI camera (roughly the configuration sketched below).
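For reference, that setup corresponds roughly to the following (a sketch with illustrative names; the actual configuration was done in the Inspector):

using UnityEngine;

// Sketch: configure a second camera as a UI-only overlay on top of the main XR camera.
public class UiOverlayCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera uiCamera;

    void Start()
    {
        int uiLayer = LayerMask.NameToLayer("UI");

        mainCamera.cullingMask = ~(1 << uiLayer);     // everything except UI
        uiCamera.cullingMask = 1 << uiLayer;          // UI only
        uiCamera.clearFlags = CameraClearFlags.Depth; // keep the main camera's image
        uiCamera.depth = mainCamera.depth + 1f;       // render after the main camera
    }
}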
The result is a disaster. When I click the button to bring up my UI overlay, my view becomes a 360° pixelated block of tiny mirrored rectangles.
Any help would be appreciated, or workarounds not using 2 cameras.
Thanks all

Oculus Unity VR - Separate UI Camera Not Working

I have a VR project for which I am trying to render a canvas to be "always on top" of everything using a separate UI camera as noted here.
I made my UICamera object a child of the main camera - which is the centerEyeAnchor object in the OVRCameraRig.
And I set the culling mask for my UICamera to only show its layer and removed that layer from the main camera (CenterEyeAnchor).
But the perspective is weird, and it seems like the UICamera is offset a little bit even though its transform is zeroed out in the Inspector, so I don't know why it's displaying so strangely.
If I set the culling mask to "Everything" for both cameras it's still offset a little.
In general you don't need the UI camera to be a child of CenterEyeAnchor. Move it out to the top level and zero out the coordinates. The Oculus rig might be doing some magic with IPD or something else that screws up the pixel-perfectness of the UI.
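A minimal sketch of that suggestion (assuming a reference to the UI camera; names are illustrative, and the same can be done by hand in the Hierarchy):

using UnityEngine;

// Sketch: detach the UI camera from CenterEyeAnchor and zero out its transform,
// so the rig's per-eye adjustments no longer offset the UI.
public class DetachUiCamera : MonoBehaviour
{
    public Camera uiCamera;

    void Start()
    {
        uiCamera.transform.SetParent(null, false);       // move to the scene root
        uiCamera.transform.localPosition = Vector3.zero;
        uiCamera.transform.localRotation = Quaternion.identity;
    }
}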

how to render static 3d object in certain coordinates unity

This is my first time with Unity. I need to show a 3D object in the device's (VR) camera at certain coordinates of a map. Do you know any links that can help me? It's only a static object that will always be in the same position, and users will be able to see it on their devices.
If this is to be overlaid on a 3D scene, use a RenderTexture, then add the rendered texture to the UI using a Raw Image component.
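A minimal sketch of that RenderTexture approach (assuming a dedicated camera pointed at the 3D object and a Raw Image on the canvas; names are illustrative):

using UnityEngine;
using UnityEngine.UI;

// Sketch: render a dedicated camera into a RenderTexture and display it on a UI RawImage.
public class ObjectPreview : MonoBehaviour
{
    public Camera previewCamera; // camera looking at the 3D object
    public RawImage targetImage; // Raw Image component on the canvas

    void Start()
    {
        var texture = new RenderTexture(512, 512, 16); // width, height, depth buffer bits
        previewCamera.targetTexture = texture;         // the camera now renders into the texture
        targetImage.texture = texture;                 // the UI shows the rendered texture
    }
}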
Another way to do this is to use a canvas set to screen space overlay rendering. You can create a canvas by going to GameObject > UI > Image.
(see page for UI>canvas on Unity manual site, I can only post 2 links as a new user)
The canvas in this rendering mode is described this way on the Unity manual page for the UI Canvas:
Screen Space - Overlay
This render mode places UI elements on the screen rendered on top of the scene. If the screen is resized or changes resolution, the Canvas will automatically change size to match this.
I am assuming this is a UI element over a 3D scene, because you mentioned VR.
Credit: I got some of my links and info from the unity forum thread "What is the best way to display 3D models as UI elements?"
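For completeness, a minimal sketch of setting that render mode from code (illustrative names; normally you would just pick it in the Canvas component's Inspector):

using UnityEngine;

// Sketch: put the canvas into Screen Space - Overlay mode so it draws on top of the scene.
public class OverlayCanvasSetup : MonoBehaviour
{
    public Canvas canvas;

    void Start()
    {
        canvas.renderMode = RenderMode.ScreenSpaceOverlay;
    }
}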

Unity 2D - Camera isn't where it's supposed to be

In Unity 2D, I am working on a game that uses a square-shaped camera that hovers over a tilemap (Zelda NES style). In the camera preview in the Scene view, it shows what it's supposed to:
But when I switch to the game view, this is what it shows:
Here are my camera settings in case it helps.
Does anybody know why this is happening and how I can fix it?
I finally figured out how to fix it. All I needed to do was set "Depth" on the camera settings to 0 (it was at -1 before).
Your scene must have multiple cameras, and one of those cameras' depth must be greater than -1, hence it's showing that camera and not your main camera. If you have just one camera, the depth doesn't matter.
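A small diagnostic sketch along those lines (logs every enabled camera with its depth so you can see which one is actually being drawn on top; the class name is illustrative):

using UnityEngine;

// Sketch: list the enabled cameras and their depths; the highest depth renders last (on top).
public class ListCameraDepths : MonoBehaviour
{
    void Start()
    {
        foreach (Camera cam in Camera.allCameras) // Camera.allCameras = currently enabled cameras
        {
            Debug.Log($"{cam.name}: depth {cam.depth}");
        }
    }
}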