I have a basic working VR application. At a given point, I would like to show a static image that covers the screen. My idea was to load it into a material and then modify the render texture, like so:
void OnRenderObject() {
Graphics.Blit(RenderTexture.active, RenderTexture.active, mat);
}
However, this doesn't really work: the player window shows black, as does the left eye, but the right eye shows the image.
How (and where) should I apply the mat to the render texture so that it looks good on VR AND appears correctly in the player as well?
In my project I'm using 2 cameras:
the Main Camera for the scene 3D elements;
a secondary camera for UI elements, which I need(?*) since I want to render 3D objects inside the UI (I followed this YouTube tutorial).
?* - I'm actually not 100% sure whether that's the only and/or correct way to render 3D objects in the UI, but I haven't found any other solution.
Unity Inspector
Main Camera
Here's the Inspector of my Main Camera (Perspective Projection):
Orthographic Camera
Inspector of one of the orthographic cameras:
UI Camera
Inspector of the UI camera (also orthographic):
Canvas Details
Inspector of the canvas I'm using for the UI:
I'm trying to make a sort of switch between different orthographic projections, using different cameras to change the view.
For that I made a dropdown UI element that seems to work fine: I used the gameObject.SetActive(bool) method to switch cameras (even though the docs say to use the enabled property, that approach wasn't working for me).
Switch Camera Script
using System.Collections.Generic;
using UnityEngine;

public class ChangeCamera : MonoBehaviour
{
    public List<Camera> cameras;

    // Activate only the camera at the given index; deactivate all others.
    public void SetCamera(int camera)
    {
        for (int i = 0; i < cameras.Count; i++)
        {
            cameras[i].gameObject.SetActive(i == camera);
        }
    }
}
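For reference, the dropdown can call SetCamera either via its On Value Changed event in the Inspector or from code. A minimal sketch of the code route, assuming the legacy UI Dropdown component (the component references are assumptions, assigned in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: forwards the dropdown's selected index to ChangeCamera.
public class CameraDropdown : MonoBehaviour
{
    public Dropdown dropdown;          // the UI dropdown; option order matches camera order
    public ChangeCamera changeCamera;  // the ChangeCamera script from above

    void Start()
    {
        // Dropdown.onValueChanged passes the selected option index, which
        // matches SetCamera's int parameter.
        dropdown.onValueChanged.AddListener(changeCamera.SetCamera);
        changeCamera.SetCamera(dropdown.value); // apply the initial selection
    }
}
```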
Problem
So far so good.
The problem is that, when I switch the camera in Play Mode, the UI disappears.
Demo
1. Perspective Camera
Before switching camera (UI correctly displayed):
2. Orthographic Camera
After switching camera to orthogonal (UI disappeared):
And that's what I see in the Scene window:
Update
Setting the Canvas "Render Mode" to Screen Space - Overlay seems to solve the problem, but then I'm not able to see the 3D objects in the UI:
Turns out it was easier than I thought. Big thanks to derHugo :)
Problem
The problem here is the camera depth:
the Main Camera (perspective) has a depth of -1;
the UI camera has a depth of 0;
the orthographic cameras have a depth of 0.
Since the camera depth controls the rendering order, a camera with a lower depth gets rendered before a camera with a higher depth. Therefore, when I switched to one of the orthographic cameras, it was rendered at the same depth as the UI camera, masking the UI completely.
Solution
I changed the orthographic cameras depth to -1 (the same as the Main Camera), and that solved the problem.
NB: this had nothing to do with the cameras using perspective or orthographic projection.
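The same fix can also be applied from code instead of the Inspector; a minimal sketch, to be attached to each orthographic camera:

```csharp
using UnityEngine;

// Sketch: render this camera at the same depth as the Main Camera (-1),
// so the UI camera at depth 0 is still drawn on top of it afterwards.
public class MatchCameraDepth : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Camera>().depth = -1f;
    }
}
```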
Demo
1. Before
2. After
In Unity, I have 2 images in the form of textures that I am merging together (one overlaying the other). I do this in a compute shader and put the result on a RenderTexture. I want this RenderTexture to be everything that the output camera sees.
I have found articles saying to use the camera's replacement shader (Camera.SetReplacementShader), but I couldn't get that to work properly.
Currently I have simply put the RenderTexture onto a UI RawImage that covers the whole canvas, so that the entire camera view is filled. This, however, has a lot of lag and is obviously a suboptimal solution.
So how does one output the RenderTexture (the compute shader result) directly onto the camera? Thanks.
You could probably use OnRenderImage
Event function that Unity calls after a Camera has finished rendering, that allows you to modify the Camera's final image.
and use Graphics.Blit
Copies source texture into destination render texture with a shader.
and do something like e.g.
using UnityEngine;

// This script goes onto your according Camera
public class RenderReplacement : MonoBehaviour
{
    public RenderTexture replacement;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // To overwrite the entire screen; the cast disambiguates the
        // Blit(Texture, RenderTexture) overload from Blit(Texture, Material)
        Graphics.Blit(replacement, (RenderTexture)null);

        // Or to overwrite only what this specific Camera renders
        //Graphics.Blit(replacement, dest);
    }
}
Where
dest
The destination RenderTexture. Set this to null to blit directly to screen. See description for more information.
Note that, as mentioned in the API docs, OnRenderImage is called after this Camera has already finished rendering. So, to make this more efficient (since we basically throw away that render), make sure the camera isn't rendering anything: disable all layers in its Culling Mask and e.g. let it only render a single-color background.
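That "render nothing" setup could be sketched in code as well, attached to the same camera (a sketch, not part of the original answer):

```csharp
using UnityEngine;

// Sketch: make the camera's own render as cheap as possible, since
// RenderReplacement overwrites it anyway in OnRenderImage.
public class CheapCameraSetup : MonoBehaviour
{
    void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.cullingMask = 0;                          // cull every layer: nothing is drawn
        cam.clearFlags = CameraClearFlags.SolidColor; // just clear to a flat color
        cam.backgroundColor = Color.black;
    }
}
```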
I was going to use the VideoPlayer to render to Camera Near Plane, but I also want to display subtitles for the video for the sake of accessibility. I'm wondering what the best way to do that is.
I can't see anything on a canvas if I render to Near Plane. I'd like the video to appear in front of the scene so that I can have the scene there once the video is complete.
Do I need to be using a render texture to achieve this? Seems like a render texture might incur some unnecessary overhead for my purposes, but I could be wrong.
The idea is this:
Far Background - Scene
Background - Black Image (so I can fade to the scene)
Middleground - Video
Foreground - Subtitles
More info:
This is a 2D point and click adventure game with a pre-rendered cutscene.
You could do this with a render texture, placing it in front of the camera at an exact distance and size, but I wouldn't. It would probably need a different camera anyway for lighting or clipping purposes.
I would use a second camera rendering on top of the Main Camera, with the subtitle UI's canvas targeting the second camera's screen space, and clearing depth only. It will render what it sees, but with a totally transparent background. Then you can render your video on either the main camera's near plane or the new subtitle camera's far plane.
You could put your black square in front of this camera too, though it would then be in front of the video. It could be UI on the main camera, or you could stick a third camera in between them. You might have to worry about performance if there are too many cameras, but I have used two or three before with no noticeable performance hit.
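A rough sketch of that two-camera arrangement; the component references and the "UI" layer for subtitles are illustrative assumptions, not part of the original answer:

```csharp
using UnityEngine;

// Sketch: a subtitle camera that draws on top of the main camera with a
// transparent background, feeding a screen-space canvas.
public class SubtitleCameraSetup : MonoBehaviour
{
    public Camera mainCamera;      // renders the scene (and the video on its near plane)
    public Camera subtitleCamera;  // renders only the subtitle UI
    public Canvas subtitleCanvas;  // canvas holding the subtitle text

    void Awake()
    {
        subtitleCamera.depth = mainCamera.depth + 1;          // draw after the main camera
        subtitleCamera.clearFlags = CameraClearFlags.Depth;   // depth-only clear -> transparent background
        subtitleCamera.cullingMask = LayerMask.GetMask("UI"); // assumption: subtitles live on the UI layer

        subtitleCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        subtitleCanvas.worldCamera = subtitleCamera;
    }
}
```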
Robert Mocks's answer is perfectly tenable and makes sense to me. Thank you for that!
What I decided to do instead was use a RawImage so that I wouldn't have to deal with extra cameras. This way I can use the canvas as I normally would and don't have to deal with render textures.
This involves using the API Only setting along with the following code:
rawImage.texture = videoPlayer.texture;
That seems to work well for me.
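Put together, the VideoPlayer in API Only render mode feeding a RawImage might look like the following sketch (the component references are assumptions, assigned in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Video;

// Sketch: display a VideoPlayer's output on a canvas RawImage,
// avoiding extra cameras and manual render textures.
public class VideoOnRawImage : MonoBehaviour
{
    public VideoPlayer videoPlayer; // Render Mode set to API Only
    public RawImage rawImage;       // RawImage stretched over the canvas

    void Start()
    {
        videoPlayer.renderMode = VideoRenderMode.APIOnly;
        // The player's internal texture only exists once the video is prepared,
        // so assign it in the prepareCompleted callback.
        videoPlayer.prepareCompleted += vp => rawImage.texture = vp.texture;
        videoPlayer.Prepare();
    }
}
```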
I am developing an application for the Pico VR headset using Unity. The SDK has a prefab containing two cameras.
I have added a quad and a script to change its texture at runtime.
Since the quad is rectangular and the display is circular, my texture is unable to fill the screen: if the quad is too close to the camera, the corners don't fit on the display, and if it's far enough away that the corners fit inside the headset's display, I can see the background at the edges.
The display of the HMD looks like a circle (maybe spherical, because the texture I apply to the quad looks zoomed in when I run the app on the device). I want to add such an object and configure my cameras so that they see the entire texture of that object and nothing else.
Hi.
I have read about scaling GUIs and image aspect ratios, and even though there are a lot of scripts and tutorials out there, it's hard to find a good way to handle it for so many different devices and screens.
But for my game it doesn't need to be so complicated, I hope. Scaling the 2D GUI was a quick fix: I made the elements a percentage of Screen.width.
The attached jpg shows the only problem I need to fix. The first picture shows the scene in a more or less normal aspect ratio, where I see the whole (test) scene. Pic 1: dragging the view sideways, important game elements disappear off the screen (this is what I don't want). Pic 2, on the other hand, shows Unity scaling the scene so that you see the whole scene no matter how extreme a widescreen I make it.
And this is really what I want: make it scale the scene sideways instead of cropping it out of the picture. Any idea how to make Unity do that?
Create an empty GameObject and attach the following script, so that every time you run the game the screen will adjust:
using UnityEngine;
using System.Collections;

public class ScreenAdjust : MonoBehaviour {
    void Start () {
        // Force a fixed 800x1280 aspect ratio regardless of the actual screen
        Camera.main.aspect = 800f / 1280f;
    }
}
You should use Canvas Scaler
http://docs.unity3d.com/ScriptReference/UI.CanvasScaler.html
Here is a short tutorial:
https://www.youtube.com/watch?v=XkfhxuNr9Es
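The Canvas Scaler settings can also be configured from code; a minimal sketch, attached to the same GameObject as the Canvas (the reference resolution is illustrative, not from the question):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: scale the UI with the screen size instead of using fixed pixel sizes.
public class ScalerSetup : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(800, 1280); // illustrative design resolution
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.MatchWidthOrHeight;
        scaler.matchWidthOrHeight = 0f; // 0 = match width, 1 = match height
    }
}
```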