I set the depth for the camera, but it is rendered first - unity3d

I am using Unity 5.4.1f1.
There are multiple cameras in the scene, and there is one camera I would like to draw last for post effects.
But whatever value I set for that camera's depth,
it is rendered first according to the Profiler.
As a result, the objects it renders are not displayed.
How can I change the rendering order?

I solved it myself. When Camera.targetTexture is set, that camera is rendered first, regardless of its depth.
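For reference, a minimal sketch of what this looks like in code (the camera name is a placeholder): depth only orders cameras that render to the screen, and a camera with a targetTexture assigned is rendered before those.

    // Hypothetical camera that should draw last for post effects.
    Camera postCam = GameObject.Find("PostEffectCamera").GetComponent<Camera>();
    postCam.targetTexture = null; // a camera with a targetTexture is rendered before on-screen cameras
    postCam.depth = 100f;         // among on-screen cameras, higher depth renders later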

Related

How to use one camera's depth buffer to render the scene on another camera in URP?

I have two cameras in the same position. I want to use the first camera's depth buffer to render the scene again on the second camera.
Use case:
I want to detect whether a line that I draw in 3D space is rendered behind an object or not. To do that, I first draw the scene normally with the first camera. Then I render the scene again with the second camera, which renders into a texture and culls everything except the line. The line uses a shader that shows red on the parts that are rendered behind an object, but to draw it correctly I need to render the second camera using the first camera's depth buffer. After that I just check whether the texture contains the red color.
Is it possible to do that in URP? Or do you have any other idea how to achieve what I want? (Screenshot: https://i.stack.imgur.com/7hxDo.png)
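Not an answer to the depth-buffer part, but the last step described above (checking the texture for red) could look roughly like this; secondCamera, its target texture, and the color threshold are assumptions:

    // Copy the second camera's render texture into a readable Texture2D and scan for red pixels.
    RenderTexture lineRT = secondCamera.targetTexture;
    RenderTexture prev = RenderTexture.active;
    RenderTexture.active = lineRT;
    Texture2D readback = new Texture2D(lineRT.width, lineRT.height, TextureFormat.RGBA32, false);
    readback.ReadPixels(new Rect(0, 0, lineRT.width, lineRT.height), 0, 0);
    readback.Apply();
    RenderTexture.active = prev;

    bool lineIsOccluded = false;
    foreach (Color c in readback.GetPixels())
    {
        if (c.r > 0.9f && c.g < 0.1f && c.b < 0.1f) { lineIsOccluded = true; break; }
    }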

Unity 2021.3 transparent render texture not working

I'm currently trying to render an object separately from the rest of the scene and pixelate it in the process. I used two cameras: one that renders everything except the object, and another that renders only that object (using layers). The output of the secondary camera is sent to a render texture that is displayed in the UI, covering the whole screen. When I enter play mode it looks alright, but as soon as I move the camera, the last frame of the secondary camera doesn't get cleared, so I'm left with a trail of past frames.
This is how it looks at first:
And this is how it looks when I move the camera:
Any help? Thanks!
Found a fix!
Make sure to set the Background Type of the secondary camera to "Solid Color", not "Uninitialized".
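If you prefer to set this from a script instead of the Inspector, a rough equivalent (the fully transparent color is an assumption, so that only the pixelated object shows up in the render texture):

    // Equivalent of Background Type = Solid Color on the secondary camera.
    Camera secondaryCam = GetComponent<Camera>();
    secondaryCam.clearFlags = CameraClearFlags.SolidColor;
    secondaryCam.backgroundColor = new Color(0f, 0f, 0f, 0f); // clear to transparent black every frame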

Make a URP renderer feature affect only the current camera

I'm making a renderer feature with a single ScriptableRenderPass. This renderer feature is present on a single 2D Renderer, like so:
and I have a single camera that uses this renderer and only affects a particular layer:
The camera renders only what is on the PixelPerfect layer, ignoring everything else. This camera is in a camera stack, like so:
But somehow the renderer feature on the Downscaled Camera also affects the Background Camera. I suspect the render pass somehow sees everything from the previous cameras, but I have no idea how that even makes sense, because when I single out only the downscaled camera, I only see the layer that I have set the camera to cull.
Here's how the Downscaled Camera is set-up:
I'm Blitting to the renderingData.cameraData.renderer.cameraColorTarget in Execute.
I've found this post on the GameDev StackExchange; it predates URP and scriptable renderer features, but it describes my problem perfectly. Any thoughts?
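One thing worth trying (only a sketch, and the camera-name check is an assumption rather than a confirmed fix) is to skip enqueueing the pass for every camera except the downscaled one, inside the feature's AddRenderPasses:

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class DownscaleFeature : ScriptableRendererFeature
    {
        class DownscalePass : ScriptableRenderPass
        {
            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                // The actual downscale/blit work would go here (omitted).
            }
        }

        DownscalePass pass;

        public override void Create()
        {
            pass = new DownscalePass();
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            // Only enqueue the pass for the intended camera; matching by name is an assumption.
            if (renderingData.cameraData.camera.name != "Downscaled Camera")
                return;

            renderer.EnqueuePass(pass);
        }
    }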
I am having a similar issue where I have selected a custom renderer on my camera, but it refuses to use my custom renderer and only uses the default. I have yet to figure out why.
EDIT: For future reference, I fixed my problem. It turns out there was no problem: the Scene view (and, consequently, any camera previews for cameras you have selected) will always render with the default renderer. My render target was being rendered with the correct renderer.

Unity VideoPlayer with Subtitles

I was going to use the VideoPlayer to render to Camera Near Plane, but I also want to display subtitles for the video for the sake of accessibility. I'm wondering what the best way to do that is.
I can't see anything on a canvas if I render to Near Plane. I'd like the video to appear in front of the scene so that I can have the scene there once the video is complete.
Do I need to be using a render texture to achieve this? Seems like a render texture might incur some unnecessary overhead for my purposes, but I could be wrong.
The idea is this:
Far Background - Scene
Background - Black Image (so i can fade to scene)
Middleground - Video
Foreground - Subtitles
More info:
This is a 2D point and click adventure game with a pre-rendered cutscene.
You could do this with a render texture: place it in front of the camera at an exact distance and size. But I wouldn't; it would probably need a separate camera anyway for lighting or clipping purposes.
I would use a second camera rendering on top of the Main Camera, with the subtitle UI's canvas set to Screen Space - Camera and targeting the second camera, which clears depth only. It will render what it sees, but with a totally transparent background. Then you can render your video on either the main camera's near plane or the new subtitle camera's far plane.
You could put your black square in front of this camera too, though it would then be in front of the video. It could instead be UI on the main camera, or you could stick a third camera between them. You might have to worry about performance if there are too many cameras, but I have used two or three before with no noticeable performance hit.
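A minimal sketch of that wiring via script, in case it helps (the object references and the layer name are assumptions):

    // Second camera that draws only the subtitle canvas on top of the main camera.
    Camera subtitleCam = subtitleCameraObject.GetComponent<Camera>();
    subtitleCam.clearFlags = CameraClearFlags.Depth;   // "Depth only": transparent over the main camera
    subtitleCam.depth = Camera.main.depth + 1;         // render after the main camera
    subtitleCam.cullingMask = LayerMask.GetMask("UI"); // only draw the subtitle layer

    Canvas subtitleCanvas = subtitleCanvasObject.GetComponent<Canvas>();
    subtitleCanvas.renderMode = RenderMode.ScreenSpaceCamera;
    subtitleCanvas.worldCamera = subtitleCam;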
Robert Mocks's answer is perfectly tenable and makes sense to me. Thank you for that!
What I decided to do instead was use a RawImage so that I wouldn't have to deal with extra cameras. This way I can use the canvas as I normally would and don't have to deal with render textures.
This involves using the API Only setting along with the following code:
rawImage.texture = videoPlayer.texture;
That seems to work well for me.
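Roughly, the full setup looks like this; the field names are mine and it is only a sketch:

    using UnityEngine;
    using UnityEngine.UI;
    using UnityEngine.Video;

    public class VideoSubtitlePlayer : MonoBehaviour
    {
        public VideoPlayer videoPlayer; // Render Mode set to API Only
        public RawImage rawImage;       // RawImage on the canvas, behind the subtitle text

        void Start()
        {
            videoPlayer.renderMode = VideoRenderMode.APIOnly;
            videoPlayer.prepareCompleted += OnPrepared;
            videoPlayer.Prepare();
        }

        void OnPrepared(VideoPlayer vp)
        {
            rawImage.texture = vp.texture; // the texture is only valid once the player is prepared
            vp.Play();
        }
    }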

Offsetting the rendered result of a camera in Unity

I am trying to make everything that is rendered by my perspective "Camera A" appear 100 points higher. This is because my app has an interface with an open space in the upper part.
My app uses face detection to map face movement onto an in-game avatar. To do this I compute the model-view matrix and assign it to the camera's "worldToCameraMatrix".
So far this works well, but everything is rendered with the center as the origin. Now I want to move this origin a certain distance up so that it matches my interface.
Is there a way to tell Unity to offset the rendered camera result?
An alternative I thought about is to render into a texture; then I can just move the texture itself. But I thought there must be an easier way.
By the way, my main camera is orthographic, and I use that one to render the camera texture. In this case, simply moving the quad that displays the texture up does the trick.
I found a property called "pixelRect"; its description says:
Where on the screen is the camera rendered in pixel coordinates.
However, moving the center up seems to scale down my objects.
You can set the viewport rect / ortho size so that it's offset, or you can render to a render texture and render that as an overlay with an offset or a difference in scale (see the sketch below).
Cheers
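A sketch of the viewport-rect offset mentioned above (the 100-point value comes from the question; everything else is an assumption, and I haven't checked how it interacts with the camera's aspect):

    // Shift everything Camera A renders up by roughly 100 pixels by moving its viewport rect.
    Camera camA = GetComponent<Camera>();
    float offset = 100f / Screen.height;      // convert pixels to normalized viewport units
    camA.rect = new Rect(0f, offset, 1f, 1f); // same width and height, shifted upward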