I am using Unity 2019.3.5f1 with the Universal Render Pipeline (URP).
I am trying to apply post processing in URP to the foreground only (in my case, the players) and leave the background (in my case, just a quad with a texture) as is.
I have tried using the Camera Stack, but it won't work for me because, according to the documentation, an overlay camera can't have post-processing effects.
The only solution I could come up with is to create some sort of custom render pass which would:
Render the background to buffer A.
Render the foreground to buffer B, saving its depth.
Combine the two with a shader that samples both textures plus the depth texture and, based on the depth, picks buffer A or buffer B.
The problem with this, I believe, is that I can't use it together with Unity's post processing.
Any idea what I can do?
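The three steps above could be sketched roughly like this. This is only an illustration under assumptions: `compositeMat` is a hypothetical material whose shader samples `_BgTex` and `_FgDepth` and picks background or foreground per pixel, and note that URP does not invoke `OnRenderImage`, so in practice the blit would have to live inside a custom `ScriptableRendererFeature`; the compositing call itself is the same.

```csharp
using UnityEngine;

// Sketch only: compositeMat is a hypothetical shader that samples
// _MainTex (foreground), _BgTex and _FgDepth, and outputs the foreground
// pixel where the foreground wrote depth, otherwise the background pixel.
public class DepthComposite : MonoBehaviour
{
    public RenderTexture bufferA;   // background color
    public RenderTexture bufferB;   // foreground color (post-processed)
    public RenderTexture fgDepth;   // foreground depth
    public Material compositeMat;   // hypothetical depth-test shader

    void Composite(RenderTexture destination)
    {
        compositeMat.SetTexture("_BgTex", bufferA);
        compositeMat.SetTexture("_FgDepth", fgDepth);
        // Blit the foreground through the compositing material; the shader
        // falls back to _BgTex wherever the foreground depth is empty.
        Graphics.Blit(bufferB, destination, compositeMat);
    }
}
```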
EDIT:
I tried another approach in Unity which does not seem to be working (it might be a bug):
I created three cameras: a Foreground Camera, a Depth Camera (which renders only the foreground), and a Background Camera.
I set up the Depth Camera to render to a render texture, and indeed I now have a render texture with the depth I need.
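For reference, the depth setup described above boils down to something like this (a sketch; the camera reference and resolution are placeholders):

```csharp
using UnityEngine;

public class DepthCameraSetup : MonoBehaviour
{
    public Camera depthCamera; // renders only the foreground layer

    void Start()
    {
        // A depth-format render texture; 24 is the depth buffer bit count.
        var depthRT = new RenderTexture(Screen.width, Screen.height, 24,
                                        RenderTextureFormat.Depth);
        depthCamera.targetTexture = depthRT;
    }
}
```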
From here everything went wrong; there seem to be odd things happening with Unity's new built-in post processing:
The Foreground Camera is tagged MainCamera, and when I enable Post Processing and add an effect, we can indeed see it (as expected).
The Background Camera is essentially a duplicate of the Foreground one, but tagged Untagged, with the same options (Post Processing enabled).
The expected result is that the Background Camera shows effects just like the Foreground one, but no:
When I use a Volume Mask on my background layer, the post processing simply turns off; no effect at all, no matter what (and I have assigned my background to the Background layer).
When I disable the Foreground Camera (or remove its tag) and tag the Background Camera as MainCamera, nothing changes; the post processing still won't work.
When I set the Volume Mask to Default (or Everything), the result shows ONLY in the Scene view; I tried rendering the camera to a RenderTexture, but you can clearly see no effect is applied!
I'm trying to take a screenshot of a MetalKit view (MTKView), as in the answer Take a snapshot of current screen with Metal in swift, but it requires setting framebufferOnly to false on the MTKView, which disables some optimizations according to Apple.
Is there a way to copy the MTKView's texture (e.g. view.currentDrawable.texture) so that I can read the pixels? I don't need to take screenshots often, so it would be a shame to disable the optimization for the entire lifetime of the program.
I tried using MTLTexture.newTextureViewWithPixelFormat and blit buffers, but I still get the same exception about framebufferOnly being true.
When a screenshot is requested, you could toggle framebufferOnly, do one render pass, and then toggle it back.
Alternatively, you can do one render pass targeting a texture of your own, copy that to the drawable's texture (so as not to visibly drop a frame), and then save the contents of your own texture.
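A sketch of the second suggestion in Swift, under assumptions: the offscreen texture uses a storage mode readable by the CPU (on macOS a managed texture additionally needs a blit `synchronize` before reading), and the function names here are placeholders:

```swift
import MetalKit

// Sketch: render into a texture we own, then copy it to the drawable so no
// frame is visually dropped. The offscreen copy stays readable afterwards.
func makeOffscreen(device: MTLDevice, like drawable: CAMetalDrawable,
                   pixelFormat: MTLPixelFormat) -> MTLTexture {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: pixelFormat,
        width: drawable.texture.width,
        height: drawable.texture.height,
        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    return device.makeTexture(descriptor: desc)!
}

// After rendering the scene into `offscreen`:
func copyToDrawable(commandBuffer: MTLCommandBuffer,
                    offscreen: MTLTexture, drawable: CAMetalDrawable) {
    let blit = commandBuffer.makeBlitCommandEncoder()!
    blit.copy(from: offscreen, sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: offscreen.width,
                                  height: offscreen.height, depth: 1),
              to: drawable.texture, destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blit.endEncoding()
}
```

One caveat: Apple's documentation says a framebufferOnly texture cannot be the target of a copy operation either, so if the drawable stays framebufferOnly, the final copy may need to be a textured full-screen draw into the drawable instead of a blit.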
I'm looking at using the camera's TargetTexture (RenderTexture) functionality for less processing-intensive menu transitions, but I'm having some trouble. No texture I render out from the camera has working masks; I can see the whole version of every graphic on the screen. How can I render while keeping the masks intact? It also fails to render any of my spawned prefabs; either that, or they could be hidden behind the unmasked graphics.
Also, I was told to render to a material. None of the shaders I've tried has supported the masks (I don't know if that's really the problem) or looked like the original image; they all look dark and moody, with an occasional weird alpha channel in the upper-left corner. How can I get the image looking just like my screen?
My menus are all on a Screen Space - Overlay canvas, so they shouldn't need to be lit.
In the image below, you can see my scene, including a map. However, in Play mode, due to the color adjustment of the main camera, only a single flat color is visible. I have tried playing with the color settings in every way (changing colors, transparency), but it still doesn't work.
Also, whenever I add a new scene, the camera comes with the same settings and nothing is visible in Play mode. What could be the possible reasons for this problem?
I'm trying to create a cumulative trail effect in a render texture. By cumulative I mean that the render texture would show the last few frames overlaid on each other. Currently, when my camera outputs to a render texture it completely overwrites whatever was there previously.
Let me know if I can clarify anything.
Thanks!
You could set the camera's clear flags to Don't Clear. This prevents the previous frame from being cleared, which creates the overlapping look, a bit like old Flash-style motion trails.
The issue is that everything is kept on screen, so if only the character moves it's fine; but if the camera moves, the effect also applies to the environment and your scene becomes a big blur.
You could use two cameras for the effect, each with different rendering layers: one handles the objects that should not have the effect, and the other handles those that should. This way you can apply the effect to characters and ignore the environment; if that isn't needed, just go with one camera.
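The two-camera setup could look something like this. A sketch only: the layer name "Trailed" and the camera references are assumptions, not something from your project:

```csharp
using UnityEngine;

// Sketch: a "trail" camera that never clears, layered over a normal camera.
public class TrailCameraSetup : MonoBehaviour
{
    public Camera environmentCamera; // renders everything except trailed objects
    public Camera trailCamera;       // renders only the trailed layer

    void Start()
    {
        int trailed = LayerMask.NameToLayer("Trailed"); // hypothetical layer

        environmentCamera.clearFlags = CameraClearFlags.Skybox;
        environmentCamera.cullingMask = ~(1 << trailed);

        trailCamera.clearFlags = CameraClearFlags.Nothing; // "Don't Clear"
        trailCamera.cullingMask = 1 << trailed;
        trailCamera.depth = environmentCamera.depth + 1;   // draw on top
    }
}
```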
I have a post-process effect that uses Unity's Graphics.Blit to pixelate or apply a CRT-screen effect to a scene. There are some UI elements that display afterwards (basically making it not a true post process, but let's set that aside for a moment).
Now, I want to apply a second process that performs a screen wipe and transitions out one scene for another. This time I want to include the UI buttons in the effect.
I've looked into using a render texture and then rendering to a second camera, but I feel like there is a smarter/more accurate way to handle this.
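For context, the first effect described above is presumably something like this minimal Graphics.Blit pass (the material name is a placeholder). In the built-in pipeline, OnRenderImage runs after the scene renders but before Screen Space - Overlay UI, which is exactly why the UI escapes the effect:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class PixelatePostProcess : MonoBehaviour
{
    public Material pixelateMat; // hypothetical pixelation/CRT shader

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Runs before overlay UI is drawn, so the UI is unaffected.
        Graphics.Blit(src, dst, pixelateMat);
    }
}
```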
1st version of this question:
Can I selectively include the screenspace-overlay UI in the script that applies the post process?
or, 2nd version of this question:
Is there a trick to getting the render texture to preserve resolution and display accurately (i.e. without losing quality) when re-rendering to a second camera?
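On the second version: the usual cause of lost quality is a render texture whose size doesn't match the screen, so it gets resampled. A sketch of a screen-matched capture, with the wipe material and `_UITex` property as assumed placeholders; note that a Screen Space - Overlay canvas bypasses cameras entirely, so for the second-camera approach the canvas would have to be switched to Screen Space - Camera:

```csharp
using UnityEngine;

public class UICapture : MonoBehaviour
{
    public Camera uiCamera;  // the canvas must be Screen Space - Camera
    public Material wipeMat; // hypothetical screen-wipe shader

    RenderTexture rt;

    void Start()
    {
        // Match the screen's resolution exactly so no resampling occurs.
        rt = new RenderTexture(Screen.width, Screen.height, 24);
        rt.filterMode = FilterMode.Point; // avoid blurring 1:1 samples
        uiCamera.targetTexture = rt;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        wipeMat.SetTexture("_UITex", rt); // hypothetical shader property
        Graphics.Blit(src, dst, wipeMat);
    }
}
```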