Unity 2D Renderer URP Makes my UI Camera have a background color

I am trying to add lighting to my 2D project, so I created a new Universal Render Pipeline asset that uses the 2D Renderer. I have 2 separate cameras, one for UI and one for game elements. The moment I add the pipeline to my project, my UI camera gets a background color, even though its Clear Flags are set to Depth Only.
What I've tried so far:
Turning off Post-Processing; it did not work.
Camera stacking, but it is not available on the 2D Renderer, according to the docs.
Making the camera background color transparent, but the Alpha channel does not seem to affect the color in any way.
The Unity version I'm working on is 2019.3.4f1.

I found the solution. So just in case anyone runs into this problem in the future: make sure you have the latest version of URP in the Package Manager; then you will have the option to use camera stacking, which works just fine for this case.
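For anyone wiring this up in code rather than the Inspector, here is a minimal sketch of stacking a UI camera on a base camera, assuming a URP version whose 2D Renderer supports camera stacking; the `baseCamera` and `uiCamera` fields are placeholders for your own cameras:

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Minimal sketch: adds an overlay camera (e.g. the UI camera) to the base
// camera's stack at runtime. Assumes both cameras already carry the
// UniversalAdditionalCameraData component that URP adds to cameras.
public class UiCameraStacker : MonoBehaviour
{
    [SerializeField] private Camera baseCamera; // the main game camera (placeholder)
    [SerializeField] private Camera uiCamera;   // the camera rendering the UI (placeholder)

    private void Awake()
    {
        // The UI camera must be marked as an Overlay camera...
        uiCamera.GetComponent<UniversalAdditionalCameraData>().renderType =
            CameraRenderType.Overlay;

        // ...and added to the Base camera's stack, so its output is composited
        // on top instead of clearing to a background color.
        baseCamera.GetComponent<UniversalAdditionalCameraData>().cameraStack.Add(uiCamera);
    }
}
```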

Related

Transparent video in Android AR Scene

I am looking to solve the problem of displaying a transparent video in AR scenes using Unity ARFoundation on the Android platform.
I mean, precisely the simple effect presented for the iOS platform: https://www.youtube.com/watch?v=vralbqaeqrk
In a normal 3D application I use a transcoded .webm file and achieve the intended result.
Using the same solution in an AR (ARCore) scene, the background color is visible.
Can specialized/dedicated assets help? Or should I stop dreaming about such a result using Unity and Android?
You need to make sure that your video clip has an alpha channel; then just enable the Keep Alpha property in the video import settings and hit Apply. It will only work if your video actually contains alpha.
Then attach a Video Player component to the GameObject that has a Mesh Renderer.
Make sure the Render Mode is Material Override; the Material Property tells Unity which map of the material the video output will be displayed on.
If you want to play it on UI, just create a Render Texture, assign it to a RawImage, and assign it to the Video Player with the following settings.
Lastly, make sure the Render Texture you created supports alpha.
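As a sketch of the UI route, assuming a clip imported with Keep Alpha and a RawImage already in the scene (both references below are placeholders):

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Video;

// Minimal sketch: plays a transparent (.webm with alpha) clip into a
// RenderTexture and shows it on a RawImage.
public class TransparentVideoUI : MonoBehaviour
{
    [SerializeField] private VideoClip clip;        // placeholder: your imported clip
    [SerializeField] private RawImage targetImage;  // placeholder: your UI element

    private void Start()
    {
        // ARGB32 keeps the alpha channel that a default render texture would drop.
        var rt = new RenderTexture(1920, 1080, 0, RenderTextureFormat.ARGB32);

        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = rt;
        player.isLooping = true;

        targetImage.texture = rt; // the RawImage blends using the texture's alpha
        player.Play();
    }
}
```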

Is there a way to apply bloom to a specific object?

I've noticed that if I uncheck the "Is Global" checkbox on the Bloom effect of a Post Processing Volume, the bloom doesn't apply to the layer I've set in the Post-process Layer, even though I adjusted it to affect one layer in particular. In fact, it doesn't apply at all. Either it sets bloom for everything in the scene, or for nothing.
Extras: I have no pipeline asset; maybe that's the issue, but I've tried setting up an LWRP one (because for some reason URP doesn't exist in my 2019.2.17f1 version) and it just breaks all the materials I use for Particle Systems (Particles/Standard Unlit), even if I upgrade them to LWRP materials.
Any ideas? If it's possible to deliver a solution to both of these problems, excellent, but the main one is the title question.
Note: the "camera stacking" approach mentioned here applies only to Unity URP. For the Unity Built-in Render Pipeline, or Unity versions prior to 2019.3.0f3, you can achieve a similar effect with RenderTextures. Though Unity HDRP has no explicit "camera stacking" feature, it does allow for the same net effect via the HDRP-specific Graphics Compositor.
"Is there a way to apply bloom to a specific object?"
You could take a leaf out of Unity's camera stacking, whereby one set of objects is rendered by one camera and another set by a different camera. The results of each camera's rendering are merged together automatically by Unity and presented to the screen.
But don't take my word for it, this is what Unity has to say:
In the Universal Render Pipeline (URP), you use Camera Stacking to layer the output of multiple Cameras and create a single combined output. Camera Stacking allows you to create effects such as a 3D model in a 2D UI, or the cockpit of a vehicle. Tell me more...
...and (my emphasis):
A Camera Stack overrides the output of the Base Camera with the combined output of all the Cameras in the Camera Stack. As such, anything that you can do with the output of a Base Camera, you can do with the output of a Camera Stack. For example, you can render a Camera Stack to a given render target, apply post-process effects, and so on. Tell me more...
When you consider that each camera has the potential for its own rendering settings (including bloom) the solution is clear:
ensure there are two cameras in the scene, say My Default Camera and Bloomin' Camera
create a custom layer called "Bloom"
assign whatever objects you want to be rendered with bloom to the Bloom layer
set up the camera stack as per "Adding a Camera to a Camera Stack"
My Default Camera should be set to "Base"
Bloomin' Camera should be set to "Overlay"
add Bloomin' Camera to My Default Camera's Stack settings
ensure that the Culling Mask for My Default Camera has the Bloom layer unticked. This ensures that the objects to be bloomed are drawn only once, on the Bloom layer
ensure that the Culling Mask for Bloomin' Camera has a single ticked entry for the Bloom layer and nothing else. You don't want to double up on rendering, otherwise you will get funky and undesirable z-order effects apart from hurting game performance. Other layers will be rendered by My Default Camera
apply bloom effects to Bloomin' Camera
run game, celebrate (a script version of these steps is sketched below)
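Here is that script version: a minimal sketch of the same steps, assuming a layer named "Bloom" already exists and that your URP version supports camera stacking; the camera fields are placeholders for the two cameras named above:

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Minimal sketch of the checklist above, done in code.
public class BloomCameraSetup : MonoBehaviour
{
    [SerializeField] private Camera defaultCamera; // "My Default Camera" (placeholder)
    [SerializeField] private Camera bloomCamera;   // "Bloomin' Camera" (placeholder)

    private void Awake()
    {
        int bloomLayer = LayerMask.NameToLayer("Bloom");

        // Base camera renders everything except the Bloom layer.
        defaultCamera.cullingMask = ~(1 << bloomLayer);

        // Overlay camera renders only the Bloom layer, avoiding double rendering.
        bloomCamera.cullingMask = 1 << bloomLayer;
        bloomCamera.GetComponent<UniversalAdditionalCameraData>().renderType =
            CameraRenderType.Overlay;

        // Stack the overlay on the base so the two outputs are combined.
        defaultCamera.GetComponent<UniversalAdditionalCameraData>().cameraStack.Add(bloomCamera);
    }
}
```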
The "Is Global" option might sound confusing at first. Ultimately it does not determine where the post-processing effect is applied, but when. If it is set to global, the effect is always applied; otherwise you can set a layer and a boundary that triggers the effect.
The general approach is to set emission only on the materials where you want the effect to take place. If your materials are otherwise too dark, you should adjust the ambient lighting settings.
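For example, a minimal sketch of enabling emission on a renderer's material at runtime; the color and intensity are arbitrary examples, and the property names assume a Standard or URP Lit shader:

```csharp
using UnityEngine;

// Minimal sketch: turn on emission so the bloom effect picks this object up.
public class MakeEmissive : MonoBehaviour
{
    private void Start()
    {
        // Note: .material instantiates a per-object copy of the shared material.
        var material = GetComponent<Renderer>().material;
        material.EnableKeyword("_EMISSION");

        // An HDR intensity above 1 is what typically drives bloom.
        material.SetColor("_EmissionColor", Color.cyan * 4f);
    }
}
```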
At least in URP there are some workarounds for older versions, like this one, but as far as I know this does not work in 2020.3, since they made some changes to URP and the camera system.
Edit: in the video, Chris Hull gave an answer for how to do it with the new system:
@Mezzanine Add your actual game objects to a created bloom layer. Create two cameras and set one of them to cull everything except that bloom layer you made. Set the other to only cull the bloom layer. Then you can set your camera to overlay and it will be added to the other. You can then use separate post-process stacks on these cameras. Note that you can only bloom objects in the background with this technique, as if you add bloom to an overlay camera, for some reason it just adds bloom to everything rather than just the things in that camera's view. Doesn't make much sense and makes the purpose of the layers redundant in my opinion. If you can find a way to add post-process to the overlay camera before it is added to the final image, do let me know.
I have not tested that yet, but I presume it's still valid.

Are masked sprites and Stencil Buffers in Unity only visible on one eye if deployed on HoloLens2?

Is it possible that Sprite Renderers in a HoloLens 2 Unity project which are masked via a SpriteMask are only visible in one eye in the final HoloLens 2 build (UWP via Visual Studio 2019, deployed on a HoloLens 2 device)?
I also experienced the same behaviour with elements that are masked with a stencil shader.
I am using a 24-bit depth buffer for my Unity project, if that helps; otherwise the stencil shader wouldn't work.
To display certain objects to one eye only, different rendering configurations require different approaches.
With Multi Pass, you should be able to set a camera to render to only one eye (Camera component -> Output -> Target Eye), so doing per-eye work is super easy. With Stereo Instancing, you can do it in the shader, but you will need to work with the projection matrix of the current eye.
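For the Multi Pass case, here is a minimal sketch of restricting a camera to a single eye from code; it is the same setting as the Target Eye dropdown mentioned above:

```csharp
using UnityEngine;

// Minimal sketch: make this camera render to the left eye only.
// Attach to a camera; works with Multi Pass stereo rendering.
public class LeftEyeOnlyCamera : MonoBehaviour
{
    private void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.stereoTargetEye = StereoTargetEyeMask.Left; // Left, Right, Both or None
    }
}
```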

Unity LWRP does not see some layers

I have a 2D Unity project with two cameras: the main one and one designed for a parallax effect.
After I installed LWRP to set up lighting, the second camera stopped showing its layer in the game.
Is there a way to fix this?
The practice of rendering two cameras at the same time as you are describing, "camera stacking", is not currently supported on LWRP or URP. There is some discussion about adding support for it again.
You could try using a camera to render onto a render texture and display that as your background.
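A minimal sketch of that workaround, assuming the parallax camera and a RawImage stretched behind the rest of the scene (both references are placeholders):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: the parallax camera renders into a RenderTexture,
// which a RawImage then displays as the background.
public class ParallaxBackground : MonoBehaviour
{
    [SerializeField] private Camera parallaxCamera;    // placeholder: your second camera
    [SerializeField] private RawImage backgroundImage; // placeholder: your background UI

    private void Start()
    {
        var rt = new RenderTexture(Screen.width, Screen.height, 16);
        parallaxCamera.targetTexture = rt; // the camera now draws offscreen
        backgroundImage.texture = rt;      // the UI shows it behind everything else
    }
}
```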
As for the 2D lights: they are present but greyed out. You should be able to enable experimental feature use in the Player settings for the project.

Develop 2D game Inside Canvas Scaler

I'm new to Unity and I've realized that it's difficult to do a multi-resolution 2D game in Unity without the paid 3rd-party plugins available on the Asset Store.
I've made some tests and I'm able to get multi-resolution support this way:
1- Put everything from the UI (buttons etc.) inside a Canvas object in Render Mode Screen Space - Overlay, with a 16:9 reference resolution and fixed width.
2- Put the rest of the game objects inside a GameObject called GameManager with a Canvas Scaler component, in Render Mode Screen Space - Camera, with a 16:9 reference resolution, fixed width and the Main Camera attached. After that, all game objects like the player, platforms etc. inside GameManager need to have a RectTransform component, a CanvasRenderer component and an Image component, for example.
Can I continue developing the game that way, or is this the wrong way to do things?
Regards
Also don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. No, it's not. The GameObject helpers for GUI were bad and are deprecated, but the API behind OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as the whole Unity UI is built on top of them anyway.
I have made a few games using just these, using Unity as a very overengineered multiplatform API for drawing quads.
There is also GL if you want something more.
Just remember: there will be no built-in physics, particle effects, pathfinding or anything else, just a simple way to draw stuff on the screen. You will have total control over what is drawn, and this is both a good and a bad thing, depending on what you want to do.
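A minimal sketch of that style of drawing, using only the immediate-mode GUI calls (the texture reference is a placeholder):

```csharp
using UnityEngine;

// Minimal sketch: immediate-mode GUI drawing a texture and a label each
// frame, with no Canvas or per-object GameObjects needed.
public class LegacyGuiExample : MonoBehaviour
{
    [SerializeField] private Texture2D sprite; // placeholder: any texture asset

    private void OnGUI()
    {
        // Draw a texture at a fixed screen rectangle.
        GUI.DrawTexture(new Rect(10, 10, 128, 128), sprite);

        // Draw some text next to it.
        GUI.Label(new Rect(10, 150, 200, 30), "Score: 42");
    }
}
```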
I would not recommend using the Canvas Scaler to develop a complete game. The intended purpose of the Canvas Scaler is to create menus, and you should use it to create menus only.
2D games created without the Canvas Scaler don't cause many problems (mostly they don't cause any problems) on multiple resolutions.
So, your step 1 is correct, but for step 2 you don't need to have a Canvas Scaler component attached.
Do remember to mark your scene as 2D (not necessary) and to set your camera to orthographic (necessary) while developing 2D games.
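For reference, a minimal sketch of configuring the camera for 2D from code; the size value is an arbitrary example:

```csharp
using UnityEngine;

// Minimal sketch: configure the main camera for a 2D game.
public class Setup2DCamera : MonoBehaviour
{
    private void Awake()
    {
        var cam = Camera.main;
        cam.orthographic = true;   // required for 2D, as noted above
        cam.orthographicSize = 5f; // vertical half-extent in world units, regardless of resolution
    }
}
```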