Unity 2D and UI rendering issues - unity3d

I have been developing a game for a little while now and have started to notice rendering issues in Unity3D. The poop emoji is supposed to appear in front of the floor tile, but that isn't the case here, and there are more glitches like this depending on how you position the camera. Both sprites are on the default layer and I have no idea what the cause would be.

If multiple sprites are on the same sorting layer with the same Order in Layer value, you can't be sure which one will be drawn on top of the others. It can change at any time depending on the camera position, or even between game sessions.
To make the rendering deterministic, use Order in Layer and give the emoji a higher value than the floor.
Note: I'm using Unity 2020.2.0b. Your Inspector may look different, but the Order in Layer field should be there.
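If you ever need to enforce this from a script rather than the Inspector, a minimal sketch could look like the following (the field names and the "Default" sorting layer are just assumptions; assign the references in the Inspector):

using UnityEngine;

// Illustrative sketch: give the emoji a strictly higher Order in Layer than the floor
// so the draw order is deterministic, no matter where the camera sits.
public class SortingOrderSetup : MonoBehaviour
{
    [SerializeField] private SpriteRenderer floorRenderer; // hypothetical reference, assign in Inspector
    [SerializeField] private SpriteRenderer emojiRenderer; // hypothetical reference, assign in Inspector

    private void Awake()
    {
        // Keep both on the same sorting layer, but separate them by Order in Layer.
        floorRenderer.sortingLayerName = "Default";
        emojiRenderer.sortingLayerName = "Default";

        floorRenderer.sortingOrder = 0;
        emojiRenderer.sortingOrder = 1; // higher value draws on top
    }
}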

Related

World space UI gets blurred by the Depth of Field effect when interacting with the skybox

I'm making a game in which players have world-space UI name tags above them, but when I use the Depth of Field effect and there is no object behind the UI tag (only the skybox is rendered behind it), the UI gets heavily blurred for no apparent reason.
I tried changing the World Camera on the name tag's canvas, but it didn't work.
I haven't found any solutions to this problem, since people don't usually use Depth of Field.
I'm using the Unity Universal Render Pipeline and its Volume component.
This is expected: common transparent shaders don't write to the depth buffer, so the DOF effect shader cannot know the distance to these HUD elements.
To solve this, enable depth writing for these objects. Check this example: https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal#15.0/manual/renderer-features/how-to-custom-effect-render-objects.html

How to make one game object appear in front of another in Unity

I heard that game objects are drawn in the same order they appear in the Hierarchy, but in my case that doesn't seem to work.
For example, I want the wolf to be placed in front of the rabbit, but it doesn't work.
Is there some way to sort objects according to the hierarchy, or can I only do it with layers?
The hierarchy sorting you speak of only works inside a Canvas, i.e. with RectTransforms and Images. However, I guess you want to use Sprites. The SpriteRenderer component has an Order in Layer property. Plus, Sprites are more lightweight than Images with transparency. Alternatively, you could move the Transforms closer to or further from the camera (this works even if your game is 2D and uses an orthographic camera). If everything else fails, you could change the render queue of the materials.
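As an illustration of those options, here is a hedged sketch (the wolf/rabbit references are hypothetical and would be assigned in the Inspector; treat the three blocks as alternatives rather than steps to combine):

using UnityEngine;

// Illustrative alternatives for forcing the wolf to draw in front of the rabbit.
public class DrawInFront : MonoBehaviour
{
    [SerializeField] private SpriteRenderer wolf;   // hypothetical references
    [SerializeField] private SpriteRenderer rabbit;

    private void Start()
    {
        // Option 1: explicit Order in Layer on the SpriteRenderer.
        wolf.sortingOrder = rabbit.sortingOrder + 1;

        // Option 2: move the wolf slightly closer to the camera. With identical sorting
        // settings, sprites fall back to distance-based sorting, even on an orthographic camera.
        wolf.transform.position += new Vector3(0f, 0f, -0.1f);

        // Option 3 (last resort): raise the material's render queue so it is drawn later.
        // Note: accessing .material creates a per-renderer material instance at runtime.
        wolf.material.renderQueue = rabbit.material.renderQueue + 1;
    }
}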

Is there a way to apply bloom to a specific object?

I've noticed that if I uncheck the "Is Global" checkbox on the Bloom effect of a Post-Processing Volume, the bloom doesn't apply to the layer I've set in the post-processing layer, even though I adjusted it to affect that one in particular. In fact, it doesn't apply at all: either it sets bloom for everything in the scene, or for nothing.
Extra: I have no render pipeline asset; maybe that's the issue. I've tried setting up an LWRP one (because for some reason URP doesn't exist in my 2019.2.17f1 version), but it just breaks all the materials I use for Particle Systems (Particles/Standard Unlit), even if I upgrade them to LWRP materials.
Any ideas? If it's possible to provide a solution to both of these problems, excellent, but the main one is the title question.
Note: The "camera stacking" approach mentioned here applies only to Unity URP. For the Unity Built-in Render Pipeline or Unity versions prior to 2019.3.0f3 you can achieve a similar effect with RenderTextures. Though Unity HDRP has no explicit "camera stacking" feature it does allow for the same net effect via the HDRP-specific Graphics Compositor.
"Is there a way to apply bloom to a specific object?"
You could take a leaf out of Unity's camera stacking book, whereby one set of objects is rendered by one camera and another set by a different camera. The results of each camera's rendering are merged together automatically by Unity and presented to the screen.
But don't take my word for it, this is what Unity has to say:
In the Universal Render Pipeline (URP), you use Camera Stacking to layer the output of multiple Cameras and create a single combined output. Camera Stacking allows you to create effects such as a 3D model in a 2D UI, or the cockpit of a vehicle. Tell me more...
...and (my emphasis):
A Camera Stack overrides the output of the Base Camera with the combined output of all the Cameras in the Camera Stack. As such, anything that you can do with the output of a Base Camera, you can do with the output of a Camera Stack. For example, you can render a Camera Stack to a given render target, apply post-process effects, and so on. Tell me more...
When you consider that each camera has the potential for its own rendering settings (including bloom) the solution is clear:
ensure there are two cameras in the scene, say My Default Camera and Bloomin' Camera
create a custom layer called "Bloom"
assign whatever objects you want to be rendered with a bloom to layer Bloom
set up the camera stack as per "Adding a Camera to a Camera Stack" (a script sketch of these steps follows the list).
My Default Camera should be set to "Base":
Bloomin' Camera should be set to overlay:
Add Bloomin' Camera to My Default Camera Stack settings:
ensure that the Culling mask for My Default Camera has the Bloom layer unticked. This ensures that the objects to be bloomed are only drawn once on the Bloom layer
ensure that the Culling mask for Bloomin' Camera has a single ticked entry for the Bloom layer and nothing else. You don't want to double-up on rendering otherwise you will get funky and undesirable z-order effects apart from hurting game performance. Other layers will be rendered by My Default Camera.
apply bloom effects to camera Bloomin' Camera
run game, celebrate
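If you prefer to wire those steps up from a script rather than the Inspector, here is a hedged sketch using URP's UniversalAdditionalCameraData; the camera references and the "Bloom" layer are the ones assumed in the steps above:

using UnityEngine;
using UnityEngine.Rendering.Universal;

// Illustrative sketch of the camera-stack setup described in the steps above.
public class BloomStackSetup : MonoBehaviour
{
    [SerializeField] private Camera baseCamera;   // "My Default Camera" (Render Type: Base)
    [SerializeField] private Camera bloomCamera;  // "Bloomin' Camera" (Render Type: Overlay)

    private void Start()
    {
        int bloomLayer = LayerMask.NameToLayer("Bloom"); // assumes a custom "Bloom" layer exists

        // Base camera renders everything except the Bloom layer...
        baseCamera.cullingMask = ~(1 << bloomLayer);
        // ...and the overlay camera renders only the Bloom layer, so nothing is drawn twice.
        bloomCamera.cullingMask = 1 << bloomLayer;

        // Mark the second camera as an overlay and push it onto the base camera's stack.
        var baseData = baseCamera.GetComponent<UniversalAdditionalCameraData>();
        var bloomData = bloomCamera.GetComponent<UniversalAdditionalCameraData>();
        bloomData.renderType = CameraRenderType.Overlay;
        baseData.cameraStack.Add(bloomCamera);
    }
}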
The Is Global toggle might sound confusing at first. Ultimately it does not determine where the post-processing effect is applied, but when it is applied. If it is set to Global, the effect is always applied; otherwise you can set a layer and a boundary (the volume's collider) that triggers the effect.
The general approach is to set emission only on the materials where you want the effect to appear. If your materials are otherwise too dark, adjust the ambient lighting settings.
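For example, a minimal (hedged) way to switch emission on from code for a Standard or URP Lit material; the intensity and color are just placeholder values:

using UnityEngine;

// Illustrative: make a single renderer emissive so a thresholded bloom picks it up
// while the rest of the scene stays below the threshold.
public class MakeEmissive : MonoBehaviour
{
    [SerializeField] private Renderer target;                  // hypothetical reference
    [SerializeField] private Color emissionColor = Color.white;
    [SerializeField] private float intensity = 2f;             // keep this above the bloom threshold

    private void Start()
    {
        var mat = target.material;            // note: creates a material instance at runtime
        mat.EnableKeyword("_EMISSION");       // emission keyword used by Standard/URP Lit shaders
        mat.SetColor("_EmissionColor", emissionColor * intensity);
    }
}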
At least in URP there are some workarounds for older versions like this one, but AFAIK they no longer work in 2020.3, since changes were made to URP and the camera system.
Edit: on the video, Chris Hull gave an answer for how to do it with the new system:
#Mezzanine Add your actual game objects to a created bloom layer. Create two cameras and set one of them to cull everything except that bloom layer you made; set the other to cull only the bloom layer. Then you can set your camera to overlay and it will be added to the other. You can then use separate post-process stacks on these cameras. Note that you can only bloom objects in the background with this technique, because if you add bloom to an overlay camera, for some reason it just adds bloom to everything rather than just the things in that camera's view. That doesn't make much sense and makes the purpose of the layers redundant, in my opinion. If you can find a way to apply post-processing to the overlay camera before it is added to the final image, do let me know.
I have not tested that yet, but I presume it's still valid.

Is setting the camera's culling mask to Nothing a good idea when UI covers the screen?

I have a pretty large and full scene, and it therefore generates a lot of draw calls.
Sometimes I display a video in the game, which covers the entire screen.
When I tested my game with Unity's Profiler I noticed that the camera still renders everything (although occlusion culling is enabled and baked), and it causes the video to lag.
My question is: how can I disable the camera?
When I disable the Camera component or the camera's GameObject, I get a warning ⚠ in the Game view that says "No camera is rendering to this display", which, I guess, is not good (correct me if I'm wrong).
So I was wondering whether clearing the culling mask on the camera (by setting it to Nothing) would force Unity to stop rendering the scene.
Or does it still do some work in the background?
(Like UI elements, which are still rendered even when they are fully transparent.)
Thanks in advance
I have a pretty large and full scene, and it therefore generates a lot of draw calls.
I recommend enabling GPU Instancing on your materials; it can greatly reduce draw calls.
When the UI pops open, it can help to remove the "Default" layer (or whatever layer most of your renderers are on) from the active camera's culling mask. You can do this easily with layer masks. Or you can just set Camera.main.farClipPlane to 1 or any other low value.
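A hedged sketch of both suggestions (the "Default" layer name and the clip distance are only examples); call these from whatever opens and closes your full-screen video:

using UnityEngine;

// Illustrative: cut the camera's rendering work while a full-screen video or UI is shown.
public class ReduceRenderingWhileUIOpen : MonoBehaviour
{
    [SerializeField] private Camera mainCamera;

    private int savedCullingMask;
    private float savedFarClip;

    public void OnFullScreenUIOpened()
    {
        savedCullingMask = mainCamera.cullingMask;
        savedFarClip = mainCamera.farClipPlane;

        // Drop the "Default" layer (or whichever layer holds most renderers) from the mask...
        mainCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("Default"));
        // ...or, alternatively, pull the far clip plane in so almost nothing survives culling.
        mainCamera.farClipPlane = 1f;
    }

    public void OnFullScreenUIClosed()
    {
        mainCamera.cullingMask = savedCullingMask;
        mainCamera.farClipPlane = savedFarClip;
    }
}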

Develop 2D game Inside Canvas Scaler

I'm new to Unity and I've realized that it's difficult to make a multi-resolution 2D game in Unity without the paid third-party plugins available on the Asset Store.
I've run some tests and I'm able to get multi-resolution support this way:
1- Put all the UI (buttons etc.) inside a Canvas object in Render Mode Screen Space - Overlay, with a 16:9 reference resolution and fixed width.
2- Put the rest of the game objects inside a GameObject called GameManager with a Canvas Scaler component in Render Mode Screen Space - Camera, with a 16:9 reference resolution, fixed width, and the Main Camera attached. After that, all game objects such as the player, platforms, etc. inside GameManager need to have a RectTransform component, a CanvasRenderer component and, for example, an Image component.
Can I continue developing the game this way, or is this the wrong way to do things?
Regards
Also, don't forget GUI and Graphics. It's a common misconception that GUI is deprecated and slow. No, it's not. The GameObject helpers for GUI were bad and are deprecated, but the API used inside OnGUI works great when all you need is to draw a texture or some text on the screen. They're called legacy, but there are no plans to remove them, as Unity's own editor UI is built on them anyway.
I have made a few games using just these, treating Unity as a very over-engineered multi-platform API for drawing quads.
There is also GL if you want something more.
Just remember: there will be no built-in physics, particle effects, pathfinding or anything else, just a simple way to draw stuff on the screen. You will have total control over what is drawn, and this is both a good and a bad thing, depending on what you want to do.
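For example, a minimal OnGUI sketch that just draws a texture and a line of text (the icon reference is hypothetical):

using UnityEngine;

// Illustrative: the legacy IMGUI path is enough when all you need is a texture and some text.
public class MinimalHud : MonoBehaviour
{
    [SerializeField] private Texture2D icon; // hypothetical reference, assign in Inspector

    private void OnGUI()
    {
        GUI.DrawTexture(new Rect(10, 10, 64, 64), icon);
        GUI.Label(new Rect(10, 80, 200, 20), "Score: 42");
    }
}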
I would not recommend using the Canvas Scaler to develop a complete game. The intended purpose of the Canvas Scaler is to build menus, and you should use it for menus only.
2D games created without the Canvas Scaler don't cause many problems (mostly they don't cause any) at multiple resolutions.
So, your step 1 is correct, but for step 2 you don't need a Canvas Scaler component attached.
Do remember to switch the Scene view to 2D mode (not strictly necessary) and set your camera to orthographic (necessary) while developing 2D games.
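If it helps, a common way to handle multiple resolutions without the Canvas Scaler is to derive the orthographic size from a reference height and your sprites' pixels-per-unit; this is only a sketch and the numbers are assumptions:

using UnityEngine;

// Illustrative: orthographicSize is half the visible height in world units,
// so this keeps the visible world height constant at any screen resolution.
public class OrthographicSizeFromReference : MonoBehaviour
{
    [SerializeField] private float referenceHeightInPixels = 1080f; // assumed design resolution
    [SerializeField] private float pixelsPerUnit = 100f;            // assumed sprite import setting

    private void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = referenceHeightInPixels / (2f * pixelsPerUnit);
    }
}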