Separate shadow-casting from "shadow-clipping" in a ShadowCaster pass - unity3d

I am using a single surface shader with a custom vertex function, and I tried to use macros like UNITY_PASS_SHADOWCASTER to add pass-specific code to the shadow processing, for example moving the vertices away from the light source to fix self-shadowing. However, I discovered that doing so has strange effects on how the shadows are rendered on the object, and even on whether some of its pixels are displayed at all.
Eventually, I managed to work out that the ShadowCaster pass must be invoked at least twice even when there is a single light source: once with the virtual camera placed at the light source, but also a second time when the shadow is applied to the object itself. This second invocation is the one that controls the visibility of the shadows behind the object.
Now I have two questions:
What is this mode of execution called?
How do I make my code branch depending on which of these modes is executing? In other words, I want to move the vertices to a different position when casting the shadow, but keep them in place when the shadows are applied to the object. At the moment, I am checking whether ObjSpaceLightDir matches ObjSpaceViewDir, but that doesn't seem like the best idea. Since the shader pass is presumably compiled only once, I suppose I have to look for a runtime variable to branch on, but I am not sure whether one even exists...
I managed to find mentions of a ShadowCollector pass for older versions of Unity. Is this the same thing?
I am using Unity 2020.3.32f1 with the built-in render pipeline.

Related

Is it possible to force SpriteKit to respect the depth buffer in Metal with an SKRenderer?

I want to mix SpriteKit and Metal, and I found that there is a special class for that: SKRenderer. But as far as I can see, it can only draw a whole SKScene as a single layer on top of the entire screen. So, can it use the depth buffer and the zPosition property for proper rendering? For example, if my scene contains two SKNodes and I want to draw a "Metal object" between them?
I can see in a GPU debugger that every SKNode is rendered with a separate draw call (even more than one, actually), so in theory it should be possible to use SKNode.zPosition for more than just sorting inside the SKScene. For example, it could simply be translated into the viewport's z position as-is (with depth testing kept on, of course).
The reason I think this is possible is this sentence from the documentation:
For example, you might write the environmental effects layer of your app that does fog, clouds, and rain, with custom Metal shaders, and continue to layer content below and above that with SpriteKit.
and I just can't believe that by "continue to layer content below and above that with SpriteKit" they mean "OK, you can create two different SKScenes".

How can I find, for every pixel on the screen, which object it belongs to?

Each frame, Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new color to the generated image, write the ID of the corresponding object to the matching position in that int array.
In OpenGL I know this is pretty common and I found a lot of tutorials for this kind of thing: basically, based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I am using a given shader and I didn't find a proper way to do just that. I would expect there to be built-in functions for such a common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects), which gives up to 256 IDs without HDR. In deferred rendering you could potentially use an unused channel of the G-buffer.
That is the option if you want to minimize overhead. Otherwise you could build a more generic system that re-renders specific objects into a screen-space texture, with a very simple shader that just outputs the ID in whatever format you need, using command buffers.
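A minimal sketch of that command-buffer variant might look like the following. The ID-only material and the way each object gets its own ID are assumptions, not something Unity provides out of the box:
    using UnityEngine;
    using UnityEngine.Rendering;

    // Rough sketch: re-render selected renderers into a screen-space RenderTexture
    // with a very simple ID-only material, via a command buffer on the camera.
    public class ObjectIdCommandBuffer : MonoBehaviour
    {
        public Renderer[] targets;      // the objects whose IDs we want
        public Material idMaterial;     // hypothetical material whose shader just outputs an ID color
        public RenderTexture idTexture; // screen-sized target that will hold the IDs

        void OnEnable()
        {
            var cb = new CommandBuffer { name = "Object IDs" };
            cb.SetRenderTarget(idTexture);
            cb.ClearRenderTarget(true, true, Color.clear);
            foreach (var r in targets)
            {
                // In practice each object needs its own ID, e.g. a per-object
                // instance of idMaterial with a different ID color set on it.
                cb.DrawRenderer(r, idMaterial);
            }
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cb);
        }
    }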
You'll want to make a custom shader that renders the default textures and colors to the main camera and renders an ID color to a RenderTexture through another camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
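As a hedged sketch of that second-camera approach (the objectIdShader replacement shader is assumed to exist and to output a unique ID color per object, e.g. fed in through a MaterialPropertyBlock):
    using UnityEngine;

    // Rough sketch: a disabled second camera renders the scene with an ID-only
    // replacement shader into a RenderTexture, which can then be read back.
    public class ObjectIdCapture : MonoBehaviour
    {
        public Camera idCamera;       // second camera, kept disabled and aligned with the main camera
        public Shader objectIdShader; // hypothetical shader that outputs each object's ID color
        RenderTexture idTexture;

        void Start()
        {
            idTexture = new RenderTexture(Screen.width, Screen.height, 24);
            idCamera.targetTexture = idTexture;
            idCamera.SetReplacementShader(objectIdShader, "");
        }

        // Renders the ID pass on demand and reads it back. ReadPixels is slow,
        // so call this for picking, not every frame.
        public Color32 IdAt(int x, int y)
        {
            idCamera.Render();
            var prev = RenderTexture.active;
            RenderTexture.active = idTexture;
            var tex = new Texture2D(idTexture.width, idTexture.height, TextureFormat.RGBA32, false);
            tex.ReadPixels(new Rect(0, 0, idTexture.width, idTexture.height), 0, 0);
            tex.Apply();
            RenderTexture.active = prev;
            Color32 id = tex.GetPixel(x, y);
            Destroy(tex);
            return id;
        }
    }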

Making multiple objects with the same shader fade at different times

I have a death transformation for one of my GameObjects which goes from a spherical ball to a bunch of small individual blocks. I want each of these blocks to fade out at a different time, but since they all use the same shader I can't figure out how to keep them from all fading out at the same time.
The first picture shows the spherical ball in the first step of its change from a sphere into a Minecraft'ish looking block ball; to the right of it, indicated by the red arrow, is one of the blocks that make up that block ball.
Now this is my Inspector for one of the little blocks that make up the Minecraft'ish looking ball.
I have an arrow pointing to the value that makes the object fade, but changing it applies globally across all of the blocks since they use the same shader. Is it possible to have each block fade separately, or am I stuck and need to find a new disappearing act for the little block dudes?
You need to modify the material property by script at runtime, and you need to do it through the Renderer.material property. When you access Renderer.material, Unity automatically creates a copy of the material for you that is handled separately (including getting its own draw call, if you care about performance). You can tell this has happened because the material name shown in the renderer changes to "Materialname (Instance)".
Set the material's fade property using Renderer.material.SetFloat() (or whatever the appropriate Set... function is). Unfortunately the property's internal name isn't "Fade Factor". You can find the real name by looking at the shader source, or by switching the inspector to debug mode and digging through the Saved Properties array for one that looks right.
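As a rough sketch, something like this on each block would work; "_Fade" is a placeholder, so substitute the real property name you find in the shader or the debug inspector:
    using System.Collections;
    using UnityEngine;

    public class BlockFader : MonoBehaviour
    {
        public float delay = 0f;    // give each block a different delay
        public float fadeTime = 1f;

        IEnumerator Start()
        {
            // Accessing .material (not .sharedMaterial) clones the material,
            // so this block fades independently of the others.
            Material mat = GetComponent<Renderer>().material;

            yield return new WaitForSeconds(delay);

            for (float t = 0f; t < fadeTime; t += Time.deltaTime)
            {
                mat.SetFloat("_Fade", 1f - t / fadeTime); // placeholder property name
                yield return null;
            }
            mat.SetFloat("_Fade", 0f);
        }
    }
You could set delay from whatever script spawns the blocks, for example with Random.Range, so that each block starts fading at a different moment.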

What is the purpose of Enable Client State?

In all the examples I've seen, these lines are used before drawing meshes:
glEnableClientState(GL10.GL_VERTEX_ARRAY);
glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
and sometimes glEnableClientState(GL10.GL_NORMAL_ARRAY);
And then these are always disabled again at the end of the draw call for each mesh.
I don't really understand what they actually do, and why you would want to disable them. I know that I probably need to turn them on if I'm drawing triangles from an array, using textures, and using lighting. But I don't know when I actually need to turn them off.
I presume it would be more efficient not to disable and re-enable these for each mesh in your scene if you don't have to. Can you just leave them on all the time? In what circumstances do you need to disable them?
I haven't been able to find any explanation of the actual meaning of these client states, so I don't know where I can safely leave them on or off in my code.
Can you just leave them on all the time?
Yes, if you want to, and if all your primitives use all the arrays you're enabling.
In what circumstances do you need to disable them?
So that you don't mess up the state for whatever is drawn next.
For example, say you have a primitive that uses normals: you enable the normal array with a call to glEnableClientState(GL_NORMAL_ARRAY) and tell OpenGL where your normal data is through glNormalPointer(). If you don't disable GL_NORMAL_ARRAY, the next primitive you draw will use the same normal array as the previous one, which may have consequences if that primitive doesn't use normals.
Therefore, it's considered good practice to restore the OpenGL state when a primitive's drawing is done. That said, you can leave the arrays enabled if all your primitives use all of the arrays you enable, exactly as I leave GL_TEXTURE_2D enabled for the entire time the application is running. I know I'll use textures frequently, so there's no reason to enable/disable it in every object's draw call; that would only decrease the application's performance.
glEnableClientState(GL_VERTEX_ARRAY)
If you declare it like the above, it enables OpenGL to use the vertices from the vertex array you supplied; otherwise OpenGL doesn't know which array to read the vertices from and will display nothing.

Multiple Effects in a Shader

My question does have a slight basis in GLSL, since that happens to be the shading language I know.
It's my opinion that shaders and the programmable graphics pipeline are a huge step up from the fixed-function pipeline. Shaders are excellent at applying effects and making 3D graphics look far more realistic. However, not every effect is meant to be applied to every scenario. For instance, I wouldn't want my flag-waving effect applied across an entire scene. If that scene contains one flag, I want that flag to wave back and forth, and that's about it. I'd want a water effect applied only to water. You get the idea.
My question is what is the best way to implement this toggling of effects. The only way I can think of is to have a series of uniform variables and toggle/untoggle them before and after drawing something.
For instance,
(pseudocode)
toggle flag effect uniform
draw flag
untoggle flag effect uniform
Inside the shader code, it would check the value of these uniforms and act accordingly.
EDIT: I understand one can have multiple shader programs and switch between them as needed, but would this actually be faster than the above method, or does it come with a serious performance overhead from moving all that data around in the GPU? It would seem that doing this multiple times per frame could be extremely costly.