How do I get a Light's Range value in Shader? - unity3d

I'm trying to write a simple frag/vert shader that, depending on whether a fragment is within range of a light, paints the appropriate colour from either the 'lit' texture or the 'unlit' texture.
Therefore, I need to compare the distance to the light against the light's range.
I've been googling all kinds of things, but I can't seem to find a way of accessing the range value of the light. Is there a way to do so? If not, is there some kind of derived data I could use as an alternative?
Update
I was able to find this method here, which seems to be the most promising so far; however, after playing around for a bit, I still can't seem to get what I need. There's some talk about _LightMatrix0 not being populated. Can anyone confirm?
Update 2
I found the variable unity_LightAtten in the Unity Shader Variables documentation. However, this is only used for Vertex Lit shading, which isn't exactly ideal, especially considering the lack of console support.
Could there be a way to pipe this variable to Forward Rendering?

You can pass Light.range into the shader using Material.SetFloat. You need to attach a script to do that.
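A minimal sketch of that approach, assuming a shader that declares a matching _LightRange float (and a _LightPosition vector if you also need the distance check); the property names are arbitrary:

```csharp
using UnityEngine;

// Attach to any object; pushes the light's range into the material every frame.
public class PassLightRange : MonoBehaviour
{
    public Light targetLight;
    public Material litMaterial;

    void Update()
    {
        // The shader is assumed to declare: float _LightRange; float4 _LightPosition;
        litMaterial.SetFloat("_LightRange", targetLight.range);
        litMaterial.SetVector("_LightPosition", targetLight.transform.position);
    }
}
```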

Related

Pixel local position within a quad

I have this quad in the 3D scene:
I need to get the local positions of all painted (non-transparent) pixels of this quad. I've already tried using GetPixels() and filtering the result by alpha value to keep only pixels with a valid colour, but then I noticed it isn't possible to get the pixels' local positions this way, because the method returns a Color array, which doesn't offer a way to retrieve that information. I've already tried googling and nothing came up; maybe the only way to get what I want is to build something at the shader level, but I don't know much about that subject either. I can offer more context if needed, but I'm trying to keep things short here. Also, there's no code to show except for the GetPixels() attempt, which doesn't work for my case as far as I know.
Any help is appreciated!
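For what it's worth, the GetPixels() attempt described above could look roughly like the sketch below; the index into the returned Color array does encode each pixel's position, which can be mapped back to a UV and then to a local position, assuming Unity's built-in Quad whose local X/Y extents run from -0.5 to +0.5 (the alpha threshold is illustrative):

```csharp
using UnityEngine;

public class PaintedPixelPositions : MonoBehaviour
{
    public Texture2D texture;   // must have Read/Write enabled in its import settings

    void Start()
    {
        Color[] pixels = texture.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            if (pixels[i].a <= 0f) continue;   // skip transparent pixels

            // GetPixels is laid out row by row, bottom-left first,
            // so the array index encodes the pixel coordinates.
            int x = i % texture.width;
            int y = i / texture.width;
            Vector2 uv = new Vector2((x + 0.5f) / texture.width, (y + 0.5f) / texture.height);

            // The built-in Quad spans -0.5..+0.5 in local X/Y, so UV maps directly to local space.
            Vector3 localPos = new Vector3(uv.x - 0.5f, uv.y - 0.5f, 0f);
            Debug.Log(localPos);
        }
    }
}
```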

Separate shadow-casting from "shadow-clipping" in a ShadowCaster pass

I am using a single surface shader with a custom vertex function, and I tried to use macros like UNITY_PASS_SHADOWCASTER to add pass-specific code to the shadow processing, for example moving the vertices away from the light source to fix self-shadowing. However, I discovered that doing so has weird effects on how the shadows are rendered on the object, and even on whether some of its pixels are displayed at all.
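For context, the kind of pass-specific vertex code being described is roughly the sketch below; _ShadowOffset is a hypothetical property standing in for whatever offset is actually used:

```hlsl
float _ShadowOffset; // hypothetical property controlling how far vertices are pushed

// Custom vertex function of the surface shader.
void vert(inout appdata_full v)
{
    #ifdef UNITY_PASS_SHADOWCASTER
        // Only while this pass is generating shadow data:
        // push vertices away from the light to reduce self-shadowing.
        float3 toLight = normalize(ObjSpaceLightDir(v.vertex));
        v.vertex.xyz -= toLight * _ShadowOffset;
    #endif
}
```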
Eventually, I managed to find out that the ShadowCaster pass is executed at least twice even if there is a single light source: once with the virtual camera matching the light source, and a second time when the shadow is applied to the object itself. This second call controls the visibility of the shadows behind the object.
Now I have two questions:
What is this mode of execution called?
How do I make the code branch depending on which of these modes is executing? In other words, I want to move the vertices to a different position when casting the shadow, but keep them in place when the shadows are applied to the object. At the moment, I am checking whether ObjSpaceLightDir matches ObjSpaceViewDir, but that doesn't sound like the best idea. Considering the shader pass is probably compiled only once, I suppose I would have to look for a runtime variable, but I am not sure whether one even exists...
I managed to find mentions of a ShadowCollector pass for older versions of Unity. Is this the same thing?
I am using Unity 2020.3.32f1 with the built-in render pipeline.

How can I find, for every pixel on the screen, which object it belongs to?

Each frame Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new colour to the generated image, it should write the ID of the object at the corresponding place in that int array.
In OpenGL I know this is pretty common and I found a lot of tutorials for this kind of thing: basically, based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I'm using a given shader and I didn't find a proper way to do just that. I think there should be some built-in functionality for such a common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects), which gives up to 256 IDs without HDR. In deferred rendering you could potentially use an unused channel of the G-buffer.
That's if you want to minimize overhead; otherwise you could have a more generic system that re-renders specific objects into a screenspace texture with a very simple shader that just outputs the ID, into whatever format you need, using command buffers.
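A hedged sketch of that command-buffer variant; idMaterial is assumed to be a material whose shader outputs the colour passed in _IdColor, and encoding the index in the red channel is just illustrative:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class ObjectIdPass : MonoBehaviour
{
    public Renderer[] targets;      // the specific objects to re-render
    public Material idMaterial;     // simple shader that outputs _IdColor
    public RenderTexture idTexture; // screen-sized texture receiving the IDs

    void OnEnable()
    {
        var cmd = new CommandBuffer { name = "Object IDs" };
        cmd.SetRenderTarget(idTexture);
        cmd.ClearRenderTarget(true, true, Color.clear);

        for (int i = 0; i < targets.Length; i++)
        {
            // Encode the object's index as a colour, then draw it with the ID shader.
            cmd.SetGlobalColor("_IdColor", new Color((i + 1) / 255f, 0f, 0f, 1f));
            cmd.DrawRenderer(targets[i], idMaterial);
        }

        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }
}
```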
You'll want to make a custom shader that renders the default textures and colours to the main camera, and renders an ID colour to a RenderTexture through another camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
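A minimal sketch of that second-camera setup, assuming an idShader replacement shader that outputs a per-object ID colour (the shader itself and the replacement tag are assumptions, not from the post):

```csharp
using UnityEngine;

public class IdCamera : MonoBehaviour
{
    public Camera mainCamera;
    public Shader idShader;          // replacement shader that outputs per-object IDs
    public RenderTexture idTexture;  // receives the ID image each frame

    void Start()
    {
        var idCam = new GameObject("IdCamera").AddComponent<Camera>();
        idCam.CopyFrom(mainCamera);          // same view as the main camera
        idCam.targetTexture = idTexture;     // render into the helper texture
        idCam.SetReplacementShader(idShader, "RenderType");
    }
}
```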

FMod and Frac material functions cause seam in texture

Is there any way to get rid of the interpolation seam caused by using Frac or FMod? Coming from writing actual fragment shaders, I find it odd that this seam even exists (why is interpolation happening in the pixel sampling?)
For example, say we have this super simple texture
And we want the top right corner to loop instead of the whole texture, something that's easy to accomplish with an FMod.
As you can see, an artifact is now present where the UV jumps from 0.25 to 0, and I'm not sure why. Is there a way I can disable this interpolation? With the power of MSPaint, here's what I expected to see:
I've found an okay temporary solution: if you disable mip-mapping (via the properties on the texture itself), the artifacts disappear. I would love another answer that describes how to do this while maintaining mipmaps.
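For reference, the seam comes from the UV derivative jump at the wrap: with mipmaps enabled, the pixels straddling the discontinuity get huge ddx/ddy values, so the sampler drops to the smallest mip level there. One way to keep mipmaps (not from this thread) is to sample with explicit gradients taken from the continuous, unwrapped UV; a minimal fragment-shader sketch, where _MainTex and the 0.25 factor are illustrative:

```hlsl
fixed4 frag(v2f i) : SV_Target
{
    // Wrap the UV inside a quarter-sized region, as described above
    // (an offset could be added to pick a specific corner of the texture).
    float2 tiledUV = fmod(i.uv, 0.25);

    // A plain tex2D(_MainTex, tiledUV) shows a seam where the UV jumps from
    // 0.25 back to 0; supplying gradients of the *unwrapped* UV keeps mip
    // selection smooth across that jump.
    return tex2Dgrad(_MainTex, tiledUV, ddx(i.uv), ddy(i.uv));
}
```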

DirectCompute atomic counter

In a compute shader (with Unity) I have a raycast finding intersections with mesh triangles. At some point I would like to return how many intersections are found.
I can clearly see how many intersections there are by marking the pixels, however if I simply increment a global int for every intersection in the compute shader (and return via a buffer), the number I get back makes no sense. I assume this is because I'm creating a race condition.
I see that OpenGL has "atomic counters": https://www.opengl.org/wiki/Atomic_Counter, which seem like what I need in this situation. I have had no luck finding such a feature in either the Unity or the DirectCompute documentation. Is there a good way to do this?
I could create an appendBuffer, but it seems silly as I literally need to return just a single int.
HA! That was easy. I'll leave this here just in case someone runs into the same problem.
HLSL has a whole set of "interlocked" functions that prevent this sort of thing from happening:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476334(v=vs.85).aspx
In my case it was:
InterlockedAdd(collisionCount, 1);
To replace
collisionCount++;
And that's it!
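For completeness, a hedged sketch of the script-side buffer plumbing around that call; the kernel and buffer names are assumptions, and in the compute shader collisionCount would be declared as a RWStructuredBuffer<uint> so InterlockedAdd has a resource element to target:

```csharp
using UnityEngine;

public class IntersectionCounter : MonoBehaviour
{
    public ComputeShader raycastShader;

    int CountIntersections()
    {
        int kernel = raycastShader.FindKernel("CSMain");

        // A one-element uint buffer, zeroed before dispatch;
        // the shader increments it with InterlockedAdd(collisionCount[0], 1).
        var counter = new ComputeBuffer(1, sizeof(uint));
        counter.SetData(new uint[] { 0 });
        raycastShader.SetBuffer(kernel, "collisionCount", counter);

        raycastShader.Dispatch(kernel, 64, 1, 1);   // thread-group counts are illustrative

        var result = new uint[1];
        counter.GetData(result);                    // read the accumulated count back
        counter.Release();
        return (int)result[0];
    }
}
```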