Prevent transparent areas being shaded by projection shader

I'm trying to make a decal shader to use with a projector in Unity. Here's what I've put together:
Shader "Custom/color_projector"
{
Properties {
_Color ("Tint Color", Color) = (1,1,1,1)
_MainTex ("Cookie", 2D) = "gray" {}
}
Subshader {
Tags {"Queue"="Transparent"}
Pass {
ZTest Less
ColorMask RGB
Blend One OneMinusSrcAlpha
Offset -1, -1
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 uvShadow : TEXCOORD0;
float4 pos : SV_POSITION;
};
float4x4 unity_Projector;
float4x4 unity_ProjectorClip;
v2f vert (float4 vertex : POSITION)
{
v2f o;
o.pos = UnityObjectToClipPos (vertex);
o.uvShadow = mul (unity_Projector, vertex);
return o;
}
sampler2D _MainTex;
fixed4 _Color;
fixed4 frag (v2f i) : SV_Target
{
fixed4 tex = tex2Dproj (_MainTex, UNITY_PROJ_COORD(i.uvShadow));
return _Color * tex.a;
}
ENDCG
}
}
}
This works well in most situations:
However, whenever it projects onto a transparent surface (or multiple surfaces) it seems to render an extra time for each surface. Here, I've broken up the divide between the grass and the paving using grass textures with transparent areas:
I've tried numerous blending options and all of the ZTest options. This is the best I can get it to look.
From reading around, I gather this might be because a transparent shader does not write to the depth buffer. I tried adding ZWrite On, and I tried doing a pass before the main pass:
Pass {
    ZWrite On
    ColorMask 0
}
But neither had any effect at all.
How can this shader be modified so that it only projects the texture once on the nearest geometries?
Desired result (photoshopped):

The problem is due to how projectors work. Basically, they render all meshes within their field of view a second time, except with a different shader. In your case, this means that both the ground and the plane with the grass will be rendered twice and layered on top of each other. I think it should be possible to fix this in two steps:
First, add the following to the tags of the transparent (grass) shader:
"IgnoreProjector"="True"
Then, change the render queue of your projector from "Transparent" to "Transparent+1". This means that the ground will render first, then the grass edges, and finally the projector will project onto the ground (except appearing on top, since it is rendered last).
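Concretely, the relevant lines might look something like this (a minimal sketch; the grass shader's other tags are assumed):
// In the grass (transparent) shader's SubShader:
Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
// In the projector shader's Subshader, so that it renders after the grass:
Tags { "Queue"="Transparent+1" }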
As for the blending, I think you want regular alpha blending:
Blend SrcAlpha OneMinusSrcAlpha
Another option if you are using deferred rendering is to use deferred decals. These are both cheaper and usually easier to use than projectors.

Related

Gradient shader Z fighting in Scene view only

I have created the following gradient shader that takes an Image component's source image and applies a two-colour gradient to it. Using a toggle, it can be switched between using the source image's alpha for the gradient alpha or setting the alpha per gradient colour.
Properties
{
    [PerRendererData] _MainTex ("Texture", 2D) = "white" {}
    [Header(Colours)]
    _Color1("Color 1", Color) = (0,0,0,1)
    _Color2("Color 2", Color) = (1,1,1,1)
    [Toggle]_UseImageAlpha("Use Image alpha", float) = 0
    [Header(Cull mode)]
    [Enum(UnityEngine.Rendering.CullMode)] _CullMode("Cull mode", float) = 2
    [Header(ZTest)]
    [Enum(UnityEngine.Rendering.CompareFunction)] _ZTest("ZTest", float) = 4
    [Toggle(UNITY_UI_ALPHACLIP)] _UseUIAlphaClip("Use Alpha Clip", Float) = 1
}
SubShader
{
    Tags {"Queue" = "Transparent" "RenderType"="Transparent"}
    LOD 100
    Blend SrcAlpha OneMinusSrcAlpha
    ZTest [_ZTest]
    Cull [_CullMode]
    Pass
    {
        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #pragma multi_compile_local _ UNITY_UI_ALPHACLIP
        #include "UnityCG.cginc"

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv : TEXCOORD0;
            fixed4 col : COLOR;
        };

        struct v2f
        {
            float2 uv : TEXCOORD0;
            float4 vertex : SV_POSITION;
            fixed4 col : COLOR;
        };

        sampler2D _MainTex;
        float4 _MainTex_ST;
        fixed4 _Color1;
        fixed4 _Color2;
        bool _UseImageAlpha;

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = TRANSFORM_TEX(v.uv, _MainTex);
            o.col = v.col;
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            if (_UseImageAlpha) {
                _Color1.a = i.col.a;
                _Color2.a = i.col.a;
            }
            fixed4 col = tex2D(_MainTex, i.uv);
            col *= lerp(_Color1, _Color2, i.uv.y);
            col.a = clamp(col.a, 0, 1);
            #ifdef UNITY_UI_ALPHACLIP
            clip(col.a - .001);
            #endif
            return col;
        }
        ENDCG
    }
}
This shader works fine and shows the gradient as expected. However, once I start adding multiple layers of Images (for example a blue square behind it and a green square in front of it), it starts having issues with Z fighting in the Scene view only, depending on the angle of the scene camera to the object that comes next in the hierarchy (in this example the green square). In the Game view and in builds the Z fighting doesn't occur.
I am using the default LessEqual ZTest option, with back-face culling and the render queue set to 3000 (the same as the render queue of the images in front of and behind it). As per Unity's documentation, setting it to LessEqual should draw objects in front on top and hide objects behind them:
How should depth testing be performed. Default is LEqual (draw objects in front of or at the same distance as existing objects; hide objects behind them).
Setting the ZTest to any of the other options (Off, Always, GreaterEqual, etc.) doesn't yield a better result.
If I set the render queue higher (3001) it will always draw the gradient on top in the Scene view (no change in the Game view), whereas setting it to 2999 will still make it Z fight with the object in front of it (the green square), while making the blue square behind it transparent.
When I only have the green square in front of the gradient, it will Z fight in some places while cutting out the green square in other places, where the source image doesn't have any pixels.
Using the alpha of the source image or the alpha of the two individual colours does not make a difference either.
(gyazo) Example gif of the fighting changing depending on the camera angle.
What is causing this z fighting, and why does it only occur in the scene view?
Using Unity 2019.3.13f1; same results in 2019.2, 2019.1, 2018.4 LTS and 2017 LTS on Windows.
Try adding ZWrite Off. With shaders, it is often useful to start with (or at least look at) one of Unity's built-in shaders that is close to what you want; in your case that would be UI-Default.shader.
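In the gradient shader above, that amounts to one extra line in the SubShader's render-state block (a minimal sketch; the comment is mine, and ZWrite Off is also what UI-Default.shader uses):
Blend SrcAlpha OneMinusSrcAlpha
ZWrite Off // transparent geometry should not write depth, so other transparents can't Z fight against it
ZTest [_ZTest]
Cull [_CullMode]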

Rendering Multiple Materials on single Mesh

In Unity3D I'm trying to render a creature and display an outline when it is selected.
The creature alone is rendered fine:
I downloaded an outline shader from GitHub and applied it as a second material to my mesh:
With the expanded materials looking like this:
However, the result is not at all as expected:
Without knowing much about materials and shaders, I tried fiddling around and found out that if I change the Rendering Mode of the Standard material to transparent the result looks fine:
But now the creature alone renders in a strange way, with the limbs overlapping the body:
What is the correct way to achieve what I'm trying to do? Do you have resources where I can read more?
The problem with your setup is the render queue. Transparent objects are rendered after opaque ones, so your outline just draws on top of the creature. If you want to change the rendering order, you have to treat the object with an outline as a "special" opaque object (e.g. draw normal objects, then the outline, then the creature).
Here are a couple of alternatives:
Use Cull Front - this shader basically draws a bigger copy of the object on top of the original, like a shell. Cull Front makes it draw the back of the shell (which is behind the object) instead of the front.
Use the stencil buffer to mark the region where the original object is drawn and skip it when you draw the outline.
Below is a modified version of your shader (I removed the second color pass and the surface shader pass since you don't use them). This is the stencil buffer option. If you want to try the other one, remove the first pass and the stencil block in the second pass, and replace Cull Back with Cull Front.
Shader "Outlined/UltimateOutline"
{
Properties
{
_Color("Main Color", Color) = (0.5,0.5,0.5,1)
_FirstOutlineColor("Outline color", Color) = (1,0,0,0.5)
_FirstOutlineWidth("Outlines width", Range(0.0, 2.0)) = 0.15
_Angle("Switch shader on angle", Range(0.0, 180.0)) = 89
}
CGINCLUDE
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float4 normal : NORMAL;
};
uniform float4 _FirstOutlineColor;
uniform float _FirstOutlineWidth;
uniform float4 _Color;
uniform float _Angle;
ENDCG
SubShader{
Pass {
Tags{ "Queue" = "Transparent-1" "IgnoreProjector" = "True" }
ZWrite Off
Stencil {
Ref 1
Comp always
Pass replace
}
ColorMask 0
}
//First outline
Pass{
Tags{ "Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
Stencil {
Ref 1
Comp NotEqual
}
Blend SrcAlpha OneMinusSrcAlpha
ZWrite Off
Cull Back //Replace this with Cull Front for option 1
CGPROGRAM
struct v2f {
float4 pos : SV_POSITION;
};
#pragma vertex vert
#pragma fragment frag
v2f vert(appdata v) {
appdata original = v;
float3 scaleDir = normalize(v.vertex.xyz - float4(0,0,0,1));
//This shader consists of 2 ways of generating outline that are dynamically switched based on demiliter angle
//If vertex normal is pointed away from object origin then custom outline generation is used (based on scaling along the origin-vertex vector)
//Otherwise the old-school normal vector scaling is used
//This way prevents weird artifacts from being created when using either of the methods
if (degrees(acos(dot(scaleDir.xyz, v.normal.xyz))) > _Angle) {
v.vertex.xyz += normalize(v.normal.xyz) * _FirstOutlineWidth;
}
else {
v.vertex.xyz += scaleDir * _FirstOutlineWidth;
}
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
return o;
}
half4 frag(v2f i) : COLOR{
return _FirstOutlineColor;
}
ENDCG
}
}
Fallback "Diffuse"
}

How to programmatically allow a Unity shader to control which object renders in front?

I've only just started learning Unity, but because I come from a background of coding in C#, I've found the standard scripting to be very quick to learn. Unfortunately, I've now come across a problem for which I believe a custom shader is required and I'm completely lost when it comes to shaders.
Scenario:
I'm using a custom distance-scaling process so that really big, far-away objects are moved within a reasonable floating-point precision range of the player. This works great and handles scaling of the objects based on their adjusted distance, so they appear to actually be really far away. The problem occurs, though, when two of these objects pass close to each other in game space (they would still be millions of units apart at real scale), because they visibly collide.
Ex: https://www.youtube.com/watch?v=KFnuQg4R8NQ
Attempted Solution 1:
I've looked into flattening the objects along the player's view axis, and this fixes the collision, but it affects shading and particle effects, so it wasn't a good option.
Attempted Solution 2:
I've tried changing the RenderOrder, but because sometimes one object is inside the mesh of another (though the centre of this object is still closer to the camera) it doesn't fix the issue and particle effects are problematic again.
Attempted Solution 3:
I've tried moving the colliding objects to their own layer, spawning a new camera with a higher depth at the same position as my main camera, and forcing each camera to only see the items on its respective layer, but this caused lighting issues, since some objects light others. With only a limited number of layers available, this solution was also quite limiting, as it forced me to keep the number of potentially overlapping objects low. NOTE: this solution is probably the closest I was able to come to what I need though.
Ex: https://www.youtube.com/watch?v=CyFDgimJ2-8
Attempted Solution 4:
I've tried updating the Standard shader code by downloading it from Unity's downloads page and creating my own, custom shader that allows me to modify the ZWrite and ZTest properties, but because I've no real understanding of how these work, I'm not getting anywhere.
Request:
I would greatly appreciate a shader code example of how I can programmatically force one object, whose mesh is either colliding with or completely inside another mesh, to render in front of said mesh. I'm hoping I can then take that example and apply it to all the shaders I'm currently using (Standard, Particle Additive) to achieve the effect I'm looking for. Thanks in advance for your help.
In the gif below, both objects are colliding and, according to the camera position, the cube is in front of the sphere, but I can change their visibility with the render queue:
If that's what you want, you only have to add ZWrite Off in your SubShader before the CGPROGRAM starts. The following is the Standard Surface Shader including that line:
Shader "Custom/Shader" {
Properties {
_Color ("Color", Color) = (1,1,1,1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Glossiness ("Smoothness", Range(0,1)) = 0.5
_Metallic ("Metallic", Range(0,1)) = 0.0
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 200
ZWrite Off
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
};
half _Glossiness;
half _Metallic;
fixed4 _Color;
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_BUFFER_END(Props)
void surf (Input IN, inout SurfaceOutputStandard o) {
// Albedo comes from a texture tinted by color
fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
Now for sorting particles: look at the shadows, how the objects collide, and how we can change their visibility regardless of their position.
Here's the shader for particles. I'm using the Unity built-in shader; the only thing added is ZTest Always:
Shader "Particles/Alpha Blended Premultiply Custom" {
Properties {
_MainTex ("Particle Texture", 2D) = "white" {}
_InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
}
Category {
Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane" }
ZTest Always
Blend SrcAlpha OneMinusSrcAlpha
ColorMask RGB
Cull Off Lighting Off ZWrite Off
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 2.0
#pragma multi_compile_particles
#pragma multi_compile_fog
#include "UnityCG.cginc"
sampler2D _MainTex;
fixed4 _TintColor;
struct appdata_t {
float4 vertex : POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct v2f {
float4 vertex : SV_POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
#ifdef SOFTPARTICLES_ON
float4 projPos : TEXCOORD1;
#endif
UNITY_VERTEX_OUTPUT_STEREO
};
float4 _MainTex_ST;
v2f vert (appdata_t v)
{
v2f o;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
o.vertex = UnityObjectToClipPos(v.vertex);
#ifdef SOFTPARTICLES_ON
o.projPos = ComputeScreenPos (o.vertex);
COMPUTE_EYEDEPTH(o.projPos.z);
#endif
o.color = v.color;
o.texcoord = TRANSFORM_TEX(v.texcoord,_MainTex);
return o;
}
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
float _InvFade;
fixed4 frag (v2f i) : SV_Target
{
#ifdef SOFTPARTICLES_ON
float sceneZ = LinearEyeDepth (SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
float partZ = i.projPos.z;
float fade = saturate (_InvFade * (sceneZ-partZ));
i.color.a *= fade;
#endif
return i.color * tex2D(_MainTex, i.texcoord) * i.color.a;
}
ENDCG
}
}
}
}

Unity3D Custom Shader Prevent Light Addition?

I've used Unity Projectors and a custom shader to create the effect of a custom image shape coming from a projector. It works great, except that if the light from one projector comes into contact with the light from another copy of the projector, the light colors are combined. I don't want this to happen: if I specify green for both projectors, for example, and the lights come into contact with each other, they should overlap and remain green. Here is a picture of what I mean:
I'm new to shaders and found this shader online. Any help on how I could modify the shader to accomplish my goal would be much appreciated, or another way to accomplish it would be great. I tried putting each projector into a layer and telling each projector to ignore that layer when projecting its light, but this had no effect. Thanks.
Shader "Custom/MyProjectorShader" {
Properties{
_Color("Tint Color", Color) = (1,1,1,1)
_Attenuation("Falloff", Range(0.0, 1.0)) = 1.0
_ShadowTex("Cookie", 2D) = "gray" {}
}
Subshader{
Tags{ "Queue" = "Transparent" }
Pass{
ZWrite Off
ColorMask RGB
Blend SrcAlpha One // Additive blending
Offset -1, -1
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 uvShadow : TEXCOORD0;
float4 pos : SV_POSITION;
};
float4x4 unity_Projector;
float4x4 unity_ProjectorClip;
v2f vert(float4 vertex : POSITION)
{
v2f o;
o.pos = UnityObjectToClipPos(vertex);
o.uvShadow = mul(unity_Projector, vertex);
return o;
}
sampler2D _ShadowTex;
fixed4 _Color;
float _Attenuation;
fixed4 frag(v2f i) : SV_Target
{
// Apply alpha mask
fixed4 texCookie = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(i.uvShadow));
fixed4 outColor = _Color * texCookie.a;
// Attenuation
float depth = i.uvShadow.z; // [-1 (near), 1 (far)]
return outColor * clamp(1.0 - abs(depth) + _Attenuation, 0.0, 1.0);
}
ENDCG
}
}
}
Blend SrcAlpha OneMinusSrcAlpha
Unlike what the comment in the shader says, Blend One One is the regular Additive (Linear Dodge) blend mode. Using this new blend mode, you'll need to make sure your textures have an alpha channel too.
https://docs.unity3d.com/Manual/SL-Blend.html
Blend One OneMinusSrcAlpha would probably work even better in this case, because it would also avoid some remaining border cutout.
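For reference, here are the blend configurations discussed in this thread (the trailing comments are mine; the last three modes are listed on that manual page):
Blend SrcAlpha One // the shader's original mode: additive blending scaled by source alpha
Blend One One // regular additive (Linear Dodge)
Blend SrcAlpha OneMinusSrcAlpha // traditional alpha blending
Blend One OneMinusSrcAlpha // premultiplied alpha; needs an alpha channel in the texture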

Unity3D visible seams on borders when tiling texture

For my game I have written a shader that allows my texture to tile nicely over multiple objects. I do that by choosing the uv not based on the relative position of the vertex, but on the absolute world position. The custom shader is as follows. Basically it just tiles the texture in a grid of 1x1 world units.
Shader "MyGame/Tile"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert
sampler2D _MainTex;
struct Input
{
float2 uv_MainTex;
float3 worldPos;
};
void surf (Input IN, inout SurfaceOutput o)
{
//adjust UV for tiling
float2 cell = floor(IN.worldPos.xz);
float2 offset = IN.worldPos.xz - cell;
float2 uv = offset;
float4 mainTex = tex2D(_MainTex, uv);
o.Albedo = mainTex.rgb;
}
ENDCG
}
FallBack "Diffuse"
}
I have used this approach in Cg and HLSL shaders on XNA before, and it always worked like a charm. With the Unity shader, however, I get a very visible seam on the edges of the texture. I tried a Unity surface shader as well as a vertex/fragment shader, both with the same results.
The texture itself looks as follows. In my game it is actually a .tga, not a .png, but that doesn't cause the problem. The problem occurs on all texture filter settings and on repeat or clamp mode equally.
Now I've seen someone have a similar problem here: Seams between planes when lightmapping.
There was, however, no definitive answer on how to solve such a problem. Also, my problem doesn't relate to a lightmap or lighting at all. In the fragment shader I tested, there was no lighting enabled and the issue was still present.
The same question was also posted on the Unity answers site, but I received no answers and not a lot of views, so I am trying it here as well: Visible seams on borders when tiling texture
This describes the reason for your problem:
http://hacksoflife.blogspot.com/2011/01/derivatives-i-discontinuities-and.html
This is a great visual example, like yours:
http://aras-p.info/blog/2010/01/07/screenspace-vs-mip-mapping/
Unless you're going to disable mipmaps, I don't think this is solvable with Unity, because as far as I know, it won't let you use functions that let you specify what mip level to use in the fragment shader (at least on OS X / OpenGL ES; this might not be a problem if you're only targeting Windows).
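In short, the wrapped UV is discontinuous, so its screen-space derivatives spike at every cell border (a hedged illustration of the mechanism those links describe):
// Equivalent to the floor/subtract in the surf function above:
float2 uv = frac(IN.worldPos.xz); // jumps from ~0.999 back to 0.0 at every cell border
// At that jump, ddx(uv)/ddy(uv) are huge for one pixel, so the hardware assumes
// extreme minification and samples the smallest mip level - the visible seam.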
That said, I have no idea why you're doing the fragment-level uv calculations that you are; just passing data from the vertex shader works just fine, with a tileable texture:
struct v2f {
    float4 position_clip : SV_POSITION;
    float2 position_world_xz : TEXCOORD;
};

#pragma vertex vert
v2f vert(float4 vertex : POSITION) {
    v2f o;
    o.position_clip = mul(UNITY_MATRIX_MVP, vertex);
    o.position_world_xz = mul(_Object2World, vertex).xz;
    return o;
}

#pragma fragment frag
uniform sampler2D _MainTex;
fixed4 frag(v2f i) : COLOR {
    return tex2D(_MainTex, i.position_world_xz);
}
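For completeness, here is the same approach wrapped into a self-contained unlit shader (a sketch; the shader name is invented, and the deprecated UNITY_MATRIX_MVP / _Object2World are replaced with their current equivalents UnityObjectToClipPos / unity_ObjectToWorld):
Shader "MyGame/TileWorldUV"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            struct v2f {
                float4 position_clip : SV_POSITION;
                float2 position_world_xz : TEXCOORD0;
            };

            v2f vert (float4 vertex : POSITION)
            {
                v2f o;
                o.position_clip = UnityObjectToClipPos(vertex);
                // World-space XZ is used directly as the UV, so the texture tiles
                // once per world unit and lines up across neighbouring objects.
                o.position_world_xz = mul(unity_ObjectToWorld, vertex).xz;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.position_world_xz);
            }
            ENDCG
        }
    }
}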