Rendering Multiple Materials on a Single Mesh - unity3d

In Unity3D I try to render a creature and display an outline when it is selected.
The creature alone is rendered fine:
I downloaded an outline shader from GitHub and applied it as a second material to my mesh:
With the expanded materials looking like this:
However, the result is not at all as expected:
Without knowing much about materials and shaders, I tried fiddling around and found that if I change the Rendering Mode of the Standard material to Transparent, the result looks fine:
But now the creature alone renders strangely, with the limbs overlapping the body:
What is the correct way to achieve what I'm trying to do? Do you have resources where I can read more?

The problem with your setup is the render queue. Transparent objects are rendered after opaque ones (opaque geometry draws at queue 2000, transparent at 3000), so your outline simply draws on top of the creature. If you want to change the rendering order, you have to treat the object with an outline as a "special" opaque object (e.g. draw normal objects, then draw the outline, then draw the creature).
Here are a couple of alternatives:
Use Cull Front - this shader basically draws a bigger copy of the object on top of the original, like a shell. Cull Front makes it draw the back of the shell instead of the front, which is behind the object.
Use the stencil buffer to mark the region where the original object is drawn and skip it when you draw the outline.
Below is a modified version of your shader (I removed the second color pass and the surface shader pass since you don't use them). It implements the stencil buffer option. If you want to try the first one instead, remove the first pass and the stencil block in the second pass, and replace Cull Back with Cull Front.
Shader "Outlined/UltimateOutline"
{
Properties
{
_Color("Main Color", Color) = (0.5,0.5,0.5,1)
_FirstOutlineColor("Outline color", Color) = (1,0,0,0.5)
_FirstOutlineWidth("Outlines width", Range(0.0, 2.0)) = 0.15
_Angle("Switch shader on angle", Range(0.0, 180.0)) = 89
}
CGINCLUDE
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float4 normal : NORMAL;
};
uniform float4 _FirstOutlineColor;
uniform float _FirstOutlineWidth;
uniform float4 _Color;
uniform float _Angle;
ENDCG
SubShader{
Pass {
Tags{ "Queue" = "Transparent-1" "IgnoreProjector" = "True" }
ZWrite Off
Stencil {
Ref 1
Comp always
Pass replace
}
ColorMask 0
}
//First outline
Pass{
Tags{ "Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
Stencil {
Ref 1
Comp NotEqual
}
Blend SrcAlpha OneMinusSrcAlpha
ZWrite Off
Cull Back //Replace this with Cull Front for option 1
CGPROGRAM
struct v2f {
float4 pos : SV_POSITION;
};
#pragma vertex vert
#pragma fragment frag
v2f vert(appdata v) {
appdata original = v;
float3 scaleDir = normalize(v.vertex.xyz - float4(0,0,0,1));
//This shader consists of 2 ways of generating outline that are dynamically switched based on demiliter angle
//If vertex normal is pointed away from object origin then custom outline generation is used (based on scaling along the origin-vertex vector)
//Otherwise the old-school normal vector scaling is used
//This way prevents weird artifacts from being created when using either of the methods
if (degrees(acos(dot(scaleDir.xyz, v.normal.xyz))) > _Angle) {
v.vertex.xyz += normalize(v.normal.xyz) * _FirstOutlineWidth;
}
else {
v.vertex.xyz += scaleDir * _FirstOutlineWidth;
}
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
return o;
}
half4 frag(v2f i) : COLOR{
return _FirstOutlineColor;
}
ENDCG
}
}
Fallback "Diffuse"
}

Related

Unity 3D HDRP - Making a hole in a gameobject by shader using stencil

Unity version 2019.1.2f1 HDRP project.
I am trying to make a hole in a finished gameobject which has a material and shader I don't want to interfere with; the point being that this should work with any object it is done on.
I found this approach somewhere else, but it does not seem to work for the HD Render Pipeline version.
Basically two extra gameobjects/shapes are put where the hole is to be made, one object has a preparation pass, the other makes the hole.
Example shader:
Shader "Custom/HolePrepare" {
Properties{
}
SubShader{
Tags { "RenderType" = "Opaque" "Queue" = "Geometry+1"}
ColorMask 0
ZWrite off
Stencil {
Ref 1
Comp always
Pass replace
}
CGINCLUDE
struct appdata {
float4 vertex : POSITION;
};
struct v2f {
float4 pos : SV_POSITION;
};
v2f vert(appdata v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
return o;
}
half4 frag(v2f i) : SV_Target {
return half4(1,1,0,1);
}
ENDCG
Pass {
Cull Front
ZTest Less
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
ENDCG
}
Pass {
Cull Back
ZTest Greater
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
ENDCG
}
}
}
and
Shader "Custom/Hole" {
Properties{
}
SubShader{
Tags{ "Queue" = "Geometry+10" }
ColorMask RGB
ZWrite On
Stencil {
Ref 1
Comp notequal
Pass Zero
}
Pass{}
}
}
I figure these values aren't theoretically correct, because that's not how the example code was, but since it wasn't working for HDRP I started playing around to see if I got anywhere.
The Hole shader should really have ColorMask 0, but with that nothing happens, except that the hole object becomes invisible. This led me to believe I am not able to use the stencil buffer properly.
I got to the point where I could mark the pixels that I wanted to erase; however, I am not able to erase them.
I want to be able to erase this part of the primary object (cheese) so that the background (the floor in this case) is visible.
Is there any way to do this in the HDRP?
Note: I have also tried a render queue approach via script and shader in my HDRP project, without success, although the very same code worked in a standard Unity project.
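For what it's worth, the script side of such a render queue approach usually just overrides material.renderQueue; a minimal sketch of that idea (the component name and queue value are illustrative):

using UnityEngine;

// Hypothetical sketch of the "render queue by script" approach mentioned above:
// force this object's material to draw after regular geometry.
public class SetRenderQueue : MonoBehaviour
{
    // Geometry renders at 2000, so 2010 matches the "Geometry+10" tag above.
    public int queue = 2010;

    void Start()
    {
        GetComponent<Renderer>().material.renderQueue = queue;
    }
}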

Prevent transparent areas being shaded by projection shader

I'm trying to make a decal shader to use with a projector in Unity. Here's what I've put together:
Shader "Custom/color_projector"
{
Properties {
_Color ("Tint Color", Color) = (1,1,1,1)
_MainTex ("Cookie", 2D) = "gray" {}
}
Subshader {
Tags {"Queue"="Transparent"}
Pass {
ZTest Less
ColorMask RGB
Blend One OneMinusSrcAlpha
Offset -1, -1
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 uvShadow : TEXCOORD0;
float4 pos : SV_POSITION;
};
float4x4 unity_Projector;
float4x4 unity_ProjectorClip;
v2f vert (float4 vertex : POSITION)
{
v2f o;
o.pos = UnityObjectToClipPos (vertex);
o.uvShadow = mul (unity_Projector, vertex);
return o;
}
sampler2D _MainTex;
fixed4 _Color;
fixed4 frag (v2f i) : SV_Target
{
fixed4 tex = tex2Dproj (_MainTex, UNITY_PROJ_COORD(i.uvShadow));
return _Color * tex.a;
}
ENDCG
}
}
}
This works well in most situations:
However, whenever it projects onto a transparent surface (or multiple surfaces) it seems to render an extra time for each surface. Here, I've broken up the divide between the grass and the paving using grass textures with transparent areas:
I've tried numerous blending options and all of the ZTest options. This is the best I can get it to look.
From reading around I gather this might be because a transparent shader does not write to the depth buffer. I tried adding ZWrite On, and I tried doing a pass before the main pass:
Pass {
    ZWrite On
    ColorMask 0
}
But neither had any effect at all.
How can this shader be modified so that it only projects the texture once on the nearest geometries?
Desired result (photoshopped):
The problem is due to how projectors work. Basically, they render all meshes within their field of view a second time, except with a different shader. In your case, this means that both the ground and the plane with the grass will be rendered twice and layered on top of each other. I think it is possible to fix this in two steps:
First, add the following to the tags of the transparent (grass) shader:
"IgnoreProjector"="True"
Then, change the render queue of your projector from "Transparent" to "Transparent+1". This means that the ground will render first, then the grass edges, and finally the projector will project onto the ground (except appearing on top, since it is rendered last).
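In ShaderLab terms, the two changes might look like this (only the tag lines are shown; the rest of each shader stays as it is):

// In the grass (transparent) shader's SubShader, so projectors skip it:
Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }

// In the projector shader above, so it renders after all transparent geometry:
Tags { "Queue"="Transparent+1" }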
As for the blending, I think you want regular alpha blending:
Blend SrcAlpha OneMinusSrcAlpha
Another option if you are using deferred rendering is to use deferred decals. These are both cheaper and usually easier to use than projectors.

how to programmatically allow Unity Shader to control which object renders in front?

I've only just started learning Unity, but because I come from a background of coding in C#, I've found the standard scripting to be very quick to learn. Unfortunately, I've now come across a problem for which I believe a custom shader is required and I'm completely lost when it comes to shaders.
Scenario:
I'm using a custom distance scaling process so that really big, far away objects are moved within a reasonable floating point precision range from the player. This works great and handles scaling of the objects based on their adjusted distance, so they appear to actually be really far away. The problem occurs, though, when two of these objects pass close to each other in game space (they would still be millions of units apart at real scale), because they visibly collide.
Ex: https://www.youtube.com/watch?v=KFnuQg4R8NQ
Attempted Solution 1:
I've looked into flattening the objects along the player's view axis. This fixes the collision, but it affects shading and particle effects, so it wasn't a good option.
Attempted Solution 2:
I've tried changing the RenderOrder, but because sometimes one object is inside the mesh of another (even though the centre of that object is still closer to the camera), it doesn't fix the issue, and particle effects are problematic again.
Attempted Solution 3:
I've tried moving the colliding objects to their own layer, spawning a new camera with a higher depth at the same position as my main camera, and forcing each camera to only see the items on its respective layer. But this caused lighting issues, as some objects light others, and because I had only a limited number of layers this solution was quite limiting, forcing me to have only a low number of overlapping objects at a time. NOTE: this solution is probably the closest I was able to come to what I need, though.
Ex: https://www.youtube.com/watch?v=CyFDgimJ2-8
Attempted Solution 4:
I've tried updating the Standard shader code by downloading it from Unity's downloads page and creating my own custom shader that lets me modify the ZWrite and ZTest properties, but because I have no real understanding of how these work, I'm not getting anywhere.
Request:
I would greatly appreciate a shader script code example of how I can programmatically force one object, whose mesh is either colliding with or completely inside another mesh, to render in front of said mesh. I'm hoping I can then take that example and apply it to all the shaders that I'm currently using (Standard, Particle Additive) to achieve the effect I'm looking for. Thanks in advance for your help.
In the gif below both objects are colliding, and according to the camera position the cube is in front of the sphere, but I can change their visibility with the render queue:
If that's what you want, you only have to add ZWrite Off in your SubShader before the CGPROGRAM starts. The following is the Standard Surface Shader including that line:
Shader "Custom/Shader" {
Properties {
_Color ("Color", Color) = (1,1,1,1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Glossiness ("Smoothness", Range(0,1)) = 0.5
_Metallic ("Metallic", Range(0,1)) = 0.0
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 200
ZWrite Off
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
};
half _Glossiness;
half _Metallic;
fixed4 _Color;
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_BUFFER_END(Props)
void surf (Input IN, inout SurfaceOutputStandard o) {
// Albedo comes from a texture tinted by color
fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
Now for sorting particles: look at the shadows, how they collide, and how we can change their visibility regardless of their position.
Here's the shader for particles. I'm using the Unity built-in shader; the only thing added is ZTest Always:
Shader "Particles/Alpha Blended Premultiply Custom" {
Properties {
_MainTex ("Particle Texture", 2D) = "white" {}
_InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
}
Category {
Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane" }
ZTest Always
Blend SrcAlpha OneMinusSrcAlpha
ColorMask RGB
Cull Off Lighting Off ZWrite Off
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 2.0
#pragma multi_compile_particles
#pragma multi_compile_fog
#include "UnityCG.cginc"
sampler2D _MainTex;
fixed4 _TintColor;
struct appdata_t {
float4 vertex : POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct v2f {
float4 vertex : SV_POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
#ifdef SOFTPARTICLES_ON
float4 projPos : TEXCOORD1;
#endif
UNITY_VERTEX_OUTPUT_STEREO
};
float4 _MainTex_ST;
v2f vert (appdata_t v)
{
v2f o;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
o.vertex = UnityObjectToClipPos(v.vertex);
#ifdef SOFTPARTICLES_ON
o.projPos = ComputeScreenPos (o.vertex);
COMPUTE_EYEDEPTH(o.projPos.z);
#endif
o.color = v.color;
o.texcoord = TRANSFORM_TEX(v.texcoord,_MainTex);
return o;
}
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
float _InvFade;
fixed4 frag (v2f i) : SV_Target
{
#ifdef SOFTPARTICLES_ON
float sceneZ = LinearEyeDepth (SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
float partZ = i.projPos.z;
float fade = saturate (_InvFade * (sceneZ-partZ));
i.color.a *= fade;
#endif
return i.color * tex2D(_MainTex, i.texcoord) * i.color.a;
}
ENDCG
}
}
}
}

Unity - how to make material double sided

Searching for the issue gives a number of solutions, but for some reason they don't work in my Unity3D 5.4. For example:
camera inside a sphere
I do not see cull and/or sides settings on the material in the Unity editor.
In C#,
rend = GetComponent<Renderer>();
mater = rend.material;
rend.setFaceCulling( "front", "ccw" );
mater.side = THREE.DoubleSide;
gives no such setFaceCulling method or side property.
How to make material double sided?
You need a custom shader to get a double-sided material, by using Cull Off.
The easiest/fastest way to test this is to create a new Standard Surface Shader in the editor and open it. Add the line Cull Off below LOD 200.
One thing to consider is that lighting will not render correctly for the back faces. If you need that, I would recommend making models with two sides.
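For reference, a minimal sketch of that surface shader with the line added; apart from Cull Off (and the trimmed properties) this is just Unity's generated template, and the shader name is illustrative:

Shader "Custom/DoubleSidedSurface" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200
        Cull Off // the added line: rasterize both front and back faces

        CGPROGRAM
        #pragma surface surf Standard fullforwardshadows
        #pragma target 3.0

        sampler2D _MainTex;
        fixed4 _Color;

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutputStandard o) {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}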
Use or create a shader with
Cull Off
as seen in this simple two-sided shader:
Shader "Custom/NewSurfaceShader" {
Properties {
}
SubShader {
Cull off
Pass {
ColorMaterial AmbientAndDiffuse
}
}
}
Maybe my answer doesn't work for your Unity version, but here is a solution for newer versions with HDRP, shown in the image below (my Unity is 2019.4).
Just create an Unlit shader and edit it: write Cull Off below LOD 100. Then drag it onto a new material, set a picture on it for testing, and drag the material onto an object. It renders correctly from both sides!
Shader "Unlit/unlit"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Cull off
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
// apply fog
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
ENDCG
} }}
Also, in Unity 2020.3 the URP/Lit shader has a Render Face option.
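If you need to toggle that option from code, something like the following should work. Note that _Cull is the internal float property behind URP/Lit's Render Face dropdown, so treat the property name as an assumption for your URP version:

using UnityEngine;
using UnityEngine.Rendering;

// Illustrative sketch: switch a URP/Lit material to render both faces at runtime.
public class MakeDoubleSided : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        // _Cull backs the Render Face dropdown: Back = 2, Front = 1, Both = 0.
        mat.SetFloat("_Cull", (float)CullMode.Off);
    }
}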
You may use a custom surface shader with Cull Off, but the opposite faces will not be lit properly, because the normals are only valid for the front faces; for back faces the normals point the opposite way. If you want the back face to be treated like the front face, and don't want to make a double-sided mesh that consumes double the memory, you can draw in 2 passes: one pass for the front faces and one for the back faces, in which you invert the normal for every vertex in the vertex shader. You can use Cull Back and Cull Front.
SubShader
{
    // Here starts the first pass. If you are using a standard surface shader,
    // passes are created automatically; otherwise you should specify Pass { }
    Tags { ... }
    LOD 200
    Cull Back
    ...
    struct Input
    {
        ...
    };
    ...
    // or vert & frag shaders
    void surf(Input IN, inout SurfaceOutputStandard p)
    {
        // processing the front face, culling the back face
        ...
    }
    ...
    // Here starts the second pass, put in automatically by Unity
    Tags { ... }
    LOD 200
    Cull Front
    #pragma surface ... vertex:vert
    ...
    struct Input
    {
        ...
    };
    ...
    void vert(inout appdata_full v)
    {
        v.normal = -v.normal; // flip the normal for the back faces
    }
    void surf(Input IN, inout SurfaceOutputStandard p)
    {
        // processing the back face, culling the front face
        ...
    }
}

How to make Unity glass shader only refract objects behind it?

I am looking for a glass shader for Unity that only refracts the objects behind it, or ideas for how to modify an existing glass shader to do that.
This screenshot shows what happens when I use FX/Glass/Stained BumpDistort on a curved plane mesh.
As you can see, the glass shader refracts both the sphere in front of the mesh and the ground behind it. I am looking for a shader that will only refract the objects behind it.
Here is the code for that shader, for reference:
// Per pixel bumped refraction.
// Uses a normal map to distort the image behind, and
// an additional texture to tint the color.
Shader "FX/Glass/Stained BumpDistort" {
    Properties {
        _BumpAmt ("Distortion", range (0,128)) = 10
        _MainTex ("Tint Color (RGB)", 2D) = "white" {}
        _BumpMap ("Normalmap", 2D) = "bump" {}
    }
    Category {
        // We must be transparent, so other objects are drawn before this one.
        Tags { "Queue"="Transparent" "RenderType"="Opaque" }
        SubShader {
            // This pass grabs the screen behind the object into a texture.
            // We can access the result in the next pass as _GrabTexture
            GrabPass {
                Name "BASE"
                Tags { "LightMode" = "Always" }
            }
            // Main pass: Take the texture grabbed above and use the bumpmap to perturb it
            // on to the screen
            Pass {
                Name "BASE"
                Tags { "LightMode" = "Always" }

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma multi_compile_fog
                #include "UnityCG.cginc"

                struct appdata_t {
                    float4 vertex : POSITION;
                    float2 texcoord : TEXCOORD0;
                };

                struct v2f {
                    float4 vertex : SV_POSITION;
                    float4 uvgrab : TEXCOORD0;
                    float2 uvbump : TEXCOORD1;
                    float2 uvmain : TEXCOORD2;
                    UNITY_FOG_COORDS(3)
                };

                float _BumpAmt;
                float4 _BumpMap_ST;
                float4 _MainTex_ST;

                v2f vert (appdata_t v)
                {
                    v2f o;
                    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                    #if UNITY_UV_STARTS_AT_TOP
                    float scale = -1.0;
                    #else
                    float scale = 1.0;
                    #endif
                    o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y*scale) + o.vertex.w) * 0.5;
                    o.uvgrab.zw = o.vertex.zw;
                    o.uvbump = TRANSFORM_TEX(v.texcoord, _BumpMap);
                    o.uvmain = TRANSFORM_TEX(v.texcoord, _MainTex);
                    UNITY_TRANSFER_FOG(o, o.vertex);
                    return o;
                }

                sampler2D _GrabTexture;
                float4 _GrabTexture_TexelSize;
                sampler2D _BumpMap;
                sampler2D _MainTex;

                half4 frag (v2f i) : SV_Target
                {
                    // calculate perturbed coordinates
                    half2 bump = UnpackNormal(tex2D(_BumpMap, i.uvbump)).rg; // we could optimize this by just reading the x & y without reconstructing the Z
                    float2 offset = bump * _BumpAmt * _GrabTexture_TexelSize.xy;
                    i.uvgrab.xy = offset * i.uvgrab.z + i.uvgrab.xy;
                    half4 col = tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
                    half4 tint = tex2D(_MainTex, i.uvmain);
                    col *= tint;
                    UNITY_APPLY_FOG(i.fogCoord, col);
                    return col;
                }
                ENDCG
            }
        }
        // ------------------------------------------------------------------
        // Fallback for older cards and Unity non-Pro
        SubShader {
            Blend DstColor Zero
            Pass {
                Name "BASE"
                SetTexture [_MainTex] { combine texture }
            }
        }
    }
}
My intuition is that it has to do with the way that _GrabTexture is captured, but I'm not entirely sure. I'd appreciate any advice. Thanks!
No simple answer for this.
You cannot think about refraction without thinking about the context in some way, so let's see:
Basically, it's not easy to define when an object is "behind" another one. There are different ways to even measure a point's distance to the camera, let alone account for the whole geometry. There are many strange situations where geometry intersects, and the centers and bounds could be anywhere.
Refraction is usually easy to think about in raytracing algorithms (you just march a ray and calculate how it bounces/refracts to get the colors). But here in raster graphics (used for 99% of real-time graphics), the objects are rendered as a whole, and in turns.
What is going on with that image is that the background and ball are rendered first, and the glass later. The glass doesn't "refract" anything, it just draws itself as a distortion of whatever was written in the render buffer before.
"Before" is key here. You don't get "behinds" in raster graphics, everything is done by being conscious of rendering order. Let's see how some refractions are created:
Manually set render queue tags for the shaders, so you know at what point in the pipeline they are drawn
Manually set each material's render queue
Create a script that constantly marshals the scene and every frame calculates what should be drawn before or after the glass according to position or any method you want, and set up the render queues in the materials
Create a script that render the scene filtering out (through various methods) the objects that shouldn't be refracted, and use that as the texture to refract (depending on the complexity of the scene, this is sometimes necessary)
These are just some options off the top of my head; everything depends on your scene.
My advice:
Select the ball's material
Right-click on the Inspector window --> Tick on "Debug" mode
Set the Custom Render Queue to 2200 (after the regular geometry is drawn)
Select the glass' material
Set the Custom Render Queue to 2100 (after most geometry, but before the ball)
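If you'd rather do this from code than through the Debug inspector, material.renderQueue maps to that same Custom Render Queue field; a minimal sketch (the renderer fields are illustrative):

using UnityEngine;

// Illustrative sketch of the advice above: the glass grabs the screen
// before the ball is drawn, so the ball is never refracted.
public class RefractionOrder : MonoBehaviour
{
    public Renderer glassRenderer;
    public Renderer ballRenderer;

    void Start()
    {
        // Glass renders after regular geometry (2000) has been drawn...
        glassRenderer.material.renderQueue = 2100;
        // ...and the ball renders after the glass, so it stays on top.
        ballRenderer.material.renderQueue = 2200;
    }
}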