Cg: omit depth write - unity3d

I am outputting depth in Cg in a branch, like so:
ZWrite On
..
void frag(v2f IN, out float4 color : COLOR, out float depth : DEPTH) {
    if (statement) {
        color = float4(1, 1, 1, 1);
    } else {
        color = float4(0, 0, 0, 0);
        depth = 0;
    }
}
However, as you can see, I omit writing the depth in the first branch. This results in undefined behaviour, but I believe the equivalent is common practice in GLSL (omitting a write to gl_FragDepth leaves the original depth intact).
What should I do in Cg to get the original depth in the first branch while still declaring a depth output?

YMMV with this script. As I recall, the code needed to be targeted at old implementations of OpenGL, or else you'd get an error like "shader registers cannot be masked", related to this D3D issue.
But I believe you can pull the depth from the camera depth texture and write it back out. You do need to calculate a projected position first, using ComputeScreenPos in the vertex shader. Documentation is non-existent, AFAIK, for the functions Linear01Depth and LinearEyeDepth, so I can't tell you what the performance hit might be.
Shader "Depth Shader" { // defines the name of the shader
    SubShader { // Unity chooses the subshader that fits the GPU best
        Pass { // some shaders require multiple passes
            ZWrite On
            CGPROGRAM // here begins the part in Unity's Cg
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 position : POSITION;
                float4 projPos : TEXCOORD1;
            };

            v2f vert(float4 vertexPos : POSITION)
            {
                v2f OUT;
                OUT.position = mul(UNITY_MATRIX_MVP, vertexPos);
                OUT.projPos = ComputeScreenPos(OUT.position);
                return OUT;
            }

            uniform sampler2D _CameraDepthTexture; // camera depth texture

            void frag(v2f IN, out float4 color : COLOR, out float depth : DEPTH) // fragment shader
            {
                color = float4(0, 0, 0, 0);
                // use eye depth for actual z...
                depth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)).r);
                // ...or this for depth in [0,1]:
                //depth = Linear01Depth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)).r);
            }
            ENDCG // here ends the part in Cg
        }
    }
}
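For what it's worth, the math those two helpers perform is short, so the performance hit should be tiny. Here is a rough sketch in plain Python (not Unity code) of what Linear01Depth and LinearEyeDepth compute, assuming the conventional (non-reversed) depth encoding, where _ZBufferParams holds (1 - far/near, far/near, x/far, y/far); reversed-Z platforms use different values:

```python
def zbuffer_params(near, far):
    # Unity's _ZBufferParams for conventional (non-reversed) Z:
    # x = 1 - far/near, y = far/near, z = x/far, w = y/far
    x = 1.0 - far / near
    y = far / near
    return (x, y, x / far, y / far)

def linear01_depth(z, params):
    # Raw depth-buffer value remapped to [0, 1] between near and far plane
    return 1.0 / (params[0] * z + params[1])

def linear_eye_depth(z, params):
    # Raw depth-buffer value remapped to eye-space units from the camera
    return 1.0 / (params[2] * z + params[3])

p = zbuffer_params(near=0.3, far=1000.0)
print(linear01_depth(1.0, p))    # far plane -> 1.0
print(linear_eye_depth(0.0, p))  # near plane -> 0.3 (eye units)
```

So each helper is just one multiply-add and one reciprocal per sample; the texture fetch is the expensive part.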

Related

I can't understand the result of my fragment shader

I'm a newbie at Unity shader programming. I've tried a few lines of shader code, but I couldn't understand the result.
Here is my shader code.
Shader "Test/MyShader"
{
    Properties
    {}
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct vertInput
            {
                float4 vertex : POSITION;
            };

            struct fragInput
            {
                float4 vertex : SV_POSITION;
            };

            fragInput vert (vertInput IN)
            {
                fragInput o;
                o.vertex = UnityObjectToClipPos(IN.vertex);
                return o;
            }

            fixed4 frag (fragInput IN) : SV_Target
            {
                return fixed4(IN.vertex);
            }
            ENDCG
        }
    }
}
I applied this shader to a normal Plane. I expected the result to look like a spectrum, but what I got is very different from what I expected.
Here's the image link.
And this is the Plane's inspector info.
Can anyone explain why this result comes out?
As I understand it, you expect the color to depend on the pixel's position on the screen. To make that work, you should know what the result of UnityObjectToClipPos(IN.vertex) looks like by the time it reaches the fragment shader. There it is a vector containing:
x = pixel X coordinate on screen, in the range [0, ScreenWidthInPixels]
y = pixel Y coordinate on screen, in the range [0, ScreenHeightInPixels]
z = the depth-buffer value
w = the distance from the camera to the surface at this pixel
In your example you map that directly to a color vector, whose elements should be in the range [0, 1]. So the resulting color is the same as if you had specified the color (1.0, 1.0, 0, 1.0). To get a sane result, you should make your fragment shader look something like this:
fixed4 frag (fragInput IN) : SV_Target
{
    // IN.vertex.w contains the distance from the camera to the object
    return fixed4(IN.vertex.x / _ScreenParams.x, IN.vertex.y / _ScreenParams.y, 0.0, 1.0);
}
And the result will be like:
Useful links:
Unity shader predefined variables, where you can read about _ScreenParams
Unity shader predefined methods, where you can read about UnityObjectToClipPos(...)
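To see why dividing SV_POSITION by _ScreenParams gives a [0,1] gradient, here is a rough sketch (plain Python, hypothetical screen size) of what the rasterizer does to a clip-space position between the vertex and fragment stages; note the y direction may be flipped depending on the graphics API:

```python
def clip_to_pixel(clip, screen_w, screen_h):
    """Map a clip-space position to window pixel coordinates,
    as the rasterizer does before the fragment stage runs."""
    x, y, z, w = clip
    ndc_x, ndc_y = x / w, y / w          # perspective divide -> [-1, 1]
    px = (ndc_x * 0.5 + 0.5) * screen_w  # viewport transform
    py = (ndc_y * 0.5 + 0.5) * screen_h
    return px, py

# A vertex at the center of clip space lands at the center of the screen:
print(clip_to_pixel((0.0, 0.0, 0.5, 1.0), 1920, 1080))  # -> (960.0, 540.0)
```

Dividing the resulting pixel coordinates by the screen size (what `_ScreenParams.xy` holds) maps them back into [0, 1], which is what the corrected fragment shader outputs as color.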

Custom shader does not receive light

I made a grid shader which is working fine. However, it is not affected at all by any light. For context, regarding the plane that has the shader:
Its dimensions are 1000x1x1000 (so wide enough)
Displays shadows with any other material and cast shadows is on
Using Unity 2019.3.0f3
Universal Render Pipeline
The plane using custom grid shader (not receiving light)
The plane using basic shader (receiving light)
Custom grid shader code
I tried a few solutions, including adding FallBack "Diffuse" at the end, or #include directives along with TRANSFER_SHADOW macros. However, these don't work either.
You need to tell your shader what to do with the light information if you want it to be lit. Here is an example applying diffuse light directly to the albedo of your grid shader:
Shader "Custom/Grid"
{
    Properties
    {
        _GridThickness("Grid Thickness", Float) = 0.01
        _GridSpacing("Grid Spacing", Float) = 10.0
        _GridColour("Grid Colour", Color) = (0.5, 0.5, 0.5, 0.5)
        _BaseColour("Base Colour", Color) = (0.0, 0.0, 0.0, 0.0)
    }
    SubShader {
        Tags { "Queue" = "Transparent" }
        Pass {
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha
            Tags {
                "LightMode" = "ForwardBase"
            } // gets us access to the main directional light

            CGPROGRAM
            // Define the vertex and fragment shader functions
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityStandardBRDF.cginc" // for shader lighting info and some utils
            #include "UnityStandardUtils.cginc" // for energy conservation

            // Access Shaderlab properties
            uniform float _GridThickness;
            uniform float _GridSpacing;
            uniform float4 _GridColour;
            uniform float4 _BaseColour;

            // Input into the vertex shader
            struct vertexInput
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL; // include normal info
            };

            // Output from vertex shader into fragment shader
            struct vertexOutput
            {
                float4 pos : SV_POSITION;
                float4 worldPos : TEXCOORD0;
                float3 normal : TEXCOORD1; // pass normals along
            };

            // VERTEX SHADER
            vertexOutput vert(vertexInput input)
            {
                vertexOutput output;
                output.pos = UnityObjectToClipPos(input.vertex);
                // Calculate the world position coordinates to pass to the fragment shader
                output.worldPos = mul(unity_ObjectToWorld, input.vertex);
                output.normal = input.normal; // get normal for frag shader from vert info
                return output;
            }

            // FRAGMENT SHADER
            float4 frag(vertexOutput input) : COLOR
            {
                float3 lightDir = _WorldSpaceLightPos0.xyz;
                float3 viewDir = normalize(_WorldSpaceCameraPos - input.worldPos);
                float3 lightColor = _LightColor0.rgb;
                float3 col;
                if (frac(input.worldPos.x / _GridSpacing) < _GridThickness || frac(input.worldPos.z / _GridSpacing) < _GridThickness)
                    col = _GridColour.rgb;
                else
                    col = _BaseColour.rgb;
                col *= lightColor * DotClamped(lightDir, input.normal); // apply diffuse light by angle of incidence
                return float4(col, 1);
            }
            ENDCG
        }
    }
}
You should check out these tutorials to learn more about other ways to light your objects. The same applies if you want them to accept shadows.
Setting FallBack "Diffuse" won't do anything here, since the shader is not "falling back"; it's running exactly the way you programmed it, with no lighting or shadows.
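The lighting applied in the fragment shader above is plain Lambert diffuse. As a sanity check, here is the same math sketched in Python (hypothetical values; both vectors assumed normalized):

```python
def lambert(albedo, light_color, light_dir, normal):
    """Diffuse lighting: scale the surface color by the light color
    times the clamped cosine of the angle of incidence
    (what DotClamped(lightDir, normal) computes in the shader)."""
    ndotl = max(0.0, sum(l * n for l, n in zip(light_dir, normal)))
    return tuple(a * c * ndotl for a, c in zip(albedo, light_color))

# Light shining straight down onto an upward-facing surface: full intensity.
print(lambert((0.5, 0.5, 0.5), (1.0, 1.0, 1.0), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
# Light parallel to the surface: contributes nothing.
print(lambert((0.5, 0.5, 0.5), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```

This is also why the unlit grid looked flat: without the `ndotl` factor, every pixel gets the raw property color regardless of the light's direction.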

How to make Unity glass shader only refract objects behind it?

I am looking for a glass shader for Unity that only refracts the objects behind it, or ideas for how to modify an existing glass shader to do that.
This screenshot shows what happens when I use FX/Glass/Stained BumpDistort on a curved plane mesh.
As you can see, the glass shader refracts both the sphere in front of the mesh and the ground behind it. I am looking for a shader that will only refract the objects behind it.
Here is the code for that shader, for reference:
// Per pixel bumped refraction.
// Uses a normal map to distort the image behind, and
// an additional texture to tint the color.
Shader "FX/Glass/Stained BumpDistort" {
    Properties {
        _BumpAmt ("Distortion", range (0,128)) = 10
        _MainTex ("Tint Color (RGB)", 2D) = "white" {}
        _BumpMap ("Normalmap", 2D) = "bump" {}
    }
    Category {
        // We must be transparent, so other objects are drawn before this one.
        Tags { "Queue"="Transparent" "RenderType"="Opaque" }

        SubShader {
            // This pass grabs the screen behind the object into a texture.
            // We can access the result in the next pass as _GrabTexture
            GrabPass {
                Name "BASE"
                Tags { "LightMode" = "Always" }
            }

            // Main pass: take the texture grabbed above and use the bumpmap to perturb it
            // onto the screen
            Pass {
                Name "BASE"
                Tags { "LightMode" = "Always" }

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma multi_compile_fog
                #include "UnityCG.cginc"

                struct appdata_t {
                    float4 vertex : POSITION;
                    float2 texcoord : TEXCOORD0;
                };

                struct v2f {
                    float4 vertex : SV_POSITION;
                    float4 uvgrab : TEXCOORD0;
                    float2 uvbump : TEXCOORD1;
                    float2 uvmain : TEXCOORD2;
                    UNITY_FOG_COORDS(3)
                };

                float _BumpAmt;
                float4 _BumpMap_ST;
                float4 _MainTex_ST;

                v2f vert (appdata_t v)
                {
                    v2f o;
                    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                    #if UNITY_UV_STARTS_AT_TOP
                    float scale = -1.0;
                    #else
                    float scale = 1.0;
                    #endif
                    o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y*scale) + o.vertex.w) * 0.5;
                    o.uvgrab.zw = o.vertex.zw;
                    o.uvbump = TRANSFORM_TEX( v.texcoord, _BumpMap );
                    o.uvmain = TRANSFORM_TEX( v.texcoord, _MainTex );
                    UNITY_TRANSFER_FOG(o,o.vertex);
                    return o;
                }

                sampler2D _GrabTexture;
                float4 _GrabTexture_TexelSize;
                sampler2D _BumpMap;
                sampler2D _MainTex;

                half4 frag (v2f i) : SV_Target
                {
                    // calculate perturbed coordinates
                    half2 bump = UnpackNormal(tex2D( _BumpMap, i.uvbump )).rg; // we could optimize this by just reading the x & y without reconstructing the Z
                    float2 offset = bump * _BumpAmt * _GrabTexture_TexelSize.xy;
                    i.uvgrab.xy = offset * i.uvgrab.z + i.uvgrab.xy;

                    half4 col = tex2Dproj( _GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
                    half4 tint = tex2D(_MainTex, i.uvmain);
                    col *= tint;
                    UNITY_APPLY_FOG(i.fogCoord, col);
                    return col;
                }
                ENDCG
            }
        }

        // ------------------------------------------------------------------
        // Fallback for older cards and Unity non-Pro
        SubShader {
            Blend DstColor Zero
            Pass {
                Name "BASE"
                SetTexture [_MainTex] { combine texture }
            }
        }
    }
}
My intuition is that it has to do with the way that _GrabTexture is captured, but I'm not entirely sure. I'd appreciate any advice. Thanks!
There's no simple answer for this.
You cannot think about refraction without thinking about the context in some way, so let's see:
Basically, it's not easy to define when an object is "behind" another one. There are different ways to even measure a point's distance to the camera, let alone account for the whole geometry. There are many strange situations where geometry intersects, and the centers and bounds could be anywhere.
Refraction is usually easy to think about in raytracing algorithms (you just march a ray and calculate how it bounces/refracts to get the colors). But in raster graphics (used for 99% of real-time graphics), objects are rendered as a whole, and in turn.
What is going on in that image is that the background and the ball are rendered first, and the glass later. The glass doesn't "refract" anything; it just draws itself as a distortion of whatever was written to the render buffer before it.
"Before" is key here. You don't get "behind" in raster graphics; everything is done by being conscious of rendering order. Some ways refractions are set up:
Manually set render queue tags for the shaders, so you know at what point in the pipeline they are drawn
Manually set each material's render queue
Create a script that constantly walks the scene and, every frame, calculates what should be drawn before or after the glass according to position or any method you want, and sets up the render queues on the materials
Create a script that renders the scene while filtering out (through various methods) the objects that shouldn't be refracted, and use that as the texture to refract (depending on the complexity of the scene, this is sometimes necessary)
These are just some options off the top of my head; everything depends on your scene.
My advice:
Select the ball's material
Right-click on the Inspector window --> Tick on "Debug" mode
Set the Custom Render Queue to 2200 (after the regular geometry is drawn)
Select the glass' material
Set the Custom Render Queue to 2100 (after most geometry, but before the ball)
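To make the queue advice concrete, here is a toy simulation in Python (not Unity API; hypothetical object names) of what a GrabPass captures at different queue positions:

```python
def simulate(objects, glass_queue):
    """Draw objects in render-queue order; the glass's GrabPass snapshots
    whatever has been drawn before the glass itself."""
    drawn, grabbed = [], None
    for name, queue in sorted(objects + [("glass", glass_queue)], key=lambda o: o[1]):
        if name == "glass":
            grabbed = list(drawn)  # GrabPass: copy of the buffer so far
        drawn.append(name)
    return grabbed

scene = [("ground", 2000), ("ball", 2200)]
# Glass at 2100 is drawn after the ground but before the ball,
# so only the ground gets grabbed (and thus refracted):
print(simulate(scene, glass_queue=2100))  # -> ['ground']
# Glass at the default Transparent queue (3000) grabs everything, ball included:
print(simulate(scene, glass_queue=3000))  # -> ['ground', 'ball']
```

This is exactly the effect of the two Custom Render Queue values suggested above: whatever is queued after the glass never makes it into _GrabTexture.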

Screen coordinates in fragment shader

In a fragment shader like the below:
Shader "ColorReplacement" {
    Properties {
        _MainTex ("Greyscale (R) Alpha (A)", 2D) = "white" {}
    }
    SubShader {
        ZTest LEqual
        ZWrite On
        Pass {
            Name "ColorReplacement"
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma fragmentoption ARB_precision_hint_fastest
            #pragma target 3.0
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_tan v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            sampler2D _MainTex;

            float4 frag(v2f i) : COLOR
            {
            }
            ENDCG
        }
    }
    Fallback off
}
Is there a way to know the coordinates of i.uv on the screen?
I'm totally new to shaders. The shader is applied to an object drawn somewhere on the screen; the first pixel passed to frag may not correspond to the first pixel of the screen (the viewport). Is there a way to know the position of this pixel in screen coordinates?
EDIT
Yes, I want to obtain the fragment's location on the screen.
Unity accepts vertex and fragment programs written in both Cg and HLSL, but I don't know how to convert this shader to HLSL.
The equivalent of gl_FragCoord in Cg is WPOS. I can run the following shader:
Shader "Custom/WindowCoordinates/Base" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"

            float4 vert(appdata_base v) : POSITION {
                return mul (UNITY_MATRIX_MVP, v.vertex);
            }

            fixed4 frag(float4 sp : WPOS) : COLOR {
                return fixed4(sp.xy / _ScreenParams.xy, 0.0, 1.0);
            }
            ENDCG
        }
    }
}
That uses the screen position the way I want, but I'm such a noob that I can't even mix the two shaders to get what I want: in my shader I'm trying to access v2f.pos, which is calculated the same way as sp in the shader above, but I get the error:
Program 'frag', variable/member "pos" has semantic "POSITION" which is not visible in this profile
If I change pos to be WPOS instead of SV_POSITION I get a similar error:
Program 'vert', variable/member "pos" has semantic "WPOS" which is not visible in this profile at line 35
Which is strange, since I'm using the same target 3.0 as the shader above.
In the GLSL fragment stage there's a built-in variable, gl_FragCoord, which carries the fragment's pixel position within the viewport. If the viewport covers the whole screen, this is all you need. If the viewport covers only a subwindow of the screen, you'll have to pass in the viewport's xy offset and add it to gl_FragCoord.xy to get the screen position. Now, your shader code is not written in GLSL but apparently in Cg (with Unity extensions, it seems); still, it should have some correspondence to this available.
Though I would suggest you read some books or manuals on shaders, here is a simple solution:
vec2 texelSize = 1.0 / vec2(textureSize(yourTexSampler, 0));
vec2 screenCoords = gl_FragCoord.xy * texelSize;
I don't remember what gl_FragCoord is called in Cg, so search the docs. For textureSize(), substitute the width/height of the input texture.
Here is the pretty same question I asked some time ago.
Also take a look at this.
According to the docs, there is a helper function: ComputeScreenPos.
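For reference, ComputeScreenPos itself is only a little arithmetic. Here is a sketch in Python of what it does (based on the helper in UnityCG.cginc; the platform-dependent y flip is omitted):

```python
def compute_screen_pos(clip):
    """Unity's ComputeScreenPos, minus the platform y-flip:
    remaps clip-space xy from [-w, w] to [0, w], keeping z and w.
    Dividing xy by w in the fragment shader then yields [0, 1] screen UVs."""
    x, y, z, w = clip
    return ((x + w) * 0.5, (y + w) * 0.5, z, w)

sp = compute_screen_pos((0.0, 0.0, 0.5, 2.0))
print((sp[0] / sp[3], sp[1] / sp[3]))  # clip-space center -> (0.5, 0.5)
```

The divide by w is deferred to the fragment stage (or done by tex2Dproj) so that the value interpolates correctly in screen space.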

CG: Specify a variable not to be interpolated between vertex and fragment shader

I'm using Cg for writing shaders inside Unity3D.
I'm using vertex color attributes for passing some parameters to the shader. They won't be used for defining colors, and should be forwarded from the vertex shader to the pixel shader without modifying them.
This is the structure I'm taking as input from Unity3D to the vertex shader:
struct appdata_full {
    float4 vertex : POSITION;
    float4 tangent : TANGENT;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    float4 texcoord1 : TEXCOORD1;
    fixed4 color : COLOR;
#if defined(SHADER_API_XBOX360)
    half4 texcoord2 : TEXCOORD2;
    half4 texcoord3 : TEXCOORD3;
    half4 texcoord4 : TEXCOORD4;
    half4 texcoord5 : TEXCOORD5;
#endif
};
This is the structure returned by vertex shader as input to the fragment:
struct v2f {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    fixed4 col : COLOR;
};
If I simply forward the parameter to the fragment shader, of course it will be interpolated:
v2f vert (appdata_full v)
{
    v2f output;
    //....
    output.col = v.color;
    return output;
}
I'd like to pass the v.color parameter to the fragment shader without interpolation.
Is this possible? If yes, how?
EDIT
Like Tim pointed out, this is the expected behavior, because the shader can't do anything other than interpolate colors passed from the vertex shader to the fragment shader.
I'll try to explain better what I'm trying to achieve. I'm using per-vertex colors to store information other than colors. Without going into all the details of what I'm doing with it, consider each vertex color as an id (each vertex of the same triangle will have the same color; actually, each vertex of the same mesh).
So I used the color trick to carry some parameters, because I have no other way to do this. Now this piece of information must be available to the fragment shader in some way.
If I pass it as an out parameter of the vertex shader, this information encoded into a color arrives at the fragment shader interpolated, so it can no longer be used.
I'm looking for a way of propagating this information unchanged to the fragment shader (maybe it's possible to use a global variable or something like that? If yes, how?).
I'm not sure this counts as an answer, but it's a little much for a comment. As Bjorke points out, the fragment shader will always receive an interpolated value. If/when Unity supports OpenGL 4.0, you might have access to interpolation qualifiers, namely flat, which disables interpolation and derives all values from a provoking vertex.
That said, the problem with trying to assign the same "color" value to all vertices of a triangle is that the vertex shader iterates over the vertices once, not once per triangle. There will always be a "boundary" region where some vertex shares multiple edges with vertices of a different "color" or "id"; see my dumb example below. When applied to a box at (0,0,0), the top will be red, the bottom green, and the middle blue.
Shader "custom/colorbyheight" {
    Properties {
        _Unique_ID ("Unique Identifier", float) = 1.0
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                fixed4 color : COLOR;
            };

            uniform float _Unique_ID;

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                float3 worldpos = mul(_Object2World, v.vertex).xyz;
                if(worldpos[1] >= 0.0)
                    o.color.xyz = 0.35; // unique_id = 0.35
                else
                    o.color.xyz = 0.1;  // unique_id = 0.1
                o.color.w = 1.0;
                return o;
            }

            fixed4 frag (v2f i) : COLOR0 {
                // local unique_id's set by the vertex shader and stored in the color
                if(i.color.x >= 0.349 && i.color.x <= 0.351)
                    return float4(1.0,0.0,0.0,1.0); // red
                else if(i.color.x >= 0.099 && i.color.x <= 0.11)
                    return float4(0.0,1.0,0.0,1.0); // green

                // global unique_id set by a Unity script
                if(_Unique_ID == 42.0)
                    return float4(1.0,1.0,1.0,1.0); // white

                // Fallback color = blue
                return float4(0.0,0.0,1.0,1.0);
            }
            ENDCG
        }
    }
}
In your addendum you say "Actually each vertex of the same mesh." If that's the case, why not use a modifiable property, like I have included above? Each mesh then just needs a script to change the unique_id.
public class ModifyShader : MonoBehaviour {
    public float unique_id = 1;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        renderer.material.SetFloat( "_Unique_ID", unique_id );
    }
}
I know this is an old thread, but it's worth answering anyway, since this is one of the top Google results.
You can now use the nointerpolation qualifier on your variables in regular Cg shaders, i.e.:
nointerpolation fixed3 diff : COLOR0;
This is a pretty old thread, but I recently had a similar issue and found a super simple answer. OS X Mavericks now supports OpenGL 4.1, so soon it won't be an issue at all, but it may still take a while before Unity3D picks it up.
Anyway, there is a neat way to enable flat shading in Unity even on earlier OS X (e.g. Mountain Lion)!
The shader below will do the job (the crucial part is the line with #extension; otherwise you'd get a compilation error for using the keyword flat):
Shader "GLSL flat shader" {
    SubShader {
        Pass {
            GLSLPROGRAM
            #extension GL_EXT_gpu_shader4 : require
            flat varying vec4 color;

            #ifdef VERTEX
            void main()
            {
                color = gl_Color;
                gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            }
            #endif

            #ifdef FRAGMENT
            void main()
            {
                gl_FragColor = color; // set the output fragment color
            }
            #endif
            ENDGLSL
        }
    }
}
Got to it by combining things from:
http://poniesandlight.co.uk/notes/flat_shading_on_osx/
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Debugging_of_Shaders
The GPU will always interpolate between values. If you want a constant value for a triangle, you need to set the same value for all vertices of that triangle. This can at times be inefficient, but it's how OpenGL (and DirectX) works. There is no inherent notion of a per-face value.
You might do this: glShadeModel(GL_FLAT). This turns off interpolation for all fragment shader inputs, and is also available in older OpenGL (pre-4.0).
If you have some inputs you want interpolated and some you don't, render once with GL_FLAT to a texture of the same resolution as your output, then render again with GL_SMOOTH and sample the texture to read the flat values for each pixel (while also getting interpolated values in the usual way).
If you can use Direct3D instead, you can use the nointerpolation modifier on individual fragment shader inputs (shader model 4 or later).
The following works for me.
Unfortunately, DX uses vertex 0 as the provoking vertex, while GL by default uses vertex 2.
You can change this in GL, but glProvokingVertex does not seem to be exposed.
We are doing flat shading, and this reduces our vertex count significantly.
We have to reorder the triangles and compute normals in a special way (if anyone is interested, I can post example source).
The problem is that we have to have different meshes on GL vs. DX, as the triangle indices need to be rotated in order for the triangles to use the appropriate provoking vertex.
Maybe there is some way to execute a GL command via a plugin.
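The index rotation mentioned above can be sketched like this (plain Python; assumes D3D's provoking vertex is the first vertex of each triangle and GL's default is the last):

```python
def rotate_for_gl(indices):
    """Given triangle indices authored for D3D (provoking vertex = first),
    rotate each triangle so that the same vertex becomes the last one,
    which is GL's default provoking vertex. Rotation preserves winding."""
    out = []
    for i in range(0, len(indices), 3):
        a, b, c = indices[i:i + 3]
        out += [b, c, a]  # first vertex moves to the last slot
    return out

tris = [0, 1, 2,  2, 1, 3]
print(rotate_for_gl(tris))  # -> [1, 2, 0, 1, 3, 2]
```

Because each triangle is only rotated (never mirrored), the winding order and thus backface culling are unaffected; only which vertex supplies the flat attributes changes.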