I want to draw a horizontal line on an object with shader code (HLSL).
The clipping shader simply takes the distance to a given Y coordinate in the surface shader and checks whether it is greater than a given value.
If it is, the pixel is discarded. The result is a shader that simply clips away all pixels that are not on the line.
void surf (Input IN, inout SurfaceOutputStandard o) {
    // Albedo comes from a texture tinted by color
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    float d = abs(_YClip - IN.worldPos.y); // _YClip is in the Properties block and can be changed
    if (d > _LineThickness) {
        discard;
    }
    // Standard surface shader output for the surviving pixels
    o.Albedo = c.rgb;
    o.Alpha = c.a;
}
Can I somehow combine this shader with the standard Unity shader without changing the code?
I plan to have a gizmo shader that renders lines and all kinds of things. It would be very practical if I could just tell Unity to render this gizmo shader on top.
I believe you might be able to use or adapt this shader to your purpose.
Image showing the object before the cutoff Y value is reached.
Image showing the effect in progress, where one half of the object is above the cutoff Y value and the other half is below. Note that the pattern it dissolves in depends on a texture you supply yourself, so it should be possible to get a strict cutoff instead of the more odd and uneven pattern shown here.
After the object has fully passed the cutoff Y value. What I did in this case is hide a second object inside the first one that is slightly smaller than it. If you don't have anything inside, the object will simply be invisible, i.e. clipped away.
Shader "Dissolve/Dissolve"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_DissolveTexture("Dissolve Texture", 2D) = "white" {}
_DissolveY("Current Y of the dissolve effect", Float) = 0
_DissolveSize("Size of the effect", Float) = 2
_StartingY("Starting point of the effect", Float) = -1 //the number is supposedly in meters. Is compared to the Y coordinate in world space I believe.
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// make fog work
//#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
//UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
float3 worldPos : TEXCOORD1;
};
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _DissolveTexture;
float _DissolveY;
float _DissolveSize;
float _StartingY;
v2f vert (appdata v) //"The vertex shader"
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
//UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target //"For drawing the pixel on top"
{
float transition = _DissolveY - i.worldPos.y; //Cutoff value where world position is taken into account.
clip(_StartingY + (transition + (tex2D(_DissolveTexture, i.uv)) * _DissolveSize)); //Clip = cutoff if above 0.
//My understanding: If StartingY for dissolve effect + transition value and uv mapping of the texture is taken into account, clip off using the _DissolveSize.
//This happens to each individual pixel.
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
// apply fog
//UNITY_APPLY_FOG(i.fogCoord, col);
//clip(1 - i.vertex.x % 10); //"A pixel is NOT rendered if clip is below 0."
return col;
}
ENDCG
}
}
}
Here you see the inspector fields available.
I have a similar one for the X axis.
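If you want the strict cutoff mentioned above instead of the textured dissolve, a minimal sketch of a fragment shader variant (not part of the original shader) would drop the dissolve texture entirely and clip on the world-space Y value alone:

    fixed4 frag (v2f i) : SV_Target
    {
        // Strict cutoff: discard everything above _DissolveY, keep everything below.
        clip(_DissolveY - i.worldPos.y);
        return tex2D(_MainTex, i.uv);
    }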
I have created the following gradient shader that takes an Image component's source image and applies a two-colour gradient to it. Using a toggle it can be switched between using the source image's alpha for the gradient alpha, or setting the alpha per gradient colour.
Properties
{
    [PerRendererData] _MainTex ("Texture", 2D) = "white" {}
    [Header(Colours)]
    _Color1("Color 1", Color) = (0,0,0,1)
    _Color2("Color 2", Color) = (1,1,1,1)
    [Toggle]_UseImageAlpha("Use Image alpha", float) = 0
    [Header(Cull mode)]
    [Enum(UnityEngine.Rendering.CullMode)] _CullMode("Cull mode", float) = 2
    [Header(ZTest)]
    [Enum(UnityEngine.Rendering.CompareFunction)] _ZTest("ZTest", float) = 4
    [Toggle(UNITY_UI_ALPHACLIP)] _UseUIAlphaClip("Use Alpha Clip", Float) = 1
}
SubShader
{
    Tags {"Queue" = "Transparent" "RenderType"="Transparent"}
    LOD 100
    Blend SrcAlpha OneMinusSrcAlpha
    ZTest [_ZTest]
    Cull [_CullMode]

    Pass
    {
        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #pragma multi_compile_local _ UNITY_UI_ALPHACLIP

        #include "UnityCG.cginc"

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv : TEXCOORD0;
            fixed4 col : COLOR;
        };

        struct v2f
        {
            float2 uv : TEXCOORD0;
            float4 vertex : SV_POSITION;
            fixed4 col : COLOR;
        };

        sampler2D _MainTex;
        float4 _MainTex_ST;
        fixed4 _Color1;
        fixed4 _Color2;
        bool _UseImageAlpha;

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = TRANSFORM_TEX(v.uv, _MainTex);
            o.col = v.col;
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            if (_UseImageAlpha) {
                _Color1.a = i.col.a;
                _Color2.a = i.col.a;
            }
            fixed4 col = tex2D(_MainTex, i.uv);
            col *= lerp(_Color1, _Color2, i.uv.y);
            col.a = clamp(col.a, 0, 1);

            #ifdef UNITY_UI_ALPHACLIP
            clip(col.a - .001);
            #endif

            return col;
        }
        ENDCG
    }
}
This shader works fine and shows the gradient as expected. However, once I start adding multiple layers of Images (for example a blue square behind it and a green square in front of it) it starts having issues with Z-fighting, in the Scene view only, depending on the angle of the scene camera relative to the object that comes next in the hierarchy (in this example the green square). In the Game view and in builds the Z-fighting doesn't occur.
I am using the default LessEqual ZTest option, with back-face culling and the render queue set to 3000 (which is the same as the render queue of the images in front of and behind it). As per Unity's documentation, having it set to LessEqual should make it so objects in front get drawn on top, and objects behind get hidden:
How should depth testing be performed. Default is LEqual (draw objects in front of or at the same distance as existing objects; hide objects behind them).
Setting the ZTest to any of the other options (Off, Always, GreaterEqual, etc.) doesn't yield a better result.
If I set the Render queue higher (3001) it will always draw the gradient on top in the Scene view (no changes in the Game view) whereas setting it to 2999 will still make it z fight with the object in front of it (green square), while making the blue square behind it transparent.
When I only have the green square in front of the gradient it will z fight in some places, while cutting out the green square in other places where the source image doesn't have any pixels.
Using the alpha of the source image, or using the alpha of the two individual colours does not make a difference either.
(gyazo) Example gif of the fighting changing depending on the camera angle.
What is causing this z fighting, and why does it only occur in the scene view?
Using Unity 2019.3.13f1; same results in 2019.2, 2019.1, 2018.4 LTS, and 2017 LTS on Windows.
Try adding ZWrite Off. With shaders it is often useful to start with (or at least look at) one of Unity's built-in shaders that is close to what you want; in your case that would be UI-Default.shader.
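For reference, a sketch of where that line would sit in the SubShader posted above (everything else unchanged):

    SubShader
    {
        Tags {"Queue" = "Transparent" "RenderType"="Transparent"}
        LOD 100
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off   // transparent UI normally should not write to the depth buffer
        ZTest [_ZTest]
        Cull [_CullMode]
        // ... Pass unchanged ...
    }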
I made a grid shader which works fine. However, it is not affected at all by any light. For context, here are some details about the plane that uses the shader:
Its dimensions are 1000x1x1000 (so wide enough)
It displays shadows with any other material, and Cast Shadows is on
Using Unity 2019.3.0f3
Universal Render Pipeline
The plane using custom grid shader (not receiving light)
The plane using basic shader (receiving light)
Custom grid shader code
I tried a few solutions, including adding FallBack "Diffuse" at the end, or an #include together with the TRANSFER_SHADOW macros, but these don't work either.
You need to tell your shader what to do with the light information if you want it to be lit. Here is an example applying diffuse light directly to the albedo of your grid shader:
Shader "Custom/Grid"
{
Properties
{
_GridThickness("Grid Thickness", Float) = 0.01
_GridSpacing("Grid Spacing", Float) = 10.0
_GridColour("Grid Colour", Color) = (0.5, 0.5, 0.5, 0.5)
_BaseColour("Base Colour", Color) = (0.0, 0.0, 0.0, 0.0)
}
SubShader{
Tags { "Queue" = "Transparent" }
Pass {
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Tags {
"LightMode" = "ForwardBase"
} // gets us access to main directional light
CGPROGRAM
// Define the vertex and fragment shader functions
#pragma vertex vert
#pragma fragment frag
#include "UnityStandardBRDF.cginc" // for shader lighting info and some utils
#include "UnityStandardUtils.cginc" // for energy conservation
// Access Shaderlab properties
uniform float _GridThickness;
uniform float _GridSpacing;
uniform float4 _GridColour;
uniform float4 _BaseColour;
// Input into the vertex shader
struct vertexInput
{
float4 vertex : POSITION;
float3 normal : NORMAL; // include normal info
};
// Output from vertex shader into fragment shader
struct vertexOutput
{
float4 pos : SV_POSITION;
float4 worldPos : TEXCOORD0;
float3 normal : TEXCOORD1; // pass normals along
};
// VERTEX SHADER
vertexOutput vert(vertexInput input)
{
vertexOutput output;
output.pos = UnityObjectToClipPos(input.vertex);
// Calculate the world position coordinates to pass to the fragment shader
output.worldPos = mul(unity_ObjectToWorld, input.vertex);
output.normal = input.normal; //get normal for frag shader from vert info
return output;
}
// FRAGMENT SHADER
float4 frag(vertexOutput input) : COLOR
{
float3 lightDir = _WorldSpaceLightPos0.xyz;
float3 viewDir = normalize(_WorldSpaceCameraPos - input.worldPos);
float3 lightColor = _LightColor0.rgb;
float3 col;
if (frac(input.worldPos.x / _GridSpacing) < _GridThickness || frac(input.worldPos.z / _GridSpacing) < _GridThickness)
col = _GridColour;
else
col = _BaseColour;
col *= lightColor * DotClamped(lightDir, input.normal); // apply diffuse light by angle of incidence
return float4(col, 1);
}
ENDCG
}
}
}
You should check out these tutorials to learn more about other ways to light your objects. Same applies if you want them to accept shadows.
Setting FallBack "Diffuse" won't do anything here since the shader is not "falling back", it's running exactly the way you programmed it to, with no lighting or shadows.
I don't know much about shaders, so I am struggling to add transparency to a shader I already use.
Basically, I use the shader below to display 360 videos on a sphere. It flips the normals so the video is displayed on the inside.
However, I would like to add an alpha value to it so I can make the sphere (and therefore the video) as transparent as I need it to be. What should I change?
Shader "Custom/Equirectangular" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Diffuse (RGB) Alpha (A)", 2D) = "gray" {}
}
SubShader{
Pass {
Tags {"LightMode" = "Always"}
Cull Front
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#pragma glsl
#pragma target 3.0
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f
{
float4 pos : SV_POSITION;
float3 normal : TEXCOORD0;
};
v2f vert (appdata v)
{
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.normal = v.normal;
return o;
}
sampler2D _MainTex;
#define PI 3.141592653589793
inline float2 RadialCoords(float3 a_coords)
{
float3 a_coords_n = normalize(a_coords);
float lon = atan2(a_coords_n.z, a_coords_n.x);
float lat = acos(a_coords_n.y);
float2 sphereCoords = float2(lon, lat) * (1.0 / PI);
return float2(1 - (sphereCoords.x * 0.5 + 0.5), 1 - sphereCoords.y);
}
float4 frag(v2f IN) : COLOR
{
float2 equiUV = RadialCoords(IN.normal);
return tex2D(_MainTex, equiUV);
}
ENDCG
}
}
FallBack "VertexLit"
}
EDIT
I have also noticed that texture tiling and offset does not work on this shader. Any ideas how to make that work?
Short story: this is going to be really difficult
Not the shader, the shader's easy. All you have to do is modify this line:
return tex2D(_MainTex, equiUV);
The Long story:
Or: what to modify this line to.
Video formats, due to their very nature, do not natively contain an alpha channel. You'll be hard pressed to find one that does (I looked into this briefly back in 2015 when "interviewing" for a "job" where they needed something similar).
Once you figure out how you're going to encode the alpha, then you can modify the shader to look for that data and convert it to an alpha value, and bam you're done.
I think the place that I was "interviewing" at did it by splitting the video into an upper and lower sections, the upper half was just the alpha channel (black/white) and the lower half was the color data. The player would split the video horizontally and treat the two halves differently. I didn't have to mess with it, they'd already done it, so I'm not sure how it was done programmatically, I can only speculate.
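To make that speculation concrete: assuming the video really is packed with the colour in the bottom half and a greyscale alpha matte in the top half, the fragment shader could look something like this (a sketch, not the player they used; it also needs the blending described in the next answer to have any visible effect):

    float4 frag(v2f IN) : COLOR
    {
        float2 equiUV = RadialCoords(IN.normal);
        // colour lives in the bottom half of the packed video...
        float4 col = tex2D(_MainTex, float2(equiUV.x, equiUV.y * 0.5));
        // ...and the greyscale matte in the top half; use its red channel as alpha
        col.a = tex2D(_MainTex, float2(equiUV.x, 0.5 + equiUV.y * 0.5)).r;
        return col;
    }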
You forgot blending for transparency. And it is better to provide corresponding shader tags as well.
Tags { "LightMode"="Always" "Queue"="Transparent" "RenderType"="Transparent" }
Blend SrcAlpha OneMinusSrcAlpha
Cull Front
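If all you actually need is a uniform transparency for the whole sphere (as the question asks) rather than per-pixel alpha from the video, a sketch of the remaining change could be a new property, say _Transparency (not in the original shader), multiplied into the alpha; together with the Blend line above it fades the whole video:

    // Hypothetical addition to Properties:
    //     _Transparency ("Transparency", Range(0, 1)) = 1
    float _Transparency;

    float4 frag(v2f IN) : COLOR
    {
        float2 equiUV = RadialCoords(IN.normal);
        float4 col = tex2D(_MainTex, equiUV);
        col.a *= _Transparency; // needs "Blend SrcAlpha OneMinusSrcAlpha" to take effect
        return col;
    }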
I've been trying to project a 360 video inside a sphere with flipped normals for Google Cardboard VR. The video works fine, except that it is mirrored horizontally, which is only noticeable when there is text on screen. I've included a screenshot of the video with a UI.Text element in front of it for comparison.
I've tried to invert the camera's view through projectionMatrix, but then it just ends up in blank space. Screenshot:
I can't figure out a way to make the video project the right way. Please help!
Here is a shader that displays the content correctly without inverting it, I have tested it with Unity 2018.1.1 as I am currently using it in my project:
Shader "InsideVisible" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Tags { "RenderType"="Opaque" }
Cull front // ADDED BY BERNIE, TO FLIP THE SURFACES
LOD 100
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata_t {
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
};
struct v2f {
float4 vertex : SV_POSITION;
half2 texcoord : TEXCOORD0;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata_t v) {
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
// ADDED BY BERNIE:
v.texcoord.x = 1 - v.texcoord.x;
o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target {
fixed4 col = tex2D(_MainTex, i.texcoord);
return col;
}
ENDCG
}
}
}
If you need more information about the shader you can view this tutorial.
Flipping the normals on a sphere is insufficient, you also need to reverse the U part of the UV coordinates (that is, change all the values U such that they are 1-U). A sphere is set up so that the outside renders text correctly from right to left. When you flip the normals "right" is still on the right from the outside...meaning that it's on the left when viewed from the inside.
You will either need to manually edit the UV coordinates yourself or get a premade inverted sphere off the asset store (IIRC there are two that are available for free).
I am looking for a glass shader for Unity that only refracts the objects behind it, or ideas for how to modify an existing glass shader to do that.
This screenshot shows what happens when I use FX/Glass/Stained BumpDistort on a curved plane mesh.
As you can see, the glass shader refracts both the sphere in front of the mesh and the ground behind it. I am looking for a shader that will only refract the objects behind it.
Here is the code for that shader, for reference:
// Per pixel bumped refraction.
// Uses a normal map to distort the image behind, and
// an additional texture to tint the color.
Shader "FX/Glass/Stained BumpDistort" {
Properties {
_BumpAmt ("Distortion", range (0,128)) = 10
_MainTex ("Tint Color (RGB)", 2D) = "white" {}
_BumpMap ("Normalmap", 2D) = "bump" {}
}
Category {
// We must be transparent, so other objects are drawn before this one.
Tags { "Queue"="Transparent" "RenderType"="Opaque" }
SubShader {
// This pass grabs the screen behind the object into a texture.
// We can access the result in the next pass as _GrabTexture
GrabPass {
Name "BASE"
Tags { "LightMode" = "Always" }
}
// Main pass: Take the texture grabbed above and use the bumpmap to perturb it
// on to the screen
Pass {
Name "BASE"
Tags { "LightMode" = "Always" }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata_t {
float4 vertex : POSITION;
float2 texcoord: TEXCOORD0;
};
struct v2f {
float4 vertex : SV_POSITION;
float4 uvgrab : TEXCOORD0;
float2 uvbump : TEXCOORD1;
float2 uvmain : TEXCOORD2;
UNITY_FOG_COORDS(3)
};
float _BumpAmt;
float4 _BumpMap_ST;
float4 _MainTex_ST;
v2f vert (appdata_t v)
{
v2f o;
o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
#if UNITY_UV_STARTS_AT_TOP
float scale = -1.0;
#else
float scale = 1.0;
#endif
o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y*scale) + o.vertex.w) * 0.5;
o.uvgrab.zw = o.vertex.zw;
o.uvbump = TRANSFORM_TEX( v.texcoord, _BumpMap );
o.uvmain = TRANSFORM_TEX( v.texcoord, _MainTex );
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
sampler2D _GrabTexture;
float4 _GrabTexture_TexelSize;
sampler2D _BumpMap;
sampler2D _MainTex;
half4 frag (v2f i) : SV_Target
{
// calculate perturbed coordinates
half2 bump = UnpackNormal(tex2D( _BumpMap, i.uvbump )).rg; // we could optimize this by just reading the x & y without reconstructing the Z
float2 offset = bump * _BumpAmt * _GrabTexture_TexelSize.xy;
i.uvgrab.xy = offset * i.uvgrab.z + i.uvgrab.xy;
half4 col = tex2Dproj( _GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
half4 tint = tex2D(_MainTex, i.uvmain);
col *= tint;
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
ENDCG
}
}
// ------------------------------------------------------------------
// Fallback for older cards and Unity non-Pro
SubShader {
Blend DstColor Zero
Pass {
Name "BASE"
SetTexture [_MainTex] { combine texture }
}
}
}
}
My intuition is that it has to do with the way that _GrabTexture is captured, but I'm not entirely sure. I'd appreciate any advice. Thanks!
No simple answer for this.
You cannot think about refraction without thinking about the context in some way, so let's see:
Basically, it's not easy to define when an object is "behind" another one. There are different ways even to measure a point's distance to the camera, let alone to account for the whole geometry. There are many strange situations where geometry intersects, and the centers and bounds could be anywhere.
Refraction is usually easy to think about in raytracing algorithms (you just march a ray and calculate how it bounces/refracts to get the colors). But here in raster graphics (used for 99% of real-time graphics), the objects are rendered as a whole, and in turns.
What is going on with that image is that the background and ball are rendered first, and the glass later. The glass doesn't "refract" anything, it just draws itself as a distortion of whatever was written in the render buffer before.
"Before" is key here. You don't get "behinds" in raster graphics, everything is done by being conscious of rendering order. Let's see how some refractions are created:
Manually set render queue tags for the shaders, so you know at what point in the pipeline they are drawn
Manually set each material's render queue
Create a script that constantly marshals the scene and every frame calculates what should be drawn before or after the glass, according to position or any method you want, and sets up the render queues on the materials accordingly
Create a script that renders the scene while filtering out (through various methods) the objects that shouldn't be refracted, and use that as the texture to refract (depending on the complexity of the scene, this is sometimes necessary)
These are just some options off the top of my head; everything depends on your scene.
My advice:
Select the ball's material
Right-click on the Inspector window --> Tick on "Debug" mode
Set the Custom Render Queue to 2200 (after the regular geometry is drawn)
Select the glass' material
Set the Custom Render Queue to 2100 (after most geometry, but before the ball)
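If you would rather not rely on the Debug inspector, the same queues can be baked into the shaders with queue tags (a sketch; Geometry is 2000, so Geometry+100 is 2100 and Geometry+200 is 2200):

    // In the ball's shader (whatever it uses), queue 2200:
    Tags { "Queue" = "Geometry+200" }

    // In the glass shader, queue 2100 (replacing its original "Queue"="Transparent"):
    Tags { "Queue" = "Geometry+100" }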