I am trying to implement a shader to use with Unity's LineRenderer. The shader should have noise that scrolls over time relative to the texture coordinates, for example parallel to the x axis of the texture's UV space. I have an implementation, but I don't know how to get the scroll direction relative to the texture UV (taking the texture rotation into account) in the vert function. I only have world-space-relative scrolling.
The main problem: how do I convert UV coordinates (for example (0, 0) or (1, 0)) to world space?
Here is my shader:
Shader "LineRendering/Test"
{
Properties
{
[PerRendererData] _MainTex("Sprite Texture", 2D) = "white" {}
_Freq("Frequency", Float) = 1
_Speed("Speed", Float) = 1
}
SubShader
{
Tags
{
"Queue" = "Transparent"
"IgnoreProjector" = "True"
"RenderType" = "Transparent"
"PreviewType" = "Plane"
"CanUseSpriteAtlas" = "True"
}
LOD 200
Cull Off
Lighting Off
ZWrite Off
Fog { Mode Off }
Blend One OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma target 3.0
#pragma vertex vert
#pragma fragment frag
#pragma enable_d3d11_debug_symbols
#include "noiseSimplex.cginc"
struct appdata_t
{
fixed4 vertex : POSITION;
fixed2 uv : TEXCOORD0;
};
struct v2f
{
fixed4 vertex : SV_POSITION;
fixed2 texcoord : TEXCOORD0;
fixed2 srcPos : TEXCOORD1;
};
uniform fixed _Freq;
uniform fixed _Speed;
v2f vert(appdata_t IN)
{
v2f OUT;
OUT.vertex = UnityObjectToClipPos(IN.vertex);
OUT.texcoord = IN.uv;
OUT.srcPos = mul(unity_ObjectToWorld, IN.vertex).xy;
OUT.srcPos *= _Freq;
//This is my attempt to convert uv coordinates to world coordinates, but it is still unsuccessful.
//fixed2 v0Pos = mul(unity_WorldToObject, fixed3(0, 0, 0)).xy;
//fixed2 v1Pos = mul(unity_WorldToObject, fixed3(1, 0, 0)).xy;
//fixed2 scrollOffset = v1Pos - v0Pos;
fixed2 scrollOffset = fixed2(1, 0);
OUT.srcPos.xy -= fixed2(scrollOffset.x, scrollOffset.y) * _Time.y * _Speed;
return OUT;
}
fixed4 frag(v2f IN) : COLOR
{
fixed4 output;
float ns = snoise(IN.srcPos) / 2 + 0.5f;
output.rgb = fixed3(ns, ns, ns);
output.a = ns;
output.rgb *= output.a;
return output;
}
ENDCG
}
}
}
The noise library came from here: https://forum.unity.com/threads/2d-3d-4d-optimised-perlin-noise-cg-hlsl-library-cginc.218372/#post-2445598. Please help me.
Texture coordinates are already in texture space. If I understand correctly, you should be able to just do this:
v2f vert(appdata_t IN)
{
v2f OUT;
OUT.vertex = UnityObjectToClipPos(IN.vertex);
OUT.texcoord = IN.uv;
OUT.srcPos = IN.uv;
OUT.srcPos *= _Freq;
fixed2 scrollOffset = fixed2(1, 0);
OUT.srcPos.xy -= fixed2(scrollOffset.x, scrollOffset.y) * _Time.y * _Speed;
return OUT;
}
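Note that raw IN.uv ignores the material's tiling and offset. If you want those to affect the noise as well, you could fold in the standard _MainTex_ST vector (a minimal sketch; Unity fills _MainTex_ST automatically for the _MainTex property):
uniform fixed4 _MainTex_ST; // xy = tiling, zw = offset
// ...then in vert, instead of OUT.srcPos = IN.uv;
OUT.srcPos = IN.uv * _MainTex_ST.xy + _MainTex_ST.zw;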
Option 1
Each of your UVs is associated with a specific vertex. Once you establish which UV is assigned to which vertex, you can look up the world position of that vertex.
Option 2
Another way to do this may be with a texture that is a pre-baked image of the local-space coordinates of the object. In the texture, the XYZ coordinates map to RGB. You then do a texture lookup to get the local object coordinates, and multiply that vector by the object-to-world (model) matrix to get the actual world-space value.
When you create the texture, you'll have to account for the inability to store negative values. So first you'll have to set up the object so that its coordinates fit entirely inside [-1, 1] on all three axes. Then, as part of the baking procedure, you'll have to divide all values by two and add .5. This ensures that all negative coordinate values are stored in [0, .5) and all positive values in [.5, 1].
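For illustration, the decode step in the fragment shader could look roughly like this (a sketch; _PosTex is a hypothetical name for the baked position texture):
sampler2D _PosTex; // hypothetical: baked local-space positions, packed into [0, 1]
// ...inside the fragment shader:
float3 packedPos = tex2D(_PosTex, IN.texcoord).rgb;
float3 localPos = packedPos * 2.0 - 1.0; // undo the divide-by-two-plus-0.5 packing
float3 worldPos = mul(unity_ObjectToWorld, float4(localPos, 1.0)).xyz;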
Note
I had a hard time understanding your exact request; I hope one of these options helps with your program.
Related
I am creating a video game in Unity. Every sprite is rendered with a Sprite Renderer using a Material that has the CornucopiaShader.shader. The problem: I want to limit the maximum brightness (or color) of the sprite so that it never exceeds a normally lit image of the sprite, regardless of how many point lights hit it, the intensity of those lights, or the ambient light in the Unity scene. When the intensity of the lights hitting the sprite is below that maximum brightness, it should act like a normally lit sprite: black if no lights hit it, half lit at an intensity of 0.5, and everything in between as usual.
Problem 1: In summary, if three lights at, say, intensity 5 hit the sprite, I want the sprite to just look like normal brightness 1 and not be washed out white with light.
Since the player can rotate like in Paper Mario and switch sides, the current shader code acts that way, and light that hits from the backface should also light up both sides, as it currently does in the shader.
Problem 2: Another problem I am having, as seen in the four images I have included, is that when I flip the player, the intensity changes.
I have been trying to figure out these two problems for 3 days straight and cannot figure it out.
Picture 1
Picture 2
Picture 3
Picture 4
Shader "Custom/CornucopiaShader" {
Properties{
_MainCol("Main Tint", Color) = (1,1,1,1)
_MainTex("Main Texture", 2D) = "white" {}
_Cutoff("Alpha cutoff", Range(0,0.5)) = 0.5
}
SubShader{
Tags {"Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" "PreviewType" = "Plane"}
Cull Off
ZWrite Off
LOD 200
ColorMask RGB
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma surface surf SimpleLambert alphatest:_Cutoff addshadow fullforwardshadows alpha:blend
#pragma target 3.0
#include "RetroAA.cginc"
sampler2D _MainTex;
float4 _MainTex_TexelSize;
fixed4 _MainCol;
half4 LightingSimpleLambert(SurfaceOutput s, half3 lightDir, half atten)
{
half4 c;
c.rgb = s.Albedo * _MainCol.rgb * (atten)* _LightColor0.rgb;
c.a = s.Alpha;
return c;
}
struct Input {
float2 uv_MainTex;
};
void surf(Input IN, inout SurfaceOutput o) {
fixed4 c = RetroAA(_MainTex, IN.uv_MainTex, _MainTex_TexelSize);
o.Albedo = lerp(c.rgb, c.rgb, c.a);
o.Alpha = c.a;
}
ENDCG
}
Fallback "Transparent/Cutout/VertexLit"
}
#include "UnityCG.cginc"
#pragma target 3.0
fixed4 RetroAA(sampler2D tex, float2 uv, float4 texelSize){
float2 texelCoord = uv*texelSize.zw;
float2 hfw = 0.5*fwidth(texelCoord);
float2 fl = floor(texelCoord - 0.5) + 0.5;
float2 uvaa = (fl + smoothstep(0.5 - hfw, 0.5 + hfw, texelCoord - fl))*texelSize.xy;
return tex2D(tex, uvaa);
}
You can't really do this with surface shaders, but you can do it very efficiently with vertex/fragment shaders. Unity stores the 4 closest point lights in a set of vectors to be used for per-vertex ("non-important") lights. Fortunately, these are also accessible in the fragment shader, so you can use them to shade all 4 lights at once in a single pass! When you have all the lights summed together, make sure their intensity can't go above 1. Here is a quick shader I threw together for you:
Shader "Unlit/ToonTest"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Name "FORWARD"
Tags { "LightMode" = "ForwardBase" "RenderType" = "TransparentCutout" "Queue"="AlphaTest"}
Cull Off
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fog
#pragma multi_compile_fwdbase
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
half3 normal : NORMAL;
};
struct v2f
{
float2 uv : TEXCOORD0;
UNITY_FOG_COORDS(3) // TEXCOORD3, so the fog coord doesn't collide with worldPos below
float4 vertex : SV_POSITION;
float3 worldPos : TEXCOORD1;
float3 ambient : TEXCOORD2;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.ambient = ShadeSH9(mul(unity_ObjectToWorld, float4(v.normal, 0.0 ))); // Ambient from spherical harmonics
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
float3 Shade4Lights (
float4 lightPosX, float4 lightPosY, float4 lightPosZ,
float3 lightColor0, float3 lightColor1, float3 lightColor2, float3 lightColor3,
float4 lightAttenSq,
float3 pos)
{
// to light vectors
float4 toLightX = lightPosX - pos.x;
float4 toLightY = lightPosY - pos.y;
float4 toLightZ = lightPosZ - pos.z;
// squared lengths
float4 lengthSq = 0;
lengthSq += toLightX * toLightX;
lengthSq += toLightY * toLightY;
lengthSq += toLightZ * toLightZ;
// don't produce NaNs if some vertex position overlaps with the light
lengthSq = max(lengthSq, 0.000001);
// attenuation
float4 atten = 1.0 / (1.0 + lengthSq * lightAttenSq);
float4 diff = atten; //ndotl * atten;
// final color
float3 col = 0;
col += lightColor0 * diff.x;
col += lightColor1 * diff.y;
col += lightColor2 * diff.z;
col += lightColor3 * diff.w;
return col;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
half3 intensity = Shade4Lights(unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0, unity_LightColor[0], unity_LightColor[1], unity_LightColor[2], unity_LightColor[3], unity_4LightAtten0, i.worldPos);
intensity = min((half3)1, i.ambient + intensity);
col.rgb *= intensity;
clip(col.a - 0.5);
// apply fog
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
ENDCG
}
}
}
The "Shade4Lights" function is a modified version of Unity's "Shade4PointLights", with diffuse lambert lighting removed (attenuation only). You'll also have to add your RetroAA function to the texture sampling. Your cutoff value is the "- 0.5" inside the "clip" funciton - you can expose this if you need it. If you need shadow casting for this shader, you can copy/paste the shadow pass from Unity's standard shader (you can download the source code from their page). For shadow receiving, you need to add a few lines to the shader - again check the source code for this.
You can read more about built-in shader variables here:
https://docs.unity3d.com/Manual/SL-UnityShaderVariables.html
After hours of Google, copy-pasting code and playing around, I still could not find a solution to my problem.
I am trying to write a postprocessing shader using the vertex and fragment functions. My problem is that I do not know how to compute the radial distance of the current vertex to the camera position (or any other given position) in world coordinates.
My goal is the following:
Consider a very big 3D plane where the camera is on top and looks exactly down to the plane. I now want a postprocessing shader that draws a white line onto the plane, such that only those pixels that have a certain radial distance to the camera are painted white. The expected result would be a white circle (in this specific setup).
I know how to do this in principal, but the problem is that I cannot find out how to compute the radial distance to the vertex.
The problem here might be that this is a POSTPROCESSING shader, so it is not applied to a specific object. If it were, I could get the world coordinates of the vertex using mul(unity_ObjectToWorld, v.vertex), but for postprocessing shaders this gives a nonsense value.
This is my debug code for this issue:
Shader "NonHidden/TestShader"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent-1"}
LOD 100
ZWrite Off
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
sampler2D _MainTex;
sampler2D _CameraDepthTexture;
uniform float4 _MainTex_TexelSize;
// V2F
struct v2f {
float4 outpos : SV_POSITION;
float4 worldPos : TEXCOORD0;
float3 rayDir : TEXCOORD1;
float3 camNormal : TEXCOORD2;
};
// Sample Depth
float sampleDepth(float2 uv) {
return Linear01Depth(
UNITY_SAMPLE_DEPTH(
tex2D(_CameraDepthTexture, uv)));
}
// VERTEX
v2f vert (appdata_tan v)
{
TANGENT_SPACE_ROTATION;
v2f o;
o.outpos = UnityObjectToClipPos(v.vertex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex);
o.rayDir = mul(rotation, ObjSpaceViewDir(v.vertex));
o.camNormal = UNITY_MATRIX_IT_MV[2].xyz;
return o;
}
// FRAGMENT
fixed4 frag (v2f IN) : SV_Target
{
// Get uv coordinates
float2 uv = IN.outpos.xy * (_ScreenParams.zw - 1.0f);
// Flip y if necessary
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
{
uv.y = 1 - uv.y;
}
#endif
// Get depth
float depth = sampleDepth(uv);
// Set color
fixed4 color = 0;
if(depth.x < 1)
{
color.r = IN.worldPos.x;
color.g = IN.worldPos.y;
color.b = IN.worldPos.z;
}
return color;
}
ENDCG
}
}
}
Current State
This image shows the result when the camera looks down on the plane:
Image 1: Actual result
The blue value is (for whatever reason) 25 in every pixel. The red and green areas reflect the x-y coordinates of the screen.
Even if I rotate the camera a little bit, I get the exact same shading at the same screen coordinates:
That shows me that the computed "worldPos" coordinates are screen coordinates and have nothing to do with the world coordinates of the plane.
Expected Result
The result I expect to see is the following:
Here, pixels that have the same (radial) distance to the camera have the same color.
How do I need to change the above code to achieve this effect? With rayDir (computed in the vert function) I tried to get at least the direction vector from the camera center to the current pixel, such that I could compute the radial distance using the depth information. But rayDir has a constant value for all pixels ...
At this point I also have to say that I don't really understand what is computed inside the vert function. This is just stuff that I found on the internet and that I tried out.
Alright, I found a solution to my problem thanks to this video: Shaders Case Study - No Man's Sky: Topographic Scanner
The video description contains a link to the corresponding GIT repository. I downloaded, analyzed, and rewrote the code so that it fits my purpose and is easier to read and understand.
The major thing I learned is that there is no built-in way to compute the radial distance in a post-processing shader (correct me if I'm wrong!). So the only way to get the radial distance seems to be to use the direction vector from the camera to the vertex together with the depth buffer. Since the direction vector is also not available in a built-in way, a trick is used:
Instead of using the Graphics.Blit function in the post-processing script, a custom Blit function can be used to set some additional shader variables. In this case, the camera frustum is stored in a second set of texture coordinates, which are then available in the shader code as TEXCOORD1. The trick here is that the corresponding shader variable automatically contains an interpolated UV value that is identical to the direction vector ("frustum ray") I was looking for.
The code of the calling script now looks as follows:
using UnityEngine;
using System.Collections;
[ExecuteInEditMode]
public class TestShaderEffect : MonoBehaviour
{
private Material material;
private Camera cam;
void OnEnable()
{
// Create a material that uses the desired shader
material = new Material(Shader.Find("Test/RadialDistance"));
// Get the camera object (this script must be assigned to a camera)
cam = GetComponent<Camera>();
// Enable depth buffer generation
// (writes to the '_CameraDepthTexture' variable in the shader)
cam.depthTextureMode = DepthTextureMode.Depth;
}
[ImageEffectOpaque] // Draw after opaque, but before transparent geometry
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
// Call custom Blit function
// (usually Graphics.Blit is used)
RaycastCornerBlit(source, destination, material);
}
void RaycastCornerBlit(RenderTexture source, RenderTexture destination, Material mat)
{
// Compute (half) camera frustum size (at distance 1.0)
float angleFOVHalf = cam.fieldOfView / 2 * Mathf.Deg2Rad;
float heightHalf = Mathf.Tan(angleFOVHalf);
float widthHalf = heightHalf * cam.aspect; // aspect = width/height
// Compute helper vectors (camera orientation weighted with frustum size)
Vector3 vRight = cam.transform.right * widthHalf;
Vector3 vUp = cam.transform.up * heightHalf;
Vector3 vFwd = cam.transform.forward;
// Custom Blit
// ===========
// Set the given destination texture as the active render texture
RenderTexture.active = destination;
// Set the '_MainTex' variable to the texture given by 'source'
mat.SetTexture("_MainTex", source);
// Store current transformation matrix
GL.PushMatrix();
// Load orthographic transformation matrix
// (sets viewing frustum from [0,0,-1] to [1,1,100])
GL.LoadOrtho();
// Use the first pass of the shader for rendering
mat.SetPass(0);
// Activate quad draw mode and draw a quad
GL.Begin(GL.QUADS);
{
// Using MultiTexCoord2 (TEXCOORD0) and Vertex3 (POSITION) to draw on the whole screen
// Using MultiTexCoord to write the frustum information into TEXCOORD1
// -> When the shader is called, the TEXCOORD1 value is automatically an interpolated value
// Bottom Left
GL.MultiTexCoord2(0, 0, 0);
GL.MultiTexCoord(1, (vFwd - vRight - vUp) * cam.farClipPlane);
GL.Vertex3(0, 0, 0);
// Bottom Right
GL.MultiTexCoord2(0, 1, 0);
GL.MultiTexCoord(1, (vFwd + vRight - vUp) * cam.farClipPlane);
GL.Vertex3(1, 0, 0);
// Top Right
GL.MultiTexCoord2(0, 1, 1);
GL.MultiTexCoord(1, (vFwd + vRight + vUp) * cam.farClipPlane);
GL.Vertex3(1, 1, 0);
// Top Left
GL.MultiTexCoord2(0, 0, 1);
GL.MultiTexCoord(1, (vFwd - vRight + vUp) * cam.farClipPlane);
GL.Vertex3(0, 1, 0);
}
GL.End(); // Finish quad drawing
// Restore original transformation matrix
GL.PopMatrix();
}
}
The shader code looks like this:
Shader "Test/RadialDistance"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
}
SubShader
{
// No culling or depth
Cull Off ZWrite Off ZTest Always
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct VertIn
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
float4 ray : TEXCOORD1;
};
struct VertOut
{
float4 vertex : SV_POSITION;
float2 uv : TEXCOORD0;
float4 interpolatedRay : TEXCOORD1;
};
// Parameter variables
sampler2D _MainTex;
// Auto filled variables
float4 _MainTex_TexelSize;
sampler2D _CameraDepthTexture;
// Generate a jet-color-scheme color based on a value t in [0, 1]
half3 JetColor(half t)
{
half3 color = 0;
color.r = min(1, max(0, 4 * t - 2));
color.g = min(1, max(0, -abs( 4 * t - 2) + 2));
color.b = min(1, max(0, -4 * t + 2));
return color;
}
// VERT
VertOut vert(VertIn v)
{
VertOut o;
// Get vertex and uv coordinates
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv.xy;
// Flip uv's if necessary
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
o.uv.y = 1 - o.uv.y;
#endif
// Get the interpolated frustum ray
// (generated by the calling script's custom Blit function)
o.interpolatedRay = v.ray;
return o;
}
// FRAG
float4 frag (VertOut i) : SV_Target
{
// Get the color from the texture
half4 colTex = tex2D(_MainTex, i.uv);
// flat depth value with high precision nearby and bad precision far away???
float rawDepth = DecodeFloatRG(tex2D(_CameraDepthTexture, i.uv));
// flat depth but with higher precision far away and lower precision nearby???
float linearDepth = Linear01Depth(rawDepth);
// Vector from camera position to the vertex in world space
float4 wsDir = linearDepth * i.interpolatedRay;
// Position of the vertex in world space
float3 wsPos = _WorldSpaceCameraPos + wsDir;
// Distance to a given point in world space coordinates
// (in this case the camera position, so: dist = length(wsDir))
float dist = distance(wsPos, _WorldSpaceCameraPos);
// Get color by distance (same distance means same color)
half4 color = 1;
half t = saturate(dist/100.0);
color.rgb = JetColor(t);
// Set color to red at a hard-coded distance -> red circle
if (dist < 50 && dist > 50 - 1 && linearDepth < 1)
{
color.rgb = half3(1, 0, 0);
}
return color * colTex;
}
ENDCG
}
}
}
I'm now able to achieve the desired effect:
But there are still some questions I have and I would be thankful if anyone could answer them for me:
Is there really no other way to get the radial distance? Using a direction vector and the depth buffer is inefficient and inaccurate.
I don't really understand the content of the rawDepth variable. I mean, yes, it's some depth information, but if you use the depth information as a texture color, you basically get a black image unless you are ridiculously close to an object. That leads to a very bad resolution for objects that are further away. How can anyone work with that?
I don't understand what exactly the Linear01Depth function does. Since the Unity documentation sucks in general, it doesn't offer any information about this one either.
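For what it's worth, regarding the last point: UnityCG.cginc defines Linear01Depth essentially as below; it remaps the non-linear hardware depth value into a linear [0, 1] range where 1 is the far plane (the parameter values assume a conventional, non-reversed depth buffer):
inline float Linear01Depth(float z)
{
// _ZBufferParams.x = 1 - far/near, _ZBufferParams.y = far/near
return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}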
I'm having trouble projecting objects (for example a plane) onto a spherical surface.
The shader just has to take the vertex's local position (P0), convert it to world coordinates (P1), and find the vector from a given center (C) to P1 (P1 - C). Then it normalizes this vector, multiplies it by a given coefficient, and finally converts back to local coordinates.
I'm working in Unity with surface shaders
Shader "Custom/testShader" {
Properties {
_MainTex("texture", 2D) = "white" {}
_Center("the given center", Vector) = (0,0,0)
_Height("the given coefficient", Range(1, 1000) = 10
}
SubShader {
CGPROGRAM
#pragma surface surf Standard vertex:vert
sampler2D _MainTex;
float3 _Center;
float _Height;
struct Input { float2 uv_MainTex; };
// IMPORTANT STUFF
void vert (inout appdata_full v) {
float3 world_vertex = mul(unity_ObjectToWorld, v.vertex) - _Center;
world_vertex = normalize(world_vertex) * _Height;
v.vertex = mul(unity_WorldToObject, world_vertex);
}
// END OF IMPORTANT STUFF
void surf (Input IN, inout SurfaceOutputStandard o) {
o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
}
Now the problem is that in the scene, where I have some planes with this shader, they look split and much smaller than they are supposed to be. Any ideas?
EDIT
Here are some screenshots:
You are transforming world_vertex as a direction (X,Y,Z,0) instead of as a position (X,Y,Z,1). See this for more info.
So this line:
v.vertex = mul(unity_WorldToObject, world_vertex);
should be:
v.vertex = mul(unity_WorldToObject, float4(world_vertex, 1) );
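The w component is what decides whether the translation part of the 4x4 matrix is applied. A minimal illustration:
// w = 0: treated as a direction; rotation and scale apply, translation is ignored
float3 asDirection = mul(unity_WorldToObject, float4(world_vertex, 0)).xyz;
// w = 1: treated as a position; the full transform, including translation, applies
float3 asPosition = mul(unity_WorldToObject, float4(world_vertex, 1)).xyz;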
I am trying to create my first shader in Unity3D. My goal is to add more alpha to pixels if they are close to some point in world space, but I can't get it right. My pixels are not getting smooth transparency (from min value to max value); each pixel just gets either the min or the max.
Here is my code:
Shader "Custom/Shield" {
Properties {
_MainTex ("Color (RGB) Alpha (A)", 2D) = "white" {}
_TexUsage ("Text usage", Range(0.1, 0.99)) = 0
_HitPoint ("Hit point", Vector) = (1, 1, 1, 1)
_Distance ("Distance", float) = 4.0
}
SubShader {
Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
// And generate the shadow pass with instancing support
#pragma surface surf Standard fullforwardshadows alpha
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
sampler2D _MainTex;
half _TexUsage;
float3 _HitPoint;
fixed _Distance;
struct Input {
float2 uv_MainTex;
float3 worldPos;
};
void surf (Input IN, inout SurfaceOutputStandard o) {
IN.uv_MainTex.x = frac(IN.uv_MainTex.x + frac(_Time.x));
o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgba;
float dist = distance(_HitPoint, IN.worldPos);
float minAlpha = 0.2;
float st = step(_Distance, dist);
float blend = (dist / _Distance) * (1 - st) + minAlpha * st;
o.Alpha = blend;
}
ENDCG
}
FallBack "Diffuse"
}
And here is an example of how it works now:
But this area should be less visible when it is not so close to the hit point.
What am I doing wrong?
I'm trying to code a shader similar to this one from the Unity manual, which "slices" the object by discarding pixels in nearly horizontal rings via the clip() function.
Shader "Example/Slices" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_BumpMap ("Bumpmap", 2D) = "bump" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Off
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float3 worldPos;
};
sampler2D _MainTex;
sampler2D _BumpMap;
void surf (Input IN, inout SurfaceOutput o) {
clip (frac((IN.worldPos.y+IN.worldPos.z*0.1) * 5) - 0.5);
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}
ENDCG
}
Fallback "Diffuse"
}
Rather than just horizontal lines, I want to be able to slice at an arbitrary angle. I discovered through experimentation (since I'm new to coding shaders) that the multiplier on worldPos.z does indeed change the angle of the slice, so I set a property variable for it:
clip (frac((IN.worldPos.y+IN.worldPos.z*_MYANGLEVARIABLE) * 5) - 0.5);
This has two problems, however. 1) Values up to 1.0 rotate the lines by up to 45 degrees, but beyond this the lines start to go "squiggly" and convolve into all sorts of patterns rather than neat lines. 2) This only works if the face is oriented toward the positive or negative X axis. When facing Z the lines don't move, and when facing Y they get bigger but don't rotate.
Changing IN.worldPos.y to IN.worldPos.x does what you might expect: a similar situation, but working as expected in Z rather than X.
Any ideas how to
1) Achieve arbitrary angles?
2) Have them work regardless of facing direction?
I'm using worldPos because I always want the lines to be relative to the object rather than to screen space, but perhaps there's another way? My actual shader is a fragment rather than a surface shader, and I'm passing worldPos from the vert to the frag.
Many thanks
To get arbitrary angles you can define a plane using a normal vector and clip using the dot product.
Shader "Example/Slices" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_BumpMap ("Bumpmap", 2D) = "bump" {}
_PlaneNormal ("Plane Normal", Vector) = (0, 1, 0)
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Off
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float3 worldPos;
};
sampler2D _MainTex;
sampler2D _BumpMap;
float3 _PlaneNormal;
void surf (Input IN, inout SurfaceOutput o) {
float d = dot(_PlaneNormal, IN.worldPos);
clip (frac(d) - 0.5);
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}
ENDCG
}
Fallback "Diffuse"
}
The longer the normal vector here, the more frequent the slices.
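For example, a normal of (0, 2, 0) gives d = 2 * worldPos.y, so frac(d), and therefore the slice pattern, repeats every 0.5 world units along Y.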
If you want the slices to be relative to the object, you'll need to use a set of coordinates other than worldPos. Possibly this answer would help: http://answers.unity3d.com/questions/561900/get-local-position-in-surface-shader.html
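As a minimal sketch of that idea, you could recover the object-space position inside surf from the interpolated world position and clip on that instead, so the slices stay fixed relative to the object as it moves and rotates:
void surf (Input IN, inout SurfaceOutput o) {
// Recover the object-space position from the interpolated world position
float3 localPos = mul(unity_WorldToObject, float4(IN.worldPos, 1)).xyz;
float d = dot(_PlaneNormal, localPos);
clip (frac(d) - 0.5);
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}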