I was asked to draw a line between two given points using a surface shader. The points are given in texture coordinates (between 0 and 1) and go directly into the surface shader in Unity. I want to do this by calculating each pixel's position and checking whether it lies on that line. So I either need to translate the texture coordinate to a world position, or get the pixel's position relative to that texture coordinate.
But I only found worldPos and screenPos in the Unity shader manual. Is there some way I can get the position in texture coordinates (or at least get the size of the textured object in world space)?
Here is a simple example:
Shader "Line" {
    Properties {
        // Easiest way to get access to UVs in surface shaders is to define a texture
        _MainTex("Texture", 2D) = "white" {}
        // We can pack both points into one vector
        _Line("Start Pos (xy), End Pos (zw)", Vector) = (0, 0, 1, 1)
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        sampler2D _MainTex;
        float4 _Line;
        struct Input {
            // This UV value will now represent the pixel coordinate in UV space
            float2 uv_MainTex;
        };
        void surf (Input IN, inout SurfaceOutput o) {
            float2 start = _Line.xy;
            float2 end = _Line.zw;
            float2 pos = IN.uv_MainTex.xy;
            // Do some calculations
            // (surf returns void, so write to the output struct)
            o.Albedo = fixed3(1, 1, 1);
        }
        ENDCG
    }
}
Here is a good post on how to calculate whether a point is on a line:
How to check if a point lies on a line between 2 other points
Let's say you define a function from this with the following signature:
inline bool IsPointOnLine(float2 p, float2 l1, float2 l2)
Then, in the surf function, you can write:
o.Albedo = IsPointOnLine(pos, start, end) ? _LineColor.rgb : _BackgroundColor.rgb;
(assuming _LineColor and _BackgroundColor are added as color properties).
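For reference, the distance-to-segment math behind such an IsPointOnLine function can be sketched outside shader code. Here it is in Python; the tolerance parameter is my own addition (a mathematical line has zero width, so an exact equality test would never hit any pixel):

```python
def is_point_on_line(p, l1, l2, tol=0.01):
    # Project p onto the segment l1->l2, clamp the projection parameter
    # to [0, 1], and compare the distance to the closest point against
    # the tolerance (which acts as the half line width in UV units).
    vx, vy = l2[0] - l1[0], l2[1] - l1[1]
    wx, wy = p[0] - l1[0], p[1] - l1[1]
    seg_len_sq = vx * vx + vy * vy
    t = (wx * vx + wy * vy) / seg_len_sq if seg_len_sq > 0 else 0.0
    t = max(0.0, min(1.0, t))
    cx, cy = l1[0] + t * vx, l1[1] + t * vy
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= tol * tol
```

The HLSL version would be the same arithmetic with dot() and saturate().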
If you want UV coordinates without using a texture, I recommend writing a vertex/fragment shader instead and defining float2 uv : TEXCOORD0 inside the appdata/VertexInput struct. You can then pass that on to the fragment shader inside the vertex function.
Related
I have been trying to obtain the Z position of a vertex in clip space, i.e. its location in the depth buffer, but I have been observing weird behaviour in the result of UnityObjectToClipPos.
I have written a surface shader that colors vertices based on the depth. Here is the relevant code:
Tags { "RenderType"="Opaque" }
LOD 200
Cull Off
CGPROGRAM
#pragma target 3.0
#pragma surface surf StandardSpecular alphatest:_Cutoff addshadow vertex:vert
#pragma debug
struct Input
{
    float depth;
};
float posClipZ(float3 vertex)
{
    float4 clipPos = UnityObjectToClipPos(vertex);
    float depth = clipPos.z / clipPos.w;
    #if !defined(UNITY_REVERSED_Z)
    depth = depth * 0.5 + 0.5;
    #endif
    return depth;
}
void vert(inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.depth = posClipZ(v.vertex);
}
void surf(Input IN, inout SurfaceOutputStandardSpecular o)
{
    o.Albedo.x = clamp(IN.depth, 0, 1);
    o.Alpha = 1;
}
ENDCG
Based on my understanding, UnityObjectToClipPos should return the position of the vertex in the camera's clip coordinates, and the Z coordinate should be (when transformed from homogenous coordinates) between 0 and 1. However, this is not what I am observing at all:
This shows the camera intersecting a sphere. Notice that vertices near or behind the camera near clip plane actually have negative depth (I've checked that with other conversions to albedo color). It also seems that clipPos.z is actually constant most of the time, and only clipPos.w is changing.
I've managed to hijack the generated fragment shader to add a SV_Position parameter, and this is what I actually expected to see in the first place:
However, I don't want to use SV_Position, as I want to be able to calculate the depth in the vertex shader from other positions.
It seems like UnityObjectToClipPos is not suited for the task, as the depth obtained that way is not even monotonic.
So, how can I mimic the second image via depth calculated in the vertex shader? It should also be perfect regarding interpolation, so I suppose I will have to use UnityObjectToViewPos first in the vertex shader to get the linear depth, then scale it in the fragment shader accordingly.
I am not completely sure why UnityObjectToClipPos didn't return anything useful, but it wasn't the right tool for the task anyway. The reason is that the depth of the vertex is not linear in the depth buffer, and so first the actual distance from the camera has to be used for proper interpolation of the depth of all the pixels between the vertices:
float posClipZ(float3 vertex)
{
    float3 viewPos = UnityObjectToViewPos(vertex);
    return -viewPos.z;
}
Once the fragment/surface shader is executed, LinearEyeDepth seems to be the proper function to retrieve the expected depth value:
void surf(Input IN, inout SurfaceOutputStandardSpecular o)
{
    o.Albedo.x = clamp(LinearEyeDepth(IN.depth), 0, 1);
    o.Alpha = 1;
}
Once again it is important not to use LinearEyeDepth inside the vertex shader, since the values will be interpolated incorrectly.
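The reason can be shown with plain numbers: buffer depth is non-linear in eye depth, so converting before interpolation gives a very different midpoint than interpolating eye depth first. A Python sketch with assumed clip planes (near 0.3, far 100, conventional non-reversed Z):

```python
near, far = 0.3, 100.0  # assumed clip planes

def buffer_depth(eye_z):
    # Non-linear [0, 1] depth as stored in the buffer (non-reversed Z)
    return (far / (far - near)) * (1.0 - near / eye_z)

def linear_eye_depth(d):
    # Inverse mapping, the role LinearEyeDepth plays in the shader above
    return far * near / (far - d * (far - near))

# Two vertices at eye depths 1 and 50; interpolated attributes give the
# midpoint fragment the average of the two per-vertex values.
mid_linear = (1.0 + 50.0) / 2.0  # 25.5, the correct eye depth
mid_converted_early = linear_eye_depth(
    (buffer_depth(1.0) + buffer_depth(50.0)) / 2.0)  # nowhere near 25.5
```

This is a simplification (the rasterizer actually does perspective-correct interpolation), but it shows why the non-linear conversion has to happen per fragment, not per vertex.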
I'm trying to implement the SPSM approach described in this paper: http://jmarvie.free.fr/Publis/2014_I3D/i3d_2014_SubPixelShadowMapping.pdf
First, why am I trying to do this: I am currently using a shadow map, rendered from the viewpoint of a character to display their view cone on the ground. Using regular shadow mapping works fine so far, but produces jagged edges, mainly where obstacles obstruct the view. I hope SPSM might provide better results.
I'm running into problems while doing step 3.3, the encoding of triangle information in the shadow map. There are two distinct problems.
1. Getting all the information needed into the fragment shader for writing into a render texture.
2. Encoding the data into a suitable 128-bit RGBA format.
Regarding 1: I need the following data for the SPSM:
All three triangle vertices of the closest occluding triangle
Depth value at texel center
Depth derivatives
I tried getting the triangle information from the geometry shader while calculating the other two in the fragment shader. My first naive approach looks like this:
struct appdata
{
    float4 vertex : POSITION;
};
struct v2g
{
    float4 pos : POSITION;
    float3 viewPos : NORMAL;
};
struct g2f
{
    float4 vertex : SV_POSITION;
    float4 v0 : TEXCOORD1;
    float4 v1 : TEXCOORD2;
    float4 v2 : TEXCOORD3;
};
v2g vert(appdata v)
{
    v2g o;
    UNITY_INITIALIZE_OUTPUT(v2g, o);
    o.pos = UnityObjectToClipPos(v.vertex);
    return o;
}
[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> outStream)
{
    g2f o;
    float4 vert0 = input[0].pos;
    float4 vert1 = input[1].pos;
    float4 vert2 = input[2].pos;
    // Pass the untransformed triangle corners alongside each vertex
    o.v0 = vert0;
    o.v1 = vert1;
    o.v2 = vert2;
    o.vertex = vert0;
    outStream.Append(o);
    o.vertex = vert1;
    outStream.Append(o);
    o.vertex = vert2;
    outStream.Append(o);
}
float4 frag(g2f i) : SV_TARGET
{
    float4 col;
    half depth = i.vertex.z;
    half dx = ddx(i.vertex.z);
    half dy = ddy(i.vertex.z);
    float r1 = i.v0.x; // / _ScreenParams.x; <- this ranges from around -5 to 5
    float g1 = i.v0.y; // / _ScreenParams.y;
    float r2 = i.vertex.x / _ScreenParams.x; // <- i.vertex.x ranges from 0 to 1920
    float g2 = i.vertex.y / _ScreenParams.y;
    col = float4(r2, g2, 0, 1);
    return col;
}
I'm currently rendering the interpolated vertex position as the fragment color for debugging. When rendering it like this I get the following output:
using the interpolated position as color
If I render the non-interpolated position of the "first" vertex of each triangle I get the following:
using the non-interpolated triangle vertex positions
It looks correct so far. What confuses me is that i.vertex.x has values ranging from 0 to the screen width (e.g. 1920), while i.v0.x has values ranging from around -5 to +5. Shouldn't both be at least roughly in the same range (I know one is interpolated while the other is not), since they are both transformed from object space to clip space? Or is the SV_POSITION semantic working some magic behind the scenes?
Regarding 2: My second problem is the actual encoding of the values into a 128-bit RGBA format. The paper describes the encoding very briefly on page 3. Is there a way to pack two halves into one float? Or a clever way to bring those values into the range [0, 1) so I can use Unity's encoding? And what about the two derivatives (8-bit values)?
encoding as described in the paper
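On the "two halves into one float" question: the bit-level idea can be prototyped in Python using struct's IEEE-754 half-precision format 'e'. In HLSL, the Shader Model 5 intrinsics f32tof16/f16tof32 together with asuint/asfloat would play these roles; the function names below are mine, and this is not necessarily the paper's exact layout:

```python
import struct

def pack_two_halves(a, b):
    # Reinterpret each value's 16-bit half-float pattern as an integer,
    # then splice both patterns into one 32-bit word.
    (ha,) = struct.unpack('<H', struct.pack('<e', a))
    (hb,) = struct.unpack('<H', struct.pack('<e', b))
    return (ha << 16) | hb

def unpack_two_halves(word):
    # Split the 32-bit word back into two 16-bit patterns and
    # reinterpret each as a half-precision float.
    (a,) = struct.unpack('<e', struct.pack('<H', (word >> 16) & 0xFFFF))
    (b,) = struct.unpack('<e', struct.pack('<H', word & 0xFFFF))
    return a, b
```

One caveat: if the packed word is written to a floating-point render target rather than an integer one, bit patterns that happen to form NaNs or denormals can be altered, so an integer texture format is the safer target.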
Alternatively, I'm very glad for any advice on how to improve the "shadow quality" for the view cone rendering in a different way, apart from using a higher resolution or more shadow maps.
I wish to create an unlit shader for a particle system that emits cube meshes, such that each emitted mesh has a hard black outline around it.
Here is the pass for the outline (in Cg):
struct appdata {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};
struct v2f {
    float4 pos : POSITION;
    float4 color : COLOR;
};
uniform float _Outline;
uniform float4 _OutlineColor;
v2f vert(appdata v) {
    v2f o;
    v.vertex *= (1 + _Outline);
    o.pos = UnityObjectToClipPos(v.vertex);
    o.color = _OutlineColor;
    return o;
}
half4 frag(v2f i) : COLOR { return i.color; }
(And after this is a simple pass to render the unlit geometry of the mesh itself...)
As you can see, we are simply stretching the vertices outward... but from what?
For a single cube mesh, the shader works perfectly:
However, when applied to a particle system emitting cube meshes, the shader breaks down:
My suspicion is that the line v.vertex *= ( 1 + _Outline); stretches the vertices outward from the object center, not the mesh center.
Does anyone have a replacement shader or insight on how to fix this problem?
Thanks,
rbjacob
It turns out that I misconstrued the problem. When accessing the POSITION semantic of the vertices, you are getting the vertices of the emitted particles in world space; therefore, stretching the vertices by multiplying is actually just scaling them away from the world center.
To access the vertices relative to each particle, we must be able to access each particle's mesh center from within the shader. To do this, we enable "Custom Vertex Streams" inside the Renderer module of the particle system and press the + button to add the Center stream.
Now we can access TEXCOORD0 (or whatever is specified to the right of the Center stream in the particle renderer GUI) from the shader to get the mesh center in world space. Then we subtract the mesh center from each vertices position, scale outward, and add the mesh center back. And voila, each particle has an outline.
Here are the final vert and frag snippets for the outline pass:
struct appdata {
    float3 vertex : POSITION;
    float4 color : COLOR;
    float3 center : TEXCOORD0;
};
struct v2f {
    float4 pos : POSITION;
    float4 color : COLOR;
    float3 center : TEXCOORD0;
};
uniform float _Outline;
uniform float4 _OutlineColor;
v2f vert(appdata v) {
    v2f o;
    o.center = v.center;
    float3 vert = v.vertex - v.center;
    vert *= (1 + _Outline);
    v.vertex = vert + v.center;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.color = _OutlineColor;
    return o;
}
half4 frag(v2f i) : COLOR { return i.color; }
TLDR: Enable vertex streams, add a stream for the particle center, and access this value in the shader to scale individual vertices outward.
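The fix boils down to scaling each vertex about the particle's center instead of the origin. The arithmetic, sketched in Python per component:

```python
def scale_about_center(vertex, center, outline):
    # Same as the vert snippet: v = (v - c) * (1 + outline) + c,
    # i.e. translate the center to the origin, scale, translate back.
    return tuple(c + (v - c) * (1.0 + outline)
                 for v, c in zip(vertex, center))
```

A vertex one unit right of its particle center, with outline 0.5, moves to 1.5 units right of the center; a vertex at the center stays put, which is exactly the behavior the naive origin-based scaling lacked.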
My suspicion is that the line v.vertex *= ( 1 + _Outline); stretches the vertices outward from the object center, not the mesh center.
That would be correct, or mostly correct: particle systems combine all the particles into one runtime mesh, and that is what your shader is applied to, not the underlying individual particle mesh (which isn't obvious). Try your outline shader on a non-convex mesh that is not a particle: you'll find that the concave parts won't have the desired outline, confirming your suspicion.
I wrote this shader a couple of years back because the only shaders I could find that generated outlines were either (a) not free or (b) of the "just scale it bigger" variety. It still has problems (such as getting jagged and weird at large thickness values), but I was never able to resolve them satisfactorily. It uses a geometry pass to turn the source mesh's edges into camera-facing quads, then stencil magic to render only the outline portion.
However I am unsure if that shader will function when applied to particles. I doubt it will without modification, but you're free to give it a shot.
After hours of Google, copy-pasting code, and playing around, I still could not find a solution to my problem.
I am trying to write a postprocessing shader using vertex and fragment functions. My problem is that I do not know how to compute the radial distance of the current vertex to the camera position (or any other given position) in world coordinates.
My goal is the following:
Consider a very big 3D plane where the camera is on top and looks exactly down to the plane. I now want a postprocessing shader that draws a white line onto the plane, such that only those pixels that have a certain radial distance to the camera are painted white. The expected result would be a white circle (in this specific setup).
I know how to do this in principle, but the problem is that I cannot figure out how to compute the radial distance to the vertex.
The problem here might be that this is a POSTPROCESSING shader, so it is not applied to a particular object. If it were, I could get the world coordinates of the vertex using mul(unity_ObjectToWorld, v.vertex), but for postprocessing shaders this gives a nonsense value.
This is my debug code for this issue:
Shader "NonHidden/TestShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" "Queue"="Transparent-1" }
        LOD 100
        ZWrite Off
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"
            sampler2D _MainTex;
            sampler2D _CameraDepthTexture;
            uniform float4 _MainTex_TexelSize;
            // V2F
            struct v2f {
                float4 outpos : SV_POSITION;
                float4 worldPos : TEXCOORD0;
                float3 rayDir : TEXCOORD1;
                float3 camNormal : TEXCOORD2;
            };
            // Sample Depth
            float sampleDepth(float2 uv) {
                return Linear01Depth(
                    UNITY_SAMPLE_DEPTH(
                        tex2D(_CameraDepthTexture, uv)));
            }
            // VERTEX
            v2f vert (appdata_tan v)
            {
                TANGENT_SPACE_ROTATION;
                v2f o;
                o.outpos = UnityObjectToClipPos(v.vertex);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.rayDir = mul(rotation, ObjSpaceViewDir(v.vertex));
                o.camNormal = UNITY_MATRIX_IT_MV[2].xyz;
                return o;
            }
            // FRAGMENT
            fixed4 frag (v2f IN) : SV_Target
            {
                // Get uv coordinates
                float2 uv = IN.outpos.xy * (_ScreenParams.zw - 1.0f);
                // Flip y if necessary
                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                {
                    uv.y = 1 - uv.y;
                }
                #endif
                // Get depth
                float depth = sampleDepth(uv);
                // Set color
                fixed4 color = 0;
                if (depth < 1)
                {
                    color.r = IN.worldPos.x;
                    color.g = IN.worldPos.y;
                    color.b = IN.worldPos.z;
                }
                return color;
            }
            ENDCG
        }
    }
}
Current State
This image shows the result when the camera looks down on the plane:
Image 1: Actual result
The blue value is (for whatever reason) 25 in every pixel. The red and green areas reflect the x-y coordinates of the screen.
Even if I rotate the camera a little bit, I get the exact same shading at the same screen coordinates:
That shows me that the computed "worldPos" coordinates are screen coordinates and have nothing to do with the world coordinates of the plane.
Expected Result
The result I expect to see is the following:
Here, pixels that have the same (radial) distance to the camera have the same color.
How do I need to change the above code to achieve this effect? With rayDir (computed in the vert function) I tried to get at least the direction vector from the camera center to the current pixel, such that I could compute the radial distance using the depth information. But rayDir has a constant value for all pixels ...
At this point I also have to say that I don't really understand what is computed inside the vert function. This is just stuff that I found on the internet and that I tried out.
Alright, I found a solution to my problem thanks to this video: Shaders Case Study - No Man's Sky: Topographic Scanner.
The video description contains a link to the corresponding Git repository. I downloaded, analyzed, and rewrote the code so that it fits my purpose and is easier to read and understand.
The major thing I learned is that there is no built-in way to compute the radial distance in a post-processing shader (correct me if I'm wrong!). So in order to get the radial distance, the only way seems to be to use the direction vector from the camera to the vertex together with the depth buffer. Since the direction vector is not available built-in either, a trick is used:
Instead of using the Graphics.Blit function in the post-processing script, a custom Blit function can be used to set some additional shader variables. In this case, the frustum of the camera is stored in a second set of texture coordinates, which are then available in the shader code as TEXCOORD1. The trick is that the corresponding shader variable automatically contains an interpolated value, which is exactly the direction vector ("frustum ray") I was looking for.
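In other words, once the interpolated far-plane ray reaches the fragment, the world position falls out of one multiply-add. A Python sketch of that reconstruction (camera position and ray are assumed inputs; in the shader they come from _WorldSpaceCameraPos and TEXCOORD1):

```python
def world_pos_from_depth(cam_pos, far_plane_ray, linear01_depth):
    # The ray points from the camera through the pixel to the far plane;
    # scaling it by the linear [0, 1] depth lands on the visible surface.
    return tuple(c + linear01_depth * r
                 for c, r in zip(cam_pos, far_plane_ray))

def radial_distance(cam_pos, world_pos):
    # Euclidean distance, the quantity the effect is built on
    return sum((w - c) ** 2 for c, w in zip(cam_pos, world_pos)) ** 0.5
```

For a camera at the origin with a straight-ahead ray of length 100 (the far plane) and a linear depth of 0.25, the reconstructed point is 25 units away.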
The code of the calling script now looks as follows:
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class TestShaderEffect : MonoBehaviour
{
    private Material material;
    private Camera cam;

    void OnEnable()
    {
        // Create a material that uses the desired shader
        material = new Material(Shader.Find("Test/RadialDistance"));
        // Get the camera object (this script must be assigned to a camera)
        cam = GetComponent<Camera>();
        // Enable depth buffer generation
        // (writes to the '_CameraDepthTexture' variable in the shader)
        cam.depthTextureMode = DepthTextureMode.Depth;
    }

    [ImageEffectOpaque] // Draw after opaque, but before transparent geometry
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Call custom Blit function (usually Graphics.Blit is used)
        RaycastCornerBlit(source, destination, material);
    }

    void RaycastCornerBlit(RenderTexture source, RenderTexture destination, Material mat)
    {
        // Compute (half) camera frustum size (at distance 1.0)
        float angleFOVHalf = cam.fieldOfView / 2 * Mathf.Deg2Rad;
        float heightHalf = Mathf.Tan(angleFOVHalf);
        float widthHalf = heightHalf * cam.aspect; // aspect = width/height
        // Compute helper vectors (camera orientation weighted with frustum size)
        Vector3 vRight = cam.transform.right * widthHalf;
        Vector3 vUp = cam.transform.up * heightHalf;
        Vector3 vFwd = cam.transform.forward;
        // Custom Blit
        // ===========
        // Set the given destination texture as the active render texture
        RenderTexture.active = destination;
        // Set the '_MainTex' variable to the texture given by 'source'
        mat.SetTexture("_MainTex", source);
        // Store current transformation matrix
        GL.PushMatrix();
        // Load orthographic transformation matrix
        // (sets viewing frustum from [0,0,-1] to [1,1,100])
        GL.LoadOrtho();
        // Use the first pass of the shader for rendering
        mat.SetPass(0);
        // Activate quad draw mode and draw a quad
        GL.Begin(GL.QUADS);
        {
            // MultiTexCoord2 (TEXCOORD0) and Vertex3 (POSITION) draw on the whole screen.
            // MultiTexCoord writes the frustum information into TEXCOORD1.
            // -> When the shader runs, the TEXCOORD1 value is automatically interpolated.
            // Bottom Left
            GL.MultiTexCoord2(0, 0, 0);
            GL.MultiTexCoord(1, (vFwd - vRight - vUp) * cam.farClipPlane);
            GL.Vertex3(0, 0, 0);
            // Bottom Right
            GL.MultiTexCoord2(0, 1, 0);
            GL.MultiTexCoord(1, (vFwd + vRight - vUp) * cam.farClipPlane);
            GL.Vertex3(1, 0, 0);
            // Top Right
            GL.MultiTexCoord2(0, 1, 1);
            GL.MultiTexCoord(1, (vFwd + vRight + vUp) * cam.farClipPlane);
            GL.Vertex3(1, 1, 0);
            // Top Left
            GL.MultiTexCoord2(0, 0, 1);
            GL.MultiTexCoord(1, (vFwd - vRight + vUp) * cam.farClipPlane);
            GL.Vertex3(0, 1, 0);
        }
        GL.End(); // Finish quad drawing
        // Restore original transformation matrix
        GL.PopMatrix();
    }
}
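The corner math in RaycastCornerBlit can be sanity-checked outside Unity: at distance 1 in front of the camera, the frustum half-height is tan(fov/2) and the half-width is that times the aspect ratio. A Python sketch with an axis-aligned camera looking down +Z (a simplifying assumption; the C# version uses the camera's actual transform vectors):

```python
import math

def frustum_corner_rays(fov_deg, aspect, far):
    # Half-extents of the frustum at distance 1 in front of the camera
    h = math.tan(math.radians(fov_deg) / 2.0)
    w = h * aspect
    # Corners scaled out to the far plane, in the same order as the
    # Blit above: bottom-left, bottom-right, top-right, top-left
    return [(x * w * far, y * h * far, far)
            for x, y in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```

With a 90-degree vertical FOV and aspect 1, every corner component has magnitude equal to the far plane distance, since tan(45 degrees) = 1.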
The shader code looks like this:
Shader "Test/RadialDistance"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            struct VertIn
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float4 ray : TEXCOORD1;
            };
            struct VertOut
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float4 interpolatedRay : TEXCOORD1;
            };
            // Parameter variables
            sampler2D _MainTex;
            // Auto-filled variables
            float4 _MainTex_TexelSize;
            sampler2D _CameraDepthTexture;
            // Generate a jet-color-scheme color based on a value t in [0, 1]
            half3 JetColor(half t)
            {
                half3 color = 0;
                color.r = min(1, max(0, 4 * t - 2));
                color.g = min(1, max(0, -abs(4 * t - 2) + 2));
                color.b = min(1, max(0, -4 * t + 2));
                return color;
            }
            // VERT
            VertOut vert(VertIn v)
            {
                VertOut o;
                // Get vertex and uv coordinates
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv.xy;
                // Flip uv's if necessary
                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                    o.uv.y = 1 - o.uv.y;
                #endif
                // Get the interpolated frustum ray
                // (generated by the calling script's custom Blit function)
                o.interpolatedRay = v.ray;
                return o;
            }
            // FRAG
            float4 frag (VertOut i) : SV_Target
            {
                // Get the color from the texture
                half4 colTex = tex2D(_MainTex, i.uv);
                // flat depth value with high precision nearby and bad precision far away???
                float rawDepth = DecodeFloatRG(tex2D(_CameraDepthTexture, i.uv));
                // flat depth but with higher precision far away and lower precision nearby???
                float linearDepth = Linear01Depth(rawDepth);
                // Vector from camera position to the surface point in world space
                float4 wsDir = linearDepth * i.interpolatedRay;
                // Position of the surface point in world space
                float3 wsPos = _WorldSpaceCameraPos + wsDir;
                // Distance to a given point in world space coordinates
                // (in this case the camera position, so: dist = length(wsDir))
                float dist = distance(wsPos, _WorldSpaceCameraPos);
                // Get color by distance (same distance means same color)
                half4 color = 1;
                half t = saturate(dist / 100.0);
                color.rgb = JetColor(t);
                // Set color to red at a hard-coded distance -> red circle
                if (dist < 50 && dist > 50 - 1 && linearDepth < 1)
                {
                    color.rgb = half3(1, 0, 0);
                }
                return color * colTex;
            }
            ENDCG
        }
    }
}
I'm now able to achieve the desired effect:
But there are still some questions I have and I would be thankful if anyone could answer them for me:
Is there really no other way to get the radial distance? Using a direction vector and the depth buffer seems inefficient and inaccurate.
I don't really understand the content of the rawDepth variable. Yes, it's some depth information, but if you use it as a texture color you basically get a black image unless you are ridiculously close to an object. That leads to a very bad resolution for objects that are further away. How can anyone work with that?
I don't understand what exactly the Linear01Depth function does; the Unity documentation doesn't offer much information about it either.
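To the last question: Linear01Depth undoes the perspective non-linearity of the depth buffer and returns eye depth as a fraction of the far plane. Numerically (non-reversed-Z convention, assumed clip planes; Unity supplies the two factors via _ZBufferParams):

```python
near, far = 0.3, 1000.0  # assumed clip planes

def linear01_depth(raw):
    # Unity's Linear01Depth: 1 / (x*raw + y), where x = 1 - far/near
    # and y = far/near (the _ZBufferParams values, non-reversed Z).
    # The result is eye depth divided by the far plane distance.
    x = 1.0 - far / near
    y = far / near
    return 1.0 / (x * raw + y)
```

A raw value of 1 maps to 1 (the far plane) and a raw value of 0 maps to near/far, which also shows why raw depth spends most of its numeric range on geometry close to the camera.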
I'm currently developing a game where a character can move on a background. The idea is that this character digs through this background. I think it should be done with a shader, but I'm a beginner with them.
I imagine something like: if the character is close enough to the position of a pixel, that pixel's alpha becomes 0, and it stays at 0. If the alpha is already 0, you skip the character test.
For now, I've tried using the Sprites/Diffuse shader as a base, but I can't find a way to get the position of the pixels. I tried something in the surf function but with no real results. The surf function is supposed to be executed once per pixel, no?
Thanks in advance for your help.
Edit
I've tried a few things and ended up with a vert/frag shader. As I said, I'm trying to compute the alpha from the distance to the pixel.
For now I can't figure out where my mistake is, but maybe the code will be more talkative.
By the way, my sorting layers broke when I applied this new shader. I've tried switching off ZTest and ZWrite, but it doesn't work, so I'm open to ideas there too (it's not the main problem, though).
Shader "Unlit/SimpleUnlitTexturedShader"
{
    Properties
    {
        // we have removed support for texture tiling/offset,
        // so make them not be displayed in material inspector
        [NoScaleOffset] _MainTex("Texture", 2D) = "white" {}
        _Position("Position", Vector) = (0,0,0,0)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            // use "vert" function as the vertex shader
            #pragma vertex vert
            // use "frag" function as the pixel (fragment) shader
            #pragma fragment frag
            // vertex shader inputs
            struct appdata
            {
                float4 vertex : POSITION; // vertex position
                float2 uv : TEXCOORD0; // texture coordinate
            };
            // vertex shader outputs ("vertex to fragment")
            struct v2f
            {
                float2 uv : TEXCOORD0; // texture coordinate
                float4 vertex : SV_POSITION; // clip space position
                float3 worldpos : TEXCOORD1; // world space position
            };
            // vertex shader
            v2f vert(appdata v)
            {
                v2f o;
                // transform position to clip space
                // (multiply with model*view*projection matrix)
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                // just pass the texture coordinate
                o.uv = v.uv;
                // world position must come from the object space vertex,
                // not the already transformed clip space position
                o.worldpos = mul(_Object2World, v.vertex).xyz;
                return o;
            }
            // texture we will sample
            sampler2D _MainTex;
            // must match the property name to be filled from the material
            float4 _Position;
            // pixel shader; returns low precision ("fixed4" type)
            // color ("SV_Target" semantic)
            fixed4 frag(v2f i) : SV_Target
            {
                // sample texture and return it
                fixed4 col = tex2D(_MainTex, i.uv);
                col.a = step(2, distance(_Position.xyz, i.worldpos));
                return col;
            }
            ENDCG
        }
    }
}
The effect of "cutting holes" in geometry can be achieved using vertex/fragment shaders. This is possible since the vertex shader lets you determine the 3D world position of a pixel of an object using the _Object2World matrix. If I wasn't on my phone I'd give a fuller code example, but I'll try to describe it: if you take the vertex position and apply the _Object2World matrix, it returns a 3D world position. Here's a quick example:
float3 worldPos = mul (_Object2World, v.vertex).xyz;
In the fragment portion of the shader, you can then set the transparency based on the distance between worldPos and any other position. I apologize if this is still difficult to follow; I am writing this on my phone right now. If anyone wants to elaborate on what I said or give code examples, that'd be great. Otherwise, you can study up on vertex shaders here:
http://docs.unity3d.com/Manual/SL-VertexFragmentShaderExamples.html
Hope this helps!
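Putting that advice together, the fragment-side transparency rule is just a distance threshold. In Python terms (the radius value is an assumption matching the step(2, ...) call in the question's code):

```python
def dig_alpha(world_pos, dig_pos, radius=2.0):
    # Mirrors step(radius, distance(...)): alpha 0 inside the dug
    # circle around dig_pos, 1 (fully opaque) everywhere else.
    d = sum((a - b) ** 2 for a, b in zip(world_pos, dig_pos)) ** 0.5
    return 0.0 if d < radius else 1.0
```

Note this only hides pixels while the dig position is nearby; to make holes permanent, as the question asks, the result would have to be accumulated into a persistent mask texture rather than recomputed from the character's current position each frame.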