SceneKit Metal shader using `clip_distance` - swift

I am modifying the SceneKit Metal shader given in https://medium.com/@MalikAlayli/metal-with-scenekit-create-your-first-shader-2c4e4e983300. It displays a cube with an image texture, rendered by SceneKit with a Metal shader.
I changed the cube to a sphere of radius 3, centred at (0,0,0), using SCNSphere(radius: 3). Then I used clip_distance to "cut" away the portion of the sphere satisfying in.position.z > 1.5. The result is shown in the image below. The Metal shader I am using is also given below.
As you can see, the boundary is not smooth: the cut follows the edges of the mesh polygons instead of an interpolated surface. So, is it possible to make it smooth? If yes, how? Thank you.
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float2x3 boundingBox;
};

struct VertexInput {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct VertexOut {
    float4 position [[position]];
    float2 uv;
    float clip_distance [[clip_distance]];
};

vertex VertexOut textureSamplerVertex(VertexInput in [[ stage_in ]], constant NodeBuffer& scn_node [[buffer(1)]]) {
    VertexOut out;
    out.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    out.uv = in.uv;
    if (in.position.z > 1.5) {
        out.clip_distance = -1;
    }
    else {
        out.clip_distance = 1;
    }
    return out;
}

fragment float4 textureSamplerFragment(VertexOut out [[ stage_in ]], texture2d<float, access::sample> customTexture [[texture(0)]]) {
    constexpr sampler textureSampler(coord::normalized, filter::linear, address::repeat);
    return customTexture.sample(textureSampler, out.uv);
}

Clip distances are linearly interpolated across the primitive, and the portion of the primitive where the interpolated distance is less than 0.0 is clipped (the same semantics as gl_ClipDistance).
Your shader outputs only the two constant values -1 and 1, so the interpolated zero crossing falls wherever those values happen to blend across each triangle, which is why the cut follows the polygon edges. For the frontier of clipped fragments to lie exactly at z = 1.5, you need to make sure the interpolated clip distance is exactly 0.0 where z = 1.5, positive on the side you keep, and negative on the side you clip. A signed plane distance does exactly that:
out.clip_distance = (1.5 - in.position.z);
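Applied to the vertex function above, the branch collapses to a single assignment; this is just the posted shader with the fix in place:

vertex VertexOut textureSamplerVertex(VertexInput in [[ stage_in ]], constant NodeBuffer& scn_node [[buffer(1)]]) {
    VertexOut out;
    out.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    out.uv = in.uv;
    // Signed distance to the plane z = 1.5: zero exactly on the plane,
    // positive (kept) for z < 1.5, negative (clipped) for z > 1.5.
    // Because this varies linearly over each triangle, the interpolated
    // zero crossing lands exactly on z = 1.5 and the cut is smooth.
    out.clip_distance = 1.5 - in.position.z;
    return out;
}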

Related

Moving an image using Metal shader

I am trying to write a simple Metal shader to use in Swift that takes a picture and moves it from left to right. Let's say I want the picture's x to start at screenwidth - image.size.width and move until image.x = screenwidth. I used MetalPetal, and my code distorts the image.
fragment float4 SimplePanFragmentRight(VertexOut vertexIn [[ stage_in ]],
                                       texture2d<float, access::sample> fromTexture [[ texture(0) ]],
                                       texture2d<float, access::sample> toTexture [[ texture(1) ]],
                                       constant float & scale [[ buffer(0) ]],
                                       constant float & rotations [[ buffer(1) ]],
                                       constant float2 & center [[ buffer(2) ]],
                                       constant float4 & backColor [[ buffer(3) ]],
                                       constant float & ratio [[ buffer(4) ]],
                                       constant float & progress [[ buffer(5) ]],
                                       sampler textureSampler [[ sampler(0) ]])
{
    float2 uv = vertexIn.textureCoordinate;
    uv.y = 1.0 - uv.y;
    float _fromR = float(fromTexture.get_width()) / float(fromTexture.get_height());
    float _toR = float(toTexture.get_width()) / float(toTexture.get_height());
    float t = 1.0;
    float pro = progress / 0.25;
    // hence bad code
    uv = adjustPos(uv, pro);
    // ****
    return mix(
        getFromColor(uv, fromTexture, ratio, _fromR),
        getToColor(uv, toTexture, ratio, _toR),
        t);
}
and my "simple function" to manipulate x position is
float2 adjustPos(float2 uv, float amount) {
    uv.x = uv.x * amount;
    return uv;
}
How do I move the x position linearly based on a progress ratio, without any image distortion?
You want to do vertex translation in your vertex shader, not in your fragment shader. The vertex shader is where transformations of your model's (image's) vertices happen, whether because you moved your "camera" (i.e. a change of perspective) or because the thing moves in the environment. On entry, the vertex coordinates will be in your picture's coordinates, or in your virtual world's coordinates, depending on exactly how you call the shader. You translate them to coordinates relative to your view frustum (i.e. relative to your camera's position, direction, and orientation). For 2D rendering, you can usually ignore the z part of the frustum coordinates (just set it to 0, so everything is exactly on the view plane), which makes them the same as screen coordinates.
Your fragment shader is where you'd do effects on the image itself: for example blurring, color mapping, texture mapping, etc.
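As a minimal illustration of that idea in Metal (this is a generic sketch, not MetalPetal's actual vertex API; the struct layout, function name, and buffer index are all hypothetical):

#include <metal_stdlib>
using namespace metal;

struct SlideVertexIn {
    float4 position [[attribute(0)]];
    float2 textureCoordinate [[attribute(1)]];
};

struct SlideVertexOut {
    float4 position [[position]];
    float2 textureCoordinate;
};

vertex SlideVertexOut slideRightVertex(SlideVertexIn in [[stage_in]],
                                       constant float &progress [[buffer(0)]]) {
    SlideVertexOut out;
    // Translate in normalized device coordinates: at progress = 0 the quad is
    // one full screen width to the left, at progress = 1 it is centered.
    // A translation moves every vertex by the same amount, so the image keeps
    // its shape, unlike scaling the UVs in the fragment shader, which distorts it.
    out.position = in.position + float4(2.0 * (progress - 1.0), 0.0, 0.0, 0.0);
    out.textureCoordinate = in.textureCoordinate;
    return out;
}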

Encoding geometry information for Sub-Pixel Shadow Mapping

I'm trying to implement the SPSM approach described in this paper: http://jmarvie.free.fr/Publis/2014_I3D/i3d_2014_SubPixelShadowMapping.pdf
First, why am I trying to do this: I am currently using a shadow map, rendered from the viewpoint of a character to display their view cone on the ground. Using regular shadow mapping works fine so far, but produces jagged edges, mainly where obstacles obstruct the view. I hope SPSM might provide better results.
I'm running into problems while doing step 3.3, the encoding of triangle information in the shadow map. There are two distinct problems.
1. Getting all the information needed into the fragment shader for writing into a render texture.
2. Encoding the data into a suitable 128-bit RGBA format.
Regarding 1: I need the following data for the SPSM:
- All three vertices of the closest occluding triangle
- The depth value at the texel center
- The depth derivatives
I tried getting the triangle information from the geometry shader while calculating the other two in the fragment shader. My first naive approach looks like this:
struct appdata
{
    float4 vertex : POSITION;
};

struct v2g
{
    float4 pos : POSITION;
    float3 viewPos : NORMAL;
};

struct g2f
{
    float4 vertex : SV_POSITION;
    float4 v0 : TEXCOORD1;
    float4 v1 : TEXCOORD2;
    float4 v2 : TEXCOORD3;
};

v2g vert(appdata v)
{
    v2g o;
    UNITY_INITIALIZE_OUTPUT(v2g, o);
    o.pos = UnityObjectToClipPos(v.vertex);
    return o;
}

[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> outStream)
{
    g2f o;
    float4 vert0 = input[0].pos;
    float4 vert1 = input[1].pos;
    float4 vert2 = input[2].pos;
    o.vertex = vert0;
    o.v0 = vert0;
    o.v1 = vert1;
    o.v2 = vert2;
    outStream.Append(o);
    o.vertex = vert1;
    o.v0 = vert0;
    o.v1 = vert1;
    o.v2 = vert2;
    outStream.Append(o);
    o.vertex = vert2;
    o.v0 = vert0;
    o.v1 = vert1;
    o.v2 = vert2;
    outStream.Append(o);
}

float4 frag(g2f i) : SV_TARGET
{
    float4 col;
    half depth = i.vertex.z;
    half dx = ddx(i.vertex.z);
    half dy = ddy(i.vertex.z);
    float r1 = i.v0.x;     // / _ScreenParams.x; <- this ranges from around -5 to 5
    float g1 = i.v0.y;     // / _ScreenParams.y;
    float r2 = i.vertex.x; // / _ScreenParams.x; <- this ranges from 0 to 1920
    float g2 = i.vertex.y; // / _ScreenParams.y;
    col = float4(r2, g2, 0, 1);
    return col;
}
I'm currently rendering the interpolated vertex position as the fragment color for debugging. When rendering it like this I get the following output:
using the interpolated position as color
If I render the non-interpolated position of the "first" vertex of each triangle I get the following:
using the non-interpolated triangle vertex positions
It looks correct so far. What confuses me is that i.vertex.x has values ranging from 0 to the screen width (e.g. 1920), while i.v0.x has values ranging from around -5 to +5. Shouldn't both be at least roughly the same (I know one is interpolated while the other is not), since they are both transformed from object to clip space? Or is the SV_POSITION semantic working some magic behind the scenes?
Regarding 2: My second problem is the actual encoding of the values into a 128-bit RGBA format. The paper describes the encoding very briefly on page 3. Is there a way to pack two half values into one float? Or a clever way to bring those values into the range [0, 1) so I can use Unity's encoding? And what about the two derivatives (8-bit values)?
encoding as described in the paper
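One possible way to pack two halfs into one float, assuming Shader Model 5 is available (#pragma target 5.0), is to go through the f32tof16/f16tof32 intrinsics. This is only a sketch, not the paper's exact scheme, and it requires a true 32-bit float render target so the bit pattern survives unchanged:

// Pack two half-precision values into the bit pattern of one float.
float PackTwoHalfs(float a, float b)
{
    uint bits = (f32tof16(a) << 16) | f32tof16(b);
    return asfloat(bits);
}

// Recover the two half values from the packed float.
float2 UnpackTwoHalfs(float packed)
{
    uint bits = asuint(packed);
    return float2(f16tof32(bits >> 16), f16tof32(bits & 0xFFFF));
}

One caveat: bit patterns that happen to alias NaNs can be altered on some hardware, so schemes like this are usually safer on integer render targets.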
Alternatively, I'd be very glad for any advice on how to improve the "shadow quality" of the view cone rendering in a different way, apart from using a higher resolution or more shadow maps.

How can I get the texture coordinate in a surface shader?

I was asked to draw a line between two given points using a surface shader. The points are given in texture coordinates (between 0 and 1) and go directly into the surface shader in Unity. I want to do this by calculating each pixel's position and checking whether it is on that line. So I either need to translate the texture coordinate to a world position, or get the pixel's position relative to the texture coordinates.
But I only found worldPos and screenPos in the Unity shader manual. Is there some way I can get the position in texture coordinates (or at least get the size of the textured object in world space)?
Here is a simple example:
Shader "Line" {
Properties {
// Easiest way to get access of UVs in surface shaders is to define a texture
_MainTex("Texture", 2D) = "white"{}
// We can pack both points into one vector
_Line("Start Pos (xy), End Pos (zw)", Vector) = (0, 0, 1, 1)
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
sampler2D _MainTex;
float4 _Line;
struct Input {
// This UV value will now represent the pixel coordinate in UV space
float2 uv_MainTex;
};
void surf (Input IN, inout SurfaceOutput o) {
float2 start = _Line.xy;
float2 end = _Line.zw;
float2 pos = IN.uv_MainTex.xy;
// Do some calculations
return fixed4(1, 1, 1, 1);
}
ENDCG
}
}
Here is a good post on how to calculate whether a point is on a line:
How to check if a point lies on a line between 2 other points
Let's say you define a function from this with the following signature:
inline bool IsPointOnLine(float2 p, float2 l1, float2 l2)
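A minimal sketch of such a function, based on that post; the 0.01 threshold gives the line a visible thickness in UV space and is purely illustrative:

inline bool IsPointOnLine(float2 p, float2 l1, float2 l2)
{
    // Project p onto the segment l1-l2 and clamp to the segment's endpoints
    float2 dir = l2 - l1;
    float t = saturate(dot(p - l1, dir) / dot(dir, dir));
    float2 closest = l1 + t * dir;
    // The point counts as "on" the line if it lies within the threshold
    return distance(p, closest) < 0.01;
}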
Then for return value, you can put this:
return IsPointOnLine(pos, start, end) ? _LineColor : _BackgroundColor
If you want UV coordinates without using a texture, I recommend making a vertex-fragment shader instead and defining float2 uv : TEXCOORD0 inside the appdata/VertexInput struct. You can then pass that on to the fragment shader inside the vertex function, as in the skeleton below.
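A minimal skeleton of that approach (the names are the usual Unity conventions, nothing specific to your project):

#include "UnityCG.cginc"

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f
{
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
};

v2f vert(appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    // Pass the raw UVs straight through to the fragment shader
    o.uv = v.uv;
    return o;
}

fixed4 frag(v2f i) : SV_Target
{
    // i.uv is now the interpolated texture coordinate of this pixel
    return fixed4(i.uv, 0, 1);
}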

How can I convert UV coordinates to world space?

I am trying to implement a shader to use with a Unity LineRenderer. The shader has noise that scrolls over time relative to the texture coordinates, for example parallel to the x axis of the texture's UV space. I have an implementation, but I don't know how to get the scroll direction relative to the texture UVs (taking the texture rotation into account) in the vert function. So far I only have world-space-relative scrolling.
The main problem: how do I convert UV coordinates (for example (0, 0) or (1, 0)) to world space?
Here is my shader:
Shader "LineRendering/Test"
{
Properties
{
[PerRendererData] _MainTex("Sprite Texture", 2D) = "white" {}
_Freq("Frequency", Float) = 1
_Speed("Speed", Float) = 1
}
SubShader
{
Tags
{
"Queue" = "Transparent"
"IgnoreProjector" = "True"
"RenderType" = "Transparent"
"PreviewType" = "Plane"
"CanUseSpriteAtlas" = "True"
}
LOD 200
Cull Off
Lighting Off
ZWrite Off
Fog { Mode Off }
Blend One OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma target 3.0
#pragma vertex vert
#pragma fragment frag
#pragma enable_d3d11_debug_symbols
#include "noiseSimplex.cginc"
struct appdata_t
{
fixed4 vertex : POSITION;
fixed2 uv : TEXCOORD0;
};
struct v2f
{
fixed4 vertex : SV_POSITION;
fixed2 texcoord : TEXCOORD0;
fixed2 srcPos : TEXCOORD1;
};
uniform fixed _Freq;
uniform fixed _Speed;
v2f vert(appdata_t IN)
{
v2f OUT;
OUT.vertex = UnityObjectToClipPos(IN.vertex);
OUT.texcoord = IN.uv;
OUT.srcPos = mul(unity_ObjectToWorld, IN.vertex).xy;
OUT.srcPos *= _Freq;
//This is my trying to convert uv coordinates to world coodinates, but it is still unsuccessfully.
//fixed2 v0Pos = mul(unity_WorldToObject, fixed3(0, 0, 0)).xy;
//fixed2 v1Pos = mul(unity_WorldToObject, fixed3(1, 0, 0)).xy;
//fixed2 scrollOffset = v1Pos - v0Pos;
fixed2 scrollOffset = fixed2(1, 0);
OUT.srcPos.xy -= fixed2(scrollOffset.x, scrollOffset.y) * _Time.y * _Speed;
return OUT;
}
fixed4 frag(v2f IN) : COLOR
{
fixed4 output;
float ns = snoise(IN.srcPos) / 2 + 0.5f;
output.rgb = fixed3(ns, ns, ns);
output.a = ns;
output.rgb *= output.a;
return output;
}
ENDCG
}
}
}
The noise library came from here: https://forum.unity.com/threads/2d-3d-4d-optimised-perlin-noise-cg-hlsl-library-cginc.218372/#post-2445598. Please help me.
Texture coordinates are already in texture space. If I understand correctly, you should be able to just do this:
v2f vert(appdata_t IN)
{
    v2f OUT;
    OUT.vertex = UnityObjectToClipPos(IN.vertex);
    OUT.texcoord = IN.uv;
    OUT.srcPos = IN.uv;
    OUT.srcPos *= _Freq;
    fixed2 scrollOffset = fixed2(1, 0);
    OUT.srcPos.xy -= fixed2(scrollOffset.x, scrollOffset.y) * _Time.y * _Speed;
    return OUT;
}
Option 1
Each of your UVs is associated with a specific vertex. Once you establish which UV is assigned to which vertex, you can look up the world position of that vertex.
Option 2
Another way to do this may be with a texture that is a pre-baked image of the object's local-space coordinates. In the texture, the XYZ coordinates map to RGB. You then do a texture lookup to get the local object coordinates, and multiply that vector by the object-to-world matrix to get the actual world-space value.
When you create the texture, you'll have to account for the inability to store negative values. So first set up the object so that it fits entirely inside the coordinate range [-1, 1] on all three axes. Then, as part of the baking procedure, divide all values by two and add 0.5. This ensures that all negative coordinate-space values are stored in [0, 0.5) and all positive values in [0.5, 1]. A sketch of the decode step follows below.
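A sketch of the lookup for option 2 (the _PositionTex sampler name is hypothetical, and the texture must have been baked as described above):

fixed4 frag(v2f IN) : COLOR
{
    // Undo the bake encoding: [0, 1] back to local-space [-1, 1]
    float3 localPos = tex2D(_PositionTex, IN.texcoord).rgb * 2.0 - 1.0;
    // Object space to world space
    float3 worldPos = mul(unity_ObjectToWorld, float4(localPos, 1.0)).xyz;
    // ... use worldPos here; returned directly for debugging ...
    return fixed4(worldPos, 1.0);
}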
Note
I had a hard time understanding your exact request. I hope these options help with your program.

How to compute the radial distance of an object in a postprocessing vertex and fragment shader

After hours of Google, copy-pasting code, and playing around, I still could not find a solution to my problem.
I am trying to write a postprocessing shader using vertex and fragment functions. My problem is that I do not know how to compute the radial distance of the current vertex to the camera position (or any other given position) in world coordinates.
My goal is the following:
Consider a very big 3D plane where the camera is on top and looks straight down at the plane. I now want a postprocessing shader that draws a white line onto the plane, such that only those pixels that have a certain radial distance to the camera are painted white. The expected result would be a white circle (in this specific setup).
I know how to do this in principle, but the problem is that I cannot find out how to compute the radial distance to the vertex.
The problem here might be that this is a POSTPROCESSING shader. So this shader is not applied to a specific object. If it were, I could get the world coordinates of the vertex by using mul(unity_ObjectToWorld, v.vertex), but for postprocessing shaders this gives a nonsense value.
This is my debug code for this issue:
Shader "NonHidden/TestShader"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent-1"}
LOD 100
ZWrite Off
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
sampler2D _MainTex;
sampler2D _CameraDepthTexture;
uniform float4 _MainTex_TexelSize;
// V2F
struct v2f {
float4 outpos : SV_POSITION;
float4 worldPos : TEXCOORD0;
float3 rayDir : TEXCOORD1;
float3 camNormal : TEXCOORD2;
};
// Sample Depth
float sampleDepth(float2 uv) {
return Linear01Depth(
UNITY_SAMPLE_DEPTH(
tex2D(_CameraDepthTexture, uv)));
}
// VERTEX
v2f vert (appdata_tan v)
{
TANGENT_SPACE_ROTATION;
v2f o;
o.outpos = UnityObjectToClipPos(v.vertex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex);
o.rayDir = mul(rotation, ObjSpaceViewDir(v.vertex));
o.camNormal = UNITY_MATRIX_IT_MV[2].xyz;
return o;
}
// FRAGMENT
fixed4 frag (v2f IN) : SV_Target
{
// Get uv coordinates
float2 uv = IN.outpos.xy * (_ScreenParams.zw - 1.0f);
// Flip y if necessary
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
{
uv.y = 1 - uv.y;
}
#endif
// Get depth
float depth = sampleDepth(uv);
// Set color
fixed4 color = 0;
if(depth.x < 1)
{
color.r = IN.worldPos.x;
color.g = IN.worldPos.y;
color.b = IN.worldPos.z;
}
return color;
}
ENDCG
}
}
}
Current State
This image shows the result when the camera looks down on the plane:
Image 1: Actual result
The blue value is (for whatever reason) 25 in every pixel. The red and green areas reflect the x-y coordinates of the screen.
Even if I rotate the camera a little bit, I get the exact same shading at the same screen coordinates:
That shows me that the computed "worldPos" coordinates are screen coordinates and have nothing to do with the world coordinates of the plane.
Expected Result
The result I expect to see is the following:
Here, pixels that have the same (radial) distance to the camera have the same color.
How do I need to change the above code to achieve this effect? With rayDir (computed in the vert function) I tried to get at least the direction vector from the camera center to the current pixel, such that I could compute the radial distance using the depth information. But rayDir has a constant value for all pixels ...
At this point I also have to say that I don't really understand what is computed inside the vert function. This is just stuff that I found on the internet and that I tried out.
Alright, I found a solution to my problem thanks to this video: Shaders Case Study - No Man's Sky: Topographic Scanner
The video description contains a link to the corresponding Git repository. I downloaded, analyzed, and rewrote the code so that it fits my purpose and is easier to read and understand.
The major thing I learned is that there is no built-in way to compute the radial distance in post-processing shaders (correct me if I'm wrong!). So in order to get the radial distance, the only way seems to be to use the direction vector from the camera to the vertex together with the depth buffer. Since the direction vector is also not available in a built-in way, a trick is used:
Instead of using the Graphics.Blit function in the post-processing script, a custom Blit function can be used to set some additional shader variables. In this case, the camera's frustum corner rays are stored in a second set of texture coordinates, which are then available in the shader code as TEXCOORD1. The trick here is that the corresponding shader variable automatically contains an interpolated value, which is exactly the direction vector ("frustum ray") I was looking for.
The code of the calling script now looks as follows:
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class TestShaderEffect : MonoBehaviour
{
    private Material material;
    private Camera cam;

    void OnEnable()
    {
        // Create a material that uses the desired shader
        material = new Material(Shader.Find("Test/RadialDistance"));
        // Get the camera object (this script must be assigned to a camera)
        cam = GetComponent<Camera>();
        // Enable depth buffer generation
        // (writes to the '_CameraDepthTexture' variable in the shader)
        cam.depthTextureMode = DepthTextureMode.Depth;
    }

    [ImageEffectOpaque] // Draw after opaque, but before transparent geometry
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Call custom Blit function
        // (usually Graphics.Blit is used)
        RaycastCornerBlit(source, destination, material);
    }

    void RaycastCornerBlit(RenderTexture source, RenderTexture destination, Material mat)
    {
        // Compute (half) camera frustum size (at distance 1.0)
        float angleFOVHalf = cam.fieldOfView / 2 * Mathf.Deg2Rad;
        float heightHalf = Mathf.Tan(angleFOVHalf);
        float widthHalf = heightHalf * cam.aspect; // aspect = width/height

        // Compute helper vectors (camera orientation weighted with frustum size)
        Vector3 vRight = cam.transform.right * widthHalf;
        Vector3 vUp = cam.transform.up * heightHalf;
        Vector3 vFwd = cam.transform.forward;

        // Custom Blit
        // ===========
        // Set the given destination texture as the active render texture
        RenderTexture.active = destination;
        // Set the '_MainTex' variable to the texture given by 'source'
        mat.SetTexture("_MainTex", source);
        // Store current transformation matrix
        GL.PushMatrix();
        // Load orthographic transformation matrix
        // (sets viewing frustum from [0,0,-1] to [1,1,100])
        GL.LoadOrtho();
        // Use the first pass of the shader for rendering
        mat.SetPass(0);
        // Activate quad draw mode and draw a quad
        GL.Begin(GL.QUADS);
        {
            // Using MultiTexCoord2 (TEXCOORD0) and Vertex3 (POSITION) to draw on the whole screen
            // Using MultiTexCoord to write the frustum information into TEXCOORD1
            // -> When the shader is called, the TEXCOORD1 value is automatically an interpolated value

            // Bottom Left
            GL.MultiTexCoord2(0, 0, 0);
            GL.MultiTexCoord(1, (vFwd - vRight - vUp) * cam.farClipPlane);
            GL.Vertex3(0, 0, 0);
            // Bottom Right
            GL.MultiTexCoord2(0, 1, 0);
            GL.MultiTexCoord(1, (vFwd + vRight - vUp) * cam.farClipPlane);
            GL.Vertex3(1, 0, 0);
            // Top Right
            GL.MultiTexCoord2(0, 1, 1);
            GL.MultiTexCoord(1, (vFwd + vRight + vUp) * cam.farClipPlane);
            GL.Vertex3(1, 1, 0);
            // Top Left
            GL.MultiTexCoord2(0, 0, 1);
            GL.MultiTexCoord(1, (vFwd - vRight + vUp) * cam.farClipPlane);
            GL.Vertex3(0, 1, 0);
        }
        GL.End(); // Finish quad drawing
        // Restore original transformation matrix
        GL.PopMatrix();
    }
}
The shader code looks like this:
Shader "Test/RadialDistance"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
}
SubShader
{
// No culling or depth
Cull Off ZWrite Off ZTest Always
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct VertIn
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
float4 ray : TEXCOORD1;
};
struct VertOut
{
float4 vertex : SV_POSITION;
float2 uv : TEXCOORD0;
float4 interpolatedRay : TEXCOORD1;
};
// Parameter variables
sampler2D _MainTex;
// Auto filled variables
float4 _MainTex_TexelSize;
sampler2D _CameraDepthTexture;
// Generate jet-color-sheme color based on a value t in [0, 1]
half3 JetColor(half t)
{
half3 color = 0;
color.r = min(1, max(0, 4 * t - 2));
color.g = min(1, max(0, -abs( 4 * t - 2) + 2));
color.b = min(1, max(0, -4 * t + 2));
return color;
}
// VERT
VertOut vert(VertIn v)
{
VertOut o;
// Get vertex and uv coordinates
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv.xy;
// Flip uv's if necessary
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
o.uv.y = 1 - o.uv.y;
#endif
// Get the interpolated frustum ray
// (generated the calling script custom Blit function)
o.interpolatedRay = v.ray;
return o;
}
// FRAG
float4 frag (VertOut i) : SV_Target
{
// Get the color from the texture
half4 colTex = tex2D(_MainTex, i.uv);
// flat depth value with high precision nearby and bad precision far away???
float rawDepth = DecodeFloatRG(tex2D(_CameraDepthTexture, i.uv));
// flat depth but with higher precision far away and lower precision nearby???
float linearDepth = Linear01Depth(rawDepth);
// Vector from camera position to the vertex in world space
float4 wsDir = linearDepth * i.interpolatedRay;
// Position of the vertex in world space
float3 wsPos = _WorldSpaceCameraPos + wsDir;
// Distance to a given point in world space coordinates
// (in this case the camera position, so: dist = length(wsDir))
float dist = distance(wsPos, _WorldSpaceCameraPos);
// Get color by distance (same distance means same color)
half4 color = 1;
half t = saturate(dist/100.0);
color.rgb = JetColor(t);
// Set color to red at a hard-coded distance -> red circle
if (dist < 50 && dist > 50 - 1 && linearDepth < 1)
{
color.rgb = half3(1, 0, 0);
}
return color * colTex;
}
ENDCG
}
}
}
I'm now able to achieve the desired effect:
But there are still some questions I have and I would be thankful if anyone could answer them for me:
Is there really no other way to get the radial distance? Using a direction vector and the depth buffer seems inefficient and inaccurate.
I don't really understand the content of the rawDepth variable. I mean yes, it's some depth information, but if you use the depth information as a texture color, you basically get a black image unless you are ridiculously close to an object. That leads to very bad resolution for objects that are further away. How can anyone work with that?
I don't understand what exactly the Linear01Depth function does. Since the Unity documentation sucks in general, it doesn't offer any information about this one either.
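For what it's worth, Linear01Depth is defined in UnityCG.cginc roughly as follows: it rescales the non-linear hardware depth value into a linear 0..1 range (0 at the camera, 1 at the far plane), using _ZBufferParams, which Unity fills in from the near and far clip planes:

// From UnityCG.cginc (non-reversed Z):
// _ZBufferParams.x = 1 - far/near, _ZBufferParams.y = far/near
inline float Linear01Depth(float z)
{
    return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}

That also relates to the precision behavior observed above: the raw depth buffer packs most of its precision close to the near plane, and this remapping is what makes it usable as a linear distance.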