I would like to set the saturation of an entire color channel in my main camera. The closest option I've found is the Hue vs. Sat(uration) grading curve. In the background of the scene is a palm tree that is colored teal, and I want the green level of the tree to still show. The same goes for the tops of the grass in the foreground: they're closer to yellow than green, but I'd still want to see the little bit of green value they have.
I have been searching the Unity documentation and the Asset Store for a possible third-party shader for weeks, but have come up empty-handed. My current result is the best I could come up with; any help would be greatly appreciated. Thank you.
SOLVED by the check-marked answer. Just wanted to share what the results look like for anyone who stumbles across this issue in the future. Compare the above screenshot, where the palm tree in the background and the grass tops in the foreground are just black and white, to the after screenshot below. Full control of RGB saturation in the scene!
Examples using this method:
Below is a post-processing shader intended to let you set the saturation of each color channel.
It first converts the original pixel color to hue, saturation, and luminance. The hue is taken to its fully saturated, neutral-luminance RGB version. That RGB is multiplied by the per-channel saturation factors; the hue of the result becomes the new hue, and its magnitude, multiplied by the original saturation, becomes the new saturation. The new hue and saturation are then recombined with the original luminance to compute the output color.
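To make that concrete, here is a hand-traced example (my own numbers) for a pure red pixel with _rSat = 0.5 and the other channels left at 1:
// Worked example, traced through the functions below:
// rgb2hsl(1, 0, 0)           -> (h = 0.0, s = 1.0, l = 0.5)
// hsl2rgb(0.0, 1.0, 0.5)     -> (1, 0, 0)    fully saturated, neutral-luminance red
// (1, 0, 0) * (0.5, 1, 1)    -> (0.5, 0, 0)  scaled by the channel factors
// new hue = rgb2hsl(0.5, 0, 0).x = 0.0       unchanged for a pure channel
// new sat = 1.0 * length((0.5, 0, 0)) = 0.5
// hsl2rgb(0.0, 0.5, 0.5)     -> (0.75, 0.25, 0.25), a half-desaturated red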
Shader "Custom/ChannelSaturation" {
Properties{
_MainTex("Base", 2D) = "white" {}
_rSat("Red Saturation", Range(0, 1)) = 1
_gSat("Green Saturation", Range(0, 1)) = 1
_bSat("Blue Saturation", Range(0, 1)) = 1
}
SubShader{
Pass {
CGPROGRAM
#pragma vertex vert_img
#pragma fragment frag
#include "UnityCG.cginc"
uniform sampler2D _MainTex;
float _rSat;
float _gSat;
float _bSat;
/*
source: modified version of https://www.shadertoy.com/view/MsKGRW
written # https://gist.github.com/hiroakioishi/
c4eda57c29ae7b2912c4809087d5ffd0
*/
float3 rgb2hsl(float3 c) {
float epsilon = 0.00000001;
float cmin = min( c.r, min( c.g, c.b ) );
float cmax = max( c.r, max( c.g, c.b ) );
float cd = cmax - cmin;
float3 hsl = float3(0.0, 0.0, 0.0);
hsl.z = (cmax + cmin) / 2.0;
hsl.y = lerp(cd / (cmax + cmin + epsilon),
cd / (epsilon + 2.0 - (cmax + cmin)),
step(0.5, hsl.z));
float3 a = float3(1.0 - step(epsilon, abs(cmax - c)));
a = lerp(float3(a.x, 0.0, a.z), a, step(0.5, 2.0 - a.x - a.y));
a = lerp(float3(a.x, a.y, 0.0), a, step(0.5, 2.0 - a.x - a.z));
a = lerp(float3(a.x, a.y, 0.0), a, step(0.5, 2.0 - a.y - a.z));
hsl.x = dot( float3(0.0, 2.0, 4.0) + ((c.gbr - c.brg)
/ (epsilon + cd)), a );
hsl.x = (hsl.x + (1.0 - step(0.0, hsl.x) ) * 6.0 ) / 6.0;
return hsl;
}
/*
source: modified version of
https://stackoverflow.com/a/42261473/1092820
*/
float3 hsl2rgb(float3 c) {
float3 rgb = clamp(abs(fmod(c.x * 6.0 + float3(0.0, 4.0, 2.0),
6.0) - 3.0) - 1.0, 0.0, 1.0);
return c.z + c.y * (rgb - 0.5) * (1.0 - abs(2.0 * c.z - 1.0));
}
float4 frag(v2f_img i) : COLOR {
float3 sat = float3(_rSat, _gSat, _bSat);
float4 c = tex2D(_MainTex, i.uv);
float3 hslOrig = rgb2hsl(c.rgb);
float3 rgbFullSat = hsl2rgb(float3(hslOrig.x, 1, .5));
float3 diminishedrgb = rgbFullSat * sat;
float diminishedHue = rgb2hsl(diminishedrgb).x;
float diminishedSat = hslOrig.y * length(diminishedrgb);
float3 mix = float3(diminishedHue, diminishedSat, hslOrig.z);
float3 newc = hsl2rgb(mix);
float4 result = c;
result.rgb = newc;
return result;
}
ENDCG
}
}
}
If you're using URP (Universal Render Pipeline), which is recommended, you can create a Forward Renderer asset, add a render feature that applies a material using this shader, and configure it appropriately. Further information, including diagrams, can be found in the official Unity tutorial for custom render passes with URP.
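For reference, a minimal sketch of such a render feature, assuming an older URP API (roughly versions 7-10; the class and property names here are my own, and newer URP versions have changed this API):
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical render feature that blits the camera image through the
// channel-saturation material as a post-process.
public class ChannelSaturationFeature : ScriptableRendererFeature
{
    public Material material;   // a material using Custom/ChannelSaturation

    class BlitPass : ScriptableRenderPass
    {
        readonly Material _material;
        RenderTargetIdentifier _source;
        RenderTargetHandle _temp;

        public BlitPass(Material material)
        {
            _material = material;
            renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
            _temp.Init("_TempChannelSaturation");
        }

        public void Setup(RenderTargetIdentifier source) { _source = source; }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("ChannelSaturation");
            // Blit through a temporary target: reading and writing the same
            // texture in a single blit is undefined behaviour.
            cmd.GetTemporaryRT(_temp.id, renderingData.cameraData.cameraTargetDescriptor);
            cmd.Blit(_source, _temp.Identifier(), _material);
            cmd.Blit(_temp.Identifier(), _source);
            cmd.ReleaseTemporaryRT(_temp.id);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    BlitPass _pass;

    public override void Create()
    {
        _pass = new BlitPass(material);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (material != null)
        {
            _pass.Setup(renderer.cameraColorTarget);
            renderer.EnqueuePass(_pass);
        }
    }
}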
If you aren't using URP, you have other options. You could attach the shader to specific materials, or add the below script from Wikibooks to the camera's GameObject to apply a material using the above shader as a post-processing effect:
using System;
using UnityEngine;

[RequireComponent(typeof(Camera))]
[ExecuteInEditMode]
public class PostProcessingEffectScript : MonoBehaviour {
    public Material material;

    void OnEnable()
    {
        // Disable the effect if the material or its shader is missing/unsupported
        if (null == material || null == material.shader || !material.shader.isSupported)
        {
            enabled = false;
        }
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Run the camera's rendered image through the material's shader
        Graphics.Blit(source, destination, material);
    }
}
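Wiring it up from code could look like this sketch (the material can equally be created and assigned in the Inspector; note that Shader.Find only works if the shader is included in the build):
// Hypothetical setup: create the material and tune the per-channel saturation.
var material = new Material(Shader.Find("Custom/ChannelSaturation"));
material.SetFloat("_rSat", 1f);   // keep red saturation
material.SetFloat("_gSat", 1f);   // keep green saturation
material.SetFloat("_bSat", 0f);   // fully desaturate the blue channel

var effect = Camera.main.gameObject.AddComponent<PostProcessingEffectScript>();
effect.material = material;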
If you use the post-processing effect, you will want to render the things you want to exclude from the effect with a different camera, then composite everything together. However, that is a bit out of scope for this answer.
My best guess would be to use a custom shader or camera FX that gives you control over each channel.
Hope that helped ;)
I'm playing around with the code in this Unity sample game to find out how to transform objects with shaders. The game uses shaders to curve the world, and one of those shaders also causes objects to rotate around their y-axis. I'd like to modify it to rotate an object around its z-axis instead.
I've tried swapping the z's and y's, but while that does get the cylinder I'm using for this experiment to rotate around the z-axis, it also causes the cylinder to stretch. Only changing rotVert.z to rotVert.y causes the cylinder to spin on its side at a 45-degree angle. Can anyone tell me where I'm going wrong?
Here's the code which causes the object to rotate around the y-axis:
float4 rotVert = v.vertex;
rotVert.z = v.vertex.z * cos(_Time.y * 3.14f) - v.vertex.x * sin(_Time.y * 3.14f);
rotVert.x = v.vertex.z * sin(_Time.y * 3.14f) + v.vertex.x * cos(_Time.y * 3.14f);
o.vertex = UnityObjectToClipPos(rotVert);
This is actually quite simple: rotation around an axis mixes the other two coordinates as a pair, so for a z-axis rotation you need to modify the value of x together with y.
float4 rotVert = v.vertex;
rotVert.y = v.vertex.y * cos(_Time.y * 3.14f) - v.vertex.x * sin(_Time.y * 3.14f);
rotVert.x = v.vertex.y * sin(_Time.y * 3.14f) + v.vertex.x * cos(_Time.y * 3.14f);
o.vertex = UnityObjectToClipPos(rotVert);
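To see why both components must change together, the same idea can be written as a general helper (a sketch; the function name is my own):
// Hypothetical helper: rotate a vertex around the z-axis by `angle` radians.
// A rotation about an axis mixes the two *other* coordinates as a pair and
// leaves the axis coordinate untouched; updating only one of the pair
// shears/stretches the mesh instead of rotating it.
float4 rotateAroundZ(float4 v, float angle)
{
    float s = sin(angle);
    float c = cos(angle);
    float4 r = v;
    r.x = v.x * c - v.y * s;
    r.y = v.x * s + v.y * c;
    return r;
}
// Usage: o.vertex = UnityObjectToClipPos(rotateAroundZ(v.vertex, _Time.y * 3.14f));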
I'm using an SKAction to rotate an SKCameraNode right now, which in turn rotates the scene. This works really well, as gravity and all nodes (except the camera's children) are rotated too. Now I have a chromatic aberration shader applied to the scene, and every time I rotate the scene the background color of the scene immediately covers the entire screen. This happens regardless of the shader used; I even made a shader that just returns the exact texture, and it still happens. Simply setting the camera's zRotation doesn't fix the problem either, so it's not the SKAction causing this. This is the shader:
void main(void)
{
    float divider = 200.0;
    float deltaX = sin(v_tex_coord.y * 3.14 * 10.0 + u_time * 4.0) * 0.01;

    vec2 rCoord = v_tex_coord;
    rCoord.x = rCoord.x - sin(u_time * 10.0) / divider + deltaX;
    rCoord.y = rCoord.y - sin(u_time * 10.0) / divider;

    vec2 gCoord = v_tex_coord;
    gCoord.x = gCoord.x + sin(u_time * 10.0) / divider + deltaX;
    gCoord.y = gCoord.y + sin(u_time * 10.0) / divider;

    vec2 bCoord = v_tex_coord;
    bCoord.x = bCoord.x + deltaX;

    vec4 rChannel = texture2D(u_texture, rCoord);
    vec4 gChannel = texture2D(u_texture, gCoord);
    vec4 bChannel = texture2D(u_texture, bCoord);

    gl_FragColor = vec4(rChannel.r, gChannel.g, bChannel.b, 1.0);
}
Also this is the code to initialize the shader:
let shader = SKShader(fileNamed: "ChromaShader.fsh")
self.shader = shader
shouldEnableEffects = true
Thanks in advance!
A parallax background with a fixed camera is easy to do, but since I'm making a top-down 2D space exploration game, I figured that having a single SKSpriteNode filling the screen, set as a child of my SKCameraNode, and using an SKShader to draw a parallax starfield would be easier.
I went on Shadertoy and found this simple-looking shader. I adapted it successfully on Shadertoy to accept a vec2 for the velocity of the movement, which I want to pass as an SKAttribute so it can follow the movement of my ship.
Here is the original source:
https://www.shadertoy.com/view/XtjSDh
I managed to convert the original code so it compiles without any errors, but nothing shows up on the screen. I tried the individual functions and they do work to generate a fixed image.
Any pointers to make it work?
Thanks!
This isn't really an answer, but it's a lot more info than a comment, and highlights some of the oddness and appropriateness of how SpriteKit does particles:
There are a couple of weird things about particles in SceneKit that might apply to SpriteKit.
When you move the particle system, you can have the particles move with it. This is the default behaviour.
From the docs:
When the emitter creates particles, they are rendered as children of the emitter node. This means that they inherit the characteristics of the emitter node, just like nodes do. For example, if you rotate the emitter node, the positions of all of the spawned particles are rotated also. Depending on what effect you are simulating with the emitter, this may not be the correct behavior.
For most applications this is the wrong behaviour, in fact, but for what you're wanting to do it's ideal. You can position new SKEmitterNodes offscreen where the ship is heading, fix them to "space" so they rotate in conjunction with the directional changes of the player's ship, and the particles will do exactly what you want/need to create the feeling of moving through space.
SpriteKit has a prebuild, or populate, ability in the form of advancing the simulation: https://developer.apple.com/reference/spritekit/skemitternode/1398027-advancesimulationtime
This means you can have stars ready to show wherever the ship is heading, through space, as the SKEmitterNodes come on screen. There's no need for a loading delay to build stars; this does it immediately.
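For example, a hedged Swift sketch from inside an SKScene subclass (the .sks file name is an assumption):
// Prebuild a screen-sized star emitter so it appears already full of stars,
// and render its particles into the scene so they stop inheriting the
// emitter node's subsequent motion.
if let stars = SKEmitterNode(fileNamed: "Stars.sks") {
    stars.particlePositionRange = CGVector(dx: size.width, dy: size.height)
    stars.targetNode = self                // detach particles from emitter motion
    stars.advanceSimulationTime(10)        // populate instantly, no fill-in delay
    addChild(stars)
}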
As near as I can figure, you'd need 3 particle emitters to pull this off, each the size of the screen of the device. Burst the particles out, then release each layer you want for parallax to a target node at the right "depth" from the camera, and carry on by moving these targets as per the screen movement.
Bit messy, but probably quicker, easier, and much more powerfully full of potential for playful effects than creating your own system.
Maybe... I could be wrong.
EDIT: The code is clean and working now. I've set up a GitHub repo for this.
I guess I didn't explain what I wanted properly. I needed a starfield background that follows the camera, like you could find in SubSpace (back in the day).
The result is pretty cool and convincing! I'll probably come back to this later when the node quantity becomes a bottleneck. I'm still convinced that the proper way to do this is with shaders!
Here is a link to my code on GitHub. I hope it can be useful to someone. It's still a work in progress, but it works well. Included in the repo is the source for SKTUtils (a library by Ray Wenderlich that is already freely available on GitHub) and for my own extension to Ray's tools, which I called nuts-n-bolts. These are just extensions for common types that everyone should find useful. You also, of course, get the source for StarfieldNode and InteractiveCameraNode, along with a small demo project.
https://github.com/sonoblaise/StarfieldDemo
The short answer is: in SpriteKit you use the fragment coordinates directly, without needing to scale against the viewport resolution (iResolution in Shadertoy land), so the line:
vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(0.0, iTime * 0.01);
can be changed to omit the scaling:
vec2 samplePosition = fragCoord.xy + vec2(0.0, iTime * 0.01);
This is likely the root cause (hard to know for sure without seeing your rendition of the shader code) of why you're only seeing black from the shader.
For a full answer for an implementation of a SpriteKit shader making a star field, let's take the original shader and simplify it so there's only one star field, no "fog" (just to keep things simple), and add a variable to control the velocity vector of the movement of the stars:
(this is still in shadertoy code)
float Hash(in vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}

vec2 Hash2D(in vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    float h2 = dot(p, vec2(37.271, 377.632));
    return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}

float Noise(in vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
               mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}

vec3 Voronoi(in vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 mg, mr;
    float md = 8.0;
    for(int j = -1; j <= 1; ++j)
    {
        for(int i = -1; i <= 1; ++i)
        {
            vec2 g = vec2(float(i), float(j));
            vec2 o = Hash2D(n + g);
            vec2 r = g + o - f;
            float d = dot(r, r);
            if(d < md)
            {
                md = d;
                mr = r;
                mg = g;
            }
        }
    }
    return vec3(md, mr);
}

vec3 AddStarField(vec2 samplePosition, float threshold)
{
    vec3 starValue = Voronoi(samplePosition);
    if(starValue.x < threshold)
    {
        float power = 1.0 - (starValue.x / threshold);
        return vec3(power * power * power);
    }
    return vec3(0.0);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float maxResolution = max(iResolution.x, iResolution.y);
    vec2 velocity = vec2(0.01, 0.01);
    vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(iTime * velocity.x, iTime * velocity.y);
    vec3 finalColor = AddStarField(samplePosition * 16.0, 0.00125);
    fragColor = vec4(finalColor, 1.0);
}
If you paste that into a new shadertoy window and run it you should see a monochrome star field moving towards the bottom left.
To adjust it for SpriteKit is fairly simple. We need to remove the "in"s from the function parameters, change the names of some constants (there's a decent blog post about the Shadertoy-to-SpriteKit changes that are needed), and use an attribute for the velocity vector so we can change the direction of the stars for each SKSpriteNode this is applied to, and over time, as needed.
Here's the full SpriteKit shader source, with a_velocity as a needed attribute defining the star field movement:
float Hash(vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}

vec2 Hash2D(vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    float h2 = dot(p, vec2(37.271, 377.632));
    return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}

float Noise(vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
               mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}

vec3 Voronoi(vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 mg, mr;
    float md = 8.0;
    for(int j = -1; j <= 1; ++j)
    {
        for(int i = -1; i <= 1; ++i)
        {
            vec2 g = vec2(float(i), float(j));
            vec2 o = Hash2D(n + g);
            vec2 r = g + o - f;
            float d = dot(r, r);
            if(d < md)
            {
                md = d;
                mr = r;
                mg = g;
            }
        }
    }
    return vec3(md, mr);
}

vec3 AddStarField(vec2 samplePosition, float threshold)
{
    vec3 starValue = Voronoi(samplePosition);
    if (starValue.x < threshold)
    {
        float power = 1.0 - (starValue.x / threshold);
        return vec3(power * power * power);
    }
    return vec3(0.0);
}

void main()
{
    vec2 samplePosition = v_tex_coord.xy + vec2(u_time * a_velocity.x, u_time * a_velocity.y);
    vec3 finalColor = AddStarField(samplePosition * 20.0, 0.00125);
    gl_FragColor = vec4(finalColor, 1.0);
}
(Worth noting: this is simply a modified version of the original.)
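On the Swift side, wiring up the attribute could look like this sketch (the shader file name and sprite variable are assumptions for illustration):
import SpriteKit

// Attach the shader to the screen-filling sprite and declare the attribute.
let shader = SKShader(fileNamed: "Starfield.fsh")   // assumed file name
shader.attributes = [SKAttribute(name: "a_velocity", type: .vectorFloat2)]
starfieldSprite.shader = shader

// Update the velocity whenever the ship's heading changes.
let velocity = vector_float2(0.01, 0.02)
starfieldSprite.setValue(SKAttributeValue(vectorFloat2: velocity),
                         forAttribute: "a_velocity")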
This shader (code at the end) uses raymarching to render procedural geometry.
However, in the image above, the cube in the background should be partially occluding the pink solid; it isn't, because of this:
struct fragmentOutput {
    float4 color : SV_Target;
    float zvalue : SV_Depth;
};

fragmentOutput frag(fragmentInput i) {
    fragmentOutput o;
    ...
    o.zvalue = IF(output[1] > 0, 0, 1);
}
However, I cannot for the life of me figure out how to correctly generate a depth value here that allows raymarched solids to obscure, or be obscured by, the other geometry in the scene.
I know it's possible, because there's a working example here: https://github.com/i-saint/RaymarchingOnUnity5 (associated Japanese-language blog: http://i-saint.hatenablog.com/)
However, it's in Japanese and largely undocumented, as well as being extremely complex.
I'm looking for an extremely simplified version of the same thing, from which to build on.
In the shader, I'm currently using the fragment program line:
float2 output = march_raycast(i.worldpos, i.viewdir, _far, _step);
It maps an input point p on the quad near the camera (which has this shader attached to it) to an output float2 (density, distance), where distance is the distance from the quad to the 'point' on the procedural surface.
The question is, how do I map that into a depth buffer in any useful way?
The complete shader is here. To use it, create a new scene with a sphere at 0,0,0 with a size of at least 50 and assign the shader to it:
Shader "Shaders/Raymarching/BasicMarch" {
Properties {
_sun ("Sun", Vector) = (0, 0, 0, 0)
_far ("Far Depth Value", Float) = 20
_edgeFuzz ("Edge fuzziness", Range(1, 20)) = 1.0
_lightStep ("Light step", Range(0.1, 5)) = 1.0
_step ("Raycast step", Range(0.1, 5)) = 1.0
_dark ("Dark value", Color) = (0, 0, 0, 0)
_light ("Light Value", Color) = (1, 1, 1, 1)
[Toggle] _debugDepth ("Display depth field", Float) = 0
[Toggle] _debugLight ("Display light field", Float) = 0
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
Blend SrcAlpha OneMinusSrcAlpha
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
#include "UnityCG.cginc"
#include "UnityLightingCommon.cginc" // for _LightColor0
#define IF(a, b, c) lerp(b, c, step((fixed) (a), 0));
uniform float _far;
uniform float _lightStep;
uniform float3 _sun;
uniform float4 _light;
uniform float4 _dark;
uniform float _debugDepth;
uniform float _debugLight;
uniform float _edgeFuzz;
uniform float _step;
/**
* Sphere at origin c, size s
* #param center_ The center of the sphere
* #param radius_ The radius of the sphere
* #param point_ The point to check
*/
float geom_soft_sphere(float3 center_, float radius_, float3 point_) {
float rtn = distance(center_, point_);
return IF(rtn < radius_, (radius_ - rtn) / radius_ / _edgeFuzz, 0);
}
/**
* A rectoid centered at center_
* #param center_ The center of the cube
* #param halfsize_ The halfsize of the cube in each direction
*/
float geom_rectoid(float3 center_, float3 halfsize_, float3 point_) {
float rtn = IF((point_[0] < (center_[0] - halfsize_[0])) || (point_[0] > (center_[0] + halfsize_[0])), 0, 1);
rtn = rtn * IF((point_[1] < (center_[1] - halfsize_[1])) || (point_[1] > (center_[1] + halfsize_[1])), 0, 1);
rtn = rtn * IF((point_[2] < (center_[2] - halfsize_[2])) || (point_[2] > (center_[2] + halfsize_[2])), 0, 1);
rtn = rtn * distance(point_, center_);
float radius = length(halfsize_);
return IF(rtn > 0, (radius - rtn) / radius / _edgeFuzz, 0);
}
/**
* Calculate procedural geometry.
* Return (0, 0, 0) for empty space.
* #param point_ A float3; return the density of the solid at p.
* #return The density of the procedural geometry of p.
*/
float march_geometry(float3 point_) {
return
geom_rectoid(float3(0, 0, 0), float3(7, 7, 7), point_) +
geom_soft_sphere(float3(10, 0, 0), 7, point_) +
geom_soft_sphere(float3(-10, 0, 0), 7, point_) +
geom_soft_sphere(float3(0, 0, 10), 7, point_) +
geom_soft_sphere(float3(0, 0, -10), 7, point_);
}
/** Return a randomish value to sample step with */
float rand(float3 seed) {
return frac(sin(dot(seed.xyz ,float3(12.9898,78.233,45.5432))) * 43758.5453);
}
/**
* March the point p along the cast path c, and return a float2
* which is (density, depth); if the density is 0 no match was
* found in the given depth domain.
* #param point_ The origin point
* #param cast_ The cast vector
* #param max_ The maximum depth to step to
* #param step_ The increment to step in
* #return (denity, depth)
*/
float2 march_raycast(float3 point_, float3 cast_, float max_, float step_) {
float origin_ = point_;
float depth_ = 0;
float density_ = 0;
int steps = floor(max_ / step_);
for (int i = 0; (density_ <= 1) && (i < steps); ++i) {
float3 target_ = point_ + cast_ * i * step_ + rand(point_) * cast_ * step_;
density_ += march_geometry(target_);
depth_ = IF((depth_ == 0) && (density_ != 0), distance(point_, target_), depth_);
}
density_ = IF(density_ > 1, 1, density_);
return float2(density_, depth_);
}
/**
* Simple lighting; raycast from depth point to light source, and get density on path
* #param point_ The origin point on the render target
* #param cast_ The original cast (ie. camera view direction)
* #param raycast_ The result of the original raycast
* #param max_ The max distance to cast
* #param step_ The step increment
*/
float2 march_lighting(float3 point_, float3 cast_, float2 raycast_, float max_, float step_) {
float3 target_ = point_ + cast_ * raycast_[1];
float3 lcast_ = normalize(_sun - target_);
return march_raycast(target_, lcast_, max_, _lightStep);
}
struct fragmentInput {
float4 position : SV_POSITION;
float4 worldpos : TEXCOORD0;
float3 viewdir : TEXCOORD1;
};
struct fragmentOutput {
float4 color : SV_Target;
float zvalue : SV_Depth;
};
fragmentInput vert(appdata_base i) {
fragmentInput o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.worldpos = mul(_Object2World, i.vertex);
o.viewdir = -normalize(WorldSpaceViewDir(i.vertex));
return o;
}
fragmentOutput frag(fragmentInput i) {
fragmentOutput o;
// Raycast
float2 output = march_raycast(i.worldpos, i.viewdir, _far, _step);
float2 light = march_lighting(i.worldpos, i.viewdir, output, _far, _step);
float lvalue = 1.0 - light[0];
float depth = output[1] / _far;
// Generate fragment color
float4 color = lerp(_light, _dark, lvalue);
// Debugging: Depth
float4 debug_depth = float4(depth, depth, depth, 1);
color = IF(_debugDepth, debug_depth, color);
// Debugging: Color
float4 debug_light = float4(lvalue, lvalue, lvalue, 1);
color = IF(_debugLight, debug_light, color);
// Always apply the depth map
color.a = output[0];
o.zvalue = IF(output[1] > 0, 0, 1);
o.color = IF(output[1] <= 0, 0, color);
return o;
}
ENDCG
}
}
}
(Yes, I know it's quite complex, but it's very difficult to reduce this kind of shader into a 'simple test case' to play with)
I'll happily accept any answer which is a modification of the shader above that allows the procedural solid to obscure, or be obscured by, other geometry in the scene as though it was 'real geometry'.
--
Edit: You can get this 'working' by explicitly setting the depth value on the other geometry in the scene using the same depth function as the raymarcher.
However, I still cannot get this to work correctly with geometry using the 'standard' shader. Still hunting for a working solution...
Looking at the project you linked to, the most important difference I see is that their raycast march function uses a pass-by-reference parameter to return a fragment position called ray_pos. That position appears to be in object space, so they transform it using the view-projection matrix to get clip space and read a depth value.
The project also has a compute_depth function, but it looks pretty simple.
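From memory, that helper amounts to something like this sketch (the platform split is the part to double-check against your Unity version):
// Sketch: convert a clip-space position into a value suitable for SV_Depth.
// D3D-style platforms map depth to 0..1 after the perspective divide;
// OpenGL-style platforms use -1..1 and need remapping.
float compute_depth(float4 clippos) {
#if defined(SHADER_API_D3D11) || defined(SHADER_API_D3D9)
    return clippos.z / clippos.w;
#else
    return (clippos.z / clippos.w) * 0.5 + 0.5;
#endif
}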
Your march_raycast function is already calculating a target_ position, so you could refactor a bit, apply the out keyword to return it to the caller, and use it in depth calculations:
// Get the hit position using pass-by-ref
float3 ray_pos = i.worldpos.xyz;
float2 output = march_raycast(ray_pos, i.viewdir, _far, _step);
...
// Convert the position to clip space and derive depth from it
float4 clip_pos = mul(UNITY_MATRIX_VP, float4(ray_pos, 1.0));
o.zvalue = clip_pos.z / clip_pos.w;
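A hedged sketch of that refactor, reusing the original march loop and only surfacing the hit position (spelled here with inout so the call above works unchanged):
// Hypothetical refactor: march_raycast passes the world-space hit position
// back through its first parameter.
float2 march_raycast(inout float3 point_, float3 cast_, float max_, float step_) {
    float depth_ = 0;
    float density_ = 0;
    float3 hit_ = point_;
    int steps = floor(max_ / step_);
    for (int i = 0; (density_ <= 1) && (i < steps); ++i) {
        float3 target_ = point_ + cast_ * i * step_ + rand(point_) * cast_ * step_;
        density_ += march_geometry(target_);
        // Remember the first position where the density becomes non-zero
        if ((depth_ == 0) && (density_ != 0)) {
            depth_ = distance(point_, target_);
            hit_ = target_;
        }
    }
    point_ = hit_;   // hit position (or the origin if nothing was hit)
    density_ = IF(density_ > 1, 1, density_);
    return float2(density_, depth_);
}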
There might be a problem with the render setup.
To allow your shader to output per-pixel depth, its depth-test must be disabled. Otherwise, the GPU would, as an optimization, assume that all your pixels' depths are the interpolated depths from your vertices.
Since your shader does not do depth-tests, it must be rendered before the geometry that does, or it will just overwrite whatever the other geometry wrote to the depth buffer.
It must, however, have depth-write enabled, or the depth output of your pixel shader will be ignored and not written to the depth buffer.
Your RenderType is Transparent, which, I assume, disables depth-write. That would be a problem.
Your Queue is Transparent as well, which makes it render after all solid geometry, and back to front; that is a problem too, as we already concluded we have to render before.
So
put your shader in an early render queue that will render before solid geometry
have depth-write enabled
have depth-test disabled
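In ShaderLab terms, that could look something like the following sketch (the exact queue offset is a guess and may need tuning for your scene):
// Sketch: render before solid geometry, write depth, skip the depth test
Tags { "Queue"="Geometry-10" "IgnoreProjector"="True" "RenderType"="Opaque" }
ZWrite On
ZTest Always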