I am trying to write a simple Metal shader, for use from Swift, that takes a picture and moves it from left to right. Say the picture's x position starts at screenWidth - image.size.width and moves until it reaches screenWidth. I used MetalPetal, and my code distorts the image.
fragment float4 SimplePanFragmentRight(VertexOut vertexIn [[ stage_in ]],
                                       texture2d<float, access::sample> fromTexture [[ texture(0) ]],
                                       texture2d<float, access::sample> toTexture [[ texture(1) ]],
                                       constant float & scale [[ buffer(0) ]],
                                       constant float & rotations [[ buffer(1) ]],
                                       constant float2 & center [[ buffer(2) ]],
                                       constant float4 & backColor [[ buffer(3) ]],
                                       constant float & ratio [[ buffer(4) ]],
                                       constant float & progress [[ buffer(5) ]],
                                       sampler textureSampler [[ sampler(0) ]])
{
    float2 uv = vertexIn.textureCoordinate;
    uv.y = 1.0 - uv.y;
    float _fromR = float(fromTexture.get_width()) / float(fromTexture.get_height());
    float _toR = float(toTexture.get_width()) / float(toTexture.get_height());
    float t = 1.0;
    float pro = progress / 0.25;
    // hence bad code
    uv = adjustPos(uv, pro);
    // ****
    return mix(getFromColor(uv, fromTexture, ratio, _fromR),
               getToColor(uv, toTexture, ratio, _toR),
               t);
}
My "simple function" to manipulate the x position is:
float2 adjustPos(float2 uv, float amount) {
    uv.x = uv.x * amount;
    return uv;
}
How do I move the x position linearly, based on a progress ratio, without any image distortion?
You want to do vertex translation in your vertex shader, not in your fragment shader. The vertex shader is where transformations of your model's (image's) vertices happen, either because you moved your "camera" (i.e. a change of perspective) or because the thing moves in the environment. On entry, the grid coordinates will be in your picture's coordinates or in your virtual-world coordinates, depending on exactly how you call the shader. You translate those to coordinates relative to your view frustum (i.e. relative to your camera's position, direction, and orientation). For 2-D rendering, you can usually ignore the z part of the frustum coordinates (just set it to 0, so it's exactly on the view plane), which makes them the same as screen coordinates.
Your fragment shader is where you'd do effects on the image itself, for example blurring, or color mapping, texture mapping, etc...
Related
I have been trying to obtain the Z position of a vertex in the clip plane, i.e. its location in the depth buffer, but I have been observing weird behaviour affecting the result of UnityObjectToClipPos.
I have written a surface shader that colors vertices based on the depth. Here is the relevant code:
Tags { "RenderType"="Opaque" }
LOD 200
Cull Off

CGPROGRAM
#pragma target 3.0
#pragma surface surf StandardSpecular alphatest:_Cutoff addshadow vertex:vert
#pragma debug

struct Input
{
    float depth;
};

float posClipZ(float3 vertex)
{
    float4 clipPos = UnityObjectToClipPos(vertex);
    float depth = clipPos.z / clipPos.w;
    #if !defined(UNITY_REVERSED_Z)
    depth = depth * 0.5 + 0.5;
    #endif
    return depth;
}

void vert(inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.depth = posClipZ(v.vertex);
}

void surf(Input IN, inout SurfaceOutputStandardSpecular o)
{
    o.Albedo.x = clamp(IN.depth, 0, 1);
    o.Alpha = 1;
}
ENDCG
Based on my understanding, UnityObjectToClipPos should return the position of the vertex in the camera's clip coordinates, and the Z coordinate should be (when transformed from homogeneous coordinates) between 0 and 1. However, this is not what I am observing at all:
This shows the camera intersecting a sphere. Notice that vertices near or behind the camera near clip plane actually have negative depth (I've checked that with other conversions to albedo color). It also seems that clipPos.z is actually constant most of the time, and only clipPos.w is changing.
I've managed to hijack the generated fragment shader to add a SV_Position parameter, and this is what I actually expected to see in the first place:
However, I don't want to use SV_Position, as I want to be able to calculate the depth in the vertex shader from other positions.
It seems like UnityObjectToClipPos is not suited for the task, as the depth obtained that way is not even monotonic.
So, how can I mimic the second image via depth calculated in the vertex shader? It should also be perfect regarding interpolation, so I suppose I will have to use UnityObjectToViewPos first in the vertex shader to get the linear depth, then scale it in the fragment shader accordingly.
I am not completely sure why UnityObjectToClipPos didn't return anything useful, but it wasn't the right tool for the task anyway. The reason is that the depth of a vertex is not linear in the depth buffer, so the actual distance from the camera has to be used first for proper interpolation of the depth of all the pixels between the vertices:
float posClipZ(float3 vertex)
{
    float3 viewPos = UnityObjectToViewPos(vertex);
    return -viewPos.z;
}
Once the fragment/surface shader is executed, LinearEyeDepth seems to be the proper function to retrieve the expected depth value:
void surf(Input IN, inout SurfaceOutputStandardSpecular o)
{
    o.Albedo.x = clamp(LinearEyeDepth(IN.depth), 0, 1);
    o.Alpha = 1;
}
Once again it is important not to use LinearEyeDepth inside the vertex shader, since the values will be interpolated incorrectly.
So I'm developing a neural network to run on the GPU on iOS. Using matrix notation, I need (in order to backpropagate the errors) to be able to perform an outer product of two vectors.
// Outer product of vector A and vector B
kernel void outerProduct(const device float *inVectorA [[ buffer(0) ]],
                         const device float *inVectorB [[ buffer(1) ]],
                         device float *outVector [[ buffer(2) ]],
                         uint id [[ thread_position_in_grid ]]) {
    outVector[id] = inVectorA[id] * inVectorB[***?***]; // How to find this position in the thread group (or grid)?
}
You are using thread_position_in_grid incorrectly. If you are dispatching a 2D grid, it should be uint2 or ushort2; otherwise you only get the x coordinate. Refer to table 5.7 in the Metal Shading Language specification.
I'm not sure which outer product we are talking about, but I think the output should be a matrix. If you are storing it linearly, then your code to calculate outVector should look something like this:
kernel void outerProduct(const device float *inVectorA [[ buffer(0) ]],
                         const device float *inVectorB [[ buffer(1) ]],
                         uint2 gridSize [[ threads_per_grid ]],
                         device float *outVector [[ buffer(2) ]],
                         uint2 id [[ thread_position_in_grid ]]) {
    outVector[id.y * gridSize.x + id.x] = inVectorA[id.x] * inVectorB[id.y];
}
Also, if you are dispatching a grid exactly the size of inVectorA × inVectorB, you can use the threads_per_grid attribute on a kernel argument to find out how big the grid is.
Alternatively, you can just pass the sizes of the vectors alongside the vectors themselves.
I am modifying a SceneKit Metal shader given in https://medium.com/#MalikAlayli/metal-with-scenekit-create-your-first-shader-2c4e4e983300. It displays a cube with an image texture, rendered by SceneKit with a Metal shader.
I changed the cube to a sphere of radius 3, centred at (0,0,0) using SCNSphere(radius: 3). Then, I used clip_distance to "cut" away a portion of the sphere satisfying in.position.z > 1.5. The result is shown in the image below. The Metal shader I am using is also given below.
As you can see, the boundary is not smooth. It exhibits boundaries of polygons, instead of an interpolated surface. So, is it possible to make it smooth? If yes, how? Thank you.
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float2x3 boundingBox;
};

struct VertexInput {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct VertexOut {
    float4 position [[position]];
    float2 uv;
    float clip_distance [[clip_distance]];
};

vertex VertexOut textureSamplerVertex(VertexInput in [[ stage_in ]], constant NodeBuffer& scn_node [[ buffer(1) ]]) {
    VertexOut out;
    out.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    out.uv = in.uv;
    if (in.position.z > 1.5) {
        out.clip_distance = -1;
    } else {
        out.clip_distance = 1;
    }
    return out;
}

fragment float4 textureSamplerFragment(VertexOut out [[ stage_in ]], texture2d<float, access::sample> customTexture [[ texture(0) ]]) {
    constexpr sampler textureSampler(coord::normalized, filter::linear, address::repeat);
    return customTexture.sample(textureSampler, out.uv);
}
The clip distances will be linearly interpolated across the primitive, and the portion of the primitive with interpolated distances less than 0.0 will be clipped. (gl_ClipDistance)
For the frontier of clipped fragments to be exactly at z = 1.5, you need to make sure that the interpolated clip distance is exactly 0.0 when z = 1.5, and positive or negative on each side:
out.clip_distance = (1.5 - in.position.z);
I was asked to draw a line between two given points using a surface shader. The points are given in texture coordinates (between 0 and 1) and go directly into the surface shader in Unity. I want to do this by calculating the pixel position and checking whether it is on that line. So I either need to translate the texture coordinate to a world position, or get the position of the pixel relative to that texture coordinate.
But I only found worldPos and screenPos in the Unity shader manual. Is there some way I can get the position in texture coordinates (or at least get the size of the textured object in world space)?
Here is a simple example:
Shader "Line" {
    Properties {
        // Easiest way to get access to UVs in surface shaders is to define a texture
        _MainTex("Texture", 2D) = "white" {}
        // We can pack both points into one vector
        _Line("Start Pos (xy), End Pos (zw)", Vector) = (0, 0, 1, 1)
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert

        sampler2D _MainTex;
        float4 _Line;

        struct Input {
            // This UV value will now represent the pixel coordinate in UV space
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            float2 start = _Line.xy;
            float2 end = _Line.zw;
            float2 pos = IN.uv_MainTex.xy;
            // Do some calculations, then write the result
            // (surf returns void, so output goes through o.Albedo)
            o.Albedo = fixed3(1, 1, 1);
        }
        ENDCG
    }
}
Here is a good post on how to calculate whether a point is on a line:
How to check if a point lies on a line between 2 other points
Let's say you define a function from this with the following signature:
inline bool IsPointOnLine(float2 p, float2 l1, float2 l2)
Then you can use it to pick the output color:
o.Albedo = IsPointOnLine(pos, start, end) ? _LineColor.rgb : _BackgroundColor.rgb;
If you want UV coordinates without using a texture, I recommend making a vertex/fragment shader instead and defining float2 uv : TEXCOORD0 inside the appdata/VertexInput struct. You can then pass that on to the fragment shader inside the vertex function.
After hours of Google, copy-pasting code, and playing around, I still could not find a solution to my problem.
I try to write a postprocessing shader using the vertex and fragment functions. My problem is that I do not know how to compute the radial distance of the current vertex to the camera position (or any other given position) in world coordinates.
My goal is the following:
Consider a very big 3D plane where the camera is on top and looks exactly down to the plane. I now want a postprocessing shader that draws a white line onto the plane, such that only those pixels that have a certain radial distance to the camera are painted white. The expected result would be a white circle (in this specific setup).
I know how to do this in principle, but the problem is that I cannot find out how to compute the radial distance to the vertex.
The problem here might be that this is a POSTPROCESSING shader, so it is not applied to a specific object. If it were, I could get the world coordinates of the vertex by using mul(unity_ObjectToWorld, v.vertex), but for postprocessing shaders this gives a nonsensical value.
This is my debug code for this issue:
Shader "NonHidden/TestShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" "Queue"="Transparent-1" }
        LOD 100
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _CameraDepthTexture;
            uniform float4 _MainTex_TexelSize;

            // V2F
            struct v2f {
                float4 outpos : SV_POSITION;
                float4 worldPos : TEXCOORD0;
                float3 rayDir : TEXCOORD1;
                float3 camNormal : TEXCOORD2;
            };

            // Sample depth
            float sampleDepth(float2 uv) {
                return Linear01Depth(
                    UNITY_SAMPLE_DEPTH(
                        tex2D(_CameraDepthTexture, uv)));
            }

            // VERTEX
            v2f vert (appdata_tan v)
            {
                TANGENT_SPACE_ROTATION;
                v2f o;
                o.outpos = UnityObjectToClipPos(v.vertex);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.rayDir = mul(rotation, ObjSpaceViewDir(v.vertex));
                o.camNormal = UNITY_MATRIX_IT_MV[2].xyz;
                return o;
            }

            // FRAGMENT
            fixed4 frag (v2f IN) : SV_Target
            {
                // Get uv coordinates
                float2 uv = IN.outpos.xy * (_ScreenParams.zw - 1.0f);

                // Flip y if necessary
                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                {
                    uv.y = 1 - uv.y;
                }
                #endif

                // Get depth
                float depth = sampleDepth(uv);

                // Set color
                fixed4 color = 0;
                if (depth.x < 1)
                {
                    color.r = IN.worldPos.x;
                    color.g = IN.worldPos.y;
                    color.b = IN.worldPos.z;
                }
                return color;
            }
            ENDCG
        }
    }
}
Current State
This image shows the result when the camera looks down on the plane:
Image 1: Actual result
The blue value is (for whatever reason) 25 in every pixel. The red and green areas reflect the x-y coordinates of the screen.
Even if I rotate the camera a little bit, I get the exact same shading at the same screen coordinates:
That shows me that the computed "worldPos" coordinates are screen coordinates and have nothing to do with the world coordinates of the plane.
Expected Result
The result I expect to see is the following:
Here, pixels that have the same (radial) distance to the camera have the same color.
How do I need to change the above code to achieve this effect? With rayDir (computed in the vert function) I tried to get at least the direction vector from the camera center to the current pixel, such that I could compute the radial distance using the depth information. But rayDir has a constant value for all pixels ...
At this point I also have to say that I don't really understand what is computed inside the vert function. This is just stuff that I found on the internet and that I tried out.
Alright, I found a solution to my problem after coming across this video: Shaders Case Study - No Man's Sky: Topographic Scanner
In the video description is a link to the corresponding Git repository. I downloaded, analyzed, and rewrote the code so that it fits my purpose and is easier to read and understand.
The major thing I learned is that there is no built-in way to compute the radial distance using post-processing shaders (correct me if I'm wrong!). So in order to get the radial distance, the only way seems to be to use the direction vector from the camera to the vertex together with the depth buffer. Since the direction vector is also not available in a built-in way, a trick is used:
Instead of using the Graphics.Blit function in the post-processing script, a custom Blit function can be used to set some additional shader variables. In this case, the camera's frustum corner rays are stored in a second set of texture coordinates, which are then available in the shader code as TEXCOORD1. The trick here is that the corresponding shader variable automatically contains an interpolated value that is identical to the direction vector ("frustum ray") I was looking for.
The code of the calling script now looks as follows:
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class TestShaderEffect : MonoBehaviour
{
    private Material material;
    private Camera cam;

    void OnEnable()
    {
        // Create a material that uses the desired shader
        material = new Material(Shader.Find("Test/RadialDistance"));

        // Get the camera object (this script must be assigned to a camera)
        cam = GetComponent<Camera>();

        // Enable depth buffer generation
        // (writes to the '_CameraDepthTexture' variable in the shader)
        cam.depthTextureMode = DepthTextureMode.Depth;
    }

    [ImageEffectOpaque] // Draw after opaque, but before transparent geometry
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Call custom Blit function
        // (usually Graphics.Blit is used)
        RaycastCornerBlit(source, destination, material);
    }

    void RaycastCornerBlit(RenderTexture source, RenderTexture destination, Material mat)
    {
        // Compute (half) camera frustum size (at distance 1.0)
        float angleFOVHalf = cam.fieldOfView / 2 * Mathf.Deg2Rad;
        float heightHalf = Mathf.Tan(angleFOVHalf);
        float widthHalf = heightHalf * cam.aspect; // aspect = width/height

        // Compute helper vectors (camera orientation weighted with frustum size)
        Vector3 vRight = cam.transform.right * widthHalf;
        Vector3 vUp = cam.transform.up * heightHalf;
        Vector3 vFwd = cam.transform.forward;

        // Custom Blit
        // ===========
        // Set the given destination texture as the active render texture
        RenderTexture.active = destination;

        // Set the '_MainTex' variable to the texture given by 'source'
        mat.SetTexture("_MainTex", source);

        // Store current transformation matrix
        GL.PushMatrix();

        // Load orthographic transformation matrix
        // (sets viewing frustum from [0,0,-1] to [1,1,100])
        GL.LoadOrtho();

        // Use the first pass of the shader for rendering
        mat.SetPass(0);

        // Activate quad draw mode and draw a quad
        GL.Begin(GL.QUADS);
        {
            // Using MultiTexCoord2 (TEXCOORD0) and Vertex3 (POSITION) to draw on the whole screen
            // Using MultiTexCoord to write the frustum information into TEXCOORD1
            // -> When the shader is called, the TEXCOORD1 value is automatically an interpolated value

            // Bottom left
            GL.MultiTexCoord2(0, 0, 0);
            GL.MultiTexCoord(1, (vFwd - vRight - vUp) * cam.farClipPlane);
            GL.Vertex3(0, 0, 0);

            // Bottom right
            GL.MultiTexCoord2(0, 1, 0);
            GL.MultiTexCoord(1, (vFwd + vRight - vUp) * cam.farClipPlane);
            GL.Vertex3(1, 0, 0);

            // Top right
            GL.MultiTexCoord2(0, 1, 1);
            GL.MultiTexCoord(1, (vFwd + vRight + vUp) * cam.farClipPlane);
            GL.Vertex3(1, 1, 0);

            // Top left
            GL.MultiTexCoord2(0, 0, 1);
            GL.MultiTexCoord(1, (vFwd - vRight + vUp) * cam.farClipPlane);
            GL.Vertex3(0, 1, 0);
        }
        GL.End(); // Finish quad drawing

        // Restore original transformation matrix
        GL.PopMatrix();
    }
}
The shader code looks like this:
Shader "Test/RadialDistance"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct VertIn
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float4 ray : TEXCOORD1;
            };

            struct VertOut
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float4 interpolatedRay : TEXCOORD1;
            };

            // Parameter variables
            sampler2D _MainTex;

            // Auto-filled variables
            float4 _MainTex_TexelSize;
            sampler2D _CameraDepthTexture;

            // Generate a jet-color-scheme color based on a value t in [0, 1]
            half3 JetColor(half t)
            {
                half3 color = 0;
                color.r = min(1, max(0, 4 * t - 2));
                color.g = min(1, max(0, -abs(4 * t - 2) + 2));
                color.b = min(1, max(0, -4 * t + 2));
                return color;
            }

            // VERT
            VertOut vert(VertIn v)
            {
                VertOut o;

                // Get vertex and uv coordinates
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv.xy;

                // Flip uv's if necessary
                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                    o.uv.y = 1 - o.uv.y;
                #endif

                // Get the interpolated frustum ray
                // (generated by the calling script's custom Blit function)
                o.interpolatedRay = v.ray;
                return o;
            }

            // FRAG
            float4 frag (VertOut i) : SV_Target
            {
                // Get the color from the texture
                half4 colTex = tex2D(_MainTex, i.uv);

                // flat depth value with high precision nearby and bad precision far away???
                float rawDepth = DecodeFloatRG(tex2D(_CameraDepthTexture, i.uv));

                // flat depth but with higher precision far away and lower precision nearby???
                float linearDepth = Linear01Depth(rawDepth);

                // Vector from camera position to the vertex in world space
                float4 wsDir = linearDepth * i.interpolatedRay;

                // Position of the vertex in world space
                float3 wsPos = _WorldSpaceCameraPos + wsDir;

                // Distance to a given point in world space coordinates
                // (in this case the camera position, so: dist = length(wsDir))
                float dist = distance(wsPos, _WorldSpaceCameraPos);

                // Get color by distance (same distance means same color)
                half4 color = 1;
                half t = saturate(dist / 100.0);
                color.rgb = JetColor(t);

                // Set color to red at a hard-coded distance -> red circle
                if (dist < 50 && dist > 50 - 1 && linearDepth < 1)
                {
                    color.rgb = half3(1, 0, 0);
                }

                return color * colTex;
            }
            ENDCG
        }
    }
}
I'm now able to achieve the desired effect:
But there are still some questions I have and I would be thankful if anyone could answer them for me:
Is there really no other way to get the radial distance? Using a direction vector and the depth buffer is inefficient and inaccurate.
I don't really understand the content of the rawDepth variable. I mean, yes, it's some depth information, but if you use the depth information as a texture color, you basically get a black image unless you are ridiculously close to an object. That leads to very bad resolution for objects that are further away. How can anyone work with that?
I don't understand what exactly the Linear01Depth function does. Since the Unity documentation sucks in general, it doesn't offer any information about this one either.