Post-tessellation Vertex Function and Raytracing: Getting More Detailed Geometries for the Acceleration Structures - swift

I have recently become interested in the raytracing API provided by the Metal framework. I understand that you can attach a vertex buffer to a geometry descriptor that Metal will later use to build the acceleration structure (via the geometry descriptors on an MTLPrimitiveAccelerationStructureDescriptor instance, for example).
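For reference, the kind of setup I have in mind is roughly the following (a minimal Swift sketch; the device, vertex buffer and triangle count are assumed to exist already):
let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
geometry.vertexBuffer = vertexBuffer          // packed float3 positions
geometry.vertexStride = MemoryLayout<SIMD3<Float>>.stride
geometry.triangleCount = triangleCount
let accelDescriptor = MTLPrimitiveAccelerationStructureDescriptor()
accelDescriptor.geometryDescriptors = [geometry]
// Allocate storage; the actual build is then encoded with an
// MTLAccelerationStructureCommandEncoder.
let sizes = device.accelerationStructureSizes(descriptor: accelDescriptor)
let accelerationStructure = device.makeAccelerationStructure(size: sizes.accelerationStructureSize)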
This made me wonder whether it is possible to write the output of the tessellator into a separate vertex buffer from the post-tessellation vertex function and pass that along to the raytracer. I thought that perhaps you could get more detailed geometry that way and still render without rasterization. For example, I might have the following simple post-tessellation vertex function:
[[patch(triangle, 3)]]
vertex FunctionOutIn tessellation_vertex_triangle(PatchIn patchIn [[stage_in]],
float3 patch_coord [[ position_in_patch ]])
{
// Barycentric coordinates
float u = patch_coord.x;
float v = patch_coord.y;
float w = patch_coord.z;
// Convert to cartesian coordinates
float x = u * patchIn.control_points[0].position.x + v * patchIn.control_points[1].position.x + w * patchIn.control_points[2].position.x;
float y = u * patchIn.control_points[0].position.y + v * patchIn.control_points[1].position.y + w * patchIn.control_points[2].position.y;
// Output
FunctionOutIn vertexOut;
vertexOut.position = float4(x, y, 0.0, 1.0);
vertexOut.color = half4(u, v, w, 1.0h);
return vertexOut;
}
However, the following doesn't compile
// Triangle post-tessellation vertex function
[[patch(triangle, 3)]]
vertex void tessellation_vertex_triangle(device void *outputBuffer [[ buffer(0) ]],
PatchIn patchIn [[stage_in]],
float3 patch_coord [[ position_in_patch ]])
{
// Barycentric coordinates
float u = patch_coord.x;
float v = patch_coord.y;
float w = patch_coord.z;
// Convert to cartesian coordinates
float x = u * patchIn.control_points[0].position.x + v * patchIn.control_points[1].position.x + w * patchIn.control_points[2].position.x;
float y = u * patchIn.control_points[0].position.y + v * patchIn.control_points[1].position.y + w * patchIn.control_points[2].position.y;
// Output
FunctionOutIn vertexOut;
vertexOut.position = float4(x, y, 0.0, 1.0);
vertexOut.color = half4(u, v, w, 1.0h);
}
I also noticed that the function doesn't compile when I don't use the control-point data in the output, like so:
[[patch(triangle, 3)]]
vertex FunctionOutIn tessellation_vertex_triangle(PatchIn patchIn [[stage_in]],
float3 patch_coord [[ position_in_patch ]])
{
// Barycentric coordinates
float u = patch_coord.x;
float v = patch_coord.y;
float w = patch_coord.z;
// Convert to cartesian coordinates
float x = u * patchIn.control_points[0].position.x + v * patchIn.control_points[1].position.x + w * patchIn.control_points[2].position.x;
float y = u * patchIn.control_points[0].position.y + v * patchIn.control_points[1].position.y + w * patchIn.control_points[2].position.y;
// Output
FunctionOutIn vertexOut;
// Does not use x or y (and therefore the `patch_control_point<T>`'s values
// are not used as output into the rasterizer)
vertexOut.position = float4(1.0, 1.0, 0.0, 1.0);
vertexOut.color = half4(1.0h, 1.0h, 1.0h, 1.0h);
return vertexOut;
}
I looked at the publicly exposed patch_control_point<T> template but didn't see anything that would enforce this. What is going on here?
In particular, how would I go about increasing the detail of the geometry fed into the raytracer? Would I simply have to use more complex assets? Tessellation has its place in the rasterization pipeline, but can it be used elsewhere?
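(To make the question concrete: what I am picturing is something like a compute pass that evaluates subdivided patch positions straight into a buffer that the geometry descriptor points at. A rough, untested sketch of that idea, where the buffer layout and dispatch size are pure assumptions:)
kernel void evaluate_patch(device const float3 *controlPoints [[ buffer(0) ]],
                           device float3       *outVertices   [[ buffer(1) ]],
                           device const float3 *barycentrics  [[ buffer(2) ]],
                           constant uint       &vertexCount   [[ buffer(3) ]],
                           uint tid [[ thread_position_in_grid ]])
{
    if (tid >= vertexCount) { return; }
    // (u, v, w) for this output vertex, precomputed on the CPU for a uniform subdivision
    float3 bc = barycentrics[tid];
    outVertices[tid] = bc.x * controlPoints[0]
                     + bc.y * controlPoints[1]
                     + bc.z * controlPoints[2];
}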

Related

Problems and Inaccuracies Converting + Interpreting Unity Shadergraph to C#

Context
I've been trying to create a buoyancy script that samples the position of a point, tests whether it's under a certain level (the "water level"), and adds a force at that position based on depth. Separately, I worked on creating a nice-looking water shader in Shadergraph, and had the bright idea of adding waves using the Simple Noise node plus vertex displacement.
However, the only way I could think of to use those displaced values as the float "water level" was to rewrite the entire node tree in C# and use that to sample the "water level" at a given position.
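For context, the buoyancy check is conceptually something like this (a simplified sketch, not the actual script; the field names are placeholders):
// Simplified sketch of the buoyancy idea described above.
float waterLevel = _waveCalculator.GetWaveHeightAtPosition(samplePoint.position);
float depth = waterLevel - samplePoint.position.y;
if (depth > 0f)
{
    // Push up harder the deeper the point is submerged.
    _rigidbody.AddForceAtPosition(Vector3.up * depth * _buoyancyStrength, samplePoint.position);
}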
Problem
For some reason, the final displaced mesh and the calculated positions are different, causing the buoyancy script to assume that the "water level" is higher or lower than it actually is. The difference isn't large, so I'm assuming there's an error somewhere in either the C# node-graph translation or the C# Simple Noise translation.
Is that correct? If so, where and what's my misunderstanding? If not, what else could have gone wrong?
Approach
Node Graph
Image of the node graph for the wave vector displacement
*If you need zoomed in pictures, let me know!
All things considered, it's relatively simple. It:
Takes the world position as a UV, and offsets and tiles it.
Feeds the UV to a Simple Noise node, and multiplies the noise by a strength.
Clamps the output.
Repeats 1-3 again and adds both together for more detail.
Replaces the Y value of the vertex position with the combined wave value.
C# Script
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class WaveHeightCalculator : MonoBehaviour
{
// Step 1
[SerializeField] Material _waterMaterial;
[Header("Waves")]
[SerializeField] float _waveTiling;
[SerializeField] float _waveOffset;
[SerializeField] float _waveMin;
[SerializeField] float _waveMax;
[Header("Small Waves")]
[SerializeField] float _wavesSmallScale;
[SerializeField] float _wavesSmallStrength;
[SerializeField] Vector2 _wavesSmallVelocity;
[Header("Large Waves")]
[SerializeField] float _wavesLargeScale;
[SerializeField] float _wavesLargeStrength;
[SerializeField] Vector2 _wavesLargeVelocity;
// Step 2
private void OnValidate()
{
_waterMaterial = GetComponent<Renderer>().sharedMaterial;
SetVariables();
}
void SetVariables()
{
_waveTiling = _waterMaterial.GetFloat("_Wave_Tiling");
_waveOffset = _waterMaterial.GetFloat("_Wave_Offset");
_waveMin = _waterMaterial.GetFloat("_Wave_Min");
_waveMax = _waterMaterial.GetFloat("_Wave_Max");
_wavesSmallScale = _waterMaterial.GetFloat("_Waves_Small_Scale");
_wavesSmallStrength = _waterMaterial.GetFloat("_Waves_Small_Strength");
_wavesSmallVelocity = _waterMaterial.GetVector("_Waves_Small_Velocity");
_wavesLargeScale = _waterMaterial.GetFloat("_Waves_Large_Scale");
_wavesLargeStrength = _waterMaterial.GetFloat("_Waves_Large_Strength");
_wavesLargeVelocity = _waterMaterial.GetVector("_Waves_Large_Velocity");
}
// Step 3
public float GetWaveHeightAtPosition(Vector3 position)
{
Vector2 noiseMapUV;
noiseMapUV = new Vector2(position.x, position.z) * _waveTiling;
// Calculate Small Waves
Vector2 wavesSmallUVOffset = (Time.time / 20) * _wavesSmallVelocity;
float noiseValueAtUVPlusOffset = UnitySimpleNoiseAtUV(noiseMapUV + wavesSmallUVOffset, _wavesSmallScale);
float wavesSmall = noiseValueAtUVPlusOffset * _wavesSmallStrength;
// Calculate Large Waves
Vector2 wavesLargeUVOffset = (Time.time / 20) * _wavesLargeVelocity;
noiseValueAtUVPlusOffset = UnitySimpleNoiseAtUV(noiseMapUV + wavesLargeUVOffset, _wavesLargeScale);
float wavesLarge = noiseValueAtUVPlusOffset * _wavesLargeStrength;
// Combine
float waveHeight = wavesSmall + wavesLarge;
// Clamp
waveHeight = Mathf.Clamp(waveHeight, _waveMin, _waveMax);
// Offset
waveHeight += _waveOffset;
return waveHeight;
}
In the C# script, a couple of things are going on. Here's my thought process for it:
It declares member variables for the relevant material properties.
It reads those values from the material in the OnValidate() function.
It uses those values to calculate the wave height, which is equivalent to the "water level".
The script also contains, and relies on, my best attempt at translating the Simple Noise node from "Show Generated Code", which looked like this.
Generated Code
inline float Unity_SimpleNoise_RandomValue_float (float2 uv)
{
float angle = dot(uv, float2(12.9898, 78.233));
#if defined(SHADER_API_MOBILE) && (defined(SHADER_API_GLES) || defined(SHADER_API_GLES3) || defined(SHADER_API_VULKAN))
// 'sin()' has bad precision on Mali GPUs for inputs > 10000
angle = fmod(angle, TWO_PI); // Avoid large inputs to sin()
#endif
return frac(sin(angle)*43758.5453);
}
inline float Unity_SimpleNnoise_Interpolate_float (float a, float b, float t)
{
return (1.0-t)*a + (t*b);
}
inline float Unity_SimpleNoise_ValueNoise_float (float2 uv)
{
float2 i = floor(uv);
float2 f = frac(uv);
f = f * f * (3.0 - 2.0 * f);
uv = abs(frac(uv) - 0.5);
float2 c0 = i + float2(0.0, 0.0);
float2 c1 = i + float2(1.0, 0.0);
float2 c2 = i + float2(0.0, 1.0);
float2 c3 = i + float2(1.0, 1.0);
float r0 = Unity_SimpleNoise_RandomValue_float(c0);
float r1 = Unity_SimpleNoise_RandomValue_float(c1);
float r2 = Unity_SimpleNoise_RandomValue_float(c2);
float r3 = Unity_SimpleNoise_RandomValue_float(c3);
float bottomOfGrid = Unity_SimpleNnoise_Interpolate_float(r0, r1, f.x);
float topOfGrid = Unity_SimpleNnoise_Interpolate_float(r2, r3, f.x);
float t = Unity_SimpleNnoise_Interpolate_float(bottomOfGrid, topOfGrid, f.y);
return t;
}
void Unity_SimpleNoise_float(float2 UV, float Scale, out float Out)
{
float t = 0.0;
float freq = pow(2.0, float(0));
float amp = pow(0.5, float(3-0));
t += Unity_SimpleNoise_ValueNoise_float(float2(UV.x*Scale/freq, UV.y*Scale/freq))*amp;
freq = pow(2.0, float(1));
amp = pow(0.5, float(3-1));
t += Unity_SimpleNoise_ValueNoise_float(float2(UV.x*Scale/freq, UV.y*Scale/freq))*amp;
freq = pow(2.0, float(2));
amp = pow(0.5, float(3-2));
t += Unity_SimpleNoise_ValueNoise_float(float2(UV.x*Scale/freq, UV.y*Scale/freq))*amp;
Out = t;
}
/* WARNING: $splice Could not find named fragment 'CustomInterpolatorPreVertex' */
// Graph Vertex
// GraphVertex: <None>
/* WARNING: $splice Could not find named fragment 'CustomInterpolatorPreSurface' */
// Graph Pixel
struct SurfaceDescription
{
float4 Out;
};
Translated Code
float float_frac(float x) { return x - Mathf.Floor(x);}
Vector2 frac(Vector2 x) { return x - new Vector2(Mathf.Floor(x.x), Mathf.Floor(x.y));}
float sin(float x) { return Mathf.Sin(x);}
float dot(Vector2 a, Vector2 b) { return a.x * b.x + a.y * b.y;}
float float_floor(float x) { return Mathf.Floor(x);}
Vector2 floor(Vector2 x) { return new Vector2(Mathf.Floor(x.x), Mathf.Floor(x.y));}
float float_abs(float x) { return Mathf.Abs(x);}
Vector2 abs(Vector2 x) { return new Vector2(Mathf.Abs(x.x), Mathf.Abs(x.y));}
float pow (float x, float y) { return Mathf.Pow(x, y);}
float Unity_SimpleNoise_RandomValue_float (Vector2 uv)
{
float angle = dot(uv, new Vector2(12.9898f, 78.233f));
return float_frac(sin(angle) * 43758.5453f);
}
float Unity_SimpleNnoise_Interpolate_float (float a, float b, float t)
{
return (1.0f - t) * a + (t * b);
}
float Unity_SimpleNoise_ValueNoise_float (Vector2 uv)
{
Vector2 i = floor(uv);
Vector2 f = frac(uv);
f = (f * f) * (new Vector2 (3.0f, 3.0f) - new Vector2(2.0f, 2.0f) * f);
uv = abs(frac(uv) - new Vector2 (0.5f, 0.5f));
Vector2 c0 = i + new Vector2(0.0f, 0.0f);
Vector2 c1 = i + new Vector2(1.0f, 0.0f);
Vector2 c2 = i + new Vector2(0.0f, 1.0f);
Vector2 c3 = i + new Vector2(1.0f, 1.0f);
float r0 = Unity_SimpleNoise_RandomValue_float(c0);
float r1 = Unity_SimpleNoise_RandomValue_float(c1);
float r2 = Unity_SimpleNoise_RandomValue_float(c2);
float r3 = Unity_SimpleNoise_RandomValue_float(c3);
float bottomOfGrid = Unity_SimpleNnoise_Interpolate_float(r0, r1, f.x);
float topOfGrid = Unity_SimpleNnoise_Interpolate_float(r2, r3, f.x);
float t = Unity_SimpleNnoise_Interpolate_float(bottomOfGrid, topOfGrid, f.y);
return t;
}
float UnitySimpleNoiseAtUV(Vector2 UV, float Scale)
{
float t = 0.0f;
float freq = pow(2.0f, 0);
float amp = pow(0.5f, 3-0);
t += Unity_SimpleNoise_ValueNoise_float(new Vector2(UV.x*Scale/freq, UV.y*Scale/freq))*amp;
freq = pow(2.0f, 1);
amp = pow(0.5f, 3-1);
t += Unity_SimpleNoise_ValueNoise_float(new Vector2(UV.x * Scale / freq, UV.y * Scale / freq)) * amp;
freq = pow(2.0f, 2);
amp = pow(0.5f, 3-2);
t += Unity_SimpleNoise_ValueNoise_float(new Vector2(UV.x * Scale / freq, UV.y * Scale / freq)) * amp;
return t;
}

Generating a normal map from a height map in compute shader?

The problem is that when I try to convert the height map to a normal map, the results are wrong. For some reason it looks as if there are three light sources emitting from the top (green), right (red), and left (blue) of the texture.
This is the GeoMath.hlsl code that I am using
static const float PI = 3.141592653589793238462643383279;
float2 longitudeLatitudeToUV(float2 longLat) {
float longitude = longLat[0];
float latitude = longLat[1];
float u = longitude / (2 * PI) + 0.5;
float v = latitude / PI + 0.5;
return float2(u,v);
}
float3 longitudeLatitudeToPoint(float2 longLat) {
float longitude = longLat[0];
float latitude = longLat[1];
float x;
float y;
float z;
y = sin(latitude);
float r = cos(latitude);
x = sin(longitude) * r;
z = -cos(longitude) * r;
return float3(x, y, z);
}
float2 uvToLongitudeLatitude(float2 uv) {
float longitude = (uv.x - 0.5) * (2 * PI);
float latitude = (uv.y - 0.5) * PI;
return float2(longitude, latitude);
}
float2 pointToLongitudeLatitude(float3 p) {
float longitude = atan2(p.x, p.z);
float latitude = asin(p.y);
return float2(longitude, latitude);
}
float2 pointToUV(float3 p) {
p = normalize(p);
return longitudeLatitudeToUV(pointToLongitudeLatitude(p));
}
This is the compute shader I am using to convert the height map into a normal map.
#pragma kernel CSMain
#include "GeoMath.hlsl"
Texture2D<float> _HeightMap;
RWTexture2D<float4> _NormalMap;
int _TextureSize_Width;
int _TextureSize_Height;
float _WorldRadius;
float _HeightMultiplier;
float3 CalculateWorldPoint(uint2 texCoord)
{
float2 uv = texCoord / float2(_TextureSize_Width - 1, _TextureSize_Height - 1);
float2 longLat = uvToLongitudeLatitude(uv);
float3 spherePoint = longitudeLatitudeToPoint(longLat);
float height01 = _HeightMap[texCoord].r + 1.0;
float worldHeight = _WorldRadius + height01 * _HeightMultiplier;
return spherePoint * worldHeight;
}
uint2 WrapIndex(uint2 texCoord)
{
texCoord.x = (texCoord.x + _TextureSize_Width) % _TextureSize_Width;
texCoord.y = max(min(_TextureSize_Height - 1, texCoord.y), 0);
return texCoord;
}
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
float3 normalVector;
float3 posNorth = CalculateWorldPoint(WrapIndex(id.xy + uint2(0, 1)));
float3 posSouth = CalculateWorldPoint(WrapIndex(id.xy + uint2(0, -1)));
float3 posEast = CalculateWorldPoint(WrapIndex(id.xy + uint2(1, 0)));
float3 posWest = CalculateWorldPoint(WrapIndex(id.xy + uint2(-1, 0)));
float3 dirNorth = normalize(posNorth - posSouth);
float3 dirEast = normalize(posEast - posWest);
normalVector = normalize(cross(dirNorth, dirEast));
_NormalMap[id.xy] = float4(normalVector, 1.0);
}
The result I am getting is shown below: height map (top), generated normal map from the code above (bottom).
I believe you are trying to get object-space normals, but a tiny detail is missing: each axis of a normalized vector3 can range from -1 to 1, while a pixel channel can only store values from 0 to 1. You just need to remap the range. This line roughly fixes the problem:
_NormalMap[id.xy] = float4(normalVector / 2 + float3(0.5, 0.5, 0.5), 1.0);
Result

Green Chroma Key Shader using depth

I have written a shader which converts the RGB camera value to a luma/chroma (YCrCb) representation and then applies some filtering to key out the green.
Current Problem
If the object in the foreground (the player) has green pixels, those get cut out as well.
I already have a depth camera; how can I use that to make a better chroma-key cutout?
fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
if (_ShowBackground)
{
fixed4 col2 = tex2D(_TexReplacer, i.uv);
col = col2;
}
else if (!_ShowOriginal)
{
fixed4 col2 = tex2D(_TexReplacer, i.uv);
float maskY = 0.2989 * _GreenColor.r + 0.5866 * _GreenColor.g + 0.1145 * _GreenColor.b;
float maskCr = 0.7132 * (_GreenColor.r - maskY);
float maskCb = 0.5647 * (_GreenColor.b - maskY);
float Y = 0.2989 * col.r + 0.5866 * col.g + 0.1145 * col.b;
float Cr = 0.7132 * (col.r - Y);
float Cb = 0.5647 * (col.b - Y);
float alpha = smoothstep(_Sensitivity, _Sensitivity + _Smooth, distance(float2(Cr, Cb), float2(maskCr, maskCb)));
col = (alpha * col) + ((1 - alpha) * col2);
}
return col;
}
Unity's UnityObjectToClipPos(float3 pos) lets you transform a vertex into clip space; after the perspective divide, the z component encodes the distance from the rendering camera (between the near and far clipping planes, I believe).
You can use this distance to apply your keying only to fragments farther away than a given threshold.
If you do not want to use normalized coordinates, you can also convert your vertex to world space using mul(unity_ObjectToWorld, vertex.position) and then to camera space by multiplying the world position with the camera's world-to-local matrix (which you have to pass into your shader).
To access the camera's depth texture in a shader you can use _CameraDepthTexture (see the documentation at https://docs.unity3d.com/Manual/SL-CameraDepthTexture.html, section "Shader variables").
You can sample it like any other texture using tex2D(_CameraDepthTexture, i.uv);
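A rough sketch of how that could be combined with the keying above, placed inside the existing else-if branch (sketch only; _KeyDepthThreshold is a hypothetical property you would add, it assumes i.uv is a full-screen UV, and it needs UnityCG.cginc included):
// Declare once outside the function: sampler2D_float _CameraDepthTexture;
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
float eyeDepth = LinearEyeDepth(rawDepth);             // distance from the camera in world units
float keyable = step(_KeyDepthThreshold, eyeDepth);    // 1 = far enough to count as background
alpha = lerp(1.0, alpha, keyable);                      // keep near (foreground) pixels opaque
col = (alpha * col) + ((1 - alpha) * col2);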

Unity 3D Subsurface Shader setting normal for appropriate lighting

I'm attempting to write a simple wave-like shader in Unity 2017.1.0f3 using the sin function, but without redefining the normals it renders as a flat, single-colored shape, so I need to recompute them to get the shading right. Despite my maths I can't seem to get these normals to look right, and as you can see in the GIF it's all super messed up.
So here's what I'm doing:
void vert(inout appdata_full v, out Input o)
{
UNITY_INITIALIZE_OUTPUT(Input, o);
//Just basing the height of the wave on distance from the center and time
half offsetvert = o.offsetVert = ((v.vertex.x*v.vertex.x) + (v.vertex.z * v.vertex.z))*100;//The 100 is to compensate for the massive scaling of the object
half value = _Scale * sin(-_Time.w * _Speed + offsetvert * _Frequency)/100;
v.vertex.y += value;
o.pos = v.vertex.xyz;
}
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_CBUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_CBUFFER_END
void surf (Input IN, inout SurfaceOutputStandard o)
{
//Calculate new normals
//Refer to MATH (1) for how I'm getting the y
float3 norm = (0,sqrt(1/(1+1/(-100/(_Scale*_Frequency*cos(_Time.w * _Speed + IN.offsetVert * _Frequency))))),0);
//Refer to Math (2) for how I'm getting the x and z
float derrivative = _Scale*_Frequency*cos(-_Time.w * _Speed + IN.offsetVert * _Frequency)/100;
float3 norm = (0,sqrt(1/(1+1/(-1/(derrivative)))),0);
float remaining = 1 - pow(norm.y,2);
norm.x = sqrt(remaining/(1 + IN.pos.z*IN.pos.z/(IN.pos.x*IN.pos.x)));
norm.z = sqrt(1-norm.y*norm.y-norm.x*norm.x);
//Assume this is facing away from the center
if (IN.pos.z<0)
norm.z = -norm.z;
if (IN.pos.x<0)
norm.x = -norm.x;
//Flip the direction if necessary
if (derrivative > 0){
norm.x = -norm.x;
norm.z = -norm.z;
}
norm.y = abs(norm.y);
norm = normalize(norm);//Just to be safe
o.Albedo = _Color.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = c.a;
o.Normal.xyz = norm;
}
MATH 1
If the y as a function of distance is
y = (scale/100)sin(time.w * speed + distance * frequency)
then
dy/d(distance) = (scale/100) * frequency * cos(time.w * speed + distance * frequency)
making the gradient of the normal (the y component over the component in the x-z direction) equal to
-100/(scale * frequency * cos(time.w * speed + distance * frequency)).
We also know that
(y component)^2 + (some xz component)^2 = 1,
where
(y component)/(some xz component) = the normal gradient defined.
Solving these two simultaneous equations we get
y component = sqrt(1/(1+1/(gradient^2)))
MATH 2
We know that
(x component)/(z component) = (x position)/(z position)
and, by Pythagoras, that
(x component)^2 + (z component)^2 = 1 - (y component)^2
and solving these simultaneous equations we get
x component = sqrt((1 - (y component)^2)/(1 + (z position / x position)^2))
We can then get the z component through Pythagoras.
Please, let me know if you figure out what's wrong :)
Why are you calculating the normals in the surface function? That runs per fragment and will be very inefficient. Why not just calculate the normal in the vertex function?
What I would do is repeat the same calculation as for the vertex offset for two other points, offset from the vertex along the two axes of the undisplaced surface, and then take the cross product of the vectors between them and the offset vertex to get the normal.
Let's say you have the offset moved into its own function, which takes the coordinates as a parameter. Then you could do this:
float3 offsetPos = VertexOffset(v.vertex.xy);
float3 offsetPosX = offsetPos - VertexOffset(v.vertex.xy + float2(0.1, 0));
float3 offsetPosY = offsetPos - VertexOffset(v.vertex.xy + float2(0, 0.1));
v.vertex.xyz = offsetPos;
v.normal.xyz = cross(normalize(offsetPosX), normalize(offsetPosY));
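A minimal sketch of what that VertexOffset helper could look like, reusing the offset formula from the question (assuming the parameter carries the two coordinates the wave depends on, i.e. x and z in the question's shader, and that the undisplaced surface lies in that plane):
float3 VertexOffset(float2 p)
{
    // Same distance-from-center term and sine wave as the question's vert().
    half dist = ((p.x * p.x) + (p.y * p.y)) * 100;
    half height = _Scale * sin(-_Time.w * _Speed + dist * _Frequency) / 100;
    return float3(p.x, height, p.y);
}
The cross product of the two neighbour differences then approximates the surface normal directly, with no trigonometric inversion needed in the surface function.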

Problems porting a GLSL shadertoy shader to unity

I'm currently trying to port a shadertoy.com shader (Atmospheric Scattering Sample, interactive demo with code) to Unity. The shader is written in GLSL and I have to start the editor with C:\Program Files\Unity\Editor>Unity.exe -force-opengl to make it render the shader (otherwise a "This shader cannot be run on this GPU" error comes up), but that's not a problem right now. The problem is with porting that shader to Unity.
The functions for the scattering etc. are all identical and "runnable" in my ported shader; the only thing is that the mainImage() function manages the camera, light direction and ray direction itself. This of course has to be changed so that Unity's camera position, view direction and light sources and directions are used instead.
The main function of the original looks like this:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
// default ray dir
vec3 dir = ray_dir( 45.0, iResolution.xy, fragCoord.xy );
// default ray origin
vec3 eye = vec3( 0.0, 0.0, 2.4 );
// rotate camera
mat3 rot = rot3xy( vec2( 0.0, iGlobalTime * 0.5 ) );
dir = rot * dir;
eye = rot * eye;
// sun light dir
vec3 l = vec3( 0, 0, 1 );
vec2 e = ray_vs_sphere( eye, dir, R );
if ( e.x > e.y ) {
discard;
}
vec2 f = ray_vs_sphere( eye, dir, R_INNER );
e.y = min( e.y, f.x );
vec3 I = in_scatter( eye, dir, e, l );
fragColor = vec4( I, 1.0 );
}
I've read through the documentation of that function and how it's supposed to work at https://www.shadertoy.com/howto .
Image shaders implement the mainImage() function in order to generate
the procedural images by computing a color for each pixel. This
function is expected to be called once per pixel, and it is
responsability of the host application to provide the right inputs to
it and get the output color from it and assign it to the screen pixel.
The prototype is:
void mainImage( out vec4 fragColor, in vec2 fragCoord );
where fragCoord contains the pixel coordinates for which the shader
needs to compute a color. The coordinates are in pixel units, ranging
from 0.5 to resolution-0.5, over the rendering surface, where the
resolution is passed to the shader through the iResolution uniform
(see below).
The resulting color is gathered in fragColor as a four component
vector, the last of which is ignored by the client. The result is
gathered as an "out" variable in prevision of future addition of
multiple render targets.
So in that function there are references to iGlobalTime to make the camera rotate with time and references to iResolution for the resolution. I've embedded the shader in a Unity shader and tried to fix and wire up dir, eye and l so that it works with Unity, but I'm completely stuck. I get some sort of picture which looks "related" to the original shader (top is the original, bottom the current Unity state):
I'm not a shader professional, I only know some basics of OpenGL; for the most part I write game logic in C#, so all I could really do was look at other shader examples and at how I could get the data about the camera, light sources etc. into this code, but as you can see, nothing really works out.
I've copied the skeleton code for the shader from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights and some vectors from http://forum.unity3d.com/threads/glsl-shader.39629/ .
I hope someone can point me in the right direction on how to fix this shader / correctly port it to Unity. Below is the current shader code; to reproduce it, create a new shader in a blank project, copy the code in, make a new material, assign the shader to that material, then add a sphere with that material on it and add a directional light.
Shader "Unlit/AtmoFragShader" {
Properties{
_MainTex("Base (RGB)", 2D) = "white" {}
_LC("LC", Color) = (1,0,0,0) /* stuff from the testing shader, now really used */
_LP("LP", Vector) = (1,1,1,1)
}
SubShader{
Tags{ "Queue" = "Geometry" } //Is this even the right queue?
Pass{
//Tags{ "LightMode" = "ForwardBase" }
GLSLPROGRAM
/* begin port by copying in the constants */
// math const
const float PI = 3.14159265359;
const float DEG_TO_RAD = PI / 180.0;
const float MAX = 10000.0;
// scatter const
const float K_R = 0.166;
const float K_M = 0.0025;
const float E = 14.3; // light intensity
const vec3 C_R = vec3(0.3, 0.7, 1.0); // 1 / wavelength ^ 4
const float G_M = -0.85; // Mie g
const float R = 1.0; /* this is the radius of the sphere? this should be set from the geometry or something.. */
const float R_INNER = 0.7;
const float SCALE_H = 4.0 / (R - R_INNER);
const float SCALE_L = 1.0 / (R - R_INNER);
const int NUM_OUT_SCATTER = 10;
const float FNUM_OUT_SCATTER = 10.0;
const int NUM_IN_SCATTER = 10;
const float FNUM_IN_SCATTER = 10.0;
/* begin functions. These are out of the defines because they should be accessible to anyone. */
// angle : pitch, yaw
mat3 rot3xy(vec2 angle) {
vec2 c = cos(angle);
vec2 s = sin(angle);
return mat3(
c.y, 0.0, -s.y,
s.y * s.x, c.x, c.y * s.x,
s.y * c.x, -s.x, c.y * c.x
);
}
// ray direction
vec3 ray_dir(float fov, vec2 size, vec2 pos) {
vec2 xy = pos - size * 0.5;
float cot_half_fov = tan((90.0 - fov * 0.5) * DEG_TO_RAD);
float z = size.y * 0.5 * cot_half_fov;
return normalize(vec3(xy, -z));
}
// ray intersects sphere
// e = -b +/- sqrt( b^2 - c )
vec2 ray_vs_sphere(vec3 p, vec3 dir, float r) {
float b = dot(p, dir);
float c = dot(p, p) - r * r;
float d = b * b - c;
if (d < 0.0) {
return vec2(MAX, -MAX);
}
d = sqrt(d);
return vec2(-b - d, -b + d);
}
// Mie
// g : ( -0.75, -0.999 )
// 3 * ( 1 - g^2 ) 1 + c^2
// F = ----------------- * -------------------------------
// 2 * ( 2 + g^2 ) ( 1 + g^2 - 2 * g * c )^(3/2)
float phase_mie(float g, float c, float cc) {
float gg = g * g;
float a = (1.0 - gg) * (1.0 + cc);
float b = 1.0 + gg - 2.0 * g * c;
b *= sqrt(b);
b *= 2.0 + gg;
return 1.5 * a / b;
}
// Reyleigh
// g : 0
// F = 3/4 * ( 1 + c^2 )
float phase_reyleigh(float cc) {
return 0.75 * (1.0 + cc);
}
float density(vec3 p) {
return exp(-(length(p) - R_INNER) * SCALE_H);
}
float optic(vec3 p, vec3 q) {
vec3 step = (q - p) / FNUM_OUT_SCATTER;
vec3 v = p + step * 0.5;
float sum = 0.0;
for (int i = 0; i < NUM_OUT_SCATTER; i++) {
sum += density(v);
v += step;
}
sum *= length(step) * SCALE_L;
return sum;
}
vec3 in_scatter(vec3 o, vec3 dir, vec2 e, vec3 l) {
float len = (e.y - e.x) / FNUM_IN_SCATTER;
vec3 step = dir * len;
vec3 p = o + dir * e.x;
vec3 v = p + dir * (len * 0.5);
vec3 sum = vec3(0.0);
for (int i = 0; i < NUM_IN_SCATTER; i++) {
vec2 f = ray_vs_sphere(v, l, R);
vec3 u = v + l * f.y;
float n = (optic(p, v) + optic(v, u)) * (PI * 4.0);
sum += density(v) * exp(-n * (K_R * C_R + K_M));
v += step;
}
sum *= len * SCALE_L;
float c = dot(dir, -l);
float cc = c * c;
return sum * (K_R * C_R * phase_reyleigh(cc) + K_M * phase_mie(G_M, c, cc)) * E;
}
/* end functions */
/* vertex shader begins here*/
#ifdef VERTEX
const float SpecularContribution = 0.3;
const float DiffuseContribution = 1.0 - SpecularContribution;
uniform vec4 _LP;
varying vec2 TextureCoordinate;
varying float LightIntensity;
varying vec4 someOutput;
/* transient stuff */
varying vec3 eyeOutput;
varying vec3 dirOutput;
varying vec3 lOutput;
varying vec2 eOutput;
/* lighting stuff */
// i.e. one could #include "UnityCG.glslinc"
uniform vec3 _WorldSpaceCameraPos;
// camera position in world space
uniform mat4 _Object2World; // model matrix
uniform mat4 _World2Object; // inverse model matrix
uniform vec4 _WorldSpaceLightPos0;
// direction to or position of light source
uniform vec4 _LightColor0;
// color of light source (from "Lighting.cginc")
void main()
{
/* code from that example shader */
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
vec3 lightVec = normalize(_LP.xyz - ecPosition);
vec3 reflectVec = reflect(-lightVec, tnorm);
vec3 viewVec = normalize(-ecPosition);
/* copied from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights for testing stuff */
//I have no idea what I'm doing, but hopefully this computes some vectors which I need
mat4 modelMatrix = _Object2World;
mat4 modelMatrixInverse = _World2Object; // unity_Scale.w
// is unnecessary because we normalize vectors
vec3 normalDirection = normalize(vec3(
vec4(gl_Normal, 0.0) * modelMatrixInverse));
vec3 viewDirection = normalize(vec3(
vec4(_WorldSpaceCameraPos, 1.0)
- modelMatrix * gl_Vertex));
vec3 lightDirection;
float attenuation;
if (0.0 == _WorldSpaceLightPos0.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(vec3(_WorldSpaceLightPos0));
}
else // point or spot light
{
vec3 vertexToLightSource = vec3(_WorldSpaceLightPos0
- modelMatrix * gl_Vertex);
float distance = length(vertexToLightSource);
attenuation = 1.0 / distance; // linear attenuation
lightDirection = normalize(vertexToLightSource);
}
/* test port */
// default ray dir
//That's the direction of the camera here?
vec3 dir = viewDirection; //normalDirection;//viewDirection;// tnorm;//lightVec;//lightDirection;//normalDirection; //lightVec;//tnorm;//ray_dir(45.0, iResolution.xy, fragCoord.xy);
// default ray origin
//I think they mean the position of the camera here?
vec3 eye = vec3(_WorldSpaceCameraPos); //vec3(_WorldSpaceLightPos0); //// vec3(0.0, 0.0, 0.0); //_WorldSpaceCameraPos;//ecPosition; //vec3(0.0, 0.0, 2.4);
// rotate camera not needed, remove it
// sun light dir
//I think they mean the direction of our directional light?
vec3 l = lightDirection;//_LightColor0.xyz; //lightDirection; //normalDirection;//normalize(vec3(_WorldSpaceLightPos0));//lightVec;// vec3(0, 0, 1);
/* this computes the intersection of the ray and the sphere.. is this really needed?*/
vec2 e = ray_vs_sphere(eye, dir, R);
/* copy stuff so that we can use it in the fragment shader; "discard" is only allowed in the fragment shader,
so the rest has to be computed there */
eOutput = e;
eyeOutput = eye;
dirOutput = dir;
lOutput = dir;
}
#endif
#ifdef FRAGMENT
uniform sampler2D _MainTex;
varying vec2 TextureCoordinate;
uniform vec4 _LC;
varying float LightIntensity;
/* transient port */
varying vec3 eyeOutput;
varying vec3 dirOutput;
varying vec3 lOutput;
varying vec2 eOutput;
void main()
{
/* real fragment */
if (eOutput.x > eOutput.y) {
//discard;
}
vec2 f = ray_vs_sphere(eyeOutput, dirOutput, R_INNER);
vec2 e = eOutput;
e.y = min(e.y, f.x);
vec3 I = in_scatter(eyeOutput, dirOutput, eOutput, lOutput);
gl_FragColor = vec4(I, 1.0);
/*vec4 c2;
c2.x = 1.0;
c2.y = 1.0;
c2.z = 0.0;
c2.w = 1.0f;
gl_FragColor = c2;*/
//gl_FragColor = c;
}
#endif
ENDGLSL
}
}
}
Any help is appreciated, sorry for the long post and explanations.
Edit: I just found out that the radius of the sphere does have an influence on the result; a sphere with scale 2.0 in every direction gives a much better one. However, the picture is still completely independent of the viewing angle of the camera and of any lights, and it is nowhere near the original ShaderToy version.
It looks like you are trying to render a 2D texture over a sphere, which calls for a somewhat different approach. For what you are trying to do, I would apply the shader to a plane intersecting the sphere.
For the general case, look at this article showing how to convert ShaderToy shaders to Unity3D.
Here are some of the steps from it (a short before/after example follows the list):
Replace iGlobalTime shader input (“shader playback time in seconds”) with _Time.y
Replace iResolution.xy (“viewport resolution in pixels”) with _ScreenParams.xy
Replace vec2 types with float2, mat2 with float2x2 etc.
Replace vec3(1) shortcut constructors in which all elements have same value with explicit float3(1,1,1)
Replace texture2D() lookups with tex2D()
Replace atan(x,y) with atan2(y,x) <- Note parameter ordering!
Replace mix() with lerp()
Replace matrix multiplications (e.g. v = m * v or v *= m) with mul(m, v)
Remove third (bias) parameter from Texture2D lookups
mainImage(out vec4 fragColor, in vec2 fragCoord) is the fragment shader function, equivalent to float4 mainImage(float2 fragCoord : SV_POSITION) : SV_Target
UV coordinates in GLSL have 0 at the top and increase downwards, in HLSL 0 is at the bottom and increases upwards, so you may need to use uv.y = 1 – uv.y at some point.
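Applied to the first few lines of mainImage() above, those substitutions look roughly like this (sketch only):
// GLSL (ShaderToy)
vec3 dir = ray_dir( 45.0, iResolution.xy, fragCoord.xy );
mat3 rot = rot3xy( vec2( 0.0, iGlobalTime * 0.5 ) );
dir = rot * dir;
// HLSL/Cg (Unity)
float3 dir = ray_dir(45.0, _ScreenParams.xy, fragCoord.xy);
float3x3 rot = rot3xy(float2(0.0, _Time.y * 0.5));
dir = mul(rot, dir);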
About this question:
Tags{ "Queue" = "Geometry" } //Is this even the right queue?
Queue determines the order in which things are rendered; Geometry is one of the first. If you want your shader to render over everything you could use Overlay, for example. This topic is covered here.
Background - this render queue is rendered before any others. It is used for skyboxes and the like.
Geometry (default) - this is used for most objects. Opaque geometry uses this queue.
AlphaTest - alpha tested geometry uses this queue. It's a separate queue from the Geometry one since it's more efficient to render alpha-tested objects after all solid ones are drawn.
Transparent - this render queue is rendered after Geometry and AlphaTest, in back-to-front order. Anything alpha-blended (i.e. shaders that don’t write to depth buffer) should go here (glass, particle effects).
Overlay - this render queue is meant for overlay effects. Anything rendered last should go here (e.g. lens flares).