I am trying to implement this paper (https://storage.googleapis.com/pub-tools-public-publication-data/pdf/a0ed9e70a833a3e3ef0ad9efc1d979b59eedb8c4.pdf) in Unity, but I am having a hard time understanding the different equations. In particular, I do not understand the "world-space UV gradients" described in Section 4.2.
I spent time implementing equations 1 to 7; here's what I have so far in my fragment shader:
float w = 64.0;
float2 uGrad = float2(ddx(uv.x), ddy(uv.x));
float2 vGrad = float2(ddx(uv.y), ddy(uv.y));
float2 E = float2(-log2(w * length(uGrad)), -log2(w * length(vGrad)));
float2 uv0 = float2(pow(2.0, floor(E.x)) * uv.x, pow(2.0, floor(E.y)) * uv.y);
float2 uv1 = float2(2.0 * uv0.x, uv0.y);
float2 uv2 = float2(uv0.x, 2.0 * uv0.y);
float2 uv3 = float2(2.0 * uv0.x, 2.0 * uv0.y);
float2 blendCoef = float2(Blend(E.x - floor(E.x)), Blend(E.y - floor(E.y)));
float t0 = tex2D(_DisplacementTex, uv0).r;
float t1 = tex2D(_DisplacementTex, uv1).r;
float t2 = tex2D(_DisplacementTex, uv2).r;
float t3 = tex2D(_DisplacementTex, uv3).r;
// bilinear blend: blendCoef.x blends along u (t0 -> t1), blendCoef.y along v (t0 -> t2)
return (1.0 - blendCoef.y) * ((1.0 - blendCoef.x) * t0 + blendCoef.x * t1) + blendCoef.y * ((1.0 - blendCoef.x) * t2 + blendCoef.x * t3);
and the Blend function returns this:
-2.0 * pow(x, 3.0) + 3.0 * pow(x, 2.0); // smoothstep-style cubic: 3x^2 - 2x^3
Here's the result in Unity.
Now I am stuck on the 8th and 9th equations, which I have tried to implement in my vertex shader as:
float3 wNormal = UnityObjectToWorldNormal(v.normal);
float3 wTangent = UnityObjectToWorldDir(v.tangent.xyz);
float3 wBinormal = cross(wNormal, wTangent) * v.tangent.w; // v.tangent.w carries the binormal's handedness
float3 vTangent = mul((float3x3)UNITY_MATRIX_V, wTangent);
float3 vBinormal = mul((float3x3)UNITY_MATRIX_V, wBinormal);
// placeholder values -- these are the world-space UV gradients I don't know how to compute
float2 uGradWS = 0.01;
float2 vGradWS = 0.01;
o.uGrad = length(uGradWS) * vTangent / length(vTangent);
o.vGrad = length(vGradWS) * vBinormal / length(vBinormal);
This is one of the first times I have really tried to implement something using only the equations from a paper (I am usually able to find a pre-existing implementation). I'd really like to be able to keep doing this in the future, but it is hard because of my lack of mathematical expertise.
Do you think that I am on the right track with what I already have?
Do you know how I could compute the world-space UV gradient?
My guess is that I would need to compute it on the CPU and pass it as extra vertex data.
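If that guess is right, I imagine the gradients could be built per triangle from the vertex positions and UVs, using the same 2x2 solve that tangent-space generation uses, then averaged into the vertex data. Here is a sketch of what I have in mind; ComputeUVGradients is my own hypothetical helper, not anything from the paper:
// Sketch: per-triangle world-space derivatives dP/du and dP/dv.
// pA..pC are the triangle's world-space positions, uvA..uvC its UVs.
// This would run on the CPU (or in a compute shader) over all triangles,
// with the results averaged per vertex and stored as extra vertex data.
void ComputeUVGradients(float3 pA, float3 pB, float3 pC,
                        float2 uvA, float2 uvB, float2 uvC,
                        out float3 dPdu, out float3 dPdv)
{
    float3 e1 = pB - pA; // position deltas along two edges
    float3 e2 = pC - pA;
    float2 d1 = uvB - uvA; // the matching UV deltas
    float2 d2 = uvC - uvA;
    // Solve e1 = dPdu * d1.x + dPdv * d1.y (and likewise for e2)
    float det = d1.x * d2.y - d2.x * d1.y;
    float r = abs(det) > 1e-8 ? 1.0 / det : 0.0; // guard against degenerate UVs
    dPdu = (e1 * d2.y - e2 * d1.y) * r;
    dPdv = (e2 * d1.x - e1 * d2.x) * r;
}
If the UV mapping is reasonably orthogonal, the world-space gradient of u would then point along dPdu with magnitude 1 / length(dPdu) (and likewise for v from dPdv), which is what I assume the length(uGradWS) in my vertex shader should be. I may be misreading Section 4.2, though.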
The problem is that when I try to convert a height map to a normal map, the results are wrong. For some reason there appear to be three light sources emitting from the top (green), right (red), and left (blue) of the texture.
This is the GeoMath.hlsl code that I am using:
static const float PI = 3.141592653589793238462643383279;
float2 longitudeLatitudeToUV(float2 longLat) {
float longitude = longLat[0];
float latitude = longLat[1];
float u = longitude / (2 * PI) + 0.5;
float v = latitude / PI + 0.5;
return float2(u,v);
}
float3 longitudeLatitudeToPoint(float2 longLat) {
float longitude = longLat[0];
float latitude = longLat[1];
float x;
float y;
float z;
y = sin(latitude);
float r = cos(latitude);
x = sin(longitude) * r;
z = -cos(longitude) * r;
return float3(x, y, z);
}
float2 uvToLongitudeLatitude(float2 uv) {
float longitude = (uv.x - 0.5) * (2 * PI);
float latitude = (uv.y - 0.5) * PI;
return float2(longitude, latitude);
}
float2 pointToLongitudeLatitude(float3 p) {
float longitude = atan2(p.x, p.z);
float latitude = asin(p.y);
return float2(longitude, latitude);
}
float2 pointToUV(float3 p) {
p = normalize(p);
return longitudeLatitudeToUV(pointToLongitudeLatitude(p));
}
This is the compute shader I am using to convert the height map into a normal map.
#pragma kernel CSMain
#include "GeoMath.hlsl"
Texture2D<float> _HeightMap;
RWTexture2D<float4> _NormalMap;
int _TextureSize_Width;
int _TextureSize_Height;
float _WorldRadius;
float _HeightMultiplier;
float3 CalculateWorldPoint(uint2 texCoord)
{
float2 uv = texCoord / float2(_TextureSize_Width - 1, _TextureSize_Height - 1);
float2 longLat = uvToLongitudeLatitude(uv);
float3 spherePoint = longitudeLatitudeToPoint(longLat);
float height01 = _HeightMap[texCoord].r + 1.0;
float worldHeight = _WorldRadius + height01 * _HeightMultiplier;
return spherePoint * worldHeight;
}
uint2 WrapIndex(int2 texCoord)
{
// int2, so the -1 offsets below don't wrap around to huge unsigned values
texCoord.x = (texCoord.x + _TextureSize_Width) % _TextureSize_Width;
texCoord.y = clamp(texCoord.y, 0, _TextureSize_Height - 1);
return uint2(texCoord);
}
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
float3 normalVector;
float3 posNorth = CalculateWorldPoint(WrapIndex(int2(id.xy) + int2(0, 1)));
float3 posSouth = CalculateWorldPoint(WrapIndex(int2(id.xy) + int2(0, -1)));
float3 posEast = CalculateWorldPoint(WrapIndex(int2(id.xy) + int2(1, 0)));
float3 posWest = CalculateWorldPoint(WrapIndex(int2(id.xy) + int2(-1, 0)));
float3 dirNorth = normalize(posNorth - posSouth);
float3 dirEast = normalize(posEast - posWest);
normalVector = normalize(cross(dirNorth, dirEast));
_NormalMap[id.xy] = float4(normalVector, 1.0);
}
And below is the result I am getting: height map (top), generated normal map from the code above (bottom).
I believe that you are trying to get object-space normals, but there is a tiny detail missing.
A normalized vector3 ranges from -1 to 1 on each axis, while pixel values range from 0 to 1.
You just need to remap the ranges. This line roughly fixes the problem:
_NormalMap[id.xy] = float4(normalVector / 2 + float3(0.5, 0.5, 0.5), 1.0);
Result
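One caveat: anything that later samples this normal map has to undo the remapping. A one-line sketch, assuming the map is bound as a regular texture in a fragment shader:
float3 normal = normalize(tex2D(_NormalMap, uv).xyz * 2.0 - 1.0); // back to -1..1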
Can anyone let me know if I'm on the right track with this: I have a vertex shader that bumps vertices outward dynamically depending on a point passed in (think of a mouse running under a rug). In order for the lighting to update properly, I need to recalculate the normals after modifying the vertex position. I have access to each vertex point as well as the origin.
My current thinking is that I do some sort of math to determine the tangent and bitangent, then use a cross product to determine the normal. My math skills aren't great; what would I need to do to determine those vectors?
Here's my current vert shader:
void vert(inout appdata_full v)
{
float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
float distanceToLift = distance(worldPos, _LiftOrigin);
v.vertex.y = smoothstep(_LiftHeight, 0, distanceToLift / _LiftRadius) * 5;
}
A simple solution is covered in this tutorial by Ronja, which I'll summarize here with modifications which reflect your specific case.
First, find two points offset from your current point by a small amount of tangent and bitangent (which you can calculate from normal and tangent):
float3 posPlusTangent = v.vertex.xyz + v.tangent.xyz * 0.01;
worldPos = mul(unity_ObjectToWorld, float4(posPlusTangent, 1)).xyz;
distanceToLift = distance(worldPos, _LiftOrigin);
posPlusTangent.y = smoothstep(_LiftHeight, 0, distanceToLift / _LiftRadius) * 5;
float3 bitangent = cross(v.normal, v.tangent.xyz);
float3 posPlusBitangent = v.vertex.xyz + bitangent * 0.01;
worldPos = mul(unity_ObjectToWorld, float4(posPlusBitangent, 1)).xyz;
distanceToLift = distance(worldPos, _LiftOrigin);
posPlusBitangent.y = smoothstep(_LiftHeight, 0, distanceToLift / _LiftRadius) * 5;
Then, find the difference between these offsets and the new vertex pos to find the new tangent and bitangent, then do another cross product to find the resulting normal:
float3 modifiedTangent = posPlusTangent - v.vertex.xyz;
float3 modifiedBitangent = posPlusBitangent - v.vertex.xyz;
float3 modifiedNormal = cross(modifiedTangent, modifiedBitangent);
v.normal = normalize(modifiedNormal);
Altogether:
float find_offset(float3 localV)
{
float3 worldPos = mul(unity_ObjectToWorld, float4(localV, 1)).xyz;
float distanceToLift = distance(worldPos, _LiftOrigin);
return smoothstep(_LiftHeight, 0, distanceToLift / _LiftRadius) * 5;
}
void vert(inout appdata_full v)
{
v.vertex.y = find_offset(v.vertex.xyz);
float3 posPlusTangent = v.vertex.xyz + v.tangent.xyz * 0.01;
posPlusTangent.y = find_offset(posPlusTangent);
float3 bitangent = cross(v.normal, v.tangent.xyz);
float3 posPlusBitangent = v.vertex.xyz + bitangent * 0.01;
posPlusBitangent.y = find_offset(posPlusBitangent);
float3 modifiedTangent = posPlusTangent - v.vertex.xyz;
float3 modifiedBitangent = posPlusBitangent - v.vertex.xyz;
float3 modifiedNormal = cross(modifiedTangent, modifiedBitangent);
v.normal = normalize(modifiedNormal);
}
This is a method of approximation, but it may be good enough!
I am working on a custom Metal shader, and I am trying to replicate this particular effect from shader toy: https://www.shadertoy.com/view/3sfcR2
But I can't seem to understand how to convert their texture() function to the Metal shader format. Any ideas?
Here's what I have so far in Metal:
#include <metal_stdlib>
using namespace metal;
kernel void chromaticAberration(texture2d<float, access::read> inTexture [[ texture(0) ]],
texture2d<float, access::write> outTexture [[ texture(1) ]],
device const float *time [[ buffer(0) ]],
uint2 gid [[ thread_position_in_grid ]])
{
float ChromaticAberration = 0.0 / 10.0 + 8.0;
// get the width and height of the screen texture
uint width = outTexture.get_width();
uint height = outTexture.get_height();
// set its resolution
float2 iResolution = float2(width, height);
float4 orig = inTexture.read(gid);
float2 uv = orig.xy / iResolution.xy;
float2 texel = 1.0 / iResolution.xy;
float2 coords = (uv - 0.5) * 2.0;
float coordDot = dot (coords, coords);
float2 precompute = ChromaticAberration * coordDot * coords;
float2 uvR = uv - texel.xy * precompute;
float2 uvB = uv + texel.xy * precompute;
// How to convert these texture() functions?
float r = texture(iChannel0, uvR).r;
float g = texture(iChannel0, uv).g;
float b = texture(iChannel0, uvB).b;
float a = 1.;
const float4 colorAtPixel = float4(r,g,b,1.0);
outTexture.write(colorAtPixel, gid);
}
EDIT:
Following the answer of @JustSomeGuy, I was able to successfully replicate this shader in Metal. Here is the final version:
#include <metal_stdlib>
using namespace metal;
kernel void chromaticAberration(texture2d<float, access::read> inTexture [[ texture(0) ]],
texture2d<float, access::write> outTexture [[ texture(1) ]],
texture2d<float, access::sample> sampleTexture [[ texture(2) ]],
device const float *time [[ buffer(0) ]],
uint2 gid [[ thread_position_in_grid ]])
{
float ChromaticAberration = 0.0 / 10.0 + 8.0;
// get the width and height of the screen texture
uint width = inTexture.get_width();
uint height = inTexture.get_height();
// set its resolution
float2 iResolution = float2(width, height);
float2 uv = float2(gid) / iResolution.xy;
float2 texel = 1.0 / iResolution.xy;
float2 coords = (uv - 0.5) * 2.0;
float coordDot = dot (coords, coords);
float2 precompute = ChromaticAberration * coordDot * coords;
float2 uvR = uv - texel.xy * precompute;
float2 uvB = uv + texel.xy * precompute;
constexpr sampler s(address::clamp_to_edge, filter::linear);
float r = sampleTexture.sample(s, uvR).r;
float g = sampleTexture.sample(s, uv).g;
float b = sampleTexture.sample(s, uvB).b;
const float4 colorAtPixel = float4(r,g,b,1.0);
outTexture.write(colorAtPixel, gid);
}
Kudos to @JustSomeGuy! Thank you for your help!
Well, I think ShaderToy uses GLSL or one of its variants, so the texture function is basically a sample call in Metal. Let's look at an example. I'm using this doc. We'll use the 2D version, since that's what you probably want.
gvec4 texture( gsampler2D sampler,
vec2 P,
[float bias]);
So in this case iChannel0 is your sampler and uvR, uv, uvB are texture coordinates (P). They should be float2.
So this is a global function that samples a color for us from a sampler. In Metal, textures and samplers are separate objects, and you need both in order to sample. Also, in Metal sample is not a global function but a member function of texture2d. Let's look at the Metal Shading Language Specification, Section 6.10.3 "2D Texture". There we'll find this method:
Tv sample(sampler s, float2 coord, int2 offset = int2(0)) const
where Tv is the template parameter from your texture2d instantiation (probably half or float). It also takes a sampler and texture coordinates, so this code from your example:
float r = texture(iChannel0, uvR).r;
float g = texture(iChannel0, uv).g;
float b = texture(iChannel0, uvB).b;
will turn into something like this:
constexpr sampler mySampler { filter::linear };
float r = iChannel0.sample(mySampler, uvR).r;
float g = iChannel0.sample(mySampler, uv).g;
float b = iChannel0.sample(mySampler, uvB).b;
And you will also need to pass texture2d<float> iChannel [[texture(N)]] (where N is the index you chose) to your shader, the same way ShaderToy does it (there it's just a global variable, but in Metal you need to actually pass it as an argument).
I'm following this tutorial: https://www.youtube.com/watch?v=CzORVWFvZ28 to convert some code from ShaderToy to Unity. This is the shader that I'm attempting to convert: https://www.shadertoy.com/view/Ws23WD.
I saw that in the tutorial, he was able to take his fragColor statement from ShaderToy and simply return a color in Unity instead. However, when I tried doing that with my code from ShaderToy, I got an error about not being able to implicitly convert from float3 to float4. My color variable is declared as a float3, which must be what's causing the issue, but I need some help figuring out how to fix it.
I also noticed that I have an 'a' value with the fragColor variable, in addition to the rgb values; would I use a float4 to take in the (r, g, b, a) values?
fixed4 frag (v2f i) : SV_Target
{
//float2 uv = float2(fragCoord.x / iResolution.x, fragCoord.y / iResolution.y);
float2 uv = float2(i.uv);
uv -= 0.5;
//uv /= float2(iResolution.y / iResolution.x, 1);
float3 cam = float3(0, -0.15, -3.5);
float3 dir = normalize(float3(uv,1));
float cam_a2 = sin(_Time.y) * pi * 0.1;
cam.yz = rotate(cam.yz, cam_a2);
dir.yz = rotate(dir.yz, cam_a2);
float cam_a = _Time.y * pi * 0.1;
cam.xz = rotate(cam.xz, cam_a);
dir.xz = rotate(dir.xz, cam_a);
float3 color = float3(0.16, 0.12, 0.10);
float t = 0.00001;
const int maxSteps = 128;
for(int k = 0; k < maxSteps; ++k) { // loop counter renamed so it doesn't shadow the v2f parameter i
float3 p = cam + dir * t;
float d = scene(p);
if(d < 0.0001 * t) {
color = float3(1.0, length(p) * (0.6 + (sin(_Time.y*3.0)+1.0) * 0.5 * 0.4), 0);
break;
}
t += d;
}
//fragColor.rgb = color;
return color;
//fragColor.a = 1.0;
}
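My current guess, following the 'a' value idea above: since the function is declared fixed4, the float3 color just needs an explicit alpha appended, so the commented-out fragColor lines would collapse into:
return fixed4(color, 1.0); // rgb from color, a = 1.0 as in the ShaderToy original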
I'm attempting to write a simple wave-like shader in Unity 2017.1.0f3 using the sin function. Without redefining the normals, the mesh renders as a single flat color with no shading, but despite my math I can't seem to get these normals to look right, and as you can see in the GIF it's all super messed up.
So here's what I'm doing:
void vert(inout appdata_full v, out Input o)
{
UNITY_INITIALIZE_OUTPUT(Input, o);
//Just basing the height of the wave on distance from the center and time
half offsetvert = o.offsetVert = ((v.vertex.x*v.vertex.x) + (v.vertex.z * v.vertex.z))*100;//The 100 is to compensate for the massive scaling of the object
half value = _Scale * sin(-_Time.w * _Speed + offsetvert * _Frequency)/100;
v.vertex.y += value;
o.pos = v.vertex.xyz;
}
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_CBUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_CBUFFER_END
void surf (Input IN, inout SurfaceOutputStandard o)
{
//Calculate new normals
//Refer to MATH (1) for how I'm getting the y
float derivative = _Scale * _Frequency * cos(-_Time.w * _Speed + IN.offsetVert * _Frequency) / 100;
float3 norm = float3(0, sqrt(1 / (1 + derivative * derivative)), 0);
//Refer to MATH (2) for how I'm getting the x and z
float remaining = 1 - pow(norm.y,2);
norm.x = sqrt(remaining/(1 + IN.pos.z*IN.pos.z/(IN.pos.x*IN.pos.x)));
norm.z = sqrt(1-norm.y*norm.y-norm.x*norm.x);
//Assume this is facing away from the center
if (IN.pos.z<0)
norm.z = -norm.z;
if (IN.pos.x<0)
norm.x = -norm.x;
//Flip the direction if necessary
if (derivative > 0){
norm.x = -norm.x;
norm.z = -norm.z;
}
norm.y = abs(norm.y);
norm = normalize(norm);//Just to be safe
o.Albedo = _Color.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = _Color.a;
o.Normal.xyz = norm;
}
MATH 1
If the y as a function of distance is
y = (scale/100)sin(time.w * speed + distance * frequency)
then
dy/d(distance) = (scale/100) * frequency * cos(time.w * speed + distance * frequency)
making the gradient of the normal (rise in y over run in some x-z direction) equal to
-100/(scale * frequency * cos(time.w * speed + distance * frequency)).
We also know that
(y component)^2 + (some xz component)^2 = 1,
where
(y component)/(some xz component) = the normal gradient defined.
Solving these two simultaneous equations we get
y component = sqrt(1/(1+1/(gradient^2)))
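Spelling out that last step, writing $g$ for the normal gradient and $c_y$, $c_{xz}$ for the two components: from $c_y / c_{xz} = g$ we get $c_{xz} = c_y / g$, and substituting into $c_y^2 + c_{xz}^2 = 1$ gives $c_y^2 (1 + 1/g^2) = 1$, hence $c_y = \sqrt{1/(1 + 1/g^2)}$.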
MATH 2
We know that
(x component)/(z component) = (x position)/(z position)
and, by Pythagoras, that
(x component)^2 + (z component)^2 = 1 - (y component)^2
and solving these simultaneous equations we get
x component = sqrt((1 - (y component)^2)/(1 + (z position / x position)^2))
We can then get the z component through Pythagoras.
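(Spelled out the same way: from $c_x / c_z = p_x / p_z$ we get $c_z = c_x \, p_z / p_x$, and substituting into $c_x^2 + c_z^2 = 1 - c_y^2$ gives $c_x^2 (1 + (p_z/p_x)^2) = 1 - c_y^2$, which rearranges to the x component formula above.)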
Please, let me know if you figure out what's wrong :)
Why are you calculating the normals in the surface function? That runs per fragment and will be very inefficient. Why not just calculate the normal in the vertex function?
What I would do is perform the same calculation as for the vertex offset, but twice more for two other points which are offset in the X and Y directions from the vertex; then you use the cross product of the vectors between them and the offset vertex to get the normal.
Let's say you have the offset moved into its own function, which takes the coordinates as a parameter. Then you could do this:
float3 offsetPos = VertexOffset(v.vertex.xy);
float3 offsetPosX = offsetPos - VertexOffset(v.vertex.xy + float2(0.1, 0));
float3 offsetPosY = offsetPos - VertexOffset(v.vertex.xy + float2(0, 0.1));
v.vertex.xyz = offsetPos;
v.normal.xyz = normalize(cross(normalize(offsetPosX), normalize(offsetPosY)));