Green Chroma Key Shader using depth - unity3d

I have written a shader which converts an RGB camera value to YCrCb and then applies some filtering for the green chroma.
Current Problem
If the foreground object (the player) has green pixels, they get cut out as well.
I already have a depth camera; how can I use that to build a better chroma key cutout?
fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    if (_ShowBackground)
    {
        fixed4 col2 = tex2D(_TexReplacer, i.uv);
        col = col2;
    }
    else if (!_ShowOriginal)
    {
        fixed4 col2 = tex2D(_TexReplacer, i.uv);
        // convert the key color and the current pixel to YCrCb, then key on chroma distance
        float maskY = 0.2989 * _GreenColor.r + 0.5866 * _GreenColor.g + 0.1145 * _GreenColor.b;
        float maskCr = 0.7132 * (_GreenColor.r - maskY);
        float maskCb = 0.5647 * (_GreenColor.b - maskY);
        float Y = 0.2989 * col.r + 0.5866 * col.g + 0.1145 * col.b;
        float Cr = 0.7132 * (col.r - Y);
        float Cb = 0.5647 * (col.b - Y);
        float alpha = smoothstep(_Sensitivity, _Sensitivity + _Smooth, distance(float2(Cr, Cb), float2(maskCr, maskCb)));
        col = (alpha * col) + ((1 - alpha) * col2);
    }
    return col;
}

Unity's UnityObjectToClipPos(float3 pos) lets you transform a vertex into clip space; there, the z axis encodes the distance from the rendering camera (between the near and far clipping planes, I believe).
You can use this distance to apply your keying only to vertices farther away than a given threshold.
If you do not want to use normalized coordinates, you can also convert your vertex to world space using mul(unity_ObjectToWorld, vertex.position) and then to camera space by multiplying the world position by the camera's world-to-local matrix (which you have to pass into your shader).
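A minimal sketch of that second approach, assuming a float4x4 shader property (hypothetically named _CameraWorldToLocal here) that you set from C# with material.SetMatrix("_CameraWorldToLocal", camera.transform.worldToLocalMatrix); the camDepth output field is likewise illustrative:
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
float4 camPos = mul(_CameraWorldToLocal, worldPos);
o.camDepth = camPos.z; // distance along the camera's forward axis, in world units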
To access the camera's depth texture in a shader you can use _CameraDepthTexture (see the documentation at https://docs.unity3d.com/Manual/SL-CameraDepthTexture.html, section "Shader variables").
You can sample it like any other texture using tex2D(_CameraDepthTexture, i.uv);
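For example, here is a hedged sketch of how the sampled depth could gate the keying in the fragment shader above, assuming the shader runs as a full-screen image effect (so that i.uv lines up with the depth texture) and a hypothetical _DepthThreshold property:
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
float eyeDepth = LinearEyeDepth(rawDepth); // depth in world units from the camera
float foreground = step(eyeDepth, _DepthThreshold); // 1 if closer than the threshold
alpha = max(alpha, foreground); // never key out pixels close enough to be the player
col = (alpha * col) + ((1 - alpha) * col2);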

Related

Post-tessellation Vertex Function and Raytracing: Getting More Detailed Geometries for the Acceleration Structures

I have recently gained some interest in the raytracing API provided by the Metal framework. I understand that you can attach a vertex buffer to a geometry descriptor that Metal will later use to create the acceleration structure (on an MTLPrimitiveAccelerationStructureDescriptor instance, for example).
This made me wonder whether it is possible to write the output of the tessellator into a separate vertex buffer from the post-tessellation vertex shader and pass that along to the raytracer. I thought that perhaps you could get more detailed geometry and still render without rasterization. For example, I might have the following simple post-tessellation vertex function:
[[patch(triangle, 3)]]
vertex FunctionOutIn tessellation_vertex_triangle(PatchIn patchIn [[stage_in]],
float3 patch_coord [[ position_in_patch ]])
{
// Barycentric coordinates
float u = patch_coord.x;
float v = patch_coord.y;
float w = patch_coord.z;
// Convert to cartesian coordinates
float x = u * patchIn.control_points[0].position.x + v * patchIn.control_points[1].position.x + w * patchIn.control_points[2].position.x;
float y = u * patchIn.control_points[0].position.y + v * patchIn.control_points[1].position.y + w * patchIn.control_points[2].position.y;
// Output
FunctionOutIn vertexOut;
vertexOut.position = float4(x, y, 0.0, 1.0);
vertexOut.color = half4(u, v, w, 1.0h);
return vertexOut;
}
However, the following doesn't compile
// Triangle post-tessellation vertex function
[[patch(triangle, 3)]]
vertex void tessellation_vertex_triangle(device void *outputBuffer [[ buffer(0) ]],
PatchIn patchIn [[stage_in]],
float3 patch_coord [[ position_in_patch ]])
{
// Barycentric coordinates
float u = patch_coord.x;
float v = patch_coord.y;
float w = patch_coord.z;
// Convert to cartesian coordinates
float x = u * patchIn.control_points[0].position.x + v * patchIn.control_points[1].position.x + w * patchIn.control_points[2].position.x;
float y = u * patchIn.control_points[0].position.y + v * patchIn.control_points[1].position.y + w * patchIn.control_points[2].position.y;
// Output
FunctionOutIn vertexOut;
vertexOut.position = float4(x, y, 0.0, 1.0);
vertexOut.color = half4(u, v, w, 1.0h);
}
I also noticed that the function doesn't compile when I don't use the control-point data in the output, like so:
[[patch(triangle, 3)]]
vertex FunctionOutIn tessellation_vertex_triangle(PatchIn patchIn [[stage_in]],
float3 patch_coord [[ position_in_patch ]])
{
// Barycentric coordinates
float u = patch_coord.x;
float v = patch_coord.y;
float w = patch_coord.z;
// Convert to cartesian coordinates
float x = u * patchIn.control_points[0].position.x + v * patchIn.control_points[1].position.x + w * patchIn.control_points[2].position.x;
float y = u * patchIn.control_points[0].position.y + v * patchIn.control_points[1].position.y + w * patchIn.control_points[2].position.y;
// Output
FunctionOutIn vertexOut;
// Does not use x or y (and therefore the `patch_control_point<T>`'s values
// are not used as output into the rasterizer)
vertexOut.position = float4(1.0, 1.0, 0.0, 1.0);
vertexOut.color = half4(1.0h, 1.0h, 1.0h, 1.0h);
return vertexOut;
}
I looked at the patch_control_point<T> template that is publicly exposed but didn't see anything enforcing this. What is going on here?
In particular, how would I go about increasing the quality of the geometry fed into the raytracer? Would I simply have to use more complex assets? Tessellation has its place in the rasterization pipeline, but can it be used elsewhere?

Unity 3D Surface Shader setting normal for appropriate lighting

I'm attempting to write a simple wave-like shader in Unity 2017.1.0f3 using the sin function. Without redefining the normals the whole shape is shaded as a single flat color, so I need to recalculate them for the lighting to come out right. However, despite my maths I can't seem to get these normals to look right, and as you can see in the GIF it's all super messed up.
So here's what I'm doing:
void vert(inout appdata_full v, out Input o)
{
UNITY_INITIALIZE_OUTPUT(Input, o);
//Just basing the height of the wave on distance from the center and time
half offsetvert = o.offsetVert = ((v.vertex.x*v.vertex.x) + (v.vertex.z * v.vertex.z))*100;//The 100 is to compensate for the massive scaling of the object
half value = _Scale * sin(-_Time.w * _Speed + offsetvert * _Frequency)/100;
v.vertex.y += value;
o.pos = v.vertex.xyz;
}
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_CBUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_CBUFFER_END
void surf (Input IN, inout SurfaceOutputStandard o)
{
//Calculate new normals
//Refer to MATH (1) for how I'm getting the y
float3 norm = (0,sqrt(1/(1+1/(-100/(_Scale*_Frequency*cos(_Time.w * _Speed + IN.offsetVert * _Frequency))))),0);
//Refer to Math (2) for how I'm getting the x and z
float derrivative = _Scale*_Frequency*cos(-_Time.w * _Speed + IN.offsetVert * _Frequency)/100;
float3 norm = (0,sqrt(1/(1+1/(-1/(derrivative)))),0);
float remaining = 1 - pow(norm.y,2);
norm.x = sqrt(remaining/(1 + IN.pos.z*IN.pos.z/(IN.pos.x*IN.pos.x)));
norm.z = sqrt(1-norm.y*norm.y-norm.x*norm.x);
//Assume this is facing away from the center
if (IN.pos.z<0)
norm.z = -norm.z;
if (IN.pos.x<0)
norm.x = -norm.x;
//Flip the direction if necessary
if (derrivative > 0){
norm.x = -norm.x;
norm.z = -norm.z;
}
norm.y = abs(norm.y);
norm = normalize(norm);//Just to be safe
o.Albedo = _Color.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = c.a;
o.Normal.xyz = norm;
}
MATH 1
If the y as a function of distance is
y = (scale/100)sin(time.w * speed + distance * frequency)
then
dy/d(distance) = (scale/100) * frequency * cos(time.w * speed + distance * frequency)
making the gradient of the normal, taken as (y component)/(some x and z direction), equal to -100/(scale * frequency * cos(time.w * speed + distance * frequency)).
We also know that
(y component)^2 + (some xz component)^2 = 1,
where
(y component)/(some xz component) = the normal gradient defined.
Solving these two simultaneous equations we get
y component = sqrt(1/(1+1/(gradient^2)))
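Spelling out that last step: writing g for the gradient, (some xz component) = (y component)/g, so substituting into (y component)^2 + (some xz component)^2 = 1 gives (y component)^2 * (1 + 1/g^2) = 1, which rearranges to the expression above.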
MATH 2
We know that
(x component)/(z component) = (x position)/(z position)
and, by Pythagoras, that
(x component)^2 + (z component)^2 = 1 - (y component)^2
and solving these simultaneous equations we get
x component = sqrt((1 - (y component)^2)/(1 + (z position / x position)^2))
We can then get the z component through Pythagoras.
Please, let me know if you figure out what's wrong :)
Why are you calculating the normals in the surface function? This is done per fragment and will be very inefficient. Why not just calculate the normal in the vertex function?
What I would do is repeat the same calculation as for the vertex offset for two other points, offset from the vertex in the X and Y directions, and then take the cross product of the vectors between those points and the offset vertex to get the normal.
Let's say that you have the offset moved to its own function, which takes the coordinates as a parameter. Then you could do this:
float3 offsetPos = VertexOffset(v.vertex.xy);
float3 offsetPosX = offsetPos - VertexOffset(v.vertex.xy + float2(0.1, 0));
float3 offsetPosY = offsetPos - VertexOffset(v.vertex.xy + float2(0, 0.1));
v.vertex.xyz = offsetPos;
v.normal.xyz = cross(normalize(offsetPosX), normalize(offsetPosY));
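A minimal sketch of that idea applied to the question's wave (this assumes the height calculation is factored into a hypothetical WaveHeight helper, works in the object's x/z plane, and uses an illustrative epsilon of 0.1):
float WaveHeight(float2 xz)
{
    // same wave as in the question: height depends on squared distance from the center and time
    float offsetVert = (xz.x * xz.x + xz.y * xz.y) * 100;
    return _Scale * sin(-_Time.w * _Speed + offsetVert * _Frequency) / 100;
}
void vert(inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    float eps = 0.1;
    float h = WaveHeight(v.vertex.xz);
    float hX = WaveHeight(v.vertex.xz + float2(eps, 0));
    float hZ = WaveHeight(v.vertex.xz + float2(0, eps));
    v.vertex.y += h;
    // two tangent vectors along X and Z built from neighbouring samples; their cross product is the normal
    float3 tangentX = float3(eps, hX - h, 0);
    float3 tangentZ = float3(0, hZ - h, eps);
    v.normal = normalize(cross(tangentZ, tangentX));
    o.pos = v.vertex.xyz;
}
With the normal computed here, surf can drop its normal math and the o.Normal assignment entirely.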

Is it possible to make an Equirectangular (spherical) camera projection in Unity? [duplicate]

I want to convert from cube map [figure1] into an equirectangular panorama [figure2].
Figure1
Figure2
It is possible to go from spherical to cubic (by following: Convert 2:1 equirectangular panorama to cube map), but I'm lost on how to reverse it.
Figure2 is to be rendered into a sphere using Unity.
Assuming the input image is in the following cubemap format:
The goal is to project the image to the equirectangular format like so:
The conversion algorithm is rather straightforward.
In order to calculate the best estimate of the color at each pixel in the equirectangular image given a cubemap with 6 faces:
Firstly, calculate polar coordinates that correspond to each pixel in the spherical image.
Secondly, using the polar coordinates form a vector and determine on which face of the cubemap and which pixel of that face the vector lies; just like a raycast from the center of a cube would hit one of its sides and a specific point on that side.
Keep in mind that there are multiple methods to estimate the color of a pixel in the equirectangular image given a normalized coordinate (u,v) on a specific face of a cubemap. The most basic method, a very rough approximation used in this answer for simplicity's sake, is to round the coordinates to a specific pixel and use that pixel. Other, more advanced methods could calculate an average of a few neighbouring pixels.
The implementation of the algorithm will vary depending on the context. I did a quick implementation in Unity3D C# that shows how to implement the algorithm in a real-world scenario. It runs on the CPU and there is a lot of room for improvement, but it is easy to understand.
using UnityEngine;
public static class CubemapConverter
{
public static byte[] ConvertToEquirectangular(Texture2D sourceTexture, int outputWidth, int outputHeight)
{
Texture2D equiTexture = new Texture2D(outputWidth, outputHeight, TextureFormat.ARGB32, false);
float u, v; //Normalised texture coordinates, from 0 to 1, starting at lower left corner
float phi, theta; //Polar coordinates
int cubeFaceWidth, cubeFaceHeight;
cubeFaceWidth = sourceTexture.width / 4; //4 horizontal faces
cubeFaceHeight = sourceTexture.height / 3; //3 vertical faces
for (int j = 0; j < equiTexture.height; j++)
{
//Rows start from the bottom
v = 1 - ((float)j / equiTexture.height);
theta = v * Mathf.PI;
for (int i = 0; i < equiTexture.width; i++)
{
//Columns start from the left
u = ((float)i / equiTexture.width);
phi = u * 2 * Mathf.PI;
float x, y, z; //Unit vector
x = Mathf.Sin(phi) * Mathf.Sin(theta) * -1;
y = Mathf.Cos(theta);
z = Mathf.Cos(phi) * Mathf.Sin(theta) * -1;
float xa, ya, za;
float a;
a = Mathf.Max(new float[3] { Mathf.Abs(x), Mathf.Abs(y), Mathf.Abs(z) });
//Vector Parallel to the unit vector that lies on one of the cube faces
xa = x / a;
ya = y / a;
za = z / a;
Color color;
int xPixel, yPixel;
int xOffset, yOffset;
if (xa == 1)
{
//Right
xPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceWidth);
xOffset = 2 * cubeFaceWidth; //Offset
yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
yOffset = cubeFaceHeight; //Offset
}
else if (xa == -1)
{
//Left
xPixel = (int)((((za + 1f) / 2f)) * cubeFaceWidth);
xOffset = 0;
yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
yOffset = cubeFaceHeight;
}
else if (ya == 1)
{
//Up
xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
xOffset = cubeFaceWidth;
yPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceHeight);
yOffset = 2 * cubeFaceHeight;
}
else if (ya == -1)
{
//Down
xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
xOffset = cubeFaceWidth;
yPixel = (int)((((za + 1f) / 2f)) * cubeFaceHeight);
yOffset = 0;
}
else if (za == 1)
{
//Front
xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
xOffset = cubeFaceWidth;
yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
yOffset = cubeFaceHeight;
}
else if (za == -1)
{
//Back
xPixel = (int)((((xa + 1f) / 2f) - 1f) * cubeFaceWidth);
xOffset = 3 * cubeFaceWidth;
yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
yOffset = cubeFaceHeight;
}
else
{
Debug.LogWarning("Unknown face, something went wrong");
xPixel = 0;
yPixel = 0;
xOffset = 0;
yOffset = 0;
}
xPixel = Mathf.Abs(xPixel);
yPixel = Mathf.Abs(yPixel);
xPixel += xOffset;
yPixel += yOffset;
color = sourceTexture.GetPixel(xPixel, yPixel);
equiTexture.SetPixel(i, j, color);
}
}
equiTexture.Apply();
var bytes = equiTexture.EncodeToPNG();
Object.DestroyImmediate(equiTexture);
return bytes;
}
}
In order to utilize the GPU I created a shader that does the same conversion. It is much faster than running the conversion pixel by pixel on the CPU, but unfortunately Unity imposes resolution limitations on cubemaps, so its usefulness is limited in scenarios where a high-resolution input image is needed.
Shader "Conversion/CubemapToEquirectangular" {
Properties {
_MainTex ("Cubemap (RGB)", CUBE) = "" {}
}
Subshader {
Pass {
ZTest Always Cull Off ZWrite Off
Fog { Mode off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
//#pragma fragmentoption ARB_precision_hint_nicest
#include "UnityCG.cginc"
#define PI 3.141592653589793
#define TWOPI 6.283185307179587
struct v2f {
float4 pos : POSITION;
float2 uv : TEXCOORD0;
};
samplerCUBE _MainTex;
v2f vert( appdata_img v )
{
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.uv = v.texcoord.xy * float2(TWOPI, PI);
return o;
}
fixed4 frag(v2f i) : COLOR
{
float theta = i.uv.y;
float phi = i.uv.x;
float3 unit = float3(0,0,0);
unit.x = sin(phi) * sin(theta) * -1;
unit.y = cos(theta) * -1;
unit.z = cos(phi) * sin(theta) * -1;
return texCUBE(_MainTex, unit);
}
ENDCG
}
}
Fallback Off
}
The quality of the resulting images can be greatly improved by employing a more sophisticated method to estimate the color of a pixel during the conversion, by post-processing the resulting image, or both. For example, a larger image could be generated, blurred, and then downsampled to the desired size.
I created a simple Unity project with two editor wizards that show how to properly utilize either the C# code or the shader shown above. Get it here:
https://github.com/Mapiarz/CubemapToEquirectangular
Remember to set proper import settings in Unity for your input images:
Point filtering
Truecolor format
Disable mipmaps
Non Power of 2: None (only for 2DTextures)
Enable Read/Write (only for 2DTextures)
cube2sphere automates the entire process. Example:
$ cube2sphere front.jpg back.jpg right.jpg left.jpg top.jpg bottom.jpg -r 2048 1024 -fTGA -ostitched

Problems porting a GLSL shadertoy shader to unity

I'm currently trying to port a shadertoy.com shader (Atmospheric Scattering Sample, interactive demo with code) to Unity. The shader is written in GLSL and I have to start the editor with C:\Program Files\Unity\Editor>Unity.exe -force-opengl to make it render the shader (otherwise a "This shader cannot be run on this GPU" error comes up), but that's not a problem right now. The problem is with porting that shader to Unity.
The functions for the scattering etc. are all identical and "runnable" in my ported shader; the only thing is that the mainImage() function manages the camera, light directions and ray direction itself. This of course has to be changed so that Unity's camera position, view direction, light sources and directions are used.
The main function of the original looks like this:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
// default ray dir
vec3 dir = ray_dir( 45.0, iResolution.xy, fragCoord.xy );
// default ray origin
vec3 eye = vec3( 0.0, 0.0, 2.4 );
// rotate camera
mat3 rot = rot3xy( vec2( 0.0, iGlobalTime * 0.5 ) );
dir = rot * dir;
eye = rot * eye;
// sun light dir
vec3 l = vec3( 0, 0, 1 );
vec2 e = ray_vs_sphere( eye, dir, R );
if ( e.x > e.y ) {
discard;
}
vec2 f = ray_vs_sphere( eye, dir, R_INNER );
e.y = min( e.y, f.x );
vec3 I = in_scatter( eye, dir, e, l );
fragColor = vec4( I, 1.0 );
}
I've read through the documentation of that function and how it's supposed to work at https://www.shadertoy.com/howto .
Image shaders implement the mainImage() function in order to generate the procedural images by computing a color for each pixel. This function is expected to be called once per pixel, and it is the responsibility of the host application to provide the right inputs to it, get the output color from it and assign it to the screen pixel. The prototype is:
void mainImage( out vec4 fragColor, in vec2 fragCoord );
where fragCoord contains the pixel coordinates for which the shader needs to compute a color. The coordinates are in pixel units, ranging from 0.5 to resolution-0.5, over the rendering surface, where the resolution is passed to the shader through the iResolution uniform (see below).
The resulting color is gathered in fragColor as a four component vector, the last of which is ignored by the client. The result is gathered as an "out" variable in prevision of future addition of multiple render targets.
So in that function there are references to iGlobalTime to make the camera rotate with time and references to iResolution for the resolution. I've embedded the shader in a Unity shader and tried to fix and wire up dir, eye and l so that it works with Unity, but I'm completely stuck. I get some sort of picture which looks "related" to the original shader: (top is the original, bottom the current Unity state)
I'm not a shader professional; I only know some basics of OpenGL. For the most part I write game logic in C#, so all I could really do was look at other shader examples and at how I could get the data about the camera, light sources etc. into this code, but as you can see, nothing really works out.
I've copied the skeleton code for the shader from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights and some vectors from http://forum.unity3d.com/threads/glsl-shader.39629/ .
I hope someone can point me in some direction on how to fix this shader / correctly port it to Unity. Below is the current shader code; to reproduce it, create a new shader in a blank project, copy the code in, create a new material, assign the shader to that material, then add a sphere, put the material on it and add a directional light.
Shader "Unlit/AtmoFragShader" {
Properties{
_MainTex("Base (RGB)", 2D) = "white" {}
_LC("LC", Color) = (1,0,0,0) /* stuff from the testing shader, now really used */
_LP("LP", Vector) = (1,1,1,1)
}
SubShader{
Tags{ "Queue" = "Geometry" } //Is this even the right queue?
Pass{
//Tags{ "LightMode" = "ForwardBase" }
GLSLPROGRAM
/* begin port by copying in the constants */
// math const
const float PI = 3.14159265359;
const float DEG_TO_RAD = PI / 180.0;
const float MAX = 10000.0;
// scatter const
const float K_R = 0.166;
const float K_M = 0.0025;
const float E = 14.3; // light intensity
const vec3 C_R = vec3(0.3, 0.7, 1.0); // 1 / wavelength ^ 4
const float G_M = -0.85; // Mie g
const float R = 1.0; /* this is the radius of the sphere? this should be set from the geometry or something.. */
const float R_INNER = 0.7;
const float SCALE_H = 4.0 / (R - R_INNER);
const float SCALE_L = 1.0 / (R - R_INNER);
const int NUM_OUT_SCATTER = 10;
const float FNUM_OUT_SCATTER = 10.0;
const int NUM_IN_SCATTER = 10;
const float FNUM_IN_SCATTER = 10.0;
/* begin functions. These are out of the defines because they should be accessible to anyone. */
// angle : pitch, yaw
mat3 rot3xy(vec2 angle) {
vec2 c = cos(angle);
vec2 s = sin(angle);
return mat3(
c.y, 0.0, -s.y,
s.y * s.x, c.x, c.y * s.x,
s.y * c.x, -s.x, c.y * c.x
);
}
// ray direction
vec3 ray_dir(float fov, vec2 size, vec2 pos) {
vec2 xy = pos - size * 0.5;
float cot_half_fov = tan((90.0 - fov * 0.5) * DEG_TO_RAD);
float z = size.y * 0.5 * cot_half_fov;
return normalize(vec3(xy, -z));
}
// ray intersects sphere
// e = -b +/- sqrt( b^2 - c )
vec2 ray_vs_sphere(vec3 p, vec3 dir, float r) {
float b = dot(p, dir);
float c = dot(p, p) - r * r;
float d = b * b - c;
if (d < 0.0) {
return vec2(MAX, -MAX);
}
d = sqrt(d);
return vec2(-b - d, -b + d);
}
// Mie
// g : ( -0.75, -0.999 )
// 3 * ( 1 - g^2 ) 1 + c^2
// F = ----------------- * -------------------------------
// 2 * ( 2 + g^2 ) ( 1 + g^2 - 2 * g * c )^(3/2)
float phase_mie(float g, float c, float cc) {
float gg = g * g;
float a = (1.0 - gg) * (1.0 + cc);
float b = 1.0 + gg - 2.0 * g * c;
b *= sqrt(b);
b *= 2.0 + gg;
return 1.5 * a / b;
}
// Reyleigh
// g : 0
// F = 3/4 * ( 1 + c^2 )
float phase_reyleigh(float cc) {
return 0.75 * (1.0 + cc);
}
float density(vec3 p) {
return exp(-(length(p) - R_INNER) * SCALE_H);
}
float optic(vec3 p, vec3 q) {
vec3 step = (q - p) / FNUM_OUT_SCATTER;
vec3 v = p + step * 0.5;
float sum = 0.0;
for (int i = 0; i < NUM_OUT_SCATTER; i++) {
sum += density(v);
v += step;
}
sum *= length(step) * SCALE_L;
return sum;
}
vec3 in_scatter(vec3 o, vec3 dir, vec2 e, vec3 l) {
float len = (e.y - e.x) / FNUM_IN_SCATTER;
vec3 step = dir * len;
vec3 p = o + dir * e.x;
vec3 v = p + dir * (len * 0.5);
vec3 sum = vec3(0.0);
for (int i = 0; i < NUM_IN_SCATTER; i++) {
vec2 f = ray_vs_sphere(v, l, R);
vec3 u = v + l * f.y;
float n = (optic(p, v) + optic(v, u)) * (PI * 4.0);
sum += density(v) * exp(-n * (K_R * C_R + K_M));
v += step;
}
sum *= len * SCALE_L;
float c = dot(dir, -l);
float cc = c * c;
return sum * (K_R * C_R * phase_reyleigh(cc) + K_M * phase_mie(G_M, c, cc)) * E;
}
/* end functions */
/* vertex shader begins here*/
#ifdef VERTEX
const float SpecularContribution = 0.3;
const float DiffuseContribution = 1.0 - SpecularContribution;
uniform vec4 _LP;
varying vec2 TextureCoordinate;
varying float LightIntensity;
varying vec4 someOutput;
/* transient stuff */
varying vec3 eyeOutput;
varying vec3 dirOutput;
varying vec3 lOutput;
varying vec2 eOutput;
/* lighting stuff */
// i.e. one could #include "UnityCG.glslinc"
uniform vec3 _WorldSpaceCameraPos;
// camera position in world space
uniform mat4 _Object2World; // model matrix
uniform mat4 _World2Object; // inverse model matrix
uniform vec4 _WorldSpaceLightPos0;
// direction to or position of light source
uniform vec4 _LightColor0;
// color of light source (from "Lighting.cginc")
void main()
{
/* code from that example shader */
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
vec3 lightVec = normalize(_LP.xyz - ecPosition);
vec3 reflectVec = reflect(-lightVec, tnorm);
vec3 viewVec = normalize(-ecPosition);
/* copied from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights for testing stuff */
//I have no idea what I'm doing, but hopefully this computes some vectors which I need
mat4 modelMatrix = _Object2World;
mat4 modelMatrixInverse = _World2Object; // unity_Scale.w
// is unnecessary because we normalize vectors
vec3 normalDirection = normalize(vec3(
vec4(gl_Normal, 0.0) * modelMatrixInverse));
vec3 viewDirection = normalize(vec3(
vec4(_WorldSpaceCameraPos, 1.0)
- modelMatrix * gl_Vertex));
vec3 lightDirection;
float attenuation;
if (0.0 == _WorldSpaceLightPos0.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(vec3(_WorldSpaceLightPos0));
}
else // point or spot light
{
vec3 vertexToLightSource = vec3(_WorldSpaceLightPos0
- modelMatrix * gl_Vertex);
float distance = length(vertexToLightSource);
attenuation = 1.0 / distance; // linear attenuation
lightDirection = normalize(vertexToLightSource);
}
/* test port */
// default ray dir
//That's the direction of the camera here?
vec3 dir = viewDirection; //normalDirection;//viewDirection;// tnorm;//lightVec;//lightDirection;//normalDirection; //lightVec;//tnorm;//ray_dir(45.0, iResolution.xy, fragCoord.xy);
// default ray origin
//I think they mean the position of the camera here?
vec3 eye = vec3(_WorldSpaceCameraPos); //vec3(_WorldSpaceLightPos0); //// vec3(0.0, 0.0, 0.0); //_WorldSpaceCameraPos;//ecPosition; //vec3(0.0, 0.0, 2.4);
// rotate camera not needed, remove it
// sun light dir
//I think they mean the direction of our directional light?
vec3 l = lightDirection;//_LightColor0.xyz; //lightDirection; //normalDirection;//normalize(vec3(_WorldSpaceLightPos0));//lightVec;// vec3(0, 0, 1);
/* this computes the intersection of the ray and the sphere.. is this really needed?*/
vec2 e = ray_vs_sphere(eye, dir, R);
/* copy stuff sothat we can use it on the fragment shader, "discard" is only allowed in fragment shader,
so the rest has to be computed in fragment shader */
eOutput = e;
eyeOutput = eye;
dirOutput = dir;
lOutput = dir;
}
#endif
#ifdef FRAGMENT
uniform sampler2D _MainTex;
varying vec2 TextureCoordinate;
uniform vec4 _LC;
varying float LightIntensity;
/* transient port */
varying vec3 eyeOutput;
varying vec3 dirOutput;
varying vec3 lOutput;
varying vec2 eOutput;
void main()
{
/* real fragment */
if (eOutput.x > eOutput.y) {
//discard;
}
vec2 f = ray_vs_sphere(eyeOutput, dirOutput, R_INNER);
vec2 e = eOutput;
e.y = min(e.y, f.x);
vec3 I = in_scatter(eyeOutput, dirOutput, eOutput, lOutput);
gl_FragColor = vec4(I, 1.0);
/*vec4 c2;
c2.x = 1.0;
c2.y = 1.0;
c2.z = 0.0;
c2.w = 1.0f;
gl_FragColor = c2;*/
//gl_FragColor = c;
}
#endif
ENDGLSL
}
}
}
Any help is appreciated, sorry for the long post and explanations.
Edit: I just found out that the radius of the sphere does have an influence; a sphere with scale 2.0 in every direction gives a much better result. However, the picture is still completely independent of the viewing angle of the camera and of any lights, so this is nowhere near the ShaderToy version.
It looks like you are trying to render a 2D texture over a sphere, which needs a somewhat different approach. For what you are trying to do, I would apply the shader to a plane intersecting the sphere.
For general purposes, look at this article showing how to convert a ShaderToy shader to Unity3D.
Here are some of the steps it covers (a short example of the converted entry point follows the list):
Replace iGlobalTime shader input (“shader playback time in seconds”) with _Time.y
Replace iResolution.xy (“viewport resolution in pixels”) with _ScreenParams.xy
Replace vec2 types with float2, mat2 with float2x2 etc.
Replace vec3(1) shortcut constructors in which all elements have same value with explicit float3(1,1,1)
Replace texture2D() with tex2D()
Replace atan(x,y) with atan2(y,x) <- Note parameter ordering!
Replace mix() with lerp()
Replace matrix multiplication (*) with mul()
Remove third (bias) parameter from Texture2D lookups
mainImage(out vec4 fragColor, in vec2 fragCoord) is the fragment shader function, equivalent to float4 mainImage(float2 fragCoord : SV_POSITION) : SV_Target
UV coordinates in GLSL have 0 at the top and increase downwards, in HLSL 0 is at the bottom and increases upwards, so you may need to use uv.y = 1 – uv.y at some point.
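Putting a few of those rules together, a minimal sketch of what the converted entry point could look like for the question's mainImage() (this assumes the GLSL helpers ray_dir, rot3xy, ray_vs_sphere and in_scatter and the constants R / R_INNER have already been translated to HLSL with the same substitutions):
fixed4 frag (float4 fragCoord : SV_POSITION) : SV_Target
{
    // iResolution.xy -> _ScreenParams.xy, iGlobalTime -> _Time.y
    float3 dir = ray_dir(45.0, _ScreenParams.xy, fragCoord.xy);
    float3 eye = float3(0.0, 0.0, 2.4);
    float3x3 rot = rot3xy(float2(0.0, _Time.y * 0.5));
    dir = mul(rot, dir); // mat3 * vec3 becomes mul()
    eye = mul(rot, eye);
    float3 l = float3(0.0, 0.0, 1.0);
    float2 e = ray_vs_sphere(eye, dir, R);
    if (e.x > e.y) discard;
    float2 f = ray_vs_sphere(eye, dir, R_INNER);
    e.y = min(e.y, f.x);
    float3 I = in_scatter(eye, dir, e, l);
    return fixed4(I, 1.0);
}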
About this question:
Tags{ "Queue" = "Geometry" } //Is this even the right queue?
Queue controls the order in which objects are rendered. Geometry is one of the first; if you want your shader to render over everything you could use Overlay, for example. This topic is covered here.
Background - this render queue is rendered before any others. It is used for skyboxes and the like.
Geometry (default) - this is used for most objects. Opaque geometry uses this queue.
AlphaTest - alpha tested geometry uses this queue. It’s a separate queue from - Geometry one since it’s more efficient to render alpha-tested objects after all solid ones are drawn.
Transparent - this render queue is rendered after Geometry and AlphaTest, in back-to-front order. Anything alpha-blended (i.e. shaders that don’t write to depth buffer) should go here (glass, particle effects).
Overlay - this render queue is meant for overlay effects. Anything rendered last should go here (e.g. lens flares).
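For instance, if you did want the effect drawn on top of everything, the tag from the question could be changed like this (illustrative; pick the queue that matches how the effect should composite):
Tags{ "Queue" = "Overlay" }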

Get the tangent from a normal direction cg unity3d

I am writing a shader in Cg where I displace the vertices. Because I displace the vertices, I recalculate the normals so they point away from the surface and pass that information to the fragment function. In the shader I have also implemented a normal map, so now I am wondering: shouldn't I also recalculate the tangents? And is there a formula to calculate the tangent? I read that it is at a 90-degree angle to the normal; could I use the cross product for that?
I want to pass the right tangent to VOUT.tangentWorld. This is my vertex function:
VertexOutput vert (VertexInput i)
{
VertexOutput VOUT;
// put the vert in world space
float4 newVert = mul(_Object2World,i.vertex);
// create fake vertexes
float4 v1 = newVert + float4(0.05,0.0,0.0,0.0) ; // X
float4 v2 = newVert + float4(0.0,0.0,0.05,0.0) ; // Z
// assign the displacement map to uv coords
float4 disp = tex2Dlod(_Displacement, float4(newVert.x + (_Time.x * _Speed), newVert.z + (_Time.x * _Speed),0.0,0.0));
float4 disp2 = tex2Dlod(_Displacement, float4(v1.x + (_Time.x * _Speed), newVert.z + (_Time.x * _Speed),0.0,0.0));
float4 disp3 = tex2Dlod(_Displacement, float4(newVert.x + (_Time.x * _Speed), v2.z + (_Time.x * _Speed),0.0,0.0));
// offset the main vert
newVert.y += _Scale * disp.y;
// offset fake vertexes
v1 += _Scale * disp2.y;
v2 += _Scale * disp3.y;
// calculate the new normal direction
float3 newNor = cross(v2 - newVert, v1 - newVert);
// return world position of the vert for frag calculations
VOUT.posWorld = newVert;
// set the vert back in object space
float4 vertObjectSpace = mul(newVert,_World2Object);
// apply unity mvp matrix to the vert
VOUT.pos = mul(UNITY_MATRIX_MVP,vertObjectSpace);
//return the tex coords for frag calculations
VOUT.tex = i.texcoord;
// return normal, tangents, and binormal information for frag calculations
VOUT.normalWorld = normalize( mul(float4(newNor,0.0),_World2Object).xyz);
VOUT.tangentWorld = normalize( mul(_Object2World,i.tangent).xyz);
VOUT.binormalWorld = normalize( cross(VOUT.normalWorld, VOUT.tangentWorld) * i.tangent.w);
return VOUT;
}
Isn't it just the vector v2 - newVert or v1 - newVert, since those point along the surface? And how do I know which one of the two it is?
I've used something like the below:
Vector3 tangent = Vector3.Cross( normal, Vector3.forward );
if( tangent.magnitude == 0 ) {
tangent = Vector3.Cross( normal, Vector3.up );
}
Source: http://answers.unity3d.com/questions/133680/how-do-you-find-the-tangent-from-a-given-normal.html
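That snippet picks an arbitrary vector perpendicular to the normal. For the question's own vertex function, a hedged Cg sketch is to build the tangent from the displaced fake vertices instead, replacing the VOUT.tangentWorld and VOUT.binormalWorld lines (which offset direction matches the mesh's original tangent depends on how the UVs are laid out):
// v1 - newVert runs along the surface in the world X direction (use v2 - newVert for Z)
float3 newTan = normalize(v1.xyz - newVert.xyz);
// Gram-Schmidt: remove any component along the new normal so the basis stays orthogonal
float3 n = VOUT.normalWorld; // already normalized above
newTan = normalize(newTan - n * dot(n, newTan));
VOUT.tangentWorld = newTan;
VOUT.binormalWorld = normalize(cross(VOUT.normalWorld, VOUT.tangentWorld) * i.tangent.w);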