Multiple textures WebGL fragment shader problem

I'm using curtains.js for my web project.
I mention this because I tried to find similar questions on Stack Overflow about this topic, but due to the way curtains.js is made I wasn't able to reproduce the answers; curtains.js is very specific about how it ties into objects in the HTML DOM.
With that in mind, I would like to pose the following question:
Is there a way I could make my code more beautiful? Currently I have this:
export default [...]
varying vec2 vTextureCoord;
varying vec2 vDisplacedTextureCoord;
varying vec2 vDistortionEffect;
// custom uniforms
uniform float uDisplacementStrength;
uniform float uVideoQueue;
// our texture samplers
uniform sampler2D displacementTexture2;
uniform sampler2D sourceVideo0;
uniform sampler2D sourceVideo1;
uniform sampler2D sourceVideo2;
uniform sampler2D sourceVideo3;
uniform sampler2D sourceVideo4;
uniform sampler2D sourceVideo5;
uniform sampler2D sourceVideo6;
uniform sampler2D sourceVideo7;
uniform sampler2D sourceVideo8;
uniform sampler2D sourceVideo9;
uniform sampler2D canvasTexture;
void main (void) {
    vec2 textureCoords = vTextureCoord;
    vec4 mouseEffect = texture2D(canvasTexture, textureCoords);
    vec4 mapEffect = texture2D(displacementTexture2, textureCoords);
    vec4 colorEffect = texture2D(sourceVideo0, textureCoords);
    vec4 finalColor = texture2D(sourceVideo0, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    if (uVideoQueue == 1.0) {
        colorEffect = texture2D(sourceVideo1, textureCoords);
        finalColor = texture2D(sourceVideo1, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 2.0) {
        colorEffect = texture2D(sourceVideo2, textureCoords);
        finalColor = texture2D(sourceVideo2, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 3.0) {
        colorEffect = texture2D(sourceVideo3, textureCoords);
        finalColor = texture2D(sourceVideo3, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 4.0) {
        colorEffect = texture2D(sourceVideo4, textureCoords);
        finalColor = texture2D(sourceVideo4, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 5.0) {
        colorEffect = texture2D(sourceVideo5, textureCoords);
        finalColor = texture2D(sourceVideo5, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 6.0) {
        colorEffect = texture2D(sourceVideo6, textureCoords);
        finalColor = texture2D(sourceVideo6, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 7.0) {
        colorEffect = texture2D(sourceVideo7, textureCoords);
        finalColor = texture2D(sourceVideo7, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 8.0) {
        colorEffect = texture2D(sourceVideo8, textureCoords);
        finalColor = texture2D(sourceVideo8, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    } else if (uVideoQueue == 9.0) {
        colorEffect = texture2D(sourceVideo9, textureCoords);
        finalColor = texture2D(sourceVideo9, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
    }
    gl_FragColor = finalColor;
[...]
In my main script there is a uniform (uVideoQueue) that is constantly updated to tell the shader which texture it should use for the output; it ranges from 0 to 9 (the current number is the index of the specific video).
The sampler2D uniforms (0-9) are the videos that are in the DOM and need to be drawn.
As you can see in the code, on every draw call the shader has to check which sourceVideo sampler2D uniform it needs to sample for the fragment output.
Every if/else if block is identical to the one above it; only the name of the sampler2D uniform differs. There must be a way to have something like this:
vec4 colorEffect = texture2D(sourceVideo + i, textureCoords);
vec4 finalColor = texture2D(sourceVideo + i, [...]);
where i is the number in uVideoQueue.
The code as-is works fine, but I think all the if/else if statements make it more processor-intensive than a more elegant solution where only two lines are needed...
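As an aside, the reason the direct sourceVideo + i approach cannot work: in GLSL ES 1.0 (which this shader is written in, judging by texture2D and gl_FragColor), samplers may only be indexed by constant expressions. The nearest legal construct is an array of samplers indexed from a loop with constant bounds, which the compiler can unroll. A minimal sketch, assuming the ten videos could be bound to a single array uniform; sourceVideos and pickVideo are hypothetical names, curtains.js may not expose array binding, and driver support for this pattern in fragment shaders varies:
uniform sampler2D sourceVideos[10];
vec4 pickVideo(vec2 uv) {
    vec4 color = vec4(0.0);
    // constant bounds let the compiler unroll the loop, so indexing
    // sourceVideos[i] stays a constant-index-expression
    for (int i = 0; i < 10; i++) {
        if (float(i) == uVideoQueue) {
            color = texture2D(sourceVideos[i], uv);
        }
    }
    return color;
}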
Thanks!

Based on the official documentation's slideshow example code, you should use an additional activeTexture texture and update its source whenever you want.
Your fragment shader will then become a lot easier to write, using only the uActiveTexture sampler uniform.
Here is a minimal CodePen demonstrating the concept: https://codepen.io/martinlaxenaire/pen/YzpVYLE
Cheers,

Thank you for your replies!
This is my new FS:
void main (void) {
    vec2 textureCoords = vTextureCoord;
    vec4 mouseEffect = texture2D(canvasTexture, vDisplacedTextureCoord);
    vec4 mapEffect = texture2D(displacementTexture2, vDisplacedTextureCoord);
    vec4 colorEffect = texture2D(activeVideo, vDisplacedTextureCoord);
    vec4 finalColor = texture2D(activeVideo, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r * 0.2, 0.0));
    gl_FragColor = finalColor;
}
In my main script I create a new texture:
welcomeobj.mouseEffect.activeTexture = welcomeobj.mouseEffect.curtains.planes[0].createTexture({
    sampler: "activeVideo",
});
When the previous video is over, the main script sets a new source on the texture:
welcomeobj.mouseEffect.activeTexture.setSource(loaderobj.xhrarray[m].video);

Related

gl_FragDepth calculated to camera space

The depth fragment shader is A.frag:
#version 430 core
uniform float pointSize;
uniform mat4 projectMatrix;
in vec3 eyeSpacePos;
void main() {
    vec3 normal;
    normal.xy = gl_PointCoord.xy * vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    float mag = dot(normal.xy, normal.xy);
    if (mag > 1.0) discard;
    normal.z = sqrt(1.0 - mag);
    vec4 pixelEyePos = vec4(eyeSpacePos + normal * pointSize, 1.0f);
    vec4 pixelClipPos = projectMatrix * pixelEyePos;
    float ndcZ = pixelClipPos.z / pixelClipPos.w;
    gl_FragDepth = ndcZ;
}
The shader that accepts the depth map is B.frag:
void main() {
    float pixelDepth = texture(u_DepthTex, Texcoord).r;
    gl_FragDepth = outDepth;
}
How can I convert pixelDepth into camera space in B.frag? I have tried many times without success.
The NDC z coordinate is in the range [-1.0, 1.0]; the depth has to be in the depth range, which is by default [0.0, 1.0]:
gl_FragDepth = ndcZ * 0.5 + 0.5;
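To then get from the stored depth value back to camera (eye) space in B.frag, undo both mappings in reverse. A minimal sketch, assuming a standard perspective projectMatrix and the same #version 430 context as A.frag; the uNear/uFar uniforms (near and far clip distances) are assumptions, not part of the original code:
#version 430 core
uniform sampler2D u_DepthTex;
uniform float uNear; // near clip distance (assumed uniform)
uniform float uFar;  // far clip distance (assumed uniform)
in vec2 Texcoord;
void main() {
    float pixelDepth = texture(u_DepthTex, Texcoord).r; // stored in [0.0, 1.0]
    float ndcZ = pixelDepth * 2.0 - 1.0;                // back to [-1.0, 1.0]
    // invert the projection's non-linear z mapping; eyeZ is the
    // positive distance from the camera in eye space
    float eyeZ = 2.0 * uNear * uFar / (uFar + uNear - ndcZ * (uFar - uNear));
    gl_FragDepth = pixelDepth; // depth output unchanged; use eyeZ as needed
}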

Shader works just fine in the Unity Editor, but turns black in a WebGL build

I am working on a project which encodes sensor values at different positions into a 3D heatmap of a building. I use a shader for this purpose, and it works just fine in the Editor, but after I built the scene for WebGL it turned out black.
I have tried using constant loop indices, always including this shader in the project settings, etc., but none of these works. Here is some of the code:
v2f vert(appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.worldPos = mul(unity_ObjectToWorld, v.vertex);
    o.screenPos = ComputeScreenPos(o.vertex);
    UNITY_TRANSFER_FOG(o, o.vertex);
    return o;
}
//...
float2 boxIntersection(in float3 ro, in float3 rd, in float3 rad)
{
    float3 m = 1.0 / rd;
    float3 n = m * ro;
    float3 k = abs(m) * rad;
    float3 t1 = -n - k;
    float3 t2 = -n + k;
    float tN = max(max(t1.x, t1.y), t1.z);
    float tF = min(min(t2.x, t2.y), t2.z);
    if (tN > tF || tF < 0.0) return float2(-1.0, -1.0); // no intersection
    return float2(tN, tF);
}
//p in object space
float SampleValue(float3 p) {
    float totalValue = 0.0;
    float denom = 0.0;
    for (int i = 0; i < 34; ++i) { // _DataSize
        float4 sd = _SensorData[i];
        float dist = length(p - sd.xyz);
        totalValue += sd.w / (dist * dist);
        denom += 1.0 / (dist * dist);
    }
    if (denom == 0.0) {
        return 0.0;
    }
    return totalValue / denom;
}
float4 transferFunction(float value) {
    float tv = (value - _DataScale.x) / (_DataScale.y - _DataScale.x); // _DataScale.x, _DataScale.y
    float4 col = tex2D(_TransferTexture, float2(0.5, tv));
    col.w *= _Strength; // _Strength
    return float4(col.xyz * col.w, col.w);
}
float4 rayMarch(float3 ro, float3 rd, float dp) {
    float3 ro1 = mul(unity_WorldToObject, float4(ro, 1.0));
    float3 rd1 = mul(unity_WorldToObject, rd);
    float2 t = boxIntersection(ro1, rd1, float3(1, 1, 1) * 0.5);
    t.x = length(mul(unity_ObjectToWorld, float4(ro1 + rd1 * max(t.x, 0.0), 1.0)) - ro);
    t.y = length(mul(unity_ObjectToWorld, float4(ro1 + rd1 * t.y, 1.0)) - ro);
    t.y = min(t.y, dp);
    float4 acc = float4(0.0, 0.0, 0.0, 1.0);
    float totalDelta = (t.y - t.x);
    float delta = totalDelta / float(_RM_Samples - 1.0);
    float3 p = ro + t.x * rd;
    for (int i = 0; i < 34; ++i) { // _RM_Samples
        float v = SampleValue(p);
        float4 tf = transferFunction(v);
        float tr = exp(-tf.w * delta);
        acc.xyz += tf.xyz * acc.w * delta;
        acc.w *= tr;
        p += delta * rd;
    }
    return float4(acc.xyz, (1.0 - acc.w) * step(t.x, t.y));
}
fixed4 frag(v2f i) : SV_Target
{
    float2 tc = i.screenPos.xy / i.screenPos.w;
    float depth = UNITY_SAMPLE_DEPTH(tex2D(_CameraDepthTexture, tc));
    float eD = LinearEyeDepth(depth);
    float3 ro = _WorldSpaceCameraPos;
    float3 rd = normalize(i.worldPos - ro);
    float4 col = rayMarch(ro, rd, eD);
    //if (col.w < 1) col = float4(1, 0, 0, 1);
    //else col = float4(0, 1, 0, 1);
    if (wingCullPlaneValue(i.worldPos.xyz) == 0 || cullPlaneValue(i.worldPos.xyz) == 0) {
        discard;
    }
    UNITY_APPLY_FOG(i.fogCoord, col);
    return col;
}
Since this works fine in the Editor, I don't think there is any error in the boxIntersection or rayMarch functions. I wonder if there is anything special about how WebGL processes the pixels, and whether I have to tweak some code accordingly. I am new to WebGL and shaders, and would appreciate any help or advice; thanks in advance.
That's because of the step function.
Approximate it with your own sigmoid function (might be expensive):
float emulated_step(float a, float x) {
    return 1.0 / (1.0 + pow(1000.0, -(x - a) * 8192.0));
}
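A minimal usage sketch, assuming the step call at fault is the one in rayMarch's return value:
return float4(acc.xyz, (1.0 - acc.w) * emulated_step(t.x, t.y));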

How can I convert these fragColor code snippets from ShaderToy so that they will work in Unity?

I'm following this tutorial: https://www.youtube.com/watch?v=CzORVWFvZ28 to convert some code from ShaderToy to Unity. This is the shader that I'm attempting to convert: https://www.shadertoy.com/view/Ws23WD.
I saw that in the tutorial he was able to take his fragColor statement from ShaderToy and simply return a color in Unity instead. However, when I tried doing that with my code from ShaderToy, an error popped up about not being able to implicitly convert from float3 to float4. My color variable is declared as a float3, which must be what is causing the issue, but I need some help figuring out how to fix it.
I also noticed that the fragColor variable has an 'a' value in addition to the rgb values; would I use a float4 to take in the (r, g, b, a) values?
fixed4 frag (v2f i) : SV_Target
{
    //float2 uv = float2(fragCoord.x / iResolution.x, fragCoord.y / iResolution.y);
    float2 uv = float2(i.uv);
    uv -= 0.5;
    //uv /= float2(iResolution.y / iResolution.x, 1);
    float3 cam = float3(0, -0.15, -3.5);
    float3 dir = normalize(float3(uv, 1));
    float cam_a2 = sin(_Time.y) * pi * 0.1;
    cam.yz = rotate(cam.yz, cam_a2);
    dir.yz = rotate(dir.yz, cam_a2);
    float cam_a = _Time.y * pi * 0.1;
    cam.xz = rotate(cam.xz, cam_a);
    dir.xz = rotate(dir.xz, cam_a);
    float3 color = float3(0.16, 0.12, 0.10);
    float t = 0.00001;
    const int maxSteps = 128;
    for (int i = 0; i < maxSteps; ++i) {
        float3 p = cam + dir * t;
        float d = scene(p);
        if (d < 0.0001 * t) {
            color = float3(1.0, length(p) * (0.6 + (sin(_Time.y*3.0)+1.0) * 0.5 * 0.4), 0);
            break;
        }
        t += d;
    }
    //fragColor.rgb = color;
    return color;
    //fragColor.a = 1.0;
}
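For what it's worth, a minimal sketch of the fix being asked about: since the function's return type is fixed4 (bound to SV_Target), pad the float3 with an explicit alpha of 1.0, which mirrors the commented-out fragColor.a = 1.0 line:
return float4(color, 1.0); // rgb from the raymarch, alpha fixed at 1.0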

SCNProgram not affecting SCNFloor

In my experiments using the shader modifier, I saw that the array data could not be transferred to the shader.
Scenekit giving buffer size error while passing array data to uniform array in openGL shader
For this reason I decided to try SCNProgram. But now I realize that the shaders I added using SCNProgram do not work on SCNFloor.
Is there a particular reason for this problem?
Super simple shaders which I use for testing:
vertex shader
precision highp float;
attribute vec3 vertex;
uniform mat4 ModelViewProjectionMatrix;
void main()
{
    gl_Position = ModelViewProjectionMatrix * vec4(vertex, 1.0);
}
fragment shader
precision mediump float;
void main( void )
{
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
You can try making your own vertex and fragment shaders to do roughly the same thing. I've done something similar with WebGL / GLSL ES 2; you can also just paint it a solid color, or tint the existing floor:
uniform mat4 uCameraWM; //camera world matrix
uniform vec2 uCamAspFov; //x:aspect ratio y:tan(fov/2)
varying vec3 vVDw; //view direction
void main(){
    //construct frustum, scale and transform a unit quad
    vec4 viewDirWorld = vec4(
        position.x * uCamAspFov.x,
        position.y,
        -uCamAspFov.y, //move it to where 1:aspect fits
        0.
    );
    vVDw = ( uCameraWM * viewDirWorld ).xyz; //transform to world
    gl_Position = vec4( position.xy , 0. , 1. ); //draw a full screen quad
}
Frag (ignore the cruft; the point is to intersect the view-direction ray with a plane and map it, and you can look up a cubemap in the sky portion instead of discarding it):
uniform float uHeight;
uniform sampler2D uTexDiff;
uniform sampler2D uTexNorm;
uniform sampler2D uTexMask;
varying vec2 vUv;
varying vec3 vVDw;
struct Plane
{
    vec3 point;
    vec3 normal;
    float d;
};
bool rpi( in Plane p , in vec3 p0 , in vec3 vd , out vec3 wp )
{
    float t;
    t = -( dot( p0 , p.normal ) + p.d ) / dot( vd , p.normal );
    wp = p0 + t * vd;
    return t > 0.;
}
void main(){
    Plane plane;
    plane.point = vec3( 0. , uHeight , 0. );
    plane.normal = vec3( 0. , 1. , .0 );
    plane.d = -dot( plane.point , plane.normal );
    vec3 ld = normalize( vec3(1.,1.,1.) );
    vec3 norm = plane.normal;
    float ndl = dot( norm , ld );
    vec2 uv;
    vec3 wp;
    vec3 viewDir = normalize( vVDw );
    vec3 h = normalize((-viewDir + ld));
    float spec = clamp( dot( h , norm ) , .0 , 1. );
    // spec = pow( spec , 5. );
    if( dot(plane.normal , cameraPosition) < 0. ) discard;
    if( !rpi( plane , cameraPosition , viewDir , wp ) ) discard;
    uv = wp.xz;
    vec2 uvm = uv * .0105;
    vec2 uvt = uv * .2;
    vec4 tmask = texture2D( uTexMask , uvm );
    vec2 ch2Scale = vec2(1.8,1.8);
    vec2 ch2Scale2 = vec2(1.6,1.6);
    vec2 t1 = uvt * ch2Scale2 - vec2(tmask.z , -tmask.z);
    // vec2 t2 = uvt * ch2Scale + tmask.z;
    // vec2 t3 = uvt + vec2(0. , mask.z-.5) * 1.52;
    vec3 diffLevels = ( texture2D( uTexDiff , t1 ) ).xyz;
    // vec3 diffuse2 = ( texture2D( uTexDiff, fract(t1) * vec2(.5,1.) + vec2(.5,.0) ) ).xyz;
    // vec3 diffuse1 = ( texture2D( uTexDiff, fract(t2) * vec2(.5,1.) ) ).xyz;
    // vec4 normalMap2 = texture2D( uTexNorm, fract(t1) * vec2(.5,1.) + vec2(.5,.0) );
    // vec4 normalMap1 = texture2D( uTexNorm, fract(t2) * vec2(.5,1.) );
    float diffLevel = mix(diffLevels.y, diffLevels.x, tmask.x);
    diffLevel = mix( diffLevel , diffLevels.z, tmask.y );
    // vec3 normalMix = mix(normalMap1.xyz, normalMap2.xyz, tmask.x);
    // vec2 g = fract(uv*.1) - .5;
    // float e = .1;
    // g = -abs( g ) + e;
    float fog = distance( wp.xz , cameraPosition.xz );
    // float r = max( smoothstep( 0.,e,g.x) , smoothstep( 0.,e,g.y) );
    gl_FragColor.w = 1.;
    gl_FragColor.xyz = vec3(tmask.xxx);
    gl_FragColor.xyz = vec3(diffLevel) * ndl + spec * .5;
}
But overall, the better advice would be just to give up on SceneKit and save yourself a TON of frustration.
Finally, Apple Developer Technical Support answered the question I asked about this issue.
Here is the answer they gave:
Unfortunately, there is not a way to shade floor as such. The SceneKit team admits that SCNFloor is a different kind of object that is not intended for use with SCNProgram. Furthermore, using the .fragment shader entry point does not work either (such as:)
func setFragmentEntryPoint( _ node: SCNNode ) {
    print( #function + " setting fragment entry point for \(node)" )
    DispatchQueue.main.asyncAfter( deadline: DispatchTime.now() + DispatchTimeInterval.milliseconds( 2500 ) ) {
        let geometry = node.geometry!
        let dict: [SCNShaderModifierEntryPoint:String] = [.fragment :
            "_output.color = vec4( 0.0, 1.0, 0.0, 1.0 );"]
        geometry.shaderModifiers = dict
    }
}
Though the SceneKit team considered this behavior to be expected, you may still file a bug report, which in this case will be interpreted as an API enhancement request.

OpenGL ES rotate texture

I have the following fragment shader:
varying highp vec2 coordinate;
precision mediump float;
uniform vec4 maskC;
uniform float threshold;
uniform sampler2D videoframe;
uniform sampler2D videosprite;
uniform vec4 mask;
uniform vec4 maskB;
uniform int recording;
vec3 normalize(vec3 color, float meanr)
{
    return color*vec3(0.75 + meanr, 1., 1. - meanr);
}
void main() {
    float d;
    float dB;
    float dC;
    float meanr;
    float meanrB;
    float meanrC;
    float minD;
    vec4 pixelColor;
    vec4 spriteColor;
    pixelColor = texture2D(videoframe, coordinate);
    spriteColor = texture2D(videosprite, coordinate);
    meanr = (pixelColor.r + mask.r)/8.;
    meanrB = (pixelColor.r + maskB.r)/8.;
    meanrC = (pixelColor.r + maskC.r)/8.;
    d = distance(normalize(pixelColor.rgb, meanr), normalize(mask.rgb, meanr));
    dB = distance(normalize(pixelColor.rgb, meanrB), normalize(maskB.rgb, meanrB));
    dC = distance(normalize(pixelColor.rgb, meanrC), normalize(maskC.rgb, meanrC));
    minD = min(d, dB);
    minD = min(minD, dC);
    gl_FragColor = spriteColor;
    if (minD > threshold) {
        gl_FragColor = pixelColor;
    }
}
Now, depending on whether recording is 0 or 1, I want to rotate the uniform sampler2D videosprite 180 degrees (a reflection in the x-axis, i.e. flipped vertically). How can I do that?
I found the function glRotatef(), but how do I specify that I want to rotate videosprite and not videoframe?
Err, can't you just modify the way videosprite is accessed in the fragment shader?
vec2 c2;
if (recording == 0) {
    c2 = coordinate;
} else {
    c2 = vec2(coordinate.x, 1.0 - coordinate.y);
}
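Then sample the sprite with the adjusted coordinate instead of the original one; a one-line completion of the snippet above, dropped into the question's main():
spriteColor = texture2D(videosprite, c2); // flipped when recording != 0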