iPhone GLSL dynamic branching behaviour

I know that branching is not a good idea when writing shaders, but I haven't thought of a way to avoid it. Here is my fragment shader code:
precision highp float;
varying vec4 v_fragmentColor;
varying vec4 v_pos;
uniform int u_numberOfParticles;
uniform mat4 u_MVPMatrix;
uniform vec3 u_waterVertices[100];
const float threshold = 0.15;
void main()
{
vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
vec2 currPos = v_pos.xy;
float accum = 0.0;
vec3 normal = vec3(0, 0, 0);
for ( int i = 0; i < u_numberOfParticles; ++i )
{
// Some calculations here
}
normal = normalize(normal);
float normalizeToEdge = 1.0 - (accum - threshold) / 2.0;
if (normalizeToEdge < 0.3)
finalColor = vec4( 0.1, normalizeToEdge + 0.5, 0.9-normalizeToEdge*0.4, 1.0);
if ( normalizeToEdge < 0.2 )
{
finalColor = vec4( 120.0/255.0, 245.0/255.0, 245.0/255.0, 1.0);
float shade = mix( 0.7, 1.0, normal.x);
finalColor *= shade;
}
gl_FragColor = vec4(finalColor);
}
The problem is here:
for ( int i = 0; i < u_numberOfParticles; ++i )
{
// Some calculations here
}
Changing it to:
for ( int i = 0; i < 2; ++i )
{
// Some calculations here
}
doubles the framerate even though u_numberOfParticles is also 2.
Changing it to:
for ( int i = 0; i < 100; ++i )
{
if( i == u_numberOfParticles)
break;
// Some calculations here
}
doesn't provide any fps improvements.
How can I cope with such shader behaviour? Are there any techniques to avoid this branching? I think it would be inefficient to write 50 different shaders for the different numbers of particles... Any help will be appreciated.
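One common workaround is to bake the particle count in at compile time: the host code prepends a #define to this same shader source and compiles a variant only for each count actually used, instead of hand-writing them. A rough sketch of just the loop, under the assumption that the app injects NUM_PARTICLES before compilation:
// Sketch: the app prepends "#define NUM_PARTICLES <count>" to the source string
// before glCompileShader, so the bound is a compile-time constant the driver can unroll.
#ifndef NUM_PARTICLES
#define NUM_PARTICLES 2
#endif
for ( int i = 0; i < NUM_PARTICLES; ++i )
{
// Some calculations here
}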

Related

gl_FragDepth calculated to camera space

The depth fragment shader is A.frag:
#version 430 core
uniform float pointSize;
uniform mat4 projectMatrix;
in vec3 eyeSpacePos;
void main(){
vec3 normal;
normal.xy = gl_PointCoord.xy * vec2(2.0, -2.0) + vec2(-1.0,1.0);
float mag = dot(normal.xy, normal.xy);
if(mag > 1.0) discard;
normal.z = sqrt(1.0 - mag);
vec4 pixelEyePos = vec4(eyeSpacePos + normal * pointSize, 1.0f);
vec4 pixelClipPos = projectMatrix * pixelEyePos;
float ndcZ = pixelClipPos.z / pixelClipPos.w;
gl_FragDepth = ndcZ;
}
The shader that receives the depth map is B.frag:
void main(){
float pixelDepth = texture(u_DepthTex, Texcoord).r;
gl_FragDepth = outDepth; // outDepth: the camera-space value I want to compute from pixelDepth
}
How can I convert pixelDepth into camera space in B.frag? I have tried many times without success.
The NDC coordinate is in the range [-1.0, 1.0]; the depth has to be in the depth range, which is by default [0.0, 1.0]:
gl_FragDepth = ndcZ * 0.5 + 0.5;
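To go from that stored depth back to a camera-space (eye-space) value in B.frag, the same mapping has to be inverted. A minimal sketch, assuming a standard perspective projectMatrix and hypothetical uNear/uFar uniforms matching the planes used to build it:
#version 430 core
uniform sampler2D u_DepthTex;
uniform float uNear; // hypothetical: near plane of projectMatrix
uniform float uFar;  // hypothetical: far plane of projectMatrix
in vec2 Texcoord;
void main(){
    float pixelDepth = texture(u_DepthTex, Texcoord).r; // window-space depth in [0, 1]
    float ndcZ = pixelDepth * 2.0 - 1.0;                // back to NDC [-1, 1]
    // invert the perspective projection's depth mapping: positive eye-space distance
    float eyeZ = 2.0 * uNear * uFar / (uFar + uNear - ndcZ * (uFar - uNear));
    gl_FragDepth = pixelDepth; // pass the depth through; eyeZ is the camera-space value
}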

Multiple textures webgl frag shader problem

I'm using curtains.js for my web project.
I'm telling you this because I tried to find similar questions on Stack Overflow about this topic, but due to the way curtains.js is made, I wasn't able to reproduce the answers. curtains.js is very specific about the objects in the HTML DOM.
With that in mind, I would like to pose the following question:
Is there a way I could make my code more beautiful? Currently I have this:
export default [...]
varying vec2 vTextureCoord;
varying vec2 vDisplacedTextureCoord;
varying vec2 vDistortionEffect;
// custom uniforms
uniform float uDisplacementStrength;
uniform float uVideoQueue;
// our textures samplers
uniform sampler2D displacementTexture2;
uniform sampler2D sourceVideo0;
uniform sampler2D sourceVideo1;
uniform sampler2D sourceVideo2;
uniform sampler2D sourceVideo3;
uniform sampler2D sourceVideo4;
uniform sampler2D sourceVideo5;
uniform sampler2D sourceVideo6;
uniform sampler2D sourceVideo7;
uniform sampler2D sourceVideo8;
uniform sampler2D sourceVideo9;
uniform sampler2D canvasTexture;
void main (void) {
vec2 textureCoords = vTextureCoord;
vec4 mouseEffect = texture2D(canvasTexture, textureCoords);
vec4 mapEffect = texture2D(displacementTexture2, textureCoords);
vec4 colorEffect = texture2D(sourceVideo0, textureCoords);
vec4 finalColor = texture2D(sourceVideo0, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
if (uVideoQueue == 1.0) {
colorEffect = texture2D(sourceVideo1, textureCoords);
finalColor = texture2D(sourceVideo1, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 2.0) {
colorEffect = texture2D(sourceVideo2, textureCoords);
finalColor = texture2D(sourceVideo2, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 3.0) {
colorEffect = texture2D(sourceVideo3, textureCoords);
finalColor = texture2D(sourceVideo3, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 4.0) {
colorEffect = texture2D(sourceVideo4, textureCoords);
finalColor = texture2D(sourceVideo4, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 5.0) {
colorEffect = texture2D(sourceVideo5, textureCoords);
finalColor = texture2D(sourceVideo5, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 6.0) {
colorEffect = texture2D(sourceVideo6, textureCoords);
finalColor = texture2D(sourceVideo6, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 7.0) {
colorEffect = texture2D(sourceVideo7, textureCoords);
finalColor = texture2D(sourceVideo7, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 8.0) {
colorEffect = texture2D(sourceVideo8, textureCoords);
finalColor = texture2D(sourceVideo8, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
} else if (uVideoQueue == 9.0) {
colorEffect = texture2D(sourceVideo9, textureCoords);
finalColor = texture2D(sourceVideo9, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0));
}
gl_FragColor = finalColor;
[...]
In my main script there is a uniform (uVideoQueue) that is constantly updated to tell the shader which texture it should use for the output; it ranges from 0 to 9 (the current value is the number of the specific video).
The sampler2D uniforms (0-9) are the videos that are in the DOM and need to be drawn.
As you can see in the code, on every draw call the shader has to check which sourceVideo sampler2D uniform it needs to sample.
As you can see, every if/else if branch is the same as the one above it; only the name of the sampler2D uniform is different. There must be a way to have something like this:
vec4 colorEffect = texture2D(sourceVideo **+ i**, textureCoords);
vec4 finalColor = texture2D(sourceVideo **+ i**, [...]);
where i is the number uVideoQueue.
The code as is works fine, but I think it is more processor-intensive with all the if/else if statements than a more elegant solution where only two lines are needed...
Thanks!
Based on the official documentation's slideshow example code, you should use an additional activeTexture texture and update its source whenever you want.
Your fragment shader will then become a lot easier to write, using only the uActiveTexture sampler uniform.
Here is a minimal CodePen demonstrating the concept: https://codepen.io/martinlaxenaire/pen/YzpVYLE
Cheers,
Thank you for your replies!
This is my new FS:
void main (void) {
vec2 textureCoords = vTextureCoord;
vec4 mouseEffect = texture2D(canvasTexture, vDisplacedTextureCoord);
vec4 mapEffect = texture2D(displacementTexture2, vDisplacedTextureCoord);
vec4 colorEffect = texture2D(activeVideo, vDisplacedTextureCoord);
vec4 finalColor = texture2D(activeVideo, vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r * 0.2, 0.0));
gl_FragColor = finalColor;
}
In my main script I create a new texture:
welcomeobj.mouseEffect.activeTexture = welcomeobj.mouseEffect.curtains.planes[0].createTexture({
sampler: "activeVideo",
});
When the previous video is over, the main script sets a new source on the texture:
welcomeobj.mouseEffect.activeTexture.setSource(loaderobj.xhrarray[m].video);
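An alternative worth noting: if a WebGL2 context is available, a sampler2DArray removes both the if/else chain and the need to swap texture sources, at the cost of uploading the videos as layers of one array texture. A minimal GLSL ES 3.00 sketch; this is not curtains.js-specific, and uSourceVideos plus the layer upload are assumptions:
#version 300 es
precision highp float;
// hypothetical array texture: all ten videos uploaded as its layers
uniform mediump sampler2DArray uSourceVideos;
uniform float uVideoQueue; // 0.0 - 9.0, selects the layer
uniform sampler2D displacementTexture2;
uniform sampler2D canvasTexture;
in vec2 vTextureCoord;
in vec2 vDisplacedTextureCoord;
out vec4 fragColor;
void main() {
    vec4 mouseEffect = texture(canvasTexture, vTextureCoord);
    vec4 mapEffect   = texture(displacementTexture2, vTextureCoord);
    vec4 colorEffect = texture(uSourceVideos, vec3(vTextureCoord, uVideoQueue));
    fragColor = texture(uSourceVideos,
        vec3(vDisplacedTextureCoord + vec2(mapEffect.r * mouseEffect.r * colorEffect.r, 0.0),
             uVideoQueue));
}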

Shader works just fine in Unity Editor, but became black in WebGL build

I am working on a project which encodes sensor values at different positions into a 3D heatmap of a building. I use a shader for this purpose, and it works just fine in the Editor, but after building the scene for WebGL the output turned out black.
I have tried using constant loop indices, always including this shader in the project settings, etc., but none of these works. Here is some of the code:
v2f vert(appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.worldPos = mul(unity_ObjectToWorld, v.vertex);
o.screenPos = ComputeScreenPos(o.vertex);
UNITY_TRANSFER_FOG(o,o.vertex);
return o;
}
//...
float2 boxIntersection(in float3 ro, in float3 rd, in float3 rad)
{
float3 m = 1.0 / rd;
float3 n = m * ro;
float3 k = abs(m) * rad;
float3 t1 = -n - k;
float3 t2 = -n + k;
float tN = max(max(t1.x, t1.y), t1.z);
float tF = min(min(t2.x, t2.y), t2.z);
if (tN > tF || tF < 0.0) return float2(-1.0, -1.0); // no intersection
return float2(tN, tF);
}
//p in object space
float SampleValue(float3 p) {
float totalValue = 0.0;
float denom = 0.0;
for (int i = 0; i < 34; ++i) { // _DataSize
float4 sd = _SensorData[i];
float dist = length(p - sd.xyz);
totalValue += sd.w / (dist * dist);
denom += 1.0 / (dist * dist);
}
if (denom == 0.0) {
return 0.0;
}
return totalValue / denom;
}
float4 transferFunction(float value) {
float tv = (value - _DataScale.x) / (_DataScale.y - _DataScale.x); // _DataScale.x, _DataScale.y
float4 col = tex2D(_TransferTexture, float2(0.5, tv));
col.w *= _Strength; // _Strength
return float4(col.xyz * col.w, col.w);
}
float4 rayMarch(float3 ro, float3 rd, float dp) {
float3 ro1 = mul(unity_WorldToObject, float4(ro, 1.0));
float3 rd1 = mul(unity_WorldToObject, rd);
float2 t = boxIntersection(ro1, rd1, float3(1, 1, 1) * 0.5);
t.x = length(mul(unity_ObjectToWorld, float4(ro1 + rd1 * max(t.x, 0.0), 1.0)) - ro);
t.y = length(mul(unity_ObjectToWorld, float4(ro1 + rd1 * t.y, 1.0)) - ro);
t.y = min(t.y, dp);
float4 acc = float4(0.0, 0.0, 0.0, 1.0);
float totalDelta = (t.y - t.x);
float delta = totalDelta / float(_RM_Samples - 1.0);
float3 p = ro + t.x * rd;
for (int i = 0; i < 34; ++i) { // _RM_Samples
float v = SampleValue(p);
float4 tf = transferFunction(v);
float tr = exp(-tf.w * delta);
acc.xyz += tf.xyz * acc.w * delta;
acc.w *= tr;
p += delta * rd;
}
return float4(acc.xyz, (1.0 - acc.w) * step(t.x, t.y));
}
fixed4 frag(v2f i) : SV_Target
{
float2 tc = i.screenPos.xy / i.screenPos.w;
float depth = UNITY_SAMPLE_DEPTH(tex2D(_CameraDepthTexture, tc));
float eD = LinearEyeDepth(depth);
float3 ro = _WorldSpaceCameraPos;
float3 rd = normalize(i.worldPos - ro);
float4 col = rayMarch(ro, rd, eD);
//if (col.w < 1) col = float4(1, 0, 0, 1);
//else col = float4(0, 1, 0, 1);
if (wingCullPlaneValue(i.worldPos.xyz) == 0 || cullPlaneValue(i.worldPos.xyz) == 0) {
discard;
}
UNITY_APPLY_FOG(i.fogCoord, col);
return col;
}
Since this works fine in the Editor, I don't think there is any error in the boxIntersection or rayMarch functions. I wonder if there is anything special about how WebGL processes pixels, and whether I have to tweak some code accordingly. I am new to WebGL and shaders, and would appreciate any help or advice. Thanks in advance.
That's because of the step function.
Approximate it with your own sigmoid function (might be expensive):
float emulated_step(float a, float x){
return 1.0 / (1.0 + pow(1000.0, -(x - a) * 8192.0));
}
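Used as a drop-in for the step(t.x, t.y) factor at the end of rayMarch, this would look roughly like:
// was: (1.0 - acc.w) * step(t.x, t.y)
float hitMask = emulated_step(t.x, t.y);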

SCNProgram not affecting SCNFloor

In my experiments using the shader modifier, I saw that the array data could not be transferred to the shader.
Scenekit giving buffer size error while passing array data to uniform array in openGL shader
For this reason I decided to try SCNProgram. But now I realize that the shaders I added using SCNProgram do not work on SCNFloor.
Is there a particular reason for this problem?
Super simple shaders which I use for testing:
vertex shader
precision highp float;
attribute vec3 vertex;
uniform mat4 ModelViewProjectionMatrix;
void main()
{
gl_Position = ModelViewProjectionMatrix * vec4(vertex, 1.0);
}
fragment shader
precision mediump float;
void main( void )
{
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
You can try to make your own vertex and fragment shaders to do roughly the same thing. I've done something similar with WebGL / GLSL ES 2; or just paint it a solid color, or tint the existing floor:
uniform mat4 uCameraWM; //camera world matrix
uniform vec2 uCamAspFov; //x:aspect ratio y:tan(fov/2)
varying vec3 vVDw; //view direction
void main(){
//construct frustum, scale and transform a unit quad
vec4 viewDirWorld = vec4(
position.x * uCamAspFov.x,
position.y,
-uCamAspFov.y, //move it to where 1:aspect fits
0.
);
vVDw = ( uCameraWM * viewDirWorld ).xyz; //transform to world
gl_Position = vec4( position.xy , 0. , 1. ); //draw a full screen quad
}
Frag:
(Ignore the cruft; the point is to intersect the view-direction ray with a plane and map it. You can look up a cubemap in the sky portion instead of discarding it.)
uniform float uHeight;
uniform sampler2D uTexDiff;
uniform sampler2D uTexNorm;
uniform sampler2D uTexMask;
varying vec2 vUv;
varying vec3 vVDw;
struct Plane
{
vec3 point;
vec3 normal;
float d;
};
bool rpi( in Plane p , in vec3 p0 , in vec3 vd , out vec3 wp )
{
float t;
t = -( dot( p0 , p.normal ) + p.d ) / dot( vd , p.normal );
wp = p0 + t * vd;
return t > 0. ? true : false;
}
void main(){
Plane plane;
plane.point = vec3( 0. , uHeight , 0. );
plane.normal = vec3( 0. , 1. , .0 );
plane.d = -dot( plane.point , plane.normal );
vec3 ld = normalize( vec3(1.,1.,1.) );
vec3 norm = plane.normal;
float ndl = dot( norm , ld ) ;
vec2 uv;
vec3 wp;
vec3 viewDir = normalize( vVDw );
vec3 h = normalize((-viewDir + ld));
float spec = clamp( dot( h , norm ) , .0 , 1. );
// spec = pow( spec , 5. );
if( dot(plane.normal , cameraPosition) < 0. ) discard;
if( !rpi( plane , cameraPosition , viewDir , wp ) ) discard;
uv = wp.xz;
vec2 uvm = uv * .0105;
vec2 uvt = uv * .2;
vec4 tmask = texture2D( uTexMask , uvm );
vec2 ch2Scale = vec2(1.8,1.8);
vec2 ch2Scale2 = vec2(1.6,1.6);
vec2 t1 = uvt * ch2Scale2 - vec2(tmask.z , -tmask.z) ;
// vec2 t2 = uvt * ch2Scale + tmask.z ;
// vec2 t3 = uvt + vec2(0. , mask.z-.5) * 1.52;
vec3 diffLevels = ( texture2D( uTexDiff , t1 ) ).xyz;
// vec3 diffuse2 = ( texture2D( uTexDiff, fract(t1) * vec2(.5,1.) + vec2(.5,.0) ) ).xyz;
// vec3 diffuse1 = ( texture2D( uTexDiff, fract(t2) * vec2(.5,1.) ) ).xyz;
// vec4 normalMap2 = texture2D( uTexNorm, fract(t1) * vec2(.5,1.) + vec2(.5,.0) );
// vec4 normalMap1 = texture2D( uTexNorm, fract(t2) * vec2(.5,1.) );
float diffLevel = mix(diffLevels.y, diffLevels.x, tmask.x);
diffLevel = mix( diffLevel , diffLevels.z, tmask.y );
// vec3 normalMix = mix(normalMap1.xyz, normalMap2.xyz, tmask.x);
// vec2 g = fract(uv*.1) - .5;
// float e = .1;
// g = -abs( g ) + e;
float fog = distance( wp.xz , cameraPosition.xz );
// float r = max( smoothstep( 0.,e,g.x) , smoothstep( 0.,e,g.y) );
gl_FragColor.w = 1.;
gl_FragColor.xyz = vec3(tmask.xxx);
gl_FragColor.xyz = vec3(diffLevel) * ndl + spec * .5;
}
But overall, the better advice would be just to give up on scenekit and save yourself a TON of frustration.
Finally Apple Developer Technical Support answered the question I asked about this issue.
Here is the answer they gave:
Unfortunately, there is not a way to shade floor as such. The SceneKit team admits that SCNFloor is a different kind of object that is not intended for use with SCNProgram. Furthermore, using the .fragment shader entry point does not work either (such as:)
func setFragmentEntryPoint( _ node: SCNNode ) {
print( #function + " setting fragment entry point for \(node)" )
DispatchQueue.main.asyncAfter( deadline: DispatchTime.now() + DispatchTimeInterval.milliseconds( 2500 ) ) {
let geometry = node.geometry!
let dict: [SCNShaderModifierEntryPoint:String] = [.fragment :
"_output.color = vec4( 0.0, 1.0, 0.0, 1.0 );"]
geometry.shaderModifiers = dict
}
}
Though the SceneKit team considered this behavior to be expected, you may still file a bug report, which in this case will be interpreted as an API enhancement request.

iPhone GLSL dynamic branching issue

I am trying to pass an array of vec3 as a uniform and then iterate through it for each pixel. The size of the array varies with the situation, so I can't write the loop with a constant number of iterations.
Here is the code:
precision highp float;
precision highp int;
varying vec4 v_fragmentColor;
varying vec4 v_pos;
uniform int u_numberOfParticles;
const int numberOfAccumsToCapture = 3;
const float threshold = 0.15;
const float gooCoeff = 1.19;
uniform mat4 u_MVPMatrix;
uniform vec3 u_waterVertices[100];
void main()
{
vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
vec2 currPos = v_pos.xy;
float accum = 0.0;
vec3 normal = vec3(0, 0, 0);
for ( int i = 0; i < u_numberOfParticles; ++i )
{
vec2 dir2 = u_waterVertices[i].xy - currPos.xy;
vec3 dir3 = vec3(dir2, 0.1);
float q = dot(dir2, dir2);
accum += u_waterVertices[i].z / q;
}
float normalizeToEdge = 1.0 - (accum - threshold) / 2.0;
if (normalizeToEdge < 0.4)
finalColor = vec4( 0.1, normalizeToEdge + 0.5, 0.9-normalizeToEdge*0.4, 1.0);
if ( normalizeToEdge < 0.2 )
{
finalColor = vec4( 120.0/255.0, 245.0/255.0, 245.0/255.0, 1.0);
float shade = mix( 0.7, 1.0, normal.x);
finalColor *= shade;
}
gl_FragColor = vec4(finalColor);
}
The problem is here:
for ( int i = 0; i < u_numberOfParticles; ++i )
{
vec2 dir2 = u_waterVertices[i].xy - currPos.xy;
vec3 dir3 = vec3(dir2, 0.1);
float q = dot(dir2, dir2);
accum += u_waterVertices[i].z / q;
}
When I make the for-loop like this
for ( int i = 0; i < 2; ++i )
{
//...
}
I get double the framerate even though u_numberOfParticles is also 2.
Making it like this:
for ( int i = 0; i < 100; ++i )
{
if (i == u_numberOfParticles)
break;
//...
}
gives no improvement.
The only way I know to cope with this situation is to create multiple shaders, but the size of the array may vary from 1 to 40, and making 40 different shaders just because of the for-loop is stupid. Any help or ideas on how to deal with this situation?
I agree with #badweasel that your approach is not really suited for shaders.
From what I understand, you are calculating the distance from the current pixel to each particle, summing something up, and determining the color from the result.
Maybe you could instead render a point sprite for each particle and determine the color by smart blending.
You can set the size of the point sprite in the vertex shader using gl_PointSize. In the fragment shader you can determine the location of the current pixel within the point sprite by using gl_PointCoord.xy (which is in texture coordinates, i.e. [0..1]). Knowing the size of your point sprite, you can then calculate the distance of the current pixel from the particle's center and set the color accordingly. By additionally enabling blending you may be able to achieve the summing you do inside your loop, but with much higher frame rates.
Here are the vertex and fragment shaders I use for rendering "fake" spheres via point sprites, as an example of how to use them.
VS:
#version 150
in vec3 InPosition;
uniform mat4 ModelViewProjectionMatrix;
uniform int Radius = 10;
void main()
{
vec4 Vertex = vec4(InPosition, 1.0);
gl_Position = ModelViewProjectionMatrix * Vertex;
gl_PointSize = Radius;
}
FS:
#version 150
out vec4 FragColor;
void main()
{
// calculate normal, i.e. vector pointing from point sprite center to current fragment
vec3 normal;
normal.xy = gl_PointCoord * 2 - vec2(1);
float r2 = dot(normal.xy, normal.xy);
// skip pixels outside the sphere
if (r2 > 1) discard;
// set "fake" z normal to simulate spheres
normal.z = sqrt(1 - r2);
// visualize per pixel eye-space normal
FragColor = vec4(gl_PointCoord, normal.z, 1.0);
}
Note that you need to enable GL_POINT_SPRITE and GL_PROGRAM_POINT_SIZE to use point sprites.
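Applied to the particle-sum problem above, the blending idea could look roughly like this: each particle is drawn as its own point sprite into an offscreen texture with additive blending (glBlendFunc(GL_ONE, GL_ONE)), so the per-fragment outputs add up the same way accum does in the loop; a second full-screen pass then reads the accumulated texture and applies the threshold and coloring. A sketch of the first pass's fragment shader, with illustrative uniform names and ignoring the limited range of an 8-bit render target:
precision highp float;
uniform float u_pointSizePx;       // illustrative: same value written to gl_PointSize
uniform float u_particleStrength;  // illustrative: this particle's u_waterVertices[i].z
void main()
{
    // offset of this fragment from the sprite center, in pixels
    vec2 fromCenter = (gl_PointCoord - 0.5) * u_pointSizePx;
    float q = dot(fromCenter, fromCenter);
    // one particle's term of the original sum; additive blending accumulates them all
    gl_FragColor = vec4(u_particleStrength / max(q, 1.0), 0.0, 0.0, 1.0);
}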