Fastest way to write sampler2D * 0 and sampler2D * 1 to add many together? (glsl/cg) - unity3d

I would like to mix many sampler2Ds in one shader, without "if" conditions, using variables m1, m2, m3 set to 0 or 1 for each sampler2D to say whether it is active, multiplying the active sampler2Ds by 1 and the inactive ones by 0.
I naively wrote this call to merge 15 sampler2Ds:
// blend() is a float4 function that does "return tex2D(tex, uv);" at 3 different altitudes
color = blend (_TexTop *m1 + _TexTop2 *m2 + _TexTop3 *m3 + _TexTop4 *m4 + _TexTop5 *m5 ,
_TexMid *m1 + _TexMid2 *m2 + _TexMid3 *m3 + _TexMid4 *m4 + _TexMid5 *m5 ,
_TexBot *m1 + _TexBot2 *m2 + _TexBot3 *m3 + _TexBot4 *m4 + _TexBot5 *m5 ,
_StrataBlendWidth, _strataAltitudeOffset, _StrataMidbandWidth, IN.worldPos.y, IN.uv_TexTop);
Except that obviously doesn't work, because it's not possible to add and multiply sampler2Ds. Rarrgh!
What is the best way to rewrite the lines above so as to switch sampler2Ds on and off using a 0-1 integer, without if conditions and dynamic branching?
Thank you

Assuming that I've understood your question (I'm not sure), there's no way to selectively sample a texture based on a uniform parameter (mx) without branching.
Consider a couple of facts:
Branching on uniforms is not always that expensive, since the value is constant within the same draw call (I guess it depends on the shader profile and the hardware, even if avoiding branches is a good rule of thumb).
If in the worst case you want to use all your samplers (15 seems a quite big number, though), then you might consider always sampling all of them.
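To keep the original multiply-by-0/1 idea without branching, sample every texture and weight the resulting colours, not the samplers. A sketch using the names from the question (which implies blend() would take colours rather than samplers):

```hlsl
// mN are uniform floats set to 0.0 or 1.0; inactive textures contribute black
float4 top = tex2D(_TexTop,  IN.uv_TexTop) * m1 + tex2D(_TexTop2, IN.uv_TexTop) * m2
           + tex2D(_TexTop3, IN.uv_TexTop) * m3 + tex2D(_TexTop4, IN.uv_TexTop) * m4
           + tex2D(_TexTop5, IN.uv_TexTop) * m5;
// repeat the same pattern for the _TexMid* and _TexBot* sets, then pass
// the three weighted colours on to the altitude blend
```

This always pays for all 15 samples, which is the trade-off mentioned above.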
I'm not sure what the practical use case for such a shader is, but a different approach could be to compile different shader variants depending on the number of active samplers.
For example, to compile 2 variants of a shader, one with a single sampler and the other with two, you could write something like:
#pragma multi_compile ONE_SAMP TWO_SAMP
#ifdef ONE_SAMP
uniform sampler2D tex1;
#endif
#ifdef TWO_SAMP
uniform sampler2D tex1;
uniform sampler2D tex2;
#endif
...
#ifdef ONE_SAMP
fixed4 col1 = tex2D(tex1,uv);
#endif
#ifdef TWO_SAMP
fixed4 col1 = tex2D(tex1,uv);
fixed4 col2 = tex2D(tex2,uv);
#endif
Then set the active keyword from script or using a custom material editor.
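Setting the keyword from script might look like this (a sketch; ONE_SAMP/TWO_SAMP are the keywords declared in the multi_compile line above):

```csharp
// pick the two-sampler variant at runtime
material.DisableKeyword("ONE_SAMP");
material.EnableKeyword("TWO_SAMP");
```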
EDIT
In my answer I've assumed that the m1, m2 values are uniforms; if they are not, there's really no way of dynamically selecting a texture to sample without branching.

Related

Unity3d shader e -f convert to glsl

I am trying to convert a Unity3D shader to a normal GLSL shader.
The code is :
Out = lerp(To, In, saturate((Distance - Range) / max(Fuzziness, e-f)));
I know lerp needs to be converted to mix and saturate to clamp(xxx, 0.0, 1.0).
But I don't know how to convert the e - f part in the code above.
Any suggestion will be appreciated, thanks :)
You can see the code generated by your graph: right-click on any node -> Show generated code. For this node the generated function is:
void Unity_ReplaceColor_float(float3 In, float3 From, float3 To, float Range, out float3 Out, float Fuzziness)
{
float Distance = distance(From, In);
Out = lerp(To, In, saturate((Distance - Range) / max(Fuzziness, 1e-5f)));
}
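For completeness, a direct GLSL translation of that function might look like this (a sketch; the e-f in your snippet is just the scientific-notation literal 1e-5f with the digits lost in copying, not "e minus f"):

```glsl
void Unity_ReplaceColor_float(vec3 In, vec3 From, vec3 To, float Range, out vec3 Out, float Fuzziness)
{
    float Distance = distance(From, In);
    // lerp -> mix, saturate(x) -> clamp(x, 0.0, 1.0); 1e-5 guards against division by zero
    Out = mix(To, In, clamp((Distance - Range) / max(Fuzziness, 1e-5), 0.0, 1.0));
}
```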

Does GLKit limit me to two attributes?

I've been working with some GLKit code for the past few days that has a color attribute and a position attribute, but when I try to add a normal attribute it crashes every time.
Vertex Shader:
attribute vec4 SourceColor;
attribute vec4 aVertexPosition;
attribute vec4 aVertexNormal;
varying vec4 DestinationColor;
uniform mat4 uPMatrix; /* perspectiveView matrix */
uniform mat4 uVMatrix; /* view matrix */
uniform mat4 uOMatrix; /* object matrix */
uniform mat4 Projection;
uniform mat4 ModelView;
uniform float u_time;
void main(void) {
DestinationColor = SourceColor;
gl_Position = aVertexPosition * Projection;
}
Code:
self.colorSlot = glGetAttribLocation(programHandle, "SourceColor")
self.positionSlot = glGetAttribLocation(programHandle, "aVertexPosition")
self.normalSlot = glGetAttribLocation(programHandle, "aVertexNormal")
glEnableVertexAttribArray(GLuint(self.positionSlot))
glEnableVertexAttribArray(GLuint(self.colorSlot))
glEnableVertexAttribArray(GLuint(self.normalSlot)) // <- crashes here
As found through the comments, the reason for this crash is that self.normalSlot was -1, which is what glGetAttribLocation returns when it fails to find an attribute with the specified name. The value -1 was then typecast with GLuint(self.normalSlot), which produces a very large value that is not a valid vertex attribute index, causing the crash. So before using a location for an attribute or uniform you should check that the location was actually retrieved: valid locations are non-negative, so check location >= 0.
Still, the attribute was present in the shader source but the location was not retrieved. The reason seems to be that the attribute was never used in the shader, so the compiler optimized it away. In any case you cannot force the attribute to exist by simply declaring it in the vertex shader; you need to actually use it. There seems to be another way to force it, which is glBindAttribLocation. I would not expect this to guarantee the existence of the attribute, though, so you should still check GL errors after using it to avoid additional issues.
Note:
If you are using glBindAttribLocation make sure you completely understand its documentation. It is very easy to lose track of these indices and you should have a smart system or personal standards on how you index the attributes.
Absolutely agree with Matik, but the limit for number of attributes really exists. It may be checked by GL_MAX_VERTEX_ATTRIBS:
int max;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &max);
NSLog(@"GL_MAX_VERTEX_ATTRIBS=%@", @(max));
I got GL_MAX_VERTEX_ATTRIBS=16 for my iPod 5. This is much more than two.

frac() function not returning correct value in unity/cg shader

I've been playing with a noise function as part of a shader I'm using and encountered something I don't understand with regard to the frac() function: specifically, it is returning the originally passed-in number, rather than just the fractional part.
To debug the issue I created a very simple shader:
Shader "Custom/FracTest" {
SubShader {
CGPROGRAM
#pragma surface surf Lambert
#pragma target 3.0
#include "UnityCG.cginc"
struct Input {
half3 viewDir;
};
void surf(Input IN, inout SurfaceOutput o) {
float4 col = 0;
float n = 1;
float sinn = sin(n); // ~0.84147
float hash = sinn * 43758.5453; // ~36821.54621
float floorHash = floor(hash); // 36821
float fracHash = frac(hash); // should be ~0.54621
float manualFrac = hash - floorHash;
col.r = fracHash == hash ? 1 : 0;
col.g = floorHash == hash ? 1 : 0;
col.b = manualFrac == fracHash ? 1 : 0;
o.Albedo = col.rgb;
}
ENDCG
}
Fallback "Diffuse"
}
As you can see, this is a Unity surface shader; however, I don't think Unity is at fault here, since from looking at the generated fragment shader it hasn't messed with any of the values/datatypes around this bit of code. The values in comments are what I expect the variables to contain.
When this shader runs it simply colours the whole object based on some tests against the results of the functions. In this case my object appears red, meaning fracHash is still equal to the original hash value, floorHash is not equal to it, and manualFrac is also not equal to fracHash.
I can't see what I'm doing wrong here. The documentation states there is a float variant of the frac() function, but I can only assume I'm hitting some precision limit inside frac(); if I use smaller numbers it seems to return sensible results. Given that it works when I manually extract the fraction, perhaps the manual path is using higher-precision numbers?
Any ideas - am I making some wrong assumption somewhere, or am I just being an idiot :P ?
For now I'm progressing by manually calculating the fraction, but I'd like to understand what I've missed :)

GLSL 1.2 Geometry shader varying in vec4 is illegal?

I'm trying to figure out whether something is buggy in my graphics card drivers or just in my code. Is the following illegal?
#version 120
#extension GL_EXT_gpu_shader4 : enable
#extension GL_EXT_geometry_shader4 : enable
varying in vec4 something; // <------- this
void main()
{
for(int i = 0; i < gl_VerticesIn; ++i)
{
gl_Position = gl_PositionIn[i];
EmitVertex();
}
EndPrimitive();
}
It's crashing on my OS X 10.7 (NVIDIA 9400m/9600m) laptop and I'm curious: (a) is this actually illegal in GLSL 1.2, or is it just my implementation? (b) is there a flag of some kind to enable passing a vec4 to a geometry shader in GLSL 1.2?
For the record: yes, I know this is waaaay easier in 1.3+, but I'm stuck with 1.2 at the moment. Thanks!
Inputs to the geometry shader are arrays, not single values. This is because the GS takes in a primitive, which can be composed of multiple vertices. Just as gl_PositionIn is an array, so too should your user-defined inputs be.
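A sketch of the corrected shader under that rule (assuming triangles as the input primitive, so three entries per input):

```glsl
#version 120
#extension GL_EXT_gpu_shader4 : enable
#extension GL_EXT_geometry_shader4 : enable

varying in vec4 something[3]; // one entry per vertex of the input primitive

void main()
{
    for (int i = 0; i < gl_VerticesIn; ++i)
    {
        gl_Position = gl_PositionIn[i];
        // something[i] is the value for this vertex
        EmitVertex();
    }
    EndPrimitive();
}
```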

GLSL 'texture2D' : no matching overloaded function found OpenGL ES2 on iPhone

I'm experimenting with GLSL shaders but I get a strange error when I try to read data from a texture for a simple contrast-enhancement algorithm.
'texture2D' : no matching overloaded function found
It happens with this code, where "final" is the vec4 variable that holds the colour being worked on. The idea here is to push the pixel's colour further from the surrounding ones (an experimental idea). I'll mark the lines in the code which have the error.
highp vec4 tex = texture2D(tex,vec2(texcoord.x+1.0,texcoord.y));
highp float total = tex.r + tex.g + tex.b;
tex = texture2D(tex,vec2(texcoord.x-1.0,texcoord.y)); // <---- this one, as well as the next similar lines
total += tex.r + tex.g + tex.b;
tex = texture2D(tex,vec2(texcoord.x,texcoord.y+1.0));
total += tex.r + tex.g + tex.b;
tex = texture2D(tex,vec2(texcoord.x,texcoord.y-1.0));
total += tex.r + tex.g + tex.b;
highp float di = 12.0;
highp vec4 close_av = total/di;
final = (final - close_av)*1.3+close_av;
Why won't it work? Thank you.
Assuming that tex was originally declared as a uniform sampler2D at the top of your shader source, it is being redeclared as a local vec4 variable by the first line of your snippet, which hides the original declaration. From that point on, texture2D(tex, ...) is being passed a vec4 instead of a sampler, hence "no matching overloaded function found". Renaming either variable so the names stay distinct should fix the compilation error.
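A sketch of the fix, renaming the local to texel so the sampler uniform stays visible (the uniform and varying declarations are assumed from the snippet):

```glsl
uniform sampler2D tex;       // assumed declaration of the sampler
varying highp vec2 texcoord; // assumed declaration of the coordinates

highp vec4 texel = texture2D(tex, vec2(texcoord.x + 1.0, texcoord.y));
highp float total = texel.r + texel.g + texel.b;
texel = texture2D(tex, vec2(texcoord.x - 1.0, texcoord.y)); // compiles now: tex is still a sampler2D
total += texel.r + texel.g + texel.b;
texel = texture2D(tex, vec2(texcoord.x, texcoord.y + 1.0));
total += texel.r + texel.g + texel.b;
texel = texture2D(tex, vec2(texcoord.x, texcoord.y - 1.0));
total += texel.r + texel.g + texel.b;
```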