Translating square with shaders on iPhone

I'm trying to update my (little) knowledge of OpenGL ES 1.1 to 2.0 on the iPhone. The default OpenGL ES Application template for the iPhone draws a square and makes it translate up and down, and it works fine. However, its implementation does the math for the Y-value changes in the shader itself, which is pretty much useless. So, I've changed the vertex shader to:
uniform mat4 mvpMatrix;
attribute vec4 position;
attribute vec4 color;
varying vec4 colorVarying;
void main()
{
    gl_Position = position * mvpMatrix;
    colorVarying = color;
}
Which seems to be correct and common (from what I've seen in my research). Obviously, I made the necessary changes to the code, like binding the uniform, and, to help with the math, I grabbed the sources for the esUtil.h code. In the drawing method, my code looks like this:
transY += 0.075f;
ESMatrix mvp, model, view;
esMatrixLoadIdentity(&view);
esPerspective(&view, 60.0, 320.0/480.0, 1.0, -1.0);
esMatrixLoadIdentity(&model);
esTranslate(&model, sinf(transY), 0.0f, 0.0f);
esMatrixLoadIdentity(&mvp);
esMatrixMultiply(&mvp, &model, &view);
glUniformMatrix4fv(uniforms[UNIFORM_MVPMATRIX], 1, GL_FALSE, (GLfloat *)&mvp);
And that should be working but, unfortunately, what I get is quite different from a simple translation.
I've restarted the template a few times but I can't figure out what I'm doing wrong here... Rotating seems to be working as expected, I believe...
Any help would be appreciated.

I think you want to reverse the order of your position transform, as your matrix library is probably working in Column-major order.
gl_Position = position * mvpMatrix;
=>
gl_Position = mvpMatrix * position;

Unknowingly, you have made a camera position change. In OpenGL ES, the camera (global) and object (local) transforms are simply inverses of each other.

Related

OpenGL ES transparency not working, instead things just blend with the background

So I have a simple simulation set up on my phone. The goal is to have circles of red, white, and blue that appear on the screen with various transparencies. I have most of that working, except for one thing: while transparency sort of works, blending only happens with the black background. As a result, the circle in the center appears dark red instead of showing the white circles under it. What am I doing wrong?
Note that I am working in an orthographic 2D projection matrix. All of the objects' z positions are the same, and they are rendered in a specific order.
Here is how I set it so transparency works:
glEnable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_POINT_SIZE));
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
glEnable(GLenum(GL_POINT_SMOOTH))
//Note: some of these things aren't compatible with OpenGL ES, but they can't hurt, right?
Here is the fragment shader:
precision mediump float;
varying vec4 outColor;
varying vec3 center;
varying float o_width;
varying float o_height;
varying float o_pointSize;
void main()
{
    vec4 fc = gl_FragCoord;
    vec3 fp = vec3(fc);
    vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
    if (dot(circCoord, circCoord) > 1.0) {
        discard;
    }
    gl_FragColor = outColor; //colorOut;
}
Here is how I pass each circle to the shader:
func drawParticle(part: Particle, color_loc: GLint, size_loc: GLint)
{
    //print("Drawing: ", part)
    let p = part.position
    let c = part.color
    glUniform4f(color_loc, GLfloat(c.h), GLfloat(c.s), GLfloat(c.v), GLfloat(c.a))
    glUniform1f(size_loc, GLfloat(part.size))
    glVertexAttribPointer(0, GLint(3), GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, [p.x, p.y, p.z]);
    glEnableVertexAttribArray(0);
    glDrawArrays(GLenum(GL_POINTS), 0, GLint(1));
}
Here is how I set it so transparency works:
glEnable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_POINT_SIZE));
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
glEnable(GLenum(GL_POINT_SMOOTH))
And that's not how transparency works. OpenGL is not a scene graph; it just draws geometry in the order you specify. If the first things you draw are the red circles, they will blend with the background. Once things get drawn that are "behind" the red circles, the "occluded" parts are simply discarded due to the depth test. There is no way for OpenGL (or any other depth-test-based algorithm) to automatically sort the different depth layers and blend them appropriately.
What you're trying to do is order-independent transparency, a problem for which efficient solutions are still an active area of research.
For what you want to achieve you'll have to:
sort your geometry far to near and draw in that order
disable the depth test while rendering

How to avoid these edges around my transparent PNGs with WebGL?

I recently added opacity to my webgl fragment shader, replacing
gl_FragColor = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
with
vec4 color = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
gl_FragColor = vec4(color.rgb, alpha * color.a);
but since I did, a little edge appears around my transparent pngs.
Here you can see a little white edge around the worm body parts, where they layer on the background, and thin black edge where they layer on other worm parts.
With the old code, before I added the opacity, or when using Canvas 2D context instead of WebGL, there is no edge :
Here is one of the pngs I use for the worm body parts. As you can see there is no white edge around it, and no black edge on the bottom where it layers on another body part.
Update: I updated the SuperPNG export plugin to the latest version, which no longer gives a white color to fully transparent pixels. It seems to me that the edge is actually still there, but it is now the same color as the shape border, so you can barely see it. The thin black edge at the back of the worm parts is still there, though.
I once had a similar sprite edge problem and fixed it by adding "premultipliedAlpha:false" to my getContext() call. The new code seems to ignore it: whether I remove it or leave it in the getContext() call, the little edge stays around my sprites.
Here is my code :
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform sampler2D u_image;
varying vec2 v_texCoord;
uniform float alpha;
void main()
{
    vec4 color = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
    gl_FragColor = vec4(color.rgb, alpha * color.a);
}
</script>
...
webglalpha = gl.getUniformLocation(gl.program, "alpha");
...
gl.uniform1f(webglalpha, d_opacity);
Does anybody know what the problem is and how to solve it?
Consider using premultiplied alpha. Change your blending mode to
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
If your images are not already premultiplied then do this in your shader
float newAlpha = textureColor.a * alpha;
gl_FragColor = vec4(textureColor.rgb * newAlpha, newAlpha);

2D lighting from multiple point sources on GLSL ES 2.0 in iPhone

As I'm a complete noob with shaders, I've got some problems while trying to get a 2D lighting system to work. It basically covers the screen with a black 2D texture that has transparent holes where the lit areas are.
As I'm using only one texture, I guess I must do this in the fragment shader, right?
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
// Texture, coordinates and size
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform vec2 textureSize;
uniform int lightCount;
struct LightSource
{
    vec2 position;
    float radius;
    float strength;
};

uniform LightSource lights[10];

void main()
{
    float alpha = 1.0;
    vec2 pos = vec2(v_texCoord.x * textureSize.x, v_texCoord.y * textureSize.y);
    int i;
    for (i = 0; i < lightCount; i++)
    {
        LightSource source = lights[i];
        float distance = distance(source.position, pos);
        if (distance < source.radius)
        {
            alpha -= mix(source.strength, 0.0, distance/source.radius);
        }
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, alpha);
}
The problem is that the performance is really terrible (it cannot run at 60fps with 2 lights and nothing else on screen). Any suggestions to make it better, or even different ways to approach this problem?
By the way, I'm doing this from cocos2d-x, so if anyone has an idea that uses cocos2d elements, it will be welcome as well :)
I totally agree with Tim. If you want to improve the overall speed, you have to avoid for loops. If the lights array size is always ten, I recommend replacing the loop statement with ten copies of the loop body. Be aware that any variable you declare inside a loop statement is freed at the end of the loop, so it's a good idea to unroll the loop into ten parts (ugly, but it's an old-school trick ;))))
Besides, I also recommend measuring around every statement to see which instruction is causing the slowdown. I bet the mix operation is the culprit. I don't know anything about cocos2d, but is it possible to make a single call to mix at the end of the process, with a summation of distances and strengths? It seems that at some point there's a pretty float-consuming, annoying operation.
Two things I would try (not guaranteed to help)
Remove the for loop and just hardcode in two lights. For loops can be expensive if they are not handled properly by the driver. It would be good to know if that is slowing you down.
If statements can be expensive, and I don't think that's a good application of mix (you're computing a*(1-c) + 0.0*c, and the second half of that term is pointless). I might try replacing this if statement:
if (distance < source.radius)
{
alpha -= mix(source.strength, 0.0, distance/source.radius);
}
With this single line:
alpha -= (1.0-min(distance/source.radius, 1.0)) * source.strength;

OpenGL ES 2.0 shader examples for image processing?

I am learning shader programming and looking for examples, specifically for image processing. I'd like to apply some Photoshop effect to my photos, e.g. Curves, Levels, Hue/Saturation adjustments, etc.
I'll assume you have a simple uncontroversial vertex shader, as it's not really relevant to the question, such as:
void main()
{
    gl_Position = modelviewProjectionMatrix * position;
    texCoordVarying = vec2(textureMatrix * vec4(texCoord0, 0.0, 1.0));
}
So that does much the same as ES 1.x would if lighting was disabled, including the texture matrix that hardly anyone ever uses.
I'm not a Photoshop expert, so please forgive my statements of what I think the various tools do — especially if I'm wrong.
I think I'm right to say that the levels tool effectively stretches (and clips) the brightness histogram? In that case an example shader could be:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;
const mediump mat4 rgbToYuv = mat4( 0.257, 0.439, -0.148, 0.06,
0.504, -0.368, -0.291, 0.5,
0.098, -0.071, 0.439, 0.5,
0.0, 0.0, 0.0, 1.0);
const mediump mat4 yuvToRgb = mat4( 1.164, 1.164, 1.164, -0.07884,
2.018, -0.391, 0.0, 1.153216,
0.0, -0.813, 1.596, 0.53866,
0.0, 0.0, 0.0, 1.0);
uniform mediump float centre, range;
void main()
{
    lowp vec4 srcPixel = texture2D(tex2D, texCoordVarying);
    lowp vec4 yuvPixel = rgbToYuv * srcPixel;
    yuvPixel.r = ((yuvPixel.r - centre) * range) + 0.5;
    gl_FragColor = yuvToRgb * yuvPixel;
}
You'd control that by setting the centre of the range you want to let through (which will be moved to the centre of the output range) and the total range you want to let through (1.0 for the entire range, 0.5 for half the range, etc).
One thing of interest is that I switch from the RGB input space to a YUV colour space for the intermediate adjustment. I do that using a matrix multiplication. I then adjust the brightness channel, and apply another matrix that transforms back from YUV to RGB. To me it made most sense to work in a luma/chroma colour space and from there I picked YUV fairly arbitrarily, though it has the big advantage for ES purposes of being a simple linear transform of RGB space.
I understand that the curves tool also remaps brightness, but according to some function f(x) = y that is monotonically increasing (so it will intersect any horizontal or vertical line exactly once) and is set in the interface as a curve from bottom left to top right somehow.
Because GL ES isn't fantastic with data structures and branching is to be avoided where possible, I'd suggest the best way to implement that is to upload a 256x1 luminance texture where the value at 'x' is f(x). Then you can just map through the secondary texture, e.g. with:
... same as before down to ...
lowp vec4 yuvPixel = rgbToYuv * srcPixel;
yuvPixel.r = texture2D(lookupTexture, vec2(yuvPixel.r, 0.0)).r;
... and as above to convert back to RGB, etc ...
You're using a spare texture unit to index a lookup table, effectively. On iOS devices that support ES 2.0 you get at least eight texture units so you'll hopefully have one spare.
Hue/saturation adjustments are more painful to show because the mapping from RGB to HSV involves a lot of conditionals, but the process is basically the same — map from RGB to HSV, perform the modifications you want on H and S, map back to RGB and output.
Based on a quick Google search, this site offers some downloadable code that includes some Photoshop functions (though not curves or levels as far as I can see) and, significantly, supplies example implementations of the functions RGBToHSL and HSLToRGB. It's for desktop GLSL, which has more predefined variables, types, and functions, but you shouldn't have any big problems working around that. Just remember to add precision modifiers and supply your own replacements for any functions that are absent.
For curves photoshop uses bicubic spline interpolation. For a given set of control points you can precalculate all 256 values for each channel and for the master curve. I found that it's easier to store the results as a 256x1 texture and pass it to the shader and then change values of each component:
uniform sampler2D curvesTexture;

vec3 RGBCurvesAdjustment(vec3 color)
{
    return vec3(texture2D(curvesTexture, vec2(color.r, 1.0)).r,
                texture2D(curvesTexture, vec2(color.g, 1.0)).g,
                texture2D(curvesTexture, vec2(color.b, 1.0)).b);
}

Multi-textured Point Sprites in OpenGL ES2.0 on iOS?

I am trying to make a multi-textured point sprite for an iphone application using OpenGL ES 2.0. I can't find any examples of this on web, and it doesn't seem to be working. Is there some built-in limitation where gl_PointCoord can't be used on multiple textures when using GL_POINTS mode for point sprites?
uniform sampler2D tex;
uniform sampler2D blur_tex;
vec4 texPixel = texture2D( tex, gl_PointCoord );
vec4 blurPixel = texture2D( blur_tex, gl_PointCoord );
I'm sure I am passing in the textures properly, as I can do multi-texturing just fine in TRIANGLE_STRIP mode, but I am hoping to speed things up using point sprites.
If it is possible, a link to an example of working code would super helpful. Thanks!
EDIT:
Here's how I'm passing in the textures to my shader. This lets me do multi-texturing when I am in TRIANGLE or TRIANGLE_STRIP mode.
//pass in position and tex_coord attributes...
//normal tex
glActiveTexture(0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex0);
glUniform1i(SAMPLER_0_UNIFORM, 0);
//blur tex
glActiveTexture(1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex1);
glUniform1i(SAMPLER_1_UNIFORM, 1);
//draw arrays...
However if I am using POINTS mode then I never see the second texture. That is, referring to the shader code above, whether I do
gl_FragColor = texPixel;
OR
gl_FragColor = blurPixel;
I see the same texture. Which seems strange. My guess is that you CAN'T do multi-texturing on a point sprite and somehow having two active textures or two calls to gl_PointCoord causes a problem. But I'm hoping I'm wrong. So if someone has a simple example of multi-texturing working with point sprites in OpenGL ES 2.0 I would be happy to look at that code!
EDIT 2:
vertex shader:
attribute vec4 position;

void main() {
    gl_PointSize = 15.0;
    gl_Position = position;
}
fragment shader:
precision mediump float;
uniform sampler2D tex;
uniform sampler2D blur_tex;
void main() {
    vec4 texPixel = texture2D( tex, gl_PointCoord );
    vec4 blurPixel = texture2D( blur_tex, gl_PointCoord );
    //these both do the same thing even though I am passing in two different textures?!?!?!?
    //gl_FragColor = texPixel;
    gl_FragColor = blurPixel;
}
There is a typo in your main program.
The right parameters to pass to glActiveTexture are GL_TEXTURE0, GL_TEXTURE1, ...
Note that GL_TEXTURE0, GL_TEXTURE1, etc. do not have the values 0, 1, etc.
Since you are passing an invalid value to glActiveTexture, the call fails, the active texture unit stays at its default (unit 0), and all your changes end up affecting the texture bound at unit 0.
In my case there was blending for points, and the problem was these missing texture parameters:
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
It may be too late to post this, though.
There are two problems in your code. One is the one that Satyakam has pointed out. The other is that you should NOT use glUniform1f; the right one is glUniform1i. The difference is the f or i at the end, which means float or integer.