Change texture opacity in OpenGL - iphone

This is hopefully a simple question: I have an OpenGL texture and would like to be able to change its opacity, how do I do that? The texture already has an alpha channel and blending works fine, but I want to be able to decrease the opacity of the whole texture, to fade it into the background. I have fiddled with glBlendFunc, but with no luck – it seems that I would need something like GL_SRC_ALPHA_MINUS_CONSTANT, which is not available. I am working on iPhone, with OpenGL ES.

I have no idea about OpenGL ES, but in standard OpenGL you would set the opacity by declaring a colour for the texture before you use it:
// R, G, B, A
glColor4f(1.0, 1.0, 1.0, 0.5);
The example would give you 50% alpha without affecting the colour of your texture. By adjusting the other values you can shift the texture colour too.

Use a texture combiner. Set the texture stage to do a GL_MODULATE operation between a texture and constant color. Then change the constant color from your code (glTexEnv, GL_TEXTURE_ENV_COLOR).
This should come as "free" in terms of performance. On most (if not all) graphics chips, combiner operations take the same number of GPU cycles (usually one), so just sampling a texture versus sampling it with a modulate (or any other) operation costs exactly the same.
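A minimal sketch of that setup in fixed-function OpenGL ES 1.1 (the combiner tokens are standard ES 1.1 names; opacity is a placeholder variable you animate from your code):
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_CONSTANT);
// The constant color the texture is modulated with; change its alpha per frame:
GLfloat envColor[4] = { 1.0f, 1.0f, 1.0f, opacity };
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, envColor);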

Basically you have two options: use glTexEnv for your texture with GL_MODULATE and specify the color using glColor4*, with a non-opaque level for the alpha channel. Note that glTexEnv needs to be issued only once, when you first load your texture. This scenario will not work if you specify colors in your vertex attributes, though: those override any glColor4* color you may set. In that case you can resort to one of two options: use texture combiners (an advanced topic, and not nice to use in the fixed pipeline), or "manually" change the vertex color attribute of each individual vertex (which can be undesirable for larger meshes).
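A minimal sketch of the first option, assuming fixed-function OpenGL ES 1.1 (textureId and opacity are placeholder names):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // issue once, at load time
glColor4f(1.0f, 1.0f, 1.0f, opacity); // opacity in [0, 1], updated per frame
// ... draw the textured geometry ...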

If you are using modern OpenGL...
You can do this in the fragment shader:
uniform sampler2D u_Texture;
uniform float u_Opacity; // opacity fed in from the app (uniform name assumed)
in vec2 TexCoord;
out vec4 color;
void main()
{
    color = vec4(1.0, 1.0, 1.0, u_Opacity) * texture(u_Texture, TexCoord);
}
This allows you to apply an opacity value to a texture without disrupting the blending.
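On the application side you would then feed the opacity in per frame, e.g. (assuming the u_Opacity uniform above and a linked program object):
glUniform1f(glGetUniformLocation(program, "u_Opacity"), opacity);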

Thank you all for the ideas. I've played with both glColor4f and glTexEnv, and at last forced myself to read the glTexEnv manpage carefully. The manpage says that in the GL_MODULATE texturing mode, the resulting color is computed by multiplying the incoming fragment color by the texture color (C = Cf × Ct), and the same goes for the alpha. I tried glColor4f(1, 1, 1, opacity) and that did not work, but passing the desired opacity into all four arguments of the call did the trick. (Still not sure why, though.)

The most straightforward way is to change the texture's alpha value on the fly. Since you tell OpenGL about the texture at some point, you will have the bitmap in memory, so you can just rebind the texture to the same texture id. In case you don't have it in memory (due to space constraints, since you are on ES), you can retrieve the texture into a buffer again using glGetTexImage() (note that this is a desktop GL call; OpenGL ES has no direct equivalent). That's the clean solution.
Saving/retrieving operations are a bit costly, though, so you might want another solution. Thinking about it, you might be able to work with geometry behind the geometry displaying your texture, or simply work on the material/colour of the geometry that holds the texture. You will probably want some additive blending of the back geometry. Using a glBlendFunc of
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA),
you might be able to "easily" and - more important, cheaply - achieve the desired effect.

Most likely you are using Core Graphics (CG) to get your image into a texture. When you use CG, the alpha is premultiplied into the color channels, which is why you have to pass the alpha value for all four RGBA arguments of glColor4f.
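Worked through, that explains the observation above (a sketch of the arithmetic; R, G, B, A are the un-premultiplied texel values and o is the desired opacity):
// Premultiplied texel as stored:        (R*A, G*A, B*A, A)
// GL_MODULATE by glColor4f(1,1,1,o):    (R*A, G*A, B*A, o*A)       -> color keeps full strength, looks too bright
// GL_MODULATE by glColor4f(o,o,o,o):    (o*R*A, o*G*A, o*B*A, o*A) -> still premultiplied, fades uniformly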

I suspect that you had a black background, and thus by decreasing every color channel you were effectively fading the color to black.

Related

iPhone OpenGL ES 2.0 blending with Cocos2D gives unexpected results

I have a very simple CCScene with ONLY one CCLayer, containing:
a CCSprite for the background with the standard blending mode
a CCRenderTexture to draw paint brushes, with its sprite attached to the root CCLayer above the background sprite:
_bgSprite = [CCSprite spriteWithFile:backgroundPath];
_renderTexture = [CCRenderTexture renderTextureWithWidth:self.contentSize.width height:self.contentSize.height];
[_renderTexture.sprite setBlendFunc:(ccBlendFunc){GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA}];
[self addChild:_bgSprite z:-100];
[self addChild:_renderTexture];
Brush rendering code:
[_renderTexture begin];
glBlendFuncSeparate(GL_ONE, GL_ZERO, GL_ONE, GL_ONE); // 1.
// calculate vertices code,etc...
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)count);
[_renderTexture end];
When the user paints with the first colored brush, it blends with the background as expected.
But when the user continues brushing with another color on top of the previous brush, it goes wrong (the soft alpha edges lose opacity where two brushes overlap):
I tried many blending options, but somehow I cannot find the correct one.
Is there something special about CCRenderTexture such that it does not blend with itself (with previously drawn content) as expected?
My fragment shader used for brushing is just the standard texture shader, with a minor change to preserve the input color's alpha in the texture:
precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;       // standard cocos2d texture-coordinate varying
varying vec4 v_fragmentColor;  // standard cocos2d vertex-color varying
void main()
{
    gl_FragColor = texture2D(u_texture, v_texCoord);
    gl_FragColor.a = v_fragmentColor.a;
}
UPDATE - ALMOST PERFECT SOLUTION (by jozxyqk):
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
in the rendering code (in place of // 1.) and
[_renderTexture.sprite setBlendFunc:(ccBlendFunc){GL_ONE, GL_ONE_MINUS_SRC_ALPHA}];
THIS WORKS GREAT AND GIVES ME WHAT I WANT...
...BUT ONLY WHEN _renderTexture is at full opacity.
When the opacity of _renderTexture.sprite is lowered, the brushes get lightened up instead of fading out as one would expect:
Why do the brushes' alphas blend with the background correctly when the parent texture is at full opacity, but go bananas when the opacity is lowered? How can I make the brushes blend with the background correctly?
EDIT
Blending brush -> layer -> background
OK, what's happening is that glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) is working for blending the brush strokes into the brush texture, but the resulting alpha values in the texture are wrong. Each added fragment needs to (1) add its alpha to the final alpha value, since it has to remove exactly that much light for the interaction, and (2) scale the previous alpha by the remainder: previous surfaces reduce the light by the previous value, but since a new surface is added there is less light for them to reduce. I'm not sure if that made sense, but it leads to this...
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
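Written out, that pair of factors computes, per fragment (a sketch; a = alpha, c = a color channel, dst' = the new render-texture value):
// alpha channel (GL_ONE, GL_ONE_MINUS_SRC_ALPHA):
//   a_dst' = a_src + (1 - a_src) * a_dst
// color channels (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
//   c_dst' = a_src * c_src + (1 - a_src) * c_dst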
Now the colour channel of the brush texture contains the total colour to be blended with the background (pre-multiplied with alpha) and the alpha channel gives the weight (or the amount the colour obscures the background). Since the colour is pre-multiplied with alpha, the default RenderTexture blending GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA scales with alpha again and hence darkens the overall colour. You now need to blend the brush texture with the background using the following function, which I gather must be set in Cocos2D:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Hopefully this is possible. I haven't given a lot of thought on how to manage the possibility of setting up the brush texture to blend with GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA but it may require a floating point texture and/or an extra pass to divide/normalize the alpha, which sounds painful.
Alternatively, splat the background into your render texture before drawing and keep the lot there without any blending of layers.
This worked for me:
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
fbo.bind();
glClear(GL_COLOR_BUFFER_BIT);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
drawTexture(brush1);
drawTexture(brush2);
fbo.unbind();
drawTexture(grassTex); //tex alpha is 1.0, so blending doesn't affect background
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
drawTexture(fbo.getColour(0)); //blend in the brush layer
Brush layer opacity
Using GL_ONE, GL_ONE_MINUS_SRC_ALPHA causes issues with the library's implementation of opacity in layer blending, since it assumes the colour is multiplied by alpha. By reducing the opacity value, the alpha of the brush layer is scaled down during blending. GL_ONE_MINUS_SRC_ALPHA then causes the amount of background colour to increase; however, GL_ONE still sums 100% of the brush layer and oversaturates the image.
The simplest solution imo is to find a way to scale down the colour by the global layer opacity yourself and continue to use GL_ONE, GL_ONE_MINUS_SRC_ALPHA.
Actually using GL_CONSTANT_COLOR, GL_ONE_MINUS_SRC_ALPHA might be an answer if the library supported it, but apparently it doesn't.
You could use fixed pipeline rendering to scale the colour: glColor4f(opacity, opacity, opacity, opacity), but this will require a second render target and doing the blend manually, similarly to the code above, where you draw a full screen quad once for the background and again for the brush layer.
If you're doing the blend manually it would be more robust to use a fragment shader instead of the glColor method. This would allow far greater control if you ever wanted to play with more complex blending functions, especially where divisions and temporaries outside the 0 to 1 range are concerned:
gl_FragColor = texture2D(brushTexture, coord) * layerOpacity;
END EDIT
The standard alpha blending function is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which is not quite GL's "initial"/default function (the initial state is GL_ONE, GL_ZERO).
Summing alpha values as you do in glBlendFuncSeparate will oversaturate alpha, and the colour underneath is completely replaced. Saturation blending may give decent results: glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE). It might also be worth experimenting with glBlendEquationSeparate and MAX blending, if it's supported. The advantage of playing with MAX would be reducing the overlapping artefacts (hard triangular bits) from your line drawing code, e.g. replace colour, but only until a total alpha value X is reached. EDIT: both cases will require blending and clearing after each stroke.
I can only assume blending the render texture onto the background is in fact working (though not for the current layer values).
On a side note and largely unrelated there's also "Under Blending", where you keep a transmittance value instead of alpha/opacity (from here):
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_DST_ALPHA, GL_ONE, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);

In OpenGL ES 2.0 for iOS, how can I use a CVPixelBufferRef to update a cubemap texture?

I have managed to get a CVPixelBufferRef from an AVPlayer to feed pixel data that I can use to texture a 2D object. When my pixel buffer has data in it, I do:
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
videoTextureCache_,
pixelBuffer, //this is a CVPixelBufferRef
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
I would like to use this buffer to create a GL_TEXTURE_CUBE_MAP. My video frame data is actually six sections in one image (e.g. a cube strip) that together make up the sides of a cube. Any thoughts on a way to do this?
I had thought to just pretend my GL_TEXTURE_2D was a GL_TEXTURE_CUBE_MAP and replace the texture on my skybox with the texture generated by the code above, but this creates a distorted mess (as I suppose should be expected when trying to force a skybox to be textured with a GL_TEXTURE_2D).
The other idea was to set up unpacking using glPixelStorei and then read from the pixel buffer:
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, X);
glPixelStorei(GL_UNPACK_SKIP_ROWS, Y);
glTexImage2D(...,&pixelbuffer);
But unbelievably, GL_UNPACK_ROW_LENGTH is not supported in OpenGL ES 2.0 for iOS.
So, is there:
- any way to split up the pixel data in my CVPixelBufferRef, by indexing the buffer to some pixel subset, before using it to make a texture? (see the sketch after this list)
- any way to make six new GL_TEXTURE_2Ds as indexed subsets of the GL_TEXTURE_2D created by the code above?
- any way to convert a GL_TEXTURE_2D to a valid GL_TEXTURE_CUBE_MAP? (e.g. GLKit has a skybox effect that loads a GL_TEXTURE_CUBE_MAP from a single cube-strip file; it doesn't have a method to load a texture from memory, though, or I would be sorted)
- any other ideas?
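For what it's worth, the buffer can be split up on the CPU even without GL_UNPACK_ROW_LENGTH, by copying each face's rows into a tightly packed scratch buffer and uploading the faces one at a time. A minimal sketch, assuming a 6x1 horizontal strip of BGRA data and the pixelBuffer/frameHeight names from the code above (cubeTex, faceSize, and the strip layout are assumptions):
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
const uint8_t *base = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t stride = CVPixelBufferGetBytesPerRow(pixelBuffer); // may be wider than width*4
size_t faceSize = frameHeight; // strip assumed to be 6*faceSize wide, faceSize tall
uint8_t *face = malloc(faceSize * faceSize * 4); // scratch buffer for one face
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
for (int i = 0; i < 6; i++) {
    for (size_t y = 0; y < faceSize; y++) { // copy face i row by row
        memcpy(face + y * faceSize * 4,
               base + y * stride + i * faceSize * 4,
               faceSize * 4);
    }
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                 (GLsizei)faceSize, (GLsizei)faceSize, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, face);
}
free(face);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);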
If it were impossible any other way (which is unlikely; there is probably an alternate way, so this is probably not the best answer and involves more work than necessary), here is a hack I'd try:
A cube map works by projecting the texture for each face from a point in the center of the geometry out toward each of the cube faces. So you could reproduce that behavior yourself: use projective texturing to make six draw calls, one for each face of your cube. Each time, you'd first draw the face you're interested in into the stencil buffer, then calculate the projection matrix for your texture (this technique is used a lot for 'spotlight' effects in games), then figure out the transform matrix required to adjust the fragment shader's texture read so that for each face, only the portion of the texture that corresponds to that face winds up within the (0..1) texture lookup range. If everything has gone right, anything outside the 0..1 range should be discarded by the stencil buffer, and you'd be left with a DIY cube map made from a TEXTURE_2D.
The above method is actually really similar to what I'm doing for an app right now, except I'm only using projective texturing to mask off & replace a small portion of the cube map. I need to pixel-match the edges of the small square I'm projecting so that it's seamlessly applied to the skybox, so that's why I feel confident that this method will actually reproduce the cube map behavior -- otherwise, pixel-matching wouldn't be possible.
Anyway, I hope you find a way to simply transition your 2D to CUBEMAP, because that would probably be much easier and cleaner.

Transparent Texture With OpenGL-ES 2.0

I am trying to add a transparent texture on top of a cube. Only the front face is not transparent; the other sides are transparent. What could be the problem? Any help is appreciated.
EDIT: I found that the face which is drawn first is opaque.
Three faces of the cube are drawn:
- the opaque face (this face's indices are given first in glDrawElements)
- the transparent faces
You most probably ran into a sorting problem. To display transparent geometry correctly, the faces of the object have to be sorted from back to front.
Unfortunately there is no built-in support for that in OpenGL ES (or in any gfx library in existence). The only possibility is to sort your polygons, recreate your object each frame, and draw it with correctly ordered faces.
A workaround would be to use additive transparency instead of normal transparency. Additive blending is an order-independent calculation. You have to remember to turn off z-buffer writes while drawing, because otherwise some geometry may be occluded.
Additive transparency is achieved by setting both blendfunc values to GL_ONE.
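A minimal sketch of that workaround (the draw comment stands in for your own geometry submission):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive: order-independent
glDepthMask(GL_FALSE);       // keep reading depth, but stop writing it
// ... draw the transparent faces ...
glDepthMask(GL_TRUE);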

iPhone game 2d shadows

We're in the process of creating an iPhone game using cocos2d. We're trying to layer several sprites on top of each other and have them cast shadows.
Right now the shadows are rendered as sprites which works fine for the most part. But we only want the shadows to hit the closest layer.
I've made an image that hopefully explains what we're trying to accomplish:
And here's what we have at the moment:
Basically we want the sprite to only render the part of the shadow that is at the same depth as the z-buffer.
We've played around with glDepthFunc and GL_DEPTH_TEST but nothing seems to work.
Here's how we're rendering the shadow sprite (subclassed CCSprite):
- (void)draw {
glDisable( GL_BLEND );
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_LESS );
glDepthMask( GL_FALSE );
[super draw];
glDepthMask( GL_TRUE );
glDisable( GL_DEPTH_TEST );
glEnable( GL_BLEND );
}
The GL_BLEND calls are only there so we can see the sprite at all times.
All sprites that aren't shadows use glDepthMask( GL_TRUE ) and we're clearing the depth buffer on each frame.
Any help would be much appreciated!
glDepthFunc(GL_LESS)
is actually the default value; it means "draw the pixel only if the thing currently in the depth buffer is further away". If you wanted exactly equal you'd use glDepthFunc(GL_EQUAL), but in practice you'll get all sorts of rounding oddities if you do that.
Assuming you're able to use depth values for this purpose, if you have ten objects then I'd suggest you:
1. Set glClearDepth to 0 before you glClear; this'll fill the depth buffer with the nearest storable value, so that with normal depth buffering nothing else would be drawn.
2. Disable the depth test and draw the shadows as they're supposed to fall on the back plane; at this point your depth buffer will still be full of the nearest possible value.
3. Enable the depth test but set glDepthFunc to GL_ALWAYS. Then draw all your solid rectangles in back-to-front order with their depth values set appropriately.
4. Set glDepthFunc to GL_LESS and draw the shadows that are meant to fall on other sprites, each positioned further back than the sprite it's associated with but in front of the sprite behind.
By the time you get to step 4, you'll have correct depth information everywhere a sprite was drawn and you'll have the closest possible value set wherever the background plane was. So normal depth testing will work on the intermediate shadows — they'll draw on top of anything drawn in step 3 but not on top of anything drawn in step 2.
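A compact sketch of those four steps (glClearDepthf is the ES spelling of glClearDepth; the draw comments stand in for your own rendering):
glClearDepthf(0.0f); // step 1: fill the depth buffer with the nearest value
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST); // step 2: shadows that fall on the back plane
// ... draw background shadows ...
glEnable(GL_DEPTH_TEST); // step 3: solid sprites, back to front
glDepthFunc(GL_ALWAYS);
// ... draw sprites with appropriate depth values ...
glDepthFunc(GL_LESS); // step 4: shadows that fall on other sprites
// ... draw each shadow slightly behind its caster ...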
You're sort of using the depth buffer as a surrogate stencil, which the older iPhones don't support.
If you can't afford to use the depth buffer for this task, then all I can think of is projecting the shadows as textures in the second texture unit, using the first for a mask texture (or not, if you're actually drawing rectangles, but I guess you're probably not), and doing one rendering pass per sprite per shadow that falls upon it. Is that a passable solution?

Is there a penalty for mixing color spaces? (Core Graphics)

If I'm writing drawing code in Core Graphics on Mac OS X or iPhone OS, I can set the active fill color to red by calling:
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0); // RGB(1,0,0)
If I want 50% gray, I could call:
CGContextSetRGBFillColor(context, 0.5, 0.5, 0.5, 1.0); // RGB(0.5,0.5,0.5)
But for shades of gray it's tempting to make a shorter line and call:
CGContextSetGrayFillColor(context, 0.5, 1.0);
However, this function is NOT simply calling the RGB method with the intensity value copied three times; instead it is changing the context's color space from DeviceRGB to DeviceGray. The next call to an RGB method will switch it back.
I'm curious to know:
What's the penalty for switching color spaces?
Is there a penalty for drawing when your context's color space doesn't match your device's native color space? (i.e., drawing in DeviceGray versus DeviceRGB)
I'm asking out of technical curiosity, not a desire to prematurely optimize, so please keep your admonitions to a minimum.
Conceptually, there's a penalty, but in practice it's so minuscule as to be irrelevant; converting (e.g.) a shade of gray to an RGB triplet (plus alpha) is trivial arithmetic, even with a custom colour space.
Colour spaces do carry a penalty when you're drawing images, however, as that's more than a matter of a single conversion operation: every pixel must be converted, and while there are optimizations that can be done here (e.g. CLUTs, colour look-up tables, are useful if the source image uses indexed colours), they don't tend to be useful in situations where you also find Quartz code.
You say you expect CGContextSetGrayFillColor() to change the colour space of the graphics context, but that isn't actually the case. Doing so would necessitate converting the contents of that graphics context to match the context's new colour space. Since it's far cheaper and simpler to convert the colour instead of the context's buffers (e.g. by making CGContextSetGrayFillColor() an under-the-covers wrapper around CGContextSetRGBFillColor()), such an expense is going to be avoided in any sensible implementation.
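In other words, a sensible implementation behaves as if the grayscale call were sugar for the RGB one. Conceptually (a sketch of the equivalence in effect, not Apple's actual implementation):
// For a device-gray value g and alpha a, this call...
CGContextSetGrayFillColor(context, g, a);
// ...can be treated as producing the same fill color as:
CGContextSetRGBFillColor(context, g, g, g, a);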
I have been using both extensively in a similar manner and have noticed no penalty in terms of performance.