Invert text color using glBlendFunc in Cocos2d on CCLabelTTF - iphone

I am trying to add a CCLabelTTF to my Cocos2d project and have the text be an inverted version of the graphics behind it.
I am having a hard time figuring out which blend func to use.
I have to admit I do not really understand the concepts behind this, so I am basically just trying different modes.
I have tried several types:
This one inverts the background of the text, but leaves the text white:
[fontLabel setBlendFunc:(ccBlendFunc){GL_ONE_MINUS_DST_COLOR, GL_SRC_ALPHA}];
Can you help me in the right direction?
I want the text to be inverted, and the background to be invisible.

You can visually experiment with the various blendfunc methods with the aptly named Visual glBlendFunc tool.
You should also be aware that CCLabelTTF uses 8-bit (alpha mask, kCCTexture2DPixelFormat_A8) textures on 1st and 2nd generation devices, and 16-bit (alpha + intensity mask, kCCTexture2DPixelFormat_AI88) textures on 3rd generation and newer devices. This may or may not affect the blend mode results, or even make blending-based inversion impossible, because those textures contain no color information, only alpha (and intensity).

It cannot be done with glBlendFunc alone. The blending equation looks like this:
result = A * front_color OP B * back_color;
OpenGL lets you configure A and B with glBlendFunc(A, B), and OP (the operation) with glBlendEquation(OP).
To invert colors, you need
result = 1 - back_color;
You can do that by setting A = 1, B = 1, and OP = GL_FUNC_SUBTRACT, but you will also have to force front_color to (1, 1, 1, 1), for example in a fragment shader.
P.S. I might be wrong, so write a comment below and I will change my answer.

Related

How to create a Sprite Color Mask in Unity

I'm trying to use the Among Us Spritesheets to get crewmates in Unity. The spritesheet looks like this: https://www.reddit.com/r/AmongUs/comments/ir6nl0/main_player_sprite_sheet_for_those_who_wanted_it/
Each sprite is a blue/red color. Somehow the devs get every color of crewmate from these sprites, and I'm wondering how they did it.
How can I get every color of crewmate from this sprite sheet?
Thanks!
EDIT: Solution
I edited the title to be more accurate to my problem. Thanks to @G Kalendar for mentioning shaders, I hadn't thought about that.
What I ended up doing was creating a Shader Graph, extracting each color channel, multiplying it by a color value, and recombining them into a texture.
I followed this helpful and straightforward tutorial: https://www.youtube.com/watch?v=4dAGUxvsD24
This is what my Shader Graph ended up looking like:
"Secondary" and "Primary" are color properties.
Hope this helps somebody!
If you want to have multiple info stored in a single image, it's common practice to use the channels: Red, Green, Blue. Horizon Zero Dawn for example uses that technique to make the environment effects as efficient as possible.
Here it looks like blue and red are used as placeholders to mark an area. So in Unity's Shader Graph, when you use this image in a SampleTexture2D node, you can use a Split node to get the different channels of the image and isolate the parts you want to color in.
Then just multiply the different channels by the color you want, add them together and use that as the base color.
Edit: Or use the "Replace Color Node" I just learned about.

Unity - Avoid quad clipping or set rendering order

I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setting is simple: I create a grid using several quads (40x40) which I use to snap buildings. Those buildings also have a base made with quads. Every time I put one on the map, the quads overlap and they look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one first, and the white ones are background? Of course, I can change the red quad Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use SpriteRenderer (Unity 2D) you don't have that problem, and you can just set "Sorting Layer" or "Order in Layer" if you want to modify the rendering order.

How does CGContextClipToMask work internally?

I am trying to replicate the behavior of CGContextClipToMask on iOS without any luck so far. Does anyone know how CGContextClipToMask works internally? The documentation says it simply multiplies the image alpha value by the mask alpha value, so that is what I am doing in my custom function. However, when I draw the result image onto a CGContext multiple times using normal blending, the result gets darker and darker, whereas with CGContextClipToMask the result is correct and does not darken.
My theory is that CGContextClipToMask somehow uses the destination context, in addition to the image and mask to produce a correct result, but I just don't know enough about core graphics to be sure.
I've also read this question:
How to get the real RGBA or ARGB color values without premultiplied alpha?
and it's possible I am running into this problem, but then how does CGContextClipToMask get around the problem of precision loss with 8 bit alpha?
I found the problem. When multiplying the mask, I had to do a ceilf call on the RGB values like so:
float alphaPercent = (float)maskAlpha / 255.0f;
pixelDest->A = maskAlpha;
pixelDest->R = ceilf((float)pixelDest->R * alphaPercent);
pixelDest->G = ceilf((float)pixelDest->G * alphaPercent);
pixelDest->B = ceilf((float)pixelDest->B * alphaPercent);
Amazingly, this solves the problem...
I personally have no idea about the internals of CGContextClipToMask, but maybe you can find out something interesting by having a look at how it is implemented in GNUStep.
PS:
thinking more about it, when you write "multiplies the image alpha value by the mask alpha value", there is something that strikes my attention.
Indeed, as far as I know, a mask on iOS has no alpha channel and all my attempts to use alpha-channeled masks have given wrong results. Could this allow you to move further a bit?
Maybe the trick is multiplying the image by the mask (which is defined in the gray color space, and has got just one component), and leave the image alpha channel unchanged...
How does that sound to you?

Open GL ES: rendering a coloured texture in black and white?

Is it possible to render a coloured texture in black and white using ES 1.X? If yes, how?
The only thing I can think of is very convoluted: using the GL_COMBINE texEnv mode to do a per-pixel dot product. I can't seem to find a route through that doesn't involve an intermediate FBO and reducing the precision of your RGB channels to 7 bits apiece. You'd be using the dot3 functionality that's generally intended for lighting, but because you don't want to use negative values you end up with half the available range. You'd basically dot product everything with the vector (0.299, 0.587, 0.114) and output that on all three channels.
With a fragment shader that converts the color information into grayscale. It's pretty simple: just add all three channels and divide by three (there are more accurate weighted approaches, but this simple one works in most if not all cases). Note, though, that fragment shaders require OpenGL ES 2.0; under ES 1.X you are limited to fixed-function approaches like the GL_COMBINE one above.

Change texture opacity in OpenGL

This is hopefully a simple question: I have an OpenGL texture and would like to be able to change its opacity, how do I do that? The texture already has an alpha channel and blending works fine, but I want to be able to decrease the opacity of the whole texture, to fade it into the background. I have fiddled with glBlendFunc, but with no luck – it seems that I would need something like GL_SRC_ALPHA_MINUS_CONSTANT, which is not available. I am working on iPhone, with OpenGL ES.
I have no idea about OpenGL ES, but in standard OpenGL you would set the opacity by declaring a colour for the texture before you use it:
// R, G, B, A
glColor4f(1.0, 1.0, 1.0, 0.5);
The example would give you 50% alpha without affecting the colour of your texture. By adjusting the other values you can shift the texture colour too.
Use a texture combiner. Set the texture stage to do a GL_MODULATE operation between a texture and constant color. Then change the constant color from your code (glTexEnv, GL_TEXTURE_ENV_COLOR).
This should come as "free" in terms of performance. On most (if not all) graphics chips combiner operations take the same number of GPU cycles (usually 1), so just using a texture versus doing a modulate operation (or any other operation) is exactly the same cost.
Basically you have two options. First: use glTexEnv for your texture with GL_MODULATE, specify the color using glColor4*, and use a non-opaque level for the alpha channel. Note that glTexEnv only needs to be issued once, when you first load your texture. This scenario will not work if you specify colors in your vertex attributes, though; those override any glColor4* color you may set. In that case, you can resort to one of two alternatives: use texture combiners (an advanced topic, and not pleasant to use in the fixed pipeline), or "manually" change the vertex color attribute of each individual vertex (which can be undesirable for larger meshes).
If you are using modern OpenGL..
You can do this in the fragment shader (OPACITY is a placeholder for your uniform or constant):
uniform sampler2D u_Texture;
in vec2 TexCoord;
out vec4 color;
void main()
{
    color = vec4(1.0, 1.0, 1.0, OPACITY) * texture(u_Texture, TexCoord);
}
This allows you to apply an opacity value to a texture without disrupting the blending.
Thank you all for the ideas. I've played with both glColor4f and glTexEnv, and at last forced myself to read the glTexEnv manpage carefully. The manpage says that in the GL_MODULATE texturing mode, the resulting color is computed by multiplying the incoming fragment by the texture color (C = Cf × Ct), and the same goes for the alpha. I tried glColor4f(1, 1, 1, opacity) and that did not work, but passing the desired opacity into all four arguments of the call did the trick. (Still not sure why, though.)
The most straightforward way is to change the texture's alpha values on the fly. Since you tell OpenGL about the texture at some point, you will have the bitmap in memory, so you can just modify it and rebind it to the same texture id. In case you no longer have it in memory (due to space constraints, since you are on ES), you would normally retrieve the texture into a buffer with glGetTexImage(); note, however, that glGetTexImage() is desktop OpenGL only, so on ES you would instead have to re-decode the source image (or render the texture into an FBO and read it back with glReadPixels()). That's the clean solution.
Saving/retrieving operations are a bit costly, though, so you might want another solution. Thinking about it, you might be able to work with geometry behind the geometry displaying your texture, or simply work on the material/colour of the geometry that holds the texture. You will probably want some additive blending of the back-geometry. Using a glBlendFunc of
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA),
you might be able to "easily" and - more important, cheaply - achieve the desired effect.
Most likely you are using Core Graphics (CG) to get your image into a texture. When you use Core Graphics, the alpha is premultiplied into the color channels, which is why you have to pass the alpha into all four components of the glColor4f call.
I suspect that you had a black background, and thus by decreasing the amount of every color, you were effectively fading the color to black.