Distortion with 'pixel accurate' OpenGL rendering of sprites - iphone

To define what I'm trying to do: I want to be able to take an arbitrary 'sprite' image from a power-of-two-sized PNG and display just the pixels of interest at a given x/y position on screen.
My results are the problem: major distortion that looks awful. (Note these screenshots are from the iPhone simulator, but on a real retina device they appear the same: junky.) Here is a screenshot of the source PNG in Preview, which looks wonderful; any variation on rendering that I describe in this question looks almost exactly like the junky one.
Previously, I asked a question about displaying a non-power-of-two texture as a sprite using OpenGL ES 2.0 (although this applies to any OpenGL). I'm close, but there are some issues I can't resolve. I think there are probably multiple bugs: I suspect I'm aliasing what I display by rendering large and then squashing 2x (or vice versa), but I can't see where. On top of that, there are off-by-one errors I can't get a handle on; I can't visually identify them occurring, but I know for sure they're there.
I'm working in 960 x 640 landscape (on the iPhone 4 retina display), so I expect 0->959 to move left to right and 0->639 to move bottom to top. (I think I'm seeing the opposite of this, but that's not what this question is about.)
To keep things easy, what I'm trying to achieve in this test case is a FULL SCREEN 960x640 display of a single PNG file. I draw a red background first so that it's obvious whether or not I'm covering the screen.
Update: I realized that the glViewport call inside setFramebuffer was getting my width and height backwards. I noticed this because when I set my geometry to draw from 0,0 to 100,100 it drew a rectangle, not a square; with the arguments swapped, that call draws a square. However, using that same call, my entire screen fills with a vertex range of 0,0 -> 480,320 (half the 'resolution'), which I don't understand. And no matter where I push on from here, I'm still not getting a good-looking result.
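For reference, a minimal sketch of the template's own pattern (using its framebufferWidth/framebufferHeight ivars): it queries the drawable's real pixel size rather than hard-coding it, which keeps width first and height second:

// The template fills these from the renderbuffer itself, which avoids
// a transposed width/height pair:
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
glViewport(0, 0, framebufferWidth, framebufferHeight);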
Here's my vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
// Gives 'landscape' full screen..
mat4 projectionMatrix = mat4( 2.0/640.0, 0.0,       0.0, -1.0,
                              0.0,       2.0/960.0, 0.0, -1.0,
                              0.0,       0.0,      -1.0,  0.0,
                              0.0,       0.0,       0.0,  1.0);
// Gives a 1/4 of screen.. (not doing 2.0/.. was suggested in previous SO Q)
/*mat4 projectionMatrix = mat4( 1.0/640.0, 0.0,       0.0, -1.0,
                                0.0,       1.0/960.0, 0.0, -1.0,
                                0.0,       0.0,      -1.0,  0.0,
                                0.0,       0.0,       0.0,  1.0); */
// Apply the projection matrix to the position and pass the texCoord
void main()
{
    gl_Position = a_position;
    gl_Position *= projectionMatrix;
    v_texCoord = a_texCoord;
}
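A note on that matrix: GLSL mat4 constructors are column-major, so the -1 translation terms written above actually land in the matrix's bottom row, and the whole thing only works because the shader computes position * matrix rather than matrix * position. A hedged alternative sketch (u_projection is an invented uniform name, not from the post) builds the pixel-space ortho matrix on the CPU in explicit column-major order and uses the conventional matrix * position in the shader:

// Sketch: upload a 2D ortho projection mapping (0,0)-(w,h) pixels to
// clip space. The array is column-major, as glUniformMatrix4fv expects
// (transpose must be GL_FALSE in ES 2.0).
static void setPixelProjection(GLuint program, GLfloat w, GLfloat h)
{
    const GLfloat ortho[16] = {
        2.0f / w, 0.0f,      0.0f, 0.0f,   // column 0
        0.0f,     2.0f / h,  0.0f, 0.0f,   // column 1
        0.0f,     0.0f,     -1.0f, 0.0f,   // column 2
       -1.0f,    -1.0f,      0.0f, 1.0f    // column 3: translation
    };
    glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"), 1, GL_FALSE, ortho);
}
// ...and in the vertex shader: gl_Position = u_projection * a_position;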
Here's my fragment shader:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;
void main()
{
    gl_FragColor = texture2D(s_texture, v_texCoord);
}
Here's my draw code:
#define MYWIDTH 960.0f
#define MYHEIGHT 640.0f
// I have to refer to 'X' as height although I'd assume I'd use 'Y' here..
// I think my X and Y throughout this whole block of code are screwed up,
// but I have experimented with flipping them all and verified that if they
// are swapped from the way they're set now, things end up turned the
// wrong way. So this is a mess, but it's unlikely to be my problem.
#define BG_X_ORIGIN 0.0f
// ALSO NOTE HERE: I have to put my 'dest' at 640.0f.. --- see note [1] below
#define BG_X_DEST 640.0f
#define BG_Y_ORIGIN 0.0f
// --- see note [1] below
#define BG_Y_DEST 960.0f
// These are texturing coordinates, I texture starting at '0' px and then
// I calculate a percentage of the texture to use based on how many pixels I use
// divided by the actual size of the image (1024x1024)
#define BG_X_ZERO 0.0f
#define BG_Y_USEPERCENTAGE BG_X_DEST / 1023.0f
#define BG_Y_ZERO 0.0f
#define BG_X_USEPERCENTAGE BG_Y_DEST / 1023.0f
// glViewport(0, 0, MYWIDTH, MYHEIGHT);
// See note 2.. it sets glViewport basically, provided by Xcode project template
[(EAGLView *)self.view setFramebuffer];
// Big hack just to get things going - like I said before, these could be backwards
// w/respect to X and Y
static const GLfloat backgroundVertices[] = {
BG_X_ORIGIN, BG_Y_ORIGIN,
BG_X_DEST, BG_Y_ORIGIN,
BG_X_ORIGIN, BG_Y_DEST,
BG_X_DEST, BG_Y_DEST
};
static const GLfloat backgroundTexCoords[] = {
BG_X_ZERO, BG_Y_USEPERCENTAGE,
BG_X_USEPERCENTAGE, BG_Y_USEPERCENTAGE,
BG_X_ZERO, BG_Y_ZERO,
BG_X_USEPERCENTAGE, BG_Y_ZERO
};
// Turn on texturing
glEnable(GL_TEXTURE_2D);
// Clear to RED so that it's obvious when I'm not drawing my sprite on screen
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Texturing parameters - these make sense.. don't think they are the issue
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);//GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);//GL_LINEAR);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, backgroundVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, backgroundTexCoords);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, background->textureId);
// I don't understand what this uniform does in the texture2D call in shader.
glUniform1f(uniforms[UNIFORM_SAMPLERLOC], 0);
// Draw the geometry...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// present the framebuffer see note [3]
[(EAGLView *)self.view presentFramebuffer];
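One aside on the block above: in OpenGL ES 2.0 a sampler uniform holds a texture-unit index and may only be set with glUniform1i (loading a sampler with glUniform1f generates GL_INVALID_OPERATION), so that uniform call should read:

glUniform1i(uniforms[UNIFORM_SAMPLERLOC], 0); // s_texture samples texture unit 0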
Note [1]:
If I set BG_X_DEST to 639.0f I do not get full coverage of the 640 pixels; I get red showing through on the right-hand side. But this doesn't make sense to me: I'm aiming for pixel perfect, yet I have to draw my sprite geometry from 0 to 640, which is 641 pixels when I only have 640! (red line appearing with 639f instead of 640f)
And if I set BG_Y_DEST to 959.0f I do not get the red line showing through. (red line top bug appearing with 958f instead of 960 or 959f)
This may be a good clue as to what bug(s) I have going on.
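Worth spelling out (this is standard GL rasterization, not anything from the post): vertex coordinates address pixel edges, not pixel centers, so a quad spanning 0 to 640 covers exactly 640 pixel columns, not 641. Each column k owns the span [k, k+1] with its sample point at k + 0.5:

quad [0, 640] -> covers sample points 0.5 .. 639.5 -> all 640 columns
quad [0, 639] -> covers sample points 0.5 .. 638.5 -> 639 columns, one red stripe left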
Note [2]: provided by the Xcode OpenGL ES 2 project template
- (void)setFramebuffer
{
    if (context)
    {
        [EAGLContext setCurrentContext:context];
        if (!defaultFramebuffer)
            [self createFramebuffer];
        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }
}
Note [3]: provided by the Xcode OpenGL ES 2 project template
- (BOOL)presentFramebuffer
{
    BOOL success = FALSE;
    if (context)
    {
        [EAGLContext setCurrentContext:context];
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        success = [context presentRenderbuffer:GL_RENDERBUFFER];
    }
    return success;
}
Note [4] - relevant image-loading code. (I have used PNGs with and without an alpha channel and it doesn't seem to make any difference. I also tried changing my code to ARGB instead of RGBA, and that's wrong: since A = 1.0 everywhere, I get a very RED image, which makes me think the RGBA ordering is in fact valid and this code is right.) Update: I have switched this texture loading to a completely different setup using CG/ImageIO calls and it looks identical, so I assume it's not some kind of aliasing or color compression done by the image libraries (unless they both go down to the same fundamental calls, which is possible).
// Otherwise it isn't already loaded
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);//GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);//GL_LINEAR);
// TODO Next 2 can prob go later on..
glGenTextures(1, &(newTexture->textureId)); // generate Texture
// Use this before 'drawing' the texture to the memory...
glBindTexture(GL_TEXTURE_2D, newTexture->textureId);
NSString *path = [[NSBundle mainBundle]
pathForResource:[NSString stringWithUTF8String:newTexture->filename.c_str()] ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(@"Do real error checking here");
newTexture->width = CGImageGetWidth(image.CGImage);
newTexture->height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(newTexture->height * newTexture->width * 4 );
CGContextRef myContext = CGBitmapContextCreate
(imageData, newTexture->width, newTexture->height, 8, 4 * newTexture->width, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease(colorSpace);
CGContextClearRect(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height));
CGContextDrawImage(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height), image.CGImage);
// Texture is created!
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newTexture->width, newTexture->height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(myContext);
free(imageData);
[image release];
[texData release];
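One CoreGraphics fact worth attaching to this loader: kCGImageAlphaPremultipliedLast means the RGB bytes come out already multiplied by alpha, so if blending is ever enabled for these textures, the matching blend function is the premultiplied form:

// For premultiplied textures (CGBitmapContext output), blend with
// GL_ONE rather than GL_SRC_ALPHA:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);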

[(EAGLView *)self.view setContentScaleFactor:2.0f];
By default, iPhone views scale their content up to reach the high-resolution retina mode, and that scaling was destroying my image quality.
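A sketch of where that call belongs (assuming the Xcode template's EAGLView: the scale factor has to be set before createFramebuffer runs, because renderbufferStorage:fromDrawable: sizes the renderbuffer from the layer):

// In EAGLView's setup (e.g. initWithCoder:), before any framebuffer
// exists, so the CAEAGLLayer is allocated at the full 640x960 retina
// size instead of 320x480 scaled up by the window:
self.contentScaleFactor = 2.0f;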
Thanks for all the help folks

Related

Mixing Multiple Textures in OpenGL ES Shader on iOS Results in Inverse Behavior

I have an OpenGL ES fragment shader that uses the mix function to place an overlay over a video frame. This is my shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrameY;
uniform sampler2D videoFrameUV;
uniform sampler2D overlay;
const mat3 yuv2rgb = mat3(
    1,        1,       1,
    0,       -.21482,  2.12798,
    1.28033, -.38059,  0
);
void main() {
    vec3 yuv;
    vec4 ovr;
    yuv.x = texture2D(videoFrameY, textureCoordinate).r;
    yuv.yz = texture2D(videoFrameUV, textureCoordinate).rg - vec2(0.5, 0.5);
    ovr = texture2D(overlay, textureCoordinate);
    vec3 rgb = yuv2rgb * yuv;
    gl_FragColor = mix(ovr, vec4(rgb, 1.0), ovr.a);
}
Without the overlay texture, feeding gl_FragColor this:
gl_FragColor = vec4(rgb, 1.0);
works just fine and my video is displayed. Now I'm creating my overlay texture from a CATextLayer like this:
- (void)generateOverlay {
CATextLayer *textLayer = [CATextLayer layer];
[textLayer setString:@"Sample test string"];
[textLayer setFont:(__bridge CFStringRef)@"Helvetica"];
[textLayer setFontSize:(_videoHeight / 6)];
[textLayer setAlignmentMode:kCAAlignmentLeft];
[textLayer setBounds:CGRectMake(0, 0, _videoWidth, _videoHeight)];
CGSize layerSize = textLayer.bounds.size;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(layerSize.height * layerSize.width * 4);
CGContextRef context = CGBitmapContextCreate(imageData, layerSize.width, layerSize.height, 8, 4 * layerSize.width, colorspace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorspace);
CGContextClearRect(context, CGRectMake(0, 0, layerSize.width, layerSize.height));
CGContextTranslateCTM(context, 0, layerSize.height - layerSize.height);
[textLayer renderInContext:context];
glActiveTexture(GL_TEXTURE2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, layerSize.width, layerSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
}
The problem is that the results come out inverted, in two ways. First, instead of the blue being alpha'ed out and the video showing through, the text is the alpha and that is where the video shows through. Second, the text is mirrored. The mirroring could be a result of the vertex values being used, but the video is correct using the same coords. I'm sure this is a quick rearrangement, but I'm not sure what to tweak. Thanks!
For the mix, the mix(x, y, a) function interpolates between x and y based on a: a = 0 gives you all x, and a = 1.0 gives you all y. You're keying off the alpha of your text layer, so for the overlay you want, you need to reverse your ordering as follows:
gl_FragColor = mix(vec4(rgb, 1.0), ovr, ovr.a);
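Numerically, mix(x, y, a) expands to x * (1.0 - a) + y * a, so with the reversed ordering the overlay's own alpha keys it in:

// mix(vec4(rgb, 1.0), ovr, ovr.a):
// ovr.a = 0.0 -> vec4(rgb, 1.0) (transparent overlay pixel: video shows)
// ovr.a = 1.0 -> ovr            (opaque text pixel: overlay shows)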
In regards to the rotation, remember that the iOS rear cameras are mounted landscape left and the front cameras landscape right, so for a portrait orientation you need to rotate the incoming video frames. You appear to be performing that rotation in either your vertices or texture coordinates. You're going to need a second set of texture coordinates that aren't rotated to use for sampling your overlay image, or you'll need to draw your label at a matching landscape left rotation when generating its texture.
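A sketch of that second-coordinate approach (the attribute and varying names below are invented for illustration, not from the poster's project): pass an unrotated coordinate set alongside the rotated one and sample the overlay with it.

// Vertex shader: rotated coords for the camera frame, straight coords
// for the overlay.
attribute vec4 position;
attribute vec2 inputTextureCoordinate;   // rotated per camera mounting
attribute vec2 overlayCoordinateIn;      // hypothetical: unrotated 0..1 quad
varying vec2 textureCoordinate;
varying vec2 overlayCoordinate;
void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate;
    overlayCoordinate = overlayCoordinateIn;
}
// The fragment shader then samples: ovr = texture2D(overlay, overlayCoordinate);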

How can I crop an Image with mask and combine it with another image (background) on iPhone? (OpenGL ES 1.1 is preferred)

I need to combine three images the way I represent in the attached file:
1) One image is the background. It is 'solid' in the sense that it has no alpha channel.
2) Another is the sprite. The sprite lies upon the background. The sprite may have its own alpha channel; the background has to be visible in places where the sprite is transparent.
3) There are a number of masks: I apply a new mask to the sprite every frame. The mask isn't rectangular.
In other words, visible pixel =
pixel of background, if the cropping mask's corresponding color is white OR the sprite is transparent;
pixel of sprite otherwise (i.e., the corresponding mask pixel is black).
I'm working with cocos2d-iphone. Can I make such a combination with cocos2d-iphone or with OpenGL ES 1.1? If either answer is YES, working code would be appreciated. If both answers are NO, is there another technology on iOS that does what I want (maybe Quartz2D or OpenGL ES 2.0)?
The mask format doesn't have to be black for the sprite and white for the background; I can produce the mask in whatever format is required, such as transparent for the background and white for the sprite, if needed.
P.S. There's another unanswered question of the same kind:
Possible to change the alpha value of certain pixels on iPhone?
Here is my answer for OpenGL. The procedure would be very different for Quartz.
The actual code is pretty simple, but getting it exactly right is the tricky part. I am using a GL context that is 1024x1024 with the origin in the bottom left. I'm not posting my drawing code here because it uses immediate mode, which isn't available in OpenGL ES. If you want my drawing code, let me know and I'll update my answer.
Draw the mask with blending disabled.
Enable blending, set glBlendFunc(GL_DST_COLOR, GL_ZERO), and draw the bleed-through texture. My mask is white where it should bleed through; in your question it was black.
Now, to draw the background, set the blend function to glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_DST_COLOR) and draw the background texture.
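To make the three passes concrete, GL's blend equation is result = src * srcFactor + dst * dstFactor, so the framebuffer evolves as:

pass 1 (blending disabled):                    dst = mask
pass 2 (GL_DST_COLOR, GL_ZERO):                dst = sprite * mask
pass 3 (GL_ONE_MINUS_DST_COLOR, GL_DST_COLOR): dst = background * (1 - dst) + dst * dst

Where the mask was black (0), pass 3 reduces to pure background.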
EDIT: Here is the code I describe above. Please note that this will not work on iOS, since there is no immediate mode, but you should be able to get it working in a Macintosh project. Once it works there, you can convert it to something iOS-compatible and move that code over to your iOS project.
renderMask() is where the most interesting part happens; renderTextures() draws the sample textures in the top row.
static GLuint color_texture;
static GLuint mask_texture;
static GLuint background_texture;
static float window_size[2];
void renderMask()
{
float texture_x=0, texture_y=0;
float x=0, y=0;
{
glBindTexture(GL_TEXTURE_2D, mask_texture);
glDisable(GL_BLEND);
glBegin(GL_QUADS);
glTexCoord2f(texture_x,texture_y);
glVertex2f(x,y);
glTexCoord2f(texture_x+1.0,texture_y);
glVertex2f(x+512.0,y);
glTexCoord2f(texture_x+1.0,texture_y+1.0);
glVertex2f(x+512.0,y+512.0);
glTexCoord2f(texture_x,texture_y+1.0);
glVertex2f(x,y+512.0);
glEnd();
}
{
glBindTexture(GL_TEXTURE_2D, color_texture);
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ZERO);
glBegin(GL_QUADS);
glTexCoord2f(texture_x,texture_y);
glVertex2f(x,y);
glTexCoord2f(texture_x+1.0,texture_y);
glVertex2f(x+512.0,y);
glTexCoord2f(texture_x+1.0,texture_y+1.0);
glVertex2f(x+512.0,y+512.0);
glTexCoord2f(texture_x,texture_y+1.0);
glVertex2f(x,y+512.0);
glEnd();
}
{
glBindTexture(GL_TEXTURE_2D, background_texture);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_DST_COLOR);
glBegin(GL_QUADS);
glTexCoord2f(texture_x,texture_y);
glVertex2f(x,y);
glTexCoord2f(texture_x+1.0,texture_y);
glVertex2f(x+512.0,y);
glTexCoord2f(texture_x+1.0,texture_y+1.0);
glVertex2f(x+512.0,y+512.0);
glTexCoord2f(texture_x,texture_y+1.0);
glVertex2f(x,y+512.0);
glEnd();
}
}
// Draw small versions of the textures.
void renderTextures()
{
float texture_x=0, texture_y=0;
float x=0, y=532.0;
float size = 128;
{
glBindTexture(GL_TEXTURE_2D, mask_texture);
glDisable(GL_BLEND);
glBegin(GL_QUADS);
glTexCoord2f(texture_x,texture_y);
glVertex2f(x,y);
glTexCoord2f(texture_x+1.0,texture_y);
glVertex2f(x+size,y);
glTexCoord2f(texture_x+1.0,texture_y+1.0);
glVertex2f(x+size,y+size);
glTexCoord2f(texture_x,texture_y+1.0);
glVertex2f(x,y+size);
glEnd();
}
{
glBindTexture(GL_TEXTURE_2D, color_texture);
x = size + 16;
glBegin(GL_QUADS);
glTexCoord2f(texture_x,texture_y);
glVertex2f(x,y);
glTexCoord2f(texture_x+1.0,texture_y);
glVertex2f(x+size,y);
glTexCoord2f(texture_x+1.0,texture_y+1.0);
glVertex2f(x+size,y+size);
glTexCoord2f(texture_x,texture_y+1.0);
glVertex2f(x,y+size);
glEnd();
}
{
glBindTexture(GL_TEXTURE_2D, background_texture);
x = size*2 + 16*2;
glBegin(GL_QUADS);
glTexCoord2f(texture_x,texture_y);
glVertex2f(x,y);
glTexCoord2f(texture_x+1.0,texture_y);
glVertex2f(x+size,y);
glTexCoord2f(texture_x+1.0,texture_y+1.0);
glVertex2f(x+size,y+size);
glTexCoord2f(texture_x,texture_y+1.0);
glVertex2f(x,y+size);
glEnd();
}
}
void init()
{
GLdouble bounds[4];
glGetDoublev(GL_VIEWPORT, bounds);
window_size[0] = bounds[2];
window_size[1] = bounds[3];
glClearColor(0.0, 0.0, 0.0, 1.0);
glShadeModel(GL_SMOOTH);
// Load our textures...
color_texture = [[NSImage imageNamed:@"colors"] texture];
mask_texture = [[NSImage imageNamed:@"mask"] texture];
background_texture = [[NSImage imageNamed:@"background"] texture];
// Enable alpha blending. We'll learn more about this later
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
}
void draw()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColor3f(1.0, 1.0, 1.0);
renderMask();
renderTextures();
}
void reshape(int width, int height)
{
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, width, 0.0, height);
glMatrixMode(GL_MODELVIEW);
window_size[0] = width;
window_size[1] = height;
}
This shows my three textures drawn normally (crop, bleed-through, and background) and then combined below.
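Since iOS has no glBegin/glEnd, here is a hedged sketch of how one of the quads above could be ported to OpenGL ES 1.1 with vertex arrays (the texture IDs and the 512-pixel quad size are carried over from the listing; nothing else is assumed):

static void drawQuadES11(GLuint texture, GLfloat x, GLfloat y, GLfloat size)
{
    const GLfloat vertices[] = {
        x,        y,
        x + size, y,
        x + size, y + size,
        x,        y + size
    };
    const GLfloat texCoords[] = {
        0.0f, 0.0f,  1.0f, 0.0f,  1.0f, 1.0f,  0.0f, 1.0f
    };
    glBindTexture(GL_TEXTURE_2D, texture);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
// e.g. the mask pass: glDisable(GL_BLEND); drawQuadES11(mask_texture, 0, 0, 512.0f);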

OpenGL paint program based on Apple's 'glPaint' on a white background - how to blend?

I'm trying to write a simple paint program for iPhone, and I'm using Apple's glPaint sample as a guide. The only problem is that painting doesn't work on a white background, since white + colour = white. I've tried different blending functions but haven't been able to hit on the right combination of settings and/or brushes to make this work. I've seen similar posts about this problem but no answers. Does anyone know how this might work?
Edit:
I don't even really need transparency effects; at this point, if I could draw solid lines with rounded ends I'd be happy.
I got white backgrounds working (using the default GLPaint code) by just changing the clear colour in the erase method, i.e.,
- (void)erase
{
    [EAGLContext setCurrentContext:context];
    // Clear the buffer
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    //glClearColor(0.0, 0.0, 0.0, 0.0);
    glClearColor(1.0, 1.0, 1.0, 0.0); // Change to white
    glClear(GL_COLOR_BUFFER_BIT);
    // Display the buffer
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The default blend function and brush image seem to just work.
Rather than adding the colour to the blend, could you subtract its opposite? This is roughly how paint and light work in real life, and should give the correct functionality.
Ex: If the user is painting in Red:255 Green:0 Blue:100 Opacity:0.5, you should do this to the pixel:
pixel.red -= (255-paint.red) * paint.opacity; //Subtract 0
pixel.green -= (255-paint.green) * paint.opacity; //Subtract 127.5
pixel.blue -= (255-paint.blue) * paint.opacity; //Subtract 77.5
EDIT: As you pointed out, it is not what is expected, as painting over full blue with full red will go to black, since they subtract each other.
A possible fix to this would be to combine the additive and subtractive approach.
For instance if you added 0.5*paint.colour and subtracted 0.5*paint.complementaryColour, adding full red to full blue would result in:
newPixel.red -> 0 + 127.5 - 0 = 127.5
newPixel.green -> 0 + 0 - 127.5 = 0 //Cap it off, or invent new math =D
newPixel.blue -> 255 + 0 - 127.5 = 127.5
As you can see, this results in a nice purple colour, which is the combination of blue and red. You can tweak the proportion of additive to subtractive logic to simulate how well the paint mixes.
Hope that helps! =)
Yea, I had the same issue: the edges of the brush were darker than they should be. It turns out that Apple's API premultiplies the alpha into the RGB channels.
So I countered that by making a grayscale brush in Photoshop with just RGB values and no alpha channel. It should look the way you want your brush to be, with white representing full color pigmentation and black representing no color pigmentation.
I load that brush the way it's done in Apple's glPaint sample code. I then copy the R channel (or the G or B channel, as they are all equal) into the alpha channel of the texture. Following that, I set the R-G-B values to maximum for all pixels of the brush texture.
So now your brush has an alpha channel holding exactly how your brush looks, and the RGB are all 1.
Finally I used the blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And don't forget to set the color before you draw:
glColor4f(1.0f,0.0f,0.0f,1.0f); //red color
Check out the code below, see if it works for you:
-(GLuint) createBrushWithImage: (NSString*)brushName
{
GLuint brushTexture;
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData,*brushData1;
size_t width, height;
//initialize brush image
brushImage = [UIImage imageNamed:brushName].CGImage;
// Get the width and height of the image
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
//make the brush texture and context
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height *4, sizeof(GLubyte));
// We are going to use brushData1 to make the final texture
brushData1 = (GLubyte *) calloc(width * height *4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width *4 , CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
for(int i=0; i< width*height;i++){
//set the R-G-B channel to maximum
brushData1[i*4] = brushData1[i*4+1] =brushData1[i*4+2] =0xff;
//store originally loaded brush image in alpha channel
brushData1[i*4+3] = brushData[i*4];
}
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing the a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData1);
// Release the image data; it's no longer needed
free(brushData1);
free(brushData);
}
return brushTexture;
}
I ran into something similar. The following blend-function call solved it for me without any complicated math.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
Add this before your glDraw* calls and you should be able to draw with any texture as a brush.
Actually this works even better:
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
The sample code GLPaint uses glBlendFunc(GL_SRC_ALPHA, GL_ONE) in the - (id)initWithCoder:(NSCoder*)coder function, so while the background is white, none of the other colors can be seen. I want to solve this too.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
This is the best answer, and you have to change the brush picture: it must have a transparent background and a white ellipse.

Transparency/Blending issues with OpenGL ES/iPhone

I have a simple 16x16 particle that goes from opaque to transparent. Unfortunately, it appears different in my iPhone port and I can't see where the differences in the code are; most of the code is essentially the same.
I've uploaded an image here to show the problem:
The particle on the left is the incorrectly rendered iPhone version, and the one on the right is how it appears on Mac and Windows. It's just a simple RGBA .png file.
I've tried numerous blend functions and glTexEnv setting but I can't seem to make them the same.
Just to be thorough, my Texture loading code on the iPhone looks like the following
GLuint TextureLoader::LoadTexture(const char *path)
{
NSString *macPath = [NSString stringWithCString:path length:strlen(path)];
GLuint texture = 0;
CGImageRef textureImage = [UIImage imageNamed:macPath].CGImage;
if (textureImage == nil)
{
NSLog(@"Failed to load texture image");
return 0;
}
NSInteger texWidth = CGImageGetWidth(textureImage);
NSInteger texHeight = CGImageGetHeight(textureImage);
GLubyte *textureData = new GLubyte[texWidth * texHeight * 4];
memset(textureData, 0, texWidth * texHeight * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
CGContextRelease(textureContext);
//Make a texture ID, bind it, create it
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
delete[] textureData;
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return texture;
}
The blend function I use is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'll try any ideas people throw at me, because this has been a bit of a mystery to me.
Cheers.
This looks like the standard "textures are converted to premultiplied alpha" problem.
You can use
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
or you can write custom loading code to avoid the premultiplication.
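A hedged sketch of that custom-loading route: CGBitmapContextCreate won't produce non-premultiplied RGBA directly, so the usual workaround is to divide the alpha back out after CGContextDrawImage (slightly lossy for small alpha values because of rounding). Variable names match the loader in the question:

// textureData holds premultiplied RGBA from the bitmap context.
// Recover straight alpha: channel = premultiplied * 255 / alpha.
for (int i = 0; i < texWidth * texHeight; i++)
{
    GLubyte a = textureData[i * 4 + 3];
    if (a != 0 && a != 255)
    {
        textureData[i * 4 + 0] = (GLubyte)(textureData[i * 4 + 0] * 255 / a);
        textureData[i * 4 + 1] = (GLubyte)(textureData[i * 4 + 1] * 255 / a);
        textureData[i * 4 + 2] = (GLubyte)(textureData[i * 4 + 2] * 255 / a);
    }
}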
Call me naive, but seeing that premultiplying an image gives (a*r, a*g, a*b, a), I figured I'd just divide the RGB values by a.
Of course, as soon as the alpha value was larger than the r, g, b components, the particle texture became black. Oh well. Unless I can find a different image loader than the one above, I'll just make all the RGB components 0xff (white). This is a good temporary solution for me because I either need a white particle or can just colourise it in the application. Later on I might just make raw RGBA files and read them in, because this is mainly for very small particle textures, 16x16 and smaller.
I can't use Premultiplied textures for the particle system because overlapping multiple particle textures saturates the colours way too much.

How to draw a solid circle with cocos2d for iPhone

Is it possible to draw a filled circle with cocos2d ?
An outlined circle can be done using the drawCircle() function, but is there a way to fill it with a certain color? Perhaps by using pure OpenGL?
In DrawingPrimitives.m, change this in drawCircle:
glDrawArrays(GL_LINE_STRIP, 0, segs+additionalSegment);
to:
glDrawArrays(GL_TRIANGLE_FAN, 0, segs+additionalSegment);
You can read more about OpenGL primitives here:
http://www.informit.com/articles/article.aspx?p=461848
Here's a slight modification of ccDrawCircle() that lets you draw any slice of a circle. Stick this in CCDrawingPrimitives.m and also add the method header information to CCDrawingPrimitives.h:
Parameters: a: starting angle in radians, d: delta or change in angle in radians (use 2*M_PI for a complete circle)
Changes are commented
void ccDrawFilledCircle( CGPoint center, float r, float a, float d, NSUInteger totalSegs)
{
int additionalSegment = 2;
const float coef = 2.0f * (float)M_PI/totalSegs;
NSUInteger segs = d / coef;
segs++; //Rather draw over than not draw enough
if (d == 0) return;
GLfloat *vertices = calloc( sizeof(GLfloat)*2*(segs+2), 1);
if( ! vertices )
return;
for(NSUInteger i=0;i<=segs;i++)
{
float rads = i*coef;
GLfloat j = r * cosf(rads + a) + center.x;
GLfloat k = r * sinf(rads + a) + center.y;
//Leave first 2 spots for origin
vertices[2+ i*2] = j * CC_CONTENT_SCALE_FACTOR();
vertices[2+ i*2+1] =k * CC_CONTENT_SCALE_FACTOR();
}
//Put origin vertices into first 2 spots
vertices[0] = center.x * CC_CONTENT_SCALE_FACTOR();
vertices[1] = center.y * CC_CONTENT_SCALE_FACTOR();
// Default GL states: GL_TEXTURE_2D, GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_TEXTURE_COORD_ARRAY
// Needed states: GL_VERTEX_ARRAY,
// Unneeded states: GL_TEXTURE_2D, GL_TEXTURE_COORD_ARRAY, GL_COLOR_ARRAY
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
//Change to fan
glDrawArrays(GL_TRIANGLE_FAN, 0, segs+additionalSegment);
// restore default state
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
free( vertices );
}
Look into:
CGContextAddArc
CGContextFillPath
These will allow you to fill a circle without needing OpenGL
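A minimal sketch of that Quartz route (assuming you already have a CGContextRef, e.g. from UIGraphicsGetCurrentContext()):

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);            // solid red
CGContextAddArc(ctx, 160.0, 240.0, 50.0, 0.0, 2.0 * M_PI, 0); // full-circle path
CGContextFillPath(ctx);                                       // fill it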
I also wondered about this, but haven't really accomplished it. I tried using the CGContext stuff that Grouchal tipped above, but I can't get it to draw anything on the screen. This is what I've tried:
-(void) draw
{
[self makestuff:UIGraphicsGetCurrentContext()];
}
-(void)makestuff:(CGContextRef)context
{
// Drawing lines with a white stroke color
CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
// Draw them with a 2.0 stroke width so they are a bit more visible.
CGContextSetLineWidth(context, 2.0);
// Draw a single line from left to right
CGContextMoveToPoint(context, 10.0, 30.0);
CGContextAddLineToPoint(context, 310.0, 30.0);
CGContextStrokePath(context);
// Draw a connected sequence of line segments
CGPoint addLines[] =
{
CGPointMake(10.0, 90.0),
CGPointMake(70.0, 60.0),
CGPointMake(130.0, 90.0),
CGPointMake(190.0, 60.0),
CGPointMake(250.0, 90.0),
CGPointMake(310.0, 60.0),
};
// Bulk call to add lines to the current path.
// Equivalent to MoveToPoint(points[0]); for(i=1; i<count; ++i) AddLineToPoint(points[i]);
CGContextAddLines(context, addLines, sizeof(addLines)/sizeof(addLines[0]));
CGContextStrokePath(context);
// Draw a series of line segments. Each pair of points is a segment
CGPoint strokeSegments[] =
{
CGPointMake(10.0, 150.0),
CGPointMake(70.0, 120.0),
CGPointMake(130.0, 150.0),
CGPointMake(190.0, 120.0),
CGPointMake(250.0, 150.0),
CGPointMake(310.0, 120.0),
};
// Bulk call to stroke a sequence of line segments.
// Equivalent to for(i=0; i<count; i+=2) { MoveToPoint(point[i]); AddLineToPoint(point[i+1]); StrokePath(); }
CGContextStrokeLineSegments(context, strokeSegments, sizeof(strokeSegments)/sizeof(strokeSegments[0]));
}
These methods are defined in a cocos node class; I borrowed the makestuff method from a code example...
NOTE:
I'm trying to draw any shape or path and fill it. I know that the code above only draws lines, but I didn't want to continue until I got that working.
EDIT:
This is probably a crappy solution, but I think this would at least work.
Each CocosNode has a texture (Texture2D *). A Texture2D can be initialized from a UIImage, a UIImage can be initialized from a CGImageRef, and it is possible to create a CGImage-backed bitmap context with Quartz.
So, what you would do is:
Create a Quartz bitmap context (CGContextRef)
Draw into this context with Quartz
Initialize a UIImage with the resulting CGImageRef
Make a Texture2D that is initialized with that image
Set the texture of a CocosNode to that Texture2D instance
The question is whether this would be fast enough. I would prefer to somehow get a CGImageRef from the CocosNode directly and draw into it instead of going through all these steps, but I haven't found a way to do that yet (and I'm kind of a noob at this, so it's hard to actually get anywhere at all).
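A hedged sketch of those five steps (the Texture2D initializer and the node's texture property are as found in older cocos2d releases; treat the exact names as assumptions):

// 1-2: draw with Quartz into a bitmap context (CG allocates the pixels)
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 256, 256, 8, 256 * 4,
                                         space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
CGContextSetRGBFillColor(ctx, 0.0, 0.0, 1.0, 1.0);
CGContextAddArc(ctx, 128.0, 128.0, 100.0, 0.0, 2.0 * M_PI, 0);
CGContextFillPath(ctx);
// 3: wrap the pixels in a UIImage
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:cgImage];
// 4-5: make a Texture2D and hand it to the node
Texture2D *texture = [[Texture2D alloc] initWithImage:image];
sprite.texture = texture; // 'sprite' is a hypothetical node with a texture property
CGImageRelease(cgImage);
CGContextRelease(ctx);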
There is a new function in cocos2d CCDrawingPrimitives called ccDrawSolidCircle(CGPoint center, float r, NSUInteger segs). For those looking at this now, use this method instead, then you don't have to mess with the cocos2d code, just import CCDrawingPrimitives.h
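Hypothetical usage, matching the signature quoted above:

ccDrawSolidCircle(ccp(240, 160), 50, 32); // filled circle, 32 segments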
I used the approach below:
glLineWidth(2);
for(int i=0;i<50;i++){
ccDrawCircle( ccp(s.width/2, s.height/2), i,0, 50, NO);
}
I drew multiple circles with a for loop, and the result looks like a filled circle.