CGContextRelease() doesn't release the context? - iPhone

I've got a nice and short method to load textures in my iPhone app, where I call glGenTextures(13, &textures[0]);. Of course at the end of it I call glDeleteTextures(13, textures);, but the memory isn't fully released until I comment out this line:
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
(which of course results in the textures being totally black), even though the very next line is:
CGContextRelease(textureContext);
This is the full code for loading the textures:
- (void)loadTexture:(NSString *)name intoLocation:(GLuint)location {
    CGImageRef textureImage = [UIImage imageNamed:name].CGImage;
    if (textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }
    NSInteger texWidth = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);
    GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight << 2);
    int k, l = texWidth * texHeight << 2;
    for (k = 0; k < l; k++) textureData[k] = 0;
    CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth << 2,
                                                        CGImageGetColorSpace(textureImage),
                                                        kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(textureContext, 0, texHeight);
    CGContextScaleCTM(textureContext, 1.0, -1.0);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
    CGContextRelease(textureContext);
    glBindTexture(GL_TEXTURE_2D, location);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    glEnable(GL_TEXTURE_2D);
}

CGImage decodes its data lazily the first time it's drawn, and it holds on to the decoded data.
The UIImage class caches UIImage instances by name, and the instances hold the CGImages which are holding the decoded data. The UIImage cache is purged in response to low memory warnings.
If this is causing too much memory use in your app, you want to control the creation and destruction of UIImages more carefully, which means not using +[UIImage imageNamed:].
On the other hand, usually an app doesn't have that many named images - it's a static set determined at compile time. If you were just wondering about this memory usage, there you go.
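If you do need tighter control, here's a minimal sketch (non-ARC, to match the code above) of loading the image yourself so the decoded data is released along with the image. loadTextureFromCGImage:intoLocation: is a hypothetical helper wrapping the CGBitmapContext/glTexImage2D code from the question:
// Sketch only: load without the +imageNamed: cache so the decoded bitmap
// is freed when the UIImage is deallocated.
- (void)loadUncachedTexture:(NSString *)name intoLocation:(GLuint)location {
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];
    if (image == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }
    [self loadTextureFromCGImage:image.CGImage intoLocation:location]; // hypothetical helper
    [image release]; // pre-ARC: the decoded data goes away with the image
}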

Related

OpenGL OES iPhone glCopyTexImage2D

I am new to OpenGL ES on iPhone and have a memory issue with glCopyTexImage2D. As far as I understand, this function should copy the current framebuffer into the bound texture. But for some reason it always allocates new memory, which I can see in Instruments when checking the allocations.
My goal is to read texture images and draw on them; after drawing I want to save the new texture so I can scroll through the painting. So here is my code:
1) Init OpenGL and the framebuffer:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
if (!context || ![EAGLContext setCurrentContext:context]) {
    [self release];
    return nil;
}
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width, frame.size.height * scale);
glDisable(GL_DEPTH_TEST);
glDisable(GL_DITHER);
glMatrixMode(GL_MODELVIEW);
glEnableClientState(GL_VERTEX_ARRAY);
// Set a blending function appropriate for premultiplied alpha pixel data
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(64 / kBrushScale);
2) Now I load the saved images into the textures:
if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
    // load texture
    NSData* data = [[NSData alloc] initWithContentsOfFile:path];
    glGenTextures(1, &drawBoardTextures[i]);
    glBindTexture(GL_TEXTURE_2D, drawBoardTextures[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, [data bytes]);
    // free memory
    [data release];
}
3) And finally render the textures:
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
int width = 1024;
GLfloat quad[] = {0.0, 1024.0, 1024.0, 1024.0, 0.0, 0.0, 1024.0, 0.0};
GLfloat quadTex[] = {0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0};
for (int i = 0; i < 10; i++) {
    quad[0] = width * i;
    quad[2] = quad[0] + width;
    quad[4] = quad[0];
    quad[6] = quad[2];
    glBindTexture(GL_TEXTURE_2D, drawBoardTextures[i]);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0, quadTex);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindTexture(GL_TEXTURE_2D, 0);
}
4) So far everything works fine: with glTranslatef I can scroll through the textures, and no allocations show up in Instruments yet. Now I draw on the current window and want to save the result as follows:
int texIndex = offset.x/1024;
float diff = offset.x - (1024*texIndex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, drawBoardTextures[texIndex]);
glBindTexture(GL_TEXTURE_2D, drawBoardTextures[texIndex]);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, diff, 0, 0, 0, 1024-diff, 1024);
glBindTexture(GL_TEXTURE_2D, drawBoardTextures[texIndex + 1]);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024-diff, 0, diff, 1024);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glFlush();
Now the problems start: instead of writing directly into the generated textures, it writes into client memory. Every copied texture uses ~4 MB of RAM, but recopying the same texture doesn't need any additional memory. I really don't know what I did wrong.
Does anyone know what the problem is? Thanks a lot for your help.
cheers
chris

Pass textures in shader

The problem is that I pass two textures to the shader, but the picture on the 3D model is the same for both sampler2D variables (it is the first image, "normal.jpg"). I'll be very grateful for help. Here is the code:
GLuint _textures[2];
glEnable(GL_TEXTURE_2D);
glGenTextures(2, _textures);
glActiveTexture(GL_TEXTURE0);
bump = [self loadTextureWithName: @"normal.jpg"];
//bump = [self loadTextureWithName: @"specular.jpg"];
glUniform1i(bump, 0);
glActiveTexture(GL_TEXTURE1);
diffuse = [self loadTextureWithName2: @"diffuse.jpg"];
glUniform1i(diffuse, 0);
- (GLuint) loadTextureWithName: (NSString*) filename {
    CGImageRef textureImage = [UIImage imageNamed: filename].CGImage;
    NSInteger texWidth = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);
    GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4 * sizeof(GLubyte));
    memset((void*) textureData, 0, texWidth * texHeight * 4 * sizeof(GLubyte));
    CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4,
                                                        CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
    CGContextRelease(textureContext);
    //glActiveTexture(GL_TEXTURE0);
    //glShadeModel(GL_FLAT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    //glGenTextures(1, &TEX);
    glBindTexture(GL_TEXTURE_2D, _textures[0]);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    //texId = [NSNumber numberWithInt:TEX];
    //[textureImage release];//
    //[textureContext release];//
    return _textures[0];
}
- (GLuint) loadTextureWithName2: (NSString*) filename {
    CGImageRef textureImage = [UIImage imageNamed: filename].CGImage;
    NSInteger texWidth = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);
    GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4 * sizeof(GLubyte));
    memset((void*) textureData, 0, texWidth * texHeight * 4 * sizeof(GLubyte));
    CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4,
                                                        CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
    CGContextRelease(textureContext);
    //glActiveTexture(GL_TEXTURE1);
    //glShadeModel(GL_FLAT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    //glGenTextures(1, &TEX);
    glBindTexture(GL_TEXTURE_2D, _textures[1]);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    return _textures[1];
}
These two lines:
glUniform1i(bump, 0);
glUniform1i(diffuse, 0);
Appear to point both of your sampler2Ds to the first texture unit. I'd imagine you want:
glUniform1i(diffuse, 1);
To set a sampler2D you pass it the texture unit that it should read from.
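For reference, a minimal sketch of the usual two-texture setup. Note that glUniform1i expects a uniform location, not the texture name returned by the load methods, so program, bumpLoc, diffuseLoc and the uniform names below are assumptions for illustration:
// Sketch only: the shader program handle and uniform names are hypothetical.
GLint bumpLoc    = glGetUniformLocation(program, "u_bumpMap");
GLint diffuseLoc = glGetUniformLocation(program, "u_diffuseMap");

// Bind each texture object to its own texture unit...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textures[0]);   // bump/normal map
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textures[1]);   // diffuse map

// ...and point each sampler2D at the unit its texture is bound to.
glUniform1i(bumpLoc, 0);
glUniform1i(diffuseLoc, 1);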

iPhone OpenGLES 2.0 Text Texture w/ strange border (not stroke) issue

I'm using Core Graphics to create a text texture. Unfortunately the text renders like this (the text color is the same as the background to demonstrate the strange border).
I've tried playing with stroke colors and borders, so I think it is due to OpenGL ES 2.0 and not Core Graphics.
// Create default framebuffer object. The backing will be allocated for the current layer in -resizeFromLayer
glGenFramebuffers(1, &defaultFramebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
glActiveTexture(GL_TEXTURE0);
glUniform1i(uniforms[UNIFORM_SAMPLER], 0);
// Set up the texture state.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
texture = [[FW2Texture alloc] initWithString:@"Text"];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture.width, texture.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture.imageData);
And the Core Graphics bit:
-(id)initWithString:(NSString*)str {
    if((self = [super init])) {
        UIFont *font = [UIFont systemFontOfSize:17];
        CGSize size = [str sizeWithFont:font];
        NSInteger i;
        width = size.width;
        if((width != 1) && (((int)width) & (((int)width) - 1))) {
            i = 1;
            while(i < width)
                i *= 2;
            width = i;
        }
        height = size.height;
        if((height != 1) && (((int)height) & (((int)height) - 1))) {
            i = 1;
            while(i < height)
                i *= 2;
            height = i;
        }
        NSInteger BitsPerComponent = 8;
        int bpp = BitsPerComponent / 2;
        int byteCount = width * height * bpp;
        uint8_t *data = (uint8_t*) calloc(byteCount, 1);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
        CGContextRef context = CGBitmapContextCreate(data, width, height, BitsPerComponent,
                                                     bpp * width, colorSpace, bitmapInfo);
        CGColorSpaceRelease(colorSpace);
        CGContextSetGrayFillColor(context, 0.5f, 1.0f);
        CGContextTranslateCTM(context, 0.0f, height);
        CGContextScaleCTM(context, 1.0f, -1.0f);
        UIGraphicsPushContext(context);
        [str drawInRect:CGRectMake(0, 0, size.width, size.height)
               withFont:font
          lineBreakMode:UILineBreakModeWordWrap
              alignment:UITextAlignmentCenter];
        UIGraphicsPopContext();
        CGContextRelease(context);
        imageData = (uint8_t*)[[NSData dataWithBytesNoCopy:data length:byteCount freeWhenDone:YES] bytes];
    }
    return self;
}
What's your glBlendFunc? You're taking premultiplied alpha from CoreGraphics, so e.g. instead of a border pixel being (r, g, b, 0.5) it'll be (0.5*r, 0.5*g, 0.5*b, 0.5). That means you should composite with blending enabled, using glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) so that you get srcColour + (1 - alpha of srcColour)*dstColour.
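In other words, the texture should be drawn with blending enabled and a source factor of GL_ONE. A minimal sketch of the render-time state (the texture and sampler setup above stays the same):
// Premultiplied alpha from Core Graphics: RGB is already scaled by alpha,
// so the source color must not be multiplied by alpha a second time.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);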

Drawing a texture with an alpha channel doesn't work -- draws black

I am modifying GLPaint to use a different background, which in this case is white. The existing stamp they are using assumes the background is black, so I made a new background with an alpha channel. When I draw on the canvas it is still black; what gives? When I actually draw, I just bind the texture and it works, so something must be wrong in this initialization.
Here is the photo
- (id)initWithCoder:(NSCoder*)coder
{
    CGImageRef brushImage;
    CGContextRef brushContext;
    GLubyte *brushData;
    size_t width, height;
    if (self = [super initWithCoder:coder])
    {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            [self release];
            return nil;
        }
        // Create a texture from an image
        // First create a UIImage object from the data in a image file, and then extract the Core Graphics image
        brushImage = [UIImage imageNamed:@"test.png"].CGImage;
        // Get the width and height of the image
        width = CGImageGetWidth(brushImage);
        height = CGImageGetHeight(brushImage);
        // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
        // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
        // Make sure the image exists
        if (brushImage)
        {
            brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
            brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
            CGContextRelease(brushContext);
            glGenTextures(1, &brushTexture);
            glBindTexture(GL_TEXTURE_2D, brushTexture);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
            free(brushData);
        }
        // Set up OpenGL states
        glMatrixMode(GL_PROJECTION);
        CGRect frame = self.bounds;
        glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
        glViewport(0, 0, frame.size.width, frame.size.height);
        glMatrixMode(GL_MODELVIEW);
        glDisable(GL_DITHER);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
        glEnable(GL_POINT_SPRITE_OES);
        glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
        glPointSize(width / kBrushScale);
    }
    return self;
}
Your line here:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
Needs to be:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
One minus the destination alpha can be 0, since the dst alpha is usually 1.
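A quick worked example of why, assuming an opaque (white) canvas so the destination alpha is 1:
// Blend equation: result = srcFactor * srcColor + dstFactor * dstColor
// With glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA) and dstAlpha = 1:
//   dstFactor = 1 - 1 = 0, so transparent brush texels (srcAlpha = 0)
//   produce 0 * src + 0 * dst = black instead of leaving the canvas untouched.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard "over": src*a + dst*(1 - a)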

OpenGL ES textures from PNGs with transparency are being rendered with weird artifacts and driving me nuts!

I am beginning to work on my first OpenGL iPhone app, but I've hit an early snag.
I have a VERY SIMPLE little texture that I want to use as a sprite in a 2D game, but it renders with weird 'randomly' colored pixels up top.
http://i40.tinypic.com/2s7c9ro.png <-- Screenshot here
I sort of get the feeling that this is Photoshop's fault, so if anybody knows something about that, please let me know.
If it's not Photoshop then it's got to be my code... So here is the code in question:
- (void)loadTexture {
    CGImageRef textureImage = [UIImage imageNamed:@"zombie0.png"].CGImage;
    if (textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }
    NSInteger texWidth = CGImageGetWidth(textureImage);
    NSInteger texHeight = CGImageGetHeight(textureImage);
    GLubyte *textureData = (GLubyte *)malloc(texWidth * texHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
    CGContextRelease(textureContext);
    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    free(textureData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
This blend function yielded the best results.
Please let me know what you think is wrong.
Thank you very much, this problem has been driving me nuts.
One problem I can see from the code is that you do not clear your context before drawing the image. Since your image contains transparent areas and is composited onto the background, you just see whatever was in the memory allocated by malloc. Try setting your Quartz blend mode to copy before drawing the image:
CGContextSetBlendMode(textureContext, kCGBlendModeCopy);
You could also use calloc instead of malloc, since calloc gives you zeroed memory.
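As a sketch, the relevant lines of the loader above would then become (everything else unchanged):
// calloc returns zeroed memory, so texels not covered by the image start out as
// transparent black rather than whatever garbage malloc happened to hand back.
GLubyte *textureData = (GLubyte *)calloc(texWidth * texHeight * 4, sizeof(GLubyte));
CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
CGContextSetBlendMode(textureContext, kCGBlendModeCopy); // copy the image's pixels verbatim, ignoring what's already there
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);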
Your OpenGL blending is correct:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
gives you Porter-Duff "OVER", which is what you usually want.
Try erasing your CGContextRef first:
CGContextSetRGBFillColor(ctxt, 1, 1, 1, 0);
CGContextFillRect(ctxt, CGRectMake(0, 0, w, h));
Most probably your image has some colored pixels with a zero alpha value, but because of the blending function you are showing them. Try
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);