How to animate a texture in OpenGL ES on the iPhone?

I am new to iPhone development. I am currently working on a project that uses OpenGL ES, and I need to animate part of the view. For that I take a screenshot of the view and then create a texture from it.
This is my code:
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
UIGraphicsBeginImageContext(self.introductionTextLabel.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
GLuint texture[1];
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
GLuint width = CGImageGetWidth(viewImage.CGImage);
GLuint height = CGImageGetHeight(viewImage.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef contextTexture = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( contextTexture, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( contextTexture, 0, height - height );
CGContextDrawImage( contextTexture, CGRectMake( 0, 0, width, height ), viewImage.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(contextTexture);
free(imageData);
I have searched a lot but could not find a good method for animating the texture.
Can anyone suggest a good approach? If I am doing something wrong, please point it out.
Thanks in advance.

The only way I can recommend to modify the texture is to use glTexImage2D() for a full-frame update and glTexSubImage2D() for a partial update. Of course, it will perform better if you use preloaded and compressed textures...
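To make that concrete, a per-frame partial update could look roughly like the sketch below; the names regionX, regionY, regionW, regionH, and framePixels are hypothetical and not from the question:
// Sketch: replace one sub-rectangle of an already-created texture each frame.
// Assumes the texture was uploaded once with glTexImage2D and is bound here;
// framePixels points to tightly packed RGBA bytes for the region.
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexSubImage2D(GL_TEXTURE_2D,
                0,                        // mipmap level
                regionX, regionY,         // offset inside the texture
                regionW, regionH,         // size of the region being replaced
                GL_RGBA, GL_UNSIGNED_BYTE,
                framePixels);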

Related

iPhone OpenGL texture not completely transparent

I tried to paint a transparent texture over a sphere, but the transparent areas are not completely transparent; a vivid shade of gray remains. I tried to load a Photoshop-generated PNG and then paint it onto the sphere using the code below:
My code to load textures:
- (void) loadPNGTexture: (int)index Name: (NSString*) name{
CGImageRef imageRef = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", name]].CGImage;
GLsizei width = CGImageGetWidth(imageRef);
GLsizei height = CGImageGetHeight(imageRef);
GLubyte * data = malloc(width * 4 * height);
if (!data)
NSLog(#"error allocating memory for texture loading!");
else {
NSLog(#"Memory allocated for %#", name);
}
NSLog(#"Width : %d, Height :%d",width,height);
CGContextRef cg_context = CGBitmapContextCreate(data, width, height, 8, 4 * width, CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);
CGContextTranslateCTM(cg_context, 0, height);
CGContextScaleCTM(cg_context, 1, -1);
CGContextSetBlendMode(cg_context, kCGBlendModeCopy);
CGContextDrawImage(cg_context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(cg_context);
glGenTextures(2, m_texture[index]);
glBindTexture(GL_TEXTURE_2D, m_texture[index][0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
free(data);
}
Drawing clouds:
glPushMatrix();
glTranslatef(0, 0, 3 );
glScalef(3.1, 3.1, 3.1);
glRotatef(-1, 0, 0, 1);
glRotatef(90, -1, 0, 0);
glDisable(GL_LIGHTING);
//Load Texture for left side of globe
glBindTexture(GL_TEXTURE_2D, m_texture[CLOUD_TEXTURE][0]);
glVertexPointer(3, GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].vertex);
glNormalPointer(GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].normal);
glTexCoordPointer(2, GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].texCoord);
// draw the sphere
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_COPY);
glDrawArrays(GL_TRIANGLES, 0, 11520);
glEnable(GL_LIGHTING);
glPopMatrix();
The first thing that stands out in your code is this line:
glBlendFunc(GL_SRC_ALPHA, GL_COPY);
The second argument (GL_COPY) is not a valid argument for glBlendFunc.
You might want to change it to something along the lines of:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

iPhone OpenGL Texture loader issues

OK, what I'm trying to achieve is to load an image as a single resource and then save different parts of it as a number of separate textures (basically chopping the image into smaller squares and saving them separately).
For my game, simply mapping different sections of the original image onto my shapes won't work, and being able to have each 'tile' as a separate texture would be awesome.
Below is the code I'm using for my texture loader. I've tried messing around with the width and height of the texture being loaded, but I'm getting some weird results.
Any suggestions would be much appreciated.
Thanks
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
NSString *path =
[[NSBundle mainBundle] pathForResource:@"checkerplate" ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate(
imageData, width, height, 8, 4 * width, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
OK, just in case anyone wants to achieve this, I figured it out. All you need to do is modify the CGContextDrawImage line and alter the width and height parameters.
This results in the whole texture being allocated, but only the area specified on that line is drawn into it.
Basically,
CGContextDrawImage( context, CGRectMake(0, 0, width*2, height*2 ), image.CGImage );
will draw 1/4 of the image (since its width and height are doubled past the texture's size).
:)
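A different way to get the same effect, not part of the original answer, is to crop the source image once per tile with CGImageCreateWithImageInRect and upload each piece as its own texture. A rough sketch, where tileW, tileH, row, col, tilesPerRow, and textures[] are hypothetical names:
// Sketch: cut one tile out of the source image and upload it as its own texture.
CGImageRef tile = CGImageCreateWithImageInRect(image.CGImage,
    CGRectMake(col * tileW, row * tileH, tileW, tileH));
void *tileData = malloc(tileW * tileH * 4);
CGColorSpaceRef tileSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef tileContext = CGBitmapContextCreate(tileData, tileW, tileH, 8, 4 * tileW,
    tileSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(tileSpace);
CGContextDrawImage(tileContext, CGRectMake(0, 0, tileW, tileH), tile);
glBindTexture(GL_TEXTURE_2D, textures[row * tilesPerRow + col]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tileW, tileH, 0, GL_RGBA, GL_UNSIGNED_BYTE, tileData);
CGContextRelease(tileContext);
CGImageRelease(tile);
free(tileData);
Keep tileW and tileH powers of two, since OpenGL ES 1.1 requires power-of-two texture sizes on the device.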

Problem displaying a texture image on a view that works fine on the iPhone simulator but not on the device

I am trying to display an image on the iPhone by converting it into a texture and then displaying it on the UIView.
Here is the code to load an image from a UIImage object:
- (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture
{
int width, height;
CGImageRef cgImage;
GLubyte *data;
CGContextRef cgContext;
CGColorSpaceRef colorSpace;
GLenum err;
if (image == nil)
{
NSLog(#"Failed to load");
return;
}
cgImage = [image CGImage];
width = CGImageGetWidth(cgImage);
height = CGImageGetHeight(cgImage);
colorSpace = CGColorSpaceCreateDeviceRGB();
// Malloc may be used instead of calloc if your cg image has dimensions equal to the dimensions of the cg bitmap context
data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
if (cgContext != NULL)
{
// Set the blend mode to copy. We don't care about the previous contents.
CGContextSetBlendMode(cgContext, kCGBlendModeCopy);
CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage);
glGenTextures(1, &(_textures[texture]));
glBindTexture(GL_TEXTURE_2D, _textures[texture]);
if (mipmap)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
else
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
if (mipmap)
glGenerateMipmapOES(GL_TEXTURE_2D);
err = glGetError();
if (err != GL_NO_ERROR)
NSLog(#"Error uploading texture. glError: 0x%04X", err);
CGContextRelease(cgContext);
}
free(data);
CGColorSpaceRelease(colorSpace);
}
The problem I am currently facing is that this code works perfectly fine and displays the image on the simulator, whereas on the device the debugger shows an error: Error uploading texture. glError: 0x0501.
Any idea how to tackle this bug?
Thanks in advance for your solutions.
If the texture has non-power-of-two dimensions, you need to use EXT_texture_rectangle (GL_TEXTURE_RECTANGLE_EXT instead of GL_TEXTURE_2D, with different texture coordinates).
If that doesn't help, it would be good to know which line(s) cause the GL_INVALID_VALUE (0x0501) error.
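To confirm whether that is the cause, a quick check before the glTexImage2D call can help; this is only an illustrative sketch, not part of the original code:
// Sketch: GL_TEXTURE_2D on OpenGL ES 1.1 devices requires power-of-two sizes,
// while the simulator is often more forgiving, which would explain the difference.
BOOL widthIsPOT = (width & (width - 1)) == 0;
BOOL heightIsPOT = (height & (height - 1)) == 0;
if (!widthIsPOT || !heightIsPOT)
    NSLog(@"Texture is %d x %d; non-power-of-two sizes will fail on the device", width, height);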

Creating UIImage with per-pixel alpha from GL buffer

I'm trying to take my GL buffer and turn it into a UIImage while retaining the per-pixel alpha within that GL buffer. It doesn't seem to work; the result I'm getting is the buffer without alpha. Can anyone help? I feel like I must be missing a few key steps somewhere. I would really appreciate any advice on this.
Basically I do:
//Read Pixels from OpenGL
glReadPixels(0, 0, miDrawBufferWidth, miDrawBufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
//Make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, len, NULL);
//Configure image
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight, 8, 32, (4 * miDrawBufferWidth), colorSpaceRef, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
// use device's orientation's width/height to determine context dimensions (and consequently resulting image's dimensions)
uint32* pixels = (uint32 *) IQ_NEW(kHeapGfx, "snapshot_pixels") GLubyte[len];
// use kCGImageAlphaLast? :-/
CGContextRef context = CGBitmapContextCreate(pixels, iRotatedWidth, iRotatedHeight, 8, (4 * iRotatedWidth), CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, miDrawBufferWidth, miDrawBufferHeight), iref);
UIImage *outputImage = [[UIImage alloc] initWithCGImage:CGBitmapContextCreateImage(context)];
//cleanup
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
return outputImage;
Yes! Luckily, someone has apparently solved this exact problem here: http://www.iphonedevsdk.com/forum/iphone-sdk-development/23525-cgimagecreate-alpha.html
It boiled down to passing an extra kCGImageAlphaLast flag into CGImageCreate to incorporate the alpha (along with the kCGBitmapByteOrderDefault flag). :)
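Based on that description, the corrected CGImageCreate call would presumably look something like this (untested sketch):
// Sketch: include kCGImageAlphaLast in the bitmap info so the fourth channel
// read back by glReadPixels is treated as alpha.
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight,
                                8, 32, 4 * miDrawBufferWidth, colorSpaceRef,
                                kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                provider, NULL, NO, kCGRenderingIntentDefault);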

Loading an image into OpenGL ES from C++ code on iPhone

I'm creating an iPhone game and I need to load an image from a PNG file into OpenGL (and bind it as a texture). I'm using the glTexImage2D function to achieve this.
I know how to load an image into OpenGL using UIImage (by converting it into a CGImage and then drawing it into a context).
How can I call my Objective-C code from within C++ code (I'm coding in a .mm file)?
Here's the code I'm using, which would work in Objective-C:
NSString *path = [[NSBundle mainBundle] pathForResource:@"texture" ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(#"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, 0 );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
Is there a way to call those UI and Core Graphics functions from C++ code? What files do I have to include?
Thank you in advance.
Solved the problem. I had to manually find and add the Core Graphics framework.
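For reference, a .mm file is compiled as Objective-C++, so the Objective-C loading code can be wrapped in a plain function that the C++ side calls directly; a minimal sketch, with LoadPNGTexture as a hypothetical name:
// In a .mm file, Objective-C and C++ can be mixed freely.
#import <UIKit/UIKit.h>
#import <OpenGLES/ES1/gl.h>

// Plain function callable from C++ code.
void LoadPNGTexture(const char *resourceName)
{
    NSString *name = [NSString stringWithUTF8String:resourceName];
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];
    if (image == nil)
        NSLog(@"Do real error checking here");
    // ... create the bitmap context and call glTexImage2D exactly as shown above ...
    [image release];
}
Because the whole translation unit is Objective-C++, no extra bridging is needed beyond linking the UIKit, CoreGraphics, and OpenGLES frameworks.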