Real memory usage increases 2x when I load a texture in OpenGL ES - iPhone

I've been banging my head against this one for a while. The situation is this: I've made a 2D game for iPhone on my own little OpenGL 2D game engine (which I built as an experiment but which may now actually ship). I'm trying to get memory under control. I'm using texture atlases, and I'm familiar with PVRTC (but as of now I'm not using it). The issue is that if I load a 1024x1024 PNG texture atlas, which I expect to take about 4 megs when expanded into memory (1024 x 1024 x 4 bytes per pixel - RGBA8888 = 4 megs), the real memory usage (according to Instruments' Memory Monitor) increases by 8 megs. Aaagh!
I'm aware that OpenGL ES takes the texture data, expands it into memory, reorders the pixels to work on the PowerVR chip, and then makes a texture out of it (or something similar). Is it possible that this intermediate memory is not getting freed, so that I have two copies of each texture sitting around in memory? From the Objective-C side of things, I see everything releasing correctly. But I don't know what goes on behind the OpenGL API. I'm probably missing something.
I got my texture-loading implementation from O'Reilly's iPhone Game Development book. Here are the key points of the implementation:
Step 1 - get image data to correct (power of 2) size:
- (id) initWithImage:(UIImage *)uiImage
{
NSUInteger width, height, i;
CGContextRef context = nil;
void* data = nil;
CGColorSpaceRef colorSpace;
void* tempData;
unsigned int* inPixel32;
unsigned short* outPixel16;
BOOL hasAlpha;
CGImageAlphaInfo info;
CGAffineTransform transform;
CGSize imageSize;
GLTexturePixelFormat pixelFormat;
CGImageRef image;
UIImageOrientation orientation;
BOOL sizeToFit = NO;
image = [uiImage CGImage];
orientation = [uiImage imageOrientation];
if(image == NULL) {
[self release];
NSLog(#"Image is Null");
return nil;
}
info = CGImageGetAlphaInfo(image);
hasAlpha = ((info == kCGImageAlphaPremultipliedLast) || (info == kCGImageAlphaPremultipliedFirst) || (info == kCGImageAlphaLast) || (info == kCGImageAlphaFirst) ? YES : NO);
if(CGImageGetColorSpace(image)) {
if(hasAlpha)
pixelFormat = kGLTexturePixelFormat_RGBA8888;
else
pixelFormat = kGLTexturePixelFormat_RGB565;
} else { //NOTE: No colorspace means a mask image
pixelFormat = kGLTexturePixelFormat_A8;
}
imageSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
transform = CGAffineTransformIdentity;
width = imageSize.width;
if((width != 1) && (width & (width - 1))) {
i = 1;
while((sizeToFit ? 2 * i : i) < width)
i *= 2;
width = i;
}
height = imageSize.height;
if((height != 1) && (height & (height - 1))) {
i = 1;
while((sizeToFit ? 2 * i : i) < height)
i *= 2;
height = i;
}
while((width > kMaxTextureSize) || (height > kMaxTextureSize)) {
width /= 2;
height /= 2;
transform = CGAffineTransformScale(transform, 0.5, 0.5);
imageSize.width *= 0.5;
imageSize.height *= 0.5;
}
switch(pixelFormat) {
case kGLTexturePixelFormat_RGBA8888:
colorSpace = CGColorSpaceCreateDeviceRGB();
data = malloc(height * width * 4);
context = CGBitmapContextCreate(data, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
break;
case kGLTexturePixelFormat_RGB565:
colorSpace = CGColorSpaceCreateDeviceRGB();
data = malloc(height * width * 4);
context = CGBitmapContextCreate(data, width, height, 8, 4 * width, colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
break;
case kGLTexturePixelFormat_A8:
data = malloc(height * width);
context = CGBitmapContextCreate(data, width, height, 8, width, NULL, kCGImageAlphaOnly);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid pixel format"];
}
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextTranslateCTM(context, 0, height - imageSize.height);
if(!CGAffineTransformIsIdentity(transform))
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
//Convert "RRRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRRGGGGGGBBBBB"
if(pixelFormat == kGLTexturePixelFormat_RGB565) {
tempData = malloc(height * width * 2);
inPixel32 = (unsigned int*)data;
outPixel16 = (unsigned short*)tempData;
for(i = 0; i < width * height; ++i, ++inPixel32)
*outPixel16++ = ((((*inPixel32 >>  0) & 0xFF) >> 3) << 11) |  // R: 8 bits -> 5 bits
                ((((*inPixel32 >>  8) & 0xFF) >> 2) <<  5) |  // G: 8 bits -> 6 bits
                ((((*inPixel32 >> 16) & 0xFF) >> 3) <<  0);   // B: 8 bits -> 5 bits
free(data);
data = tempData;
}
self = [self initWithData:data pixelFormat:pixelFormat pixelsWide:width pixelsHigh:height contentSize:imageSize];
CGContextRelease(context);
free(data);
return self;
}
Step 2 - bind and load texture:
- (id) initWithData:(const void*)data pixelFormat:(GLTexturePixelFormat)pixelFormat pixelsWide:(NSUInteger)width pixelsHigh:(NSUInteger)height contentSize:(CGSize)size
{
GLint saveName;
if((self = [super init])) {
glGenTextures(1, &_name); //get a new texture id. _name increases as more textures are loaded
glGetIntegerv(GL_TEXTURE_BINDING_2D, &saveName); //generally, saveName==1. gets existing bound texture, so we can restore it after load.
glBindTexture(GL_TEXTURE_2D, _name); //start working with our new texture id
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); //added by ijames
//associate pixel data with the texture id.
switch(pixelFormat) {
case kGLTexturePixelFormat_RGBA8888:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
break;
case kGLTexturePixelFormat_RGB565:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, data);
break;
case kGLTexturePixelFormat_A8:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, data);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@""];
}
glBindTexture(GL_TEXTURE_2D, saveName); //restore the previous texture binding.
//NSLog(#"name %d, savename %d", _name, saveName);
_size = size;
_width = width;
_height = height;
_format = pixelFormat;
_maxS = size.width / (float)width;
_maxT = size.height / (float)height;
}
return self;
}
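One thing worth noting: glTexImage2D copies the pixel data into GL-managed memory, and that copy lives until the texture object itself is deleted. For reference, here is a minimal sketch of the matching cleanup, assuming _name holds the texture id generated above and manual reference counting as in the rest of this code:
- (void) dealloc
{
    //the GL driver's copy of the pixel data is only released when the
    //texture object itself is deleted
    if (_name != 0)
        glDeleteTextures(1, &_name);
    [super dealloc];
}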
Do you see anything horribly wrong? Have any of you run into this problem before? Where on earth is this phantom memory coming from?
Thanks for your time and thoughts!
EDIT 1:
I just added some lines to initWithImage:, immediately before the call to initWithData:, to convert any RGBA8888 textures to RGBA4444 on the fly while loading (sketched below), just to see what would happen and how bad the graphics hit would be. The result was that real memory usage decreased by almost 2x. That means that wherever the mystery doubling is happening, it happens in or after the initWithData: step. Thanks again for your thoughts!
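For reference, the conversion looks roughly like this - a sketch in the style of the RGB565 branch in initWithImage: above, reusing the same variables; the matching glTexImage2D call would pass GL_RGBA with GL_UNSIGNED_SHORT_4_4_4_4:
//Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
tempData = malloc(height * width * 2);
inPixel32 = (unsigned int*)data;
outPixel16 = (unsigned short*)tempData;
for(i = 0; i < width * height; ++i, ++inPixel32)
    *outPixel16++ = ((((*inPixel32 >>  0) & 0xFF) >> 4) << 12) |  // R: 8 bits -> 4 bits
                    ((((*inPixel32 >>  8) & 0xFF) >> 4) <<  8) |  // G: 8 bits -> 4 bits
                    ((((*inPixel32 >> 16) & 0xFF) >> 4) <<  4) |  // B: 8 bits -> 4 bits
                    ((((*inPixel32 >> 24) & 0xFF) >> 4) <<  0);   // A: 8 bits -> 4 bits
free(data);
data = tempData;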
EDIT 2:
To answer one of the comments, here is how initWithImage: is called (this is the only place it happens - from a ResourceManager class that manages a cache of textures):
//NOTE: 'texture' and '_textures' are declared earlier...
//if the texture doesn't already exist, create it and add it to the cache
NSString * fullPath = [[[NSBundle mainBundle] bundlePath] stringByAppendingPathComponent: fileName];
UIImage * textureImg = [[UIImage alloc] initWithContentsOfFile: fullPath];
texture = [[GLTexture alloc] initWithImage: textureImg]; //here's the call
[textureImg release];
[_textures setValue: texture forKey: fileName];
return [texture autorelease];
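Since the cache keeps every texture alive, a purge path for memory warnings helps too. A minimal sketch of a hypothetical helper; it assumes _textures is an NSMutableDictionary and that GLTexture deletes its GL texture name in dealloc:
- (void) purgeTextureCache
{
    //releasing the cached GLTexture objects frees both the Objective-C
    //wrappers and (via their dealloc) the GL-side copies of the pixel data
    [_textures removeAllObjects];
}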

Related

Inappropriate Images display due to overlapping in CGContext

I can't find the solution for this:
I have two image views, each with a different image - image_1 (the jeans of a person) and image_2 (the shirt of a person). When I change the RGB values individually for each pixel of image_1 or image_2, I get the perfect result. But the problem occurs whenever one of the two frames slightly overlaps the other after both have been processed. Please help. This is how I am processing the image:
-(UIImage *)ColorChangeProcessing:(int)redvalue greenValue:(int)greenvalue blueValue:(int)bluevalue imageUsed:(UIImage *)image
{
CGContextRef ctx;
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
int byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel, RED = redvalue, GREEN = greenvalue, BLUE = bluevalue;
for (int ii = 0 ; ii < width * height ; ++ii)
{
if(rawData[byteIndex] != '\0' || rawData[byteIndex+1] != '\0' || rawData[byteIndex+2] != '\0'){
if ((((rawData[byteIndex])+RED)) > 255)
{
rawData[byteIndex] = (char)255;
}
else if((((rawData[byteIndex])+RED)) >0)
{
rawData[byteIndex] = (char) (((rawData[byteIndex] * 1.0) + RED));
}
else
{
rawData[byteIndex] = (char)0;
}
if ((((rawData[byteIndex+1])+GREEN)) > 255)
{
rawData[byteIndex+1] = (char)255;
}
else if((((rawData[byteIndex+1])+GREEN))>0)
{
rawData[byteIndex+1] = (char) (((rawData[byteIndex+1] * 1.0) + GREEN));
}
else
{
rawData[byteIndex+1] = (char)0;
}
if ((((rawData[byteIndex+2])+BLUE)) > 255)
{
rawData[byteIndex+2] = (char)255;
}
else if((((rawData[byteIndex+2])+BLUE))>0)
{
rawData[byteIndex+2] = (char) (((rawData[byteIndex+2] * 1.0) + BLUE));
}
else
{
rawData[byteIndex+2] = (char)0;
}
}
byteIndex += 4;
}
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
CGImageRef NewimageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:NewimageRef];
CGContextRelease(ctx);
free(rawData);
CGImageRelease(NewimageRef);
return rawImage;
}
Now, on any button action, you can set the R, G, B and image values and get the processed image. Then just try to place the processed images' frames so that part of one image is covered by the other - for example, if you have the jeans image, try placing the small portion near the belt over the shirt image.
Finally I came up with the solution: I was failing to check the alpha value, so the transparent part of the image was what caused the problem. A sketch of the check is below. Thanks all.
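What that check could look like inside the processing loop above - a sketch assuming RGBA byte order, so the alpha byte is rawData[byteIndex + 3]:
if (rawData[byteIndex + 3] != 0) { //skip fully transparent pixels
    //... adjust R, G and B exactly as in the loop above ...
}
byteIndex += 4;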

Getting an openGL image - runs in simulator, crashes on iPad

The purpose of this function is to return a UIImage from an OpenGL image. The reason it's being converted to a CGImage is so OpenGL and UIKit elements can be rendered on top of each other, which is taken care of in another function.
The strange thing is that when the app is run in the simulator, everything works fine. However, when testing the app on multiple different iPads, the app crashes with an EXC_BAD_ACCESS code=1 error when the drawGlToImage method is called on self. Does anyone know what I'm doing here that would cause this? I've read that UIGraphicsBeginImageContext() used to have thread-safety issues, but it seems like that was fixed in iOS 4.
- (UIImage *)drawGlToImage
{
self.context = [EAGLContext currentContext];
[EAGLContext setCurrentContext:self.context];
UIGraphicsBeginImageContext(self.view.frame.size);
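// note: the buffer declared on the next line is ~3 MB of stack space (1024 * 768 * 4 bytes)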
unsigned char buffer[1024 * 768 * 4];
NSInteger dataSize = 1024 * 768 * 4;
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(currentContext);
glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, &buffer);
//flip the image
GLubyte *flippedBuffer = (GLubyte *) malloc(dataSize);
for(int y = 0; y <768; y++)
{
for(int x = 0; x <1024 * 4; x++)
{
if(buffer[y* 4 * 1024 + x]==0)
flippedBuffer[(767 - y) * 1024 * 4 + x]=1;
else
flippedBuffer[(767 - y) * 1024 * 4 + x] = buffer[y* 4 * 1024 + x];
}
}
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, flippedBuffer, 1024 * 768 * 4, NULL);
CGImageRef iref = CGImageCreate(1024,768,8,32,1024*4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast, ref, NULL, true, kCGRenderingIntentDefault);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextTranslateCTM(currentContext, 0, -self.view.frame.size.height);
UIGraphicsPopContext();
UIImage *image = [[UIImage alloc] initWithCGImage:iref];
UIGraphicsEndImageContext();
return image;
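// note: the two cleanup lines below are never executed because of the return above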
free(flippedBuffer);
UIGraphicsPopContext();
}
When a button is pressed, a method that is called makes this assignment, which causes the app to crash.
UIImage *glImage = [self drawGlToImage];
I am not sure at which point you are calling this method, but before calling any OpenGL functions you need to make the right OpenGL context current. In the Xcode template it is this line:
[EAGLContext setCurrentContext:self.context];
Here's the code used to solve it
- (UIImage *)drawGlToImage {
// Code borrowed and tweaked from:
// http://stackoverflow.com/questions/9881143/missing-part-of-the-image-when-taking-screenshot-while-supporting-retina-display
CGFloat scale = UIScreen.mainScreen.scale;
CGFloat xOffset = 40.0f;
CGFloat yOffset = -16.0f;
CGSize size = CGSizeMake((self.chart.frame.size.width) * scale,
self.chart.frame.size.height * scale);
//Create buffer for pixels
GLuint bufferLength = size.width * size.height * 4;
GLubyte* buffer = (GLubyte*)malloc(bufferLength);
//Read Pixels from OpenGL
glReadPixels(0.0f, 0.0f, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
//Make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
//Configure image
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * size.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
uint32_t* pixels = (uint32_t*)malloc(bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, size.width, size.height, 8, size.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0f, size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
// These numbers are a little magical.
CGContextDrawImage(context, CGRectMake(xOffset, yOffset, ((size.width - (6.0f * scale)) / scale) - (xOffset / 2), (size.height / scale) - (yOffset / 2)), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
//Dealloc (note: the CGImage returned by CGBitmapContextCreateImage must be released as well, or it leaks)
CGImageRelease(outputRef);
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
free(buffer);
free(pixels);
return outputImage;
}

How to crop the image in iPhone

I want to do the same thing as asked in this question.
In my app I want to crop the image the way Facebook does image cropping. Can anyone guide me to a good tutorial or any sample code? The link I have provided completely describes my requirement.
You may create a new image with any properties. Here is my function, which does that; you just need to use your own parameters for the new image. In my case the image is not cropped - I am just applying an effect that moves pixels from their original place to another. But if you initialize the new image with a different height and width, you can copy any range of pixels from the old image to the new one:
-(UIImage *)Color:(UIImage *)img
{
int R;
float m_width = img.size.width;
float m_height = img.size.height;
if (m_width>m_height) R = m_height*0.9;
else R = m_width*0.9;
int m_wint = (int)m_width; //later we will need these parameters as both float and int; you could just cast with (int) and (float) instead of keeping separate variables
int m_hint = (int)m_height;
CGRect imageRect;
//checking image orientation: we will work with the image pixel-by-pixel, so we need the top side at the top
if(img.imageOrientation==UIImageOrientationUp
|| img.imageOrientation==UIImageOrientationDown)
{
imageRect = CGRectMake(0, 0, m_wint, m_hint);
}
else
{
imageRect = CGRectMake(0, 0, m_hint, m_wint);
}
uint32_t *rgbImage = (uint32_t *) malloc(m_wint * m_hint * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rgbImage, m_wint, m_hint, 8, m_wint *sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetShouldAntialias(context, NO);
CGContextTranslateCTM(context, 0, m_hint);
CGContextScaleCTM(context, 1.0, -1.0);
switch (img.imageOrientation) {
case UIImageOrientationRight:
{
CGContextRotateCTM(context, M_PI / 2);
CGContextTranslateCTM(context, 0, -m_wint);
}break;
case UIImageOrientationLeft:
{
CGContextRotateCTM(context, - M_PI / 2);
CGContextTranslateCTM(context, -m_hint, 0);
}break;
case UIImageOrientationUp:
{
CGContextTranslateCTM(context, m_wint, m_hint);
CGContextRotateCTM(context, M_PI);
}
default:
break;
}
CGContextDrawImage(context, imageRect, img.CGImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
//here is the new image; you can change m_wint and m_hint as you want
uint8_t *result = (uint8_t *) calloc(m_wint * m_hint * sizeof(uint32_t), 1);
for(int y = 0; y < m_hint; y++) //new m_hint here
{
float fy=y;
double yy = (m_height*( asinf(m_height/(2*R))-asin(((m_height/2)-fy)/R) )) /
(2*asin(m_height/(2*R))); // (xx, yy) - coordinates of pixel of OLD image
for(int x = 0; x < m_wint; x++) //new m_wint here
{
float fx=x;
double xx = (m_width*( asin(m_width/(2*R))-asin(((m_width/2)-fx)/R) )) /
(2*asin(m_width/(2*R)));
uint32_t rgbPixel=rgbImage[(int)yy * m_wint + (int)xx];
int intRedSource = (rgbPixel>>24)&255;
int intGreenSource = (rgbPixel>>16)&255;
int intBlueSource = (rgbPixel>>8)&255;
result[(y * (int)m_wint + x) * 4] = 0;
result[(y * (int)m_wint + x) * 4 + 1] = intBlueSource;
result[(y * (int)m_wint + x) * 4 + 2] = intGreenSource;
result[(y * (int)m_wint + x) * 4 + 3] = intRedSource;
}
}
free(rgbImage);
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate(result, m_wint, m_hint, 8, m_wint * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast ); //new m_wint and m_hint as well
CGImageRef image1 = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *resultUIImage = [UIImage imageWithCGImage:image1];
CGImageRelease(image1);
@try {
free(result);
}
@catch (NSException * e) {
NSLog(@"proc. Exception: %@", e);
}
return resultUIImage;
}
CGRect rectImage = CGRectMake(p1.x,p1.y, p2.x - p1.x, p4.y - p1.y);
//Create bitmap image from original image data,
//using rectangle to specify desired crop area
CGImageRef imageRef = CGImageCreateWithImageInRect([imageForCropping CGImage], rectImage);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
imageView1 = [[UIImageView alloc] initWithFrame:CGRectMake(p1.x, p1.y, p2.x-p1.x, p4.y-p1.y)];
imageView1.image = croppedImage;
[self.view addSubview:imageView1];
CGImageRelease(imageRef);

Mesh creation in OpenGL - iPhone SDK

I am using this source code as my base and am changing it to suit my requirements. I have included the following code to create a mesh on the image.
-(void)populateMesh{
verticalDivisions = kVerticalDivisions;
horizontalDivisions = kHorisontalDivisions;
unsigned int verticesArrsize = (kVerticalDivisions * ((2 + kHorisontalDivisions * 2) * 3));
unsigned int textureCoordsArraySize = kVerticalDivisions * ((2 + kHorisontalDivisions * 2) * 2);
verticesArr = (GLfloat *)malloc(verticesArrsize * sizeof(GLfloat));
textureCoordsArr = (GLfloat*)malloc(textureCoordsArraySize * sizeof(GLfloat));
if (verticesArr == NULL) {
NSLog(#"verticesArr = NULL!");
}
float height = kWindowHeight/verticalDivisions;
float width = kWindowWidth/horizontalDivisions;
int i,j, count;
count = 0;
for (j=0; j<verticalDivisions; j++) {
for (i=0; i<=horizontalDivisions; i++, count+=6) { //2 vertices each time...
float currX = i * width;
float currY = j * height;
verticesArr[count] = currX;
verticesArr[count+1] = currY + height;
verticesArr[count+2] = 0.0f;
verticesArr[count+3] = currX;
verticesArr[count+4] = currY;
verticesArr[count+5] = 0.0f;
}
}
float xIncrease = 1.0f/horizontalDivisions;
float yIncrease = 1.0f/verticalDivisions;
int x,y;
//int elements;
count = 0;
for (y=0; y<verticalDivisions; y++) {
for (x=0; x<horizontalDivisions+1; x++, count+=4) {
float currX = x *xIncrease;
float currY = y * yIncrease;
textureCoordsArr[count] = (float)currX;
textureCoordsArr[count+1] = (float)currY + yIncrease;
textureCoordsArr[count+2] = (float)currX;
textureCoordsArr[count+3] = (float)currY;
}
}
// int cnt;
// int cnt = 0;
NSLog(#"expected %i vertices, and %i vertices were done",(verticalDivisions * ((2 + horizontalDivisions*2 ) * 2) ) , count );
}
Following is the drawView code.
- (void)drawView:(GLView*)view;
{
static GLfloat rot = 0.0;
glBindTexture(GL_TEXTURE_2D, texture[0]);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordsArr);
glVertexPointer(3, GL_FLOAT, 0, verticesArr);
glPushMatrix();{
int i;
for (i=0; i<verticalDivisions; i++) {
glDrawArrays(GL_TRIANGLE_STRIP, i*(horizontalDivisions*2+2), horizontalDivisions*2+2);
}
}glPopMatrix();
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
In the setup view I have called [self populateMesh]; at the end of the function.
My problem is that after changing the code, a blank - or rather, black - view appears on the screen. Can anyone figure out where I made a mistake? I am a newbie at OpenGL and am trying to manipulate images through a mesh. Please help. Thanks in advance.
Following is the setup view code.
-(void)setupView:(GLView*)view {
const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
GLfloat size;
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect = view.bounds;
glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
NSString *path = [[NSBundle mainBundle] pathForResource:@"texture" ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(#"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ),image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
[self populateMesh];
}
EDIT: This is what I am getting as output, while a regular grid is expected...
My guess is that the mesh is being clipped by the zNear plane. Try changing the z value to -2:
verticesArr[count+2] = -2.0f;
verticesArr[count+5] = -2.0f;
By default, the camera is situated at the origin, points down the negative z-axis, and has an up-vector of (0, 1, 0).
Notice that your mesh is out of view frustum (pyramid). Check it out in OpenGL red book:
http://glprogramming.com/red/chapter03.html
The grid is not regular because of the way the vertices are ordered. Also, are you sure GL_TRIANGLE_STRIP is the option you want? Maybe GL_TRIANGLES is what you need.
I propose a simpler solution using an index array. For example, in the initialization code, build the vertex and texture arrays for your grid in row-major order:
0 1 2
3 4 5
6 7 8
Update:
- (void) setup
{
vertices = (GLfloat*)malloc(rows*columns*3*sizeof(GLfloat));
texCoords = (GLfloat*)malloc(rows*columns*2*sizeof(GLfloat));
indices = (GLubyte*)malloc((rows-1)*(columns-1)*6*sizeof(GLubyte));
float xDelta = horizontalDivisions/columns;
float yDelta = verticalDivisions/rows;
for (int i=0;i<columns;i++) {
for(int j=0;j<rows; j++) {
int index = j*columns+i;
vertices[3*index+0] = i*xDelta; //x
vertices[3*index+1] = j*yDelta; //y
vertices[3*index+2] = -10; //z
texCoords[2*index+0] = i/(float)(columns-1); //x texture coordinate
texCoords[2*index+1] = j/(float)(rows-1); //y tex coordinate
}
}
for (int i=0;i<columns-1;i++) {
for(int j=0;j<rows-1; j++) {
indices[6*(j*columns+i)+0] = j*columns+i;
indices[6*(j*columns+i)+1] = j*columns+i+1;
indices[6*(j*columns+i)+2] = (j+1)*columns+i;
indices[6*(j*columns+i)+3] = j*columns+i+1;
indices[6*(j*columns+i)+4] = (j+1)*columns+i+1;
indices[6*(j*columns+i)+5] = (j+1)*columns+i;
}
}
}
- (void) dealloc {
free(vertices); free(texCoords); free(indices);
}
Practically, this index order means that the triangles are rendered as follows:
(013)(143)(124)(254)(346)(476)... and so on.
In render method use the following lines:
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawElements(GL_TRIANGLES, 6*(columns-1)*(rows-1), GL_UNSIGNED_BYTE, indices);
Hope that will help.
This is a great tutorial on drawing a grid in 3D. It should have the code necessary to help you. I'm not sure if you are working in 3D or 2D, but even if it is 2D, it should be fairly easy to adapt to your needs. Hope that helps!

How to get UIImage from EAGLView?

I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?
Here is a cleaned up version of Quakeboy's code.
I tested it on iPad, and it works just fine.
The improvements include:
works with any size EAGLView
works with retina display (point scale 2)
replaced nested loop with memcpy
cleaned up memory leaks
saves the UIImage in the photoalbum as a bonus.
Use this as a method in your EAGLView:
-(void)snapUIImage
{
int s = 1;
UIScreen* screen = [ UIScreen mainScreen ];
if ( [ screen respondsToSelector:@selector(scale) ] )
s = (int) [ screen scale ];
const int w = self.frame.size.width;
const int h = self.frame.size.height;
const NSInteger myDataLength = w * h * 4 * s * s;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < h*s; y++)
{
memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
}
free(buffer); // work with the flipped buffer, so get rid of the original one.
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * w * s;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
UIImageWriteToSavedPhotosAlbum( myImage, nil, nil, nil );
CGImageRelease( imageRef );
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
free(buffer2);
}
I was unable to get the other answers here to work correctly for me.
After a few days I finally got a working solution. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit is upside down.
Apple-provided method, modified to live inside the view you want to turn into an image:
-(UIImage *) drawableToCGImage
{
GLint backingWidth2, backingHeight2;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width2;
heightInPoints = height2;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
And here's a method to flip the image:
- (UIImage *) flipImageVertically:(UIImage *)originalImage {
UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(tempImageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(
1, 0, 0, -1, 0, tempImageView.frame.size.height
);
CGContextConcatCTM(context, flipVertical);
[tempImageView.layer renderInContext:context];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[tempImageView release];
return flippedImage;
}
And here's a link to the Apple dev page where I found the first method for reference.
http://developer.apple.com/library/ios/#qa/qa1704/_index.html
-(UIImage *) saveImageFromGLView
{
NSInteger myDataLength = 320 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y <480; y++)
{
for(int x = 0; x <320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease( imageRef );
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
free(buffer2);
return myImage;
}
EDIT: as demianturner notes below, you no longer need to render the layer; you can (and should) now use the higher-level [UIView drawViewHierarchyInRect:]. Other than that, this should work the same.
An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means that the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about UIWebView doesn't matter; that's just yet another kind of view.)
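A minimal sketch of that higher-level approach (iOS 7+), assuming eaglView is the view you want to capture:
// snapshot any view, including an EAGLView, without touching GL directly
UIGraphicsBeginImageContextWithOptions(eaglView.bounds.size, NO, 0.0);
[eaglView drawViewHierarchyInRect:eaglView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();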
CGDataProviderCreateWithData takes a release callback for the data; that callback is where you should do the free:
void releaseBufferData(void *info, const void *data, size_t size)
{
free((void*)data);
}
Then proceed as in the other examples, but do NOT free the data here:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
....
CGDataProviderRelease(provider);
Or simply use CGDataProviderCreateWithCFData instead, without the release-callback machinery:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
....
CGDataProviderRelease(provider);
free(bufferData); // Remember to free it
For more information, please check this discussion:
What's the right memory management pattern for buffer->CGImageRef->UIImage?
To use Brad Larson's code above, you have to edit your EAGLView.m:
- (id)initWithCoder:(NSCoder*)coder{
self = [super initWithCoder:coder];
if (self) {
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = TRUE;
eaglLayer.drawableProperties =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
}
return self;
}
The key change is setting kEAGLDrawablePropertyRetainedBacking to YES (the numberWithBool: value); with a non-retained backing, the renderbuffer contents are not guaranteed to persist after presentation, so reading them back can fail.