Getting an OpenGL image - runs in simulator, crashes on iPad - iPhone

The purpose of this function is to return a UIImage from an OpenGL image. The reason it's being converted to a CGImage is so OpenGL and UIKit elements can be rendered on top of each other, which is taken care of in another function.
The strange thing is, when the app is run in the simulator, everything works fine. However, after testing the app on multiple different iPads, when the drawGlToImage method is called on self, the app crashes with an EXC_BAD_ACCESS (code=1) error. Does anyone know what I'm doing here that would cause this? I've read that UIGraphicsBeginImageContext() used to have thread safety issues, but it seems like that was fixed in iOS 4.
- (UIImage *)drawGlToImage
{
    self.context = [EAGLContext currentContext];
    [EAGLContext setCurrentContext:self.context];
    UIGraphicsBeginImageContext(self.view.frame.size);
    unsigned char buffer[1024 * 768 * 4];
    NSInteger dataSize = 1024 * 768 * 4;
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(currentContext);
    glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, &buffer);
    //flip the image
    GLubyte *flippedBuffer = (GLubyte *) malloc(dataSize);
    for(int y = 0; y < 768; y++)
    {
        for(int x = 0; x < 1024 * 4; x++)
        {
            if(buffer[y * 4 * 1024 + x] == 0)
                flippedBuffer[(767 - y) * 1024 * 4 + x] = 1;
            else
                flippedBuffer[(767 - y) * 1024 * 4 + x] = buffer[y * 4 * 1024 + x];
        }
    }
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, flippedBuffer, 1024 * 768 * 4, NULL);
    CGImageRef iref = CGImageCreate(1024, 768, 8, 32, 1024 * 4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast, ref, NULL, true, kCGRenderingIntentDefault);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextTranslateCTM(currentContext, 0, -self.view.frame.size.height);
    UIGraphicsPopContext();
    UIImage *image = [[UIImage alloc] initWithCGImage:iref];
    UIGraphicsEndImageContext();
    return image;
    free(flippedBuffer);
    UIGraphicsPopContext();
}
When a button is pressed, a method is called that makes this assignment, which causes the app to crash:
UIImage *glImage = [self drawGlToImage];
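For context, a purely hypothetical sketch of the kind of compositing described above (the question's actual compositing function is not shown; overlayImage and overlayFrame are placeholder names):
UIGraphicsBeginImageContext(self.view.frame.size);
[glImage drawInRect:self.view.bounds];           // OpenGL contents first
[overlayImage drawInRect:overlayFrame];          // hypothetical UIKit-derived overlay on top
UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();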

I am not sure at which point you are calling this method, but before calling any OpenGL functions you need to set the right OpenGL context. In the Xcode template it is this line:
[EAGLContext setCurrentContext:self.context];
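For example, a minimal sketch of checking and setting the context before reading pixels (this assumes self.context holds the EAGLContext your view was created with, rather than whatever happens to be current):
if ([EAGLContext currentContext] != self.context) {
    [EAGLContext setCurrentContext:self.context];
}
// glReadPixels now reads from the framebuffer of the intended context.
Note that the code in the question does the reverse: it assigns [EAGLContext currentContext] to self.context first, which makes the setCurrentContext call a no-op.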

Here's the code that was used to solve it:
- (UIImage *)drawGlToImage {
    // Code borrowed and tweaked from:
    // http://stackoverflow.com/questions/9881143/missing-part-of-the-image-when-taking-screenshot-while-supporting-retina-display
    CGFloat scale = UIScreen.mainScreen.scale;
    CGFloat xOffset = 40.0f;
    CGFloat yOffset = -16.0f;
    CGSize size = CGSizeMake(self.chart.frame.size.width * scale,
                             self.chart.frame.size.height * scale);
    //Create buffer for pixels
    GLuint bufferLength = size.width * size.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);
    //Read pixels from OpenGL
    glReadPixels(0, 0, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    //Make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    //Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, size.width, size.height, 8, size.width * 4,
                                                 CGImageGetColorSpace(iref),
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0.0f, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    // These numbers are a little magical.
    CGContextDrawImage(context, CGRectMake(xOffset, yOffset,
                                           ((size.width - (6.0f * scale)) / scale) - (xOffset / 2),
                                           (size.height / scale) - (yOffset / 2)), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
    //Dealloc, including the intermediate CGImage and the color space
    CGImageRelease(outputRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);
    return outputImage;
}

Related

Capture screenshot with OpenGL [duplicate]

Possible Duplicate:
I want capture screen when any game is running with opengl
I am developing an action game with OpenGL.
When the user makes a great shot, I want to capture it and upload it to a social network.
I have tried a lot of code without success. Some of the code I tried:
OpenGL ES View Snapshot
NSInteger myDataLength = 480 * 320 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 480, 320, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < 320; y++)
{
    for(int x = 0; x < 480 * 4; x++)
    {
        buffer2[(319 - y) * 480 * 4 + x] = buffer[y * 4 * 480 + x];
    }
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 480;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(480, 320, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
UIImage *image = [UIImage imageWithCGImage:imageRef];
And the other code is:
CGRect screenRect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0.0);
[glview.parent.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But all of this code generates an image only in the simulator, not on the device (black or white images only).
This question is already answered here. Here is what I believe you are looking for:
How to get UIImage from EAGLView?

When can glReadPixels be used?

I want to know the use of the glReadPixels function.
How does it read the pixels?
Does it read GLKView pixels, UIView pixels, or anything on the main screen within the bounds provided to the glReadPixels call?
Or can it only be used with a GLKView?
Please clarify my doubt.
It reads pixels from the current OpenGL (ES) framebuffer. It can't be used to read pixels from a UIView, but it can be used for reading from a GLKView because it's backed by a framebuffer (however, you can only read its data while it is the active framebuffer, which it most likely is at the time of drawing). However, if all you want is a screenshot of your GLKView, you can use its built-in snapshot method to get a UIImage with its content.
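For reference, a minimal sketch of that snapshot call (assuming your controller's view is a GLKView and its contents are valid at the time you ask):
#import <GLKit/GLKit.h>

GLKView *glkView = (GLKView *)self.view;
UIImage *screenshot = glkView.snapshot; // renders the view's framebuffer into a UIImage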
You can use glReadPixels to read the rendered screen contents. Here is the code to do it:
- (UIImage*) getGLScreenshot {
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < 480; y++)
    {
        for(int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    // note: buffer2 leaks here unless you pass a release callback to the provider
    // (see the memory management discussion further down).
    return myImage;
}

- (void)saveGLScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}

Why is this GLView Screenshot code returning a blank/black UIImage?

I am using the following code to take a screenshot of the pixels in a GLView. The problem is, it returns a completely black UIImage. This code is being called in LineDrawer.m, which is the heart of the GLView code, so it is being called from the right .m file. How can I save the actual screenshot and not a black image?
- (UIImage*) getGLScreenshot {
    NSLog(@"1");
    float scale = 0.0;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    {
        // scale value should be 1.0 on 3G and 3GS, and 2.0 on iPhone 4.
        scale = [[UIScreen mainScreen] scale];
    }
    // these are swapped since the screen is rotatey
    float h = 768 * scale;
    float w = 924 * scale;
    NSInteger myDataLength = w * h * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < h; y++)
    {
        for(int x = 0; x < w * 4; x++)
        {
            buffer2[(((int)h - 1) - y) * (int)w * 4 + x] = buffer[y * 4 * (int)w + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w, h, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)saveGLScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}
I had to do something similar in the Sparrow Framework a while back, you should be able to pull the parts you need out of the code in this forum reply:
http://forum.sparrow-framework.org/topic/spdisplayobjectscreenshot
EDIT: Also this post http://forum.sparrow-framework.org/topic/taking-screenshots
Change your drawable properties:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithBool:YES],
                                kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGB565,
                                kEAGLDrawablePropertyColorFormat, nil];
The key change is setting kEAGLDrawablePropertyRetainedBacking to YES.
Try this. I went through a lot of things and finally found a solution.
// Release callback for the data provider; frees buffer2 once CoreGraphics is done with it.
static void myProviderReleaseData (void *info, const void *data, size_t size)
{
    free((void *)data);
}

- (UIImage *)renderImg {
    GLint backingWidth = 0;
    GLint backingHeight = 0;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
                 (GLvoid *)buffer);
    for (int y = 0; y < backingHeight; y++) {
        for (int x = 0; x < backingWidth * 4; x++) {
            buffer2[y * 4 * backingWidth + x] =
                buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
        }
    }
    free(buffer);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2,
                                                              backingWidth * backingHeight * 4,
                                                              myProviderReleaseData);
    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // Use this to retain alpha
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                        bitsPerComponent, bitsPerPixel,
                                        bytesPerRow, colorSpaceRef,
                                        bitmapInfo, provider,
                                        NULL, NO,
                                        renderingIntent);
    // this contains our final image.
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return newUIImage;
}
I think it should work perfectly.
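A hypothetical usage sketch (assuming the method lives on the object that owns the GL framebuffer and runs right after a frame is rendered):
UIImage *screenshot = [self renderImg];
UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);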

How to capture an OpenGL view as a UIImage?

When I try to use this method to convert an OpenGL view to a UIImage, only the view's background is returned, but not the GLView content. How can I convert OpenGL's context into a UIImage?
It depends on what OpenGL view you are using. Since iOS 5 you can make use of GLKit and the corresponding GLKView, which greatly simplifies the process of rendering a UIImage.
GLKView* v = (GLKView*) _previewViewController.view;
UIImage* thumbnail = [v snapshot];
http://developer.apple.com/library/ios/#documentation/GLkit/Reference/GLKView_ClassReference/Reference/Reference.html
Use the code below to convert your OpenGL view to a UIImage.
GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
             (GLvoid *)buffer);
for (int y = 0; y < backingHeight; y++) {
    for (int x = 0; x < backingWidth * 4; x++) {
        buffer2[y * 4 * backingWidth + x] =
            buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
    }
}
// freeImageData is a provider release callback that frees buffer2;
// see the release function defined in the answer below.
CGDataProviderRef provider;
provider = CGDataProviderCreateWithData(NULL, buffer2,
                                        backingWidth * backingHeight * 4,
                                        freeImageData);
// set up for CGImage creation
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * backingWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
// Use this to retain alpha
//CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                    bitsPerComponent, bitsPerPixel,
                                    bytesPerRow, colorSpaceRef,
                                    bitmapInfo, provider,
                                    NULL, NO,
                                    renderingIntent);
// this contains our final image.
UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
Taken from
Thanks for the code. I found it useful, but I had to add some extra code in order to properly release the memory allocated for buffer, buffer2, imageRef, colorSpaceRef, and the provider pointer. Note that buffer2 is released by the provider's release function:
static void myProviderReleaseData (void *info, const void *data, size_t size)
{
    free((void *)data);
}

- (UIImage *)renderToImage
{
    // The image size should be grabbed from your ESRenderer class.
    // That parameter is set in the renderer function:
    // - (BOOL) resizeFromLayer:(CAEAGLLayer *)layer {
    //     glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    GLint backingWidth = renderer.backingWidth;
    GLint backingHeight = renderer.backingHeight;
    GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
                 (GLvoid *)buffer);
    for (int y = 0; y < backingHeight; y++) {
        for (int x = 0; x < backingWidth * 4; x++) {
            buffer2[y * 4 * backingWidth + x] =
                buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
        }
    }
    free(buffer);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2,
                                                              backingWidth * backingHeight * 4,
                                                              myProviderReleaseData);
    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // Use this to retain alpha
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                        bitsPerComponent, bitsPerPixel,
                                        bytesPerRow, colorSpaceRef,
                                        bitmapInfo, provider,
                                        NULL, NO,
                                        renderingIntent);
    // this contains our final image.
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return newUIImage;
}

How to get UIImage from EAGLView?

I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?
Here is a cleaned up version of Quakeboy's code.
I tested it on iPad, and it works just fine.
The improvements include:
works with any size EAGLView
works with retina display (point scale 2)
replaced nested loop with memcpy
cleaned up memory leaks
saves the UIImage in the photoalbum as a bonus.
Use this as a method in your EAGLView:
- (void)snapUIImage
{
    int s = 1;
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)])
        s = (int)[screen scale];
    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < h*s; y++)
    {
        memcpy(buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s);
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp];
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
I was unable to get the other answers here to work correctly for me.
After a few days I finally got a working solution. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit is upside down.
Apple-provided method, modified to live inside the view you want to turn into an image:
- (UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;
    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
And here's a method to flip the image:
- (UIImage *)flipImageVertically:(UIImage *)originalImage {
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(
        1, 0, 0, -1, 0, tempImageView.frame.size.height
    );
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //[tempImageView release];
    return flippedImage;
}
And here's a link to the Apple dev page where I found the first method for reference.
http://developer.apple.com/library/ios/#qa/qa1704/_index.html
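Putting the two methods together, a hypothetical usage sketch (glView stands for your EAGLView instance; both method names are as defined above):
UIImage *raw = [glView drawableToCGImage];
UIImage *upright = [self flipImageVertically:raw];
UIImageWriteToSavedPhotosAlbum(upright, nil, nil, nil);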
- (UIImage *)saveImageFromGLView
{
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < 480; y++)
    {
        for(int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
    return myImage;
}
EDIT: as demianturner notes below, you no longer need to render the layer; you can (and should) now use the higher-level [UIView drawViewHierarchyInRect:]. Other than that, this should work the same.
An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about a UIWebView doesn't matter; that's just yet another kind of view.)
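For reference, a minimal sketch of that standard approach (glView stands for the EAGLView you want to capture; drawViewHierarchyInRect:afterScreenUpdates: requires iOS 7 or later):
UIGraphicsBeginImageContextWithOptions(glView.bounds.size, NO, 0.0);
[glView drawViewHierarchyInRect:glView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();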
CGDataProviderCreateWithData takes a release callback, which is where you should free the data:
void releaseBufferData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
Then create the provider as in the other examples, but do NOT free the data yourself:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
....
CGDataProviderRelease(provider);
Or simply use CGDataProviderCreateWithCFData, with no release-callback machinery, instead:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
....
CGDataProviderRelease(provider);
free(bufferData); // Remember to free it
For more information, please check this discussion:
What's the right memory management pattern for buffer->CGImageRef->UIImage?
To make the above code from Brad Larson work, you have to edit your EAGLView.m:
- (id)initWithCoder:(NSCoder*)coder {
    self = [super initWithCoder:coder];
    if (self) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = TRUE;
        eaglLayer.drawableProperties =
            [NSDictionary dictionaryWithObjectsAndKeys:
             [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
             kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
    }
    return self;
}
The key is setting the numberWithBool value for kEAGLDrawablePropertyRetainedBacking to YES.