Draw texture on touch in OpenGL ES on iPhone

I recently started with OpenGL. I have managed to draw a background image to the screen and framebuffer, and all works fine. The problem comes when I create a new texture and add it to the framebuffer:
{
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glViewport(0, 0, backingWidth, backingHeight);
    // Clear screen
    glClear(GL_COLOR_BUFFER_BIT);
    // Render player ship
    [playerShip drawAtPoint:CGPointMake(160, 240)];
    // glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    // [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
If I uncomment the two lines above,
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
I don't get the new texture that I am drawing with the code below:
NSInteger myDataLength = 20 * 20 * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(location.x - 10, location.y - 10, 20, 20, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < 20; y++)
{
    for (int x = 0; x < 20 * 4; x++)
    {
        buffer2[(19 - y) * 20 * 4 + x] = buffer[y * 4 * 20 + x];
    }
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 20;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(20, 20, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
Texture2D *texture = [[Texture2D alloc] initWithImage:myImage];
UIImageWriteToSavedPhotosAlbum(myImage, self, nil, nil);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glLoadIdentity();
[texture drawAtPoint:location];
glPopMatrix();
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glPushMatrix();
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
I don't know where I am making a mistake. I am new to this.

If you are still rendering the background texture and then render the space ship on top of it, the space ship might be failing the depth buffer test.
Have you tried:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
to clear the z-buffer before rendering the spaceship?
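For example, the render pass might look like this (a minimal sketch based on the question's code; background is a placeholder for whatever Texture2D object draws your backdrop):
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
// Clear both the color buffer and the depth buffer each frame
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw back to front: background first, then the ship
[background drawInRect:CGRectMake(0, 0, backingWidth, backingHeight)];
[playerShip drawAtPoint:CGPointMake(160, 240)];
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];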

Related

Blank image returned when trying to get an image from EAGLView using the snapshot method [duplicate]

The following is code I use for reading an image from an OpenGL ES scene:
- (UIImage *)getImage
{
    GLint width;
    GLint height;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
    NSLog(@"%d %d", width, height);
    NSInteger myDataLength = width * height * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width * 4; x++)
        {
            buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer);
    free(buffer2);
    return myImage;
}
This works on iOS 5.x and earlier, but on iOS 6.0 it now returns a black image. Why is glReadPixels() failing on iOS 6.0?
CAEAGLLayer *eaglLayer = (CAEAGLLayer *) self.layer;
eaglLayer.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
};
That is, set kEAGLDrawablePropertyRetainedBacking to YES. (I do not know exactly why this works, but it does.)
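If you would rather not enable retained backing (it can have a performance cost), a common alternative is to call glReadPixels before presenting the renderbuffer, since once -presentRenderbuffer: has run the buffer's contents are no longer guaranteed to be valid. A minimal sketch, where colorRenderbuffer, width, height, and buffer stand in for your own variables:
// ... draw the frame ...
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
// Read BEFORE presenting; afterwards the contents may be discarded
// unless kEAGLDrawablePropertyRetainedBacking is YES.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
[context presentRenderbuffer:GL_RENDERBUFFER];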
Try this method to get your screenshot image. The output image is Mailimage:
- (UIImage *)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    Mailimage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return Mailimage;
}

When can glReadPixels be used?

I want to understand the glReadPixels function.
How does it read the pixels?
Does it read the pixels of a GLKView, of a UIView, or of anything on the main screen that lies within the bounds passed to the function?
Or can it only be used with a GLKView?
Please clarify.
It reads pixels from the current OpenGL (ES) framebuffer. It can't be used to read pixels from a UIView, but it can be used for reading from a GLKView, because it's backed by a framebuffer (however, you can only read its data while it is the active framebuffer, which it most likely is at the time of drawing). If all you want is a screenshot of your GLKView, you can use its built-in snapshot method to get a UIImage with its contents.
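For example (assuming your view controller's view is the GLKView):
GLKView *glkView = (GLKView *)self.view; // assumption: self.view is a GLKView
UIImage *snapshot = [glkView snapshot];  // renders the view's framebuffer contents into a UIImage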
You can use glReadPixels to read the current screen contents. Here is the code to do it:
- (UIImage *)getGLScreenshot {
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // only the flipped copy is needed from here on
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)saveGLScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}
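Note that 320 x 480 is hard-coded for a non-Retina iPhone screen. A sketch of the more general approach (assuming an OpenGL ES 1.1 context with the color renderbuffer bound, as elsewhere on this page) is to query the backing size and use it in place of the constants:
GLint backingWidth, backingHeight;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger myDataLength = backingWidth * backingHeight * 4;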

Drawing into OpenGL ES framebuffer and getting UIImage from it on iPhone

I'm trying to do offscreen rendering of some primitives in OpenGL ES on iOS. The code is as follows:
// context and necessary buffers
@interface RendererGL : NSObject
{
    EAGLContext *myContext;
    GLuint framebuffer;
    GLuint colorRenderbuffer;
    GLuint depthRenderbuffer;
}
@end
.m file:
- (id)init
{
    self = [super init];
    if (self)
    {
        // initializing context
        myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:myContext];
        [self setupOpenGL]; // creating buffers
    }
    return self;
}

- (void)setupOpenGL
{
    int width = 256;
    int height = 256;
    // generating buffers and binding them as Apple's tutorial says
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }
}
- (UIImage *)renderImage
{
    int width = 256;
    int height = 256;
    [EAGLContext setCurrentContext:myContext];
    glEnable(GL_DEPTH_TEST);
    // clear color - cyan
    glClearColor(0.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // trying to draw some primitive - a red line
    float line[] = {-0.5f, -0.5f, 0.5f, 0.5f};
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer); // should I do this?
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, line);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glLineWidth(10);
    glDrawArrays(GL_LINES, 0, 2); // draw line with two points
    [myContext presentRenderbuffer:GL_RENDERBUFFER_OES]; // and this?
    // then I grab the image from _frameBuffer and return it as UIImage - this part is working
    NSInteger x = 0, y = 0;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, width, height), iref);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
The problem is that -renderImage returns a completely cyan image, without the red line. What can it be? Did I miss some steps before drawing the line?
The first problem was that I was creating an EAGLContext for OpenGL ES 2.0, but the functions I used were for OpenGL ES 1.1. The solution is to create the context with the kEAGLRenderingAPIOpenGLES1 constant.
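In -init that means changing the line from the question to:
myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1]; // match the fixed-function (ES 1.1) calls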
The second problem: I did not set the model-view and projection matrices (required when using 1.1):
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
After this, everything works.

How can I dump OpenGL RenderBuffer to PNG or JPG image?

I'm working on a project which uses an OpenGL framebuffer/renderbuffer to draw freehand lines. The app uses OpenGL ES 1.x, and I can't figure out how to dump the current screen to a PNG or JPG image.
Thanks for reading this question.
From an OpenGL newbie.
Code used to create the framebuffer and renderbuffer:
// Generate IDs for a framebuffer object and a color renderbuffer
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
// For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
- (UIImage *)dumpImage
{
    GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)buffer);
    /* make upside down */
    for (int y = 0; y < backingHeight; y++) {
        for (int x = 0; x < backingWidth * 4; x++) {
            buffer2[y * 4 * backingWidth + x] = buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
        }
    }
    free(buffer); // buffer2 is freed later by the provider's release callback
    // make data provider from buffer
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, backingWidth * backingHeight * 4, freeImageData);
    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    // Use this to retain alpha
    //CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // make UIImage from CGImage
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    return newUIImage;
}
freeImageData() frees the pixel data later, once CoreGraphics is done with it:
void freeImageData(void *info, const void *data, size_t size)
{
    //printf("freeImageData called");
    free((void *)data);
}
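To actually get a PNG or JPG file out of the UIImage that -dumpImage returns, the standard UIKit conversion functions can be used (a sketch; the file paths are just examples):
UIImage *image = [self dumpImage];
// PNG
NSData *pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:[NSTemporaryDirectory() stringByAppendingPathComponent:@"screen.png"] atomically:YES];
// JPEG, with 0.8 compression quality
NSData *jpgData = UIImageJPEGRepresentation(image, 0.8);
[jpgData writeToFile:[NSTemporaryDirectory() stringByAppendingPathComponent:@"screen.jpg"] atomically:YES];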

How to get UIImage from EAGLView?

I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?
Here is a cleaned-up version of Quakeboy's code.
I tested it on iPad, and it works just fine.
The improvements include:
works with any size EAGLView
works with retina display (point scale 2)
replaced the nested loop with memcpy
cleaned up memory leaks
saves the UIImage to the photo album as a bonus.
Use this as a method in your EAGLView:
- (void)snapUIImage
{
    int s = 1;
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)])
        s = (int)[screen scale];
    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w * s, h * s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < h * s; y++)
    {
        memcpy(buffer2 + (h * s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s);
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w * s, h * s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp];
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
I was unable to get the other answers here to work correctly for me.
After a few days I finally got a working solution to this. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit is upside down.
Apple-provided method, modified to live inside the view you want to turn into an image:
- (UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;
    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);
    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
And here's a method to flip the image:
- (UIImage *)flipImageVertically:(UIImage *)originalImage {
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(
        1, 0, 0, -1, 0, tempImageView.frame.size.height
    );
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //[tempImageView release];
    return flippedImage;
}
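Putting the two together, taking a snapshot of the view is then:
UIImage *snapshot = [self flipImageVertically:[self drawableToCGImage]];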
And here's a link to the Apple dev page where I found the first method for reference.
http://developer.apple.com/library/ios/#qa/qa1704/_index.html
- (UIImage *)saveImageFromGLView
{
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // only the flipped copy is needed from here on
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
    return myImage;
}
EDIT: As demianturner notes below, you no longer need to render the layer; you can (and should) now use the higher-level [UIView drawViewHierarchyInRect:]. Other than that, this should work the same.
An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about UIWebView doesn't matter; that's just yet another kind of view.)
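With that newer API (available on iOS 7 and later), the standard view-to-image conversion is roughly the following (a sketch; view is whichever view you want to capture):
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();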
CGDataProviderCreateWithData takes a release callback for releasing the data, and that is where the free belongs:
void releaseBufferData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
Then create the provider as in the other examples, but do NOT free the data yourself:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
....
CGDataProviderRelease(provider);
Or simply use CGDataProviderCreateWithCFData instead, with no release-callback bookkeeping:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
....
CGDataProviderRelease(provider);
free(bufferData); // Remember to free it
For more information, please see this discussion:
What's the right memory management pattern for buffer->CGImageRef->UIImage?
To use the above code from Brad Larson, you have to edit your EAGLView.m:
- (id)initWithCoder:(NSCoder *)coder {
    self = [super initWithCoder:coder];
    if (self) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = TRUE;
        eaglLayer.drawableProperties =
            [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
    }
    return self;
}
The key change is setting the kEAGLDrawablePropertyRetainedBacking value to YES.