I want to capture the screen when a game is running with OpenGL
I am developing an action game with OpenGL. When the user makes a great shot, I want to capture the screen and upload it to a social network. I have tried a lot of code without success. Some of the code I tried:
OpenGL ES View Snapshot
NSInteger myDataLength = 480 * 320 * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 480, 320, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < 320; y++)
{
    for (int x = 0; x < 480 * 4; x++)
    {
        buffer2[(319 - y) * 480 * 4 + x] = buffer[y * 4 * 480 + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 480;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(480, 320, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
UIImage *image = [UIImage imageWithCGImage:imageRef];
And the other code is:
CGRect screenRect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0.0);
[glview.parent.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But all of this code produces an image only in the Simulator, not on the device (there I get only a black or white image).
This question is already answered here. Here is what I believe you are looking for:
How to get UIImage from EAGLView?
Related
I want to know the use of the glReadPixels function.
How does it read the pixels?
Does it read GLKView pixels, UIView pixels, or anything on the main screen within the bounds passed to glReadPixels?
Or can it only be used with a GLKView?
Please clarify my doubt.
It reads pixels from the current OpenGL (ES) framebuffer. It can't be used to read pixels from a UIView, but it can be used to read from a GLKView because it is backed by a framebuffer (however, you can only read its data while it is the active framebuffer, which it most likely is at the time of drawing). If all you want is a screenshot of your GLKView, you can use its built-in snapshot method to get a UIImage of its contents.
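For example, a minimal sketch, assuming self.glkView is your GLKView (the property name is illustrative):

// GLKView's built-in snapshot property returns the view's current
// contents as a UIImage, handling the vertical flip and screen scale.
UIImage *screenshot = self.glkView.snapshot;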
You can use glReadPixels to read the back buffer. Here is the code to do it:
- (UIImage *)getGLScreenshot {
    NSInteger myDataLength = 320 * 480 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)saveGLScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}
I am using the following code to take a screenshot of the pixels in a GLView. The problem is, it returns a completely black UIImage. The code is called from LineDrawer.m, which is the heart of the GLView code, so it is being called from the right .m file. How can I save the actual screenshot and not a black image?
- (UIImage *)getGLScreenshot {
    NSLog(@"1");

    float scale = 0.0;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    {
        // scale value should be 1.0 on 3G and 3GS, and 2.0 on iPhone 4.
        scale = [[UIScreen mainScreen] scale];
    }

    // these are swapped since the screen is rotated
    float h = 768 * scale;
    float w = 924 * scale;

    NSInteger myDataLength = w * h * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w * 4; x++)
        {
            buffer2[(((int)h - 1) - y) * (int)w * 4 + x] = buffer[y * 4 * (int)w + x];
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w, h, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)saveGLScreenshotToPhotosAlbum {
    UIImageWriteToSavedPhotosAlbum([self getGLScreenshot], nil, nil, nil);
}
I had to do something similar in the Sparrow Framework a while back; you should be able to pull the parts you need out of the code in this forum reply:
http://forum.sparrow-framework.org/topic/spdisplayobjectscreenshot
EDIT: Also this post http://forum.sparrow-framework.org/topic/taking-screenshots
Change your drawable properties so that kEAGLDrawablePropertyRetainedBacking is YES:

eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGB565, kEAGLDrawablePropertyColorFormat,
    nil];

With retained backing, the renderbuffer keeps its contents after presentation, so glReadPixels can still read them outside of the draw cycle.
Try this. I went through a lot of things and finally found a solution.
- (UIImage *)renderImg {
    GLint backingWidth = 0;
    GLint backingHeight = 0;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    GLubyte *buffer = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    GLubyte *buffer2 = (GLubyte *) malloc(backingWidth * backingHeight * 4);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE,
                 (GLvoid *)buffer);

    // flip the image vertically while copying into the second buffer
    for (int y = 0; y < backingHeight; y++) {
        for (int x = 0; x < backingWidth * 4; x++) {
            buffer2[y * 4 * backingWidth + x] =
                buffer[(backingHeight - y - 1) * backingWidth * 4 + x];
        }
    }
    free(buffer);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2,
                                                              backingWidth * backingHeight * 4,
                                                              myProviderReleaseData);

    // set up for CGImage creation
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // Use this to retain alpha
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight,
                                        bitsPerComponent, bitsPerPixel,
                                        bytesPerRow, colorSpaceRef,
                                        bitmapInfo, provider,
                                        NULL, NO,
                                        renderingIntent);

    // this contains our final image.
    UIImage *newUIImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    return newUIImage;
}
I think it should work perfectly.
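Note that myProviderReleaseData is not defined in the answer above. A minimal implementation, assuming the buffer was allocated with malloc as shown, would match the release callbacks used elsewhere on this page:

// Called by Quartz when the data provider is released;
// frees the pixel buffer passed to CGDataProviderCreateWithData.
void myProviderReleaseData(void *info, const void *data, size_t size)
{
    free((void *)data);
}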
I am creating an iPhone game using OpenGL that works really well. I know that when the game goes to the background after the Home button is pressed, iOS takes a screenshot of the screen to be shown when the app returns to the foreground.
The problem is that when you launch the game again (returning it from the background), a white image is shown instead of the last screen of the game. The game itself works fine at the moment it goes to the background.
I have tested this problem only with the Simulator, not with a real iPhone (Simulator v5.0, iOS v5.0).
Has anybody had this problem and found a solution for it? Am I missing something?
Update: I have found that some Cocos2D users have the same problem, but without a solution. I don't use Cocos2D.
Update: I have checked that the snapshot taken by iOS is a 640x960 white JPG file. So maybe the problem is some kind of disconnect between OpenGL ES and the view of the game.
If your application renders with OpenGL, then you can use glReadPixels to read the screen image.
The getGLScreenshot and saveGLScreenshotToPhotosAlbum methods shown earlier will do the job here as well.
I have an EAGLView (taken from Apple's examples) which I can successfully convert to a UIImage using this code:
- (UIImage *)glToUIImage:(CGSize)size {
    NSInteger backingWidth = size.width;
    NSInteger backingHeight = size.height;
    NSInteger myDataLength = backingWidth * backingHeight * 4;

    // allocate array and read pixels into it.
    GLuint *buffer = (GLuint *) malloc(myDataLength);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom in place.
    for (int y = 0; y < backingHeight / 2; y++) {
        for (int x = 0; x < backingWidth; x++) {
            // swap top and bottom rows, one 32-bit pixel at a time
            GLuint top = buffer[y * backingWidth + x];
            GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + x];
            buffer[(backingHeight - 1 - y) * backingWidth + x] = top;
            buffer[y * backingWidth + x] = bottom;
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData);

    // prep the ingredients
    const int bitsPerComponent = 8;
    const int bitsPerPixel = 4 * bitsPerComponent;
    const int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    // then make the UIImage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return myImage;
}

void releaseScreenshotData(void *info, const void *data, size_t size) {
    free((void *)data);
}
And here is the code where I use this method to convert to a UIImage:
EAGLView *newView = [[EAGLView alloc] initWithImage:photo.originalImage];
[newView reshapeFramebuffer];
[newView drawView:theSlider.tag value:theSlider.value];
//the two lines above are how EAGLViews in Apple's examples are modified
photoItem *newPhoto = [[photoItem alloc] initWithImage:[self glToUIImage:photo.originalImage.size]];
The problem I am having is that sometimes the converted UIImage doesn't have the same colors as the EAGLView. This occurs if I apply high saturation to the EAGLView, or high brightness, or low contrast, and in some other cases. For example, if I apply high saturation to the EAGLView and then convert it to a UIImage, some parts of the image will be brighter than they are supposed to be.
So I discovered that the problem was an EAGLView timing issue in disguise, similar to my previous question here (EAGLView to UIImage timing question).
For anyone who still cares, see my comment below:
Tommy, I finally hacked my way to a solution. It was another EAGLView timing issue (which only showed up during saturation, actually), and I was able to fix it with your performSelector:afterDelay:0.0 approach. Thx
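In other words, the fix is to defer the capture by one run-loop pass so the pending GL rendering finishes first. A minimal sketch, where takeSnapshot is a hypothetical wrapper that calls glToUIImage: once rendering is complete:

// A 0.0 delay schedules takeSnapshot on the next run-loop pass,
// after the current GL draw has been committed.
// takeSnapshot is a hypothetical method wrapping glToUIImage:.
[self performSelector:@selector(takeSnapshot) withObject:nil afterDelay:0.0];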
Also, I would recommend anyone coding for iOS 5.0 to look into Core Image and GLKView. They make adjusting image properties (as I am doing here) and EAGLView-to-UIImage conversion much simpler.
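For instance, a rough sketch of both, assuming a GLKView named glkView and an input UIImage named source (the names are illustrative):

// Adjust saturation with Core Image instead of custom GL code.
CIImage *input = [CIImage imageWithCGImage:source.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:1.5f] forKey:kCIInputSaturationKey];
CIImage *saturated = [filter outputImage];

// Capture the GLKView's current contents as a UIImage in one call.
UIImage *screenshot = glkView.snapshot;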
I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?
Here is a cleaned-up version of Quakeboy's code.
I tested it on an iPad, and it works just fine.
The improvements include:
works with any size EAGLView
works with retina display (point scale 2)
replaced the nested loop with memcpy
cleaned up memory leaks
saves the UIImage to the photo album as a bonus
Use this as a method in your EAGLView:
- (void)snapUIImage
{
    int s = 1;
    UIScreen *screen = [UIScreen mainScreen];
    if ([screen respondsToSelector:@selector(scale)])
        s = (int) [screen scale];

    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < h*s; y++)
    {
        memcpy(buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s);
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp];
    UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
I was unable to get the other answers here to work correctly for me.
After a few days I finally got a working solution. There is code provided by Apple which produces a UIImage from an EAGLView. You then simply need to flip the image vertically, since UIKit is upside down relative to GL.
Apple-provided method, modified to be inside the view you want to make into an image:
- (UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *) malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
And here's a method to flip the image:
- (UIImage *)flipImageVertically:(UIImage *)originalImage {
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(
        1, 0, 0, -1, 0, tempImageView.frame.size.height
    );
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //[tempImageView release]; // uncomment if not using ARC
    return flippedImage;
}
And here's a link to the Apple dev page where I found the first method for reference.
http://developer.apple.com/library/ios/#qa/qa1704/_index.html
- (UIImage *)saveImageFromGLView
{
    NSInteger myDataLength = 320 * 480 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer);

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2); // see the next answer: freeing here is unsafe while myImage still references this data

    return myImage;
}
EDIT: as demianturner notes below, you no longer need to render the layer; you can (and should) now use the higher-level [UIView drawViewHierarchyInRect:]. Other than that, this should work the same.
An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about a UIWebView doesn't matter; that's just yet another kind of view.)
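A minimal sketch of that higher-level approach (iOS 7+), assuming glView is the view you want to capture:

UIGraphicsBeginImageContextWithOptions(glView.bounds.size, NO, 0.0);
// Renders the view hierarchy, including the GL view's presented
// contents, into the current bitmap context.
[glView drawViewHierarchyInRect:glView.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();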
CGDataProviderCreateWithData comes with a release callback to release the data; that is where you should do the free:
void releaseBufferData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
Then create the provider as in the other examples, but do NOT free the data yourself:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
....
CGDataProviderRelease(provider);
Or simply use CGDataProviderCreateWithCFData instead, without any release-callback machinery:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize]; // dataWithBytes: copies the buffer
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
....
CGDataProviderRelease(provider);
free(bufferData); // Remember to free it
For more information, please check this discussion:
What's the right memory management pattern for buffer->CGImageRef->UIImage?
To use the code above from Brad Larson, you have to edit your EAGLView.m:
- (id)initWithCoder:(NSCoder *)coder {
    self = [super initWithCoder:coder];
    if (self) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = TRUE;
        eaglLayer.drawableProperties =
            [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
    }
    return self;
}
The key change is passing [NSNumber numberWithBool:YES] for kEAGLDrawablePropertyRetainedBacking.