I'm working on a paint app for iPhone. In my code I use a UIImageView that contains an outline image, over which I place a CAEAGLLayer for filling colors into the outline. I'm now taking a screenshot of the OpenGL ES (CAEAGLLayer) rendered content using this function:
- (UIImage*)snapshot:(UIView*)eaglview{
GLint backingWidth1, backingHeight1;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);
NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;}
I combine this screenshot with the outline image using this function:
- (void)Combine:(UIImage *)Back{
UIImage *Front = backgroundImageView.image;
//UIGraphicsBeginImageContext(Back.size);
UIGraphicsBeginImageContext(CGSizeMake(640,960));
// Draw image1
[Back drawInRect:CGRectMake(0, 0, Back.size.width*2, Back.size.height*2)];
// Draw image2
[Front drawInRect:CGRectMake(0, 0, Front.size.width*2, Front.size.height*2)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);
UIGraphicsEndImageContext();
}
and save the result to the photo album using this function:
-(void)captureToPhotoAlbum {
[self Combine:[self snapshot:self]];
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success" message:@"Image saved to Photo Album" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
[alert release]; }
The above code works, but the quality of the screenshot is poor: along the brush outlines there is a grayish fringe. I have uploaded a screenshot of my app, which is a combination of the OpenGL ES content and the UIImage.
Is there any way to get a Retina-quality screenshot of the OpenGL ES (CAEAGLLayer) content?
Thank you in advance!
I don't believe that resolution is your issue here. If you aren't seeing the grayish outlines on your drawing when it appears on the screen, odds are that you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG image, where artifacts will appear on sharp edges, like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage* im = [UIImage imageWithCGImage:myCGRef]; // make image from CGRef
NSData* imdata = UIImagePNGRepresentation ( im ); // get PNG representation
UIImage* im2 = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
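Applied to your Combine: method, that would mean replacing the direct save with something like:
NSData *pngData = UIImagePNGRepresentation(resultingImage);
UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:pngData], nil, nil, nil);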
While this is probably the easiest way to address your problem here, you could also try employing multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on how fill-rate limited you are, MSAA might lead to a little bit of slowdown in your application.
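If you go the MSAA route, the setup looks roughly like this (a sketch assuming OpenGL ES 1.1 and the GL_APPLE_framebuffer_multisample extension; msaaFramebuffer, msaaRenderbuffer, viewFramebuffer, viewRenderbuffer, and context stand in for your own ivars):
// Create an offscreen multisample framebuffer the same size as the drawable
glGenFramebuffersOES(1, &msaaFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
glGenRenderbuffersOES(1, &msaaRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaRenderbuffer);
// 4 samples per pixel; GL_RGBA8_OES requires the GL_OES_rgb8_rgba8 extension
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, msaaRenderbuffer);
// Draw each frame into msaaFramebuffer, then resolve the samples into the
// renderbuffer that backs the CAEAGLLayer before presenting:
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];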
You're using kCGImageAlphaPremultipliedLast when you create the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be using kCGImageAlphaLast, but I think that'll just make the creation call fail), so you may need to premultiply the data by hand between getting it from OpenGL and making the CG context.
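Premultiplying by hand would look something like the following sketch, operating in place on the data buffer filled by glReadPixels in the snapshot code above (RGBA byte order assumed):
// Multiply each pixel's RGB components by its alpha, in place
for (NSInteger i = 0; i < dataLength; i += 4) {
    GLubyte alpha = data[i + 3];
    data[i]     = (GLubyte)((data[i]     * alpha) / 255);
    data[i + 1] = (GLubyte)((data[i + 1] * alpha) / 255);
    data[i + 2] = (GLubyte)((data[i + 2] * alpha) / 255);
}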
On the other hand, is there a reason your OpenGL context has an alpha channel? Could you just make it opaque white then use kCGImageAlphaNoneSkipLast?
Related
I am working on a paint app, using Apple's GLPaint sample as a reference. In this app there are two canvas views: one view moves from left to right (animating) and the other is used as a background view (as shown in the figure).
I am using CAEAGLLayer for filling colors in both views (by subclassing). This works as expected. Now I have to take a screenshot of the complete view (the outlines plus both OpenGL views), but I get a screenshot of only one view (either the moving view or the background view). The screenshot code is hooked up to both views, but only one view's content is saved at a time.
The screenshot code is as follows:
- (UIImage*)snapshot:(UIView*)eaglview{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
Is there any way to combine the content of both CAEAGLLayer views?
Please help.
Thank you very much.
You could take a screenshot of each view separately and then combine them as follows:
UIGraphicsBeginImageContext(canvasSize);
[openGLImage1 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
[openGLImage2 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Use an appropriate canvasSize and an appropriate frame when drawing each generated UIImage; the above is just a sample of how you could do it.
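Put together with the snapshot: method from the question, the whole flow might look like this (a sketch; backgroundCanvasView and movingCanvasView are hypothetical names for your two CAEAGLLayer-backed views, assumed to share a superview):
UIImage *openGLImage1 = [self snapshot:backgroundCanvasView];
UIImage *openGLImage2 = [self snapshot:movingCanvasView];
CGSize canvasSize = backgroundCanvasView.bounds.size;
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, [UIScreen mainScreen].scale);
// Draw the background first, then the moving view at its current position
[openGLImage1 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
[openGLImage2 drawInRect:movingCanvasView.frame];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();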
See here for a much better way of doing this. It basically allows you to capture a larger view that contains all of your OpenGL (and other) views into one, fully composed screenshot that's identical to what you see on the screen.
This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView containing AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button save the picture to the camera roll. Holding the power button + home key captures the screenshot to the camera roll, meaning that all of my capture logic is working, AND the task is possible. But I cannot seem to be able to make it work programmatically.
I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
//start the session, etc...
//this saves a white screen
- (IBAction)saveOverlay:(id)sender {
NSLog(@"saveOverlay");
UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
UIGraphicsBeginImageContext(scrollView.frame.size);
[previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
// [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenshot, self,
@selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//this renders everything, EXCEPT for the preview layer, which is blank.
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
I've read somewhere that this may be due to security issues of the iPhone. Is this true?
Just to be clear: I don't want to save the image from the camera by itself. I want to save the transparent preview layer superimposed over another image, creating a composite. Yet for some reason I cannot make it work.
I like @Roma's suggestion of using GPUImage - a great idea. However, if you want a pure Cocoa Touch approach, here's what to do:
Implement AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage+Orientation from the sample buffer data
if (_captureFrame)
{
[captureSession stopRunning];
_captureFrame = NO;
UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
image = [image rotate:UIImageOrientationRight];
_frameCaptured = YES;
if (delegate != nil)
{
[delegate cameraPictureTaken:image];
}
}
}
Convert the sample buffer as follows (note that rotate: above is a custom UIImage category method, not part of UIKit):
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
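One assumption worth calling out: this conversion expects the session to deliver BGRA pixel buffers (hence kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst above). A sketch of the matching output configuration, with captureSession as an assumed ivar:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Request BGRA frames so they map directly onto a Core Graphics bitmap
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:videoOutput];
[videoOutput release];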
Blend the UIImage with the overlay
Now that you have the UIImage, add it to a new UIView.
Add the overlay on top as a sub-view.
Capture the new UIView
+ (UIImage*)imageWithView:(UIView*)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
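A sketch of steps 3-5, assuming cameraImage is the UIImage delivered via cameraPictureTaken:, overlayView is your existing overlay, and imageWithView: lives on the same hypothetical ImageTools class as above:
UIView *composite = [[UIView alloc] initWithFrame:overlayView.bounds];
UIImageView *cameraImageView = [[UIImageView alloc] initWithImage:cameraImage];
cameraImageView.frame = composite.bounds;
[composite addSubview:cameraImageView];
// addSubview: reparents overlayView; re-add it to your hierarchy afterwards if needed
[composite addSubview:overlayView];
UIImage *blended = [ImageTools imageWithView:composite];
UIImageWriteToSavedPhotosAlbum(blended, nil, nil, nil);
[cameraImageView release];
[composite release];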
I can advise you to try GPUImage.
https://github.com/BradLarson/GPUImage
It uses OpenGL, so it's rather fast. It can process pictures from the camera and add filters to them (there are a lot of them), including edge detection, motion detection, and far more.
It's like OpenCV, but in my experience GPUImage is easier to integrate into a project, and the language is Objective-C.
One problem could appear if you decide to use Box2D for physics: it uses OpenGL too, and you may need to spend some time before the two frameworks stop fighting.
I am just trying to draw background_pic.png across the entire view using OpenGL. Please help! I am a noob at OpenGL; I'm trying to make a Photoshop-ish app and am just learning OpenGL to accomplish drawing on images.
My code so far:
[EAGLContext setCurrentContext:context];
CGContextRef imageContext;
GLubyte *imageData;
size_t width, height;
CGImageRef backgroundImage;
backgroundImage = [UIImage imageNamed:@"background_pic.png"].CGImage;
// Get the width and height of the image
width = CGImageGetWidth(backgroundImage);
height = CGImageGetHeight(backgroundImage);
// Make sure the image exists
if(backgroundImage) {
// Allocate memory needed for the bitmap context
imageData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
imageContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4, CGImageGetColorSpace(backgroundImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), backgroundImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(imageContext);
//I'm thinking the drawing goes in here
free(imageData);
}
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
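For what it's worth, the step marked "I'm thinking the drawing goes in here" (i.e., before the free(imageData) call) would typically upload imageData as a GL texture and draw a textured quad, along these lines (a sketch for OpenGL ES 1.1; backgroundTexture is a hypothetical GLuint ivar, and non-power-of-two sizes need the GL_APPLE_texture_2D_limited_npot extension):
glGenTextures(1, &backgroundTexture);
glBindTexture(GL_TEXTURE_2D, backgroundTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
// Draw a full-screen quad mapped with the texture; the texture coordinates
// are flipped vertically because the bitmap's first row is the image's top
static const GLfloat vertices[] = { -1,-1,  1,-1,  -1,1,  1,1 };
static const GLfloat texCoords[] = { 0,1,  1,1,  0,0,  1,0 };
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);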
I've been developing a 3D program for the iPad and iPhone and would like to be able to render it to an external screen. According to my understanding, you have to do something similar to the code below to implement it (found at Sunsetlakesoftware.com):
if ([[UIScreen screens] count] > 1)
{
// External screen attached
}
else
{
// Only local screen present
}
CGRect externalBounds = [externalScreen bounds];
externalWindow = [[UIWindow alloc] initWithFrame:externalBounds];
UIView *backgroundView = [[UIView alloc] initWithFrame:externalBounds];
backgroundView.backgroundColor = [UIColor whiteColor];
[externalWindow addSubview:backgroundView];
[backgroundView release];
externalWindow.screen = externalScreen;
[externalWindow makeKeyAndVisible];
However, I'm not sure what to change to do this in an OpenGL project. Does anyone know what you would do to implement this in the default OpenGL project for iPad or iPhone in Xcode?
All you need to do to render OpenGL ES content on the external display is to either create a UIView that is backed by a CAEAGLLayer and add it as a subview of the backgroundView above, or take such a view and move it to be a subview of backgroundView.
In fact, you can remove the backgroundView if you want and just place your OpenGL-hosting view directly on the externalWindow UIWindow instance. That window is attached to the UIScreen instance that represents the external display, so anything placed on it will show on that display. This includes OpenGL ES content.
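Concretely, moving an existing OpenGL-hosting view over could look like this (a sketch; glView stands for your CAEAGLLayer-backed view, externalWindow is retained in an ivar, and an external screen is assumed attached per the count check above):
UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];
externalWindow = [[UIWindow alloc] initWithFrame:[externalScreen bounds]];
externalWindow.screen = externalScreen;
glView.frame = externalWindow.bounds;
[externalWindow addSubview:glView]; // addSubview: removes glView from its previous superview
[externalWindow makeKeyAndVisible];
// If the view's size changed, recreate the renderbuffer storage to match:
// [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer *)glView.layer];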
There does appear to be an issue with particular types of OpenGL ES content, as you can see in the experimental support I've tried to add to my Molecules application. If you look in the source code there, I attempt to migrate my rendering view to an external display, but it never appears. I have done the same with other OpenGL ES applications and had their content render fine, so I believe there might be an issue with the depth buffer on the external display. I'm still working to track that down.
I've figured out how to get ANY OpenGL ES content to render onto an external display. It's actually really straightforward: you just copy your renderbuffer to a UIImage, then display that UIImage on the external screen's view. The code to take a snapshot of your renderbuffer is below:
- (UIImage*)snapshot:(UIView*)eaglview
{
// Get the size of the backing CAEAGLLayer
GLint backingWidth, backingHeight;
glBindRenderbufferOES(GL_RENDERBUFFER_OES, defaultFramebuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
For some reason, though, I've never been able to get glGetRenderbufferParameterivOES to return the proper backingWidth and backingHeight, so I've had to use my own function to calculate those. Just pop this into your rendering implementation and place the result onto the external screen using a timer. If anyone can make any improvements to this method, please let me know.
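The timer part can be as simple as the following sketch, where externalImageView is a hypothetical UIImageView sitting in the external UIWindow and glView is the OpenGL-hosting view:
// Somewhere in your setup, schedule periodic snapshots (~20 fps here;
// glReadPixels readback is expensive, so tune the interval to taste):
[NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(updateExternalDisplay) userInfo:nil repeats:YES];

- (void)updateExternalDisplay {
    externalImageView.image = [self snapshot:glView];
}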
I have been struggling with this issue for quite some time now and couldn't find an answer so far. Basically, what I want to do is capture the content of my EAGLview and then merge it with other images. Anyway, the main problem is that everything transparent in my EAGLview renders opaque when I save it to the photo album or put it into a UIImageView. Let me share some code I found somewhere else:
- (CGImageRef) glToUIImage {
unsigned char buffer[320*480*4];
glReadPixels(0,0,320,480,GL_RGBA,GL_UNSIGNED_BYTE,buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, 320*480*4, NULL);
CGImageRef iref = CGImageCreate(320,480,8,32,320*4,CGColorSpaceCreateDeviceRGB(),kCGBitmapByteOrderDefault,ref,NULL,true,kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width*height*4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width*4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
return outputRef;
}
As I already mentioned, this grabs the content of my EAGLview perfectly, but I cannot get the image with its alpha values.
Any help appreciated. Thanks!
Two places I can see where you might be losing your transparency:
1. When you're drawing your scene: does your scene have a transparent background? Make sure you're doing a glClear to something like (0,0,0,0) rather than (0,0,0,1).
2. When you're drawing the image to flip it over: what is the default background color there? It's likely a non-transparent black, and you'll end up with that wherever the transparent parts of your scene used to be.
You can check whether #2 is your problem by saving the image before you flip it over; if it is, you can avoid the flipping step entirely by reversing the rows of your pixel buffer in memory rather than using Core Graphics calls.
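Flipping the buffer in memory, per that last suggestion, could look like this (a sketch that swaps the rows of the 320x480 RGBA buffer in place, right after the glReadPixels call, so the Core Graphics redraw isn't needed):
const int rowBytes = 320 * 4;
unsigned char tmpRow[320 * 4];
for (int row = 0; row < 480 / 2; row++) {
    unsigned char *top = buffer + row * rowBytes;
    unsigned char *bottom = buffer + (480 - 1 - row) * rowBytes;
    // Swap the top and bottom rows
    memcpy(tmpRow, top, rowBytes);
    memcpy(top, bottom, rowBytes);
    memcpy(bottom, tmpRow, rowBytes);
}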