Take a screenshot programmatically of a UIView + GL view combination - iPhone

I have a GL view inside my UIView, and now I need to take a screenshot of the combined UIView and GL view.
I googled a lot but didn't find anything useful. I already know how to take a screenshot of the GL view alone:
int width = glView.frame.size.width;
int height = glView.frame.size.height;

NSInteger myDataLength = width * height * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < height; y++)
{
    for(int x = 0; x < width * 4; x++)
    {
        buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;

It seems like it's pretty tricky to get a screenshot nowadays, especially when you're mixing UIKit and OpenGL ES: there used to be UIGetScreenImage(), but Apple made it private again and is rejecting apps that use it.
Instead, there are two "solutions" to replace it: Screen capture in UIKit applications and OpenGL ES View Snapshot. The former does not capture OpenGL ES or video content, while the latter is only for OpenGL ES.
There is another technical note How do I take a screenshot of my app that contains both UIKit and Camera elements?, and here they essentially say: You need to first capture the camera picture and then when rendering the view hierarchy, draw that image in the context.
The very same applies to OpenGL ES: you would first need to render a snapshot of your OpenGL ES view, then render the UIKit view hierarchy into an image context and draw the image of your OpenGL ES view on top of it. Very ugly, and depending on your view hierarchy it might actually not be what you're seeing on screen (e.g. if there are views in front of your OpenGL view).
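A minimal sketch of that composition approach might look like the following (assumptions: glView is your OpenGL ES view and a direct subview of self.view, and glImage is a UIImage snapshot of it taken with code like the snippet in the question, before the frame is presented):
// Sketch only: glImage is assumed to already hold the OpenGL ES snapshot.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);

// Render the UIKit hierarchy (the GL view's layer will come out blank here)
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];

// Draw the OpenGL ES snapshot where the GL view sits in that hierarchy
[glImage drawInRect:glView.frame];

UIImage *composedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();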

Inspired by DarkDust, I successfully implemented a screen capture of a mix of a UIView and an OpenGL view (a cocos2d 2.0 view). I've sanitized the code a bit and pasted it below; hopefully it's helpful for others.
To help explain the setup, my app screen has 4 view layers: at the back is a background UIView with background images, "backgroundView"; in the middle are 2 layers of the cocos2d GL view, "glLayer1" and "glLayer2"; and at the front is another UIView layer with a few native UI controls (e.g. UIButtons), "frontView".
Here's the code:
+ (UIImage *)grabScreenshot
{
    // Render the 2 cocos2d GL layers in the middle into a CCRenderTexture and store the result as a UIImage
    [CCDirector sharedDirector].nextDeltaTimeZero = YES;

    CGSize winSize = [CCDirector sharedDirector].winSize;
    CCRenderTexture *rtx =
        [CCRenderTexture renderTextureWithWidth:winSize.width
                                         height:winSize.height];
    [rtx begin];
    [glLayer1 visit];
    [glLayer2 visit];
    [rtx end];
    UIImage *openglImage = [rtx getUIImage];

    UIGraphicsBeginImageContext(winSize);

    // Capture the bottom layer
    [backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];

    // Draw the captured GL layers image into the image context
    [openglImage drawInRect:CGRectMake(0, 0, openglImage.size.width, openglImage.size.height)];

    // Capture the front UIKit layer on top
    [frontView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return viewImage;
}
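For reference, a possible way to call it (the class name and button action here are illustrative, not from the code above):
- (IBAction)saveScreenshot:(id)sender
{
    // grabScreenshot is the class method shown above; ScreenshotHelper is a hypothetical host class
    UIImage *screenshot = [ScreenshotHelper grabScreenshot];
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, NULL, NULL);
}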

Related

Screenshot-generated UIImage is not displaying in UIImageView (device only)

I am trying to save an OpenGL buffer (what's currently displayed in the view) to the device's photo library. The code snippet below works fine on the simulator, but it crashes on the actual device. I believe there could be a problem with the way I'm creating the UIImage captured from the screen.
This operation is initiated via an IBAction event handler method.
The function I use to save the image is UIImageWriteToSavedPhotosAlbum (I recently changed this to ALAssetsLibrary's writeImageToSavedPhotosAlbum).
I have ensured that my app is authorized to access the Photos library.
I also made sure that my CGImageRef is globally defined (defined at the top of the file) and my UIImage is a (nonatomic, retain) property.
Can somebody help me fix this issue? I'd like to have a valid UIImage reference that was generated from the glReadPixels data.
Below is the relevant code snippet (call to save to photo library):
-(void)TakeImageBufferSnapshot:(CGSize)dimensions
{
    NSLog(@"TakeSnapShot 1 : (%f, %f)", dimensions.width, dimensions.height);
    NSInteger size = dimensions.width * dimensions.height * 4;
    GLubyte *buffer = (GLubyte *) malloc(size);
    glReadPixels(0, 0, dimensions.width, dimensions.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    GLubyte *buffer2 = (GLubyte *) malloc(size);
    int height = (int)dimensions.height - 1;
    int width = (int)dimensions.width;

    for(int y = 0; y < dimensions.height; y++)
    {
        for(int x = 0; x < dimensions.width * 4; x++)
        {
            buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
        }
    }
    NSLog(@"TakeSnapShot 2");

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, size, NULL);

    if (buffer) free(buffer);
    if (buffer2) free(buffer2);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * self.view.bounds.size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    NSLog(@"TakeSnapShot 3");

    // make the cgimage
    g_savePhotoImageRef = CGImageCreate(dimensions.width, dimensions.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    NSLog(@"TakeSnapShot 4");

    // then make the uiimage from that
    self.savePhotoImage = [UIImage imageWithCGImage:g_savePhotoImageRef];

    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
}

-(void)SaveToPhotoAlbum
{
    ALAuthorizationStatus status = [ALAssetsLibrary authorizationStatus];
    NSLog(@"Authorization status: %d", status);

    if (status == ALAuthorizationStatusAuthorized)
    {
        [self TakeImageBufferSnapshot:self.view.bounds.size];

        // UPDATED - DO NOT proceed to save to album below.
        // Instead, set the created image to a UIImageView IBOutlet.
        // On the simulator this shows the screen/buffer captured image (as expected) -
        // but on the device (iPad) this doesn't show anything and the app crashes.
        self.testImageView.image = self.savePhotoImage;
        return;

        NSLog(@"Saving to photo album...");
        UIImageWriteToSavedPhotosAlbum(self.savePhotoImage,
                                       self,
                                       @selector(photoAlbumImageSave:didFinishSavingWithError:contextInfo:),
                                       nil);
    }
    else
    {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Access is denied"
                                                        message:@"Allow access to your Photos library to save this image."
                                                       delegate:nil
                                              cancelButtonTitle:@"Close"
                                              otherButtonTitles:nil, nil];
        [alert show];
    }
}

- (void)photoAlbumImageSave:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)context
{
    self.savePhotoImage = nil;
    CGImageRelease(g_savePhotoImageRef);

    if (error)
    {
        NSLog(@"Error saving photo to albums: %@", error.description);
    }
    else
    {
        NSLog(@"Saved to albums!");
    }
}
Update:
I think I've managed to narrow down my issue. I started doing trial and error, running the app on the device after commenting out lines of code to narrow things down. It looks like I may have a problem with the TakeImageBufferSnapshot function, which takes the screen buffer (using glReadPixels) and creates a CGImageRef. Now, when I try to create a UIImage out of this (using the [UIImage imageWithCGImage:] method), that seems to be what crashes the app. If I comment this line out there is no issue (other than the fact that I don't have a UIImage reference).
I basically need a valid UIImage reference so that I can save it to the photo library (which seems to work just fine using test images).
First, I should point out that glReadPixels() may not behave the way you expect. If you try to use it to read from the screen after -presentRenderbuffer: has been called, the results are undefined. On iOS 6.0+, this returns a black image, for example. You need to either use glReadPixels() right before the content is presented to the screen (my recommendation) or enable retained backing for your OpenGL ES context (which has adverse performance consequences).
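For example, in a typical render loop that ordering would look roughly like this (a sketch only; context, defaultFramebuffer, colorRenderbuffer, backingWidth, backingHeight, capturedPixels, and captureNextFrame are assumed ivars, not framework names):
- (void)drawFrame
{
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

    // ... all of your drawing calls go here ...

    // Read the pixels BEFORE presenting the renderbuffer
    if (captureNextFrame)
    {
        glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, capturedPixels);
        captureNextFrame = NO;
    }

    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}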
Second, there's no need for the two buffers. You can capture directly into one and use that to create your CGImageRef.
To your core issue, the problem is that you are deallocating your raw image byte buffer while your CGImageRef / UIImage is still relying on it. This pulls the rug out from underneath your UIImage and will lead to the image corruption / crashing you are seeing. To account for this, you need to put in place a callback function to be triggered on the deallocation of your CGDataProvider. This is how I do this within my GPUImage framework:
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
cgImageFromBytes = CGImageCreate((int)currentFBOSize.width, (int)currentFBOSize.height, 8, 32, 4 * (int)currentFBOSize.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
The callback function takes this form:
void dataProviderReleaseCallback (void *info, const void *data, size_t size)
{
free((void *)data);
}
This function will be called only when the UIImage containing your CGImageRef (and by extension the CGDataProvider) is deallocated. Until that point, the buffer containing your image bytes remains.
You can examine how I do this within GPUImage, as a functional example. Take a look at the GPUImageFilter class for how I extract images from an OpenGL ES frame, including a faster method using texture caches instead of glReadPixels().
Well, from my experience you cannot just grab the pixels that happen to be in the buffer right now. You need to re-establish the right context, draw, and grab the pixels THEN, before finally releasing the context. This is mainly true on the device, and on iOS 6 in particular:
EAGLContext* previousContext = [EAGLContext currentContext];
[EAGLContext setCurrentContext: self.context];
[self fillBuffer:sender];
//GRAB the pixels here
[EAGLContext setCurrentContext:previousContext];
Alternatively (that's how I do it), create a new framebuffer, fill THAT, and grab the pixels from THERE:
GLuint rttFramebuffer;
glGenFramebuffers(1, &rttFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, rttFramebuffer);

// The new framebuffer needs a color attachment before you can render into it and read it back
GLuint rttRenderbuffer;
glGenRenderbuffers(1, &rttRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, rttRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, viewportWidth, viewportHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rttRenderbuffer);

[self fillBuffer:self.displayLink];

size_t size = viewportHeight * viewportWidth * 4;
GLubyte *pixels = malloc(size * sizeof(GLubyte));
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, viewportWidth, viewportHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Restore the original framebuffer binding and clean up the temporary objects
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glDeleteFramebuffers(1, &rttFramebuffer);
glDeleteRenderbuffers(1, &rttRenderbuffer);

size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = viewportWidth * bitsPerPixel / bitsPerComponent;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast;

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, size, ImageProviderReleaseData);
CGImageRef cgImage = CGImageCreate(viewportWidth, viewportHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);

UIImage *image = [UIImage imageWithCGImage:cgImage scale:self.contentScaleFactor orientation:UIImageOrientationDownMirrored];
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
Edit: removed call to presentBuffer
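The ImageProviderReleaseData callback used above isn't shown; presumably it just frees the pixel buffer once Core Graphics is finished with it, along the same lines as the callback in the previous answer:
void ImageProviderReleaseData(void *info, const void *data, size_t size)
{
    // Free the glReadPixels buffer only when the CGImage no longer needs it
    free((void *)data);
}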

iPhone - Screenshot of multiple OpenGL (CAEAGLLayer) views

I am working on a paint app, taking reference from the GLPaint sample app. In this app there are two canvas views: one view moves from left to right (animating) and the other view is used as a background view (as shown in the figure).
I am using CAEAGLLayer for filling colors in both views (using a subclassing technique). It is working as expected. Now I have to take a screenshot of the complete view (outlines and both OpenGL views), but I am getting a screenshot of only one view (either the moving view or the background view). The screenshot code is associated with both views, but at a time only one view's content is saved.
Code snippet for screenshot as follows.
- (UIImage *)snapshot:(UIView *)eaglview
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Is there any way to combine the content of both CAEAGLLayer views?
Please help.
Thank you very much.
You could create screenshots of each view separately and then combine them as follows:
UIGraphicsBeginImageContext(canvasSize);
[openGLImage1 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
[openGLImage2 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You should use an appropriate canvasSize and frame to draw each generated UIImage; this is just a sample of how you could do it. One way to obtain the two input images is sketched below.
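For example, assuming each canvas view exposes its own snapshot method (a hypothetical name here, e.g. built from the code in the question), the inputs might be produced like this:
// backgroundCanvasView and movingCanvasView are the two CAEAGLLayer-backed views;
// -snapshot is assumed to return a UIImage of that view's renderbuffer.
UIImage *openGLImage1 = [backgroundCanvasView snapshot];
UIImage *openGLImage2 = [movingCanvasView snapshot];
CGSize canvasSize = backgroundCanvasView.bounds.size;
// ...then combine them with the drawInRect: code above.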
See here for a much better way of doing this. It basically allows you to capture a larger view that contains all of your OpenGL (and other) views into one, fully composed screenshot that's identical to what you see on the screen.

Saving imageRef from GLPaint creates completely black image

Hi, I am trying out a drawing app and have a problem when it comes to saving the image that is drawn. Right now I'm very early in learning this, but I have added code from
How to get UIImage from EAGLView? to save the image that was drawn.
I have created a new app, then displayed a viewController that I created. In IB I have added a view which is the PaintingView, and an imageView lies behind it.
The only modification I have made to the PaintingView so far is to set its background to clear so that I can display an image behind it. The drawing works great; my only problem is saving.
- (void)saveImageFromGLView:(UIView *)glView
{
    if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
    {
        /// This IS being activated with code 0
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
    }

    int s = 1;
    UIScreen* screen = [ UIScreen mainScreen ];
    if ( [ screen respondsToSelector:@selector(scale) ] )
        s = (int) [ screen scale ];

    const int w = self.frame.size.width;
    const int h = self.frame.size.height;
    const NSInteger myDataLength = w * h * 4 * s * s;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < h*s; y++)
    {
        memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
    }
    free(buffer); // work with the flipped buffer, so get rid of the original one.

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w * s;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
    UIImageWriteToSavedPhotosAlbum( myImage, nil, nil, nil );

    CGImageRelease( imageRef );
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer2);
}
Adding the above code to the sample app works fine; the problem is doing it in my new app. The only difference I can tell is that I have not included PaintingWindow; could that be the problem?
It's as if the saveImage method isn't seeing the drawing data.
The save method should be called within the scope of the OpenGL context.
To solve this you can move your method into the same rendering .m file and call it from outside.
You also need to consider the OpenGL clear color.
I found that changing the CGBitmapInfo to:
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
results in a transparent background.
Ah, you have to do this at the beginning.
[EAGLContext setCurrentContext:drawContext];
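Putting those pieces together, a rough sketch of how the save call might be wired up (paintingView and drawContext are assumed names for the PaintingView instance and its EAGLContext) could be:
- (IBAction)saveDrawing:(id)sender
{
    // Make the painting view's GL context current before reading its pixels
    EAGLContext *previousContext = [EAGLContext currentContext];
    [EAGLContext setCurrentContext:drawContext];

    [paintingView saveImageFromGLView:paintingView];

    [EAGLContext setCurrentContext:previousContext];
}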

How do you render OpenGL-ES to an external screen using the VGA out adapter?

I've been developing a 3D program for the iPad and iPhone and would like to be able to render it to an external screen. According to my understanding, you have to do something similar to the below code to implement it, (found at: Sunsetlakesoftware.com):
if ([[UIScreen screens] count] > 1)
{
// External screen attached
}
else
{
// Only local screen present
}
CGRect externalBounds = [externalScreen bounds];
externalWindow = [[UIWindow alloc] initWithFrame:externalBounds];
UIView *backgroundView = [[UIView alloc] initWithFrame:externalBounds];
backgroundView.backgroundColor = [UIColor whiteColor];
[externalWindow addSubview:backgroundView];
[backgroundView release];
externalWindow.screen = externalScreen;
[externalWindow makeKeyAndVisible];
However, I'm not sure what to change to do this in an OpenGL project. Does anyone know what you would do to implement this in the default OpenGL project for iPad or iPhone in Xcode?
All you need to do to render OpenGL ES content on the external display is to either create a UIView that is backed by a CAEAGLLayer and add it as a subview of the backgroundView above, or take such a view and move it to be a subview of backgroundView.
In fact, you can remove the backgroundView if you want and just place your OpenGL-hosting view directly on the externalWindow UIWindow instance. That window is attached to the UIScreen instance that represents the external display, so anything placed on it will show on that display. This includes OpenGL ES content.
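A minimal sketch of that (glView standing in for your existing CAEAGLLayer-backed view, externalWindow as in the question) might be:
if ([[UIScreen screens] count] > 1)
{
    UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];
    externalWindow = [[UIWindow alloc] initWithFrame:[externalScreen bounds]];
    externalWindow.screen = externalScreen;

    // Move the existing OpenGL ES hosting view onto the external window
    glView.frame = externalWindow.bounds;
    [externalWindow addSubview:glView];
    [externalWindow makeKeyAndVisible];
}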
There does appear to be an issue with particular types of OpenGL ES content, as you can see in the experimental support I've tried to add to my Molecules application. If you look in the source code there, I attempt to migrate my rendering view to an external display, but it never appears. I have done the same with other OpenGL ES applications and had their content render fine, so I believe there might be an issue with the depth buffer on the external display. I'm still working to track that down.
I've figured out how to get ANY OpenGL-ES content to render onto an external display. It's actually really straightforward. You just copy your renderbuffer to a UIImage then display that UIImage on the external screen view. The code to take a snapshot of your renderbuffer is below:
- (UIImage *)snapshot:(UIView *)eaglview
{
    // Get the size of the backing CAEAGLLayer
    GLint backingWidth, backingHeight;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, defaultFramebuffer);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Although, for some reason I've never been able to get glGetRenderbufferParameterivOES to return the proper backingWidth and backingHeight, so I've had to use my own function to calculate those. Just drop this into your rendering implementation and place the result onto the external screen using a timer, as sketched below. If anyone can make any improvements to this method, please let me know.
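A hedged sketch of that last step (externalImageView is assumed to be a UIImageView filling the external UIWindow, glView the OpenGL ES view being mirrored, and snapshot: the method above):
- (void)startMirroringToExternalScreen
{
    // Refresh the external display ten times a second with the latest GL snapshot
    self.externalUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                                target:self
                                                              selector:@selector(updateExternalDisplay:)
                                                              userInfo:nil
                                                               repeats:YES];
}

- (void)updateExternalDisplay:(NSTimer *)timer
{
    self.externalImageView.image = [self snapshot:self.glView];
}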

How to erase part of an image as the user touches it

My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.
The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(
I have a UIView that detects touches, then sends the coords of the move to the UIViewController that clips the image in a UIImageView by doing the following:
- (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to
{
    UIImage *image = bkgdImageView.image;
    CGSize s = image.size;
    UIGraphicsBeginImageContext(s);
    CGContextRef g = UIGraphicsGetCurrentContext();

    CGContextMoveToPoint(g, from.x, from.y);
    CGContextAddLineToPoint(g, to.x, to.y);
    CGContextClosePath(g);
    CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
    CGContextEOClip(g);

    [image drawAtPoint:CGPointZero];

    bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [bkgdImageView setNeedsDisplay];
}
The problem is that the touches are sent to this method just fine, but nothing happens to the original image.
Am I doing the clip path incorrectly? Or?
Not really sure...so any help you may have would be greatly appreciated.
Thanks in advance,
Joel
I tried to do the same thing a while ago using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects it to be. I knew how to work with OpenCV (the Open Computer Vision Library), and since it is written in C, I knew I could use it on the iPhone.
Doing what you want to do with OpenCV is extremely easy.
First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and the other way around.
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    // This is the function you use to convert a UIImage -> IplImage
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
                                                    iplimage->depth, iplimage->widthStep,
                                                    colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return iplimage;
}
+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
    // Convert an IplImage -> UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
    //NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    [data release];
    return ret;
}
Now that you have both of the basic functions you need, you can do whatever you want with your IplImage.
This is what you want:
+ (UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
    // r is the radius of the erasing
    int a = point.x;
    int b = point.y;
    int position;
    int minX, minY, maxX, maxY;

    minX = (a-r > 0) ? a-r : 0;
    minY = (b-r > 0) ? b-r : 0;
    maxX = ((a+r) < (image->width)) ? a+r : (image->width);
    maxY = ((b+r) < (image->height)) ? b+r : (image->height);

    for (int i = minX; i < maxX; i++)
    {
        for (int j = minY; j < maxY; j++)
        {
            position = ((j-b)*(j-b)) + ((i-a)*(i-a));
            if (position <= r*r)
            {
                // Zero out all 4 channels of the pixel inside the radius
                uchar *ptr = (uchar *)(image->imageData) + (j*image->widthStep + i*image->nChannels);
                ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;
            }
        }
    }

    UIImage *res = [self UIImageFromIplImage:image];
    return res;
}
Sorry for the formatting.
If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's guide on the subject.
If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.
You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.
What I would do is have two views. One with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw the image every time you modify the gray fill.
The gray one would be a UIView subclass with a CGBitmapContext that you would draw into to make the pixels that the user touches clear.
There are probably several ways to do this; I'm just suggesting one way above. A rough sketch of that gray overlay view follows below.
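An illustrative sketch of that gray overlay view (names are made up; the view keeps a bitmap context whose alpha it punches out under the user's finger):
@interface ScratchOverlayView : UIView
{
    CGContextRef bitmapContext; // the gray layer we erase from
}
@end

@implementation ScratchOverlayView

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
    {
        self.opaque = NO;
        self.backgroundColor = [UIColor clearColor];

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapContext = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height,
                                              8, frame.size.width * 4, colorSpace,
                                              kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        // Start fully covered in gray
        CGContextSetGrayFillColor(bitmapContext, 0.5, 1.0);
        CGContextFillRect(bitmapContext, CGRectMake(0, 0, frame.size.width, frame.size.height));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];

    // Punch a transparent hole in the gray bitmap where the finger is
    CGContextSetBlendMode(bitmapContext, kCGBlendModeClear);
    CGContextFillEllipseInRect(bitmapContext, CGRectMake(p.x - 15, p.y - 15, 30, 30));
    CGContextSetBlendMode(bitmapContext, kCGBlendModeNormal);

    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    // Draw the (partially erased) gray bitmap over whatever sits behind this view.
    // Drawing with CGContextDrawImage in UIKit's flipped context cancels the bitmap's
    // bottom-left origin, so the holes line up with the touch locations.
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
    CGImageRelease(image);
}

@end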