This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView containing AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button save the picture to the camera roll. Holding the power button + home key captures the screenshot to the camera roll, meaning that all of my capture logic is working, AND the task is possible. But I cannot seem to be able to make it work programmatically.
I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
//start the session, etc...
//this saves a white screen
- (IBAction)saveOverlay:(id)sender {
    NSLog(@"saveOverlay");
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    UIGraphicsBeginImageContext(scrollView.frame.size);
    [previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
    // [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//this renders everything, EXCEPT for the preview layer, which is blank.
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
I've read somewhere that this may be due to security issues of the iPhone. Is this true?
Just to be clear: I don't want to save the image from the camera. I want to save the transparent preview layer superimposed over another image, thus creating transparency. Yet for some reason I cannot make it work.
I like @Roma's suggestion of using GPU Image - great idea... however, if you want a pure CocoaTouch approach, here's what to do:
Implement AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage+Orientation from the sample buffer data
    if (_captureFrame)
    {
        [captureSession stopRunning];
        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        image = [image rotate:UIImageOrientationRight];
        _frameCaptured = YES;
        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
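That delegate method only fires once a video data output has been attached to the session and pointed at your class; the answer takes that wiring for granted. A minimal sketch of the setup, assuming ARC, an existing `captureSession`, and a made-up queue name (none of this is in the original answer):

    // Hedged sketch: attach a video data output so the delegate above gets called.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // BGRA matches the CGBitmapContextCreate flags used in imageFromSampleBuffer: below.
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
    videoOutput.alwaysDiscardsLateVideoFrames = YES;
    [videoOutput setSampleBufferDelegate:self
                                   queue:dispatch_queue_create("video.capture.queue", DISPATCH_QUEUE_SERIAL)];
    if ([captureSession canAddOutput:videoOutput]) {
        [captureSession addOutput:videoOutput];
    }
    [captureSession startRunning];
    // Later, when the user taps "save", set _captureFrame = YES and the next
    // delegate callback will grab a frame.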
Convert the sample buffer to a UIImage as follows:
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Blend the UIImage with the overlay
Now that you have the UIImage, add it to a new UIView.
Add the overlay on top as a sub-view.
Capture the new UIView (a sketch of these steps follows the helper below):
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
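Putting the steps above together, a rough sketch (assuming ARC; `composeImage:withOverlay:` and its parameter names are placeholders, not part of the original answer, and `imageWithView:` is assumed to live on the same class):

    // Hedged sketch of the blend-and-capture steps above.
    // cameraImage is the UIImage delivered in cameraPictureTaken:,
    // overlayView is whatever transparent AR overlay you already display.
    - (UIImage *)composeImage:(UIImage *)cameraImage withOverlay:(UIView *)overlayView
    {
        UIView *canvas = [[UIView alloc] initWithFrame:overlayView.bounds];

        // Bottom layer: the captured camera frame.
        UIImageView *cameraImageView = [[UIImageView alloc] initWithFrame:canvas.bounds];
        cameraImageView.image = cameraImage;
        cameraImageView.contentMode = UIViewContentModeScaleAspectFill;
        [canvas addSubview:cameraImageView];

        // Top layer: the transparent overlay.
        // (Adding it here removes it from its on-screen superview; you may prefer a copy.)
        [canvas addSubview:overlayView];

        // Flatten the composite into a single UIImage using the helper above.
        return [[self class] imageWithView:canvas];
    }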
I can advise you to try GPU Image.
https://github.com/BradLarson/GPUImage
It uses OpenGL, so it's rather fast. It can process pictures from the camera and apply filters to them (there are a lot of them), including edge detection, motion detection, and far more.
It's like OpenCV, but in my experience GPUImage is easier to integrate into your project, and the language is Objective-C.
A problem could appear if you decide to use Box2D for physics: it uses OpenGL too, and you will need to spend some time getting the two frameworks to stop fighting each other.
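For a sense of how little code this takes, here is roughly the live-camera filtering setup from the GPUImage README (assuming ARC and a view controller context; the preset, filter choice, and frame are just example values):

    // Hedged sketch based on the GPUImage README: live camera -> filter -> on-screen view.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    GPUImageSketchFilter *filter = [[GPUImageSketchFilter alloc] init]; // edge-detection style filter
    [videoCamera addTarget:filter];

    GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    [filter addTarget:filteredVideoView];
    [self.view addSubview:filteredVideoView];

    [videoCamera startCameraCapture];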
Related
I am trying to generate a PDF report from a UIWebView. Using graphics contexts I can get images of the screen frame and draw them into the PDF.
The problem with the current method is that the images are blurred (bad quality), so the report PDF is not readable.
How can I get high-quality images of a screen frame? Or is there a better solution to my problem?
P.S. The report includes multiple background colors and images; I think it is hard to draw them directly into the PDF.
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, webview.bounds, nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
[webview.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();
Have you tried it like this?
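If the web page is taller than one PDF page, the same idea can be extended by paging through the web view's content; a rough sketch (the temporary resize, the paging math, and the use of the web view's scrollView are my assumptions, not part of the answer above, and this assumes iOS 5+ for UIWebView's scrollView property):

    // Hedged sketch: render a tall UIWebView into a multi-page PDF.
    // Temporarily grow the web view to its full content height so its layer
    // contains everything, then shift one page height per PDF page.
    CGRect originalFrame = webview.frame;
    CGFloat contentHeight = webview.scrollView.contentSize.height;
    webview.frame = CGRectMake(originalFrame.origin.x, originalFrame.origin.y,
                               originalFrame.size.width, contentHeight);

    CGFloat pageHeight = originalFrame.size.height;
    NSInteger pageCount = (NSInteger)ceil(contentHeight / pageHeight);

    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData,
                                    CGRectMake(0, 0, originalFrame.size.width, pageHeight), nil);
    for (NSInteger page = 0; page < pageCount; page++) {
        UIGraphicsBeginPDFPage();
        CGContextRef pdfContext = UIGraphicsGetCurrentContext();
        CGContextSaveGState(pdfContext);
        // Shift the content so the next slice of the web view lands on this page.
        CGContextTranslateCTM(pdfContext, 0, -page * pageHeight);
        [webview.layer renderInContext:pdfContext];
        CGContextRestoreGState(pdfContext);
    }
    UIGraphicsEndPDFContext();

    webview.frame = originalFrame;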
I'm using AVCaptureVideoDataOutputSampleBufferDelegate and I receive a CMSampleBufferRef which I convert to a UIImage - but the resulting image isn't correctly oriented.
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *img = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
If I hold the iPhone in portrait mode the resulting image is rotated 90 degrees (anti-clockwise).
If I hold the iPhone in landscape left orientation (home button is on the left) the resulting image is upright.
If I hold the iPhone in landscape right orientation (home button is on the right) the resulting image is upside-down.
I'm using the front camera of the device, but I will also be using the back camera, so the resulting image should always have the correct orientation.
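One common way to handle this (a hedged sketch, not part of the question) is to tag the CGImage with an orientation when wrapping it, picking the value from the current device orientation instead of calling plain imageWithCGImage:. The mapping below is chosen to match the behaviour described above; a mirrored camera may need the ...Mirrored counterparts instead:

    // Hedged sketch: choose a UIImageOrientation so the image displays upright.
    UIImageOrientation imageOrientation;
    switch ([[UIDevice currentDevice] orientation]) {
        case UIDeviceOrientationLandscapeRight:     // home button on the left
            imageOrientation = UIImageOrientationUp;
            break;
        case UIDeviceOrientationLandscapeLeft:      // home button on the right
            imageOrientation = UIImageOrientationDown;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            imageOrientation = UIImageOrientationLeft;
            break;
        case UIDeviceOrientationPortrait:
        default:
            imageOrientation = UIImageOrientationRight;
            break;
    }
    UIImage *img = [UIImage imageWithCGImage:quartzImage
                                       scale:1.0
                                 orientation:imageOrientation];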
I'm working on a paint app for iPhone. In my code I'm using an imageView which contains an outline image, on top of which I put a CAEAGLLayer for filling colors into the outline image. I take a screenshot of the OpenGL ES (CAEAGLLayer) rendered content using this function:
- (UIImage*)snapshot:(UIView*)eaglview{
GLint backingWidth1, backingHeight1;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);
NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
Then I combine this screenshot with the outline image using this function:
- (void)Combine:(UIImage *)Back
{
    UIImage *Front = backgroundImageView.image;

    //UIGraphicsBeginImageContext(Back.size);
    UIGraphicsBeginImageContext(CGSizeMake(640, 960));

    // Draw image1
    [Back drawInRect:CGRectMake(0, 0, Back.size.width*2, Back.size.height*2)];
    // Draw image2
    [Front drawInRect:CGRectMake(0, 0, Front.size.width*2, Front.size.height*2)];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);

    UIGraphicsEndImageContext();
}
Then I save the image to the photo album using this function:
- (void)captureToPhotoAlbum {
    [self Combine:[self snapshot:self]];
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success" message:@"Image saved to Photo Album" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
    [alert release];
}
The above code works, but the quality of the screenshot is poor: along the brush strokes there is a grayish outline. I have uploaded a screenshot of my app, which is a combination of OpenGL ES content and a UIImage.
Is there any way to get a retina-quality screenshot of the OpenGL ES / CAEAGLLayer content?
Thank you in advance!
I don't believe that resolution is your issue here. If you aren't seeing the grayish outlines on your drawing when it appears on the screen, odds are that you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG image, where artifacts will appear on sharp edges, like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage* im = [UIImage imageWithCGImage:myCGRef]; // make image from CGRef
NSData* imdata = UIImagePNGRepresentation ( im ); // get PNG representation
UIImage* im2 = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
While this is probably the easiest way to address your problem here, you could also try employing multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on how fill-rate limited you are, MSAA might lead to a little bit of slowdown in your application.
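For reference, the multisampling Apple describes boils down to rendering into a multisample framebuffer and resolving it into the normal one each frame. A condensed sketch, assuming OpenGL ES 1.1-style OES names and hypothetical `_msaaFramebuffer` / `_msaaRenderbuffer` ivars alongside the `viewFramebuffer`, `viewRenderbuffer`, `backingWidth`, `backingHeight`, and `context` you would already have:

    // Hedged sketch of the multisampling setup from Apple's OpenGL ES guide.
    // One-time setup: a 4x multisample framebuffer the same size as the layer.
    glGenFramebuffersOES(1, &_msaaFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _msaaFramebuffer);

    glGenRenderbuffersOES(1, &_msaaRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _msaaRenderbuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES,
                                          backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, _msaaRenderbuffer);

    // Each frame: draw into the multisample framebuffer...
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _msaaFramebuffer);
    // ... issue your normal drawing calls here ...

    // ...then resolve the samples into the renderbuffer backing the CAEAGLLayer.
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, _msaaFramebuffer);
    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
    glResolveMultisampleFramebufferAPPLE();

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];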
You're using kCGImageAlphaPremultipliedLast when you create the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be using kCGImageAlphaLast, but I think that'll just make the creation call fail), so you may need to premultiply the data by hand between getting it from OpenGL and making the CG context.
On the other hand, is there a reason your OpenGL context has an alpha channel? Could you just make it opaque white then use kCGImageAlphaNoneSkipLast?
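If you go the opaque route, the only change in the snapshot method's CGImageCreate call would be the alpha flag, roughly like this (a sketch of the suggestion above, not tested):

    // Hedged sketch: skip the (meaningless) alpha channel when the GL content is opaque.
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);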
Please note that this question is about CGLayer (which you typically use to draw offscreen), it is not about CALayer.
In iOS, what's the correct code to save a CGLayer as a PNG file? Thanks!
Again, that's CGLayer, not CALayer.
Note that you CAN NOT use UIGraphicsGetImageFromCurrentImageContext.
(From the documentation, "You can call UIGraphicsGetImageFromCurrentImageContext only when a bitmap-based graphics context is the current graphics context.")
Note that you CAN NOT use renderInContext:. renderInContext: is strictly for CALayers. CGLayers are totally different.
So, how can you actually convert a CGLayer to a PNG image? Or indeed, how to render a CGLayer in to a bitmap in some way (of course you can then easily save as an image).
Later... Ken has answered this difficult question. I will paste in a long code example that may help people. Thanks again Ken! Amazing!
-(void)drawingExperimentation
{
// this code uses the ASTOUNDING solution by KENNYTM -- Oct/Nov2010
//
// create a CGLayer for offscreen drawing
// note. for "yourContext", ideally it should be a context from your screen, ie the
// context you "normally get" in one of your drawRect routines associated with
// drawing to the screen normally.
// UIGraphicsGetCurrentContext() also normally works but you could have colorspace woes
// so create the CGLayer called notepad...
CGLayerRef notepad = CGLayerCreateWithContext(yourContext,CGSizeMake(1500,1500), NULL);
CGContextRef notepadContext = CGLayerGetContext(notepad);
// you can for example write an image in to notepad
CGImageRef imageExamp = [[UIImage imageWithContentsOfFile:
[[NSBundle mainBundle] pathForResource:@"smallTestImage" ofType:@"png"] ] CGImage];
CGContextDrawImage( notepadContext, CGRectMake(100,100, 50,50), imageExamp);
// setting the colorspace may or may not be relevant to you
CGContextSetFillColorSpace( notepadContext, CGColorSpaceCreateDeviceRGB() );
// you can draw to notepad as much as you like in the normal way
// don't forget to push it's context on and off your work space so you can draw to it
UIGraphicsPushContext(notepadContext);
// set the colors
CGContextSetRGBFillColor(notepadContext, 0.15,0.25,0.35, 0.45);
// draw rects
UIRectFill(CGRectMake(x,y,w,h));
// draw ovals, filled stroked or whatever you wish
UIBezierPath* d = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(x,y,w,h)];
[d fill];
// draw cubic and other curves
UIBezierPath *longPath = [UIBezierPath bezierPath];
longPath.lineWidth = 42;
longPath.lineCapStyle = kCGLineCapRound;
longPath.lineJoinStyle = kCGLineJoinRound;
[longPath moveToPoint:p];
[longPath addCurveToPoint:q controlPoint1:r controlPoint2:s];
[longPath addCurveToPoint:a controlPoint1:b controlPoint2:c];
[longPath addCurveToPoint:m controlPoint1:n controlPoint2:o];
[longPath closePath];
[longPath stroke];
UIGraphicsPopContext();
// so now you have a nice CGLayer.
// how to save it to a file?
// you can save it to a file using the amazing KENNY-TM-METHOD !!!
UIGraphicsBeginImageContext( CGLayerGetSize(notepad) );
CGContextRef rr = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(rr, CGPointZero, notepad);
UIImage* ii = UIGraphicsGetImageFromCurrentImageContext();
NSData* pp = UIImagePNGRepresentation(ii);
[pp writeToFile:@"foo.png" atomically:YES];
UIGraphicsEndImageContext();
// you may prefer to look at it like this:
UIGraphicsBeginImageContext( CGLayerGetSize(notepad) );
CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, notepad);
[UIImagePNGRepresentation(UIGraphicsGetImageFromCurrentImageContext()) writeToFile:@"foo.png" atomically:YES];
UIGraphicsEndImageContext();
// there are three clever steps in the KENNY-TM-METHOD:
// - start a new UIGraphics image context
// - CGContextDrawLayerAtPoint which can, in fact, draw a CGLayer
// - just use the usual UIImagePNGRepresentation to convert to a png
// done! a miracle
// if you are testing on your mac-simulator, you'll find the file
// simply in the main drive directory
return;
}
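One practical note on those writeToFile: calls: the bare @"foo.png" path only works in the simulator, as the comment above says; on a device you would typically build a path inside the app's Documents directory instead (a small aside, not part of the original answer):

    // Hedged aside: write the PNG somewhere actually writable on a device.
    NSString *documentsDirectory =
        [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:@"foo.png"];
    [pp writeToFile:pngPath atomically:YES]; // pp is the PNG NSData from the example above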
For iPhone OS, it should be possible to draw a CGLayer on a CGContext and then convert it into a UIImage, which can then be encoded as PNG and saved.
CGSize size = CGLayerGetSize(layer);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(ctx, CGPointZero, layer);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
NSData* pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:... atomically:YES];
UIGraphicsEndImageContext();
(not tested)
Alright, what I am trying to do is: given an image that contains a "blank" circle, I want to take an existing image from the user's library and mask it so that only a certain part of that image shows through the "blank" area.
I have tried a few masking snippets, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
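That snippet reads like the body of a UIImage method from the answer it was lifted from (it refers to self.CGImage, targetSize, thumbnailPoint, and so on). A hedged sketch of how it might be wrapped and called, with those names turned into parameters and the thumbnail offsets dropped for simplicity:

    // Hedged sketch: the masking code above wrapped as a hypothetical UIImage category.
    @interface UIImage (Masking)
    - (UIImage *)imageMaskedWith:(UIImage *)maskImage targetSize:(CGSize)targetSize;
    @end

    @implementation UIImage (Masking)
    - (UIImage *)imageMaskedWith:(UIImage *)maskImage targetSize:(CGSize)targetSize
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                                     8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (context == NULL) return nil;

        // Everything drawn after this clip only shows where the mask lets it through.
        CGContextClipToMask(context,
                            CGRectMake(0, 0, targetSize.width, targetSize.height),
                            maskImage.CGImage);
        CGContextDrawImage(context,
                           CGRectMake(0, 0, targetSize.width, targetSize.height),
                           self.CGImage);

        CGImageRef maskedCGImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);

        UIImage *maskedImage = [UIImage imageWithCGImage:maskedCGImage];
        CGImageRelease(maskedCGImage);
        return maskedImage;
    }
    @end

    // Usage (names are placeholders):
    // UIImage *result = [libraryPhoto imageMaskedWith:[UIImage imageNamed:@"mask.png"]
    //                                       targetSize:CGSizeMake(320, 320)];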