What's the correct code to save a CGLayer as a PNG file? (iPhone)

Please note that this question is about CGLayer (which you typically use to draw offscreen), it is not about CALayer.
In iOS, what's the correct code to save a CGLayer as a PNG file? Thanks!
Again, that's CGLayer, not CALayer.
Note that you CAN NOT use UIGraphicsGetImageFromCurrentImageContext.
(From the documentation, "You can call UIGraphicsGetImageFromCurrentImageContext only when a bitmap-based graphics context is the current graphics context.")
Note that you CAN NOT use renderInContext:. renderInContext: is strictly for CALayers. CGLayers are totally different.
So, how can you actually convert a CGLayer to a PNG image? Or indeed, how can you render a CGLayer into a bitmap in some way (of course you can then easily save it as an image)?
Later ... Ken has answered this difficult question. I will paste in a long code example that may help people. Thanks again Ken! Amazing!
-(void)drawingExperimentation
{
// this code uses the ASTOUNDING solution by KENNYTM -- Oct/Nov2010
//
// create a CGLayer for offscreen drawing
// note: for "yourContext", ideally it should be a context from your screen, ie the
// context you "normally get" in one of your drawRect routines associated with
// drawing to the screen normally.
// UIGraphicsGetCurrentContext() also normally works, but you could have colorspace woes
// so create the CGLayer called notepad...
CGLayerRef notepad = CGLayerCreateWithContext(yourContext,CGSizeMake(1500,1500), NULL);
CGContextRef notepadContext = CGLayerGetContext(notepad);
// you can for example write an image in to notepad
CGImageRef imageExamp = [[UIImage imageWithContentsOfFile:
[[NSBundle mainBundle] pathForResource:@"smallTestImage" ofType:@"png"] ] CGImage];
CGContextDrawImage( notepadContext, CGRectMake(100,100, 50,50), imageExamp);
// setting the colorspace may or may not be relevant to you
// (CGColorSpaceCreateDeviceRGB follows the "create" rule, so release it after use)
CGColorSpaceRef rgbSpace = CGColorSpaceCreateDeviceRGB();
CGContextSetFillColorSpace( notepadContext, rgbSpace );
CGColorSpaceRelease(rgbSpace);
// you can draw to notepad as much as you like in the normal way
// don't forget to push its context on and off your work space so you can draw to it
UIGraphicsPushContext(notepadContext);
// set the colors
CGContextSetRGBFillColor(notepadContext, 0.15,0.25,0.35, 0.45);
// draw rects (x, y, w, h here stand for whatever values you are using)
UIRectFill(CGRectMake(x,y,w,h));
// draw ovals, filled stroked or whatever you wish
UIBezierPath* d = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(x,y,w,h)];
[d fill];
// draw cubic and other curves
UIBezierPath *longPath = [UIBezierPath bezierPath];
longPath.lineWidth = 42;
longPath.lineCapStyle = kCGLineCapRound;
longPath.lineJoinStyle = kCGLineJoinRound;
[longPath moveToPoint:p];
[longPath addCurveToPoint:q controlPoint1:r controlPoint2:s];
[longPath addCurveToPoint:a controlPoint1:b controlPoint2:c];
[longPath addCurveToPoint:m controlPoint1:n controlPoint2:o];
[longPath closePath];
[longPath stroke];
UIGraphicsPopContext();
// so now you have a nice CGLayer.
// how to save it to a file?
// you can save it to a file using the amazing KENNY-TM-METHOD !!!
UIGraphicsBeginImageContext( CGLayerGetSize(notepad) );
CGContextRef rr = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(rr, CGPointZero, notepad);
UIImage* ii = UIGraphicsGetImageFromCurrentImageContext();
NSData* pp = UIImagePNGRepresentation(ii);
[pp writeToFile:@"foo.png" atomically:YES];
UIGraphicsEndImageContext();
// you may prefer to look at it like this:
UIGraphicsBeginImageContext( CGLayerGetSize(notepad) );
CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, notepad);
[UIImagePNGRepresentation(UIGraphicsGetImageFromCurrentImageContext()) writeToFile:@"foo.png" atomically:YES];
UIGraphicsEndImageContext();
// there are three clever steps in the KENNY-TM-METHOD:
// - start a new UIGraphics image context
// - CGContextDrawLayerAtPoint which can, in fact, draw a CGLayer
// - just use the usual UIImagePNGRepresentation to convert to a png
// done! a miracle
// if you are testing on your mac-simulator, you'll find the file
// simply in the main drive directory
// finally, release the layer now that you are done with it
CGLayerRelease(notepad);
return;
}

For iPhone OS, it should be possible to draw a CGLayer into a CGContext and then convert it into a UIImage, which can then be encoded as PNG and saved.
CGSize size = CGLayerGetSize(layer);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(ctx, CGPointZero, layer);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
NSData* pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:... atomically:YES];
UIGraphicsEndImageContext();
(not tested)
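Wrapped up as a reusable helper, the same approach might look like this (just a sketch restating the code above; the method name imageFromCGLayer: is made up here for illustration):
+ (UIImage *)imageFromCGLayer:(CGLayerRef)layer
{
    // make a bitmap context the same size as the layer
    UIGraphicsBeginImageContext(CGLayerGetSize(layer));
    // CGContextDrawLayerAtPoint can, in fact, draw a CGLayer into that context
    CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, layer);
    // grab the result as a UIImage, then clean up the context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}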

Related

Correct GraphicsContext When Using UIGraphicsGetImageFromCurrentImageContext

I have the following method where I'm trying to do some drawing into an image:
- (UIImage*) renderImage
{
UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
//drawing code
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
return [image autorelease];
}
When I run this code I noticed that the performance hit is much larger than when I was simply doing this drawing in drawRect of a UIView. Am I drawing into the wrong graphics context here (ie CGContextRef context = UIGraphicsGetCurrentContext();)? Or is UIGraphicsGetImageFromCurrentImageContext just that much more expensive than drawing in drawRect?
The main difference is that the context you create requires offscreen rendering; it isn't the same context that is created for -drawRect. So you are adding additional memory to the heap that stays there until you release the image.
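If the drawing itself rarely changes, one way to limit that cost (a sketch, not from the original answer; the cachedImage property is hypothetical) is to render offscreen once and reuse the result:
// hypothetical cache so the offscreen render only happens once
@property (nonatomic, retain) UIImage *cachedImage;

- (UIImage *)renderImage
{
    if (self.cachedImage == nil)
    {
        UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
        // ...drawing code...
        self.cachedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return self.cachedImage;
}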

Is it possible to render AVCaptureVideoPreviewLayer in a graphics context?

This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView containing AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button save the picture to the camera roll. Holding the power button + home key captures the screenshot to the camera roll, meaning that all of my capture logic is working, AND the task is possible. But I cannot seem to be able to make it work programmatically.
I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
//start the session, etc...
//this saves a white screen
- (IBAction)saveOverlay:(id)sender {
NSLog(@"saveOverlay");
UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
UIGraphicsBeginImageContext(scrollView.frame.size);
[previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
// [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenshot, self,
@selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//this renders everything, EXCEPT for the preview layer, which is blank.
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
I've read somewhere that this may be due to security issues of the iPhone. Is this true?
Just to be clear: I don't want to save the image for the camera. I want to save the transparent preview layer superimposed over another image, thus creating transparency. Yet for some reason I cannot make it work.
I like @Roma's suggestion of using GPUImage - great idea... however if you want a pure Cocoa Touch approach, here's what to do:
Implement AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage+Orientation from the sample buffer data
if (_captureFrame)
{
[captureSession stopRunning];
_captureFrame = NO;
UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
image = [image rotate:UIImageOrientationRight];
_frameCaptured = YES;
if (delegate != nil)
{
[delegate cameraPictureTaken:image];
}
}
}
Capture as Follows:
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Blend the UIImage with the overlay
Now that you have the UIImage, add it to a new UIView.
Add the overlay on top as a sub-view.
Capture the new UIView with a method like this (a sketch putting these steps together follows it):
+ (UIImage*)imageWithView:(UIView*)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
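Putting those steps together might look roughly like this (a sketch; cameraImage and overlayView stand in for the frame captured above and your AR overlay, and imageWithView: is assumed to live in the same ImageTools helper class):
// compose the captured camera frame and the overlay into one off-screen view
UIImageView *background = [[UIImageView alloc] initWithImage:cameraImage];
UIView *composite = [[UIView alloc] initWithFrame:background.bounds];
[composite addSubview:background];
overlayView.frame = composite.bounds;
[composite addSubview:overlayView];
// render the composed view to a single image and save it to the camera roll
UIImage *finalImage = [ImageTools imageWithView:composite];
UIImageWriteToSavedPhotosAlbum(finalImage, nil, NULL, NULL);
[background release];
[composite release];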
I can advise you to try GPUImage.
https://github.com/BradLarson/GPUImage
It uses OpenGL, so it's rather fast. It can process pictures from the camera and add filters to them (there are a lot of them), including edge detection, motion detection, and far more.
It's like OpenCV, but in my own experience GPUImage is easier to connect to your project, and the language is Objective-C.
A problem could appear if you decide to use Box2D for physics: it uses OpenGL too, and you will need to spend some time until these two frameworks stop fighting. :)
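For reference, a filtered live preview with GPUImage is only a few lines (a minimal sketch based on the GPUImage README of that era; check the current headers for the exact class and method names):
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];
// camera -> filter -> on-screen view
[videoCamera addTarget:sepiaFilter];
[sepiaFilter addTarget:filteredView];
[videoCamera startCameraCapture];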

How can I save a photo of part of the screen to the local iPhone's Photos?

I have put a UILabel that the user has chosen over a UIImageView that was also chosen by the user. I would like to put these two into a picture, kind of like a screenshot of a small part of the screen. I have absolutely no idea how to do this and have no experience in this. Any help is appreciated!!
You could set up a bitmap context with a clipping rect for the area you want to save. Then you use the backing layer's renderInContext: method to draw onto that context.
CGSize imageSize = CGSizeMake(960, 580);
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToRect(context, CGRectMake(10,10,200,200)); // whatever rect you want
[self.layer renderInContext:context];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Save to camera roll
UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
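The completion selector has to match the signature UIKit expects; a minimal implementation of that callback might be:
// called by UIKit once the save to the photo album has finished
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error != nil) {
        NSLog(@"Error saving image: %@", error);
    }
}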

Is this kind of masking possible with UIImage or CGImage API in iOS

I have a UIImage with some text and would like to apply a pattern UIImage as a mask. Is this possible?
I understand that with UILabel we can get this kind of gradient using CAGradientLayer. But can this be done if the source is a UIImage?
The image may have symbols/pictures etc. other than regular characters, hence a UIImage. Also, I could reuse the image by applying a different masking pattern depending on the context.
Is this possible?
Appreciate your help.
EDIT: Thanks for all your answers.
I understand applying the gradient to a text label or creating an image that has text.
But my goal is the effect shown in the linked picture.
That is, I have a PNG with some drawing, like a flower, on a transparent background. I want to apply the gradient to the object inside that picture at runtime using a gradient.png, as shown in the linked picture. Is that possible with masking?
Thanks
Looks like you should be able to use CGImageMaskCreate:
- (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
CGImageRef maskRef = maskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
return [UIImage imageWithCGImage:masked];
}
For a longer discussion check out the comment thread here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
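Usage would be along these lines (the image names and imageView are placeholders; with a CGImageMaskCreate-style mask, black areas of the mask let the image show through and white areas hide it):
UIImage *gradient = [UIImage imageNamed:@"gradient.png"];    // the fill you want to show
UIImage *shapeMask = [UIImage imageNamed:@"flowerMask.png"]; // grayscale mask of the shape
UIImage *result = [self maskImage:gradient withMask:shapeMask];
imageView.image = result;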
Yes, it is :)
textField.textColor = [UIColor colorWithPatternImage: [UIImage imageNamed:@"rainbowGradient.png"]];
If you want heavy control, Jason Whyne's idea might work. But I like this one, because it's about 8 lines shorter.
Here's just another way to draw an image of text masking something. It's based on kCGBlendModeSourceIn blending mode: you draw text on a clear background and then draw the fill all over the place.
NSString *theString = ...;
UIFont *theFont = ...;
CGSize stringSize = [theString sizeWithFont:theFont];
// The background must be clear (fully transparent), hence NO as the 2nd argument
UIGraphicsBeginImageContextWithOptions(stringSize, NO, 0);
[theString drawAtPoint:CGPointZero withFont:theFont];
// This effectively colorizes the image. Use a pattern color...
[patternColor set];
UIRectFillUsingBlendMode(CGRectMake(0, 0, stringSize.width, stringSize.height), kCGBlendModeSourceIn);
// ... or an image:
[patternImage drawInRect:CGRectMake(0, 0, stringSize.width, stringSize.height) blendMode:kCGBlendModeSourceIn alpha:1.0f];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The sample using a mask is fine and works well, but you are leaking.
CGImageMaskCreate and CGImageCreateWithMask both allocate (following the "create" -> retain rule),
so you should release the mask and the masked image after using them:
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage * result = [UIImage imageWithCGImage:masked];
CGImageRelease(mask);
CGImageRelease(masked);
return result;
As per ADC docs:
...
Return Value
A Quartz bitmap image mask. You are responsible for releasing this object by calling CGImageRelease.
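Folding those releases back into the earlier helper gives something like this (the same method as above, just with the leaks plugged):
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // both "Create" calls above return objects we own, so release them
    CGImageRelease(mask);
    CGImageRelease(masked);
    return result;
}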

How can I draw a CGImageRef context on the screen?

I have a beautiful CGImageRef context, which I spent the whole day creating in order to get alpha values ;)
It's defined like that:
CGContextRef context = CGBitmapContextCreate(bitmapData, pixWidth, pixHeight, 8, pixWidth, NULL, kCGImageAlphaOnly);
So as I understand it, that context somehow represents my image, but only "virtually", invisible somewhere in memory.
Can I stuff that into a UIImageView or draw it directly to the screen? I guess the alpha would be converted to grayscale or something like that.
You can create a UIImage by calling:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage
and then draw the UIImage using:
- (void)drawAtPoint:(CGPoint)point
Go look at CGBitmapContextCreateImage(), which can give you a CGImageRef from your bitmap context. You can then draw that using the CGContext... functions or make a UIImage using +[UIImage imageWithCGImage:].
CGSize size = ...;
UIGraphicsBeginImageContext(size);
...
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
...
CGPoint pt = ...;
[img drawAtPoint:pt];
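Putting the two answers together, the path from the alpha-only bitmap context to something on screen might look like this (a sketch; myImageView is assumed to be a UIImageView already in your view hierarchy):
// turn the offscreen bitmap context into a CGImage, then a UIImage
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
// display it (exactly how alpha-only data is rendered depends on where it is drawn)
myImageView.image = uiImage;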