I have the following code which I use to capture the contents of a view into an image:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
    UIGraphicsBeginImageContextWithOptions(self.mainView.bounds.size, NO, [UIScreen mainScreen].scale);
}
else
{
    UIGraphicsBeginImageContext(self.mainView.bounds.size);
}
[self.mainView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
Recently, I tried attaching a GLKView (which I use to apply Core Image filters in real time on the GPU) to the mainView. When I execute the above code, the graphics in the GLKView are not captured; the view is simply ignored.
So my question is, is it possible to capture graphics to an image that are drawn on the GPU and haven't yet been copied back to the CPU?
You need to grab the view's framebuffer pixel data using OpenGL ES. You can't do it with renderInContext:.
There are a couple of ways to use OpenGL ES to grab the data. Look at this answer for details.
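One common way is to read the pixels back with glReadPixels and wrap them in a UIImage. A rough sketch (assuming ARC and OpenGL ES 2; imageFromGLKView: is a placeholder name):
#import <GLKit/GLKit.h>
#import <OpenGLES/ES2/gl.h>

// Sketch: bind the GLKView's drawable and read its pixels back to the CPU,
// then wrap them in a UIImage.
- (UIImage *)imageFromGLKView:(GLKView *)glkView
{
    [EAGLContext setCurrentContext:glkView.context];
    [glkView bindDrawable];

    size_t width = (size_t)glkView.drawableWidth;
    size_t height = (size_t)glkView.drawableHeight;
    size_t bytesPerRow = width * 4;
    NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];

    // Read RGBA bytes straight out of the currently bound framebuffer.
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.mutableBytes);

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)pixels);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                       kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    // glReadPixels returns rows bottom-up; drawing the CGImage into a UIKit
    // (top-down) image context flips it back the right way up.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), cgImage);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    return image;
}
Note that on iOS 5 and later, GLKView also provides a -snapshot method that returns a UIImage directly, which is usually simpler.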
I have the following code:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(mainView.bounds.size, NO, [UIScreen mainScreen].scale);
}
else {
    UIGraphicsBeginImageContext(mainView.bounds.size);
}
[mainView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saveImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In mainView, there is a masked subview that does not appear in saveImage when using this method. However, I understand there used to be a UIGetScreenImage method pre-iOS 4 that did capture such content. My question is: what is the best way to capture CALayer content like this in iOS 6? Is UIGetScreenImage still private?
I think there was a similar question about a week ago: Mask does not work when capturing a uiview to a uiimage
On iOS 6 there is a problem capturing a UIView that has a mask applied (by the way, this has been fixed in iOS 7): you capture the image, but the mask is not applied.
I posted a lengthy solution which involved applying the mask manually to the captured image. It's not very difficult and I also made a demo project of this. You can download it here:
https://bitbucket.org/reydan/so_imagemask
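In outline, the manual approach goes roughly like this (a sketch for a view whose own layer has a mask, not the exact code from that project; nested masked subviews would need the same treatment per subview):
// Sketch: renderInContext: ignores layer.mask on iOS 6, so capture the view and
// the mask separately, then combine them with destination-in compositing
// (which keeps the captured pixels only where the mask has alpha).
- (UIImage *)maskedSnapshotOfView:(UIView *)view
{
    CGSize size = view.bounds.size;
    CGRect rect = (CGRect){CGPointZero, size};

    // 1. Capture the view itself (the mask is ignored here).
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *flat = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CALayer *maskLayer = view.layer.mask;
    if (maskLayer == nil) {
        return flat;
    }

    // 2. Capture the mask layer on its own, preserving its alpha channel.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [maskLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 3. Keep the captured pixels only where the mask is opaque.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [flat drawInRect:rect];
    [maskImage drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return result;
}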
If I did not understand your problem correctly, please tell me so I can remove this answer.
Try getting the presentation layer instead, as it contains the layer's current state.
[mainView.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html#//apple_ref/occ/instm/CALayer/presentationLayer
I am performing an image-sequence animation on the main thread.
At the same time, I want to take snapshots of the device screen in the background.
Using those snapshots, I then want to make a video.
Thanks,
Keyur Prajapati
To take screenshots while the image animations are running, use
[self.view.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
instead of
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
This captures the layer's state mid-animation.
iOS uses Core Animation as the rendering system. Each UIView is backed by a CALayer. Each visible layer tree is backed by a presentation tree. The layer tree contains the object model values for each layer, i.e., values you set when you assign a value to a layer property (A and B). The presentation tree contains the values that are currently being presented to the user as an animation takes place (interpolated values between A and B).
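As a quick illustration of the difference (assuming a position animation is currently in flight on view.layer):
// The model layer reports the destination value (B); the presentation layer
// reports the value currently on screen (somewhere between A and B).
CALayer *modelLayer = view.layer;
CALayer *presentationLayer = (CALayer *)modelLayer.presentationLayer; // nil if nothing has been committed yet
NSLog(@"model position: %@", NSStringFromCGPoint(modelLayer.position));
if (presentationLayer) {
    NSLog(@"on-screen position: %@", NSStringFromCGPoint(presentationLayer.position));
}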
If you're doing it in Core Animation, you can render the layer contents into a bitmap using -renderInContext:. Have a look at Matt Long's tutorial. It's for Objective-C on the Mac, but it can easily be converted for use on the iPhone.
Create another thread where you can do this:
// Create the rect to capture; for the full screen this is 320x480 (points)
CGRect contextRect = CGRectMake(0, 0, 320, 480);
// This is what you need
UIGraphicsBeginImageContext(contextRect.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
viewImage is the image you need.
You can put this code in a method that is called on a timed basis, e.g. every 5 seconds, or according to your requirements.
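For instance, a timer-driven variant might look like this (a sketch using an NSTimer on the main run loop rather than a separate thread; captureTimer and capturedFrames are placeholder properties, and the 5-second interval is arbitrary):
// Start a repeating timer that grabs a frame every 5 seconds.
- (void)startCapturing
{
    self.captureTimer = [NSTimer scheduledTimerWithTimeInterval:5.0
                                                         target:self
                                                       selector:@selector(captureFrame)
                                                       userInfo:nil
                                                        repeats:YES];
}

- (void)captureFrame
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    // Use the presentation layer so in-flight animation state is captured.
    [self.view.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.capturedFrames addObject:frame]; // feed these frames to your video writer later
}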
Hope this is what you needed.
I am trying to programmatically save an image of the current view to a UIImage.
The current view contains a background and a number of UIImageViews that have had a 3D transformation applied to their layers, e.g.
CATransform3D t = CATransform3DIdentity;
t = CATransform3DRotate(t, rotationX, 1.0, 0, 0);
t = CATransform3DRotate(t, rotationY, 0, 1.0, 0);
object.layer.transform = t;
I am currently grabbing a screenshot of the current view with the following code:
UIGraphicsBeginImageContext(background.bounds.size);
[background.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, the 3D transformations on the objects are lost, i.e. the objects appear in the saved image as if no 3D transformations had been applied. Is there a way to grab a screenshot programmatically that includes the 3D transformations?
UIGetScreenImage() is not an option as that is a private API and this app is going to the app store.
I don't think it can be done at the moment, as renderInContext: does not render 3D Core Animation transforms, and UIGetScreenImage() is not a public API, as you point out. The same question has already been asked on Stack Overflow:
1) How do I create/render a UIImage from a 3D transformed UIImageView?
2) CALayer renderInContext
If that feature is key to your application, I'd look into using an OpenGL view and capturing the color buffer (which can be rendered offscreen if you don't want to display it).
I'm trying to emulate the animation seen in the default camera app, where a snapshot of the cameras viewfinder is animated into the corner of the apps display.
The AVCaptureVideoPreviewLayer object that holds the key to solving this problem isn't very accommodating: trying to create a copy of it in a new layer with
- (id)initWithLayer:(id)layer
returns an empty layer without the image snapshot, so clearly there is some deeper magic going on here.
Your clues/boos are most welcome.
M.
I'm facing the same woes, from a slightly different angle.
Here are some possible solutions, none of which are too great IMO:
You can add both an AVCaptureStillImageOutput and an AVCaptureVideoDataOutput to an AVCaptureSession. When you set the sessionPreset to AVCaptureSessionPresetHigh you'll start getting frames through the API, and when you switch to AVCaptureSessionPresetPhoto you can take real images. So right before taking the picture, you can switch to video, get a frame, and then return to camera. The major caveat is that it takes a "long" time (a couple of seconds) for the camera to switch between the video camera and the picture camera.
Another option would be to use only the camera output (AVCaptureStillImageOutput), and use UIGetScreenImage to get a screen capture of the phone. You could then crop out the controls and leave only the image. This gets complicated if you're showing UI controls over the image. Also, according to this post, Apple started rejecting apps that use this function (it was always iffy).
Aside from these, I also tried playing with AVCaptureVideoPreviewLayer. There's this post on saving a UIView or CALayer to a UIImage, but every variant produces clear or white images. I tried accessing the layer, the view's layer, the superlayer, the presentationLayer, the modelLayer, but to no avail. I guess the data in AVCaptureVideoPreviewLayer is very internal, and not really part of the regular layer infrastructure.
Hope this helps,
Oded.
I think you should add an AVCaptureVideoDataOutput to the current session with:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[session addOutput:videoOutput];
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
Then, implement the delegate method below to get your image snapshot:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // Add your code here that uses the image.
    dispatch_async(dispatch_get_main_queue(), ^{
        _imageView.image = image;
    });
}
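The imageFromSampleBuffer: helper isn't shown here; a sketch along the lines of Apple's AVFoundation sample code, assuming the kCVPixelFormatType_32BGRA format configured above, would be:
// Convert a BGRA sample buffer into a UIImage via a bitmap context.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}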
This will consume memory and reduce the performance of the app. To lighten the load, you can also throttle your AVCaptureVideoDataOutput with:
videoOutput.minFrameDuration = CMTimeMake(1, 15);
You can also use alwaysDiscardsLateVideoFrames.
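For example:
// Drop frames that arrive while the delegate callback is still busy processing
// the previous one (this is the default for AVCaptureVideoDataOutput).
videoOutput.alwaysDiscardsLateVideoFrames = YES;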
There are two ways to grab frames of the preview: AVCaptureVideoDataOutput and AVCaptureStillImageOutput. :)
If your capture session is set up to grab video frames, make your layer with the CGImage from a chosen frame. If it's set up for stills, wait until you get your still image and make your layer from a scaled-down version of that CGImage. If you don't have an output on your session yet, you'll have to add one, I think.
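For example, once you have a UIImage for a chosen frame, building the snapshot layer is straightforward (previewView and frameImage are placeholder names):
// Show the grabbed frame in a plain layer that can then be animated into the corner.
CALayer *snapshotLayer = [CALayer layer];
snapshotLayer.frame = previewView.bounds;          // previewView: the view hosting the preview (assumed)
snapshotLayer.contents = (id)frameImage.CGImage;   // use (__bridge id) under ARC
[previewView.layer addSublayer:snapshotLayer];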
Starting in iOS 7, you can use -[UIView snapshotViewAfterScreenUpdates:] to snapshot the UIView wrapping your AVCaptureVideoPreviewLayer. This is not the same as UIGetScreenImage, which will get your app rejected.
UIView *snapshot = [self.containerView snapshotViewAfterScreenUpdates:YES];
Recall the old-school way of turning a view into an image. For some reason it worked on everything except for camera previews:
UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, [UIScreen mainScreen].scale);
[self.containerView drawViewHierarchyInRect:self.containerView.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In my code I'm trying to show a UIWebView as a page is loading, and then, when it's done, capture an image from the web view to cache and display later (so I don't have to reload and render the web page).
I have something along the lines of:
CGContextRef context = CGBitmapContextCreate(…);
[[webView layer] renderInContext:context];
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);   // both the image and the context need releasing
CGContextRelease(context);
The problem I'm running into is that, due to UIWebView's tiling, sometimes only half of the page is rendered to the context by the time I capture the image.
Is there a way to detect or block on UIWebView's background rendering thread so that I can get the image only after all of the rendering has finished?
UPDATE: It may be that thread race conditions were a red herring (it's unclear from the documentation, at any rate, whether UIWebView's custom layer or a CATiledLayer in general blocks on its background threads).
This may instead have been an invalidation issue (despite several sorts of calls to setNeedsDisplay on both the UIWebView and its layer). Changing the bounds of the UIWebView before rendering it appears to have eliminated the "not drawing the whole thing" problem.
I still ran into a problem where a few tiles were being drawn at the old scale, but calling renderInContext: twice seems to have mitigated that sufficiently.
UIWebView is probably using a CATiledLayer or custom derivative. You may be able to replace the layer with something of your own choosing such as a simple CALayer which does not do threaded drawing. Replace the layer before you start loading content.
If replacing the layer with a standard CALayer does not work, you may have to make your own subclass that emulates the behavior of a CATiledLayer without actually being threaded.
Edit:
From CATiledLayer.h
/* Note: do not attempt to directly modify the `contents' property of
* an CATiledLayer object - doing so will effectively turn it into a
* regular CALayer. */
So you may just be able to set the contents to nil before calling renderInContext:
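Untested, but the idea would be something like this (a sketch of that suggestion, not a verified fix; the tiled layer may actually belong to one of the web view's internal subviews rather than webView.layer itself):
// Clearing `contents' should, per the header note above, make the tiled layer
// behave like a regular CALayer before we render it.
webView.layer.contents = nil;
UIGraphicsBeginImageContextWithOptions(webView.bounds.size, NO, 0);
[webView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();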