AVCaptureVideoPreviewLayer: taking a snapshot - iPhone

I'm trying to emulate the animation seen in the default camera app, where a snapshot of the camera's viewfinder is animated into the corner of the app's display.
The AVCaptureVideoPreviewLayer object that holds the key to solving this problem isn't very open to these requirements: trying to create a copy of it in a new layer with
- (id)initWithLayer:(id)layer
returns an empty layer, without the image snapshot, so clearly there is some deeper magic going on here.
Your clues/boos are most welcome.
M.

Facing the same woes, from a slightly different angle.
Here are some possible solutions, none of which are too great IMO:
You can add both an AVCaptureStillImageOutput and an AVCaptureVideoDataOutput to an AVCaptureSession. When you set the sessionPreset to AVCaptureSessionPresetHigh you'll start getting frames from the video data output, and when you switch to AVCaptureSessionPresetPhoto you can take real photos. So right before taking the picture, you can switch to the video preset, grab a frame, and then switch back. The major caveat is that it takes a "long" time (a couple of seconds) for the camera to switch between the video and photo configurations.
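Not the answerer's actual code, but a minimal sketch of that first approach, assuming a session that already has a camera input attached:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;
AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
if ([session canAddOutput:stillOutput]) [session addOutput:stillOutput];
if ([session canAddOutput:videoOutput]) [session addOutput:videoOutput];
// Right before taking the photo: switch to a video preset so the data output
// delivers frames, and grab one in its delegate callback...
[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetHigh;
[session commitConfiguration];
// ...then switch back to the photo preset and take the real picture.
[session beginConfiguration];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session commitConfiguration];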
Another option would be to use only the camera output (AVCaptureStillImageOutput), and use UIGetScreenImage to get a screen capture of the phone. You could then crop out the controls and leave only the image. This gets complicated if you're showing UI controls over the image. Also, according to this post, Apple started rejecting apps that use this function (it was always iffy).
Aside from these I also tried playing with AVCaptureVideoPreviewLayer. There's this post on saving a UIView or CALayer to a UIImage, but it all produces clear or white images. I tried accessing the layer, the view's layer, the superlayer, the presentationLayer, and the modelLayer, but to no avail. I guess the data in AVCaptureVideoPreviewLayer is very internal and not really part of the regular layer infrastructure.
Hope this helps,
Oded.

I think you should add an AVCaptureVideoDataOutput to the current session with:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[session addOutput:videoOutput];
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
Then, implement the delegate method below to get your image snapshot:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // Add your code here that uses the image.
    dispatch_async(dispatch_get_main_queue(), ^{
        _imageView.image = image;
    });
}
Processing every frame this way consumes memory and reduces the performance of the app. To lighten the load, you can also limit the frame rate of your AVCaptureVideoDataOutput with:
videoOutput.minFrameDuration = CMTimeMake(1, 15);
You can also set alwaysDiscardsLateVideoFrames to YES.
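For completeness, the imageFromSampleBuffer: helper called in the delegate method isn't shown above; a sketch along the lines of Apple's QA1702 sample, assuming the BGRA pixel format configured earlier, looks like this:
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Wrap the BGRA pixel data in a bitmap context and snapshot it into a CGImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}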

There are two ways to grab frames of the preview: AVCaptureVideoDataOutput and AVCaptureStillImageOutput :)
If your capture session is set up to grab video frames, make your layer with the CGImage from a chosen frame. If it's set up for stills, wait until you get your still image and make your layer from a scaled-down version of that CGImage. If you don't have an output on your session yet, you'll have to add one, I think.
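To illustrate the stills path, here is a rough sketch; stillOutput is assumed to be an AVCaptureStillImageOutput already added to the session, and snapshotLayer is the layer you want to fill:
AVCaptureConnection *connection = [stillOutput connectionWithMediaType:AVMediaTypeVideo];
[stillOutput captureStillImageAsynchronouslyFromConnection:connection
                                          completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
    if (buffer == NULL) return;
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
    UIImage *still = [UIImage imageWithData:jpegData];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Use (a scaled-down version of) the CGImage as the layer's contents.
        snapshotLayer.contents = (id)still.CGImage;
    });
}];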

Starting in iOS 7, you can use UIView's snapshotViewAfterScreenUpdates: to snapshot the UIView wrapping your AVCaptureVideoPreviewLayer. This is not the same as UIGetScreenImage, which will get your app rejected.
UIView *snapshot = [self.containerView snapshotViewAfterScreenUpdates:YES];
Contrast this with the old-school way of turning a view into an image, which for some reason works on everything except camera previews:
UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, [UIScreen mainScreen].scale);
[self.containerView drawViewHierarchyInRect:self.containerView.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
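To get the camera-app-style animation asked about in the original question, you could then animate the snapshot view into a corner. A sketch, where the 100-point target size is just an arbitrary assumption:
UIView *snapshot = [self.containerView snapshotViewAfterScreenUpdates:YES];
snapshot.frame = self.containerView.frame;
[self.view addSubview:snapshot];
[UIView animateWithDuration:0.3 animations:^{
    // Shrink the snapshot into the bottom-right corner of the screen.
    snapshot.frame = CGRectMake(CGRectGetMaxX(self.view.bounds) - 110.0,
                                CGRectGetMaxY(self.view.bounds) - 110.0,
                                100.0, 100.0);
}];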

Related

Speed up UIImagePickerController by using a low res image then later swapping in the high res image

There is a great wiki about image loading from the camera picker, which made me aware of the costs of working with an image at full resolution.
At the moment, when a photo is picked, I push a new view controller and display the image at full resolution. Pushing the view is a really slow and choppy experience (about 1 fps!) that I want to smooth out. Comparing to picking a photo in Instagram, I notice that they use a low-resolution image and later swap in the full image. (I need the full-res image because the user should be able to zoom and pan.)
The idea I have is something like this:
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *fullImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Push a view controller and give it the image.....
}

- (void)viewDidLoad {
    CGSize smallerImageSize = _imageView.bounds.size;
    UIImage *smallerImage = [MyHelper quickAndDirtyImageResize:_fullImage
                                                        toSize:smallerImageSize];
    // Set the low res image for now... then later swap in the high res
    _imageView.image = smallerImage;

    // Swap in high res image async
    // This is the part I'm unsure about... I'm sure UIKit isn't thread-safe!
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        _imageView.image = _fullImage;
    });
}
I think a UIImage isn't mapped into memory until it is used, so it doesn't slow things down until it's given to the image view. Is this correct?
I think image decoding is already done asynchronously by the system; however, it still slows the phone down considerably while it's loading.
Is there a way to perform some of the work required to display an image on a very low priority background queue?
You're trying to do things the most complicated way :)
Why not just prepare the small image before pushing the view controller and pass it along? Look at this code:
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *fullImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    UIImage *smallImage = [fullImage imageScaledToSize:self.view.bounds.size];
    // Push a view controller and give it BOTH images
}

// And in your pushed view controller
- (void)viewDidLoad
{
    _imageView.image = self.smallImage;
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    _imageView.image = self.fullImage;
}
The main thing is that viewDidAppear: will be called right after the push animation is done, so you can switch images there without any worries.
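Note that imageScaledToSize: is not a built-in UIImage method; the answer presumably has a category like the following in mind (a sketch, not the answerer's code):
@implementation UIImage (Scaling)
// Draws the receiver into a context of the requested size and returns the result.
- (UIImage *)imageScaledToSize:(CGSize)size {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [self drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}
@end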
In addition to Andrey's answer, instead of using imageScaledToSize, use CGImageSourceCreateThumbnailAtIndex. In fact, it's very possible (I am pretty sure it's the case) that any image used in the photo album already has a thumbnail. So instead of bothering with the image itself, grab the existing thumbnail and display it, then use Andrey's code to switch in the main image. This way you do as little work as possible during the animation period.
Calling CGImageSourceCreateThumbnailAtIndex will return the thumbnail image, whether it's already there or needs to be generated. So it'll be quite safe to use, and probably at least as fast as imageScaledToSize.
You can find complete code samples to use it in the Apple docs, no need to duplicate it here.
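Still, for orientation, a minimal sketch of the call (assuming ARC and that the picked image is available as NSData in imageData) might look like this:
#import <ImageIO/ImageIO.h>
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
NSDictionary *options = @{
    (NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES, // generate one if none is embedded
    (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,   // respect EXIF orientation
    (NSString *)kCGImageSourceThumbnailMaxPixelSize : @640
};
CGImageRef thumbnailRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *thumbnail = [UIImage imageWithCGImage:thumbnailRef];
CGImageRelease(thumbnailRef);
CFRelease(source);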
Have you tried using ALAssetsLibrary to load the thumbnail of the image instead of trying to load the image at full resolution? It's faster than resizing as well.
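A sketch of that approach, assuming ARC and that you fetch the asset via the picker's UIImagePickerControllerReferenceURL:
#import <AssetsLibrary/AssetsLibrary.h>
NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL resultBlock:^(ALAsset *asset) {
    // The asset already carries a small thumbnail; no decoding of the full image is needed.
    UIImage *thumb = [UIImage imageWithCGImage:[asset thumbnail]];
    dispatch_async(dispatch_get_main_queue(), ^{
        _imageView.image = thumb;
    });
} failureBlock:^(NSError *error) {
    NSLog(@"Could not load asset: %@", error);
}];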

Getting Blank Image while trying to capture MPMoviePlayerViewController view using UIGraphicsGetImageFromCurrentImageContext()

I have a requirement to capture the iPhone screen while my app is in the foreground. I used UIGraphicsGetImageFromCurrentImageContext() for this; it works in most scenarios, but it fails when video is playing via MPMoviePlayerViewController or AVPlayer and gives back a black image with the player controls.
My guess is that MPMoviePlayerViewController renders its frames using OpenGL, and UIGraphicsGetImageFromCurrentImageContext() is not able to capture them?
Am I missing something, or is there an alternative solution available to capture the iPhone screen while the app is in the foreground?
There isn't an easy solution for this (or at least none that I know of), but here are the possible approaches I have found. When you simply render the view into a context, you get a blank area in place of the player while everything else comes out as expected.
Possible solutions that I know of:
Private API
You can use the UIGetScreenImage() function to capture the whole device screen, including the player and its controls. This is the easy way to get an image of the player together with the view.
Note: some people report that use of this function may cause rejection of your application (I have never used it in an App Store app, so I don't have much experience with it :)).
Second way
If you want an image of the player along with the other things in your main view or on the iPhone screen, the basic idea is to capture two images (one of the movie player, and one of the whole screen as you are doing now with UIGraphicsGetImageFromCurrentImageContext) and combine them into one image with some coordinate calculations. The accuracy depends on those calculations.
Here I prefer to use AVPlayer instead of MPMoviePlayer; I'm not sure this is strictly true, but my feeling is that AVPlayer provides the exact frame at a given time (good accuracy, i.e. the exact image that is showing on the screen).
Set the gravity mode of the player layer to AVLayerVideoGravityResizeAspect. This mode preserves the aspect ratio and fits the video within the layer's bounds (it is also the default). You can set it this way:
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
where playerLayer is your AVPlayerLayer instance.
Now get an image of the current frame with AVAssetImageGenerator:
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
imageGenerator.appliesPreferredTrackTransform = YES;
/* AVAssetImageGenerator scales images so that they fit within the defined bounding box.
   Images are never scaled up, so this should be the size of the AVPlayer. */
imageGenerator.maximumSize = CGSizeMake(400, 400);
CGImageRef imgRef = [imageGenerator copyCGImageAtTime:storedCMTime actualTime:NULL error:NULL];
[imageGenerator release];
Please note that the generated image might not be exactly the size you asked for in maximumSize because, as the comment above says, images are never scaled up. If the image is not the exact size you need, use an image utility function to scale it to the size of your player, but don't spoil the aspect ratio, since we set the layer's gravity to AVLayerVideoGravityResizeAspect. Store this image temporarily.
Now capture the image of the view using UIGraphicsGetImageFromCurrentImageContext (as you are doing currently), and draw the stored AVPlayer image over exactly the black area that shows up in your view (you may need some trial and error to line it up exactly).
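A sketch of that compositing step, assuming playerFrameImage is the UIImage made from the AVAssetImageGenerator output and playerLayer is the AVPlayerLayer whose area comes out black:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()]; // the player area renders black
[playerFrameImage drawInRect:playerLayer.frame];                 // paint the captured frame over it
UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();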
Here I have tried to cover the main points that can cause trouble or are not immediately obvious.
Update:
Screen capturing solutions from APPLE: http://developer.apple.com/library/ios/#qa/qa1703/_index.html#//apple_ref/doc/uid/DTS40010193
If you would like to go with MPMoviePlayerController, that's okay too. Get the thumbnail using the code shown by @Kuldeep, and combine the view's image and the thumbnail using image masking, exactly as explained here: Link
I would rather suggest using the snippet below; it may be easier to use. You just need to pass the time at which you want to capture the video frame.
float timeFactor = 60.0;
CGRect rect = CGRectMake(self.view.center.x, self.view.center.y, 100.0, 100.0);
MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:videoURL];
UIImage *thumbnail = [player thumbnailImageAtTime:timeFactor timeOption:MPMovieTimeOptionNearestKeyFrame];
UIImageView *imagV = [[UIImageView alloc] initWithImage:thumbnail];
[imagV setFrame:rect];
Well, it seems I've found a solution to capture frames from an AVPlayer:
- (UIImage *)captureFrame {
    AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:[self.player.currentItem asset]];
    if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 5) {
        [imageGenerator setRequestedTimeToleranceAfter:kCMTimeZero];
        [imageGenerator setRequestedTimeToleranceBefore:kCMTimeZero];
    }
    CGImageRef ref = [imageGenerator copyCGImageAtTime:self.player.currentItem.currentTime actualTime:nil error:nil];
    UIImage *img = [UIImage imageWithCGImage:ref];
    CGImageRelease(ref);
    return img;
}

writeImageToSavedPhotosAlbum save only a few images

I've been following Apple's example QA1702 on how to capture images using AVFoundation. I won't cite the code here because of space concerns. A brief description of what I'm trying to achieve:
Use the iPhone camera to pass a "video" (actually a sequence of images) to a web server; I know this is possible. However, in order to be able to pass an image using HTTP POST as in this example, I have to save the image, not necessarily in the photos album, but I do want to be able to view the pictures there as well for debugging purposes.
The apple QA1702 contains 3 methods:
- (void)setupCaptureSession
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
//this is modified to be void as you might see, will get back to this
- (void) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
In setupCaptureSession I start the session as in the example. captureOutput only calls imageFromSampleBuffer:, and that's where I've added some changes:
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// library is declared in the .h and is an ALAssetsLibrary
[library writeImageToSavedPhotosAlbum:quartzImage orientation:ALAssetOrientationDown completionBlock:nil];
// Release the Quartz image
CGImageRelease(quartzImage);
I've removed the creation of the UIImage and changed the return type to void, as you might have noticed, since I call writeImageToSavedPhotosAlbum: with the CGImageRef here instead.
The problem as I see it is that during the 10 seconds that I capture images, ~150 calls to captureOutput are made, and therefore the same number of calls to writeImageToSavedPhotosAlbum:, but only ~5-10 pictures are saved. I'm aware of how memory-hungry this is, but since I'm not getting any warnings I can't figure out why more images aren't created, and what I can do about it. Is it because (and I'm only guessing now) writeImageToSavedPhotosAlbum: starts new threads and the iPhone can't handle more than a certain number of threads? I've read something about NSOperationQueue; should I look into it?
On a side note, I use an NSTimer in setupCaptureSession:
[NSTimer scheduledTimerWithTimeInterval:10.0 target:self selector:@selector(timerFireMethod:) userInfo:nil repeats:NO];
However, I want to start it in the first call to captureOutput in order to avoid the time that elapses while the video camera starts up. But if I move this line to captureOutput, then timerFireMethod: is never called. Any ideas?
This is solvable with NSOperationQueue, but it's no longer interesting to me since writing to file is far too inefficient for most applications.
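For anyone who still wants to try the NSOperationQueue route, a sketch of serializing the saves (the saveQueue property is an assumption, not part of QA1702):
// One-time setup, e.g. in setupCaptureSession: a serial queue so that only one
// writeImageToSavedPhotosAlbum call is in flight at a time.
self.saveQueue = [[NSOperationQueue alloc] init];
self.saveQueue.maxConcurrentOperationCount = 1;
// In imageFromSampleBuffer:, instead of the direct write:
CGImageRetain(quartzImage); // keep the image alive until the async save completes
[self.saveQueue addOperationWithBlock:^{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    [library writeImageToSavedPhotosAlbum:quartzImage
                              orientation:ALAssetOrientationDown
                          completionBlock:^(NSURL *assetURL, NSError *error) {
        dispatch_semaphore_signal(done);
    }];
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    dispatch_release(done);
    CGImageRelease(quartzImage);
}];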

Show camera stream while AVCaptureSession's running

I was able to capture video frames from the camera using AVCaptureSession according to http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html. However, it seems that AVCaptureSession captures frames from the camera without showing the camera stream on the screen. I would like to also show the camera stream, just like UIImagePickerController does, so that the user knows the camera is turned on and can see what it is pointed at. Any help or pointers would be appreciated!
AVCaptureVideoPreviewLayer is exactly what you're looking for.
The code fragment Apple uses to demonstrate how to use it is:
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = <#The view in which to present the layer#>;
previewLayer.frame = aView.bounds; // Assume you want the preview layer to fill the view.
[aView.layer addSublayer:previewLayer];

UIImage View + Pinch/Zoom feature when image is downloaded from web

I have a UIImageView which picks up different images from the web, all at different resolutions. I have them display as aspect fit so that the whole picture shows, even if that means leaving empty space at the top or sides.
What I want is the ability, once the image is displayed, to zoom in and out using a pinch gesture or any other way.
NSURL* iURL = [NSURL URLWithString:tURL];
NSData* iData = [NSData dataWithContentsOfURL:iURL];
UIImage* iImage;
iImage = [UIImage imageWithData:iData];
FullScreenImage.image = iImage;
All this code is being executed on another thread, then it returns to the main thread and is displayed on a view using
[navigationController pushViewController:vFullscreen animated:YES];
Thanks for the help
You will need to use a UIScrollView, as UIImageView does not offer any built-in zooming. See the Apple example entitled ScrollViewSuite.
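A minimal sketch of the scroll-view approach (the scrollView and fullScreenImageView property names are assumptions, not from the question):
// In the view controller that shows the downloaded image.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.scrollView.minimumZoomScale = 1.0;
    self.scrollView.maximumZoomScale = 4.0;
    self.scrollView.delegate = self; // the controller adopts UIScrollViewDelegate
    [self.scrollView addSubview:self.fullScreenImageView];
}
// UIScrollViewDelegate: tell the scroll view which subview to zoom.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return self.fullScreenImageView;
}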