Applying an Effect to the iPhone Camera Preview "Video"

My goal is to write a custom camera view controller that:
1. Can take photos in all four interface orientations with both the back and, when available, front camera.
2. Properly rotates and scales the preview "video" as well as the full-resolution photo.
3. Allows a (simple) effect to be applied to BOTH the preview "video" and the full-resolution photo.
Implementation (on iOS 4.2 / Xcode 3.2.5):
Due to requirement (3), I needed to drop down to AVFoundation.
I started with Technical Q&A QA1702 and made these changes:
Changed the sessionPreset to AVCaptureSessionPresetPhoto.
Added an AVCaptureStillImageOutput as an additional output before starting the session (a rough sketch of this setup follows).
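For reference, here is a rough sketch of what that QA1702-derived setup might look like with those two changes. The variable names, pixel format, and error handling are illustrative assumptions, not code from the question:

// Sketch of the QA1702-style capture setup with the two changes above.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;   // change 1: photo preset

NSError *error = nil;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input) [session addInput:input];

// Video data output that delivers the preview "video" frames (as in QA1702)
AVCaptureVideoDataOutput *videoOutput = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[session addOutput:videoOutput];

// change 2: still image output for the full-resolution photo
AVCaptureStillImageOutput *stillImageOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
[session addOutput:stillImageOutput];

[session startRunning];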
The issue I am having is the performance of processing the preview image (a frame of the preview "video").
First, I get the UIImage result of imageFromSampleBuffer: on the sample buffer from captureOutput:didOutputSampleBuffer:fromConnection:. Then, I scale and rotate it for the screen using a Core Graphics bitmap context (roughly as sketched below).
At this point, the frame rate is already under the 15 FPS specified on the session's video output, and when I add the effect it drops to around or below 10. The app quickly crashes due to low memory.
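For illustration, the scale-and-rotate step is along these lines; targetSize, the fixed 90-degree rotation, and image are simplified placeholders rather than the actual project code:

// Simplified sketch: scale + rotate a preview frame for display.
CGSize targetSize = self.view.bounds.size;
UIGraphicsBeginImageContext(targetSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, targetSize.width / 2.0, targetSize.height / 2.0);
CGContextRotateCTM(context, M_PI_2);   // rotate for the current interface orientation
[image drawInRect:CGRectMake(-targetSize.height / 2.0, -targetSize.width / 2.0,
                             targetSize.height, targetSize.width)];
UIImage *previewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();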
I have had some success with dropping the frame rate to 9 FPS on the iPhone 4 and 8 FPS on the iPod Touch (4th gen).
I have also added some code to "flush" the dispatch queue, but I am not sure how much it actually helps. Basically, every 8-10 frames, a flag is set that signals captureOutput:didOutputSampleBuffer:fromConnection: to return right away rather than process the frame. The flag is reset after a sync operation on the output dispatch queue finishes (sketched below).
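A rough sketch of that "flush" mechanism; skipFrames and videoOutputQueue are placeholder names, not the actual code:

// "Flush" the video output queue as described above.
- (void)flushVideoQueue
{
    skipFrames = YES;                     // captureOutput:... checks this flag and returns early
    dispatch_sync(videoOutputQueue, ^{}); // wait for the frames already queued to drain
    skipFrames = NO;
}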
At this point I don't even mind the low frame rates, but obviously we can't ship with the low memory crashes. Anyone have any idea how to take action to prevent the low memory conditions in this case (and/or a better way to "flush" the dispatch queue)?

To prevent the memory issues, simply create an autorelease pool in captureOutput:didOutputSampleBuffer:fromConnection:.
This makes sense since imageFromSampleBuffer: returns an autoreleased UIImage object. Plus it frees up any autoreleased objects created by image processing code right away.
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    < Add your code here that uses the image >

    [pool release];
}
My testing has shown that this will run without memory warnings on an iPhone 4 or iPod Touch (4th gen) even if the requested FPS is very high (e.g. 60) and the image processing is very slow (e.g. 0.5+ seconds).
OLD SOLUTION:
As Brad pointed out, Apple recommends doing image processing on a background thread so as not to interfere with UI responsiveness. I didn't notice much lag in this case, but best practices are best practices, so use the above solution with the autorelease pool instead of running this on the main dispatch queue / main thread.
To prevent the memory issues, simply use the main dispatch queue instead of creating a new one.
This also means that you don't have to switch to the main thread in captureOutput:didOutputSampleBuffer:fromConnection: when you want to update the UI.
In setupCaptureSession, change FROM:
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
TO:
// we want our dispatch to be on the main thread
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

A fundamentally better approach would be to use OpenGL to handle as much of the image-related heavy lifting for you (as I see you're trying in your latest attempt). However, even then you might have issues with building up frames to be processed.
While it seems strange that you'd be running into memory accumulation when processing frames (in my experience, you just stop getting them if you can't process them fast enough), Grand Central Dispatch queues can get jammed up if they are waiting on I/O.
Perhaps a dispatch semaphore would let you throttle the addition of new items to the processing queues. For more on this, I highly recommend Mike Ash's "GCD Practicum" article, where he looks at optimizing an I/O bound thumbnail processing operation using dispatch semaphores.
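This isn't code from the answer, just a minimal sketch of the semaphore throttle, assuming a frameRenderingSemaphore ivar created once with dispatch_semaphore_create(1):

// Drop frames instead of queuing them when the previous one is still in flight.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0) {
        return;   // still busy with the last frame; let this one go
    }

    // ... process the frame (or hand it off to another queue) ...

    dispatch_semaphore_signal(frameRenderingSemaphore);
}

If the actual processing is handed off to another queue, you would signal the semaphore at the end of that block instead, as the GLCameraRipple answer further down this page does.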

Related

Why is the AVCaptureVideoDataOutput callback dependent on the OpenGL draw framerate?

When looking at the GLCameraRipple example, the AVCaptureVideoDataOutput is set up so that a callback (captureOutput) is called whenever a new frame arrives from the iPhone camera.
However, if I put a sleep(1) at the beginning of the drawInRect function (which is used for the OpenGL drawing), the callback gets called only once per second instead of 30 times per second.
Can anyone tell me why the frame rate of the iPhone camera is linked to the frame rate of the OpenGL draw call?
Update: Steps to reproduce
Download the GLCameraRipple sample from here: http://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Introduction/Intro.html
In RippleViewController.m => captureOutput, add an NSLog(@"Got Frame");. Running it generates a lot of "Got Frame" messages (about 30 per second).
In RippleViewController.m => drawInRect, add a sleep(1); at the very beginning of the function. Only one message per second appears now.
When AVCaptureVideoDataOutput calls the delegate method captureOutput:didOutputSampleBuffer:fromConnection: so that the programmer can edit or record the image from the camera, the method is called on the queue passed to setSampleBufferDelegate:queue: — in this sample, the main queue. Code that interacts with the user interface normally has to run on the main thread, and that is why OpenGL is linked with AVCaptureVideoDataOutput here: the camera callback and the drawing to the screen both run on the main thread.
Also, AVCaptureVideoDataOutput can drop frames if the iPhone cannot finish captureOutput:didOutputSampleBuffer:fromConnection: in time; if processing takes more than about 1/30 of a second, the next frame will be discarded, and you can catch those drops with the captureOutput:didDropSampleBuffer:fromConnection: method (a minimal sketch follows).
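A minimal sketch of that optional delegate method, assuming you just want to log the drops:

// Called on the delegate queue whenever a frame is discarded because the
// previous captureOutput:didOutputSampleBuffer:fromConnection: ran too long.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"Dropped a frame");
}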

How can I delay the stream to a UIImageView using AVCaptureVideoPreviewLayer from the camera?

How can I delay the stream to a UIImageView using AVCaptureVideoPreviewLayer from the camera?
See below how I bind them, but I just can't figure out how to delay it (I don't want it in real time).
AVCaptureVideoPreviewLayer* captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.imageView.bounds;
[self.imageView.layer addSublayer:captureVideoPreviewLayer];
First, you're going to want to remove the preview layer you have right now, as there is no out-of-the-box way to delay those preview frames.
You're going to want to create a buffer. If we're talking about a few frames, you could have an NSMutableArray that you fill up on one end with UIImages while feeding your image view from the other end.
Your UIImage would come from the didOutputSampleBuffer: method; use something like this: UIImage created from CMSampleBufferRef not displayed in UIImageView?
Now, a few challenges you will have to deal with:
You're talking about a delay of multiple seconds; at 30 FPS, 5 seconds is about 150 frames. Storing 150 UIImages in memory isn't going to happen, unless they're very tiny and you're on the latest devices.
You can solve that by saving the images to disk and having your array store only the paths of those images instead of the images themselves (a rough sketch follows this list). You're then probably going to run into performance issues, as you'll be doing read/write operations in real time, and your frame rate is going to suffer.
Because of that reduced frame rate, you're going to have to make sure you don't lose synchronization between the recorded feed and the live feed, otherwise you'll start with a 5-second delay and end up with a much longer one.
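A rough sketch of the disk-backed approach, meant to be called on the main thread from your didOutputSampleBuffer: handling; delayedPaths (an NSMutableArray), delayInFrames, and frameIndex are illustrative names, not working code from this answer:

// Enqueue the newest frame on disk and display the frame from ~delayInFrames ago.
- (void)enqueueDelayedFrame:(UIImage *)image
{
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:
                      [NSString stringWithFormat:@"frame-%lu.jpg", (unsigned long)frameIndex++]];
    [UIImageJPEGRepresentation(image, 0.7) writeToFile:path atomically:YES];
    [delayedPaths addObject:path];

    // Once the buffer covers the desired delay, show (and delete) the oldest frame.
    if ([delayedPaths count] > delayInFrames) {
        NSString *oldest = [delayedPaths objectAtIndex:0];
        self.imageView.image = [UIImage imageWithContentsOfFile:oldest];
        [[NSFileManager defaultManager] removeItemAtPath:oldest error:NULL];
        [delayedPaths removeObjectAtIndex:0];
    }
}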
Good luck with that; it can be done with some trade-offs (a slower frame rate, for instance), but it can be done. (I have done something very similar myself multiple times; I can't share the code for IP reasons.)

How can I modify the GLCameraRipple example to process on a background thread?

I'm trying to modify the GLCameraRipple sample application from Apple to process video frames on a background thread. In this example, it handles each frame on the main thread using the following code:
// Set dispatch to be on the main thread so OpenGL can do things with the data
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
If I change this code to process in a background thread:
dispatch_queue_t videoQueue = dispatch_queue_create("com.test.queue", NULL);
[dataOutput setSampleBufferDelegate:self queue:videoQueue];
then the program crashes.
When I try to create a second EAGLContext with sharing, as specified in Apple's documentation, I only see a green or black screen.
How can I modify this sample application to run on a background thread?
This was actually fairly interesting, after I tinkered with the sample. The problem here is with the CVOpenGLESTextureCacheCreateTextureFromImage() function. If you look at the console when you get the green texture, you'll see something like the following being logged:
Error at CVOpenGLESTextureCacheCreateTextureFromImage -6661
-6661, according to the headers (the only place I could find documentation on these new functions currently), is a kCVReturnInvalidArgument error. Something's obviously wrong with one of the arguments to this function.
It turns out that it is the CVImageBufferRef that is the problem here. It looks like this is being deallocated or otherwise changed while the block that handles this texture cache update is happening.
I tried a few ways of solving this, and ended up using a dispatch queue and dispatch semaphore like I describe in this answer, having the delegate still call back on the main thread, and within the delegate do something like the following:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CFRetain(pixelBuffer);

    dispatch_async(openGLESContextQueue, ^{
        [EAGLContext setCurrentContext:_context];

        // Rest of your processing

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        CFRelease(pixelBuffer);
        dispatch_semaphore_signal(frameRenderingSemaphore);
    });
}
Grabbing the CVImageBufferRef on the main thread, locking the bytes it points to, retaining it, and then handing it off to the asynchronous block seems to fix this error. A full project that shows this modification can be downloaded from here.
I should say one thing here: this doesn't appear to gain you anything. If you look at the way that the GLCameraRipple sample is set up, the heaviest operation in the application, the calculation of the ripple effect, is already dispatched to a background queue. This is also using the new fast upload path for providing camera data to OpenGL ES, so that's not a bottleneck here when run on the main thread.
In my Instruments profiling on a dual-core iPhone 4S, I see no significant difference in rendering speed or CPU usage between the stock version of this sample application and my modified one that runs the frame upload on a background queue. Still, it was an interesting problem to diagnose.

Saving file to disk while running AVCaptureVideoPreviewLayer and CMMotionManager

Good day, I hope someone can help me out with this situation:
I'm working on an iPhone app that takes a series of images, assisted by the gyroscope.
So both the AVCamCaptureManager and CMMotionManager sessions are running at the same time.
After taking a still image, I am:
- processing the image in a background thread (which works fine without affecting anything)
- then saving the processed image data to disk:
[imageData writeToFile:imagePath atomically:YES];
The issue: both the AVCamCaptureManager and CMMotionManager sessions freeze for less than half a second, right after the writeToFile: call.
Does anyone have any experience with such scenario?
Thanks for your time! :)
It turns out that saving to disk does not affect the sessions.
I was also setting UIImageView.image to a large image at the end of my routine, and that is what was freezing everything for half a second.
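For what it's worth, one way around that kind of freeze is to scale the large image down on a background queue and only hand the small result to the image view on the main thread. This is a sketch under assumed names (largeImage, self.imageView), not code from the question:

// Scale the large image off the main thread, then assign the small result on the main thread.
CGSize targetSize = self.imageView.bounds.size;   // computed up front on the main thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Redraw the large image into a small bitmap off the main thread.
    UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0.0);
    [largeImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Only the cheap assignment happens back on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = smallImage;
    });
});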

iPhone video buffer

I'm trying to build a video recorder without jailbreaking my iPhone (I have a developer license).
I began using the PhotoLibrary private framework, but I can only reach 2 fps (too slow).
The Cycoder app gets 15 fps; I think it uses a different approach.
I tried to create a bitmap from the previewView of the CameraController, but it always returns a black bitmap.
I wonder if there's a way to directly access the video buffer, maybe with the IOKit framework.
Thanks
Marco
Here is the code:
image = [window _createCGImageRefRepresentationInFrame:rectToCapture];
Marco
That is the big problem. So far I've solved it by using some fixed-size temporary buffers and detaching a thread for each buffer when it is full. The thread saves the buffer's contents to flash memory. Launching several heavy threads (heavy because each one accesses the flash) slows down the device and the refresh of the camera view.
The buffers cannot be too big, because you will get memory warnings, and they cannot be too small, because you will freeze the device with too many threads and flash accesses at a time.
The solution lies in balancing buffer size and the number of threads.
I haven't yet tried using a sqlite3 database to store the images' binary data, but I don't know if it would be a better solution.
PS: to speed up repeated method calls, avoid the usual [object method] syntax (because of how message dispatch works) and instead get and cache the method's address, as below.
From the Apple Objective-C documentation:
"The example below shows how the procedure that implements the setFilled: method might be
called:
void (*setter)(id, SEL, BOOL);
int i;
setter = (void (*)(id, SEL, BOOL))[target methodForSelector:@selector(setFilled:)];
for ( i = 0; i < 1000; i++ )
    setter(targetList[i], @selector(setFilled:), YES); "
Marco
If you're intending to ever release your app on the App Store, using a private framework will ensure that it will be rejected. Video, using the SDK, simply isn't supported.
Capturing the video you see when the camera is active requires fairly sophisticated techniques that are not exposed out of the box by any framework or library.
I used an undocumented UIWindow method to get the currently displayed frame as a CGImageRef.
Now it works successfully!!
If you would like, and if I'm allowed, I can post the code that does the trick.
Marco