I am building an iOS application (my first) that processes video still frames on the fly. To dive into this, I followed an example from Apple's AV Foundation documentation.
The process involves setting up an input (the camera) and an output. The output works with a delegate, which in this case is the controller itself (it conforms to the protocol and implements the required method).
The problem I am having is that the delegate method never gets called. The code below is the implementation of the controller and it has a couple of NSLogs. I can see the "started" message, but the "delegate method called" never shows.
This code is all within a controller that implements the "AVCaptureVideoDataOutputSampleBufferDelegate" protocol.
- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize AV session
    AVCaptureSession *session = [AVCaptureSession new];
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
        [session setSessionPreset:AVCaptureSessionPreset640x480];
    else
        [session setSessionPreset:AVCaptureSessionPresetPhoto];

    // Initialize back camera input
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if ([session canAddInput:input]) {
        [session addInput:input];
    }

    // Initialize image output
    AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [output setVideoSettings:rgbOutputSettings];
    [output setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked (as we process the still image)
    //[output addObserver:self forKeyPath:@"capturingStillImage" options:NSKeyValueObservingOptionNew context:@"AVCaptureStillImageIsCapturingStillImageContext"];

    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:videoDataOutputQueue];

    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }
    [[output connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];

    [session startRunning];
    NSLog(@"started");
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSLog(@"delegate method called");

    CGImageRef cgImage = [self imageFromSampleBuffer:sampleBuffer];
    self.theImage.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}
Note: I'm building with iOS 5.0 as a target.
Edit:
I've found a question that, although asking for a solution to a different problem, is doing exactly what my code is supposed to do. I copied the code from that question verbatim into a blank Xcode app and added NSLogs to the captureOutput: method, and it doesn't get called. Is this a configuration issue? Is there something I'm missing?
Your session is a local variable. Its scope is limited to viewDidLoad. Since this is a new project, I assume it's safe to say that you're using ARC. In that case the object won't leak and therefore continue to live as it would have done in the linked question; instead, the compiler will ensure the object is deallocated before viewDidLoad exits.
Hence your session isn't running because it no longer exists.
(aside: the self.theImage.image = ... is unsafe since it performs a UIKit action off the main queue; you probably want to dispatch_async that over to dispatch_get_main_queue())
So, sample corrections:
@implementation YourViewController
{
    AVCaptureSession *session;
}
- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize AV session
    session = [AVCaptureSession new];
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
        [session setSessionPreset:AVCaptureSessionPreset640x480];
    else
        /* ... etc ... */
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSLog(@"delegate method called");

    CGImageRef cgImage = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_sync(dispatch_get_main_queue(),
    ^{
        self.theImage.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    });
}
Most people advocate using an underscore at the beginning of instance variable names nowadays but I omitted it for simplicity. You can use Xcode's built in refactor tool to fix that up after you've verified that the diagnosis is correct.
I moved the CGImageRelease inside the block sent to the main queue to ensure its lifetime extends beyond its capture into a UIImage. I'm not immediately able to find any documentation to confirm that CoreFoundation objects have their lifetime automatically extended when captured in a block.
I've found one more reason why the didOutputSampleBuffer delegate method may not be called — the file output and sample buffer output connections are mutually exclusive. In other words, if your session already has an AVCaptureMovieFileOutput and you then add an AVCaptureVideoDataOutput, only the AVCaptureFileOutputRecordingDelegate delegate methods are called.
Just for reference, I couldn't find an explicit description of this limitation anywhere in the AV Foundation framework documentation, but Apple support confirmed it a few years ago, as noted in this SO answer.
One way to solve the problem is to remove AVCaptureMovieFileOutput entirely and manually write recorded frames to the file in didOutputSampleBuffer delegate method, alongside your custom buffer data processing. You may find these two SO answers useful.
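As a rough illustration of that approach (a sketch only, not code from the linked answers — _assetWriter and _writerInput are hypothetical instance variables assumed to be configured elsewhere with an output URL and video settings), the delegate method could append frames itself:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Start the writer session on the first frame we receive.
    if (_assetWriter.status == AVAssetWriterStatusUnknown) {
        [_assetWriter startWriting];
        [_assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }

    // Append the frame to the file, then do whatever custom processing you need.
    if (_writerInput.isReadyForMoreMediaData) {
        [_writerInput appendSampleBuffer:sampleBuffer];
    }

    // ... custom buffer processing ...
}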
In my case the problem was there because I called

if ([_session canAddOutput:_videoDataOutput])
    [_session addOutput:_videoDataOutput];

before I called

[_session startRunning];

I simply started calling addOutput: after startRunning instead.
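In other words (a minimal sketch of the reordering described here, using the same variable names):

[_session startRunning];

if ([_session canAddOutput:_videoDataOutput])
    [_session addOutput:_videoDataOutput];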
Hope it helps somebody.
My captureOutput function was not called either. And the accepted answer did not exactly point at my problem, as my session was already an instance variable.
BUT, the dispatch queue for my video frames was local, and the dispatch queue must ALSO be an instance variable. I don't quite understand why this should be necessary; perhaps the underlying AVCapture code only keeps a weak pointer to it?
The documentation is very confusing on this.
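For illustration, this is roughly what keeping both the session and the queue as instance variables looks like (class and variable names are just examples):

@implementation CameraViewController
{
    AVCaptureSession *_session;              // must outlive viewDidLoad
    dispatch_queue_t _videoDataOutputQueue;  // must also outlive viewDidLoad
}

- (void)viewDidLoad {
    [super viewDidLoad];

    _session = [AVCaptureSession new];

    AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
    _videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:_videoDataOutputQueue];

    // ... add the input and this output to _session and start it, as in the question above ...
}
@end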
I've spent days trying every possible solution I can think of to this problem, but nothing seems to be working.
I run a background thread like this:
[MagicalRecord saveInBackgroundWithBlock:^(NSManagedObjectContext *localContext) {
    Media *localMedia = [media inContext:localContext];
    UIImage *image = localMedia.thumbnail;

    dispatch_async(dispatch_get_main_queue(), ^{
        [thumbnails setObject:image forKey:[NSNumber numberWithInt:i]];
        [contentDict setObject:thumbnails forKey:@"MediaItems"];
        [cell.entryView setNeedsDisplay];
    });
}];
Or like this:
dispatch_queue_t cellSetupQueue = dispatch_queue_create("com.Journalized.SetupTimelineThumbnails", NULL);
dispatch_async(cellSetupQueue, ^{
    NSManagedObjectContext *newMoc = [[NSManagedObjectContext alloc] init];
    NSPersistentStoreCoordinator *coordinator = [NSManagedObjectContext contextForCurrentThread].persistentStoreCoordinator;
    [newMoc setPersistentStoreCoordinator:coordinator];

    NSNotificationCenter *notify = [NSNotificationCenter defaultCenter];
    [notify addObserver:self
               selector:@selector(mergeChanges:)
                   name:NSManagedObjectContextDidSaveNotification
                 object:newMoc];

    Media *localMedia = [media inContext:[NSManagedObjectContext contextForCurrentThread]];
    UIImage *image = localMedia.thumbnail;

    dispatch_async(dispatch_get_main_queue(), ^{
        [thumbnails setObject:image forKey:[NSNumber numberWithInt:i]];
        [contentDict setObject:thumbnails forKey:@"MediaItems"];
        [cell.entryView setNeedsDisplay];
    });
});
Both of these give me a crash, with the UIImage coming back as a nil object and a Cocoa error 133000.
I've removed every other piece of background threading code, and have saved on the main thread directly before this just to make sure. Running the code above on the main thread also works, but freezes up my UI. Despite all of these efforts, the above code crashes every time.
I'm not sure what to do to make this work.
Edit:
My question, specifically, is: how do I make this work without crashing? It seems to be a problem with objects from one context not existing in another, but how do I make them exist in another context?
Remember, the MR_inContext: method uses the [NSManagedObjectContext objectWithID:] method under the covers. You should look in there to make sure your object has:
1) Been saved prior to entering the background context/block in your first code block
2) Been returned as something useful by that method
I'm also not sure how you set up your thumbnail attribute. Ideally it shouldn't matter as long as you have the NSTransformable code right (there are samples on the internets that show you how to save a UIImage in Core Data using a transformable attribute).
Also, your code should look like this:
__block UIImage *image = nil;
[MagicalRecord saveInBackgroundWithBlock:^(NSManagedObjectContext *localContext) {
    Media *localMedia = [media inContext:localContext]; // remember, this looks up an existing object. If your context is a child of a parent context that contains this object, the localContext should refresh the object from the parent.

    // image processing/etc
    image = localMedia.thumbnail;
} completion:^{
    [thumbnails setObject:image forKey:@(i)]; // new literals are teh awesome
    contentDict[@"MediaItems"] = thumbnails;  // as is the new indexer syntax (on the latest compiler)
    [cell.entryView setNeedsDisplay];
}];
Fast answer:
NSManagedObjectReferentialIntegrityError = 133000,
// attempt to fire a fault pointing to an object that does not exist (we can see the store, we can't see the object)
EDIT:
It's pretty difficult to tell much from the code. What is the managed object there?
I suppose the problem is that you are using temporary objects from one context in another context.
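If that is the case, one common fix (a sketch only — backgroundContext and backgroundQueue are placeholders, not names from the question) is to pass an NSManagedObjectID between contexts instead of the managed object itself:

NSManagedObjectID *mediaID = media.objectID; // permanent once the object has been saved

dispatch_async(backgroundQueue, ^{
    NSError *error = nil;

    // Re-fetch the object in the context that belongs to this thread/queue.
    Media *localMedia = (Media *)[backgroundContext existingObjectWithID:mediaID error:&error];
    if (localMedia == nil) {
        NSLog(@"object not found in background context: %@", error);
        return;
    }

    // ... use localMedia only within backgroundContext ...
});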
The Apple docs seem to indicate that while recording video to a file, the app can change the URL on the fly with no problem. But I'm seeing a problem. When I try this, the recording delegate gets called with an error...
The operation couldn’t be completed. (OSStatus error -12780.) Info dictionary is: {
    AVErrorRecordingSuccessfullyFinishedKey = 0;
}
(funky single quote in "couldn't" comes from logging [error localizedDescription])
Here's the code, which is basically a few tweaks to the WWDC10 AVCam sample:
1) Start recording. Start timer to change the output URL every few seconds
- (void)startRecording
{
    // start the chunk timer
    self.chunkTimer = [NSTimer scheduledTimerWithTimeInterval:5
                                                       target:self
                                                     selector:@selector(chunkTimerFired:)
                                                     userInfo:nil
                                                      repeats:YES];

    AVCaptureConnection *videoConnection = [AVCamCaptureManager connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self movieFileOutput] connections]];
    if ([videoConnection isVideoOrientationSupported]) {
        [videoConnection setVideoOrientation:[self orientation]];
    }

    if ([[UIDevice currentDevice] isMultitaskingSupported]) {
        [self setBackgroundRecordingID:[[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{}]];
    }

    NSURL *fileUrl = [[ChunkManager sharedInstance] nextURL];
    NSLog(@"now recording to %@", [fileUrl absoluteString]);
    [[self movieFileOutput] startRecordingToOutputFileURL:fileUrl recordingDelegate:self];
}
2) When the timer fires, change the output file name without stopping recording
- (void)chunkTimerFired:(NSTimer *)aTimer {
    if ([[UIDevice currentDevice] isMultitaskingSupported]) {
        [self setBackgroundRecordingID:[[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{}]];
    }

    NSURL *nextUrl = [self nextURL];
    NSLog(@"changing capture output to %@", [[nextUrl absoluteString] lastPathComponent]);
    [[self movieFileOutput] startRecordingToOutputFileURL:nextUrl recordingDelegate:self];
}
Note: [self nextURL] generates file urls like file-0.mov, file-5.mov, file-10.mov and so on.
3) This gets called each time the file changes, and every other invocation is an error...
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    id delegate = [self delegate];
    if (error && [delegate respondsToSelector:@selector(someOtherError:)]) {
        NSLog(@"got an error, tell delegate");
        [delegate someOtherError:error];
    }

    if ([self backgroundRecordingID]) {
        if ([[UIDevice currentDevice] isMultitaskingSupported]) {
            [[UIApplication sharedApplication] endBackgroundTask:[self backgroundRecordingID]];
        }
        [self setBackgroundRecordingID:0];
    }

    if ([delegate respondsToSelector:@selector(recordingFinished)]) {
        [delegate recordingFinished];
    }
}
When this runs, file-0 gets written, then we see error -12780 right after changing the url to file-5, file-10 gets written, then an error, then okay, and so on.
It appears that changing the URL on the fly doesn't work, but it does stop the writing, which allows the next URL change to work.
Thanks all, for the review and good thoughts on this. Here's the word from Apple DTS...
I spoke with our AV Foundation engineers, and it is definitely a bug
in that this method is not doing what the documentation says it should
("You do not need to call stopRecording before calling this method
while another recording is in progress."). Please file a bug report
using the Apple Bug Reporter (http://developer.apple.com/bugreporter/)
so the team can investigate. Make sure and include your minimal
project in the report.
I've filed this with Apple as bug 11632087
The docs state this:
If a file at the given URL already exists when capturing starts, recording to the new file will fail.
Are you sure you check that nextUrl is a file name that doesn't already exist?
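A minimal sketch of that check, reusing the names from the question's chunkTimerFired: method:

NSURL *nextUrl = [self nextURL];

// Make sure nothing already exists at the target URL before recording starts.
NSFileManager *fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:[nextUrl path]]) {
    NSError *removeError = nil;
    [fileManager removeItemAtURL:nextUrl error:&removeError];
}

[[self movieFileOutput] startRecordingToOutputFileURL:nextUrl recordingDelegate:self];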
According to the documentation, calling startRecordingToOutputFileURL: twice in a row is not supported.
You can read about it here:
In iOS, this frame accurate file switching is not supported. You must call stopRecording before calling this method again to avoid any errors.
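So on iOS the pattern would presumably be stop-then-start: end the current chunk when the timer fires and begin the next one from the delegate callback. A rough sketch, reusing the question's names (expect a small gap between chunks):

- (void)chunkTimerFired:(NSTimer *)aTimer {
    // Finish the current chunk; the delegate callback starts the next one.
    [[self movieFileOutput] stopRecording];
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    // ... handle the finished chunk (save, upload, etc.) ...
    [[self movieFileOutput] startRecordingToOutputFileURL:[self nextURL] recordingDelegate:self];
}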
I am a new iPhone developer, upgrading an existing iPhone app. I am using a Core Data model to save data.
In the app there are 15 square boxes for adding images. I detach a new thread so this work happens separately; in that thread I save the image at two sizes. I add an observer on the image object and remove the observer at the end.
I am using this method to add the observer:
[projectImage addObserver:self forKeyPath:@"fileName" options:NSKeyValueObservingOptionNew context:nil];
And this method to detach the separate thread:
[NSThread detachNewThreadSelector:@selector(addImage:) toTarget:self withObject:[dic retain]];
Here addImage: is a method like this:
- (void)addImage:(NSDictionary *)dic {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    UIImage *image = [dic objectForKey:@"image"];
    projectImage = nil;
    projectImage = [dic objectForKey:@"managedObject"];
    [projectImage importImageData:image];
    [projectImage removeObserver:self forKeyPath:@"fileName"];

    [pool drain];
}
And dic is a dictionary.
My problem is:
It crashes after taking 4-5 images from the camera or the photo library.
Can anyone guide me to get rid of this problem?
Thanks in advance.
You are leaking memory, and probably because of this your app crashes; I think the app runs out of memory and gets killed.
Remove the [dic retain] from
[NSThread detachNewThreadSelector:@selector(addImage:) toTarget:self withObject:[dic retain]];
The object is retained by the method call. See the discussion of detachNewThreadSelector:toTarget:withObject:.
The objects aTarget and anArgument are retained during the execution of the detached thread, then released. The detached thread is exited (using the exit class method) as soon as aTarget has completed executing the aSelector method.
Your call should be
[NSThread detachNewThreadSelector:@selector(addImage:) toTarget:self withObject:dic];
I am new to AVFoundation and I am trying to implement a video camera with it. Here is my basic setup. Basically, when you tap a button it calls the showCamera method. In there I create the session, then add an audio input and a video input, and then add the video output.
Where in here do I add the AVCaptureConnection, and how do I do it? Is there some tutorial that shows how to use connections? Any help is appreciated.
- (IBAction)showCamera
{
    // Add the camView to the current view on top of the controller
    [[[[[UIApplication sharedApplication] delegate] self] window] addSubview:camView];

    session = [[AVCaptureSession alloc] init];

    // Set preset on session to make recording scale high
    if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        session.sessionPreset = AVCaptureSessionPresetHigh;
    }

    // Add inputs and outputs.
    NSArray *devices = [AVCaptureDevice devices];

    // Enumerate all devices on the phone
    for (AVCaptureDevice *device in devices)
    {
        if ([device hasMediaType:AVMediaTypeVideo])
        {
            if ([device position] == AVCaptureDevicePositionBack)
            {
                // Add rear video input to session
                [self addRearCameraInputToSession:session withDevice:device];
            }
        }
        else if ([device hasMediaType:AVMediaTypeAudio])
        {
            // Add microphone input to session
            [self addMicrophoneInputToSession:session withDevice:device];
        }
        else
        {
            // Show error that the phone does not have a camera
        }
    }

    // Add movie output
    [self addMovieOutputToSession:session];

    // Construct preview layer
    [self constructPreviewLayerWithSession:session onView:camView];
}
You don't add AVCaptureConnections manually. When you have both an input and an output added to the AVCaptureSession object, the connections are automatically created for you. Quoth the documentation:
When an input or an output is added to a session, the session greedily forms connections between all the compatible capture inputs’ ports and capture outputs.
Unless you need to disable one of the automatically-created connections, or change the videoMirrored or videoOrientation properties, you shouldn't have to worry about them at all.
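For example, once the input and output are both on the session you can simply look the connection up when you need it (movieOutput here stands in for whatever output you added):

AVCaptureConnection *connection = [movieOutput connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoOrientationSupported]) {
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
}
if ([connection isVideoMirroringSupported]) {
    [connection setVideoMirrored:NO];
}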
Take a look at the following URLs...
Documentation from Apple:
An Article:
Video Recording using AVFoundation Framework iPhone?
I think it will help you.
Long time lurker, first time poster.
I'm making a ServerConnection module to make everything a whole lot more modular and easier, but I'm having trouble getting the delegate called. I've seen a few other questions like this, but none of the answers fixed my problem.
ServerConnection is set up with a protocol. A ServerConnection object is created in Login.m, which makes the call to the server; delegate methods are then added in Login to handle an error or completion, and these are called by ServerConnection like below.
- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    if ([self.delegate respondsToSelector:@selector(connectionDidFinish:)]) {
        NSLog(@"DOES RESPOND");
        [self.delegate connectionDidFinish:self];
    } else {
        NSLog(@"DOES NOT RESPOND");
    }

    self.connection = nil;
    self.receivedData = nil;
}
It always "does not respond". I've tried the CFRunLoop trick (below) but it still doesn't work.
- (IBAction)processLogin:(id)sender {
    // Hide the keyboard
    [sender resignFirstResponder];

    // Start new thread
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Actually call the server
    [self authenticate];

    // Prevent the thread from exploding before we've got the data
    CFRunLoopRun();

    // End thread
    [pool release];
}
I copied the Apple URLCache demo pretty heavily and have compared them both many times but can't find any discrepancies.
Any help would be greatly appreciated.
Here are the questions to ask:
Does your delegate respond to connectionDidFinishLoading:?
Does the signature match, i.e. does it take another object?
Is the delegate set at all or is it nil? (Check this in that very method)
If any of those are "NO", you will see "DOES NOT RESPOND"... All are equally likely to happen in your application, but all are easy to figure out.
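To make those checks concrete, here is a minimal sketch of the wiring they assume (names mirror the question where possible; the delegate property and the authenticate method on ServerConnection are illustrative, not taken from the poster's code):

@class ServerConnection;

// Checks 1 & 2: the delegate's class declares conformance and implements the
// method with the exact signature that respondsToSelector: is testing for.
@protocol ServerConnectionDelegate <NSObject>
- (void)connectionDidFinish:(ServerConnection *)connection;
@end

@interface Login : UIViewController <ServerConnectionDelegate>
@end

// Check 3: the delegate is actually assigned before the request is started,
// e.g. somewhere in Login before kicking off the connection.
ServerConnection *serverConnection = [[ServerConnection alloc] init];
serverConnection.delegate = self; // if this is never set, the message goes to nil and nothing responds
[serverConnection authenticate];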