AVCaptureMovieFileOutput - how it saves video - iPhone

I am using the following code to capture video and save it to the documents folder of my app:
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:NULL];
m_captureFileOutput = [[AVCaptureMovieFileOutput alloc] init];
captureSession = [[AVCaptureSession alloc] init];
[captureSession addInput:captureInput];
[captureSession addOutput:m_captureFileOutput];
[captureSession beginConfiguration];
[captureSession setSessionPreset:AVCaptureSessionPresetHigh];
[captureSession commitConfiguration];
[captureSession startRunning];
...some function that starts the recording process...
[m_captureFileOutput startRecordingToOutputFileURL:url recordingDelegate:self];
...some function that ends the recording process...
[m_captureFileOutput stopRecording];
The catch is that my goal is to be able to record up to 9 hours of video at a time. Practically, is it feasible to record a video of this size using this method? Does AVCaptureMovieFileOutput encode and save the video to disk in real time as it receives frames from the camera, or is the entire video buffered in RAM and only processed after [m_captureFileOutput stopRecording]; is called?
If this approach is not reasonable for recording such a long duration of video, what might be a reasonable alternative?
Thanks,
James

Pretty sure AVCaptureMovieFileOutput appends to the file as it goes rather than holding the whole movie in an in-memory buffer (or, if it does buffer, it flushes to disk before the buffer gets too large). Some evidence of this is the movieFragmentInterval property, which controls how often a movie fragment is written to the output file. I have also used this method to record large files and it works fine; if the whole movie were kept in memory, you would run out of RAM pretty quickly under some presets (1280x720, for example).
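If you are worried about very long recordings, you could also set that interval explicitly. This is just a minimal sketch, assuming m_captureFileOutput is the AVCaptureMovieFileOutput from the question:
// Ask the output to write a movie fragment roughly every 30 seconds, so the file
// on disk stays playable up to the last written fragment even if recording is
// interrupted. (30 is a hypothetical value; the default interval is around 10 seconds.)
m_captureFileOutput.movieFragmentInterval = CMTimeMakeWithSeconds(30, 600);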

Related

Camera view as a subview in iOS

I'll start by saying that I am new to Objective-C and iOS programming.
What I want to do is display the camera as part of a view, like a rectangle in the upper part of the screen. Where should I start?
(Which GUI component should I use for the "camera view"? AVCamCaptureManager or UIImagePickerController?)
You can use AVFoundation to do that. A good starting point is to watch the WWDC videos (from 2011 onward) related to AVFoundation and the camera.
Apple's AVCam sample project is also a very good starting point.
Here's an example of what you can do.
First you need to instantiate a capture session:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset1280x720;
Then you must create the input and add it to the session in order to get images from your device camera:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
NSLog(@"Couldn't create video capture device");
}
[session addInput:input];
Then you use an AVCaptureVideoPreviewLayer to present the images from your device camera in a layer:
AVCaptureVideoPreviewLayer *newCaptureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
Finally, you just need to set the layer's frame (the portion of the UI you want the camera to occupy), add it to the desired view, and start the capture session. For example:
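A minimal sketch, assuming previewView is the view that should show the camera rectangle:
// Show the camera in a rectangle at the top of previewView.
newCaptureVideoPreviewLayer.frame = CGRectMake(0, 0, previewView.bounds.size.width, 200);
newCaptureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[previewView.layer addSublayer:newCaptureVideoPreviewLayer];
[session startRunning];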

How can I use AVCaptureVideoDataOutput with a low-resolution preview and take photos (while previewing) at high resolution

I want to use the AVFoundation framework for previewing and capturing photos.
I created an AVCaptureSession and added an AVCaptureVideoDataOutput and an AVCaptureStillImageOutput to this session. I set the preset to AVCaptureSessionPresetLow.
Now I want to take a photo at full resolution, but within captureStillImageAsynchronouslyFromConnection the resolution is the same as in my preview delegate.
Here is my Code:
AVCaptureSession* cameraSession = [[AVCaptureSession alloc] init];
cameraSession.sessionPreset = AVCaptureSessionPresetLow;
AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
[cameraSession addOutput:output];
AVCaptureStillImageOutput* cameraStillImage = [[AVCaptureStillImageOutput alloc] init];
[cameraSession addOutput:cameraStillImage];
// delegation
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[cameraSession startRunning];
Take Photo:
//[cameraSession beginConfiguration];
//[cameraSession setSessionPreset:AVCaptureSessionPresetPhoto]; <-- slow
//[cameraSession commitConfiguration];
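// photoConnection is assumed to have been obtained beforehand, e.g.:
// AVCaptureConnection *photoConnection = [cameraStillImage connectionWithMediaType:AVMediaTypeVideo];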
[cameraStillImage captureStillImageAsynchronouslyFromConnection:photoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
...
}];
I tried changing the preset to AVCaptureSessionPresetPhoto just before capturing the image, but this is very slow (it takes 2-3 seconds to change the preset), and I do not want such a big delay.
How can I do this? Thanks.

Record audio to NSData

I have set up a TCP connection between two iPhones and I am able to send NSData packages between the two.
I would like to talk into the microphone and get the recording as an NSData object and send this to the other iPhone.
I have successfully used Audio Queue Services to record audio and play it back, but I have not managed to get the recording as NSData. I posted a question about converting the recording to NSData when using Audio Queue Services, but it has not got me any further.
Therefore I would like to hear if there is any other approach I can take to speak into the microphone of an iPhone and get the input as raw data.
Update:
I need to send the packets continuously while recording. E.g. every second while recording I will send the data recorded during that second.
Both Audio Queues and the RemoteIO Audio Unit will give you buffers of raw audio in real-time with fairly low latency. You can take the buffer pointer and the byte length given in each audio callback to create a new block of NSData. RemoteIO will provide the lowest latency, but may require the network messaging to be done outside the callback thread.
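For instance, inside an Audio Queue input callback the conversion itself is just one line (a sketch; inBuffer is the AudioQueueBufferRef passed to the callback):
// Wrap the raw bytes of the current callback buffer in an NSData object,
// then hand it off to your networking code, ideally on another thread/queue.
NSData *chunk = [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];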
You can use AVAudioRecorder like this:
NSURL *filePath; // the URL where the recorded file should be saved
NSDictionary *settings; // the settings for the recorded file
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:filePath settings:settings error:&error];
....
[recorder record];
....
[recorder stop];
....
And then retrieve the NSData from the file:
NSData *audioData = [[NSData alloc] initWithContentsOfURL:filePath];
See the AVAudioRecorder class reference for details.
Edit:
In order to retrieve chunks of the recorded audio while recording, you could use the subdataWithRange: method of NSData. Keep an offset from which you wish to retrieve the bytes, and have an NSTimer fire every second so you can collect the new bytes and send them. You will need to find out how many bytes are recorded every second.
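A rough sketch of that idea, assuming filePath is the NSURL used above, sentOffset is a property you add to track how many bytes have already been sent, and sendData: is your own method that writes to the TCP connection:
// Called by an NSTimer once per second while recording.
- (void)sendLatestChunk:(NSTimer *)timer
{
    NSData *recordedData = [NSData dataWithContentsOfURL:filePath]; // bytes written so far
    if (recordedData.length <= self.sentOffset) return;             // nothing new yet
    NSRange newRange = NSMakeRange(self.sentOffset, recordedData.length - self.sentOffset);
    [self sendData:[recordedData subdataWithRange:newRange]];       // send only the new bytes
    self.sentOffset = recordedData.length;
}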
This is what I did on the recording iPhone:
void AudioInputCallback(void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription * inPacketDescs)
{
RecordState * recordState = (RecordState*)inUserData;
if (!recordState->recording)
{
printf("Not recording, returning\n");
return;
}
// if (inNumberPacketDescriptions == 0 && recordState->dataFormat.mBytesPerPacket != 0)
// {
// inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / recordState->dataFormat.mBytesPerPacket;
// }
printf("Writing buffer %lld\n", recordState->currentPacket);
OSStatus status = AudioFileWritePackets(recordState->audioFile,
false,
inBuffer->mAudioDataByteSize,
inPacketDescs,
recordState->currentPacket,
&inNumberPacketDescriptions,
inBuffer->mAudioData);
NSLog(@"DATA = %@", [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]);
[[NSNotificationCenter defaultCenter] postNotificationName:@"Recording" object:[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]];
if (status == 0)
{
recordState->currentPacket += inNumberPacketDescriptions;
}
AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
I have used a notification to help send the data packets to the other iPhone on the network.
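Roughly, the observer that picks those packets up and pushes them to the socket looks like this (sendDataToPeer: is a placeholder for my own TCP send method):
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(recordingDataAvailable:)
                                             name:@"Recording"
                                           object:nil];

- (void)recordingDataAvailable:(NSNotification *)note
{
    NSData *chunk = (NSData *)note.object;   // the buffer posted from the audio callback
    [self sendDataToPeer:chunk];             // placeholder: write the bytes to the TCP connection
}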
But I do not know how to read the data on the other side. I am still trying to figure out how that works. I will surely update once I do that.

How to set the AVCaptureVideoDataOutput in a library

I'm trying to make a library for iPhone, so I'm trying to initialize the camera with just one call.
The problem comes when I call "self" in this line:
"[captureOutput setSampleBufferDelegate:self queue:queue];"
because the compiler says "self was not declared in this scope". What do I need to do to make this class act as the "AVCaptureVideoDataOutputSampleBufferDelegate"? At least point me in the right direction :P.
Thank you !!!
Here is the complete function:
bool VideoCamera_Init(){
//Initialize capture from the camera and show the camera
/*We set up the input*/
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
error:nil];
/*We set up the output*/
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
/*While a frame is being processed in the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, no other frames are added to the queue.
If you don't want this behaviour, set the property to NO */
captureOutput.alwaysDiscardsLateVideoFrames = YES;
/*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting
in the queue, because that can cause memory issues). It is the inverse of the maximum frame rate.
Here we set a minimum frame duration of 1/20 second, i.e. a maximum frame rate of 20 fps. We say that
we are not able to process more than 20 frames per second.*/
captureOutput.minFrameDuration = CMTimeMake(1, 20);
/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
variableconnombrealeatorio= [[VideoCameraThread alloc] init];
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
/*And we create a capture session*/
AVCaptureSession * captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset= AVCaptureSessionPresetMedium;
/*We add input and output*/
[captureSession addInput:captureInput];
[captureSession addOutput:captureOutput];
/*We start the capture*/
[captureSession startRunning];
return TRUE;
}
I also wrote the following class, but the buffer is empty:
#import "VideoCameraThread.h"

CMSampleBufferRef bufferCamera;

@implementation VideoCameraThread

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
    bufferCamera = sampleBuffer;
}

@end
You are writing a C function, which has no concept of Objective-C classes, objects, or the self identifier. You will need to modify your function to take a parameter that accepts the sampleBufferDelegate you want to use:
bool VideoCamera_Init(id<AVCaptureVideoDataOutputSampleBufferDelegate> sampleBufferDelegate) {
...
[captureOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
...
}
Or you could write your library with an Objective C object-oriented interface rather than a C-style interface.
You also have problems with memory management in this function. For instance, you are allocating an AVCaptureSession and assigning it to a local variable. After this function returns you will have no way of retrieving that AVCaptureSession so that you can release it.
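A rough sketch of the object-oriented approach (class and method names here are made up for illustration):
@interface CameraCapture : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, retain) AVCaptureSession *session; // retained so it can be stopped and released later
- (void)start;
@end

@implementation CameraCapture

- (void)start
{
    self.session = [[[AVCaptureSession alloc] init] autorelease];
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue]; // self works here: we are inside an Objective-C class
    dispatch_release(queue);
    // ...add the AVCaptureDeviceInput and video settings as in the question...
    [self.session addOutput:output];
    [self.session startRunning];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // process the frame here
}

@end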

Playing back captured videos on the iPad makes my app crash

I'm working on an app for the iPad 2 that lets the user record video of himself using the device's front camera and then play it back in a video player. I have the overall functionality working, but sometimes, just sometimes, my app crashes when I load the view where the video will be played back, because of this:
'CALayerInvalidGeometry', reason: 'CALayer position contains NaN: [nan 11.5]'
I have noticed that the app crashes mostly, but not exclusively, when the recorded clip playing is less than around 15 seconds long.
Anyone have an idea whats going on?
Here's the code that takes care of recording:
-(void)record{
AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc]init];
NSMutableString *videoURL;
if(isRecording){
//here i do some stuff to generate a random system path
[session addOutput:output];
AVCaptureConnection *videoConnection = nil;
[session beginConfiguration];
for ( AVCaptureConnection *connection in [output connections] ) {
for ( AVCaptureInputPort *port in [connection inputPorts] ) {
if ( [[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
}
}
}
if([videoConnection isVideoOrientationSupported]){
[videoConnection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
videoConnection.videoMirrored = true;
}
[session commitConfiguration];
[session startRunning];
NSURL *vidURL = [[NSURL alloc]initFileURLWithPath:videoURL];
[output startRecordingToOutputFileURL:vidURL recordingDelegate:self];
NSLog(@"Recording started in %@", videoURL);
[rootRep addObject:videoURL];
[vidURL release];
[videoURL release];
}else{
isRecording = false;
[output stopRecording];
[session removeOutput:output];
[output release];
NSLog(@"Recording stopped");
[recBut setImage:[UIImage imageNamed:@"rec.png"] forState:UIControlStateNormal];
}
}
EDIT: I've implemented a method to analyze all the captured videos and delete the faulty ones, so my app is stable again, but I still wonder why some videos don't get written correctly.
I don't know exactly why the camera doesn't produce a good video file every time, but I'll show you how you can safely test whether a file works or not. Create an AVAsset with the path of the file you want to test; AVAsset has a boolean property called playable. You can then loop over an array of your captured video paths, check whether each asset is playable, and delete the corrupted files.
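A sketch of that check, assuming clipPaths is a mutable array holding the file paths of the captured clips:
// Remove any captured clip that AVFoundation reports as not playable.
NSArray *snapshot = [[clipPaths copy] autorelease];
for (NSString *path in snapshot) {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:path] options:nil];
    if (!asset.playable) {
        [[NSFileManager defaultManager] removeItemAtPath:path error:NULL];
        [clipPaths removeObject:path];
    }
}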