Using different resolution presets with AVFoundation - iPhone

I'm trying to use AVFoundation to have three recording modes: Audio, Video and Photo. Audio and Video work just fine, but the problem is, if I set the session preset to AVCaptureSessionPreset352x288, the still pictures are also saved at that resolution. If I change my session preset to AVCaptureSessionPresetPhoto, then the photos look great but the video stops working because that isn't a supported preset for video. I've tried creating multiple sessions, reassigning the session preset, etc. but nothing seems to work. Anyone have a way to make this work with the video at a low resolution and still images at full resolution?

Before taking the picture, set the session to a new preset:
// captureSession is your capture session object
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
[captureSession commitConfiguration];
Then call your still image capture method:
captureStillImageAsynchronouslyFromConnection:completionHandler:
Then change back to the low-resolution preset (prevPreset):
[captureSession beginConfiguration];
captureSession.sessionPreset = prevPreset;
[captureSession commitConfiguration];
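Putting those three steps together, here is a minimal sketch of the whole sequence. It assumes captureSession, stillImageOutput (an AVCaptureStillImageOutput already added to the session) and prevPreset are existing ivars of your class, and it restores the low-resolution preset inside the completion handler so the switch cannot happen before the still has actually been captured:
- (void)captureFullResolutionStill
{
    // temporarily switch to a preset suitable for full-resolution stills
    [captureSession beginConfiguration];
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto; // or AVCaptureSessionPresetHigh
    [captureSession commitConfiguration];

    AVCaptureConnection *connection =
        [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
            if (imageDataSampleBuffer) {
                NSData *jpegData = [AVCaptureStillImageOutput
                    jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                // save or process jpegData here
            }
            // restore the low-resolution preset used for video
            [captureSession beginConfiguration];
            captureSession.sessionPreset = prevPreset;
            [captureSession commitConfiguration];
        }];
}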

Related

Use the rear microphone of iPhone 5

I have used the following code to stream audio I/O from the microphone. What I want to do is select the rear microphone for recording. I have read that setting kAudioSessionProperty_Mode to kAudioSessionMode_VideoRecording can do the job, but I am not sure how to use this with my code. Can anyone help me set this parameter successfully?
I have these lines for setting the property:
status = AudioUnitSetProperty(audioUnit,
kAudioSessionProperty_Mode,
kAudioSessionMode_VideoRecording,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
but it's not working.
In the Apple developer library (click here) you can see the relevant struct:
struct AudioChannelLayout {
AudioChannelLayoutTag mChannelLayoutTag;
UInt32 mChannelBitmap;
UInt32 mNumberChannelDescriptions;
AudioChannelDescription mChannelDescriptions[1];
};
typedef struct AudioChannelLayout AudioChannelLayout;
You can change the number of channel descriptions to 2 to use the secondary microphone.
I did some searching and reading and finally ended up in the AVCaptureDevice Class Reference. The key command here for you is NSLog(@"%@", [AVCaptureDevice devices]);. I ran this with my iPhone attached and got this:
"<AVCaptureFigVideoDevice: 0x1fd43a50 [Back Camera][com.apple.avfoundation.avcapturedevice.built-in_video:0]>",
"<AVCaptureFigVideoDevice: 0x1fd47230 [Front Camera][com.apple.avfoundation.avcapturedevice.built-in_video:1]>",
"<AVCaptureFigAudioDevice: 0x1fd46730 [Microphone][com.apple.avfoundation.avcapturedevice.built-in_audio:0]>"
Only one microphone ever shows up in the list. So to answer your question, it cannot be done (yet).
Your code:
status = AudioUnitSetProperty(audioUnit,
kAudioSessionProperty_Mode,
kAudioSessionMode_VideoRecording,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
This is not working because the code is not correct: audio SESSIONS are not properties of audio UNITS. The audio session describes the general behaviour of your app with respect to hardware resources, and how it cooperates with other demands on those same resources from other apps and other parts of the system. It is your best chance of taking control of the input and output hardware, but it does not give you total control, because the iOS frameworks treat the overall user experience as the top priority.
Your app has a single audio session, which you can initialise, activate and deactivate, and whose properties you can get and set. Since iOS 6 most of these properties can be addressed using AVFoundation's AVAudioSession singleton, but to get full access you will still want to use the Core Audio function syntax.
To set the audio session mode to "VideoRecording" using AVFoundation you would do something like this:
- (void) configureAVAudioSession
{
    //get your app's audioSession singleton object
    AVAudioSession* session = [AVAudioSession sharedInstance];

    //error handling
    BOOL success;
    NSError* error;

    //set the audioSession category.
    //Needs to be Record or PlayAndRecord to use VideoRecording mode:
    success = [session setCategory:AVAudioSessionCategoryPlayAndRecord
                             error:&error];
    if (!success) NSLog(@"AVAudioSession error setting category: %@", error);

    //set the audioSession mode
    success = [session setMode:AVAudioSessionModeVideoRecording error:&error];
    if (!success) NSLog(@"AVAudioSession error setting mode: %@", error);

    //activate the audio session
    success = [session setActive:YES error:&error];
    if (!success) NSLog(@"AVAudioSession error activating: %@", error);
    else NSLog(@"audioSession active");
}
The same functionality using Core Audio functions (iOS 5 and below); checkStatus is the error handling function from your code sample:
- (void) configureAudioSession
{
    OSStatus status;

    //initialise the audio session
    status = AudioSessionInitialize(NULL,                     //run loop
                                    kCFRunLoopDefaultMode,    //run loop mode
                                    NULL,                     //MyInterruptionListener
                                    (__bridge void *)(self)); //user info

    //set the audio session category
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                     sizeof(category),
                                     &category);
    checkStatus(status);

    //set the audio session mode
    UInt32 mode = kAudioSessionMode_VideoRecording;
    status = AudioSessionSetProperty(kAudioSessionProperty_Mode,
                                     sizeof(mode),
                                     &mode);
    checkStatus(status);

    //activate the audio session
    status = AudioSessionSetActive(true);
    checkStatus(status);
}
The reason you have been told to use VideoRecording mode is because it is the only mode that will give you any hope of directly selecting the rear mic. What it does is select the mic nearest to the video camera.
"On devices with more than one built-in microphone, the microphone closest to the video camera is used." (From Apple's AVSession Class Reference)
This suggests that the video camera will need to be active when using the mic, and the choice of camera from front to back is the parameter that the system uses to select the appropriate microphone. It may be that video-free apps using the rear mic (such as your example) are in fact getting a video input stream from the rear camera and not doing anything with it. I am unable to test this as I do not have access to an iPhone 5. I do see that the "Babyscope" app you mentioned has an entirely different app for running on ios5 vs. ios4.
The answer from Kuriakose is misleading: AudioChannelLayout is a description of an audo track, it has no effect on the audio hardware used in capture. The answer from Sangony just shows us that Apple do not really want us to have full control over the hardware. Much of it's audio management on iOS is an attempt to keep us away from direct control in order to accommodate both user expectations (of audio i/o behaviour between apps) and hardware limitations when dealing with live signals.

How to get real time video stream from iphone camera and send it to server?

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames with sound? And how should I use ffmpeg to accomplish this? If anyone has a tutorial about ffmpeg, or any example, please share it here.
The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow:
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];
// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];
// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];
// go!
[captureSession startRunning];
Then the output device's delegate (here, self) has to implement the callback:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
    // also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
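For completeness, here is one rough way to turn a frame from that callback into something you can actually send. It assumes the AVCaptureVideoDataOutput's videoSettings were set to kCVPixelFormatType_32BGRA, and sendFrameData: is a hypothetical networking method of your own. As the next answer explains, sending individual JPEG frames like this does not scale well; the sketch is only meant to show where the data comes from:
- (void)sendFrame:(CMSampleBufferRef)sampleBuffer
{
    // assumes the video data output delivers kCVPixelFormatType_32BGRA buffers
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress  = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width       = CVPixelBufferGetWidth(imageBuffer);
    size_t height      = CVPixelBufferGetHeight(imageBuffer);

    // wrap the pixel data in a CGImage, then a UIImage, then compress to JPEG
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    NSData *jpegData = UIImageJPEGRepresentation(image, 0.5);

    [self sendFrameData:jpegData]; // hypothetical: your own upload code goes here

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}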
Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.
1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
2) Write your own parser for the H.264/AAC output (very hard).
3) Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
There is a long and a short story to it. This is the short one:
Go look at https://github.com/OpenWatch/H264-RTSP-Server-iOS
This is a starting point. You can get it and see how he extracts the frames; it is a small and simple project.
Then you can look at Kickflip, which has a specific callback, "encodedFrame": it is called back once an encoded frame arrives, and from that point you can do what you want with it, e.g. send it over a websocket. There is a bunch of quite hard code available for reading the MPEG atoms.
Look here, and here.
Try capturing video using the AV Foundation framework and upload it to your server with HTTP streaming.
Also check out another Stack Overflow post below.
(The post below was found at this link here.)
You most likely already know....
1) How to get compressed frames and audio from iPhone's camera?
You can not do this. The AVFoundation API has prevented this from
every angle. I even tried named pipes, and some other sneaky unix foo.
No such luck. You have no choice but to write it to file. In your
linked post a user suggests setting up the callback to deliver encoded
frames. As far as I am aware this is not possible for H.264 streams.
The capture delegate will deliver images encoded in a specific pixel
format. It is the Movie Writers and AVAssetWriter that do the
encoding.
2) Encoding uncompressed frames with ffmpeg's API is fast enough for
real-time streaming?
Yes it is. However, you will have to use libx264 which gets you into
GPL territory. That is not exactly compatible with the app store.
I would suggest using AVFoundation and AVAssetWriter for efficiency
reasons.

MPMoviePlayerController background audio issue in iOS5

I have an app that does a pretty standard operation:
It plays audio (streamed or from the file system) when the app is 1) in the foreground, 2) with the screen locked, or 3) in the background.
This was working fine in all iOS versions prior to iOS 5.
I have been using MPMoviePlayerController (because it can play both streamed and local file-system audio).
I have the following setup:
info.plist has Background Mode set to "Audio"
I have the audio session set up as shown at http://developer.apple.com/library/ios/#qa/qa1668/_index.html
NSError *activationError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setCategory: AVAudioSessionCategoryPlayback error: &activationError];
if (activationError) { /* handle the error condition */ }
[mySession setActive: YES error: &activationError];
if (activationError) { /* handle the error condition */ }
I have a background task enabled that gets stopped at the end of audio playback:
UIBackgroundTaskIdentifier newId = [[UIApplication sharedApplication]
beginBackgroundTaskWithExpirationHandler:NULL];
I have set the movie player's useApplicationAudioSession = NO
I have subscribed to the following notifications to detect and handle the various playback states and to start a new audio file at the end of the current file:
MPMoviePlayerLoadStateDidChangeNotification
MPMoviePlayerPlaybackDidFinishNotification
MPMoviePlayerPlaybackStateDidChangeNotification
MPMoviePlayerNowPlayingMovieDidChangeNotification
Problem:
With this the audio starts to play, and when the application is put into the background or the phone is locked, the audio continues to play. But when I then start another audio file,
I immediately get a PlaybackDidFinishNotification with the state set to "playback ended" (but the file was never played).
The same code plays audio files in the foreground without a problem (after the current audio file ends, the next file is started).
Is there anything new in iOS 5 I should be doing to get this to work? I read through the MPMoviePlayerController class reference and I couldn't see anything specific to iOS 5.
Thanks in advance.
Finally figured out the issue. It is solved in this post on the Apple dev forums (login required to see it). That post was about AVPlayer, but it fixes the problem with MPMoviePlayerController as well.
Basically, this is an excerpt from that post:
your app must support remote control events! These are the audio controls (prev/next/play/pause) on the left of the multitasking switcher taskbar (not sure about the proper name of the thing). You do this by ensuring your view becomes first responder and then calling
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
in viewDidLoad. Once you do this, your player will no longer return
NO!!
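For reference, a minimal sketch of what that post describes, in the view controller that owns playback (exactly where you hook this up will depend on your app):
- (void)viewDidLoad
{
    [super viewDidLoad];
    // start receiving remote control events and make sure they reach this controller
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    [self becomeFirstResponder];
}

- (BOOL)canBecomeFirstResponder
{
    return YES; // required, otherwise the remote control events never arrive
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event
{
    // handle play/pause/prev/next from the multitasking bar controls here
}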
My situation was different and I'm only answering here (and in the other SO question) to help future searchers on this error message. This does not answer the original question.
My app plays a sound OR a song, but when I first coded it, it could play both, and in testing I always tested with a song. I played the song in the usual way:
self.musicQuery = [MPMediaQuery songsQuery];
[_musicQuery addFilterPredicate:[MPMediaPropertyPredicate predicateWithValue:selectedSongID forProperty:MPMediaItemPropertyPersistentID comparisonType:MPMediaPredicateComparisonEqualTo]];
[_musicQuery setGroupingType:MPMediaGroupingTitle];
[_myPlayer setQueueWithQuery:_musicQuery];
[_myPlayer play];
Weeks passed and I started testing with the sound, played with AVAudioPlayer. My app started freezing for 5 seconds and I'd get the "MediaPlayer: Message playbackState timed out" message in the console.
It turns out that passing a query that was empty was causing the freeze and the message. Changing my app's logic to only play a song when there was a song to play fixed it.
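In other words, guard against an empty query before playing. A sketch of the kind of check that fixed it, built around the same _musicQuery / _myPlayer properties:
// only hand the query to the player if it actually matched something;
// setting an empty queue is what triggered the playbackState timeout
if ([[_musicQuery items] count] > 0) {
    [_myPlayer setQueueWithQuery:_musicQuery];
    [_myPlayer play];
} else {
    NSLog(@"No song matched this persistent ID; skipping playback.");
}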

Has anyone been able to play a video file and show live camera feed at the same time in separate views on iOS?

I have been trying to do this for a few days now using AVFoundation, as well as trying to use MPMoviePlayerViewController. The closest I can get is allowing one to play at a time. I would like to think that this is possible because of FaceTime. However, I know this is a little different because there is no separate video file.
Any ideas would help, and thanks.
I'm not sure where this is documented, but to get AVCaptureVideoPreviewLayer and MPMoviePlayerViewController to play together at the same time you need to set a mixable audio session category first.
Here's one way to do that:
AVAudioSession* session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:nil];
UInt32 mixable = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(mixable), &mixable);
[session setActive:YES error:nil];
See the Audio Session Programming Guide and Audio Session Cookbook for more info.
Have you tried playing video on one thread and recording video on another? That would allow both of them to run while maintaining their separation.

Upload live streaming video from iPhone like Ustream or Qik

How do I live stream video from the iPhone to a server, like Ustream or Qik? I know there's something called HTTP Live Streaming from Apple, but most of the resources I found only talk about streaming video from the server to the iPhone.
Is Apple's HTTP Live Streaming something I should use? Or something else? Thanks.
There isn't a built-in way to do this, as far as I know. As you say, HTTP Live Streaming is for downloads to the iPhone.
The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];
// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];
// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];
// go!
[captureSession startRunning];
Then the output device's delegate (here, self) has to implement the callback:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
    // also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
EDIT/UPDATE
Several people have asked how to do this without sending the frames to the server one by one. The answer is complex...
Basically, in the didOutputSampleBuffer function above, you add the samples into an AVAssetWriter. I actually had three asset writers active at a time -- past, present, and future -- managed on different threads.
The past writer is in the process of closing the movie file and uploading it. The current writer is receiving the sample buffers from the camera. The future writer is in the process of opening a new movie file and preparing it for data. Every 5 seconds, I set past=current; current=future and restart the sequence.
This then uploads video in 5-second chunks to the server. You can stitch the videos together with ffmpeg if you want, or transcode them into MPEG-2 transport streams for HTTP Live Streaming. The video data itself is H.264-encoded by the asset writer, so transcoding merely changes the file's header format.
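A rough sketch of what one of those rotating writers looks like is below. The names (chunkFileURL, currentWriter, currentInput) and the output settings are only illustrative, not the exact values used:
// create one of the rotating writers for a 5-second chunk file
NSError *writerError = nil;
AVAssetWriter *currentWriter = [AVAssetWriter assetWriterWithURL:chunkFileURL
                                                        fileType:AVFileTypeQuickTimeMovie
                                                           error:&writerError];

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264,             AVVideoCodecKey,
                               [NSNumber numberWithInt:480], AVVideoWidthKey,
                               [NSNumber numberWithInt:360], AVVideoHeightKey,
                               nil];
AVAssetWriterInput *currentInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
currentInput.expectsMediaDataInRealTime = YES;
[currentWriter addInput:currentInput];

[currentWriter startWriting];
// in real code, start the session at the presentation timestamp of the first
// buffer you append (CMSampleBufferGetPresentationTimeStamp), not at zero
[currentWriter startSessionAtSourceTime:kCMTimeZero];

// then, inside captureOutput:didOutputSampleBuffer:fromConnection:, append each
// buffer to whichever writer is "current":
//     if (currentInput.readyForMoreMediaData)
//         [currentInput appendSampleBuffer:sampleBuffer];
//
// every 5 seconds: finish the "past" writer (so its file can be uploaded),
// shift current -> past and future -> current, and create a new "future"
// writer the same way.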
I have found one library that will help you with this:
HaishinKit Streaming Library
The library above gives you the option of streaming via RTMP or HLS.
Just follow the steps given for the library and read all of its instructions carefully. Please don't run the example code shipped with the library directly (it has some errors); instead, pull the required classes and pod into your demo app.
I have just done it with this; you can record the screen, camera and audio.
I'm not sure you can do that with HTTP Live Streaming. HTTP Live Streaming segments the video into chunks of roughly 10 seconds each and creates a playlist of those segments.
So if you want the iPhone to be the server side of the stream with HTTP Live Streaming, you will have to figure out a way to segment the video file and create the playlist.
How to do it is beyond my knowledge. Sorry.
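For what it's worth, the playlist the segmenter has to keep rewriting is just a small text file. A typical live .m3u8 looks roughly like this (segment names, sequence numbers and durations are only examples):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:17
#EXTINF:10.0,
segment17.ts
#EXTINF:10.0,
segment18.ts
#EXTINF:10.0,
segment19.ts
For a live stream there is no #EXT-X-ENDLIST tag; the server keeps appending new segments and dropping old ones from the playlist.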