iPhone AudioSession properties

I'm developing an app that should have the following audio behaviour:
1. it can record and play sound at the same time
2. it can mix its audio output with other apps, e.g. the iPod
3. audio output goes to the speaker when earphones are not plugged in
4. audio output goes to the earphones when they are plugged in
I used the following code:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *audioSessionError;
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&audioSessionError];
UInt32 mix = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(mix), &mix);
UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(route), &route);
[audioSession setActive:YES error:&audioSessionError];
However, I can achieve 1-3 but fail at 4. When the earphones are plugged in, the audio still comes through the speaker. I then tried setting kAudioSessionProperty_OverrideCategoryDefaultToSpeaker instead of kAudioSessionProperty_OverrideAudioRoute, but this paused the iPod instead of mixing both audio streams. Could anyone please point out what's wrong with the above code?
Thanks for any help.

I think this:
UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(route), &route);
specifically means "always use the speaker". The default action (use the headphones when they are plugged in) should be restored with:
UInt32 route = kAudioSessionOverrideAudioRoute_None;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(route), &route);
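To keep requirement 3 working as well, one approach (a sketch of the idea, not tested; the helper and listener names here are mine, not from the original answer) is to re-apply the override whenever the audio route changes, depending on whether headphones are present:
static void updateAudioRoute(void)
{
    // ask the session which route is currently active ("Headphone", "Speaker", ...)
    CFStringRef route = NULL;
    UInt32 size = sizeof(route);
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &route);
    // crude check: any route name containing "Headphone" counts as headphones
    BOOL headphones = (route != NULL &&
        CFStringFind(route, CFSTR("Headphone"), 0).location != kCFNotFound);
    // override to the speaker only when no headphones are attached
    UInt32 override = headphones ? kAudioSessionOverrideAudioRoute_None
                                 : kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                            sizeof(override), &override);
}

static void routeChanged(void *inClientData, AudioSessionPropertyID inID,
                         UInt32 inDataSize, const void *inData)
{
    updateAudioRoute();
}

// register once, after setting the category:
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                routeChanged, NULL);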

Related

Use rear microphone of iPhone 5

I have used the following code to stream the audio I/O from the microphone. What I want to do is select the rear microphone for recording. I have read that setting kAudioSessionProperty_Mode to kAudioSessionMode_VideoRecording can do the job, but I am not sure how to use this with my code. Can anyone help me set this parameter successfully?
I have these lines for setting the property
status = AudioUnitSetProperty(audioUnit,
kAudioSessionProperty_Mode,
kAudioSessionMode_VideoRecording,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
but its not working.
In the Apple developer library you can see a specific structure:
struct AudioChannelLayout {
    AudioChannelLayoutTag mChannelLayoutTag;
    UInt32 mChannelBitmap;
    UInt32 mNumberChannelDescriptions;
    AudioChannelDescription mChannelDescriptions[1];
};
typedef struct AudioChannelLayout AudioChannelLayout;
You can change the number of channel descriptions (mNumberChannelDescriptions) to 2 to use the secondary microphone.
I did some searching and reading, and finally ended up in the AVCaptureDevice Class Reference. The key command here for you is NSLog(@"%@", [AVCaptureDevice devices]);. I ran this with my iPhone attached and got this:
"<AVCaptureFigVideoDevice: 0x1fd43a50 [Back Camera][com.apple.avfoundation.avcapturedevice.built-in_video:0]>",
"<AVCaptureFigVideoDevice: 0x1fd47230 [Front Camera][com.apple.avfoundation.avcapturedevice.built-in_video:1]>",
"<AVCaptureFigAudioDevice: 0x1fd46730 [Microphone][com.apple.avfoundation.avcapturedevice.built-in_audio:0]>"
Only one microphone ever shows up in the list. So to answer your question, it cannot be done (yet).
Your code:
status = AudioUnitSetProperty(audioUnit,
kAudioSessionProperty_Mode,
kAudioSessionMode_VideoRecording,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
is not working because the code is not correct: audio SESSIONS are not properties of audio UNITS. The audio session describes the general behaviour of your app with hardware resources, and how it cooperates with other demands on those same resources from other apps and other parts of the system. It is your best chance of taking control of input and output hardware, but it does not give you total control, because the iOS frameworks have the overall user experience as the uppermost priority.
Your app has a single audio session, which you can initialise, activate and deactivate, and get and set properties of. Since iOS 6 most of these properties can be addressed using the AVFoundation singleton AVAudioSession object, but to get full access you will still want to use the Core Audio function syntax.
To set the audio session mode to "VideoRecording" using AVFoundation you would do something like this:
- (void) configureAVAudioSession
{
    // get your app's audioSession singleton object
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // error handling
    BOOL success;
    NSError *error;
    // set the audioSession category.
    // Needs to be Record or PlayAndRecord to use VideoRecording mode:
    success = [session setCategory:AVAudioSessionCategoryPlayAndRecord
                             error:&error];
    if (!success) NSLog(@"AVAudioSession error setting category: %@", error);
    // set the audioSession mode
    success = [session setMode:AVAudioSessionModeVideoRecording error:&error];
    if (!success) NSLog(@"AVAudioSession error setting mode: %@", error);
    // activate the audio session
    success = [session setActive:YES error:&error];
    if (!success) NSLog(@"AVAudioSession error activating: %@", error);
    else NSLog(@"audioSession active");
}
The same functionality using Core Audio functions (iOS 5 and below); checkStatus is the error-handling function from your code sample:
- (void) configureAudioSession
{
    OSStatus status;
    // initialise the audio session
    status = AudioSessionInitialize(NULL                          // run loop
                                    , kCFRunLoopDefaultMode       // run loop mode
                                    , NULL                        // interruption listener
                                    , (__bridge void *)(self));   // user data
    checkStatus(status);
    // set the audio session category
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory
                                     , sizeof(category)
                                     , &category);
    checkStatus(status);
    // set the audio session mode (note: the property ID is
    // kAudioSessionProperty_Mode, not the mode constant itself)
    UInt32 mode = kAudioSessionMode_VideoRecording;
    status = AudioSessionSetProperty(kAudioSessionProperty_Mode
                                     , sizeof(mode)
                                     , &mode);
    checkStatus(status);
    // activate the audio session
    status = AudioSessionSetActive(true);
    checkStatus(status);
}
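If you want interruptions (phone calls, alarms) handled, pass a listener function instead of NULL as the third argument to AudioSessionInitialize. A minimal sketch (the name MyInterruptionListener is hypothetical; what you do inside depends on your audio code):
void MyInterruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    if (inInterruptionState == kAudioSessionBeginInterruption) {
        // the system has deactivated your session: stop your audio units here
    } else if (inInterruptionState == kAudioSessionEndInterruption) {
        // reactivate and resume once the interruption is over
        AudioSessionSetActive(true);
    }
}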
The reason you have been told to use VideoRecording mode is that it is the only mode that gives you any hope of directly selecting the rear mic. What it does is select the mic nearest to the video camera.
"On devices with more than one built-in microphone, the microphone closest to the video camera is used." (From Apple's AVSession Class Reference)
This suggests that the video camera will need to be active when using the mic, and that the choice of camera (front or back) is the parameter the system uses to select the appropriate microphone. It may be that video-free apps using the rear mic (such as your example) are in fact getting a video input stream from the rear camera and not doing anything with it. I am unable to test this as I do not have access to an iPhone 5. I do see that the "Babyscope" app you mentioned ships as an entirely different app on iOS 5 than on iOS 4.
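If that reading is correct, a possible workaround (untested, and the premise itself is speculation) would be to run an AVCaptureSession with the back camera attached alongside the audio input, so the system selects the mic nearest that camera:
NSError *error = nil;
AVCaptureSession *capture = [[AVCaptureSession alloc] init];
// find the back camera
AVCaptureDevice *backCamera = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if (device.position == AVCaptureDevicePositionBack) backCamera = device;
}
// attach the (otherwise unused) video input plus the default audio input
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio] error:&error];
if (videoIn && [capture canAddInput:videoIn]) [capture addInput:videoIn];
if (audioIn && [capture canAddInput:audioIn]) [capture addInput:audioIn];
[capture startRunning];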
The answer from Kuriakose is misleading: AudioChannelLayout is a description of an audio track, and it has no effect on the audio hardware used in capture. The answer from Sangony just shows us that Apple do not really want us to have full control over the hardware. Much of its audio management on iOS is an attempt to keep us away from direct control, in order to accommodate both user expectations (of audio I/O behaviour between apps) and hardware limitations when dealing with live signals.

AudioSessionProperty on iPhone

In my iPhone application I need to enable the loudspeaker and enable Bluetooth.
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error: nil];
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,
sizeof (audioRouteOverride), &audioRouteOverride);
UInt32 allowBluetoothInput = 1;
AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryEnableBluetoothInput,
sizeof (allowBluetoothInput),
&allowBluetoothInput
);
But they do not work together: if Bluetooth is on I hear the voice from the BT headset, but if it is off, the voice does not come from the loudspeaker. How can I resolve this issue?
Have you tried changeDefaultRoute instead of audioRouteOverride for the speaker?
UInt32 doChangeDefaultRoute = 1;
AudioSessionSetProperty(
kAudioSessionProperty_OverrideCategoryDefaultToSpeaker
, sizeof(doChangeDefaultRoute)
, &doChangeDefaultRoute);
This fixes the issue of output going to the receiver instead of the speaker, but I cannot test the Bluetooth input side (no Bluetooth device!). It may not work, as from what I can gather OverrideCategoryEnableBluetoothInput actually controls both input and output.
Apple says this:
This property affects the kAudioSessionCategory_PlayAndRecord category as follows: If the audio input to the device is coming from a Bluetooth headset, setting this property to TRUE results in audio output also going to the Bluetooth headset.
(Audio Session Services Reference)
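Putting the two together, a sketch of the combined setup might look like this (untested against Bluetooth hardware, as noted above):
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
// default output to the bottom speaker rather than the receiver
UInt32 doChangeDefaultRoute = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
                        sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
// let a Bluetooth headset take over input (and, per the quote, output)
UInt32 allowBluetoothInput = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryEnableBluetoothInput,
                        sizeof(allowBluetoothInput), &allowBluetoothInput);
[[AVAudioSession sharedInstance] setActive:YES error:nil];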
In general, Apple do not want apps to manipulate routing in ways that may be counter-intuitive to the user. The idea is that routing should be something that the user feels they have control over.

Recording and playing audio with iPod

I'm trying to record and play audio in a universal app. The AVAudioSession configuration is:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error: nil];
UInt32 category = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
This works fine on devices with a built-in microphone, but on a third-generation iPod touch I am experiencing random undesired behaviour. Sometimes everything works fine, and sometimes the app only works while the earphones with the integrated mic are plugged in; in that case it is not possible to hear any sound in the app without the earphones.
The 3rd-generation iPod touch doesn't have a built-in microphone; the only way to get one is to plug in headphones with a mic.
The only generation of iPod touch with a built-in microphone is the 4th.
You can check whether audio input is available:
UInt32 micConnected = 0;
UInt32 propertySize = sizeof(micConnected);
AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &propertySize, &micConnected);
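One way to use that check (a sketch; falling back to MediaPlayback when no input is present is my assumption, not part of the original answer) is to pick the session category accordingly, so a mic-less iPod touch still gets sound output:
UInt32 micConnected = 0;
UInt32 propertySize = sizeof(micConnected);
AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &propertySize, &micConnected);
// PlayAndRecord requires an input; fall back to plain playback without one
UInt32 category = micConnected ? kAudioSessionCategory_PlayAndRecord
                               : kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);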

AVAudioPlayer sound files playing out of sync when mixed

I am trying to play six separate .mp3 sound files using instances of AVAudioPlayer. The sounds play at the same time, but they seem to be out of sync or playing at slightly different speeds. Does anyone know why this may be?
Here is how I initialize a sound:
NSURL *musicurl = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"SoundLoop" ofType:@"mp3"]];
music = [[AVAudioPlayer alloc] initWithContentsOfURL:musicurl error:nil];
[musicurl release];
music.numberOfLoops = -1; // loop indefinitely
music.currentTime = 0;    // start at the beginning
music.volume = 1.0;
music.meteringEnabled = YES;
and here is my AudioSession code:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
audioSession.delegate = self;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
[[AVAudioSession sharedInstance] setPreferredHardwareSampleRate:44100 error:nil];
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:30 error:nil];
UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
Float32 hardvol = 1.0;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
AudioSessionSetProperty(kAudioSessionProperty_CurrentHardwareOutputVolume, sizeof(hardvol), &hardvol);
UInt32 doSetProperty = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty);
[[AVAudioSession sharedInstance] setActive:YES error: nil];
Could it possibly have anything to do with the bit rates of the sounds, or with the fact that I am using .mp3?
Thanks.
I found the solution to this problem by using the playAtTime: method of the AVAudioPlayer class:
NSTimeInterval shortStartDelay = 0.5; // seconds
NSTimeInterval now = player.deviceCurrentTime;
// these players are instances of AVAudioPlayer
[player playAtTime:now + shortStartDelay];
[player2 playAtTime:now + shortStartDelay];
[player3 playAtTime:now + shortStartDelay];
Scheduling every player against the same deviceCurrentTime makes all the sounds start together and stay in sync.
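It can also help to call prepareToPlay on each player before scheduling, so the buffers are already loaded when the shared start time arrives. A small sketch of that idea (assuming the three players above; this is my addition, not part of the original answer):
NSArray *players = [NSArray arrayWithObjects:player, player2, player3, nil];
for (AVAudioPlayer *p in players) {
    [p prepareToPlay]; // preload buffers so playback can start exactly on time
}
NSTimeInterval startTime = [(AVAudioPlayer *)[players objectAtIndex:0] deviceCurrentTime] + 0.5;
for (AVAudioPlayer *p in players) {
    [p playAtTime:startTime];
}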
You should check out the Multimedia Programming Guide. The "Using Audio" section has tons of helpful information.
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/MultimediaPG/Introduction/Introduction.html
This sounds like it relates to your issue:
When using hardware-assisted decoding, the device can play only a single instance of one of the supported formats at a time. For example, if you are playing a stereo MP3 sound using the hardware codec, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC or MP3 sound in the background, it has claimed the hardware codec; your application then plays AAC, ALAC, and MP3 audio using software decoding.
To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
Here's another bit that claims what you are doing should be possible, but it seems like Apple is throwing in a caveat with the "most processor-efficient multiple playback" line. I would think that if the processor is strained, that would lend itself to not keeping things perfectly in time.
Starting in iOS 3.0, nearly all supported audio formats can be used for simultaneous playback—namely, all those that can be played using software decoding, as described in Table 1-1. For the most processor-efficient multiple playback, use linear PCM (uncompressed) or IMA4 (compressed) audio.
In terms of debugging, you should start with two sounds and see if there is an issue, then work your way up to the six and figure out if there is a distinct point at which the problem starts to occur. I would also find (or make) some audio tracks in the formats that Apple recommends (PCM or IMA4) and do the same testing with those. Doing these two things should help you narrow down what the actual problem is.
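If you don't have test files in those formats handy, the afconvert tool that ships with the Mac developer tools can produce them from your MP3s, for example (file names here are placeholders):
afconvert -f caff -d ima4 SoundLoop.mp3 SoundLoop.caf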

AVAudioRecorder & AVAudioPlayer - Sound output on internal speaker, how to change?

I have a problem with AVAudioRecorder and AVAudioPlayer.
When I use the player and the recorder at the same time (e.g. to play sound while recording), the sound comes out of the quiet internal speaker. I searched Stack Overflow and all I found was this code:
UInt32 *audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
But this doesn't help me :(
When I copy-paste it, I get errors.
What can I do to record and play through the loud speaker at the bottom?
I don't use anything like SCListener or the like...
Thanks in advance
Max
This is a bit old, but this post helped me and I wanted to update it for anyone else who might need it in the future. The code posted at the top is correct: it will take the quiet audio that's being played through the phone's ear speaker and route it to the loudspeaker at the bottom. There is a minor typo in the code (the stray * in the UInt32 declaration), which is why it's giving errors. Here is the correct snippet which will solve this issue:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
Make sure you also activate the audio session right after setting this, before creating your audio player/recorder:
[[AVAudioSession sharedInstance] setActive:YES error:nil];
Last, if you're going to be playing and recording at the same time you'll probably need to set the category and mixing functions too. Here's the entire snippet which will set the category, enable mixing, route the audio to the main speaker, and activate the session. You'll want to do this only once right after the app launches.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(allowMixing), &allowMixing);
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
NSLog(#"Mixing: %x", propertySetError); // This should be 0 or there was an issue somewhere
[[AVAudioSession sharedInstance] setActive:YES error:nil];
Hope that helps someone!
I answered it here already: How to get AVAudioPlayer output to the speaker
In short, use this before recording:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
...and use this before playback (on loud speakers or headphones if they are plugged in)
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
The only thing I have found about this topic is this:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
which must be set when you record your audio if you want to play back at the same time. Give that a try and let me know.
P.S. Make sure you add the AudioToolbox and AVFoundation frameworks to your project and include them in your .m files.
If you're currently playing through the quiet speakers, and want to play through the loud speaker at the bottom of the iPhone, use this code:
UInt32 doChangeDefaultRoute = 1;
AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
sizeof (doChangeDefaultRoute),
&doChangeDefaultRoute
);
This is an old question, but none of the other answers helped me... However, I found a solution which I am posting for future reference in case someone needs it.
The solution is described in the following blog post: iOS: Force audio output to speakers while headphones are plugged in.
You need to create a new Objective-C class, AudioRouter, in your project. Then import AudioRouter.h into the header file of the class where you are initiating audio functionality. Now in the corresponding .m file, add the following lines within the viewDidLoad method:
AudioRouter *foobar = [[AudioRouter alloc] init];
[foobar initAudioSessionRouting];
[foobar forceOutputToBuiltInSpeakers];
Now you have audio (e.g. AVAudioPlayer) output forced to loudspeaker!
AudioSessionSetProperty has been deprecated since iOS 7, but the following worked for me:
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
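For completeness, a sketch of the full modern setup around that call (the category choice and activation mirror the older answers here; treat this as a sketch rather than a drop-in):
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
// PlayAndRecord as in the deprecated snippets above, then route to the loudspeaker
BOOL ok = [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]
       && [audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error]
       && [audioSession setActive:YES error:&error];
if (!ok) NSLog(@"AVAudioSession error: %@", error);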