How to determine sound recording source - iPhone

While recording a sound using my iPad app, how can I know if the source of the sound is from the built-in microphone or headphone microphone?
Additional information: iOS ver 4.2 and above.

The way to determine this is to query the audio session for its current audio route.
Use the AudioSessionGetProperty function with kAudioSessionProperty_AudioRoute to get the current route.
This example by @TPoschel should set you on the right track.
- (void)playSound:(id)sender
{
    if (player) {
        CFStringRef route;
        UInt32 propertySize = sizeof(CFStringRef);
        AudioSessionInitialize(NULL, NULL, NULL, NULL);
        AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &route);
        if ((route == NULL) || (CFStringGetLength(route) == 0)) {
            // Silent Mode
            NSLog(@"AudioRoute: SILENT");
        } else {
            NSString *routeStr = (NSString *)route;
            NSLog(@"AudioRoute: %@", routeStr);
            /* Known values of route:
             * "Headset"
             * "Headphone"
             * "Speaker"
             * "SpeakerAndMicrophone"
             * "HeadphonesAndMicrophone"
             * "HeadsetInOut"
             * "ReceiverAndMicrophone"
             * "Lineout"
             */
            NSRange headphoneRange = [routeStr rangeOfString:@"Headphone"];
            NSRange headsetRange = [routeStr rangeOfString:@"Headset"];
            NSRange receiverRange = [routeStr rangeOfString:@"Receiver"];
            NSRange speakerRange = [routeStr rangeOfString:@"Speaker"];
            NSRange lineoutRange = [routeStr rangeOfString:@"Lineout"];
            if (headphoneRange.location != NSNotFound) {
                // Don't change the route if the headphone is plugged in.
            } else if (headsetRange.location != NSNotFound) {
                // Don't change the route if the headset is plugged in.
            } else if (receiverRange.location != NSNotFound) {
                // Change to play on the speaker
                UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
                AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
            } else if (speakerRange.location != NSNotFound) {
                // Don't change the route if the speaker is currently playing.
            } else if (lineoutRange.location != NSNotFound) {
                // Don't change the route if the lineout is plugged in.
            } else {
                NSLog(@"Unknown audio route.");
            }
        }
        [player play];
    }
}

Related

Use built-in mic if Headset is plugged in

I am playing around with audio sessions in iOS and I want to use the built-in mic of the iPhone as the audio input route even if an external headset (including a mic) is plugged in. I'm able to detect if a headset is plugged in using the following code:
CFStringRef route;
UInt32 propertySize = sizeof(CFStringRef);
AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &route);
if ((route == NULL) || (CFStringGetLength(route) == 0)) {
    // Silent Mode
    NSLog(@"AudioRoute: SILENT");
} else {
    NSString *routeStr = (NSString *)route;
    NSLog(@"AudioRoute: %@", routeStr);
    NSRange headsetRange = [routeStr rangeOfString:@"Headset"];
    if (headsetRange.location != NSNotFound) {
        NSLog(@"Headset");
        // route Audio IN to built-in mic.
    }
    .... more code
So, any ideas how to do this?

Redirecting audio output to phone speaker and mic input to headphones

Is it possible to redirect audio output to the phone speaker and still use the microphone headphone input?
If I redirect the audio route to the phone speaker instead of the headphones, it also redirects the mic. This makes sense, but I can't seem to redirect only the mic input. Any ideas?
Here is the code I'm using to redirect audio to the speaker:
UInt32 doChangeDefaultRoute = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
NSAssert(propertySetError == 0, @"Failed to set audio session property: OverrideCategoryDefaultToSpeaker");
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
This is possible, but it's picky about how you set it up.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
It's very important to use AVAudioSessionCategoryPlayAndRecord or the route will fail to go to the speaker. Once you've set the override route for the audio session, you can use an AVAudioPlayer instance and send some output to the speaker.
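For reference, the sequence above can be wrapped into one helper. This is a minimal sketch assuming a view controller with access to a sound file URL; the method name playThroughSpeaker: and its parameter are illustrative, not from the original answer:

```objectivec
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Illustrative helper: the category must be PlayAndRecord, or the
// speaker override below is silently ignored.
- (void)playThroughSpeaker:(NSURL *)fileURL
{
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

    // Force output to the built-in (loud) speaker
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                            sizeof(audioRouteOverride), &audioRouteOverride);
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    // Any AVAudioPlayer output now goes to the speaker
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
    [player play];
}
```

Note that the ordering matters: set the category first, then the route override, then activate the session.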
Hope that works for others like it did for me. The documentation on this is scattered, but the Skype app proves it's possible. Persevere, my friends! :)
Some Apple documentation here: http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html
Do a search on the page for kAudioSessionProperty_OverrideAudioRoute
It doesn't look like it's possible, I'm afraid.
From the Audio Session Programming Guide - kAudioSessionProperty_OverrideAudioRoute
If a headset is plugged in at the time you set this property’s value
to kAudioSessionOverrideAudioRoute_Speaker, the system changes the
audio routing for input as well as for output: input comes from the
built-in microphone; output goes to the built-in speaker.
Possible duplicate of this question
What you can do is to force audio output to speakers in any case:
From UI Hacker - iOS: Force audio output to speakers while headphones are plugged in
@interface AudioRouter : NSObject
+ (void) initAudioSessionRouting;
+ (void) switchToDefaultHardware;
+ (void) forceOutputToBuiltInSpeakers;
@end
and
#import "AudioRouter.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

@implementation AudioRouter

#define IS_DEBUGGING NO
#define IS_DEBUGGING_EXTRA_INFO NO
+ (void) initAudioSessionRouting {
    // Called once to route all audio through speakers, even if something's plugged into the headphone jack
    static BOOL audioSessionSetup = NO;
    if (audioSessionSetup == NO) {
        // set category to accept properties assigned below
        NSError *sessionError = nil;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                         withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                                               error:&sessionError];
        // Doubly force audio to come out of speaker
        UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
        AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
        // fix issue with audio interrupting video recording - allow audio to mix on top of other media
        UInt32 doSetProperty = 1;
        AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty);
        // set active
        [[AVAudioSession sharedInstance] setDelegate:self];
        [[AVAudioSession sharedInstance] setActive:YES error:nil];
        // add listener for audio input changes
        AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, onAudioRouteChange, nil);
        AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, onAudioRouteChange, nil);
    }
    // Force audio to come out of speaker
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
    // set flag
    audioSessionSetup = YES;
}
+ (void) switchToDefaultHardware {
    // Remove forcing to built-in speaker
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
}

+ (void) forceOutputToBuiltInSpeakers {
    // Re-force audio to come out of speaker
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
}
void onAudioRouteChange(void *clientData, AudioSessionPropertyID inID, UInt32 dataSize, const void *inData) {
    if (IS_DEBUGGING == YES) {
        NSLog(@"==== Audio Hardware Status ====");
        NSLog(@"Current Input: %@", [AudioRouter getAudioSessionInput]);
        NSLog(@"Current Output: %@", [AudioRouter getAudioSessionOutput]);
        NSLog(@"Current hardware route: %@", [AudioRouter getAudioSessionRoute]);
        NSLog(@"==============================");
    }
    if (IS_DEBUGGING_EXTRA_INFO == YES) {
        NSLog(@"==== Audio Hardware Status (EXTENDED) ====");
        CFDictionaryRef dict = (CFDictionaryRef)inData;
        CFNumberRef reason = CFDictionaryGetValue(dict, kAudioSession_RouteChangeKey_Reason);
        CFDictionaryRef oldRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_PreviousRouteDescription);
        CFDictionaryRef newRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_CurrentRouteDescription);
        NSLog(@"Audio old route: %@", oldRoute);
        NSLog(@"Audio new route: %@", newRoute);
        NSLog(@"=========================================");
    }
}
+ (NSString*) getAudioSessionInput {
    UInt32 routeSize;
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
    CFDictionaryRef desc; // this is the dictionary to contain descriptions
    // make the call to get the audio description and populate the desc dictionary
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
    // the dictionary contains 2 keys, for input and output. Get the input array
    CFArrayRef inputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Inputs);
    // the input array contains 1 element - a dictionary
    CFDictionaryRef diction = CFArrayGetValueAtIndex(inputs, 0);
    // get the input description from the dictionary
    CFStringRef input = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type);
    return [NSString stringWithFormat:@"%@", input];
}
+ (NSString*) getAudioSessionOutput {
    UInt32 routeSize;
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
    CFDictionaryRef desc; // this is the dictionary to contain descriptions
    // make the call to get the audio description and populate the desc dictionary
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
    // the dictionary contains 2 keys, for input and output. Get the output array
    CFArrayRef outputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Outputs);
    // the output array contains 1 element - a dictionary
    CFDictionaryRef diction = CFArrayGetValueAtIndex(outputs, 0);
    // get the output description from the dictionary
    CFStringRef output = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type);
    return [NSString stringWithFormat:@"%@", output];
}
+ (NSString*) getAudioSessionRoute {
    /*
     returns the current session route:
     * ReceiverAndMicrophone
     * HeadsetInOut
     * Headset
     * HeadphonesAndMicrophone
     * Headphone
     * SpeakerAndMicrophone
     * Speaker
     * HeadsetBT
     * LineInOut
     * Lineout
     * Default
     */
    UInt32 rSize = sizeof(CFStringRef);
    CFStringRef route;
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &rSize, &route);
    if (route == NULL) {
        NSLog(@"Silent switch is currently on");
        return @"None";
    }
    return [NSString stringWithFormat:@"%@", route];
}

@end

Could not start Audio Queue Error starting recording

CFStringRef state;
UInt32 propertySize = sizeof(CFStringRef);
// AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &state);
if (CFStringGetLength(state) == 0)
// if (state == 0)
{   // SILENT
    NSLog(@"Silent switch is on");
    // create vibrate
    // AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
    UInt32 audioCategory = kAudioSessionCategory_MediaPlayback;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(UInt32), &audioCategory);
}
else {  // NOT SILENT
    NSLog(@"Silent switch is off");
}
Wherever I use the above code, I am able to play a sound file in silent mode. But after playing the recorded sound file in silent mode, when I try to record voice again I get an error like:
2010-12-08 13:29:56.710 VoiceRecorder[382:307] -66681
Could not start Audio Queue
Error starting recording
Here is the code:
// file url
[self setupAudioFormat:&recordState.dataFormat];
CFURLRef fileURL = CFURLCreateFromFileSystemRepresentation(NULL, (const UInt8 *)[filePath UTF8String], [filePath length], NO);
// recordState.currentPacket = 0;

// new input queue
OSStatus status;
status = AudioQueueNewInput(&recordState.dataFormat, HandleInputBuffer, &recordState, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &recordState.queue);
if (status) { CFRelease(fileURL); printf("Could not establish new queue\n"); return NO; }

// create new audio file
status = AudioFileCreateWithURL(fileURL, kAudioFileAIFFType, &recordState.dataFormat, kAudioFileFlags_EraseFile, &recordState.audioFile);
CFRelease(fileURL); // thanks august joki
if (status) { printf("Could not create file to record audio\n"); return NO; }

// figure out the buffer size
DeriveBufferSize(recordState.queue, recordState.dataFormat, 0.5, &recordState.bufferByteSize);

// allocate those buffers and enqueue them
for (int i = 0; i < NUM_BUFFERS; i++)
{
    status = AudioQueueAllocateBuffer(recordState.queue, recordState.bufferByteSize, &recordState.buffers[i]);
    if (status) { printf("Error allocating buffer %d\n", i); return NO; }
    status = AudioQueueEnqueueBuffer(recordState.queue, recordState.buffers[i], 0, NULL);
    if (status) { printf("Error enqueuing buffer %d\n", i); return NO; }
}

// enable metering
UInt32 enableMetering = YES;
status = AudioQueueSetProperty(recordState.queue, kAudioQueueProperty_EnableLevelMetering, &enableMetering, sizeof(enableMetering));
if (status) { printf("Could not enable metering\n"); return NO; }

// start recording
status = AudioQueueStart(recordState.queue, NULL); // status = 0; NSLog(@"%d", status);
if (status) { printf("Could not start Audio Queue\n"); return NO; }
recordState.currentPacket = 0;
recordState.recording = YES;
return YES;
I get the error here.
I was facing a similar problem in iOS 7.1. Add the following in the AppDelegate's didFinishLaunchingWithOptions:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[audioSession setActive:YES error:nil];
EDIT: The above code is working for me.

Detect if headphones (not microphone) are plugged in to an iOS device [duplicate]

This question already has answers here:
Are headphones plugged in? iOS7
(8 answers)
Closed 6 years ago.
I need to change my audio depending on whether or not headphones are plugged in. I'm aware of kAudioSessionProperty_AudioInputAvailable, which will tell me if there's a microphone, but I'd like to test for any headphones, not just headphones with a built-in microphone. Is this possible?
Here is a method of my own which is a slightly modified version of one found on this site : http://www.iphonedevsdk.com/forum/iphone-sdk-development/9982-play-record-same-time.html
- (BOOL)isHeadsetPluggedIn {
    UInt32 routeSize = sizeof(CFStringRef);
    CFStringRef route;
    OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute,
                                             &routeSize,
                                             &route);
    /* Known values of route:
     * "Headset"
     * "Headphone"
     * "Speaker"
     * "SpeakerAndMicrophone"
     * "HeadphonesAndMicrophone"
     * "HeadsetInOut"
     * "ReceiverAndMicrophone"
     * "Lineout"
     */
    if (!error && (route != NULL)) {
        NSString *routeStr = (NSString *)route;
        NSRange headphoneRange = [routeStr rangeOfString:@"Head"];
        if (headphoneRange.location != NSNotFound) return YES;
    }
    return NO;
}
Here's a solution based on rob mayoff's comment:
- (BOOL)isHeadsetPluggedIn
{
    AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
    BOOL headphonesLocated = NO;
    for (AVAudioSessionPortDescription *portDescription in route.outputs) {
        headphonesLocated |= [portDescription.portType isEqualToString:AVAudioSessionPortHeadphones];
    }
    return headphonesLocated;
}
Simply link to the AVFoundation framework.
Just a heads-up for any future readers of this post:
Most of the AudioToolbox audio session functions have been deprecated since the release of iOS 7 without a direct replacement, so audio route listeners written this way are now largely redundant.
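For what it's worth, the AVAudioSession notification API covers much of the same ground as the deprecated property listeners. A minimal sketch of observing route changes this way; the class RouteWatcher and selector routeChanged: are illustrative names:

```objectivec
#import <AVFoundation/AVFoundation.h>

@interface RouteWatcher : NSObject
@end

@implementation RouteWatcher

- (instancetype)init
{
    if ((self = [super init])) {
        // AVAudioSessionRouteChangeNotification replaces the deprecated
        // kAudioSessionProperty_AudioRouteChange property listener.
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(routeChanged:)
                                                     name:AVAudioSessionRouteChangeNotification
                                                   object:nil];
    }
    return self;
}

- (void)routeChanged:(NSNotification *)note
{
    // The reason for the change (device plugged in / unplugged, category change, ...)
    NSUInteger reason = [note.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
        NSLog(@"Previous output (e.g. headphones) became unavailable");
    }
    NSLog(@"New route: %@", [[AVAudioSession sharedInstance] currentRoute]);
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}

@end
```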
I started with the code given above by jpsetung, but there were a few issues with it for my use case:
No evidence of something called kAudioSessionProperty_AudioRoute in the docs
Leaks route
No audio session check
String check for headphones instead of logical awareness of categories
I was more interested in whether the iPhone was using its speakers, with "headphones" meaning "anything other than speakers". I feel that leaving out options like "bluetooth", "airplay", or "lineout" was dangerous.
This implementation broadens the check to allow for any type of specified output:
BOOL isAudioRouteAvailable(CFStringRef routeType)
{
    /*
     As of iOS 5:
     kAudioSessionOutputRoute_LineOut;
     kAudioSessionOutputRoute_Headphones;
     kAudioSessionOutputRoute_BluetoothHFP;
     kAudioSessionOutputRoute_BluetoothA2DP;
     kAudioSessionOutputRoute_BuiltInReceiver;
     kAudioSessionOutputRoute_BuiltInSpeaker;
     kAudioSessionOutputRoute_USBAudio;
     kAudioSessionOutputRoute_HDMI;
     kAudioSessionOutputRoute_AirPlay;
     */

    //Prep
    BOOL foundRoute = NO;
    CFDictionaryRef description = NULL;

    //Session
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        AudioSessionInitialize(NULL, NULL, NULL, NULL);
    });

    //Property
    UInt32 propertySize;
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &propertySize);
    OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &propertySize, &description);
    if ( !error && description ) {
        CFArrayRef outputs = CFDictionaryGetValue(description, kAudioSession_AudioRouteKey_Outputs);
        if ( outputs ) {
            CFIndex count = CFArrayGetCount(outputs);
            for (CFIndex i = 0; i < count; i++) {
                CFDictionaryRef route = CFArrayGetValueAtIndex(outputs, i);
                CFStringRef type = CFDictionaryGetValue(route, kAudioSession_AudioRouteKey_Type);
                NSLog(@"Got audio route %@", type);
                //Audio route type
                if ( CFStringCompare(type, routeType, 0) == kCFCompareEqualTo ) {
                    foundRoute = YES;
                    break;
                }
            }
        }
    } else if ( error ) {
        NSLog(@"Audio route error %ld", (long)error);
    }

    //Cleanup
    if ( description ) {
        CFRelease(description);
    }

    //Done
    return foundRoute;
}
Used like so:
if ( isAudioRouteAvailable(kAudioSessionOutputRoute_BuiltInSpeaker) ) {
    //Do great things...
}

Detecting if headphones are plugged into iPhone

Does anyone know if you can detect whether headphones are plugged into the iPhone and, if they aren't, disable sound from your application?
I think I could manage disabling the sound, but I have yet to find anything on the detection part.
Thanks
With this code you can detect the changes between:
MicrophoneWired
Headphone
LineOut
Speaker
Detecting when an iOS Device connector was plugged/unplugged
Note: since iOS 5, part of the audioRouteChangeListenerCallback(...) behavior is deprecated, but you can update it with:
// kAudioSession_AudioRouteChangeKey_PreviousRouteDescription -> Previous route
// kAudioSession_AudioRouteChangeKey_CurrentRouteDescription  -> Current route
CFDictionaryRef newRouteRef = CFDictionaryGetValue(routeChangeDictionary, kAudioSession_AudioRouteChangeKey_CurrentRouteDescription);
NSDictionary *newRouteDict = (NSDictionary *)newRouteRef;
// RouteDetailedDescription_Outputs -> Output
// RouteDetailedDescription_Inputs  -> Input
NSArray *paths = [[newRouteDict objectForKey:@"RouteDetailedDescription_Outputs"] count] ? [newRouteDict objectForKey:@"RouteDetailedDescription_Outputs"] : [newRouteDict objectForKey:@"RouteDetailedDescription_Inputs"];
NSString *newRouteString = [[paths objectAtIndex:0] objectForKey:@"RouteDetailedDescription_PortType"];
// newRouteString -> MicrophoneWired, Speaker, LineOut, Headphone
Greetings
http://developer.apple.com/iphone/library/samplecode/SpeakHere/Introduction/Intro.html
In this project there is a code snippet that pauses recording if the headphones are unplugged. Maybe you can use it to achieve your result.
Good luck!
(edit)
You will have to study the SpeakHereController.mm file.
I found this code in the awakeFromNib method
// we do not want to allow recording if input is not available
error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", error);
btn_record.enabled = (inputAvailable) ? YES : NO;
// we also need to listen to see if input availability changes
error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);
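The propListener function registered above is not shown in the snippet. A minimal sketch of what such a listener might look like; the body here is illustrative, not taken from the SpeakHere sample, but the signature matches what AudioSessionAddPropertyListener requires:

```objectivec
#import <AudioToolbox/AudioToolbox.h>
#import <Foundation/Foundation.h>

// Illustrative property listener: invoked when audio input availability changes.
void propListener(void *inClientData, AudioSessionPropertyID inID,
                  UInt32 inDataSize, const void *inData)
{
    if (inID == kAudioSessionProperty_AudioInputAvailable && inDataSize == sizeof(UInt32)) {
        UInt32 available = *(const UInt32 *)inData;
        // e.g. enable/disable the record button via inClientData (the `self`
        // pointer passed at registration time)
        NSLog(@"Audio input is now %@", available ? @"available" : @"unavailable");
    }
}
```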
Here is a solution that may be helpful to you. Before using the method below, also add these two lines:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
- (void)isHeadsetPluggedIn {
    UInt32 routeSize = sizeof(CFStringRef);
    CFStringRef route;
    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &routeSize, &route);
    /* Known values of route:
     "Headset"
     "Headphone"
     "Speaker"
     "SpeakerAndMicrophone"
     "HeadphonesAndMicrophone"
     "HeadsetInOut"
     "ReceiverAndMicrophone"
     "Lineout" */
    NSString *routeStr = (NSString *)route;
    NSRange headsetRange = [routeStr rangeOfString:@"Headset"];
    NSRange receiverRange = [routeStr rangeOfString:@"Receiver"];
    if (headsetRange.location != NSNotFound) {
        // Don't change the route if the headset is plugged in.
        NSLog(@"headset is plugged in");
    } else if (receiverRange.location != NSNotFound) {
        // Change to play on the speaker
        NSLog(@"play on the speaker");
    } else {
        NSLog(@"Unknown audio route.");
    }
}
To perform a one-off check to determine if headphones are plugged in (rather than setting a callback for when they're unplugged), I found the following works in iOS 5 and above:
- (BOOL) isAudioJackPlugged
{
    // initialise the audio session - this should only be done once - so move this line to your AppDelegate
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    UInt32 routeSize;
    // oddly, omitting this call caused an error
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
    CFDictionaryRef desc; // this is the dictionary to contain descriptions
    // make the call to get the audio description and populate the desc dictionary
    AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
    // the dictionary contains 2 keys, for input and output. Get the output array
    CFArrayRef outputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Outputs);
    // the output array contains 1 element - a dictionary
    CFDictionaryRef dict = CFArrayGetValueAtIndex(outputs, 0);
    // get the output description from the dictionary
    CFStringRef output = CFDictionaryGetValue(dict, kAudioSession_AudioRouteKey_Type);
    /**
     These are the possible output types:
     kAudioSessionOutputRoute_LineOut
     kAudioSessionOutputRoute_Headphones
     kAudioSessionOutputRoute_BluetoothHFP
     kAudioSessionOutputRoute_BluetoothA2DP
     kAudioSessionOutputRoute_BuiltInReceiver
     kAudioSessionOutputRoute_BuiltInSpeaker
     kAudioSessionOutputRoute_USBAudio
     kAudioSessionOutputRoute_HDMI
     kAudioSessionOutputRoute_AirPlay
     */
    return CFStringCompare(output, kAudioSessionOutputRoute_Headphones, 0) == kCFCompareEqualTo;
}
For those keeping score at home, that's a string in a dictionary in an array in a dictionary.