Is it possible to 'duck' an AudioSession whilst using OpenAL? - iPhone

Does anybody know if this is possible?
I have my audio session and OpenAL set-up like so:
// Allow their music to play in the background
AudioSessionInitialize(NULL, NULL, openALInterruptionListener, (__bridge void *)(self));
UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
UInt32 allowMixing = false;
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
// use the device to make a context
_mContext = alcCreateContext(_mDevice, NULL);
// set my context to the currently active one
alcMakeContextCurrent(_mContext);
And I have ducking set-up like so:
- (void)setSoundDucked:(BOOL)soundDucked
{
if(soundDucked)
{
UInt32 allowMixing = true;
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
AudioSessionSetActive(false);
AudioSessionSetActive(true);
}
else
{
UInt32 allowMixing = false;
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
AudioSessionSetActive(false);
AudioSessionSetActive(true);
}
}
However, sound doesn't duck. It will only duck if I comment out the following lines:
// use the device to make a context
_mContext = alcCreateContext(_mDevice, NULL);
// set my context to the currently active one
alcMakeContextCurrent(_mContext);
Is there any way of getting OpenAL to play nicely with the audio ducking property?
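One possible workaround, sketched here as an untested assumption rather than a confirmed fix: detach and suspend the OpenAL context before cycling the audio session, so the session can deactivate and reactivate with the new ducking value, then restore the context. The calls below are standard OpenAL and Audio Session API calls; only the ordering is my guess.
- (void)setSoundDucked:(BOOL)soundDucked
{
    // Detach our OpenAL context while the session is reconfigured
    alcMakeContextCurrent(NULL);
    alcSuspendContext(_mContext);

    UInt32 shouldDuck = soundDucked ? true : false;
    AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(shouldDuck), &shouldDuck);

    // Cycle the session so the new ducking value takes effect
    AudioSessionSetActive(false);
    AudioSessionSetActive(true);

    // Re-attach the context and resume rendering
    alcMakeContextCurrent(_mContext);
    alcProcessContext(_mContext);
}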

Related

Detect plugIn or unplug of headphone jack from iPhone when my app is in background mode

I want to notify the user when the headphone jack is plugged in or unplugged on an iPhone/iPod/iPad while my app is in background mode.
Here is the code I have, which detects this in the foreground:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, audioSessionPropertyListener, nil);
}
BOOL isHeadsetPluggedIn()
{
UInt32 routeSize = sizeof (CFStringRef);
CFStringRef route;
OSStatus error = AudioSessionGetProperty (kAudioSessionProperty_AudioRoute,
&routeSize,
&route
);
NSLog(#"%#", route);
return (!error && (route != NULL) && ([( NSString*)route rangeOfString:#"Head"].location != NSNotFound));
}
void audioSessionPropertyListener(void* inClientData, AudioSessionPropertyID inID,UInt32 inDataSize, const void* inData)
{
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
// Determines the reason for the route change, to ensure that it is not
// because of a category change.
CFDictionaryRef routeChangeDictionary = inData;
CFNumberRef routeChangeReasonRef = CFDictionaryGetValue (routeChangeDictionary,CFSTR (kAudioSession_AudioRouteChangeKey_Reason));
SInt32 routeChangeReason;
CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);
// "Old device unavailable" indicates that a headset was unplugged, or that the
// device was removed from a dock connector that supports audio output.
// if (routeChangeReason != kAudioSessionRouteChangeReason_OldDeviceUnavailable)
// return;
if (!isHeadsetPluggedIn())
{
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
NSLog(#"With out headPhone");
}
else
{
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
NSLog(#"headPhone");
}
}
You could try this from the Apple Docs:
- (void)applicationDidEnterBackground:(UIApplication *)application
{
bgTask = [application beginBackgroundTaskWithExpirationHandler:^{
// Clean up any unfinished task business by marking where you
// stopped or ending the task outright.
[application endBackgroundTask:bgTask];
bgTask = UIBackgroundTaskInvalid;
}];
// Start the long-running task and return immediately.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// Do the work associated with the task, preferably in chunks.
[application endBackgroundTask:bgTask];
bgTask = UIBackgroundTaskInvalid;
});
}
Apple covers this in their docs here: Background Execution and Multitasking
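As a further sketch (my addition, not part of the original answer), on iOS 6 and later you can also observe route changes through AVFoundation; provided the app declares the audio background mode and keeps audio playing on an active session, the notification keeps arriving in the background. The notification name, key, and reason constants below exist in AVFoundation; everything else is an assumption about your app.
// Hypothetical sketch: observe route changes via AVFoundation (iOS 6+).
// Requires the "audio" UIBackgroundModes entry and ongoing playback on an
// active session for the app to keep running in the background.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];

[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionRouteChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    NSUInteger reason = [note.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
        NSLog(@"Headphones unplugged");
    } else if (reason == AVAudioSessionRouteChangeReasonNewDeviceAvailable) {
        NSLog(@"Headphones plugged in");
    }
}];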

Application turns off playing music when launched

Hi, I have set my app's audio session category to AmbientSound.
When I launch the app for the first time, it kills the music that is already playing.
I don't want this to happen.
Is there another way to set this up?
Try this:
Activate the audio session:
OSStatus activationResult = noErr;
activationResult = AudioSessionSetActive (true);
Test if other audio is playing
UInt32 otherAudioIsPlaying; // 1
UInt32 propertySize = sizeof (otherAudioIsPlaying);
AudioSessionGetProperty ( // 2
kAudioSessionProperty_OtherAudioIsPlaying,
&propertySize,
&otherAudioIsPlaying
);
if (otherAudioIsPlaying) { // 3
[[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryAmbient
error: nil];
} else {
[[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategorySoloAmbient
error: nil];
}
If YES, allow mixing
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryMixWithOthers, // 1
sizeof (allowMixing), // 2
&allowMixing // 3
);
Source : http://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html
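Putting the pieces above together, a minimal sketch (my consolidation, not code from the linked guide):
UInt32 otherAudioIsPlaying = 0;
UInt32 propertySize = sizeof(otherAudioIsPlaying);
AudioSessionGetProperty(kAudioSessionProperty_OtherAudioIsPlaying, &propertySize, &otherAudioIsPlaying);

// Pick a category based on whether other audio is already playing
NSString *category = otherAudioIsPlaying ? AVAudioSessionCategoryAmbient : AVAudioSessionCategorySoloAmbient;
[[AVAudioSession sharedInstance] setCategory:category error:nil];

[[AVAudioSession sharedInstance] setActive:YES error:nil];
// The MixWithOthers override above is only needed if you later switch to a
// non-mixable category such as MediaPlayback; Ambient already mixes.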

Redirecting audio output to phone speaker and mic input to headphones

Is it possible to redirect audio output to the phone speaker and still use the microphone headphone input?
If i redirect the audio route to the phone speaker instead of the headphones it also redirects the mic. This makes sense but I can't seem to just be able to just redirect the mic input? Any ideas?
Here is the code I'm using to redirect audio to the speaker:
UInt32 doChangeDefaultRoute = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
NSAssert(propertySetError == 0, @"Failed to set audio session property: OverrideCategoryDefaultToSpeaker");
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
This is possible, but it's picky about how you set it up.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
It's very important to use AVAudioSessionCategoryPlayAndRecord or the route will fail to go to the speaker. Once you've set the override route for the audio session, you can use an AVAudioPlayer instance and send some output to the speaker.
Hope that works for others like it did for me. The documentation on this is scattered, but the Skype app proves it's possible. Persevere, my friends! :)
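For example, a minimal sketch following that recipe (the sound file name is a placeholder, not from the original answer):
// PlayAndRecord category, override the route to the speaker, then play.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);

// "beep.caf" is hypothetical; any short sound in the app bundle will do.
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"beep" withExtension:@"caf"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:nil];
[player play];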
Some Apple documentation here: http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html
Do a search on the page for kAudioSessionProperty_OverrideAudioRoute
It doesn't look like it's possible, I'm afraid.
From the Audio Session Programming Guide - kAudioSessionProperty_OverrideAudioRoute
If a headset is plugged in at the time you set this property’s value
to kAudioSessionOverrideAudioRoute_Speaker, the system changes the
audio routing for input as well as for output: input comes from the
built-in microphone; output goes to the built-in speaker.
Possible duplicate of this question
What you can do is to force audio output to speakers in any case:
From UI Hacker - iOS: Force audio output to speakers while headphones are plugged in
@interface AudioRouter : NSObject
+ (void) initAudioSessionRouting;
+ (void) switchToDefaultHardware;
+ (void) forceOutputToBuiltInSpeakers;
@end
and
#import "AudioRouter.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@implementation AudioRouter
#define IS_DEBUGGING NO
#define IS_DEBUGGING_EXTRA_INFO NO
+ (void) initAudioSessionRouting {
// Called once to route all audio through speakers, even if something's plugged into the headphone jack
static BOOL audioSessionSetup = NO;
if (audioSessionSetup == NO) {
// set category to accept properties assigned below
NSError *sessionError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error: &sessionError];
// Doubly force audio to come out of speaker
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
// fix issue with audio interrupting video recording - allow audio to mix on top of other media
UInt32 doSetProperty = 1;
AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty);
// set active
[[AVAudioSession sharedInstance] setDelegate:self];
[[AVAudioSession sharedInstance] setActive: YES error: nil];
// add listener for audio input changes
AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, onAudioRouteChange, nil );
AudioSessionAddPropertyListener (kAudioSessionProperty_AudioInputAvailable, onAudioRouteChange, nil );
}
// Force audio to come out of speaker
[[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
// set flag
audioSessionSetup = YES;
}
+ (void) switchToDefaultHardware {
// Remove forcing to built-in speaker
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
}
+ (void) forceOutputToBuiltInSpeakers {
// Re-force audio to come out of speaker
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
}
void onAudioRouteChange (void* clientData, AudioSessionPropertyID inID, UInt32 dataSize, const void* inData) {
if( IS_DEBUGGING == YES ) {
NSLog(#"==== Audio Harware Status ====");
NSLog(#"Current Input: %#", [AudioRouter getAudioSessionInput]);
NSLog(#"Current Output: %#", [AudioRouter getAudioSessionOutput]);
NSLog(#"Current hardware route: %#", [AudioRouter getAudioSessionRoute]);
NSLog(#"==============================");
}
if( IS_DEBUGGING_EXTRA_INFO == YES ) {
NSLog(#"==== Audio Harware Status (EXTENDED) ====");
CFDictionaryRef dict = (CFDictionaryRef)inData;
CFNumberRef reason = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_Reason);
CFDictionaryRef oldRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_PreviousRouteDescription);
CFDictionaryRef newRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_CurrentRouteDescription);
NSLog(#"Audio old route: %#", oldRoute);
NSLog(#"Audio new route: %#", newRoute);
NSLog(#"=========================================");
}
}
+ (NSString*) getAudioSessionInput {
UInt32 routeSize;
AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
CFDictionaryRef desc; // this is the dictionary to contain descriptions
// make the call to get the audio description and populate the desc dictionary
AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
// the dictionary contains 2 keys, for input and output. Get the input array
CFArrayRef inputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Inputs);
// the input array contains 1 element - a dictionary
CFDictionaryRef diction = CFArrayGetValueAtIndex(inputs, 0);
// get the input description from the dictionary
CFStringRef input = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type);
return [NSString stringWithFormat:@"%@", input];
}
+ (NSString*) getAudioSessionOutput {
UInt32 routeSize;
AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
CFDictionaryRef desc; // this is the dictionary to contain descriptions
// make the call to get the audio description and populate the desc dictionary
AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
// the dictionary contains 2 keys, for input and output. Get output array
CFArrayRef outputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Outputs);
// the output array contains 1 element - a dictionary
CFDictionaryRef diction = CFArrayGetValueAtIndex(outputs, 0);
// get the output description from the dictionary
CFStringRef output = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type);
return [NSString stringWithFormat:#"%#", output];
}
+ (NSString*) getAudioSessionRoute {
/*
returns the current session route:
* ReceiverAndMicrophone
* HeadsetInOut
* Headset
* HeadphonesAndMicrophone
* Headphone
* SpeakerAndMicrophone
* Speaker
* HeadsetBT
* LineInOut
* Lineout
* Default
*/
UInt32 rSize = sizeof (CFStringRef);
CFStringRef route;
AudioSessionGetProperty (kAudioSessionProperty_AudioRoute, &rSize, &route);
if (route == NULL) {
NSLog(#"Silent switch is currently on");
return #"None";
}
return [NSString stringWithFormat:#"%#", route];
}
@end
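Usage is then just (a minimal sketch, assuming the AudioRouter class above is compiled into the project):
// e.g. early in application:didFinishLaunchingWithOptions:
[AudioRouter initAudioSessionRouting];
// ...later, to toggle the route:
[AudioRouter forceOutputToBuiltInSpeakers];
[AudioRouter switchToDefaultHardware];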

Could not start Audio Queue Error starting recording

CFStringRef state;
UInt32 propertySize = sizeof(CFStringRef);
// AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &state);
if(CFStringGetLength(state) == 0)
// if(state == 0)
{ //SILENT
NSLog(#"Silent switch is on");
// create vibrate
// AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
UInt32 audioCategory = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty( kAudioSessionProperty_AudioCategory, sizeof(UInt32), &audioCategory);
}
else { //NOT SILENT
NSLog(#"Silent switch is off");
}
Wherever I use the above code I am able to play the sound file in silent mode,
but after playing the recorded sound file in silent mode, when I try to record voice again
I get an error like:
2010-12-08 13:29:56.710 VoiceRecorder[382:307] -66681
Could not start Audio Queue
Error starting recording
here is the code
// file url
[self setupAudioFormat:&recordState.dataFormat];
CFURLRef fileURL = CFURLCreateFromFileSystemRepresentation(NULL, (const UInt8 *) [filePath UTF8String], [filePath length], NO);
// recordState.currentPacket = 0;
// new input queue
OSStatus status;
status = AudioQueueNewInput(&recordState.dataFormat, HandleInputBuffer, &recordState, CFRunLoopGetCurrent(),kCFRunLoopCommonModes, 0, &recordState.queue);
if (status) {CFRelease(fileURL); printf("Could not establish new queue\n"); return NO;}
// create new audio file
status = AudioFileCreateWithURL(fileURL, kAudioFileAIFFType, &recordState.dataFormat, kAudioFileFlags_EraseFile, &recordState.audioFile); CFRelease(fileURL); // thanks august joki
if (status) {printf("Could not create file to record audio\n"); return NO;}
// figure out the buffer size
DeriveBufferSize(recordState.queue, recordState.dataFormat, 0.5, &recordState.bufferByteSize); // allocate those buffers and enqueue them
for(int i = 0; i < NUM_BUFFERS; i++)
{
status = AudioQueueAllocateBuffer(recordState.queue, recordState.bufferByteSize, &recordState.buffers[i]);
if (status) {printf("Error allocating buffer %d\n", i); return NO;}
status = AudioQueueEnqueueBuffer(recordState.queue, recordState.buffers[i], 0, NULL);
if (status) {printf("Error enqueuing buffer %d\n", i); return NO;}
} // enable metering
UInt32 enableMetering = YES;
status = AudioQueueSetProperty(recordState.queue, kAudioQueueProperty_EnableLevelMetering, &enableMetering,sizeof(enableMetering));
if (status) {printf("Could not enable metering\n"); return NO;}
// start recording
status = AudioQueueStart(recordState.queue, NULL); // status = 0; NSLog(@"%d", status);
if (status) {printf("Could not start Audio Queue\n"); return NO;}
recordState.currentPacket = 0;
recordState.recording = YES;
return YES;
I get the error here.
I was facing a similar problem on iOS 7.1. Add the following in the AppDelegate's didFinishLaunchingWithOptions:
AVAudioSession * audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error: nil];
[audioSession setActive:YES error: nil];
EDIT: The above code is working for me.
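A further sketch (my assumption, not part of the original answer): the question's silent-mode branch switches the session category to kAudioSessionCategory_MediaPlayback, which provides no audio input, so restoring a record-capable category before restarting the queue may also be necessary:
// Hypothetical: restore a record-capable category before recording again.
UInt32 recordCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(recordCategory), &recordCategory);
AudioSessionSetActive(true);
// ...then create the queue and call AudioQueueStart as in the question.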

Why does this Audio Unit RemoteIO initialisation work on iPhone but not in simulator?

I am using the Audio Unit services to set up an output rendering callback so I can mix together synthesized audio. The code I have seems to work perfectly on the devices I have (iPod Touch, iPhone 3G, and iPad) but fails to work on the simulator.
On the simulator, the AudioUnitInitialize function fails and returns a value of -10851 (kAudioUnitErr_InvalidPropertyValue according to Apple documentation).
Here is my initialisation code. Does anyone with more experience with this API see anything I'm doing incorrectly here?
#define kOutputBus 0
#define kInputBus 1
...
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags* ioActionFlags,
const AudioTimeStamp* inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList* ioData)
{
// Mix audio here - but it never gets here on the simulator
return noErr;
}
...
{
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &m_audio_unit);
if(status != noErr) {
NSLog(#"Failed to get audio component instance: %d", status);
}
// Enable IO for playback
UInt32 flag = 1;
status = AudioUnitSetProperty(m_audio_unit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
if(status != noErr) {
NSLog(#"Failed to enable audio i/o for playback: %d", status);
}
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 4;
audioFormat.mBytesPerFrame = 4;
// Apply format
status = AudioUnitSetProperty(m_audio_unit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
if(status != noErr) {
NSLog(#"Failed to set format descriptor: %d", status);
}
// Set output callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(m_audio_unit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
if(status != noErr) {
NSLog(#"Failed to set output callback: %d", status);
}
// Initialize (This is where it fails on the simulator)
status = AudioUnitInitialize(m_audio_unit);
if(status != noErr) {
NSLog(#"Failed to initialise audio unit: %d", status);
}
}
My XCode version is 3.2.2 (64 bit)
My Simulator version is 3.2 (Though the same issue occurs in 3.1.3 Debug or Release)
Thanks, I appreciate it!
Compiling for a device and for the simulator are quite different things. Most common things behave the same, for example loading a view, switching between views, playing sounds, and so on. However, when it comes to other things, such as playing sound with OpenAL, loading 10 buffers, and then switching between them, the simulator often cannot handle what the devices can.
The way I see it, as long as it works on the device, that's all I care about. Try not to pull your hair out just to make an application work on the simulator when it works fine on the device.
Hope that helps,
Pk
Did you configure and enable an Audio Session prior to calling your RemoteIO initialization code?
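For example, a minimal sketch of what that could look like (my addition, not the answerer's code), run before the initialisation code above:
// Hypothetical: configure and activate an audio session first.
AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
AudioSessionSetActive(true);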
When you are setting the stream properties to the input bus, you are using kOutputBus for your input scope. That's probably not good. Also, you probably don't need to apply the render callback to the global scope, as you only need it for output. Furthermore, I think that your definitions of kOutputBus and kInputBus are wrong... when I look at working iPhone Audio code, it uses 0 for the input bus and 1 for the output bus.
I can also think of a few minor things with regard to the AudioStreamBasicDescription, though I don't think these will make much of a difference:
Add the kAudioFormatFlagsNativeEndian property to your format flags
Explicitly set the mReserved field to 0.
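A sketch of the stream description with both of those tweaks applied (otherwise the same values as in the question):
// The question's format, plus the native-endian flag and an explicit mReserved.
AudioStreamBasicDescription audioFormat = {0};
audioFormat.mSampleRate       = 44100.0;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel   = 16;
audioFormat.mBytesPerPacket   = 4;
audioFormat.mBytesPerFrame    = 4;
audioFormat.mReserved         = 0;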