Application turns off playing music when launched - iPhone

Hi, I have set my app's audio session to AmbientSound, but when I launch the app for the first time it kills the music that is already playing.
I don't want this to happen.
Is there any other way to set this?

Try this:
Activate the audio session:
OSStatus activationResult = AudioSessionSetActive (true);
Test whether other audio is playing:
UInt32 otherAudioIsPlaying; // nonzero if another app's audio is playing
UInt32 propertySize = sizeof (otherAudioIsPlaying);
AudioSessionGetProperty (
kAudioSessionProperty_OtherAudioIsPlaying,
&propertySize,
&otherAudioIsPlaying
);
if (otherAudioIsPlaying) { // other audio is playing, so pick a category that mixes
[[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryAmbient
error: nil];
} else {
[[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategorySoloAmbient
error: nil];
}
If other audio is playing, allow mixing:
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryMixWithOthers,
sizeof (allowMixing),
&allowMixing
);
Source : http://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html
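For reference, here is the same idea expressed purely with the AVAudioSession Objective-C API (a minimal sketch of my own, not taken from the linked guide); the Ambient category mixes with other audio by default, so the iPod music keeps playing:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Ambient mixes with other audio instead of silencing it
[session setCategory:AVAudioSessionCategoryAmbient error:&error];
[session setActive:YES error:&error];
if (error)
    NSLog(@"error configuring audio session: %@", [error localizedDescription]);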

Related

Detect plugIn or unplug of headphone jack from iPhone when my app is in background mode

I want to notify the user when the headphone jack is plugged in or unplugged from the iPhone/iPod/iPad while my app is in background mode.
Here is the code I have, which detects this while the app is in the foreground:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, audioSessionPropertyListener, nil);
}
BOOL isHeadsetPluggedIn()
{
UInt32 routeSize = sizeof (CFStringRef);
CFStringRef route;
OSStatus error = AudioSessionGetProperty (kAudioSessionProperty_AudioRoute,
&routeSize,
&route
);
NSLog(#"%#", route);
return (!error && (route != NULL) && ([( NSString*)route rangeOfString:#"Head"].location != NSNotFound));
}
void audioSessionPropertyListener(void* inClientData, AudioSessionPropertyID inID,UInt32 inDataSize, const void* inData)
{
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
// Determines the reason for the route change, to ensure that it is not
// because of a category change.
CFDictionaryRef routeChangeDictionary = inData;
CFNumberRef routeChangeReasonRef = CFDictionaryGetValue (routeChangeDictionary,CFSTR (kAudioSession_AudioRouteChangeKey_Reason));
SInt32 routeChangeReason;
CFNumberGetValue (routeChangeReasonRef, kCFNumberSInt32Type, &routeChangeReason);
// "Old device unavailable" indicates that a headset was unplugged, or that the
// device was removed from a dock connector that supports audio output.
// if (routeChangeReason != kAudioSessionRouteChangeReason_OldDeviceUnavailable)
// return;
if (!isHeadsetPluggedIn())
{
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
NSLog(#"With out headPhone");
}
else
{
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
NSLog(#"headPhone");
}
}
You could try this from the Apple Docs:
- (void)applicationDidEnterBackground:(UIApplication *)application
{
bgTask = [application beginBackgroundTaskWithExpirationHandler:^{
// Clean up any unfinished task business by marking where you
// stopped or ending the task outright.
[application endBackgroundTask:bgTask];
bgTask = UIBackgroundTaskInvalid;
}];
// Start the long-running task and return immediately.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// Do the work associated with the task, preferably in chunks.
[application endBackgroundTask:bgTask];
bgTask = UIBackgroundTaskInvalid;
});
}
I think Apple covers this in their docs here: Background Execution and Multitasking
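For what it's worth, on iOS 6 and later the same detection can be written against NSNotificationCenter instead of the C property-listener API. This is only a sketch of that approach, and note that a backgrounded app only receives these notifications under the usual background-execution rules (for example, with an active background audio session):
// Sketch: observe route changes via AVAudioSessionRouteChangeNotification (iOS 6+)
[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionRouteChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    NSUInteger reason = [note.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable)
        NSLog(@"Headphones unplugged");
    else if (reason == AVAudioSessionRouteChangeReasonNewDeviceAvailable)
        NSLog(@"Headphones plugged in");
}];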

Is it possible to 'duck' an AudioSession whilst using OpenAL?

Does anybody know if this is possible?
I have my audio session and OpenAL set-up like so:
// Allow their music to play in the background
AudioSessionInitialize(NULL, NULL, openALInterruptionListener, (__bridge void *)(self));
UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
UInt32 allowMixing = false;
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
// use the device to make a context
_mContext = alcCreateContext(_mDevice, NULL);
// set my context to the currently active one
alcMakeContextCurrent(_mContext);
And I have ducking set-up like so:
- (void)setSoundDucked:(BOOL)soundDucked
{
if(soundDucked)
{
UInt32 allowMixing = true;
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
AudioSessionSetActive(false);
AudioSessionSetActive(true);
}
else
{
UInt32 allowMixing = false;
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
AudioSessionSetActive(false);
AudioSessionSetActive(true);
}
}
However, sound doesn't duck. It will only duck if I comment out the following lines:
// use the device to make a context
_mContext = alcCreateContext(_mDevice, NULL);
// set my context to the currently active one
alcMakeContextCurrent(_mContext);
Is there any way of getting OpenAL to play nicely with the audio ducking property?
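One workaround worth trying (a sketch on my part, not a confirmed fix): suspend and detach the OpenAL context while the session is cycled, so the context is not holding the audio hardware when the duck property changes:
- (void)setSoundDucked:(BOOL)soundDucked
{
    alcSuspendContext(_mContext);  // pause OpenAL processing
    alcMakeContextCurrent(NULL);   // detach the context
    UInt32 shouldDuck = soundDucked ? 1 : 0;
    AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(shouldDuck), &shouldDuck);
    AudioSessionSetActive(false);  // cycle the session so the change takes effect
    AudioSessionSetActive(true);
    alcMakeContextCurrent(_mContext);  // reattach and resume OpenAL
    alcProcessContext(_mContext);
}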

AudioUnitInitialize throws error while initializing AudioComponentInstance

I am using the code below to initialize my audio components:
-(void) startListeningWithCoreAudio
{
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:&error];
if (error)
NSLog(#"error setting up audio session: %#", [error localizedDescription]);
[[AVAudioSession sharedInstance] setDelegate:self];
OSStatus status = AudioSessionSetActive(YES);
checkStatus(status);
// Find the apple mic
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext( NULL, &desc );
status = AudioComponentInstanceNew( inputComponent, &kAudioUnit );
checkStatus( status );
// enable mic output as our input
UInt32 flag = 1;
status = AudioUnitSetProperty( kAudioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, kInputBus, &flag, sizeof(flag) );
checkStatus(status);
// Define mic output audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 16000.0;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
status = AudioUnitSetProperty( kAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &audioFormat, sizeof(audioFormat) );
checkStatus(status);
// Define our callback methods
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty( kAudioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, kInputBus, &callbackStruct, sizeof(callbackStruct) );
checkStatus(status);
// By pass voice processing
UInt32 audiobypassProcessing = [[NSUserDefaults standardUserDefaults] boolForKey:VOICE_BY_PASS_PROCESSING];
status = AudioUnitSetProperty(kAudioUnit, kAUVoiceIOProperty_BypassVoiceProcessing,
kAudioUnitScope_Global, kInputBus, &audiobypassProcessing, sizeof(audiobypassProcessing));
checkStatus(status);
// Automatic Gain Control
UInt32 audioAGC = [[NSUserDefaults standardUserDefaults]boolForKey:VOICE_AGC];
status = AudioUnitSetProperty(kAudioUnit, kAUVoiceIOProperty_VoiceProcessingEnableAGC,
kAudioUnitScope_Global, kInputBus, &audioAGC, sizeof(audioAGC));
checkStatus(status);
//Non Audio Voice Ducking
UInt32 audioDucking = [[NSUserDefaults standardUserDefaults]boolForKey:VOICE_DUCKING];
status = AudioUnitSetProperty(kAudioUnit, kAUVoiceIOProperty_DuckNonVoiceAudio,
kAudioUnitScope_Global, kInputBus, &audioDucking, sizeof(audioDucking));
checkStatus(status);
//Audio Quality
UInt32 quality = [[NSUserDefaults standardUserDefaults]integerForKey:VOICE_QUALITY];
status = AudioUnitSetProperty(kAudioUnit, kAUVoiceIOProperty_VoiceProcessingQuality,
kAudioUnitScope_Global, kInputBus, &quality, sizeof(quality));
checkStatus(status);
status = AudioUnitInitialize(kAudioUnit);
checkStatus(status);
status = AudioOutputUnitStart( kAudioUnit );
checkStatus(status);
UInt32 audioRoute = (UInt32)kAudioSessionOverrideAudioRoute_Speaker;
status = AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof (audioRoute), &audioRoute);
checkStatus(status);
}
-(void) stopListeningWithCoreAudio
{
OSStatus status = AudioUnitUninitialize( kAudioUnit );
checkStatus(status);
status = AudioOutputUnitStop( kAudioUnit );
checkStatus( status );
// if(kAudioUnit)
// {
// status = AudioComponentInstanceDispose(kAudioUnit);
// checkStatus(status);
// kAudioUnit = nil;
// }
status = AudioSessionSetActive(NO);
checkStatus(status);
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategorySoloAmbient error:&error];
if (error)
NSLog(#"error setting up audio session: %#", [error localizedDescription]);
}
It works fine the first time: startListeningWithCoreAudio is called from a button-press event, and it records and processes audio correctly. On another event I call stopListeningWithCoreAudio to stop recording and processing.
The problem comes when I try to call startListeningWithCoreAudio again. It throws errors from two of the functions it calls: AudioUnitInitialize and AudioOutputUnitStart.
Can anyone please help me figure out what the problem is?
I found the solution.
If we call the two functions below back to back, it creates the problem:
extern OSStatus AudioUnitUninitialize(AudioUnit inUnit)
extern OSStatus AudioComponentInstanceDispose(AudioComponentInstance inInstance)
So I called the dispose method on the main thread like this:
[self performSelectorOnMainThread:@selector(disposeCoreAudio) withObject:nil waitUntilDone:NO];
-(void) disposeCoreAudio
{
OSStatus status = AudioComponentInstanceDispose(kAudioUnit);
kAudioUnit = nil;
}
It solved the problem. So the correct sequence is: stop recording, uninitialize the recorder, and dispose of the recorder on the main thread.
One possible problem is that your code is trying to uninitialize a running audio unit before stopping it.
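Putting the two answers together, a corrected teardown might look like this (a sketch; kAudioUnit, checkStatus, and disposeCoreAudio are from the code above):
-(void) stopListeningWithCoreAudio
{
    // stop the running unit first, then uninitialize it
    OSStatus status = AudioOutputUnitStop(kAudioUnit);
    checkStatus(status);
    status = AudioUnitUninitialize(kAudioUnit);
    checkStatus(status);
    // dispose on the main thread, per the accepted fix above
    [self performSelectorOnMainThread:@selector(disposeCoreAudio) withObject:nil waitUntilDone:NO];
    status = AudioSessionSetActive(NO);
    checkStatus(status);
}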

Redirecting audio output to phone speaker and mic input to headphones

Is it possible to redirect audio output to the phone speaker and still use the headset microphone for input?
If I redirect the audio route to the phone speaker instead of the headphones, it also redirects the mic. That makes sense, but I can't seem to redirect just the output while keeping the mic input. Any ideas?
Here is the code I'm using to redirect audio to the speaker:
UInt32 doChangeDefaultRoute = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
NSAssert(propertySetError == 0, @"Failed to set audio session property: OverrideCategoryDefaultToSpeaker");
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride);
This is possible, but it's picky about how you set it up.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
It's very important to use AVAudioSessionCategoryPlayAndRecord or the route will fail to go to the speaker. Once you've set the override route for the audio session, you can use an AVAudioPlayer instance and send some output to the speaker.
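For example, something along these lines (a sketch; the file name is made up for illustration):
// play a bundled file; with the override above it comes out of the built-in speaker
NSURL *url = [[NSBundle mainBundle] URLForResource:@"beep" withExtension:@"caf"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
[player prepareToPlay];
[player play];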
Hope that works for others like it did for me. The documentation on this is scattered, but the Skype app proves it's possible. Persevere, my friends! :)
Some Apple documentation here: http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html
Do a search on the page for kAudioSessionProperty_OverrideAudioRoute
It doesn't look like it's possible, I'm afraid.
From the Audio Session Programming Guide - kAudioSessionProperty_OverrideAudioRoute
If a headset is plugged in at the time you set this property’s value
to kAudioSessionOverrideAudioRoute_Speaker, the system changes the
audio routing for input as well as for output: input comes from the
built-in microphone; output goes to the built-in speaker.
Possible duplicate of this question
What you can do is to force audio output to speakers in any case:
From UI Hacker - iOS: Force audio output to speakers while headphones are plugged in
@interface AudioRouter : NSObject
+ (void) initAudioSessionRouting;
+ (void) switchToDefaultHardware;
+ (void) forceOutputToBuiltInSpeakers;
@end
and
#import "AudioRouter.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
@implementation AudioRouter
#define IS_DEBUGGING NO
#define IS_DEBUGGING_EXTRA_INFO NO
+ (void) initAudioSessionRouting {
// Called once to route all audio through speakers, even if something's plugged into the headphone jack
static BOOL audioSessionSetup = NO;
if (audioSessionSetup == NO) {
// set category to accept properties assigned below
NSError *sessionError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error: &sessionError];
// Doubly force audio to come out of speaker
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
// fix issue with audio interrupting video recording - allow audio to mix on top of other media
UInt32 doSetProperty = 1;
AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty);
// set active
[[AVAudioSession sharedInstance] setDelegate:self];
[[AVAudioSession sharedInstance] setActive: YES error: nil];
// add listener for audio input changes
AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, onAudioRouteChange, nil );
AudioSessionAddPropertyListener (kAudioSessionProperty_AudioInputAvailable, onAudioRouteChange, nil );
}
// Force audio to come out of speaker
[[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
// set flag
audioSessionSetup = YES;
}
+ (void) switchToDefaultHardware {
// Remove forcing to built-in speaker
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
}
+ (void) forceOutputToBuiltInSpeakers {
// Re-force audio to come out of speaker
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
}
void onAudioRouteChange (void* clientData, AudioSessionPropertyID inID, UInt32 dataSize, const void* inData) {
if( IS_DEBUGGING == YES ) {
NSLog(#"==== Audio Harware Status ====");
NSLog(#"Current Input: %#", [AudioRouter getAudioSessionInput]);
NSLog(#"Current Output: %#", [AudioRouter getAudioSessionOutput]);
NSLog(#"Current hardware route: %#", [AudioRouter getAudioSessionRoute]);
NSLog(#"==============================");
}
if( IS_DEBUGGING_EXTRA_INFO == YES ) {
NSLog(#"==== Audio Harware Status (EXTENDED) ====");
CFDictionaryRef dict = (CFDictionaryRef)inData;
CFNumberRef reason = CFDictionaryGetValue(dict, kAudioSession_RouteChangeKey_Reason);
CFDictionaryRef oldRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_PreviousRouteDescription);
CFDictionaryRef newRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_CurrentRouteDescription);
NSLog(#"Audio old route: %#", oldRoute);
NSLog(#"Audio new route: %#", newRoute);
NSLog(#"=========================================");
}
}
+ (NSString*) getAudioSessionInput {
UInt32 routeSize;
AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
CFDictionaryRef desc; // this is the dictionary to contain descriptions
// make the call to get the audio description and populate the desc dictionary
AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
// the dictionary contains 2 keys, for input and output; get the input array
CFArrayRef inputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Inputs);
// the input array contains one element, a dictionary
CFDictionaryRef diction = CFArrayGetValueAtIndex(inputs, 0);
// get the input description from the dictionary
CFStringRef input = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type);
return [NSString stringWithFormat:@"%@", input];
}
+ (NSString*) getAudioSessionOutput {
UInt32 routeSize;
AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize);
CFDictionaryRef desc; // this is the dictionary to contain descriptions
// make the call to get the audio description and populate the desc dictionary
AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc);
// the dictionary contains 2 keys, for input and output. Get output array
CFArrayRef outputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Outputs);
// the output array contains 1 element - a dictionary
CFDictionaryRef diction = CFArrayGetValueAtIndex(outputs, 0);
// get the output description from the dictionary
CFStringRef output = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type);
return [NSString stringWithFormat:@"%@", output];
}
+ (NSString*) getAudioSessionRoute {
/*
returns the current session route:
* ReceiverAndMicrophone
* HeadsetInOut
* Headset
* HeadphonesAndMicrophone
* Headphone
* SpeakerAndMicrophone
* Speaker
* HeadsetBT
* LineInOut
* Lineout
* Default
*/
UInt32 rSize = sizeof (CFStringRef);
CFStringRef route;
AudioSessionGetProperty (kAudioSessionProperty_AudioRoute, &rSize, &route);
if (route == NULL) {
NSLog(#"Silent switch is currently on");
return #"None";
}
return [NSString stringWithFormat:#"%#", route];
}
@end
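Usage is then simply (my sketch):
[AudioRouter initAudioSessionRouting];      // call once at startup
[AudioRouter forceOutputToBuiltInSpeakers]; // route everything to the built-in speaker
// ... later, to restore normal routing:
[AudioRouter switchToDefaultHardware];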

Could not start Audio Queue Error starting recording

CFStringRef state;
UInt32 propertySize = sizeof(CFStringRef);
// AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &propertySize, &state);
if(CFStringGetLength(state) == 0)
// if(state == 0)
{ //SILENT
NSLog(#"Silent switch is on");
// create vibrate
// AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
UInt32 audioCategory = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty( kAudioSessionProperty_AudioCategory, sizeof(UInt32), &audioCategory);
}
else { //NOT SILENT
NSLog(#"Silent switch is off");
}
Wherever I use the above code I am able to play a sound file in silent mode, but after playing a recorded sound file in silent mode, when I try to record voice again I get an error like:
2010-12-08 13:29:56.710 VoiceRecorder[382:307] -66681
Could not start Audio Queue
Error starting recording
Here is the code:
// file url
[self setupAudioFormat:&recordState.dataFormat];
CFURLRef fileURL = CFURLCreateFromFileSystemRepresentation(NULL, (const UInt8 *) [filePath UTF8String], [filePath length], NO);
// recordState.currentPacket = 0;
// new input queue
OSStatus status;
status = AudioQueueNewInput(&recordState.dataFormat, HandleInputBuffer, &recordState, CFRunLoopGetCurrent(),kCFRunLoopCommonModes, 0, &recordState.queue);
if (status) {CFRelease(fileURL); printf("Could not establish new queue\n"); return NO;}
// create new audio file
status = AudioFileCreateWithURL(fileURL, kAudioFileAIFFType, &recordState.dataFormat, kAudioFileFlags_EraseFile, &recordState.audioFile); CFRelease(fileURL); // thanks august joki
if (status) {printf("Could not create file to record audio\n"); return NO;}
// figure out the buffer size
DeriveBufferSize(recordState.queue, recordState.dataFormat, 0.5, &recordState.bufferByteSize); // allocate those buffers and enqueue them
for(int i = 0; i < NUM_BUFFERS; i++)
{
status = AudioQueueAllocateBuffer(recordState.queue, recordState.bufferByteSize, &recordState.buffers[i]);
if (status) {printf("Error allocating buffer %d\n", i); return NO;}
status = AudioQueueEnqueueBuffer(recordState.queue, recordState.buffers[i], 0, NULL);
if (status) {printf("Error enqueuing buffer %d\n", i); return NO;}
} // enable metering
UInt32 enableMetering = YES;
status = AudioQueueSetProperty(recordState.queue, kAudioQueueProperty_EnableLevelMetering, &enableMetering,sizeof(enableMetering));
if (status) {printf("Could not enable metering\n"); return NO;}
// start recording
status = AudioQueueStart(recordState.queue, NULL); // status = 0; NSLog(@"%d", status);
if (status) {printf("Could not start Audio Queue\n"); return NO;}
recordState.currentPacket = 0;
recordState.recording = YES;
return YES;
I get the error here.
I was facing a similar problem in iOS 7.1. Add the following in the AppDelegate's didFinishLaunchingWithOptions:
AVAudioSession * audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error: nil];
[audioSession setActive:YES error: nil];
EDIT: The above code is working for me.
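One more thing worth checking (my own note, not from the answer above): the playback snippet earlier switches the session category to MediaPlayback, which does not permit audio input, so a later AudioQueueNewInput/AudioQueueStart can fail until a record-capable category is restored. Something along these lines before recording again:
// restore a record-capable category before restarting the input queue
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory);
AudioSessionSetActive(true);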