AudioQueue and iOS 4? (iPhone)

The following code used to work for me in the past. I'm trying it now with iOS 4 without luck: it works in the simulator, but I don't hear anything on the device itself. I first record a few samples into an NSMutableData variable, and then I try to play them back.
I've tried the SpeakHere sample from Apple, which works (but it plays back from a file, not from memory).
Any idea what I'm missing?
// Configure the audio session for simultaneous playback and recording
AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
AudioSessionSetActive(true);

// Create an output queue and hand it one buffer containing the recorded samples
AudioQueueNewOutput(&m_format, &OutputCallback, self, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &m_device);
AudioQueueBufferRef nBuffer = NULL;
AudioQueueAllocateBuffer(m_device, [data length], &nBuffer);
nBuffer->mAudioDataByteSize = [data length];
[data getBytes:(nBuffer->mAudioData) length:(nBuffer->mAudioDataByteSize)];
AudioQueueEnqueueBuffer(m_device, nBuffer, 0, NULL);
AudioQueueStart(m_device, NULL);

The main things I can suggest are:
(1) make sure the device is not muted and the volume is up
(2) Check the result codes. For instance:
OSStatus errorCode = AudioQueueNewOutput(...);
if (errorCode) NSLog(@"Error: %d", (int)errorCode);
Something else that would give you a little bit more information:
While it is supposed to be running, try adjusting the volume. If it adjusts the ringer volume, the AudioQueue is not playing and/or is not set up correctly. If it adjusts the playback volume, then the AudioQueue is probably not getting data when it asks for it.
For the record, I have an application that uses AudioQueue on iOS 4 on all devices, so I know it works and it's not a bug.
Keep at it: the AudioQueue can be very, very annoying at times.
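One more thing that helps with the result codes: many Core Audio errors are four-character codes (such as '!act' or 'fmt?'), which are unreadable when printed as plain integers. A helper along these lines is a common pattern; this is only a sketch, and the function name is mine, not an Apple API:

#include <ctype.h>

static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    char errorString[20];
    // Interpret the error as a big-endian four-character code, if it is printable
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4])) {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        // Not a printable code, so fall back to the raw integer value
        sprintf(errorString, "%d", (int)error);
    }
    NSLog(@"Error: %s (%s)", operation, errorString);
}

// Usage:
// CheckError(AudioQueueStart(m_device, NULL), "AudioQueueStart");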

Why are my audio sounds not playing on time?

One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed; rather, they follow a pattern of shorter and longer intervals. It seems as if iOS has a lower resolution for the timing of sounds and is rounding each sound event to the nearest available point, rounding up or down as needed to stay on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}
// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, (long)result);
// get the size of the file data (the property is a UInt64, so read it into one)
UInt64 fileSize = 0;
UInt32 propSize = sizeof(fileSize);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileSize);
if (result != 0) DLog(@"cannot find file size: %ld", (long)result);
DLog(@"file size: %llu", fileSize);
// copy the data into a buffer, then close the file
UInt32 dataSize = (UInt32)fileSize;
unsigned char *outData = malloc(dataSize);
result = AudioFileReadBytes(fileID, false, 0, &dataSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", (long)result);
AudioFileClose(fileID);
alGenBuffers(1, &tempoSoundBuffer);
alBufferData(tempoSoundBuffer, AL_FORMAT_MONO16, outData, dataSize, 44100);
free(outData);
outData = NULL;
// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
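The timer that fires this method is set up roughly like this (a sketch; the clickTimer/bpm property names and the playClick selector are illustrative, not the exact ones from the project):

// Illustrative: 60 seconds divided by the bpm gives the click interval (150 bpm -> 0.4 s)
NSTimeInterval interval = 60.0 / self.bpm;
self.clickTimer = [NSTimer scheduledTimerWithTimeInterval:interval
                                                   target:self
                                                 selector:@selector(playClick)  // playClick just calls alSourcePlay
                                                 userInfo:nil
                                                  repeats:YES];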
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
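As a rough sketch of that pattern (assuming an AVAudioPlayer property named clickPlayer and that the class is its delegate; those names are illustrative):

// Prime the player once up front so the first -play doesn't pay the setup cost
[self.clickPlayer prepareToPlay];

// On each timer tick, just play
[self.clickPlayer play];

// Standard AVAudioPlayerDelegate callback: re-prime after playback finishes
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    [player prepareToPlay];
}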
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.
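For anyone verifying this on their own hardware: after activating the session you can read back the buffer duration the system actually granted. A sketch using the same C AudioSession API as above:

Float32 preferredBufferSize = 0.001;   // ask for roughly 1 ms I/O buffers
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredBufferSize), &preferredBufferSize);
AudioSessionSetActive(true);

Float32 grantedBufferSize = 0;
UInt32 propSize = sizeof(grantedBufferSize);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &propSize, &grantedBufferSize);
NSLog(@"hardware I/O buffer duration: %f seconds", grantedBufferSize);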

Playing audio from a continuous stream of data (iOS)

Been banging my head against this problem all morning.
I have set up a connection to a data source that returns audio data. (It is a recording device, so there is no set length to the data; it just streams in, like opening a stream to a radio station.)
I have managed to receive all the packets of data in my code; now I just need to play it. I want to play the data as it comes in, so I don't want to queue up a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.
I've been searching all morning and found different examples, but none were really laid out.
In the
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
method, the "data" parameter is the audio packet. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at Matt Gallagher's AudioStreamer but still was unable to get it working.
Can anyone here help, or share some (preferably) working examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on the format, this can be more or less tricky, but in general you should use Audio Converter Services for this step.
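For example, creating a converter from the network stream's format to the linear PCM the render callback below expects could look roughly like this. This is only a sketch: the source format here is an assumption (AAC) and must be filled in from what your server actually sends, and you would then drive the converter with AudioConverterFillComplexBuffer as packets arrive.

// Source format: an assumption for illustration -- describe what the server really sends
AudioStreamBasicDescription srcFormat = {0};
srcFormat.mSampleRate       = 44100.0;
srcFormat.mFormatID         = kAudioFormatMPEG4AAC;
srcFormat.mFramesPerPacket  = 1024;   // AAC packets contain 1024 frames
srcFormat.mChannelsPerFrame = 2;

// Destination format: 16-bit interleaved linear PCM that the playback callback can consume
AudioStreamBasicDescription dstFormat = {0};
dstFormat.mSampleRate       = 44100.0;
dstFormat.mFormatID         = kAudioFormatLinearPCM;
dstFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
dstFormat.mChannelsPerFrame = 2;
dstFormat.mBitsPerChannel   = 16;
dstFormat.mBytesPerFrame    = 4;
dstFormat.mFramesPerPacket  = 1;
dstFormat.mBytesPerPacket   = 4;

AudioConverterRef converter = NULL;
OSStatus status = AudioConverterNew(&srcFormat, &dstFormat, &converter);
if (status != noErr) printf("couldn't create the audio converter: %d\n", (int)status);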
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
Now, to play the data you've 'stored' in memory using the buffer, you need to use RemoteIO and audio units. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial, as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write: it returns the next
            // chunk of bytes available in the stream buffer
            frameBuffer[j] = getNextPacket();
        }
    }
    return noErr;
}
Basically, it fills the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player goes silent if there isn't enough data in the stream buffer).
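getNextPacket() is the piece you have to supply yourself. A very rough sketch of such a stream buffer follows (all names are made up, and a production version needs proper thread safety, since didReceiveData: runs on the main thread while the render callback runs on a real-time audio thread):

#define STREAM_BUFFER_SIZE (64 * 1024)
static unsigned char streamBuffer[STREAM_BUFFER_SIZE];
static volatile int readIndex  = 0;
static volatile int writeIndex = 0;

// Called from connection:didReceiveData: to store the incoming bytes
void writeBytesToStream(const unsigned char *bytes, int length) {
    for (int i = 0; i < length; i++) {
        streamBuffer[writeIndex] = bytes[i];
        writeIndex = (writeIndex + 1) % STREAM_BUFFER_SIZE;
    }
}

// Called from the render callback; returns silence when the buffer runs dry
unsigned char getNextPacket(void) {
    if (readIndex == writeIndex) return 0;   // no data yet -> output silence
    unsigned char byte = streamBuffer[readIndex];
    readIndex = (readIndex + 1) % STREAM_BUFFER_SIZE;
    return byte;
}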
Also, you can achieve the same thing with OpenAL using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
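A rough sketch of that OpenAL variant (the streamSource name, the format, and the fillWithNextChunk() helper are all illustrative assumptions):

// Reclaim buffers OpenAL has finished with, refill them from the stream, and queue them again
static ALshort pcmChunk[4096];   // a chunk of 16-bit PCM pulled from your stream buffer
ALint processed = 0;
alGetSourcei(streamSource, AL_BUFFERS_PROCESSED, &processed);
while (processed-- > 0) {
    ALuint buffer;
    alSourceUnqueueBuffers(streamSource, 1, &buffer);
    fillWithNextChunk(pcmChunk, sizeof(pcmChunk));   // hypothetical helper reading your stream buffer
    alBufferData(buffer, AL_FORMAT_STEREO16, pcmChunk, sizeof(pcmChunk), 44100);
    alSourceQueueBuffers(streamSource, 1, &buffer);
}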
That's it. Happy coding!

Playing PCM data on iPhone

I need to play linear PCM data live on an iPhone.
I get a live data stream via RTSP. I can currently read it out on the iPhone, save it into a file, and play that file on a desktop audio player that supports PCM, so I think the transport is okay.
Now I'm stuck: I have completely no idea what to do with my NSData object containing the data.
I did a bit of research and ended up with audio units, but I just cannot assign my NSData to the audio buffer; or rather, I have no clue how.
For my instance, I assigned the callback:
AURenderCallbackStruct input;
input.inputProc = makeSound;
input.inputProcRefCon = self;
and wrote the function makeSound:
OSStatus makeSound(void *inRefCon,
                   AudioUnitRenderActionFlags *ioActionFlags,
                   const AudioTimeStamp *inTimeStamp,
                   UInt32 inBusNumber,
                   UInt32 inNumberFrames,
                   AudioBufferList *ioData)
{
    // so what to do here?
    // ioData->mBuffers[0].mData = [mySound bytes]; does not work, nor does
    // ioData->mBuffers = [mySound bytes];
    return noErr;
}
Is my approach wrong in general?
What do I need to know/learn/implement? I am a complete audio newbie, so my assumption was that I don't need several buffers, since when I get a new sound package from RTSP the old one has ended, because it's a live stream. (I base this on my recordings, which just appended the bytes without looking up presentation timestamps, since I don't receive any anyway.)
Cheers
I don't know if this is exactly what you are looking for but some of Matt Gallagher's AudioStreamer code might be helpful to you. In particular, check out how he handles the audio buffering.
http://cocoawithlove.com/2010/03/streaming-mp3aac-audio-again.html
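To connect that back to the question: the general shape of the render callback is to copy however many bytes the unit asks for out of your own buffer into ioData, and zero-fill whatever you can't supply. A sketch, assuming the stream is already linear PCM in the format the unit was configured for, and assuming a consumeStreamBytes() function you write yourself to pull from the bytes received over RTSP:

static OSStatus makeSound(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        // Hypothetical helper: copies up to buffer.mDataByteSize bytes of received
        // PCM into buffer.mData and returns how many bytes it actually provided.
        UInt32 provided = consumeStreamBytes(buffer.mData, buffer.mDataByteSize);
        // Zero-fill the remainder so the unit plays silence instead of garbage.
        if (provided < buffer.mDataByteSize) {
            memset((char *)buffer.mData + provided, 0, buffer.mDataByteSize - provided);
        }
    }
    return noErr;
}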

AudioQueue code from SpeakHere fails on iPad

I'm using the SpeakHere audio classes in an app I'm creating that must play and record simultaneously.
I'm using the newest SDK with a 3.2 device target in a universal app build (targeting iPad & iPhone).
The app plays streaming movies using MPMoviePlayerController and Records audio simultaneously.
This works 100% perfectly on an iPhone.
However, it fails 100% on my client's iPad. Logs show '!act' errors: the AudioSession is simply refusing to activate. And every log file I've received from him contains numerous interruptions and route changes (namely Category) being reported to the callback functions.
On an iPhone I do NOT see anything like this at all. The logs show only that the recorder was created and recorded to the specified file. No interruptions, no route changes, no nonsense.
Here's the relevant logs:
Jul 10 07:15:21 iPad mediaserverd[15502] <Error>: [07:15:21.464 <0x1207000>] AudioSessionSetClientPlayState: Error adding running client - session not active
Sat Jul 10 07:15:21 iPad mediaserverd[15502] <Error>: [07:15:21.464 <AudioQueueServer>] AudioQueue: Error '!act' from AudioSessionSetClientPlayState(15642)
I've stubbed out both my callback functions to merely log the occurrences of interruptions and route changes (with reasons). So I won't bother posting the code, since it does literally nothing. I see these logs numerous times during a single attempt to start recording on the iPad though.
I've read virtually every post I can find in the Apple Dev forum and StackOverflow, but cannot seem to find someone with the same problem or any relevant notes in the Apple Docs that explain the difference in iPad behavior.
--Note: The iPad did display some other defective behaviors that were remedied, such as the mismatched Begin Interruption calls that never ended (so I never deactivate the session).
I never receive any logs indicating any failed initialization or activation calls from the AudioQueue or AudioSession code. It simply fails when I attempt to start recording.
--I even attempted forcing AudioSessionSetActive(true); calls before every attempted use of the sound system and I still receive these errors.
Here's the relevant code for the initialization calls:
//Initialize the Sound System
OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self);
if (error) { printf("ERROR INITIALIZING AUDIO SESSION! %d\n", (int)error); }
else {
    //must set the session active first according to devs talking about some defect....
    error = AudioSessionSetActive(true);
    if (error) NSLog(@"AudioSessionSetActive (true) failed");

    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
    if (error) printf("couldn't set audio category!\n");

    error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self);
    if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

    //Force mixing!
    UInt32 allowMixing = true;
    error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(allowMixing), &allowMixing);
    if (error) printf("ERROR ENABLING MIXING PROPS! %d\n", (int)error);

    UInt32 inputAvailable = 0;
    UInt32 size = sizeof(inputAvailable);
    // we do not want to allow recording if input is not available
    error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
    if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", (int)error);
    isInputAvailable = (inputAvailable) ? YES : NO;

    //iPad doesn't require the routing changes, branched to help isolate iPad behavioral issues
    if (![Utils GetMainVC].usingiPad) {
        //redirect to speaker? //this only resets on a category change!
        UInt32 doChangeDefaultRoute = 1;
        error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
        if (error) printf("ERROR CHANGING DEFAULT ROUTE PROPS! %d\n", (int)error);

        //this resets with interruption and/or route changes
        UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
        error = AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
        if (error) printf("ERROR SPEAKER ROUTE PROPS! %d\n", (int)error);
    }

    // we also need to listen to see if input availability changes
    error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
    if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

    error = AudioSessionSetActive(true);
    if (error) NSLog(@"AudioSessionSetActive (true) failed");
}
// Allocate our singleton instance for the recorder & player object
myRecorder = new AQRecorder();
myPlayer = new AQPlayer();
Later on, in the load-state callback for the video, I merely attempt to start recording to a predetermined file path:
myRecorder->StartRecord((CFStringRef)myPathStr);
And audio recording completely fails.
Thanks for your time and help on this.
Turns out this is an odd issue.
1) Use only sound recording and playback, and the code runs perfectly on the iPad.
2) Add the movie playback and DO NOT call any routing changes, and things work fine on the iPad.
Somehow the presence of the movie player's playback changes the AudioSession in such a way that forcing any route change (like using the device speaker instead of headphones) causes the AudioSession to become inactive.
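In code terms, the workaround amounts to guarding the route-override block from the initialization code above; the usingMoviePlayback flag here is hypothetical and stands in for whatever state your app already tracks:

// Route overrides conflicted with MPMoviePlayerController on the iPad,
// so skip them entirely whenever movie playback is involved (hypothetical flag)
if (!usingMoviePlayback) {
    UInt32 doChangeDefaultRoute = 1;
    error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
                                    sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
    if (error) printf("ERROR CHANGING DEFAULT ROUTE PROPS! %d\n", (int)error);

    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    error = AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                                    sizeof(audioRouteOverride), &audioRouteOverride);
    if (error) printf("ERROR SPEAKER ROUTE PROPS! %d\n", (int)error);
}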

Audio Volume in Apple's speakHere example code

I am trying to increase the volume of my audio output using the SpeakHere example from Apple. The volume is already set to max with:
// set the volume of the queue
XThrowIfError (AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, 1.0), "set queue volume");
However, the output is directed to the earpiece speaker, which is not as loud as the bottom speaker on the iPhone. A nice example of this can be seen in the Voice Memos app that comes with the iPhone: it provides a speaker button that toggles between the two speakers. Does anybody have an idea how that is done? What do I need to do to send my audio to the bottom speaker?
Any tips, hints, answers will be much appreciated.
Thank you in advance
Al
You need to set the player to speaker mode.
Add this code in AQPlayer.mm:
OSStatus error;
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
error = AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof (audioRouteOverride), &audioRouteOverride);
if (error) printf("couldn't set audio speaker!");
Before this code:
XThrowIfError (AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, 1.0), "set queue volume");
I hope it helps.
Take a look at AudioSessionSetProperty, the kAudioSessionProperty_OverrideCategoryDefaultToSpeaker property in particular.
Look at the kAudioSessionProperty_OverrideAudioRoute property.
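The practical difference, as noted in the iPad question above, is that kAudioSessionProperty_OverrideAudioRoute resets on interruptions and route changes, while kAudioSessionProperty_OverrideCategoryDefaultToSpeaker only resets when the category changes. Setting the latter is a small variation of the same code (a sketch):

// Make the PlayAndRecord category default to the loud bottom speaker instead of the receiver
UInt32 defaultToSpeaker = 1;
OSStatus error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
                                         sizeof(defaultToSpeaker), &defaultToSpeaker);
if (error) printf("couldn't set default-to-speaker override: %d\n", (int)error);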