AudioQueue code from SpeakHere fails on iPad but works on iPhone

I've been using the SpeakHere audio classes in an app I'm creating that must play and record simultaneously.
I'm using the newest SDK with a 3.2 device target in a universal app build (targeting iPad & iPhone).
The app plays streaming movies using MPMoviePlayerController and records audio simultaneously.
This works 100% perfectly on an iPhone.
However, it fails 100% of the time on my client's iPad. Logs show '!act' errors: the AudioSession is simply refusing to activate. And every log file I've received from him contains numerous interruptions and route changes (namely category changes) being delivered to the callback functions.
On an iPhone I do NOT see anything like this at all. The logs show only that the recorder was created and recorded to the specified file. No interruptions, no route changes, no nonsense.
Here's the relevant logs:
Jul 10 07:15:21 iPad mediaserverd[15502] <Error>: [07:15:21.464 <0x1207000>] AudioSessionSetClientPlayState: Error adding running client - session not active
Sat Jul 10 07:15:21 iPad mediaserverd[15502] <Error>: [07:15:21.464 <AudioQueueServer>] AudioQueue: Error '!act' from AudioSessionSetClientPlayState(15642)
I've stubbed out both my callback functions to merely log the occurrences of interruptions and route changes (with reasons). So I won't bother posting the code, since it does literally nothing. I see these logs numerous times during a single attempt to start recording on the iPad though.
I've read virtually every post I can find in the Apple Dev forum and StackOverflow, but cannot seem to find someone with the same problem or any relevant notes in the Apple Docs that explain the difference in iPad behavior.
--Note: The iPad did display some other defective behaviors that were remedied, such as mismatched Begin Interruption calls that were never followed by an End Interruption (so I never deactivated the session).
I never receive any logs indicating any failed initialization or activation calls from the AudioQueue or AudioSession code. It simply fails when I attempt to start recording.
--I even attempted forcing AudioSessionSetActive(true); calls before every attempted use of the sound system and I still receive these errors.
Here's the relevant code for the initialization calls:
//Initialize the Sound System
OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self);
if (error) { printf("ERROR INITIALIZING AUDIO SESSION! %d\n", (int)error); }
else {
    //must set the session active first according to devs talking about some defect....
    error = AudioSessionSetActive(true);
    if (error) NSLog(@"AudioSessionSetActive (true) failed");

    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
    if (error) printf("couldn't set audio category!\n");

    error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self);
    if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

    //Force mixing!
    UInt32 allowMixing = true;
    error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(allowMixing), &allowMixing);
    if (error) printf("ERROR ENABLING MIXING PROPS! %d\n", (int)error);

    UInt32 inputAvailable = 0;
    UInt32 size = sizeof(inputAvailable);
    // we do not want to allow recording if input is not available
    error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
    if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", (int)error);
    isInputAvailable = (inputAvailable) ? YES : NO;

    //iPad doesn't require the routing changes; branched to help isolate iPad behavioral issues
    if (![Utils GetMainVC].usingiPad) {
        //redirect to speaker? //this only resets on a category change!
        UInt32 doChangeDefaultRoute = 1;
        error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
        if (error) printf("ERROR CHANGING DEFAULT ROUTE PROPS! %d\n", (int)error);

        //this resets with interruption and/or route changes
        UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
        error = AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
        if (error) printf("ERROR SPEAKER ROUTE PROPS! %d\n", (int)error);
    }

    // we also need to listen to see if input availability changes
    error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
    if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

    error = AudioSessionSetActive(true);
    if (error) NSLog(@"AudioSessionSetActive (true) failed");
}

// Allocate our singleton instance for the recorder & player object
myRecorder = new AQRecorder();
myPlayer = new AQPlayer();
Later on, in the load-state callback for the video, I merely attempt to start the recording to a predetermined file path:
myRecorder->StartRecord((CFStringRef)myPathStr);
And audio recording completely fails.
Thanks for your time and help on this.

Turns out this is an odd issue.
1) Use only sound recording and playback, and the code runs perfectly on the iPad.
2) Add the movie playback and do NOT make any routing changes, and things also work fine on the iPad.
Somehow the presence of the movie player is enough to change the AudioSession in such a way that forcing any route change (like routing output to the device speaker instead of the headphones) causes the AudioSession to become inactive.
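In practice that means skipping the speaker-route overrides whenever the movie player is part of the session. A minimal sketch of how the guard could look (the movieIsPlaying flag is just an illustration of whatever playback state you already track, not something from SpeakHere):
// Only force the speaker route when the movie player is NOT in the picture.
// movieIsPlaying is a hypothetical flag for your own playback state.
if (![Utils GetMainVC].usingiPad && !movieIsPlaying) {
    UInt32 doChangeDefaultRoute = 1;
    OSStatus routeError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
                                                  sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
    if (routeError) printf("ERROR CHANGING DEFAULT ROUTE PROPS! %d\n", (int)routeError);

    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    routeError = AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                                         sizeof(audioRouteOverride), &audioRouteOverride);
    if (routeError) printf("ERROR SPEAKER ROUTE PROPS! %d\n", (int)routeError);
}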

Related

AudioKit error message: Too Many Frames to Process

I'm using the (very cool) AudioKit framework to process audio for a macOS music visualizer app. My audio source ("mic") is iTunes 12 via Rogue Amoeba Loopback.
In the Xcode debug window, I'm seeing the following error message each time I launch my app:
kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=513, mMaxFramesPerSlice=512
I've gathered from searches that this is probably related to sample rate, but I haven't found a clear description of what this error indicates (or if it even matters). My app is functioning normally, but I'm wondering if this could be affecting efficiency.
EDIT: The error message does not appear if I use Audio MIDI Setup to set the Loopback device output to 44.1kHz. (I set it initially to 48.0kHz to match my other audio devices, which I keep configured to the video standard.)
Keeping Loopback at 44.1kHz is an acceptable solution, but now my question would be: Is it possible to avoid this error even with a 48.0kHz input? (I tried AKSettings.sampleRate = 48000 but that made no difference.) Or can I just safely ignore the error in any case?
AudioKit is initialized thusly:
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
do {
    try mic.setDevice(AudioKit.inputDevices![inputDeviceNumber])
} catch {
    AKLog("Device not set")
}
amplitudeTracker = AKAmplitudeTracker(mic)
AudioKit.output = AKBooster(amplitudeTracker, gain: 0)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start")
}
mic.start()
amplitudeTracker?.start()
This one line saved my app:
try? AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.02)
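For what it's worth, the mMaxFramesPerSlice named in that log corresponds to the Core Audio property kAudioUnitProperty_MaximumFramesPerSlice. I'm not aware of AudioKit exposing it directly, so treat the following as a generic Core Audio sketch (the outputUnit variable is an assumption, not an AudioKit API) of how the limit would be raised on a raw audio unit before it is initialized:
// Raise the render limit so a 513-frame slice no longer trips the error.
UInt32 maxFrames = 4096;
OSStatus status = AudioUnitSetProperty(outputUnit,
                                       kAudioUnitProperty_MaximumFramesPerSlice,
                                       kAudioUnitScope_Global,
                                       0,                      // element 0
                                       &maxFrames,
                                       sizeof(maxFrames));
if (status != noErr) NSLog(@"Could not set MaximumFramesPerSlice: %d", (int)status);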

Why are my audio sounds not playing on time?

One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
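For reference, the timer setup itself is nothing unusual; a minimal sketch of what I'm doing (playClick: is just the name I use here for the method that plays the sound):
// Fire the click at the interval derived from the bpm setting.
NSTimeInterval interval = 60.0 / bpm;   // e.g. 0.4 s at 150 bpm
self.clickTimer = [NSTimer scheduledTimerWithTimeInterval:interval
                                                   target:self
                                                 selector:@selector(playClick:)
                                                 userInfo:nil
                                                  repeats:YES];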
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed; rather, they follow a pattern of shorter and longer intervals. It seems as if iOS has a lower resolution for timing of sounds and is rounding each sound event to the nearest available point, rounding up or down as needed to stay on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}

// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, (long)result);

// get the size of the file data (the property is a UInt64, so read it into one)
UInt64 fileDataSize = 0;
UInt32 propSize = sizeof(fileDataSize);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileDataSize);
if (result != 0) DLog(@"cannot find file size: %ld", (long)result);
UInt32 fileSize = (UInt32)fileDataSize;
DLog(@"file size: %li", (long)fileSize);

// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", (long)result);
AudioFileClose(fileID);

alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;

// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
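As a rough illustration of that pattern (the player and file URL names here are placeholders, not code from the question):
// Prepare ahead of time so -play starts with as little latency as possible.
AVAudioPlayer *clickPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:NULL];
[clickPlayer prepareToPlay];
// ... later, when the timer fires ...
[clickPlayer play];
// Once playback stops, the player releases its resources,
// so call prepareToPlay again before the next -play.
[clickPlayer prepareToPlay];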
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.

Playing audio from a continuous stream of data (iOS)

Been banging my head against this problem all morning.
I have set up a connection to a data source which returns audio data. (It is a recording device, so there is no set length on the data; the data just streams in, like when you open a stream to a radio station.)
I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data that is coming in, so I do not want to queue up a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.
I've been searching all morning through different examples, but none were really laid out clearly.
In the - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data method, the "data" parameter is the audio packet. I tried streaming it with AVPlayer, MFVideoPlayer, but nothing has worked for me so far. I also tried looking at mattgallagher's AudioStreamer but was still unable to achieve it.
Can anyone here help, or point me to some (preferably working) examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course practically never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on the format, this could be more or less tricky, but in general you should use Audio Converter Services for this step.
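As a rough sketch of that step (the formats below are placeholders, assuming AAC coming off the network and 16-bit interleaved PCM going to the render callback; adjust to whatever the server really sends):
// Describe the incoming compressed format (placeholder values).
AudioStreamBasicDescription srcFormat = {0};
srcFormat.mSampleRate       = 44100.0;
srcFormat.mFormatID         = kAudioFormatMPEG4AAC;
srcFormat.mChannelsPerFrame = 2;
srcFormat.mFramesPerPacket  = 1024;   // AAC packets carry 1024 frames

// Describe the 16-bit interleaved PCM we want to hand to the render callback.
AudioStreamBasicDescription dstFormat = {0};
dstFormat.mSampleRate       = 44100.0;
dstFormat.mFormatID         = kAudioFormatLinearPCM;
dstFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
dstFormat.mChannelsPerFrame = 2;
dstFormat.mBitsPerChannel   = 16;
dstFormat.mBytesPerFrame    = dstFormat.mChannelsPerFrame * sizeof(SInt16);
dstFormat.mFramesPerPacket  = 1;
dstFormat.mBytesPerPacket   = dstFormat.mBytesPerFrame;

AudioConverterRef converter = NULL;
OSStatus status = AudioConverterNew(&srcFormat, &dstFormat, &converter);
if (status != noErr) NSLog(@"AudioConverterNew failed: %d", (int)status);
// Converted packets are then pulled through AudioConverterFillComplexBuffer
// with an input callback that feeds it the raw network bytes.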
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
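For example, something as simple as appending to an NSMutableData guarded by a lock will do as a first pass (streamBuffer and bufferLock are illustrative property names, since the render callback will read from another thread):
// Called on the connection's thread: just stash the bytes, never play from here.
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [self.bufferLock lock];                 // NSLock protecting the shared buffer
    [self.streamBuffer appendData:data];    // NSMutableData acting as a simple FIFO
    [self.bufferLock unlock];
}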
Now, to play the data you 'stored' in memory using the buffer, you need to use the RemoteIO audio unit. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial, as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames * 2; j++) {   // 2 bytes per frame for 16-bit mono
            frameBuffer[j] = getNextPacket(); // a function you have to write that returns the next byte available in the stream buffer
        }
    }
    return noErr;
}
Basically what it does is to fill up the ioData buffer with the next chunk of bytes that need to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player is silenced if not enough data is in the stream buffer).
Also, you can achieve the same thing with OpenAL using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
That's it. Happy coding!

How to use AVCaptureSession to stream live preview video, then take a photo, then return to streaming

I have an application that creates its own live preview prior to taking a still photo. The app needs to run some processing on the image data and thus is not able to rely on AVCaptureVideoPreviewLayer. Getting the initial stream to work is going quite well, using Apple's example code. The problem comes when I try to switch to the higher quality image to take the snapshot. In response to a button press I attempt to reconfigure the session for taking a full resolution photo. I've tried many variations but here is my latest example (which still does not work):
- (void)sessionSetupForPhoto
{
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetPhoto;
    AVCaptureStillImageOutput *output = [[[AVCaptureStillImageOutput alloc] init] autorelease];
    for (AVCaptureOutput *output in [session outputs]) {
        [session removeOutput:output];
    }
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    } else {
        NSLog(@"Not able to add an AVCaptureStillImageOutput");
    }
    [session commitConfiguration];
}
I am consistently getting an error message just after the commitConfiguration line that looks like this:
(that is to say, I am getting an AVCaptureSessionRuntimeErrorNotification sent to my registered observer)
Received an error:
NSConcreteNotification 0x19d870 {name = AVCaptureSessionRuntimeErrorNotification; object = ; userInfo = {
    AVCaptureSessionErrorKey = "Error Domain=AVFoundationErrorDomain Code=-11800 \"The operation couldn\U2019t be completed. (AVFoundationErrorDomain error -11800.)\" UserInfo=0x19d810 {}";
}}
The documentation in Xcode ostensibly provides more information for the error number (-11800): "AVErrorUnknown - Reason for the error is unknown."
Previously I had also tried calls to stopRunning and startRunning, but no longer do that after watching WWDC Session 409, where it is discouraged. When I was stopping and starting, I was getting a different error message -11819, which corresponds to "AVErrorMediaServicesWereReset - The operation could not be completed because media services became unavailable.", which is much nicer than simply "unknown", but not necessarily any more helpful.
It successfully adds the AVCaptureStillImageOutput (i.e., does NOT emit the log message).
I am testing on an iPhone 3g (w/4.1) and iPhone 4.
This call is happening in the main thread, which is also where my original AVCaptureSession setup took place.
How can I avoid the error? How can I switch to the higher resolution to take the photo?
Thank you!
Since you're processing the video data coming out of the AVCaptureSession, I'm assuming you have an AVCaptureVideoDataOutput connected to it prior to calling sessionSetupForPhoto.
If so, can you elaborate on what you're doing in captureOutput:didOutputSampleBuffer:? Without being able to see more, I'm guessing there may be a problem with removing the old outputs and subsequently setting the photo quality preset.
Also, the output variable you're using as an iterator when you remove your outputs is hiding the still image output. Not a problem, but it makes the code a little harder to read.
There is no need to switch sessions. Just add AVCaptureStillImageOutput to your session on initialization and call the following when you are about to capture the image and use the CMSampleBufferRef accordingly:
captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
}
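For completeness, here is a minimal sketch of how that call is typically wired up (stillImageOutput, the connection lookup, and the JPEG conversion are assumptions about a typical setup, not taken from the question):
// Find the video connection on the still image output that was added at init time.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in connection.inputPorts) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) break;
}

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    if (imageDataSampleBuffer) {
        // Convert the buffer to JPEG data (or process the CMSampleBufferRef directly).
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [[[UIImage alloc] initWithData:jpegData] autorelease];
        // ... hand the image off to your processing code ...
    }
}];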

AudioQueue and iOS4?

The following code used to work for me in the past. I'm trying it now with iOS 4 without luck. It works in the simulator, but I don't hear anything on the device itself. I first record a few samples into an NSMutableData variable, and then I try to play them back.
I've tried the SpeakHere sample from Apple, which works (but it plays back from a file, not memory).
Any idea what am I missing?
AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
AudioSessionSetActive(true);

AudioQueueNewOutput(&m_format, &OutputCallback, self, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &m_device);

AudioQueueBufferRef nBuffer = NULL;
AudioQueueAllocateBuffer(m_device, [data length], &nBuffer);
nBuffer->mAudioDataByteSize = [data length];
[data getBytes:(nBuffer->mAudioData) length:(nBuffer->mAudioDataByteSize)];
AudioQueueEnqueueBuffer(m_device, nBuffer, 0, NULL);
AudioQueueStart(m_device, NULL);
The main things I can suggest are:
(1) make sure the device is not muted and the volume is up
(2) Check the result codes. For instance:
OSStatus errorCode = AudioQueueNewOutput(...);
if (errorCode) NSLog(@"Error: %d", (int)errorCode);
Something else that would give you a little bit more information:
While it is supposed to be running, try adjusting the volume. If it adjusts the ringer volume, the AudioQueue is not playing and/or not set up correctly. If it adjusts the playback volume, then the AudioQueue is probably not getting data when it asks for it.
For the record, I have an application that's using the AudioQueue on iOS 4 on all devices, so I know it works and it's not a bug.
Keep at it: the AudioQueue can be very, very annoying at times.