AudioQueue PropertyListener IsRunning callback only fires once - iPhone

OSStatus err = AudioQueueNewOutput(&audioDescription, AudioPlayerAQOutputCallback, (void *)self, NULL, NULL, 0, &audioQueue);
if (err != noErr)
    NSLog(@"Couldn't create the audio queue.");
err = AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, isRunningProc, self);
if (err != noErr)
    NSLog(@"Couldn't register for playback state changes.");
This callback function is only called once, right after AudioQueueStart(audioQueue, NULL). It is never called again, whether I call AudioQueuePause(audioQueue) or the audio reaches the end.
static void isRunningProc(void *inUserData,
                          AudioQueueRef inAQ,
                          AudioQueuePropertyID inID)
What have I missed?

I did a short test on this:
It does indeed seem that the callback is not called for a pause, nor for a start when you are resuming from a pause.
But this is solvable. You started the song somehow, and that triggers the property listener; so does the song stopping, whether on its own or because you stopped it. You may have to trigger the property listener yourself, using something like this in your play routine:
if (bytesRead == 0) {
    // This will trigger the property listener
    AudioQueueStop(inAQ, false);
}
else {
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
As far as the AudioQueue is concerned, as long as you keep feeding it audio buffers to play, it is still playing. (I also tested not feeding any buffers at all, which did not trigger a stop, so you have to call AudioQueueStop explicitly to trigger the property listener.)
This means you already know whether your song is playing or not. A pause or un-pause is requested by tapping your button. If the song is not playing, do nothing. If the song is playing, call AudioQueuePause and set a flag recording that you have paused the music. Remember to check the error code (see (1) below). If the flag says the music is paused, call AudioQueueStart and clear the flag. Again, check the error code.
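A minimal sketch of that toggle in a button handler (the isPaused flag and the togglePause name are illustrative, not from the question; audioQueue is the queue created above):
- (void)togglePause {
    OSStatus err;
    if (!isPaused) {
        err = AudioQueuePause(audioQueue);       // pauses without discarding queued buffers
        if (err == noErr) isPaused = YES;
        else NSLog(@"AudioQueuePause failed: %d", (int)err);
    }
    else {
        err = AudioQueueStart(audioQueue, NULL); // resumes from where we paused
        if (err == noErr) isPaused = NO;
        else NSLog(@"AudioQueueStart failed: %d", (int)err);
    }
}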
(1) Why check the error code?
First, although unlikely, an error may occur once in a blue moon.
My real concern, however, is with multiple threads. The AudioQueue obviously runs on a separate thread from your GUI. That means that if you test a flag to see whether music is playing, that state cannot fully be trusted, because it might have changed since you tested it: another thread may have snuck in between your test and the action based on that test.
Say you check that the song is playing (it is). But before you get around to asking the song to pause, it stops on its own because it reached the end. Then you ask to pause the song, which is already stopped.
What happens then? I don't really know. It might not even be a problem in this situation, but things like this are worth considering. It needs testing, or at least consulting the documentation.
How about another scenario? What if the song is stopped and you ask to start it again? I would think that is a worse case, but it might not be a problem. Again, consider those cases and check the documentation, or even test it yourself.

Related

Stopping a `CallbackInstrument` prior to setting `AVAudioSession.setActive(false)`

In an attempt to pause my signal chain when a user puts the app into the background, or is interrupted by a phone call, I am trying to handle the interruption by stopping all playing nodes and calling AVAudioSession.setActive(false), as per convention.
It seems fine to call stop() on all nodes except CallbackInstrument, which crashes at line 231 of DSPBase.cpp in CAudioKitEX from the AudioKitEX repo:
void DSPBase::processOrBypass(AUAudioFrameCount frameCount, AUAudioFrameCount bufferOffset) {
    if (isStarted) {
        process(FrameRange{bufferOffset, frameCount});
    } else {
        // Advance all ramps.
        stepRampsBy(frameCount);
        // Copy input to output.
        if (inputBufferLists.size() and !bCanProcessInPlace) {
            for (int channel = 0; channel < channelCount; ++channel) {
                auto input = (const float *)inputBufferLists[0]->mBuffers[channel].mData + bufferOffset;
                auto output = (float *)outputBufferList->mBuffers[channel].mData + bufferOffset;
                std::copy(input, input + frameCount, output);
            }
        }
        // Generators should be silent.
        if (inputBufferLists.empty()) {
            zeroOutput(frameCount, bufferOffset);
        }
    }
}
My CallbackInstrument is attached to a Sequencer as a discrete track. The crash occurs when the sequencer is playing and the app goes into the background, at which point I call a method that stops all current sequencers and all active nodes prior to calling AVAudioSession.setActive(false).
I would simply ignore this and not stop CallbackInstrument; however, if I don't attempt to stop or reset the CallbackInstrument node, AVAudioSession throws an error:
AVAudioSession_iOS.mm:1271 Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
Error seting audio session false:Error Domain=NSOSStatusErrorDomain Code=560030580 "(null)"
Error code=560030580 refers to AVAudioSessionErrorCodeIsBusy as stated here
Question:
If stopping a Sequencer with a CallbackInstrument does not in fact stop rendering audio/midi from the callback, how do we safely stop a signal chain with a CallbackInstrument in order to prepare for AVAudioSession.setActive(false)?
I have an example repo of the issue which can be found here.
Nicely presented question by the way :)
It seems that stopping nodes this way does not guarantee that the audio buffer list in processOrBypass is populated: while it does report that size() == 1, the pointer in audioBufferList[0] is null...
Possibly there should be an additional check for this?
However, if instead of calling stop() on every node in your AudioManager.sleep() you call engine.stop(), and conversely call self.start() in your AudioManager.wake() instead of starting all your nodes, you avoid your error, and it should stop/start all nodes in the process.

Gstreamer 1.0 Pause signal

I need to detect when the currently playing audio/video is paused. I cannot find anything for 1.0. My app is a bit complex, but here is the condensed code:
/* This function is called when the pipeline changes states. We use it to
 * keep track of the current state. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
    {
        g_print("State set to %s\n", gst_element_state_get_name(new_state));
    }
}
gst_init(&wxTheApp->argc, &argv);
m_playbin = gst_element_factory_make("playbin", "playbin");
if (!m_playbin)
{
    g_printerr("Not all elements could be created.\n");
    exit(1);
}
CustomData *data = new CustomData(xid, m_playbin);
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_set_sync_handler(bus, (GstBusSyncHandler)create_window, data, NULL); // here I do the video overlay stuff
g_signal_connect(G_OBJECT(bus), "message::state-changed", (GCallback)state_changed_cb, data);
What am I doing wrong? I cannot find a working example of connecting such events in GStreamer 1.0, and 0.x seems a bit different from 1.0, so the vast number of examples for it don't help.
UPDATE
I have found a way to get messages. I run a wxWidgets timer with a 500 ms interval, and each time the timer fires I call:
GstMessage *msg = gst_bus_pop(m_bus);
if (msg != NULL)
{
    g_print("New Message -- %s\n", gst_message_type_get_name(GST_MESSAGE_TYPE(msg)));
    gst_message_unref(msg); // messages popped from the bus must be unreffed
}
Now I get a lot of state-changed messages. But I still want to know whether a given message means Pause, Stop, Play, or End of Media (that is, a way to differentiate the messages), so that I can notify the UI.
So while I get messages now, the basic problem of getting specific notifications remains unsolved.
You have to call gst_bus_add_signal_watch() (like in 0.10) to enable emission of the signals. Without that, you can only use the other ways of getting notified about GstMessages on that bus.
Also, just to be sure: you need a running GLib main loop on the default main context for this to work. Otherwise you need to do things a bit differently.
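A minimal sketch of the missing piece, assuming a GLib main loop is running on the default main context (m_playbin, state_changed_cb and data are from the question's code):
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_add_signal_watch(bus); /* without this, the "message::..." signals are never emitted */
g_signal_connect(G_OBJECT(bus), "message::state-changed",
                 (GCallback)state_changed_cb, data);
gst_object_unref(bus);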
For the updated question:
Check the documentation: gst_message_parse_state_changed() can be used to parse the old, new, and pending state from the message. This is also still the same as in 0.10; from the application's point of view, conceptually not much has changed between 0.10 and 1.0.
Also, you shouldn't do this timeout-based polling, as it will block your wxWidgets main loop. The easiest solution would be to use a sync bus handler (which you already have) and dispatch all messages from there to some callback on the wxWidgets main loop.
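To differentiate the messages, a sketch along these lines should work (the dispatch to the UI is omitted; msg is a GstMessage obtained from the bus, data is the question's CustomData):
switch (GST_MESSAGE_TYPE(msg)) {
case GST_MESSAGE_STATE_CHANGED: {
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin)) {
        if (new_state == GST_STATE_PAUSED)
            g_print("Paused\n");
        else if (new_state == GST_STATE_PLAYING)
            g_print("Playing\n");
        else if (new_state == GST_STATE_READY || new_state == GST_STATE_NULL)
            g_print("Stopped\n");
    }
    break;
}
case GST_MESSAGE_EOS: /* end of media */
    g_print("End of stream\n");
    break;
default:
    break;
}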

SpeakHere does NOT record after receiving a phone call

I'm working on a project that needs to record and analyze sound; everything is OK when I use SpeakHere.
But when someone calls my phone, the recording stops, and when the app comes back, it never records again.
I tried to restart the recorder by pressing record, but I get these errors:
Error: couldn't get input channel count ('!cat')
Error: couldn't enable metering (-50)
ERROR: metering failed
I also tried to restart by calling StartRecord(....), but nothing changed. Can anyone help me?
if (inInterruptionState == kAudioSessionEndInterruption)
    THIS->recorder->StartRecord(CFSTR("recordedFile.caf"));
An app must stop recording in the begin-interruption callback of its audio session interruption listener if it ever wants to start recording again. Otherwise, a force quit and restart by the user may be required.
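A minimal sketch of that, assuming SpeakHere's interruptionListener and its AQRecorder class (the StartRecord call mirrors the snippet in the question; verify StopRecord and IsRunning against your copy of the sample):
static void interruptionListener(void *inClientData, UInt32 inInterruptionState)
{
    SpeakHereController *THIS = (SpeakHereController *)inClientData;
    if (inInterruptionState == kAudioSessionBeginInterruption) {
        // Stop recording as soon as the interruption begins; otherwise the
        // queue cannot be restarted once the interruption ends.
        if (THIS->recorder->IsRunning())
            THIS->recorder->StopRecord();
    }
    else if (inInterruptionState == kAudioSessionEndInterruption) {
        AudioSessionSetActive(true); // reclaim the audio session
        THIS->recorder->StartRecord(CFSTR("recordedFile.caf"));
    }
}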
I've been having the same problem with SpeakHere and found this solution by (hours and hours of) trial and error. Try this: get rid of the references to playbackWasInterrupted (commented out below), but leave in the other player-related directives. Somehow this re-enables the recorder! If anyone could explain why this works, I would love to know!
Under void interruptionListener, change
else if ((inInterruptionState == kAudioSessionEndInterruption) && THIS->playbackWasInterrupted)
to
else if (inInterruptionState == kAudioSessionEndInterruption)
//&& THIS->playbackWasInterrupted)
and then comment out or delete the "playbackWasInterrupted" line below:
{
    // we were playing back when we were interrupted, so reset and resume now
    THIS->player->StartQueue(true);
    [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:THIS];
    // THIS->playbackWasInterrupted = NO;
}
Just from memory: when returning to the foreground (in the corresponding notification handler), you need to call
AudioSessionSetActive(true)
or something similar. As I said, I only read it in a related question - no guarantees.
Good luck, nobi

Why are my audio sounds not playing on time?

One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed; rather, they follow a pattern of shorter and longer intervals. It seems as if iOS has a lower timing resolution for sounds and is rounding each sound event to the nearest available point, up or down as needed to stay on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}

// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, (long)result);

// get the size of the file data (the property value is a UInt64)
UInt64 audioDataSize = 0;
UInt32 propSize = sizeof(audioDataSize);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &audioDataSize);
if (result != 0) DLog(@"cannot find file size: %ld", (long)result);
UInt32 fileSize = (UInt32)audioDataSize;
DLog(@"file size: %li", (long)fileSize);

// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", (long)result);
AudioFileClose(fileID);

alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;

// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
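If you do want AVAudioPlayer for discrete clicks, one possible pattern (a sketch, not from the answer; self.player is assumed to be a configured AVAudioPlayer whose delegate is set to this object) is to re-prime it from the delegate each time playback finishes:
- (void)playClick {
    [self.player play];
}

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    [player prepareToPlay]; // reinitialize while nothing is playing, so the next play starts promptly
}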
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
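For instance, with a hypothetical AudioEngine wrapper (illustrative names, not from the answer), the header exposes a plain C interface and the .mm file hides all the C++:
// AudioEngine.h -- plain C interface, safe to #import from .m files
#ifdef __cplusplus
extern "C" {
#endif
void AudioEngineStartClick(void);
#ifdef __cplusplus
}
#endif

// AudioEngine.mm -- compiled as Objective-C++, free to use C++ internally
// #import "AudioEngine.h"
// void AudioEngineStartClick(void) { /* audio unit code lives here */ }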
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.

iPhone App Pick Up Sound

I am trying to perform a certain action based on whether or not the user makes a loud sound. I'm not trying to do any voice recognition or anything; I simply want to perform an action when the iPhone picks up a loud sound.
Any suggestions or tutorials? I can't find anything on the Apple developer site. I'm assuming I'm not looking or searching right.
The easiest thing for you to do is to use the AudioQueue services. Here's the manual:
Apple AQ manual
Basically, look for any example code that initializes things with AudioQueueNewInput(). Something like this:
Status = AudioQueueNewInput(&_Description,
                            Audio_Input_Buffer_Ready,
                            self,
                            NULL,
                            NULL,
                            0,
                            &self->Queue);
Once you have that going, you can enable sound level metering with something like this:
// Turn on level metering (iOS 2.0 and later)
UInt32 on = 1;
AudioQueueSetProperty(self->Queue, kAudioQueueProperty_EnableLevelMetering, &on, sizeof(on));
You will have a callback routine that is invoked for each chunk of audio data. In it, you can check the current meter levels with something like this:
//
// Check metering levels and detect silence
//
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
Status = AudioQueueGetProperty(_Queue, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (Status == 0) {
    if (meters[0].mPeakPower > _threshold) {
        silence = 0.0; // reset silence timer
    } else {
        silence += time;
    }
}

//
// Notify observers of incoming data.
//
if (delegate) {
    [delegate audioMeter:meters[0].mPeakPower duration:time];
    [delegate audioData:Buffer->mAudioData size:Buffer->mAudioDataByteSize];
}
Or, in your case, instead of detecting silence you can detect whether the decibel level stays above a certain value for long enough. Note that the decibel values you will see range from about -70.0 dB for dead silence up to 0.0 dB for very loud sounds, on a logarithmic scale. You'll have to play with it to see what values work for your particular application.
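A sketch of that loudness check, to go inside the callback above (kLoudThresholdDB, kMinLoudSeconds, loudTime, triggered, and the delegate method are illustrative names, not from the answer):
static const Float32 kLoudThresholdDB = -20.0f; // tune for your app
static const Float32 kMinLoudSeconds  = 0.05f;  // how long it must stay loud

if (meters[0].mPeakPower > kLoudThresholdDB) {
    loudTime += time;                     // accumulate time spent above threshold
    if (loudTime >= kMinLoudSeconds && !triggered) {
        triggered = YES;
        [delegate loudSoundDetected];     // hypothetical delegate callback
    }
} else {
    loudTime = 0.0;                       // reset once it drops below threshold
    triggered = NO;
}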
Apple has examples such as SpeakHere, which looks to have code relating to decibels. I would check some of the metering classes for examples. I have no audio programming experience, but hopefully that will get you started while someone provides a better answer.