Getting notified when a sound is done playing in OpenAL - iPhone

I'm using OpenAL on iPhone to play multiple audio samples simultaneously.
Can I get OpenAL to notify me when a single sample is done playing?
I'd like to avoid hardcoding the sample length and setting a timer.

I didn't have much luck with callbacks in OpenAL. In my state machines, I simply poll the source and delay the transition until it's done.
- (BOOL)playing {
    ALint sourceState;
    alGetSourcei(sourceID, AL_SOURCE_STATE, &sourceState);
    return sourceState == AL_PLAYING;
}

// ... //

case QSTATE_DYING:
    if (![audioSource playing])
        [self transitionTo:QSTATE_DEAD];
If this isn't what you need, then your best bet is probably a timer. You shouldn't need to hardcode any values; you can determine the playback time when you're populating your buffers.
A bit of insight into the "why" of the question might offer some additional choices.
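As a rough sketch of that idea (my own illustration, not from the original answer): OpenAL exposes the buffer attributes AL_SIZE, AL_BITS, AL_CHANNELS, and AL_FREQUENCY, from which you can compute a duration at load time.
// Sketch: derive a buffer's playback duration from its OpenAL properties.
// Assumes bufferID refers to a valid, already-filled buffer.
- (NSTimeInterval)durationOfBuffer:(ALuint)bufferID {
    ALint size, bits, channels, frequency;
    alGetBufferi(bufferID, AL_SIZE, &size);           // total bytes of PCM data
    alGetBufferi(bufferID, AL_BITS, &bits);           // bits per sample
    alGetBufferi(bufferID, AL_CHANNELS, &channels);   // 1 = mono, 2 = stereo
    alGetBufferi(bufferID, AL_FREQUENCY, &frequency); // sample rate in Hz
    ALint bytesPerFrame = channels * (bits / 8);
    return (NSTimeInterval)size / (NSTimeInterval)(bytesPerFrame * frequency);
}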

If you have the OpenAL source abstracted into a class, I guess you can simply use performSelector:withObject:afterDelay: when you start the sound:
- (void) play
{
    [delegate performSelector:@selector(soundHasFinishedPlaying)
               withObject:nil
               afterDelay:self.length];
    …
}
(If you stop the sound manually in the meantime, the callback can be cancelled; see cancelPreviousPerformRequestsWithTarget:selector:object: in the NSObject Class Reference.) Or you can poll the AL_SOURCE_STATE:
- (void) checkStatus
{
    ALint state;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if (state == AL_PLAYING)
        return;
    [timer invalidate];
    [delegate soundHasFinishedPlaying];
}
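For completeness, a minimal sketch of how that polling method might be driven; the timer ivar matches the one invalidated above, while the method name startPolling and the 0.1-second interval are arbitrary choices of mine:
// Sketch: start polling when playback begins; 0.1 s is an arbitrary interval.
- (void)startPolling
{
    timer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                             target:self
                                           selector:@selector(checkStatus)
                                           userInfo:nil
                                            repeats:YES];
}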
I don’t know how to have OpenAL call you back. What exactly do you want the callback for? Some things can be solved better without a callback.

This OpenAL guide suggests a possible solution:
The 'stream' function also tells us if the stream is finished playing.
...and provides sample source code to illustrate the usage.

Wait, are you talking about having finished one sample (e.g., 1/44100 second for 44.1 kHz audio)? Or are you talking about knowing that a source has played through its buffer and has no more audio to play?
For the latter, I've had good results polling a source for the AL_BUFFERS_PROCESSED property when I stream buffers to a source; it might work for the single-buffer case to look for a non-zero value of this property.
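A rough sketch of that polling approach (variable names are mine; for a single statically-attached buffer, checking AL_SOURCE_STATE as shown earlier may be simpler):
// Sketch: ask the source how many of its queued buffers have been consumed.
ALint processed = 0;
alGetSourcei(sourceID, AL_BUFFERS_PROCESSED, &processed);
if (processed > 0) {
    // Streaming case: unqueue the spent buffers so they can be refilled.
    ALuint spent[processed];
    alSourceUnqueueBuffers(sourceID, processed, spent);
}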

Related

Moving an image on the basis of sound frequency?

I am trying to make something like this: I am recording a sound, and on the basis of the sound (pitch, frequency, I'm not sure) an image should move.
I am able to record the sound, and I also have the image sequence in place, but separately.
I am not sure how to link the two. Just for information, I am trying to achieve something like a
mouth mover app:
app url here
My question is: how can I move/animate an image on the basis of sound frequency?
Thanks
I am done with the solution. I used Dirac and the problem is solved.
Edit:
What is it?
DiracAudioPlayer is a new set of Cocoa classes that wrap the entire Dirac functionality in a convenient way, exposing an API that is similar to what AVAudioPlayer offers. Note that this is not an AVAudioPlayer subclass.
Following are the core features and a description of the API.
DiracAudioPlayer Core Features
DiracAudioPlayer is a set of classes that allow file based playback of a variety of audio formats (including MPMediaItems) while simultaneously changing speed and pitch of the audio file in real time. Version 3.6 consists of DiracAudioPlayerBase (the base class taking care of file IO and playback), DiracAudioPlayer (wrapping the Dirac Core API) and DiracFxAudioPlayer (wrapping the DiracFx API).
Make sure you include all 3 classes in your project as well as the "ExtAudioFile" and "util" folders, and add Accelerate.framework and CoreAudio.framework to the project. On MacOS X you will have to add the AudioUnit.framework as well, on iOS you will have to add AudioToolbox.framework, AVFoundation.framework, MediaPlayer.framework and CoreMedia.framework instead.
DiracAudioPlayer is…
…an Apple-compatible class to play back time stretched audio that works on both iOS (version 4 and higher) and MacOS X (version 10.6 and higher)
…very easy to use
…fully ARC compatible
…delivered to you including the full source code
DiracAudioPlayer API
Version 3.6 released in November 2012 offers the following calls:
- (id) initWithContentsOfURL:(NSURL*)inUrl channels:(int)channels error: (NSError **)error;
Initializes and returns an audio player for playing a designated sound file. inUrl is a URL identifying the sound file to play; the audio data must be in a format supported by Core Audio. For error, pass in the address of a nil-initialized NSError object; if an error occurs, upon return the NSError object describes the error. To use an item from the user's iPod library, supply the URL that you get via MPMediaItem's MPMediaItemPropertyAssetURL property as inUrl. Note that FairPlay-protected content can NOT be processed.
- (void) setDelegate:(id)delegate;
- (id) delegate;
Set/get delegate of the class. If you implement the delegate protocol, DiracAudioPlayer will call your implementation of
- (void)diracPlayerDidFinishPlaying:(DiracAudioPlayerBase *)player successfully:(BOOL)flag
when it is done playing.
- (void) changeDuration:(float)duration;
- (void) changePitch:(float)pitch;
Change playback speed and pitch
- (NSInteger) numberOfLoops;
- (void) setNumberOfLoops:(NSInteger)loops;
A value of 0, which is the default, means to play the sound once. Set a positive integer value to specify the number of times to return to the start and play again. For example, specifying a value of 1 results in a total of two plays of the sound. Set any negative integer value to loop the sound indefinitely until you call the stop method.
- (void) updateMeters;
Must be called prior to calling -peakPowerForChannel in order to update its internal measurements
- (float) peakPowerForChannel:(NSUInteger)channelNumber;
A floating-point representation, in decibels, of a given audio channel’s current peak power. A return value of 0 dB indicates full scale, or maximum power; a return value of -160 dB indicates minimum power (that is, near silence). If the signal provided to the audio player exceeds ±full scale, then the return value may exceed 0 (that is, it may enter the positive range). To obtain a current peak power value, you must call the updateMeters method before calling this method.
- (BOOL) prepareToPlay;
Starts the Dirac processing thread and prepares the sound file for playback. If you don't call this explicitly, it will be called automatically when you call -play.
- (NSUInteger) numberOfChannels;
The number of audio channels in the sound associated with the audio player. (read-only)
- (NSTimeInterval) fileDuration;
Returns the total duration, in seconds, of the sound associated with the audio player. (read-only)
- (NSTimeInterval) currentTime;
- (void) setCurrentTime:(NSTimeInterval)time;
Returns the current play time in the input file. Note that if you apply time stretching, -currentTime will reflect the slowed down time depending on the time stretch factor.
IMPORTANT CHANGE: In previous versions this value returned the total play time independent of the position in the file. Please update your code accordingly to reflect the change
Setting this property causes playback to fast forward or rewind to the specified play time.
- (void) play;
Plays a sound asynchronously. Calling this method implicitly calls -prepareToPlay if the audio player is not already prepared to play.
- (NSURL*) url;
The URL for the sound associated with the audio player. (read-only)
- (void) setVolume:(float)volume;
- (float) volume;
The playback gain for the audio player, ranging from 0.0 through 1.0.
- (BOOL) playing;
A Boolean value that indicates whether the audio player is playing (YES) or not (NO). (read-only). To find out when playback has stopped, use the diracPlayerDidFinishPlaying:successfully: delegate method.
- (void) pause;
Pauses playback; sound remains ready to resume playback from where it left off. Calling pause leaves the audio player prepared to play; it does not release the audio hardware that was acquired upon calling -play or -prepareToPlay.
- (void) stop;
Stops playback and undoes the setup needed for playback. Calling this method, or allowing a sound to finish playing, undoes the setup performed upon calling the -play or -prepareToPlay methods.
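Pieced together from the calls documented above, a minimal usage sketch might look like this. The file name, the channel count, the diracPlayer ivar, and the exact meaning of the duration/pitch factors are my assumptions, not from the Dirac documentation quoted above:
// Sketch: set up and play a bundled file, using only the calls documented above.
// Assumes self implements the delegate callback below.
- (void)startPlayback
{
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"Music" withExtension:@"wav"];
    NSError *error = nil;
    diracPlayer = [[DiracAudioPlayer alloc] initWithContentsOfURL:url
                                                         channels:1 // assumed mono file
                                                            error:&error];
    [diracPlayer setDelegate:self];
    [diracPlayer changeDuration:2.0]; // assumed stretch factor: twice as long, half speed
    [diracPlayer changePitch:0.5];    // assumed pitch factor: one octave down
    [diracPlayer play];
}

// Called by DiracAudioPlayer when playback finishes:
- (void)diracPlayerDidFinishPlaying:(DiracAudioPlayerBase *)player successfully:(BOOL)flag
{
    NSLog(@"finished playing, success: %d", flag);
}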
Most text-to-speech systems will allow you to register a callback function that will send you the phoneme (in layman's terms, the sound) that is being produced. Look at the following link. Click on callbacks on the left-hand side, and look for SpeechPhonemeProcPtr, which lets you register a function that will be called when the noise being made is "uh", "th", "ah", or whatever noise it is. You would then update your image to look like a person's mouth when making that particular sound. This was very easy in IBM's ViaVoice; I have never coded such an application on an iPhone, but I think this is better than trying to match the audio.
If it is truly unfiltered audio you are trying to match, then you can pass it to a voice-recognition system, pass the recognized text into the TTS system, and get the phonemes.

AudioServicesAddSystemSoundCompletion callback-method is not called after a few calls

I'm using AudioServicesAddSystemSoundCompletion in my app to detect when sound has finished and then trigger some other action.
For some reason I am getting the following behavior: it works for the first 8 to 12 sounds (that's at least what I tested), and then the callback defined for AudioServicesAddSystemSoundCompletion is not called anymore.
Here is my code to create the sound:
NSString *soundPath = [[NSBundle mainBundle] pathForResource:[[soundFileName componentsSeparatedByString:@"."] objectAtIndex:0] ofType:@"wav"];
Log2(@"soundFileName: %@", soundFileName);
CFURLRef soundURL = (CFURLRef)[NSURL fileURLWithPath:soundPath];
AudioServicesCreateSystemSoundID(soundURL, &sound);
AudioServicesAddSystemSoundCompletion(sound, nil, nil, playSoundFinished, (void*) self);
to play the sound:
AudioServicesPlaySystemSound(sound);
and to do some stuff when the sound finished playing:
void playSoundFinished (SystemSoundID sound, void *data) {
    pf // typedef for PRETTY_FUNCTION log
    AudioServicesRemoveSystemSoundCompletion(sound);
    [AppState sharedInstance].b_touchesFREE = TRUE;
    if ([[AppState sharedInstance].ma_cardsToCheck count] >= 2)
    {
        [[AppState sharedInstance].vc_spiel pairFound];
    }
    if (((Card*)data).b_isTurnedFaceUp) {
        [AppState sharedInstance].i_cardsTurnedFaceUp--;
    }
    [(Card*)data release];
}
Has anyone of you any idea why it works the first few times and then stops working?
Thx in advance.
Maverick1st
***** Edit *****
I just found out that this happens when I try to play the sound a second time.
Could it be that I forgot to release it somewhere?
But I always thought AudioServicesRemoveSystemSoundCompletion handles the memory management.
***** One more edit *****
So posting this on Stack Overflow made me think a bit more deeply about the problem, and I have the solution now (at least I think I do ;)).
Unfortunately I cannot answer my own question for the next 7.5 hours, so I have to edit the question instead.
Just so you can better understand my problem:
I'm programming a memory game, and every Card is a class containing its images for front and back and the sound it plays when it's turned around.
Since I only initialize the sound on creation of the card, I was not sure whether I should call AudioServicesRemoveSystemSoundCompletion every time the sound ends.
So I just tried it without AudioServicesRemoveSystemSoundCompletion, and it works now.
The only thing I am not sure about is whether this could lead to a memory leak or something like that.
But for now it works fine.
If someone could tell me whether this is OK regarding memory use, I'd be really happy. :)
Best regards.
Maverick1st
If you create the sound (and set the sound completion) only once during the lifetime of the app, it should be OK from a memory-management standpoint. The rule is always: you create it, you release it.
However, you should call
AudioServicesRemoveSystemSoundCompletion(sound);
AudioServicesDisposeSystemSoundID(sound);
when you don't need the sound anymore (most probably in the dealloc method of the object that created, or keeps a reference to, the sound). Do not change the order of these two; otherwise you have a memory leak.
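A minimal sketch of that teardown (assuming a sound ivar and manual reference counting, as in the question):
// Sketch: dispose of the system sound when the owning object goes away.
- (void)dealloc {
    AudioServicesRemoveSystemSoundCompletion(sound); // remove the completion first
    AudioServicesDisposeSystemSoundID(sound);        // then dispose of the sound
    [super dealloc];
}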
Maybe you would find AppSoundEngine useful. It is a simple-to-use wrapper for SystemSoundID and the associated C functions.

Can't fix Severe Memory Leak from OpenAL

I'm nearing the end of a big iPhone project, and while checking for memory leaks I stumbled on this huge one. I implemented the sound following this tutorial:
http://www.gehacktes.net/2009/03/iphone-programming-part-6-multiple-sounds-with-openal/
Works a charm, and a lot of people use it, but I get a huge leak at the start of the project when the sound is initially loaded in. Below are the lines of code that set off the leak:
[[Audio sharedMyOpenAL] loadSoundWithKey:@"music" File:@"Music" Ext:@"wav" Loop:true];
[[Audio sharedMyOpenAL] loadSoundWithKey:@"btnPress" File:@"BtnPress" Ext:@"wav" Loop:false];
[[Audio sharedMyOpenAL] loadSoundWithKey:@"ting1" File:@"GlassTing1" Ext:@"wav" Loop:false];
etc., etc.; it loads 20 sounds altogether. More specifically, the leak points to this chunk of code in the Audio.m file:
+ (Audio*)sharedMyOpenAL {
    @synchronized(self) {
        if (sharedMyOpenAL == nil) {
            sharedMyOpenAL = [[self alloc] init]; // assignment not done here
        }
    }
    return sharedMyOpenAL;
}
I am unsure how to resolve this and any help on the matter would be greatly appreciated.
Thanks.
Isn’t the “leak” simply the Audio singleton? I am not sure how the leak detection works, but from a certain viewpoint most singletons are leaks, since they only release memory after your application exits.
If this really is the case, then it depends on whether you need to release the memory used by the sounds. The memory usage should not go up, so you don’t have to worry about the “traditional leak” scenario where your application takes more and more memory until it gets killed. The code you are using does not seem to support sound unloading, so that if you want to release the memory, you’ll have to add that code yourself.
And a personal viewpoint: Writing a sound effect engine using a singleton is not a good design. Managing the sounds becomes a pain (this is exactly the problem you are facing), the singleton adds a lot of unnecessary boilerplate code, etc. I see no reason the sounds should not be simple separate objects with their own lifecycle – this is the way I’ve done it in my attempt at an OpenAL SFX engine. Of course, I could be wrong.
Update: I suppose the magic ‘assignment not done here’ is the key. The singleton code is taken from the Apple documentation, but somebody inserted an extra assignment. The sharedFoo method should look like this:
+ (MyGizmoClass*)sharedManager
{
    @synchronized(self) {
        if (sharedGizmoManager == nil) {
            [[self alloc] init]; // assignment not done here
        }
    }
    return sharedGizmoManager;
}
When you perform the extra assignment to self, you create the leak you are looking for.
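For context, in Apple's sample the assignment happens inside an allocWithZone: override, which is what the "assignment not done here" comment refers to. Roughly (reconstructed from memory of that documentation, so treat it as a sketch):
// Sketch of Apple's companion override: alloc funnels through here,
// and this is where the shared instance actually gets assigned.
+ (id)allocWithZone:(NSZone *)zone
{
    @synchronized(self) {
        if (sharedGizmoManager == nil) {
            sharedGizmoManager = [super allocWithZone:zone];
            return sharedGizmoManager; // assignment and return on first allocation
        }
    }
    return nil; // deny subsequent allocations
}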

iPhone Syncing a time sequence with music

I'm using AVAudioPlayer to play music in my iPhone app.
In a class that I wrote I have an array that contains random ascending integers. (2, 4, 9, 17, 18, 20,...)
These integers represent times in the song at which a certain event should occur. So if you take the above array, after 2 seconds of the song playing, some method should be called. After 4 seconds, another method should be called. And so on.
I have tried using a repeating NSTimer:
NSTimer *myTimer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(timerTick) userInfo:nil repeats:YES];
Every time it fires, it checks whether the player's current time matches the array element at the current index:
- (void) timerTick {
    if ([[myArray objectAtIndex:currentIndex] intValue] == (int)(player.currentTime)) {
        // here the event method is called
        currentIndex++;
    }
}
This code actually works, but only for some time. After a while, myTimer and the timer that controls the music player drift out of sync. Then an element of myArray is missed, and an infinite loop starts. I don't know exactly why they get out of sync, but I think it could be because the timer and the player aren't started at exactly the same time, or maybe because of short performance lags.
I think I have to approach this in a totally different way. Is key-value observing a way to do this? I could add my class as an observer of the player object, so that it gets notified when the player.currentTime value changes. But that would cause a LOT of notifications to be sent, and I think it would be really bad for performance.
Any help much appreciated!
Ok here is my solution: I found an open source app that does almost the same thing my app should do, which helped me a lot.
I'm going to stick with the code I already have, with a little modification it should be precise enough for my purposes. Here it is:
currentIndex = 0;
// (myArray values are presumably now in hundredths of a second,
// to match the player.currentTime * 100 comparison below)
myTimer = [NSTimer scheduledTimerWithTimeInterval:0.01 target:self selector:@selector(timerTick) userInfo:nil repeats:YES];

- (void) timerTick {
    if (timerRunning) {
        if ([[myArray objectAtIndex:currentIndex] intValue] <= (int)(player.currentTime * 100)) {
            // some event code
            currentIndex++;
        }
    }
}
The important change is from == to <= in the if-condition. When the timer gets out of sync and misses an element of myArray, the error is now corrected within the next hundredth of a second. That's good enough.
Thanks for your help!
It could be that the timers are reasonably in sync, but your code just takes too long to execute (i.e., longer than 1 second).
Couldn't you just use the timer of the music player and spawn a thread each time an event should occur? This way the timer stays uninterrupted, and your thread will do what it needs to do (say, the heavy stuff).
If you really need two timers, I guess you could create a background thread that keeps those two timers in sync, but I think you're asking for trouble there.
Real-world synchronization with music is very hard, because users can notice mis-syncs of just a tenth of a second or less. You might find that AVAudioPlayer is too simple for what you need. You might have to control the rate the music plays using Audio Queue Services, so that you can sync the music to the code instead of the other way around. You could see the time to fire your code coming up and then start the methods before the music actually plays. Done skillfully, this would sync the music most of the time.

What is a better way to create a game loop on the iPhone other than using NSTimer?

I am programming a game on the iPhone. I am currently using NSTimer to trigger my game update/render. The problem is that (after profiling) I appear to lose a lot of time between updates/renders, and this seems mostly to do with the time interval that I plug into NSTimer.
So my question is what is the best alternative to using NSTimer?
One alternative per answer please.
You can get better performance with threads; try something like this:
- (void) gameLoop
{
    while (running)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        [self renderFrame];
        [pool release];
    }
}

- (void) startLoop
{
    running = YES;
#ifdef THREADED_ANIMATION
    [NSThread detachNewThreadSelector:@selector(gameLoop)
        toTarget:self withObject:nil];
#else
    timer = [NSTimer scheduledTimerWithTimeInterval:1.0f/60
        target:self selector:@selector(renderFrame) userInfo:nil repeats:YES];
#endif
}

- (void) stopLoop
{
    [timer invalidate];
    running = NO;
}
In the renderFrame method you prepare the framebuffer, draw the frame, and present the framebuffer on screen. (P.S. There is a great article on various types of game loops and their pros and cons.)
I don't know about the iPhone in particular, but I may still be able to help:
Instead of simply plugging in a fixed delay at the end of the loop, use the following:
Determine a refresh interval that you would be happy with and that is larger than a single pass through your main loop.
At the start of the loop, take a current timestamp of whatever resolution you have available and store it.
At the end of the loop, take another timestamp, and determine the elapsed time since the last timestamp (initialize this before the loop).
Sleep/delay for the difference between your ideal frame time and the time already elapsed for this frame.
At the next frame, you can even try to compensate for inaccuracies in the sleep interval by comparing against the timestamp at the start of the previous loop. Store the difference and add/subtract it from the sleep interval at the end of this loop (sleep/delay can go too long OR too short).
You might want an alert mechanism that lets you know if your timing is too tight (i.e., if your sleep time after all the compensating is less than 0, which would mean you're taking more time to process than your frame rate allows). The effect will be that your game slows down. For extra points, if you detect this happening, you may want to simplify rendering for a while until you have enough spare capacity again. A sketch of this loop follows below.
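Here is a sketch of the loop described above (my own illustration of those steps; it assumes a 60 fps target, CFAbsoluteTimeGetCurrent for timestamps, usleep for the delay, and placeholder updateGame/renderFrame methods):
#include <unistd.h> // usleep

- (void)gameLoop
{
    const CFTimeInterval kFrameTime = 1.0 / 60.0; // ideal frame duration
    CFTimeInterval drift = 0.0;                   // timing error carried over

    while (running) {
        CFAbsoluteTime frameStart = CFAbsoluteTimeGetCurrent();

        [self updateGame];
        [self renderFrame];

        // Sleep for whatever is left of the frame, adjusted by last frame's error.
        CFTimeInterval elapsed = CFAbsoluteTimeGetCurrent() - frameStart;
        CFTimeInterval sleepTime = kFrameTime - elapsed - drift;
        if (sleepTime > 0)
            usleep((useconds_t)(sleepTime * 1e6));
        // else: over budget; the game slows down, so consider simpler rendering.

        // Measure how far this frame overshot (or undershot) its slot, and
        // carry that error into the next frame's sleep calculation.
        drift = (CFAbsoluteTimeGetCurrent() - frameStart) - kFrameTime;
    }
}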
Use CADisplayLink. You can see how in the OpenGL ES template project provided in Xcode: create a project from that template and take a look at the EAGLView class. The example is OpenGL-based, but you can use CADisplayLink for other kinds of games as well.
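A minimal sketch of the CADisplayLink setup, roughly what the template does (the startAnimation and update: method names are my placeholders):
// Sketch: drive the game loop from the display's refresh (iOS 3.1 and later).
- (void)startAnimation
{
    CADisplayLink *displayLink =
        [CADisplayLink displayLinkWithTarget:self selector:@selector(update:)];
    displayLink.frameInterval = 1; // fire once per display refresh, typically 60 Hz
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)update:(CADisplayLink *)link
{
    // link.timestamp holds the time of the last frame; use the delta between
    // calls to keep game speed independent of frame rate.
    [self renderFrame];
}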