One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
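Roughly, the setup looks like this (a simplified sketch; bpm and playClick: are placeholder names, not my exact code):
// Sketch of the timer setup described above; "bpm" and "playClick:" are assumed names.
NSTimeInterval interval = 60.0 / bpm;   // e.g. 150 bpm -> 0.4 s
self.clickTimer = [NSTimer scheduledTimerWithTimeInterval:interval
                                                   target:self
                                                 selector:@selector(playClick:)
                                                 userInfo:nil
                                                  repeats:YES];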
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed, but rather they follow a pattern of shorter and longer intervals. It seems as if iOS has a lower resolution for the timing of sounds and is rounding each sound event to the nearest available point, rounding up or down as needed to keep on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
context = alcCreateContext(device, NULL); // use the device to make a context
alcMakeContextCurrent(context); // set the context to the currently active one
}
// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, result);
// get the size of the file data
// (kAudioFilePropertyAudioDataByteCount is a UInt64 property, so read it into
// a UInt64 and narrow it afterwards instead of writing 8 bytes into a UInt32)
UInt64 fileDataSize = 0;
UInt32 propSize = sizeof(fileDataSize);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileDataSize);
if (result != 0) DLog(@"cannot find file size: %ld", result);
UInt32 fileSize = (UInt32)fileDataSize;
DLog(@"file size: %li", fileSize);
// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", result);
AudioFileClose(fileID);
alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;
// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
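A minimal sketch of that usage (clickPlayer is just an example property name, not from the original code):
// Prime the player once up front, then re-prime after each click, because
// AVAudioPlayer drops its prepared state when playback stops.
[self.clickPlayer prepareToPlay];
// ... when the timer fires:
[self.clickPlayer play];
// ... after the click has finished playing:
[self.clickPlayer prepareToPlay];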
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
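If you want to confirm what you actually got (the hardware may not honor the exact preferred value, and the session must already be initialized and active), you can read the current duration back; something like:
// Check what I/O buffer duration the hardware actually granted.
Float32 actualBufferDuration = 0;
UInt32 propSize = sizeof(actualBufferDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &propSize, &actualBufferDuration);
NSLog(@"IO buffer duration is now %f seconds", actualBufferDuration);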
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.
Related
Okay guys, I've read many things about the FFT stuff, but it seems to be a bit more complicated than building a tableView.
I am searching for a way to analyze the playing audio (from the iPod library) in three ranges (low, mid, high). I think an FFT would do the job, but I'm not sure whether I could instead filter the playing audio (low-pass, band-pass and high-pass) and analyze the peaks that way.
So if anyone knows the best way to do this (by best I mean the fastest in terms of CPU), please help me. There will be no front end, so I won't draw the FFT in a window (I guess the drawing eats a lot of the CPU).
Then I have no idea how I could analyze the audio. All the FFT sample code I found uses the mic, and I do not want to use the mic. I saw something about getting the audio file and exporting it to an uncompressed file, but I need live analysis.
I've had a look at aurioTouch2, but I don't get how I could change the input from the mic to the iPod Library.
I think, the part I'm searching for is here:
// Initialize our remote i/o unit
inputProc.inputProc = PerformThru;
inputProc.inputProcRefCon = self;
CFURLRef url = NULL;
try {
url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFStringRef([[NSBundle mainBundle] pathForResource:@"button_press" ofType:@"caf"]), kCFURLPOSIXPathStyle, false);
XThrowIfError(AudioServicesCreateSystemSoundID(url, &buttonPressSound), "couldn't create button tap alert sound");
CFRelease(url);
// Initialize and configure the audio session
XThrowIfError(AudioSessionInitialize(NULL, NULL, rioInterruptionListener, self), "couldn't initialize audio session");
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory), "couldn't set audio category");
XThrowIfError(AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self), "couldn't set property listener");
Float32 preferredBufferSize = .005;
XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize), "couldn't set i/o buffer duration");
UInt32 size = sizeof(hwSampleRate);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &hwSampleRate), "couldn't get hw sample rate");
XThrowIfError(AudioSessionSetActive(true), "couldn't set audio session active\n");
XThrowIfError(SetupRemoteIO(rioUnit, inputProc, thruFormat), "couldn't setup remote i/o unit");
unitHasBeenCreated = true;
drawFormat.SetAUCanonical(2, false);
drawFormat.mSampleRate = 44100;
(...)
But I'm quite new to all of these AudioUnits, so I can't figure out where an input is loaded. Then, the code mentioned above uses AVAudioSession. A little birdie told me this will be deprecated, so what is the alternative?
So, basically:
How can I get the currently playing audio in order to analyze it? Can I just use an MPMusicPlayerController and get the samples? Or do I have to build an entire AudioUnit chain that plays the library?
What is the fastest way (CPU) to analyze lows, mids and highs? Filtering? FFT? Something else?
Will I get in trouble with the copyright on purchased music? I ask because I tried to convert the playing file to PCM samples, and sometimes I get this error:
VTM_AViPodReader[7666:307] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetReader initWithAsset:error:] invalid parameter not satisfying: asset != ((void *)0)'
What is the "new" way to do an FFT if the whole AVAudioSession stuff won't work in the future?
You can't get the currently playing audio (security sandbox prevents this) on iOS, unless your app is the one playing the audio using certain select APIs (Audio Queue, RemoteIO, etc.)
3 bandpass filters (made with IIR biquads) will be faster than an FFT. But even a full FFT will use a very small percentage of CPU time.
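For illustration, a band-pass biquad is only a few multiplies and adds per sample. Here is a minimal plain-C sketch using the coefficients from the RBJ Audio EQ Cookbook (the struct and helper names are made up for this example, not from any particular library):
#include <math.h>

// Direct Form I biquad; coefficients normalized so a0 == 1.
typedef struct {
    float b0, b1, b2, a1, a2;   // coefficients
    float x1, x2, y1, y2;       // filter state
} Biquad;

// Configure a band-pass biquad (0 dB peak gain) for a given center frequency.
static void BiquadSetBandpass(Biquad *f, float centerHz, float Q, float sampleRate) {
    float w0 = 2.0f * (float)M_PI * centerHz / sampleRate;
    float alpha = sinf(w0) / (2.0f * Q);
    float a0 = 1.0f + alpha;
    f->b0 =  alpha / a0;
    f->b1 =  0.0f;
    f->b2 = -alpha / a0;
    f->a1 = -2.0f * cosf(w0) / a0;
    f->a2 = (1.0f - alpha) / a0;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0f;
}

// Process one sample through the filter.
static inline float BiquadProcess(Biquad *f, float x) {
    float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
            - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}
You would run three of these (low, mid, high center frequencies) over the same sample stream and track the peak or RMS of each output.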
An app can't convert or play protected music from the iTunes library in a form where samples can be captured.
The FFT is in the Accelerate framework, not in the audio session.
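If you do go the FFT route, the vDSP routines in Accelerate are the ones to look at. A rough sketch of a 1024-point real FFT, assuming a buffer named samples already filled with 1024 mono floats (windowing and error checks omitted):
#include <Accelerate/Accelerate.h>

const vDSP_Length log2n = 10;                 // 2^10 = 1024
const vDSP_Length n = 1 << log2n;
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, kFFTRadix2);

float realp[512], imagp[512];
DSPSplitComplex split = { realp, imagp };

// Pack the real samples into split-complex form and transform in place.
vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);
vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFT_FORWARD);

// Magnitude per bin; bin k covers roughly k * sampleRate / n Hz,
// so you can sum bins into low / mid / high bands from here.
float magnitudes[512];
vDSP_zvabs(&split, 1, magnitudes, 1, n / 2);

vDSP_destroy_fftsetup(fftSetup);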
Hello all,
I have a project where I need to interface with an A/V receiver via an X-Fi Sound Blaster card. The A/V receiver is connected to a 7.1 speaker system. I would like to know the start-to-finish way to access each of the 7.1 channels individually so that I can direct aircraft cockpit information in a simulator. I am using OpenAL and am writing this code in C. I have developed some code that I thought should do the trick, but I am getting audio bleed-through on the other six speakers. Below is a sample of some of the code I have already written. I hope that someone can help me here.
Thanks, Vincent.
{
ALuint NorthWestSource;
ALint PlayStatus;
switch (event)
{
case EVENT_COMMIT:
//Load user selected .wav file into the buffer that is initialized here, "InitBuf".
LoadDotWavFile();
//Generate a source, attach buffer to source, set source position, and play sound.
alGenSources(NumOfSources, &NorthWestSource);
ErrorCheck();
//Attach the buffer that contains the .wav file's data to the source.
alSourcei(NorthWestSource, AL_BUFFER, WavFileDataBuffer);
ErrorCheck();
//Set source's position, velocity, and orientation/direction.
alSourcefv(NorthWestSource, AL_POSITION, SourcePosition);
ErrorCheck();
alSourcefv(NorthWestSource, AL_VELOCITY, SourceVelocity);
ErrorCheck();
alSourcefv(NorthWestSource, AL_DIRECTION, SourceDirectionNorthWest);
ErrorCheck();
alSourcei(NorthWestSource, AL_SOURCE_RELATIVE, AL_TRUE);
ErrorCheck();
alSourcei(NorthWestSource, AL_CONE_INNER_ANGLE, 180);
ErrorCheck();
alSourcei(NorthWestSource, AL_CONE_OUTER_ANGLE, 270);
ErrorCheck();
SetCtrlVal(panelHandle, PANEL_SOURCEISSET, 1);
//Play the user selected file by playing the sources.
alSourcePlay(NorthWestSource);
ErrorCheck();
//Check that the .wav file has finished playing and if so clean things up.
do
{
alGetSourcei(NorthWestSource, AL_SOURCE_STATE, &PlayStatus);
if(PlayStatus != AL_PLAYING)
{
printf("File done playing. \n");
}//End do-while if statement
}
while(PlayStatus == AL_PLAYING);
//Clean things up more before exiting out of this audio projection.
alDeleteSources(NumOfSources, &NorthWestSource);
ErrorCheck();
alDeleteBuffers(NumOfBuffers, &WavFileDataBuffer);
ErrorCheck();
SetCtrlVal(panelHandle, PANEL_SOURCEISSET, 0);
//alDeleteBuffers(NumOfBuffers,
break;
}
return 0;
}
I am confronted with the same problem. I want to play a tone to either the left or the right ear only. The only way I have found so far is to produce a stereo buffer (a 7.1 buffer in your case) containing the sound, overwrite the other channel (the other 7 channels in your case) with zeros, and then play it back from a source in front of the listener.
This is my workaround, and I know it is clumsy, but I haven't found anything better if you want to stay in OpenAL and avoid programming against ALSA directly (on Linux) or Core Audio (on Mac).
To answer your question more directly: No, there does not seem to be a direct way of saying (as I had wished for): "Speaker #3 say 'Hello World'! All other speakers remain silent."
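For what it's worth, the buffer-zeroing step I mean is just masking an interleaved multichannel buffer; something like the plain-C sketch below (channel count and layout are assumptions, and on the OpenAL side you would need a multichannel buffer format such as the AL_FORMAT_71CHN16 enum from the multichannel-formats extension, if your implementation exposes it):
#include <stddef.h>

// Keep only one channel of an interleaved 16-bit PCM buffer, silencing the rest.
// "samples" holds frameCount frames, each with numChannels interleaved samples.
void IsolateChannel(short *samples, size_t frameCount,
                    int numChannels, int keepChannel)
{
    for (size_t frame = 0; frame < frameCount; frame++) {
        for (int ch = 0; ch < numChannels; ch++) {
            if (ch != keepChannel)
                samples[frame * numChannels + ch] = 0;
        }
    }
}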
Cheers,
farid
I have a program that uses a file player audio unit to play, pause and stop an audio file. The way I am accomplishing this is by initializing the file player audio unit to play the file at position zero, and then when the user presses the pause button, I stop the AUGraph, capture the current position, and use that position as the start position when the user presses the play button. Everything is working as it should, but every 3 or 4 times I hit pause and then play, the song starts playing a half to a full second BEFORE the point where I hit pause.
I can't figure out why this is happening. Do any of you have any thoughts? Here is a simplified version of my code.
//initialize AUGraph and File player Audio unit
...
...
...
//Start AUGraph
...
...
...
// pause playback
- (void) pauseAUGraph {
//first stop the AuGrpah
result = AUGraphStop (processingGraph);
// get current play head position
AudioTimeStamp ts;
UInt32 size = sizeof(ts);
result = AudioUnitGetProperty(filePlayerUnit,
kAudioUnitProperty_CurrentPlayTime, kAudioUnitScope_Global, 0, &ts,
&size);
//save our play head position for use later
//must add it to itself to take care of multiple presses of the pause button
sampleFrameSavedPosition = sampleFrameSavedPosition + ts.mSampleTime;
//this stops the file player unit from playing
AudioUnitReset(filePlayerUnit, kAudioUnitScope_Global, 0);
NSLog(@"AudioUnitReset - stopped file player from playing");
//all done
}
// Stop playback
- (void) stopAUGraph {
// lets set the play head to zero, so that when we restart, we restart at the beginning of the file.
sampleFrameSavedPosition = 0;
//OK, now that we've reset the playhead position, let's stop the AUGraph
result = AUGraphStop (processingGraph);
}
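For context, the resume path isn't shown above; it reschedules the file region starting at the saved frame, roughly like the sketch below (variable names such as audioFile and fileTotalFrames are assumptions, not my actual code):
// Rough sketch of the resume step: reschedule the file player region
// starting at the saved sample frame, then restart the graph.
ScheduledAudioFileRegion region;
memset(&region, 0, sizeof(region));
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;              // start at the head of output time
region.mAudioFile = audioFile;                  // assumed AudioFileID
region.mStartFrame = (SInt64)sampleFrameSavedPosition;
region.mFramesToPlay = (UInt32)(fileTotalFrames - region.mStartFrame);
region.mLoopCount = 0;
region.mCompletionProc = NULL;
region.mCompletionProcUserData = NULL;
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0, &region, sizeof(region));

AudioTimeStamp startTime;
memset(&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;                     // start as soon as possible
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));

AUGraphStart(processingGraph);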
Maybe you should use packet counts instead of timestamps, since you just want to pause and play the music, not display time information.
See BufferedAudioPlayer for an example of using this method.
It may be due to rounding problems in your code:
For example, if every time you hit the pause button the captured position were 0.125 seconds (0.5/4) before your actual pause point, you probably wouldn't notice a single occurrence. But after four presses, the accumulated gap is 0.125 seconds times 4, which is the half second you seem to be experiencing.
Thus, I would pay careful attention to the types you are using and make sure they don't round inappropriately. Try using a double for your sample times to alleviate that problem!
Hope this is clear and helpful! :)
I am using multiple instances of AVAudioPlayer to play multiple audio files simultaneously. I run a loop to start playing the audio files (prepareToPlay is called beforehand, and the loop only makes a call to the play method).
But invariably, one of the players does not play in sync. How can I ensure that all four players start playing audio simultaneously?
Thanks.
The Apple docs talk about how you can "Play multiple sounds simultaneously, one sound per audio player, with precise synchronization". Perhaps you need to call playAtTime: e.g. [myAudioPlayer playAtTime: myAudioPlayer.deviceCurrentTime + playbackDelay];
In fact, the Apple docs for playAtTime: contain the following code snippet:
NSTimeInterval shortStartDelay = 0.01; // seconds
NSTimeInterval now = player.deviceCurrentTime;
[player playAtTime: now + shortStartDelay];
[secondPlayer playAtTime: now + shortStartDelay];
They should play simultaneously (assuming you choose a large enough value for shortStartDelay -- not so soon that it happens before this thread returns or whatever).
Unfortunately, you can't. AVAudioPlayer doesn't provide any mechanism for fine-grained control of start time. The currentTime property sets the point in the file to read from, it doesn't guarantee when the AVAudioPlayer instance will start playing in system time, which is what you need to sync multiple audio streams.
When I need this behavior, I use the RemoteIO Audio Unit + the 3D Mixer Audio Unit + ExtAudioFile.
EDIT
Note that as of iOS 4, you can synchronize multiple AVAudioPlayer instances using playAtTime:
This code segment of mine allows you to do this as long as you don't have to do it instantly. You can pass in targetTime as a timestamp for when you want to hear the sounds. The trick is to make use of timestamps and the delay functionality of NSObject. It also exploits the fact that it takes far less time to change the volume of the player than it does to change the current time. It should be very nearly precise.
- (void) moveTrackPlayerTo:(double) timeInSong atTime:(double) targetTime {
[trackPlayer play];
trackPlayer.volume = 0;
double timeOrig = CFAbsoluteTimeGetCurrent();
double delay = targetTime - CFAbsoluteTimeGetCurrent();
[self performSelector:@selector(volumeTo:)
withObject:[NSNumber numberWithFloat:single.GLTrackVolume]
afterDelay:delay];
trackPlayer.currentTime = timeInSong - delay - (CFAbsoluteTimeGetCurrent() - timeOrig);
}
- (void) volumeTo:(NSNumber *) volNumb {
trackPlayer.volume = [volNumb floatValue];
}
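Hypothetical usage, for example to jump to 30 seconds into the track half a second from now:
[self moveTrackPlayerTo:30.0 atTime:CFAbsoluteTimeGetCurrent() + 0.5];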
Try setting the same currentTime property value for every AVAudioPlayer object.
I want to program a simple audio sequencer on the iPhone but I can't get accurate timing. Over the last few days I tried all possible audio techniques on the iPhone, from AudioServicesPlaySystemSound and AVAudioPlayer and OpenAL to AudioQueues.
In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and allows you to load sounds into multiple buffers and then play them whenever needed. Here is the basic code:
init:
int channelGroups[1];
channelGroups[0] = 8;
soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];
int i=0;
for(NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil])
{
[soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
i++;
}
[NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];
In the initialisation I create the sound engine, load some sounds to different buffers and then establish the sequencer loop with NSTimer.
audio loop:
- (void)drumLoop:(NSTimer *)timer
{
for(int track=0; track<4; track++)
{
unsigned char note=pattern[track][step];
if(note)
[soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
}
if(++step>=16)
step=0;
}
That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync.
As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source) - so the problem may be with NSTimer?
Now there are dozens of sound sequencer apps in the App Store and they have accurate timing. E.g. "iDrum" has a perfectly stable beat even at 180 bpm while zooming and drawing are going on. So there must be a solution.
Does anybody have any idea?
Thanks for any help in advance!
Best regards,
Walchy
Thanks for your answer. It brought me a step further but unfortunately not to the aim. Here is what I did:
nextBeat=[[NSDate alloc] initWithTimeIntervalSinceNow:0.1];
[NSThread detachNewThreadSelector:@selector(drumLoop:) toTarget:self withObject:nil];
In the initialisation I store the time for the next beat and create a new thread.
- (void)drumLoop:(id)info
{
[NSThread setThreadPriority:1.0];
while(1)
{
for(int track=0; track<4; track++)
{
unsigned char note=pattern[track][step];
if(note)
[soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
}
if(++step>=16)
step=0;
NSDate *newNextBeat=[[NSDate alloc] initWithTimeInterval:0.1 sinceDate:nextBeat];
[nextBeat release];
nextBeat=newNextBeat;
[NSThread sleepUntilDate:nextBeat];
}
}
In the sequence loop I set the thread priority as high as possible and go into an infinite loop. After playing the sounds I calculate the next absolute time for the next beat and send the thread to sleep until this time.
Again this works, and it is more stable than my attempts without NSThread, but it is still shaky if something else happens, especially GUI stuff.
Is there a way to get real-time responses with NSThread on the iPhone?
Best regards,
Walchy
NSTimer has absolutely no guarantees on when it fires. It schedules itself for a fire time on the runloop, and when the runloop gets around to timers, it sees if any of the timers are past-due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.
Step one here is that you need to move audio processing to its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.
Keep in mind that there are many things in Cocoa that are designed only to run on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread-safety. But generally, if there isn't a lot of communication between the threads (which there shouldn't be in most cases IMO), threads are pretty easy in Cocoa. Look at NSThread.
I'm doing something similar using RemoteIO output. I do not rely on NSTimer; I use the timestamp provided in the render callback to calculate all of my timing. I don't know how accurate the iPhone's clock rate is, but I'm sure it's pretty close to 44100 Hz, so I just calculate when I should be loading the next beat based on what the current sample number is.
An example project that uses RemoteIO can be found here; have a look at the render callback's inTimeStamp argument.
EDIT: An example of this approach working (and on the App Store) can be found here.
I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (one buffer for read, one for write which then swap) using the AudioFileServices API. The buffers are then processed and mixed in the AudioUnit thread. The AudioUnit thread signals the bgnd thread when it should start loading the next swing buffer. All the processing was in C and used the posix thread API. All the UI stuff was in ObjC.
IMO, the AudioUnit/AudioFileServices approach affords the greatest degree of flexibility and control.
Cheers,
Ben
You've had a few good answers here, but I thought I'd offer some code for a solution that worked for me. When I began researching this, I actually looked for how run loops in games work and found a nice solution that has been very performant for me using mach_absolute_time.
You can read a bit about what it does here, but the short of it is that it returns time with nanosecond precision. However, the number it returns isn't quite time; its units vary with the CPU you have, so you have to create a mach_timebase_info_data_t struct first and then use it to normalize the time.
// Gives a numerator and denominator that you can apply to mach_absolute_time to
// get the actual nanoseconds
mach_timebase_info_data_t info;
mach_timebase_info(&info);
uint64_t currentTime = mach_absolute_time();
currentTime *= info.numer;
currentTime /= info.denom;
And if you wanted it to tick every sixteenth of a second (16 ticks per second), you could do something like this:
uint64_t interval = (1000 * 1000 * 1000) / 16;
uint64_t nextTime = currentTime + interval;
At this point, currentTime would contain some number of nanoseconds, and you'd want it to tick every time interval nanoseconds passed, which we store in nextTime. You can then set up a while loop, something like this:
while (_running) {
if (currentTime >= nextTime) {
// Do some work, play the sound files or whatever you like
nextTime += interval;
}
currentTime = mach_absolute_time();
currentTime *= info.numer;
currentTime /= info.denom;
}
The mach_timebase_info stuff is a bit confusing, but once you get it in there, it works very well. It's been extremely performant for my apps. It's also worth noting that you won't want to run this on the main thread, so dishing it off to its own thread is wise. You could put all the above code in its own method called run, and start it with something like:
[NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];
All the code you see here is a simplification of a project I open-sourced, you can see it and run it yourself here, if that's of any help. Cheers.
Really the most precise way to approach timing is to count audio samples and do whatever you need to do when a certain number of samples has passed. Your output sample rate is the basis for all things related to sound anyway so this is the master clock.
You don't have to check on each sample, doing this every couple of msec will suffice.
One additional thing that may improve real-time responsiveness is setting the Audio Session's kAudioSessionProperty_PreferredHardwareIOBufferDuration to a few milliseconds (such as 0.005 seconds) before making your Audio Session active. This will cause RemoteIO to request shorter callback buffers more often (on a real-time thread). Don't take any significant time in these real-time audio callbacks, or you will kill the audio thread and all audio for your app.
Just counting shorter RemoteIO callback buffers is on the order of 10X more accurate and lower latency than using an NSTimer. And counting samples within an audio callback buffer for positioning the start of your sound mix will give you sub-millisecond relative timing.
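A bare-bones sketch of what sample counting in a RemoteIO render callback can look like (the state struct, the 16-bit mono output format, and the click-mixing call are assumptions, not a complete implementation):
#include <AudioUnit/AudioUnit.h>

// Count output samples and start a click whenever a beat boundary falls
// inside the current buffer.
typedef struct {
    Float64 sampleRate;        // e.g. 44100.0
    UInt64  samplesUntilBeat;  // samples left until the next click
    UInt64  samplesPerBeat;    // e.g. sampleRate * 60.0 / bpm
} SequencerState;

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    SequencerState *seq = (SequencerState *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;   // assumes 16-bit mono

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        if (seq->samplesUntilBeat == 0) {
            // A beat lands exactly on this sample frame: (re)start the click.
            // StartClickAtFrame(seq, frame);   // placeholder for your own mixing code
            seq->samplesUntilBeat = seq->samplesPerBeat;
        }
        seq->samplesUntilBeat--;
        out[frame] = 0;   // placeholder: mix your click samples in here
    }
    return noErr;
}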
Measuring the time elapsed for the "Do some work" part of the loop and subtracting this duration from the next wait interval greatly improves accuracy:
while (loop == YES)
{
timerInterval = adjustedTimerInterval ;
startTime = CFAbsoluteTimeGetCurrent() ;
if (delegate != nil)
{
[delegate timerFired] ; // do some work
}
endTime = CFAbsoluteTimeGetCurrent() ;
diffTime = endTime - startTime ; // measure how long the call took. This result has to be subtracted from the interval!
endTime = CFAbsoluteTimeGetCurrent() + timerInterval-diffTime ;
while (CFAbsoluteTimeGetCurrent() < endTime)
{
// wait until the waiting interval has elapsed
}
}
If constructing your sequence ahead of time is not a limitation, you can get precise timing using an AVMutableComposition. This would play 4 sounds evenly spaced over 1 second:
// setup your composition
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
for (NSInteger i = 0; i < 4; i++)
{
AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", i] withExtension:@"caf"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
CMTimeRange timeRange = [assetTrack timeRange];
Float64 t = i * 1.0;
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMake(t, 4) error:&error];
NSAssert(success && !error, @"error creating composition");
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
// later when you want to play
[self.avPlayer seekToTime:kCMTimeZero];
[self.avPlayer play];
Original credit for this solution: http://forum.theamazingaudioengine.com/discussion/638#Item_5
And more detail: precise timing with AVMutableComposition
I thought a better approach to the time management would be to have a bpm setting (120, for example) and go off of that instead. Measurements of minutes and seconds are nearly useless when writing music or music applications.
If you look at any sequencing app, they all go by beats instead of time. On the opposite side of things, if you look at a waveform editor, it uses minutes and seconds.
I'm not sure of the best way to implement this code-wise by any means, but I think this approach will save you a lot of headaches down the road.
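For example, a sketch of the kind of conversion that would sit at the core of that approach (plain C, names are mine):
// Convert a musical position (in beats) to seconds, given a tempo in bpm.
double SecondsPerBeat(double bpm)               { return 60.0 / bpm; }
double BeatsToSeconds(double beats, double bpm) { return beats * SecondsPerBeat(bpm); }

// e.g. at 120 bpm and 44100 Hz, one 16th note (0.25 beat) is 0.125 s, or 5512.5 sample frames:
// double frames = BeatsToSeconds(0.25, 120.0) * 44100.0;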