iPhone - Is having many AVAudioPlayer instances fine?

For my small game, I'd like to play sound effects for various scenarios.
Mostly they will be user-interaction related.
I may need to play multiple sounds at one time.
I'm planning to allocate an AVAudioPlayer for each sound.
I wonder whether a view controller having about 10-20 AVAudioPlayers is fine.
(The sound data itself is rather small, less than 100 KB in AAC.)
I just feel that declaring 10-20 AVAudioPlayer instances in a class seems weird.
Is there a better way of doing it, or am I just over-thinking it?

I think OpenAL is a better option in such situations. Don't worry if you don't know it. There is a great video tutorial here (with source code): http://www.71squared.com/2009/05/iphone-game-programming-tutorial-9-sound-manager/
To find more you can visit:
http://benbritten.com/2008/11/06/openal-sound-on-the-iphone/comment-page-1/

Yes, it's fine to have many AVAudioPlayer instances. I don't know exactly where the limit is, but it's definitely more than a dozen.
Here are some gotchas:
AVAudioPlayer doesn't do level mixing, so if your sounds are high volume, they may end up constructively interfering with each other and causing waveform distortion. I set a maximum volume of 0.8 to try to work around this, but it's not reliable.
If you try to start them all at the same time, using the play method may end up starting them out of sync. Instead, figure out a time soon enough that the user won't notice, but far enough away that it gives your code time to exit and AVFoundation time to get ready. Then use [player playAtTime:soon].
Here's some code that's working for me now. YMMV:
- (void)play
{
    BOOL success;
    AVAudioPlayer *player = self.player;
    player.numberOfLoops = -1;
    player.currentTime = 0;
    player.volume = _volume;
    // NSLog(@"deviceCurrentTime=%f", player.deviceCurrentTime);
    static double soon = 0;
    if (soon < player.deviceCurrentTime) {
        soon = player.deviceCurrentTime + 0.5; // why so flakey???
    }
    success = [player playAtTime:soon]; // too flakey for now
    if (!success) {
        NSLog(@"player %@ FAILED", player);
    } else {
        NSLog(@"player %@ %@ playing at: %f", player, [[player.url relativePath] lastPathComponent], soon);
    }
}
(I'm not sure if my "soon" var is thread-safe, and you should adjust the slop until it works for you... 0.1 was too fast for me at some point or other so I bumped it up to 0.5.)

Why are my audio sounds not playing on time?

One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed; rather, they follow a pattern of shorter and longer intervals. It seems as if iOS has a lower resolution for the timing of sounds and is rounding each sound event to the nearest available point, rounding up or down as needed to stay on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}

// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, result);

// get the size of the file data
UInt32 fileSize = 0;
UInt32 propSize = sizeof(UInt64);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileSize);
if (result != 0) DLog(@"cannot find file size: %ld", result);
DLog(@"file size: %li", fileSize);

// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", result);
AudioFileClose(fileID);

alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;

// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
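As a minimal sketch of that advice (my illustration, assuming you keep the player around and set yourself as its delegate), you can re-prime the player every time it finishes, so the next play starts with low latency:
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    // playback has stopped and the player has uninitialized itself;
    // prepare it again so the next play call starts immediately
    [player prepareToPlay];
}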
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
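For what it's worth, AudioSessionSetProperty was later deprecated; on iOS 6 and up the same preference can be requested through AVAudioSession. A minimal sketch (the 0.005-second value is the same assumption as above):
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// ask for ~5 ms I/O buffers before activating the session
[session setPreferredIOBufferDuration:0.005 error:&error];
[session setActive:YES error:&error];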
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.

AVAudioPlayer - Metering - Want to build a waveform (graph)

I need to build a visual graph that represents voice levels (dB) in a recorded file. I tried to do it this way:
NSError *error = nil;
AVAudioPlayer *meterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:self.recording.fileName] error:&error];
if (error) {
    _lcl_logger(lcl_cEditRecording, lcl_vError, @"Cannot initialize AVAudioPlayer with file %@ due to: %@ (%@)", self.recording.fileName, error, error.userInfo);
} else {
    [meterPlayer prepareToPlay];
    meterPlayer.meteringEnabled = YES;
    for (NSTimeInterval i = 0; i <= meterPlayer.duration; ++i) {
        meterPlayer.currentTime = i;
        [meterPlayer updateMeters];
        float averagePower = [meterPlayer averagePowerForChannel:0];
        _lcl_logger(lcl_cEditRecording, lcl_vTrace, @"Second: %f, Level: %f dB", i, averagePower);
    }
}
[meterPlayer release];
It would be cool if it worked, but it didn't. I always get -160 dB. Any other ideas on how to implement this?
Update: here is what I finally got:
(waveform image: http://img22.imageshack.us/img22/5778/waveform.png)
I just want to help others who come to this same question after spending a lot of time searching. To save you time, I'm putting up my answer. I dislike how some people here treat this as some kind of secret...
After searching through the articles about ExtAudioService, Audio Queue and AVFoundation, I realised that I should use AVFoundation. The reason is simple: it is the most recent framework and it is Objective-C rather than C++ in style.
So the steps to do it are not complicated:
Create an AVAsset from the audio file
Create an AVAssetReader from the AVAsset
Get the AVAssetTrack from the AVAsset
Create an AVAssetReaderTrackOutput from the AVAssetTrack
Add the AVAssetReaderTrackOutput to the AVAssetReader and start reading out the audio data
From the AVAssetReaderTrackOutput you can copyNextSampleBuffer one by one (loop until all the data is read out).
Each copyNextSampleBuffer gives you a CMSampleBufferRef, from which you can get an AudioBufferList via CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. An AudioBufferList is an array of AudioBuffers, and an AudioBuffer is a chunk of audio data stored in its mData member.
You can implement the above with ExtAudioService as well, but I think the AVFoundation approach is easier.
So the next question: what to do with the mData? Note that when you create the AVAssetReaderTrackOutput, you can specify its output format, so we specify LPCM as the output.
Then the mData you finally get is actually an array of float amplitude values.
Easy, right? Though it took me a lot of time to organise this from pieces here and there. (See the sketch below.)
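Here is a minimal sketch of the reading loop described above (my own illustration, not production code; fileURL is assumed to point at your audio file, and error handling is abbreviated):
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
AVAssetTrack *songTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
// ask for uncompressed float LPCM so mData holds amplitude values directly
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithBool:YES], AVLinearPCMIsFloatKey,
    [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    nil];
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:songTrack outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer]))
{
    CMBlockBufferRef blockBuffer = NULL;
    AudioBufferList audioBufferList;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList),
        NULL, NULL, 0, &blockBuffer);
    for (UInt32 i = 0; i < audioBufferList.mNumberBuffers; i++)
    {
        float *samples = (float *)audioBufferList.mBuffers[i].mData;
        UInt32 sampleCount = audioBufferList.mBuffers[i].mDataByteSize / sizeof(float);
        // ... scan samples[0..sampleCount-1] here to build the waveform ...
    }
    CFRelease(blockBuffer);
    CFRelease(sampleBuffer);
}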
Two useful resources to share:
Read this article to learn the basic terms and concepts: https://www.mikeash.com/pyblog/friday-qa-2012-10-12-obtaining-and-interpreting-audio-data.html
Sample code: https://github.com/iluvcapra/JHWaveform
You can copy most of the above-mentioned code from this sample directly and use it for your own purposes.
I haven't used it myself, but Apple's avTouch iPhone sample has bar graphs powered by AVAudioPlayer, and you can easily check to see how they do it.
I don't think you can use AVAudioPlayer based on your constraints. Even if you could get it to "start" without actually playing the sound file, it would only help you build a graph as fast as the audio file would stream. What you're talking about is doing static analysis of the sound, which will require a much different approach. You'll need to read in the file yourself and parse it manually. I don't think there's a quick solution using anything in the SDK.
Ok guys, it seems I'm going to answer my own question again: http://www.supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/ Not a lot of specifics, but at least you will know which Apple docs to read.

iPhone - why does AVAudioPlayer currentTime return a negative value?

Why does AVAudioPlayer's currentTime property return a negative value? The audio file is playing (I check this before getting currentTime), but the call to currentTime returns a negative value.
Any ideas? Thanks
if (thePlayer != nil && [thePlayer isPlaying]) {
    double playerTime = [thePlayer currentTime];
    NSLog(@"Player Time: %f", playerTime);
}
Output
Player Time: -0.019683
Are you testing this on the simulator? There are several bugs with AVAudioPlayer on the simulator. One is that the currentTime can be a very large positive or negative number. Even if you set the currentTime to a particular number, it will still often show something different. As far as I know this is only an issue on the simulator and not when running on a device.
Here is the code I use to set the currentTime property of an AVAudioPlayer instance:
- (void)safeSetCurrentTime:(NSTimeInterval)newTime {
    self._player.currentTime = newTime;
    if (self._player.currentTime != newTime)
    {
        // code falls through to here all the time;
        // the second attempt _usually_ works.
        [self prepareAudioForPlayback];
        self._player.currentTime = newTime;
        //NSLog(@"Set time failed");
    }
}
I believe this issue is fixed in the iOS 4 beta 2 SDK release, so you shouldn't see it on the iPhone. See here. However, I think we're stuck with the problem on the iPad until iOS 4 is available on that device.
Anyone know of a workaround? Any way to predict how the current time will be incorrectly reported, so a correction factor can be applied? What I'm seeing is that the current time is reported a few seconds behind the actual playback position (which can make it negative if you're near the start of the audio), and it tracks along with the correct position. So perhaps there's some offset that can be applied whenever the app runs on an earlier iOS version?
I found a workaround that fixes the problem on the iPad, which is workable until iOS4 is released for the iPad and fixes the issue.
Keep hold of the audio buffer, and when you're about to resume playback after pausing/stopping, reload the audio buffer into the AVAudioPlayer instance, set currentTime to where you want playback to resume from, and then resume playback.
Works perfectly for me, and reloading the audio buffer seems very fast.
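A minimal sketch of that workaround (my illustration, not the poster's exact code; audioData is a hypothetical NSData ivar holding the original audio buffer):
- (void)resumePlaybackAtTime:(NSTimeInterval)resumeTime
{
    NSError *error = nil;
    // reload the buffer into a fresh player, then seek and play
    self.player = [[[AVAudioPlayer alloc] initWithData:self.audioData error:&error] autorelease];
    [self.player prepareToPlay];
    self.player.currentTime = resumeTime; // reported correctly on the freshly loaded player
    [self.player play];
}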

iPhone - start multiple instances of AVAudioPlayer simultaneously

I am using multiple instances of AVAudioPlayer to play multiple audio files simultaneously. I run a loop to start playing the audio files (prepareToPlay is called beforehand, and the loop only calls the play method).
But invariably, one of the players does not play in sync. How can I ensure that all four players start playing audio simultaneously?
Thanks.
The Apple docs talk about how you can "Play multiple sounds simultaneously, one sound per audio player, with precise synchronization". Perhaps you need to call playAtTime: e.g. [myAudioPlayer playAtTime: myAudioPlayer.deviceCurrentTime + playbackDelay];
In fact, the Apple docs for playAtTime: contain the following code snippet:
NSTimeInterval shortStartDelay = 0.01; // seconds
NSTimeInterval now = player.deviceCurrentTime;
[player playAtTime: now + shortStartDelay];
[secondPlayer playAtTime: now + shortStartDelay];
They should play simultaneously (assuming you choose a large enough value for shortStartDelay -- not so soon that it happens before this thread returns or whatever).
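For the four-player case in the question, a sketch that generalizes the snippet above (players is a hypothetical NSArray of AVAudioPlayer instances that have already had prepareToPlay called):
NSTimeInterval shortStartDelay = 0.05; // enough slack to issue all the play calls
NSTimeInterval now = [(AVAudioPlayer *)[players objectAtIndex:0] deviceCurrentTime];
for (AVAudioPlayer *p in players) {
    [p playAtTime:now + shortStartDelay]; // all players share the same start deadline
}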
Unfortunately, you can't. AVAudioPlayer doesn't provide any mechanism for fine-grained control of start time. The currentTime property sets the point in the file to read from, it doesn't guarantee when the AVAudioPlayer instance will start playing in system time, which is what you need to sync multiple audio streams.
When I need this behavior, I use the RemoteIO Audio Unit + the 3D Mixer Audio Unit + ExtAudioFile.
EDIT
Note that as of iOS 4, you can synchronize multiple AVAudioPlayer instances using playAtTime:
This code segment of mine allows you to do this, as long as you don't have to do it instantly. You can pass in targetTime as a timestamp for when you want to hear the sounds. The trick is to make use of timestamps and the delay functionality of NSObject. It also exploits the fact that changing the player's volume takes far less time than changing its current time. It should work almost perfectly.
- (void)moveTrackPlayerTo:(double)timeInSong atTime:(double)targetTime
{
    [trackPlayer play];
    trackPlayer.volume = 0;
    double timeOrig = CFAbsoluteTimeGetCurrent();
    double delay = targetTime - CFAbsoluteTimeGetCurrent();
    [self performSelector:@selector(volumeTo:)
               withObject:[NSNumber numberWithFloat:single.GLTrackVolume]
               afterDelay:delay];
    trackPlayer.currentTime = timeInSong - delay - (CFAbsoluteTimeGetCurrent() - timeOrig);
}

- (void)volumeTo:(NSNumber *)volNumb
{
    trackPlayer.volume = [volNumb floatValue];
}
Try setting the same currentTime property value on every AVAudioPlayer object.

How to program a real-time accurate audio sequencer on the iphone?

I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I tried every possible audio technique on the iPhone, from AudioServicesPlaySystemSound and AVAudioPlayer and OpenAL to Audio Queues.
In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and lets you load sounds into multiple buffers and then play them whenever needed. Here is the basic code:
init:
int channelGroups[1];
channelGroups[0] = 8;
soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

int i = 0;
for (NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil])
{
    [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
    i++;
}

[NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];
In the initialisation I create the sound engine, load some sounds to different buffers and then establish the sequencer loop with NSTimer.
audio loop:
- (void)drumLoop:(NSTimer *)timer
{
    for (int track = 0; track < 4; track++)
    {
        unsigned char note = pattern[track][step];
        if (note)
            [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
    }
    if (++step >= 16)
        step = 0;
}
That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync.
As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source) - so the problem may be with NSTimer?
Now, there are dozens of sound sequencer apps in the App Store and they have accurate timing. E.g. "idrum" has a perfectly stable beat even at 180 bpm while zooming and drawing are going on. So there must be a solution.
Does anybody have any idea?
Thanks for any help in advance!
Best regards,
Walchy
Thanks for your answer. It brought me a step closer, but unfortunately not to the goal. Here is what I did:
nextBeat = [[NSDate alloc] initWithTimeIntervalSinceNow:0.1];
[NSThread detachNewThreadSelector:@selector(drumLoop:) toTarget:self withObject:nil];
In the initialisation I store the time for the next beat and create a new thread.
- (void)drumLoop:(id)info
{
    [NSThread setThreadPriority:1.0];
    while (1)
    {
        for (int track = 0; track < 4; track++)
        {
            unsigned char note = pattern[track][step];
            if (note)
                [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
        }
        if (++step >= 16)
            step = 0;
        NSDate *newNextBeat = [[NSDate alloc] initWithTimeInterval:0.1 sinceDate:nextBeat];
        [nextBeat release];
        nextBeat = newNextBeat;
        [NSThread sleepUntilDate:nextBeat];
    }
}
In the sequence loop I set the thread priority as high as possible and go into an infinite loop. After playing the sounds I calculate the next absolute time for the next beat and send the thread to sleep until this time.
Again this works and it works more stable than my tries without NSThread but it is still shaky if something else happens, especially GUI stuff.
Is there a way to get real-time responses with NSThread on the iphone?
Best regards,
Walchy
NSTimer has absolutely no guarantees on when it fires. It schedules itself for a fire time on the runloop, and when the runloop gets around to timers, it sees if any of the timers are past-due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.
Step one here is that you need to move audio processing to its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.
Keep in mind that there are many things in Cocoa that are designed only to run on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread-safety. But generally, if there isn't a lot of communication between the threads (which there shouldn't be in most cases IMO), threads are pretty easy in Cocoa. Look at NSThread.
I'm doing something similar using RemoteIO output. I do not rely on NSTimer; I use the timestamp provided in the render callback to calculate all of my timing. I don't know how accurate the iPhone's clock rate is, but I'm sure it's pretty close to 44100 Hz, so I just calculate when I should be loading the next beat based on the current sample number.
An example project that uses RemoteIO can be found here; have a look at the render callback's inTimeStamp argument.
EDIT: An example of this approach working (and on the App Store) can be found here.
I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (one buffer for reading, one for writing, which then swap) using the AudioFileServices API. The buffers are then processed and mixed in the AudioUnit thread. The AudioUnit thread signals the background thread when it should start loading the next swing buffer. All the processing was in C and used the POSIX thread API. All the UI stuff was in ObjC.
IMO, the AudioUnit/AudioFileServices approach affords the greatest degree of flexibility and control.
Cheers,
Ben
You've had a few good answers here, but I thought I'd offer some code for a solution that worked for me. When I began researching this, I actually looked for how run loops in games work and found a nice solution that has been very performant for me using mach_absolute_time.
You can read a bit about what it does here, but the short of it is that it returns time with nanosecond precision. However, the number it returns isn't quite time; it varies with the CPU you have, so you have to create a mach_timebase_info_data_t struct first and then use it to normalize the time.
// Gives a numerator and denominator that you can apply to mach_absolute_time to
// get the actual nanoseconds
mach_timebase_info_data_t info;
mach_timebase_info(&info);
uint64_t currentTime = mach_absolute_time();
currentTime *= info.numer;
currentTime /= info.denom;
And if we wanted it to tick every 16th note, you could do something like this:
uint64_t interval = (1000 * 1000 * 1000) / 16;
uint64_t nextTime = currentTime + interval;
At this point, currentTime would contain some number of nanoseconds, and you'd want it to tick every time interval nanoseconds passed, which we store in nextTime. You can then set up a while loop, something like this:
while (_running) {
    if (currentTime >= nextTime) {
        // Do some work, play the sound files or whatever you like
        nextTime += interval;
    }
    currentTime = mach_absolute_time();
    currentTime *= info.numer;
    currentTime /= info.denom;
}
The mach_timebase_info stuff is a bit confusing, but once you get it in there, it works very well. It's been extremely performant for my apps. It's also worth noting that you won't want to run this on the main thread, so dishing it off to its own thread is wise. You could put all the above code in its own method called run, and start it with something like:
[NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];
All the code you see here is a simplification of a project I open-sourced, you can see it and run it yourself here, if that's of any help. Cheers.
Really, the most precise way to approach timing is to count audio samples and do whatever you need to do when a certain number of samples has passed. Your output sample rate is the basis for all things related to sound anyway, so this is the master clock.
You don't have to check on every sample; doing this every couple of milliseconds will suffice.
One additional thing that may improve real-time responsiveness is setting the Audio Session's kAudioSessionProperty_PreferredHardwareIOBufferDuration to a few milliseconds (such as 0.005 seconds) before making your Audio Session active. This will cause RemoteIO to request shorter callback buffers more often (on a real-time thread). Don't take any significant time in these real-time audio callbacks, or you will kill the audio thread and all audio for your app.
Just counting shorter RemoteIO callback buffers is on the order of 10X more accurate and lower latency than using an NSTimer. And counting samples within an audio callback buffer for positioning the start of your sound mix will give you sub-millisecond relative timing.
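A minimal sketch of sample counting inside a RemoteIO render callback (my illustration; SequencerState, samplesPerBeat and samplesUntilNextBeat are hypothetical names, not from the answer above):
typedef struct {
    UInt32 samplesPerBeat;       // e.g. sampleRate * 60 / bpm
    UInt32 samplesUntilNextBeat;
} SequencerState;

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    SequencerState *seq = (SequencerState *)inRefCon;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        if (seq->samplesUntilNextBeat == 0) {
            // the next beat lands exactly on this frame offset:
            // start mixing its sound into the output here
            seq->samplesUntilNextBeat = seq->samplesPerBeat;
        }
        seq->samplesUntilNextBeat--;
        // ... write/mix output samples into ioData->mBuffers here ...
    }
    return noErr;
}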
Measuring the time elapsed for the "Do some work" part in the loop and subtracting this duration from nextTime greatly improves accuracy:
while (loop == YES)
{
    timerInterval = adjustedTimerInterval;
    startTime = CFAbsoluteTimeGetCurrent();
    if (delegate != nil)
    {
        [delegate timerFired]; // do some work
    }
    endTime = CFAbsoluteTimeGetCurrent();
    diffTime = endTime - startTime; // measure how long the call took; this result has to be subtracted from the interval!
    endTime = CFAbsoluteTimeGetCurrent() + timerInterval - diffTime;
    while (CFAbsoluteTimeGetCurrent() < endTime)
    {
        // busy-wait until the adjusted interval has elapsed
    }
}
If constructing your sequence ahead of time is not a limitation, you can get precise timing using an AVMutableComposition. This would play 4 sounds evenly spaced over 1 second:
// set up your composition
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
for (NSInteger i = 0; i < 4; i++)
{
    AVMutableCompositionTrack *track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", i] withExtension:@"caf"];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
    AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
    CMTimeRange timeRange = [assetTrack timeRange];
    Float64 t = i * 1.0;
    NSError *error;
    BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMake(t, 4) error:&error];
    NSAssert(success && !error, @"error creating composition");
}

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];

// later when you want to play
[self.avPlayer seekToTime:kCMTimeZero];
[self.avPlayer play];
Original credit for this solution: http://forum.theamazingaudioengine.com/discussion/638#Item_5
And more detail: precise timing with AVMutableComposition
I thought a better approach to time management would be to have a bpm setting (120, for example) and go off of that instead. Measurements in minutes and seconds are nearly useless when writing or making music applications.
If you look at any sequencing app, they all go by beats instead of time. On the opposite side of things, if you look at a waveform editor, it uses minutes and seconds.
I'm not sure of the best way to implement this code-wise, but I think this approach will save you a lot of headaches down the road. (A small sketch of the conversion is below.)
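As a minimal sketch of the conversion (bpm and ticksPerBeat are my assumptions, not from the answer above), the tempo maps to a tick interval that you can feed into whatever timing loop you use, such as the mach_absolute_time loop earlier:
double bpm = 120.0;
int ticksPerBeat = 4; // 16th notes, assuming the bpm counts quarter notes
double secondsPerTick = 60.0 / (bpm * ticksPerBeat);
uint64_t nanosecondsPerTick = (uint64_t)(secondsPerTick * 1e9); // usable as the "interval" above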