What is the time unit for the Web Audio API's noteOn()?

I am trying to use the Web Audio API's noteOn(time) method to play a sound, but I am not sure what the time unit is.
Is it in milliseconds or in seconds?

It's seconds.
The time is relative to the audio context's currentTime, which can be accessed like so:
var context = new AudioContext(); // use webkitAudioContext on older WebKit browsers
// ...
note.noteOn(context.currentTime); // will play immediately
// ...
note.noteOn(context.currentTime + 1); // will play in one second

Related

Get the duration of .m3u audio

I want to play audio from the URL
https://wortcast01.wortfm.org/appfiles/wort_210715_080006buzzthu.m3u
Its body lists the tracks:
https://wortcast01.wortfm.org/pitch/preroll-buzzthu.mp3
https://wortcast01.wortfm.org/mp3/wort_210715_080006buzzthu.mp3
When I set https://wortcast01.wortfm.org/appfiles/wort_210715_080006buzzthu.m3u as the AVPlayer's item, the duration is NaN, even though each track in the list has a duration.
Do you have an idea how to extract the duration via AVPlayer?
This returns NaN:
var itemDuration: Double? {
    return currentItem?.duration.seconds
}
AVFoundation (AVPlayer, AVAsset, ...) automatically treats any .m3u as a live broadcast (my assumption: it looks like Apple applies its M3U8/HLS handling to plain M3U files), and a live stream's duration is indefinite, which reads as NaN seconds.
Solution:
Create additional functionality that loads the m3u (or pls) content, builds an internal playlist, and plays those internal items with AVPlayer.
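For example, a minimal Swift sketch of that workaround (the function name, the URL filtering, and the use of AVQueuePlayer are illustrative assumptions; error handling is simplified):

import AVFoundation

// Load an .m3u playlist, extract the track URLs, and queue them
// so each item reports its own duration once its asset loads.
func playM3U(at url: URL, completion: @escaping (AVQueuePlayer?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data, let body = String(data: data, encoding: .utf8) else {
            completion(nil)
            return
        }
        // Keep only the lines that are track URLs (comment lines start with "#")
        let items = body
            .components(separatedBy: .newlines)
            .map { $0.trimmingCharacters(in: .whitespaces) }
            .filter { $0.hasPrefix("http") }
            .compactMap { URL(string: $0) }
            .map { AVPlayerItem(url: $0) }
        DispatchQueue.main.async {
            completion(AVQueuePlayer(items: items))
        }
    }.resume()
}

With this in place, the currentItem?.duration.seconds accessor above returns a real duration for whichever internal track is playing.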

Synchronizing Playback of Multiple Audio Files in AudioKit

I am developing a small audio sequencer application using AudioKit. I only need to play back four channels of audio, but they must be synchronized down to the sample level. In a test with just two audio files, I can hear that they are not synchronized. The difference is only a few samples, but even a one-sample discrepancy would be a problem. I am currently using multiple AKClipPlayer objects routed to an AKMixer object, and I call play() on each in a basic for loop like this:
private var clipPlayers: [AKClipPlayer] = []

func play() {
    for player in clipPlayers {
        player.play()
    }
}
Is sample-accurate playback timing of multiple audio files possible using AudioKit?
Yes, you need to schedule playback to start in the future with play(at:).
// This can take longer than expected, so do this before choosing a future time
clipPlayers.forEach { $0.prepare(withFrameCount: 10_000) }
let nearFuture = AVAudioTime.now() + 0.2
clipPlayers.forEach { $0.play(at: nearFuture) }
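If you are not using AudioKit's convenience operator for adding seconds to an AVAudioTime, a rough equivalent with plain AVFoundation looks like this (a sketch, assuming 0.2 seconds is enough lead time for your players to prepare):

import AVFoundation

// Convert the seconds offset to host ticks and build an absolute start time
let delayTicks = AVAudioTime.hostTime(forSeconds: 0.2)
let nearFuture = AVAudioTime(hostTime: mach_absolute_time() + delayTicks)
clipPlayers.forEach { $0.play(at: nearFuture) }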

In iOS Core Audio, how do you find the true current play head position of a file player audio unit?

I have a program that uses a file player audio unit to play, pause, and stop an audio file. I accomplish this by initializing the file player audio unit to play the file at position zero; then, when the user presses the pause button, I stop the AUGraph, capture the current position, and use that position as the start position when the user presses play. Everything works as it should, but every three or four times I pause and then play, the song starts playing a half to a full second BEFORE the point where I hit pause.
I can't figure out why this is happening. Do any of you have any thoughts? Here is a simplified version of my code:
//initialize AUGraph and File player Audio unit
...
...
...
//Start AUGraph
...
...
...
// pause playback
- (void)pauseAUGraph {
    // first, stop the AUGraph
    result = AUGraphStop(processingGraph);

    // get the current play head position
    AudioTimeStamp ts;
    UInt32 size = sizeof(ts);
    result = AudioUnitGetProperty(filePlayerUnit,
                                  kAudioUnitProperty_CurrentPlayTime,
                                  kAudioUnitScope_Global, 0, &ts, &size);

    // save the play head position for later; accumulate so that
    // multiple presses of the pause button are handled correctly
    sampleFrameSavedPosition = sampleFrameSavedPosition + ts.mSampleTime;

    // this stops the file player unit from playing
    AudioUnitReset(filePlayerUnit, kAudioUnitScope_Global, 0);
    NSLog(@"AudioUnitReset - stopped file player from playing");
}
// Stop playback
- (void)stopAUGraph {
    // reset the saved play head to zero so that when we restart,
    // we restart at the beginning of the file
    sampleFrameSavedPosition = 0;

    // now stop the AUGraph
    result = AUGraphStop(processingGraph);
}
Maybe you should use packet counts instead of timestamps, since you only want to pause and resume the music, not display time information.
See BufferedAudioPlayer for an example of this approach.
It may be due to a rounding problem in your code.
For example, if every press of the pause button recorded a position 0.5/4 of a second before the actual pause point, you would still see roughly the expected behavior at first; but after four presses the accumulated error would be 0.5/4 × 4, the half second you seem to be experiencing.
Thus, I would pay careful attention to the types you are using and make sure they don't round inappropriately. Try using a double for your sample times to alleviate the problem.
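For illustration, here is a minimal Swift sketch of the pause path that accumulates the play head in a Float64 (mSampleTime is already a Float64), assuming the same processingGraph and filePlayerUnit variables as in the question:

import AudioToolbox

var sampleFrameSavedPosition: Float64 = 0

func pauseGraph() {
    AUGraphStop(processingGraph)

    // Read the current play head from the file player unit
    var ts = AudioTimeStamp()
    var size = UInt32(MemoryLayout<AudioTimeStamp>.size)
    AudioUnitGetProperty(filePlayerUnit,
                         kAudioUnitProperty_CurrentPlayTime,
                         kAudioUnitScope_Global, 0, &ts, &size)

    // Accumulate in Float64 so repeated pauses keep fractional frames
    sampleFrameSavedPosition += ts.mSampleTime

    // Stop the file player unit itself
    AudioUnitReset(filePlayerUnit, kAudioUnitScope_Global, 0)
}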
Hope this is clear and helpful! :)

Why are my audio sounds not playing on time?

One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed; rather, they follow a pattern of shorter and longer intervals. It seems as if iOS has a lower resolution for the timing of sounds and rounds each sound event to the nearest available point, up or down as needed to stay on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}

// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(@"cannot open file %@: %ld", soundFilePath, result);

// get the size of the file data
UInt32 fileSize = 0;
UInt32 propSize = sizeof(UInt64);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileSize);
if (result != 0) DLog(@"cannot find file size: %ld", result);
DLog(@"file size: %li", fileSize);

// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(@"cannot load data: %ld", result);
AudioFileClose(fileID);

alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;

// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it initializes itself so that it can play the currently loaded sound immediately when play is called. Once playback stops, it uninitializes itself, so you need to call prepareToPlay again to reinitialize. This class is best suited to stream-y playback rather than discrete sound playback; a sketch of re-preparing after each playback follows these notes.
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
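Here is a minimal Swift sketch of that AVAudioPlayer re-preparation (the class name and the delegate wiring are illustrative assumptions):

import AVFoundation

final class ClickPlayer: NSObject, AVAudioPlayerDelegate {
    private let player: AVAudioPlayer

    init(soundURL: URL) throws {
        player = try AVAudioPlayer(contentsOf: soundURL)
        super.init()
        player.delegate = self
        player.prepareToPlay() // preload buffers so the first play() starts promptly
    }

    func play() {
        player.play()
    }

    // AVAudioPlayer releases its buffers when playback stops,
    // so prepare again for the next click
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        player.prepareToPlay()
    }
}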
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
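For reference, on current iOS versions the same preference is set through AVAudioSession, since AudioSessionSetProperty was deprecated in iOS 7; a minimal sketch:

import AVFoundation

// Request a ~1 ms hardware IO buffer; the system may grant a larger one
try? AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.001)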
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.

iPhone - start multiple instances of AVAudioPlayer simultaneously

I am using multiple instances of AVAudioPlayer to play multiple audio files simultaneously. I run a loop to start playing them (prepareToPlay is called beforehand, and the loop only calls each player's play method).
But invariably, one of the players does not play in sync. How can I ensure that all four players start playing simultaneously?
Thanks.
The Apple docs describe how you can "play multiple sounds simultaneously, one sound per audio player, with precise synchronization". Perhaps you need to call playAtTime:, e.g. [myAudioPlayer playAtTime: myAudioPlayer.deviceCurrentTime + playbackDelay];
In fact, the Apple docs for playAtTime: contain the following code snippet:
NSTimeInterval shortStartDelay = 0.01; // seconds
NSTimeInterval now = player.deviceCurrentTime;
[player playAtTime: now + shortStartDelay];
[secondPlayer playAtTime: now + shortStartDelay];
They should play simultaneously (assuming you choose a large enough value for shortStartDelay, not one so small that the scheduled time has already passed before both calls are made).
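In Swift the same pattern looks like this (assuming player and secondPlayer are prepared AVAudioPlayer instances):

import AVFoundation

let shortStartDelay: TimeInterval = 0.01 // seconds
let now = player.deviceCurrentTime
player.play(atTime: now + shortStartDelay)
secondPlayer.play(atTime: now + shortStartDelay)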
Unfortunately, you can't. AVAudioPlayer doesn't provide any mechanism for fine-grained control of start time. The currentTime property sets the point in the file to read from, it doesn't guarantee when the AVAudioPlayer instance will start playing in system time, which is what you need to sync multiple audio streams.
When I need this behavior, I use the RemoteIO Audio Unit + the 3D Mixer Audio Unit + ExtAudioFile.
EDIT
Note that as of iOS 4, you can synchronize multiple AVAudioPlayer instances using playAtTime:
This code segment of mine lets you do this, as long as you don't have to start instantly. Pass in targetTime as a timestamp for when you want to hear the sounds. The trick is to combine timestamps with NSObject's delayed-perform facility, and to exploit the fact that changing the player's volume takes far less time than changing its currentTime. It should work almost perfectly precisely.
- (void)moveTrackPlayerTo:(double)timeInSong atTime:(double)targetTime {
    [trackPlayer play];
    trackPlayer.volume = 0;
    double timeOrig = CFAbsoluteTimeGetCurrent();
    double delay = targetTime - CFAbsoluteTimeGetCurrent();
    [self performSelector:@selector(volumeTo:)
               withObject:[NSNumber numberWithFloat:single.GLTrackVolume]
               afterDelay:delay];
    trackPlayer.currentTime = timeInSong - delay - (CFAbsoluteTimeGetCurrent() - timeOrig);
}

- (void)volumeTo:(NSNumber *)volNumb {
    trackPlayer.volume = [volNumb floatValue];
}
Try setting the same currentTime property value on every AVAudioPlayer object.