Knowing when I can skip to any point in an audio file without buffering / delay in playback - web-audio-api

I'm loading an MP3 on my webpage using audio = new Audio(). But I'd like to know that, when setting audio.currentTime, the audio can skip to any point in the file (near the end or wherever) without any delay in playback. I.e. I want to know when the MP3 has downloaded in its entirety.
Can I use the Audio object/element for this, or must I use an AudioContext as shown here?

Every AudioElement exposes its buffered data as a TimeRanges object. TimeRanges tells you how many continuous parts, aka ranges, are already buffered, and it has getters which return the start and end of each range in seconds.
Assuming your AudioElement is named audio, the following code snippet logs the buffered time ranges at a given point in time.
const numberOfRanges = audio.buffered.length;

for (let i = 0; i < numberOfRanges; i += 1) {
    console.log(
        audio.buffered.start(i),
        audio.buffered.end(i)
    );
}
If you want to detect the point in time at which all data is buffered you could use a check similar to this one:
const isBufferedCompletely = (audio.buffered.length === 1
    && audio.buffered.start(0) === 0
    && audio.buffered.end(0) === audio.duration);
I used the Gist referenced in the comments below to construct an example. The following snippet periodically checks whether the file is fully buffered and logs a message to the console once that is the case. I tested it with Chrome (v74) and Firefox (v66) on OS X. Please note that the file can't be played at the same time, as the script keeps setting the currentTime of the Audio element.
const audio = new Audio('http://www.obamadownloads.com/mp3s/charleston-eulogy-speech.mp3');

audio.preload = 'auto';

function detectBuffered(duration) {
    // Stick with the duration once it is known because it might get updated
    // when reaching the end of the file.
    if (duration === undefined && !isNaN(audio.duration)) {
        duration = audio.duration;
    }
    const isBufferedCompletely = (audio.buffered.length === 1
        && audio.buffered.start(0) === 0
        && audio.buffered.end(0) === duration);
    if (isBufferedCompletely) {
        const seconds = Math.round(duration);
        console.log('The complete file is buffered.');
        console.log(`It is about ${ seconds } seconds long.`);
    } else {
        // Move the playhead of the audio element to get the browser to load
        // the complete file.
        if (audio.buffered.length > 0) {
            audio.currentTime = Math.max(0, audio.buffered.end(0) - 1);
        }
        setTimeout(detectBuffered, 100, duration);
    }
}

detectBuffered();
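If you prefer events over polling, the same completeness check can run in a 'progress' handler, which the browser fires whenever more data has been loaded. This is a minimal sketch assuming the same audio element as above; note that listening alone does not coax the browser into downloading the whole file, which is why the snippet above also moves the playhead.

audio.addEventListener('progress', () => {
    // Re-run the completeness check whenever more data has been buffered.
    const isBufferedCompletely = (audio.buffered.length === 1
        && audio.buffered.start(0) === 0
        && audio.buffered.end(0) === audio.duration);
    if (isBufferedCompletely) {
        console.log('The complete file is buffered.');
    }
});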

Related

Web audio playback contains clicks

I am trying to build a MIDI player using the Web Audio API. I used Tone.js to parse the MIDI file into JSON, and I am using MP3 files to play the notes. The relevant parts of the code follow:
// Create audio samples
static async setupSample(audioContext, filepath) {
    const response = await fetch(filepath);
    const arrayBuffer = await response.arrayBuffer();
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
    return audioBuffer;
}

// Play a single sample
static playSample(audioContext, audioBuffer, time) {
    const sampleSource = new AudioBufferSourceNode(audioContext, {
        buffer: audioBuffer,
        playbackRate: 1,
    });
    sampleSource.connect(audioContext.destination);
    sampleSource.start(time);
    return sampleSource;
}
Scheduling samples:
async start() {
    this.startTime = this.audioCtx.currentTime;
    this.play();
}

play() {
    let nextNote = this.notes[this.noteIndex];
    // Schedule all samples that fall within the next 250 ms
    while ((nextNote.time + this.startTime) - this.audioCtx.currentTime <= 0.250) {
        let s = Audio.playSample(this.audioCtx, this.samples[nextNote.midi], this.startTime + nextNote.time);
        s.stop(this.startTime + nextNote.time + nextNote.duration);
        this.noteIndex++;
        if (this.noteIndex == this.notes.length) {
            break;
        }
        nextNote = this.notes[this.noteIndex];
    }
    if (this.noteIndex == this.notes.length) {
        return;
    }
    requestAnimationFrame(() => {
        this.play();
    });
}
I am testing the code with a MIDI file that contains a C major scale. I have tested the MIDI file using timidity and it is fine.
The code plays the MIDI file correctly except for one small problem: I hear clicking sounds during playback. The clicking increases with increasing tempo but does not completely go away even at a tempo as low as 50 bpm. Any ideas what could be going wrong?
Full code can be viewed at: https://test.meedee.in/
Nothing is "wrong". You are observing a phenomenon intrinsic to the physics of audio.
Chopping audio samples arbitrarily like this creates clicks at the transitions. Any instantaneous change in level is heard as a click. To get rid of the clicks, apply an envelope to the sample, blend adjacent notes, or apply a low-pass filter.
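To illustrate the envelope idea, here is a minimal sketch adapted from the playSample function in the question: the source is routed through a GainNode whose level ramps up over the first few milliseconds of the note and back down just before it stops, so the level never changes instantaneously. The name playSampleWithEnvelope and the 15 ms fade are my own assumptions; tune the fade by ear.

function playSampleWithEnvelope(audioContext, audioBuffer, time, duration) {
    const fadeTime = 0.015; // assumed fade length; a few milliseconds usually masks the click
    const sampleSource = new AudioBufferSourceNode(audioContext, { buffer: audioBuffer });
    const gainNode = new GainNode(audioContext, { gain: 0 });
    sampleSource.connect(gainNode).connect(audioContext.destination);
    // Ramp up from silence at the start of the note...
    gainNode.gain.setValueAtTime(0, time);
    gainNode.gain.linearRampToValueAtTime(1, time + fadeTime);
    // ...and back down to silence just before the note ends.
    gainNode.gain.setValueAtTime(1, time + duration - fadeTime);
    gainNode.gain.linearRampToValueAtTime(0, time + duration);
    sampleSource.start(time);
    sampleSource.stop(time + duration);
    return sampleSource;
}

In the player above you would call this in place of playSample, passing nextNote.duration so the fade-out lands before the stop time.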

Is there a way to specify a time delay between audio tracks on the same playlist?

I need to set a specific time delay between audio tracks in the playlist, e.g. a 10-second delay. How can I achieve this? Thanks in advance.
There are two ways:
Create a silent audio track of the desired duration and insert it between each item in your ConcatenatingAudioSource.
Don't use ConcatenatingAudioSource; write your own playlist logic instead.
An example of the second approach could be:
// Maintain your own playlist position
int index = 0;
// Define your tracks
final tracks = <IndexedAudioSource>[ ... ];
// Auto advance with a delay when the current track completes
player.processingStateStream.listen((state) async {
    if (state == ProcessingState.completed && index < tracks.length - 1) {
        await Future.delayed(Duration(seconds: 10));
        // You might want to check whether another skip happened during the
        // sleep before executing this skip.
        skipToIndex(index + 1);
    }
});

// Make this a method so that you can wire up UI buttons to skip on demand.
Future<void> skipToIndex(int i) async {
    index = i;
    await player.setAudioSource(tracks[index]);
}

How to offset note scheduling for interactive recording of notes via user controls

In the code below I have a note scheduler that increments a variable named current16thNote up to 16 and then loops back around to 1. The ultimate goal of the application is to allow the user to click a drum pad and push the current16thNote value to an array. On each iteration of current16thNote a loop runs over the track arrays looking for the current current16thNote value; if it finds it, the sound plays.
//_________________________________________________________General variable declarations
var isPlaying = false,
    tempo = 120.0,     // tempo (in beats per minute)
    current16thNote = 1,
    futureTickTime = 0.0,
    timerID = 0,
    noteLength = 0.05; // length of "beep" (in seconds)
//_________________________________________________________END General variable declarations

//_________________________________________________________Load sounds
var kick = audioFileLoader("sounds/kick.mp3"),
    snare = audioFileLoader("sounds/snare.mp3"),
    hihat = audioFileLoader("sounds/hihat.mp3"),
    shaker = audioFileLoader("sounds/shaker.mp3");
//_________________________________________________________END Load sounds

//_________________________________________________________Track arrays
var track1 = [],
    track2 = [5, 13],
    track3 = [],
    track4 = [1, 3, 5, 7, 9, 11, 13, 15];
//_________________________________________________________END Track arrays

//_________________________________________________________Future tick
function futureTick() {
    var secondsPerBeat = 60.0 / tempo;
    futureTickTime += 0.25 * secondsPerBeat;
    current16thNote += 1;
    if (current16thNote > 16) {
        current16thNote = 1;
    }
}
//_________________________________________________________END Future tick
//_________________________________________________________END Future tick
function checkIfRecordedAndPlay(trackArr, sound, beatDivisionNumber, time) {
    for (var i = 0; i < trackArr.length; i += 1) {
        if (beatDivisionNumber === trackArr[i]) {
            sound.play(time);
        }
    }
}
//__________________________________________________________Schedule note
function scheduleNote(beatDivisionNumber, time) {
    var osc = audioContext.createOscillator(); //____Metronome
    if (beatDivisionNumber === 1) {
        osc.frequency.value = 800;
    } else {
        osc.frequency.value = 400;
    }
    osc.connect(audioContext.destination);
    osc.start(time);
    osc.stop(time + noteLength); //___________________END Metronome
    checkIfRecordedAndPlay(track1, kick, beatDivisionNumber, time);
    checkIfRecordedAndPlay(track2, snare, beatDivisionNumber, time);
    checkIfRecordedAndPlay(track3, hihat, beatDivisionNumber, time);
    checkIfRecordedAndPlay(track4, shaker, beatDivisionNumber, time);
}
//_________________________________________________________END schedule note

//_________________________________________________________Scheduler
function scheduler() {
    while (futureTickTime < audioContext.currentTime + 0.1) {
        scheduleNote(current16thNote, futureTickTime);
        futureTick();
    }
    timerID = window.requestAnimationFrame(scheduler);
}
//_________________________________________________________END Scheduler
//_________________________________________________________END Scheduler
The Problem
In addition to the previous code I have some user interface controls as shown in the following image.
When a user mousedowns on a “drum pad” I want to do two things: first, hear the sound immediately, and second, push the current16thNote value to the respective array.
If I use the following code to do this a few problems emerge.
$("#kick").on("mousedown", function() {
kick.play(audioContext.currentTime)
track1.push(current16thNote)
})
The first problem is that the sound plays twice. This is because when the note value is pushed to the array it is immediately recognized by the next iteration of the note scheduler and immediately played again. I fixed this by creating a delay with setTimeout to offset the push to the array.
$("#kick").on("mousedown", function() {
kick.play(audioContext.currentTime)
window.setTimeout(function() {
track1.push(note)
}, 500)
})
The second problem is musical.
When a user clicks a drum pad, the note is recorded one 16th later than the user anticipates. In other words, if you listen to the metronome and click the kick drum pad intending to land right on the 1/1 downbeat, that isn't what happens: when the metronome loops back around, the note will have been “recorded” one 16th increment late.
This can be remedied by writing code that intentionally offsets the value pushed to the array by -1.
I wrote a helper function named pushNote to do this.
$("#kick").on("mousedown", function() {
var note = current16thNote;
kick.play(audioContext.currentTime)
window.setTimeout(function() {
pushNote(track1, note)
}, 500)
})
//________________________________________________Helper
function pushNote(trackArr, note) {
    if (note - 1 === 0) {
        trackArr.push(16);
    } else {
        trackArr.push(note - 1);
    }
}
//________________________________________________END Helper
My question is really a basic one: is there a way to solve this problem without creating these odd “offsets”?
I suspect there is a way to set/write/place the current16thNote increment without having to create offsets in other parts of the program, but I'm hazy on what it could be.
In the world of professional audio recording there isn't one tick per 16th division; you usually have 480 ticks per quarter note. I want to begin writing my apps using this larger resolution, but I want to resolve this "offset" issue before I go down that rabbit hole.

How to Delay a For Loop Until Sound has Finished Playing in Swift

So, I have a for loop that runs values from an array into an if statement. The if statement plays a sound depending on the value in the array.
However, right now, all the values are being run through at once, so the sounds are all played at the same time. Here's what my code looks like right now:
// sortedKeys is an array made from the keys in the newData dictionary.
let sortedKeys = Array(newData.keys).sort(<)
// newData is the dictionary of type [float:float] being used to get the values that are then being run through the if statement.
for (value) in sortedKeys {
    let data = newData[value]
    if data <= Float(1) {
        self.audioPlayer1.play()
    } else if data <= Float(2) && data > Float(1) {
        self.audioPlayer2.play()
    } else if data <= Float(3) && data > Float(2) {
        self.audioPlayer3.play()
    } else if data <= Float(4) && data > Float(3) {
        self.audioPlayer4.play()
    } else if data <= Float(5) && data > Float(4) {
        self.audioPlayer5.play()
    } else if data <= Float(6) && data > Float(5) {
        self.audioPlayer6.play()
    } else if data <= Float(7) && data > Float(6) {
        self.audioPlayer7.play()
    } else if data <= Float(8) && data > Float(7) {
        self.audioPlayer8.play()
    } else if data <= Float(9) && data > Float(8) {
        self.audioPlayer9.play()
    } else {
        self.audioPlayer10.play()
    }
}
How can I make it so that once the AVAudioPlayer finishes playing, I continue the for loop to get the next sound? I'm thinking it has something to do with AVPlayerItemDidPlayToEndTimeNotification, but I don't know how to use this to delay the for loop.
Thanks!
How can I make it so that once the AVAudioPlayer finishes playing, then I continue the for loop to get the next sound.
Have the audio player's delegate implement -audioPlayerDidFinishPlaying:successfully: such that it selects the next sound and plays it.
Audio is played asynchronously, so you can't just have your code pause for a bit while the sound plays and then continue on. You need to design your code so that it fires off the sound, and then when the audio player says it's finished, the next sound is started.
I had a similar issue and used sleep(number of seconds to pause), e.g. sleep(3), after the self.audioPlayer.play() call in order to give the sound time to play. It is not ideal, though, as it blocks the main thread.

How to play audio byte array (not file!) with JavaScript in a browser

For mostly security reasons, I'm not allowed to store a WAV file on the server to be accessed by a browser. What I have is a byte array containing audio data (the data portion of a WAV file, I believe) on the server, and I want it to be played in a browser through JavaScript (or an applet, but JS is preferred). I can use JSON-RPC to send the whole byte[] over, or I can open a socket to stream it over, but in either case I don't know how to play the byte[] within the browser.
The following code plays a sine wave at 0.5 and 2.0 seconds. Call the function play_buffersource() from your button or wherever you want.
Tested using Chrome with the Web Audio flag enabled. For your case, all you need to do is copy your audio bytes into buf.
<script type="text/javascript">
const kSampleRate = 44100; // Other sample rates might not work depending on the your browser's AudioContext
const kNumSamples = 16834;
const kFrequency = 440;
const kPI_2 = Math.PI * 2;
function play_buffersource() {
if (!window.AudioContext) {
if (!window.webkitAudioContext) {
alert("Your browser sucks because it does NOT support any AudioContext!");
return;
}
window.AudioContext = window.webkitAudioContext;
}
var ctx = new AudioContext();
var buffer = ctx.createBuffer(1, kNumSamples, kSampleRate);
var buf = buffer.getChannelData(0);
for (i = 0; i < kNumSamples; ++i) {
buf[i] = Math.sin(kFrequency * kPI_2 * i / kSampleRate);
}
var node = ctx.createBufferSource(0);
node.buffer = buffer;
node.connect(ctx.destination);
node.noteOn(ctx.currentTime + 0.5);
node = ctx.createBufferSource(0);
node.buffer = buffer;
node.connect(ctx.destination);
node.noteOn(ctx.currentTime + 2.0);
}
</script>
References:
http://epx.com.br/artigos/audioapi.php
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
If you need to resample the audio, you can use a JavaScript resampler: https://github.com/grantgalitz/XAudioJS
If you need to decode the base64 data, there are a lot of JavaScript base64 decoders: https://github.com/carlo/jquery-base64
I accomplished this via the following code. I pass a byte array containing the data from the WAV file to the function playByteArray. My solution is similar to Peter Lee's, but I could not get his to work in my case (the output was garbled), whereas this one works well for me. I verified that it works in Firefox and Chrome.
window.onload = init;
var context; // Audio context
var buf;     // Audio buffer

function init() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser does not support any AudioContext and cannot play back this audio.");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    context = new AudioContext();
}

function playByteArray(byteArray) {
    var arrayBuffer = new ArrayBuffer(byteArray.length);
    var bufferView = new Uint8Array(arrayBuffer);
    for (var i = 0; i < byteArray.length; i++) {
        bufferView[i] = byteArray[i];
    }
    context.decodeAudioData(arrayBuffer, function(buffer) {
        buf = buffer;
        play();
    });
}

// Play the loaded file
function play() {
    // Create a source node from the buffer
    var source = context.createBufferSource();
    source.buffer = buf;
    // Connect to the final output node (the speakers)
    source.connect(context.destination);
    // Play immediately
    source.start(0);
}
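For completeness, a hypothetical usage sketch showing how the byte array might arrive in the first place: fetch a base64 payload over JSON (one of the transports the question mentions) and decode it before handing it to playByteArray. The endpoint name and JSON field are assumptions, not part of the original answer.

// Hypothetical endpoint returning a response shaped like { "base64Audio": "..." }
fetch('/api/audio-bytes')
    .then(function(response) { return response.json(); })
    .then(function(json) {
        // Decode the base64 payload into a byte array.
        var binary = atob(json.base64Audio);
        var byteArray = new Uint8Array(binary.length);
        for (var i = 0; i < binary.length; i++) {
            byteArray[i] = binary.charCodeAt(i);
        }
        playByteArray(byteArray);
    });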
If you have the bytes on the server then I would suggest that you create some kind of handler on the server that will stream the bytes to the response as a wav file. This "file" would only be in memory on the server and not on disk. Then the browser can just handle it like a normal wav file.
More details on the server stack would be needed to give more information on how this could be done in your environment.
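As a rough illustration only, here is a minimal sketch of such a handler, assuming a Node.js/Express stack; the route, the port, and the buildWavBytes helper (which must produce a complete WAV file, header included, as a Buffer) are placeholders for whatever your environment provides.

const express = require('express');
const app = express();

app.get('/audio.wav', (req, res) => {
    // Assumed hypothetical helper: assembles the in-memory WAV bytes (header + data).
    const wavBytes = buildWavBytes();
    res.set({
        'Content-Type': 'audio/wav',
        'Content-Length': wavBytes.length,
    });
    res.send(wavBytes);
});

app.listen(3000);

The browser can then point an Audio element or an <audio> tag at /audio.wav as if it were a static file.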
I suspect you can achieve this with HTML5 Audio API easily enough:
https://developer.mozilla.org/en/Introducing_the_Audio_API_Extension
This library might come in handy too, though I'm not sure if it reflects the latest browser behaviours:
https://github.com/jussi-kalliokoski/audiolib.js