How can I get a continuous stream of samples from the JavaScript Audio API - web-audio-api

I'd like to get a continuous stream of samples in JavaScript from the audio API. The only way I've found to get samples is through the MediaRecorder object in the JavaScript Audio API.
I set up my recorder like this:
var options = {
    mimeType: "audio/webm;codec=raw",
}
this.mediaRecorder = new MediaRecorder(stream, options);
this.mediaRecorder.ondataavailable = function (e) {
    this.decodeChunk(e.data);
}.bind(this);
this.mediaRecorder.start(/*timeslice=*/ 100 /*ms*/);
This gives me a callback 10 times a second with new data. All good so far.
The data is encoded, so I use audioCtx.decodeAudioData to process it:
let fileReader = new FileReader();
fileReader.onloadend = () => {
    let encodedData = fileReader.result;
    // console.log("Encoded length: " + encodedData.byteLength);
    this.audioCtx.decodeAudioData(encodedData,
        (decodedSamples) => {
            let newSamples = decodedSamples.getChannelData(0)
                .slice(this.firstChunkSize, decodedSamples.length);
            // The callback which handles the decodedSamples goes here. All good.
            if (this.firstChunkSize == 0) {
                this.firstChunkSize = decodedSamples.length;
            }
        });
};
This all works fine too.
Setting up the data for the file reader is where it gets strange:
let blob;
if (!this.firstChunk) {
    this.firstChunk = chunk;
    blob = new Blob([chunk], { 'type': chunk.type });
} else {
    blob = new Blob([this.firstChunk, chunk], { 'type': chunk.type });
}
fileReader.readAsArrayBuffer(blob);
The first chunk decodes just fine, but the second and later chunks fail to decode unless I combine them with the first chunk. My guess is that the first chunk has a header that is required to decode the data. Since the first chunk's samples get decoded a second time with every later chunk, I remove them after decoding; see this.firstChunkSize above.
This all executes without error, but the audio that I get back has a vibrato-like effect at 10 Hz. A few hypotheses:
I have some simple mistake in my "firstChunkSize" and "slice" logic.
The first chunk has some header which causes the remaining data to be interpreted in a strange way.
There is some strange interaction with an option used when creating the audio source (noise cancellation?).

You want codecs=, not codec=.
var options = {
    mimeType: "audio/webm;codecs=pcm",
}
Though MediaRecorder.isTypeSupported will return true with codec=, that is only because the malformed parameter is being ignored. For example:
MediaRecorder.isTypeSupported("audio/webm;codec=pcm")
true
MediaRecorder.isTypeSupported("audio/webm;codecs=pcm")
true
MediaRecorder.isTypeSupported("audio/webm;codecs=asdfasd")
false
MediaRecorder.isTypeSupported("audio/webm;codec=asdfasd")
true
The garbage codec name asdfasd is "supported" if you specify codec instead of codecs.
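As a defensive pattern, here is a small sketch that probes each candidate mime type with MediaRecorder.isTypeSupported before constructing the recorder (the candidate list is an illustrative assumption, not exhaustive):
const candidates = [
    "audio/webm;codecs=pcm",
    "audio/webm;codecs=opus",
    "audio/webm",
];
// Pick the first type the browser genuinely supports; fall back to the default.
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});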

Related

Pg-promise - How to stream binary data directly to response

Forgive me, I'm still learning. I'm trying to download some mp3 files that I have stored in a table. I can download files directly from the file system like this:
if (fs.existsSync(filename)) {
    res.setHeader('Content-disposition', 'attachment; filename=' + filename);
    res.setHeader('Content-Type', 'audio/mpeg'); // standard MIME type for mp3
    var rstream = fs.createReadStream(filename);
    rstream.pipe(res);
}
I have stored the data in the table using pg-promise example in the docs like so:
const rs = fs.createReadStream(filename);

function receiver(_, data) {
    function source(index) {
        if (index < data.length) {
            return data[index];
        }
    }
    function dest(index, data) {
        return this.none('INSERT INTO test_bin (utterance) VALUES($1)', data);
    }
    return this.sequence(source, {dest});
} // end receiver func

rep.tx(t => {
        return streamRead.call(t, rs, receiver);
    })
    .then(data => {
        console.log('DATA:', data);
    })
    .catch(error => {
        console.log('ERROR: ', error);
    });
But now I want to take that data out of the table and download it to the client. The example in the docs of taking data out of binary converts it to JSON and then prints it to the console like this:
db.stream(qs, s => {
    s.pipe(JSONStream.stringify()).pipe(process.stdout);
});
and that works, so the data is coming out of the database OK. But I can't seem to send it to the client. Since the data is already a stream, I have tried:
db.stream(qs, s => {
    s.pipe(res);
});
But I get a TypeError: First argument must be a string or Buffer.
Alternatively, I could take that stream and write it to the file system, and then serve it as in the top step above, but that seems like a workaround. I wish there was an example of how to save to a file in the docs.
What step am I missing?
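For what it's worth, here is a minimal sketch of one way to adapt that row stream, assuming each row comes out as an object whose bytea column is named utterance (as in the insert above): res accepts only strings and Buffers, so the object stream has to be transformed first.
const { Transform } = require('stream');

db.stream(qs, s => {
    // Each row is an object like { utterance: <Buffer ...> };
    // extract the Buffer column before piping into the HTTP response.
    const toBuffer = new Transform({
        writableObjectMode: true,
        transform(row, _enc, done) {
            done(null, row.utterance);
        }
    });
    res.setHeader('Content-Type', 'audio/mpeg');
    s.pipe(toBuffer).pipe(res);
});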

Actions on Google - unable to use app.tell to give response from JSON

I am trying to get my webhook to return a parsed JSON response from an API. I can log it to the console, but when I try to use app.tell, it gives me: TypeError: Cannot read property 'tell' of undefined. I am able to get the data from the API successfully, but for some reason I'm not able to use it in a response. Thanks for the help!
[Actions.API_TRY] () {
    var request = http.get(url2, function (response) {
        // data is streamed in chunks from the server
        // so we have to handle the "data" event
        var buffer = "",
            data,
            route;
        response.on("data", function (chunk) {
            buffer += chunk;
        });
        response.on("end", function (err) {
            // finished transferring data
            // dump the raw data
            console.log(buffer);
            console.log("\n");
            data = JSON.parse(buffer);
            route = data.routes[0];
            // extract the distance and time
            console.log("Walking Distance: " + route.legs[0].distance.text);
            console.log("Time: " + route.legs[0].duration.text);
            this.app.tell(route.legs[0].distance.text);
        });
    });
}
This looks to me like a JavaScript scoping issue more than anything else. The error message is telling you that this.app is undefined at the point where you call tell: inside a plain function callback, this is no longer the object that carries your app instance. In Actions you often find code like yours embedded in a function defined inside the intent handler, which is passed the instance of your Actions app (SDK or Dialogflow).
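As an illustrative sketch (assuming the handler really does have the app instance available as this.app when it is entered), either capture the instance before the callbacks or use arrow functions, which keep this lexically bound:
[Actions.API_TRY] () {
    const app = this.app; // capture the instance before entering the callbacks
    http.get(url2, (response) => {
        let buffer = "";
        response.on("data", (chunk) => {
            buffer += chunk;
        });
        response.on("end", () => {
            const route = JSON.parse(buffer).routes[0];
            app.tell(route.legs[0].distance.text); // app is still in scope here
        });
    });
}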

416 error when creating a URL from a Blob

I'm using the Web Audio API to record a stream of audio source nodes. My code looks like this:
var context,
    bufferLoader,
    destination,
    mediaRecorder,
    source,
    bufferList,
    chunks = [],
    sound_paths = [],
    audioRecordings = [];

context = new AudioContext();

// fill in sound paths
sound_paths = ['sound.mp3', 'sound2.mp3'];
bufferLoader = new BufferLoader(
    context,
    sound_paths,
    callback
);
// fill bufferList with buffer data
bufferLoader.load();

destination = context.createMediaStreamDestination();
mediaRecorder = new MediaRecorder(destination.stream);

mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);
};

mediaRecorder.onstop = function (e) {
    var blob = new Blob(chunks, {'type': 'audio/ogg; codecs=opus'});
    var audio = document.createElement('audio');
    audio.src = URL.createObjectURL(blob);
    audioRecordings.push(audio);
    chunks = [];
};

function startRecording() {
    mediaRecorder.start();
    source = context.createBufferSource();
    source.buffer = bufferList[0];
    source.connect(destination);
}

function stopRecording() {
    mediaRecorder.stop();
}

// call startRecording(), then source.start(0) on user input
// call stopRecording(), then source.stop(0) on user input
I am using the BufferLoader as defined here: http://middleearmedia.com/web-audio-api-bufferloader/
This works for the most part, but sometimes I get a 416 (Requested Range Not Satisfiable) when creating a Blob and making a URL from it. This seems to happen more often when the web page begins to lag. I'm guessing the Blob is undefined when the URL is created, or something like that. Is there a safer way to handle the onstop event for the media recorder? Maybe it would be better to use srcObject and a MediaStream instead of a Blob?
For my website http://gtube.de (just an example, not commercial) I am using recorder.js => https://github.com/mattdiamond/Recorderjs. It works very well. Perhaps you should give that a try to record the context.
If you load the mp3s into buffers with the Web Audio API and play them at the same time, it will definitely work => https://www.html5rocks.com/en/tutorials/webaudio/intro/
But that's already the way you do it => the BufferLoader code was missing from your example above, so I had to read the article => perhaps next time try to post a shorter, self-contained example.
Sorry, I don't know enough about the MediaStream API => I suppose it's broken ;-)
If something in Web Audio doesn't work, just try another way. It is still not very stable => especially the Mozilla people support it badly.
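For illustration, a minimal Recorder.js sketch (assuming the library linked above and the source node from the question) looks roughly like this:
// Hypothetical sketch using mattdiamond/Recorderjs to capture a source node.
var rec = new Recorder(source);     // wrap the AudioNode feeding the graph
rec.record();                       // start capturing samples
// ... later, on user input:
rec.stop();
rec.exportWAV(function (blob) {     // encode the capture as a WAV Blob
    var audio = document.createElement('audio');
    audio.src = URL.createObjectURL(blob);
    audioRecordings.push(audio);
});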

Can RecorderJs record a file without emitting any sound through the speakers in the meantime?

While reading a bit about RecorderJs, I have been asking myself whether it is possible to record a sound without emitting anything through the speakers, all in the background. Does anybody know if that is possible? I don't see anything similar in the Recorderjs repository.
If you really want to use recorder.js, I guess there is a way to feed it directly with a MediaStream, which you would get from streamNode.stream.
Reading the source code of this lib quickly, it seems it only accepts AudioContext source nodes, not streams directly. In any case, you just have to comment out line 38 of the recorder.js file:
this.node.connect(this.context.destination); //this should not be necessary
That comment is from the author, and indeed it is not necessary.
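As a sketch of what that enables (the variable names here are hypothetical, and it assumes line 38 is commented out as described): once a media element is routed into the graph and nothing connects to the context's destination, the recording stays silent.
// Hypothetical sketch: record an element's audio without playing it aloud.
var ctx = new AudioContext();
var source = ctx.createMediaElementSource(audioElement); // reroutes the element's output into the graph
var rec = new Recorder(source); // with line 38 removed, nothing reaches ctx.destination
audioElement.play();            // plays silently into the graph
rec.record();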
Otherwise, you can also achieve it vanilla style (except that it will save as ogg instead of wav) by using the official MediaRecorder API, available in the latest browsers.
The main key is the MediaStreamDestination node, which doesn't need to be connected to the AudioContext's destination.
var audio = new Audio();
audio.crossOrigin = 'anonymous';
audio.src = 'https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3';
audio.onloadedmetadata = startRecording;

var aCtx = new AudioContext();
var sourceNode = aCtx.createMediaElementSource(audio);
var streamNode = aCtx.createMediaStreamDestination();
sourceNode.connect(streamNode);

function startRecording() {
    var recorder = new MediaRecorder(streamNode.stream),
        chunks = [];
    recorder.ondataavailable = function (e) {
        chunks.push(e.data);
    };
    recorder.onstop = function () {
        var blob = new Blob(chunks);
        var url = URL.createObjectURL(blob);
        var a = new Audio(url);
        a.controls = true;
        document.body.appendChild(a);
    };
    audio.onended = function () {
        recorder.stop();
    };
    audio.play();
    recorder.start();
}

Using Sailsjs Skipper file uploading with Flowjs

I'm trying to use skipper and flowjs (with ng-flow) together for large file uploads.
Based on the Node.js sample in the flowjs repository, I've created my sails controller and service to handle file uploads. When I upload a small file it works fine, but if I try to upload a bigger file (e.g. a 200 MB video) I receive the errors listed below and the array req.file('file')._files is empty. Interestingly, this happens only a few times per upload. For example, if flowjs cuts the file into 150 chunks, these errors appear in the sails console only 3-5 times. So almost all chunks are uploaded to the server, but a few are lost and as a result the file is corrupted.
verbose: Unable to expose body parameter `flowChunkNumber` in streaming upload! Client tried to send a text parameter (flowChunkNumber) after one or more files had already been sent. Make sure you always send text params first, then your files.
These errors appear for all of the flowjs parameters.
I know that text parameters must be sent before the file for skipper to work correctly, and in the Chrome network console I've verified that flowjs sends the data in the correct order.
Any suggestions?
Controller method
upload: function (req, res) {
    flow.post(req, function (status, filename, original_filename, identifier) {
        sails.log.debug('Flow: POST', status, original_filename, identifier);
        res.status(status).send();
    });
}
Service post method
$.post = function (req, callback) {
    var fields = req.body;
    var file = req.file($.fileParameterName);
    if (!file || !file._files.length) {
        console.log('no file', req);
        file.upload(function () {});
    }
    var stream = file._files[0].stream;
    var chunkNumber = fields.flowChunkNumber;
    var chunkSize = fields.flowChunkSize;
    var totalSize = fields.flowTotalSize;
    var identifier = cleanIdentifier(fields.flowIdentifier);
    var filename = fields.flowFilename;

    if (file._files.length === 0 || !stream.byteCount) {
        callback('invalid_flow_request', null, null, null);
        return;
    }

    var original_filename = stream.filename;
    var validation = validateRequest(chunkNumber, chunkSize, totalSize, identifier, filename, stream.byteCount);
    if (validation == 'valid') {
        var chunkFilename = getChunkFilename(chunkNumber, identifier);
        // Save the chunk via the skipper file upload api
        file.upload({saveAs: chunkFilename}, function (err, uploadedFiles) {
            // Do we have all the chunks?
            var currentTestChunk = 1;
            var numberOfChunks = Math.max(Math.floor(totalSize / (chunkSize * 1.0)), 1);
            var testChunkExists = function () {
                fs.exists(getChunkFilename(currentTestChunk, identifier), function (exists) {
                    if (exists) {
                        currentTestChunk++;
                        if (currentTestChunk > numberOfChunks) {
                            callback('done', filename, original_filename, identifier);
                        } else {
                            // Recursion
                            testChunkExists();
                        }
                    } else {
                        callback('partly_done', filename, original_filename, identifier);
                    }
                });
            };
            testChunkExists();
        });
    } else {
        callback(validation, filename, original_filename, identifier);
    }
};
Edit
Found a solution: set the flowjs property maxChunkRetries: 5, since by default it's 0.
On the server side, if req.file('file')._files is empty, I throw a non-permanent (in flowjs terms) error so the client re-sends the chunk.
This solves my problem, but the question of why it behaves like this is still open. The sample code for flowjs and Nodejs uses connect-multiparty and has no additional error-handling code, so it's most likely a bug in the skipper body parser.
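For reference, a minimal flow.js client configuration sketch with that retry option (the target URL and the error-code list here are illustrative assumptions):
// Sketch: flow.js client configured to retry rejected chunks.
var flow = new Flow({
    target: '/api/file/upload',            // hypothetical sails upload endpoint
    chunkSize: 1024 * 1024,
    maxChunkRetries: 5,                    // default is 0: a rejected chunk is never re-sent
    permanentErrors: [404, 415, 500, 501]  // statuses that should not be retried
});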