416 Error when creating url from a Blob - web-audio-api

I'm using the Web Audio API to record a stream of audio source nodes. My code looks like this:
var context,
    bufferLoader,
    destination,
    mediaRecorder,
    source,
    bufferList,
    chunks = [],
    sound_paths = [],
    audioRecordings = [];

// fill in sound paths
sound_paths = ['sound.mp3', 'sound2.mp3'];

bufferLoader = new BufferLoader(
    context,
    sound_paths,
    callback
);

// fill bufferList with buffer data
bufferLoader.load();

destination = context.createMediaStreamDestination();
mediaRecorder = new MediaRecorder(destination);

mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);
};

mediaRecorder.onstop = function (e) {
    var blob = new Blob(chunks, {'type': 'audio/ogg; codecs=opus'});
    var audio = document.createElement('audio');
    audio.src = URL.createObjectURL(blob);
    audioRecordings.push(audio);
    chunks = [];
};

function startRecording() {
    mediaRecorder.start();
    source = Recorder.context.createBufferSource();
    source.buffer = bufferList[0];
    source.connect(Recorder.destination);
}

function stopRecording() {
    mediaRecorder.stop();
}

// call startRecording(), then source.start(0) on user input
// call stopRecording(), then source.stop(0) on user input
I am using the BufferLoader defined here: http://middleearmedia.com/web-audio-api-bufferloader/
This works for the most part, but sometimes I get a 416 (Requested Range Not Satisfiable) error when creating a Blob and creating a URL from it. This seems to happen more often when the web page begins to lag. I'm guessing the Blob is undefined when the URL is created, or something like that. Is there a safer way to handle the MediaRecorder's onstop event? Would it be better to use srcObject and a MediaStream instead of a Blob?

For my website http://gtube.de (just an example, not commercial) I am using recorder.js => https://github.com/mattdiamond/Recorderjs. It works very well. Perhaps you should give that a try to record the context.
If you load the mp3s into buffers with the Web Audio API and play them at the same time, it will definitely work => https://www.html5rocks.com/en/tutorials/webaudio/intro/
But that's already the way you do it => the BufferLoader code is missing from your example above, so I had to read the article => perhaps next time try to post a shorter, self-contained example.
Sorry, I don't know enough about the MediaStream API => I suppose it's broken ;-)
If something in Web Audio doesn't work, just try another way. It is still not very stable => especially Mozilla's support for it is lacking.
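If you do go the Recorder.js route, a minimal sketch of how it could be wired to the source node from your question might look like this (this assumes the standard Recorder.js API of record()/stop()/exportWAV() and reuses your context, source, bufferList and audioRecordings variables):
// on user input:
source = context.createBufferSource();
source.buffer = bufferList[0];
source.connect(context.destination); // optional: only if you also want to hear it
var rec = new Recorder(source);      // taps whatever `source` plays
rec.record();
source.start(0);

// ...later, on user input:
source.stop(0);
rec.stop();
rec.exportWAV(function (blob) {      // result as a WAV blob
    var audio = document.createElement('audio');
    audio.src = URL.createObjectURL(blob);
    audioRecordings.push(audio);
});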

Related

MediaRecorder No Metadata on Download

I'm using MediaRecorder (along with the Web Audio API) to record and process audio and download the blob that it generates. The recording and downloading work great, but there is no metadata when the file is downloaded (length, sample rate, channels, etc.)
I'm using this to create the blob, and I've also tried the mimetype with no luck:
const blob = new Blob(chunks, {
    'type': 'audio/wav'
});
chunks = [];

const audioURL = window.URL.createObjectURL(blob);
audio.src = audioURL;
console.log("recorder stopped");

var new_file = document.getElementById('downloadblob').src;
var download_link = document.getElementById("download_link");
download_link.href = new_file;
var name = generateFileName();
download_link.download = name;
How could I ensure the length of the recording, sample rate, and other metadata are included in the download?
I don't know of any browser which allows you to record something as audio/wav. You can get the mimeType of your recording from the instance of the MediaRecorder.
const blob = new Blob(chunks, {
    'type': mediaRecorder.mimeType
});
Please note that the length will only be correct if you omit the timeslice parameter when calling mediaRecorder.start(). Otherwise the browser doesn't know the final length of the file when generating the metadata.
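A sketch of that combination (no timeslice argument, and the recorder's own mimeType), reusing the chunks array, the download_link element and the generateFileName() helper from your question:
mediaRecorder.start(); // no timeslice, so the browser can write the final length into the metadata

mediaRecorder.onstop = () => {
    const blob = new Blob(chunks, { 'type': mediaRecorder.mimeType });
    chunks = [];
    const download_link = document.getElementById("download_link");
    download_link.href = window.URL.createObjectURL(blob);
    download_link.download = generateFileName();
};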

How can I get a continuous stream of samples from the JavaScript Audio API

I'd like to get a continuous stream of samples in JavaScript from the audio API. The only way I've found to get samples is through the MediaRecorder object in the JavaScript Audio API.
I set up my recorder like this:
var options = {
    mimeType: "audio/webm;codec=raw",
};

this.mediaRecorder = new MediaRecorder(stream, options);
this.mediaRecorder.ondataavailable = function (e) {
    this.decodeChunk(e.data);
}.bind(this);

this.mediaRecorder.start(/*timeslice=*/ 100 /*ms*/);
This gives me a callback 10 times a second with new data. All good so far.
The data is encoded, so I use audioCtx.decodeAudioData to process it:
let fileReader = new FileReader();
fileReader.onloadend = () => {
    let encodedData = fileReader.result;
    // console.log("Encoded length: " + encodedData.byteLength);
    this.audioCtx.decodeAudioData(encodedData,
        (decodedSamples) => {
            let newSamples = decodedSamples.getChannelData(0)
                .slice(this.firstChunkSize, decodedSamples.length);
            // The callback which handles the decodedSamples goes here. All good.
            if (this.firstChunkSize == 0) {
                this.firstChunkSize = decodedSamples.length;
            }
        });
};
This all works fine too.
Setting up the data for the file reader is where it gets strange:
let blob;
if (!this.firstChunk) {
    this.firstChunk = chunk;
    blob = new Blob([chunk], { 'type': chunk.type });
} else {
    blob = new Blob([this.firstChunk, chunk], { 'type': chunk.type });
}
fileReader.readAsArrayBuffer(blob);
The first chunk works just fine, but the second and later chunks fail to decode unless I combine them with the first chunk. I'm guessing what is happening here is that the first chunk has a header that is required to decode the data. I remove the samples decoded from the first chunk after decoding them a second time. See this.firstChunkSize above.
This all executes without error, but the audio that I get back has a vibrato-like effect at 10Hz. A few hypotheses:
- I have some simple mistake in my "firstChunkSize" and "slice" logic
- The first chunk has some header which is causing the remaining data to be interpreted in a strange way.
- There is some strange interaction with some option when creating the audio source (noise cancellation?)
You want codecs=, not codec=.
var options = {
mimeType: "audio/webm;codecs=pcm",
}
Though MediaRecorder.isTypeSupported will return true with codec=, that is only because the unknown parameter is being ignored. For example:
MediaRecorder.isTypeSupported("audio/webm;codec=pcm")
true
MediaRecorder.isTypeSupported("audio/webm;codecs=pcm")
true
MediaRecorder.isTypeSupported("audio/webm;codecs=asdfasd")
false
MediaRecorder.isTypeSupported("audio/webm;codec=asdfasd")
true
The garbage codec name asdfasd is "supported" if you specify codec instead of codecs.
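As a small sketch (reusing the stream variable from your question), you can guard the option with isTypeSupported so an unsupported string is caught up front instead of being silently ignored:
const wanted = "audio/webm;codecs=pcm";
const options = MediaRecorder.isTypeSupported(wanted) ? { mimeType: wanted } : {};
this.mediaRecorder = new MediaRecorder(stream, options);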

actions on google--unable to use app.tell to give response from JSON

I am trying to get my webhook to return a parsed JSON response from an API. I can log it on the console, but when I try to use app.tell, it gives me: TypeError: Cannot read property 'tell' of undefined. I am able to successfully get the data from the API, but I'm not able to use it in a response for some reason. Thanks for the help!
[Actions.API_TRY] () {
    var request = http.get(url2, function (response) {
        // data is streamed in chunks from the server
        // so we have to handle the "data" event
        var buffer = "",
            data,
            route;

        response.on("data", function (chunk) {
            buffer += chunk;
        });

        response.on("end", function (err) {
            // finished transferring data
            // dump the raw data
            console.log(buffer);
            console.log("\n");
            data = JSON.parse(buffer);
            route = data.routes[0];
            // extract the distance and time
            console.log("Walking Distance: " + route.legs[0].distance.text);
            console.log("Time: " + route.legs[0].duration.text);
            this.app.tell(route.legs[0].distance.text);
        });
    });
}
This looks to me to be more of a JavaScript scoping issue than anything else. The error message is telling you that app is undefined. In Actions you often find code like yours embedded in a callback defined inside the intent handler, and it is the intent handler that is passed the instance of your Actions app (SDK or Dialogflow); by the time your response.on("end", ...) callback runs, this no longer refers to that handler, so this.app is undefined.
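One way around it, as a sketch (assuming this.app actually holds the Actions app instance at the point where the handler method runs), is to capture the app before entering the callbacks, or to use arrow functions so the outer this is preserved:
[Actions.API_TRY] () {
    const app = this.app;   // capture while `this` still refers to your handler object
    http.get(url2, (response) => {
        let buffer = "";
        response.on("data", (chunk) => { buffer += chunk; });
        response.on("end", () => {
            const route = JSON.parse(buffer).routes[0];
            app.tell(route.legs[0].distance.text);   // `app` is now in scope
        });
    });
}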

Can RecorderJs record a file in the background without emitting any sound from the speakers in the meantime?

While reading a bit about RecorderJs, I asked myself whether it is possible to record a sound without emitting any sound from the speakers, all in the background. Does anybody know if that is possible? I don't see anything similar in the Recorderjs repository.
If you really want to use recorder.js, I first guessed there might be a way to feed it directly with a MediaStream, which you would get from streamNode.stream.
Reading quickly through the source code of this lib, though, it seems it only accepts AudioContext source nodes, not streams directly, and anyway you just have to comment out line 38 of the recorder.js file:
this.node.connect(this.context.destination); //this should not be necessary
That comment is from the author, and indeed the line is not necessary, which is exactly what lets you record without playing anything through the speakers.
Otherwise, you can also achieve it vanilla style (except that it will save as ogg instead of wav) by using the official MediaRecorder API, available in the latest browsers.
The main key is the MediaStreamDestination, which doesn't need to be connected to the AudioContext's destination.
var audio = new Audio();
audio.crossOrigin = 'anonymous';
audio.src = 'https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3';
audio.onloadedmetadata = startRecording;

var aCtx = new AudioContext();
var sourceNode = aCtx.createMediaElementSource(audio);
var streamNode = aCtx.createMediaStreamDestination();
sourceNode.connect(streamNode);

function startRecording() {
    var recorder = new MediaRecorder(streamNode.stream),
        chunks = [];

    recorder.ondataavailable = function (e) {
        chunks.push(e.data);
    };

    recorder.onstop = function () {
        var blob = new Blob(chunks);
        var url = URL.createObjectURL(blob);
        var a = new Audio(url);
        a.controls = true;
        document.body.appendChild(a);
    };

    audio.onended = function () {
        recorder.stop();
    };

    audio.play();
    recorder.start();
}

ADO.NET Data Services - Uploading files

I am trying to write a REST web service through which our clients can upload a file to our file server. Is there an example or any useful links I can refer to for guidance?
I haven't seen many examples of POST operations using ADO.NET Data Services.
I've uploaded a file to ADO.NET Data Services using POST, although I'm not sure whether it's the recommended approach. The way I went about it:
On the data service I implemented a service operation called UploadFile (using the WebInvoke attribute so that it caters for POST calls):
[WebInvoke]
public void UploadFile()
{
    var request = HttpContext.Current.Request;
    for (int i = 0; i < request.Files.Count; i++)
    {
        var file = request.Files[i];
        var inputValues = new byte[file.ContentLength];
        using (var requestStream = file.InputStream)
        {
            requestStream.Read(inputValues, 0, file.ContentLength);
        }
        File.WriteAllBytes(@"c:\temp\" + file.FileName, inputValues);
    }
}
Then on the client side I call the data service using:
var urlString = "http://localhost/TestDataServicePost/CustomDataService.svc/UploadFile";
var webClient = new WebClient();
webClient.UploadFile(urlString, "POST", @"C:\temp\test.txt");
This uses a WebClient to upload the file, which places the file data in the HttpRequest.Files collection and sets the content type. If you would prefer to send the contents of the file yourself (e.g. from an ASP.NET FileUpload control) rather than having the WebClient read the file from a path, you can use a WebRequest similar to the way it's done in this post. Instead of using
FileStream fileStream = new FileStream(uploadfile,
    FileMode.Open, FileAccess.Read);
you could use a byte array that you pass in.
I hope this helps.
I'm not 100% sure how to do this directly against a file server per se, but ADO.NET Data Services definitely supports something similar with a database. The code below shows how the similar goal of putting a file into a database has been accomplished; not sure how much that will help:
var myDocumentRepositoryUri = new Uri("uri here");
var dataContext = new FileRepositoryEntities(myDocumentRepositoryUri);

var myFile = new FileItem();
myFile.Filename = "upload.dat";
myFile.Data = new byte[1000]; // or put whatever file data you want to here

dataContext.AddToFileItem(myFile);
dataContext.SaveChanges();
Note: this code is also using Entity Framework to create a FileItem (representation of a database table as an object) and to save that data.