Streaming and results - papaparse

I'm just getting started with PapaParse, so sorry if this is a silly question.
If I parse a file, I get my nice results object, I can look at the headers, and all that:
Papa.parse(file, {
  header: true,
  dynamicTyping: true,
  complete: function(results) {
    console.log("done");
    data = results;
    //headers = split(data[0]);
    headers = results.meta['fields'];
  }
});
However, if I add a step callback, the results object in the complete callback is undefined. What am I actually supposed to do in the step callback? Their examples just dump the output of each row to the console.
Papa.parse(file, {
  header: true,
  dynamicTyping: true,
  step: function(row) {
    //console.log(row.data);
    data.push(row.data);
  },
  complete: function(results) {
    console.log("done");
    data = results;
    //headers = split(data[0]);
    headers = results.meta['fields'];
  }
});

In Papa Parse, the step callback (i.e., streaming) is normally used when you process a large file, so you handle the data as the parser reads it. When streaming, the parse results are not available in the complete callback.
To learn more about streaming in Papa Parse, see the streaming section of the documentation. Also see the step function and the complete callback in the config section of the documentation.
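For example, here is a minimal sketch (reusing the same global data and headers variables as in the question) that collects each row in step and reads the field names from the per-row meta, since the full results object is not populated in complete when streaming:

var data = [];
var headers = [];

Papa.parse(file, {
  header: true,
  dynamicTyping: true,
  step: function(row) {
    // each streamed result carries its own meta, including the header fields
    if (headers.length === 0) {
      headers = row.meta.fields;
    }
    data.push(row.data);
  },
  complete: function() {
    // nothing is accumulated here when streaming; use what step collected
    console.log("done", data.length, "rows", headers);
  }
});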
Hope this helps.

Related

Using uppy.io to send base64 encoded data rather than specifying a file input

Is there a way to send base64 encoded data using uppy.io? I already have it working for 'soft-copy' document uploads using the Dashboard component, but I can't seem to work out a way to pass the file bytes rather than using an input file tag to provide the data to be uploaded.
Context:
I have a page that uses a JavaScript component to access local scanner hardware. It scans, shows a preview, all working. When the user hits an upload button to push it to the server, the scanning component outputs the scan as base64 encoded data. I can send this up to the server using XMLHttpRequest like so:
var req = new XMLHttpRequest();
var formData = new FormData();
formData.append('fileName', uploadFileName);
formData.append('imageFileAsBase64String', imageFileAsBase64String);
req.open("POST", uploadFormUrl);
req.onreadystatechange = __uploadImages_readyStateChanged;
req.send(formData);
but I would really like to use uppy because scan files can be quite large and I get resumable uploads, a nice progress bar, etc., and I already have tusdotnet set up on the back end and ready to receive it.
All the examples rely on input tags, so I don't really know what approach to take. Thanks for any pointers.
I eventually figured this out; posting it here in case it's useful to anyone else.
You can use fetch to convert the base64 string, turn it into a blob, and finally add it to uppy's files via the addFile API.
I referenced this article:
https://ionicframework.com/blog/converting-a-base64-string-to-a-blob-in-javascript/
The code below works with my setup, with tusdotnet handling the tus service server side.
var uppy = new Uppy.Core({
  autoProceed: true,
  debug: true
})
.use(Uppy.Tus, { endpoint: 'https://localhost:44302/files/' })
.use(Uppy.ProgressBar, {
  target: '.UppyInput-Progress',
  hideAfterFinish: false,
});

uppy.on('upload', (data) => {
  uppy.setMeta({ md: 'value' })
})

uppy.on('complete', (result) => {
  // do completion stuff
})

fetch(`data:image/jpeg;base64,${imageFileAsBase64String}`)
  .then((response) => response.blob())
  .then((blob) => {
    uppy.addFile({
      name: 'image.jpg',
      type: 'image/jpeg',
      data: blob
    })
  });

How can I get a continuous stream of samples from the JavaScript AudioAPI

I'd like to get a continuous stream of samples in JavaScript from the audio API. The only way I've found to get samples is through the MediaRecorder object in the JavaScript Audio API.
I set up my recorder like this:
var options = {
  mimeType: "audio/webm;codec=raw",
}
this.mediaRecorder = new MediaRecorder(stream, options);
this.mediaRecorder.ondataavailable = function (e) {
  this.decodeChunk(e.data);
}.bind(this);
this.mediaRecorder.start(/*timeslice=*/ 100 /*ms*/);
This gives me a callback 10 times a second with new data. All good so far.
The data is encoded, so I use audioCtx.decodeAudioData to process it:
let fileReader = new FileReader();
fileReader.onloadend = () => {
  let encodedData = fileReader.result;
  // console.log("Encoded length: " + encodedData.byteLength);
  this.audioCtx.decodeAudioData(encodedData,
    (decodedSamples) => {
      let newSamples = decodedSamples.getChannelData(0)
        .slice(this.firstChunkSize, decodedSamples.length);
      // The callback which handles the decodedSamples goes here. All good.
      if (this.firstChunkSize == 0) {
        this.firstChunkSize = decodedSamples.length;
      }
    });
};
This all works fine too.
Setting up the data for the file reader is where it gets strange:
let blob;
if (!this.firstChunk) {
  this.firstChunk = chunk;
  blob = new Blob([chunk], { 'type': chunk.type });
} else {
  blob = new Blob([this.firstChunk, chunk], { 'type': chunk.type });
}
fileReader.readAsArrayBuffer(blob);
The first chunk works just fine, but the second and later chunks fail to decode unless I combine them with the first chunk. I'm guessing what is happening here is that the first chunk has a header that is required to decode the data. I remove the samples decoded from the first chunk after decoding them a second time. See this.firstChunkSize above.
This all executes without error, but the audio that I get back has a vibrato-like effect at 10Hz. A few hypotheses:
I have made some simple mistake in my "firstChunkSize" and "slice" logic
The first chunk has some header which is causing the remaining data to be interpreted in a strange way.
There is some strange interaction with some option when creating the audio source (noise cancellation?)
You want codecs=, not codec=.
var options = {
mimeType: "audio/webm;codecs=pcm",
}
Though MediaRecorder.isTypeSupported will return true with codec=, that is only because the parameter is being ignored. For example:
MediaRecorder.isTypeSupported("audio/webm;codec=pcm")      // true
MediaRecorder.isTypeSupported("audio/webm;codecs=pcm")     // true
MediaRecorder.isTypeSupported("audio/webm;codecs=asdfasd") // false
MediaRecorder.isTypeSupported("audio/webm;codec=asdfasd")  // true
The garbage codec name asdfasd is "supported" if you specify codec instead of codecs.
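If in doubt, a small sketch like this (the candidate list below is just illustrative) picks the first mimeType the browser genuinely supports before constructing the recorder:

const candidates = [
  "audio/webm;codecs=pcm",
  "audio/webm;codecs=opus",
  "audio/webm"
];
// pick the first type the browser actually reports as supported (note codecs=, plural)
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});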

Zapier POST with error "body used already for"

I'm basically following the code example verbatim, but trying to make a POST request with fetch:
fetch('YOUR URL HERE', { method: 'POST', 'body': content })
  .then(function (res) {
    console.log(res.text());
    return res.text();
  })
  .then(function (plain) {
    var output = { okay: true, raw: plain };
    callback(null, output);
  })
  .catch(callback);
This results in an angry red box with:
We had trouble sending your test through. Please try again. Error:
Error: body used already for: https://YOURURLHERE
I'm not sure what it means.
...and then I realized it's not a Zapier question, but a fetch issue.
Google says it's because of what I'm doing in the promise: I'm trying to read the res stream as text twice (note the two res.text() calls).
(For about 2 minutes I had omitted what I thought was an unimportant detail from the question, the console.log line, which was the culprit.)
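A corrected version of the same request (a sketch, keeping the placeholder URL) reads the response body exactly once:

fetch('YOUR URL HERE', { method: 'POST', body: content })
  .then(function (res) {
    // a fetch response body can only be consumed once, so call res.text() a single time
    return res.text();
  })
  .then(function (plain) {
    console.log(plain);
    var output = { okay: true, raw: plain };
    callback(null, output);
  })
  .catch(callback);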

Actions on Google - unable to use app.tell to give a response from JSON

I am trying to get my webhook to return a parsed JSON response from an API. I can log it to the console, but when I try to use app.tell, it gives me: TypeError: Cannot read property 'tell' of undefined. I am able to successfully get the data from the API, but I'm not able to use it in a response for some reason. Thanks for the help!
[Actions.API_TRY] () {
  var request = http.get(url2, function (response) {
    // data is streamed in chunks from the server
    // so we have to handle the "data" event
    var buffer = "",
      data,
      route;
    response.on("data", function (chunk) {
      buffer += chunk;
    });
    response.on("end", function (err) {
      // finished transferring data
      // dump the raw data
      console.log(buffer);
      console.log("\n");
      data = JSON.parse(buffer);
      route = data.routes[0];
      // extract the distance and time
      console.log("Walking Distance: " + route.legs[0].distance.text);
      console.log("Time: " + route.legs[0].duration.text);
      this.app.tell(route.legs[0].distance.text);
    });
  });
}
This looks to me to be more of a JavaScript scoping issue than anything else. The error message is telling you that app is undefined. Often in Actions, code like yours is embedded in a function defined inside the intent handler that is passed the instance of your Actions app (SDK or Dialogflow), and because the http callbacks are ordinary functions, this inside them no longer refers to the object that holds app, so this.app is undefined.
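For example, one way to keep a reference to app (a sketch based on the code in the question, assuming the handler has access to this.app when it is invoked) is to capture it before starting the request and let arrow functions preserve this:

[Actions.API_TRY] () {
  const app = this.app; // capture the app instance before entering the callbacks
  http.get(url2, (response) => {
    let buffer = "";
    response.on("data", (chunk) => {
      buffer += chunk;
    });
    response.on("end", () => {
      const data = JSON.parse(buffer);
      const route = data.routes[0];
      // "app" was captured above, so it is still defined here
      app.tell(route.legs[0].distance.text);
    });
  });
}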

Incrementally update Kendo UI autocomplete

I have a Kendo UI autocomplete bound to a remote transport, and I need to tweak how it works, but I am coming up blank.
Currently, I perform a bunch of searches on the server and integrate the results into a JSON response and then return this to the datasource for the autocomplete. The problem is that this can take a long time and our application is time sensitive.
We have identified which searches are most important and found that one search accounts for 95% of the chosen results. However, I still need to provide the data from the other searches. I was thinking of kicking off separate requests for data on the server and adding them to the autocomplete as they return. Our main search returns extremely fast and its results would be the first items added to the list. Then, as the other searches return, I would like their results to be added dynamically to the list.
Our application uses knockout.js and I thought about making the datasource part of our view model, but from looking around, Kendo doesn't update based on changes to your observables.
I am currently stumped and any advice would be welcomed.
Edit:
I have been experimenting and have had some success simulating what I want with the following datasource:
var dataSource = new kendo.data.DataSource({
  transport: {
    read: {
      url: window.performLookupUrl,
      data: function () {
        return {
          param1: $("#Input").val()
        };
      }
    },
    parameterMap: function (options) {
      return {
        param1: options.param1
      };
    }
  },
  serverFiltering: true,
  serverPaging: true,
  requestEnd: function (e) {
    if (e.type == "read") {
      window.setTimeout(function () {
        dataSource.add({ Name: "testin1234", Id: "X1234" })
      }, 2000);
    }
  }
});
If the first search returns results, then after 2 seconds a new item pops into the list. However, if the first search fails, then nothing happens. Is it proper to use (abuse?) requestEnd like this? My eventual goal is to kick off the rest of the searches from this function.
I contacted Telerik and they gave me the following jsbin, which I was able to modify to suit my needs:
http://jsbin.com/ezucuk/5/edit
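For reference, here is a minimal sketch of the incremental idea (the secondary endpoint name and result shape are hypothetical): once the primary read completes, kick off the slower lookup and push its results into the same data source as they arrive:

var dataSource = new kendo.data.DataSource({
  transport: {
    read: { url: window.performLookupUrl }
  },
  serverFiltering: true,
  requestEnd: function (e) {
    if (e.type === "read") {
      // hypothetical slower secondary search, appended as it returns
      $.getJSON(window.performSecondaryLookupUrl, { param1: $("#Input").val() })
        .done(function (items) {
          items.forEach(function (item) {
            dataSource.add(item); // e.g. { Name: "...", Id: "..." }
          });
        });
    }
  }
});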