uppy.io: sending base64 encoded data rather than specifying a file input - uppy

Is there a way to send base64 encoded data using uppy.io? I already have it working for 'soft-copy' document uploads using the Dashboard component, but I can't work out a way to pass the file bytes directly rather than using an input file tag to provide the data to be uploaded.
Context:
I have a page that uses a JavaScript component to access local scanner hardware. It scans and shows a preview, all working. The user then hits an upload button to push it to the server; the scanning component outputs the scan as base64 encoded data. I can send this up to the server using XMLHttpRequest like so:
var req = new XMLHttpRequest();
var formData = new FormData();
formData.append('fileName', uploadFileName);
formData.append('imageFileAsBase64String', imageFileAsBase64String);
req.open("POST", uploadFormUrl);
req.onreadystatechange = __uploadImages_readyStateChanged;
req.send(formData);
but I would really like to use uppy because scan files can be quite large, and I get resumable uploads, a nice progress bar, etc. I also already have tusdotnet on the back end set up and ready to receive it.
All the examples rely on input tags, so I don't really know what approach to take. Thanks for any pointers.

I eventually figured this out; here it is in case it's useful to anyone else.
You can use fetch to convert the base64 string to a blob, and then add the blob to uppy's files via the addFile API.
I referenced this article:
https://ionicframework.com/blog/converting-a-base64-string-to-a-blob-in-javascript/
The code below works with my setup, with tusdotnet handling the tus service server side.
var uppy = new Uppy.Core({
  autoProceed: true,
  debug: true
})
  .use(Uppy.Tus, { endpoint: 'https://localhost:44302/files/' })
  .use(Uppy.ProgressBar, {
    target: '.UppyInput-Progress',
    hideAfterFinish: false,
  })

uppy.on('upload', (data) => {
  uppy.setMeta({ md: 'value' })
})

uppy.on('complete', (result) => {
  // do completion stuff
})

fetch(`data:image/jpeg;base64,${imageFileAsBase64String}`)
  .then((response) => response.blob())
  .then((blob) => {
    uppy.addFile({
      name: 'image.jpg',
      type: 'image/jpeg',
      data: blob
    })
  });
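As an aside, the same Blob can be built without the fetch round trip by decoding the base64 string directly; a small sketch, shown only as an alternative to the snippet above:
// decode the base64 payload into raw bytes and wrap them in a Blob
var byteString = atob(imageFileAsBase64String);
var bytes = new Uint8Array(byteString.length);
for (var i = 0; i < byteString.length; i++) {
  bytes[i] = byteString.charCodeAt(i);
}

uppy.addFile({
  name: 'image.jpg',
  type: 'image/jpeg',
  data: new Blob([bytes], { type: 'image/jpeg' })
});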

Related

Get the path of selected file ( <input type="file"...> )

In my application there is an input field to upload a CV.
What I need to do is, when the file is selected, send that CV (PDF) as an attachment with an email to a user.
For that I'm using SendGrid. In SendGrid you have to arrange the email options like below.
const fs = require('fs');
pathToAttachment = `file_path`;
attachment = fs.readFileSync(pathToAttachment).toString('base64');

const email = {
  ...
  attachments: [
    {
      content: attachment,
      filename: 'file_name',
      type: 'application/pdf',
      disposition: 'attachment'
    }
  ]
  ...
}
So here I need to insert a file path to attach the PDF to the email. I used Bootstrap for the input field. So I need to know how I can get the selected file's path. At the moment I can only get the selected file using the event:
pdfFile = event.target.files[0];
In the example code, the attachment is loaded from the file system; however, in this case the attachment is being submitted via a web form with a file input. So you don't need to fetch the file from the file system, you deal with it from the form submission.
When you submit the form, the attachment is sent to your server as part of the request, normally in the multipart/form-data format. From your code example it looks like you are using Node.js, so I will also assume your server is an Express server. There are many ways to parse incoming multipart requests; one option is multer. Receiving your file upload with multer and then passing it on to SendGrid would look like this:
const express = require('express');
const app = express();
const multer = require('multer');

const memoryStore = multer.memoryStorage();
const upload = multer({ storage: memoryStore });

app.post('/profile', upload.single("cv"), async function (req, res, next) {
  // req.file is the "cv" file
  const email = {
    from: FROM,
    to: TO,
    text: "This has an attachment",
    attachments: [
      {
        content: req.file.buffer.toString("base64"),
        filename: "cv.pdf",
        type: "application/pdf",
        disposition: "attachment",
      }
    ]
  };
  await sg.mail(email);
  res.send("OK");
})
I chose memory storage for this file as it doesn't necessarily need to be written to disk. You may want to write the file to disk too though, and there are other considerations about using memory for this.
Because the file is in memory, req.file has an object that describes the file and has a buffer property that contains the contents. SendGrid needs you to base64 encode your attachments, so we call req.file.buffer.toString("base64").
This is a quick example, I recommend you read the documentation for multer to understand how your uploads work and how you can apply this to sending email attachments.
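On the client side, the selected file from the question can be posted straight to that route without any manual base64 work; a rough sketch (the /profile URL and the "cv" field name come from the snippet above, the rest is assumed):
// pdfFile is the File object from the question's change event
const formData = new FormData();
formData.append('cv', pdfFile); // the field name must match upload.single("cv") on the server

fetch('/profile', {
  method: 'POST',
  body: formData // the browser sets the multipart/form-data boundary itself
});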

Mistake in using DOMPurify on the backend to sanitize form data?

I was wondering if it was possible to use DOMPurify to sanitize user input on a form before it is saved to database. Here's what I've got in my routes.js folder for my form post:
.post('/questionForm', (req, res, next) => {
  console.log(req.body);
  /*console.log(req.headers);*/
  const questions = new QuestionForm({
    _id: mongoose.Types.ObjectId(),
    price: req.body.price,
    seats: req.body.seats,
    body_style: req.body.body_style,
    personality: req.body.personality,
    activity: req.body.activity,
    driving: req.body.driving,
    priority: req.body.priority
  });
  var qClean = DOMPurify.sanitize(questions);
  //res.redirect(200, path)({
  //  res: "Message recieved. Check for a response later."
  //});
  qClean.save()
    .then(result => {
      //res.redirect(200, '/path')({
      //  //res: "Message recieved. Check for a response later."
      //});
      res.status(200).json({
        docs: [questions]
      });
    })
    .catch(err => {
      console.log(err);
    });
});
I also imported the package at the top of the page with
import DOMPurify from 'dompurify';
When I run the server and submit a post request, it throws a 500 error and claims that dompurify.sanitize is not a function. Am I using it in the wrong place, and/or is it even correct to use it in the back end at all?
This might be a bit late, but for others who, like me, happen to run into this use case: I found an npm package that seems well suited so far. It's called isomorphic-dompurify.
isomorphic-dompurify
DOMPurify needs a DOM to interact with, which is usually supplied by the browser. isomorphic-dompurify feeds DOMPurify another package, jsdom, as a dependency that acts as a substitute virtual DOM, so DOMPurify knows how to sanitize your input server-side.
In the packages' own words "DOMPurify needs a DOM tree to base on, which is not available in Node by default. To work on the server side, we need a fake DOM to be created and supplied to DOMPurify. It means that DOMPurify initialization logic on server is not the same as on client".
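For example, a rough sketch of how it could slot into the route from the question, assuming an Express router (the require call and the field shown are illustrative, not from the original post):
const DOMPurify = require('isomorphic-dompurify');

router.post('/questionForm', (req, res, next) => {
  // sanitize() works on strings, so clean each field before building the document
  const personality = DOMPurify.sanitize(req.body.personality);
  // ...construct and save the QuestionForm with the cleaned values
});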
Building on Seth Lyness's excellent answer: if you'd rather not add another dependency, you can just use this code before you require DOMPurify. Basically, what isomorphic-dompurify does is create a jsdom window and put it in global.window.
const jsdom = require('jsdom');
const {JSDOM} = jsdom;
const {window} = new JSDOM('<!DOCTYPE html>');
global.window = window;
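With global.window set, requiring dompurify afterwards gives you a working sanitize(); a rough sketch against the question's route (note that sanitize() expects a string, so it is the individual fields, not the Mongoose document, that should be cleaned):
const DOMPurify = require('dompurify');

const questions = new QuestionForm({
  _id: mongoose.Types.ObjectId(),
  price: DOMPurify.sanitize(req.body.price),
  seats: DOMPurify.sanitize(req.body.seats),
  // ...sanitize the remaining fields the same way
});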

Using Puppeteer, how to get Chrome DevTools' "Network" tab's timing information?

Below is a screenshot of me accessing https://www.ted.com and inspecting the Google Chrome DevTools' "Network" tab, and viewing the Timing data for the root & child requests.
How can I get all of the above timing data programmatically using Puppeteer? Ideally, it would look like this JSON structure:
{
  name: "www.ted.com",
  queueTime: 0,
  startedTime: 1.93,
  stalledTime: 4.59,
  dnsLookupTime: 10.67,
  initialConnectionTime: <the number of milliseconds>,
  ...
},
{
  name: <the next child request>,
  ...
}
You want to check out HAR (HTTP Archive) files, which is what you would create by saving the data via Chrome.
A quotation on what a HAR file is (source):
The HAR file format is an evolving standard and the information contained within it is both flexible and extensible. You can expect a HAR file to include a breakdown of timings including:
How long it takes to fetch DNS information
How long each object takes to be requested
How long it takes to connect to the server
How long it takes to transfer assets from the server to the browser of each object
The data is stored as a JSON document and extracting meaning from the low-level data is not always easy. But with practice a HAR file can quickly help you identify the key performance problems with a web page, letting you efficiently target your development efforts at areas of your site that will deliver the greatest results.
There are libraries like puppeteer-har and chrome-har which can create HAR files by using puppeteer.
Code sample (for puppeteer-har, quoted from the page):
const har = new PuppeteerHar(page);
await har.start({ path: 'results.har' });
await page.goto('http://example.com');
await har.stop();
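For completeness, a rough end-to-end sketch of recording a HAR with puppeteer-har and then reading the per-request timings back out of the file (the log.entries and timings field names follow the HAR format; treat this as illustrative rather than exact):
const fs = require('fs');
const puppeteer = require('puppeteer');
const PuppeteerHar = require('puppeteer-har');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const har = new PuppeteerHar(page);
  await har.start({ path: 'results.har' });
  await page.goto('https://www.ted.com/');
  await har.stop();
  await browser.close();

  // each HAR entry carries a timings object: blocked, dns, connect, ssl, send, wait, receive
  const { log } = JSON.parse(fs.readFileSync('results.har', 'utf8'));
  for (const entry of log.entries) {
    console.log(entry.request.url, entry.time, entry.timings);
  }
})();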
HAR files are a good option, but if you want something a bit more custom you can use Puppeteer to record request timing data by navigating to the page you wish to analyze and tapping into the Chrome DevTools Protocol.
const pptr = require('puppeteer');

(async function () {
  // launch in headless mode & create a new page
  const browser = await pptr.launch({
    headless: true,
  });
  const page = await browser.newPage();

  // attach cdp session to page
  const client = await page.target().createCDPSession();
  await client.send('Debugger.enable');
  await client.send('Debugger.setAsyncCallStackDepth', { maxDepth: 32 });

  // enable network
  await client.send('Network.enable');

  // attach callback to network response event
  client.on('Network.responseReceived', (params) => {
    const { response: { timing } } = params;
    /*
     * See: https://chromedevtools.github.io/devtools-protocol
     * /tot/Network/#type-ResourceTiming for complete list of
     * timing data available under 'timing'
     */
  });

  await page.goto('https://www.ted.com/', {
    waitUntil: 'networkidle2',
  });

  // cleanup
  await browser.close();
})();
For your case, you can listen for the Network.responseReceived event, and parse out the responseTime parameter nested within the response property on the response object provided in the event listener callback. Their documentation on the interfaces is quite good. I'll list them below:
Chrome DevTools Protocol Docs
Data you can expect to receive from every Network.responseReceived event callback: Network.responseReceived.
More specific response-related data, in the response property: Network.Response.
And, finally, the nested request timing data you are looking for, under timing: Network.ResourceTiming.
You may also want to check out the Network.requestWillBeSent interface. You will be able to match up requests and responses by requestId.
From here, you can capture more data than you could ever need about the page you're visiting, and you can obviously format it however you wish.
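For example, here is a minimal sketch of that requestId bookkeeping, building on the client CDP session from the snippet above (the timing field names come from Network.ResourceTiming; a value of -1 means the phase did not occur, e.g. a reused connection):
const requestsById = new Map();

client.on('Network.requestWillBeSent', (params) => {
  requestsById.set(params.requestId, { name: params.request.url });
});

client.on('Network.responseReceived', (params) => {
  const entry = requestsById.get(params.requestId);
  const timing = params.response.timing; // can be missing for cached responses
  if (!entry || !timing) return;

  entry.dnsLookupTime = timing.dnsEnd - timing.dnsStart;
  entry.initialConnectionTime = timing.connectEnd - timing.connectStart;
  entry.timeToFirstByte = timing.receiveHeadersEnd - timing.sendEnd;
});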
Currently, you can also get this information without a HAR file, using performance.getEntriesByType("resource"):
// Obtain PerformanceEntry objects for resources
const performanceTiming = JSON.parse(
  await page.evaluate(() =>
    JSON.stringify(performance.getEntriesByType("resource"))
  )
);

// Optionally filter resource results to find your specifics - ex. filters on URL
const imageRequests = performanceTiming.filter((e) =>
  e.name.endsWith("/images")
);
console.log("Image Requests ", imageRequests)

Uploading images to s3 through stitch aws service fails

Sorry, I am a noob, but I am building a Quasar frontend using MongoDB Stitch as the backend.
I am trying to upload an image using the stitch javascript sdks and the AwsRequest.Builder.
Quasar gives me an image object with base64 encoded data.
I remove the header string from the base64 string (the part that says "data:image/jpeg;base64,"), I convert it to Binary and upload it to the aws s3 bucket.
I can get the data to upload just fine and when I download it again I get the exact bytes that I have uploaded, so the roundtrip through stitch to aws S3 and back seems to work.
Only, the image I upload can neither be opened in S3 nor be opened once downloaded.
The difficulties seem to be in the conversion to binary of the base64 string and/or in the choice of the proper upload parameters for stitch.
Here is my code:
var fileSrc = file.__img.src // valid base64 encoded image with header string
var fileData = fileSrc.substr(fileSrc.indexOf(',') + 1) // stripping out header string
var body = BSON.Binary.fromBase64(fileData, 0) // here I get the BSON error

const args = {
  ACL: 'public-read',
  Bucket: 'elever-erp-document-store',
  ContentType: file.type,
  ContentEncoding: 'x-www-form-urlencoded', // not sure about the need to specify encoding for binary file
  Key: file.name,
  Body: body
}

const request = new AwsRequest.Builder()
  .withService('s3')
  .withRegion('eu-west-1')
  .withAction('PutObject')
  .withArgs(args)

aws.execute(request.build())
  .then(result => {
    alert('OK ' + result)
    return file
  }).catch(err => {
    alert('error ' + err)
  })
In the snippet above I try to use BSON.Binary.fromBase64 for the conversion to binary, as per Haley's suggestion below, but I get the following error:
boot_stitch__WEBPACK_IMPORTED_MODULE_3__["BSON"].Binary.fromBase64 is not a function.
I have also tried other ways to convert the base64 string to binary, like the vanilla atob() function and the buffer npm module, but with no joy.
I must be doing something stupid somewhere but I cannot find my way out.
I had a similar issue and solved it by creating a buffer from the base64 data and then using new BSON.Binary(new Uint8Array(fileBuffer), 0) to create the BSON Binary object.
Using the OP's code, it would look something like this:
var fileSrc = file.__img.src // valid base64 encoded image with header string
var fileData = fileSrc.substr(fileSrc.indexOf(',') + 1) // stripping out header string
var fileBuffer = Buffer.from(fileData, 'base64'); // Buffer.from is the non-deprecated replacement for new Buffer()
var body = new BSON.Binary(new Uint8Array(fileBuffer), 0)
You should be able to convert the base64 image to BSON.Binary and then upload the actual image that way (I have some of the values hard-coded, but you can replace those):
context.services.get("<aws-svc-name>").s3("<your-region>").PutObject({
  Bucket: 'myBucket',
  Key: "hello.png",
  ContentType: "image/png",
  Body: BSON.Binary.fromBase64("iVBORw0KGgoAA... (rest of the base64 string)", 0),
})

Meteor: Saving images from urls to AWS S3 storage

I am trying, server-side, to take an image from the web by its URL (i.e. http://www.skrenta.com/images/stackoverflow.jpg) and save this image to my AWS S3 bucket using Meteor, the aws-sdk meteorite package, as well as the http meteor package.
This is my attempt, which does put a file in my bucket (someImageFile.jpg), but the image file is corrupted and cannot be displayed by a browser or a viewer application.
Probably I am doing something wrong with the encoding of the file. I tried many combinations and none of them worked. I also tried adding ContentLength and/or ContentEncoding with different encodings like binary, hex, and base64 (also in combination with Buffer.toString("base64")); none of them worked. Any advice will be greatly appreciated!
This is in my server-side code:
var url = "http://www.skrenta.com/images/stackoverflow.jpg";
HTTP.get(url, function(err, data) {
  if (err) {
    console.log("Error: " + err);
  } else {
    //console.log("Result: " + JSON.stringify(data));
    //uncommenting above line fills up the console with raw image data
    s3.putObject({
      ACL: "public-read",
      Bucket: "MY_BUCKET",
      Key: "someImageFile.jpg",
      Body: new Buffer(data.content, "binary"),
      ContentType: data.headers["content-type"], // = image/jpeg
      //ContentLength: parseInt(data.headers["content-length"]),
      //ContentEncoding: "binary"
    },
    function(err, data) { // CALLBACK OF HTTP GET
      if (err) {
        console.log("S3 Error: " + err);
      } else {
        console.log("S3 Data: " + JSON.stringify(data));
      }
    });
  }
});
Actually I am trying to use the filepicker.io REST API via HTTP calls, i.e. for storing a converted image to my S3, but this is the minimal example that demonstrates the actual problem.
After several trial and error runs I gave up on Meteor.HTTP and put together the code below; maybe it will help somebody running into encoding issues with Meteor.HTTP.
Meteor.HTTP seems to be meant for fetching JSON or text data from remote APIs and the like; it does not seem to be quite the right choice for binary data. However, the Npm http module definitely does support binary data, so this works like a charm:
var http = Npm.require("http");
url = "http://www.whatever.com/check.jpg";

var req = http.get(url, function(resp) {
  var buf = new Buffer("", "binary");
  resp.on('data', function(chunk) {
    buf = Buffer.concat([buf, chunk]);
  });
  resp.on('end', function() {
    var thisObject = {
      ACL: "public-read",
      Bucket: "mybucket",
      Key: "myNiceImage.jpg",
      Body: buf,
      ContentType: resp.headers["content-type"],
      ContentLength: buf.length
    };
    s3.putObject(thisObject, function(err, data) {
      if (err) {
        console.log("S3 Error: " + err);
      } else {
        console.log("S3 Data: " + JSON.stringify(data));
      }
    });
  });
});
The best solution is to look at what has already been done in this regard:
https://github.com/Lepozepo/S3
Also filepicker.io seems pretty simple:
Integrating Filepicker.IO with Meteor