Storing data URLs in MongoDB

I am considering storing data URLs in my MongoDB instead of storing a reference to a file or using GridFS.
Data URL:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMcAAAEsCAYAAAB38aczAAAgAElEQV
All of the files I am storing are JPG or PNG, and are less than 1MB in size.
I am wondering if this is considered bad practice, and what the performance implications are for both read and write operations when storing the data URLs 1) in a separate collection or 2) as metadata in an existing collection.
I'm open to any other suggestions for small file storage.

First, I wouldn't store base64 encoded data in a database that is perfectly capable of storing binary data. That's just a waste of space. Store the image itself, not its base64 representation, i.e. not data : "VBORw0KGgoAAAANSUhEUgAAA...", but data : BinData("VBORw0KGgoAAAANSUhEUgAAA...") (the former is a string to MongoDB, the latter is binary data). Base64 increases the size by 33%.
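A minimal sketch of what that looks like from Node.js, assuming the official mongodb driver and a hypothetical images collection; a Node Buffer is serialized as BSON binary (BinData), so no base64 step is involved:
const fs = require('fs');
const { MongoClient } = require('mongodb');

async function saveImage(path) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const images = client.db('app').collection('images');

  // Read the raw bytes; the driver stores a Buffer as BSON binary,
  // so the document stays roughly 25% smaller than a base64 string.
  const bytes = fs.readFileSync(path);
  await images.insertOne({ contentType: 'image/png', data: bytes });

  await client.close();
}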
Other than that, I think this is fine. The trade off is 1 request that grabs all the data vs. multiple requests. The downside of storing larger chunks of data is that all the data must be in RAM for a moment, but at 1MB that's probably a non-issue.
You should, however, make sure that you don't fetch the document in situations where you don't need the image. 1MB isn't too much, but for a read-heavy collection it's a disaster.
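For example (a sketch, assuming the image bytes live in a field called image), exclude the binary field with a projection on any read path that doesn't render the picture:
// List views and other read-heavy queries skip the image bytes entirely;
// fetch the full document only when the image is actually needed.
const doc = await collection.findOne(
  { _id: id },
  { projection: { image: 0 } }
);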

I just finished a solution for this. It works with Ajax, so you can use fetch calls in JavaScript with it. Strangely, I couldn't find this solution anywhere on the internet, which is why I'm posting it here to help others who want to work with images as data URIs :-)
Model:
cmsImage: { data: Buffer, contentType: String }
Storing in MongoDB:
const fs = require('fs');

let rawData = fs.readFileSync(`${root}public/uploads/` + file.originalname);
let base64Data = Buffer.from(rawData).toString('base64');

// build the image sub-document
let image = {
  cmsImage: {
    data: base64Data,
    contentType: file.mimetype
  }
};

// update the existing record in the database
await this.model.findByIdAndUpdate(body.id, image);
Retrieving from MongoDB and creating the image element dynamically:
// decode the stored buffer back into the base64 string
let imageArray = new Int8Array(image.data.data);
let decodedImage = new TextDecoder().decode(imageArray);

// build the image element with a data URI source
let cmsImage = document.createElement("img");
cmsImage.src = "data:" + image.contentType + ";base64," + decodedImage;
cmsImage.alt = "image";
cmsContent.appendChild(cmsImage);
Multer - use the original file name for the upload to the database.
const multer = require('multer');

let storage = multer.diskStorage({
  destination: function (req, file, cb) {
    cb(null, './public/uploads')
  },
  filename: function (req, file, cb) {
    cb(null, file.originalname)
  }
});
this.upload = multer({ storage: storage });
Upload to the directory (used as route middleware; uploadImage holds the form field name):
this.upload.single(uploadImage)
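For context, a sketch of how that middleware is typically wired into an Express route; the route path and the storeImage helper are assumptions, not part of the original setup:
// uploadImage must match the name of the file field sent by the client.
app.post('/upload', this.upload.single(uploadImage), async (req, res) => {
  // req.file.originalname and req.file.mimetype feed the storing code above.
  await this.storeImage(req.file, req.body); // hypothetical helper wrapping the snippet above
  res.sendStatus(200);
});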

Related

Using uppy.io to send base64 encoded data rather than specifying a file input

Is there a way to send base64 encoded data using uppy.io? I already have it working for 'soft-copy' document uploads using the Dashboard component, but I can't seem to work out a way to pass the file bytes directly rather than using an input file tag to provide the data to be uploaded.
Context:
I have a page that uses a JavaScript component to access local scanner hardware. It scans, shows a preview, all working. The user then hits an upload button to push it to the server, the scanning component outputs the scan as base64 encoded data. I can send this up to the server using XMLHttpRequest like so:
var req = new XMLHttpRequest();
var formData = new FormData();
formData.append('fileName', uploadFileName);
formData.append('imageFileAsBase64String', imageFileAsBase64String);
req.open("POST", uploadFormUrl);
req.onreadystatechange = __uploadImages_readyStateChanged;
req.send(formData);
but I would really like to use uppy because scan files can be quite large, and I get resumable uploads, a nice progress bar, etc., and I already have tusdotnet set up on the back end and ready to receive them.
All the examples rely on input tags, so I don't really know what approach to take. Thanks for any pointers.
I eventually figured this out; posting it here in case it's useful to anyone else.
You can use fetch to convert the base64 string, turn it into a blob, and finally add it to Uppy's files via the addFile API.
I referenced this article:
https://ionicframework.com/blog/converting-a-base64-string-to-a-blob-in-javascript/
The code below works with my setup, with tusdotnet handling the tus service server side.
var uppy = new Uppy.Core({
  autoProceed: true,
  debug: true
})
  .use(Uppy.Tus, { endpoint: 'https://localhost:44302/files/' })
  .use(Uppy.ProgressBar, {
    target: '.UppyInput-Progress',
    hideAfterFinish: false,
  })

uppy.on('upload', (data) => {
  uppy.setMeta({ md: 'value' })
})

uppy.on('complete', (result) => {
  // do completion stuff
})

fetch(`data:image/jpeg;base64,${imageFileAsBase64String}`)
  .then((response) => response.blob())
  .then((blob) => {
    uppy.addFile({
      name: 'image.jpg',
      type: 'image/jpeg',
      data: blob
    })
  });

MediaRecorder No Metadata on Download

I'm using MediaRecorder (along with the Web Audio API) to record and process audio and download the blob that it generates. The recording and downloading work great, but there is no metadata when the file is downloaded (length, sample rate, channels, etc.)
I'm using this to create the blob, and I've also tried the mimetype with no luck:
const blob = new Blob(chunks, {
  'type': 'audio/wav'
});
chunks = [];
const audioURL = window.URL.createObjectURL(blob);
audio.src = audioURL;
console.log("recorder stopped");
var new_file = document.getElementById('downloadblob').src
var download_link = document.getElementById("download_link");
download_link.href = new_file;
var name = generateFileName();
download_link.download = name;
How could I ensure the length of the recording, sample rate, and other metadata are included in the download?
I don't know of any browser which allows you to record something as audio/wav. You can get the mimeType of your recording from the instance of the MediaRecorder.
const blob = new Blob(chunks, {
  'type': mediaRecorder.mimeType
});
Please note that the length will only be correct if you omit the timeslice parameter when calling mediaRecorder.start(). Otherwise the browser doesn't know the final length of the file when generating the metadata.
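A sketch of the full flow under those constraints, assuming stream comes from getUserMedia({ audio: true }) and audio is an existing audio element:
const mediaRecorder = new MediaRecorder(stream);
const chunks = [];

mediaRecorder.ondataavailable = (event) => chunks.push(event.data);

mediaRecorder.onstop = () => {
  // Use the type the browser actually recorded with (e.g. audio/webm),
  // not a hard-coded 'audio/wav'.
  const blob = new Blob(chunks, { type: mediaRecorder.mimeType });
  audio.src = window.URL.createObjectURL(blob);
};

// Start without a timeslice so the browser can write correct duration
// metadata when the recording stops.
mediaRecorder.start();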

How to upload a file to MongoDB with Mongoose using NestJS?

Hello, does somebody know how to upload a file to MongoDB (Mongoose) using NestJS?
I already have @Post file upload and @Get working in my NestJS project, but now I want to persist the file to MongoDB using Mongoose. Please help.
I don't recommend storing images in your database, but you can do this:
async function saveFile(file: Express.Multer.File) {
  // Convert the file to a base64 string
  const fileB64 = file.buffer.toString('base64')
  // userModel is a mongoose model
  // Store the string
  await this.userModel.create({ file: fileB64 })
}

async function getFile(userId: string) {
  // Get the user from the database
  const user = await this.userModel.findOne({ _id: userId }).lean()
  if (!user) throw new NotFoundException('User not found')
  const file = user.file
  // Convert the string back to a buffer
  return Buffer.from(file, 'base64')
}
First you have to convert the file to a string with base64 encoding; then you can save that string in your database with the create method or by updating a document.
If you want to get the file back, just look it up in your database, convert the string to a buffer, and return it.
Like I said before, I don't recommend this; it is better to upload the buffer to S3 and save the link in your database.
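For reference, a minimal sketch of that alternative, assuming the @aws-sdk/client-s3 package and a hypothetical bucket name; the raw buffer is uploaded and only the object key is stored on the document:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

async function saveFileToS3(file: Express.Multer.File) {
  const key = `uploads/${Date.now()}-${file.originalname}`;

  // Upload the raw buffer; no base64 round trip needed.
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket', // assumed bucket name
    Key: key,
    Body: file.buffer,
    ContentType: file.mimetype,
  }));

  // Persist only the key (or a full URL) on the user document.
  await this.userModel.create({ file: key });
}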
Thanks, it worked, but I only get a buffer and can't see the image! Is there any other option to get it as an image? Here is what I'm getting:
{"type":"Buffer","data":[255,216,255,224,0,16,74,70,73,70,0,1,1,0,0,72,0,72,0,0,255,225,0,120,69,120,105,102,0,0,73,73,42,0,8,0,0,0,4,0,18,1,3,0,1,0,0,0,1,0,0,0,49,1,2,0,7,0,0,0,62,0,0,0,18,2,3,0,2,0,0,0,2,0,2,0,105,135,4,0,1,0,0,0,70,0,0,0,0,0,0,0,71,111,111,103,108,101,0,0,3,0,0,144,7,0,4,0,0,0,48,50,50,48,2,160,4,0,1,0,0,0,208,2,0,0,3,160,4,0,1,0,0,0,0,5,0,0,0,0,0,0,255,192,0,17,8,5,0,2,208,3,1,34,0,2,17,1,3,17,1,255,196,0,31,0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0,0,1,2,3,4,5,6,7,8,9,10,11,255,196,0,181,16,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,125,1,2,3,0,4,17,5,18,33,49,65,6,19,81,97,7,34,113,20,50,129,145,161,8,35,66,177,193....
Angular service file
postFile(fileToUpload: File): Observable<any> {
  const formaData: FormData = new FormData();
  formaData.append('fileKey', fileToUpload, fileToUpload.name);
  return this.http.post(`${this.backEndURL}/api/prods/upload/two/tree/`, JSON.stringify(formaData));
}
but my NestJS backend throws an error:
[ExceptionsHandler] Cannot read property 'buffer' of undefined +80859ms
TypeError: Cannot read property 'buffer' of undefined

Uploading images to S3 through the Stitch AWS service fails

Sorry, I am a noob, but I am building a Quasar frontend using MongoDB Stitch as the backend.
I am trying to upload an image using the Stitch JavaScript SDKs and the AwsRequest.Builder.
Quasar gives me an image object with base64 encoded data.
I remove the header from the base64 string (the part that says "data:image/jpeg;base64,"), convert it to binary and upload it to the AWS S3 bucket.
I can get the data to upload just fine and when I download it again I get the exact bytes that I have uploaded, so the roundtrip through stitch to aws S3 and back seems to work.
Only, the image I upload can neither be opened in S3 nor opened once downloaded.
The difficulties seem to be in the conversion to binary of the base64 string and/or in the choice of the proper upload parameters for stitch.
Here is my code:
var fileSrc = file.__img.src // valid base64 encoded image with header string
var fileData = fileSrc.substr(fileSrc.indexOf(',') + 1) // stripping out header string
var body = BSON.Binary.fromBase64(fileData, 0) // here I get the BSON error

const args = {
  ACL: 'public-read',
  Bucket: 'elever-erp-document-store',
  ContentType: file.type,
  ContentEncoding: 'x-www-form-urlencoded', // not sure about the need to specify encoding for binary file
  Key: file.name,
  Body: body
}

const request = new AwsRequest.Builder()
  .withService('s3')
  .withRegion('eu-west-1')
  .withAction('PutObject')
  .withArgs(args)

aws.execute(request.build())
  .then(result => {
    alert('OK ' + result)
    return file
  }).catch(err => {
    alert('error ' + err)
  })
In the snippet above I try to use BSON.Binary.fromBase64 for the conversion to binary, as per Haley's suggestion below, but I get the following error:
boot_stitch__WEBPACK_IMPORTED_MODULE_3__["BSON"].Binary.fromBase64 is not a function.
I have also tried other ways to convert the base64 string to binary, like the vanilla atob() function and the buffer npm module, but with no joy.
I must be doing something stupid somewhere but I cannot find my way out.
I had a similar issue; I solved it by creating a buffer from the base64 data and then using new BSON.Binary(new Uint8Array(fileBuffer), 0) to create the BSON Binary object.
Using the OP's code it would look something like this:
var fileSrc = file.__img.src // valid base64 encoded image with header string
var fileData = fileSrc.substr(fileSrc.indexOf(',') + 1) // stripping out header string
var fileBuffer = Buffer.from(fileData, 'base64'); // Buffer.from() replaces the deprecated new Buffer()
var body = new BSON.Binary(new Uint8Array(fileBuffer), 0)
You should be able to convert the base64 image to BSON.Binary and then upload the actual image that way (I have some of the values hard-coded, but you can replace those):
context.services.get("<aws-svc-name>").s3("<your-region>").PutObject({
  Bucket: 'myBucket',
  Key: "hello.png",
  ContentType: "image/png",
  Body: BSON.Binary.fromBase64("iVBORw0KGgoAA... (rest of the base64 string)", 0),
})

Read Uint8Array buffer from a collection and download as PDF

I saved a PDF file in a collection using this function:
/*** client.js ***/
// assign a change event to the input tag
'change input': function (event, template) {
  var file = event.target.files[0]; // assuming 1 file only
  if (!file) return;
  var reader = new FileReader(); // create a reader according to the HTML5 File API
  reader.onload = function (event) {
    var buffer = new Uint8Array(reader.result); // convert to binary
    Meteor.call('saveFile', buffer);
  };
  reader.readAsArrayBuffer(file); // read the file as an ArrayBuffer
}

/*** server.js ***/
Files = new Mongo.Collection('files');
Meteor.methods({
  'saveFile': function (buffer) {
    Files.insert({ data: buffer })
  }
});
How can I read it again from the collection and provide a download link that the user can download the file as a pdf and save it on the local computer?
It depends on what the data type ends up being on the front end when you see that document record in your MiniMongo collection. What you want to do is convert that Uint8Array data to a base64-encoded data URL and provide a Download PDF link once you have the data in the browser.
Meteor does not support serving files from the server out of the box, so you'll likely have to publish the file's blob via the Mongo → MiniMongo publication/subscription mechanism and then build an HTML data URI from it, as described above.
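A sketch of that last step on the client, assuming the subscribed document is available as doc and its data field arrives as a Uint8Array:
// Turn the stored bytes back into a base64 data URL.
var bytes = doc.data; // Uint8Array from MiniMongo
var binary = '';
for (var i = 0; i < bytes.length; i++) {
  binary += String.fromCharCode(bytes[i]);
}
var dataUrl = 'data:application/pdf;base64,' + window.btoa(binary);

// Offer it as a download link the user can click to save the PDF locally.
var link = document.createElement('a');
link.href = dataUrl;
link.download = 'document.pdf';
link.textContent = 'Download PDF';
document.body.appendChild(link);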