How to insert a blob using knex? - postgresql

Currently, I have an upload system using ng-file-upload to another server, which is working well thanks to CORS.
To manage my database I use knex (migrations and seeds), and I have a specific table with a bytea column, in a PostgreSQL database.
To make the upload possible, I've added the busboy module so that Express can handle multipart requests, and the file is being saved to disk with no problem.
But what I really want is to save it in the table, in the bytea column, and so far I've had no luck on that quest.
Any guidance and better documentation are welcome.

After a long time I figured it out.
In the end it is dead simple to make uploads work with angular+express+knex+postgres.
First of all, there's no need for busboy; instead, you'll need bodyParser's raw mode.
Second, configure it to accept a reasonable upload size.
Third, ng-file-upload will help with the upload part.
Here are a few snippets if anyone is in need of them:
Upload button:
<div layout="row" layout-align="center center">
    <md-button ngf-select ng-model="arquivo" class="md-raised md-primary">Selecionar arquivo</md-button>
    <md-button ng-show="arquivo" ng-click="arquivo = null" class="md-raised md-warn">Cancelar</md-button>
    <md-button ng-show="arquivo" ng-click="sendarquivo(arquivo)" class="md-raised md-primary" ng-disabled="arquivo.size > 4096 * 1024">Enviar arquivo</md-button>
</div>
controller.sendarquivo:
$scope.sendarquivo = function (arquivo) {
    enqueteservice.uploadanexo(idenquete, arquivo).then(function () {
        $scope.list();
        $scope.arquivo = null;
    });
};
enqueteservice.uploadanexo:
// enquete (poll) service
angular.module("roundabout").factory("enqueteservice", function ($http, Upload) {
    return {
        uploadanexo: function (idenquete, file) {
            return Upload.http({
                url: "/enquete/" + idenquete + "/uploadanexo/" + file.name,
                method: 'POST',
                headers: {
                    'Content-Type': 'application/octet-stream' // file.type //
                },
                data: file
            });
        }
    };
});
On the server side, the Express router:
router.post("/:idenquete/uploadanexo/:descricaoanexoenquete", function (req, res) {
    knex("anexoenquete").insert({
        idenquete: req.params.idenquete,
        descricaoanexoenquete: req.params.descricaoanexoenquete,
        dadoanexoenquete: req.body
    }, "idanexoenquete").then(function (ret) {
        res.send("idanexoenquete:" + ret[0]);
    }).catch(function (err) {
        res.status(500).send(err);
        console.log(err);
    });
});
For reference, the bodyParser setup in index.js:
// ...
app.use(bodyParser.json({limit: 1024 * 1024}));// 1MB of json is a lot of json
// parse some custom thing into a Buffer
app.use(bodyParser.raw({limit: 10240 * 1024, type: 'application/octet-stream'})); // 10 MB of attachments
With this setup, the ng-file-upload body will arrive at the Express router as a Buffer, which you can pass directly to the knex insert statement.
Downloading binary content can also be handled easily, as follows:
Download attachment:
router.get("/downloadanexo/:idanexoenquete", function (req, res) {
    knex("anexoenquete").select().where({
        idanexoenquete: req.params.idanexoenquete
    }).then(function (ret) {
        if (!ret.length)
            res.status(404).send("NOT FOUND");
        else {
            var anexoenquete = ret[0];
            res.setHeader("Content-disposition", "attachment;filename=" + anexoenquete.descricaoanexoenquete);
            res.send(anexoenquete.dadoanexoenquete);
        }
    }).catch(function (err) {
        res.status(500).send(err);
        console.log(err);
    });
});
Hope this reference helps anyone else in the future; I'm now able to shut down a simple Java app that had been solving this issue for me.

The best approach is to use Amazon S3 or another storage service for the blobs themselves, while storing only the metadata in SQL.
If you want to store blobs in the database anyway, you can use the SQL driver directly together with Bluebird.
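For illustration, a minimal sketch of that approach, assuming the pg driver and Bluebird's promisifyAll; the connection string, table, and column names here are hypothetical:
var fs = require("fs");
var Promise = require("bluebird");
var pg = require("pg");

var client = new pg.Client("postgres://localhost/mydb"); // hypothetical connection string
Promise.promisifyAll(client); // adds connectAsync/queryAsync, etc.

client.connectAsync().then(function () {
    // a Node Buffer bound to $2 is stored into a bytea column
    return client.queryAsync(
        "INSERT INTO attachment (name, data) VALUES ($1, $2)",
        ["photo.jpg", fs.readFileSync("photo.jpg")]
    );
}).then(function () {
    client.end();
});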

Related

Pg-promise - How to stream binary data directly to response

Forgive me, I'm still learning. I'm trying to download some mp3 files that I have stored in a table. I can download files directly from the file system like this:
if (fs.existsSync(filename)) {
    res.setHeader('Content-disposition', 'attachment; filename=' + filename);
    res.setHeader('Content-Type', 'application/audio/mpeg3');
    var rstream = fs.createReadStream(filename);
    rstream.pipe(res);
}
I have stored the data in the table using the pg-promise example in the docs, like so:
const rs = fs.createReadStream(filename);

function receiver(_, data) {
    function source(index) {
        if (index < data.length) {
            return data[index];
        }
    }

    function dest(index, data) {
        return this.none('INSERT INTO test_bin (utterance) VALUES($1)', data);
    }

    return this.sequence(source, {dest});
} // end receiver func

rep.tx(t => {
        return streamRead.call(t, rs, receiver);
    })
    .then(data => {
        console.log('DATA:', data);
    })
    .catch(error => {
        console.log('ERROR: ', error);
    });
But now I want to take that data out of the table and download it to the client. The example in the docs for reading binary data back converts it to JSON and then prints it to the console, like this:
db.stream(qs, s => {
    s.pipe(JSONStream.stringify()).pipe(process.stdout);
});
and that works, so the data is coming out of the database OK. But I can't seem to send it to the client. Since the data is already a stream, I have tried:
db.stream(qs, s => {
    s.pipe(res);
});
But I get a TypeError: First argument must be a string or Buffer.
Alternatively, I could take that stream, write it to the file system, and then serve it from the file system as in the first snippet above, but that seems like a workaround. I wish the docs had an example of how to save to a file.
What step am I missing?
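A minimal sketch of one likely fix, assuming a hypothetical id column alongside the question's utterance: the rows coming off db.stream are plain objects, not Buffers, which is why piping them straight into res throws. One option is to skip the stream, fetch the single row, and send its bytea column:
db.one('SELECT utterance FROM test_bin WHERE id = $1', [id])
    .then(row => {
        // a bytea column arrives as a Node Buffer, which res.end accepts
        res.setHeader('Content-disposition', 'attachment; filename=' + filename);
        res.setHeader('Content-Type', 'audio/mpeg');
        res.end(row.utterance);
    })
    .catch(error => {
        console.log('ERROR:', error);
        res.sendStatus(500);
    });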

Read file from GridFS based on _ID using mongoskin & nodejs

I am using mongoskin in my Node.js based application. I have used GridFS to upload the file. I am able to upload and read it back using the "filename", however I want to read it back using _id. How can I do that? Following are the code details.
Working code to read the file based on filename:
exports.previewFile = function (req, res) {
    var contentId = req.params.contentid;
    var gs = DBModule.db.gridStore('69316_103528209714703_155822_n.jpg', 'r');
    gs.read(function (err, data) {
        if (!err) {
            res.setHeader('Content-Type', gs.contentType);
            res.end(data);
        } else {
            log.error({err: err}, 'Failed to read the content for id ' + contentId);
            res.status(constants.HTTP_CODE_INTERNAL_SERVER_ERROR);
            res.json({error: err});
        }
    });
};
How can this code be modified to make it work based on id?
After a few rounds of trial and error, the following code works. This was a surprise, because the input parameter seems to be matched against more than one field (gridStore accepts an ObjectID as well as a filename as its first argument):
//view file from database
exports.previewContent = function (req, res) {
    var contentId = new DBModule.BSON.ObjectID(req.params.contentid);
    console.log('Calling previewFile inside FileUploadService for content id ' + contentId);
    var gs = DBModule.db.gridStore(contentId, 'r');
    gs.read(function (err, data) {
        if (!err) {
            //res.setHeader('Content-Type', metadata.contentType);
            res.end(data);
        } else {
            log.error({err: err}, 'Failed to read the content for id ' + contentId);
            res.status(constants.HTTP_CODE_INTERNAL_SERVER_ERROR);
            res.json({error: err});
        }
    });
};

Meteor: Saving images from urls to AWS S3 storage

I am trying, server-side, to take an image from the web by its URL (i.e. http://www.skrenta.com/images/stackoverflow.jpg) and save this image to my AWS S3 bucket using Meteor, the aws-sdk meteorite package, as well as the http Meteor package.
This is my attempt, which does put a file in my bucket (someImageFile.jpg), but the image file is corrupted and cannot be displayed by a browser or a viewer application.
Probably I am doing something wrong with the encoding of the file. I tried many combinations and none of them worked. I also tried adding ContentLength and/or ContentEncoding with different encodings like binary, hex, and base64 (also in combination with Buffer.toString("base64")); none of them worked. Any advice will be greatly appreciated!
This is in my server-side code:
var url = "http://www.skrenta.com/images/stackoverflow.jpg";
HTTP.get(url, function (err, data) {
    if (err) {
        console.log("Error: " + err);
    } else {
        //console.log("Result: " + JSON.stringify(data));
        //uncommenting the above line fills up the console with raw image data
        s3.putObject({
            ACL: "public-read",
            Bucket: "MY_BUCKET",
            Key: "someImageFile.jpg",
            Body: new Buffer(data.content, "binary"),
            ContentType: data.headers["content-type"], // = image/jpeg
            //ContentLength: parseInt(data.headers["content-length"]),
            //ContentEncoding: "binary"
        },
        function (err, data) { // callback of s3.putObject
            if (err) {
                console.log("S3 Error: " + err);
            } else {
                console.log("S3 Data: " + JSON.stringify(data));
            }
        });
    }
});
Actually I am trying to use the filepicker.io REST API via HTTP calls, i.e. for storing a converted image to my S3, but this is the minimum example that demonstrates the actual problem.
After several trial-and-error runs I gave up on Meteor.HTTP and put together the code below; maybe it will help somebody who runs into encoding issues with Meteor.HTTP.
Meteor.HTTP seems to be meant for just fetching some JSON or text data from remote APIs and such; somehow it is not quite the right choice for binary data. However, the Npm http module definitely does support binary data, so this works like a charm:
var http = Npm.require("http");
var url = "http://www.whatever.com/check.jpg";
var req = http.get(url, function (resp) {
    var buf = new Buffer("", "binary");
    resp.on('data', function (chunk) {
        buf = Buffer.concat([buf, chunk]);
    });
    resp.on('end', function () {
        var thisObject = {
            ACL: "public-read",
            Bucket: "mybucket",
            Key: "myNiceImage.jpg",
            Body: buf,
            ContentType: resp.headers["content-type"],
            ContentLength: buf.length
        };
        s3.putObject(thisObject, function (err, data) {
            if (err) {
                console.log("S3 Error: " + err);
            } else {
                console.log("S3 Data: " + JSON.stringify(data));
            }
        });
    });
});
The best solution is to look at what has already been done in this regard:
https://github.com/Lepozepo/S3
Also filepicker.io seems pretty simple:
Integrating Filepicker.IO with Meteor

confused with facebook json object with jquery and nodejs

Writing my first Facebook/Node/Express app.
With Express I'm using:
app.post('/friends', function (req, res) {
    graph.get("/me/friends?fields=id,name", function (err, res2) {
        res.send(res2);
    });
});
On the client side I'm using:
$('#getFriends').click(function () {
    $.post('/friends', function (data) {
        console.log(data);
        console.log(data.length);
    });
});
With a previous app, I called the Graph API from the client side with getJSON and looped through everything with a for loop to print out the id and name. With this, I'm confused. Do I need to convert it to an array or a string first? Am I using the Express request properly?
It logs the data object, but when I go to print the length it's null.
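One hedged guess at the culprit: the Graph API wraps the friends list in a data property on the response object, so the object itself has no length. Looping over data.data along these lines should work:
$('#getFriends').click(function () {
    $.post('/friends', function (data) {
        // the friends list lives in the `data` property of the Graph response
        data.data.forEach(function (friend) {
            console.log(friend.id, friend.name);
        });
    });
});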

How to use GridFS to store images using Node.js and Mongoose

I am new to Node.js. Can anyone provide me with an example of how to use GridFS for storing and retrieving binary data, such as images, using Node.js and Mongoose? Do I need to access GridFS directly?
I was not satisfied with the highest rated answer here and so I'm providing a new one:
I ended up using the node module 'gridfs-stream' (great documentation there!) which can be installed via npm.
With it, and in combination with mongoose, it could look like this:
var fs = require('fs');
var mongoose = require("mongoose");
var Grid = require('gridfs-stream');
var GridFS = Grid(mongoose.connection.db, mongoose.mongo);

function putFile(path, name, callback) {
    var writestream = GridFS.createWriteStream({
        filename: name
    });
    writestream.on('close', function (file) {
        callback(null, file);
    });
    fs.createReadStream(path).pipe(writestream);
}
Note that path is the path of the file on the local system.
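A hypothetical call, just to show the shape of the callback (gridfs-stream's close event hands back the stored file document, including its generated _id):
putFile('/tmp/avatar.png', 'avatar.png', function (err, file) {
    if (err) return console.error(err);
    console.log('stored as', file._id); // GridFS id of the stored file
});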
As for reading the file back, in my case I just need to stream the file to the browser (using Express):
try {
    var readstream = GridFS.createReadStream({_id: id});
    readstream.pipe(res);
} catch (err) {
    log.error(err);
    return next(errors.create(404, "File not found."));
}
The answers so far are good; however, I believe it would be beneficial to document here how to do this using the official mongodb Node.js driver, instead of relying on further abstractions such as "gridfs-stream".
One previous answer has indeed utilized the official mongodb driver; however, they use the GridStore API, which has since been deprecated, see here. My example will be using the new GridFSBucket API.
The question is quite broad, so my answer will be an entire Node.js program. This includes setting up the Express server and the mongodb driver, defining the routes, and handling the GET and POST routes.
Npm Packages Used
express (nodejs web application framework to simplify this snippet)
multer (for handling multipart/form-data requests)
mongodb (official mongodb nodejs driver)
The GET photo route takes a Mongo ObjectID as a parameter to retrieve the image.
I configure multer to keep the uploaded file in memory. This means the photo file is never written to the file system, and is instead streamed straight from memory into GridFS.
/**
 * NPM Module dependencies.
 */
const express = require('express');
const photoRoute = express.Router();

const multer = require('multer');
var storage = multer.memoryStorage();
var upload = multer({ storage: storage, limits: { fields: 1, fileSize: 6000000, files: 1, parts: 2 }});

const mongodb = require('mongodb');
const MongoClient = require('mongodb').MongoClient;
const ObjectID = require('mongodb').ObjectID;
let db;

/**
 * NodeJS Module dependencies.
 */
const { Readable } = require('stream');

/**
 * Create Express server && Routes configuration.
 */
const app = express();
app.use('/photos', photoRoute);

/**
 * Connect Mongo Driver to MongoDB.
 */
MongoClient.connect('mongodb://localhost/photoDB', (err, database) => {
    if (err) {
        console.log('MongoDB Connection Error. Please make sure that MongoDB is running.');
        process.exit(1);
    }
    db = database;
});

/**
 * GET photo by ID Route
 */
photoRoute.get('/:photoID', (req, res) => {
    try {
        var photoID = new ObjectID(req.params.photoID);
    } catch (err) {
        return res.status(400).json({ message: "Invalid PhotoID in URL parameter. Must be a single String of 12 bytes or a string of 24 hex characters" });
    }

    let bucket = new mongodb.GridFSBucket(db, {
        bucketName: 'photos'
    });

    let downloadStream = bucket.openDownloadStream(photoID);
    downloadStream.on('data', (chunk) => {
        res.write(chunk);
    });
    downloadStream.on('error', () => {
        res.sendStatus(404);
    });
    downloadStream.on('end', () => {
        res.end();
    });
});

/**
 * POST photo Route
 */
photoRoute.post('/', (req, res) => {
    upload.single('photo')(req, res, (err) => {
        if (err) {
            return res.status(400).json({ message: "Upload Request Validation Failed" });
        } else if (!req.body.name) {
            return res.status(400).json({ message: "No photo name in request body" });
        }
        let photoName = req.body.name;

        // Convert buffer to Readable Stream
        const readablePhotoStream = new Readable();
        readablePhotoStream.push(req.file.buffer);
        readablePhotoStream.push(null);

        let bucket = new mongodb.GridFSBucket(db, {
            bucketName: 'photos'
        });

        let uploadStream = bucket.openUploadStream(photoName);
        let id = uploadStream.id;
        readablePhotoStream.pipe(uploadStream);

        uploadStream.on('error', () => {
            return res.status(500).json({ message: "Error uploading file" });
        });
        uploadStream.on('finish', () => {
            return res.status(201).json({ message: "File uploaded successfully, stored under Mongo ObjectID: " + id });
        });
    });
});

app.listen(3005, () => {
    console.log("App listening on port 3005!");
});
I wrote a blog post on this subject; it is an elaboration of this answer. Available here.
Further Reading/Inspiration:
NodeJs Streams: Everything you need to know
Multer NPM docs
Nodejs MongoDB Driver
I suggest taking a look at this question: Problem with MongoDB GridFS Saving Files with Node.JS
Copied example from the answer (credit goes to christkv):
// You can use an object id as well as filename now
var gs = new mongodb.GridStore(this.db, filename, "w", {
    "chunk_size": 1024 * 4,
    metadata: {
        hashpath: gridfs_name,
        hash: hash,
        name: name
    }
});
gs.open(function (err, store) {
    // Write data and automatically close on finished write
    gs.writeBuffer(data, true, function (err, chunk) {
        // Each file has an md5 in the file structure
        cb(err, hash, chunk);
    });
});
It looks like writeBuffer has since been deprecated. From the node-mongodb-native HISTORY file:
* Fixed dereference method on Db class to correctly dereference Db reference objects.
* Moved connect object onto Db class (Db.connect) as well as keeping backward compatibility.
* Removed writeBuffer method from gridstore, write handles switching automatically now.
* Changed readBuffer to read on Gridstore, Gridstore now only supports Binary Buffers no Strings anymore.
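Given that changelog note ("write handles switching automatically now"), a hedged, untested adaptation for post-deprecation driver versions would pass the Buffer to write directly:
gs.open(function (err, store) {
    // write accepts a Buffer; `true` closes the store once the write finishes
    store.write(data, true, function (err, result) {
        cb(err, hash, result);
    });
});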
Remove the fileupload library, and if it is giving some multipart-header-related error, then remove the Content-Type from the headers.