I'm building a Meteor (meteorjs) app that needs to store and display PDF files, sometimes as large as 500 MB. GridFS doesn't seem to be integrated yet, so I'm wondering whether it's worth using Meteor in this case or whether I should stick with Rails.
Ideally I would not use S3 - I'd like to keep the files on my server.
UPDATE: it seems it's possible to connect to the database directly, outside of Meteor. I don't need the PDFs to be moved automatically - and it likely wouldn't make sense anyway.
More specifically I'm now looking at:
MongoDB -> ElasticSearch using https://github.com/richardwilly98/elasticsearch-river-mongodb
Using the instructions at https://github.com/richardwilly98/elasticsearch-river-mongodb/wiki (a rough sketch of what that registration might look like is below).
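For reference, the river registration I'm planning looks roughly like this, as I read the wiki. The database/index names are placeholders and the exact field names may differ between river versions, so treat this as a sketch only (it just PUTs the river's _meta document with Node's http module):

var http = require('http');

// Sketch: register a MongoDB river that indexes GridFS content into Elasticsearch.
// Field names follow my reading of the elasticsearch-river-mongodb wiki; verify them
// against the version you install. Names below are placeholders.
var riverConfig = JSON.stringify({
    type: 'mongodb',
    mongodb: {
        db: 'meteor',      // placeholder database name
        collection: 'fs',  // GridFS bucket
        gridfs: true
    },
    index: {
        name: 'pdfindex', // placeholder Elasticsearch index
        type: 'files'
    }
});

var req = http.request({
    host: 'localhost',
    port: 9200,
    method: 'PUT',
    path: '/_river/mongogridfs/_meta',
    headers: { 'Content-Type': 'application/json' }
}, function(res) {
    console.log('River registration status:', res.statusCode);
});

req.on('error', function(err) {
    console.error('Could not reach Elasticsearch:', err);
});
req.end(riverConfig);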
You can use GridFS inside Meteor without adding any extra package:
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db; //grab the database object
var GridStore = MongoInternals.NpmModule.GridStore;

WebApp.connectHandlers.use('/someurl', function(req, res) {
    var bigFile = new GridStore(db, 'bigfile.iso', 'r'); //to read
    bigFile.open(function(error, result) {
        if (error) return;
        bigFile.stream(); //stream the file
        bigFile.on('error', function(e) {...}); //handle error etc
        bigFile.on('end', function() { bigFile.close(); }); //close the file when done
        bigFile.pipe(res); //pipe the file to res
    });
});
However, the GridStore/mongo version (v1.3.x) currently used by Meteor is a bit dated; the newest version is 2.x, from http://mongodb.github.io/node-mongodb-native/2.0/api-docs/
The v1.x doesn't seem to pipe well, so you may need to use the newer version.
The second option
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db; //grab the database object
var GridStore = Npm.require('mongodb').GridStore; //add Npm.depends({mongodb:'2.0.13'}) in your package.js

WebApp.connectHandlers.use('/someurl', function(req, res) {
    var bigFile = new GridStore(db, 'bigfile.iso', 'r').stream(true); //the new API doesn't require bigFile.open() and will close automatically on end
    bigFile.on('error', function(e) {...}); //handle error etc
    bigFile.on('end', function() {...});
    bigFile.pipe(res); //pipe the file to res
});
In this example I use WebApp.connectHandlers, but of course you can use iron:router or something similar. I tried with a 500 MB file and it pipes fine. You also need to set res.writeHead(200) and headers such as Content-Type.
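For reference, a minimal sketch of that header setup (the route and file name are made up; this follows the second option above):

WebApp.connectHandlers.use('/pdf', function(req, res) {
    var bigFile = new GridStore(db, 'bigfile.pdf', 'r').stream(true); // hypothetical file name
    res.writeHead(200, {
        'Content-Type': 'application/pdf',
        'Content-Disposition': 'inline; filename="bigfile.pdf"'
    });
    bigFile.on('error', function(e) { res.end(); }); // abort the response on read errors
    bigFile.pipe(res);
});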
If someone could help me I would be eternally grateful. I have been slamming my head against a brick wall for weeks trying to get images to upload the way it is demonstrated out of the box with the MEAN.js users module. In the generated users module the file is uploaded into a directory and the path to that file is stored in a field of the MongoDB document.
I can get the file to upload to where it needs to go using multer and the fileupload function. However, I cannot save the path to the field within the document, and I cannot figure out how to avoid getting an 'undefined' variable. I've tried creating a $window service and passing data to it as a global variable, along with a bunch of other things, and I'm totally stuck.
I have commented the code below to demonstrate what is going awry in my server controller changeShoePicture function.
// This is the boilerplate code from the mean.js "users" module.
// I can not create a $window service or global variable to store the
// shoe data below so that I can update the shoe.shoeImageURL field
// in MongoDB with path to the successfully uploaded file.
exports.changeShoePicture = function (req, res) {
    var message = null;
    var shoe = req.shoe;
    var upload = multer(config.uploads.shoeUpload).single('newProfilePicture');
    var profileUploadFileFilter = require(path.resolve('./config/lib/multer')).profileUploadFileFilter;
    console.log('i am here', shoe); // shoe is defined here.

    // Filtering to upload only images. This works and proceeds to the else condition!
    upload.fileFilter = profileUploadFileFilter;

    upload(req, res, function (uploadError) {
        if (uploadError) {
            return res.status(400).send({
                message: 'Error occurred while uploading profile picture'
            });
        } else {
            // Shoe image file is successfully uploaded to the location on the server.
            // However, the following fails because the shoe variable is undefined.
            shoe.shoeImageURL = config.uploads.shoeUpload.dest + req.file.filename;
        }
    });
};
To make sure I've got this right:
The upload function is being called with the parameters passed in by your route, req and res, and you set the shoe var from req.shoe.
What are the chances that upload() is messing with your req?
Drop a console.log(req) in right after you call upload and report back.
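Something along these lines, reusing the names from your snippet (this is just diagnostics, not a fix):

var shoe = req.shoe;
console.log('before upload, req.shoe =', req.shoe); // defined here, per your log

upload(req, res, function (uploadError) {
    console.log('inside callback, req.shoe =', req.shoe); // did multer replace or reset req?
    console.log('inside callback, shoe =', shoe);         // closure copy taken before upload ran
    // ...rest of your handler unchanged
});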
I am developing an Ionic app using loki.js, but every time I refresh the app (press F5) I lose all the data stored in the Loki database.
Why does this happen?
I am using no cache in my Ionic app.
It could be that when you press F5 the data in memory has not yet been written to the target JSON file.
You can try setting an explicit autosave interval when you instantiate Loki:
var _db = new Loki('./database/db.json', {
    autoload: true,
    autosave: true,
    autosaveInterval: 5000 // 5 secs
});
function add(newPatient) {
    return $q(function(resolve, reject) {
        try {
            var _patientsColl = GetCollection(dbCollections.PATIENTS);
            if (!_patientsColl) {
                _patientsColl = _db.addCollection(dbCollections.PATIENTS, { indices: ['firstname', 'lastname'] });
            }
            _patientsColl.insert(newPatient);
            console.log('Collection data: ', _patientsColl.data);
            resolve();
        }
        catch (err) {
            reject(err);
        }
    });
}

function GetCollection(collectionName) {
    return _db.getCollection(collectionName);
}
With "autosaveInterval" the data in memory will be written to the JSON file every 5 seconds (you can adjust this value as you prefer).
EDIT
I added the code I use to save a new document into my collection, and even with a page refresh the data is stored correctly. I use "autosave" among the db settings; maybe you can set it as well, in case some code path is not correctly caught when you explicitly trigger saving.
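If you also want to force a save at a specific moment (for instance right before the page is refreshed or closed), Loki exposes saveDatabase(). A rough sketch, with the event hook being purely illustrative:

// Flush the in-memory database to db.json on demand; saveDatabase() is part of the Loki API.
function persistNow() {
    _db.saveDatabase(function (err) {
        if (err) {
            console.error('Loki save failed:', err);
        } else {
            console.log('Loki database written to db.json');
        }
    });
}

// Example hook (illustrative only): flush before the page is refreshed/closed.
window.addEventListener('beforeunload', persistNow);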
I put the file in the /Users/xxx directory, and then magical things happened: the program runs normally.
I'm very keen to utilize Meteor as the framework for my next project. However, there is a requirement to keep customer data separated into different MongoDB instances for users from different customers.
I have read on this thread that it could be as simple as using this:
var d = new MongoInternals.RemoteCollectionDriver("<mongo url>");
C = new Mongo.Collection("<collection name>", { _driver: d });
However, I was dished this error on my server/server.js. I'm using Meteor 0.9.2.2 with meteor-platform 1.1.0.
Exception from sub Ep9DL57K7F2H2hTBz Error: A method named '/documents/insert' is already defined
at packages/ddp/livedata_server.js:1439
at Function._.each._.forEach (packages/underscore/underscore.js:113)
at _.extend.methods (packages/ddp/livedata_server.js:1437)
at Mongo.Collection._defineMutationMethods (packages/mongo/collection.js:888)
at new Mongo.Collection (packages/mongo/collection.js:208)
at Function.Documents.getCollectionByMongoUrl (app/server/models/documents.js:9:30)
at null._handler (app/server/server.js:12:20)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at _.extend._runHandler (packages/ddp/livedata_server.js:943)
at packages/ddp/livedata_server.js:737
Can anyone be so kind as to enlighten me whether or not I have made a mistake somewhere?
Thanks.
Br,
Ethan
Edit: This is my server.js
Meteor.publish('userDocuments', function () {
    // Get company data store's mongo URL here. Simulate by matching domain of user's email.
    var user = Meteor.users.findOne({ _id: this.userId });
    if (!user || !user.emails) return;

    var email = user.emails[0].address;
    var mongoUrl = (email.indexOf('#gmail.com') >= 0) ?
        'mongodb://localhost:3001/company-a-db' :
        'mongodb://localhost:3001/company-b-db';

    // Return documents
    return Documents.getCollectionByMongoUrl(mongoUrl).find();
});
and this is the server side model.js
Documents = function () { };

var documentCollections = { };

Documents.getCollectionByMongoUrl = function (url) {
    if (!(url in documentCollections)) {
        var driver = new MongoInternals.RemoteCollectionDriver(url);
        documentCollections[url] = new Meteor.Collection("documents", { _driver: driver });
    }
    return documentCollections[url];
};
Observation: the first attempt to new a Meteor.Collection works fine, and I can continue to use that collection multiple times. But when I log out and log in as another user from another company (in this example, by using an email that is not from #gmail.com), the error above is thrown.
I downloaded Meteor's source code and peeked into the mongo package. Based on Hubert's suggestion, there is a way to hack around having to declare different collection names on the MongoDB server.
In the server-side model.js, I've made these adaptations:
Documents.getCollectionByMongoUrl = function (userId, url) {
    if (!(userId in documentCollections)) {
        var driver = new MongoInternals.RemoteCollectionDriver(url);
        documentCollections[userId] = new Meteor.Collection("documents" + userId, { _driver: driver });
        documentCollections[userId]._connection = driver.open("documents", documentCollections[userId]._connection);
    }
    return documentCollections[userId];
};
Super hack job here. Be careful when using this!!!!
I believe Meteor distinguishes its collections internally by the name you pass as the first argument, so when you create the "documents" collection the second time, it tries to override the existing structure. Hence the error when it tries to define the /documents/insert method a second time.
To work around this, you could apply a suffix to your collection name. So instead of:
new Meteor.Collection('documents', { _driver: driver });
you should try:
new Meteor.Collection('documents_' + userId, { _driver: driver })
I have a Meteor app and would like to upload data (from csv) to a meteor collection.
I have found:
solutions (e.g. Collectionfs) which deal with file uploads
methods for uploading directly to the underlying mongo db from the shell
references to meteor router - but I am using the excellent iron-router, which does not appear to provide this functionality
My requirement is that the app user be able to upload csv data to the app from within the app. I do not need to store the csv file anywhere within the app file structure, I just need to read the csv data to the collection.
It is possible that I cannot figure out how to do this because my terms of reference ('upload data to meteor') are ambiguous or incorrect. Or that I am an idiot.
ChristianF's answer is spot on and I have accepted it as the correct answer. However, it provides even more than I need at this stage, so I am including here the code I have actually used - which is largely taken from Christian's answer and other elements I have found as a result:
HTML UPLOAD BUTTON (I am not including drag and drop at this stage)
<template name="upload">
    <input type="file" id="files" name="files[]" multiple />
    <output id="list"></output>
</template>
JAVASCRIPT
Template.upload.events({
    "change #files": function (e) {
        var files = e.target.files || e.dataTransfer.files;
        for (var i = 0, file; file = files[i]; i++) {
            if (file.type.indexOf("text") == 0) {
                var reader = new FileReader();
                reader.onloadend = function (e) {
                    var text = e.target.result;
                    console.log(text);
                    var all = $.csv.toObjects(text);
                    console.log(all);
                    _.each(all, function (entry) {
                        Members.insert(entry);
                    });
                };
                reader.readAsText(file);
            }
        }
    }
});
NB there is a jquery-csv library for Meteor here: https://github.com/donskifarrell/meteor-jquery-csv
I've solved this problem in the past using this gist of mine, together with this code (using the jquery-csv plugin to parse the csv data). This is done on the client side and is independent of using iron-router or not. It would be fairly straightforward to move the insertion code into a Meteor method, uploading the csv file first and then parsing and inserting the data on the server. I've tried that, too, but didn't see any performance improvement.
$(document).ready(function() {
    var dd = new dragAndDrop({
        onComplete: function(files) {
            for (var i = 0; i < files.length; i++) {
                var f = files[i];
                // Only process csv files.
                if (!f.type.match('text/csv')) {
                    continue;
                }
                var reader = new FileReader();
                reader.onloadend = function(event) {
                    var all = $.csv.toObjects(event.target.result);
                    // do something with file content
                    _.each(all, function(entry) {
                        Items.insert(entry);
                    });
                };
                reader.readAsText(f); // kick off reading the dropped file
            }
        }
    });
    dd.add('upload-div'); // add to an existing div, turning it into a drop container
});
Beware, though, that if you are inserting a lot of entries, you are better off turning off all reactive re-rendering for a while, until all of them are inserted. Otherwise both Node on the server and the browser tab will get really slow. See my suggested solution here: Meteor's subscription and sync are slow
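For what it's worth, the server-side variant mentioned earlier (parsing on the client, then inserting via a Meteor method) would look roughly like this; the method and collection names are only illustrative:

// Sketch: bulk-insert the parsed rows inside a Meteor method so the client is not
// re-rendering after every single insert. Names here are illustrative.
Meteor.methods({
    importCsvRows: function (rows) {
        check(rows, [Object]);
        rows.forEach(function (row) {
            Items.insert(row);
        });
        return rows.length;
    }
});

// On the client, instead of calling Items.insert(entry) per row:
// Meteor.call('importCsvRows', all, function (err, count) { console.log(err, count); });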
I have been searching for an example of how I can stream the result of a MongoDB query to a nodejs client. All solutions I have found so far seem to read the entire query result at once and then send it back in one go.
Instead, I would (obviously) like to supply a callback to the query method and have MongoDB call that when the next chunk of the result set is available.
I have been looking at mongoose - should I probably use a different driver?
Jan
node-mongodb-native (the underlying driver that every MongoDB client uses in nodejs), in addition to the cursor API that others mentioned, has a nice stream API (#458). Unfortunately I did not find it documented elsewhere.
Update: there are docs.
It can be used like this:
var stream = collection.find().stream();

stream.on('error', function (err) {
    console.error(err);
});

stream.on('data', function (doc) {
    console.log(doc);
});
It actually implements the ReadableStream interface, so it has all the goodies (pause/resume, etc.).
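For example, a sketch of using pause()/resume() to respect backpressure when writing each document to an HTTP response (res here is a plain http.ServerResponse; the newline-delimited JSON formatting is just an example):

var stream = collection.find().stream();

stream.on('data', function (doc) {
    var ok = res.write(JSON.stringify(doc) + '\n');
    if (!ok) {
        stream.pause();                 // stop reading until the response drains
        res.once('drain', function () {
            stream.resume();
        });
    }
});

stream.on('error', function (err) {
    res.statusCode = 500;
    res.end();
});

stream.on('end', function () {
    res.end();
});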
Streaming in Mongoose became available in version 2.4.0, which appeared three months after you posted this question:
Model.where('created').gte(twoWeeksAgo).stream().pipe(writeStream);
More elaborated examples can be found on their documentation page.
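If you prefer explicit handlers over pipe(), the query stream emits the usual events; a sketch based on the QueryStream interface as I understand it (check the docs for your Mongoose version):

var stream = Model.where('created').gte(twoWeeksAgo).stream();

stream.on('data', function (doc) {
    // handle each document as it arrives
    console.log(doc._id);
});

stream.on('error', function (err) {
    console.error(err);
});

stream.on('close', function () {
    // all documents have been consumed
});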
Mongoose is not really a "driver"; it's actually an ORM wrapper around the MongoDB driver (node-mongodb-native).
To do what you're doing, take a look at the driver's .find and .each methods. Here's some code from the examples:
// Find all records. find() returns a cursor
collection.find(function(err, cursor) {
    sys.puts("Printing docs from Cursor Each");
    cursor.each(function(err, doc) {
        if (doc != null) sys.puts("Doc from Each " + sys.inspect(doc));
    });
});
To stream the results, you're basically replacing that sys.puts with your "stream" function. I'm not sure how you plan to stream the results; I think you can do response.write() + response.flush(), but you may also want to check out socket.io.
Here is the solution I found (please correct me, anyone, if this is the wrong way to do it):
(Also excuse the bad coding - it's too late for me now to prettify this.)
var sys = require('sys');
var http = require("http");
var Db = require('/usr/local/src/npm/node_modules/mongodb/lib/mongodb').Db,
    Connection = require('/usr/local/src/npm/node_modules/mongodb/lib/mongodb').Connection,
    Collection = require('/usr/local/src/npm/node_modules/mongodb/lib/mongodb').Collection,
    Server = require('/usr/local/src/npm/node_modules/mongodb/lib/mongodb').Server;

var db = new Db('test', new Server('localhost', Connection.DEFAULT_PORT, {}));

var products;
db.open(function (error, client) {
    if (error) throw error;
    products = new Collection(client, 'products');
});

function ProductReader(collection) {
    this.collection = collection;
}

ProductReader.prototype = new process.EventEmitter();

ProductReader.prototype.do = function() {
    var self = this;

    this.collection.find(function(err, cursor) {
        if (err) {
            self.emit('e1');
            return;
        }
        sys.puts("Printing docs from Cursor Each");
        self.emit('start');

        cursor.each(function(err, doc) {
            if (err) {
                self.emit('e2');
                self.emit('end');
                return;
            }
            if (doc != null) {
                sys.puts("doc:" + doc.name);
                self.emit('doc', doc);
            } else {
                self.emit('end');
            }
        });
    });
};

http.createServer(function(req, res) {
    var pr = new ProductReader(products);

    pr.on('e1', function() {
        sys.puts("E1");
        res.writeHead(400, {"Content-Type": "text/plain"});
        res.write("e1 occurred\n");
        res.end();
    });

    pr.on('e2', function() {
        sys.puts("E2");
        res.write("ERROR\n");
    });

    pr.on('start', function() {
        sys.puts("START");
        res.writeHead(200, {"Content-Type": "text/plain"});
        res.write("<products>\n");
    });

    pr.on('doc', function(doc) {
        sys.puts("A DOCUMENT" + doc.name);
        res.write("<product><name>" + doc.name + "</name></product>\n");
    });

    pr.on('end', function() {
        sys.puts("END");
        res.write("</products>");
        res.end();
    });

    pr.do();
}).listen(8000);
I have been studying MongoDB streams myself; while I do not have the entire answer you are looking for, I do have part of it.
You can set up a socket.io stream.
This uses JavaScript with socket.io and socket.io-stream, both available on npm, plus MongoDB for the database, because using a 40-year-old database that has issues is incorrect - time to modernize - and besides, the 40-year-old DB is SQL, and SQL doesn't do streams to my knowledge.
So although you only asked about data going from server to client, I also want to cover client to server in my answer, because I can never find it anywhere when I search, and I wanted to set up one place with both the send and receive elements via streams so everyone could get the hang of it quickly.
Client side, sending data to the server via streaming:
var stream = ss.createStream();
var blobstream = ss.createBlobReadStream(data);
blobstream.pipe(stream);

ss(socket).emit('data.stream', stream, {}, function(err, successful_db_insert_id) {
    // if you get back the id, it went into the db and everything worked
});
Server side, receiving the stream from the client and then replying when done:
ss(socket).on('data.stream', function(stream, o, c) {
    var buffer = [];
    stream.on('data', function(chunk) { buffer.push(chunk); });
    stream.on('end', function() {
        buffer = Buffer.concat(buffer);
        db.insert(buffer, function(err, res) {
            res = insertedId[0];
            c(null, res);
        });
    });
});
// This is the other half of that: fetching the data and streaming it to the client.
Client side, requesting and receiving stream data from the server:
var stream = ss.createStream();
var binarystring = '';

stream.on('data', function(chunk) {
    for (var i = 0; i < chunk.length; i++) {
        binarystring += String.fromCharCode(chunk[i]);
    }
});

stream.on('end', function() {
    var data = window.btoa(binarystring);
    c(null, data);
});

ss(socket).emit('data.stream.get', stream, o, c);
Server side, replying to the request for streaming data:
ss(socket).on('data.stream.get', function(stream, o, c) {
    stream.on('end', function() {
        c(null, true);
    });
    db.find().stream().pipe(stream);
});
The very last one there is the only one where I am kind of just pulling it out of my butt, because I have not yet tried it, but it should work. I actually do something similar, except I write the file to the hard drive and then use fs.createReadStream to stream it to the client. So I'm not 100% sure, but from what I've read it should be fine; I'll get back to you once I test it.
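That hard-drive variant looks roughly like this (the file path and event name are made up for the example):

var fs = require('fs');
var ss = require('socket.io-stream');

// Server side: stream a file from disk to the client over a socket.io-stream stream.
ss(socket).on('file.stream.get', function (stream, o, c) {
    var fileStream = fs.createReadStream('/tmp/example.pdf'); // placeholder path
    fileStream.on('error', function (err) { c(err); });
    fileStream.on('end', function () { c(null, true); });
    fileStream.pipe(stream);
});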
P.S. Anyone who wants to bug me about my colloquial way of talking: I'm Canadian, and I love saying "eh". Come at me with your hugs and hits, bros/sis' :D