meteor: use different database for each user - mongodb

I currently assign a MongoDB database to my Meteor app via the environment variable
"MONGO_URL": "mongodb://localhost:27017/dbName" when I start the Meteor instance.
All data is then written to the Mongo database named "dbName".
I am looking for a way to set the dbName individually for each customer upon login, in order to separate their data into different databases.

This is generally unsupported, as the database is defined at startup. However, this thread offers a possible solution:
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/6
// Connect to a second Mongo database and bind a collection to it
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>");
MyCollection = new Mongo.Collection("collection_name", { _driver: database });
This would allow you to define the database name in the Mongo URL, but it would require a fair bit of extra work to redefine your collections on a customer-by-customer basis.
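To make the thread's suggestion concrete, server-side code might cache one driver and collection per customer database. This is only a sketch of that idea; getCustomerDbName, the localhost URL, and the collection name are hypothetical placeholders:

// Sketch: one RemoteCollectionDriver (and collection) per customer database.
// Server-side Meteor code; all names below are illustrative.
var collectionsByDb = {};

function getCustomerDbName(customerId) {
  // Hypothetical mapping; in practice this might come from a master database
  return "customer_" + customerId;
}

function collectionForCustomer(customerId) {
  var dbName = getCustomerDbName(customerId);
  if (!collectionsByDb[dbName]) {
    var driver = new MongoInternals.RemoteCollectionDriver(
      "mongodb://localhost:27017/" + dbName
    );
    // Reuse the cached collection: redefining the same collection name on the
    // same driver would throw an error
    collectionsByDb[dbName] = new Mongo.Collection("collection_name", {
      _driver: driver
    });
  }
  return collectionsByDb[dbName];
}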

Here's another approach that will make your life eternally easier:
1. Create a generic site with no accounts at mysite.com.
2. When a user logs in at mysite.com, figure out which site they actually belong to, redirect them to customerName.mysite.com, and log them in there.
3. Run a separate instance of Meteor, configured for a different Mongo database, at each site.
nginx might help you with the above.
It is generally good practice to run separate DBs when offering a B2B solution.
That's a matter of opinion that depends heavily on the platform; many SaaS providers would argue that point.

Related

Separate Database for Users

In my PostgreSQL instance I currently have one database with user- and business-relevant tables. Our company is planning to add more categories in the future, so I would like to separate the categories into individual databases, as most of the data is quite static once initialized. Below is an example with cars; please keep in mind that our company's services differ far more than just Pkw (passenger car) and LorryTruck, and need many more extra tables within their scope.
Current state:
PostgreSQL:
  CompanyDB
    UserTable
    PkwTable
    ServiceTable
    BookingTable
Future state:
PostgreSQL:
  UserDB
    UserTable
  PkwDB
    PkwTable
    ServiceTable
    BookingTable
  LorryTruckDB
    LorryTruckTable
    ServiceTable
    BookingTable
My concern is if and how I could connect user-relevant data to the desired databases. For example, a user can register for Pkw services and might later be interested in LorryTruck services. The main goal is also that a user should only need to register once on our system.
Is this possible, or could I design this better?
Please share your opinion or experiences.
Thank you!
I would use a SCHEMA, not a different database. It's not possible (or at least not easy) to get data from a different database, while using different schemas is standard and works great.

Change MongoDB Collection from local to server-side on running Meteor App

According to the Meteor docs there are 'server-side', 'client-side' and 'local' collections. Is there a way to change that 'status' (i.e. whether a collection is server-side, client-side or local) in a running app?
Use case: a web application where users can register and log in. They can store sensitive data. Depending on the user's personal preferences, he should be able to choose whether that data is stored locally or on the server (a general decision, not made case by case).
Current approach: it works fine if I instantiate the collection either locally, CollectionName = new Mongo.Collection(null);, or server-side, CollectionName = new Mongo.Collection('collectionName');.
But I can't think of an approach that would let the user change the collection's status.
Is there a way to do this?
Or is a workaround needed (e.g. create both a local and a server-side collection, and just decide which to use for insert/update/find, which would mean a lot of duplicated code)?
Edit: To make things clear: I want the user to be able to choose whether his data is stored in a collection that is synced with the server or in a collection without any syncing.
No, you can't change the type of a collection on a running app.
I think you are confused about what these terms mean. "Client-side" collections aren't permanently stored in localstorage. It just means it's a collection that's in the browser's memory. Just as "server-side" collections are those that reside in the server's memory. The difference is not how it's defined, but where the code runs. Most collections have a client-side and a server-side counterpart, and they are kept synchronized via pub/sub. Server-side collections are also synchronized with MongoDB (using the oplog).
Local collections can live in both places, but "local" means they aren't synchronized with anything.
I probably don't fully understand what you are trying to do, but local collections do not persist data.
If you pass null as the name, then you’re creating a local collection. It’s not synchronized anywhere; it’s just a local scratchpad that supports Mongo-style find, insert, update, and remove operations. (On both the client and the server, this scratchpad is implemented using Minimongo.)
This means any data added to them on the client will be blown away when the user closes their browser (unless you are also using one of the local collection persist meteor packages) and any data added to them on the server will be blown away when the meteor app is restarted. So I don't think you really want to use local collections.
Instead, I would use a regular collection (where a name is passed to the constructor) and either the standard allow or deny options (not really recommended anymore...but still a valid approach) or Meteor methods (the preferred approach) to control who can change data and what data is allowed to change.
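For example, a minimal Meteor method enforcing per-user ownership might look like this sketch (the UserData collection, the user_id field, and the method name are placeholders; check comes from Meteor's check package):

// Server-side sketch: a method that only lets users modify their own documents.
// UserData, user_id, and the method name are hypothetical.
Meteor.methods({
  "userData.update": function (docId, changes) {
    check(docId, String);
    check(changes, Object);
    var doc = UserData.findOne(docId);
    if (!doc || doc.user_id !== this.userId) {
      throw new Meteor.Error("not-authorized");
    }
    UserData.update(docId, { $set: changes });
  }
});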
Or, another option could be to pass your publication function a list of fields that the user wishes to see on the client for that given session. To do this, you define a new publication that receives a displayFields argument, which you then use as the fields specifier option in your collection's .find().
Meteor.publish("userData", function (userId, displayFields) {
  // validate the structure and contents of displayFields
  // retrieve the data, but only with the fields that the user requested
  return UserData.find({ user_id: userId }, { fields: displayFields });
});
Then on the client side you would subscribe to this and pass in the fields the user wishes to make visible on the client. (Note that a Mongo field specifier cannot mix inclusions and exclusions, except for _id, so list only the fields to include.)
var displayFields = {
  firstname: 1,
  lastname: 1
  // ...
};
this.subscribe("userData", Meteor.userId(), displayFields);
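The "validate" comment in the publication above is important: a malicious client could otherwise request fields you never intended to expose. Here is a minimal whitelist sketch using Meteor's check package (the ALLOWED_FIELDS list and error name are illustrative, not from the original answer):

// Sketch: whitelist validation of a client-supplied field specifier.
// ALLOWED_FIELDS is a hypothetical whitelist for this example.
var ALLOWED_FIELDS = ["firstname", "lastname"];

function validateDisplayFields(displayFields) {
  check(displayFields, Object);
  Object.keys(displayFields).forEach(function (key) {
    if (ALLOWED_FIELDS.indexOf(key) === -1 || displayFields[key] !== 1) {
      throw new Meteor.Error("invalid-fields");
    }
  });
}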

Is meteor using the Mongo Oplog?

How can I check whether Meteor is using the oplog of my Mongo?
I have a Mongo cluster and set two env variables for my Meteor app.
MONGO_URL=mongodb://mongo/app?replicaSet=rs0
MONGO_OPLOG_URL=mongodb://mongo/local?authSource=app
How can I check whether the oplog is actually in use? Meteor can fall back to query polling, which is very inefficient, and I would like to verify that it's working properly with the oplog.
Any ideas?
Quoting the relevant bits from Meteor's OplogObserveDriver docs:
How to tell if your queries are using OplogObserveDriver
For now, we only have a crude way to tell how many observeChanges calls are using OplogObserveDriver, and not which calls they are.
This uses the facts package, an internal Meteor package that exposes real-time metrics for the current Meteor server. In your app, run meteor add facts, and add the {{> serverFacts}} template to your app. If you are using the autopublish package, Meteor will automatically publish all metrics to all users. If you are not using autopublish, you will have to tell Meteor which users can see your metrics by calling Facts.setUserIdFilter in server code; for example:
Facts.setUserIdFilter(function (userId) {
var user = Meteor.users.findOne(userId);
return user && user.admin;
});
(When running your app locally, Facts.setUserIdFilter(function () { return true; }); may be good enough!)
Now look at your app. The facts template will render a variety of metrics; the ones we're looking for are observe-drivers-oplog and observe-drivers-polling in the mongo-livedata section. If observe-drivers-polling is zero or not rendered at all, then all of your observeChanges calls are using OplogObserveDriver!
To set up oplog tailing, you need to set up a user on my_database and an oplog_user on local. Then, specify the following URIs to connect to your replica set named test-shard (e.g. if there are 3 hosts named test-shard-[0-2]):
MONGO_URL="mongodb://user:PASS@test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/my_database?ssl=true&replicaSet=test-shard&authSource=admin"
MONGO_OPLOG_URL="mongodb://oplog_user:PASS@test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/local?ssl=true&replicaSet=test-shard&authSource=admin"
On MongoDB Atlas, ssl=true is required and all users authenticate through the admin database. On another deployment you might just authenticate through my_database, in which case you'd remove authSource=admin from MONGO_URL and write authSource=my_database in MONGO_OPLOG_URL. See this post for another example.
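For reference, creating those two users might look roughly like this in the mongo shell, run as an administrator (the role choices are assumptions for illustration, not from the original answer):

use admin
db.createUser({
  user: "user",
  pwd: "PASS",
  roles: [{ role: "readWrite", db: "my_database" }]
})
// The oplog lives in the local database, so the oplog user needs read access there
db.createUser({
  user: "oplog_user",
  pwd: "PASS",
  roles: [{ role: "read", db: "local" }]
})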
With MongoDB 3.6 and the Mongo node driver 3.0+, you may be able to use a succinct notation for DNS seedlist connections, e.g. on MongoDB Atlas, to specify the environment variables:
MONGO_URL="mongodb+srv://user:PASS@foo.mongodb.net/my_database"
MONGO_OPLOG_URL="mongodb+srv://oplog_user:PASS@foo.mongodb.net/local"
The link above explains how this notation fills in the ssl, replicaSet, and authSource arguments. This is a lot nicer than the long strings above, and also means you can scale your replica set up and down without needing to reconfigure anything.
As hwillson mentioned, use the facts-ui and facts-base packages (formerly facts) to see if there are any oplogObserveDrivers running in your app. If they are all pollingObserveDriver, then the oplog is not set up correctly.
If you are using Kadira APM to monitor your app's performance, you can see if oplogs are working by navigating to the "Live Queries" section and having a look at the "Oplog notifications" chart.
You can see in my screenshot that oplogs are working, as values appear in the chart (bottom right). If oplogs weren't working then this chart would be empty.
This may be very late, but this is the only way that worked for me:
someCollection._driver.mongo._oplogHandle
If this is null, the oplog is not enabled; otherwise you can use the handle to check for more details.
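As a quick sanity check, that handle can be logged at server startup. A sketch (Posts stands in for any named collection in your app; note that _oplogHandle is an internal, undocumented field that may change between Meteor releases):

// Server-only sketch: log whether oplog tailing is active.
// "Posts" is a placeholder collection name; _oplogHandle is internal Meteor API.
Meteor.startup(function () {
  var handle = Posts._driver.mongo._oplogHandle;
  console.log("Oplog tailing:", handle ? "enabled" : "disabled (polling)");
});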

How to properly handle mongoose schema migrations?

I'm completely new to MongoDB & Mongoose and can't seem to find an answer as to how to handle migrations when a schema changes.
I'm used to running migration SQL scripts that alter table structure and any underlying data that needs to be changed. This typically involves DB downtime.
How is this typically handled within MongoDB/Mongoose? Any gotcha's that I need to be aware of?
Coming across this and reasonably understanding how migrations work on a relational database, I found that MongoDB makes this a little simpler. I've come to 2 ways to break this down. The things to consider when dealing with data migrations in MongoDB (not all that different from relational databases) are:
Ensuring local test environments do not break when a developer merges the latest from the project repository
Ensuring any data is correctly updated on the live version, regardless of whether a user is logged in or out, if authentication is used. (Of course, if everyone is automatically logged out when an upgrade is made, then you only need to worry about when a user logs in.)
1) If your change will log everyone out or application downtime is expected, then the simple way to do this is to have a migration script that connects to the local or live MongoDB and upgrades the correct data. Example where a user's name is changed from a single string to an object with given and family names (very basic, of course, and it would need to be put into a script for all developers to run):
Using the CLI:
mongo
use myDatabase
db.myUsers.find().forEach(function (user) {
  var curName = user.name.split(' '); // needs some more checks...
  user.name = { given: curName[0], family: curName[1] };
  db.myUsers.save(user);
});
2) You want the application to migrate the schemas up and down based on the application version being run. This will obviously be less of a burden on a live server and not require downtime, because users are upgraded/downgraded only when they first use the upgraded/downgraded version.
If you're using middleware in Express.js for Node.js:
Set an app variable in your root app script via app.set('schemaVersion', 1), which will be compared to the user's schema version later.
Now ensure all the user schemas also have a schemaVersion property, so we can detect a mismatch between the application schema version and the current MongoDB schema for THAT PARTICULAR USER only.
Next we need to create simple middleware to detect the config and user versions:
app.use(function (req, res, next) {
  // If we're not on an authenticated route
  if (!req.user) {
    next();
    return;
  }
  // Retrieving the user info will be server dependent
  if (req.user.schemaVersion === app.get('schemaVersion')) {
    next();
    return;
  }
  // Handle an upgrade if the user version is less than the app version
  // Handle a downgrade if the user version is greater than the app version
  // Save the user version to your session / auth token / MongoDB where necessary
});
For the upgrade/downgrade I would make simple JS files under a migrations directory, each exporting upgrade/downgrade functions that accept the user model and run the migration changes on that particular user in MongoDB. Lastly, ensure the user's version is updated in MongoDB so they don't run the changes again unless they move to a different version. A sketch of such a module follows.
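For illustration, one such migration module might look like this (the file name, function signatures, and name-splitting logic are assumptions, reusing the example from option 1):

// migrations/v2.js: a hypothetical per-user migration module (sketch).
// Assumes `user` is a Mongoose document and `done` is a Node-style callback.
module.exports = {
  version: 2,
  upgrade: function (user, done) {
    var parts = (user.name || '').split(' '); // v1 stored the name as one string
    user.name = { given: parts[0], family: parts.slice(1).join(' ') };
    user.schemaVersion = 2;
    user.save(done);
  },
  downgrade: function (user, done) {
    user.name = [user.name.given, user.name.family].join(' '); // back to one string
    user.schemaVersion = 1;
    user.save(done);
  }
};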
If you're used to SQL-style migrations or Rails-like migrations, then you'll find my CLI tool migrate-mongoose the right fit for you.
It allows you to write migrations with an up and a down function, and it manages the state for you based on the success or failure of your migrations.
It also supports ES6 (ES2015) syntax.
You get access to your mongoose models via the this object, making it easy to make the changes you need to your models and schemas.
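Based on that description, a migration file might look something like the following sketch (check the migrate-mongoose docs for the exact way models are exposed; the User model and schemaVersion field are made-up examples):

// Hypothetical migrate-mongoose migration file (sketch, ES2015 syntax).
// Assumes the User model has already been registered with mongoose elsewhere.
import mongoose from 'mongoose';

export async function up() {
  const User = mongoose.model('User');
  await User.updateMany({}, { $set: { schemaVersion: 2 } });
}

export async function down() {
  const User = mongoose.model('User');
  await User.updateMany({}, { $set: { schemaVersion: 1 } });
}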
There are 2 types of migrations:
Offline: requires you to take your service down for maintenance, then iterate over the entire collection and make the changes you need.
Online: does not require taking your service down for maintenance. When you read a document, you check its version and run a version-specific migration routine for each version between the old one and the new one. Then you load the resulting document.
Not all services can afford an offline migration, so I recommend the online approach; a minimal sketch follows.
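A rough sketch of that read-time chain (the version numbers, field names, and the single migration step are illustrative):

// Sketch: online (read-time) migration chain; all names are hypothetical.
const CURRENT_VERSION = 2;

const migrations = {
  // v1 -> v2: split a single name string into given/family parts
  1: (doc) => {
    const parts = (doc.name || '').split(' ');
    doc.name = { given: parts[0], family: parts.slice(1).join(' ') };
    doc.schemaVersion = 2;
    return doc;
  }
};

function migrateToCurrent(doc) {
  let version = doc.schemaVersion || 1;
  while (version < CURRENT_VERSION) {
    doc = migrations[version](doc);
    version = doc.schemaVersion;
  }
  return doc; // persist the upgraded document afterwards if desired
}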

How to inspect every query going to DB from Zend Framework

I have a complex reporting application that allows clients to login and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates users and assigns them a clientID and a roleID. If your roleID > 1, that means you work for the company hosting the data, and you can see all client info. I want to create a catch-all that basically works like this:
if ($roleID > 1) {
  // ...send query to database
} else {
  if (/* does this query select a record with a clientID other than $auth->clientID? */) {
    // do not execute query
  } else {
    // execute query
  }
}
The problem is, I want this to run for every query that goes to the server. How can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but I cannot discern this from the Profiler code.
I could always write an authentication function and pass selected queries that way, but this catch-all would be easier to implement across all of the calls and would be future-proof. Any help is appreciated.
It's an application design fault.
You should use a service architecture: a single entry point for queries, the service, with all checks inside it.
If this is something you want run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() function to add your logic. You'll also want to add a way for it to be aware of your $auth object.
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try to do this at the application level, though.
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace