Is Meteor using the Mongo Oplog?

How can I check whether Meteor is using the oplog of my MongoDB?
I have a MongoDB cluster and have set two environment variables for my Meteor app.
MONGO_URL=mongodb://mongo/app?replicaSet=rs0
MONGO_OPLOG_URL=mongodb://mongo/local?authSource=app
How can I check whether the oplog is actually in use? Meteor can fall back to query polling, which is very inefficient, but I would like to see that it is working properly with the oplog.
Any ideas?

Quoting the relevant bits from Meteor's OplogObserveDriver docs:
How to tell if your queries are using OplogObserveDriver
For now, we only have a crude way to tell how many observeChanges calls are using OplogObserveDriver, and not which calls they are.
This uses the facts package, an internal Meteor package that exposes real-time metrics for the current Meteor server. In your app, run meteor add facts, and add the {{> serverFacts}} template to your app. If you are using the autopublish package, Meteor will automatically publish all metrics to all users. If you are not using autopublish, you will have to tell Meteor which users can see your metrics by calling Facts.setUserIdFilter in server code; for example:
Facts.setUserIdFilter(function (userId) {
var user = Meteor.users.findOne(userId);
return user && user.admin;
});
(When running your app locally, Facts.setUserIdFilter(function () { return true; }); may be good enough!)
Now look at your app. The facts template will render a variety of metrics; the ones we're looking for are observe-drivers-oplog and observe-drivers-polling in the mongo-livedata section. If observe-drivers-polling is zero or not rendered at all, then all of your observeChanges calls are using OplogObserveDriver!

To set up oplog tailing, you need to set up a user on my_database, and an oplog_user on local. Then, specify the following URIs to connect to your replica set named test-shard (e.g. if there are 3 hosts named test-shard-[0-2]):
MONGO_URL="mongodb://user:PASS#test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/my_database?ssl=true&replicaSet=test-shard&authSource=admin"
MONGO_OPLOG_URL="mongodb://oplog_user:PASS#test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/local?ssl=true&replicaSet=test-shard&authSource=admin"
On MongoDB Atlas, ssl=true is required, and all users authenticate through the admin database. On another deployment you might authenticate through my_database instead, in which case you'd remove authSource=admin from MONGO_URL and write authSource=my_database in MONGO_OPLOG_URL. See this post for another example.
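If you administer the replica set yourself, the two users can be created in the mongo shell roughly like this (a sketch; the user names, passwords, and the my_database name are placeholders, and on Atlas you would create the users through the Atlas UI instead):
// run against the primary, authenticated as an administrator
use admin
db.createUser({
  user: "user",
  pwd: "PASS",
  roles: [ { role: "readWrite", db: "my_database" } ]
})
db.createUser({
  user: "oplog_user",
  pwd: "PASS",
  roles: [ { role: "read", db: "local" } ]   // read access to local.oplog.rs
})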
With MongoDB 3.6 and the Mongo node driver 3.0+, you may be able to use a succinct notation for DNS seedlist connections, e.g. on MongoDB Atlas, to specify the environment variables:
MONGO_URL="mongodb+srv://user:PASS#foo.mongodb.net/my_database"
MONGO_OPLOG_URL="mongodb+srv://oplog_user:PASS#foo.mongodb.net/local"
The DNS seedlist connection format documentation explains how this notation fills in the ssl, replicaSet, and authSource arguments. This is a lot nicer than the long strings above, and it also means you can scale your replica set up and down without needing to reconfigure anything.
As hwillson mentioned, use the facts-ui and facts-base packages (formerly facts) to see whether there are any oplogObserveDrivers running in your app. If they are all pollingObserveDrivers, then oplog tailing is not set up correctly.

If you are using Kadira APM to monitor your app's performance, you can see if oplogs are working by navigating to the "Live Queries" section and having a look at the "Oplog notifications" chart.
You can see in my screenshot that oplogs are working, as values appear in the chart (bottom right). If oplogs weren't working then this chart would be empty.

This may be very late, but this is the only way that worked for me:
someCollection._driver.mongo._oplogHandle
If this is null, then the oplog is not enabled; otherwise you can use this handle to check for more details.
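For example, a small server-side startup check based on this handle might look like the following (a sketch; _oplogHandle is an undocumented Meteor internal that may change between releases, and SomeCollection is a placeholder for one of your own collections):
// server/oplog-check.js -- diagnostic only; relies on undocumented internals
Meteor.startup(function () {
  var handle = SomeCollection._driver.mongo._oplogHandle;
  if (handle) {
    console.log('Oplog tailing is active');
  } else {
    console.log('Oplog tailing is NOT active; Meteor is falling back to polling');
  }
});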

Related

Orion contextBroker allows set read preference to Mongodb replicaset?

I'm reading the documentation of Orion Context Broker and in the command line arguments I don't see any argument to set the read preference for my MongoDB replica set. In my application I need the read preference set to nearest to avoid bottlenecks in periods of high query traffic. Does anyone know if this is possible?
The current Orion version (3.3.1) doesn't allow setting the read preference. There is an open issue in the Orion repository about implementing a -mongoUri CLI parameter to allow setting the MongoDB connection URI (so you could add, for instance, &readPreference=secondary to it).
Alternatively, you could hack the Orion source code to build a specific version with the readPreference value you want. Have a look at the composeMongoUri() function. It seems it is just a matter of adding uri += optionPrefix + "readPreference=<whatever you want>"; at the end.
It is not a smart solution (it is not flexible, and you would need to rebuild Orion to change the setting), but it could be a valid workaround while -mongoUri gets implemented.
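For illustration, with that change in composeMongoUri() Orion would end up connecting with a URI along these lines (the host names, replica set name, and database name here are placeholders, not values Orion produces for you):
mongodb://mongo-rs-0:27017,mongo-rs-1:27017,mongo-rs-2:27017/orion?replicaSet=rs0&readPreference=nearest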

Setting up MongoDB environment requirements for Parse Server

I have my instance running and am able to connect remotely; however, I'm stuck on where to set this parameter to false, since the documentation states that the default is true:
failIndexKeyTooLong
Setting 'failIndexKeyTooLong' is a three-step process:
You need to go to the command console in the Tools menu item for the admin database of your database instance. This command will only work on the admin database.
Once there, pick any command from the list and it will give you a short JSON text for that command.
Erase the command they provide (I chose 'ping') and enter the following JSON:
{
"setParameter" : 1,
"failIndexKeyTooLong" : false
}
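If you have direct access to a mongo shell on the deployment (rather than going through the MongoLab console), the equivalent command can be run like this, assuming your user has privileges on the admin database:
// run against the admin database
db.adminCommand({ setParameter: 1, failIndexKeyTooLong: false })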
Note if you are using a free plan at MongoLab: this will NOT work on the free plan; it only works with paid plans. On the free plan you will not even see the admin database. HOWEVER, I contacted MongoLab and here is what they suggest:
Hello,
First of all, welcome to MongoLab. We'd be happy to help.
The failIndexKeyTooLong=false option is only necessary when your data
include indexed values that exceed the maximum key value length of
1024 bytes. This only occurs when Parse auto-indexes certain
collections, which can actually lead to incorrect query results. Parse
has updated their migration guide to include a bit more information
about this, here:
https://parse.com/docs/server/guide#database-why-do-i-need-to-set-failindexkeytoolong-false-
Chances are high that your migration will succeed without this
parameter being set. Can you please give that a try? If for any reason
it does fail, please let us know and we can help you on potential next
steps.
Our Dedicated and Shared Cluster plans
(https://mongolab.com/plans/pricing/) do provide the ability to toggle
this option, but because our free Sandbox plans are running on shared
server processes, with other Sandbox users, this parameter is not
configurable.
When launching your MongoDB server, you can set this parameter to false:
mongod --setParameter failIndexKeyTooLong=false
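You can then verify that the parameter took effect from a mongo shell (assuming you can authenticate against the admin database):
db.adminCommand({ getParameter: 1, failIndexKeyTooLong: 1 })
// expected result: { "failIndexKeyTooLong" : false, "ok" : 1 }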
I have written an article that helps you set up Parse Server and all its dependencies on your own server:
https://medium.com/@jcminarro/run-parse-server-on-your-own-server-using-digitalocean-b2a7d66e1205

meteor: use different database for each user

I currently assign a MongoDB database to my Meteor app using the environment variable
"MONGO_URL": "mongodb://localhost:27017/dbName" when I start the Meteor instance.
So all data gets written to the mongo database with the name "dbName".
I am looking for a way to individually set the dbName for each customer upon login in order to separate their data into different databases.
This is generally unsupported, as the database is defined at startup. However, this thread offers a possible solution:
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/6
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>");
MyCollection = new Mongo.Collection("collection_name", { _driver: database });
This would allow you to define the database name in the mongo url but would require a fair bit of extra work to redefine your collections on a customer by customer basis.
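For example, a rough server-side sketch of reusing this pattern per customer might look like the following (the database naming scheme, the getCustomerCollection() helper, and the caching policy are assumptions; only RemoteCollectionDriver and the _driver option come from the thread above):
// Server-side sketch: one RemoteCollectionDriver (one connection pool) per customer database.
var driversByDb = {};
var collectionsByKey = {};

function getCustomerCollection(customerDbName, collectionName) {
  var key = customerDbName + '/' + collectionName;
  if (!collectionsByKey[key]) {
    if (!driversByDb[customerDbName]) {
      // cache drivers: each one opens its own connection pool
      driversByDb[customerDbName] = new MongoInternals.RemoteCollectionDriver(
        'mongodb://localhost:27017/' + customerDbName);
    }
    collectionsByKey[key] = new Mongo.Collection(collectionName, {
      _driver: driversByDb[customerDbName]
    });
  }
  return collectionsByKey[key];
}

// usage (server only), e.g. inside a publication:
// var Orders = getCustomerCollection('customer_' + customerId, 'orders');
// Note: Meteor registers DDP mutation methods per collection name, so reusing the
// same collectionName for several customers may need customer-specific names or
// the { defineMutationMethods: false } option.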
Here's another approach that will make your life eternally easier:
Create a generic site with no accounts at mysite.com
When they login at mysite.com, figure out what site they actually belong to and redirect them to customerName.mysite.com and log them in there
Run a separate instance of Meteor configured for a different mongo at each site
nginx might help you with the above.
It is generally good practice to run separate DBs when offering a B2B
solution.
That's a matter of opinion that depends heavily on the platform. Many SaaS providers would argue that point.

how to understand read preferences in mongo

I'm just getting started with MongoDB. I am trying to understand how to set up my secondary database servers so that when there is no primary, the secondaries can be used to read data. I believe the read preference I'm going for is primaryPreferred.
Now that I roughly understand which read preference I want to test out, I'm trying to understand how to set up my replica set for primaryPreferred.
I've been reading through the following documentation:
http://docs.mongodb.org/manual/tutorial/configure-replica-set-tag-sets/
Questions:
1. Is this the right doc to follow to set up read preferences?
2. Assuming that it is, I want to verify that the tag names / values are anything that I come up with? So specifically, the key used in the example, "dc", is NOT a keyword in mongo. Is that correct?
3. Once I set up these tags, do I have to specify any settings in my client when connecting to the mongo database? I'm using a PHP front end, and I found this:
http://php.net/manual/en/mongodb.setreadpreference.php
4. Can you confirm that these tags replace the rs.slaveOk() method?
Environment:
mongoDB version 2.6.5
replica set with 3 members - one primary and 2 secondary servers
1. Yes.
2. Yes; see the tag-set sketch below.
3. Yes, but the link that you provided only covers readPreference. You also need to supply a custom writeConcern (extract from the link in the question):
db.users.insert( { id: "xyz", status: "A" }, { writeConcern: { w: "MultipleDC" } } )
Look into the PHP driver documentation for how to do that.
4. Yes, you may skip the call to slaveOk() in this case (especially since in 95% of cases you will be reading from the primary).
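For completeness, here is a rough mongo shell sketch of adding the tags and the MultipleDC write concern mode from the tag-set tutorial linked in the question (member indexes and tag values are placeholders for your own topology):
// run on the primary: attach arbitrary tags to each member
var cfg = rs.conf();
cfg.members[0].tags = { dc: "east" };
cfg.members[1].tags = { dc: "west" };
cfg.members[2].tags = { dc: "west" };
// define the custom write concern mode used by { w: "MultipleDC" } above:
// the write must reach members in at least 2 distinct "dc" tag values
cfg.settings = cfg.settings || {};
cfg.settings.getLastErrorModes = { MultipleDC: { dc: 2 } };
rs.reconfig(cfg);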

How to properly handle mongoose schema migrations?

I'm completely new to MongoDB & Mongoose and can't seem to find an answer as to how to handle migrations when a schema changes.
I'm used to running migration SQL scripts that alter table structure and any underlying data that needs to be changed. This typically involves DB downtime.
How is this typically handled within MongoDB/Mongoose? Any gotchas that I need to be aware of?
Coming to this with a reasonable understanding of how migrations work on a relational database, I found that MongoDB makes this a little simpler. I've come to 2 ways to break this down. The things to consider when dealing with data migrations in MongoDB (not all that different from RDBs) are:
Ensuring local test environments do not break when a developer merges the latest from the project repository
Ensuring any data is correctly updated on the live version, regardless of whether a user is logged in or out (if authentication is used). (Of course, if everyone is automatically logged out when an upgrade is made, then you only need to worry about what happens when a user logs in.)
1) If your change will log everyone out or application downtime is expected, then the simple way to do this is to have a migration script that connects to the local or live MongoDB and upgrades the correct data. Here is an example where a user's name is changed from a single string to an object with given and family names (very basic, of course, and it would need to be put into a script for all developers to run):
Using the CLI:
mongo
use myDatabase
db.myUsers.find().forEach( function(user){
var curName = user.name.split(' '); //need some more checks..
user.name = {given: curName[0], family: curName[1]};
db.myUsers.save( user );
})
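Since the question mentions Mongoose, the same one-off migration could also be written as a standalone Node script. This is only a sketch: the connection string, the users collection name, and the assumption that name is currently a plain string are placeholders to adapt to your project. It goes through the raw driver collection so the old string values aren't re-cast by the new schema:
// migrate-user-names.js -- one-off script: node migrate-user-names.js
var mongoose = require('mongoose');

async function run() {
  await mongoose.connect('mongodb://localhost:27017/myDatabase');
  // use the raw driver collection, bypassing the new { given, family } schema
  var users = mongoose.connection.db.collection('users');
  var cursor = users.find({ name: { $type: 2 } }); // $type 2 = string
  while (await cursor.hasNext()) {
    var doc = await cursor.next();
    var parts = doc.name.split(' '); // needs some more checks, as in the shell version
    await users.updateOne(
      { _id: doc._id },
      { $set: { name: { given: parts[0], family: parts[1] } } }
    );
  }
  await mongoose.disconnect();
}

run().catch(function (err) {
  console.error(err);
  process.exit(1);
});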
2) You want the application to migrate the schemas up and down based on the application version it is running. This will obviously be less of a burden for a live server and not require downtime, since users are only upgraded when they use the upgraded / downgraded version for the first time.
If you're using middleware in Express.js for Node.js:
Set an app variable in your root app script via app.set('schemaVersion', 1) which will be used later to compare to the users schema version.
Now ensure all the user schemas have a schemaVersion property as well so we can detect a change between the application schema version and the current MongoDB schemas for THAT PARTICULAR USER only.
Next, we need to create simple middleware to detect the config and user versions:
app.use( function( req, res, next ){
//If we're not on an authenticated route
if( ! req.user ){
next();
return;
}
//retrieving the user info will be server dependent
if( req.user.schemaVersion === app.get('schemaVersion')){
next();
return;
}
//handle upgrade if user version is less than app version
//handle downgrade if user version is greater than app version
//save the user version to your session / auth token / MongoDB where necessary
})
For the upgrade / downgrade I would make simple JS files under a migrations directory with upgrade / downgrade export functions that accept the user model and run the migration changes on that particular user in MongoDB. Lastly, ensure the user's version is updated in your MongoDB so they don't run the changes again unless they move to a different version.
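As a sketch of what such a migration file could look like (the file name, the version numbering, and the callback signature are assumptions for your own app; the middleware above would require the appropriate files and call up or down for each version between the user's version and the app's version):
// migrations/2.js -- moves a user document between schema versions 1 and 2
module.exports = {
  version: 2,

  // upgrade a user from version 1 to version 2: split "name" into an object
  up: function (user, done) {
    var parts = (user.name || '').split(' ');
    user.name = { given: parts[0], family: parts[1] || '' };
    user.schemaVersion = 2;
    user.save(done);
  },

  // downgrade from version 2 back to version 1: flatten "name" to a string
  down: function (user, done) {
    user.name = [user.name.given, user.name.family].join(' ').trim();
    user.schemaVersion = 1;
    user.save(done);
  }
};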
If you're used to SQL-style or Rails-like migrations, then you'll find my CLI tool migrate-mongoose a good fit for you.
It allows you to write migrations with an up and a down function, and it manages the state for you based on the success and failure of your migrations.
It also supports ES6 / ES2015 syntax.
You get access to your mongoose models via the this object, making it easy to make the changes you need to your models and schemas.
There are 2 types of migrations:
Offline: Will require you to take your service down for maintenance, then iterate over the entire collection and make the changes you need.
Online: Does not require taking your service down for maintenance. When you read a document, you check its version and run a version-specific migration routine for each version between the old and the new. Then you load the resulting document.
Not all services can afford an offline migration, so I recommend the online approach.