How does Meteor know about MongoDB changes instantly?

I had been wondering for a while how to learn about MongoDB changes instantly, and the best solution I had ever seen was tailing the MongoDB logs. I had given up on the question until I met Meteor.
When there is a change in MongoDB (made through Meteor or directly), Meteor somehow notices the change instantly and applies it. You can see this when publishing changes or observing them (observe or observeChanges).
My guess was that Meteor does this by tailing the MongoDB logs, as that best solution suggests. But this assumption raises another question: Meteor can run with a MongoDB instance on a different host, and if the way Meteor notices MongoDB changes instantly is tailing logs, how could it tail the logs of a MongoDB running on a different machine and handle the changes? Tailing files on a different host requires painful workarounds, and I don't think Meteor goes that way.
Because of that question, I don't think log tailing is the answer.
I am curious about how this works, because if I understood it, I could use the same technique in my other projects without Meteor.
So, how does Meteor really know about MongoDB changes instantly? And, to ask once more: what would be the best way to learn about MongoDB changes instantly?
Thank you!

Yes, you are right: this comes from the new oplog tailing functionality. Meteor pretends that it is a MongoDB instance in a replica set, and it tails MongoDB's operations log for the collections it is monitoring.
Usually databases need multiple servers so that they can stay up in case one fails. MongoDB provides this functionality through replica sets. Every write is recorded in an operations log (the oplog), which the other members read in order to stay in sync.
So this is where Meteor comes in: Meteor pretends to be a MongoDB replica member, so when any data change shows up on this operations log for the other copies to see, Meteor knows that data has changed and can push the new data down to the client instantly.
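If you want to use the same technique outside Meteor, the core of it is opening a tailable cursor on the capped local.oplog.rs collection. Here is a minimal sketch with the official Node.js mongodb driver; the connection string and the watched namespace are placeholders, and the server must be running as a replica set, since standalone servers keep no oplog:

// Rough sketch of oplog tailing outside Meteor.
const { MongoClient } = require('mongodb');

async function tailOplog() {
  const client = await MongoClient.connect('mongodb://127.0.0.1:27017');
  const oplog = client.db('local').collection('oplog.rs');

  // Find the newest existing entry so we only report operations from now on.
  const last = await oplog.find().sort({ $natural: -1 }).limit(1).next();
  const filter = { ns: 'myapp.messages' }; // "database.collection" to watch (placeholder)
  if (last) filter.ts = { $gt: last.ts };

  // A tailable, awaitable cursor on the capped oplog collection blocks
  // until new entries arrive, much like `tail -f` on a log file.
  const cursor = oplog.find(filter, { tailable: true, awaitData: true });

  for await (const entry of cursor) {
    // entry.op is 'i' (insert), 'u' (update) or 'd' (delete)
    console.log(entry.op, entry.ns, entry.o);
  }
}

tailOplog().catch(console.error);

On MongoDB 3.6 and later, the supported way to get the same stream of events is the change streams API (collection.watch()), which is built on the oplog but doesn't depend on its internal format.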
Usage in production
You can use this new functionality with an external Mongo database by setting the MONGO_OPLOG_URL environment variable. You will need to configure MongoDB as a replica set first so that the operations log is enabled.
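A rough sketch of that setup; host, data path, and replica set name are placeholders:

# run mongod as a single-member replica set so it keeps an operations log
mongod --replSet rs0 --dbpath /data/db

# one-time initialization, from the mongo shell
rs.initiate()

# MONGO_OPLOG_URL must point at the "local" database, where the oplog lives
export MONGO_OPLOG_URL='mongodb://your.mongo.host:27017/local'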
This was a very recent introduction, put in just last week in version 0.7; you can see the blog post about it below.
More links about it:
https://www.meteor.com/blog/2013/12/17/meteor-070-scalable-database-queries-using-mongodb-oplog-instead-of-poll-and-diff
https://www.youtube.com/watch?v=_dzX_LEbZyI
https://github.com/meteor/meteor/wiki/Oplog-Observe-Driver

Related

Records updated in Compass keep reverting

I have a MongoDB instance hosted on AWS DocumentDB. There is only one node in the replica set, and this is MongoDB 4.0.0 Community edition.
Twice now I've updated records in Compass and clicked the "Update" button. I've confirmed that the change was made. A few hours later, the change reverts.
From my research, this is typically caused by a MongoDB rollback. But everything I've read says that rollbacks typically occur when the secondary databases associated with a replica set are out of sync with the primary. But I don't have secondary databases.
Can anyone provide any insight - I'm not sure where else to look or what else to research.
Edit to add: Also, is this likely to be a hosting problem (AWS DocumentDB) or a database problem directly?
All writes on Amazon DocumentDB are durable: the write concern is majority by default and can't be changed. There's also no rollback mechanism that would cause the database server to revert to a previous state. You must have some other client or application that is making its own updates and changing the document.
Try enabling the profiler, or, probably better, enable change streams and watch the changes to identify what's making them.
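A minimal Node.js sketch of watching a collection with a change stream; the connection string and names are placeholders, and on DocumentDB change streams have to be enabled for the cluster first:

const { MongoClient } = require('mongodb');

async function watchCollection() {
  const client = await MongoClient.connect('mongodb://user:pass@your-docdb-host:27017');
  const collection = client.db('mydb').collection('mycollection');

  // Emits one event per insert/update/delete on the collection.
  const stream = collection.watch([], { fullDocument: 'updateLookup' });

  for await (const change of stream) {
    // Log what changed and when, to correlate with the mystery reverts.
    console.log(new Date(), change.operationType, change.documentKey, change.updateDescription);
  }
}

watchCollection().catch(console.error);

A change stream won't tell you which client issued the write, but the timing and the exact fields touched usually narrow it down; the profiler (db.setProfilingLevel(2)) records the operations themselves, including the client address.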

Data is not being saved in MongoDB

Hi, I am using MongoDB, deployed on AWS, but the data is not being saved properly on the server.
I created many collections, but the data is not present inside them.
Do I need any other settings? Please let me know.
The database named READ_ME_TO_RECOVER_YOUR_DATA suggests that you created the mongod server without authentication, and some hackers were able to steal/delete all of your data, and are probably now expecting you to pay some bitcoin to get it back.
I doubt they actually made a backup of your data before deleting it, since they don't actually care about you or your data.
There was a blog post from the MongoDB folks a couple of years ago about how to avoid this: https://www.mongodb.com/blog/post/update-how-to-avoid-a-malicious-attack-that-ransoms-your-data
The #1 recommendation is to enable authentication.
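A rough sketch of the first steps, with placeholder user name and password:

// from the mongo shell: create an administrative user before turning auth on
use admin
db.createUser({
  user: "admin",
  pwd: "use-a-strong-password",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
})

# then restart mongod requiring authentication, bound only to the
# interfaces that actually need to reach it
mongod --auth --bind_ip 127.0.0.1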

Create new MongoDB instance based on existing data

I want to dockerize my production application. I've got MongoDB set up on the server, and I want to remove it and run a Docker container with MongoDB that works on the existing data. I have already tested this approach: I created a Docker container whose storage points at the host storage with the existing data. It is essentially a new MongoDB instance working on data created by the previous MongoDB that lived on the host. And it works: I can query data, my application can connect to the database, and so on. My question is: what are the threats of this approach? Is it even a good approach, or should I have imported dumped data from the previous instance when I created the new one?
I guess there's no right and wrong in this case; it depends on how you want it to work.
Let's say you left MongoDB running in the cloud.
Is it a development database? If so, how would you keep coding and testing without access to it?
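For reference, a rough sketch of the bind-mount approach the question describes; paths and image version are placeholders:

# mount the host's existing MongoDB data directory into the container;
# the image version should match the server version that wrote the files
docker run -d --name mongo \
  -p 27017:27017 \
  -v /var/lib/mongodb:/data/db \
  mongo:4.4

Whichever way you go, stop the old mongod on the host before starting the container, because two mongod processes must never open the same data files at once; a mongodump/mongorestore into a fresh container sidesteps the file-format compatibility concern at the cost of a slower migration.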

Share a MongoDB instance between Meteor apps without lag in reactivity?

This question has been asked multiple times, here and here, and the answer to get this working is fairly straightforward: add an environment variable to your bash_profile, and all Meteor instances on your localhost will share that MONGO_URL.
What I've noticed however is that while this may be the case, there's quite a bit of latency in the "reactivity" of Meteor. I've tested this with two very lean Meteor apps, with empty collections. Inserting a document to a collection from one Meteor app, where my second app is querying that same collection and printing out a field from the documents does work, but there's a noticeable lag before it updates. I've ruled out the possibility of the collection insertion being the source of the lag (simple console.log callback on the client of the first app, logging the id of the newly inserted document).
My purpose for having multiple apps (two to be precise) sharing the same MongoDB is to separate an admin panel from a mobile app without going crazy regarding name-spacing and bloat. This configuration works, but I'm not sure it's the "proper" way of accomplishing the task, and it certainly seems to be causing a performance hit.
Any insight into this matter would be appreciated. Thank you!
EDIT: To clarify, the db URL I'm using is on my localhost, and isn't something hosted online.
When you use an external database, Meteor will by default observe changes via periodic polling (every few seconds). The delay you are experiencing is a result of this polling process. You can remove the delay and reduce your app's CPU usage by taking advantage of Meteor's oplog tailing feature. In order to use it you will need to:
Get access to a MongoDB instance with the oplog turned on.
Set the environment variable MONGO_OPLOG_URL so your app(s) can read the oplog.
Personally, I'd recommend compose.io for this. They provide exactly this as part of their basic elastic deployment. See this post for detailed instructions.
If you wish to connect to the oplog that Meteor creates locally for you in development, you can obtain its URL via:
MongoInternals.defaultRemoteCollectionDriver().mongo._oplogHandle._oplogUrl
It should end up looking something like mongodb://127.0.0.1:3001/local
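To tie that together for the two-app setup in the question, both apps would be started with the same pair of variables; ports and database name are placeholders:

export MONGO_URL='mongodb://127.0.0.1:27017/shared_db'
export MONGO_OPLOG_URL='mongodb://127.0.0.1:27017/local'

meteor --port 3000    # admin panel, first terminal
meteor --port 3100    # mobile app, second terminal, same variables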

MongoLab strategies for restoring a database that has people currently using it

My site runs on MongoDB and I'm using MongoLab as a host for the database. I have 10-15 people using the site at any given time and would rather not 'switch them off' if at all possible.
If I have a MongoDB dump that I'd like to restore, it takes a very long time (the file size is around 360mb and this takes a good while to upload on my connection speed). What's more, it appears that MongoDB wants me to delete my collection before doing a restore, so the user would have no data to look through while it's updating.
Is there any way around this, besides, say, having two MongoLab accounts, one 'active' and one for uploading backups, and switching between the two when I need to do a restore?
Is there a general recommended strategy for this sort of thing?
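For context, the delete-before-restore behavior mentioned above is mongorestore's --drop option; a typical invocation looks like this, with placeholder host, port, and credentials:

# --drop removes each collection just before restoring it, so documents
# deleted since the dump was taken don't linger in the restored database
mongorestore --host ds012345.mongolab.com --port 12345 \
  -d mydb -u myuser -p mypassword --drop dump/mydb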