Watch mongo update events using oplogs - mongodb

I have to listen for changes to a particular field in MongoDB documents and send data to the client accordingly.
I have tried looking for solutions, but the only thing I could find is to query the oplog
(operation log) of MongoDB.
db.collection("oplog.rs", function(err, oplog) {})
Questions:
How are we actually going to make decisions based on oplogs (capped collections), and how frequently are we going to query the oplog to see the changes?
Is there an alternative solution to this problem, perhaps using Mongoose?

MongoDB keeps track of all database changes in a collection called the oplog (operations log). The oplog is useful when you need to keep track of each and every collection in the database. However, if you only need to keep track of one collection, you can create a tailable cursor on that particular collection using the code below. (Note: only capped collections can have a tailable cursor.)
If you do want to use the oplog: in a replica set (multiple MongoDB server instances), the oplog is enabled by default, but if you are working with a single-instance MongoDB server, you need to enable it by starting mongod as a replica set member:
mongod --replSet rs0
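After starting mongod this way, you also need to initiate the replica set once from the mongo shell, otherwise the oplog collection is never created:
rs.initiate()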
Then, to read records from the oplog collection using Mongoose, you can try the following code:
var mongoose = require('mongoose');
var db = mongoose.connect('mongodb://localhost/local'); // the local db holds the oplog collection
mongoose.connection.on('open', function callback() {
    var collection = mongoose.connection.db.collection('oplog.rs'); // or any capped collection
    var stream = collection.find({}, {
        tailable: true,
        awaitdata: true,
        numberOfRetries: Number.MAX_VALUE
    }).stream();
    stream.on('data', function(val) {
        console.log('Doc: %j', val);
    });
    stream.on('error', function(val) {
        console.log('Error: %j', val);
    });
    stream.on('end', function() {
        console.log('End of stream');
    });
});
The above is how you implement a tailable cursor with Mongoose; I hope it helps.
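To make decisions based on the oplog entries (the first question above), filter each document by its ns (namespace) and op fields and inspect the update payload. A minimal sketch on top of the stream above, assuming a hypothetical mydb.users collection and a watched status field:
stream.on('data', function (doc) {
    if (doc.ns !== 'mydb.users') return; // only our collection
    if (doc.op === 'u' && doc.o.$set && doc.o.$set.status !== undefined) {
        // the watched field changed; doc.o2 holds the _id of the updated document
        console.log('status of %s is now %s', doc.o2._id, doc.o.$set.status);
    }
});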

Related

How to avoid mongo returning duplicated documents when iterating a cursor over a constantly updated big collection?

Context
I have a big collection with millions of documents which is constantly updated by production workload. When performing a query, I have noticed that a document can be returned multiple times. My workload migrates the documents to a SQL system which enforces unique row ids, hence it crashes.
Problem
Because the collection is so big and lots of users are updating it after the query is started, iterating over the cursor's result may give me documents with the same id (old and updated version).
What I've tried
const cursor = db.collection.find(query, {snapshot: true});
while (await cursor.hasNext()) {
    const doc = await cursor.next();
    // do some stuff
}
Based on old documentation for the mongo driver (I'm using Node.js, but this is applicable to any official MongoDB driver), there is an option called snapshot which is said to avoid what is happening to me. Sadly, the driver returns an error indicating that this option does not exist (it was deprecated).
Question
Is there a way to iterate through the documents of a collection in a safe fashion so that I don't get the same document twice?
I only see a viable option with the aggregation pipeline, but I want to explore other options with standard queries.
Finally I got the answer from a mongo changelog page:
MongoDB 3.6.1 deprecates the snapshot query option.
For MMAPv1, use hint() on the { _id: 1} index instead to prevent a cursor from returning a document more than once if an intervening write operation results in a move of the document.
For other storage engines, use hint() with { $natural : 1 } instead.
So, from my code example:
const cursor = db.collection.find(query).hint({$natural: 1});
while (await cursor.hasNext()) {
    const doc = await cursor.next();
    // do some stuff
}

Sometimes findOneAndUpdate inserts a document twice

I am facing a problem where MongoDB inserts the same document twice, although it is a very rare situation.
I am coding a live stats website covering only soccer stats. To insert a document into the database, I first check whether the same match already exists; if it does, I update it with the new values. That's it. I cannot share all the code, but I am using the findOneAndUpdate method with the upsert option. In the findOneAndUpdate callback I have to modify the returned document, after which I save it with doc.save(). The script uses a cron job (the node-cron library) that runs every minute. Most of the time I get the results I want, updated with new values where they exist, but sometimes results are inserted twice, as you can see in the image.
// in an async forEach:
await MatchObject.findOneAndUpdate(
    {matchid: id}, // find a document with that filter
    modelDoc, // document to insert when nothing was found
    {upsert: true, new: true, runValidators: true, useFindAndModify: false}, // options
    function (err, doc) { });
// Inside the findOneAndUpdate callback I also call save() on the returned (modified) document:
doc.save().then(res => console.log(res));
node v11.15.0
MongoDB shell version v4.0.10
mongoDB version v4.0.10
mongoose 5.6.2
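For what it's worth, a common cause of duplicate upserts is two overlapping runs (here, cron ticks) upserting the same match concurrently; MongoDB only guarantees a single document per filter if the queried field has a unique index. A minimal sketch of that commonly recommended fix, using the MatchObject model from the question:
// A unique index on the upsert filter field makes concurrent upserts safe:
// only one insert can succeed; the other gets a duplicate key error
// instead of creating a second document.
MatchObject.collection.createIndex({ matchid: 1 }, { unique: true });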

Mongoose - Does where cause an extra query?

For MongoDB and Mongoose, does this cause two queries because of the where clause, like findAndModify does because it returns the whole document before modifying?
Model.where({ _id: id }).update({ title: 'words' })
Nope, but neither does findAndModify, as in both cases the entire command is executed atomically by the MongoDB server.
To confirm, you can see the commands that Mongoose is executing by adding the following to your code:
mongoose.set('debug', true);

Does mongodb provide triggers, like in RDBMS? [duplicate]

I'm creating a sort of background job queue system with MongoDB as the data store. How can I "listen" for inserts to a MongoDB collection before spawning workers to process the job?
Do I need to poll every few seconds to see if there are any changes from last time, or is there a way my script can wait for inserts to occur?
This is a PHP project that I am working on, but feel free to answer in Ruby or in a language-agnostic way.
What you are thinking of sounds a lot like triggers. MongoDB does not have any support for triggers; however, some people have "rolled their own" using a few tricks. The key here is the oplog.
When you run MongoDB in a replica set, all of the MongoDB actions are logged to an operations log (known as the oplog). The oplog is basically just a running list of the modifications made to the data. Replica sets function by listening to changes on this oplog and then applying the changes locally.
Does this sound familiar?
I cannot detail the whole process here, as it spans several pages of documentation, but the tools you need are available.
First, some write-ups on the oplog:
- Brief description
- Layout of the local collection (which contains the oplog)
You will also want to leverage tailable cursors. These will provide you with a way to listen for changes instead of polling for them. Note that replication uses tailable cursors, so this is a supported feature.
MongoDB has what are called capped collections and tailable cursors, which allow MongoDB to push data to listeners.
A capped collection is essentially a collection that is fixed in size and only allows insertions. Here's what it would look like to create one:
db.createCollection("messages", { capped: true, size: 100000000 })
MongoDB Tailable cursors (original post by Jonathan H. Wage)
Ruby
coll = db.collection('my_collection')
cursor = Mongo::Cursor.new(coll, :tailable => true)
loop do
  if doc = cursor.next_document
    puts doc
  else
    sleep 1
  end
end
PHP
$mongo = new Mongo();
$db = $mongo->selectDB('my_db');
$coll = $db->selectCollection('my_collection');
$cursor = $coll->find()->tailable(true);
while (true) {
    if ($cursor->hasNext()) {
        $doc = $cursor->getNext();
        print_r($doc);
    } else {
        sleep(1);
    }
}
Python (by Robert Stewart)
from pymongo import Connection
import time

db = Connection().my_db
coll = db.my_collection
cursor = coll.find(tailable=True)
while cursor.alive:
    try:
        doc = cursor.next()
        print doc
    except StopIteration:
        time.sleep(1)
Perl (by Max)
use 5.010;
use strict;
use warnings;
use MongoDB;

my $db = MongoDB::Connection->new;
my $coll = $db->my_db->my_collection;
my $cursor = $coll->find->tailable(1);
for (;;) {
    if (defined(my $doc = $cursor->next)) {
        say $doc;
    }
    else {
        sleep 1;
    }
}
Additional Resources:
Ruby/Node.js Tutorial which walks you through creating an application that listens to inserts in a MongoDB capped collection.
An article talking about tailable cursors in more detail.
PHP, Ruby, Python, and Perl examples of using tailable cursors.
Check out this: Change Streams
January 10, 2018 - Release 3.6
EDIT: I wrote an article about how to do this: https://medium.com/riow/mongodb-data-collection-change-85b63d96ff76
https://docs.mongodb.com/v3.6/changeStreams/
It's new in mongodb 3.6
https://docs.mongodb.com/manual/release-notes/3.6/ 2018/01/10
$ mongod --version
db version v3.6.2
In order to use change streams, the database must be a member of a replica set.
More about replica sets:
https://docs.mongodb.com/manual/replication/
Your database will be a standalone by default.
How to convert a standalone to a replica set: https://docs.mongodb.com/manual/tutorial/convert-standalone-to-replica-set/
The following example is a practical application of how you might use this, specifically for Node.
/* file.js */
'use strict'

module.exports = function (
    app,
    io,
    User // collection name
) {
    // SET WATCH ON COLLECTION
    const changeStream = User.watch();

    // Socket connection
    io.on('connection', function (socket) {
        console.log('Connection!');

        // USERS - change
        changeStream.on('change', function (change) {
            console.log('COLLECTION CHANGED');
            User.find({}, (err, data) => {
                if (err) throw err;
                if (data) {
                    // RESEND ALL USERS
                    socket.emit('users', data);
                }
            });
        });
    });
};
/* END - file.js */
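Note that the example above attaches a new 'change' listener for every socket that connects. A variant that registers the listener once and broadcasts to all connected sockets (same hypothetical io and User names) might look like:
const changeStream = User.watch();

changeStream.on('change', function (change) {
    User.find({}, (err, data) => {
        if (err) throw err;
        if (data) {
            io.emit('users', data); // broadcast to every connected socket
        }
    });
});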
Useful links:
https://docs.mongodb.com/manual/tutorial/convert-standalone-to-replica-set
https://docs.mongodb.com/manual/tutorial/change-streams-example
https://docs.mongodb.com/v3.6/tutorial/change-streams-example
http://plusnconsulting.com/post/MongoDB-Change-Streams
Since MongoDB 3.6 there is a new notifications API called Change Streams which you can use for this. See this blog post for an example. Example from it:
cursor = client.my_db.my_collection.changes([
    {'$match': {
        'operationType': {'$in': ['insert', 'replace']}
    }},
    {'$match': {
        'newDocument.n': {'$gte': 1}
    }}
])

# Loops forever.
for change in cursor:
    print(change['newDocument'])
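Note that the snippet above is quoted from a pre-release blog post; in the drivers as actually released, this API is exposed as watch() rather than changes(). The Node example in the next answer uses the final spelling.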
MongoDB version 3.6 now includes change streams, which are essentially an API on top of the oplog allowing for trigger/notification-like use cases.
Here is a link to a Java example:
http://mongodb.github.io/mongo-java-driver/3.6/driver/tutorials/change-streams/
A NodeJS example might look something like:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:22000/MyStore?readConcern=majority")
    .then(function (client) {
        let db = client.db('MyStore');
        let change_streams = db.collection('products').watch();
        change_streams.on('change', function (change) {
            console.log(JSON.stringify(change));
        });
    });
Alternatively, you could use the standard Mongo findAndUpdate method and, within the callback, fire an EventEmitter event (in Node) when the callback is run.
Any other parts of the application or architecture listening for this event will be notified of the update, with any relevant data sent along too. This is a really simple way to achieve notifications from Mongo.
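A minimal sketch of that idea; the updateAndNotify wrapper and the event name are hypothetical, and it assumes a Node driver version where findOneAndUpdate resolves to a result with a value property:
const EventEmitter = require('events');
const bus = new EventEmitter();

function updateAndNotify(collection, filter, update) {
    return collection.findOneAndUpdate(filter, update, { returnOriginal: false })
        .then(function (result) {
            bus.emit('documentUpdated', result.value); // notify all listeners
            return result.value;
        });
}

// Elsewhere in the application:
bus.on('documentUpdated', function (doc) {
    console.log('Updated:', doc);
});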
Many of these answers will only give you new records and not updates, and/or are extremely inefficient.
The only reliable, performant way to do this is to create a tailable cursor on the local db's oplog.rs collection to get ALL changes to MongoDB and do with them what you will. (MongoDB even does this internally, more or less, to support replication!)
Explanation of what the oplog contains:
https://www.compose.com/articles/the-mongodb-oplog-and-node-js/
Example of a Node.js library that provides an API around what is available to be done with the oplog:
https://github.com/cayasso/mongo-oplog
There is an awesome set of services available called MongoDB Stitch. Look into Stitch functions/triggers. Note that this is a cloud-based paid service (AWS). In your case, on an insert, you could call a custom function written in JavaScript.
Actually, instead of watching output, you can get notified when something new is inserted by using the middleware provided by Mongoose schemas.
You can catch the insertion of a new document and do something once the insertion is done.
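A minimal sketch of such middleware, assuming a hypothetical User schema; note that 'save' middleware only fires for save() calls, not for query-based updates like updateOne():
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({ name: String });

// Runs after each successful save() (inserts and document saves)
userSchema.post('save', function (doc) {
    console.log('Document saved: %s', doc._id);
    // notify clients, enqueue a job, etc.
});

const User = mongoose.model('User', userSchema);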
There is a working Java example which can be found here.
MongoClient mongoClient = new MongoClient();
// Use the legacy API here: getDB() returns a DBCollection that works with DBCursor
DBCollection coll = mongoClient.getDB("local").getCollection("oplog.rs");
DBCursor cur = coll.find().sort(BasicDBObjectBuilder.start("$natural", 1).get())
        .addOption(Bytes.QUERYOPTION_TAILABLE | Bytes.QUERYOPTION_AWAITDATA);

System.out.println("== open cursor ==");

Runnable task = () -> {
    System.out.println("\tWaiting for events");
    while (cur.hasNext()) {
        DBObject obj = cur.next();
        System.out.println(obj);
    }
};
new Thread(task).start();
The key is the query options given here: QUERYOPTION_TAILABLE | QUERYOPTION_AWAITDATA.
You can also change the find query if you don't need to load all the data every time:
BasicDBObject query = new BasicDBObject();
query.put("ts", new BasicDBObject("$gt", new BsonTimestamp(1471952088, 1))); // timestamp within some range
query.put("op", "i"); // only insert operations

DBCursor cur = coll.find(query).sort(BasicDBObjectBuilder.start("$natural", 1).get())
        .addOption(Bytes.QUERYOPTION_TAILABLE | Bytes.QUERYOPTION_AWAITDATA);
Since MongoDB 3.6 you can use the following database trigger types (in MongoDB Atlas):
- event-driven triggers - useful to update related documents automatically, notify downstream services, propagate data to support mixed workloads, data integrity and auditing
- scheduled triggers - useful for scheduled data retrieval, propagation, archival and analytics workloads
Log into your Atlas account, select the Triggers interface and add a new trigger:
Expand each section for more settings or details.
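An event-driven trigger runs a function that receives the change event; a minimal sketch of such a function (the collection and fields involved are hypothetical):
// Atlas trigger function, invoked once per matching change event
exports = function (changeEvent) {
    console.log('Operation: ' + changeEvent.operationType);
    if (changeEvent.operationType === 'insert') {
        // fullDocument is available for inserts and replacements
        console.log('Inserted _id: ' + changeEvent.fullDocument._id);
    }
};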

MongoDB: range queries on insertion time with _id and ObjectID

I am trying to use MongoDB's ObjectID to do a range query on the insertion time of a given collection. I can't really find any documentation that this is possible, except for this blog entry: http://mongotips.com/b/a-few-objectid-tricks/.
I want to fetch all documents created after a given timestamp. Using the nodejs driver, this is what I have:
var timeId = ObjectId.createFromTime(timestamp);
var query = {
    localUser: userId,
    _id: {$gte: timeId}
};
var cursor = collection.find(query).sort({_id: 1});
I always get the same number of records (19 in a collection of 27), independent of the timestamp. I noticed that createFromTime only fills the bytes in the ObjectId related to time; the other ones are left at 0 (like this: 4f6198be0000000000000000).
The reason I try to use an ObjectID for this is that I need the timestamp from when the document is inserted on the MongoDB server, not from when the document is passed to the MongoDB driver in Node.
Does anyone know how to make this work, or have another idea for generating and querying insertion times on the MongoDB server?
Not sure about the Node.js driver, but in Ruby you can simply apply range queries like this:
jan_id = BSON::ObjectId.from_time(Time.utc(2012, 1, 1))
feb_id = BSON::ObjectId.from_time(Time.utc(2012, 2, 1))
#users.find({'_id' => {'$gte' => jan_id, '$lt' => feb_id}})
Make sure that
var timeId = ObjectId.createFromTime(timestamp) is actually creating an ObjectId.
Also try the query without localUser.
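For reference, the same two-sided range in the Node.js driver might look like this (createFromTime takes seconds since the epoch; the collection variable is from the question):
var ObjectId = require('mongodb').ObjectID;

var janId = ObjectId.createFromTime(Date.UTC(2012, 0, 1) / 1000);
var febId = ObjectId.createFromTime(Date.UTC(2012, 1, 1) / 1000);

collection.find({_id: {$gte: janId, $lt: febId}}).toArray(function (err, docs) {
    // docs had their _id generated on the server in January 2012
});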