How to ensure successful authentication with scala reactivemongo play - mongodb

I am using reactivemongo to connect to MongoDB.
val connection: MongoConnection = driver.connection(hosts, options = conOpts, authentications = List(credentials))
val db = connection(database)
val collection = db(collection)
val resultData = collection.find(query, filter)
And the first time I try to query the database, I get a:
Error executing MongoDB Query reactivemongo.core.errors.DetailedDatabaseException: DatabaseException['not authorized for query on test.test' (code = 13)]
If I try again, the query usually succeeds. I presume that this is because the authentication hasn't had time to successfully complete when the find method is called for the first time.
So I wonder if there is a way to check the status of the authentication in order to wait for its completion before querying the database?

ReactiveMongo > 0.11.x supports SCRAM-SHA-1 authentication, but it is not the default. To enable it, just add "?authMode=scram-sha1" at the end of your mongodb.uri:
uri = "mongodb://userName:password@localhost/databaseName?authMode=scram-sha1"
To set up users properly, just follow the official doc:
https://docs.mongodb.com/manual/tutorial/enable-authentication/
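As a side note, if the user name or password contains characters that are special in URIs, they must be percent-encoded before being placed in mongodb.uri. A minimal Python sketch of building such a URI (the credentials here are hypothetical):

```python
from urllib.parse import quote_plus

# Hypothetical credentials, for illustration only.
user = "userName"
password = "p@ss/word"  # '@' and '/' would otherwise break URI parsing

# Percent-encode the credentials, then append the authMode option
# to enable SCRAM-SHA-1 authentication.
uri = "mongodb://%s:%s@localhost/databaseName?authMode=scram-sha1" % (
    quote_plus(user), quote_plus(password))

print(uri)
```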

Related

MongoDB doesn't work as expected (Realm.findAll)

I am a newbie with MongoDB Realm. I followed this guide to get started: https://www.mongodb.com/docs/realm/sdk/java/quick-start-sync/
This is the implementation to fetch all employees from MongoDB.
val employeeRealmConfig = SyncConfiguration.Builder(
realmApp.currentUser()!!,
AppConfigs.MONGODB_REALM_USER_PARTITION_ID
).build()
backGroundRealm = Realm.getInstance(employeeRealmConfig)
val queryEmployeesTask = backGroundRealm.where<Employee>().findAll()
I print out the queryEmployeesTask size, but each time I run my application a different result is printed, and the queryEmployeesTask size is < 25000. I used MongoDB Compass to check the database; there are 25000 records for partition AppConfigs.MONGODB_REALM_USER_PARTITION_ID.
I want to get the full 25000 records. Could you help me resolve this problem?
Thanks in advance.
After checking the documentation carefully, I realized that the Employee object on the client had a different schema from the MongoDB Atlas schema. After correcting this, val queryEmployeesTask = backGroundRealm.where<Employee>().findAll() returns the correct result.
I hope this helps someone who has the same problem.

MongoDB Change Stream for Azure CosmosDB MongoDB API

I have to use MongoDB Change Streams. MongoDB is set up using the Cosmos DB API for MongoDB, and wire compatibility is supported.
Somehow I cannot set up a watch. Here is the code I am using:
string connectionstring="my connection string";
var mongoClient = new MongoClient(connectionstring);
var database = mongoClient.GetDatabase("Events");
var collection = database.GetCollection<BsonDocument>("ACollection");
var options = new ChangeStreamOptions() { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match("{ operationType: 'insert' }");
var cursor = collection.Watch(pipeline, options).ToEnumerable();
The last line throws an exception:
Unhandled exception. MongoDB.Driver.MongoCommandException: Command aggregate failed: Change stream must be followed by a match and then a project stage.
...
...
at MongoDB.Driver.MongoCollectionImpl`1.Watch[TResult](PipelineDefinition`2 pipeline, ChangeStreamOptions options, CancellationToken cancellationToken)
at CosmosChangeStream.Program.Main(String[] args) in Program.cs:line
I have also tried
cursor = collection.Watch();
This line is written like this in many getting started articles, but it throws another exception.
MongoDB.Driver.MongoCommandException: Command aggregate failed: fullDocument option must be "updateLookup"..
Obviously it is looking for a projection. I wonder how so many examples have code that does not run. But that is not my problem; my problem is to get an IEnumerable change stream for all inserts to one or all collections in a database, and be on my way.
I have validated my connection to the database and collection by reading a document.
Thanks to anyone who looks at this.
I was able to make it work with the help of this article:
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/change-streams?tabs=csharp
It contains a pipeline definition with the required match and projection stages.
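For reference, the pipeline shape that Cosmos DB's API for MongoDB accepts (a $match stage on operationType followed by a $project stage) can be written as plain documents. The sketch below uses Python dicts, with the stage contents modeled on the linked Microsoft article:

```python
# Cosmos DB requires the change stream pipeline to be a $match stage
# followed by a $project stage, with fullDocument set to "updateLookup".
pipeline = [
    {"$match": {"operationType": {"$in": ["insert", "update", "replace"]}}},
    {"$project": {"_id": 1, "fullDocument": 1, "ns": 1, "documentKey": 1}},
]

# With PyMongo, for example, this would be passed as:
#   cursor = collection.watch(pipeline, full_document="updateLookup")
print(pipeline)
```

The same two stages can be expressed with any driver's pipeline-builder API, as the C# answer above does.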

Unable to pull ALL documents from MongoDB collection in jmeter

I am unable to pull all the documents from a MongoDB collection using the JMeter JSR223 sampler.
I'm trying to pull all the documents from a MongoDB collection into JMeter, to pass them as test cases for my performance test execution.
The below works -
myDoc = collection.find(eq("ABCValue", "ABC")).first();
log.info myDoc.toJson();
collection.find(...query...).last(); also works.
I'm able to pull the first and last value from the MongoDB collection for that query. However, I am unable to pull all the documents from the collection when I use the following:
myDoc = collection.find();
log.info myDoc.toJson();
This fails only in JMeter. Please help!
To print all the documents returned from find(), you need to iterate over them:
for (Document cur : collection.find()) {
log.info cur.toJson();
}
The find() method returns a FindIterable instance that provides a fluent interface for chaining other methods.
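The same iteration pattern in Python, using a stand-in list in place of a live cursor (a cursor, like a FindIterable, is simply an iterable of documents):

```python
import json

# Stand-in for the documents a find() cursor would yield; with a real
# collection the loop would read: for doc in collection.find(): ...
fake_cursor = [
    {"ABCValue": "ABC", "n": 1},
    {"ABCValue": "ABC", "n": 2},
    {"ABCValue": "XYZ", "n": 3},
]

# Iterate over every document instead of taking only first()/last().
for doc in fake_cursor:
    print(json.dumps(doc))
```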

Meteor - find return undefined

I'm working with Meteor and MongoDB. I am trying to read a collection from the database. I can insert data into this collection from Meteor without any problem, but when I try to find, it doesn't work.
My collection is 'first'.
Server side:
Meteor.publish('first', function(){
return first.find();
});
Client side:
var datacollab = first.find({"Mois":"Mars"});
console.log("collab: " + datacollab);
When I run this command in the mongo shell, it works fine.
I have already tried changing my query to findOne, and putting .fetch() at the end.
If you need your code to be in your Template.myTemplate.onRendered hook, then you have several options:
Use a Tracker.autorun that will be automatically re-executed when your DB query / cursor returns new data.
Use the onReady callback of the subscription (assuming you subscribe either when template is created or rendered). Your callback will be executed when the client has received a first full snapshot of the server publication.

Does mongodb provide triggers, like in RDBMS? [duplicate]

I'm creating a sort of background job queue system with MongoDB as the data store. How can I "listen" for inserts to a MongoDB collection before spawning workers to process the job?
Do I need to poll every few seconds to see if there are any changes from last time, or is there a way my script can wait for inserts to occur?
This is a PHP project that I am working on, but feel free to answer in Ruby or language agnostic.
What you are thinking of sounds a lot like triggers. MongoDB does not have any support for triggers, however some people have "rolled their own" using some tricks. The key here is the oplog.
When you run MongoDB in a Replica Set, all of the MongoDB actions are logged to an operations log (known as the oplog). The oplog is basically just a running list of the modifications made to the data. Replica Sets function by listening to changes on this oplog and then applying the changes locally.
Does this sound familiar?
I cannot detail the whole process here, it is several pages of documentation, but the tools you need are available.
First some write-ups on the oplog
- Brief description
- Layout of the local collection (which contains the oplog)
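To make the oplog less abstract: each entry is an ordinary document. A sketch of what an insert entry roughly looks like (the values here are invented; in a real oplog, ts is a BSON Timestamp):

```python
# Hypothetical oplog entry for an insert (field values are made up).
oplog_entry = {
    "ts": (1471952088, 1),            # operation timestamp (a BSON Timestamp in reality)
    "op": "i",                        # operation type: "i" insert, "u" update, "d" delete
    "ns": "my_db.my_collection",      # namespace (database.collection) affected
    "o": {"_id": 1, "msg": "hello"},  # the document that was inserted
}

print(oplog_entry["op"], oplog_entry["ns"])
```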
You will also want to leverage tailable cursors. These will provide you with a way to listen for changes instead of polling for them. Note that replication uses tailable cursors, so this is a supported feature.
MongoDB has what are called capped collections and tailable cursors, which allow MongoDB to push data to listeners.
A capped collection is essentially a collection that is a fixed size and only allows insertions. Here's what it would look like to create one:
db.createCollection("messages", { capped: true, size: 100000000 })
MongoDB Tailable cursors (original post by Jonathan H. Wage)
Ruby
coll = db.collection('my_collection')
cursor = Mongo::Cursor.new(coll, :tailable => true)
loop do
  if doc = cursor.next_document
    puts doc
  else
    sleep 1
  end
end
PHP
$mongo = new Mongo();
$db = $mongo->selectDB('my_db');
$coll = $db->selectCollection('my_collection');
$cursor = $coll->find()->tailable(true);
while (true) {
    if ($cursor->hasNext()) {
        $doc = $cursor->getNext();
        print_r($doc);
    } else {
        sleep(1);
    }
}
Python (by Robert Stewart)
from pymongo import Connection
import time

db = Connection().my_db
coll = db.my_collection
cursor = coll.find(tailable=True)
while cursor.alive:
    try:
        doc = cursor.next()
        print doc
    except StopIteration:
        time.sleep(1)
Perl (by Max)
use 5.010;
use strict;
use warnings;
use MongoDB;

my $db = MongoDB::Connection->new;
my $coll = $db->my_db->my_collection;
my $cursor = $coll->find->tailable(1);
for (;;) {
    if (defined(my $doc = $cursor->next)) {
        say $doc;
    }
    else {
        sleep 1;
    }
}
Additional Resources:
Ruby/Node.js Tutorial which walks you through creating an application that listens to inserts in a MongoDB capped collection.
An article talking about tailable cursors in more detail.
PHP, Ruby, Python, and Perl examples of using tailable cursors.
Check out Change Streams, new in MongoDB 3.6 (released January 10, 2018):
https://docs.mongodb.com/v3.6/changeStreams/
https://docs.mongodb.com/manual/release-notes/3.6/
EDIT: I wrote an article about how to do this: https://medium.com/riow/mongodb-data-collection-change-85b63d96ff76
$ mongod --version
db version v3.6.2
In order to use change streams, the database must be part of a Replica Set.
More about Replica Sets:
https://docs.mongodb.com/manual/replication/
Your database will be a "Standalone" by default.
How to convert a Standalone to a Replica Set: https://docs.mongodb.com/manual/tutorial/convert-standalone-to-replica-set/
The following example is a practical application of how you might use this, specifically for Node:
/* file.js */
'use strict'

module.exports = function (
    app,
    io,
    User // Collection Name
) {
    // SET WATCH ON COLLECTION
    const changeStream = User.watch();

    // Socket Connection
    io.on('connection', function (socket) {
        console.log('Connection!');

        // USERS - Change
        changeStream.on('change', function (change) {
            console.log('COLLECTION CHANGED');
            User.find({}, (err, data) => {
                if (err) throw err;
                if (data) {
                    // RESEND ALL USERS
                    socket.emit('users', data);
                }
            });
        });
    });
};
/* END - file.js */
Useful links:
https://docs.mongodb.com/manual/tutorial/convert-standalone-to-replica-set
https://docs.mongodb.com/manual/tutorial/change-streams-example
https://docs.mongodb.com/v3.6/tutorial/change-streams-example
http://plusnconsulting.com/post/MongoDB-Change-Streams
Since MongoDB 3.6 there is a new notifications API called Change Streams which you can use for this. See this blog post for an example. The blog predates the released driver API; with the final PyMongo watch() API the equivalent is:
cursor = client.my_db.my_collection.watch([
    {'$match': {
        'operationType': {'$in': ['insert', 'replace']}
    }}
])
# Loops forever.
for change in cursor:
    print(change['fullDocument'])
MongoDB version 3.6 now includes change streams which is essentially an API on top of the OpLog allowing for trigger/notification-like use cases.
Here is a link to a Java example:
http://mongodb.github.io/mongo-java-driver/3.6/driver/tutorials/change-streams/
A NodeJS example might look something like:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:22000/MyStore?readConcern=majority")
    .then(function (client) {
        let db = client.db('MyStore');
        let change_streams = db.collection('products').watch();
        change_streams.on('change', function (change) {
            console.log(JSON.stringify(change));
        });
    });
Alternatively, you could use the standard Mongo FindAndUpdate method, and within the callback, fire an EventEmitter event (in Node) when the callback is run.
Any other parts of the application or architecture listening to this event will be notified of the update, and any relevant data sent there also. This is a really simple way to achieve notifications from Mongo.
Many of these answers will only give you new records and not updates, and/or are extremely inefficient.
The only reliable, performant way to do this is to create a tailable cursor on the oplog.rs collection in the local database to get ALL changes to MongoDB and do with them what you will. (MongoDB even does this internally, more or less, to support replication!)
Explanation of what the oplog contains:
https://www.compose.com/articles/the-mongodb-oplog-and-node-js/
Example of a Node.js library that provides an API around what is available to be done with the oplog:
https://github.com/cayasso/mongo-oplog
There is an awesome set of services available called MongoDB Stitch. Look into Stitch functions/triggers. Note that this is a cloud-based paid service (AWS). In your case, on an insert, you could call a custom function written in JavaScript.
Actually, instead of watching the output, why not get notified when something new is inserted by using the middleware provided by a Mongoose schema? You can catch the insert event for a new document and do something after the insertion is done.
There is a working Java example which can be found here.
MongoClient mongoClient = new MongoClient();
DBCollection coll = mongoClient.getDB("local").getCollection("oplog.rs");
DBCursor cur = coll.find().sort(BasicDBObjectBuilder.start("$natural", 1).get())
        .addOption(Bytes.QUERYOPTION_TAILABLE | Bytes.QUERYOPTION_AWAITDATA);

System.out.println("== open cursor ==");

Runnable task = () -> {
    System.out.println("\tWaiting for events");
    while (cur.hasNext()) {
        DBObject obj = cur.next();
        System.out.println(obj);
    }
};
new Thread(task).start();
The key is the set of query options given here.
You can also change the find query if you don't need to load all the data every time:
BasicDBObject query = new BasicDBObject();
query.put("ts", new BasicDBObject("$gt", new BsonTimestamp(1471952088, 1))); // timestamp is within some range
query.put("op", "i"); // only insert operations

DBCursor cur = coll.find(query).sort(BasicDBObjectBuilder.start("$natural", 1).get())
        .addOption(Bytes.QUERYOPTION_TAILABLE | Bytes.QUERYOPTION_AWAITDATA);
Since 3.6 you are allowed to use the following database trigger types:
- event-driven triggers - useful to update related documents automatically, notify downstream services, propagate data to support mixed workloads, and for data integrity & auditing
- scheduled triggers - useful for scheduled data retrieval, propagation, archival, and analytics workloads
Log into your Atlas account, select the Triggers interface, and add a new trigger.
Expand each section for more settings or details.