List all the query shapes in a MongoDB database or collection

I was going through the documentation on the official site and came across the term query shape while browsing the indexes section.
The details look interesting, and a list of these shapes could help me see all the queries being issued to a cluster while I plan to onboard an existing deployed application.
But the question I have now is: is there a way to get this list from the command line for a collection (or an entire database)?
As a side note, I use both Compass Community and Robo 3T as GUI tools to access the datastore, and I am also comfortable running commands directly in the mongo shell.

With some more time and effort, I found PlanCache.listQueryShapes(), which has since been superseded in more recent MongoDB versions.
The $planCacheStats aggregation stage introduced in 4.2 was what I was looking for. The following aggregation lists the query shapes cached for a collection, as mentioned in the List Query Shapes section:
db.user_collections.aggregate([
  { $planCacheStats: {} },
  { $project: { createdFromQuery: 1, queryHash: 1 } }
])
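If you want to cover an entire database rather than a single collection, a minimal sketch (assuming MongoDB 4.2+, a mongo shell session, and sufficient privileges to read the plan cache; the printing style is just an example) is to iterate over the collection names:

// Sketch: list cached query shapes for every collection in the current database.
// Assumes $planCacheStats is available (MongoDB 4.2+); error handling is omitted.
db.getCollectionNames().forEach(function (name) {
  var shapes = db.getCollection(name).aggregate([
    { $planCacheStats: {} },
    { $project: { createdFromQuery: 1, queryHash: 1 } }
  ]).toArray();
  if (shapes.length > 0) {
    print('Collection: ' + name);
    printjson(shapes);
  }
});

Keep in mind the plan cache only holds shapes for queries that have actually run since the server started (and it can be cleared), so this is a snapshot rather than an exhaustive catalogue of every query your application could issue.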

Related

Is Meteor using the Mongo oplog?

How can I check if Meteor is using the oplog of my MongoDB?
I have a MongoDB cluster and have set two environment variables for my Meteor app.
MONGO_URL=mongodb://mongo/app?replicaSet=rs0
MONGO_OPLOG_URL=mongodb://mongo/local?authSource=app
How can I check if the oplog is actually in use? Meteor can fall back to query polling, which is very inefficient, but I would like to verify that it's working properly with the oplog.
Any ideas?
Quoting the relevant bits from Meteor's OplogObserveDriver docs:
How to tell if your queries are using OplogObserveDriver
For now, we only have a crude way to tell how many observeChanges calls are using OplogObserveDriver, and not which calls they are.
This uses the facts package, an internal Meteor package that exposes real-time metrics for the current Meteor server. In your app, run meteor add facts, and add the {{> serverFacts}} template to your app. If you are using the autopublish package, Meteor will automatically publish all metrics to all users. If you are not using autopublish, you will have to tell Meteor which users can see your metrics by calling Facts.setUserIdFilter in server code; for example:
Facts.setUserIdFilter(function (userId) {
  var user = Meteor.users.findOne(userId);
  return user && user.admin;
});
(When running your app locally, Facts.setUserIdFilter(function () { return true; }); may be good enough!)
Now look at your app. The facts template will render a variety of metrics; the ones we're looking for are observe-drivers-oplog and observe-drivers-polling in the mongo-livedata section. If observe-drivers-polling is zero or not rendered at all, then all of your observeChanges calls are using OplogObserveDriver!
To set up oplog tailing, you need to create a user on my_database and an oplog_user on local. Then, specify the following URIs to connect to your replica set named test-shard (e.g. if there are 3 hosts named test-shard-[0-2]):
MONGO_URL="mongodb://user:PASS#test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/my_database?ssl=true&replicaSet=test-shard&authSource=admin"
MONGO_OPLOG_URL="mongodb://oplog_user:PASS#test-shard-0.mongodb.net:27017,test-shard-1.mongodb.net:27017,test-shard-2.mongodb.net:27017/local?ssl=true&replicaSet=test-shard&authSource=admin"
On MongoDB Atlas, ssl=true is required, and all users authenticate through the admin database. On another deployment you might authenticate through my_database instead, in which case you'd remove authSource=admin from MONGO_URL and use authSource=my_database for MONGO_OPLOG_URL. See this post for another example.
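For a self-managed deployment, a minimal mongo shell sketch for creating those two users looks like this (the user names, passwords, and roles are placeholders; on Atlas you would create the users through the Atlas UI instead):

// Run against the admin database; PASS is a placeholder.
use admin
db.createUser({
  user: "user",
  pwd: "PASS",
  roles: [{ role: "readWrite", db: "my_database" }]
})
db.createUser({
  user: "oplog_user",
  pwd: "PASS",
  roles: [{ role: "read", db: "local" }]
})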
With MongoDB 3.6 and the Mongo node driver 3.0+, you may be able to use a succinct notation for DNS seedlist connections, e.g. on MongoDB Atlas, to specify the environment variables:
MONGO_URL="mongodb+srv://user:PASS#foo.mongodb.net/my_database"
MONGO_OPLOG_URL="mongodb+srv://oplog_user:PASS#foo.mongodb.net/local"
The link above explains how this notation fills in the ssl, replicaSet, and authSource arguments. This is a lot nicer than the long strings above, and also means you can scale your replica set up and down without needing to reconfigure anything.
As hwillson mentioned, use the facts-ui and facts-base packages (formerly facts) to see if there are any oplogObserveDrivers running in your app. If they are all pollingObserveDriver, then oplog tailing is not set up correctly.
If you are using Kadira APM to monitor your app's performance, you can see if oplogs are working by navigating to the "Live Queries" section and having a look at the "Oplog notifications" chart.
If oplogs are working, values will appear in that chart (bottom right); if they aren't, the chart stays empty.
This may be very late, but this is the only way that worked for me:
someCollection._driver.mongo._oplogHandle
If this is null, the oplog is not enabled; otherwise you can use this handle to check for more details.
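For example, a minimal server-side check at startup (a sketch; Todos is a placeholder collection name, and _driver/_oplogHandle are internal, undocumented Meteor fields that may change between releases):

// Server code only: log whether oplog tailing is active for this collection.
Meteor.startup(function () {
  // _driver and _oplogHandle are internal Meteor APIs; assumed present in your version.
  if (Todos._driver.mongo._oplogHandle) {
    console.log('Oplog tailing is enabled');
  } else {
    console.log('Oplog tailing is NOT enabled; observers will fall back to polling');
  }
});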

What is more recommended to use in the C driver: mongoc_collection_command with "insert", or mongoc_collection_insert?

After working for a while with the C driver and reading the tutorials and the API docs,
I am a little confused.
According to this tutorial: http://api.mongodb.org/c/current/executing-command.html
I can execute database and collection commands, which also include the CRUD commands.
And I can even get a document cursor if I don't use the "_simple" variant of the command API.
So why do I need to use, for example, the mongoc_collection_insert() API function?
What are the differences? What is recommended?
Thanks
This question is probably similar to asking what the difference is between using the insert command and db.collection.insert() via the mongo shell.
mongoc_collection_insert() is a specific function written to insert a document into a collection, while mongoc_collection_command() is for executing any valid database command on a collection.
I would recommend using the API function (mongoc_collection_insert) whenever possible, for the following reasons:
The API functions were written as an abstraction layer with a specific purpose, so you don't have to deal with other details related to the command.
For example, mongoc_collection_insert exposes the right parameters for inserting, i.e. mongoc_write_concern_t and mongoc_insert_flags_t, with their respective default values. On the other hand, mongoc_collection_command has a broad range of parameters, such as mongoc_read_prefs_t, skip, or limit, which may not be relevant when inserting a document.
Any future changes to mongoc_collection_insert will more likely be considered within the correct context of insert.
Especially for CRUD, try to avoid using command, because the MongoDB wire protocol specifies different request opcodes for commands (OP_QUERY: 2004, sent against the $cmd collection) and inserts (OP_INSERT: 2002).
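To make the shell analogy above concrete, here is a sketch of the two equivalent approaches in the mongo shell (the collection name and document are made up):

// High-level helper: the shell/driver fills in sensible defaults for you.
db.users.insert({ name: "Ada" });

// Raw database command: you assemble the command document yourself.
db.runCommand({ insert: "users", documents: [{ name: "Ada" }] });

The same trade-off applies in the C driver: mongoc_collection_insert plays the role of db.users.insert, while mongoc_collection_command plays the role of db.runCommand.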

Import "normal" MongoDB collections into DerbyJS 0.6

Same situation like this question, but with current DerbyJS (version 0.6):
Using imported docs from MongoDB in DerbyJS
I have a MongoDB collection with data that was not saved through my
Derby app. I want to query against that and pull it into my Derby app.
Is this still possible?
The accepted answer there points to a link that is now dead. The newest working link would be this: https://github.com/derbyjs/racer/blob/0.3/lib/descriptor/query/README.md
Which refers to the 0.3 branch for Racer (current master version is 0.6).
What I tried
Searching the internets
The naïve way:
var query = model.query('projects-legacy', { public: true });
model.fetch(query, function () {
  query.ref('_page.projects');
});
(doesn't work)
A utility was written for this purpose: https://github.com/share/igor
You may need to modify it to run against only a single collection instead of the whole database, but it essentially goes through every document, adds the necessary livedb metadata to it, and creates a default operation for it as well.
In livedb, every collection has a corresponding operations collection; for example, profiles will have a profiles_ops collection which holds all the operations for profiles.
You will have to convert the collection to use it with Racer/livedb because of the metadata needed on the document itself.
An alternative, if you don't want to convert, is to use traditional AJAX/REST to get the data from your MongoDB database and then just put it in your local model. This will not be real-time or synced to the server, but it will allow you to drive your templates from data that you don't want to convert for some reason.
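As a rough sketch of that alternative (the route, variable names, and model path are assumptions; the server side is a plain Express route using the native MongoDB driver rather than any Derby-specific API, and jQuery's $.getJSON is just one way to make the request):

// Server side: expose the legacy collection over a plain REST endpoint.
// expressApp and mongoDb are assumed to be your Express app and an open db handle.
expressApp.get('/api/projects-legacy', function (req, res) {
  mongoDb.collection('projects-legacy')
    .find({ public: true })
    .toArray(function (err, docs) {
      if (err) { res.statusCode = 500; return res.end(); }
      res.json(docs);
    });
});

// Client side: fetch the documents and put them in a local, non-synced path.
$.getJSON('/api/projects-legacy', function (docs) {
  model.set('_page.projects', docs);
});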

When should I use AQL?

In the context of ArangoDB, there are different database shells to query data:
arangosh: the JavaScript-based console
AQL: the ArangoDB Query Language, see http://www.arangodb.org/2012/06/20/querying-a-nosql-database-the-elegant-way
MRuby: embedded Ruby
Although I understand the use of JavaScript and MRuby, I am not sure why I would learn AQL, or where I would use it. Is there any information on this? Is the idea to POST AQL directly to the database server?
AQL is ArangoDB's query language. It has a lot of ways to query, filter, sort, limit and modify the result that will be returned. It should be noted that AQL only reads data.
(Update: This answer was targeting an older version of ArangoDB. Since version 2.2, the features have been expanded and data modification on the database is also possible with AQL. For more information on that, visit the documentation link at the end of the answer.)
You cannot store data to the database with AQL.
In contrast to AQL, the JavaScript and MRuby interfaces can both read and store data in the database.
However, their querying capabilities are very basic and limited compared to the possibilities that AQL opens up.
It is possible, though, to send AQL queries from JavaScript.
Within the arangosh Javascript shell you would issue an AQL query like this:
arangosh> db._query('FOR user IN example FILTER user.age > 30 RETURN user').toArray()
[
  {
    _id : "4538791/6308263",
    _rev : "6308263",
    age : 31,
    name : "Musterfrau"
  }
]
You can find more info on AQL here:
http://www.arangodb.org/manuals/current/Aql.html
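Since the question asks whether the idea is to POST AQL directly to the server: yes, the HTTP API also accepts AQL at the /_api/cursor endpoint. A minimal Node.js sketch (host, port, and the example query are assumptions; authentication is omitted):

// Sketch: POST an AQL query to ArangoDB's HTTP API on the default port.
var http = require('http');

var payload = JSON.stringify({
  query: 'FOR user IN example FILTER user.age > @age RETURN user',
  bindVars: { age: 30 }
});

var req = http.request({
  host: 'localhost',
  port: 8529,
  path: '/_api/cursor',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload)
  }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log(JSON.parse(body).result); });
});

req.write(payload);
req.end();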

Mongo geospatial index and Meteor

I am wondering if it is possible to use a MongoDB geospatial index with the Meteor architecture.
Minimongo does not implement geospatial indexes, but does this mean we cannot use this MongoDB feature on the server side?
For example, with the todos app, if we store a location on each todo, will it be possible to do:
// Publish complete set of lists to all clients.
Meteor.publish('todos', function (lon, lat) {
  return Todos.find({ loc: { $near: [lon, lat] } }, { limit: 2 });
});
And on the client side:
Meteor.subscribe('todos', lon, lat);
Yes, you can use the MongoDB geospatial index within Meteor, and you can create that index from within your Meteor app too.
- Geospatial Search
I'm using the $within operator below, as opposed to the $near operator mentioned above, but this still applies:
Meteor.publish('places', function (box) {
  return Places.find({ loc: { $within: { $box: box } } });
});
Reminder: These kinds of geo queries are only available on the server (currently).
- Creating a Geospatial Index from within Meteor (rather than in a MongoDB shell)
Places._ensureIndex({ loc : "2d" });
e.g. You could use the above in your bootstrap.js.
Also, you'll probably want to put your ensureIndex in Meteor.startup, or perhaps when you're inserting some initial data.
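A minimal sketch of that suggestion (server-side code; Places is the collection from the example above):

// Server only: make sure the 2d index exists when the app starts.
if (Meteor.isServer) {
  Meteor.startup(function () {
    Places._ensureIndex({ loc: "2d" });
  });
}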
Warning: As mentioned here, the above method of calling ensureIndex is a workaround for want of an official way to call it, so please expect that this might change.
Update: now reflects changes in Meteor 0.5.0, see @Dror's comment below.
Yes, I think you can.
On the server side, Meteor delegates find/update/etc. to node-mongo-native calls. You can take a look at the code in packages/mongo-livedata/mongo_driver.js. And as far as I know, node-mongo-native supports geospatial indexes.