I have a monitoring script that looks something like this:
import pymongo

client = pymongo.MongoClient()
for database in client.list_database_names():
    # serverStatus returns server-wide stats; opcounters is one of its sub-documents
    opcounters = client[database].command("serverStatus")["opcounters"]
    for key, value in opcounters.items():
        log(key, data=value, database=database)
This has been giving me the same results for every database. Looking at my graphs, I get data like this:
opcounters.command_per_second on test_database: 53.32K
opcounters.command_per_second on log_database: 53.32K
Obviously, "serverStatus" is indicative of the entire server, not just the database.
Is it possible to get opcounters for each database?
There are no per-database opcounters, at least for v2.8.0 or earlier. The op-counter structure used in serverStatus is a global one: each new count is recorded without the context of which database or collection was involved.
As a small aside, the collStats command has no op statistics at all, so it won't be possible to calculate a database's total ops by aggregation either.
There's an open feature request you can watch/upvote in the MongoDB issue tracker: SERVER-2178: Track stats per db/collection.
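Since the counters are server-wide, the practical workaround is to log them once per server rather than once per database. A minimal sketch, reusing the log helper from the question (with the per-database loop and the database keyword dropped, since the numbers are global):

import pymongo

client = pymongo.MongoClient()
# opcounters are global, so read serverStatus once via the admin database
opcounters = client.admin.command("serverStatus")["opcounters"]
for key, value in opcounters.items():
    log(key, data=value)  # log is the monitoring helper from the question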
In a Meteor app, real-time reactive updates between all connected clients are achieved by writing to collections and publishing/subscribing to the right data. Normally this also means database writes.
But what if I want to sync particular data that does not need to be persistent, and I would like to avoid the overhead of writing to the database? Is it possible to use Minimongo or some other in-memory cache on the server while still preserving DDP synchronisation to all clients?
Example
In my app I have multiple collapsed threads, and I want to show which users have currently expanded a particular thread:
Viewed by: Mike, Johny, Steven ...
I could store the information in the threads collection or create a separate viewers collection and publish the information to the clients. But there is actually no point in making this information persistent and paying the overhead of database writes.
I am confused by the collections documentation, which states:
OPTIONS
connection Object
The server connection that will manage this collection. Uses the default connection if not specified. Pass the return value of calling DDP.connect to specify a different server. Pass null to specify no connection.
and
... when you pass a name, here’s what happens:
...
On the client (and on the server if you specify a connection), a Minimongo instance is created.
But if I create a new collection and pass the options object with connection: null
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

// Creates a new Mongo collection and exports it; PRESENTATION_BY_MAP_ID and
// nonEmptyString are defined elsewhere in the app
export const Presentations = new Mongo.Collection('presentations', { connection: null });

/**
 * Publications
 */
if (Meteor.isServer) {
  // This code only runs on the server
  Meteor.publish(PRESENTATION_BY_MAP_ID, (mapId) => {
    check(mapId, nonEmptyString);
    return Presentations.find({ matchingMapId: mapId });
  });
}
no data is being published to the clients.
TL;DR: it's not possible.
There is no magic in Meteor that allows data to be synced between clients without that data passing through the MongoDB database. The whole sync process through publications and subscriptions is driven by MongoDB writes. Hence, if you don't write to the database, you cannot sync data between clients (using the native pub/sub system available in Meteor).
After countless hours of trying everything possible, I found a way to do what I wanted:
// Server gets an in-memory (connection: null) collection; clients get a normal one
export const Presentations = new Mongo.Collection('presentations', Meteor.isServer ? { connection: null } : {});
I checked MongoDB and no presentations collection is being created. Also, on every server restart the collection is empty. There is a small downside on the client: even when collectionHandle.ready() is truthy, findOne() first returns undefined, and the data is synced afterwards.
I don't know if this is the right or preferable way, but it was the only one that worked for me so far. I tried leaving {connection: null} in the client code as well, but wasn't able to achieve any sync even though I implemented the added/changed/removed methods.
Sadly, I wasn't able to get any further help, even in the Meteor forum here and here.
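For completeness, a hedged sketch of how the server could then write into this in-memory collection; the publication from the question will push those writes over DDP to subscribed clients, without any MongoDB write. The method name presentations.addViewer is hypothetical:

if (Meteor.isServer) {
  Meteor.methods({
    // hypothetical method: records that a user expanded a presentation
    'presentations.addViewer'(mapId, userName) {
      check(mapId, String);
      check(userName, String);
      // The insert goes to the server's in-memory Minimongo only,
      // but observers behind Meteor.publish still see it and sync it to clients
      Presentations.insert({ matchingMapId: mapId, viewer: userName });
    },
  });
}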
Our Java app saves its configurations in a MongoDB collection. When the app starts, it reads all the configurations from MongoDB and caches them in Maps. We would like to use the change stream API to also be able to watch for updates to the configurations collection.
So, upon app startup, first we would like to get all configurations, and from now on - watch for any further change.
Is there an easy way to execute the following atomically:
1. A find() that retrieves all configurations (documents)
2. The start of a watch() that will send all further updates
By "atomically" I mean without potentially missing any update (between steps 1 and 2, someone could update the collection with a new configuration).
To make sure I lose no update notifications, I found that I can use watch().startAtOperationTime(serverTime) (for MongoDB 4.0 or later), as follows:
1. Query the MongoDB server for its current time, using a command such as:
   Document hostInfoDoc = mongoTemplate.executeCommand(new Document("hostInfo", 1));
2. Query for all interesting documents:
   List<C> configList = mongoTemplate.findAll(clazz);
3. Extract the server time from hostInfoDoc:
   BsonTimestamp serverTime = (BsonTimestamp) hostInfoDoc.get("operationTime");
4. Start the change stream configured with the saved server time:
   ChangeStreamIterable<Document> changes = eventCollection.watch().startAtOperationTime(serverTime);
Since step 1 ends before step 2 starts, we know that the documents returned by step 2 are at least as fresh as the data at that server time. And any updates that happen at or after that server time will be sent to us by the change stream. (I don't mind receiving redundant updates: because I use a map as the cache, an extra add/remove makes no difference, as long as the last action arrives.)
I think I could also use watch().resumeAfter(_idOfLastAddedDoc) (I didn't try it). I did not use that approach because of the following scenario: the collection is empty, and the first document is added after getting all (i.e., zero) documents and before starting the watch(). In that scenario there is no previous document _id to use as a resume token.
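Putting the four steps together, a minimal sketch using hostInfo as in the steps above (see the update below for the dbStats variant). It assumes a Spring Data mongoTemplate, a driver collection eventCollection, and an entity class clazz configured elsewhere; updateCache is a hypothetical helper that applies one event to the in-memory map:

import com.mongodb.client.ChangeStreamIterable;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import org.bson.BsonTimestamp;
import org.bson.Document;
import java.util.List;

// 1. Ask the server for its current operation time.
Document hostInfoDoc = mongoTemplate.executeCommand(new Document("hostInfo", 1));
// 2. Load the full current state into the cache.
List<C> configList = mongoTemplate.findAll(clazz);
// 3. Extract the timestamp captured in step 1.
BsonTimestamp serverTime = (BsonTimestamp) hostInfoDoc.get("operationTime");
// 4. Stream every change that happened at or after that timestamp.
ChangeStreamIterable<Document> changes =
        eventCollection.watch().startAtOperationTime(serverTime);
for (ChangeStreamDocument<Document> change : changes) {
    updateCache(change); // hypothetical: apply insert/update/delete to the map
}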
Update
Instead of using "hostInfo" for getting the server time, which couldn't be used in our production, I ended using "dbStats" like that:
Document dbStats = mongoOperations.executeCommand(new Document("dbStats", 1));
BsonTimestamp serverTime = (BsonTimestamp) dbStats.get("operationTime");
This is my first time working with a remote database, so bear with me.
I know from the docs that queries using the same syntax will make use of the cache. I.e., in the following code, if the first query runs while the remote connection is up, and the connection is broken before the second query executes, the second query will still work via the cache.
let scoresRef = FIRDatabase.database().referenceWithPath("scores")
scoresRef.queryOrderedByValue().queryLimitedToLast(4).observeEventType(.ChildAdded, withBlock: { snapshot in
    print("The \(snapshot.key) dinosaur's score is \(snapshot.value)")
})
scoresRef.queryOrderedByValue().queryLimitedToLast(2).observeEventType(.ChildAdded, withBlock: { snapshot in
    print("The \(snapshot.key) dinosaur's score is \(snapshot.value)")
})
Is the data itself cached, so that a query in any form for data already fetched will succeed once offline? For example, if I had a third, offline query that tried to fetch the fourth-to-last child of scores by its key, would it work via the cache?
When the remote connection is working, will a FIRDataEventType query go straight to the server, or will a local query run before the remote one?
Thank you for any input you have!
In your current code, the second query will not have to retrieve additional data, since the children have already been retrieved.
But there are many subtleties in play here, and Firebase also synchronizes data when it changes, which allows for even more scenarios.
Instead of trying to imagine all the things that might be happening, it is probably more educational if you enable debug logging. This will show the actual data that is being retrieved by the client for each query.
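As a sketch, with the SDK version used in the question (the FIR-prefixed API), debug logging can be enabled with a single call made before any database work:

// Enable verbose Firebase client logging; run before creating references
FIRDatabase.setLoggingEnabled(true)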
I am replicating a collection (I only have access to the mongo shell on the server). In the current collection, all documents have a field called jsonURL. The value of this field is a URL like http://www.something.com/api/abc.json. I want to copy each document from oldCollection to newCollection, but I also want to fetch data from that URL and add it to each new document created.
The last I heard, XMLHttpRequest was on MongoDB's feature list, but as a low-priority feature (I can understand why). And as I found nothing in the documentation, I am guessing it's still in the queue. I am hoping I can do something inside forEach(function(eachDoc){});
Is there any other way of achieving this? Thanks.
Same situation as in this question, but with current DerbyJS (version 0.6):
Using imported docs from MongoDB in DerbyJS
I have a MongoDB collection with data that was not saved through my Derby app. I want to query against that and pull it into my Derby app.
Is this still possible?
The accepted answer there points to a dead link. The newest working link would be this: https://github.com/derbyjs/racer/blob/0.3/lib/descriptor/query/README.md
That, however, refers to the 0.3 branch of Racer (the current master version is 0.6).
What I tried
Searching the internets
The naïve way:
var query = model.query('projects-legacy', { public: true });
model.fetch(query, function() {
    query.ref('_page.projects');
});
(doesn't work)
A utility was written for this purpose: https://github.com/share/igor
You may need to modify it to run against only a single collection instead of the whole database, but it essentially goes through every document in the database, amends it with the necessary livedb metadata, and creates a default operation for it as well.
In livedb, every collection has a corresponding operations collection; for example, profiles will have a profiles_ops collection that holds all the operations for the profiles.
You will have to convert the collection to use it with Racer/livedb because of the metadata on the document itself.
An alternative, if you don't want to convert, is to use traditional AJAX/REST to get the data from your Mongo database and then just put it in your local model. This will not be real-time or synced to the server, but it will allow you to drive your templates from data that you don't want to convert for some reason.
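For illustration, a hedged sketch of that alternative; the /api/projects-legacy endpoint is hypothetical (you would expose it yourself, e.g. via an Express route on the Derby server), and app is assumed to be the client-side Derby app object:

// Client-side sketch: fetch legacy documents over plain REST and put
// them into the local, non-synced part of the Derby model.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/projects-legacy'); // hypothetical REST endpoint
xhr.onload = function () {
  var docs = JSON.parse(xhr.responseText);
  // '_page' paths are local-only, so nothing is written back to the server
  app.model.set('_page.projects', docs);
};
xhr.send();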