Atomic Operations with MongoDB + MongoCXX

I am new to MongoDB, but experienced with C++. I am using the MongoCXX driver within my application. What I need to do now is to increment a counter shared by multiple programs. My thought was that this requires an atomic operation, but I do not see how to do this with MongoCXX.
Searching, I have found that it appears as though a legacy, deprecated API has exactly what I need:
http://mongocxx.org/api/legacy-v1/atomic__word__intrinsics_8h_source.html
In here, I see exactly what I want to do:
WordType fetchAndAdd(WordType increment)
However, I do not see anything like this in the current C++ driver.
http://mongocxx.org/api/mongocxx-3.6.2/annotated.html
Is there any way to do this type of operation with the current API that I am missing?

You can use the $inc update operator for this: https://docs.mongodb.com/manual/reference/operator/update/inc/.
Some drivers provide wrappers for server commands, but such wrappers aren't strictly necessary because you can always invoke the server commands directly.
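For example, with the current mongocxx driver you can issue that update through collection::find_one_and_update, which applies the increment on the server and hands back the updated value in one atomic round trip, which is effectively fetchAndAdd. A minimal sketch, assuming a local server and placeholder database/collection/field names ("mydb", "counters", "value"):

#include <cstdint>
#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/options/find_one_and_update.hpp>
#include <mongocxx/uri.hpp>

int main() {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    mongocxx::instance inst{};                 // one instance per process
    mongocxx::client client{mongocxx::uri{}};  // defaults to mongodb://localhost:27017
    auto counters = client["mydb"]["counters"];

    mongocxx::options::find_one_and_update opts;
    opts.upsert(true);  // create the counter document if it does not exist yet
    opts.return_document(mongocxx::options::return_document::k_after);

    // $inc is applied atomically on the server; asking for the post-update
    // document gives fetch-and-add semantics.
    auto result = counters.find_one_and_update(
        make_document(kvp("_id", "sharedCounter")),
        make_document(kvp("$inc", make_document(kvp("value", 1)))),
        opts);

    if (result) {
        std::int32_t value = result->view()["value"].get_int32().value;
        // use value ...
    }
}

If you only need the increment and not the new value, a plain update_one with the same $inc document is sufficient.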

Related

Can I notify and listen inside PostgreSQL procedures (functions)?

I have checked the documentation (for my version 9.3):
http://www.postgresql.org/docs/9.3/static/sql-notify.html
http://www.postgresql.org/docs/9.3/static/sql-listen.html
I have read multiple discussions and blogs about notify-listen in postgres.
They all use a listening process or interface that is not implemented inside a "classic" stored procedure (which in Postgres is a function anyway). Instead, the listener is implemented in a different language or environment, external to the Postgres server (e.g. Perl, C#).
My question: is it possible to implement listening inside a Postgres function (language plpgsql)? If not (which I assume, since I was not able to find such a topic or example), can someone explain a bit why it can't be done, or perhaps why it does not make sense to do it that way?
A plpgsql function runs inside a single transaction and has no way to block and wait for asynchronous notifications, so the LISTEN side has to live in a client session; what you can do on the server side is send the notifications. If your changes originate from a single table, this is a classic use case for a trigger function that calls NOTIFY / pg_notify(), as sketched below: https://www.postgresql.org/docs/current/plpgsql-trigger.html
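For illustration, a minimal sketch of that pattern (the table, channel, and function names are made up); the trigger function sends the notification with pg_notify(), while LISTEN stays in an external, long-lived client connection:

-- Fires on every change to the assumed "orders" table.
CREATE OR REPLACE FUNCTION notify_orders_change() RETURNS trigger AS $$
BEGIN
    -- The payload is optional; here we send the new row as JSON.
    PERFORM pg_notify('orders_changed', row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE PROCEDURE notify_orders_change();

-- The listener runs in a separate client session, outside plpgsql:
-- LISTEN orders_changed;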

How to avoid that much casts with MongoDb Java-Api

Hi, I'm working with the Java API of MongoDB.
I have to cast very often, like this:
BasicDBList points = ((BasicDBList) ((BasicDBObject) currentObject.get("poly")).get("coordinates"));
which is not fun. Am I missing something, or is this just the way it is done?
I think BasicDBObject should have methods like
BasicDBObject getBasicDBObject(String key)
BasicDBList getBasicDBList(String key)
Unfortunately, the current Java driver is not perfect, and the casting you mention is difficult to avoid. However, the Java driver team is working on the next version, and as far as I understand it will be completely rewritten.
At one of the MongoDB meetups I heard that the new version will use an asynchronous API, similar to the Node driver. I guess we need to sit tight and wait for the next major release (in the meantime, a small helper that hides the casts is sketched after the list below).
Alternatives are (from "Mongo Java drivers & mappers performances"):
async Java driver
a library built on top of a driver, e.g. Morphia, Jongo, see POJOMappers
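Until a new driver lands, the helper methods from the question are easy enough to approximate yourself. A hypothetical utility (the class and method names are made up) that keeps the casts in one place, written against the legacy BasicDBObject API:

import com.mongodb.BasicDBList;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

// Hypothetical helper: confines the unchecked casts to a single class.
public final class DBObjects {

    private DBObjects() {}

    public static BasicDBObject getObject(DBObject parent, String key) {
        return (BasicDBObject) parent.get(key);
    }

    public static BasicDBList getList(DBObject parent, String key) {
        return (BasicDBList) parent.get(key);
    }
}

Call sites then read:
BasicDBList points = DBObjects.getList(DBObjects.getObject(currentObject, "poly"), "coordinates");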

MongoDB 2.6: maxTimeMS on an instance/database level

MongoDB 2.6 introduced the .maxTimeMS() cursor method, which allows you to specify a max running time for each query. This is awesome for ad-hoc queries, but I wondered if there was a way to set this value on a per-instance or per-database (or even per-collection) level, to try and prevent locking in general.
And if so, could that value then be OVERRIDDEN on a per-query basis? I would love to set an instance-level timeout of 3000ms or thereabouts (since that would be a pretty extreme running time for queries issued by my application), but then be able to ignore it if I had a report to run.
Here's the documentation from mongodb.org, for reference: http://docs.mongodb.org/manual/reference/method/cursor.maxTimeMS/#behaviors
Jason,
Currently MongoDB does not support a global / universal maxTimeMS. At the moment this option applies to individual operations only. If you would like such a global setting to be available in MongoDB, I would suggest raising a SERVER ticket at https://jira.mongodb.org/browse/SERVER along with the use cases that would take advantage of it.
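For reference, the per-operation form is straightforward in most drivers. A small pymongo sketch (the database, collection, and filter are placeholders):

from pymongo import MongoClient
from pymongo.errors import ExecutionTimeout

client = MongoClient("mongodb://localhost:27017")
coll = client.test_database.col   # placeholder database/collection

try:
    # maxTimeMS applies only to this query; there is no server-wide default.
    docs = list(coll.find({"name": "test"}).max_time_ms(3000))
except ExecutionTimeout:
    # Raised when the server spends more than 3000 ms on the query.
    print("query exceeded maxTimeMS")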
I know this is old, but I came here with a similar question and decided to post my findings. A timeout can be set as a global parameter as part of the connection string, which makes sense because it is the driver that controls these global parameters. The documentation is here: https://docs.mongodb.com/manual/reference/connection-string/. Note that each driver handles this slightly differently (C#, for example, uses its MongoDB settings object, Python accepts them as constructor parameters, etc.).
To test this you can start a test server using mtools like this:
mlaunch --replicaset --name testrepl --nodes 3 --port 27000
Then an example in Python looks like this:
from pymongo import MongoClient
c = MongoClient(host="mongodb://localhost:27000,localhost:27001,localhost:27002/?replicaSet=testrepl&wtimeoutMS=2000&w=3")
c.test_database.col.insert({ "name": "test" })
I'm using the URI method so that this can be adapted to other drivers, but Python also supports the w and wtimeout parameters directly. In this example all write operations default to a 2-second timeout and must be confirmed by 3 nodes before returning; if you restart the database and use a wtimeout of 1 (meaning 1 ms), you will see an exception, because replication takes a bit longer to initialize the first time you execute the Python script.
Hope this helps others coming with the same question.

DocPad and MongoDB

I'm looking for a way to use DocPad with MongoDB. Tried to google about that, but didn't find anything encouraging. Could you please give at least some hints?
Some parts of what I'm developing need to be persisted.
Thanks in advance.
Starting from version 6.55, released last month, DocPad creates a persistent db file in the root of the project called .docpad.db:
https://github.com/bevry/docpad/blob/master/HISTORY.md
I guess it's a first step toward the persistent behaviour you need; the documents could be parsed and inserted into a Mongo database, because behind the scenes DocPad uses QueryEngine, which has an API similar to Mongo's:
https://github.com/bevry/query-engine
More work is on the way regarding your concern. Have a look at this discussion about the future architecture of DocPad, especially the importer / exporter decoupling:
https://github.com/bevry/docpad/issues/705
I've written some code that reads from Mongodb and returns an object that can be rendered into docs. I've also tried to write some code to provide the backend for basic editing of the database but the regeneration after update is not yet working (although it may be by the time you read this!). See https://github.com/simonh1000/docpad-plugin-mongo

mongodb why do we need getSisterDB

While playing with the mongodb console help, I found a db.getSisterDB() method.
I am curious what the purpose of this method is. Looking through the mongodb documentation and a quick google search did not yield satisfactory results.
Typing db.getSisterDb.help generates an error, and typing db.getSisterDB gives the following definition of the method:
function ( name ){
return this.getMongo().getDB( name );
}
which suggests that this is just a wrapper around getDB. My guess is that it is used to access databases in a replica set, but I would like to hear from someone who can give a more thorough explanation.
In the shell, db is a reference to the current database. If you want to query against a different DB in the same mongod instance, the way to get a proper reference to it is to use this method (which has a more gender-neutral alias, getSiblingDB).
If you wanted to use the longer syntax, you could: db.getMongo().getDB(name) gets you the same thing as db.getSiblingDB(name) or db.getSisterDB(name) but the former is longer to type.
All of the above work the same way in standalone mongod as well as replica sets (and sharded clusters).
I'm going to add to the accepted answer because I did not find what I wanted as a first result:
getSiblingDB exists for scripting, where the use helper is not available
getSiblingDB is the newer of the two identical methods, so prefer it; getSisterDB no longer appears in the documentation
when used in the shell, getSiblingDB serves the purpose of getting a reference to a database without changing the db variable
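For example, in a script executed with the mongo shell (the database and collection names here are made up), getSiblingDB takes the place of the interactive use command:

// run with: mongo script.js
var reporting = db.getSiblingDB("reporting");   // does not touch the global `db`
var count = reporting.events.count({ type: "login" });
print("login events in 'reporting': " + count);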