Is eval that evil? - mongodb

I understand that eval locks the whole database, which can't be good for throughput - however I have a scenario where a very specific transaction involving several documents must be isolated.
Because that transaction does not happen very often and is fairly quick (a few updates on indexed queries), I was thinking of using eval to execute it.
Are there any pitfalls that I should be aware of (I have seen several eval=evil posts, but without much explanation)?
Does it make a difference if the database is part of a replica set?

Many developers would suggest that using eval is "evil", as there are obvious security concerns with potentially unsanitized JavaScript code executing within the context of the MongoDB instance; normally, MongoDB is immune to those types of injection attacks.
Some of the performance issues of using JavaScript in MongoDB via the eval command are mitigated in version 2.4, as multiple JavaScript operations can execute at the same time (depending on the setting of the nolock option). By default, though, it takes a global lock (which is apparently what you specifically want).
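For illustration, a minimal shell sketch of invoking eval via runCommand (the collection, function body, and argument are hypothetical; nolock only controls whether eval itself takes the global write lock, operations inside the function still take their normal locks):
// Hypothetical sketch: several related updates executed server-side in one eval call.
// With nolock: false (the default), eval holds the global write lock, which is the
// isolation the question is after; nolock: true would skip that outer lock.
db.runCommand({
    eval: function(orderId) {
        db.orders.update({ _id: orderId }, { $set: { status: "archived" } });
        db.orderHistory.insert({ orderId: orderId, archivedAt: new Date() });
        return true;
    },
    args: ["order-123"],
    nolock: false
});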
When eval is being used to try to perform an (ACID-like) transactional update to several documents, there is one primary concern: if all operations must succeed for the data to be in a consistent state, the developer runs the risk that a failure midway through the operation (a hardware failure, for example) may leave a partially complete update in the database. Depending on the nature of the work being performed, the replication settings, etc., the data may be OK, or it may not be.
For situations where database corruption could occur as a result of a partially complete eval operation, I would suggest considering an alternative schema design and avoiding eval. That's not to say that it wouldn't work 99.9999% of the time, it's really up to you to decide ultimately whether it's worth the risk.
In the case you describe, there are a few options:
{ version: 7, isCurrent: true}
When a version 8 document becomes current, you could for example:
Create a second document that contains the current version; setting it would be an atomic operation. It would mean that all reads would potentially need to read the "find the current version" document first, followed by the read of the full document.
Use a timestamp in place of a boolean value: find the most current document based on the timestamp (and your code could clear out the fields of older documents, if desired, once the now-current document has been set).
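A minimal shell sketch of the timestamp option (collection and field names are assumptions for illustration):
// Hypothetical sketch: inserting the new version is a single atomic write,
// and "current" is simply the document with the newest timestamp.
db.docs.insert({ docId: "abc", version: 8, publishedAt: new Date() });
// Readers pick the current version by sorting on the timestamp.
var current = db.docs.find({ docId: "abc" }).sort({ publishedAt: -1 }).limit(1).next();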

Related

Write operation during a long cursor operation

I use MongoDB 2.4 with a single DB.
I find all items in a collection (50,000+) and, for each one, insert it into another collection.
// copy every document from coll1 into coll2
it = coll1.find()
while (it.hasNext()) {
    coll2.save(it.next())
}
Is it a performance issue to make intensive writes while a cursor is open on the same database?
This essentially comes down to a question about concurrency ( http://docs.mongodb.org/manual/faq/concurrency/ ): whether reads can perform well under a database-level, writer-greedy lock while a write-intensive load is being created.
MongoDB should be able to juggle your read lock with the write lock quite well here, interweaving operations and yielding the current operation under certain conditions that it sees fit to keep performance up (see link supplied above).
This is, of course, in contrast to SQL, where read and write operations are isolated; as such, MongoDB's concurrency rules actually break the I in ACID. Of course, in SQL the lock is much more granular, so you would normally get comparable performance.
If you do see a performance hit, mainly due to IO (remember that reading requires IO as well), then you might find it prudent to batch your writes into groups of maybe 1,000, taking about a 5-second break after each batch to let the IO subside.
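A rough shell sketch of that batching idea (the batch size and pause are just the figures suggested above):
// Hypothetical sketch: copy documents in batches of 1000, pausing between
// batches to let the IO subside.
var it = coll1.find();
var count = 0;
while (it.hasNext()) {
    coll2.save(it.next());
    if (++count % 1000 == 0) {
        sleep(5000);   // shell helper: pause for 5 seconds
    }
}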
No, because cursors are not atomic. Each read is its own atomic transaction, which means MongoDB is not subject to the issue of ensuring that the cursor represents a single snapshot in time.

Why using Locking in MongoDB?

MongoDB is from the NoSQL era, and isn't locking something related to RDBMS? From Wikipedia:
Optimistic concurrency control (OCC) is a concurrency control method for relational database management systems...
So why do I find is_locked in PyMongo, and why does a lock still exist even in a driver that makes non-blocking calls? Motor has is_locked too.
NoSQL does not automatically mean no locks.
There are always some operations that do require a lock.
For example, building an index.
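In the shell of that era this looks like the following (collection and field names are hypothetical):
// A default (foreground) index build holds the write lock for the duration of the build.
db.people.ensureIndex({ name: 1 });
// A background build avoids holding the lock for the whole build, at the cost of being slower.
db.people.ensureIndex({ name: 1 }, { background: true });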
And the official MongoDB documentation is a more reliable source than Wikipedia (no offense meant to Wikipedia :) ):
http://docs.mongodb.org/manual/faq/concurrency/
Mongo does in-place updates, so it needs to lock in order to modify the database. There are other things that need locks too, so read the link @Tigra provided for more info.
This is pretty standard as far as databases go, and it isn't an RDBMS-specific thing (Redis also does this, but on a per-key basis).
There are plans to implement collection-level (instead of database-level) locking: https://jira.mongodb.org/browse/SERVER-1240
Some databases, like CouchDB, get around the locking problem by only appending new documents. They create a new, unique revision id and once the document is finished writing, the database points to the new revision. I'm sure there's some kind of concurrency control when changing which revision is used, but it doesn't need to block the database to do that. There are certain downsides to this, such as compaction needing to be run regularly.
MongoDB implements a database-level locking system. This means that operations which are not atomic will lock at the database level, unlike SQL, where most technologies lock at the table level for basic operations.
In-place updates only occur with certain operators ($set being one of them); the MongoDB documentation used to have a page that listed all of them, but I can't find it now.
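A trivial shell example of such an operator (collection and fields are hypothetical):
// $set rewrites only the named field, which MongoDB can often apply in place
// without rewriting or moving the whole document.
db.users.update({ _id: 1 }, { $set: { status: "active" } });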
MongoDB currently implements a read/write lock whereby each is separate but they can block each other.
Locks are utterly vital to any database, for example, how can you ensure a consistent read of a document if it is currently being written to? And if you write to the document how do you ensure that you only apply that single update at once and not multiple updates at the same time?
I am unsure how version control can avoid this in CouchDB; locks are really quite vital for a consistent read and are separate from version control. For example, what if you wish to apply a read lock to the same version, or read a document that is currently being written to a new revision? You will obviously see a lock queue appear. Even though version control might help a little with write-lock saturation, there will still be a write lock and it will still need to operate at some level.
As for concurrency features: MongoDB has the ability (for one), if the data is not in RAM, to yield an operation in favour of other operations. This means that locks will not just sit there waiting for data to be paged in; other operations will run in the meantime.
As a side note, MongoDB actually has more locks than this: it also has a JavaScript lock, which is global and blocking and does not have the normal concurrency features of regular locks.
and even in driver that makes non-blocking calls
Hmm, I think you might be confused about what is meant by a "non-blocking" application or server: http://en.wikipedia.org/wiki/Non-blocking_algorithm

Is it possible to default all MongoDB writes to safe? What is the performance hit from doing this?

For MongoDB 2.2.2, is it possible to default all writes to safe, or do you have to include the right flags for each write operation individually?
If you use safe writes, what is the performance hit?
We're using MongoMapper on Rails.
If you are using the latest version of 10gen official drivers, then the default actually is safe, not fire-and-forget (which used to be the default).
You can read this blog post by 10gen co-founder and CTO which explains some history and announces that all MongoDB drivers as of late November use "safe" mode by default rather than "fire-and-forget".
MongoMapper is built on top of 10gen supported Ruby Driver, they also updated their code to be consistent with the new defaults. You can see the check-in and comments here for the master branch. Since I'm not certain what their release schedule is, I would recommend you ask on MongoMapper mailing list.
Even prior to this change you could set "safe" value on connection level in MongoMapper which is as good as global. Starting with 0.11, you can do it in mongo.yml file. You can see how in the release notes.
The bottom line is that you don't have to specify safe mode for each write, but you can still specify higher or lower than default durability for each individual write operation if you wish, but when you switch to the latest versions of everything, then you will be using "safe writes" globally by default.
I do not use MongoMapper, so I can only answer a little.
In terms of the database, it depends. A safe write basically (basically being the keyword) waits for the database to finish doing what it would normally still be doing after a fire-and-forget write had already returned its default "I am done" response.
There is more work depending on how safe you want the write to be. A good example is a write to a single node versus one to many nodes. If you write to that single node, you will get a quicker response from the database than if you wish to (safely) replicate the command to other nodes in the network.
Any amount of safe write does, of course, cause a performance hit in the number of writes you can send to the server, since more work is required before a response is given, which means fewer writes can be thrown at the database. The key thing is getting the balance right for your application, between speed and durability of your data.
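A rough shell sketch of that single-node versus multi-node difference, using getLastError as the drivers of that era did (the collection and w values are just examples):
// Acknowledged by the primary only: quicker response.
db.orders.insert({ item: "a" });
db.runCommand({ getLastError: 1, w: 1 });
// Wait until the write has also replicated to a second member: slower, but safer.
db.orders.insert({ item: "b" });
db.runCommand({ getLastError: 1, w: 2, wtimeout: 5000 });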
Many drivers now (including MongoDB PHP 1.3, using a write concern of 1: http://php.net/manual/en/mongo.writeconcerns.php ) are starting to use safe writes by default, and normal fire-and-forget queries are being abolished as the default.
By looking at the mongomapper documentation: http://mongomapper.com/documentation/plugins/safe.html it seems you must still add the flags everywhere.

mongodb: atomically rename two collections?

I have two existing collections "A" and "B". I need to rename "B" to "C", and rename "A" to "B", without permitting any writes to B during that time. The rename itself activates the global lock, but I need to prevent writes from occurring in between renames. Is this possible?
Here's my code:
db.B.renameCollection('C')
// prevent writes from occurring to B in between these two commands
db.A.renameCollection('B')
Edit: I'm using mongodb version 1.8.1, and changing versions is not currently an option.
MongoDB itself cannot handle this; the only way you could do it is with some custom code.
If this will only occur one time in your app (I guess renaming collections is not something that is done often), you could take a more 'aggressive' approach, where you check for a flag in your database that means 'collection db.B has been renamed but db.A has not yet'. If all your writes check for this flag before submitting the write to the server, and simply return if it is set, that can protect the app from writing to db.A after db.B has been renamed.
I consider this the 'aggressive' approach since it clearly affects performance (still, reads are so fast you probably won't feel it).
If your app runs on a single web server (and not a web farm), you can keep the synchronization mechanism in the web app itself, using thread-synchronization tools like semaphores, or even some thread-safe variable used as the flag suggested above (it depends on the server-side technology you are using).
You can create a function named "renameCollection" and put a lock on it:
db.runCommand({eval:renameCollection,args:["Collection1","Collection2"],nolock:false});
The lock allows this kind of operation to be done safely and makes the other requests wait.
As you could guess: this is not possible. No transaction support, only atomic operations.
MongoDB has no sense of transactional renames (in fact, I am not sure SQL does in this case either); however, you could accomplish this with a bit of server-side programming and a lock collection.
From your server-side language you can fire off the commands while writing a row to a lock table; each query against B will first check for the lock, and if it is not found it will write, otherwise it will bail out.
This is a simple method, but most likely a bit tedious, especially if you have a very segmented code base that does not house a standardised query layer between the server-side code and the database.
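A rough shell-level sketch of the lock-collection idea (the locks collection and flag name are hypothetical, and in practice the check would live in your server-side code):
// Hypothetical sketch: the renaming process sets a flag first...
db.locks.save({ _id: "rename_B", locked: true });
db.B.renameCollection('C');
db.A.renameCollection('B');
db.locks.save({ _id: "rename_B", locked: false });
// ...and every writer checks the flag before touching B, bailing out if it is set.
if (db.locks.findOne({ _id: "rename_B", locked: true }) == null) {
    db.B.insert({ some: "data" });
}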
I should also note that renameCollection will not work on sharded collections; you most likely already knew that, but I thought I would say it anyway. In the case of a sharded collection it would be better to "move" the collection via copy operations instead.
I work for Tokutek on TokuMX with multi-statement transactions.
As other answers have said, MongoDB cannot do this (to the best of my knowledge), but TokuMX can. TokuMX has multi-statement transactions on non-sharded clusters. To perform this operation, you can do:
db.beginTransaction()
db.B.renameCollection('C')
db.A.renameCollection('B')
db.commitTransaction()

mongodb: should i always use the 'safe' option on updates

When dealing with MongoDB, when should I use {safe: true} on queries?
Right now I use the 'safe' option just to check whether my queries were inserted or updated successfully. However, I feel this might be overkill.
Should I assume that, 99% of the time, my queries (assuming they are properly written) will be inserted/updated, and not worry about checking whether they succeeded?
Thoughts?
Assuming that when you say queries you actually mean writes/inserts (the wording of your question makes me think so), the write concern (safe, none, fsync, etc.) can be used to get more speed and less safety when that is acceptable, or less speed and more safety when that is necessary.
As an example, a hypothetical Facebook-style application could use an unsafe write for "Likes" while using a very safe write for password changes. The logic behind this is that there will be many thousands of "Like"-style updates happening each second, and it doesn't matter if one is lost, whereas password updates happen less regularly but it is essential that they succeed.
Therefore, try to tailor your Write Concern choice to the kind of update you are doing, based upon your speed and data integrity requirements.
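For illustration, a shell-level sketch of tailoring the write concern per operation (collection and field names are assumptions; in the drivers of that era the acknowledgement corresponds to a getLastError call):
// A lost "Like" is tolerable: fire-and-forget, no acknowledgement requested.
db.likes.insert({ userId: 1, postId: 42 });
// A password change must not be lost: wait for the write to be acknowledged
// by a majority of replica-set members before continuing.
db.users.update({ _id: 1 }, { $set: { passwordHash: "<new hash>" } });
db.runCommand({ getLastError: 1, w: "majority", wtimeout: 5000 });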
Here is another use case where unsafe writes are an appropriate choice: You are making a large number of writes in very short order. In this case you might perform a number of writes, and then call get last error to see if any of them failed.
// Use a fast, unacknowledged write concern for the bulk of the inserts.
collection.setWriteConcern(WriteConcern.NORMAL);
collection.getDB().resetError();
List<Something> importData = loadImportData();   // hypothetical helper supplying the batch
for (Something data : importData) {
    collection.insert(makeDBObject(data));
}
// A single acknowledged check at the end: throws if any of the writes failed.
collection.getDB().getLastError(WriteConcern.REPLICAS_SAFE).throwOnError();
If this block succeeds without an exception, then all of the data was inserted successfully. If there was an exception, then one or more of the write operations failed, and you will need to retry them (or check for a unique index violation, etc). In real life, you might call getLastError every 10 writes or so, to avoid having to resubmit lots of requests.
This pattern is very nice for performance when performing bulk inserts of large amounts of data.
Safe is only necessary on writes, not reads. Queries are only reads.