I would like to know if it is at all possible to have MongoDB failover using only a single address. I know replica sets are typically used for this, relying on the driver to make the switchover, but I was hoping there might be a solution out there that would let one address or hostname automatically switch over when the MongoDB instance is recognized as being down.
Any such luck? I know there are solutions for MySQL, but I haven't had much luck with finding something for MongoDB.
Thanks!
Yes, it is possible. The driver holds a cached map of your replica set, which it queries for a new primary when the set goes through an election. This map is refreshed every so often; however, if your application restarts (the process quits, or on each request in PHP's fork mode), the driver has no choice but to rebuild its map, and at that point you will suffer connectivity problems.
Of course, the best thing to do is to add a seed list.
Using a single IP defeats the redundancy that is built into MongoDB.
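As a sketch of what a seed list looks like in practice (the hostnames, the set name rs0, and the database name are invented, and what the connect callback receives differs between driver versions), connecting from Node.js might look like this:

    // Seed list: the driver can bootstrap from any of these hosts, so one
    // member being down does not stop it from finding the current primary.
    var MongoClient = require('mongodb').MongoClient;
    var uri = 'mongodb://db1.example.com:27017,db2.example.com:27017,' +
              'db3.example.com:27017/mydb?replicaSet=rs0';

    MongoClient.connect(uri, function (err, db) {
        if (err) throw err;
        // From here on, the driver tracks elections and re-routes
        // operations to whichever member is currently primary.
    });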
I have one PRIMARY instance and one SECONDARY instance of MongoDB.
Many clients are using my two instances. Each client sets its own read preference to "secondary".
My question is:
Is there a way to configure MongoDB so that the read preference defaults to "secondary"?
Thanks
MC
Read preference is a client setting, not a server setting, so no, this is not possible as far as I know. An important feature of MongoDB is that you have very fine-grained control over the queries, i.e. you can use different read preferences and write concerns for each query.
It often makes sense to mix these, because losing a log entry might not be too bad while losing a payment is. Likewise, reading logs from the secondary might be fine, but if you want to coordinate a transaction, it might be safer to use the primary for reading (or you're using a paranoid write concern that requires full replication before considering the write successful).
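As a rough mongo shell sketch of mixing these per query (the collection names and values are invented, and the exact helpers vary a little between shell versions):

    // Stale reads are fine for logs, so target a secondary for this query.
    db.logs.find({level: "info"}).readPref("secondary");

    // A payment should not be lost, so demand majority acknowledgement.
    db.payments.insert(
        {amount: 10, user: 42},
        {writeConcern: {w: "majority"}}
    );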
I am new to MongoDB and have little experience at the moment, so I need a little help. We are looking at setting up MongoDB with a standard replica set. This contains, as I understand it, a primary and two secondaries. My question is this: will the primary and two secondaries definitely require different servers or VMs (I have read this is the case but am still not sure), given that we will be performing a fair number of writes each time a user logs into the system?
Currently we are just looking into the feasibility of this set up at the moment and nothing has been decided yet.
Thanks in advance.
For MongoDB 2.2.2, is it possible to default all writes to safe, or do you have to include the right flags for each write operation individually?
If you use safe writes, what is the performance hit?
We're using MongoMapper on Rails.
If you are using the latest version of the official 10gen drivers, then the default actually is safe, not fire-and-forget (which used to be the default).
You can read this blog post by the 10gen co-founder and CTO, which explains some of the history and announces that all MongoDB drivers as of late November use "safe" mode by default rather than "fire-and-forget".
MongoMapper is built on top of the 10gen-supported Ruby driver, and they have also updated their code to be consistent with the new defaults. You can see the check-in and comments here for the master branch. Since I'm not certain what their release schedule is, I would recommend you ask on the MongoMapper mailing list.
Even prior to this change, you could set the "safe" value at the connection level in MongoMapper, which is as good as global. Starting with 0.11, you can do it in the mongo.yml file. You can see how in the release notes.
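As a sketch only, the mongo.yml entry might look something like the following; the surrounding keys follow the usual MongoMapper layout, but treat the exact schema as an assumption and verify it against the release notes for your version:

    # mongo.yml (sketch): enable safe writes for one environment.
    production:
      host: 127.0.0.1
      port: 27017
      database: myapp_production
      safe: true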
The bottom line is that you don't have to specify safe mode for each write. You can still specify higher or lower durability than the default for an individual write operation if you wish, but once you switch to the latest versions of everything, you will be using "safe writes" globally by default.
I do not use MongoMapper, so I can only answer in part.
In terms of the database, it depends. A safe write basically ("basically" being the key word) waits for the database to do the work it would normally do after you had already received the default "I am done" response from a fire-and-forget write.
There is more work depending on how safe you want the write to be. A good example is a write to a single node versus one to many nodes: if you write to that single node, you will get a quicker response from the database than if you want the command replicated (safely) to other nodes in the network.
Any level of safe write does, of course, cost you write throughput, since more work is required before a response is given, which means fewer writes can be thrown at the database. The key is getting the balance right for your application, between speed and the durability of your data.
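As a rough mongo shell sketch of that trade-off (collection names and documents are invented for the example):

    // Acknowledgement from the primary only: the fastest of the safe options.
    db.events.insert({type: "login"}, {writeConcern: {w: 1}});

    // Replicated to a majority and journaled: slower, but far more durable.
    db.orders.insert({total: 99}, {writeConcern: {w: "majority", j: true}});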
Many drivers now (including the MongoDB PHP driver as of 1.3, using a write concern of 1: http://php.net/manual/en/mongo.writeconcerns.php) use normal safe writes by default, and fire-and-forget queries are being phased out as the default.
Looking at the MongoMapper documentation (http://mongomapper.com/documentation/plugins/safe.html), it seems you must still add the flags everywhere.
I'm evaluating MongoDB. I have a small 20GB subset of documents. Each is essentially a request log for a social game along with some captured state of the game the user was playing at that moment.
I thought I'd try finding game cheaters. So I wrote a function that runs server-side. It calls find() on an indexed collection and sorts according to the existing index. Using a cursor, it goes through all documents in indexed order. The index is {user_id, time}. So I'm going through each user's history, checking whether certain values (money/health/etc.) increase faster than is possible in the game. The script returns the first violation found. It does not collect violations.
The ONLY thing this script does on the client is define the function and call mymongodb.eval(myscript) against a mongod instance on another box.
The box that mongod is running on does fine. The one the script is launched from starts losing memory and swap. Hours later, 8GB of RAM and 6GB of swap are in use on the client machine that did nothing more than launch a script on another box and wait for a return value.
Is the mongo client really that flaky? Have I done something wrong or made an incorrect assumption about mongo/mongod?
If you just want to open a client connection to a remote database, you should use the mongo command, not mongod. mongod starts up a server on your local machine; I'm not sure what specifying a URL there will do.
Try
mongo remotehost:27017
From the documentation:
Use map/reduce instead of db.eval() for long running jobs. db.eval blocks other operations!
eval is a function that blocks the entire server if you don't use a special flag. Again, from the docs:
If you don't use the "nolock" flag, db.eval() blocks the entire mongod process while running [...]
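As a sketch of passing that flag through the eval command (the function body and collection name are invented, and note that eval was deprecated and later removed in modern MongoDB):

    // Run server-side code without taking the global lock.
    db.runCommand({
        eval: function () { return db.gamelogs.count(); },
        nolock: true
    });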
You are kind of abusing MongoDB here. Your current routine is strange because it returns the first violation found, but it will have to re-check everything the next time it runs (unless your user ids are ordered and you store the last evaluated user id).
Map/reduce generally is the better option for a long-running task, though aggregating your data does not seem trivial. However, a map/reduce-based solution would also solve the re-evaluation problem.
I'd probably return something like this from map/reduce:
user id -> suspicious actions, e.g.
------
2525454 -> [{logId: 235345435, t: ISODate("...")}]
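A minimal sketch of that map/reduce (the collection name gamelogs and the field names user_id, time, and money are assumptions; real cheat detection would compare consecutive snapshots rather than just collecting them):

    var map = function () {
        // One entry per user; carry just enough state to spot impossible jumps.
        emit(this.user_id, {actions: [{logId: this._id, t: this.time, money: this.money}]});
    };

    var reduce = function (userId, values) {
        // Merge the per-user action lists emitted by map.
        var merged = {actions: []};
        values.forEach(function (v) {
            merged.actions = merged.actions.concat(v.actions);
        });
        return merged;
    };

    db.gamelogs.mapReduce(map, reduce, {out: "suspicious_actions"});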
What's the best solution for using Node.js and Redis to create an uptime monitoring system? Can I use Redis as a queue, even though it may not be the best way to store the results; maybe MongoDB is better for that?
It seems pretty simple, but needing more than one server to confirm that a host is really down, and making everything work together, is not so easy.
To monitor uptime, you would use a cron job on the system. With each call, you would check whether the host is up and how long the check takes, and in that script you would save your data in Redis.
To do this in Node.js, you would create a script that checks the status of the server: just make an HTTP request to the server (or a ping, whatever) and record whether it fails. Then record the result in Redis. How you do it does not matter much, because the script (if you run the cron every 30 seconds) has 30 seconds before the next run, so you don't have to worry about the check overrunning. How you save your data is up to you, but in this case even MySQL would work (if you are only monitoring a small number of sites).
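A minimal sketch of such a check in Node.js, assuming the redis npm package (v4 API) and a made-up host and key layout:

    var http = require('http');
    var createClient = require('redis').createClient;

    var HOST = 'example.com'; // hypothetical host to monitor

    function check(host) {
        return new Promise(function (resolve) {
            var started = Date.now();
            http.get('http://' + host + '/', function (res) {
                res.resume(); // drain the body; the status code is enough here
                resolve({up: res.statusCode < 500, latencyMs: Date.now() - started});
            }).on('error', function () {
                resolve({up: false, latencyMs: null});
            });
        });
    }

    async function main() {
        var client = createClient();
        await client.connect();
        var result = await check(HOST);
        // One record per check, keyed by host and timestamp.
        await client.set('uptime:' + HOST + ':' + Date.now(), JSON.stringify(result));
        await client.quit();
    }

    main();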
More on cron at Wikipedia.
"Can I use Redis as a queue, even though it may not be the best way to store the results; maybe MongoDB is better for that?"
You can (and should) use Redis as your queue. It is going to be extremely fast.
I also think it is going to be a very good option to save the information inside Redis. Unfortunately, Redis does not do any timing (yet). I think you could/should use Beanstalkd to put messages on the queue that get delivered when needed (every x seconds). I also think cron is not a very good idea, because you would need a lot of cron jobs, and with a queue you can do your work faster (sharing load among multiple processes).
Also, I don't think you need that much memory to save everything in memory (which makes the site fast), because the dataset is going to be relatively simple. Even if you aren't able to fit the entire dataset in memory (it would be smart to get more memory, if you ask me), you can rely on Redis's virtual memory.
"It seems pretty simple, but needing more than one server to confirm that a host is really down, and making everything work together, is not so easy."
Sharding/replication is what I think you should read up on to solve this (hard) problem. Luckily, Redis supports replication (sharding can also be achieved). MongoDB supports sharding/replication out of the box. To be honest, I don't think you need sharding yet, and your dataset is rather simple, so Redis is going to be faster:
http://redis.io/topics/replication
http://www.mongodb.org/display/DOCS/Sharding+Introduction
http://www.mongodb.org/display/DOCS/Replication
http://ngchi.wordpress.com/2010/08/23/towards-auto-sharding-in-your-node-js-app/