Benefits of multiple memcached instances - memcached

Is there any difference between having four 0.5GB memcached servers running or one 2GB instance?
Does running multiple instances offer any benefits?

If one instance fails, you still get the benefit of caching from the remaining instances. This is especially true if you are using consistent hashing, which keeps mapping the same keys to the same instances instead of redistributing all reads/writes across the machines that are still up.
You may also elect to run servers on 32-bit operating systems, which cannot address more than around 3GB of memory per process.
Check the FAQ: http://www.socialtext.net/memcached/ and http://www.danga.com/memcached/
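As a rough illustration of the multi-instance setup, here is a minimal sketch using the Python pylibmc client with ketama (consistent) hashing; the server addresses are placeholders, and pylibmc is just one of several clients that support this:
    import pylibmc
    # One logical cache spread across four 0.5GB instances. With ketama
    # (consistent hashing) each key keeps mapping to the same instance, so if
    # one server dies only roughly a quarter of the keys are affected.
    cache = pylibmc.Client(
        ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211", "10.0.0.4:11211"],
        behaviors={"ketama": True},
    )
    cache.set("user:42", "cached-profile-json", time=300)  # 5-minute TTL
    print(cache.get("user:42"))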

High availability is nice, and memcached will automatically distribute your cache across the 4 servers. If one of those servers dies for some reason, you can handle that error by continuing as if the cache were blank, redirecting to a different server, or any sort of custom error handling you want. If your single 2GB server dies, then your options are pretty limited.
The important thing to remember is that you do not have 4 copies of your cache, it is 1 cache, split amongst the 4 servers.
The only downside is that it's easier to run out of memory on one of the 4 x 0.5GB instances than on a single 2GB instance.
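To make the error handling concrete, a common pattern is to treat any cache error like a miss and fall back to the source of truth. A minimal sketch; the load_user_from_db helper is hypothetical:
    def get_user(cache, user_id):
        key = "user:%d" % user_id
        try:
            user = cache.get(key)      # may raise if the owning server is down
        except Exception:
            user = None                # treat a dead server like a cache miss
        if user is None:
            user = load_user_from_db(user_id)  # hypothetical database fallback
            try:
                cache.set(key, user, time=300)
            except Exception:
                pass                   # caching is best-effort
        return user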

I would also add that, with several machines, you may theoretically gain some performance: if you have a lot of frontends doing a lot of heavy reads, it is better to split them across different machines, since the network bandwidth and processing power of a single machine can become your upper bound.
How much this helps depends heavily on how memcached is used, however (sometimes it can be much faster to fetch everything from one machine).
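The parenthetical above most likely refers to multi-key fetches: against one server a single multi-get returns everything in one round trip, while with several servers the client has to fan the request out. A small sketch, reusing the hypothetical pylibmc cache client from the earlier example:
    keys = ["user:1", "user:2", "user:3", "user:4"]
    # With one 2GB instance this is a single network round trip; with four
    # 0.5GB instances the client contacts every server that owns one of the keys.
    found = cache.get_multi(keys)                 # dict of key -> value for the hits
    missing = [k for k in keys if k not in found]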

Related

MongoDB - how to best achieve active/active configuration?

I have an application which is very low on writes. I'm therefore interested in deploying a mongo installation which maximizes the read throughput for the hardware I have (3 database servers in one location). I don't really care for redundancy (backups), but would like automatic failover. Additionally, I'm fine with "eventual consistency", and don't mind if data which isn't the latest data is returned.
I've looked into both sharding and replica sets, and as far as I can tell I don't really need to use sharding, as its benefits are more suited to applications with many writes.
I therefore went ahead and installed a replica set on the three servers I have, and I then set the reading preference to "Nearest", as that would allow reads to take place on any server.
The problem is, I later read that the client is "sticky": once it has chosen a "nearest" mongo server, it is not likely to change it. Besides, even if it were to check for the nearest server again, it would probably pick the same one. This pretty much results in an active/passive configuration, without any load balancing. I do have two application servers, so if they choose different mongo servers it might work OK, but say I wanted to have more than 3 mongo servers in the replica set, then any servers besides a specific two would be passive.
Basically my question is, what's the best way to have an active/active configuration for my deployment? All I want is for requests to go to free mongo servers rather than busy ones.
One way to force this which I thought of is to create three sharded-clusters (each server participating in all three), where each server is the primary in one of these clusters - but this is still not optimal, because besides the relative complexity involved in this configuration, this also doesn't guarantee complete load balancing (for example, in case all requests at a given moment happen to go to one specific shard).
What's the right way to achieve what I want? If it's not possible to achieve this kind of load balancing with mongo, would you recommend that I go with the sharded-clusters solution?
As you already suspected, scaling reads is not a "one size fits all" problem. Everything will depend on your data, your access patterns, your requirements and probably a few other things only you can determine.
In a nutshell, the main thing to consider is why a single server can't handle your read load. If it's because of the size of your data set and the size of your indexes then sharding your data across three shards will reduce the RAM requirements of each of them (or to put it another way will give you the combined RAM of all three systems). As long as you pick a good shard key (one that will distribute the load approximately evenly across all the systems) you will get almost three times the throughput on targeted queries.
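For reference, sharding a collection comes down to the enableSharding and shardCollection admin commands; a hedged sketch via pymongo against a mongos router (the database, collection and shard key names are placeholders):
    from pymongo import MongoClient
    # Connect to a mongos router, not directly to a shard.
    client = MongoClient("mongodb://mongos.example.net:27017")
    client.admin.command("enableSharding", "appdb")
    client.admin.command(
        "shardCollection", "appdb.events",
        key={"user_id": 1},   # pick a key that spreads reads and writes evenly
    )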
If the main requirement for your reads is to reduce the latency of reading the data as much as possible, then a replica set can serve your purposes well, as reading from the "nearest" node will reduce the network round-trip time without changing the duration of the operation on the MongoDB server. This assumes that your writes are infrequent enough, or that your application can tolerate possibly stale data.
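For completeness, the read preference can simply be set on the connection; a minimal pymongo sketch (host names, replica set name and the query are placeholders):
    from pymongo import MongoClient
    # Reads may go to whichever member has the lowest measured latency;
    # writes still go to the primary.
    client = MongoClient(
        "mongodb://db1.example.net,db2.example.net,db3.example.net/"
        "?replicaSet=rs0&readPreference=nearest"
    )
    doc = client.appdb.events.find_one({"user_id": 42})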

Mixing Linux and Windows MongoDB replica set and is equal hardware important for sharding

My first question is really simple: I just want to know if I can mix Linux and Windows MongoDB servers in my sharding/replica cluster, of course with the same version, e.g. 2.2?
For the second question, can someone explain what happens if I have shard servers with this hardware:
Server 1: high CPU, SSD disk
Server 2: normal CPU, SATA disk
Server 3: normal CPU, SSD disk
How will sharding work when we have different hardware servers?
Is it important to build clusters on machines with almost the same hardware?
There shouldn't be a problem mixing Linux and Windows servers, assuming (as you stated) that the MongoDB versions are equivalent.
Regarding differences in hardware, MongoDB tries to distribute data evenly across shards, and each server will use as many available resources as possible to give the best performance possible. When another process on a server requests some resources, Mongo will relinquish those resources. This will usually mean swapping some data to disk until more RAM becomes available again.
Because your servers will do this, and other operations, at different speeds, the important question becomes "What happens when one server in a shard is running slowly". According to the FAQ:
If a shard is responding slowly, mongos will merely wait for the shard to return results.
So in practice, parts of your sharded collections will be slower than others, but everything should work just fine. You would probably have a better experience if the hardware matched, but it doesn't have to.

Adding a new secondary in MongoDB to Distribute Load

I have two shards on three machines (using mongodb 1.8.2):
nodeI including: shard1 (primary) and shard2 (primary)
nodeII including: shard1 (secondary) and shard2 (secondary)
nodeIII including: shard1 (arbiter) and shard2 (arbiter)
NodeII load is getting very high (CPU and IO), and NodeI load is high as well, but a little better than NodeII.
In my Java client I wrote the code to only query NodeII, while NodeI is used just for writes.
I am planning to convert nodeIII from arbiter to secondary to share the read load on NodeII.
Do you think this is a good idea and if I do this, what should I consider, or do you have other suggestions to lower the load?
As long as the arbiter hardware has similar specifications to your secondary, the approach you are suggesting seems reasonable as it will distribute the secondary reads. Usually arbiters have very low hardware specs or are on shared hardware, but I am assuming that this is not the case in your configuration.
If you have an odd number of servers in the replica set you will no longer need an arbiter.
You may want to look into Read Preference here, in particular you might be interested in specifying tag sets to select a secondary.
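As an illustration of tag sets, here is a hedged sketch using the current pymongo API (the replica set members would need matching tags in their configuration; the tag names, hosts and database are placeholders):
    from pymongo import MongoClient
    from pymongo.read_preferences import Secondary
    client = MongoClient("mongodb://db1.example.net,db2.example.net/?replicaSet=rs0")
    # Prefer secondaries tagged {"use": "reporting"}; the trailing empty document
    # falls back to any secondary if no tagged member is available.
    reporting_db = client.get_database(
        "appdb",
        read_preference=Secondary(tag_sets=[{"use": "reporting"}, {}]),
    )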
Reading from a secondary does not necessarily "distribute" the load as you might expect. Without getting to the root of your performance problems, you may just be setting up for more challenges.
In particular, adding a secondary to your existing servers will:
increase the I/O load on the server where you add the secondary (you are now replicating & writing a full extra copy of the data)
provide more contention for reading from the server the secondary is syncing from
potentially cause that secondary to lag behind the primary during heavy read activity (which may be of concern if you are expecting strong consistency).
You should also consider what happens in the case of failure. If your servers are struggling under the current load, things will probably dramatically melt down if any one of your physical servers has problems and all the traffic ends up hitting a single server.
Ideally you should run mongostat or similar monitoring tools to get a better understanding of the performance characteristics of your servers and what might be contributing to the load (memory pressure, lock %, I/O, network, ..). It would be helpful if you could post a sampling of mongostat output to PasteBin or similar.
You should also review your common queries with explain() to understand index usage, and check if they require access to all shards or are being directed to a specific one.
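A quick way to run that check from a driver; this pymongo sketch assumes you connect through mongos, and the collection and filter are placeholders. The explain output shows index usage and, on a sharded cluster, which shard(s) the query was routed to:
    from pymongo import MongoClient
    client = MongoClient("mongodb://mongos.example.net:27017")
    # explain() on the cursor reports the query plan without fetching results.
    plan = client.appdb.events.find({"user_id": 42}).explain()
    print(plan)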
If all 3 servers are the same hardware spec, as a short term improvement I would consider:
Removing the arbiters and replace them with secondary nodes. This will provide extra data redundancy in the event one of your servers fails and help prevent all of the load from landing on one server.
Stepping down the primary on NodeI, so that NodeI and NodeII each have one primary and one secondary (rather than two primaries on NodeI and two secondaries on NodeII). The primary and secondary servers have different write characteristics, so this may balance the load better (see the sketch after this list).
Checking your shard key(s) and common queries to confirm they will reasonably balance reads and writes. Potential problems include a "hot spot" where all writes to a collection hit a single shard, or queries which must hit all shards to get a result.
Testing the change in performance if you don't read from the secondaries. It may seem counter-intuitive, but reading from secondaries may actually be causing you other issues depending on the nature of your queries.
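The step-down mentioned in the second item can be driven from a client with the replSetStepDown command; a hedged sketch (the primary closes its connections while stepping down, which the driver surfaces as a connection error; the host name is a placeholder):
    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect, ConnectionFailure
    primary = MongoClient("mongodb://nodeI.example.net:27017")
    try:
        # Ask the current primary to step down for 60 seconds so another
        # replica set member can be elected primary.
        primary.admin.command("replSetStepDown", 60)
    except (AutoReconnect, ConnectionFailure):
        pass  # expected: the primary drops connections as it steps down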
Lastly, you mention using 1.8.2. There are significant performance and locking/yielding improvements in MongoDB 2.0 and 2.2, as well as other bug fixes. It would be worth testing an upgrade in your development environment as this may address some of your issues.

MongoDB Sharding On One Machine

Does it make sense to implement MongoDB sharding with, say, 100 shards on one beefier machine just to achieve higher concurrent writes into the database, since I am told there is a global lock for each mongod.exe process? Assuming that is possible, will that approach give me higher write concurrency?
Running multiple mongods on a machine is not a good idea. Every one of the mongod processes will try to use all the available memory, forcing the other mongods' memory-mapped pages out of memory. This will create an enormous amount of swapping in most cases.
The global database lock is generally not a problem as is demonstrated in: http://blog.pythonisito.com/2011/12/mongodbs-write-lock.html
Only use one mongod per machine (but it's fine to add a mongos or config server as well), unless it's for some simple testing.
cheers,
Derick
I totally disagree. We run 8 shards per box in our setup. It consists of two head nodes, each with two other machines for replication; 6 boxes total. These are beefy boxes with about 120GB of RAM, 32 cores and 2TB each. By having 8 shards per box (we could go higher, by the way; this is set at 8 for historical reasons) we make sure we utilize the CPU efficiently. The RAM sorts itself out. You do have to watch the metrics and make sure you aren't paging too much, but with SSD drives (which we have), even spilling onto the disks isn't too bad.
The only use case where I found running several mongods on the same server worthwhile was to increase replication speed over a high-latency connection.
As highlighted by Derick, the write lock is not really your issue when running mongodb.
To answer your question: yes, you can demonstrate MongoDB scaling with several instances per machine (4 instances per server seems to be enough) if your test does not involve too much data (otherwise paging will dramatically decrease your performance; I have already tested this).
However, instances will still compete for resources. All you will manage to do is to shift the database lock issue to a resource lock issue.
Yes, you can, and in fact that's what we do for a 50+ million write-heavy database. Just make sure all your indexes per mongod fit into the RAM and there's room for growth and maintenance.
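One hedged way to sanity-check that is the dbStats command, which reports the total index size for a database; a pymongo sketch with a placeholder host and database name:
    from pymongo import MongoClient
    client = MongoClient("mongodb://shard1.example.net:27017")
    stats = client["appdb"].command("dbStats")
    index_bytes = stats["indexSize"]   # total size of all indexes, in bytes
    print("index size: %.1f GB" % (index_bytes / 1024.0 ** 3))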
However, there's a small trade-off: depending on what your target QPS is, this kind of sharding on a single machine requires a machine with more horsepower, whereas sharding across separate machines does not, and in most cases you can get away with cheaper commodity hardware.
Whatever the case is, run a series of performance tests (against IO, network, QPS, etc.) and establish your baseline carefully; also consider SSD drives for storage, and, though this may sound biased, XFS on Linux is also something to consider.

Sharding vs DFS

As far as I understand, sharding (e.g. in MongoDB) and distributed file systems (e.g. HDFS in HBase or HyperTable) are different mechanisms that databases use to scale out; however, I wonder how they compare.
Traditional sharding involves breaking tables into a small number of pieces and running each piece (or "shard") in a separate database on a separate machine. Because of the large shard size, this mechanism can be prone to imbalances due to hot spots and unequal growth, as was evidenced by the Foursquare incident. Also, because each shard is run on a separate machine, these systems can experience availability problems if one of the machines goes down. To mitigate this problem, most sharding systems, including MongoDB, implement replica groups: each machine is replaced by a set of three machines in a master plus two slaves configuration, so that if a machine goes down there are two remaining replicas to serve the data.
There are a couple of problems with this design. First, if a replica fails in a replica group and the group is left with only two members, then to bring the replication count back to three the data on one of these two machines needs to be cloned. Since there are only two machines in the entire cluster that can be used to re-create the replica, there will be an enormous drag on one of them while re-replication is taking place, causing serious performance problems on the shard in question (it takes over two hours to copy 1TB over a gigabit link).
The second problem is that when one of the replicas goes down, it needs to be replaced with a new machine. Even if there is plenty of spare capacity across the cluster to resolve the replication problem, that spare capacity cannot be used to rectify the situation; the only way to solve it is to replace the machine. This becomes very challenging from an operational standpoint as cluster sizes grow into the hundreds or thousands of machines.
The Bigtable+GFS design solves these problems. First, the table data is broken down into much finer grained "tablets". A typical machine in a Bigtable cluster will often have 500+ tablets. If an imbalance occurs, resolving it is just a simple matter of migrating a small number of tablets from one machine to another. If a TabletServer goes down, because the data set is broken down and replicated with such fine granularity, there can be hundreds of machines that participate in the recovery process, which distributes the recovery burden and speeds recovery time. Also, because the data is not tied to a specific machine or machines, the spare capacity on all machines in the cluster can be applied to the failure. There is no operational requirement to replace the machine since any of the spare capacity throughout the cluster can be used to rectify replication imbalance.
Doug Judd
CEO, Hypertable Inc.