Does Consistent Hashing always equal sticky sessions?

I am learning about consistent hashing, and I understand that, provided the number of servers does not change, a load balancer using consistent hashing will always send the same request to the same backend server. I can see how this could be troublesome if one user is more CPU-intensive than the others: they will keep getting mapped to the same server, and that may cause performance issues for everyone else on it. IMO, it makes more sense to use something like least connections, or a mix of consistent hashing and least connections together.
For database partitions I am running into the same issue. I know a cache that is shared by all the databases could solve this, since expensive queries only need to be executed once - is that an acceptable approach? A load balancer uses consistent hashing to select a database, and no single database is ever hit too hard because we cache the results in a Redis cluster.
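Roughly, what I have in mind is a cache-aside pattern like this sketch (the Redis endpoint, key scheme, and TTL are made up, and run_query stands in for whatever database driver we use):

    import hashlib
    import json

    import redis  # assumes the redis-py client

    r = redis.Redis(host="cache.internal", port=6379)  # hypothetical Redis endpoint

    def cached_query(run_query, sql, params=(), ttl=300):
        """Cache-aside: serve a query result from Redis if present, otherwise
        run it against the database (via run_query) and cache it for ttl seconds."""
        key = "q:" + hashlib.sha1((sql + repr(params)).encode()).hexdigest()
        hit = r.get(key)
        if hit is not None:
            return json.loads(hit)
        rows = run_query(sql, params)          # whatever database driver is in use
        r.setex(key, ttl, json.dumps(rows, default=str))
        return rows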
It appears that consistent hashing and sticky sessions go hand in hand, and sticky sessions add a lot of overhead. Am I wrong in this assumption?

Related

Propagate change in distributed in-memory cache

I have an application deployed on a cluster of 1000 commodity boxes. While starting, each instance of the application loads a non-trivial amount of data from the database and uses it as a cache. During a day, around 20% of this cached data needs to be updated.
What are efficient ways to update the in-memory data across the entire cluster near-simultaneously? I have thought of JMX and ZooKeeper, but I am not sure whether either would really be efficient/fast enough.
Well, assuming you're using Memcached's consistent hashing, you could go a step further and have each cache replicate to its closest successor on the ring. This lessens the problem but doesn't entirely eliminate it; it is, however, a simple solution. Gossip + CRDTs are another option: Dynamo and Riak use a combination of gossip, consistent hashing, and CRDTs.
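A minimal sketch of that ring-plus-successor idea (node names, virtual-node count, and replica count are made up for illustration):

    import bisect
    import hashlib

    class HashRing:
        """Minimal consistent-hash ring: each key has an owner node, and a copy
        is also kept on the owner's closest distinct successor on the ring."""

        def __init__(self, nodes, vnodes=100):
            self._ring = []                        # sorted list of (hash, node)
            for node in nodes:
                for i in range(vnodes):            # virtual nodes smooth the distribution
                    self._ring.append((self._hash(f"{node}#{i}"), node))
            self._ring.sort()

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def nodes_for(self, key, replicas=2):
            """Return the owner plus up to (replicas - 1) distinct successors."""
            idx = bisect.bisect(self._ring, (self._hash(key),))
            chosen = []
            for step in range(len(self._ring)):
                node = self._ring[(idx + step) % len(self._ring)][1]
                if node not in chosen:
                    chosen.append(node)
                    if len(chosen) == replicas:
                        break
            return chosen

    ring = HashRing(["cache-a", "cache-b", "cache-c", "cache-d"])
    print(ring.nodes_for("user:42"))   # owner plus its successor, e.g. ['cache-b', 'cache-d']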

Why can't CP systems also be CAP?

My understanding of the CAP acronym is as follows:
Consistent: every read gets the most recent write
Available: every node is available
Partition Tolerant: the system can continue upholding the A and C promises when the network connection between nodes goes down
Assuming my understanding is more or less on track, then something is bothering me.
AFAIK, availability is achieved via any of the following techniques:
Load balancing
Replication to a disaster recovery system
So if I have a system that I already know is CP, why can't I "make it full CAP" by applying one of these techniques to make it available as well? I'm sure I'm missing something important here, just not sure what.
It's the partition tolerance that you've got wrong.
As long as there isn't any partitioning happening, systems can be consistent and available. There are CA systems which say, "we don't care about partitions." You can run them inside racks on server hardware and make partitioning extremely unlikely. The problem is: what if partitions do occur?
The system can either choose to
continue providing the service, hoping the other server is down rather than still running and serving different data - choosing availability (AP)
stop providing the service, because it couldn't guarantee consistency anymore, since it doesn't know if the other server is down or in fact up and running and just the communication between these two broke off - choosing consistency (CP)
The idea of the CAP theorem is that you cannot provide both Availability AND Consistency once partitioning occurs: you can either go for availability and hope for the best, or play it safe and be unavailable but consistent.
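As a toy sketch of that choice (this is not a real consensus protocol; reachable_peers just stands in for whatever failure detector the system has):

    class PartitionError(Exception):
        """A CP node refuses to answer when it cannot rule out a partition."""

    def read(key, store, reachable_peers, cluster_size, mode="CP"):
        have_quorum = reachable_peers + 1 > cluster_size // 2   # count ourselves
        if mode == "CP" and not have_quorum:
            # Play it safe: we can't tell "the other server is down" from "the link is down".
            raise PartitionError(f"cannot guarantee a consistent read of {key!r}")
        # AP mode (or CP with a quorum): answer from the local copy and hope for the best.
        return store.get(key)

    local_copy = {"balance": 100}
    print(read("balance", local_copy, reachable_peers=2, cluster_size=3))              # healthy cluster
    print(read("balance", local_copy, reachable_peers=0, cluster_size=3, mode="AP"))   # partitioned, possibly stale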
Here are 2 great posts, which should make it clear:
You Can’t Sacrifice Partition Tolerance shows the idea that every truly distributed system needs to deal with partitioning now and then, and hence CA systems will break instantly at the first occurrence of a partition
CAP Twelve Years Later: How the "Rules" Have Changed is slightly more up to date and presents the CAP theorem more flexibly: developers can choose how applications behave during partitioning and can sacrifice a bit of consistency to gain some availability.
So, to finally answer your question: if you take a CP system and replicate it more, you might either run into the overhead of the messages sent between the nodes to keep the system consistent, or - in case a substantial part of the nodes fails, or a network partition occurs without any side having a clear majority - it won't be able to continue operating, because it can no longer guarantee consistency. But yes, these lines are getting blurrier now, and I think the references I've provided will give you a much better understanding.

MongoDB - how to best achieve active/active configuration?

I have an application which is very low on writes. I'm therefore interested in deploying a mongo installation which maximizes the read throughput for the hardware I have (3 database servers in one location). I don't really care for redundancy (backups), but would like automatic failover. Additionally, I'm fine with "eventual consistency", and don't mind if data which isn't the latest data is returned.
I've looked into both sharding and replica sets, and as far as I can tell I don't really need to use sharding, as its benefits are better suited to applications with many writes.
I therefore went ahead and installed a replica set on the three servers I have, and then set the read preference to "Nearest", as that would allow reads to take place on any server.
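For reference, here is roughly that configuration sketched with the Python driver (just for illustration - hostnames and the replica-set name are placeholders):

    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/"
        "?replicaSet=rs0&readPreference=nearest"
    )

    doc = client.mydb.mycollection.find_one({"user_id": 42})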
The problem is, I later read that the client is "sticky": basically, once it has chosen a "nearest" mongo server, it's not likely to change it. Besides, even if it were to "check for nearest" again, it would probably choose the same one over again. This pretty much results in an active/passive configuration without any load balancing. I do have two application servers, so if they choose different mongo servers it might work OK, but say I wanted to have more than 3 mongo servers in the replica set - then any servers besides a specific two would be passive.
Basically my question is, what's the best way to have an active/active configuration for my deployment? All I want is for requests to go to free mongo servers rather than busy ones.
One way to force this which I thought of is to create three sharded-clusters (each server participating in all three), where each server is the primary in one of these clusters - but this is still not optimal, because besides the relative complexity involved in this configuration, this also doesn't guarantee complete load balancing (for example, in case all requests at a given moment happen to go to one specific shard).
What's the right way to achieve what I want? If it's not possible to achieve this kind of load balancing with mongo, would you recommend that I go with the sharded-clusters solution?
As you already suspected, scaling reads is not a "one size fits all" problem. Everything will depend on your data, your access patterns, your requirements and probably a few other things only you can determine.
In a nutshell, the main thing to consider is why a single server can't handle your read load. If it's because of the size of your data set and the size of your indexes then sharding your data across three shards will reduce the RAM requirements of each of them (or to put it another way will give you the combined RAM of all three systems). As long as you pick a good shard key (one that will distribute the load approximately evenly across all the systems) you will get almost three times the throughput on targeted queries.
If the main requirement for your reads is to reduce the latency of reading the data as much as possible, then a replica set can serve your purposes well, as reading from the "nearest" node will reduce the network round-trip time without changing the duration of the operation on the MongoDB server. This assumes that your writes are infrequent enough, or that your application can tolerate possibly stale data.

Adding a new secondary in MongoDB to Distribute Load

I have two shards on three machines (using MongoDB 1.8.2):
nodeI: shard1 (primary) and shard2 (primary)
nodeII: shard1 (secondary) and shard2 (secondary)
nodeIII: shard1 (arbiter) and shard2 (arbiter)
NodeII load is getting very high (CPU and I/O), and NodeI is high as well, but a little better than NodeII.
In my Java client I wrote the code to only query NodeII, while NodeI is just used for writing.
I am planning to convert nodeIII from arbiter to secondary to share the read load on NodeII.
Do you think this is a good idea and if I do this, what should I consider, or do you have other suggestions to lower the load?
As long as the arbiter hardware has similar specifications to your secondary, the approach you are suggesting seems reasonable as it will distribute the secondary reads. Usually arbiters have very low hardware specs or are on shared hardware, but I am assuming that this is not the case in your configuration.
If you have an odd number of servers in the replica set you will no longer need an arbiter.
You may want to look into Read Preference here, in particular you might be interested in specifying tag sets to select a secondary.
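For illustration, with the Python driver a tagged secondary read would look roughly like this (the tag, database, and collection names are made up, and tag sets need a newer MongoDB/driver than the 1.8.2 in the question):

    from pymongo import MongoClient
    from pymongo.read_preferences import Secondary

    client = MongoClient("mongodb://nodeI:27017,nodeII:27017,nodeIII:27017/?replicaSet=rs0")

    # Route these reads to secondaries that carry a hypothetical {"use": "reporting"} tag.
    reporting_db = client.get_database(
        "mydb",
        read_preference=Secondary(tag_sets=[{"use": "reporting"}]),
    )
    docs = list(reporting_db.orders.find({"status": "open"}))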
Reading from a secondary does not necessarily "distribute" the load as you might expect. Without getting to the root of your performance problems, you may just be setting up for more challenges.
In particular, adding a secondary to your existing servers will:
increase the I/O load on the server where you add the secondary (you are now replicating & writing a full extra copy of the data)
provide more contention for reading from the server the secondary is syncing from
potentially cause that secondary to lag behind the primary during heavy read activity (which may be of concern if you are expecting strong consistency).
You should also consider what happens in the case of failure. If your servers are struggling under the current load, things will probably dramatically melt down if any one of your physical servers has problems and all the traffic ends up hitting a single server.
Ideally you should run mongostat or similar monitoring tools to get a better understanding of the performance characteristics of your servers and what might be contributing to the load (memory pressure, lock %, I/O, network, ..). It would be helpful if you could post a sampling of mongostat output to PasteBin or similar.
You should also review your common queries with explain() to understand index usage, and check if they require access to all shards or are being directed to a specific one.
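For example, with the Python driver (the collection and query here are hypothetical):

    from pymongo import MongoClient

    client = MongoClient("mongodb://nodeI:27017")   # placeholder address

    # The plan shows which index (if any) is used, how many documents are scanned,
    # and - when connected through mongos - whether one shard or all shards are hit.
    plan = client.mydb.mycollection.find({"user_id": 42}).explain()
    print(plan)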
If all 3 servers are the same hardware spec, as a short term improvement I would consider:
Removing the arbiters and replacing them with secondary nodes. This will provide extra data redundancy in the event one of your servers fails, and help prevent all of the load from landing on one server.
Stepping down the primary on NodeI, so that NodeI and NodeII each have a primary and a secondary (rather than the two primaries on NodeI and two secondaries on NodeII). The primary and secondary servers have different write characteristics, so this may balance the load better (see the sketch after this list).
Checking your shard key(s) and common queries to confirm they will reasonably balance reads and writes. Potential problems include a "hot spot" where all writes to a collection hit a single shard, or queries which have to hit all shards to get a result.
Testing the change in performance if you don't read from the secondaries. It may seem counter-intuitive, but reading from secondaries may actually be causing you other issues depending on the nature of your queries.
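Regarding the step-down suggestion above: with the Python driver it would look roughly like the sketch below (your client is Java, so this is only illustrative; the address is a placeholder, and note that a stepping-down primary closes its connections):

    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect

    # Connect directly to the current primary on NodeI (placeholder address).
    primary = MongoClient("mongodb://nodeI:27017", directConnection=True)

    try:
        # Ask the primary to step down for 60 seconds so another member can take over.
        primary.admin.command("replSetStepDown", 60)
    except AutoReconnect:
        # The stepping-down primary closes its connections, so this error is expected.
        pass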
Lastly, you mention using 1.8.2. There are significant performance and locking/yielding improvements in MongoDB 2.0 and 2.2, as well as other bug fixes. It would be worth testing an upgrade in your development environment as this may address some of your issues.

PostgreSQL consuming large amount of memory for persistent connection

I have a C++ application which is making use of PostgreSQL 8.3 on Windows. We use the libpq interface.
We have a multi-threaded app where each thread opens a connection and keeps using it without ever calling PQfinish.
We notice that for each query (especially the SELECT statements) postgres.exe memory consumption would go up. It goes up as high as 1.3 GB. Eventually, postgres.exe crashes and forces our program to create a new connection.
Has anyone experienced this problem before?
EDIT: shared_buffers is currently set to 128MB in our conf file.
EDIT2: a workaround that we have in place right now is to call PQfinish for every transaction. But then, this slows down our processing a bit since establishing a connection every time is quite slow.
In PostgreSQL, each connection has a dedicated backend. This backend not only holds connection and session state, but is also an execution engine. Backends aren't particularly cheap to leave lying around, and they cost both memory and synchronization overhead even when idle.
There's an optimum number of actively working backends for any given Pg server on any given workload, where adding more working backends slows things down rather than speeding them up. You want to find that point and limit the number of backends to around that level. Unfortunately there's no magic recipe for this; it mostly involves benchmarking on your hardware and with your workload.
If you need more connections than that, you should use a proxy or pooling system that allows you to separate "connection state" from "execution engine". Two popular choices are PgBouncer and PgPool-II. You can maintain light-weight connections from your app to the proxy/pooler, and let it schedule the workload to keep the database server working at its optimum load. If too many queries come in, some wait before being executed instead of competing for resources and slowing down all queries on the server.
See the postgresql wiki.
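If an external pooler isn't an option, you can approximate the same idea - queueing work instead of opening more backends - with a client-side pool inside the application. A rough sketch in Python with psycopg2 (connection details and limits are made up; this is not equivalent to PgBouncer, which pools across applications):

    from psycopg2.pool import ThreadedConnectionPool

    # Cap how many backends this application can open; the numbers and DSN are illustrative.
    pool = ThreadedConnectionPool(
        minconn=2,
        maxconn=10,
        dsn="dbname=mydb user=myuser host=db.internal",
    )

    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
        conn.rollback()            # end the implicit transaction before handing the connection back
    finally:
        pool.putconn(conn)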
Note that if your workload is read-mostly, and especially if it has items that don't change often for which you can determine a reliable cache invalidation scheme, you can also potentially use memcached or Redis to reduce your database workload. This requires application changes. PostgreSQL's LISTEN and NOTIFY will help you do sane cache invalidation.
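As an illustration of that pattern, here is a minimal Python/psycopg2 listener sketch (the channel name, DSN, and payload format are made up):

    import select

    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=mydb user=myuser host=db.internal")   # placeholder DSN
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    cur.execute("LISTEN cache_invalidation;")   # channel name is made up
    # Writers would run something like: NOTIFY cache_invalidation, 'orders:42';

    while True:
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                            # timed out, nothing to invalidate
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            print("invalidate cache key:", note.payload)   # e.g. delete it from Redis/memcached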
Many database engines have some separation of execution engine and connection state built in to the core database engine's design. Sybase ASE certainly does, and I think Oracle does too, but I'm not too sure about the latter. Unfortunately, because of PostgreSQL's one-process-per-connection model it's not easy for it to pass work around between backends, making it harder for PostgreSQL to do this natively, so most people use a proxy or pool.
I strongly recommend that you read PostgreSQL High Performance. I don't have any relationship/affiliation with Greg Smith or the publisher*, I just think it's great and will be very useful if you're concerned about your DB's performance.
* ... well, I didn't when I wrote this. I work for the same company now.
The memory usage is not necessarily a problem. PostgreSQL uses shared memory for some caching, and this memory does not count towards the size of the process memory usage until it's actually used. The more you use the process, the larger the part of the shared buffers that will be active in its address space.
If you have a large value for shared_buffers, this will happen. If you have it too large, the process can run out of address space and crash, yes.
The problem is probably that you don't close the transaction.
In PostgreSQL, even if you only run SELECTs without any DML, they still run inside a transaction, which needs to be rolled back.
Adding a rollback at the end of each transaction should reduce your memory problem.
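For what it's worth, the same idea sketched in Python with psycopg2 (the original app is C++ using libpq, so this is only illustrative; the DSN is a placeholder):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=myuser host=db.internal")   # placeholder DSN

    def run_select(sql, params=()):
        """Run a read-only query and immediately end the implicit transaction."""
        with conn.cursor() as cur:
            cur.execute(sql, params)
            rows = cur.fetchall()
        conn.rollback()    # ends the transaction the SELECT opened, without reconnecting
        return rows

    # Alternatively, setting conn.autocommit = True avoids leaving a transaction open at all.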