I know memcache uses consistent hashing to do sharding.
But does memcache do replication the way disk-based storage does?
I assume it does not, since losing one of the caching servers just means cache misses for that shard; it's not a single point of failure.
However, I still want to confirm.
No. Memcache does not support replication, nor does it store any data on disk. Everything is stored in memory, which is the main reason memcache is so fast.
Also, in a way, memcache is not distributed either: it is the client that takes the multiple memcache servers into account, not the memcache servers themselves. Each server is unaware of the existence of the others.
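To make that concrete, here is a minimal Python sketch (with made-up server addresses) of the kind of consistent-hash ring a memcache client builds to pick a shard; real client libraries do essentially this, just with more care:

    import hashlib
    from bisect import bisect

    class HashRing:
        """Minimal consistent-hash ring: each server owns several points on the
        ring, so losing one server only remaps that server's share of the keys."""

        def __init__(self, servers, points_per_server=100):
            self.ring = []
            for server in servers:
                for i in range(points_per_server):
                    h = int(hashlib.md5(f"{server}#{i}".encode()).hexdigest(), 16)
                    self.ring.append((h, server))
            self.ring.sort()
            self.hashes = [h for h, _ in self.ring]

        def server_for(self, key):
            h = int(hashlib.md5(key.encode()).hexdigest(), 16)
            idx = bisect(self.hashes, h) % len(self.ring)
            return self.ring[idx][1]

    # The client, not the servers, decides which shard holds a key.
    ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
    print(ring.server_for("user:42"))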
If you want replication you can take a look at repcached.
Redis is also a good alternative, and it offers many more features.
Redis and Couchbase provide persistence; I'm not sure whether they are binary-compatible with the memcache protocol.
For replication I suggest having a look at mcrouter. I wrote my own script for pre-seeding memcache instances for high availability. See https://symcbean.blogspot.com/2023/01/usable-memcache.html
I have the following situation: I am building a system that requires redundant microservices for failover and load balancing, so I am starting two or more instances of, for example, a simple core REST service that provides data.
My question is: how would you store the data? Using two JPA instances to access the same database (both writing and reading) will cause problems, especially with level-2 caching and consistency. Since the database itself must be redundant (a requirement), it might be possible to give each service instance its own database, but how would you synchronize them? Is there any common solution for this?
Thanks in advance!
If you truly need a multi-master consistent database, then you will almost definitely need to implement this at the database layer.
I would not cache things that are transactionally sensitive. If you truly need to, and cannot specify a reasonable TTL within which content may be stale, then you will need to set up a pub/sub sort of mechanism to expire modified entities. A lot of this really depends on your data: how often does it change, and can you separate cacheable from non-cacheable data? These questions strongly influence your caching decisions.
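A hedged sketch of what such a pub/sub expiry mechanism could look like, here using Redis pub/sub purely as an example transport; the channel name and the local_cache handle are hypothetical:

    import redis

    r = redis.Redis(host="localhost", port=6379)
    CHANNEL = "entity-invalidations"   # hypothetical channel name

    def on_entity_updated(entity_id):
        """Writer side: after committing a change, tell every service instance
        to drop its cached copy of the modified entity."""
        r.publish(CHANNEL, entity_id)

    def invalidation_listener(local_cache):
        """Each instance runs this in a background thread and evicts entities
        from its local cache when another instance announces a write."""
        pubsub = r.pubsub()
        pubsub.subscribe(CHANNEL)
        for message in pubsub.listen():
            if message["type"] == "message":
                local_cache.pop(message["data"].decode(), None)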
If you don't want to re-invent master-master replication (which will be highly non-trivial), I suggest you choose a DB system that supports this out of the box.
This will not solve all your problems out of the box, but at least it solves the hard part of the problem. You will still need to define and implement a conflict resolution strategy, for example.
A good choice for a master-master DB system is CouchDB. It is open source and there are also service providers available, in case you don't want to host the DB by yourself. I'm sure there are other DB systems that provide master-master replication as well.
There are two completely separate layers in your case.
One for the application servers and another for the database.
If you really need a scalable system (and I think you do, since you mention load balancing), then you should move all state out of your application.
For example, you should not use level-2 caching inside your application instances; instead, use an external service like Redis or memcache.
And you should use just one master database instance for writes, with a replica waiting for failover. To do that we use Amazon RDS Multi-AZ instances: there is a single master database which is replicated to another instance, and in case of a crash the second database is automatically promoted to master within a couple of seconds.
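As an illustration of keeping the instances stateless, here is a minimal cache-aside sketch in Python; the cache host, database DSN, and products table are all hypothetical:

    import json
    import redis
    import psycopg2

    cache = redis.Redis(host="cache.internal", port=6379)          # shared external cache
    db = psycopg2.connect("host=db-primary dbname=app user=app")   # hypothetical DSN
    db.autocommit = True

    def get_product(product_id, ttl=60):
        """Cache-aside read: every instance checks the shared cache first,
        so no instance needs its own in-process level-2 cache."""
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
        with db.cursor() as cur:
            cur.execute("SELECT id, name, price FROM products WHERE id = %s",
                        (product_id,))
            row = cur.fetchone()
        if row is None:
            return None
        product = {"id": row[0], "name": row[1], "price": float(row[2])}
        cache.set(key, json.dumps(product), ex=ttl)   # expire after ttl seconds
        return product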
We want to take advantage of NoSQL databases in our applications, and we found out about Couchbase.
I've read on another Stack Overflow question that you can configure Couchbase to work with memcached only (so it saves data only in memory, not also on disk).
However, I haven't found anything about this in the documentation.
Is it possible to set up Couchbase Server to work only with RAM?
Or do you specify on the client side where the data should be saved (disk or memory)?
Yes, just use memcached buckets. That's all.
Check out: http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-introduction-basics.html
Couchbase 2.0 documentation explicitly states that it's an in-memory database. From my experience the buckets all exist in RAM. You can set the size of every bucket to partition your RAM appropriately.
My application uses PostgreSQL 9.0 and is composed of one or more stations that interact with a global database. It is like a common client-server application, but to avoid any additional hardware every station includes both client and server: a main station is promoted to also act as the server, and all the others act as clients to it. This solution lets me scale: a user may initially need a single station and can decide to expand to more later without a useless separate server in the initial phase.
I'm trying to avoid the situation where all the other stations stop working if the main station goes down; the best solution seems to be to continuously replicate the main database to an unused database on one or more of the other stations.
Searching around, I found that pgpool could fit my needs, but from all the examples and tutorials it seems that the point of failure just moves from the main database to the server running pgpool.
I've read something about running multiple pgpool instances with a heartbeat tool, but it isn't clear how to do it.
Considering my architecture, where there are no separate, specialized servers, can someone give me some hints? In a failover situation pgpool seems to do everything automatically; can I assume that failover can be handled by a standard user without the intervention of an administrator?
For this kind of application I really like Amazon's Dynamo design. The document at the link is quite big, but it is worth reading. In fact, there are applications that already implement this approach:
MongoDB
Cassandra
Project Voldemort
There may be others I'm not aware of. Cassandra started at Facebook, and Voldemort is the one used by LinkedIn. By making things distributed and adding redundancy to your data distribution, you step away from traditional master-slave replication approaches.
If you'd like to stay with PostgreSQL, it shouldn't be a big deal to implement such an approach. You will need to implement an extra layer (a proxy) that decides, based on pre-configured options, how to retrieve and save the data.
The proxying layer can be implemented in:
the application (requires lots of work IMHO; see the sketch after this list);
the database;
the middleware.
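For the application option, a rough Python sketch of what such a proxy layer might look like, sending writes to the primary and spreading reads round-robin over the replicas; the DSNs are made up:

    import itertools
    import psycopg2

    PRIMARY_DSN = "host=db-primary dbname=app user=app"            # hypothetical
    REPLICA_DSNS = ["host=db-replica1 dbname=app user=app",
                    "host=db-replica2 dbname=app user=app"]

    class RoutingProxy:
        """Application-side routing: writes go to the primary, read-only
        queries are spread round-robin over the replicas."""

        def __init__(self):
            self.primary = psycopg2.connect(PRIMARY_DSN)
            replica_conns = [psycopg2.connect(dsn) for dsn in REPLICA_DSNS]
            for conn in replica_conns:
                conn.autocommit = True      # reads only, no transaction needed
            self.replicas = itertools.cycle(replica_conns)

        def write(self, sql, params=()):
            with self.primary, self.primary.cursor() as cur:   # commits on success
                cur.execute(sql, params)

        def read(self, sql, params=()):
            conn = next(self.replicas)
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()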
You can use PL/Proxy on the middleware layer, a project that originated at Skype. It is deeply integrated with PostgreSQL, so I'd say it is a combination of options 2 and 3. PL/Proxy will require you to use functions for all kinds of queries against the database.
In case you will hit performance issues, PgBouncer can be used.
Last note: whichever way you decide to go, a certain amount of development will be required.
EDIT:
It all depends on what you call a “failure” and what you consider an interrupted state of the system.
Let's look at the pgpool features.
Connection Pooling: PostgreSQL uses a single process (fork) per session. Obviously, if you have a very busy site, you'll hit the OS limit. To overcome this, connection poolers are used. They also allow you to use your resources evenly, so it's generally a good idea to have a pooler in front of your database. In case of a pgpool outage you'll face a big number of clients unable to reach your database, and if you point them directly at the database, bypassing the pooler, you'll face performance issues.
Replication: All your queries will be auto-replicated to the slave instances. This applies to DML and DDL queries. In case of a pgpool outage your replication will stop and the slaves will not be able to catch up with the master, as there's no change tracking done outside pgpool (as far as I know).
Load Balancing: Your read-only queries will be spread across several instances, achieving nice response times and allowing you to put more load on the system. In case of a pgpool outage your queries will suddenly run much slower, if the system is capable of handling such a load at all, and only if the master database takes over the work of the failed pgpool.
Limiting Exceeding Connections: pgpool will queue connections in case they cannot be processed immediately. In case of a pgpool outage all such connections will be aborted, which might break the DB/application protocol, i.e. if the application was designed to never get connection aborts.
Parallel Query: A single query is executed on several nodes to reduce response time. In case of a pgpool outage such queries will not be possible, resulting in longer processing.
If you're fine with facing such conditions and you don't treat them as failures, then pgpool can serve you well. But if 5 minutes of outage will cost your company several thousand dollars, then you should look for a more solid solution.
The higher the cost of an outage, the more fine-tuned the failover system should be.
Typically, no single tool is used to achieve failover automation.
For each kind of failure you will have to take care of:
DNS, unless you want to reconfigure all clients;
re-initializing backups and failover procedures;
making sure the old master will not try to fight for its role in case it comes back (STONITH).
In my experience, it's people from the DBA, SysAdmin, Architecture, and Operations departments who decide on the proper strategies.
Finally, in my view, pgpool is a good tool, and I do use it. But it is not designed as a complete failover solution, not without extra thinking, measures taken, and scripts written. That is why I've provided links to distributed databases: they provide a much higher level of availability.
And PostgreSQL can be made distributed with a little effort thanks to its great extensibility.
First of all, I'd recommend checking out pgBouncer rather than pgpool. Next, what level of scaling are you attempting to reach? You might just choose to run your connection pooler on all your client systems (bouncer is light enough for this to work).
That said, vyegorov's answer is probably the direction you should really be looking at in this day and age. Are you sure you really need a database?
EDIT
So, the rather obvious answer is that pgPool creates a single point of failure if you only have one box running it. The obvious solution is to run multiple poolers across multiple boxes. You then need to engineer your application code to handle database disconnections. This is not as easy as it sounds, but basically you need to use two-phase commit for non-idempotent changes. So to the greatest extent possible you should make your changes idempotent.
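As an illustration of what an idempotent change can look like, here is a hedged Python/psycopg2 sketch that keys every change on a client-generated id, so replaying it after a disconnection is a no-op; the applied_changes and accounts tables are made up:

    import uuid
    import psycopg2

    conn = psycopg2.connect("host=db-primary dbname=app user=app")   # hypothetical DSN

    def apply_change_once(change_id, account_id, amount):
        """Record the change keyed by a client-generated id; re-running the same
        change (e.g. after a reconnect where the commit ack was lost) does nothing."""
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO applied_changes (change_id) VALUES (%s) "
                "ON CONFLICT (change_id) DO NOTHING",
                (change_id,))
            if cur.rowcount == 1:      # first time we have seen this change
                cur.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, account_id))

    apply_change_once(str(uuid.uuid4()), account_id=7, amount=100)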
Based on your comments, I'd guess that you may have limited experience with database replication? pgPool does statement-based replication. There are tradeoffs here: the benefit is that it's very easy to set up; the downside is that there is no guarantee that the data on the replicated databases will be identical. It is also (I believe, but haven't checked lately) not compatible with 2PC.
My earlier question about whether you really need a database was driven by my perception that you have designed a system without going into much detail about this part of it. I have about two decades of experience working on "this part" of similar systems. I expect you will find that there are no out-of-the-box solutions and that the issues involved get very complicated. In other words, I'm suggesting you reconsider your design.
Try reading this blog (with lots of information about PostgreSQL and PgPool-II):
https://www.itenlight.com/blog/2016/05/21/PostgreSQL+HA+with+pgpool-II+-+Part+5
Search for "WATCHDOG" on that same blog. With that you can configure a PgPool-II cluster. Two machines on the same subnet are required, though, and a virtual IP on the same subnet.
Hope this is useful for anyone trying the same thing (even if this answer is quite late).
PGPool certainly becomes a single point of failure, but it is a much smaller one than a Postgres instance.
Though I have not attempted it yet, it should be possible to have two machines with PGPool installed, but only running on one. You can then use Linux-HA to restart PGPool on the standby host if the primary becomes unavailable, and to optionally fail it back again when the primary comes back. You can at the same time use Linux-HA to move a single virtual IP over as well, so that your clients can connect to a single IP for their Postgres services.
Death of the postgres server will make PGPool send queries to the backup Postgres (promoting it to master if necessary).
Death of the PGPool server will cause a brief outage (configurable, but likely in the region of <1min) until PGPool starts up on the standby, the IP address is claimed, and a gratuitous ARP sent out. Of course, the client will have to be intelligent enough to reconnect without dying.
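A sketch (in Python with psycopg2, using a made-up virtual IP) of what "intelligent enough to reconnect" might look like on the client side during that brief failover window:

    import time
    import psycopg2
    from psycopg2 import OperationalError

    VIRTUAL_IP_DSN = "host=10.0.0.100 dbname=app user=app"   # hypothetical virtual IP

    def run_with_reconnect(sql, params=(), retries=30, delay=2.0):
        """Retry while PGPool and the virtual IP move to the standby host;
        gives up after roughly retries * delay seconds."""
        for attempt in range(retries):
            try:
                conn = psycopg2.connect(VIRTUAL_IP_DSN, connect_timeout=5)
                try:
                    with conn, conn.cursor() as cur:
                        cur.execute(sql, params)
                        return cur.fetchall() if cur.description else None
                finally:
                    conn.close()
            except OperationalError:
                time.sleep(delay)      # failover still in progress, try again
        raise RuntimeError("database did not come back after failover")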
Although I have much experience writing code, I don't really have much experience deploying things. I am writing a project that uses MongoDB for persistence, Redis for meta-caching, and Play for serving pages. I am deciding whether to buy one dedicated server or multiple small/medium instances from Amazon/Linode (one each for Mongo, Redis, and Play). I have thought of the trade-offs below; I wonder if anyone can add to the list or provide further insights. I am leaning toward (B), buying two sets of instances from Linode and Amazon, so that if one provider has an outage the system can fail over to the other. Also, if anyone has any tips for deploying a Scala/Maven cluster, or tools to do so, that would be much appreciated.
A. put everything in one instance
Pros:
faster speed between database and page servlet (same host).
cheaper.
less end points to secure.
Cons:
harder to manage. (in my opinion)
harder to upgrade a single module; if there are installation issues, it might bring down the whole system.
B. put each module (mongo,redis,play) in different instances
Pros:
sharding is easier.
easier to create cluster for a single purpose. (i.e. cluster of redis)
easier to allocate resources between modules.
less likely everything will fail at once.
Cons:
bandwidth between modules -> $
secure each connection and end point.
I can only comment on the technical aspects (not cost, serviceability, etc.).
You don't mention whether the dedicated instance is a physical box or simply a large VM. If the application generates a lot of round trips to MongoDB or Redis, the difference will be quite significant.
With a VM, the cost of I/Os, OS scheduling and system calls is higher. These elements tend to represent an important part in the performance cost of efficient remote data stores like MongoDB or Redis, and the virtualization toll is higher for them.
From a system point of view, I would not put MongoDB and Redis/Play on the same box if the MongoDB database is expected to be larger than the available memory. MongoDB maps data files in memory, and relies on the OS to perform memory swapping. It is designed for this. The other processes are not. Swapping induced by MongoDB will have catastrophic consequences on Redis and Play response time if they are all on the same box. So I would at least separate MongoDB from Redis/Play.
If you plan to use Redis for caching, it makes sense to keep it on the same box as the Play server. Redis will use memory but little CPU, while Play will use CPU but not much memory, so it seems a good fit. Also, I'm not sure whether it is possible from Play, but if you use a unix domain socket to connect to Redis instead of the TCP loopback, you can get about 50% more throughput for free.
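I don't know what the Play-side client offers, but for illustration here is how the unix-socket connection looks from Python with redis-py; the socket path is whatever you enable in redis.conf:

    import redis

    # Assumes redis.conf enables a socket, e.g.:  unixsocket /tmp/redis.sock
    cache_tcp = redis.Redis(host="127.0.0.1", port=6379)          # TCP loopback
    cache_unix = redis.Redis(unix_socket_path="/tmp/redis.sock")  # unix domain socket

    cache_unix.set("page:/home", "<rendered html>", ex=60)   # cache with a 60s TTL
    print(cache_unix.get("page:/home"))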
I'm working on a project where a PostgreSQL database needs to be shared across several physical locations. Each location has limited connectivity and may only have access to the outside world once or twice a day. So the database has to be available locally at each location, but must also synchronize with the master database when possible.
I am not yet familiar with replication or clustering. Are these good solutions? Or is there a better way of doing it? I would appreciate some advice on this. :)
NOTE: clashing of primary keys from different locations would not be an issue, this has been taken care of.
If the remote locations require read-only access to the data, you can set up asynchronous replication fairly easily using log shipping, which is a built-in feature of PostgreSQL. In this configuration, the master server drops WAL (write-ahead log) files to a shared location where the remote servers can periodically connect and read the logs to bring themselves up to date.
If all servers are performing writes independently, what you're looking for is asynchronous multi-master replication. The Postgres docs mention Bucardo and rubyrep as options for accomplishing this. According to the docs, both are limited to master-to-master replication (or master to multiple slaves), but Bucardo supposedly has true multi-master replication planned for version 5.0, and rubyrep mentions a method for keeping multiple servers synchronized.
(I have servers using PostgreSQL's log shipping and streaming replication features, but I have no direct experience with Bucardo or rubyrep.)