What is the GCP Cloud SQL HA replication solution used for PostgreSQL?

Based on the descriptions, it seems that file-system (block-device) replication is used for highly available PostgreSQL. However, this is not officially confirmed.

I mean the automatic failover if a zone goes down in a region, with an automatic switch to another zone in the same region, as documented here.
The trick is the use of a regional persistent disk. It is a low-layer disk synchronisation guaranteed by Google Cloud, not a PostgreSQL feature.
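For completeness, here is a minimal sketch of requesting this configuration programmatically, assuming the google-api-python-client Discovery client and Application Default Credentials; the project, instance, and tier names are placeholders. The HA setup is requested simply by setting availabilityType to REGIONAL.

```python
# Minimal sketch: create a Cloud SQL for PostgreSQL instance with the
# regional (HA) configuration via the Cloud SQL Admin API.
# Assumes `pip install google-api-python-client` and Application
# Default Credentials; all names below are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

body = {
    "name": "pg-ha-example",            # hypothetical instance name
    "region": "europe-west1",
    "databaseVersion": "POSTGRES_13",
    "settings": {
        "tier": "db-custom-2-7680",
        # REGIONAL places a standby in another zone of the same region,
        # kept in sync at the disk layer; ZONAL is a single-zone instance.
        "availabilityType": "REGIONAL",
    },
}

operation = service.instances().insert(project="my-project", body=body).execute()
print(operation["name"])  # id of the long-running create operation
```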

Related

Google CloudSQL for PostgreSQL HA cluster downtime due to maintenance without failover

This morning we experienced a downtime of a little over 5 minutes on our Google CloudSQL for PostgreSQL Highly Available (HA) cluster. This was during the maintenance window that Google requires you to provide.
Google is clear on why they need the maintenance window (see here). What struck us was the duration of the downtime and that no failover was performed.
The documentation is clear that maintenance is performed on an instance (and not on the cluster as a whole). So why was the failover not performed as documented here? It could take up to 60 seconds, they say, but it took a little over 5 minutes.
And then again: it is scheduled maintenance. Automated failover should not have to take place if you can anticipate it.
Did we misinterpret the documentation, do we have unrealistic expectations or did we misconfigure our application?
As described in the document you are referring to, failover is intended only for the event of an instance or zone failure. In other words, Cloud SQL will automatically switch to serving data from the standby instance only if the instance fails (becomes unresponsive), or if an issue in the zone where the MySQL/PostgreSQL instance is located makes the instance unreachable.
Also, the same document indicates, in the requirements section, that the primary instance must be in a normal operating state.
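Note that, besides the automatic path, a failover can also be initiated manually, for example to exercise the standby ahead of a maintenance window. A hedged sketch via the Admin API, with placeholder names:

```python
# Sketch: manually trigger an HA failover for a Cloud SQL instance via
# the Cloud SQL Admin API. Assumes google-api-python-client and
# Application Default Credentials; project/instance names are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")
project, instance = "my-project", "pg-ha-example"

# The failover request must carry the instance's current settingsVersion.
info = service.instances().get(project=project, instance=instance).execute()
body = {
    "failoverContext": {
        "kind": "sql#failoverContext",
        "settingsVersion": info["settings"]["settingsVersion"],
    }
}

op = service.instances().failover(project=project, instance=instance, body=body).execute()
print(op["status"])  # the operation completes asynchronously
```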

How to migrate PostgreSQL from one region to another in Google Cloud Platform

In my GCP project, App Engine is located in the Central US region and my PostgreSQL instance is located in the East US region.
Can anyone suggest the best way to resolve the connection and latency issues in this setup?
If I understand correctly, there is high latency between your App Engine app and your PostgreSQL instance. You can avoid the high latency by enabling geo-replication for your instances.
You could also migrate your instances closer to each other, but then overall latency for users across the world would increase significantly.
So my suggestion would be to go for geo-replication.
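In Cloud SQL terms, geo-replication presumably means a cross-region read replica, which is created by inserting a new instance whose masterInstanceName points at the primary. A hedged sketch, with placeholder names and regions:

```python
# Sketch: create a cross-region read replica near App Engine via the
# Cloud SQL Admin API. Assumes google-api-python-client and Application
# Default Credentials; all names and regions below are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

replica = {
    "name": "pg-replica-central",        # hypothetical replica name
    "region": "us-central1",             # same region as App Engine
    "masterInstanceName": "pg-primary",  # the existing primary in the east
    "settings": {"tier": "db-custom-2-7680"},
}

op = service.instances().insert(project="my-project", body=replica).execute()
print(op["operationType"])  # a replica-creation operation
```

The app would then point its read traffic at the replica while writes continue to go to the primary.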

Multi region high availability on GKE - what to do with the PostgreSQL database?

Google has this cool tool kubemci - a command-line tool to configure L7 load balancers using multiple Kubernetes clusters - with which you can basically have an HA multi-region Kubernetes setup. Which is kind of cool.
But let's say we have a basic architecture like this:
Front end is implemented as SPA and uses json API to talk to backend
Backend is a set of microservices which use PostgreSQL as a DB storage engine.
So I can create two Kubernetes clusters on GKE, put both the backend and the frontend on them (let's say in London and Belgium), and all looks fine.
Until we think about the database. PostgreSQL is single-master only, so it must be placed in only one of the regions. And if the backend in the London region starts to talk to PostgreSQL in the Belgium region, performance will be really poor, considering the 6 ms+ latency between those regions.
So that whole HA setup kind of doesn't make any sense? Or am I missing something? One option to slightly mitigate the issue would be to have a read-only replica in the "slave" region and direct read-only queries there (is that even possible with PostgreSQL?).
This is a classic architecture scenario that has no easy solution. Making data available in multiple regions is a challenging problem that major companies spend a lot of time and money to solve.
PostgreSQL does not natively support multi-master writes. Your idea of a replica located in the other region, with logic in your app to read from and write to the correct database, would work (a sketch of this routing appears after this list of options). This gives you fast local reads, but slower writes from one region. It also means more complicated code in your app and more work to handle failover of the master. Bandwidth and costs can also be problems with heavy updates.
Use 3rd-party solutions for multi-master Postgres (like Postgres-BDR by 2ndQuadrant) to offload the work to the database layer. This can get expensive, and your application still has to manage data conflicts when two regions overwrite the same data at the same time.
Choose another database that supports multi-regional replication with multi-master writes. Cassandra (or ScyllaDB) is a good choice, or hosted options like Google Spanner, Azure CosmosDB, AWS DynamoDB Global Tables, and others. An interesting option is CockroachDB which supports the PostgreSQL protocol but is a scalable relational database and supports multiple regions.
If none of these options work, you'll have to create your own replication system. Some companies do this with an event-sourced / CQRS architecture where every write is a message sent to a central log and then applied in every location. This is more work but provides the most flexibility. At that point you're also basically building your own database replication system.
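As promised above, here is a minimal sketch of the read/write split from the first option, assuming psycopg2, one primary, and one read-only replica near the app; the DSNs and table are made up. And to answer the parenthetical in the question: yes, a PostgreSQL streaming-replication standby in hot-standby mode can serve read-only queries.

```python
# Sketch of per-region read/write routing, assuming psycopg2 and one
# read-only replica near the app; DSNs and the table are placeholders.
import psycopg2

PRIMARY_DSN = "host=pg-primary.belgium dbname=app user=app"        # all writes
LOCAL_REPLICA_DSN = "host=pg-replica.london dbname=app user=app"   # local reads

primary = psycopg2.connect(PRIMARY_DSN)
replica = psycopg2.connect(LOCAL_REPLICA_DSN)

def run(sql, params=(), readonly=False):
    # Reads hit the nearby replica (fast, but possibly slightly stale);
    # writes always pay the cross-region hop to the single primary.
    conn = replica if readonly else primary
    with conn, conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall() if readonly else None

run("UPDATE accounts SET balance = balance - %s WHERE id = %s", (10, 1))
rows = run("SELECT id, balance FROM accounts", readonly=True)
```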
If you have multi-cluster ingress set up on two clusters in different regions, the multi-cluster ingress will only send traffic to the region closest to the user.
If the closest region is down, this is when traffic will be routed to the cluster in the other region.
So, using the example you have provided, if traffic is being sent to the backend by a user who is closer to London, that user's traffic will always be sent to London as long as the region is up and running.
As for dealing with latency, you will have to live with it in this case, as you cannot create a read replica within another region.
The benefit of this functionality (multi-cluster ingress) is that if one region goes down, then you have another region to route the traffic to.

Cross-region replication of Google Cloud PostgreSQL instances

From my understanding of their documentation, the Cloud SQL for PostgreSQL service, being beta, does not yet support external replicas, which is what I thought I could use if I wanted a database to be replicated cross-region.
This could very well end up being a blocker for our setup, since we need the data in separate regions.
I thought I'd investigate all the streaming replication options out there, and perhaps find one that does not require touching the host folder or custom configuration, which in my mind would end up looking like:
master => streaming_replication_app => slave
But from what I've researched so far, no real streaming replication option is possible that is non-intrusive.
Can you guys confirm or deny this and point me in the right direction?
I need to decide whether Cloud SQL for PostgreSQL is too limited as a solution or not.
Thanks in advance,
Gabriel
Cross-region replicas are supported in Google Cloud SQL:
https://cloud.google.com/blog/products/databases/introducing-cross-region-replica-for-cloud-sql

PostgreSQL database shared across multiple geographic locations

I'm working on a project where a Postgresql database needs to be shared across several physical locations. Each location has limited connectivity, and may only have access to the outside world once or twice a day. So the database has to be available locally at each location, but must also synchronize with the master database when possible.
I am not yet familiar with replication or clustering. Are these good solutions? Or is there a better way of doing it? I would appreciate some advice on this. :)
NOTE: clashing of primary keys from different locations would not be an issue, this has been taken care of.
If the remote locations require read-only access to the data, you can set up asynchronous replication fairly easily using log shipping, which is a built-in feature of PostgreSQL. In this configuration, the master server drops WAL (write-ahead log) files to a shared location where the remote servers can periodically connect and read the logs to bring themselves up to date.
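To make the shipping side concrete, here is an illustrative sketch. PostgreSQL's archive_command runs an arbitrary program for each finished WAL segment, substituting %p (path) and %f (file name); the script and the shared directory below are hypothetical.

```python
#!/usr/bin/env python3
# Hypothetical archive_command target for WAL log shipping. On the
# master, postgresql.conf would reference it like:
#   archive_mode = on
#   archive_command = 'python3 /opt/archive_wal.py %p %f'
# Remote servers periodically read segments from the shared directory
# (e.g. via a matching restore_command) to bring themselves up to date.
import shutil
import sys
from pathlib import Path

SHARED = Path("/mnt/shared/wal")  # placeholder shared location

def main() -> int:
    wal_path, wal_name = sys.argv[1], sys.argv[2]
    SHARED.mkdir(parents=True, exist_ok=True)
    dest = SHARED / wal_name
    if dest.exists():
        return 1  # never overwrite an archived segment; Postgres will retry
    shutil.copy2(wal_path, dest)
    return 0  # zero exit tells Postgres the segment is safely archived

if __name__ == "__main__":
    sys.exit(main())
```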
If all servers are performing writes independently, what you're looking for is asynchronous multi-master replication. The Postgres docs mention Bucardo and rubyrep as options for accomplishing this. According to the docs, both are limited to master-to-master replication (or master to multiple slaves), but Bucardo supposedly has true multi-master replication planned for version 5.0, and rubyrep mentions a method for keeping multiple servers synchronized.
(I have servers using PostgreSQL's log shipping and streaming replication features, but I have no direct experience with Bucardo or rubyrep.)