What is the replication method used for the GCP Cloud SQL read replica for PostgreSQL?

Based on the descriptions, it seems that write-ahead log (WAL) shipping is used for PostgreSQL read replicas.
Is that correct? I have tables without primary keys, yet when I spin up a read replica I have no problems, so I wonder what is being used behind the scenes.

Cloud SQL "native" PostgreSQL replicas (i.e., those you create via the Cloud Console, gcloud, or the API) use PostgreSQL's built-in streaming physical replication, which ships WAL segments from the primary to the replica(s).
The primary-key requirement is a constraint of logical replication, e.g., when using an extension like pglogical. You can configure your own replication relationships with pglogical (as well as PostgreSQL's built-in logical decoding on versions that support it) in Cloud SQL, but this isn't the same as the native replication offering, which is fully managed by Google.
But to answer your question directly: Cloud SQL's native read replicas, like the one you referred to, use WAL shipping (physical streaming replication), as you suspected. That is why your tables replicate just fine without primary keys.

SELECT * FROM pg_replication_slots;
will show the type (physical or logical) of each replication slot in use.
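If you only want the relevant columns, a narrower query against the same standard view looks like this:

-- slot_type is 'physical' for streaming/WAL replication, 'logical' for logical decoding
SELECT slot_name, slot_type, active
FROM pg_replication_slots;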

Related

Multi-cloud PostgreSQL replication

I have an Azure-managed PostgreSQL database.
I want to create a logical replica of it in GCP (Google-managed, if possible).
At Azure, I've set the Azure replication support to Logical. However, this just seems to allow me to create replicas inside Azure. What I want is to create a replica in GCP.
If this were not Azure-managed but self-managed, I would be able to create a tunnel from Azure to GCP and then do WAL-based replication.
One might wonder: why? Because I don't want to be locked in with one vendor.
If that cross-cloud replication is not possible, what's the easiest way to pull the entire database off (possibly not just the data with pg_dump, but all its internals too)?
While this question is Azure -> GCP, it seems other alternatives like GCP -> AWS or other vendors are also not supported. Or what am I missing?
Cross-cloud replication from an Azure source PostgreSQL to a GCP Cloud SQL destination through conventional native logical replication is possible, and I've tested that it works. I'm sure it would work for a self-managed database too.
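For reference, a minimal sketch of such a native logical replication setup (the host, database, role, and password below are placeholders; the source must have wal_level = logical):

-- On the Azure source (publisher)
CREATE PUBLICATION cross_cloud_pub FOR ALL TABLES;

-- On the GCP Cloud SQL destination (subscriber)
CREATE SUBSCRIPTION cross_cloud_sub
    CONNECTION 'host=azure-host.example.com port=5432 dbname=appdb user=repl_user password=secret'
    PUBLICATION cross_cloud_pub;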

Do replication slots of Postgres get duplicated in cross region replication?

I have a PostgreSQL DB on Amazon RDS. I need replication available in a different AWS Region for high availability. I read the Postgres docs here. However, I'm not sure whether the replication slots are also replicated (along with the LSNs).
Can someone shed some light on this? Also, if the replication slots are not duplicated on the RDS replica (in the different region), how do I manage a region failure?
In PostgreSQL, replication slots are not replicated. You can, however, create replication slots on standby servers, if you want to use cascading replication.
There is no need to replicate a replication slot.
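For the cascading case, the slot is simply created on the standby itself; a minimal example (the slot name is an arbitrary placeholder):

-- Run on the standby that will feed the cascaded replica
SELECT pg_create_physical_replication_slot('cascade_slot');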

How does RDS replicate a Postgres database across multiple availability zones

Is there some kind of native Postgres tool they use, or is it a custom one? Are the replicas always in sync or do they drift apart from time to time?
With Multi-AZ, RDS replication is synchronous. And since AWS likes to be in full control of its software, it's most likely a customized replication mechanism (but I couldn't tell you for sure).
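On a self-managed PostgreSQL primary you could check this in the standard statistics view; whether RDS exposes its internal standby there is not guaranteed:

-- sync_state is 'sync' for synchronous standbys, 'async' otherwise
SELECT application_name, state, sync_state
FROM pg_stat_replication;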

How to setup cross region replica of AWS RDS for PostgreSQL

I have an RDS for PostgreSQL instance set up in ASIA and would like to have a read copy in the US.
But unfortunately I just found from the official site that only RDS for MySQL has cross-region replicas, not RDS for PostgreSQL.
And I saw this page introduces other ways to migrate data into and out of RDS for PostgreSQL.
Short of buying an EC2 instance and installing PostgreSQL myself in the US, is there any way to synchronize data from the ASIA RDS to a US RDS?
It all depends on the purpose of your replication. Is it to provide a local data source and avoid network latencies?
Assuming that your goal is to have cross-region replication, you have a couple of options.
Custom EC2 Instances
You can create your own EC2 instances and install PostgreSQL so you can customize replication behavior.
I've documented configuring master-slave replication with PostgreSQL on my blog: http://thedulinreport.com/2015/01/31/configuring-master-slave-replication-with-postgresql/
Of course, you lose some of the benefits of AWS RDS, namely automated multi-AZ redundancy, etc., and now all of a sudden you have to be responsible for maintaining your configuration. This is far from perfect.
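If you go this route, the first SQL-level step is typically creating a dedicated replication role on the master (the role name and password here are placeholders; the rest of the setup lives in postgresql.conf and pg_hba.conf):

-- On the master, create a role the standby will connect as
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';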
Two-Phase Commit
An alternative option is to build replication into your application. One approach is to use a database driver that can do this, or to implement your own two-phase commit. If you are using Java, some ideas are described here: JDBC - Connect Multiple Databases
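As a rough illustration, PostgreSQL exposes two-phase commit primitives that such an application-level scheme could build on (the table and transaction identifier below are made up; max_prepared_transactions must be greater than zero on both servers):

-- Run the same write against each regional database, then prepare it
BEGIN;
INSERT INTO orders (id, amount) VALUES (42, 100.00);
PREPARE TRANSACTION 'order_42';  -- phase 1: prepare on every participant

-- Once all participants have prepared successfully:
COMMIT PREPARED 'order_42';      -- phase 2: commit everywhere

-- If any participant failed to prepare:
-- ROLLBACK PREPARED 'order_42';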
Use SQS to decouple database writes
Ok, so this one is the one I would personally prefer. For all of your database writes you should use SQS and have background writer processes that take messages off the queue.
You will need to have a writer in the Asia region and a writer in the US region. To publish on SQS across regions, you can use an SNS configuration that publishes messages onto multiple queues: http://docs.aws.amazon.com/sns/latest/dg/SendMessageToSQS.html
Of course, unlike a two-phase commit, this approach is subject to bugs, and it is possible for your US database to get out of sync. You will need to implement a reconciliation process -- a simple one can be a pg_dump from the Asian database and a pg_restore into the US one on a weekly basis, for instance. Another approach is something like Cassandra's read repair: for every 10 reads out of your US database, spin up a background process to run the same query against the Asian database, and if the results differ, kick off a process to replay some messages.
This approach is common, actually, and I've seen it used on Wall St.
So, pick your battle: either you create your own EC2 instances and take ownership of configuration and devops (yuck), implement a two-phase commit that guarantees consistency, or relax consistency requirements and use SQS and asynchronous writers.
This is now directly supported by RDS.
Example of creating a cross-region replica using the CLI (note that the source instance in the other region is referenced by its ARN):
aws rds create-db-instance-read-replica \
--db-instance-identifier DBInstanceIdentifier \
--region us-west-2 \
--source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:my-postgres-instance

Load Balancing and Failover for Read-Only PostgreSQL Database

Scenario
Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.
Issue
To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must auto-fail over to currently running instances if an instance should go down. Auto-discovery of newly running instances is desirable but not required.
Research
I have reviewed the official PostgreSQL documentation on this issue. However, it focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, Pgpool-II.
Question
What are your real-world experiences with Pgpool-II? What other simple and reliable alternatives are available?
PostgreSQL's wiki also lists clustering solutions, and the page on Replication, Clustering, and Connection Pooling has a table showing which solutions are suitable for load balancing.
I'm looking forward to PostgreSQL 9.0's combination of Hot Standby and Streaming Replication.
Have you looked at SQL Relay?
The standard solution for something like this is to look at Slony, Londiste or Bucardo. They all provide async replication to many slaves, where the slaves are read-only.
You then implement the load balancing independently of this -- on the TCP layer with something like HAProxy. Such a solution will be able to fail over read connections (though you'll still lose transaction visibility on a failover and have to start a new transaction on the new slave -- but that's fine for most people).
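A load-balancer health check can also tell a read-only standby apart from the master with a trivial query (a standard PostgreSQL function; wiring it into the TCP balancer's probe is up to you):

-- Returns true on a (read-only) standby, false on the master
SELECT pg_is_in_recovery();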
Then all you have left is failover of the master role. There are supported ways of doing it on all these systems. None of them are automatic by default (because automatic failover of a database master role is really dangerous - consider the situation you are in once you've got split brain), but they can be automated easily if the requirement needs this for the master as well.