I have a partitioned table in one RDS PostgreSQL instance (version 10) that I want to replicate to another RDS PostgreSQL instance (also version 10).
I'd like to know if this is possible and, if so, which method I should use.
The data should be replicated in real time with very low latency. Partitions on the master are added and removed on a daily basis.
The motivation for the replication is to reduce load on the master DB.
Thanks!
Related
I have several Postgres databases which need to be replicated as-is to a single AWS Redshift cluster.
We currently have DMS set up for this. However, we keep running into issues such as the source database filling up, problems with large columns and, most importantly, the DMS issue where new columns with defaults added on the Postgres databases are not replicated during ongoing replication.
So, are there any other ways that we can set up this ongoing replication?
We have a replication setup from AWS RDS PostgreSQL to Kafka. The replication slot's restart_lsn is not moving and WALs keep piling up.
I removed all the Kafka replication and tried using logical replication and AWS DMS on the same PostgreSQL instance, but that does not release its position in the WAL either, even though the changes are being replicated to the target. Why are the replication slots holding on to these WALs?
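One way to diagnose this (a minimal sketch, assuming a PostgreSQL 10+ instance and a role allowed to read the catalog and drop slots): a slot whose consumer never confirms a flush position pins WAL indefinitely, whether or not data is flowing elsewhere, so pg_replication_slots usually shows which slot is responsible.

    -- How much WAL each slot is forcing the server to retain
    SELECT slot_name,
           slot_type,
           active,
           restart_lsn,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
    FROM pg_replication_slots
    ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;

    -- Dropping a slot that no consumer will ever use again releases its WAL
    -- ('stale_slot_name' is a placeholder for the slot identified above)
    SELECT pg_drop_replication_slot('stale_slot_name');

A slot is only advanced when its consumer acknowledges the LSNs it has processed, so an orphaned or paused task keeps holding WAL even while other consumers replicate fine.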
I am getting the below error while using AWS Database Migration Service (DMS).
Source - a single Postgres RDS instance containing multiple databases (80 GB)
Target - a single Postgres RDS instance where each source database becomes a schema in the same database
Number of tables - around 200 in total across all databases
Replication instance - t2.medium
I created four tasks to replicate four databases from the source into four different schemas in the target, but the fifth task, for the fifth database, fails with the error below:
ERROR: all replication slots are in use; Error while executing the query
How can I increase the number of replication slots so that DMS can run 10 tasks together?
Found the answer: the maximum number of replication slots is controlled by the max_replication_slots parameter, which can be raised in the Postgres RDS instance's parameter group. That should solve my issue.
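For reference, a minimal sketch of checking the limit and the current usage from SQL; on RDS the parameter itself is static, so it has to be raised in the instance's DB parameter group and only takes effect after a reboot (ALTER SYSTEM is not available on RDS):

    -- Limit currently configured on the instance
    SHOW max_replication_slots;

    -- Slots already allocated (each DMS task doing ongoing replication holds one)
    SELECT slot_name, slot_type, active
    FROM pg_replication_slots;

So for 10 concurrent DMS tasks the parameter needs to be at least 10, plus headroom for any other logical consumers.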
I am considering using AWS Aurora; however, I am concerned about being locked into AWS indefinitely. So I am wondering how difficult it would be to transfer data from Aurora to my own Postgres database.
Thanks!
This is a very valid concern. Firstly, there is no seamless migration path in that direction like there is from Postgres to Aurora. The following needs to be considered:
1. How to do it: you will have to take a dump of your Aurora DB and then import it into Postgres (a sketch follows below).
2. Because of 1 above, you cannot have concurrent CRUD operations running on your Aurora cluster during the migration. You therefore need to shut down all products connecting to Aurora until the migration to Postgres completes, so there will be downtime.
3. Because of 2, and depending on the size of your DB, this can take a few minutes (a few GB of data) or many hours for a huge DB.
Hence, you need to consider how much data you have and how much downtime you can live with if you want to migrate back to Postgres.
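A minimal sketch of step 1, assuming Aurora PostgreSQL (so the standard Postgres client tools work against it); the host names, database name and user are placeholders:

    # Dump from the Aurora cluster endpoint into a custom-format archive
    pg_dump -h my-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com \
            -U app_user -d app_db -Fc -f app_db.dump

    # Restore into the self-managed Postgres server
    pg_restore -h my-own-postgres.example.com -U app_user -d app_db \
               --no-owner --clean --if-exists app_db.dump

The custom format (-Fc) also lets pg_restore run with --jobs for a parallel restore on large databases, which shortens the downtime window.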
Currently I have one Postgres instance which is starting to receive too much load, and I want to create a cluster of two Postgres nodes.
From reading the documentation for Postgres and pgpool, it seems like I can only write to a master and read from a slave, or run parallel queries.
What I'm looking for is simple replication of a database, but with the master/slave role depending on which table is being updated. Is this possible? Am I missing it somewhere in the documentation?
e.g.
an update to users would be executed on server1 and replicated to server2
an update to big_table would be executed on server2 and replicated back to server1
What you are describing is essentially a limited form of master/master replication, with each server acting as master for a different set of tables. Core PostgreSQL does not provide full multi-master out of the box, but the logical replication added in version 10 (publications and subscriptions, no PgPool required) covers exactly this per-table case. Note that it is an eventually consistent architecture, so your application should be aware of possible temporary differences between the two servers.
See the PostgreSQL documentation on logical replication for more details and setup instructions.
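A minimal sketch of the per-table routing from the question, using the publications/subscriptions built into PostgreSQL 10; the connection strings, user and password are placeholders, and the application is still responsible for sending writes for each table to its designated server:

    -- On server1: publish the tables that server1 owns
    CREATE PUBLICATION pub_server1 FOR TABLE users;

    -- On server2: subscribe to server1's tables
    CREATE SUBSCRIPTION sub_from_server1
        CONNECTION 'host=server1 dbname=app user=repl password=secret'
        PUBLICATION pub_server1;

    -- On server2: publish the tables that server2 owns
    CREATE PUBLICATION pub_server2 FOR TABLE big_table;

    -- On server1: subscribe to server2's tables
    CREATE SUBSCRIPTION sub_from_server2
        CONNECTION 'host=server2 dbname=app user=repl password=secret'
        PUBLICATION pub_server2;

Because each table is only ever written on one side, this setup avoids the write conflicts a true multi-master system would have to resolve.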