AWS RDS Postgres: WAL_LEVEL not set to 'logical' after changing rds.logical_replication = 1

I am in the process of setting up Debezium to work with AWS Aurora Postgres (postgres version 12.6).
For Debezium to work, the WAL (write-ahead log) level must be set to 'logical' and not 'replica'.
On AWS, this requires a DBA to set the rds.logical_replication parameter in the parameter group to 1.
The above was done. The database instance was restarted.
To verify that the WAL level was changed to 'logical', I ran the following query:
SHOW wal_level;
However, after running this query in Postgres against the target database, the result still showed replica.
I also reviewed the log events in the AWS Management Console.
Would anyone have an idea why this might be? In another environment we were able to successfully set rds.logical_replication to 1, and after a database restart the WAL level was set to logical. But in our main environment this is not the case, even though the parameter groups of the two environments are identical.
Any help/advice is appreciated. Thanks.

OK, after contacting AWS Support I got the information that the parameter rds.logical_replication=1 is only active on the instance that has the writer role (open for read-write). When you set this parameter, you have to use the writer instance for logical replication. You can connect to the writer instance either via the cluster endpoint (recommended) or the instance endpoint.
I had been checking the read-only instance with SHOW wal_level; but in fact the parameter was in effect on the read/write instance!
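A quick way to rule this out is to confirm which instance you are connected to before checking the WAL level. A minimal sketch, run from psql: pg_is_in_recovery() returns true on a read-only replica and false on the writer:

-- false = writer (read/write), true = reader (read-only replica)
SELECT pg_is_in_recovery();

-- Only the writer will report the updated value:
SHOW wal_level;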

Related

rds.logical_replication didn't change to 1 on aurora serverless AWS

I need to change the rds.logical_replication parameter to 1 in order to get wal_level to logical.
I already changed the rds.logical_replication parameter, but when I run SHOW rds.logical_replication on my writer instance it is still off. I already tried applying the DB Cluster Parameter Group to the regional cluster and only to the writer instance.
I have 1 writer instance and 1 reader instance but they are serverless.
Is the problem that Aurora is serverless?
How can I change the parameter correctly so that I see the change on the writer instance?
I tried changing the rds.logical_replication to 1 in the DB Cluster Parameter Group and applied to the Regional Cluster.
I tried changing the rds.logical_replication to 1 in the DB Cluster Parameter Group and applied only to the writer instance.
I tried to make a (DB instance) Parameter Group with the needed values, but rds.logical_replication doesn't exist in that Parameter Group configuration.
Edit: I also rebooted the service.
I expect SHOW rds.logical_replication to return on.
Aurora PostgreSQL–based DB clusters running Aurora Serverless v1 have the following limitations:
The logical replication feature available in Amazon RDS PostgreSQL and Aurora PostgreSQL isn't supported.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html
For Aurora Serverless v2, however, logical replication is supported. Make sure the DB Cluster Parameter Group is applied correctly and assigned at the cluster level. Follow the procedure in the links below to enable rds.logical_replication, then stop and start the cluster so the new values take effect.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Replication.Logical.html#AuroraPostgreSQL.Replication.Logical.Configure
https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-use-logical-replication/
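Once the cluster is back up, you can verify on the writer that the change took effect, and that logical decoding actually works, by creating and dropping a throwaway slot. A minimal sketch, assuming your user has the rds_replication (or rds_superuser) role; test_slot is an arbitrary placeholder name:

SHOW rds.logical_replication;
SHOW wal_level;

-- End-to-end check: this fails unless wal_level is 'logical'
SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');
SELECT pg_drop_replication_slot('test_slot');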

How can I restart an RDS Aurora cluster with 0 downtime?

I am using RDS Aurora PostgreSQL 11.9 and I need to modify a static parameter in the parameter group. AWS says modifying a static parameter requires rebooting the DB instance. I have three instances in the cluster, one of which is the writer. I am looking for a way to reboot the database without any downtime.
1. Deploy my application to use only the writer instance for read/write requests, which makes the two reader instances idle.
2. Reboot the two reader instances.
3. Fail over so that one of the readers becomes the primary (writer) instance.
4. Then reboot the old writer instance.
If I follow the above steps, is it possible to achieve 0 downtime?
The answer is no. There will be downtime during the failover phase.
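You can observe that window yourself by polling the cluster endpoint while the failover runs. A minimal sketch, run repeatedly from psql against the cluster endpoint; connections will briefly fail while the writer role moves:

-- inet_server_addr() shows which instance served the query;
-- pg_is_in_recovery() flips to true if you land on a reader.
SELECT inet_server_addr(), pg_is_in_recovery(), now();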

AWS - Postgres Aurora RDS Cluster & `max_parallel_workers_per_gather`

I have an Aurora cluster on which I'm trying to globally disable parallel queries (so that I can safely lean on SET for configuration parameters to handle multi-tenancy...), but I don't seem to be able to get AWS to cooperate.
I have both modified an existing parameter group, and tried an entirely new parameter group, both times setting max_parallel_workers_per_gather to 0 via the RDS console.
However, once the modification is complete, when I then connect to the database and query this with SHOW max_parallel_workers_per_gather, I find that the value is still set to the default of 2.
Am I doing something wrong? Is there another way to set this parameter globally?
Thanks!
This query should tell you where the setting comes from:
SELECT setting, source, sourcefile, sourceline
FROM pg_settings
WHERE name = 'max_parallel_workers_per_gather';
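If source comes back as default rather than something like configuration file, the parameter group was most likely never applied (check that the cluster actually uses the modified group and that the instances were rebooted). As a possible workaround, assuming per-database or per-role scope is acceptable instead of a truly global setting, you can also pin the value with ALTER; mydb and app_user are placeholder names:

-- Default for all new connections to this database:
ALTER DATABASE mydb SET max_parallel_workers_per_gather = 0;

-- Default for all sessions of this role:
ALTER ROLE app_user SET max_parallel_workers_per_gather = 0;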

RDS Postgres Logical Replication into EC2 - only rds_superusers can query or manipulate replication origins

We are trying to replicate from AWS RDS pg11 (pglogical 2.2.1) to pg12.
AWS RDS pg12 only has pglogical 2.3.0, which is not compatible with 2.2.1, and there is no way to downgrade (already tried). The replication starts and creates the schemas in the target, but then stops due to some errors (no need to cover them here).
As a workaround we want to replicate to an EC2 instance with pg12 and pglogical 2.3.1 (compatible with 2.2.1, and it should work well).
Both users are set up the same way in both databases, and the nodes are OK. The replication fails with:
ERROR: only rds_superusers can query or manipulate replication origins.
And I have no idea how to debug this issue.
As already mentioned by gsteiner: the user was not explicitly granted the rds_superuser role. Even though I was using a role that had initially been assigned by the AWS engine, it looks like it "dropped out" of rds_superuser at some point, and I had to reassign it.
When listing roles you don't readily see whether you belong to rds_superuser or not. So if something like this happens, grant rds_superuser (again) to be sure this is resolved.
The best way to ensure this works as intended is to create a new role that is a member of rds_superuser right away.
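A minimal sketch of those checks and fixes; my_repl_user is a placeholder for your replication user:

-- Is the user (directly or indirectly) a member of rds_superuser?
SELECT pg_has_role('my_repl_user', 'rds_superuser', 'member');

-- If not, grant it (again):
GRANT rds_superuser TO my_repl_user;

-- Or create a fresh replication user that is a member from the start:
CREATE ROLE my_repl_user2 LOGIN PASSWORD 'change-me' IN ROLE rds_superuser;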

AWS DMS "Load complete, replication ongoing" not working MongoDB to DocDB

I am trying to make a PoC for MongoDB to DocDB migration with DMS.
I've set up a MongoDB instance with some dummy data and an empty DocDB. Source and Target endpoints are also set in DMS and both of them are connecting successfully to my databases.
When I create a migration task in DMS everything seems to be working fine. All existing data is successfully replicated from the MongoDB instance to DocDB and the migration task state is at "Load complete, replication ongoing".
At this point I tried creating new entries in existing collections, as well as creating new empty collections in MongoDB, but nothing happens in DocDB. If I understand correctly, the replication should be (near) real-time and anything I create should be replicated almost instantly?
Also, there is no indication of errors or warnings whatsoever... I don't suppose it's a connectivity issue to the databases, since the initial data is replicated fine.
Also the users I am using for the migration have admin privileges in both databases.
Does anyone have any suggestions?
#PPetkov - could you check the following?
1. Check if right privileges were assigned to the user in the MongoDB endpoint according to https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html.
2. Check if a replica set (required for capturing changes) was appropriately configured in the MongoDB instance.
3. Once done, try to search for "]E:" or "]W:" in the CloudWatch logs to understand if there are any noticeable failures/warnings.