RDS Postgres supports up to 5 read replicas, but when I create a replica, it is created as a single instance, not as part of a cluster.
I want to use RDS Postgres read replicas as a cluster so that my single application can handle high TPS, with the load shared across multiple replicas.
I know this is possible with Aurora replicas, since Aurora creates a cluster of replicas that has a single endpoint and can scale in or out. But normal RDS Postgres replicas are created as single instances, each with its own endpoint.
Is it possible to make RDS Postgres replicas into a cluster with one endpoint?
Clusters are an Aurora feature, not an RDS one. So make sure you choose Aurora when you create your database in the AWS Console.
@Marin is correct.
RDS does not provide automatic load balancing between running reader instances; you have to manage load balancing across replica instances yourself.
In Aurora, by contrast, there is automatic load balancing as well as auto scaling across the reader instances.
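RDS itself doesn't give you a shared reader endpoint, but a common do-it-yourself pattern is a weighted DNS record in front of the replica endpoints. A minimal sketch with the AWS CLI, where the hosted zone ID, record name, and replica endpoint are all placeholders:

```bash
# List the endpoints of the read replicas of a source instance
# ("mydb" is a placeholder identifier)
aws rds describe-db-instances \
  --query "DBInstances[?ReadReplicaSourceDBInstanceIdentifier=='mydb'].Endpoint.Address"

# Spread reads across replicas with a weighted Route 53 CNAME;
# repeat with a different SetIdentifier/Value for each replica.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "reader.example.internal",
        "Type": "CNAME",
        "TTL": 5,
        "SetIdentifier": "replica-1",
        "Weight": 50,
        "ResourceRecords": [{"Value": "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com"}]
      }
    }]
  }'
```

DNS-based balancing is coarse (per-connection, and subject to client caching), so a pooler or proxy such as PgBouncer or HAProxy in front of the replicas is the other common option.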
Deploy a MongoDB database with a single master and two read replicas in a Kubernetes cluster of at least 3 worker nodes spread across different availability zones.
Points to keep in mind while deploying the DB:
Each DB replica should be deployed on a separate worker node in a different availability zone, for high availability (see the sketch after this list).
Autoscale the read replicas if needed.
Data should be persistent.
Run the containers in non-privileged mode if possible.
Follow best practices as much as you can.
Push the task to a separate branch with a proper README file.
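A minimal sketch of how the zone-spreading, persistence, and non-privileged requirements might look in a StatefulSet. The names, image tag, user ID, and storage size are assumptions, and the headless Service plus the replica-set initiation (rs.initiate()) are omitted:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo          # requires a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: no two mongo pods may land in the same zone
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: mongo
            topologyKey: topology.kubernetes.io/zone
      securityContext:        # non-privileged: run as the mongodb user
        runAsNonRoot: true
        runAsUser: 999
        fsGroup: 999
      containers:
      - name: mongo
        image: mongo:6.0
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:       # one persistent volume per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF
```

With the hard anti-affinity rule, the scheduler can only place the three pods in three different zones, which also forces them onto separate worker nodes.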
We have a mongo instance hosted on Kubernetes as a StatefulSet.
It's around 3.5 TB in size, with persistent volumes attached.
We are looking for a way to reduce backup time. What's the best way to back up and restore a MongoDB instance to and from AWS S3?
I've looked at the physical and logical backup options using PBM (Percona Backup for MongoDB), but I'm not sure whether they're suitable for instances deployed as StatefulSets in k8s, since those are TBs in size.
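For context on the PBM route: pointing it at S3 is just a storage config, and a physical backup copies data files rather than dumping documents, which is usually what makes multi-TB backups tractable. A sketch, assuming the bucket and region below and that S3 credentials come from the environment or an IAM role:

```bash
# Minimal PBM storage config pointing at S3
cat > pbm-s3.yaml <<'EOF'
storage:
  type: s3
  s3:
    region: us-east-1
    bucket: my-mongo-backups
EOF

pbm config --file pbm-s3.yaml

# Physical backup: copies data files instead of dumping documents
pbm backup --type=physical

# Restore from a backup listed by `pbm list`
pbm list
pbm restore <backup_name>
```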
I wanted to deploy PostgreSQL as the database in my Kubernetes cluster. As of now, I've followed this tutorial.
From reading the whole thing, I understood that we claim static storage before starting PostgreSQL so that the data survives if the pod fails, and that we can do replication by pointing to the same storage space to get our data back.
What happens if we use two worker nodes and the pod containing the database migrates to another node? I don't think local storage will work then.
A hostPath volume is not recommended for production use because of its node-local nature: if the pod is rescheduled to another node, the storage does not move with it, and if the node is lost, the data is lost too.
For durable storage, use an external block or file storage system mounted on the nodes through a supported CSI driver.
For HA Postgres, I suggest you explore the Postgres Operator, which delivers easy-to-run, highly available PostgreSQL clusters on Kubernetes (K8s), powered by Patroni. It is configured only through Postgres manifests (CRDs), which eases integration into automated CI/CD pipelines with no direct access to the Kubernetes API and promotes infrastructure as code over manual operations.
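For a flavor of what that looks like, here is a minimal cluster manifest in the style of the Zalando Postgres Operator; the team, user, and database names, the Postgres version, and the volume size are assumptions modeled on its examples:

```bash
kubectl apply -f - <<'EOF'
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 2      # one primary + one Patroni-managed replica
  volume:
    size: 10Gi              # backed by your CSI storage class
  users:
    app_user: []            # role created inside the cluster
  databases:
    app_db: app_user        # database owned by app_user
  postgresql:
    version: "15"
EOF
```

The operator and Patroni then handle leader election and failover, so a rescheduled or lost node promotes a replica instead of losing the database.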
I created an RDS Proxy with an existing Aurora PostgreSQL cluster.
But I want to pair the proxy with a specific read replica instance of the cluster. Is that possible?
From what AWS says about RDS Proxy:
The same consideration applies for RDS DB instances in replication configurations. You can associate a proxy only with the writer DB instance, not a read replica.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
This should be possible now, per https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-rds-proxy-adds-read-only-endpoints-for-amazon-aurora-replicas/
Try an RDS Proxy endpoint, which lets you make use of read replicas:
You can create and connect to read-only endpoints called reader endpoints when you use RDS Proxy with Aurora clusters. These reader endpoints help to improve the read scalability of your query-intensive applications. Reader endpoints also help to improve the availability of your connections if a reader DB instance in your cluster becomes unavailable.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html#rds-proxy-endpoints
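A sketch of creating such a reader endpoint with the AWS CLI, where the proxy name, endpoint name, and subnet IDs are placeholders. Note that a reader endpoint targets the cluster's reader tier as a whole, not one specific replica:

```bash
aws rds create-db-proxy-endpoint \
  --db-proxy-name my-proxy \
  --db-proxy-endpoint-name my-proxy-reader \
  --target-role READ_ONLY \
  --vpc-subnet-ids subnet-0aaa1111 subnet-0bbb2222
```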
Is it possible to increase the number of cores of a running GCP Cloud SQL instance in an HA configuration with zero downtime?
I created a new Cloud SQL PostgreSQL instance and enabled HA at creation time. Later, I modified it to increase the number of cores, and I got a message warning about downtime.
So I really think you will have downtime even with HA, since failover is only triggered when the instance becomes unhealthy, as mentioned here.
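For reference, the resize itself is a single patch; changing CPU or memory recreates the underlying VM, so expect a brief outage even on an HA instance (`my-instance` and the sizes are placeholders):

```bash
gcloud sql instances patch my-instance \
  --cpu=8 \
  --memory=32GiB
```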