Can I initiate Azure PostgreSQL Flexible Server VM or DB failure for testing, without HA enabled?

Is there a way to simulate VM or DB failure if I'm using Azure PostgreSQL Flexible Server without any zone redundancy?
I know there is an on-demand failover (forced failover) if I have zone-redundant HA enabled, but can I test RTO without high availability enabled?
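As far as I know there is no fault-injection API for a non-HA flexible server; the forced on-demand failover (az postgres flexible-server restart --failover Forced) only applies when HA is enabled. Without HA, a plain server restart is the closest approximation: it stops the Postgres process and lets you time how long crash recovery takes. A hedged sketch, assuming the az CLI and psycopg2 are installed; all names and the DSN are placeholders, and the measured window includes the CLI's own control-plane wait:

    # Rough RTO probe for a non-HA server: restart it and time how long it
    # takes until connections succeed again. All names are placeholders.
    import subprocess
    import time

    import psycopg2

    RESOURCE_GROUP = "my-resource-group"   # placeholder
    SERVER = "my-flexible-server"          # placeholder
    DSN = ("host=my-flexible-server.postgres.database.azure.com "
           "dbname=postgres user=myadmin password=mypassword sslmode=require")

    start = time.monotonic()
    subprocess.run(
        ["az", "postgres", "flexible-server", "restart",
         "--resource-group", RESOURCE_GROUP, "--name", SERVER],
        check=True,
    )
    while True:
        try:
            psycopg2.connect(DSN, connect_timeout=3).close()
            break
        except psycopg2.OperationalError:
            time.sleep(1)
    print(f"server accepting connections again after {time.monotonic() - start:.0f}s")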

Related

Is Storage and/or Storage Spaces Direct validation required when setting up a WSFC for a SQL AG with no shared storage?

So, I am in the process of setting up a WSFC to enable the use of Always On Basic Availability Groups for SQL Server 2019. I am using Windows Server 2019 and have enabled Failover Clustering on both server nodes, which are in the same domain. I am not planning to use shared storage in the cluster itself, only a file share on another node (not part of this cluster, but in the same domain) as the witness.
When running the Cluster Validation wizard, I get a "Physical disk {...} does not have the inquiry data (SCSI page 83h VPD descriptor) that is required by failover clustering." failure message.
As the cluster will not rely on any shared storage, can I safely deselect the Storage and Storage Spaces Direct tests during validation and proceed with the setup?
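In general, skipping the storage tests is considered acceptable when the cluster will never use shared storage; validation still has to pass for the components the cluster actually uses. If you would rather script the run than deselect the tests in the wizard, here is a hedged sketch driving PowerShell's Test-Cluster from Python; the node names are placeholders, and I am assuming the -Ignore category names match what your validation report shows:

    # Sketch: run cluster validation with the storage tests excluded,
    # since this cluster uses no shared storage. Node names are placeholders.
    import subprocess

    ps_command = ('Test-Cluster -Node "node1","node2" '
                  '-Ignore "Storage","Storage Spaces Direct"')
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps_command],
                   check=True)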

SQLAlchemy with Aurora Serverless V2 PostgreSQL - many connections

I have an AWS Aurora Serverless v2 database (PostgreSQL) that is being accessed from a compute cluster. The cluster launches a large number of jobs (>1000), and each job independently puts/pulls some data from the database. The Serverless cluster is set up to autoscale from 2 to 32 capacity units as needed.
The code run by each cluster job uses SQLAlchemy (either the ORM or the Core). I am setting up each database connection with a null pool and pessimistic disconnect handling (i.e., pool_pre_ping=True). From my reading of the docs, this should handle disconnects caused by a connection going stale while idle.
The code is also written to access the DB, get the results, close the connection (to avoid idle connections), and then reopen a connection after processing (5-30 minutes). This works well because, by the time processing completes, the new connections are staggered and the DB has scaled up.
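For reference, a minimal sketch of the engine configuration described above (the URL is a placeholder):

    # Engine as described: no client-side pooling (NullPool) plus pessimistic
    # disconnect handling (pool_pre_ping). The URL is a placeholder.
    from sqlalchemy import create_engine, text
    from sqlalchemy.pool import NullPool

    engine = create_engine(
        "postgresql+psycopg2://user:password@aurora-endpoint:5432/mydb",
        poolclass=NullPool,
        pool_pre_ping=True,
    )

    with engine.connect() as conn:
        rows = conn.execute(text("SELECT 1")).fetchall()
    # leaving the block closes the underlying connection, so nothing sits idle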
My logs show the standard all-connection-slots-taken error until the DB scales up enough capacity units:

    psycopg2.OperationalError: FATAL: remaining connection slots are reserved
    for non-replication superuser and rds_superuser connections
Questions:
Should I be configuring the SQLAlchemy connection differently? It feels like an anti-pattern to add a custom retry that grabs a connection while waiting for the DB to scale the number of available units, since this type of capability usually seems to be built into SQLAlchemy (a sketch of such a retry wrapper follows after these questions).
Should I be using an RDS Proxy in front of the database? This also seems like an anti-pattern, adding a proxy in front of an autoscaling DB.
PG version is 10.
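For what it's worth, a minimal sketch of the kind of retry wrapper mentioned in the first question, backing off while the cluster scales; whether this or an RDS Proxy is the better pattern is exactly what is being asked. Connection details are placeholders:

    # Hedged sketch: retry with exponential backoff when the DB has run out
    # of connection slots and is presumably still scaling up.
    import time

    from sqlalchemy import create_engine, text
    from sqlalchemy.exc import OperationalError

    def connect_with_retry(engine, attempts=6, base_delay=2.0):
        for attempt in range(attempts):
            try:
                return engine.connect()
            except OperationalError:
                # "remaining connection slots are reserved ..." lands here
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)

    engine = create_engine("postgresql+psycopg2://user:password@host:5432/db",
                           pool_pre_ping=True)
    with connect_with_retry(engine) as conn:
        conn.execute(text("SELECT 1"))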

GCP - Can I use Compute Engine for Production MySQL DB

Is it alright to use Google Compute Engine virtual machines for MySQL DB?
A db-n1-standard-2 costs around $97 per month for a single Cloud SQL instance, and replication doubles that.
So I was wondering if it's okay to use an n1-standard-2 VM, which costs around $48. The applications will be in a Kubernetes cluster, and the pods would connect to the Compute Engine VM for the DB connection. Would the pods be able to connect to the Compute Engine VM?
Also, is it true that Google doesn't charge a GKE cluster management fee when using a zonal Kubernetes cluster? When I check with the pricing calculator, it shows they don't charge a management fee.
This is entirely up to your needs. If you want to be on call for DB failover and replication management, it will definitely be cheaper to run it yourself. Zalando has a lot of Postgres-on-Kubernetes automation that is very good, but at the end of the day, who do you want waking up at 2 AM if something breaks? I will never run another production SQL database myself as long as I live; it's just always worth the money.
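On the connectivity part of the question: yes. If the GKE cluster and the VM are in the same VPC network (and a firewall rule allows port 3306 from the cluster's pod ranges), pods can reach the VM on its internal IP. A hedged sketch from inside a pod; the IP, credentials, and database name are placeholders:

    # Connecting from a pod to MySQL on a Compute Engine VM over the VPC's
    # internal network. IP, credentials, and database name are placeholders.
    from sqlalchemy import create_engine, text

    engine = create_engine("mysql+pymysql://appuser:secret@10.128.0.5:3306/appdb")
    with engine.connect() as conn:
        print(conn.execute(text("SELECT VERSION()")).scalar())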

Can we monitor AWS RDS PostgreSQL queries and stats using New Relic?

I know that we can monitor infrastructure and OS-level metrics using New Relic's integration with AWS, but how can we monitor queries and DB-level parameters using New Relic?
This feature (pulling RDS Performance Insights data into New Relic) has been requested, but New Relic has not implemented it yet:
https://discuss.newrelic.com/t/add-rds-performance-insights-data-to-aws-integration/60821

Real-time sync between a local Postgres instance and an Azure cloud Postgres instance

I need to set up a real-time sync process between an on-premises PostgreSQL instance and a cloud PostgreSQL instance. Please let me know what options are available to achieve this.
Do I have to use a specific tool, or can it be managed through replication?
Please advise.
Use PgPool
http://www.pgpool.net/mediawiki/index.php/Main_Page
From their web page:
pgpool-II can manage multiple PostgreSQL servers. Using the replication function enables creating a realtime backup on 2 or more physical disks, so that the service can continue without stopping servers in case of a disk failure.
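As an alternative to pgpool, and to answer "can it be managed through replication": native logical replication (PostgreSQL 10+) can stream changes from the on-premises instance to the cloud instance in near real time. A minimal sketch driven from Python, assuming wal_level = logical on the source, matching table definitions on both sides, and network connectivity from the cloud instance back to the source; all hosts, credentials, and object names are placeholders:

    # Hedged sketch: near-real-time one-way sync via native logical replication.
    # All hosts, credentials, and object names are placeholders.
    import psycopg2

    # On the on-premises source: publish the tables to be synced.
    src = psycopg2.connect("host=onprem-host dbname=appdb "
                           "user=repl_admin password=secret")
    src.autocommit = True
    src.cursor().execute("CREATE PUBLICATION cloud_sync FOR ALL TABLES;")

    # On the cloud target: subscribe to that publication.
    dst = psycopg2.connect("host=cloud-host dbname=appdb "
                           "user=cloud_admin password=secret sslmode=require")
    dst.autocommit = True  # CREATE SUBSCRIPTION cannot run in a transaction
    dst.cursor().execute(
        "CREATE SUBSCRIPTION cloud_sync_sub "
        "CONNECTION 'host=onprem-host dbname=appdb "
        "user=repl_admin password=secret' "
        "PUBLICATION cloud_sync;"
    )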