Can we get the Azure MySQL flexible server replication lag in seconds metric through an API? I am not sure if Azure provides an API to get that metric data - azure-rest-api

Is there an API to get the MySQL flexible server read replica's replication lag in seconds metric data? We have an implementation in AWS where we use the AWS API to get replication lag data in order to disable/enable a MySQL replica, and we want to do the same in Azure. I have explored the Azure API documentation but could not find a clear answer on whether this is possible.
Please let me know if it is possible.
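If the metric is exposed through Azure Monitor (the portal shows "Replication Lag In Seconds" for flexible server replicas), it should be retrievable with the generic Azure Monitor metrics API rather than a MySQL-specific endpoint. A minimal sketch with the azure-monitor-query Python SDK, assuming the metric name is replication_lag and using placeholder subscription/resource-group/server names:

    # Sketch: query the replica's "replication_lag" metric via Azure Monitor.
    # pip install azure-identity azure-monitor-query
    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import MetricAggregationType, MetricsQueryClient

    client = MetricsQueryClient(DefaultAzureCredential())

    # Placeholder resource ID of the read replica (not the primary).
    resource_id = (
        "/subscriptions/<sub-id>/resourceGroups/<rg>"
        "/providers/Microsoft.DBforMySQL/flexibleServers/<replica-name>"
    )

    response = client.query_resource(
        resource_id,
        metric_names=["replication_lag"],          # assumed metric name
        timespan=timedelta(minutes=15),            # last 15 minutes
        granularity=timedelta(minutes=1),
        aggregations=[MetricAggregationType.MAXIMUM],
    )

    for metric in response.metrics:
        for series in metric.timeseries:
            for point in series.data:
                print(point.timestamp, point.maximum)

The same data is reachable over raw REST at GET {resourceId}/providers/Microsoft.Insights/metrics?api-version=2018-01-01&metricnames=replication_lag, which is what the SDK wraps, so you can mirror your AWS polling loop and flip the replica on whatever threshold you use today.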

Related

User limitation on PostgreSQL synthetic monitoring using Airflow

I am trying to write synthetic monitoring for my on-prem PostgreSQL service using Airflow. The monitoring should report whether a cluster is available for creating tables, writing and reading data, and deleting tables.
The clusters on my service use SSL certificates for authentication, which means a client is required to provide a suitable client certificate in order to connect to a cluster.
Currently, I have implemented my monitoring by creating a global user which has a certificate with permissions to all the clusters. The user has permissions to create, write and read only on one schema, dedicated to this monitoring. Using Airflow, I connect with this user to each of my PostgreSQL clusters and try to create a table, write to it, read from it, and then delete it. If one of the actions fails, the DAG writes a log describing the reason for the failure.
My main problem with this solution is not being able to limit such a powerful user with access to all of my clusters. If an intruder got hold of the user's client certificate, they would be able to blow up the DB storage by writing huge amounts of data, or overload the cluster with queries until it fails.
I am looking for ideas for limiting this user so that it can act only for its purpose - the simple actions required for this monitoring - and cannot be exploited by an attacker. Alternatively, I would appreciate any suggestions for a different implementation of this monitoring.
I searched for built-in PostgreSQL configurations that would allow me to limit the dedicated monitoring schema or limit the number of queries performed by the user.
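For reference, a hedged sketch of how such a role could be boxed in with built-in PostgreSQL settings, run once per cluster by an administrator. The role name, schema name and limit values (monitor, monitoring, the timeouts) are illustrative, not prescriptive:

    # Sketch: confine the monitoring role to one schema and cap its blast radius.
    import psycopg2

    LOCKDOWN_SQL = """
    -- Strip defaults, then grant only what the checks need.
    REVOKE ALL ON SCHEMA public FROM monitor;
    GRANT CONNECT ON DATABASE mydb TO monitor;
    GRANT USAGE, CREATE ON SCHEMA monitoring TO monitor;

    -- Bound what a leaked certificate can do.
    ALTER ROLE monitor CONNECTION LIMIT 2;                        -- max 2 sessions
    ALTER ROLE monitor SET statement_timeout = '5s';              -- kill slow queries
    ALTER ROLE monitor SET idle_in_transaction_session_timeout = '10s';
    ALTER ROLE monitor SET temp_file_limit = '10MB';              -- bound temp spill
    """

    with psycopg2.connect("host=cluster1 dbname=mydb user=postgres sslmode=verify-full") as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute(LOCKDOWN_SQL)

Note that PostgreSQL has no built-in per-schema disk quota, so the timeouts and connection limit bound query abuse but not slow bulk inserts; a periodic check on pg_total_relation_size for the monitoring schema (or dropping the table after each run, as you already do) covers the rest.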

How do I efficiently migrate MongoDB to Azure Cosmos DB with the help of Azure Databricks?

While searching for a service to migrate our on-premises MongoDB to Azure Cosmos DB with the Mongo API, we came across a service called Azure Databricks. We have a total of 186 GB of data, which we need to migrate to Cosmos DB with as little downtime as possible. How can we improve the data transfer rate? If someone can give some insights into this Spark-based PaaS provided by Azure, it would be very helpful.
Thank you
Have you referred to the article given on our docs page?
In general you can assume the migration workload will consume the entire provisioned throughput, so the throughput provisioned gives an estimate of the migration speed. You could consider increasing the RUs for the duration of the migration and reducing them afterwards.
The migration performance can be adjusted through these configurations (see the sketch below):
Number of workers and cores in the Spark cluster
maxBatchSize
Disabling indexes during data transfer
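The article's approach boils down to a Spark job that reads from the source and writes through the Cosmos DB Mongo API endpoint. A rough PySpark sketch, assuming the MongoDB Spark connector is installed on the Databricks cluster; the URIs, database/collection names and batch size are placeholders to tune against your provisioned RUs:

    # Sketch: copy a collection from on-prem MongoDB into Cosmos DB (Mongo API).
    # `spark` is the session Databricks provides in a notebook.
    source_uri = "mongodb://<on-prem-host>:27017"
    target_uri = ("mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/"
                  "?ssl=true&replicaSet=globaldb&retrywrites=false")

    df = (spark.read
          .format("com.mongodb.spark.sql.DefaultSource")
          .option("uri", source_uri)
          .option("database", "mydb")
          .option("collection", "mycollection")
          .load())

    (df.write
       .format("com.mongodb.spark.sql.DefaultSource")
       .mode("append")
       .option("uri", target_uri)
       .option("database", "mydb")
       .option("collection", "mycollection")
       .option("maxBatchSize", 100)   # smaller batches ease rate limiting (429s)
       .save())

Scaling the number of workers and cores raises read/write parallelism, while maxBatchSize controls how many documents each write request carries; if you see throttling, raise the RUs or lower the batch size, and recreate any non-essential indexes only after the copy finishes.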

Consume events from AWS EventBridge in a self-hosted Kafka cluster outside AWS

We have a SaaS product which publishes its events on AWS EventBridge (a couple of million per day). We would love to consume those events and put them on our self-hosted Kafka cluster. What would be the best method to do this? We were thinking about Lambdas, but they seem expensive for this use case and we don't want to manage too many other components. Does anyone have some good practices?
I was looking for a similar solution, but in my case it is from EventBridge to MSK within an AWS account. At this point it looks like the only option is to use a Lambda to feed into Kafka.
As of today, AWS only supports the following targets - https://docs.amazonaws.cn/en_us/eventbridge/latest/userguide/eb-targets.html#eb-console-targets
I have a similar use case where I need to send a message to AWS RabbitMQ or even to AWS Kafka, as I need priority logic implemented.
As AWS supports Lambdas, the message can be forwarded to a Lambda, from where it can be fed to any system. (A sketch of such a forwarder is below.)
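For the Lambda route, an illustrative handler (Python, with the kafka-python package bundled into the deployment) that an EventBridge rule targets; broker addresses and the topic name are placeholders, and the brokers must be reachable from the Lambda's network:

    # Sketch: EventBridge rule target that forwards each event to Kafka.
    import json

    from kafka import KafkaProducer

    # Created at module scope so warm invocations reuse the connection.
    producer = KafkaProducer(
        bootstrap_servers=["broker1.example.com:9092", "broker2.example.com:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def handler(event, context):
        # EventBridge delivers the full event envelope; key by source so
        # related events land in the same partition.
        producer.send(
            "eventbridge-events",
            key=event.get("source", "").encode("utf-8"),
            value=event,
        )
        producer.flush()

The flush per invocation trades throughput for delivery certainty before the container is frozen; at millions of events per day that per-invocation overhead is the main cost driver to watch.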

Can we monitor AWS RDS PostgreSQL queries and stats using New Relic?

I know that we can monitor infrastructure and OS-level metrics using New Relic's integration with AWS. But how can we monitor queries and DB-level parameters using New Relic?
This feature, which would basically pull RDS Performance Insights into New Relic, was requested but has not been implemented by New Relic yet:
https://discuss.newrelic.com/t/add-rds-performance-insights-data-to-aws-integration/60821
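Until native support lands, one workaround is to pull Performance Insights yourself with boto3 and push the numbers to New Relic through its Metric API. A minimal sketch of the pulling half; the DbiResourceId and region are placeholders, and db.load.avg is the headline Performance Insights load metric:

    # Sketch: read RDS Performance Insights data points for one instance.
    from datetime import datetime, timedelta, timezone

    import boto3

    pi = boto3.client("pi", region_name="us-east-1")
    now = datetime.now(timezone.utc)

    resp = pi.get_resource_metrics(
        ServiceType="RDS",
        Identifier="db-ABCDEFGHIJKLMNOP",            # the instance's DbiResourceId
        MetricQueries=[{"Metric": "db.load.avg"}],
        StartTime=now - timedelta(minutes=30),
        EndTime=now,
        PeriodInSeconds=60,
    )

    for metric in resp["MetricList"]:
        for point in metric["DataPoints"]:
            print(point["Timestamp"], point.get("Value"))

Each data point could then be POSTed to New Relic's Metric API as a gauge; Performance Insights must be enabled on the instance for the pi API to return anything.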

Real-time sync between a local Postgres instance and an Azure cloud Postgres instance

I need to set up a real-time sync process between an on-premises PostgreSQL instance and a cloud PostgreSQL instance. Please let me know what options are available through which I can achieve it.
Do I have to use any specific tool, or can it be managed through replication?
Please advise.
Use Pgpool-II:
http://www.pgpool.net/mediawiki/index.php/Main_Page
From their web page:
pgpool-II can manage multiple PostgreSQL servers. Using the replication function enables creating a realtime backup on 2 or more physical disks, so that the service can continue without stopping servers in case of a disk failure.
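Besides pgpool, native logical replication (PostgreSQL 10+) is worth evaluating; Azure Database for PostgreSQL can act as a subscriber, so the cloud instance can follow the on-prem one continuously. A hedged sketch via psycopg2, with hosts, credentials and the publication/subscription names as placeholders; wal_level must already be 'logical' on the source and the table schemas must already exist on the target, since logical replication copies rows, not DDL:

    # Sketch: on-prem instance publishes, Azure instance subscribes.
    import psycopg2

    # On the on-prem (source) instance: publish the tables to sync.
    with psycopg2.connect("host=onprem dbname=app user=postgres") as src:
        src.autocommit = True
        src.cursor().execute("CREATE PUBLICATION cloud_sync FOR ALL TABLES;")

    # On the Azure (target) instance. CREATE SUBSCRIPTION cannot run inside
    # a transaction block, hence autocommit.
    with psycopg2.connect("host=<server>.postgres.database.azure.com dbname=app "
                          "user=<admin> sslmode=require") as dst:
        dst.autocommit = True
        dst.cursor().execute(
            "CREATE SUBSCRIPTION cloud_sync_sub "
            "CONNECTION 'host=<onprem-public-ip> dbname=app user=replicator password=<pw>' "
            "PUBLICATION cloud_sync;"
        )

The subscription performs an initial table copy and then streams changes, which matches the real-time sync requirement more directly than routing statements through a pooler.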