We need to host an existing Azure Durable Functions app outside of Azure. We can run the function app as a container, but we'll need to configure an alternate data store (it currently uses Azure Storage). I can see MS SQL is a supported alternative - see here - and this would work for us, but Postgres is more aligned with the direction we're headed, so it would be preferable. Has anyone used Postgres as the storage provider for Azure Durable Functions apps?
The language-specific steps for deploying outside of Azure are described in https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=python
For storage considerations, refer to https://learn.microsoft.com/en-us/azure/azure-functions/storage-considerations
However, there is no documented procedure for PostgreSQL as a storage provider yet; we are waiting for an update from Azure.
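If you do go the MS SQL route in the meantime, the Durable Task MSSQL storage provider is configured in host.json. A minimal sketch is below, assuming the Microsoft.DurableTask.SqlServer.AzureFunctions extension is installed and that SQLDB_Connection is an app setting holding your SQL connection string (the setting name here is just an example):

{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "type": "mssql",
        "connectionStringName": "SQLDB_Connection"
      }
    }
  }
}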
Is it possible to replicate a specific subset of data (certain schemas/databases) to a read-only copy of an Azure Postgres Flexible Server? Thanks, Brian
From Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server, under Limitations:
Read replicas - Azure Database for PostgreSQL read replicas are not currently supported with flexible servers.
At the moment this is not possible with Azure, so we ended up using pglogical replication to copy specific databases between the regions.
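As a rough illustration of that workaround, here is a minimal sketch of wiring up pglogical between a provider and a subscriber using psycopg2. The connection strings, node names, and the 'reporting' schema are placeholders, and it assumes the pglogical extension is already installed and allowed on both servers:

import psycopg2

PROVIDER_DSN = "host=provider.example.com dbname=appdb user=replicator password=secret"
SUBSCRIBER_DSN = "host=subscriber.example.com dbname=appdb user=replicator password=secret"

def run(dsn, statements):
    # Helper: open a connection, run each statement, commit on exit.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for sql, params in statements:
            cur.execute(sql, params)

# On the provider: create a pglogical node and publish only the schema we care about.
run(PROVIDER_DSN, [
    ("SELECT pglogical.create_node(node_name := %s, dsn := %s)",
     ("provider_node", PROVIDER_DSN)),
    ("SELECT pglogical.replication_set_add_all_tables(%s, ARRAY[%s])",
     ("default", "reporting")),   # replicate only the 'reporting' schema
])

# On the subscriber: create a node and subscribe to the provider's default set.
run(SUBSCRIBER_DSN, [
    ("SELECT pglogical.create_node(node_name := %s, dsn := %s)",
     ("subscriber_node", SUBSCRIBER_DSN)),
    ("SELECT pglogical.create_subscription(subscription_name := %s, provider_dsn := %s)",
     ("reporting_sub", PROVIDER_DSN)),
])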
The Read Replica feature will be available in Azure Database for PostgreSQL Flexible Server around 6/30/22 in private preview, and subsequently in public preview in Q3.
My current project has the following structure:
It starts with a script in a Jupyter notebook which downloads data from a CRM API into a local PostgreSQL database I manage with pgAdmin. After that it runs a cluster analysis, returns some scoring values, creates a table in the database with the results, and pushes those values back to the CRM with another API call. This process takes between 10 and 20 hours (the API only allows 400 requests per minute).
The second notebook reads the database, detects the last update, calls the API to refresh the database since that last call, runs a k-means analysis to cluster the data, compares the results with the previous run, and updates the changed records in the database and in the CRM via the API. This second process takes less than 2 hours by my estimate, and I want this script to run every 24 hours.
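In outline, the first notebook does something like the sketch below (the endpoint, credentials and table name are placeholders, not the real ones):

import time
import requests
import pandas as pd
from sklearn.cluster import KMeans
from sqlalchemy import create_engine

CRM_URL = "https://crm.example.com/api/contacts"   # placeholder endpoint
ENGINE = create_engine("postgresql+psycopg2://user:pass@localhost:5432/crm")

def fetch_pages(max_per_minute=400):
    """Pull paginated records, staying under the CRM's rate limit."""
    page, rows = 1, []
    while True:
        resp = requests.get(CRM_URL, params={"page": page})
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        rows.extend(batch)
        page += 1
        time.sleep(60 / max_per_minute)   # stay at or below ~400 requests/minute
    return pd.DataFrame(rows)

def score(df, n_clusters=5):
    """Cluster the numeric features and attach the cluster label as a score."""
    features = df.select_dtypes("number").fillna(0)
    df["cluster"] = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return df

if __name__ == "__main__":
    data = score(fetch_pages())
    data.to_sql("crm_scores", ENGINE, if_exists="replace", index=False)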
After testing, this works fine. Now I'm evaluating how to put it into production on AWS. I understand that for the notebooks I need SageMaker, and from what I have seen it is not that complicated; my only doubt here is whether I can call the API without writing additional code or whether it needs some configuration. My second problem is the database. I don't understand the difference between RDS (which is the one I think I have to use for this) and Aurora or S3. My goal is to write as little code as possible, but I have tried an RDS tutorial like this one: https://www.youtube.com/watch?v=6fDTre5gikg&t=10s, and I understand it connects my local Postgres to AWS, but I can't find the data in the AWS console - it only creates an instance? - and I don't see how to connect to it to analyze this data from SageMaker. My final goal is to run the notebooks in the cloud and connect to my Postgres in the cloud. Any orientation about how to use these tools would be appreciated.
I don't understand the difference between RDS (which is the one I think I have to use for this) and Aurora or S3
RDS and Aurora are relational database services fully managed by AWS. "Regular" RDS lets you launch existing popular database engines such as MySQL, PostgreSQL and others, the same ones you could run at home or at work.
Aurora is AWS's in-house, cloud-native database implementation, compatible with MySQL and PostgreSQL. It can store the same data as RDS MySQL or PostgreSQL, but provides a number of features not available in RDS, such as more read replicas, distributed storage, global databases and more.
S3 is not a database but an object store, where you keep files such as images, CSVs and Excel spreadsheets, much like you would store them on your computer.
I understand it connects my local Postgres to AWS, but I can't find the data in the AWS console - it only creates an instance?
You can migrate your data from your local Postgres to RDS or Aurora if you wish. But neither RDS nor Aurora will connect to your existing local database; they are databases themselves.
My final goal is to run the notebooks in the cloud and connect to my postgres in the cloud.
I don't see a reason why you wouldn't be able to connect to the database. Try to make it work, and if you run into difficulties you can ask a new question on SO with your RDS/Aurora setup details.
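As an illustration, connecting from a SageMaker notebook to an RDS or Aurora PostgreSQL instance is just a normal client connection, provided the notebook can reach the database (same VPC or an opened security group). The endpoint, database name and credentials below are placeholders:

import pandas as pd
from sqlalchemy import create_engine

# The endpoint is shown on the RDS console for your instance (placeholder below).
engine = create_engine(
    "postgresql+psycopg2://myuser:mypassword@"
    "mydb.abc123xyz.us-east-1.rds.amazonaws.com:5432/crm"
)

# Read a table into the notebook for analysis...
df = pd.read_sql("SELECT * FROM crm_scores", engine)

# ...and write results back.
df.to_sql("crm_scores_latest", engine, if_exists="replace", index=False)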
I work in a company that's using Serverless to build cloud-native applications and services. Today we use DynamoDB and SQL Databases with AWS Aurora.
We want to go with DocumentDB for our next application, but we could not find anything about Serverless and AWS DocumentDB. Does Serverless support AWS DocumentDB? If not, are there any plans to support it in the future?
Serverless supports any AWS resources that you can define using CloudFormation. As per the Serverless docs here:
Define your AWS resources in a property titled resources. What goes in this property is raw CloudFormation template syntax, in YAML...
The YAML for creating a DocumentDB cluster is going to look something like:
resources:
  Resources:
    DBCluster:
      Type: "AWS::DocDB::DBCluster"
      DeletionPolicy: Delete
      Properties:
        DBClusterIdentifier: "MyCluster"
        MasterUsername: "MasterUser"
        MasterUserPassword: "Password1234!"
    DBInstance:
      Type: "AWS::DocDB::DBInstance"
      Properties:
        DBClusterIdentifier: "MyCluster"
        DBInstanceIdentifier: "MyInstance"
        DBInstanceClass: "db.r4.large"
      DependsOn: DBCluster
You can find the other CloudFormation resources that you can define in the resources parameter of your Serverless.yaml here.
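Once the cluster is up, your functions talk to it like any MongoDB endpoint. A minimal sketch using pymongo is below; the cluster endpoint, credentials and database name are placeholders, the function must run inside the same VPC as the cluster, and DocumentDB requires TLS by default, so the CA bundle has to be available to the client:

import pymongo

# Placeholder endpoint/credentials; DocumentDB is only reachable from inside its VPC.
client = pymongo.MongoClient(
    "mongodb://MasterUser:Password1234!@"
    "mycluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",   # downloaded CA bundle
    retryWrites=False,                        # DocumentDB does not support retryable writes
)

def handler(event, context):
    # Example Lambda-style handler: insert the incoming event and return the count.
    coll = client["mydb"]["events"]
    coll.insert_one({"payload": event})
    return {"count": coll.count_documents({})}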
DocumentDB is not a serverless service. You need to manage the backend server to use it.
Please refer to this blog: https://blogs.itemis.com/en/serverless-services-on-aws - you can see it is not in the list of "SERVERLESS SERVICES ON AWS".
No, this isn't serverless; if you really want a serverless option you can go with DynamoDB. The differences are summarized below, if you want to compare.
DocumentDB
MongoDB is supported (it is MongoDB compatible), which makes it easy to learn.
Stored procedures are needed here; data retrieval and data aggregation are done with their help.
Document size is limited to 16 MB and storage scales up to 64 TB of data.
Daily backups are managed by the service itself and can be restored whenever required.
It is comparatively costly: you pay around $200/month even if you only use a few database instances or run them for only a few hours.
AWS is not involved in managing user credentials; they are stored directly in the database.
Available only in specific regions.
Can be easily migrated out of AWS into any MongoDB deployment.
In case of primary node failure, the service promotes a read replica to primary. Multi-AZ has to be configured by the user. Backups can be copied across regions.
DynamoDB
MongoDB is not directly supported, and it is not easy to migrate from MongoDB to DynamoDB.
Stored procedures are not needed, which makes the process easier for users.
There is no limit on overall table size; it scales to whatever the user requires (individual items are limited to 400 KB).
Automatic daily backups are not available by default; backups have to be triggered explicitly by the user, and can be restored whenever needed.
There is an initial cost, but the overall cost is lower. On-demand pricing is also available, where a small workload can run for as little as about $1/month, and 25 GB of storage is included in the free tier.
AWS controls user access to the database through Identity and Access Management (IAM), where authentication and authorization are enforced at a fine-grained level as well.
Available in all regions.
Cannot be easily migrated out of AWS into MongoDB; you need to write code to transform the data.
Supports global tables, which protect users against regional failure. Data is automatically replicated across multiple AZs within a single region.
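If you do go the DynamoDB route, provisioning is genuinely serverless. Below is a minimal sketch with boto3 using on-demand (pay-per-request) billing; the table and attribute names are just examples:

import boto3

dynamodb = boto3.resource("dynamodb")

# Create an on-demand table keyed by a single partition key (example names).
table = dynamodb.create_table(
    TableName="Orders",
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Basic writes and reads.
table.put_item(Item={"order_id": "1001", "status": "NEW"})
print(table.get_item(Key={"order_id": "1001"})["Item"])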
We have set up a community PostgreSQL service on Cloud Foundry (IBM Bluemix). This is a free service and no automated backup and recovery is supported out of the box.
Is there a way to set up a standby server or a regular backup in case there is any data corruption/failure?
IBM Compose and ElephantSQL can provide this at a cost, but we are not ready for that yet.
PostgreSQL is an experimental service, and it lacks the dashboard and other advanced features (daily backup, for example) that you can find in the other services you mentioned. If you want a backup, you could write an ad-hoc script that saves/exports all tables as you want and run it every day.
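A minimal sketch of such an ad-hoc script is below, shelling out to pg_dump and keeping a dated dump file; the connection URL and backup directory are placeholders, and you would schedule it daily with cron or a similar scheduler:

import datetime
import os
import subprocess

# Placeholder connection settings for the Bluemix/Cloud Foundry PostgreSQL service.
DB_URL = "postgres://user:password@host:5432/mydb"
BACKUP_DIR = "/var/backups/postgres"

def backup():
    os.makedirs(BACKUP_DIR, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    out_file = os.path.join(BACKUP_DIR, f"mydb-{stamp}.dump")
    # -Fc writes a compressed custom-format archive restorable with pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-f", out_file, DB_URL], check=True)
    return out_file

if __name__ == "__main__":
    print("Wrote", backup())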
If you need PostgreSQL you can create a PostgreSQL by Compose service ($17.50/mo for the first GB and $12 per extra GB).
We used PostgreSQL Studio and deployed it on IBM Bluemix. The database service was bound to the pgstudio interface (this restricts access to only the connected databases). We also had to make minor changes to pgstudio so that we could use pg_dump from the interface.
The result: We could manually dump the data. This solution works well as we could take regular dumps (though manually).
In the free tier, you are right in saying that you can't get backups. Those features are available only in the Compose for PostgreSQL service - but that's a paid service.
I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in CloudSQL.
I'm using DBSync to do it, and it is working fine.
http://dbconvert.com/mysql.php
The Sync version does the service that you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems to be possible by Configuring External Masters.
The high-level steps are:
Create a dump of the data from the master and upload the file to a storage bucket (a sketch of this step follows below)
Create a master instance in Cloud SQL
Set up a replica of that instance, using the external master's IP, username and password. Also provide the dump file location
Set up additional replicas if needed
Voilà!
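As an illustration of the first step, here is a minimal sketch that dumps the external MySQL master and uploads the file to a Cloud Storage bucket. The host, credentials, database and bucket name are placeholders, and it assumes the mysqldump client and the google-cloud-storage Python package are installed:

import subprocess
from google.cloud import storage

# Placeholder connection details for the external MySQL master and the target bucket.
MYSQL_HOST = "203.0.113.10"
MYSQL_USER = "repl_user"
MYSQL_PASSWORD = "secret"
DATABASE = "inventory"
BUCKET = "my-cloudsql-dumps"

def dump_and_upload():
    dump_file = f"{DATABASE}.sql"
    # --master-data records the binlog position needed to start replication.
    with open(dump_file, "w") as out:
        subprocess.run(
            ["mysqldump", "-h", MYSQL_HOST, "-u", MYSQL_USER,
             f"-p{MYSQL_PASSWORD}", "--single-transaction",
             "--master-data=2", DATABASE],
            stdout=out, check=True,
        )
    # Upload the dump to the bucket referenced when creating the Cloud SQL replica.
    blob = storage.Client().bucket(BUCKET).blob(dump_file)
    blob.upload_from_filename(dump_file)
    return f"gs://{BUCKET}/{dump_file}"

if __name__ == "__main__":
    print("Dump uploaded to", dump_and_upload())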