I am trying to connect to a Postgres database hosted in an Azure storage account from Flyway, which is running as a Docker image inside a Docker container.
docker run --rm flyway/flyway -url=jdbc:postgresql://postgres-azure-db:5432/postgres -user=user -password=password info

but I am getting the error:

ERROR: Unable to obtain connection from database
Any idea or doc link would be helpful.
You have a similar error (different context, same general solution) in this Flyway issue:
"My missing piece for reaching private Cloud SQL instances from private worker pools in Cloud Build was a missing network route. The fix is ensuring the Service Networking VPC peer has the 'Export custom routes' setting enabled, and that the Cloud Router advertises the route."
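If you hit that same GCP variant, route export on the Service Networking peering can be enabled from the CLI. A minimal sketch, assuming the default peering name (verify yours with the list command first; the Cloud Router advertisement is configured separately):

# find the peering created for private services access
gcloud compute networks peerings list --network=my-vpc

# enable "Export custom routes" on it
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-vpc \
    --export-custom-routes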
In your context (Azure), see "Quickstart: Use Azure Data Studio to connect and query PostgreSQL".
You can also try with a local Postgres instance first, plus Azure Data Studio, for testing.
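For that local test, something like the following should work. This is a sketch with placeholder credentials; host.docker.internal resolves to the host on Docker Desktop, while on Linux you may need --network=host or the container's IP instead:

# throwaway local Postgres to test against
docker run --name pg-test -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres:14

# point Flyway at it from its own container
docker run --rm flyway/flyway \
    -url=jdbc:postgresql://host.docker.internal:5432/postgres \
    -user=postgres -password=secret info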
After exploring a few options, I implemented Flyway using an Azure Container Instance (ACI): the ACI runs the Flyway Docker image and executes the commands, and a file share holds the config file and SQL scripts.
All of these resources (storage account, ACI, file share) are created via Terraform scripts triggered from Jenkins, roughly as sketched below.
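A trimmed-down sketch of the Terraform involved; resource names, sizing, and the mount path are illustrative assumptions, and an existing azurerm_resource_group.rg is assumed:

# storage account + file share for flyway.conf and the SQL scripts
resource "azurerm_storage_account" "flyway" {
  name                     = "flywaymigrations"   # must be globally unique
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_share" "flyway" {
  name                 = "flyway-config"
  storage_account_name = azurerm_storage_account.flyway.name
  quota                = 1
}

# one-shot container group that runs the migration
resource "azurerm_container_group" "flyway" {
  name                = "flyway-aci"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  os_type             = "Linux"
  restart_policy      = "Never"

  container {
    name     = "flyway"
    image    = "flyway/flyway:latest"
    cpu      = "0.5"
    memory   = "1.0"
    commands = ["flyway", "-configFiles=/flyway/conf/flyway.conf", "migrate"]

    # mount the share that holds flyway.conf and the SQL scripts
    volume {
      name                 = "flyway-config"
      mount_path           = "/flyway/conf"
      share_name           = azurerm_storage_share.flyway.name
      storage_account_name = azurerm_storage_account.flyway.name
      storage_account_key  = azurerm_storage_account.flyway.primary_access_key
    }
  }
}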
I am trying to connect my Cloud Run app to Cloud SQL. Here is my cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/npm'
    env:
      - DATABASE_URL=$_DATABASE_URL
    entrypoint: npx
    dir: './server'
    args:
      - 'prisma'
      - 'migrate'
      - 'deploy'
However, I keep getting the error: Please make sure your database server is running at '/cloudsql/learninfra001:us-central1:learninfra001-postgres':'5432'.
Here is the _DATABASE_URL I use as the substitution variable:

postgresql://postgres:password@localhost/db?schema=public&host=/cloudsql/learninfra001:us-central1:learninfra001-postgres
I have made sure of the following:
The default Cloud Run service account has the Cloud SQL Client role
The database db is created
Within the Cloud Run service, under Connections, the Cloud SQL connection is pointing to the correct instance (learninfra001:us-central1:learninfra001-postgres)
Using a whitelisted public IP, I am able to connect to the DB. However, I just can't seem to get Cloud Run to work. Is there anything else I could check? Or is there a way to get more logging to see why it is not connecting?
In short, you'll need to enable a Cloud SQL connection to your Cloud SQL instance from Cloud Build.
See https://cloud.google.com/sql/docs/mysql/connect-build for details.
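Following that doc, one approach is to start the Cloud SQL Auth Proxy inside the build step and run the migration against the socket it opens. A sketch, assuming the proxy version and download URL (check the doc for the current release) and that the Cloud Build service account also has the Cloud SQL Client role:

steps:
  - name: 'gcr.io/cloud-builders/npm'
    dir: './server'
    entrypoint: bash
    env:
      - DATABASE_URL=$_DATABASE_URL
    args:
      - '-c'
      - |
        # download and start the Cloud SQL Auth Proxy on a unix socket
        curl -so /workspace/cloud-sql-proxy \
          https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.8.0/cloud-sql-proxy.linux.amd64
        chmod +x /workspace/cloud-sql-proxy
        mkdir -p /cloudsql
        /workspace/cloud-sql-proxy --unix-socket /cloudsql \
          learninfra001:us-central1:learninfra001-postgres &
        sleep 2
        # the socket now matches the host=/cloudsql/... part of _DATABASE_URL
        npx prisma migrate deploy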
I was using the Postgres addon on Heroku and was connecting to the database using the DATABASE_URL env var.
I now need to switch to Google Cloud Platform Postgres. I've created an instance and successfully added my local connection to the Authorised networks. Yet it seems that Heroku does not provide a static IP for its apps.
My question, then: is it possible to connect my Heroku app to a Postgres database on Google Cloud Platform? If yes, what's the best way to do it?
You will want to run the Cloud SQL Proxy alongside your application. This will allow your Heroku app to connect to Cloud SQL without having to worry about changing and adding IPs to your Authorized Networks.
This thread might be useful for your use case: Node and Cloud SQL with Heroku
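For a local smoke test of the proxy approach, the flow looks roughly like this (instance connection name and credentials are placeholders; on Heroku itself a sidecar proxy isn't straightforward, which is why the linked thread leans on the Node connector):

# forward a local port to the Cloud SQL instance
./cloud-sql-proxy --port 5432 my-project:us-central1:my-instance &

# the app then connects exactly as it did with the Heroku addon
export DATABASE_URL=postgres://appuser:secret@127.0.0.1:5432/appdb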
I have created an Azure Postgres Single Server with a firewall and a VNET rule for one subnet (173.30.0.0/24).
Now I am trying to create another subnet (173.30.5.0/27) inside the same VNET, but while doing so the Postgres server is getting recreated, which deletes all existing db instances.
I am using Terraform scripts for the creation of the resources.
Can anyone explain why the addition of a new subnet is recreating the Postgres server?
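Without seeing the scripts this is only a guess, but one frequent Terraform cause of exactly this symptom is indexing resources with count over a list of subnets: inserting an element shifts the indices, so Terraform plans to destroy and recreate everything keyed by the old indices. Keying with for_each on stable names avoids the shift. A minimal sketch with illustrative names, assuming existing azurerm_resource_group.rg, azurerm_virtual_network.vnet, and azurerm_postgresql_server.pg resources:

locals {
  subnets = {
    app = "173.30.0.0/24"
    db  = "173.30.5.0/27"   # adding an entry no longer shifts anything
  }
}

resource "azurerm_subnet" "this" {
  for_each             = local.subnets
  name                 = "${each.key}-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = [each.value]
  service_endpoints    = ["Microsoft.Sql"]   # required for the VNET rule
}

resource "azurerm_postgresql_virtual_network_rule" "this" {
  for_each            = azurerm_subnet.this
  name                = "${each.key}-vnet-rule"
  resource_group_name = azurerm_resource_group.rg.name
  server_name         = azurerm_postgresql_server.pg.name
  subnet_id           = each.value.id
}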
I just started migrating my Airflow v2.0.2 codebase to MWAA (AWS Airflow service). I added the following to the requirements.txt (and uploaded it to the S3 bucket intended for sync):
apache-airflow-providers-postgres==2.0.0
But the Postgres connection type doesn't show up in the new connection UI.
What's going on here and how can I resolve this issue?
This is (currently) a known problem with MWAA: it does not install the Postgres provider in the webserver. From what I know, this might be solved in the future, but for now I believe the only solution is to define the connection manually via Secrets Manager: https://docs.aws.amazon.com/mwaa/latest/userguide/connections-secrets-manager.html
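Following that guide, the connection lives in a secret whose name matches the connections_prefix configured on the environment. A sketch with placeholder host and credentials (the secrets backend options below must also be set in the MWAA environment's Airflow configuration):

# MWAA Airflow configuration options (set on the environment, shown here for reference):
#   secrets.backend        = airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend
#   secrets.backend_kwargs = {"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}

aws secretsmanager create-secret \
    --name airflow/connections/postgres_default \
    --secret-string "postgresql://appuser:secret@db.example.com:5432/appdb"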
So basically I have my Hasura Cloud instance with an existing schema, relations, tables, etc., and I want to run it offline using Docker. I tried using metadata export and import, but that doesn't seem to work. How can I do it, or is there another way to do it?
This is the Docker instance I want to run offline [screenshot], and this is the cloud instance whose schemas and metadata I want to get [screenshot].
Or should I just manually recreate the tables and relations?
When using the steps outlined in the Hasura "Quickstart with Docker" page, the following steps will get all the table definitions, relationships, etc., set up on the local instance just as they are on the Hasura Cloud instance.
Migrate all the database schema and metadata using the steps mentioned in "Setting up migrations".
Since you want to migrate from Hasura Cloud, use the URL of the cloud instance in step 2. Perform steps 3-6 as described in the link above.
Bring up the local Docker environment. Ideally, edit the docker-compose.yaml file to set HASURA_GRAPHQL_ENABLE_CONSOLE: "false" before running docker-compose up -d.
Resume the process of applying migrations from step 7, using the endpoint of the local instance. For example,
$ hasura metadata apply --endpoint http://localhost:8080
$ hasura migrate apply --endpoint http://localhost:8080
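Putting the steps together, the round trip looks roughly like this. The endpoint, admin secret, and database name are illustrative, and --database-name applies to Hasura v2 projects:

# step 2: scaffold a project pointed at the cloud instance
hasura init my-project --endpoint https://my-app.hasura.app --admin-secret mysecret
cd my-project

# steps 3-6: pull the schema and metadata down from the cloud
hasura migrate create init --from-server --database-name default --admin-secret mysecret
hasura metadata export --admin-secret mysecret

# step 7 onward: replay everything against the local Docker instance
hasura metadata apply --endpoint http://localhost:8080
hasura migrate apply --endpoint http://localhost:8080 --database-name default
hasura metadata reload --endpoint http://localhost:8080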