AWS Airflow v2.0.2 doesn't show the Postgres connection type

I just started migrating my Airflow v2.0.2 codebase to MWAA (AWS's managed Airflow service). I added the following to requirements.txt (and uploaded it to the S3 bucket intended for sync):
apache-airflow-providers-postgres==2.0.0
But the Postgres connection type doesn't show up in the new connection UI.
What's going on here and how can I resolve this issue?

This is currently a known problem with MWAA: it does not install the Postgres provider on the webserver. From what I know, this may be solved in the future, but for now I believe the only solution is to define the connection manually via Secrets Manager: https://docs.aws.amazon.com/mwaa/latest/userguide/connections-secrets-manager.html
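As a rough sketch of that workaround (the connection values are placeholders, and the `airflow/connections/` prefix assumes MWAA's default Secrets Manager backend configuration):

```shell
# Hypothetical connection values -- substitute your own host and credentials.
CONN_ID="postgres_default"
CONN_URI="postgresql://myuser:mypassword@myhost.example.com:5432/mydb"

# MWAA resolves connections from Secrets Manager under the configured
# prefix, commonly "airflow/connections/<conn_id>".
SECRET_NAME="airflow/connections/${CONN_ID}"

# Dry-run helper: set RUN_DEMO=1 to actually call AWS; otherwise the
# command is only printed.
run() { if [ "${RUN_DEMO:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run aws secretsmanager create-secret \
  --name "${SECRET_NAME}" \
  --secret-string "${CONN_URI}"
```

The connection then resolves by its `conn_id` in DAG code even though it never appears in the webserver UI.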

Related

Connect to a Postgres DB hosted in Azure from Docker

I am trying to connect to a Postgres database hosted in an Azure storage account from within Flyway, which is running as a Docker image in a container:
docker run --rm flyway/flyway -url=jdbc:postgresql://postgres-azure-db:5432/postgres -user=user -password=password info
but I am getting the error: ERROR: Unable to obtain connection from database
Any idea or doc link would be helpful.
A similar error (different context, same general solution) appears in this Flyway issue: "my missing piece for reaching private Cloud SQL instances from private worker pools in Cloud Build was a missing network route. The fix is ensuring the Service Networking VPC peer has the 'Export custom routes' setting enabled, and that the Cloud Router advertises the route."
In your context (Azure), see "Quickstart: Use Azure Data Studio to connect and query PostgreSQL"
For testing, you can also try with a local Postgres instance first, together with Azure Data Studio.
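A minimal local check could look like this (container and network names are arbitrary; this is a sketch assuming Docker is installed, with a dry-run wrapper so nothing executes unless you opt in):

```shell
# Dry-run helper: set RUN_DEMO=1 to actually execute the docker commands.
run() { if [ "${RUN_DEMO:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# Put both containers on one network so the DB hostname resolves; a
# common cause of "Unable to obtain connection" is that the Flyway
# container simply cannot reach the database host by that name.
run docker network create flyway-net
run docker run -d --name pg-local --network flyway-net \
  -e POSTGRES_PASSWORD=password postgres:13

# Same Flyway invocation as in the question, pointed at the local instance.
run docker run --rm --network flyway-net flyway/flyway \
  -url=jdbc:postgresql://pg-local:5432/postgres \
  -user=postgres -password=password info
```

If this works locally but the Azure-hosted database still fails, the problem is almost certainly network reachability or firewall rules rather than Flyway itself.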
After exploring a few options, I implemented Flyway using an Azure Container Instance (ACI): I created an ACI to hold the Flyway Docker image and execute the commands, plus a file share to keep the config file and SQL scripts.
All these resources (storage, ACI, file share) were created via Terraform scripts triggered from Jenkins.

How can I learn more about AWS's RDS Aurora PostgreSQL 9.6.19 upgrade failure?

I'm trying to upgrade an RDS database cluster engine from Aurora PostgreSQL 9.6.19 before its end of life. I made a copy and tried to upgrade to 9.6.21 and 10.16, but every time the same problem happens:
Database cluster is in a state that cannot be upgraded: Postgres cluster is in a state where pg_upgrade can not be completed successfully.
The status of the database is Available, so maybe it refers to something else, but I don't know what or how to fix it; I've tried looking for answers to no avail.
Has anyone fixed this?
The pg_upgrade_internal log file will usually contain details on any failures or errors.
You can view these logs from the command line:
aws rds describe-db-log-files --db-instance-identifier my-db-instance
or via the console or the RDS API.
For more information, see: Upgrading the PostgreSQL DB engine for Amazon RDS, and Viewing and listing database log files.
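The listing and download can be scripted like this (the instance identifier and log file path are placeholders; `download-db-log-file-portion` is the aws CLI subcommand that fetches log file contents):

```shell
DB_ID="my-db-instance"   # placeholder instance identifier

# Dry-run helper: set RUN_DEMO=1 to actually call AWS.
run() { if [ "${RUN_DEMO:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# 1) List the available log files to find the pg_upgrade log's exact name.
run aws rds describe-db-log-files --db-instance-identifier "${DB_ID}"

# 2) Fetch its contents; take the real file name from the listing above
#    (the path below is a typical example, not guaranteed).
run aws rds download-db-log-file-portion \
  --db-instance-identifier "${DB_ID}" \
  --log-file-name "error/pg_upgrade_internal.log" \
  --output text
```

Note that for an Aurora cluster you run these against the writer instance's identifier, not the cluster identifier.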

Unable to use Postgis on Bluemix Compose for Postgresql

I have a Compose for PostgreSQL service on IBM Bluemix which isn't letting me run PostGIS functions in my Cloud Foundry Rails app. I have run "CREATE EXTENSION PostGIS;" and I have also added the adapter to database.yml. Compose for PostgreSQL says PostGIS comes installed by default.
I am using Ruby on Rails with the rgeo gem and the error is
ERR NoMethodError: undefined method `st_point' for #
Can you please let me know if there is anything I need to do to get PostGIS working?
Please raise a support request asking for the postgis plugin to be enabled on your compose instance.
Answered my own question. The problem was with the rgeo gem and the adapter: I needed the postgis:// adapter to work with the gem.
Bluemix does not allow you to change the adapter in their connections; it will always be postgresql. To get around this I set a CUSTOM_DATABASE_URL environment variable with the connection string postgis://<username>:<password>@<host>:<port>/<db_name>. Using the cf client this looks like:
cf set-env <app-name> CUSTOM_DATABASE_URL postgis://<username>:<password>@<host>:<port>/<db_name>
Then in the command for my container in manifest.yml, I prepended setting DATABASE_URL from CUSTOM_DATABASE_URL, specifically:
DATABASE_URL=$CUSTOM_DATABASE_URL &&.....
It's a workaround for now, until Bluemix allows us to change the adapter in the connections.

AWS DMS Streaming replication : Logical Decoding Output Plugins(test_decoding) not accessible

I'm trying to migrate a PostgreSQL DB hosted in the cloud (on a DO droplet) to RDS using AWS Database Migration Service (DMS).
I've successfully configured the replication instance and endpoints.
I've created a task with "Migrate existing data and replicate ongoing changes". When I start the task it fails with the error: ERROR: could not access file "test_decoding": No such file or directory.
I've tried to create a replication slot manually from my DB console; it throws the same error.
I've followed the procedure suggested in the DMS documentation for Postgres.
I'm using PostgreSQL 9.4.6 on my source endpoint.
I presume the problem is that the output plugin test_decoding is not accessible for the replication.
Please assist me to resolve this. Thanks in advance!
You must install the postgresql-contrib package (additional supplied modules) on your source endpoint.
If it is already installed, make sure the directory where the test_decoding module is located is the same as the directory where PostgreSQL expects it.
On *nix, you can check the module directory with:
pg_config --pkglibdir
If it is not the same, copy the module, make a symlink, or use whatever other solution you prefer.
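On the source host, a quick check could look like this (the install commands need root and the package names are distribution-specific assumptions, so they are shown as comments):

```shell
# 1) Install the contrib modules first (on 9.x the package is often
#    versioned); run the one matching your distribution:
#      apt-get install postgresql-contrib-9.4   # Debian/Ubuntu
#      yum install postgresql94-contrib         # RHEL/CentOS

# 2) Check where PostgreSQL expects loadable modules and whether the
#    test_decoding plugin is actually there.
if command -v pg_config >/dev/null 2>&1; then
  PKGLIBDIR="$(pg_config --pkglibdir)"
  if [ -e "${PKGLIBDIR}/test_decoding.so" ]; then
    echo "test_decoding found in ${PKGLIBDIR}"
  else
    echo "test_decoding.so missing from ${PKGLIBDIR}; copy or symlink it there"
  fi
else
  echo "pg_config not found; install the PostgreSQL contrib/dev packages first"
fi
```

Once the `.so` file is in the reported directory, creating the replication slot (and therefore the DMS task) should no longer fail with "could not access file".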

Ambari doesn't start after PostgreSQL upgrade

We have a four-node Hadoop cluster with HDP 2.4 and Kerberos installed. As this is our production cluster, we wanted HA for all the services, including the PostgreSQL database used by Hive, Ambari, and Oozie for storing metadata. However, our PostgreSQL version, 8.4.2, doesn't support Postgres's built-in streaming replication feature.
So we decided to upgrade PostgreSQL to a version (9.3) that Ambari supports.
I followed this link to upgrade Postgres. Everything went well, except that we get the following error when restarting the Ambari server:
Ambari Server running with administrator privileges.
Running initdb: This may take upto a minute.
Data directory is not empty!
[FAILED]
Could someone help?
Thanks.
Your server wants to initialize the database; I guess it does not see the existing Ambari DB. Use ambari-server setup to restore the database connection. Then the server should start perfectly.
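A sketch of that recovery sequence (stop, setup, start are standard `ambari-server` subcommands; the dry-run wrapper is only for illustration, since `setup` is interactive and must run on the Ambari host):

```shell
# Dry-run helper: set RUN_DEMO=1 on an actual Ambari host to execute.
run() { if [ "${RUN_DEMO:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run ambari-server stop
# Re-run setup and point it at the existing (upgraded) Postgres database,
# so Ambari stops trying to initdb into a non-empty data directory.
run ambari-server setup
run ambari-server start
```

During `setup`, choose the existing PostgreSQL database option and supply the same host, database name, and credentials that Ambari used before the upgrade.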
I found the fix for the issue here.