Unzipped files in S3 have different names - postgresql

I am using Flyway to handle DB procedure and query migrations for PostgreSQL.
When I build my project locally, a zip of the Flyway migration files is generated and I can easily apply those migrations to my local Postgres.
But when the same zip file is uploaded to S3 and I try to run the Flyway migrations against AWS RDS, flyway info shows only some of the stored procedures. The other procedures' file names get changed like below:
original name: V0.0.6__get_some_function_name.sql
changes to: V0.0.6__get_??some_function_name.sql
And flyway info shows only versions V0.0.1 to V0.0.5 as pending and does not show anything from V0.0.6 onwards.
Can anyone help with this?

It looks to me like the file name contains some non-printable Unicode characters that are most likely being decoded incorrectly on the S3 or RDS side. It is not yet clear where exactly the problem lies, but since you mention that you are able to migrate to your local PostgreSQL, the problem seems to be not on the export side but on the import side.
It might be the collation configuration on your RDS instance, or something else in between.
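One way to narrow it down is to check whether the entry names inside the zip already carry hidden bytes before they ever reach RDS. A minimal check from the command line (flyway-migrations.zip is a placeholder for your actual artifact):

    # List the archive entries with any non-printing bytes made visible
    unzip -l flyway-migrations.zip | LC_ALL=C cat -v

    # Show the raw bytes of the listing lines that mention V0.0.6
    unzip -l flyway-migrations.zip | grep 'V0.0.6' | hexdump -C

If the stray bytes already show up here, the zip was built with unexpected characters in the file names; if not, the mangling is happening on the S3/RDS side.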

Related

How to Back Up a Postgres Database on AWS and Restore It Locally?

I'm working on trying to set up my local database with some mock data to work with. We have a development AWS account with a Postgres database. I would like to create a backup of it, export it to my local computer, and restore it to my local Postgres database.
I've been trying to find out how to do this online, but everything I'm finding is about how to back up to AWS and restore back to AWS. I tried creating a snapshot and exporting it via S3, but the snapshot doesn't produce a SQL file to restore from like I was expecting.
If anyone can point me in the right direction I would very much appreciate it :)
I am afraid that the only chance you have is pg_dump/pg_restore.
Even if Amazon let you get your hands on its file system backups, which I doubt, they might be of little use to you, since Amazon runs modified versions of PostgreSQL and you cannot be sure that the physical file format is identical to that of stock PostgreSQL.
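A minimal sketch of that workflow, assuming the RDS endpoint is reachable from your machine (the host, user, database, and file names below are placeholders):

    # Dump the RDS database over the network in custom format
    pg_dump -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -U myuser -d mydb -Fc -f mydb.dump

    # Create an empty local database and restore the dump into it
    createdb -h localhost -U postgres mydb_local
    pg_restore -h localhost -U postgres -d mydb_local mydb.dump

The custom format (-Fc) lets pg_restore handle object ordering and lets you restore selectively if you only want some tables.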

Persistent Data in Heroku Postgres - Ephemeral Filesystem

This might be a simple question, but I would like some clarification.
Based on the docs, Heroku has an ephemeral file system. The way I interpret it is that any file you upload to Heroku is gone whenever the configuration changes or the app is restarted.
However, I was wondering whether this is also the case if you load data into Heroku Postgres through a dump file.
For development, I am using a local Postgres server. From there, I would create a dump file and then upload that file using the commands found here:
https://stackoverflow.com/a/71206831/3100570
Now suppose my application handles a POST request and inserts data into Heroku Postgres: would that data be persisted along with the initial data from the dump file in the event that the application is restarted or crashes?
Ingesting data into your PostgreSQL database this way doesn't touch your dyno's filesystem. You are simply connecting to PostgreSQL and running the SQL commands contained in that file:
-f, --file=file
SQL file to run
The data will be stored in PostgreSQL in exactly the same way it would if you did a bunch of INSERTs yourself. You should have no problem ingesting data this way and then continuing to interact with your application as normal.
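For example, a plain-SQL dump could be loaded over a normal connection along these lines (the app name and file name are placeholders; heroku config:get is one way to obtain the connection URL):

    # Run a plain-SQL dump file against the Heroku database
    psql "$(heroku config:get DATABASE_URL -a my-heroku-app)" -f dump.sql

Rows your application inserts later live in the same database and survive dyno restarts just like the imported rows.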

Copy data from Postgres DB (GCP Project A) to another Postgres DB (GCP Project B)

I would be happy to get your help / feedback regarding this data load.
Goal:
Load source data from a Postgres database, which is located in GCP project A to another Postgres database, which is located in GCP project B.
Challenge:
Get a connection to the Postgres DB in GCP Project A (I have an IAM account with sufficient rights to run a COPY TO / COPY FROM command) and copy the table either to a CSV file or create a dump that can be inserted into the other Postgres DB in GCP Project B.
How do I connect to the database with this IAM email account (e.g. if I create a key, where should I store the JSON key file, and would that approach even be feasible)?
Another way I've researched is to use psycopg2, so that I could use the function cursor.copy_expert (which doesn't need any superuser rights or Postgres user credentials) to copy the data, but I didn't succeed in connecting to the database with psycopg2 due to challenges with the Cloud SQL proxy.
Another idea was to use pg_dump or gcloud sql export csv.
I would be curious whether some of you have faced a similar challenge, how you solved it, and what the best way / practice might be.
You can give the Database Migration Service a try. You can set up a continuous migration configuration and use Cloud SQL for PostgreSQL.
Hello, after a lot of searching I've come to these solutions:
If you need a continuous copy, you need to use the Database Migration Service; check this documentation.
If you need a one-shot copy:
you can restore your instance into the other project; see the bottom of this documentation page
you can create a bucket, back your instance up to it, then import it from the other project (see the sketch below)
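The bucket route from the last point could look roughly like this (instance names, bucket, database, and project IDs are placeholders, and the Cloud SQL service accounts need read/write access to the bucket):

    # In project A: export one database from the source instance to a bucket
    gcloud sql export sql source-instance gs://my-transfer-bucket/mydb.sql.gz \
        --database=mydb --project=project-a

    # In project B: import that dump into the target instance
    gcloud sql import sql target-instance gs://my-transfer-bucket/mydb.sql.gz \
        --database=mydb --project=project-b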

How to use PoWA on a backup database?

I'm giving PoWA a try, but I've got a little problem.
My Postgres database is running on AWS RDS.
PoWA needs HypoPG in order to suggest indexes.
But RDS doesn't support the HypoPG extension, so I had to install PoWA on my backup database (outside RDS).
The problem is that PoWA isn't analyzing the restored database; it doesn't recognize any data. If I execute some SQL queries manually, it works though.
Is there something I might be missing?
Also, when I tried Ankane's Dexter, I could point it at the path of the log files (dumped at the backup database in parallel). Is there a way to do that in PoWA?
Thanks.

How can I copy my local PostgreSQL database to Heroku for a Spring Boot app?

I have deployed my Spring Boot app to Heroku. Now I would like to copy my local PostgreSQL database to Heroku.
I have found some information on devcenter.heroku.com.
However, I don't understand enough about the use of the db.changelog-master.yaml file.
Could anyone give me details about the simplest solutions to copy the database?
Create a valid dump of your local postgres database and host it somewhere publicly available. Now you will be able to restore this entire dataset (schema and records) with pg:backups:restore as shown here. The sole caveat here is that the target database must be completely empty for this to work. You can empty a Heroku postgres database with heroku pg:reset.
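A rough sequence for that approach (the app name, dump file, and URL below are placeholders; the dump must stay publicly reachable while the restore runs):

    # Create a custom-format dump of the local database
    pg_dump -Fc --no-acl --no-owner -h localhost -U postgres mydb > mydb.dump

    # Empty the Heroku database, then restore from the publicly hosted dump
    heroku pg:reset DATABASE_URL -a my-heroku-app
    heroku pg:backups:restore 'https://example.com/mydb.dump' DATABASE_URL -a my-heroku-app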
If you cannot take the approach listed above then you can run pg_restore directly from your local instance, provided your local version of Postgres is >= the target version of Postgres. This also applies to creating the dumpfile and is a requirement because pg utilities are not guaranteed to be forward compatible. Documentation for pg_restore is here.
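The direct pg_restore variant could look roughly like this, reusing the same placeholder app and dump file as above:

    # Restore the local dump straight into the Heroku database
    pg_restore --verbose --clean --no-acl --no-owner \
        -d "$(heroku config:get DATABASE_URL -a my-heroku-app)" mydb.dump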