How can we clean old Postgres DB data from Heroku? - postgresql

I have a PostgreSQL database on Heroku and I am doing Play Framework migrations.
I've reset this database using heroku pg:reset DATABASE and also dropped all tables and evolutions. However, when I delete all my files from /conf/evolutions/defaults and try to set everything up again, some old data persists, giving messages like
ERROR: column "step_description" of relation "step" already exists [ERROR:0, SQLSTATE:42701]
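For reference, a full reset that clears both the data and Play's evolutions bookkeeping would look roughly like this (a hedged sketch; your-app-name is a placeholder, and play_evolutions is the table Play uses to record applied evolutions):

# Wipe the remote database entirely (schema and data):
heroku pg:reset DATABASE --app your-app-name --confirm your-app-name
# Verify nothing survived; \dt should list no relations:
heroku pg:psql -c '\dt'
# If a stale bookkeeping table is somehow left behind, drop it explicitly:
heroku pg:psql -c 'DROP TABLE IF EXISTS play_evolutions;'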

Related

Database Migration Service - Aurora PostgreSQL -> Cloud SQL fails with confusing error (Unable to drop database postgres)

Attempting to migrate from AWS Aurora PostgreSQL 13.4 to Google Cloud SQL PostgreSQL 13.
Migration job gives this error:
finished setup replication with errors: failed to drop database "postgres": generic::unknown: retry budget exhausted (10 attempts): pq: database "postgres" is being accessed by other users
The user the DMS is using only has SELECT permissions on the source database (Aurora).
I'm very confused as to why it is trying to drop the "postgres" database at all. I'm not sure whether it is trying to drop the database on the source or the destination, and I'm not sure what I'm missing.
I've installed the necessary extensions in the destination DB (pg_cron). No difference.
The user in the source database has SELECT on all tables/schemas outlined in the docs (including the pglogical schema).
I've tried various PostgreSQL versions in the destination cluster (13.x, 14.x). No difference.
The "Test connection" tool, when creating the migration job, shows no errors. (There is a warning about a few tables not having primary keys, but that's it.)

TypeORM migration entries lost from DB, `migration:run` re-runs them, then fails with "relation already exists"

I have a NestJS app with TypeORM, dockerized. I have synchronize turned off, using migrations instead. In the container entry point, I do yarn typeorm migration:run. It works well the first time around, and according to the logs it inserts records into the migrations table.
I noticed that when I start the project the next time it often tries to re-run the migrations and fails (as expected) due to "relation already exists". At this point I can verify that entries are indeed missing from the migrations table via docker-compose exec db psql -U postgres -c 'SELECT * FROM "migrations" "migrations"'. The DB schema is up to date. When I insert a new record manually it gets an incremental ID after the missing records, so the records were there at some point.
I can't figure out what might cause entries in the migrations table to disappear (be rolled back?). This happens on the project linked above. It's a straightforward example project. I don't have an entity accidentally named "migrations". :)
As a workaround I currently insert into the migrations table manually:
docker-compose exec db psql -U postgres -c "INSERT INTO migrations (timestamp, name) VALUES ('1619623728180', 'AddTable1619623728180');"
The issue turned out to be test specs that synchronized the DB.
I had a .env.test to point the tests at a different DB, but as it turns out that is not supported by dotenv. There are a few ways to make it work; I chose dotenv-flow/config and added it to my test script:
jest --collect-coverage --setupFiles dotenv-flow/config
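For context, the layout dotenv-flow expects looks like this (a sketch; the variable name and database names are placeholders):

# dotenv-flow picks the env file by NODE_ENV, so the test DB lives in .env.test:
echo 'DATABASE_NAME=app_dev'  > .env        # defaults
echo 'DATABASE_NAME=app_test' > .env.test   # wins when NODE_ENV=test
# Run the suite with NODE_ENV=test so .env.test is loaded:
NODE_ENV=test jest --collect-coverage --setupFiles dotenv-flow/config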

How can I copy my local PostgreSQL database to Heroku for a Spring Boot app

I have deployed my Spring Boot app to Heroku. Now I would like to copy my local PostgreSQL database to Heroku.
I have found some information on devcenter.heroku.com.
However, I don't understand enough about the use of the db.changelog-master.yaml file.
Could anyone give me details about the simplest way to copy the database?
Create a valid dump of your local Postgres database and host it somewhere publicly available. You will then be able to restore the entire dataset (schema and records) with pg:backups:restore as shown here. The sole caveat is that the target database must be completely empty for this to work; you can empty a Heroku Postgres database with heroku pg:reset.
If you cannot take the approach listed above, you can run pg_restore directly from your local instance, provided your local version of Postgres is >= the target version of Postgres. The same requirement applies to creating the dump file, because the pg utilities are not guaranteed to be forward compatible. Documentation for pg_restore is here.
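Concretely, the two approaches look something like this (a sketch, not a definitive recipe; your-app, mydb, and the dump URL are placeholders):

# Approach 1: dump locally, host the dump publicly, let Heroku pull it.
pg_dump -Fc --no-acl --no-owner -h localhost -U postgres mydb > mydb.dump
heroku pg:reset DATABASE --app your-app --confirm your-app
heroku pg:backups:restore 'https://example.com/mydb.dump' DATABASE --app your-app

# Approach 2: restore straight from your machine with pg_restore.
pg_restore --verbose --clean --no-acl --no-owner \
  -d "$(heroku config:get DATABASE_URL --app your-app)" mydb.dump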

Backup specific tables in AWS RDS Postgres Instance

I have two databases on Amazon RDS, both Postgres: Database 1 and Database 2.
I need to restore an instance from a snapshot of Database 1 for my staging environment (Database 2 is my current staging DB).
However, I want the data from a few of the tables in Database 2 to overwrite the tables in the newly restored snapshot. What is the best way to do this?
When restoring RDS from a snapshot, a new database instance is created. If you only wish to copy a portion of the snapshot:
Restore the snapshot to a new (temporary) database
Connect to the new database and dump the desired tables using pg_dump
Connect to your staging server and restore the tables using pg_restore (most probably deleting any matching existing tables first)
Delete the temporary database
In its default plain format, pg_dump outputs the SQL commands that are then used to recreate the tables and restore the data; look at the contents of a dump to understand how the restore process actually works. (pg_restore itself requires a non-plain-text dump, such as the custom format produced with pg_dump -Fc.)
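Steps 2 and 3 might look like this in practice (a hedged sketch; the hostnames, database name, and table list are placeholders):

# Dump only the desired tables from the temporary instance (custom format):
pg_dump -Fc -h temp-restore.xxxx.rds.amazonaws.com -U postgres \
  -t table_a -t table_b -t table_c mydb > tables.dump
# Restore them into staging, dropping any existing copies first:
pg_restore --clean --if-exists -h staging.xxxx.rds.amazonaws.com \
  -U postgres -d mydb tables.dump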
I hope this still helps someone else.
My team and I faced a similar issue: we also had two Postgres databases and we also just needed to back up some tables from db1 to db2.
What we did was use a Python Lambda function (on AWS Lambda, of course) that connected to both databases and validated whether db1.table1 had the same data as db2.table1; if not, the Lambda function wrote the missing data from db1.table1 into db2.table1. We took the Lambda approach because we wanted to automate the process, since the main DB (let's say db1) is constantly being updated. In addition, it allowed us to back up only our desired tables (let's say 3 tables out of 10) instead of backing up the whole database.
Note: you may want to do these writes using temporary tables to avoid issues with any constraints you have on your tables.
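The same compare-and-fill idea can be sketched without Lambda, using psql and a staging table as the note above suggests. Everything here is illustrative: the hostnames, appdb, table1, and the assumption that table1 has an id primary key for conflict detection:

# Pull the source table out of db1 as CSV:
psql -h db1.xxxx.rds.amazonaws.com -U reader -d appdb \
  -c "\copy (SELECT * FROM table1) TO '/tmp/table1.csv' CSV"
# Load it into a temp table on db2 and insert only the missing rows:
psql -h db2.xxxx.rds.amazonaws.com -U writer -d appdb <<'SQL'
CREATE TEMP TABLE table1_staging (LIKE table1 INCLUDING DEFAULTS);
\copy table1_staging FROM '/tmp/table1.csv' CSV
INSERT INTO table1 SELECT * FROM table1_staging ON CONFLICT (id) DO NOTHING;
SQL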

Why is NHibernate SchemaExport unable to create a PostgreSQL database?

I have the following code to generate the schema for a database in NHibernate:
new SchemaExport(configuration).Execute(true, true, false); // write the script to stdout, execute it against the DB, don't just drop
but when run against a PostgreSQL database, I end up getting the following error
[NpgsqlException (0x80004005): FATAL: 3D000: database "dbname" does not exist]
If I however create the database manually, the schema is exported without errors. And so the question: why is NHibernate SchemaExport unable to create a PostgreSQL database, when this works against other databases like SQLite, MsSqlCe and MsSql Server?
I have searched online but have been unable to locate anything that sheds light on this issue.
I am using NHibernate 3.3.1 with PostgreSQL 9.2.
You must create the database before you can create the tables and other objects within the database.
Do this with a CREATE DATABASE statement on a PostgreSQL connection, either in your app, or via psql or pgAdmin III.
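For example, a minimal sketch (host and credentials are placeholders; dbname matches the error message above):

# Create the target database once, connected to the default maintenance DB;
# after this, SchemaExport can connect to dbname and build the schema:
psql -h localhost -U postgres -d postgres -c 'CREATE DATABASE dbname;'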
PostgreSQL doesn't support creating databases on demand / first access. Perhaps that's what your tool is expecting?
If you think the DB does exist and you can see it in other tools, maybe you're not connecting to the same database server? Check the server address and port.