Liquibase diffChangeLog command generates dropSequence changesets even if the tables are identical, what could be the issue? - postgresql

I have two PostgreSQL databases, DB_X and DB_Y, where DB_Y is a copy of the first. I've made changes to several tables in DB_Y and I want to migrate those changes to DB_X using liquibase diffChangeLog, which produces a db-changelog.xml file.
The db-changelog file has all the changesets for the changes I made in DB_Y, but it additionally has dropSequence changesets for all tables, even the ones where I did not change anything.
I'm running the diff command from the command line with Liquibase 3.6.2 against PostgreSQL 10.2:
liquibase --driver=org.postgresql.Driver --changeLogFile=D:\db-changelog14May.xml
--url="jdbc:postgresql://localhost:5432/DB_X" --username=**** --password=**** diffChangeLog
--referenceUrl="jdbc:postgresql://localhost:5432/DB_Y" --referenceUsername=***** --referencePassword=*****
I expected the db-changelog.xml file to contain only changesets for the items I had changed; since I have not changed any primary keys, I do not expect any sequence-related changesets. What could be the issue?
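For reference, a hedged sketch of one way to keep sequence changesets out of the output while investigating: the diffTypes parameter limits which object types Liquibase compares, so a list that omits sequences should suppress the dropSequence changesets. The exact type list below is an assumption; adjust it to the objects you care about.

```shell
# Sketch: compare only the listed object types, omitting sequences,
# so no dropSequence changesets end up in the generated changelog.
liquibase --driver=org.postgresql.Driver \
  --changeLogFile="D:\db-changelog14May.xml" \
  --url="jdbc:postgresql://localhost:5432/DB_X" \
  --username="****" --password="****" \
  diffChangeLog \
  --diffTypes="tables,columns,indexes,primarykeys,foreignkeys,uniqueconstraints" \
  --referenceUrl="jdbc:postgresql://localhost:5432/DB_Y" \
  --referenceUsername="*****" --referencePassword="*****"
```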

Related

TypeORM migration entries lost from DB, `migration:run` re-runs them, then fails with "relation already exists"

I have a NestJS app with TypeORM, dockerized. I have synchronize turned off, using migrations instead. In the container entry point, I do yarn typeorm migration:run. It works well the first time around, and according to the logs it inserts records into the migrations table.
I noticed that when I start the project the next time, it often tries to re-run migrations and fails (as expected) with "relation already exists". At this point I can verify that entries are indeed missing from the migrations table via docker-compose exec db psql -U postgres -c 'SELECT * FROM "migrations" "migrations"'. The DB schema is up to date. When I insert a new record manually, it gets an incremental ID after the missing records, so the records were there at some point.
I can't figure out what might cause entries in the migrations table to disappear (be rolled back?). This happens on the project linked above. It's a straightforward example project. I don't have an entity accidentally named "migrations". :)
As a workaround I currently insert into the migrations table manually:
docker-compose exec db psql -U postgres -c "INSERT INTO migrations (timestamp, name) VALUES ('1619623728180', 'AddTable1619623728180');"
Running specs that synchronized the DB was the issue.
I had a .env.test to use a different DB, but as it turns out that is not supported by dotenv. There are a few ways to make it work. I chose dotenv-flow/config and added it to my test script:
jest --collect-coverage --setupFiles dotenv-flow/config

What differences should I expect between Postgres migration scripts for 9.6 vs later versions?

Background:
In one of our projects there is a Jenkins build step which generates a Postgres SQL script based on provided EF Core DbContext. The script generating step looks like
dotnet-ef migrations script -c MyDbContext
Since there is a requirement to generate a script for different versions of Postgres, the following steps are taken:
The Postgres version is set in C# code via the SetPostgresVersion extension method
The version set in the previous step is passed in as an environment variable
I know these version-setting steps work because I got an error when trying to pass the version '12' instead of the required '12.0'; the scripts are also generated without issues.
Question:
I need to come up with the first (lowest) Postgres version for which the generated script will be different from the baseline 9.6.
I have generated SQL scripts for Postgres 9.6, 10.0, 11.0, 12.0, 13.0, and 13.1, and they are all identical.
This might be a correct result, but how do I verify it? What differences should I expect between, let's say, 9.6 and higher versions of Postgres (if indeed such differences exist)?
The actions performed in the migration script are the following:
If a table does not exist then create it
If an index does not exist then create it
Insert records into MigrationHistory
Alter some tables, drop some columns and indexes, etc.
It is possible that the actions performed by the script are indeed the same for different Postgres versions, but is there any way to verify it?
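One empirical check, sketched below under the assumption that the build reads the target version from an environment variable (POSTGRES_VERSION here is a hypothetical name; substitute whatever your pipeline actually uses): generate one script per version and diff each against the 9.6 baseline.

```shell
# Hypothetical sketch: generate a migration script per target version...
for v in 9.6 10.0 11.0 12.0 13.0 13.1; do
  POSTGRES_VERSION="$v" dotnet-ef migrations script -c MyDbContext -o "migrate-$v.sql"
done

# ...then compare each one against the 9.6 baseline.
for v in 10.0 11.0 12.0 13.0 13.1; do
  diff -q "migrate-9.6.sql" "migrate-$v.sql" && echo "identical for $v"
done
```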

How to copy a database schema from database A to database B

I have created a PostgreSQL database A using liquibase changesets. Now I'm creating an application that allows creating a new database B and copying the schema from database A in real time, including the liquibase changesets, as the database can still be updated later. Note that by the time the schema is copied, database A could already have been updated, making the base changesets outdated.
My main question would be:
How to copy PostgreSQL schema x from database a (dynamically generated at run-time) to b using liquibase? Database b could be on another server.
If it's not possible with liquibase, what other tools or approaches would make this possible?
--
Let me add more context:
We initialize a new database a schema using a liquibase changeset.
We can add a new table and field to database a at run-time, i.e. while the application is running. For example, we add a new table people to the schema of database a, which is not originally in the changeset. This is done using liquibase classes too, so a changeset is added to the databasechangelog table.
Now, we create a new database b.
We want to import the schema of the database a to b, with people table.
I hope that is clear.
Thanks.
All schema changes must be run through your schema migration tool
The point of using a database schema migration tool such as Liquibase or Flyway is to have a “single source of truth” regarding the structure of your database tables. Your set of Liquibase changesets (or Flyway scripts) is supposed to be that single source of truth for your database.
If you are altering the structure of your database at runtime, such as adding a table named people, outside the scope of your migration tool, then you have violated the rules of the game and defeated the purpose of using a schema migration tool. The intention is that you make all schema changes through that tool.
If you need to add a table while running in production, you should be dropping the physical file for the Liquibase changeset (or Flyway script) into the file system of your database server environment, and then invoking Liquibase (or Flyway) to run a migration.
Perhaps you have been misunderstanding the sequence of events:
If you have built a database on server "A", that means you installed Postgres, created an empty database, installed the collection of Liquibase changesets you have carefully built, and then ran a Liquibase migration operation on that server.
When you go to create a database on server "B", you should follow the same steps: install Postgres, create an empty database, install the very same collection of Liquibase changesets, and then run a Liquibase migration operation.
Alternatively, if making a copy of server "A" to create server "B", that copy should include the exact same Liquibase changesets. So at the end of your copy process, the two databases+changesets are identical.
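In command-line terms, the steps for server "B" might look like this sketch (the database name, changelog path, and credentials are placeholders):

```shell
# On server B: create an empty database...
createdb -U postgres app_db

# ...then run the very same changesets through a Liquibase migration.
liquibase --driver=org.postgresql.Driver \
  --changeLogFile=db-changelog.xml \
  --url="jdbc:postgresql://server-b:5432/app_db" \
  --username=postgres --password="****" \
  update
```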
Here's how I solved this problem of mine using the Liquibase Java library:
1.) Export the changelog from the source database into a temporary file (XML).
Liquibase liquibase = new Liquibase(liquibaseOutFile.getAbsolutePath(), new FileSystemResourceAccessor(), sourceDatabase);
liquibase.generateChangeLog(catalogAndSchema, changeLogWriter, new PrintStream(liquibaseOutFile.getAbsolutePath()), null);
2.) Run the temporary changelog file against the new data source.
Liquibase targetLiquibase = new Liquibase(liquibaseOutFile.getAbsolutePath(), new FileSystemResourceAccessor(), targetDatabase);
Contexts context = new Contexts();
targetLiquibase.update(context);
Here's the complete code: https://czetsuya-tech.blogspot.com/2019/12/generate-postgresql-schema-using-java.html

How to back up and restore an *entire* PostgreSQL installation?

I would like to back up, and then restore, the contents of an entire Postgres installation, including all roles, all databases, everything.
During the restore, the target Postgres installation shall be entirely replaced, so that its state matches the backup exactly, no matter what is on the target installation currently.
Existing answers include this and this, but none of them meets my requirements because
They use pg_dump, forcing me to list the databases manually. I don't want that. I want all databases.
They suggest pg_dumpall + psql, which doesn't work if the target installation already has (some part of) the tables; typical errors include ERROR: role "myuser" cannot be dropped because some objects depend on it, as a result the table is not dropped, and as a result the eventual COPY command importing data fails with e.g. ERROR: duplicate key value violates unique constraint.
They suggest copying on-disk data files, which doesn't work when you want to backup e.g. from version 9.5 and restore into version 9.6.
How can you backup, and restore, everything in a postgres installation, with one command each?
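For reference, the closest one-command-each approach I know of is pg_dumpall with --clean (plus --if-exists), which emits DROP statements so the restore replaces existing objects; this avoids many of the duplicate-object errors above, though role-dependency errors can still occur if roles are shared with objects outside the dump. A minimal sketch:

```shell
# Back up the entire cluster: all databases, roles, and tablespaces.
# --clean emits DROP statements so the restore replaces existing objects;
# --if-exists suppresses errors for objects missing on the target.
pg_dumpall -U postgres --clean --if-exists -f full_backup.sql

# Restore in one psql run; the dump reconnects to each database itself,
# so connecting to the maintenance database is enough.
psql -U postgres -d postgres -f full_backup.sql
```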

Will the Liquibase updateSQL command line throw an error if the changeset was already applied?

The Liquibase updateSQL command does not seem to throw an error when run from the command line; it generates the relevant SQL statements for the changeset along with the entry to be created in the DATABASECHANGELOG table.
My requirement is that I can only generate the SQL and hand it over to my DBA. But will Liquibase throw an error if even one of the changeSets in the XML fails to be created prior to generating the SQL for the changeset?
Please help
UpdateSQL will not fail if a changeSet has already been run. The purpose of Liquibase is to track which changeSets have been applied and only "execute" changeSets that have not been run, skipping the ones that have.
If you run in regular update mode, Liquibase will directly execute each changeSet in turn. If you run in updateSql mode, Liquibase will not actually execute the SQL but will instead output what it would have run.
Liquibase will not throw any errors in updateSQL. However, if the state of the database you are going to execute the SQL file against is different than the database you run updateSQL against, the resulting SQL may not be valid. There is no re-checking of whether a changeSet has executed in the SQL output, it is just a simple "run these commands" script.
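As a sketch, generating the SQL file to hand to a DBA might look like this (the changelog path and connection details are placeholders):

```shell
# Writes the SQL Liquibase *would* run, without executing anything,
# including the INSERTs into DATABASECHANGELOG.
liquibase --changeLogFile=db-changelog.xml \
  --url="jdbc:postgresql://localhost:5432/DB_X" \
  --username="****" --password="****" \
  updateSQL > update.sql
```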