Control where Liquibase DATABASECHANGELOG tables are created on z/OS DB2

I'm testing Liquibase for DB2 on z/OS. I have created several TEST databases, each running in its own table space. When I run Liquibase, it applies my changes but creates the DATABASECHANGELOG table in the SYSDEFLT storage group.
Is it possible to designate where the DATABASECHANGELOG tables are created? Instead of creating them in the SYSDEFLT storage group, we would like to designate a user database for them.

Yes, you can. How you do it depends on how you run Liquibase.
From the command line, either pass arguments or set them in a liquibase.properties file.
The properties are liquibase.databaseChangeLogTableName and liquibase.databaseChangeLogLockTableName.
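For example, a minimal liquibase.properties sketch (connection values and names below are placeholders, not details from the question, and exact key names can vary slightly between Liquibase versions, so check the parameter reference for yours). If the goal is to control where the tracking tables live rather than what they are called, the liquibaseSchemaName and databaseChangeLogTablespaceName parameters may be the relevant ones; that is an inference from the Liquibase parameter list, not something verified against DB2 on z/OS here.
# illustrative liquibase.properties; all values are placeholders
url=jdbc:db2://dbhost:446/MYLOCATION
username=dbuser
password=secret
changeLogFile=changelog.xml
databaseChangeLogTableName=DATABASECHANGELOG
databaseChangeLogLockTableName=DATABASECHANGELOGLOCK
# assumed parameters for controlling placement rather than naming:
liquibaseSchemaName=LBSCHEMA
databaseChangeLogTablespaceName=LBTSPACE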
If you are using other ways of running Liquibase, it will be similar.

Related

Checking consistency of SQL tables schemas in Airflow

I have an Airflow pipeline in which I create some tables in a PostgreSQL database (A). This works fine: the table schemas reside in .sql files, which are used to create the tables in the aforementioned database. These definitions were downloaded manually from another database, which is also a PostgreSQL db (B).
I'd like to run a consistency check at the start of the main Airflow DAG to verify that the table schemas in the .sql files, which I use to recreate the tables in database (A), are identical to the ones in database (B).
Note: I also have definitions for views in my .sql files.
How could I accomplish this? Any examples would also be helpful.

Run SQL script on multiple schemas with Flyway

I am migrating a database using Flyway. I have a SQL script file which needs to run against multiple schemas hosted in a single database.
If I reference ${db_schema} as a placeholder in my SQL file and supply different schema names, will that work? Are there other approaches to handle this scenario?
SET search_path TO ${db_schema};
You should be able to use placeholders in Flyway to handle more than one schema. Here's an article that outlines how that works.
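A minimal sketch of how that could look with the Flyway command-line client, running the same migration once per schema with a different placeholder value each time (the schema names and flags are illustrative and should be checked against your Flyway version):
# one migrate run per schema; each run resolves ${db_schema} differently
flyway -schemas=tenant_a -placeholders.db_schema=tenant_a migrate
flyway -schemas=tenant_b -placeholders.db_schema=tenant_b migrate
With that, the SET search_path TO ${db_schema}; line in the script is rewritten to the schema supplied for each run.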

How to copy a database schema from database A to database B

I have created a PostgreSQL database A using Liquibase changesets. Now I'm creating an application that allows creating a new database B and copies the schema from database A in real time, including the Liquibase changesets, as the database can still be updated later. Note that at the time of the copy, the schema in database A could already have been updated, making the base changesets outdated.
My main question would be:
How to copy PostgreSQL schema x from database a (dynamically generated at run-time) to b using Liquibase? Database b could be on another server.
If it's not possible with Liquibase, what other tools or approaches would make this possible?
--
Let me add more context:
We initialize the schema of a new database a using Liquibase changesets.
We can add a new table or field to database a at run-time, i.e. while the application is running. For example, we add a new table people to the schema of database a, which is not originally in the changeset. This is done using the Liquibase classes too, so a changeset is added to the databasechangelog table.
Now, we create a new database b.
We want to import the schema of database a into b, including the people table.
I hope that is clear.
Thanks.
All schema changes must be run through your schema migration tool
The point of using a database schema migration tool such as Liquibase or Flyway is to have a “single source of truth” regarding the structure of your database tables. Your set of Liquibase changesets (or Flyway scripts) is supposed to be that single source of truth for your database.
If you are altering the structure of your database at runtime, such as adding a table named people, outside the scope of your migration tool, then you have violated the rules of the game and defeated the purpose of using a schema migration tool. The intention of using a schema migration tool is that you make all schema changes through that tool.
If you need to add a table while running in production, you should be dropping the physical file for the Liquibase changeset (or Flyway script) into the file system of your database server environment, and then invoking Liquibase (or Flyway) to run a migration.
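As a sketch of what that looks like in practice (the file contents below are illustrative, not taken from the question: the changeset id, author, and the people columns are made up), you would add a changelog file along these lines and then run liquibase update against it:
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.0.xsd">
    <!-- hypothetical changeset adding the people table through the migration tool -->
    <changeSet id="add-people-table" author="app-team">
        <createTable tableName="people">
            <column name="id" type="bigint" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="name" type="varchar(255)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>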
Perhaps you have been misunderstanding the sequence of events:
If you have built a database on server "A", that means you installed Postgres, created an empty database, then installed the collection of Liquibase changesets you have carefully built, then ran a Liquibase migration operation on that server.
When you go to create a database on server "B", you should follow the same steps: install Postgres, create an empty database, install the very same collection of Liquibase changesets, and then run a Liquibase migration operation.
Alternatively, if you make a copy of server "A" to create server "B", that copy should include the exact same Liquibase changesets, so that at the end of your copy process the two databases plus changesets are identical.
Here's how I solved this problem of mine using the Liquibase Java library:
1.) Export the changelog from the source database into a temporary file (XML).
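// Assumed setup (shown in the linked post, not here): liquibaseOutFile is a temporary file,
// sourceDatabase is a Liquibase Database opened on the source connection, and
// catalogAndSchema / changeLogWriter are the corresponding Liquibase diff objects.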
Liquibase liquibase = new Liquibase(liquibaseOutFile.getAbsolutePath(), new FileSystemResourceAccessor(), sourceDatabase);
liquibase.generateChangeLog(catalogAndSchema, changeLogWriter, new PrintStream(liquibaseOutFile.getAbsolutePath()), null);
2.) Run the generated changelog against the new data source.
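// targetDatabase is assumed to be a Liquibase Database opened on the target connection,
// set up the same way as sourceDatabase above.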
Liquibase targetLiquibase = new Liquibase(liquibaseOutFile.getAbsolutePath(), new FileSystemResourceAccessor(), targetDatabase);
Contexts context = new Contexts();
targetLiquibase.update(context);
Here's the complete code: https://czetsuya-tech.blogspot.com/2019/12/generate-postgresql-schema-using-java.html

Why is my new PostgreSQL database not empty?

Whenever I create a new database from pgAdmin or using the command line (using CREATE DATABASE database_name), it's not empty.
It contains some tables that are part of a previous project I worked on.
I'm not yet very familiar with Psql so I don't know what I'm doing wrong.
You probably have created objects in the database template1.
Quote from the manual:
By default, the new database will be created by cloning the standard system database template1. A different template can be specified by writing TEMPLATE name. In particular, by writing TEMPLATE template0, you can create a virgin database containing only the standard objects predefined by your version of PostgreSQL. This is useful if you wish to avoid copying any installation-local objects that might have been added to template1.
So, anything that is in the template1 database will be copied over to the new database when you run create database.
Connect to the template1 database and drop all objects you don't want.
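For example, if you just want a clean database regardless of what has been added to template1 (the database name is only an illustration):
-- create a clean database, ignoring anything that has been added to template1
CREATE DATABASE clean_db TEMPLATE template0;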