I need to add SQL scripts to a new database created with the code-first approach. I couldn't find anything about this when googling. How is it done, please?
Background:
I need to add triggers to the database that must run every time certain tables are updated (by an external process not controlled by my application). So I need to install the triggers in the database when it is created.
Edit (09/16/22 3:34 pm)
Using a migration is not desired. Everything needs to be done in code, which will already create the database if it is not present.
Edit (09/16/22 4:31 pm)
The script is not meant to be executed when the server starts. It's a trigger the DB server should execute whenever a table gets changed (externally). So an ExecuteSqlRaw() call during startup of the server is not what I am looking for.
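To illustrate, what I need to install is plain trigger DDL. A minimal sketch, assuming SQL Server as the provider (the table, trigger, and audit-table names are made up):
-- Hypothetical trigger: record every external change to Orders.
-- Orders and OrdersAudit are placeholder names.
CREATE TRIGGER trg_orders_changed
ON Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT INTO OrdersAudit (ChangedAt) VALUES (SYSDATETIME());
END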
I have a lot of functions (500+) in the project. It's Spring Boot + PostgreSQL.
How do I configure Liquibase to include all functions from a directory and update them when needed?
For example
I modify a few of the functions.
Liquibase runs “CREATE OR REPLACE …” (for each function) on app startup.
This way I don’t need any new changeset.
Are there any drawbacks to this?
I don't want to copy-paste 500 changesets.
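For illustration, a minimal sketch of what I have in mind: one formatted-SQL file per function, pulled into the master changelog with includeAll, each marked runOnChange (the function name and body are made up):
--liquibase formatted sql
--changeset dev:fn_order_total runOnChange:true splitStatements:false
-- Placeholder function; each real function would live in its own file.
CREATE OR REPLACE FUNCTION fn_order_total(p_order_id bigint)
RETURNS numeric AS $$
    SELECT coalesce(sum(price), 0)
    FROM order_items
    WHERE order_id = p_order_id;
$$ LANGUAGE sql;
runOnChange:true makes Liquibase re-run a changeset whenever its checksum changes, so editing a function re-applies the CREATE OR REPLACE without adding a new changeset; splitStatements:false keeps Liquibase from splitting the function body on its internal semicolons.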
Is there a way of knowing that a table's data has changed (insert/update/delete) without putting a trigger on that table? Perhaps a global trigger that indicates changes on any table?
If you want notification of changes, you will need to add a trigger yourself. Firebird 3 added a new feature to simplify identifying changed rows, the pseudo-column RDB$RECORD_VERSION. This pseudo-column contains the transaction that created the current version of a row.
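For example (employee and emp_no are placeholder names):
-- Firebird 3+: RDB$RECORD_VERSION is the id of the transaction
-- that created the current version of each row
SELECT RDB$RECORD_VERSION, emp_no FROM employee;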
Alternatively, you could try to use the trace facility to monitor for changes, but that is not an out-of-the-box solution, as you will need to write the logic to parse the trace output (and take things like transaction commit/rollback into account).
I'm developing a system with database version control in Liquibase. The system is still in pre-alpha development and there are a lot of changes that were reverted or superseded by other changes (tables removed, columns added and removed).
The current changelog reflects the whole development history, with many failed experiments, and all of it is rolled out when initializing the database.
Because there is NO released version, I can start from scratch and capture the actual DB state in a single XML changeset.
Is there a way to tell Liquibase to merge all changesets into one file, or is the only way to do that by hand?
Just use your existing database to generate the changelog that will be used from now on. For this you can use the generateChangeLog command from the command line; it will generate a changelog file with all the changeSets that represent the current state of the database. You can use this file in your project as the initial DB creation file, to be used on an empty database. Here's a link to the docs.
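For example, from the command line (the connection details and file name are placeholders):
liquibase --url=jdbc:postgresql://localhost:5432/mydb \
          --username=dbuser --password=secret \
          --changeLogFile=baseline-changelog.xml \
          generateChangeLog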
There is a page in the Liquibase docs which discusses this scenario in detail:
http://www.liquibase.org/documentation/trimming_changelogs.html
To summarise, they recommend that you don't bother since consolidating your changelogs is both risky and low-reward.
If you do want to push ahead with this, then restarting the changelog using generateChangeLog, as suggested by #veljkost, is probably the easiest way. This is documented at http://www.liquibase.org/documentation/existing_project.html
Since I didn't find an automatic solution to this problem for the case where the changelog is already deployed on several databases in different states, I will describe my solution here:
Generate a changelog of the current development state of your database using Liquibase's generateChangeLog, like:
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=current_state.yml
Audit the generated changelog and check whether it looks good (Liquibase is not perfect; it often generates clumsy statements). Also, if your schema contains static data, like dictionaries, that was previously populated using Liquibase, you have to add it to the generated changelog as well. You can export data from your database using the generateChangeLog command mentioned above with the -Dliquibase.diffTypes=data property.
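For example (the output file name is arbitrary):
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=static_data.yml -Dliquibase.diffTypes=data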
Now, to prevent execution of the generated changelog on existing databases (it would obviously fail on prod, test, and other developers' local environments), you could use liquibase changelogSync or Liquibase contexts, but all of these options require manual work on every database. You can achieve an automatic result by adding preConditions to your changeSets.
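For reference, the manual route would be a one-off command per existing database, e.g. with the Maven plugin:
mvn liquibase:changelogSync
which marks all changesets as executed without actually running them.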
For changesets intended to run on empty databases (the changelog you generated in step 1 above) you can add something like this:
preConditions:
  - onFail: MARK_RAN
  - not:
      - tableExists:
          tableName: t_project
Here t_project is a table that existed before (most likely this should be a table added in the first changeSet, so every database that ran at least one changeSet will have it). This will mark the generated changelog as run on environments with an existing schema, and will actually run the generated changelog on every new database you want to migrate.
Unfortunately you have to adjust all the legacy changesets as well (I haven't found a better solution yet; I made this change using regex and sed), adding something like this:
preConditions:
  - onFail: MARK_RAN
  - tableExists:
      tableName: t_project
This is the opposite condition from the one above. With it, every database that ran at least one changeset in the past will continue to migrate (EXECUTED status of changesets) up to the changeset generated in step 1 above, and will mark the generated changesets as MARK_RAN. On new databases, all the legacy changesets will be skipped, and the first one executed will be the one generated in step 1 above.
With this solution you can push your merged changelog at any time, and no environment or developer will have any problem with manual syncing.
The problem with SQL views is that every time I need to make a small change, I need to create another migration. In a small startup, having to create a migration for every small change to a view is quite a hindrance.
Is it advisable to do the following:
Drop and recreate the views every time I deploy my app?
This way, when I change something in a view, it gets updated in the database as soon as I deploy my app.
What you are describing is just another type of migration, one that gets re-applied on every deploy. This may make sense for your business needs, and if you get blocked by this technique, you can always fall back to the regular migration system.
The best way to implement such a system in PostgreSQL is to put the views in a schema that you drop on deploy. That way you don't have to write all the DROP VIEW ... commands; a single DROP SCHEMA ... CASCADE deletes everything in it. Then you run your procedure to rebuild it.
Example deploy script to execute on deploy:
/* Drop and rebuild the schema */
DROP SCHEMA IF EXISTS view_schema CASCADE; -- CASCADE drops all views inside
CREATE SCHEMA view_schema;                 -- recreate it empty
CREATE VIEW view_schema.my_users AS (SELECT * FROM users);
CREATE VIEW view_schema.my_products AS (SELECT * FROM products);
....
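A deploy step could then apply the script with psql, for example (the file and database names are placeholders):
psql -d myapp_db -f rebuild_views.sql
Because the views live in their own schema, the rebuild never touches the tables they select from.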