I'm developing a system with database version control in Liquibase. The system is still in pre-alpha development and there are a lot of changes that were reverted or superseded by later ones (tables removed, columns added and removed).
The current changelog reflects the whole development history, including many failed experiments, and all of it is rolled out when the database is initialized.
Because there is no release version yet, I can start from scratch and capture the actual DB state in a single XML changeset.
Is there a way to tell Liquibase to merge all change sets into one file, or is the only way to do that by hand?
Just use your existing database to generate the changelog that will be used from now on. For this you can use the generateChangeLog command from the command line; it will generate a changelog file with all the changeSets that represent the current state of your database. You can use this file in your project as the initial DB creation file, to be run against an empty database. See the generateChangeLog documentation for details.
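A rough sketch of such a command-line invocation (driver, connection details, and output file name are placeholders, and the flag spelling differs slightly between Liquibase versions):

liquibase --driver=org.postgresql.Driver \
  --url=jdbc:postgresql://localhost:5432/mydb \
  --username=myuser \
  --password=secret \
  --changeLogFile=db.changelog-master.xml \
  generateChangeLog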
There is a page in the Liquibase docs which discusses this scenario in detail:
http://www.liquibase.org/documentation/trimming_changelogs.html
To summarise, they recommend that you don't bother since consolidating your changelogs is both risky and low-reward.
If you do want to push ahead with this, then restarting the changelog using generateChangeLog, as suggested by veljkost above, is probably the easiest way. This is documented at http://www.liquibase.org/documentation/existing_project.html
Since I didn't find an automatic solution for the case where the changelog is already deployed on several databases in different states, I will describe my solution here:
Generate a changelog of the current development state of your database using Liquibase's generateChangeLog, like:
mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=current_state.yml
Audit the generated changelog and check whether it looks good (Liquibase is not perfect; it often generates clumsy statements). Also, if your schema contains some static data, like dictionaries, which were previously populated using Liquibase, you have to add these to the generated changelog as well. You can export that data from your database using the generateChangeLog command mentioned above with the -Dliquibase.diffTypes=data property.
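A sketch of that data export, using the same Maven plugin as in step 1 (the output file name is just a placeholder):

mvn liquibase:generateChangeLog -Dliquibase.outputChangeLogFile=static_data.yml -Dliquibase.diffTypes=data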
Now, to prevent the generated changelog from being executed on existing databases (it would obviously fail on prod, test, and other developers' local environments), you could use, for example, liquibase changelogSync or Liquibase contexts, but all of these options require some manual work on every database. You can achieve an automatic result by adding preConditions to your changeSets.
For changeSets intended to run only on empty databases (the changelog you generated in step 1 above), you can add something like this:
preConditions:
  - onFail: MARK_RAN
  - not:
      - tableExists:
          tableName: t_project
Here t_project is a table name that already existed before (most likely this should be a table added in the first changeSet, so every database which has run at least one changeSet will have it). This marks the generated changelog as run on environments with an existing schema, and runs the generated changelog on every new database you would like to migrate.
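For illustration, a complete changeSet in the generated changelog might end up looking like this; the id, author, and createTable change below are hypothetical, and only the preConditions block is the part you add by hand:

databaseChangeLog:
  - changeSet:
      id: 1-generated
      author: generated
      preConditions:
        - onFail: MARK_RAN
        - not:
            - tableExists:
                tableName: t_project
      changes:
        - createTable:
            tableName: t_project
            columns:
              - column:
                  name: id
                  type: bigint
                  constraints:
                    primaryKey: true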
Unfortunately you have to adjust all legacy changeSets as well (I haven't found a better solution yet; I made this change using regex and sed). You have to add something like this:
preConditions:
  - onFail: MARK_RAN
  - tableExists:
      tableName: t_project
This is the opposite condition to the one above. With this, all databases which have run at least one changeset in the past will continue to migrate (EXECUTED status of changesets) up to the changelog generated in step 1 above, and will mark the generated changeSets as MARK_RAN. And for new databases, all previous changeSets will be skipped, and the first ones executed will be those generated in step 1 above.
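As a hedged illustration, a legacy changeSet adjusted this way might look as follows; the id, author, and addColumn change are made up, and only the preConditions block is what gets added:

databaseChangeLog:
  - changeSet:
      id: 42-legacy
      author: someone
      preConditions:
        - onFail: MARK_RAN
        - tableExists:
            tableName: t_project
      changes:
        - addColumn:
            tableName: t_project
            columns:
              - column:
                  name: description
                  type: varchar(255)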
With this solution you can push your merged changelog at any time, and no environment or developer will have to do any manual syncing.
Related
Our Liquibase script cannot be rerun because the underlying column is already gone
Consider the following changesets:
1. A table "foo" is created, and "domain" is one of the columns in this table;
2. A constraint (in the form of an index) is placed on the column "domain";
3. Column "domain" is dropped from the table "foo".
Now when we try to rerun all Liquibase scripts (over an already existing DB structure), changeset 2 fails with:
[ERROR] Reason: liquibase.exception.DatabaseException: ERROR: column "domain" named in key does not exist
all because the "domain" column in the actual DB is already gone before changeset 2 is run.
Is there any better way to make these changesets runnable other than recreating the "domain" column in the table so that all 3 changesets can run?
Note that there are hundreds of changesets in the system besides the 2 above, and the solution should preferably avoid any manual steps, because there are dozens of environments in which the changesets must be rerun.
In a perfect world, a developer would have placed a preCondition on changeset 2 to check not only that the index is missing but also that the underlying column exists, but we have to deal with what we have. It is my understanding that rewriting existing changesets is strongly discouraged in Liquibase.
You can always add a preCondition to the changeSets #2 and #3 to check that the domain column exists, e.g.:
<preConditions onFail="MARK_RAN">
    <columnExists tableName="foo" columnName="domain"/>
</preConditions>
If these changeSets start to fail with a "different checksum" error, then you can always provide the new checksum or just add <validCheckSum>ANY</validCheckSum>.
This way you'll be able to run these changeSets in all environments you need.
Rewriting the changeSets is discouraged, but writing preConditions for the changeSets is quite encouraged.
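For example, the adjusted changeset 2 could look roughly like this; the id, author, and index name are made up for illustration, and the element order may need adjusting for your Liquibase schema version:

<changeSet id="2" author="original-author">
    <validCheckSum>ANY</validCheckSum>
    <preConditions onFail="MARK_RAN">
        <columnExists tableName="foo" columnName="domain"/>
    </preConditions>
    <createIndex tableName="foo" indexName="idx_foo_domain">
        <column name="domain"/>
    </createIndex>
</changeSet>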
According to the comment, your problem is caused by changing the Liquibase scripts' directory location.
What actually happens is that Liquibase compares each script's relative path when executing the changesets. You can find this relative path in the databasechangelog table, in the filename column.
The first thing you should understand is that the problem is not with the checksum, so fixing the checksum will not solve it.
The easiest thing you can do is change the values of the filename column in the databasechangelog table. If you have more than one Liquibase script file, I suggest you change them one by one. A simple SQL query like this can do the job:
update databasechangelog set filename='<new_filename>' where filename='<old_filename>'
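If you want to see which paths are currently recorded before changing anything, a quick check could be:

select distinct filename from databasechangelog;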
Note: You can make the situation worse if you do this wrong. Make sure you double-check everything before making any changes.
My application uses Mongock 4.1.19, and whenever there is a changeSet with runAlways=true, duplicate entries get created in the dbchangelog collection.
The line below does not seem to consider the already-executed case and may be resulting in duplicate changelog entries:
https://github.com/cloudyrock/mongock-core/blob/91d15d65a22234f4a2e8d28c759d0641d36750e0/mongock-runner/mongock-runner-core/src/main/java/com/github/cloudyrock/mongock/runner/core/executor/MigrationExecutor.java#L139
Any pointers on how this can be addressed?
The following is logged at startup:
RE-APPLIED - ChangeEntry{...}
It's not really duplicated; Mongock creates one changelog entry per execution.
However, we understand this is not the most commonly desired behaviour, so we are releasing a bugfix (4.3.8) for version 4 in the next few days, probably today.
In version 5, which is under development, we'll keep this behaviour by default, plus update the last_execution field we'll add, and provide the option to insert a new entry per execution if desired.
I am new to pre- and post-deployment scripts.
To understand this, I came across the following:
"When databases are created or upgraded, data may need to be added, changed, or deleted. Moreover, certain actions may have to occur on the database before and/or after the process completes. Deployment scripts can be used to accomplish this."
I want to understand how exactly this works, with an example.
https://www.mssqltips.com/sqlservertutorial/3006/working-with-pre-and-post-deployment-scripts/
As pointed out on that site, a good example of a post-deployment step is the insertion of seed data.
For instance, you create a new currency table as part of the schema migration step. Then you insert the most commonly used currencies (say USD, EUR, etc.) so that they don't have to be inserted with a manual step.
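A minimal sketch of such a post-deployment seed script in T-SQL, written so it can be re-run safely (the Currency table and its columns are made-up names):

IF NOT EXISTS (SELECT 1 FROM dbo.Currency WHERE Code = 'USD')
    INSERT INTO dbo.Currency (Code, Name) VALUES ('USD', 'US Dollar');
IF NOT EXISTS (SELECT 1 FROM dbo.Currency WHERE Code = 'EUR')
    INSERT INTO dbo.Currency (Code, Name) VALUES ('EUR', 'Euro');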
Another example of a post-deployment step is populating data for a newly added column. For example, you add a new column called IsPremium to the Customers table and want to set it to true for all customers whose start date is more than 5 years ago. A post-deployment script is a good place to do that.
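A sketch of that backfill, assuming the table and column names from the example above and a SQL Server-style date function:

UPDATE dbo.Customers
SET IsPremium = 1
WHERE StartDate <= DATEADD(YEAR, -5, GETDATE());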
Similarly, scripts that run before the migration go into pre-deployment scripts. One example is locking certain tables to ensure that the migration script is run only once, or setting a flag to indicate that a migration is in progress.
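As a rough illustration of the flag idea (the single-row DeploymentStatus table is hypothetical and would be created elsewhere):

IF EXISTS (SELECT 1 FROM dbo.DeploymentStatus WHERE MigrationInProgress = 1)
BEGIN
    RAISERROR('A migration is already in progress.', 16, 1);
END
ELSE
BEGIN
    UPDATE dbo.DeploymentStatus
    SET MigrationInProgress = 1, StartedAt = GETDATE();
END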
I have three solutions. One is a schema solution that only has a schema file in it; let's call it the SchemaSolution.
The SchemaSolution is referenced in my other two solutions because Solution1 creates XML instances of the schema from the SchemaSolution and drops them, self-correlating, into the MessageBox.
This works magically, but if I want to update one of the solutions in which the SchemaSolution is referenced (deploy to BizTalk), I always have to delete the other solutions. This is horrible, and I was not able to find a solution until now.
Is there a (non-hacky) way? I thought about merging all projects into one solution, but that is the worst-case scenario I can imagine for achieving my goal.
How can I deploy a project that is referenced in different solutions without deleting and redeploying everything?
BizTalk 2013 R2 is in use.
No, this is not supported, and it's not recommended to try to hack your way around it (you would definitely need to alter the BizTalk databases, which I don't think Microsoft even allows).
I can give you 3 options:
Make the SchemaSolution as small as possible, for instance by breaking it down into multiple schema solutions per process, so the chances of you needing to change a solution are smaller. Ideally, in this solution you would have 1 assembly/project per schema, so new schemas can be added without redeploying.
Another option would be to duplicate your schemas into your projects. This is a design choice you could make, but it would require some more work: you need to specify the schemas in your pipelines (or else BizTalk doesn't know which one you mean), and you have the double work of changing the same schemas in multiple projects. The downside is that the schemas are not the same to BizTalk, so you can't use them in another project without a reference.
Your final option would be to get rid of the dependency on that schema completely. You can do this by creating your own internal/generic/CDM schema, which ideally would be more robust and less prone to changes. This schema would still be referenced by multiple projects, but since you're the one in charge of it, you can predict and mold it to your liking. Again, ideally, in this solution you would have 1 assembly/project per schema, so new schemas can be added without redeploying.
I have a very similar (if not the same) issue within a solution.
I have a set of integration projects dependent on a simple schema project. If I deploy one integration project, I must deploy the schema project, which means I must deploy all integration projects!
In order to deploy them independently, I simply turned the Redeploy flag from True to False in the properties (in Visual Studio) of the schema project.
This allows me to redeploy as many other dependent projects as I like without having to delete or mess around. I can deploy a single integration project with no effect on the others.
The only caveat is that when you redeploy, for some reason VS flags the fact that you have set Redeploy to False on the schema project as an error and says that one of the projects was not deployed.
It's not a true error, more of a warning imo.
I have been doing this in BizTalk 2016; I would assume you can do the same in 2013.
I have a conventional Spring Batch job where I read from a database, process domain objects, and write them out to a file.
I need to slightly tweak the functionality during the processor phase so that I can update and commit the domain object to the database and then write it out to a file. I would need the commit to happen instantly, as I would require the database ID for the write phase.
When I tried updating the domain object and saving it, I noticed that the entity was getting committed after the write phase.
Is there any way to force the commit to happen instantly during the processor phase and continue as before?
I would need the commit to happen instantly as I would require the database ID for the write phase.
I am not sure which ID you need, as you should already have one when you are trying to update an (existing) entry.
If you meant insert, you can work around this issue by using database-specific functions to get the ID of the inserted but not yet committed object,
e.g. for Oracle: Obtain id of an insert in the same statement
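A minimal PL/SQL sketch of that idea; the orders table, its columns, and the orders_seq sequence are illustrative names:

DECLARE
  v_id orders.id%TYPE;
BEGIN
  INSERT INTO orders (id, customer_id, total)
  VALUES (orders_seq.NEXTVAL, 42, 99.95)
  RETURNING id INTO v_id;
  -- v_id now holds the generated id, even though nothing is committed yet
END;
/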
When I tried updating the domain object and saving it, I noticed that the entity was getting committed after the write phase.
That is the desired behaviour, because the write part is the last one within the (chunk) transaction: if it is successful, commit; if not, roll back. Imagine a successful commit followed by a problem with the file: the item in the database would be left in the wrong state.