Spring Data, DbUnit and Liquibase for integration tests

I want to write integration tests for my Spring Data repositories, but I'm using Liquibase to handle the incremental changes to the database. In this link it is necessary to use a Liquibase changelog, but I don't know what happens if there are many changesets inside the changelog file. And what if you add a column to a table in the future? How can I handle that? I cannot find more information about combining Liquibase and DbUnit, so I don't know how to use them together. Could you give me any ideas?
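A minimal sketch of one way to combine them (the repository, entity id, dataset file and changelog location below are assumptions, not from the question): let Liquibase build the schema by running the full changelog, and use DbUnit purely for seeding and verifying data. Because every changeset executes in order, a changeset that later adds a column is applied automatically before the tests run. With Spring Boot, assuming its Liquibase auto-configuration points at the master changelog in the test properties, that can look roughly like this:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringRunner;

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;

@RunWith(SpringRunner.class)
@SpringBootTest
// Register the spring-test-dbunit listener next to Spring's default listeners.
@TestExecutionListeners(value = DbUnitTestExecutionListener.class,
        mergeMode = TestExecutionListeners.MergeMode.MERGE_WITH_DEFAULTS)
public class PersonRepositoryIT {

    @Autowired
    private PersonRepository personRepository; // hypothetical Spring Data repository

    @Test
    @DatabaseSetup("/datasets/persons.xml") // DbUnit dataset inserted into the Liquibase-built schema
    public void findsSeededPerson() {
        assertTrue(personRepository.findById(1L).isPresent());
    }
}

As long as a newly added column is nullable or has a default, existing DbUnit datasets keep working, since DbUnit only inserts the columns listed in the dataset; you extend a dataset only when a test actually needs the new column.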

Related

Flyway migration: testing a particular migration's data transformation

I have a migration that transforms a complex jsonb object into another jsonb object with a different structure and stores it in a new column, existing_column_name_v2.
How do I test it to make sure I did everything correctly?
Here are the steps I am thinking of right now:
Apply migrations up to the last one
Feed the data
Apply last migration
Run tests in language of choice and compare results
I am not sure how to execute just the migrations I need with Flyway.
To migrate to a specific version, use flyway migrate target=<version> as described in the Flyway docs.
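If the tests drive Flyway from code rather than the command line, the same target option is available on the Java API. A rough sketch of the steps above (the connection details and version numbers are assumptions):

import org.flywaydb.core.Flyway;

public class JsonbMigrationHarness {

    public static void main(String[] args) {
        // Assumed throwaway test database; adjust URL and credentials as needed.
        String url = "jdbc:postgresql://localhost:5432/migration_test";

        // 1. Apply every migration up to the one just before the jsonb transformation.
        migrateTo(url, "41"); // hypothetical version preceding the transformation

        // 2. Feed the old-format jsonb data here (plain JDBC inserts, not shown).

        // 3. Apply the transformation migration itself.
        migrateTo(url, "42"); // hypothetical version of the transformation

        // 4. Compare existing_column_name_v2 against the expected structure in your test code.
    }

    private static void migrateTo(String url, String version) {
        Flyway.configure()
                .dataSource(url, "test", "test")
                .target(version) // migrate only up to (and including) this version
                .load()
                .migrate();
    }
}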

Stored procedure for PostgreSQL on Liquibase Community

I read on Wikipedia that you need the Commercial version of Liquibase to deal with stored procedures. Can anybody please comment on this?
Thanks
https://en.wikipedia.org/wiki/Liquibase
No, you don't.
I typically put the code that creates the function or procedure into a SQL script and then use <sqlFile> to run it. The changeSet itself is defined with runOnChange="true", so I only need to edit the file to make Liquibase apply the changeset again:
<changeSet id="1" author="foo" runOnChange="true">
    <sqlFile path="procs/create_function.sql"
             relativeToChangelogFile="true"/>
</changeSet>
I do the same with views and materialized views.
Liquibase community manager here.
As described in the answer by #a_horse_with_no_name, it is entirely possible to write a Liquibase changelog that creates stored procedures that will work just fine in the free version.
To do that you can use the XML changelog syntax with a <sql> or <sqlFile> tag, or you could use a formatted sql changelog.
The thing that the Pro version of Liquibase introduces is the ability to use the generateChangeLog and diffChangeLog commands to "reverse-engineer" stored logic (including stored procedures) from an existing database, generating XML changelogs that use the <createProcedure> tag.
Yes. There are two options available for stored procedures/stored logic: Liquibase Pro or Datical. You can get a free trial of Liquibase Pro at www.liquibase.org to make sure it works for you.

Liquibase from JPA classes, no database yet

We started our application from our model classes, annotated with JPA annotations. We have not created any tables in the database yet.
Now we would like to somehow generate a Liquibase changelog by looking only at the JPA classes, and maybe at the persistence.xml file.
Most of the questions and answers about Liquibase on SO suggest running Liquibase and comparing against the current state of the database. But that's not our case, because our database has none of the tables corresponding to the JPA entities, nor any of the Liquibase control tables.
How do I generate a liquibase changelog file from the JPA entities?
You can take a look at the liquibase-hibernate plugin: https://github.com/liquibase/liquibase-hibernate/wiki
It lets Liquibase read your JPA entities (via a hibernate: reference URL) as if they were a database, so you can diff the entities against an empty database and generate the whole changelog for them.

How does liquibase handle execution from multiple users?

We have a PostgreSQL database that has consistently been updated using liquibase by a single user (a DB owner role). We recently ran an update command as another user (a system owner) and it registered as if it was a new DB/schema, meaning that liquibase tried to execute all changesets since the beginning, not just those that we expected to be the last few that were not in the databasechangelog table. Obviously this failed since those changesets had already been applied as the other user. However, it raised the question of how to handle this. Do you know why it's doing this? Is this a DB-specific issue or is this an issue at the liquibase level? Or is this an issue at all and we should accept as part of our business processes that all updates to a particular DB need to be executed by the same user?
Liquibase determines which changeSets have run by selecting from the DATABASECHANGELOG table. My guess is that the new user has a different default schema and so Liquibase is looking in a different place for that table.
Depending on how you run Liquibase, there is a changelogSchemaName or similar attribute to control where Liquibase looks for the table.
It appears that Liquibase is user-agnostic: that information is neither recorded nor needed in the DATABASECHANGELOG table.
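If Liquibase is run from a Spring application, one way to take the connecting user's default schema out of the equation is to pin the schema on the SpringLiquibase bean (a minimal sketch; the schema name and changelog path are assumptions):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import liquibase.integration.spring.SpringLiquibase;

@Configuration
public class LiquibaseConfig {

    @Bean
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:db/changelog-master.xml"); // assumed changelog location
        // Pin the schema so DATABASECHANGELOG is resolved in the same place
        // no matter which user's search_path is in effect.
        liquibase.setDefaultSchema("public"); // assumed schema name
        return liquibase;
    }
}

When running from the command line instead, the schema-name parameter mentioned in the answer above serves the same purpose.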

How to migrate existing data managed with Squeryl?

A small project of mine, based on Squeryl (a type-safe relational database framework for Scala, a JVM-based language), is reaching its release.
I foresee multiple updates after the initial deployment. The data entered into the database should persist across them, which is impossible without some kind of data migration procedure that upgrades the data to the newer DB schema.
Using old data for testing new code also requires compatibility patches.
Right now I use the framework's automatic schema generation. It seems only able to create the schema from scratch; no data persists.
Are there methods that allow easy and formalized migration of data to changed schema without completely dropping automatic schema generation?
So far I can only see an easy way to add columns: we dump old data, provide default values for new columns, reset schema and restore old data.
How do I delete, rename, change column types or semantics?
If schema generation is not useful for production database migration, what are standard procedures to follow for conventional manual/scripted redeployment?
There have been several discussions about this on the Squeryl list. The consensus tends to be that there is no real best practice that works for everyone. Having an automated process to update your schema based on your model is brittle (can't handle situations like column renames) and can be dangerous in production. Personally, I like the idea of "migrations" where all of your schema changes are written as SQL. There are a few frameworks that help with this and you can find some of them here. Personally, I just use a light wrapper around the psql command line utility to do schema migrations and data loading as it's a lot faster for the latter than feeding in the data over JDBC.
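To illustrate the "light wrapper around the psql command line utility" idea, here is a hypothetical Java sketch (not the answerer's actual tool; the migrations directory and database name are assumptions) that applies every .sql file in a directory in name order and stops on the first error:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PsqlMigrator {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path migrationsDir = Paths.get("migrations"); // assumed directory of numbered .sql scripts

        List<Path> scripts;
        try (Stream<Path> files = Files.list(migrationsDir)) {
            scripts = files.filter(p -> p.toString().endsWith(".sql"))
                           .sorted() // relies on a sortable naming scheme, e.g. 001_..., 002_...
                           .collect(Collectors.toList());
        }

        for (Path sql : scripts) {
            System.out.println("Applying " + sql.getFileName());
            // -v ON_ERROR_STOP=1 makes psql abort on the first failed statement.
            Process psql = new ProcessBuilder(
                    "psql", "-v", "ON_ERROR_STOP=1", "-d", "myapp", "-f", sql.toString())
                    .inheritIO()
                    .start();
            if (psql.waitFor() != 0) {
                throw new IllegalStateException("Migration failed: " + sql);
            }
        }
    }
}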