How to deal with previously valid Flyway migrations that became invalid - postgresql

I have a Spring Boot application that uses Flyway for database migrations in Postgres.
It's about four years old now, so we're talking Flyway 4.0.3, Spring Boot 1.3.x, and Postgres 9.x. Versions could probably be upgraded, but I'd like to fix any existing issues before doing that.
In the meantime, Postgres was upgraded beyond 9.x. Unfortunately, with that, a few of the existing migrations became invalid, as they contain now-unsupported syntax. So starting the app against a fresh database (e.g. in a development environment) leads to those migrations failing. Production is fine, as those migrations have already been applied there and won't run again.
I am curious what the best practice is for handling this. I can't just fix the syntax in the existing migrations, as that would make the checksum validation in the production environment fail. I know repair exists, but I am unsure how it works and how to use it with Spring Boot.
Failing SQL:
UPDATE config
SET (description) = 'my description'
WHERE ...
Correct SQL:
UPDATE config
SET description = 'my description'
WHERE ...
Error:
Message : ERROR: source for a multiple-column UPDATE item must be a sub-SELECT or ROW() expression
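For context: newer PostgreSQL releases (10 and later, as far as I can tell) only accept the parenthesized form of SET when the source is a sub-SELECT or a ROW() expression, which is exactly what the error says; the single-column parenthesized form that 9.x tolerated now fails. Illustrative variants (updated_by is a made-up column):

-- single column: just drop the parentheses
UPDATE config
SET description = 'my description'
WHERE ...

-- multi-column form with an explicit ROW() expression
UPDATE config
SET (description, updated_by) = ROW('my description', 'admin')
WHERE ...

-- multi-column form with a sub-SELECT
UPDATE config
SET (description, updated_by) = (SELECT 'my description', 'admin')
WHERE ...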
EDIT 24/04/2020 Spring Boot solution:
After Grant Fitchey posted the correct answer below about how to use repair, I am just going to add how I did this with Spring Boot specifically. I created a FlywayMigrationStrategy bean that calls repair before it calls migrate:
@Bean
public FlywayMigrationStrategy cleanMigrateStrategy() {
    return flyway -> {
        // Align the stored checksums with the (fixed) scripts on disk, then migrate
        flyway.repair();
        flyway.migrate();
    };
}
When this was deployed to the production environment, the checksums in the schema_version table in Postgres were fixed during startup. In a later version I will remove the flyway.repair(); line again, as otherwise this would obviously create the risk of applying invalid migrations.
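To verify the repair on an environment, you can inspect Flyway's metadata table directly; a quick check might look like this (the table and column names here follow Flyway 4.x's default schema_version layout, so adjust if your configuration differs):

SELECT installed_rank, version, script, checksum, success
FROM schema_version
ORDER BY installed_rank;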

You are looking for the repair option. I don't know exactly how to call it through Spring Boot, but the documentation is here. This should take care of exactly what you're looking for.
So the first step in this case is to fix the migrations so they execute correctly in the development environment. Development should then be fine, and Flyway should migrate successfully.
On production you will now get a validation error because the checksums differ. Flyway repair will 'repair' the schema history table so that the stored checksums match the new ones on disk, and therefore Flyway validation passes again.
Specifically, what flyway repair does is make the schema history table match what you have on disk. It updates all the checksums for applied migrations to those of the scripts on disk (so only use this if you are confident the fixed scripts are functionally identical to what was applied). It also removes all failed-migration entries from the table (again, only use this once you have cleaned up the database yourself).
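Conceptually (this is not Flyway's actual implementation, and the version and checksum values here are made up), the repair amounts to something like the following against the metadata table:

-- Overwrite the stored checksum with the one recomputed from the fixed script on disk
UPDATE schema_version
SET checksum = -1784215535
WHERE version = '1.2';

-- Remove the record of any failed migration attempts
DELETE FROM schema_version
WHERE success = false;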

Related

How to run Prisma schema update without erasing the PostgreSQL data?

I have a PostgreSQL db that is used by a Nest.Js / Prisma app.
We changed the name of a field in the Prisma schema and added a new field.
Now, when we want to update the PostgreSQL structure, I'm running, as suggested by Prisma, the following commands:
npx prisma generate
and then
npx prisma migrate dev --name textSettings-added --create-only
The idea is to use the --create-only flag to review the migration before it is actually made.
However, when I run it I get a list of the changes to be made to the DB and the following message:
We need to reset the PostgreSQL database "my_database" at "my_db_name#cluster.us-east-1.rds.amazonaws.com:5432".
Do you want to continue? All data will be lost.
Of course I choose not to continue, because I don't want to lose the data. Upon inspection I see that the migration file actually contains DROP TABLE for the tables that I simply wanted to modify. How to avoid that?
So how do I run the update without affecting the data?
UPDATE:
I saw that running with --create-only creates a migration which can then be applied at the DB level using prisma migrate dev. However, that migration file still contains some commands that drop my previous tables because of some new parameters inside. How can I run the Prisma migration without deleting my PostgreSQL data?
UPDATE 2:
I don't want Prisma to drop my tables when I have just updated them. The generated migration file, however, drops them and then alters them. What is the best procedure to avoid this drop? I saw somewhere that I could first manually update the DB with the new options and then run the migration, so Prisma can find a way to update it, but that seems too manual to me... Maybe some other ideas?
For some cases, like renaming tables or columns, Prisma's generated migration files need to be edited by hand if the affected tables already contain data.
If that applies to your use case, Prisma's docs suggest the following:
Make updates to the prisma schema
Create migration file without applying it (--create-only flag)
Update the migration script to remove the drops and instead write your custom query (e.g. ALTER TABLE <table_name> RENAME TO <new_name>; see the sketch after this list)
Save and apply the migration (npx prisma migrate dev)
Note that such changes (renaming a field or model) can lead to downtime, for which the docs outline the expand-and-contract pattern.
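As a sketch of the hand-edited migration.sql for a column rename (the table and column names here are made up to match the migration name in the question):

-- The generated DROP/ADD pair would destroy the data:
-- ALTER TABLE "Config" DROP COLUMN "text";
-- ALTER TABLE "Config" ADD COLUMN "textSettings" TEXT;

-- Replace it with a rename that preserves the data:
ALTER TABLE "Config" RENAME COLUMN "text" TO "textSettings";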
It might be a Prisma bug: https://github.com/prisma/prisma/issues/8053
I also ran into this situation recently. It probably should not try to run the migration if you only want to create the migration file.
But overall it is expected with Prisma that your db sometimes gets recreated. If your migration is a breaking one, the data will need to be reset anyway when you apply it.
I suggest creating a seeding script so you can consistently re-create the database state; it's very useful for your development environment (a sketch follows).
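A minimal sketch of such a seed, assuming plain SQL (in a Prisma project this would more typically be a prisma/seed.ts script wired up through the "prisma" > "seed" entry in package.json; the table, columns, and values here are made up):

-- Idempotent seed data, assuming a unique constraint on email
INSERT INTO "User" (email, name) VALUES
  ('alice@example.com', 'Alice'),
  ('bob@example.com', 'Bob')
ON CONFLICT (email) DO NOTHING;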

Any way in GORM to preview the schema DDL that will run during migration before running it?

We are using Golang and GORM for ORM and database schema migrations. The default AutoMigrate is nice for adding tables/columns/indexes that are net new. One big issue with Postgres, specifically AWS Aurora Postgres, is that even a simple ALTER TABLE ... ADD COLUMN with no default and no NOT NULL constraint still takes a full AccessExclusiveLock. This caused us downtime in production for infrastructure that is otherwise deployable at any time with zero downtime.
Postgres claims this won't cause a lock, but I think Aurora has some storage-engine magic that does cause one, probably for read-replica consistency?
Regardless, in our deploy shell script we are trying to see if we can ask GORM to give us the DDL it would run, like a preview, without executing it. Any way to do that?

How do you make prisma redeploy to a database after you've removed all the tables from the database

I used Prisma to generate the schema for a database, but after changing an id column I started getting errors. I deleted the tables and was going to redeploy the schema, but I can't find a way to do that.
I've already tried things like prisma deploy, but it tells me I'm already synced up, which isn't true. I need it to redeploy as if the schema were new, and it seems to not want to do that.
TL;DR: Try prisma delete then prisma deploy.
Prisma creates a management database where it holds all migration-related data. Even if you deleted the tables manually, you didn't change the migration data about your service. Therefore, Prisma thinks everything is up to date.
Instead of manually deleting the tables, you should use prisma delete to let Prisma do a clean delete of your tables. Then you can re-run prisma deploy to rebuild the tables.
Give this a try: "prisma deploy --force"
According to the documentation you have to accept data loss caused by schema changes with --force. But the tables are already deleted in your case, so I think it is ok to try it.

EF - Moving from AutomaticMigrations to Manual Migrations

End of a long day of testing various scenarios where I don't have to recreate a production database...
We started off with EF, and didn't get wise enough during development to move from automatic migrations to named migrations. Now I'm trying to rewind the clock, and create an initial migration that aligns with the production database.
Is it possible to align the hash the model has with the hash an automatic migration has in the migration table?
Should I just create an empty migration to get started with named migrations? My only problem with this is how to create the DB when a new developer joins... I could simply restore the db, and then apply migrations, but that ruins a beautiful EF migration story!
Or: delete the production DB, re-create it, and write a script to re-import the data (sounds hacky).
Another wrinkle - the DB was created with EF5, and we are now developing with EF6.
Thanks in advance for your help.
It should be possible:
Delete the __MigrationHistory table
Delete any migrations in your project
Disable automatic migrations in your migrations configuration class
Add-Migration InitialCreate
Update-Database -Script
Execute the portion of the script that creates the __MigrationHistory table and inserts a row into it (a sketch of that portion follows these steps)
Repeat steps 1 & 6 for any other existing databases
I also strongly recommend reading Code First Migrations in Team Environments.
If you don't like step 7 of bricelam's answer, then keep reading, because upgrading from automatic migrations to manual can also be automated, and you don't need to delete the __MigrationHistory table as bricelam says.
In this case I assume you have one development machine, and multiple other (development) machines where databases already exist.
On your development machine, move your database into a safe location (keeping it safe to test your automatic upgrade procedure). (You might want to make sure your code is in the exact same state as all the databases you want to upgrade.)
Delete any migrations in your project (you shouldn't have any yet)
Disable automatic migrations in your migrations configuration class
Add-Migration InitialCreate
Put a backup (copy!) of your database back into place
On startup of your application (or your upgrade app) you do something like:
using (var db = new YourDatabaseContext())
{
    // If the database already exists, it was created by automatic migrations,
    // so the InitialCreate migration must be skipped
    InitialCreate.SkipInitialCreate = db.Database.Exists();
}
And in your InitialCreate you add something like:
public static bool SkipInitialCreate = false;

public override void Up()
{
    // Skip the initial create if the database already existed
    if (SkipInitialCreate)
        return;

    // ... the generated schema-creation code follows here ...
}
This assumes that if a database exists, it has the same structure as the one on your dev machine. Of course this could be way more complex for you. For more control you could inspect the DbMigrator object for your config and skip more than one migration. (It would have been nice if you could query the hash of the model in the DbMigrator...)
After the code in step 4 add something like:
var configuration = new YourDatabaseConfiguration();
var migrator = new DbMigrator(configuration);
migrator.Update();
Start your application on your dev machine to test the upgrade. Your history table should have an AutomaticMigration record and an InitialCreate record.
If everything works out, you should be able to simply deploy and run the app on machines where the db already exists.
One very important aspect (and maybe the reason bricelam says you need to delete the history table) is that if the AutomaticMigration record in that table is newer than one of your manual migrations, you are going to have a bad time, because the migrator uses the dates to sort out the work it needs to do.
So if there are systems out there still automatically upgrading to the newest auto-upgrade model, then you are in a bit of a pickle.

MVC3 and Code First Migrations - "model backing the 'blah' context has changed since the database was created"

I started off my project by using Entity Framework Code First. When I was ready I uploaded my database and code to my host provider. Everything worked.
I need to add a new field to one of my classes and I don't want to lose the data in the database. Thus, I tried following some blog posts about using Code First Migrations. I did the following:
I backed up my remote (production) database.
I attached this database locally
I added the property to my class
PM> Enable-Migrations
PM> Add-Migration AddSortOrderToCar
PM> Update-Database
At this point I created a .bak file of the local database and then used that file to 'restore' to the remote one.
Lastly, I published the code to the remote site.
When I visit the site I get the following error message:
The model backing the 'blahblah' context has changed since the database was created. Consider using Code First Migrations to update the database.
What am I doing wrong?
From my experience, that suggests the migration table is out of sync (even if your data isn't); it's part of the db schema now (since EF 4.3 I think, under system tables).
There could be many reasons and ways to experience that error, but most of the time...
The problematic part is some combination of manually backing up/restoring the full database while making code changes alongside; I'm not entirely certain why it always happens.
In short, even if the DBs are the same, the migration table data might not be, and the hash comparison may fail (a full restore still sounds like it should be enough, but you have 'two sides').
What works for me is to use
Update-Database -Script
That creates a script with a 'migration difference',
which you can manually apply as an SQL script on the target server database (and you should get the right migration table rows inserted etc.).
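A sketch of what such a 'migration difference' script might contain, using the AddSortOrderToCar migration from the question (the table name, column type, and history-row values are made up; EF6 adds a ContextKey column to the history insert):

ALTER TABLE [dbo].[Cars] ADD [SortOrder] [int] NOT NULL DEFAULT 0;
INSERT INTO [__MigrationHistory] ([MigrationId], [Model], [ProductVersion])
VALUES (N'201404010000000_AddSortOrderToCar', 0x1F8B08, N'5.0.0');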
If that still doesn't work - you can still do two things...
Remove the migration table on the target (under system tables), as per the comments in http://blogs.msdn.com/b/adonet/archive/2012/02/09/ef-4-3-automatic-migrations-walkthrough.aspx - that should fall back to the previous behavior, and if you're certain that your DBs are the same, it's just going to 'trust you'.
As a last resort: make an Update-Database -Script of the full schema (e.g. by initializing an empty db, which should force a 'full script'),
find the INSERT INTO [__MigrationHistory] records,
run just those inserts against the database,
and make sure that your databases and code match -
that should make things run in sync again.
(Disclaimer: this is not bulletproof and may not work at all times; you may need to try a few things given your local scenario, but it should get you in sync.)
I think in step 6 you need to run Update-Database -Verbose
Also, this link is very helpful for updating a database in EF with scaffolding:
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-entity-framework-scaffolding-and-migrations