In Azure DevOps, the way I used to update the SQL Server database was with Entity Framework Core, using two tasks:
In my Build Pipeline: a task that generated a SQL script with my DB migrations.
In the Release Pipeline: a task that updated the database using that script.
The thing is, now that I'm using a PostgreSQL database, I can't find an easy and clean way to update the database in the same way. I've seen there's a task for MySQL that does exactly what my release pipeline task did with SQL Server, but nothing for PostgreSQL.
So I thought I could basically execute dotnet ef database update (with its proper options set) in the pipeline, but I was wondering if there's actually a way to keep updating the database as smoothly as I did before.
I finally managed to fix it. I found two solutions to the issue.
First, there's a common fix for all the databases that support Entity Framework migrations:
Using a .NET Core task, we'll have to install the dotnet-ef tool. This would be the YAML for the task (in case you want to use it outside of the release pipeline):
- task: DotNetCoreCLI@2
  displayName: 'dotnet custom'
  inputs:
    command: custom
    custom: tool
    arguments: 'install --global dotnet-ef --version 3.1.4 --ignore-failed-sources'
And once we have the required tools installed, with a CMD or a Bash Task, we'll have to execute a script like this:
dotnet ef database update -c <DBCONTEXT> -p <PROJECT> -s <STARTUP_PROJECT> -v --no-build
You just have to add the -c flag in case you have more than one context in your project (sometimes the other DbContexts can come from NuGet packages).
Notice I added the flag --no-build since I already built the project in the build pipeline to follow good practices.
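For reference, a Bash task wiring that command into the pipeline could look like the sketch below. The project paths, the context name, and the CONNECTION_STRING variable are assumptions; adapt them to your setup:
- task: Bash@3
  displayName: 'Apply EF Core migrations'
  inputs:
    targetType: inline
    script: |
      # Assumes dotnet-ef was installed globally in a previous task
      # and the project was already built (hence --no-build).
      dotnet ef database update \
        -c MyDbContext \
        -p ./src/MyData/MyData.csproj \
        -s ./src/MyApp/MyApp.csproj \
        -v --no-build
  env:
    # Maps to ConnectionStrings:Default in the startup project's configuration.
    ConnectionStrings__Default: $(CONNECTION_STRING)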
The other option (and the one I finally used) was to use this task, which basically does the same process, with the difference that it uses your already compiled .dll files, so you won't have to copy the entire project to make the migrations work. The setup of the task, although you have to fill in many inputs, is pretty straightforward, and it's supposed to work with other databases as well.
However, if I had to use SQL Server or MySQL I would use a migrations script, since the process is much easier (you just need to generate a .sql script, and then it's the only file required for deploying the migrations).
Context
I'm working on a pipeline to manage a .NET Core project. In my pipeline, at the build stage, I run dotnet ef migrations script --context "MyDbContext" --project "./MySolution/MyDataProject/" --output "./migrationScript.sql" --idempotent to generate a script which can later be run against the databases of the various test, staging, and production environments to sync their schema (I use the same approach in non-production environments as in production to ensure those scripts are tested).
It's my understanding that these scripts aren't generated based on the database context, but rather use the migrations which are already defined in the project; with those migrations having been defined via the add command (e.g. dotnet ef migrations add vNext --context "MyDbContext" --project "./MySolution/MyDataProject/"). As such, it's possible that a developer may make changes to the data model, forget to run the migrations add command to add a new migration, and thus create a solution that will fail to update the schema.
Question
Is there a way to test if the migrations are in sync with the code / if a new migration is needed?
Thoughts around Solutions
I'd hoped that by running dotnet ef migrations add when there are no changes to the dbContext there would be no output; so I could test for the presence of a new file and have the pipeline terminate if this had been missed (checking nothing into git; so the new migration isn't persisted / it's left for the developer to manually rerun this).
Looking at the available options, there's only add, remove, list, and script; so nothing obvious for performing this check.
I could run the add command before the script command in my pipeline to ensure it's been run; but then there may be issues with future runs, since if the added migration isn't pushed to git, the next iteration won't be aware of this migration.
If I were to push this new migration to git that resolves that issue; but then creates a new issue that every build creates a new migration, so we'll have a lot of redundant clutter / this approach won't be sustainable.
The best solution I can think of is to run the add command, then inspect the generated scripts to see if the Up or Down methods have anything in the functions' bodies; but that feels hacky; and I'm not certain it's enough (e.g. do the methods in the .Designer.cs file ever change without producing anything in Up/Down in the .cs file?).
I'm sure MS would have created something in the toolset for this, but I'm struggling to find it. Thank you in advance for any help.
Existing / Similar Answers
In EF Core, how to check whether a migration is needed or not? - This post answers the question about applying migrations, but not about generating them.
Something that appears to work with a few simple test cases I've tried is to add a test migration, and then check if the <ContextName>ModelSnapshot.cs file has changed in the migrations directory.
That file only appears to change when the core migration has been amended.
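A minimal pipeline sketch of that check might look like this. The context name, project path, and the use of git to detect the snapshot change are assumptions:
- task: Bash@3
  displayName: 'Fail if a migration is missing'
  inputs:
    targetType: inline
    script: |
      # Add a throwaway migration; if the model and the existing migrations
      # are in sync, the tracked model snapshot should be left untouched.
      dotnet ef migrations add PipelineCheck \
        --context MyDbContext \
        --project ./MySolution/MyDataProject/
      # If git reports a change to the snapshot, a real migration was needed.
      if ! git diff --quiet -- '*ModelSnapshot.cs'; then
        echo "Model changes detected without a corresponding migration."
        exit 1
      fi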
I am using Entity Framework code first. I am able to update my database during development using Update-Database from the NuGet Package Manager Console. Now I want the database to be updated when building from my Azure Pipeline. Is there any documentation from Microsoft on this? How are others handling this?
(Not sure how much information you are looking for, so ask a follow-up/add a comment if this isn't enough.)
Server-side (e.g. in CI pipelines) as well as client-side, this can be done with the CLI instead of the VS-specific Package Manager Console:
dotnet ef database update --verbose
See Entity Framework Core tools reference - .NET Core CLI: dotnet ef database update for further information.
Generally, I would discourage anybody from automatically applying database migrations to a production environment.
Migrations should always be scripted to a file via dotnet ef migrations script, thoroughly checked and tested and only then be applied directly from this script (after creating a backup) to a production database.
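For example, a script-based flow could look roughly like the sketch below. The file name is an assumption, and you should use the client appropriate for your database (psql for PostgreSQL, sqlcmd for SQL Server):
- task: Bash@3
  displayName: 'Generate and apply a reviewed migration script'
  inputs:
    targetType: inline
    script: |
      # Build stage: generate an idempotent script that is safe to re-run.
      dotnet ef migrations script --idempotent --output ./migrations.sql
      # Release stage (only after review, testing, and a backup): apply it.
      psql "$CONNECTION_STRING" -f ./migrations.sql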
I have been using Fluent Migrator (version 3.2.1) for some time, and in my Visual Studio environment I use dotnet-fm to migrate or roll back my migrations. This is all great, but now I want to automate this and use Azure DevOps Pipelines to run the migration commands, and I don't know how and where to start.
Has anyone done this and could be kind enough to point me in the right direction, maybe with some examples. I would greatly appreciate it!
How do I build an Azure DevOps Pipeline with a Fluent Migrator task?
Not sure if what I did is exactly what you want. You could check if the information below is helpful.
According to the document Quickstart of fluentmigrator:
Create a .NET Core library project and add the packages FluentMigrator, FluentMigrator.Runner, FluentMigrator.Runner.SQLite, and Microsoft.Data.Sqlite.
Create a file called 20180430_AddLogTable.cs.
Build the project.
Open a cmd window, switch to the project folder, and then execute the command line:
dotnet tool install -g FluentMigrator.DotNet.Cli
After installing FluentMigrator.DotNet.Cli, execute the command line:
dotnet fm migrate -p sqlite -c "Data Source=test.db" -a ".\bin\Debug\netcoreapp2.1\test.dll"
It works fine on my local side.
Then, submit the solution to the Azure DevOps repo and create a pipeline with the following tasks:
NuGet tool installer
NuGet Restore
Dotnet build
Command line task with the following script:
cd $(Build.SourcesDirectory)/test/test
dotnet tool install -g FluentMigrator.DotNet.Cli
dotnet fm migrate -p sqlite -c "Data Source=test.db" -a ".\bin\Debug\netcoreapp2.1\test.dll"
It works the same as it does locally.
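If you prefer a YAML pipeline over the classic editor, the equivalent steps might look like this sketch (the repo layout, target framework folder, and assembly path are assumptions carried over from the example above):
steps:
- task: NuGetToolInstaller@1
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
- script: |
    cd $(Build.SourcesDirectory)/test/test
    dotnet tool install -g FluentMigrator.DotNet.Cli
    dotnet fm migrate -p sqlite -c "Data Source=test.db" -a ".\bin\Debug\netcoreapp2.1\test.dll"
  displayName: 'Run Fluent Migrator'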
Hope this helps.
So I have Dev and Staging environments (Azure DevOps).
The CD pipeline generates a migrations script from the Dev environment DB. The Staging release pipeline then executes this script to bring the Staging DB up to date.
The generated script contains all the migrations (it is not a --from, --to script).
Although the command generating the migrations script uses the --idempotent parameter, to avoid executing migrations that were already applied to the Staging DB, some queries would still cause errors (when being syntax checked), for instance when they use some table properties that no longer exist.
Is there any way to completely bypass (not execute) the already applied migrations?
I don't want to go with --from/--to when generating the migration script, as the CD pipeline (using the Dev environment) can't know what has or hasn't been applied in the Staging environment. That would necessitate writing a complex dedicated PowerShell script (no time for it).
Based on my experience, and confirmed with my colleagues, I'm afraid that if you don't want to use --from/--to in the generated migration script or use a PowerShell script to achieve this, there is no other method that can bypass the already applied migrations and apply only the new ones.
If there's a script that can achieve this on the local command line, it can also be used in Azure DevOps. In this case, if you don't want to use --from/--to in the EF migration command, a PowerShell script would be the only way to achieve what you want to do.
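As a rough illustration of what such a script could do, the sketch below queries EF Core's default __EFMigrationsHistory table in Staging to find the last applied migration, then scripts only what comes after it. The connection variable and project path are assumptions, and the quoted identifiers assume the Npgsql provider's default casing:
- task: Bash@3
  displayName: 'Script only unapplied migrations'
  inputs:
    targetType: inline
    script: |
      # Ask the Staging database which migration was applied last.
      LAST=$(psql "$STAGING_CONNECTION" -t -A \
        -c 'SELECT "MigrationId" FROM "__EFMigrationsHistory" ORDER BY "MigrationId" DESC LIMIT 1')
      # Generate a script covering only the migrations after that one.
      dotnet ef migrations script "$LAST" --idempotent \
        --project "./MySolution/MyDataProject/" --output "./migrationScript.sql"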
I want to create a one-click deployment on Azure Pipelines to move PostgreSQL changes from the dev to the QA environment, similar to what we implement using a SQL Server Database project, where a PowerShell script deploys the changes to the remote server.
I have tried the pg_dump and psql commands, which will create a dump file and restore it on the remote server. This does not perform diffing, i.e. comparing database changes on the source and destination, and replicating only the missing changes.
You've stumbled upon one of the features lacking in the Postgres ecosystem. One of the more elegant ways to solve migrations using Postgres' own tooling is to package up your migrations as a Postgres Extension. This requires you to generate the deployment scripts yourself, but it is a neat way of applying and packaging up the deployments.
There are a number of commercial tools that will assist in this process, such as Datical, Liquibase, and Flyway. Note that some of these still require you to generate the change statements yourself, while others attempt to create them for you.
Generating change statements is a whole different animal, and I recommend you look at schema diffing tools for Postgres to find what best suits your needs.
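To give a feel for the migration-runner route mentioned above, a Flyway step in a pipeline could look roughly like this sketch. It assumes the Flyway CLI is available on the agent and that hand-written versioned scripts live in ./sql; all variable names are placeholders:
- task: Bash@3
  displayName: 'Apply versioned PostgreSQL migrations with Flyway'
  inputs:
    targetType: inline
    script: |
      # Flyway tracks applied versions in its own history table,
      # so re-running this step only applies new scripts.
      flyway migrate \
        -url="jdbc:postgresql://$(DB_HOST):5432/$(DB_NAME)" \
        -user="$(DB_USER)" -password="$(DB_PASSWORD)" \
        -locations="filesystem:./sql"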