EF Migrations with Azure Pipeline Tasks - entity-framework

It appears that it's not straightforward to get migrations running in a YAML build pipeline with tasks: I'm getting the following error when I try to create a task that runs dotnet ef migrations.
==============================================================================
Task : PowerShell
Description : Run a PowerShell script on Linux, macOS, or Windows
Version : 2.151.1
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/powershell
==============================================================================
Generating script.
========================== Starting Command Output ===========================
##[command]"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command ". 'C:\agent\_work\_temp\a2cda363-57d3-416e-80ae-fb82a650ff74.ps1'"
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-ef does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
##[error]PowerShell exited with code '1'.
##[section]Finishing: Roll back migrations
On the build server, I have dotnet-ef installed globally, so I'm not sure at this point why it's giving me this error. Below is the YAML script.
- task: PowerShell@2
  displayName: "Apply migrations"
  inputs:
    targetType: 'inline'
    script: |
      dotnet ef database update --project $(Build.SourcesDirectory)\DataLayer\DataLayer

I had some confusion until I followed this article. With CI/CD pipelines there is a clear line between build (CI) and deploy (CD), which EF Migrations straddle. The build is like baking a pizza and is responsible for producing artifacts, while the release is like delivering the pizza and is responsible for deploying and installing the artifacts on the target. Running ef database update in an Azure DevOps build pipeline is like trying to deliver a pizza that is still in the oven: it might work, but it will probably burn the car and make folks hangry.
The build phase can't know where or when the artifacts will be used, so they need to be packaged with everything they might need. We also want the freedom to deploy to any supported environment without a new build so that we can react as needed (testing, high volume, disaster recovery, etc.). Further, the correct database type, its state, and its secrets are only known to the release phase at the moment of deployment to the target server, so any DB operations should be the release's responsibility.
The ef database update command performs two tasks:
A CI task when it generates scripts that can be run on the DB using the connection settings in the active configuration.
A CD task when it applies the scripts to the target database.
So, we need to split the EF migration into separate tasks that can be performed in the appropriate mode. Fortunately dotnet-ef provides the ef migrations script command to package the updates into a file that can be included as a build artifact. We also want to run the migrations only when needed and not wipe out data every time we commit code, so the --idempotent flag is exactly what we need:
idempotent
(computing) Describing an action which, when performed multiple times, has no further effect on its subject after the first time it is performed.
Awesome, but we still need to run the dotnet-ef command. As with the article, I had issues running the EF command as a dotnet task, so I followed the advice and used a basic script command. The resulting build YAML is:
steps:
- script: 'dotnet tool install --global dotnet-ef'
  displayName: Install dotnet EF
- script: 'dotnet ef migrations script --idempotent --output migrations.sql --project pathto\aproject.csproj'
  displayName: Create EF Scripts
- task: CopyFiles@2
  displayName: 'Copy EF Scripts to Staging'
  inputs:
    Contents: |
      **\migrations.sql
    TargetFolder: '$(build.artifactstagingdirectory)'
    flattenFolders: true
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
And then in the release, each stage needs to apply the script to the target database. The real power in this is that the same build can be pushed to multiple servers or environments where any custom configuration can be applied. In this example, a development Azure SQL instance:
- task: SqlAzureDacpacDeployment@1
  displayName: 'Execute EF Migrations'
  inputs:
    azureSubscription: '$(azureSubscription)'
    ServerName: '$(ServerName)'
    DatabaseName: '$(DatabaseName)'
    SqlUsername: '$(SqlUsername)'
    SqlPassword: '$(SqlPassword)'
    deployType: SqlTask
    SqlFile: '$(System.DefaultWorkingDirectory)\**\migrations.sql'
Now we can predictably deploy our build to any environment with the click of a button, and we have more time to eat the pizza (once it has been delivered).

It turns out that it's a Windows user issue. .NET Core 3 no longer ships dotnet-ef as part of the SDK; I knew this and installed dotnet-ef as a global tool, but apparently a global tool install is specific to the user on a machine. The dev who created the build pipeline needed to install dotnet-ef globally under their own user on the same machine to get this working. Hope this helps someone else in a collaborative environment.
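One way to sidestep the per-user issue entirely is to have the pipeline install the tool into a folder it controls and invoke it from there, so the build no longer depends on whose user profile the tool was installed under. This is only a sketch under that assumption; the tool path below is illustrative:
- task: PowerShell@2
  displayName: "Apply migrations"
  inputs:
    targetType: 'inline'
    script: |
      # Install dotnet-ef into an agent-controlled folder (a re-run may need 'dotnet tool update' instead)
      dotnet tool install dotnet-ef --tool-path $(Agent.ToolsDirectory)\dotnet-ef
      # Call the tool by full path so no user-specific PATH entry is required
      & "$(Agent.ToolsDirectory)\dotnet-ef\dotnet-ef" database update --project $(Build.SourcesDirectory)\DataLayer\DataLayer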

Related

Unable to find package {3rd Party Package}. No packages exist with this id in source(s)

I'm currently working on a project that has an Azure Artifacts feed, specifically NuGet packages, and I'm using DotNetCoreCLI@2 for dotnet restore and build. The restore succeeds, but the build always fails; please see the screenshot below. I don't know why it fails.
I've also included the vstsFeed in the build stage, however it is still failing.
This is my YAML file:
You might be missing some key concepts of a multi-stage pipeline:
Stages are a good way of thinking of your entire continuous-delivery process, e.g. BUILD -> DEV -> TEST -> PROD. Some teams use stages to represent environments. Stages run in sequence or in parallel and can have dependencies between them to control their order. Stages must contain at least one job. If approval gates are applied, approvals are required for the entire stage.
Jobs are often used to group large related activities together, like constructing a build artifact that will be used in subsequent stages, deploying into an environment, running an automated regression suite, performing a security scan, etc. Jobs consist of one or more steps. The main advantage of having multiple jobs in a single stage is parallelism, or the ability to re-run all jobs in the stage or just the failing ones.
Steps are the individual activities within a job.
The key thing you're missing here is that, unless you are running in a self-hosted build-agent pool with only one build agent, each "job" runs on a different machine. So performing a restore on one machine and then compiling on another machine will always fail.
The process you want:
NuGetAuthenticate. This creates a nuget.config on the build agent that points to the vstsFeed
DotNet Restore. This pulls the packages from the vsts feed to the build agent so that the solution has all the dependencies it needs to compile.
DotNet Build. Compile the project file using the dependencies.
- stages: "BUILD"
job: "BUILD"
steps:
- task: NuGetAuthenticate#1
displayName: 'Setup NuGet to use Azure Artifacts'
- task: DotNetCoreCLI#2
displayName: 'Restore NuGet Packages'
inputs:
command: restore
projects: '**/*.csproj'
vstsFeed: '<<GUID>>'
- task: DotNetCoreCLI#2
displayName: 'Compile'
inputs:
command: 'build'
projects: '**/*.csproj'
Next, add some tests, code scanning and then publish a 'build artifact' that can be downloaded at the start of the next stage.
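As a sketch of that last step (the artifact name 'drop' and the staging directory are common conventions, not something from the question), the publish at the end of the steps list above could look like:
    - task: PublishBuildArtifacts@1
      displayName: 'Publish build artifact'
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'drop'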

Terraform: Error while loading schemas for plugin components

I have an Azure DevOps Build pipeline that publishes the entire repository as an artifact to be used with the Release pipeline.
# Publish artifacts to be used in release
- task: PublishBuildArtifacts@1
  displayName: 'publish artifacts'
  inputs:
    PathtoPublish: '$(System.DefaultWorkingDirectory)'
    ArtifactName: 'TerraformModule'
    publishLocation: 'Container'
The build pipeline triggers a release pipeline where I try to deploy the Terraform configuration.
I can successfully run terraform init in this pipeline but when I try to run plan or apply, I get the following error:
Looking at the screenshot, it looks like it tries to execute the command from /usr/local/bin instead of what I specified in the step? I'm confused by this. Below is the YAML for my plan step:
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV3@3
  displayName: 'terraform plan'
  inputs:
    provider: aws
    command: plan
    workingDirectory: '/home/vsts/work/r1/a/_terraform/TerraformModule/Projects/Potentium/Prod'
    environmentServiceNameAWS: 'AWS-Terraform-Build'
I manually changed workingDirectory to where the Artifacts from the build pipeline were downloaded to. See log below for example:
2022-08-14T23:41:31.3359557Z Downloaded TerraformModule/Projects/Potentium/Prod/main.tf to /home/vsts/work/r1/a/_terraform/TerraformModule/Projects/Potentium/Prod/main.tf
The plan step in my build pipeline executes without any issues, so I have a feeling it is something to do with the artifact extraction that occurs in the download step. Looking for any advice.
I've had similar issues with the extraction phase when using ExtractFiles@1 to do a similar thing with Terraform. I think there's a bug in it: I could not get it to extract files back to the root of System.DefaultWorkingDirectory unless the root folder was included in the archive (I am using ArchiveFiles@2). So I was ending up with /opt/az_devops/_work/*/s/s.
My solution was to shell out to a command to do the extraction; no problems extracting to the root of System.DefaultWorkingDirectory.
Just remember, if you're running a subsequent terraform plan, the working directory System.DefaultWorkingDirectory will by default change between runs, so make sure you use these variables rather than an explicit path.
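A rough sketch of both points; the archive name is hypothetical and the folder layout is taken from the paths in the question:
steps:
# Shell out for the extraction instead of using ExtractFiles@1 (archive name is an assumption)
- script: unzip -o '$(System.DefaultWorkingDirectory)/_terraform/TerraformModule.zip' -d '$(System.DefaultWorkingDirectory)'
  displayName: 'Extract artifact to the working directory root'
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV3@3
  displayName: 'terraform plan'
  inputs:
    provider: aws
    command: plan
    # Build the path from the pipeline variable instead of hard-coding /home/vsts/work/r1/a
    workingDirectory: '$(System.DefaultWorkingDirectory)/_terraform/TerraformModule/Projects/Potentium/Prod'
    environmentServiceNameAWS: 'AWS-Terraform-Build'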

How to generate DACPAC file

I'm trying to deploy my project in Azure DevOps through an IIS website and SQL deployment. However, I am struggling with the SQL part of the deployment, as I do not have a .dacpac file in my build artifacts. How do I generate one? Every option I have tried has ended with the process failing.
P.S. I do not have access to the VM where I am deploying due to restrictions. I can access the database as I am marked as DBO on the machine.
My question is also: do I need to generate this DACPAC file through the build every time, or can it be generated only once, stored on the machine, and pointed to from the deployment process?
Thank you for your help!
However I am struggling with deploying the SQL part as I do not have a .dacpac file in my build artifacts. How do I generate this, as every option that I have tried ended up failing? I can access the database as I am marked as DBO on the machine.
Firstly, you have to create a SQL Server Database Project using SSDT (or the Azure Data Studio insiders preview) by importing the objects of the live database.
The database project is then placed into a repository.
The pipeline (classic or YAML) needs a build task, MSBuild@1. Here is a YAML example; it generates the dacpac:
- task: MSBuild@1
  displayName: 'Build solution YourDatabase.sln'
  inputs:
    solution: 'src/YourDatabase.sln'
This task compiles the database project and produces dacpac file(s)
The produced files are then extracted:
- task: CopyFiles@2
  displayName: 'Extract DACPACs'
  inputs:
    CleanTargetFolder: false
    SourceFolder: '$(agent.builddirectory)\s\src\YourDatabase\bin\Debug\'
    Contents: '*.dacpac'
    TargetFolder: '$(build.artifactstagingdirectory)'
And finally, published as an artifact:
- task: PublishPipelineArtifact@1
  displayName: 'Publish Artifact'
  inputs:
    targetPath: '$(build.artifactstagingdirectory)'
    artifact: 'drop'
Deployment of the dacpac is the final goal and can be done using SqlDacpacDeploymentOnMachineGroup@0; however, that is out of the scope of the original question.
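For completeness, a rough sketch of that deployment step is below; the target method, server, database, and authentication values are placeholders, not anything from the question:
- task: SqlDacpacDeploymentOnMachineGroup@0
  displayName: 'Deploy DACPAC'
  inputs:
    TaskType: 'dacpac'
    DacpacFile: '$(System.DefaultWorkingDirectory)/**/YourDatabase.dacpac'
    TargetMethod: 'server'
    ServerName: 'localhost'
    DatabaseName: 'YourDatabase'
    AuthScheme: 'windowsAuthentication'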
My question is also: do I need to generate this DACPAC file through the build every time, or can it be generated only once, stored on the machine, and pointed to from the deployment process?
It depends.
Classic pipelines have a separation of BUILD and RELEASE phases. In this case, you can build it once and reuse that dacpac for many future releases.
In the case of multi-stage YAML pipelines, it is common for every pipeline run to trigger both the build and deployment stages, because they still belong to the same pipeline and run as a single unit of work.

'Allow duplicates to be skipped' warning and 409 error for NuGet push on Azure DevOps Server

In Azure DevOps Server (version 2019.0.1), running on a Windows Server 2019 agent with the 'Allow duplicates to be skipped' option selected for the NuGet push task, a warning is displayed:
The 'Allow duplicates to be skipped' option is currently only available on Azure Pipelines. If NuGet.exe encounters a conflict, the task will fail.
The task then fails with the following error, indicating that the above warning applies:
Response status code does not indicate success: 409 (Conflict - The feed already contains 'MyPackage X.Y.Z'. (DevOps Activity ID: 1A57312F-3C56-4E4D-9E78-73C7072A288F)).
I'm wondering if this issue is particular to Azure DevOps Server (rather than Azure DevOps Services), or if I'm doing something wrong, or if there is another workaround. I noticed someone else has the same issue from this comment on another question, where the option was mentioned after someone asked how to ignore error 409 (duplicate package).
I would like to ignore duplicate packages using the NuGet task and ideally the 'Allow duplicates to be skipped' option on Azure DevOps Server. I'm aware that it could be resolved using scripting, but I'd prefer to avoid that if possible. Any help appreciated.
I don't know about the Azure DevOps task, but if you upgrade to nuget.exe 5.1, you can use the new -SkipDuplicate option. This should work for any NuGet server that correctly implements the NuGet protocol and on any CI server/agent.
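As an illustration only, with the feed URL, API key variable, and package name as placeholders, a push step using that option might look like:
- script: nuget.exe push MyPackage.1.0.0.nupkg -Source 'https://your-server/your-feed/nuget/v3/index.json' -ApiKey $(NuGetApiKey) -SkipDuplicate
  displayName: 'Push package, skipping duplicates (needs nuget.exe 5.1+)'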
If you're using the NuGetCommand@2 Azure Pipelines task, you can use the allowPackageConflicts parameter.
allowPackageConflicts
It allows the task to report success even if some of your packages are rejected with 409 Conflict errors.
This option is currently only available on Azure Pipelines and using Windows agents. If NuGet.exe encounters a conflict, the task will fail. This option will not work and publish will fail if you are within a proxy environment.
— https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/package/nuget
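A minimal sketch of that parameter in use (the feed name and package pattern are assumptions):
- task: NuGetCommand@2
  displayName: 'Push packages, tolerating 409 conflicts'
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'MyFeed'
    allowPackageConflicts: true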
If you switch to Azure Pipelines (it seems it's the new way of doing things) you can use dotnet commands.
The --skip-duplicate option will be available in .NET Core 3.1 (still in preview) for the dotnet nuget push command (no need to use the NuGet task, as it's already available in dotnet).
But you can use it now if you install the latest .NET Core SDK.
For example, this is a stage that will grab whichever NuGet package you've got in a specific folder, install the latest .NET Core SDK that supports skipping duplicates, and push the package to the repository feed.
- stage:
  displayName: 'Release'
  condition: succeeded()
  jobs:
  - job: 'Publish'
    displayName: 'Publish nuGet Package'
    steps:
    - download: current
      artifact: $(PIPELINE_ARTIFACT_NAME)
      displayName: 'Download pipeline artifact'
    - script: ls $(PATH_PIPELINE_ARTIFACT_NAME)
      displayName: 'Display contents of downloaded artifacts path'
    - task: NuGetAuthenticate@0
      displayName: 'Authenticate in NuGet feed'
    - task: UseDotNet@2
      displayName: 'Use .NET Core sdk 3.1 (preview)'
      inputs:
        packageType: sdk
        version: '3.1.100-preview2-014569'
        installationPath: $(Agent.ToolsDirectory)/dotnet
    - script: dotnet nuget push $(PATH_PIPELINE_ARTIFACT_NAME)/**/*.nupkg --source $(NUGET_FEED) --api-key $(NUGET_API_KEY) --skip-duplicate
      displayName: 'Uploads nuGet packages'

How to make the Nuget restore work faster?

We are building a CD pipeline using VSTS hosted build servers. It takes more than 3 minutes to restore NuGet packages. This is too much time.
How can I make it run faster? Is there any sort of caching system we can use?
UPDATE: Caching is now generally available (docs)
Caching is currently on the feature pipeline with a TBD date. In the meantime you can use the Upload Pipeline Artifact/Download Pipeline Artifact tasks to store results in your Azure DevOps account to speed up uploads and downloads.
The Work-in-progress can be tracked here.
In the meantime, the Microsoft 1ES (One Engineering System, an internal organization) team has released their internal solution, which uses Universal Packages to store arbitrary packages in your Azure DevOps account. It's very fast because it can sync the delta between previous packages. There is a sample showing how to configure your Azure Pipeline to store the NuGet package cache in your sources directory so that the task can cache it.
variables:
  NUGET_PACKAGES: $(Build.SourcesDirectory)/packages
  keyfile: '**/*.csproj, **/packages.config, salt.txt'
  vstsFeed: 'feed name'

steps:
- task: 1ESLighthouseEng.PipelineArtifactCaching.RestoreCache@1
  displayName: 'Restore artifact'
  inputs:
    keyfile: $(keyfile)
    targetfolder: $(NUGET_PACKAGES)
    vstsFeed: $(vstsFeed)
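With the caching that is now generally available (see the update above), a roughly equivalent setup uses the built-in Cache@2 task. This is a sketch following the documented NuGet pattern; the lock-file key and package folder are common conventions rather than anything from the original answer:
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  displayName: 'Cache NuGet packages'
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: $(NUGET_PACKAGES)
- task: DotNetCoreCLI@2
  displayName: 'Restore (uses the cached folder via the NUGET_PACKAGES variable)'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'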
In my scenario, NuGet restore ran quickly when run interactively, but very slowly when run through a CD pipeline (Jenkins). Setting the revocation check mode to offline reduced my NuGet restore times from 13+ minutes to under 30 seconds (I found this solution here).
I set an environment variable in my build script prior to running NuGet restore:
SET NUGET_CERT_REVOCATION_MODE=offline
Disclaimer: Turning off certificate revocation has implications - see this link.
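If the same slowdown appears in an Azure Pipelines build, the same variable can be set on the restore step itself; this is only a sketch of that idea, with the same revocation-check caveat:
- task: DotNetCoreCLI@2
  displayName: 'Restore with offline certificate revocation check'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
  env:
    NUGET_CERT_REVOCATION_MODE: offline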