I am using Azure DevOps Pipelines (YAML).
I have an AzureFileCopy@2 task which copies files from the source into a Storage Account. The Storage Account is created dynamically by an earlier ARM deploy task (the ARM task outputs the Storage Account name, which is then parsed into a variable for later consumption).
The AzureFileCopy@2 task works perfectly and copies all the files into the Storage Account. But I notice in the run that the AzureFileCopy@2 task actually runs twice - once as my step and once as a "pre-job". The pre-job of course fails with a warning that it can't reference the Storage Account (because at that stage I haven't created the variable yet).
Fortunately, it's only a warning, but it is rather annoying to have that warning in every run.
I believe that pre-jobs can't be disabled (though I could raise that as a feature enhancement), so is there a better way of handling this presumably common scenario?
Thanks in advance
EDIT: Obfuscated YAML added:
variables:
  My.Configuration: 'Release'
  My.SQLProject: 'contoso.api'
  My.ARMProject: 'contoso.azure.templates'
  My.IntEnvironment: 'i'
  My.ResourceGroupNumber: 66
  My.ArtifactLocation: 'drop'
# BUILD STAGES ARE HERE
- stage: 'Stage_Deploy'
  displayName: 'Stage Deploy'
  jobs:
  - deployment: 'Job_Deploy'
    pool:
      vmImage: 'windows-2019'
    displayName: 'Job Deploy'
    environment: 'env1'
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: DownloadPipelineArtifact@2
            displayName: 'Download Pipeline Artifacts from Drop'
            inputs:
              buildType: 'current'
              targetPath: '$(Pipeline.Workspace)'
          - task: AzureResourceManagerTemplateDeployment@3
            displayName: 'ARM Deployment'
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: 'CONTOSO CONNECTION'
              subscriptionId: 'aaaaaaaa-0000-0000-00000-aaaaaaaaaaaaa'
              action: 'Create Or Update Resource Group'
              resourceGroupName: 'contoso-$(My.IntEnvironment)-eun-core-$(My.ResourceGroupNumber)-rg'
              location: 'North Europe'
              templateLocation: 'Linked artifact'
              csmFile: '$(Pipeline.Workspace)/$(My.ArtifactLocation)/$(My.ARMProject)/azuredeploy.json'
              csmParametersFile: '$(Pipeline.Workspace)/$(My.ArtifactLocation)/$(My.ARMProject)/azuredeploy.parameters.json'
              overrideParameters: '-environment $(My.IntEnvironment)'
              deploymentMode: 'Incremental'
              deploymentOutputs: 'ARMOutput'
          - task: PowerShell@2
            condition: true
            displayName: 'Parse ARM Template Outputs'
            inputs:
              targetType: filePath
              filePath: '$(Pipeline.Workspace)/$(My.ArtifactLocation)/$(My.ARMProject)/Parse-ARMOutput.ps1'
              arguments: '-ARMOutput ''$(ARMOutput)'''
          - task: AzureFileCopy@2
            condition: true
            displayName: 'Copy Static Web Content to SA'
            inputs:
              SourcePath: '$(Pipeline.Workspace)/$(My.ArtifactLocation)'
              azureSubscription: 'CONTOSO CONNECTION'
              Destination: AzureBlob
              storage: '$(ARM.AppDataStorageName)'
              ContainerName: static
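(For reference, Parse-ARMOutput.ps1 is essentially just a thin wrapper that turns the deploymentOutputs JSON into pipeline variables via logging commands. An inline sketch of the idea, using a made-up output name appDataStorageName in place of my real one, would be something like:)
- task: PowerShell@2
  displayName: 'Parse ARM Template Outputs (inline sketch)'
  inputs:
    targetType: inline
    script: |
      # deploymentOutputs hands us the ARM outputs as a JSON string in $(ARMOutput)
      $outputs = '$(ARMOutput)' | ConvertFrom-Json
      # appDataStorageName is a placeholder for the real output name in the template
      $saName = $outputs.appDataStorageName.value
      Write-Host "##vso[task.setvariable variable=ARM.AppDataStorageName]$saName"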
Then, when I run it, the following stages happen:
1. Initialize job
2. Pre-job: Copy Static Web Content to SA
It is this pre-job that, in the debug logging, shows this:
##[debug]StorageAccountRM=$(ARM.AppDataStorageName)
<other debug lines followed by...>
##[warning]Can't find loc string for key: StorageAccountDoesNotExist
Later on the task "Copy Static Web Content to SA" runs as a normal task and it runs fine.
Fortunately, it's only a warning, but it is rather annoying to have that warning in every run. I believe that pre-jobs can't be disabled (though I could raise that as a feature enhancement), so is there a better way of handling this presumably common scenario?
Sorry, but I'm afraid there is no supported way to disable the warning; it occurs by design.
(The source code of the AzureFileCopyV2 task causes this behavior.)
More details:
We can find the source of that task here. It contains a task.json file which defines content like this:
"instanceNameFormat": "$(Destination) File Copy",
"prejobexecution": {
"Node": {
"target": "PreJobExecutionAzureFileCopy.js"
}
},
The task.json file describes the build or release task and is what the build/release system uses to render configuration options to the user and to know which scripts to execute at build/release time. Because this task.json defines a prejobexecution section, the pre-job step runs the checks implemented in PreJobExecutionAzureFileCopy.js against the task inputs, including the Storage Account.
So this is by design in the code of the AzureFileCopyV2 task; we can't disable the warning in the pre-job step. If you really want to get rid of the warning, you could consider using AzureFileCopyV1, which doesn't define a prejobexecution section, but this is not recommended: compared with version 1, version 2 includes improvements and fixes for older issues.
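If you do go that route, only the task version changes; a rough sketch based on the step above (this assumes the V1 inputs match the V2 ones - please verify against the V1 task.json before relying on it):
- task: AzureFileCopy@1
  displayName: 'Copy Static Web Content to SA'
  inputs:
    SourcePath: '$(Pipeline.Workspace)/$(My.ArtifactLocation)'
    azureSubscription: 'CONTOSO CONNECTION'
    Destination: AzureBlob
    storage: '$(ARM.AppDataStorageName)'
    ContainerName: static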
I am trying to publish a Blazor .NET Core app using Azure Pipelines, but I constantly get a 500 error on the Web Deployment stage.
Once the pipeline runs, I check through the Kudu console and the only two files on the server are an empty web.config and a FAILED TO INITIALIZE RUN FROM PACKAGE.txt containing "Run From Package Initialization failed.".
Below is the YAML of the pipeline.
pool:
  name: Azure Pipelines
#Your build pipeline references an undefined variable named ‘Parameters.RestoreBuildProjects’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references the ‘BuildConfiguration’ variable, which you’ve selected to be settable at queue time. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab, and then select the option to make it settable at queue time. See https://go.microsoft.com/fwlink/?linkid=865971
steps:
- task: DotNetCoreCLI@2
  displayName: Restore
  inputs:
    command: restore
    projects: '$(Parameters.RestoreBuildProjects)'
    feedsToUse: config
    nugetConfigPath: NuGet.Config
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/TPL/Server/TPL.Server.csproj'
    arguments: '--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory)'
    modifyOutputPath: false
- task: AzureRmWebAppDeployment@4
  displayName: 'Azure App Service Deploy: tpl'
  inputs:
    azureSubscription: '**hidden**'
    WebAppName: tpl
    deployToSlotOrASE: true
    ResourceGroupName: TPL
    SlotName: test
    packageForLinux: '$(build.artifactstagingdirectory)/**/*.zip'
Deleting and recreating the slot fixed this. I previously had the old-style CI set up (from the portal's Deployment Center menu), and my hunch is that it didn't get disconnected or cleaned up properly.
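If you'd rather script the slot recreation than click through the portal, a rough sketch with an AzureCLI@2 step could look like this (the app, resource group and slot names are taken from the pipeline above; adjust them to your own):
- task: AzureCLI@2
  displayName: 'Recreate the test slot'
  inputs:
    azureSubscription: '**hidden**'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Delete the broken slot and create a fresh one with the same name
      az webapp deployment slot delete --resource-group TPL --name tpl --slot test
      az webapp deployment slot create --resource-group TPL --name tpl --slot test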
I have created a pipeline using only YAML.
I have defined the deployment part like this:
- stage: AzureDevOpsStaging
  displayName: Deploy build artifacts to staging environment
  dependsOn: BuildSolution
  condition: succeeded('BuildSolution')
  jobs:
  - deployment: DeployArtifacts
    displayName: Deploy artifacts
    environment:
      name: AzureDevOpsStaging
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: drop
          - task: IISWebAppDeploymentOnMachineGroup@0
            displayName: Deploy artifacts to IIS
            inputs:
              webSiteName: 'mysite-staging'
              package: '$(Pipeline.Workspace)\drop\*.zip'
              xmlTransformation: true
When I run this I get:
##[warning]Unable to apply transformation for the given package. Verify the following.
##[warning]1. Whether the Transformation is already applied for the MSBuild generated package during build. If yes, remove the <DependentUpon> tag for each config in the csproj file and rebuild.
##[warning]2. Ensure that the config file and transformation files are present in the same folder inside the package.
Things that I've checked:
Both Web.config and Web.AzureDevOpsStaging.config files are in the zip/artifact
Name of stage - the docs say the transform config file must be named after the stage; that is, Web.AzureDevOpsStaging.config.
Name of the .config transform file - the transform file is indeed named Web.AzureDevOpsStaging.config.
Name of environment (the docs don't say the name has to match Web.ThisPart.config, but I still named the environment AzureDevOpsStaging just in case).
But again, with all of the above in place, the Web.config is not transformed.
I got it to work by using the File Transform task instead, which is referenced in the docs from the IIS Web App Deploy task:
- stage: AzureDevOpsStaging
  displayName: Deploy build artifacts to staging environment
  dependsOn: BuildSolution
  condition: succeeded('BuildSolution')
  jobs:
  - deployment: DeployArtifacts
    displayName: Deploy artifacts
    environment:
      name: AzureDevOpsStaging
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: drop
          - task: FileTransform@1
            inputs:
              folderPath: '$(Pipeline.Workspace)\drop\*.zip'
              enableXmlTransform: true
              xmlTransformationRules: -transform **\*.AzureDevOpsStaging.config -xml **\*.config
          - task: IISWebAppDeploymentOnMachineGroup@0
            displayName: Deploy artifacts to IIS
            inputs:
              webSiteName: 'mysite-staging'
              package: '$(Pipeline.Workspace)\drop\*.zip'
So can someone please explain to me how I am supposed to configure my YAML to get it to work using only the IISWebAppDeploymentOnMachineGroup@0 task?
And if this is not possible, am I using the FileTransform@1 task properly?
Also, I saw there is a FileTransform@2 version as well. That task didn't have one of the properties that @1 has, so I reverted to using v1 instead. But it would be great if someone had a bit more info on this newer version and whether it is going to deprecate @1 in the future.
Btw, I also got xmlTransformation: true to work with a classic release pipeline under the Releases tab in Azure DevOps using the UI. But again, I don't want to use the classic approach; I want to do everything in YAML.
And if this is not possible, am I using the task FileTransform@1 properly?
The answer is yes.
The FileTransform task is the one I use, and use frequently.
When I use it in the YAML pipeline configured as you have it:
- task: FileTransform@1
  displayName: 'File Transform'
  inputs:
    folderPath: '$(Pipeline.Workspace)\drop\*.zip'
    enableXmlTransform: true
    xmlTransformationRules: '-transform **\*.UAT.config -xml **\*.config'
    fileType: xml
It works fine on my side.
In order for the transformation to be applied correctly, you need to ensure that the syntax in the config files is correct and that the specified directory is correct.
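For completeness, the transform file itself has to use the XDT namespace and explicit transform attributes; a minimal sketch of a Web.UAT.config that overwrites a single (example) app setting looks like this:
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Replaces the value of the matching key from Web.config -->
    <add key="Environment" value="UAT"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>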
For context, I am trying to use an Azure build pipeline to build multiple flavors of an Android app. Each flavor has its own separate signing keystore, and all of those keystores are stored in my 'secure files' in the library.
However, when I try to dereference the $(Keystore) variable during the 'android signing' task, it doesn't seem to recognize that it is a variable that exists, and instead tries to locate a file literally called '$(Keystore)'.
Am I doing something wrong here? This seems like it should work.
A sanitized example looks like this:
# Android
# Build your Android project with Gradle.
# Add steps that test, sign, and distribute the APK, save build artifacts, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/android
trigger:
- feat/ci-setup
pool:
  vmImage: 'macos-latest'
variables:
  ${{ if startsWith(variables['build.sourceBranch'], 'refs/heads/feat/') }}:
    Branch_Type: 'feature'
  ${{ if startsWith(variables['build.sourceBranch'], 'refs/heads/hotfix/') }}:
    Branch_Type: 'hotfix'
  ${{ if startsWith(variables['build.sourceBranch'], 'refs/heads/release/') }}:
    Branch_Type: 'release'
  ${{ if eq(variables['Branch_Type'], 'release') }}:
    Configuration: 'release'
    ConfigurationCC: 'Release'
  ${{ if ne(variables['Branch_Type'], 'release') }}:
    Configuration: 'debug'
    ConfigurationCC: 'Debug'
jobs:
- job: Build
  variables:
  - group: android_keystores
  strategy:
    maxParallel: 2
    matrix:
      Flavor_1:
        AppFlavor: '1'
        AppFlavorCC: '1'
        Keystore: 'flavor1.keystore'
        KeyAlias: 'flavor1'
        KeystorePass: '$(flavor1_storepass)'
        KeyPass: '$(flavor1_keypass)'
      Flavor_2:
        AppFlavor: '2'
        AppFlavorCC: '2'
        Keystore: 'flavor2.keystore'
        KeyAlias: 'flavor2'
        KeystorePass: '$(flavor2_storepass)'
        KeyPass: '$(flavor2_keypass)'
  steps:
  - task: Gradle@2
    inputs:
      workingDirectory: ''
      gradleWrapperFile: 'gradlew'
      gradleOptions: '-Xmx3072m'
      publishJUnitResults: false
      tasks: 'assemble$(AppFlavorCC)$(ConfigurationCC)'
  - task: AndroidSigning@3
    displayName: Signing .apk
    inputs:
      apkFiles: 'app/build/outputs/apk/$(AppFlavor)/$(Configuration)/*.apk'
      apksign: true
      apksignerKeystoreFile: '$(Keystore)'
      apksignerKeystorePassword: '$(KeystorePass)'
      apksignerKeystoreAlias: '$(KeyAlias)'
      apksignerKeyPassword: '$(KeyPass)'
      zipalign: true
  - task: Bash@3
    displayName: Move APK to Artifact Folder
    continueOnError: true
    inputs:
      targetType: 'inline'
      script: |
        mv \
          app/build/outputs/apk/$(AppFlavor)/$(Configuration)/*.apk \
          $(Build.ArtifactStagingDirectory)/$(ArtifactName)/
  - task: PublishBuildArtifacts@1
    displayName: Publish Build Artifacts
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'Blueprint-Build'
      publishLocation: 'Container'
But when the pipeline runs I am told this:
There was a resource authorization issue: "The pipeline is not valid. Job Build: Step AndroidSigning input keystoreFile references secure file $(Keystore) which could not be found. The secure file does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
Azure DevOps: Populating secure file references with job matrix variables
This is a limitation of the task itself.
When we test it in Classic mode, we can see that the value of the Keystore file option cannot be entered manually; we can only select a file through the drop-down menu:
That is the reason why it doesn't recognize $(Keystore) as a variable and instead tries to locate a file literally called '$(Keystore)'.
To resolve this issue, you could change the task version from 3 to 1, which supports manual input:
As another solution, you could also use the command line to sign the *.apk:
Android apk signing: sign an unsigned apk using command line
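A rough sketch of that command-line approach as an inline step (apksigner ships with the Android SDK build-tools; KEYSTORE_PATH is a placeholder for wherever the keystore ends up on the agent, for example the secureFilePath output of a DownloadSecureFile step):
- task: Bash@3
  displayName: 'Sign .apk with apksigner'
  inputs:
    targetType: inline
    script: |
      # KEYSTORE_PATH is assumed to point at the keystore file on the agent
      apksigner sign \
        --ks "$(KEYSTORE_PATH)" \
        --ks-key-alias "$(KeyAlias)" \
        --ks-pass pass:"$(KeystorePass)" \
        --key-pass pass:"$(KeyPass)" \
        --out app-signed.apk \
        app/build/outputs/apk/$(AppFlavor)/$(Configuration)/*.apk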
You're missing the step to download the secure file. Unlike variable groups, secure files need to be explicitly downloaded before you can access them via the secure file name.
You'll want to add something similar to the example task below to your steps to pull the secure file. Then you can access your secure file via NAME_PARAMETER.secureFilePath:
- task: DownloadSecureFile@1
  displayName: "Download Keyfile 1"
  name: "YOUR_SECUREFILE_NAME"
  inputs:
    secureFile: keyfile1
- task: AndroidSigning@3
  displayName: Signing .apk
  inputs:
    apkFiles: 'app/build/outputs/apk/$(AppFlavor)/$(Configuration)/*.apk'
    apksign: true
    apksignerKeystoreFile: '$(YOUR_SECUREFILE_NAME.secureFilePath)'
    apksignerKeystorePassword: '$(KeystorePass)'
    apksignerKeystoreAlias: '$(KeyAlias)'
    apksignerKeyPassword: '$(KeyPass)'
    zipalign: true
Our team is implementing an Azure DevOps testing pipeline. After our initial commit to create the pipeline .yml file, this error message was displayed. After looking into it, I realized I had forgotten to include the trigger in the .yml. However, after adding it, the error message hasn't gone away. The pipeline is working as expected, though; we are just using a manual trigger, which is shown below. The only listed issue is from our original commit. Is there a way I can acknowledge this error to make it go away, or am I potentially missing a different error that I just haven't noticed yet? Thanks for any help in advance; please let me know if I can provide any additional information.
Here are the error messages that I am seeing when I view the runs of that pipeline. I also included a screenshot of how I'm setting up my trigger.
Edit: As requested, I included the actual .yml file code below with slight naming modifications. We do have some custom plugins, such as one that creates files that are untracked but still need to exist, so you might need to remove those to test this.
trigger:
- none
pool:
  name: myPool
  demands:
  - msbuild
  - visualstudio
steps:
- task: NuGetToolInstaller@0
  displayName: 'Use NuGet 4.4.1'
  inputs:
    versionSpec: 4.4.1
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '$(Parameters.solution)'
- task: eliostruyf.build-task.custom-build-task.file-creator@6
  displayName: 'Create Connection Strings file'
  inputs:
    filepath: '$(System.DefaultWorkingDirectory)/ID_Web/config/ConnectionStrings.config'
    filecontent: |
      <connectionStrings>
      </connectionStrings>
    endWithNewLine: true
- task: eliostruyf.build-task.custom-build-task.file-creator@6
  displayName: 'Create Developer Settings File'
  inputs:
    filepath: '$(System.DefaultWorkingDirectory)/ID_Web/config/developerAppSettings.config'
    filecontent: |
      <appSettings>
      </appSettings>
    endWithNewLine: true
- task: eliostruyf.build-task.custom-build-task.file-creator@6
  condition: contains(variables['Agent.Name'], '1')
  displayName: 'Create Developer Integration Setting for agent 1'
  inputs:
    filepath: '$(System.DefaultWorkingDirectory)/ID_Test/config/developerIntegrationSettings.config'
    filecontent: |
      <developerIntegrationSettings>
        <add key="ModelsIntegrationTestDb" value="Models_IntegrationTest_BuildAgent1"/>
        <add key="ErrorsIntegrationTestDb" value="Errors_IntegrationTest_BuildAgent1"/>
      </developerIntegrationSettings>
    endWithNewLine: true
- task: VisualStudioTestPlatformInstaller@1
  displayName: 'Visual Studio Test Platform Installer'
  inputs:
    versionSelector: latestStable
# Build the solution.
- task: VSBuild@1
  displayName: 'Build solution'
  inputs:
    solution: '$(Parameters.solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\"'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    clean: true
# Run all unit tests in parallel
- task: VSTest@2
  displayName: 'Run Unit Tests'
  inputs:
    testAssemblyVer2: |
      **\*ID_Test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)/ID_Test'
    testFiltercriteria: '(FullyQualifiedName!~Integration & FullyQualifiedName!~Ioc)'
    runOnlyImpactedTests: false
    vsTestVersion: toolsInstaller
    runSettingsFile: 'ID_Test/.runsettings'
    runInParallel: true
    runTestsInIsolation: false
    codeCoverageEnabled: false
    testRunTitle: 'Unit Tests'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true
    rerunFailedTests: true
# Run integration tests serially
- task: VSTest@2
  displayName: 'Run Integration Tests'
  inputs:
    testAssemblyVer2: |
      **\*ID_Test*.dll
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)/ID_Test'
    testFiltercriteria: '(FullyQualifiedName~Integration | FullyQualifiedName~Ioc)'
    runOnlyImpactedTests: false
    vsTestVersion: toolsInstaller
    runSettingsFile: 'ID_Test/.runsettings'
    runTestsInIsolation: true
    codeCoverageEnabled: false
    testRunTitle: 'Integration Tests'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true
    rerunFailedTests: true
# Clean agent directories
- task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3
  displayName: 'Clean Agent Directories'
Edit (2): Included below is a screenshot of the trigger settings I am using now; originally the box was unchecked. Checking it doesn't seem to have any effect, though.
I had the same issue and was about to curl up in a ball and cry when I found out the real cause. As the wonderful message says, it has absolutely nothing to do with the trigger :)
I assume you created a new branch with your YAML file for testing purposes before merging it to master. So you need to set up your build to point to this branch, because the file doesn't exist on your main branch.
Here are the steps:
Edit your pipeline
Click the 3 dots on top right > Triggers
Click YAML tab > Get sources
Change the 'Default branch for manual and scheduled builds' to point to the branch where your .yml file is
So we gave up on this since it wasn't having any effect and we couldn't find the cause. After about a week or two it just stopped showing up, so I assume this was just some quirk with Azure DevOps and not a problem with the pipeline itself.
According to your description, this looks more like an episodic issue. In YAML files, you don't have to include triggers; YAML pipelines are configured by default with a CI trigger on all branches. You can create a new pipeline and copy your YAML file into it to see if the error messages persist.
Or, the issue could come from Classic UI triggers. On the pipeline editing page, select More actions -> Triggers.
Then you can check whether anything there is invalid. If you want to use the trigger defined in the YAML file, leave the 'Override the YAML continuous integration trigger from here' checkbox unchecked.
We have a working classic build job in Azure DevOps with a self-hosted agent pool. But when we tried to convert this build job to YAML, no agents get assigned during execution and the run just hangs. Could you please correct me if I am doing something wrong here?
Error:
"All eligible agents are disabled or offline"
Below is the converted YAML file from the classic build agent job:
pool:
  name: MYpool
  demands: maven
#Your build pipeline references an undefined variable named ‘Parameters.mavenPOMFile’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
steps:
- task: Maven@3
  displayName: 'Maven pom.xml'
  inputs:
    mavenPomFile: '$(Parameters.mavenPOMFile)'
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: '$(system.defaultworkingdirectory)'
    Contents: '**/*.war'
    TargetFolder: '$(build.artifactstagingdirectory)'
  condition: succeededOrFailed()
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: Root'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: Root
  condition: succeededOrFailed()
- task: CopyFiles@2
  displayName: 'Copy wars to build directory'
  inputs:
    SourceFolder: '$(build.artifactstagingdirectory)/target'
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'
- task: CopyFiles@2
  displayName: 'copying docker file to Build Directory'
  inputs:
    SourceFolder: Admin
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'
- bash: |
    # Write your commands here
    mv /home/myadmin/builds/$(build.buildnumber)/mypack0.0.1.war /home/myadmin/builds/$(build.buildnumber)/ROOT.war
  displayName: 'Name war file Root.war'
- task: Docker@2
  displayName: 'Build the docker image'
  inputs:
    repository: 'mycontainerregistry.azurecr.io/myservice'
    command: build
    Dockerfile: '/home/myadmin/builds/$(build.buildnumber)/Dockerfile'
    tags: '$(Build.BuildNumber)-DEV'
- bash: |
    # Write your commands here
    docker login mycontainerregistry.azurecr.io
    docker push mycontainerregistry.azurecr.io/myservice:$(Build.BuildNumber)-DEV
  displayName: 'Push Docker Image'
- task: CopyFiles@2
  displayName: 'Copy Deployment file'
  inputs:
    SourceFolder: /home/myadmin/kubernetes
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace image in deployment file'
  inputs:
    rootDirectory: '/home/myadmin/builds/$(build.buildnumber)'
    targetFiles: '**/*.yml'
In my previous answer, I said that when I waited for nearly 20-30 minutes, the agent interface prompted the message below.
In fact, this is the process that upgrades the agent to the latest version automatically.
Yes, when you use YAML with a private agent, the agent version must be the latest one, whether or not you add demands.
For our system, the agent version is an implicit demand: your agent must be on the latest version when you use it from YAML.
If it is not, the job is blocked, and after some time the system forces the agent upgrade process to run automatically.
So, to run the private agent from YAML successfully, please upgrade the agent to the latest version manually.
Since what my colleague and I discussed in that ticket is internal to Microsoft, sorry, you can't see that summary. So here are screenshots about it, which you can refer to: https://imgur.com/a/4OnzHp3
We are still working on why the system prompts such a confusing message as "All eligible agents are disabled or offline", and I am trying to contribute a change to make this message clearer, for example: "no agents meet demands: agent version xxx".