I wanted to start using containers within my YAML build pipeline in Azure DevOps. The pipeline works just fine if I exclude the following code snippet.
container:
  image: my-image-name:1.0
  endpoint: my-endpoint-in-ado
When I tried the following approach, the pipeline validated, but then of course failed authentication since the repository is private:
container: my-image-name:1.0
I'm not sure whether I am missing something trivial, but a colleague from another team has it implemented in the same way, and for him it works.
The error I'm getting in the Azure DevOps UI is the following (keep in mind that the error is gone if I remove the container section):
EDIT:
I've found out that the problem I am facing is that (for some reason) when I add a containers section to resources, the engine can no longer read the information from the repositories section. In the picture below, when I remove lines 7, 29 and 30 everything works fine and the container is pulled in the pipeline. The problem is that I need the variable from line 29 further on in my scripts, and as far as I know there is no other way to grab the repository details than the one I am already using.
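For reference, this is a minimal reconstruction of the layout described above; the repository alias, repository name, image and endpoint are placeholders, since the real values are only visible in the screenshot:

resources:
  repositories:
  - repository: test                    # placeholder alias, as used in the variable below
    type: git
    name: MyProject/my-other-repo       # placeholder
    ref: refs/heads/develop             # placeholder
  containers:
  - container: my_container
    image: my-image-name:1.0
    endpoint: my-endpoint-in-ado

variables:
- name: active_branch
  value: $[ replace(resources.repositories['test'].ref, 'refs/heads/', '') ]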
Please follow the steps below to check the result.
Following this doc: Build and push to Azure Container Registry, we succeeded in pushing our own image to an Azure Container Registry.
Following these docs: Container reference and Container resource, we need to create a Docker Registry service connection whose type is Azure Container Registry. We also enable the "Grant access permission to all pipelines" option so this service connection can be used in all pipelines without additional authorization. Please note that this service connection should be successfully verified before using it in the YAML pipeline.
After these steps, we can successfully use this container in the YAML pipeline, like below.
pool:
  vmImage: ubuntu-latest

resources:
  containers:
  - container: linux
    image: edwardregistery.azurecr.io/pipelines-javascript-docker:latest
    endpoint: my_acr_connection # reference to a service connection for the private registry

jobs:
- job: a
  container: linux # reference
  steps:
  - script: echo "hello world!"
Update >> I can reproduce your issue, "An error occurred while loading the YAML build pipeline. Value cannot be null. Parameter name: values", when setting the variable

variables:
- name: active_branch
  value: $[ replace(resources.repositories['test'].ref, 'refs/heads/', '') ]

if there are both a container resource and a repository resource in the YAML resources. However, it works if you remove the container resource and leave only the repository resource. We suggest that you submit it here to contact the product group and have this issue investigated further.
Related
I have an ADO pipeline I'm trying to run as a containerized job. The YAML is set up with the following line:
container: myDockerHub/myRepo:myTag
Where that actually points to a tag in a private repo on DockerHub. The job errors with a message that access to the repo is denied and may require a login. Which is perfectly true. It's a private repo that does require a login. But how do I tell ADO to login to the repo?
I have a service connection setup to DockerHub, and I use docker login successfully in other non-containerized jobs where a script is spinning up a docker image. But since this is using the container global option, I don't see any way to "preface" it with a login instruction. What do I need to get it to work here?
I don't see anything about authentication in the Microsoft documentation on container jobs.
You can use your DockerHub service connection with the endpoint property:
container:
  image: registry:myimage
  endpoint: private_dockerhub_connection
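The same endpoint property also works if you declare the image as a container resource and reference it from the job, which is handy when several jobs need the image. A sketch, assuming my_dockerhub_connection is the name of your Docker Registry service connection:

resources:
  containers:
  - container: build_container           # alias used by the jobs below
    image: myDockerHub/myRepo:myTag
    endpoint: my_dockerhub_connection    # Docker Registry service connection (assumed name)

jobs:
- job: build
  container: build_container
  steps:
  - script: echo "running inside the private image"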
I have created a Docker Compose task in my pipeline and Azure DevOps generated the code. The azureSubscription and the azureContainerRegistry connection appear in clear text.
I tried to replace them with variables from the Library, but when the pipeline starts I immediately get an error.
There was a resource authorization issue: "The pipeline is not valid. Job Build: Step DockerCompose1 input azureSubscriptionEndpoint references service connection $(AzureSubscription) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz. Job Build: Step DockerCompose2 input azureSubscriptionEndpoint references service connection $(AzureSubscription) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
Basically, Azure DevOps can't replace the variable with its value for those particular parameters. I don't want to send around those configurations for obvious reasons.
I saw some old posts where Microsoft said this was an issue in DevOps. Is this issue still there? Is there any way to move those values into the Library or variables?
This is still an issue. The value has to be a literal or a variable defined in the YAML itself; it cannot be a variable provided via a variable group, for instance (see the sketch after these links). Please check these topics:
How to parametrize azureSubscription in azure devops template task
Azure subscription endpoint ID cannot be provided through a variable in build definition YAML file
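Roughly, the difference looks like this; the task, parameter and connection names below are placeholders rather than anything from the question:

parameters:
- name: azureServiceConnection          # resolved at template-expansion (compile) time
  type: string
  default: 'my-arm-connection'

steps:
# Works: a literal name or a compile-time template expression
- task: DockerCompose@0
  inputs:
    containerregistrytype: Azure Container Registry
    azureSubscriptionEndpoint: ${{ parameters.azureServiceConnection }}
    azureContainerRegistry: 'myregistry.azurecr.io'    # placeholder registry
    dockerComposeFile: docker-compose.yml
    action: Build services

# Fails: a macro variable coming from a variable group is only resolved at runtime,
# after service connection authorization has already been evaluated:
#   azureSubscriptionEndpoint: $(AzureSubscription)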
Experiencing Terraform for the very first time, I'm following the document from this link to put my Terraform files into a release pipeline that I have in Azure DevOps. Everything runs perfectly fine until the step that initializes Terraform. It fails with the following error message:
The storage account itself is provisioned, and its key is also persisted successfully in the environment variables as per the document.
The YAML I have for terraform init in the Azure DevOps release pipeline is:
And the terraform script for the backend service is:
The variables are stored as environment variables inside the release pipeline, and there is a replace-tokens task that replaces __ with an empty string:
Here is the step in the pipeline that creates the resource group and storage account:
And finally, the PowerShell script that stores the storage key in the environment variables:
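Roughly, that step has the following shape; the service connection and resource names below are placeholders, not the real ones:

- task: AzureCLI@2
  displayName: Store storage account key in ARM_ACCESS_KEY
  inputs:
    azureSubscription: 'my-arm-connection'    # placeholder service connection
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: |
      # read the first access key of the backend storage account
      $key = az storage account keys list -g my-rg -n mystorageaccount --query "[0].value" -o tsv
      # expose it to later steps as a secret pipeline variable
      Write-Host "##vso[task.setvariable variable=ARM_ACCESS_KEY;issecret=true]$key"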
Also, I can't understand why the HTTP GET in the error message has env appended to the terraform.tfstate key.
I'm running out of ideas why it fails with that exception and what it is actually expecting.
I've been Googling around but have failed so far to resolve the issue. I'd appreciate your help/thoughts on this.
Looks like you misspelled storageaccount for your variable, so the value is not substituted. You have sotrageaccount; the t and o are swapped.
While Christian Pearce did answer the immediate question, there is an underlying problem behind this message:
There is something wrong with your Storage Account settings.
The issue I had was that I placed path information in the Container name:
- task: TerraformTaskV3@3
  displayName: Terraform Init
  inputs:
    provider: azurerm
    command: init
    backendServiceArm: [service connection]
    backendAzureRmResourceGroupName: [resource group]
    backendAzureRmStorageAccountName: [storage account name]
    backendAzureRmContainerName: [container]/subfolder    # <-- This is bad and belongs in the Key field
    backendAzureRmKey: ${{ parameters.name }}/${{ parameters.environment }}.tfstate
    workingDirectory: $(Pipeline.Workspace)/Infrastructure
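A corrected version of the same task, with the path moved out of the container name and into the key; the bracketed values remain placeholders:

- task: TerraformTaskV3@3
  displayName: Terraform Init
  inputs:
    provider: azurerm
    command: init
    backendServiceArm: [service connection]
    backendAzureRmResourceGroupName: [resource group]
    backendAzureRmStorageAccountName: [storage account name]
    backendAzureRmContainerName: [container]               # container name only, no path
    backendAzureRmKey: subfolder/${{ parameters.name }}/${{ parameters.environment }}.tfstate
    workingDirectory: $(Pipeline.Workspace)/Infrastructure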
FYI: I got the same error when specifying a container name (which had not yet been created in the storage account) with upper-case characters, which turn out not to be allowed in Azure Storage.
Devops folks,
I am pushing the build pipeline output to Azure Artifacts - Universal Packages for a full-stack .NET application. The application builds successfully and produces an output in $(Build.ArtifactStagingDirectory).
I would like to publish all these build outputs as Universal Packages and let the release pipeline take over from there.
I have checked the below things:
1. Permissions: Project Collection Build Service has the Contributor role.
2. Task configuration: confirmed the UniversalPackages task below.
- task: UniversalPackages@0
  inputs:
    command: 'publish'
    publishDirectory: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    feedsToUsePublish: 'internal'
    vstsFeedPublish: '123456fg-test-1234-1234-31161a66dc4d/b92b3313-ab41-4044-test-e94146618efb'
    vstsFeedPackagePublish: 'Text here-Services-Package'
    versionOption: 'minor'
    packagePublishDescription: 'Contains some text here)'
    verbosity: 'Trace'
Sorry for any YAML indenting issues.
Below is the pipeline log
2020-04-04T02:36:03.9835393Z Publishing package: test, version: 0.0.1 using feed id: 76a3991f-e6fc-767b-a0dc-90e38c54e558, project: 7813b7e3-bbf1-4355-9263-31161a66dc4d
2020-04-04T02:36:04.0147395Z [command]D:\ABCAgent\_work\_tool\artifacttool\0.2.151\x64\ArtifactTool.exe universal publish --feed 76a3991f-e6fc-767b-a0dc-90e38c54e558 --service https://dev.azure.com/QWERTY/ --package-name test --package-version 0.0.1 --path D:\AzureAgentBuild\_work\1\a --patvar UNIVERSAL_PUBLISH_PAT --verbosity None --description "" --project 7813b7e3-bbf1-4355-9263-31161a66dc4d
2020-04-04T02:36:09.2875733Z {"#t":"2020-04-04T02:36:08.7883701Z","#m":"[GetDedupManifestArtifactClientAsync] Try 1/5, non-retryable exception caught. Throwing. Details:\r\nNo LastRequestResponse on exception VssServiceResponseException: Forbidden","#i":"b2d31574","#l":"Warning","#x":"Microsoft.VisualStudio.Services.WebApi.VssServiceResponseException: Forbidden\r\n
Microsoft.VisualStudio.Services.WebApi.VssServiceResponseException: Forbidden
This permission error maps to a 403 status code, which means the account does not have enough permission to publish a package to the Universal Packages feed.
You said you had assigned the 'Contributor' role to 'Project Collection Build Service'. BUT this is not a solution for every scenario. It is only effective while the build pipeline is using the 'Project Collection Build Service' account, a collection-level service account. There is another scenario: the pipeline may be using a project-level service account.
You can fix it with the methods I shared in this answer. Check this similar issue for another explanation.
Method 1:
Go to Feed settings => Permissions, add your project-level build service account and assign it the Contributor role. Its account name should look like {Project Name} Build Service ({Org Name}).
Re-run your pipeline to see whether it can run successfully.
Method 2:
Go to Project settings => Settings, and make sure Limit job authorization scope to current project is disabled:
Only when it is disabled does the pipeline use the collection-level service account; at that point, your original permission configuration becomes effective.
After playing around with permissions for the pipeline's build service, the root cause was found to be the proxy blocking the Universal Packages traffic with a forbidden error.
We just removed the proxy from the on-prem, self-hosted build agent and used Azure ExpressRoute to route the traffic. This simple change fixed the issue.
Also, you can double-check in the Billing options whether the free Artifacts storage is used up. That is how I fixed it.
I'm trying to create my first release pipeline, however I keep getting this error:
Exception Message: The pipeline is not valid. Job Phase_1: Step AzureResourceGroupDeployment input ConnectedServiceName references service connection
which could not be found. The service connection does not exist or has not been authorized for use. For authorization details,
refer to https://aka.ms/yamlauthz. (type PipelineValidationException)
I've tried to follow the instructions in the link, however the "Authorize Resources" button does not exist.
"Allow all pipelines to use this service connection" is already enabled and I have recreated the deployment task after enabling this.
How do I authorise the resource?
I had the same issue, and I initially missed the fact that you need to click the 'Authorize resources' button that appears, as shown below.
Also in my case, my pipeline was missing variables that included the correct service connection name. These were set up in a variable group that was already being used by another pipeline. I needed to link them in my new pipeline:
Edit pipeline > select ellipsis at top right > Triggers > Variables > Variable groups > Link variable group
You can either use an existing service principal or create a new one. Everything you need is in the documentation already.
Create an Azure Resource Manager service connection using automated security
From Azure DevOps -> Project settings -> Service connections: click on "New Service Connection", choose "Azure Resource Manager" as the type of service connection, select Service Principal (Automatic), then run your pipeline.
My "Service connection" which defined the service principal connection had been created separately to the task in my release pipeline.
In order for "Authorize Resources" to occur, you must create a new connection from the task itself (you may need to use the advanced options to add an existing service principal).
under "Azure subscription" click the name of the subscription you wish to use
Click the drop down next to "Authorize" and open advanced options
Click " use the full version of the service connection dialog."
Enter all your credentials and hit save
The admittedly silly solution for me was to avoid declaring and using the service connection as a variable, i.e. in the case of connecting to an Azure Container Registry:
Failed
pool:
  vmImage: 'ubuntu-20.04'

variables:
  dockerRegistryServiceConnection: 'my-service-connection'
  baseContainerUrl: 'myregistry.azurecr.io/my_image:latest'

container:
  image: $(baseContainerUrl)
  endpoint: $(dockerRegistryServiceConnection)
Worked
pool:
  vmImage: 'ubuntu-20.04'

variables:
  baseContainerUrl: 'myregistry.azurecr.io/my_image:latest'

container:
  image: $(baseContainerUrl)
  endpoint: my-service-connection
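If you do want to keep the connection name out of the container block itself, a compile-time template expression over a YAML-defined variable may also work, since it is expanded before resource authorization; I have not verified this for the container endpoint specifically, so treat it as a sketch:

variables:
  dockerRegistryServiceConnection: 'my-service-connection'
  baseContainerUrl: 'myregistry.azurecr.io/my_image:latest'

container:
  image: $(baseContainerUrl)
  endpoint: ${{ variables.dockerRegistryServiceConnection }}   # expanded at compile time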
In "classic release pipelines", a release is a snapshot of a release pipeline, with all the settings of the pipeline materialized, and a deployment is the execution of a pipeline stage in the release.
In our case, a release generating this error during deployment predated a service principal renewal. We could edit the release to use the new service principal and successfully deploy the modified release. The release pipeline did not need modifications.
Navigate to the release overview of the relevant release.
https://dev.azure.com/<user>/<project>/_releaseProgress?...
Select Edit > Edit release.
Click Edit tasks for the relevant stage.
Select the Azure App Service Deploy step.
Choose the correct Azure subscription and App Service name values.
These values will likely be the same as in the release pipeline; in our case these fields were empty and "needed attention".
Click Save.
Click Deploy.
The changes persist, so the corrected release can be deployed a second time without incident.
It is unclear whether this applies to YAML pipelines, but I would guess not.
I was having this error because I was declaring the variable group containing the service connection name at stage level.
The error was fixed once I declared the variable group at pipeline level.
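Roughly, the difference is the following; the group name is a placeholder, not my actual one:

# Worked: variable group linked at the pipeline (root) level
variables:
- group: my-variable-group

stages:
- stage: Deploy
  jobs:
  - job: deploy
    steps:
    - script: echo "steps that reference the service connection go here"

# Failed for me: the same group declared only inside the stage
# stages:
# - stage: Deploy
#   variables:
#   - group: my-variable-group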
In my case, I was trying to use the AzureAppServiceSettings task, and was using azureSubscription with the subscription ID in the YAML file.
The YAML file:
- task: AzureAppServiceSettings@1
  displayName: Update App Settings of the Logic App
  inputs:
    azureSubscription: '$(azureSubscriptionId)'
    resourceGroupName: ...
    appName: ...
    appSettings: ...
I got the same error, and after clicking Authorise resources, I got the error "no plan found for identifier xxxx".
I needed to change my YAML file to use the service connection name from Azure DevOps.
- task: AzureAppServiceSettings@1
  displayName: Update App Settings of the Logic App
  inputs:
    azureSubscription: 'Nonprod Connection'
    resourceGroupName: ...
    appName: ...
    appSettings: ...
So Nonprod Connection is one of my service connections in Azure DevOps.
After fixing the value of azureSubscription, the pipeline no longer shows the error.
I had a similar problem and noticed that a task running an Azure PowerShell script was also getting a similar error.
It turned out that the problem was solved when I verified the service connection twice within the project that was having the problem.