I am not able to find a way to pass an image from ACR as an artifact to an Azure DevOps YAML pipeline.
In other words, I am trying to replicate the artifact option from Azure DevOps Releases (see attached image): I want the user to have the option to select an image from ACR while running the YAML pipeline.
Image from ACR as artifact in Azure DevOps Releases
You can use a container resource to consume a container image as part of your YAML pipeline, and runtime parameters to let the user select the image when running the pipeline. See the example below:
1. Define runtime parameters to let the user select the image.
parameters:
- name: ACRimage
  type: string
  default: image1
  values:
  - image1
  - image2
  - image3
Then, when clicking Run to run the pipeline, the user will be given the option to select which image to use in the pipeline.
2. Add the ACR container resource to your pipeline.
Before you can add the ACR container resource, you need to create a Docker Registry service connection.
Then you can define the container resource in your pipeline like below:
resources:
  containers:
  - container: ACRimage
    image: ${{ parameters.ACRimage }}
    endpoint: ACR-service-connection
So the full YAML pipeline looks like below:
parameters:
- name: ACRimage
  type: string
  default: image1
  values:
  - image1
  - image2
  - image3

resources:
  containers:
  - container: ACRimage
    image: ${{ parameters.ACRimage }}
    endpoint: ACR-service-connection

trigger: none

pool:
  vmImage: 'ubuntu-latest'

# e.g. run every step inside the selected image
container: ACRimage

steps:
- script: echo "Running inside the selected image"
You can use a Container Resource Block
You can use a first-class container resource type for Azure Container Registry (ACR) to consume your ACR images. This resource type can be used as part of your jobs and also to enable automatic pipeline triggers.
trigger: none # Disable CI triggers on the repository itself

resources:
  containers:
  - container: string # identifier for the container resource
    type: ACR
    azureSubscription: string # Azure subscription (ARM service connection) for the container registry
    resourceGroup: string # resource group for your ACR
    registry: string # registry for container images
    repository: string # name of the container image repository in ACR
    trigger: true
If you want to trigger only on certain tags (or exclude certain tags) you can replace the trigger value like below:
trigger:
  tags:
    include: [ string ] # image tags that trigger the pipeline; defaults to any new tag
    exclude: [ string ] # image tags to ignore; defaults to none
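For instance, to trigger only when the latest tag is pushed while ignoring a dev tag (the tag names here are purely illustrative):

```yaml
trigger:
  tags:
    include:
    - latest
    exclude:
    - dev
```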
A complete pipeline example:
trigger: none # Disable CI triggers on the repository itself

resources:
  containers:
  - container: myId # identifier for the container resource
    type: ACR
    azureSubscription: test # Azure subscription (ARM service connection) for the container registry
    resourceGroup: registry # resource group for your ACR
    registry: myregistry # registry for container images
    repository: hello-world # name of the container image repository in ACR
    trigger: true

pool:
  vmImage: 'ubuntu-latest'

steps:
- bash: |
    echo "The registry is: $(resources.container.myId.registry)"
    echo "The repository is: $(resources.container.myId.repository)"
    echo "The tag is: $(resources.container.myId.tag)"
If you push a new image to the hello-world repository, the pipeline will start:
docker pull hello-world:latest
docker tag hello-world:latest myregistry.azurecr.io/hello-world:newtag
docker push myregistry.azurecr.io/hello-world:newtag
The result of the script step is
The registry is: myregistry
The repository is: hello-world
The tag is: newtag
Sorry to inform you of this, but Azure YAML pipelines don't support this.
What danielorn suggested, 'resources.containers', is used to run your build stages in that container. I don't want to do that.
The aim is to take an image tag from the user and deploy that image, so the image needs to be passed as an artifact, just like in a Release pipeline.
Sadly this is not supported as of now in YAML pipelines; I got confirmation from the Azure team.
I want to create a parameter in a YAML deploy pipeline to let the user specify the build id they want to use for deployment when running it manually.
How can I use that specific build id, passed as a parameter, during deployment inside the deployment pipeline?
The deployment pipeline resource definition is:
resources:
  pipelines:
  - pipeline: build
    source: build_pipeline_name
    trigger:
      branches:
      - master
Choosing from Resources is not an option due to access restrictions on the Environments we are using in the pipeline.
If you want to download just a specific artifact, you won't be able to do this using just the resource, as you cannot parameterize resources. However, if this is your goal, you can parameterize this task:
parameters:
- name: runId
  type: number

# Download an artifact named 'WebApp' from a specific build run to 'bin' in $(Build.SourcesDirectory)
steps:
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    artifact: 'WebApp'
    path: $(Build.SourcesDirectory)/bin
    project: 'FabrikamFiber'
    pipeline: 12
    runVersion: 'specific'
    runId: ${{ parameters.runId }}
However, I'm not sure if I understood you.
Context
I'm creating a CI/CD configuration for an application with the following repository configuration (each repository is in the same Organization and Project):
Frontend repository (r1)
API Service repository (r2)
Infrastructure As Code repo (r3)
Within repository r3 are the solution's Azure DevOps Pipelines, each of which has been configured for manual & scheduled triggers on the develop branch:
Frontend CI Pipeline p1
Backend CI Pipeline p2
Deployment Pipeline p3
The behavior I want is
Git commit on r1 repo
Pipeline p1 on repo r3 triggered (this will create artifacts, apply a tag and notify)
Pipeline p3 triggered by p1 completion (this will deploy the artifacts)
Pipeline p1 looks like the following
trigger: none

resources:
  containers:
  - container: running-image
    image: ubuntu:latest
    options: "-v /usr/bin/sudo:/usr/bin/sudo -v /usr/lib/sudo/libsudo_util.so.0:/usr/lib/sudo/libsudo_util.so.0 -v /usr/lib/sudo/sudoers.so:/usr/lib/sudo/sudoers.so -v /etc/sudoers:/etc/sudoers"
  repositories:
  - repository: frontend
    name: r1
    type: git
    ref: develop
    trigger:
      branches:
        include:
        - develop
        exclude:
        - main

name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r) - Frontend App [CI]

variables:
- name: imageName
  value: fronted-app
- name: containerRegistryConnection
  value: apps-registry-connection

pool:
  vmImage: "ubuntu-latest"

stages:
- stage: Build
  displayName: Build and push
  jobs:
  - job: JobBuild
    displayName: Build job
    container: running-image
    steps:
    - checkout: frontend
      displayName: Checkout Frontend repository
      path: fe
      persistCredentials: true
    ...
Pipeline p3 looks like the following
name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r) - App [CD]

trigger: none

resources:
  containers:
  - container: running-image
    image: ubuntu:latest
    options: "-v /usr/bin/sudo:/usr/bin/sudo -v /usr/lib/sudo/libsudo_util.so.0:/usr/lib/sudo/libsudo_util.so.0 -v /usr/lib/sudo/sudoers.so:/usr/lib/sudo/sudoers.so -v /etc/sudoers:/etc/sudoers"
  pipelines:
  - pipeline: app-fe-delivery
    source: "p1"
    trigger:
      stages:
      - Build
      branches:
        include:
        - develop

pool:
  vmImage: "ubuntu-latest"

stages:
- stage: Delivery
  jobs:
  - job: JobDevelopment
    steps:
    - template: ../templates/template-setup.yaml # Template reference
      parameters:
        serviceDisplayName: ${{ variables.serviceDisplayName }}
        serviceName: ${{ variables.serviceName }}
    ...
Issue
Even though I followed, step by step, all the rules set out in the official documentation:
Pipeline p1 is never triggered by any commit on the develop branch in the r1 repository
Even if Pipeline p1 is run manually, Pipeline p3 is never triggered
Remarks
As stated in the pipelines YAML reference, triggers are enabled by default
In the same documentation: if no branch include filter is specified, the trigger will happen on all branches
As stated in the triggers section of Checking out multiple repositories in your pipeline, triggers happen only for repositories hosted in Azure DevOps
Is it possible to disable the pipeline's CI triggers (trigger: none) and still have the repository resource triggers fire?
The build agent user has been authorized to access and queue new builds
A couple of possible solutions.
First off, I believe your issue is with:
trigger: none
This means the pipeline will only run manually. From the documentation you referenced:
Triggers are enabled by default on all the resources. However, you can choose to override/disable triggers for each resource.
The way this is configured, all push triggers are disabled.
One possible way to achieve what you are attempting is to remove the trigger: none from p1 and p3.
If I read your question correctly, you are trying to do a CI/CD build and deployment on the repository. If so, may I suggest, if the scenario is appropriate (i.e. a build will always trigger a deployment), combining these pipelines into one and putting an if statement around the deployment stage, similar to:
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/master')}}:
Also, if deploying to multiple environments, this can be followed up with a loop indented one level inside it:
- ${{ each environmentNames in parameters.environmentNames }}:
I noticed you are already using templates, so this would just mean moving the template call up from the job to the stage and having it act as a wrapper. Feel free to provide feedback; if this answer isn't appropriate, I can update it accordingly.
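Combining both ideas, a single pipeline could gate its deployment stages on the branch and loop over environments. A rough sketch (all stage, job, and parameter names below are illustrative):

```yaml
parameters:
- name: environmentNames
  type: object
  default: [dev, test]

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo "build and publish artifacts"

# Deployment stages are only emitted when building master
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/master') }}:
  - ${{ each environmentName in parameters.environmentNames }}:
    - stage: Deploy_${{ environmentName }}
      dependsOn: Build
      jobs:
      - job: DeployJob
        steps:
        - script: echo "deploy to ${{ environmentName }}"
```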
I'm trying to run a container job inside a locally built and cached Docker image (built from a Dockerfile) instead of pulling the image from a registry. Based on my tests so far, the agent only tries to pull the image from a registry and doesn't look for the image locally. I know this functionality is not documented; however, I wonder if there is a way to make it work.
stages:
- stage: Build
  jobs:
  - job: Docker
    steps:
    - script: |
        docker build -t builder:0.1 .
      displayName: 'Build Docker'
  - job: GCC
    dependsOn: Docker # depend on the job that builds the image
    container: builder:0.1
    steps:
    - script: |
        cd src
        make
      displayName: 'Run GCC'
I am afraid there is currently no way to find the locally cached image and run a container job on it.
From the documentation you mentioned:
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry
If you define a container in the YAML file, it will pull the image from Docker Hub by default.
Alternatively, you can add the endpoint field to specify another registry (e.g. Azure Container Registry).
Here is the definition of the Container:
container:
  image: string # container image name
  options: string # arguments to pass to container at startup
  endpoint: string # endpoint for a private container registry
  env: { string: string } # list of environment variables to add
This means that the container job will pull the image directly from a registry and cannot search the local cache.
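For example, a container that should be pulled from ACR instead of Docker Hub references a Docker Registry service connection via the endpoint field (the names below are illustrative):

```yaml
resources:
  containers:
  - container: my-container
    image: myregistry.azurecr.io/my-app:latest
    endpoint: my-acr-service-connection # Docker Registry service connection
```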
But this requirement is valuable.
You could add your request for this feature on our UserVoice site, which is our main forum for product suggestions.
I am exploring Azure Pipelines as code and would like to understand how to make use of "deploymentMode" for validating and deploying ARM templates for each Azure environment.
I already have Release Pipelines, created in Azure DevOps via the visual designer, for deployment tasks, with one main ARM template and multiple parameter JSON files corresponding to each environment in Azure. Each of those pipelines has two stages: one for validation of the ARM templates and a second for deployment.
I am now trying to convert those release pipelines to Azure Pipelines as code in YAML format and would like to create one YAML file consolidating the deployment validation tasks (deploymentMode: 'Validation') for each environment first, followed by the actual deployment (deploymentMode: 'Incremental').
1) Is it a right strategy for carrying out Azure DevOps Pipeline As code for a multi environment release cycle?
2) Will the YAML have two stages (one for validation and another one for deployment) and each stage having many tasks (each task for one environment)?
3) Do I need to create each Azure Environment first in 'Environments' section under Pipelines and configure the virtual machine for managing the deployment of various environments via YAML file?
Thanks.
According to your requirements, you could configure virtual machines for each Azure environment under Azure Pipelines -> Environments. Then you could reference the environments in the YAML code.
Here are the steps, you could refer to them.
Step1: Configure virtual machine for each Azure Environments.
Note: If the virtual machines are under the same environment, you need to add tags for each virtual machine. Tags can be used to distinguish virtual machines in the same environment.
Step2: You could create the Yaml file and add multiple stages (e.g. validation stage and deployment stage) in it. Each stage can use the environments and contain multiple tasks.
Here is an example:
trigger:
- master

stages:
- stage: validation
  jobs:
  - deployment: validation
    displayName: validation ARM
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
- stage: deployment
  jobs:
  - deployment: deployment
    displayName: deploy
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
Here are the docs about using multiple stages and virtual machines.
Hope this helps.
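As a sketch of what each elided task could look like, the validation stage would set deploymentMode: 'Validation' on the AzureResourceManagerTemplateDeployment@3 task, while the deployment stage uses 'Incremental' (the connection, resource group, location, and file names below are placeholders):

```yaml
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-arm-connection'
    resourceGroupName: 'my-resource-group'
    location: 'West Europe'
    csmFile: 'azuredeploy.json'
    csmParametersFile: 'azuredeploy.dev.parameters.json'
    deploymentMode: 'Validation' # use 'Incremental' in the deployment stage
```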
I want to build a docker image in my pipeline and then run a job inside it, without pushing or pulling the image.
Is this possible?
It's by design that you can't pass artifacts between jobs in a pipeline without using some kind of external resource to store it. However, you can pass between tasks in a single job. Also, you specify images on a per-task level rather than a per-job level. Ergo, the simplest way to do what you want may be to have a single job that has a first task to generate the docker-image, and a second task which consumes it as the container image.
In your case, you would build the docker image in the build task and use docker export to export the image's filesystem to a rootfs, which you can put into the output (my-task-image). Keep in mind that the output needs to match a particular schema: you will need rootfs/... (the extracted 'docker export') and a metadata.json, which can just contain an empty JSON object. You can look at the 'in' script within the docker-image-resource for more information on how to match the schema: https://github.com/concourse/docker-image-resource/blob/master/assets/in. Then, in the subsequent task, you can add the image parameter in your pipeline yml as such:
- task: use-task-image
  image: my-task-image
  file: my-project/ci/tasks/my-task.yml
in order to use the built image in the task.
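As a rough, untested sketch (omitting the privileged Docker-in-Docker setup a Concourse task needs before the docker CLI works), a build task producing the my-task-image output in that schema could look like:

```yaml
platform: linux
image_resource:
  type: registry-image
  source: {repository: docker}
inputs:
- name: my-project
outputs:
- name: my-task-image # will contain rootfs/ and metadata.json
run:
  path: sh
  args:
  - -ec
  - |
    docker build -t builder:0.1 my-project
    mkdir -p my-task-image/rootfs
    # Export the image filesystem into the rootfs/ layout the resource expects
    docker export "$(docker create builder:0.1)" | tar -xf - -C my-task-image/rootfs
    # metadata.json can just be an empty JSON object
    echo '{}' > my-task-image/metadata.json
```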
UPDATE: the PR was rejected
This answer doesn't currently work, as the "dry_run" PR was rejected. See https://github.com/concourse/docker-image-resource/pull/185
I will update here if I find an approach which does work.
The "dry_run" parameter, which was added to the docker-image resource in Oct 2017, now allows this (github pr).
You need to add a dummy docker resource like:
resources:
- name: dummy-docker-image
  type: docker-image
  icon: docker
  source:
    repository: example.com
    tag: latest
- name: my-source
  type: git
  source:
    uri: git@github.com:me/my-source.git
Then add a build step which pushes to that docker resource but with "dry_run" set so that nothing actually gets pushed:
jobs:
- name: My Job
  plan:
  - get: my-source
    trigger: true
  - put: dummy-docker-image
    params:
      dry_run: true
      build: path/to/build/scope
      dockerfile: path/to/build/scope/path/to/Dockerfile