Azure DevOps locally cached container job

I'm trying to run a container job inside a locally built and cached Docker image (from a Dockerfile) instead of pulling the image from a registry. Based on my tests so far, the agent only tries to pull the image from a registry and doesn't look for it locally. I know this functionality is not documented; however, I wonder whether there is a way to make it work.
stages:
- stage: Build
  jobs:
  - job: Docker
    steps:
    - script: |
        docker build -t builder:0.1 .
      displayName: 'Build Docker'
  - job: GCC
    dependsOn: Docker
    container: builder:0.1
    steps:
    - script: |
        cd src
        make
      displayName: 'Run GCC'

I am afraid there is currently no way to have a container job find and run on a locally cached image.
From the documentation you mentioned:
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry
If you define a container in the YAML file, it will pull the image from Docker Hub by default.
Or you can add the endpoint field to specify another registry (e.g. Azure Container Registry).
Here is the definition of the Container:
container:
  image: string # container image name
  options: string # arguments to pass to container at startup
  endpoint: string # endpoint for a private container registry
  env: { string: string } # list of environment variables to add
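For illustration, a minimal sketch of a container job that pulls from a private registry through a service connection (the image name and endpoint name here are placeholders, not from the question):

- job: GCC
  container:
    image: myregistry.azurecr.io/builder:0.1
    endpoint: my-acr-service-connection
  steps:
  - script: |
      cd src
      make
    displayName: 'Run GCC'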
This means that the container job will pull the image directly from the registry and cannot search the local cache.
But this requirement is valuable.
You could add your request for this feature on our UserVoice site, which is our main forum for product suggestions.

Related

Use ACR (Azure Container Registry) as input to Azure DevOps JSON pipeline

I am not able to find a way to pass an image from ACR as an artifact to an Azure DevOps JSON pipeline.
In other words, I am trying to replicate the artifact from Azure DevOps Releases (see attached image); I want the user to have the option to select an image from ACR while running the JSON pipeline.
Image from ACR as artifact in Azure DevOps Releases
You can use container resources to consume a container image as part of your YAML pipeline, and you can use runtime parameters to allow the user to select the image while running the pipeline. See the example below:
1. Define runtime parameters to let the user select the image.
parameters:
- name: ACRimage
  type: string
  default: image1
  values:
  - image1
  - image2
  - image3
Then, when clicking Run to run the pipeline, the user will be given the option to select which image to use in the pipeline.
2. Add the ACR container resource to your pipeline.
Before you can add the ACR container resource, you need to create a Docker Registry service connection.
Then you can define the container resource in your pipeline like below:
resources:
  containers:
  - container: ACRimage
    image: ${{parameters.ACRimage}}
    endpoint: ACR-service-connection
So the full YAML pipeline looks like below:
parameters:
- name: ACRimage
  type: string
  default: image1
  values:
  - image1
  - image2
  - image3
resources:
  containers:
  - container: ACRimage
    image: ${{parameters.ACRimage}}
    endpoint: ACR-service-connection
trigger: none
pool:
  vmImage: 'ubuntu-latest'
steps:
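# A minimal example step (an assumption, since the steps are left open above):
# simply echo the selected image or hand it to a deployment script.
- script: echo "Selected image: ${{ parameters.ACRimage }}"
  displayName: 'Use the selected ACR image'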
You can use a Container Resource Block.
You can use a first-class container resource type for Azure Container Registry (ACR) to consume your ACR images. This resource type can be used as part of your jobs and also to enable automatic pipeline triggers.
trigger:
- none # Disable trigger on the repository itself
resources:
  containers:
  - container: string # identifier for the container resource
    type: ACR
    azureSubscription: string # Azure subscription (ARM service connection) for container registry
    resourceGroup: string # resource group for your ACR
    registry: string # registry for container images
    repository: string # name of the container image repository in ACR
    trigger: true
If you want to trigger only on certain tags (or exclude certain tags), you can replace the trigger value like below:
trigger:
  tags:
    include: [ string ] # image tags to consider for trigger events, defaults to any new tag
    exclude: [ string ] # image tags to discard for trigger events, defaults to none
A complete pipeline example:
trigger:
- none # Disable trigger on the repository itself
resources:
  containers:
  - container: myId # identifier for the container resource
    type: ACR
    azureSubscription: test # Azure subscription (ARM service connection) for container registry
    resourceGroup: registry # resource group for your ACR
    registry: myregistry # registry for container images
    repository: hello-world # name of the container image repository in ACR
    trigger: true
pool:
  vmImage: 'ubuntu-latest'
steps:
- bash: |
    echo "The registry is: $(resources.container.myId.registry)"
    echo "The repository is: $(resources.container.myId.repository)"
    echo "The tag is: $(resources.container.myId.tag)"
If you push a new image to the hello-world repository, the pipeline will start:
docker pull hello-world:latest
docker tag hello-world:latest myregistry.azurecr.io/hello-world:newtag
docker push myregistry.azurecr.io/hello-world:newtag
The result of the script step is
The registry is: myregistry
The repository is: hello-world
The tag is: newtag
Sorry to inform you of this, but Azure YAML pipelines don't support this.
What danielorn suggested, 'resources.containers', is used to run your build stages in that container. I don't want to do that.
The aim is to take the image tag from the user and deploy that image, so the image needs to be passed as an artifact just like in a Release pipeline.
Sadly this is not supported as of now in YAML pipelines; I got a confirmation from the Azure team.

How to retain docker-image after release-candidate build for future push to prod registry

In our release-candidate pipeline to our QA (think "stage") environment, we are using a vmImage to build our docker-container and then pushing it to our Non-Prod registry.
pool:
  vmImage: "ubuntu-latest"
steps:
# pseudo-code: get everything prepped for buildAndPush
- task: Docker@2
  inputs:
    containerRegistry: "Our Non-Prod Registry"
    repository: "apps/app-name"
    command: "buildAndPush"
    Dockerfile: "**/Dockerfile"
    tags: |
      $(Build.SourceBranchName)
These are release-candidates. Once the code is approved for release, we want to push the same container to our Production registry; however, we can't figure out how to do it in this framework. Seems like we have only two options:
rebuild it in a different run of the pipeline later which will push it to our Production registry
push every release-candidate container to our Production registry
I don't like either of these options. What I want is a way to retain the container somewhere, and then push it to our Production registry when we decide that we want to release it.
How would we do this?
You can actually:
Push the docker image to your non-production registry.
Once it is approved for release (I don't know what exactly you mean by this in terms of your pipeline), you can pull the image, tag it and push it to the new registry (please check this topic):
docker pull old-registry/app
docker tag old-registry/app new-registry/app
docker push new-registry/app
You can also take a look here, where the above case is described.
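As a rough sketch of that promotion in an Azure Pipelines job (the registry addresses, service connection names and tag variable are placeholders, not from the question), which could sit in a stage gated by an approval:

- task: Docker@2
  inputs:
    command: login
    containerRegistry: 'NonProd-ACR-connection'
- task: Docker@2
  inputs:
    command: login
    containerRegistry: 'Prod-ACR-connection'
- script: |
    docker pull nonprodregistry.azurecr.io/apps/app-name:$(Build.SourceBranchName)
    docker tag nonprodregistry.azurecr.io/apps/app-name:$(Build.SourceBranchName) prodregistry.azurecr.io/apps/app-name:$(Build.SourceBranchName)
    docker push prodregistry.azurecr.io/apps/app-name:$(Build.SourceBranchName)
  displayName: 'Promote image to production registry'

Because the image is pulled and re-tagged rather than rebuilt, the exact bits that passed QA are what lands in the production registry.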

How to write CI/CD pipeline to run integration testing of java micro services on Google kubernetes cluster?

Background:
I have 8-9 private ClusterIP Spring-based microservices in a GKE cluster. All of the microservices have integration tests bundled with them. I am using Bitbucket and Maven as the build tool.
All of the microservices talk to each other via REST calls with URLs like: http://:8080/rest/api/fetch
Requirement: I have a testing environment ready with all the docker images up on the GKE test cluster. I want that as soon as I merge code to master for service-A, the pipeline should deploy the image to test-env and run the integration test cases. If the test cases pass, it should deploy to the QA environment; otherwise it should roll back the image of service-A to the previous one.
Issue: On every code merge to master, I am able to run the JUnit test cases of service-A, build its docker image, push it to GCR and deploy it on the test-env cluster. But how can I trigger the integration test cases after the deployment, and roll back to the previously deployed image if the integration test cases fail? Is there any way?
TIA
You can create different steps for each part:
pipelines:
  branches:
    BRANCH_NAME:
      - step:
          script:
            - BUILD
      - step:
          script:
            - DEPLOY
      - step:
          script:
            - First set of JUnit tests
      - step:
          script:
            - Run integration tests (here you can add the rollback if they fail)
      - step:
          script:
            - Upload to QA
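A hedged sketch of that rollback idea (the test script and deployment name are placeholders, not from the question): the integration-test step can undo the rollout when the tests fail.

- step:
    name: Integration tests with rollback
    script:
      # Roll service-A back to the previous image if the integration tests fail.
      - ./integration_tests.sh || (kubectl rollout undo deployment/service-a && exit 1)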
There are many ways you can do it. From the above information it's not clear which build tool you are using.
Let's say you are using Bamboo: you can create a task for this and include it in the SDLC process. The task would mostly contain a Bamboo script or an Ansible script.
You could also create a separate shell script to run the integration test suite after deployment.
You should probably check what Tekton is offering.
The Tekton Pipelines project provides k8s-style resources for declaring CI/CD-style pipelines.
If you use GitLab CI/CD you can break the stages down as follows:
stages:
  - compile
  - build
  - test
  - push
  - review
  - deploy
where you compile the code in the first stage, build the docker images from it in the next, and then pull the images and run them to do all your tests (including the integration tests).
Here is a mockup of how it will look:
compile-stage:
  stage: compile
  script:
    - echo 'Compiling Application'
    # - bash my compile script
  # Compile artifacts can be used in the build stage.
  artifacts:
    paths:
      - out/dist/dir
    expire_in: 1 week

build-stage:
  stage: build
  script:
    - docker build . -t "${CI_REGISTRY_IMAGE}:testversion" ## Dockerfile should make use of out/dist/dir
    - docker push "${CI_REGISTRY_IMAGE}:testversion"

test-stage1:
  stage: test
  script:
    - docker run ${CI_REGISTRY_IMAGE}:testversion bash unit_test.sh

test-stage2:
  stage: test
  script:
    - docker run -d ${CI_REGISTRY_IMAGE}:testversion
    - ./integration_test.sh

## You will only push the latest image if the build passes all the tests.
push-stage:
  stage: push
  script:
    - docker pull ${CI_REGISTRY_IMAGE}:testversion
    - docker tag ${CI_REGISTRY_IMAGE}:testversion ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:latest

## An app will be deployed on staging if it has passed all the tests.
## The concept of CI/CD is generally that you should do all the automated tests before even deploying on staging. Staging can be used for User Acceptance and Quality Assurance tests etc.
deploy-staging:
  stage: review
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
    on_stop: stop_review
  only:
    - branches
  script:
    - kubectl apply -f deployments.yml

## The deployment to the production environment is manual and only happens when a version tag is committed.
deploy-production:
  stage: deploy
  environment:
    name: prod
    url: https://$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  only:
    - tags
  script:
    - kubectl apply -f deployments.yml
  when: manual
I hope the above snippet will help you. If you want to learn more about deploying microservices using GitLab CI/CD on GKE, read this.

How to Enable Docker layer caching in Azure DevOps

I am running the YAML script below to build Docker images and push them to a Kubernetes cluster, but I also want to enable Docker layer caching in Azure DevOps while building. Could you please explain how to enable this, or how to add a task in Azure DevOps that does it?
YAML:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  tag: 'web'
  DockerImageName: 'boiyaa/google-cloud-sdk-nodejs'

steps:
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: '**/Dockerfile'
    tags: 'web'

- script: |
    echo ${GCLOUD_SERVICE_KEY_STAGING} > ${HOME}/gcp-key.json
    gcloud auth activate-service-account --key-file ${HOME}/gcp-key.json --project ${GCLOUD_PROJECT_ID_STAGING}
    gcloud container clusters get-credentials ${GCLOUD_PROJECT_CLUSTER_ID_STAGING} \
      --zone ${GCLOUD_PROJECT_CLUSTER_ZONE_STAGING} \
      --project ${GCLOUD_PROJECT_ID_STAGING}
  displayName: 'Setup-staging_credentials'

- bash: bash ./deploy/deploy-all.sh staging
  displayName: 'Deploy_script_staging'
Here's how I fixed this. I just pull the latest version of the image from my registry (Azure Container Registry in my case) to the Azure DevOps hosted agent. Then I add --cache-from to the Docker build arguments, pointing to this latest tag which was just downloaded to the local machine/cache.
- task: Docker@2
  inputs:
    containerRegistry: '$(ContainerRegistryName)'
    command: 'login'

- script: "docker pull $(ACR_ADDRESS)/$(REPOSITORY):latest"
  displayName: Pull latest for layer caching
  continueOnError: true # for first build, no cache

- task: Docker@2
  displayName: build
  inputs:
    containerRegistry: '$(ContainerRegistryName)'
    repository: '$(REPOSITORY)'
    command: 'build'
    Dockerfile: './dockerfile'
    buildContext: '$(BUILDCONTEXT)'
    arguments: '--cache-from=$(ACR_ADDRESS)/$(REPOSITORY):latest'
    tags: |
      $(Build.BuildNumber)
      latest

- task: Docker@2
  displayName: "push"
  inputs:
    command: push
    containerRegistry: "$(ContainerRegistryName)"
    repository: $(REPOSITORY)
    tags: |
      $(Build.BuildNumber)
      latest
Docker layer caching is not supported in Azure DevOps currently. The reason is stated as below:
In the current design of Microsoft-hosted agents, every job is dispatched to a newly provisioned virtual machine. These virtual machines are cleaned up after the job reaches completion, not persisted and thus not reusable for subsequent jobs. The ephemeral nature of virtual machines prevents the reuse of cached Docker layers.
However:
Docker layer caching is possible using self-hosted agents. You can try creating your own on-premises agents to run your build pipeline.
You may need to disable the job's option 'Allow scripts to access the OAuth token'. This is because $(System.AccessToken) is passed to docker build using --build-arg ACCESS_TOKEN=$(System.AccessToken), and its value varies for every run, which invalidates the cache.
You can also use the Cache task and docker save/load commands to upload the saved Docker layers to the Azure DevOps server and restore them on a future run (a rough sketch follows below). Check this thread for more information.
Another workaround as described in this blog is to use --cache-from and --target in your Dockerfile.
If the above workarounds are not satisfying, you can submit a feature request to the Microsoft development team. Click Suggest a Feature and choose Azure DevOps.
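As a rough illustration of the Cache task plus docker save/load workaround mentioned above (the image name, cache key and paths are assumptions, not taken from the linked thread):

- task: Cache@2
  displayName: Cache docker image archive
  inputs:
    key: 'docker | "$(Agent.OS)" | Dockerfile'
    path: $(Pipeline.Workspace)/docker-cache
    cacheHitVar: DOCKER_CACHE_RESTORED

- script: docker load -i $(Pipeline.Workspace)/docker-cache/image.tar
  displayName: Load cached image
  condition: eq(variables.DOCKER_CACHE_RESTORED, 'true')

- script: |
    # Build using the previously loaded image as a layer cache, then save the
    # result so the Cache task can upload it for the next run.
    docker build -t myimage:latest --cache-from myimage:latest .
    mkdir -p $(Pipeline.Workspace)/docker-cache
    docker save -o $(Pipeline.Workspace)/docker-cache/image.tar myimage:latest
  displayName: Build and save image for caching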
Edit: as pointed out in the comments, this feature is actually available without BuildKit. There's an example here on how to use a Docker image as the cache source during a build.
By adding the variable DOCKER_BUILDKIT: 1 (see this link) to the pipeline job and installing buildx, I managed to achieve layer caching by storing the cache as a separate image. See this link for some basics
Here's an example step in Azure DevOps
- script: |
    image="myreg.azurecr.io/myimage"
    tag=$(Build.SourceBranchName)-$(Build.SourceVersion)
    cache_tag=cache-$(Build.SourceBranchName)
    docker buildx create --use
    docker buildx build \
      -t "${image}:${tag}" \
      --cache-from=type=registry,ref=${image}:${cache_tag} \
      --cache-to=type=registry,ref=${image}:${cache_tag},mode=max \
      --push \
      --progress=plain \
      .
  displayName: Build & push image using remote BuildKit layer cache
This of course will require each run to download the image cache, but for images that have long-running installation steps in the Docker build process this is definitely faster (in our case from about 8 minutes to 2).
It looks like Microsoft introduced Pipeline Caching for Azure DevOps a while ago, and it's possible to cache docker layers. See this link.
You can also set up "local" Docker layer caching in your Pipeline VM if you don't want to push up your cache to a container registry. You'll need the following steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: $(SERVICE_NAME)

- task: Cache@2
  displayName: Cache task
  inputs:
    key: 'docker | "$(Agent.OS)" | "$(Build.SourceVersion)"'
    path: /tmp/.buildx-cache
    restoreKeys: 'docker | "$(Agent.OS)"'

- bash: |
    docker buildx create --driver docker-container --use
    docker buildx build --cache-to type=local,dest=/tmp/.buildx-cache-new --cache-from type=local,src=/tmp/.buildx-cache --push --target cloud --tag $REGISTRY_NAME/$IMAGE_NAME:$TAG_NAME .
  displayName: Build Docker image

# optional: set up deploy steps here

- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: $(SERVICE_NAME)
The key here is to set up Docker buildx and run it with the --cache-to and --cache-from flags instead of using the Azure Docker task. You'll also need to use the Cache task to make sure the Docker cache is reloaded in subsequent pipeline runs, and you'll have to set up a manual swap step where the newly-generated cache replaces the old cache.
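One possible shape for that swap step (paths match the example above; the step itself is an assumption, not part of the original snippet):

- bash: |
    # Replace the old cache with the freshly written one so the Cache task
    # uploads up-to-date layers at the end of the job.
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache
  displayName: Swap buildx caches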
Had the same problem. It turns out it was the "npm authenticate" task that was breaking the caching by inserting a new token each time. I just put a static .npmrc file into the Pipeline > Library > Secure files depot and everything became unbelievably fast:
- task: DownloadSecureFile@1
  name: 'npmrc'
  displayName: 'Download of the npmrc authenticated'
  inputs:
    secureFile: '.npmrc'

- task: CopyFiles@2
  inputs:
    SourceFolder: $(Agent.TempDirectory)
    contents: ".npmrc"
    TargetFolder: $(Build.SourcesDirectory)/Code
    OverWrite: true
  displayName: 'Import of .npmrc'

- task: Docker@2
  displayName: Build an image
  inputs:
    command: build
    dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
    tags: |
      $(tag)
The only drawback of this is that personal access tokens last a year, so you need to replace your secure file every year...

Is it possible to build a docker image without pushing it?

I want to build a docker image in my pipeline and then run a job inside it, without pushing or pulling the image.
Is this possible?
It's by design that you can't pass artifacts between jobs in a pipeline without using some kind of external resource to store it. However, you can pass between tasks in a single job. Also, you specify images on a per-task level rather than a per-job level. Ergo, the simplest way to do what you want may be to have a single job that has a first task to generate the docker-image, and a second task which consumes it as the container image.
In your case, you would build the docker image in the build task and use docker export to export the image's filesystem to a rootfs which you can put into the output (my-task-image). Keep in mind the particular schema that the rootfs output needs to match: you will need rootfs/... (the extracted 'docker export') and a metadata.json which can just contain an empty JSON object. You can look at the in script within the docker-image-resource for more information on how to make it match the schema: https://github.com/concourse/docker-image-resource/blob/master/assets/in. Then in the subsequent task, you can add the image parameter in your pipeline YAML as such:
- task: use-task-image
  image: my-task-image
  file: my-project/ci/tasks/my-task.yml
in order to use the built image in the task.
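A hedged sketch of what the build task's script might do (the image tag is a placeholder; it assumes the task declares an output named my-task-image and has access to a Docker daemon):

# Build the image, then lay out the rootfs/ plus metadata.json structure
# that the docker-image-resource schema expects.
docker build -t built-image .
cid=$(docker create built-image)
mkdir -p my-task-image/rootfs
docker export "$cid" | tar -xf - -C my-task-image/rootfs
echo '{}' > my-task-image/metadata.json
docker rm "$cid"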
UPDATE: the PR was rejected.
This answer doesn't currently work, as the "dry_run" PR was rejected. See https://github.com/concourse/docker-image-resource/pull/185
I will update here if I find an approach which does work.
The "dry_run" parameter which was added to the docker resource in Oct 2017 now allows this (github pr)
You need to add a dummy docker resource like:
resources:
- name: dummy-docker-image
type: docker-image
icon: docker
source:
repository: example.com
tag: latest
- name: my-source
type: git
source:
uri: git#github.com:me/my-source.git
Then add a build step which pushes to that docker resource but with "dry_run" set so that nothing actually gets pushed:
jobs:
- name: My Job
  plan:
  - get: my-source
    trigger: true
  - put: dummy-docker-image
    params:
      dry_run: true
      build: path/to/build/scope
      dockerfile: path/to/build/scope/path/to/Dockerfile