What does Lock services in Azure Pipelines' task DockerCompose actually do? - docker-compose

I am learning to use Azure Pipelines for CI/CD. I read the official documentation and found that the Docker Compose task has an action called Lock services. I have no idea what this action actually does or what it means to lock the images.
Can anyone explain it, or provide some examples of when and how to use it?

The source code of this task is public, so you can check this page to analyze exactly what this action does.
An image has two different identifiers: a tag and a digest. Now, let's consider one scenario:
Most of the time, a tagged image in a container registry is mutable, so with the appropriate permissions you or anyone else can update/push/delete an image with the same tag in that registry. However, when you deploy an image to a production environment, you cannot be sure that the image behind a specific tag has not been overwritten and is still the one you want to deploy.
In that case, a digest is the better choice.
A digest is a SHA256 hash calculated from the image content that identifies it uniquely. If anything in the image changes, the corresponding SHA256 value changes as well.
Explanation of this action:
Check this code line (defined here). Its logic is to read the image(s) used in the docker-compose.yml file, pull them, and resolve a digest for each one. Then a new docker-compose.yml file is generated automatically, in which each image is pinned by its digest.
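Conceptually (a rough sketch of the idea, not the task's actual code), resolving a tag to a digest looks like this:
# pull the tagged image, then read back its repository digest
docker pull xxxx/testwebapp
docker image inspect --format '{{index .RepoDigests 0}}' xxxx/testwebapp
# prints something like xxxx/testwebapp@sha256:27b89a...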
Sample:
The task definition I used:
- task: DockerCompose@0
  displayName: 'Lock services'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: {service connection name}
    dockerComposeFile: 'Docker/docker-compose.yml'
    action: 'Lock services'
    removeBuildOptions: true
The docker-compose.yml:
version: '3'
services:
  web:
    image: xxxx/testwebapp
    ports:
      - "1983:80"
  newsfeed:
    image: xxx/merlin
  redis:
    image: redis
See the build log of this task, and the contents of the new docker-compose.yml file it generated (listed with a cat command):
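The generated file pins each image by digest instead of by tag; with placeholder digests, it looks roughly like this:
version: '3'
services:
  web:
    image: xxxx/testwebapp@sha256:<digest>
    ports:
      - "1983:80"
  newsfeed:
    image: xxx/merlin@sha256:<digest>
  redis:
    image: redis@sha256:<digest>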
Now, when you deploy the images to production, just use the new docker-compose.yml file the task generated. This guarantees that the deployed image is the version you built originally, even if someone overwrites the tag later.

Related

Running a Concourse task with a registry-image resource

I am using Concourse CI in combination with a private Docker registry and everything works fine. However, I want to run a task with an image I provide via the registry. To clarify: I don't want to run the image within the task; the task's image source should be my image. Unfortunately I wasn't able to find an example here or in the Concourse CI docs.
My resource:
resources:
  - name: my-image
    type: registry-image
    source:
      repository: ((registry-url))/my-image
      username: ...
      password: ...
      ca_certs:
        - ((registry-cert))
So, if I'm correct, the task/config/source cannot take a pipeline resource, only an anonymous resource for which I would provide a docker.io link.
I would appreciate some help. :)
Edit: OK, so my first mistake was to only look at the task schema. I can configure an image (https://concourse-ci.org/jobs.html#schema.step.task-step.image), but when I do:
- task: test
  image: my-image
  config:
    platform: linux
    inputs:
    run:
      ...
I get this error: find or create container on worker 4c38517c9713: no image plugin configured.
OK, so the answer was to make the image privileged, for some reason...
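For reference, a sketch of what that might look like (assuming the privileged flag is set on the task step that uses the image):
jobs:
  - name: test-job
    plan:
      - get: my-image
      - task: test
        image: my-image
        privileged: true   # without this, the worker reported "no image plugin configured"
        config:
          platform: linux
          run:
            path: echo
            args: ["it works"]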

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step's output? I have a dynamic URL resulting from the deployment to AWS which I can't define up front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 bucket inside our AWS account. We can read the dynamically created bucket name using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are two crucial points here: first, we should use id inside the deployment step to define a step name we can easily access via step_name inside our environment.url. Second, we need to define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create a variable called s3_url. You could replace pulumi stack output bucketUrl with any other command or tool that returns your dynamic environment URL.
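Note that newer GitHub Actions runners deprecate the ::set-output command in favor of writing to the GITHUB_OUTPUT environment file; a minimal equivalent of the step above (keeping the same s3_url output name) would be:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    # append to the GITHUB_OUTPUT file instead of using the deprecated ::set-output command
    echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"
  working-directory: ./deployment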
Also be sure to add http:// or https:// in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
      ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then directly references the variable containing our S3 URL. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job:
Here's also a fully working workflow embedded inside an example project.

Docker Compose task on Azure DevOps cannot start daemon

I'm unable to run the Docker Compose task on Azure DevOps, and every solution I've found online either makes no sense or does not work for my scenario.
The job output for the failure is:
This is a very simple process: artifacts are copied to a folder during the build, and the docker-compose.yml and .dockerfile are added to this directory, which then needs to be run.
One article explained that if you put your docker-compose.yml in the same folder as the .dockerfile and the files the image will be hosting, it might cause the daemon to fall over and produce this generic error, so I've added a .dockerignore file, but the issue persists.
I'm using a Hosted Agent - Ubuntu-18.04.
My task looks like this:
steps:
  - task: DockerCompose@0
    displayName: 'Run a Docker Compose command'
    inputs:
      azureSubscription: 'Test Dev Ops'
      azureContainerRegistry: '{"loginServer":"testdevops.azurecr.io", "id" : "/subscriptions/{subscription_key}/resourceGroups/Test.Devops/providers/Microsoft.ContainerRegistry/registries/testdevops"}'
      dockerComposeFile: '$(System.DefaultWorkingDirectory)/$(Release.PrimaryArtifactSourceAlias)/test.ng.$(Build.BuildNumber)/dist/testweb/docker-compose-build.yml'
      dockerComposeCommand: build
      arguments: '--build-arg azure_pat=$(System.AccessToken) --build-arg azure_username=Azure'
The idea here is that this container is composed and delivered straight to Azure's Container Registry.
I have ensured that the user running this process has been granted permissions in that ACR, and I have also added the user to the administrative group in Azure DevOps.
A lot of responses talk about adding the user to the Docker group, but this is a hosted agent, not a private agent, so that is not an option.
I have even tried installing the Docker CLI before this task, but nothing works.
Am I being daft to think that I can compose in Azure DevOps?
Edit
The contents of my artifacts folder look something like this:
This error message is extremely misleading. If anyone from Microsoft is looking at this question, please consider making the error more specific, if possible.
It turned out I had missed a semicolon in a build task that replaced tokens before the build artifacts were published from the build output, and because of that, the YAML file still had a #{..} token inside it, which caused docker-compose to fail.
It had nothing to do with permissions or a .dockerignore file; very misleading.
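For illustration (a hypothetical snippet, not the actual file), an unreplaced token left in the compose file looks something like this; because # starts a comment in YAML, the image value effectively becomes empty and docker-compose fails with a confusing error:
services:
  web:
    # the #{...}# token from the token-replacement step was never substituted,
    # so YAML treats everything after the first # as a comment and the image is empty
    image: #{registry}#/testweb:latest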

Is it possible to use variables in a codeship-steps.yml file?

We currently use Codeship Pro to push Docker images to a private registry on AWS, as well as to deploy those images to an ECS cluster.
However, the codeship-steps.yml file includes a hard-coded name for the AWS region I'm pushing to. For example:
- name: push_production
  service: app
  type: push
  image_name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/project/app-name
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  dockercfg_service: aws_generator
I would like to be able to fairly easily switch this to deploy to a different AWS region. Thus the question:
Is it possible to use variables in a codeship-steps.yml file?
I know some of the properties can use a handful of built-in variables provided by Codeship (such as the {{.Timestamp}} value used for the image_tag property), but I don't know if, for example, values from an env_file can be used in the image_name, registry, and/or command properties of a step.
I'm imagining something like this...
codeship-steps.yml:
- name: push_production
  service: app
  type: push
  image_name: "123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com/project/app-name"
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: "https://123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com"
  dockercfg_service: aws_generator
... but that results in an "error parsing image name during push step: invalid reference format" on the push step.
I've tried simply not specifying the registry in the image_name...
image_name: project/app-name
... but I get a "Build Error: no basic auth credentials" on the push step. At this point, I'm running out of ideas.
Is it possible to use [environment] variables in a codeship-steps.yml file?
While the image_tag can take advantage of Go templates, the same is not the case for image_name, registry, or anything else. This is a separate set of templating variables that are accessible only to the image_tag generation.
As for environment variables in general (CI environment variables or those defined in the service configs), these values can be used in codeship-steps.yml on the command step when passed through a shell command. For example:
- service: app
  command: echo The branch name is: $CI_BRANCH
Results in:
The branch name is: $CI_BRANCH
- service: app
  command: /bin/sh -c 'echo The branch name is: $CI_BRANCH'
Results in:
The branch name is: master
As for your 'no basic auth credentials' error message, it's possible that there's an issue with how you are retrieving the basic auth credentials for access to your image registry. If you are on a MacOS device, I would recommend that you review our documentation on how to generate Docker credentials.

How to parameterise concourse task files

I'm pretty impressed by the power and simplicity of Concourse. Since my pipelines keep growing, I decided to move the tasks into separate files. One of the tasks uses a custom Docker image from our own private registry. So, in that task file I have:
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: {{dckr-user}}
    password: {{dckr-pass}}
When I do a set-pipeline, I pass the --load-vars-from argument to load credentials etc. from a separate file.
Now here's my problem: I notice that the vars in my pipeline files are replaced with the actual correct values, but once the task runs, the aforementioned {{dckr-user}} and {{dckr-pass}} are not replaced.
How do I achieve this?
In addition to what was provided in this answer:
If you are specifically looking to use private images in a task, you can do the following in your pipeline.yml:
resources:
  - name: some-private-image
    type: docker-image
    source:
      repository: ...
      username: {{my-username}}
      password: {{my-password}}
jobs:
  - name: foo
    plan:
      - get: some-private-image
      - task: some-task
        image: some-private-image
Because this is your pipeline, you can use --load-vars-from, which will first get your image as a resource and then use it for the subsequent task.
You can also see this article on pre-fetching ruby gems in test containers on Concourse
The only downside to this is you cannot use this technique when running a fly execute.
As of Concourse v3.3.0, you can set up Credential Management in order to use variables from one of the supported credential managers, which are currently Vault, CredHub, Amazon SSM, and Amazon Secrets Manager. So you no longer have to keep parts of your task configuration in the pipeline.yml. The values you set in Vault will also be accessible from the task.yml files.
And since v3.2.0 {{foo}} is deprecated in favor of ((foo)).
Using the Credential Manager you can parameterize:
source under resources in a pipeline
source under resource_types in a pipeline
webhook_token under resources in a pipeline
image_resource.source under image_resource in a task config
params in a pipeline
params in a task config
For setting up Vault with Concourse, you can refer to:
https://concourse-ci.org/creds.html
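For example, with a credential manager configured, a task.yml along these lines can pull the private image directly (a sketch; the dckr-user and dckr-pass secret names are assumed to exist in your credential manager):
platform: linux
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: ((dckr-user))
    password: ((dckr-pass))
run:
  path: sh
  args: ["-c", "echo building"]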
You can always define tasks in a pipeline.yml...
For example:
jobs:
  - name: dotpersecond
    plan:
      - task: dotpersecond
        config:
          image_resource:
            type: docker-image
            source:
              repository: docker.mycomp.com:443/app-builder
              tag: latest
              username: {{dckr-user}}
              password: {{dckr-pass}}
          run:
            path: sh
            args:
              - "-c"
              - |
                for i in `seq 1000`; do echo hi; sleep 2; done