How to parameterise Concourse task files

I'm pretty impressed by the power and simplicity of Concourse. Since my pipelines keep growing, I decided to move the tasks to separate files. One of the tasks uses a custom Docker image from our own private registry. So, in that task file I have:
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: {{dckr-user}}
    password: {{dckr-pass}}
When I do a set-pipeline, I pass the --load-vars-from argument to load credentials etc. from a separate file.
Now here's my problem: I notice that the vars in my pipeline files are replaced with the actual correct values, but once the task runs, the aforementioned {{dckr-user}} and {{dckr-pass}} are not replaced.
How do I achieve this?

In addition to what was provided in this answer
If you are specifically looking to use private images in a task, you can do the following in your pipeline.yml:
resources:
- name: some-private-image
  type: docker-image
  source:
    repository: ...
    username: {{my-username}}
    password: {{my-password}}

jobs:
- name: foo
  plan:
  - get: some-private-image
  - task: some-task
    image: some-private-image
Because this is in your pipeline, you can use --load-vars-from; the pipeline will first get your image as a resource and then use it for the subsequent task.
You can also see this article on pre-fetching ruby gems in test containers on Concourse
The only downside to this is you cannot use this technique when running a fly execute.
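For reference, a minimal sketch of how the vars file and the set-pipeline call fit together (the target, pipeline, and file names here are only examples):

credentials.yml:

dckr-user: my-registry-user
dckr-pass: my-registry-password

fly -t my-target set-pipeline -p my-pipeline -c pipeline.yml --load-vars-from credentials.yml

Note that this interpolation happens once, at set-pipeline time, which is why it only affects YAML that is part of the pipeline itself and not stand-alone task files.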

As of Concourse v3.3.0, you can set up Credential Management in order to use variables from one of the supported credential managers, which are currently Vault, CredHub, Amazon SSM, and Amazon Secrets Manager. This means you no longer have to inline parts of your task files in the pipeline.yml. The values you set in Vault will also be accessible from the task.yml files.
And since v3.2.0 {{foo}} is deprecated in favor of ((foo)).
Using the Credential Manager you can parameterize:
source under resources in a pipeline
source under resource_types in a pipeline
webhook_token under resources in a pipeline
image_resource.source under image_resource in a task config
params in a pipeline
params in a task config
For setting up Vault with Concourse you can refer to:
https://concourse-ci.org/creds.html
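As a hedged example, a stand-alone task.yml could then reference credential-manager variables directly with the ((var)) syntax (the registry, repository, and variable names below are illustrative):

platform: linux
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: ((dckr-user))
    password: ((dckr-pass))
run:
  path: sh
  args: ["-c", "echo building"]

Because the credential manager resolves ((dckr-user)) and ((dckr-pass)) when the task runs, this works even though the task file is never passed through set-pipeline interpolation.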

You can always define tasks in a pipeline.yml...
For example:
jobs:
- name: dotpersecond
  plan:
  - task: dotpersecond
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: docker.mycomp.com:443/app-builder
          tag: latest
          username: {{dckr-user}}
          password: {{dckr-pass}}
      run:
        path: sh
        args:
        - "-c"
        - |
          for i in `seq 1000`; do echo hi; sleep 2; done

Related

Running a Concourse task with a registry-image resource

I am using Concourse CI in combination with a private Docker registry and everything works fine. However, I want to run a task with an image I provide via the registry. To clarify: I don't want to run the image within the task; the task itself should use my image as its container image. Unfortunately I wasn't able to find an example here or in the Concourse CI docs.
My resource:
resources:
- name: my-image
  type: registry-image
  source:
    repository: ((registry-url))/my-image
    username: ...
    password: ...
    ca_certs:
    - ((registry-cert))
So, if I understand correctly, the task's config cannot take a named pipeline resource, only an anonymous image_resource where I would provide a docker.io reference.
I am very appreciative for some help. :)
Edit: OK, so my first mistake was to only look at the task schema. I can configure an image on the task step (https://concourse-ci.org/jobs.html#schema.step.task-step.image), but when I do:
- task: test
  image: my-image
  config:
    platform: linux
    inputs:
    run:
      ...
I get this error: find or create container on worker 4c38517c9713: no image plugin configured.
OK, so the answer was to make the image privileged, for some reason...
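For completeness, a minimal sketch of the overall pattern, fetching the private image as a pipeline resource and then using it as the task's container image (the job name and run command are just placeholders):

jobs:
- name: test
  plan:
  - get: my-image
  - task: test
    image: my-image
    config:
      platform: linux
      run:
        path: sh
        args: ["-c", "echo running inside my-image"]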

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an Environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions Environments:
Running a workflow that references an environment that does not exist will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 bucket inside our AWS account. We can read the dynamically created bucket name using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are two crucial points here. First, we use id on the deployment step so that we can reference it later as step_name inside environment.url. Second, we define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create a variable called s3_url. You could replace pulumi stack output bucketUrl with any other command or tool that returns your dynamic environment URL.
Also be sure to add a http:// or https:// prefix in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
        ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then references the variable containing our S3 URL. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
Here's also a fully working workflow embedded inside an example project.
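Note that newer GitHub Actions runners deprecate the ::set-output workflow command in favor of writing to the GITHUB_OUTPUT environment file, so an equivalent deploy step would look roughly like this (everything else, including the environment.url reference, stays the same):

- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"
  working-directory: ./deployment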

Azure Pipelines parameter value from variable template

We would like to deploy components of our application to developer's local machines and want it to be easy enough for our co-workers to use and easy enough for us to maintain. These are virtual machines with a certain naming convention, for instance: VM001, VM002, and so on.
I can define these machines, and use the value later on in the pipeline, in a parameter in YAML like this:
parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - VM001
  - VM002
  - And so on...
I then only have to maintain one stage, because the only thing that really differs is the stage name:
stages:
- stage: ${{ parameters.stage }}
  displayName: Deploy on ${{ parameters.stage }}
  jobs:
  ...
The idea behind defining the machines in the parameters like this is that developers can choose their virtual machine from the 'Stage' dropdown when they want to deploy to their own virtual machine. By setting the value of the parameter to the virtual machine, the stage is named and the correct library groups will also be linked up to the deployment (each developer has their own library groups where we store variables such as accounts and secrets).
However, we have multiple components that we deploy through multiple pipelines. So each component gets its own YAML pipeline and for each pipeline we will have to enter and maintain the same list of virtual machines.
We already use variable and job templates for reusability. I want to find a way to create a template with the list of machines and pass it to the parameter value. This way, we only need to maintain one template so whenever someone new joins the team or someone leaves, we only need to update one file instead of updating all the pipelines.
I've tried to pass the template to the parameter value using an expression like this:
variables:
- name: VirtualMachinesList
  value: VirtualMachinesList.yml

parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - ${{ variables.VirtualMachinesList }}
The VirtualMachinesList.yml looks like this:
variables:
- name: VM001
  value: VM001
- name: VM002
  value: VM002
- And so on...
This gives the following error when I try to run the pipeline:
A template expression is not allowed in this context
I've also tried changing the parameter type to object. This results in a text field listing all the virtual machines, from which you can remove the ones you don't want to deploy to. This isn't very user-friendly and is also very error-prone, so not a very desirable solution.
Is there a way to pass the list of virtual machines to the parameter value from a single location, so that developers can choose their own virtual machine to deploy to?
I know you want to maintain the list of virtual machines in one place and also keep the ability for developers to choose their VM from the dropdown. But I am afraid this cannot be done currently. Runtime parameters do not support templates yet. You can submit a user voice request regarding this issue.
Currently you can keep only one of the two: either maintain the VMs in one place, or let developers choose their VM from the dropdown.
1. To maintain the virtual machines in one place, you can define a variable template to hold the virtual machines and have developers type in the VM to deploy to. See below.
Define an empty runtime parameter for the developer to type into:
parameters:
- name: vm
  type: string
  default:
Define the variable template to hold the VMs:

# variables.yml template
variables:
  vm1: vm1
  vm2: vm2
  ...
Then, in the pipeline, define a variable that refers to the VM variable from the variables template. See below:

variables:
- template: variables.yml
- name: vmname
  value: $[variables.${{parameters.vm}}]

steps:
- powershell: echo $(vmname)
2. To give developers the convenience of choosing their VM from the dropdown, you have to define the machine parameters in every pipeline.
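Putting approach 1 together, a minimal sketch of a consuming pipeline might look like this (file names and VM names are illustrative):

# azure-pipelines.yml
parameters:
- name: vm
  type: string
  default:

variables:
- template: variables.yml   # holds vm1: vm1, vm2: vm2, ...
- name: vmname
  value: $[variables.${{parameters.vm}}]

steps:
- powershell: Write-Host "Deploying to $(vmname)"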
You're really close. You'll want to update how you're consuming your variable template to:
variables:
- template: variable-template.yml
Here's a working example (assuming both the variable template and consuming pipeline are within the same directory of a repository):
variable-template.yml:
variables:
- name: VM001
  value: VM001
- name: VM002
  value: VM002
example-pipeline.yml:
name: Stackoverflow-Example-Variables

trigger:
- none

variables:
- template: variable-template.yml

stages:
- stage: StageA
  displayName: "Stage A"
  jobs:
  - job: output_message_job
    displayName: "Output Message Job"
    pool:
      vmImage: "ubuntu-latest"
    steps:
    - powershell: |
        Write-Host "Root Variable: $(VM001), $(VM002)"
For reference, here's the MS documentation on variable template usage:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#variable-reuse

Is it possible to use variables in a codeship-steps.yml file?

We currently use Codeship Pro to push Docker images to a private registry on AWS, as well as to deploy those images to an ECS cluster.
However, the codeship-steps.yml file includes a hard-coded name for the AWS region I'm pushing to. For example:
- name: push_production
  service: app
  type: push
  image_name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/project/app-name
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  dockercfg_service: aws_generator
I would like to be able to fairly easily switch this to deploy to a different AWS region. Thus the question:
Is it possible to use variables in a codeship-steps.yml file?
I know some of the properties can use a handful of built-in variables provided by Codeship (such as the {{.Timestamp}} value used for the image_tag property), but I don't know if, for example, values from an env_file can be used in the image_name, registry, and/or command properties of a step.
I'm imagining something like this...
codeship-steps.yml:
- name: push_production
  service: app
  type: push
  image_name: "123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com/project/app-name"
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: "https://123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com"
  dockercfg_service: aws_generator
... but that results in an "error parsing image name during push step: invalid reference format" on the push step.
I've tried simply not specifying the registry in the image_name...
image_name: project/app-name
... but I get a "Build Error: no basic auth credentials" on the push step. At this point, I'm running out of ideas.
Is it possible to use [environment] variables in a codeship-steps.yml file?
While the image_tag can take advantage of Go templates, the same is not the case for image_name, registry, or anything else. This is a separate set of templating variables that are accessible only to the image_tag generation.
As for environment variables in general (CI environment variables or those defined in the service configs), these values can be used in codeship-steps.yml on the command step when passed through a shell command. For example:
- service: app
  command: echo The branch name is: $CI_BRANCH
Results in:
The branch name is: $CI_BRANCH
- service: app
  command: /bin/sh -c 'echo The branch name is: $CI_BRANCH'
Results in:
The branch name is: master
As for your 'no basic auth credentials' error message, it's possible that there's an issue with how you are retrieving the basic auth credentials for access to your image registry. If you are on a macOS device, I would recommend reviewing our documentation on how to generate Docker credentials.

Triggering tasks on semver change: triggers jobs out of order

Here's what I'm trying to achieve:
I have a project with a build job for a binary release. The binary takes a while to cross-compile for each platform, so I only want the release build to run when a release is tagged, but I want the local-native version to build and the tests to run for every checked-in version.
Based on the flight-school demo... so far, my pipeline configuration looks like this:
resources:
- name: flight-school
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master

- name: flight-school-version
  type: semver
  source:
    driver: git
    uri: https://github.com/nbering/flight-school
    branch: master
    file: version

jobs:
- name: test-app
  plan:
  - get: flight-school
    trigger: true
  - task: tests
    file: flight-school/build.yml

- name: release-build
  plan:
  - aggregate:
    - get: flight-school-version
      trigger: true
    - get: flight-school
      passed: [test-app]
  - task: release-build
    file: flight-school/ci/release.yml
This produces a pipeline in the Web UI that looks like this:
The problem is that when I update the version file in the git repository, the semver resource "flight-school-version" can check before the git resource "flight-school", causing the release build to run against the git revision from the previous check-in.
I'd like a way to work around this so that the release build appears as a separate task, but only triggers when the version is bumped.
Some things I've thought of so far:
Create a separate git resource with a tag_filter set so that it only runs when a semver tag has been pushed to master
Pro: Jobs only run when tag is pushed
Con: Has the same disconnected-inheritance problem for tests as the semver-based example above
Add the conditional check for a semver tag (or change diff on a file) using the git history in the checkout as part of the build script
Pro: Will do basically what I want without too much wrestling with Concourse
Con: Can't see the difference in the UI without actually reading the build output
Con: Difficult to compose with other tasks and resource types to do something with the binary release
Manually trigger release build
Pro: Simple to set up
Con: Requires manual intervention.
Use the API to trigger a paused build step on completion of tests when a version change is detected
Con: Haven't seen any examples of others doing this, seems really complicated.
I haven't found a way to trigger a task when both the git resource and semver resource change.
I'm looking for either an answer to solve the concurrency problem in my above example, or an alternative pattern that would produce a similar release workflow.
Summary
Here's what I came up with for a solution, based on suggestions from the Concourse CI slack channel.
I added a parallel "release" track, which filters on tags that look like semantic version numbers. The two tracks share task configuration files and build scripts.
Tag Filtering
The git resource supports a tag_filter option. From the README:
tag_filter: Optional. If specified, the resource will only detect commits
that have a tag matching the expression that have been made against
the branch. Patterns are glob(7)
compatible (as in, bash compatible).
I used a simple glob pattern to match my semver tags (like v0.0.1):
v[0-9]*
At first I tried an "extglob" pattern, matching semantic versions exactly, like this:
v+([0-9]).+([0-9]).+([0-9])?(\-+([-A-Za-z0-9.]))?(\++([-A-Za-z0-9.]))
That didn't work, because the git resource isn't using the extglob shell option.
The end result is a resource that looks like this:
resource:
- name: flight-school-release
type: git
source:
uri: https://github.com/nbering/flight-school
branch: master
tag_filter: 'v[0-9]*'
Re-Using Task Definitions
The next challenge I faced was avoiding rewriting my test definition file for the release track. I would otherwise have to do that because all the file paths use the resource name, and I now have one resource for release and one for development. My solution is to override the resource with the resource option on the get step.
jobs:
- name: test-app-release
  plan:
  - get: flight-school
    resource: flight-school-release
    trigger: true
  - task: tests
    file: flight-school/build.yml
The build.yml above is the standard example from the flight school tutorial.
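For reference, a task config along those lines might look roughly like this; the actual flight-school build.yml may differ, and the image and script path here are illustrative:

platform: linux
image_resource:
  type: docker-image
  source:
    repository: ruby
inputs:
- name: flight-school
run:
  path: flight-school/ci/test.sh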
Putting It All Together
My resulting pipeline looks like this:
My complete pipeline config looks like this:
resources:
- name: flight-school-master
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master

- name: flight-school-release
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master
    tag_filter: 'v[0-9]*'

jobs:
- name: test-app-dev
  plan:
  - get: flight-school
    resource: flight-school-master
    trigger: true
  - task: tests
    file: flight-school/build.yml

- name: test-app-release
  plan:
  - get: flight-school
    resource: flight-school-release
    trigger: true
  - task: tests
    file: flight-school/build.yml

- name: build-release
  plan:
  - get: flight-school
    resource: flight-school-release
    trigger: true
    passed: [test-app-release]
  - task: release-build
    file: flight-school/ci/release.yml
In my opinion you should manually trigger the release-build job, and let everything else be automated. I'm assuming you are manually bumping your version number, but it seems better to move that manual intervention to releasing.
What I would do is have a put at the end of release-build that bumps your minor version. Something like:
- put: flight-school-version
  params:
    bump: minor
That way you will always be on the correct version, once you release 0.0.1, you are done with it forever, you can only go forward.
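As a rough sketch, starting from the original release-build job and assuming the job is triggered manually as suggested, the put would sit at the end of the plan:

- name: release-build
  plan:
  - get: flight-school-version
  - get: flight-school
    passed: [test-app]
  - task: release-build
    file: flight-school/ci/release.yml
  - put: flight-school-version
    params:
      bump: minor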