Concourse: call job from another job with parameters

I have a job with many tasks like this:
- name: main-job
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
    - get: <git-resource-3>
  - task: <task-1>
    file: <git-resource>/<path>/<task-1-no-db>.yml
  - task: <task-2>
    tags: ['<specific-tag>']
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
  - task: <task-2>
    tags: ['<specific-tag>']
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
The problem for me: I have to run essentially the same job again, but with the DATABASE param set to my-db-2 instead of my-db-1.
The only way I have found to do this is to define a new job and copy the entire set of lines with the other params. My job is too fat (it has too many tasks), so while copying it is the obvious solution, I am wondering if there is a way to reuse it: for example, multiple pipelines with one main pipeline that calls the others with the DATABASE param passed in, or two small jobs that call this main job with different params, something like this:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
- name: <call-main-job-with-db-2>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    params:
      DATABASE: <my-db-2>
I am not sure if this is even possible since I didn't find any example of this.

Remember that you are using YAML, so you can use YAML features such as anchors.
You will find additional information about anchors in most YAML guides; look for the "EXTRA YAML FEATURES" section:
YAML also has a handy feature called 'anchors', which let you easily duplicate content across your document.

# Both of these keys will have the same value:
anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name

# Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name

foo: &foo
  <<: *base
  age: 10

bar: &bar
  <<: *base
  age: 20
Try this for your Concourse pipeline:

common:
  db_common: &db_common
    serial: true
    plan:
    - aggregate:
      - get: <git-resource>
        passed: [previous-job]
        trigger: true
    - task: <call-main-job-task>
      params:

jobs:
- name: <call-main-job-with-db-1>
  <<: *db_common
  DATABASE_HOST: <file>
  DATABASE: <my-db-1>
- name: <call-main-job-with-db-2>
  <<: *db_common
  DATABASE: <my-db-2>
NOTE: Remember that you can have as many anchors as you want; you can define two or more anchors for the same job/task/resource, etc.
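Note that the merge key splices db_common into the job mapping itself, so in the layout above the DATABASE keys land at the job level rather than under the task's params. If you need them under params, a variation is to anchor the shared task instead (a sketch reusing the same placeholder names; the task file path is hypothetical):

common:
  main_task: &main_task
    task: <call-main-job-task>
    file: <git-resource>/<path>/<task>.yml

jobs:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
  - get: <git-resource>
    passed: [previous-job]
    trigger: true
  - <<: *main_task
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
- name: <call-main-job-with-db-2>
  serial: true
  plan:
  - get: <git-resource>
    passed: [previous-job]
    trigger: true
  - <<: *main_task
    params:
      DATABASE: <my-db-2>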

You just need to copy and paste the task as you do in the question description. Concourse expects the YAML to be spelled out in full; there is no branching or logic allowed. If you don't want to copy and paste so much YAML, you can do some YAML-generation magic to simplify what you look at and work with, but Concourse will want the full YAML with each job defined separately.

Concourse has a fan-in/fan-out paradigm, where you want to keep the jobs simple and short. Use a scripting language, e.g. Python or Ruby, to make your pipeline creation more flexible.
Personally, I use one pipeline.yml.erb file in which I render different job templates. I try to keep each job.yml.erb as generic as possible so I can reuse it for different pipelines (a sketch follows below).
To take it to the next level, you could specify a meta config.yml and use that config inside your templates to generate your pipeline depending on what you specified in the config.
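As a rough illustration of that approach (all file names, task names, and variables here are hypothetical), a pipeline.yml.erb could render one job per database:

<% databases = ['my-db-1', 'my-db-2'] %>
jobs:
<% databases.each do |db| %>
- name: main-job-<%= db %>
  serial: true
  plan:
  - get: git-resource
    passed: [previous-job]
    trigger: true
  - task: run-against-db
    file: git-resource/path/task.yml
    params:
      DATABASE_HOST: file
      DATABASE: <%= db %>
<% end %>

You then render it and upload the result, e.g.: erb pipeline.yml.erb > pipeline.yml && fly -t target set-pipeline -p main-pipeline -c pipeline.yml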

Related

How can I pass a pre-defined pipeline build parameter to a template in Azure DevOps pipelines?

Template:

parameters:
- name: PathPrefix
  displayName: 'Path prefix'
  type: string
  default: ''

steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: restore
    projects: ${{parameters.PathPrefix}}**/$(Build.DefinitionName).sln
Pipeline:

resources:
  repositories:
  - repository: devops
    name: foo/devops
    type: git
    ref: master

trigger:
  branches:
    include:
    - refs/heads/*

jobs:
- job: Job_1
  displayName: Agent job 1
  pool:
    vmImage: windows-latest
  steps:
  - checkout: self
  - template: azure/pipelines/pipeline.yaml@devops
    parameters:
      PathPrefix: $(Build.DefinitionName)
Run error during the restore step:
##[error]No files matched the search pattern.
Even setting verbosityRestore: detailed on the restore step doesn't give me any more information.
If I don't set PathPrefix, it seems to use the default empty string and find the solution file (in some cases). However, if I do set the prefix, which is needed in some repos, it can't find the file. I've tried various ways of referencing the parameter within the template (${{}}, $(), $[] and others) and different ways of specifying it within the pipeline, including hard-coding the path (though I want to use the variable instead), but nothing works.
I thought maybe variables would work instead, so I also tried specifying variables in the pipeline and using them in the template, but that results in the same error. Defining the variable in the template gave me a compilation error for the template (unexpected token 'variable' or something similar).
Look at what you're passing and mentally expand the results.
${{parameters.PathPrefix}}**/$(Build.DefinitionName).sln
If Build.DefinitionName is foo, and you pass that in as PathPrefix, then what you get is:
foo**/foo.sln
It looks like you want an extra forward slash in there, so you get foo/**/foo.sln.
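In other words, the restore input in the template needs the separator included (keeping the same parameter name as above):

projects: ${{parameters.PathPrefix}}/**/$(Build.DefinitionName).sln

Alternatively, include the trailing slash in the value you pass for PathPrefix.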

Azure Yaml Pipelines - Dynamic object parameter to template

I would like to trigger a job template with an object as a parameter. Unfortunately, even based on the examples, I couldn't find a way to do that.
I would appreciate it if someone could guide me on how to achieve this.
What I want to achieve is to replace the ["DEPLOY", "CONFIG"] part with a dynamic variable:
- template: job-template.yaml
  parameters:
    jobs: ["DEPLOY", "CONFIG"]
This is not possible. YAML is very limited here:

Yaml variables have always been string: string mappings.

So, for instance, you can define parameters as a complex type.
Template file:

parameters:
- name: 'instances'
  type: object
  default: {}
- name: 'server'
  type: string
  default: ''

steps:
- ${{ each instance in parameters.instances }}:
  - script: echo ${{ parameters.server }}:${{ instance }}

Main file:

steps:
- template: template.yaml
  parameters:
    instances:
    - test1
    - test2
    server: someServer
But you are not able to do it dynamically/programmatically, as every output you create will end up as a plain string.
What you can do is pass the list as a single string and then split it with PowerShell, as sketched below. It all depends on what you want to run afterwards, though: you won't be able to simply iterate over a YAML structure that way. All you can do is loop over the pieces in PowerShell and act on each one, which may not be enough for you.
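A minimal sketch of that string-splitting workaround (the parameter name and values here are made up):

parameters:
- name: jobNames
  type: string
  default: 'DEPLOY,CONFIG'

steps:
- powershell: |
    # Split the comma-separated string back into a list at runtime
    $names = '${{ parameters.jobNames }}'.Split(',')
    foreach ($name in $names) {
      Write-Host "processing $name"
    }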
It's possible with some logic. See below:

- template: job-template.yaml
  parameters:
    param: ["DEPLOY", "CONFIG"]

And in the job-template.yaml file you can define the following, so every job name will be different:

parameters:
  param: []

jobs:
- ${{ each jobName in parameters.param }}:
  - job: ${{ jobName }}
    steps:
    - task: Downl......

Fill runtime azure pipeline parameters from external source

We are looking to create a pipeline to update our multi-tenant Azure environment. We need to perform some actions during the update per tenant, so we would like to create a job per tenant and process tenants in parallel. To accomplish this, I want to use a runtime parameter to pass the tenants to update to my pipeline, as follows:
parameters:
- name: tenants
  type: object

The value of the tenants parameter might look something like this:

- Name: "customer1"
  Someotherproperty: "some value"
- Name: "customer2"
  Someotherproperty: "some other value"
To generate the jobs, we do something like this:

stages:
- stage:
  jobs:
  - job: Update_Tenant
    strategy:
      matrix:
        ${{ each tenant in parameters.tenants }}:
          ${{ tenant.Name }}:
            name: ${{ tenant.Name }}
            someproperty: ${{ tenant.Someotherproperty }}
      maxParallel: 2
    steps:
    - checkout: none
    - script: echo $(name).$(someproperty)
Now what we need is some way to fill this tenants parameter. I tried a few solutions:
Ideally I would like to put a build stage before the Update_Tenants stage to call a REST api to get the tenants, and expand the tenants parameter when the Update_Tenants stage starts, but this is not supported AFAIK, since parameter expansion is done when the pipeline starts.
A less ideal but still workable option would have been to create a variable group yaml file containing the tenants, and include this variable group in my pipeline, and use the ${{ variables.Tenants }} syntax to reference them. However, for some reason, variables can only be strings.
The only solution I can currently think of is to create a pipeline that calls a REST API to get the tenants to update, and then uses the Azure DevOps API to queue the actual update process with the correct parameter value, but this feels like a clunky workaround.
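For reference, that queueing step would boil down to something like this call to the pipeline Runs REST endpoint (a sketch: organization, project, pipeline id, PAT, and the tenants payload are all placeholders, and the exact shape of templateParameters should be checked against the REST docs):

curl -u ":$PAT" \
  -H "Content-Type: application/json" \
  -d '{"templateParameters": {"tenants": "..."}}' \
  "https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0"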
Now my question is, are there any (better?) alternatives to accomplish what I want to do?
Maybe this can help: I was able to use an external source (a .txt file) to fill an array variable in Azure Pipelines.
Working example:

# Create a variable
- bash: |
    arrVar=()
    for images in `cat my_images.txt`; do
      arrVar+=$images
      arrVar+=","
    done
    echo "##vso[task.setvariable variable=list_images]$arrVar"

# Use the variable
# "$(list_images)" is replaced by the contents of the `list_images` variable by Azure Pipelines
# before handing the body of the script to the shell.
- bash: |
    echo my pipeline variable is $(list_images)
Sources (there is also an example for a matrix there):
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-job-scoped-variable-from-a-script
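The matrix variant from that doc builds the whole matrix as a JSON string in one job and consumes it in a dependent job; roughly like this (a sketch, with made-up job and variable names):

jobs:
- job: generator
  steps:
  - bash: echo "##vso[task.setvariable variable=legs;isOutput=true]{'tenant1':{'name':'customer1'},'tenant2':{'name':'customer2'}}"
    name: mtrx
- job: runner
  dependsOn: generator
  strategy:
    matrix: $[ dependencies.generator.outputs['mtrx.legs'] ]
  steps:
  - script: echo $(name)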
Other sources
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script
To accomplish this, we would like to create a job per tenant, so we can process tenants in parallel.

Apart from the rolling deployment strategy, you can also check Strategies and Matrix.
You can try something like this, unless you have to use runtime parameters:
jobs:
- job: Update
  strategy:
    matrix:
      tenant1:
        Someotherproperty1: '1.1'
        Someotherproperty2: '1.2'
      tenant2:
        Someotherproperty1: '2.1'
        Someotherproperty2: '2.2'
      tenant3:
        Someotherproperty1: '3.1'
        Someotherproperty2: '3.2'
    maxParallel: 3
  steps:
  - checkout: none
  - script: echo $(Someotherproperty1).$(Someotherproperty2)
    displayName: 'Echo something'

Passing parameters between concourse jobs / tasks

What's the best way to pass parameters between concourse tasks and jobs? For example; if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing between tasks within the same job, you can use artifacts (https://concourse-ci.org/running-tasks.html#outputs), and if you are passing between jobs, you can use resources (like putting it in git or S3). For example, if you are passing between tasks, you can have a task file:
---
platform: linux
image_resource: # ...
outputs:
- name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
The script fill-in-output.sh will put the file that contains the unique ID into the path unique-id/. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and uses that unique-id file, as sketched below.
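On the consuming side, the second task file would declare the artifact as an input (a sketch; the id.txt file name is made up, it is whatever fill-in-output.sh wrote into unique-id/):

---
platform: linux
image_resource: # ...
inputs:
- name: unique-id
run:
  path: sh
  args:
  - -exc
  - cat unique-id/id.txt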
In addition to task outputs, resources will automagically place files for you in the tasks' working directory.
For example, I have a pipeline job as follows:
jobs:
- name: build
  plan:
  - get: git-some-repo
  - put: push-some-image
    params:
      build: git-some-repo/the-image
  - task: Use-the-image-details
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: alpine
      inputs:
      - name: push-some-image
      run:
        path: sh
        args:
        - -exc
        - |
          ls -lrt push-some-image
          cat push-some-image/repository
          cat push-some-image/digest
We'll see the details of the image push from push-some-image:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data between a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For the case between jobs, when simple (e.g. string) data has to be passed and using git is overkill, the 'keyval' resource [1] seems to be a good solution.
The readme describes how the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource
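For illustration, wiring that resource into a pipeline looks roughly like this (the image name and file path below follow the project's README; treat them as assumptions to verify there):

resource_types:
- name: keyval
  type: docker-image
  source:
    repository: swce/keyval-resource

resources:
- name: build-info
  type: keyval

One job then does a put: build-info with params pointing at the properties file its task produced (e.g. file: keyvalout/keyval.properties), and a downstream job's get: build-info makes the same key=value pairs available as a file to its tasks.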

How to add parameters to the included task files in Concourse CI

If a task file (file: task.yml) referenced in the pipeline config (pipeline.yml) needs to contain some {{properties}}, what is the proper way to add them?
In my case, I want to use a custom Docker image from a repository that requires authentication, and I don't want to hardcode/commit the credentials in the task YAML itself.
Is there a way to do that currently without moving the task config into the main pipeline YAML?
Clarification: I want to parameterize the task.yml file itself (for example, input: {{input_name}}).
In your task.yml you can specify required params, e.g.:

params:
  USERNAME:
  PASSWORD:
And then provide them in pipeline.yml:

jobs:
- name: my-job
  plan:
  - get: ci-files
  - task: my-task
    file: ci-files/task.yml
    params:
      USERNAME: {{username}}
      PASSWORD: {{password}}
Configure the pipeline as:

fly set-pipeline -p pipeline-name -c pipeline.yml -v username=my-username -v password=my-password
Then these params will be available to you as environment variables inside your task.
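For example, a task script can read them like any other environment variables (an illustrative fragment, not from the original answer):

run:
  path: sh
  args:
  - -exc
  - |
    # USERNAME and PASSWORD are injected via the task's params
    echo "authenticating as $USERNAME"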