What's the best way to pass parameters between Concourse tasks and jobs? For example, if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing data between tasks within the same job, you can use outputs (https://concourse-ci.org/running-tasks.html#outputs); if you are passing data between jobs, you can use resources (for example, by putting it in git or S3). For passing between tasks, you can have a task file like this:
---
platform: linux
image_resource: # ...
outputs:
- name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
The script fill-in-output.sh then writes the file that contains the unique ID into the unique-id/ directory. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and uses that unique-id file.
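For concreteness, here is a minimal sketch of fill-in-output.sh (the uuidgen call and the id file name are illustrative assumptions, not from the original answer, and the task image must provide uuidgen):

#!/bin/sh
# Write a generated unique ID into the unique-id output directory,
# which Concourse mounts into the task's working directory.
set -e
uuidgen > unique-id/id

A downstream task in the same job then just declares the artifact as an input and reads the file:

platform: linux
image_resource: # ...
inputs:
- name: unique-id
run:
  path: sh
  args:
  - -ec
  - cat unique-id/id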
In addition to task outputs, resources will automatically place files in their working directory for you.
For example, I have a pipeline job as follows:
jobs:
- name: build
  plan:
  - get: git-some-repo
  - put: push-some-image
    params:
      build: git-some-repo/the-image
  - task: Use-the-image-details
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: alpine
      inputs:
      - name: push-some-image
      run:
        path: sh
        args:
        - -exc
        - |
          ls -lrt push-some-image
          cat push-some-image/repository
          cat push-some-image/digest
We'll see the details of the image push from push-some-image:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data within a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For the case between jobs, when only simple (e.g. string) data has to be passed and using a git repository is overkill, the 'keyval' resource [1] seems to be a good solution.
The readme describes that the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource
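A rough usage sketch follows (hedged: the directory param, the keyvalout folder name, and the swce/keyval-resource image are recalled from that readme and should be double-checked against it; resource and job names are placeholders):

resource_types:
- name: keyval
  type: docker-image
  source:
    repository: swce/keyval-resource

resources:
- name: build-info
  type: keyval

# In the producing job, a task writes key=value lines into an output
# directory (e.g. keyvalout/keyval.properties), then:
- put: build-info
  params:
    directory: keyvalout

# In a consuming job, get the same resource and read the properties file:
- get: build-info
  passed: [producing-job]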
I have written a bash script that processes some data and puts it in one file. My intention is to send a Slack alert if there is content in that file; if not, it should not send the alert. Is there a way to do this in Concourse?
You should take advantage of the Concourse community's open-source resource types. There's a list here. There is a Slack resource listed on that page, but I use this one (not included in that list because its authors have not added it): https://github.com/cloudfoundry-community/slack-notification-resource.
That will give you the ability to add a put step in your job plan that sends a Slack notification. As for the logic of your original ask, you can use try and on_success. Your task might look something like this:
- try:
    task: do-a-thing
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: YOUR_TASK_IMAGE
          tag: latest
      inputs:
      - name: some-input
      params:
        FILE: some-file
      run:
        path: /bin/sh
        args:
        - -ec
        - |
          [ ! -z "$(cat some-input/${FILE})" ]
  on_success:
    put: slack
    params:
      <your slack resource put params go here>
The on_success part will run if the code defined in the task's run section returns 0. The script listed there just checks to see if there are more than zero bytes in the file. Because the task is wrapped in a try step, regardless of whether or not the task succeeds (and hence, sends you a message), the step will succeed and move to the next step in the plan.
I have a Concourse job that pulls a repo into a Docker image and then executes a command on it. Now I need to execute a script that comes from the Docker image and, after it is done, execute a command inside the repo, something like this:
run:
  dir: my-repo-resource
  path: /get-git-context.sh && ./gradlew
  args:
  - build
get-git-context.sh is the script coming from my Docker image and ./gradlew is the standard gradlew wrapper inside my repo, called with the build param. I am getting the following error with this approach:
./gradlew: no such file or directory
Meaning the job cd'd into / when executing the first command; executing only one command works just fine.
I've also tried adding two run sections:
run:
  path: /get-git-context.sh
run:
  dir: my-repo-resource
  path: ./gradlew
  args:
  - build
But only the second part is executed. What is the correct way to concatenate these two commands?
We usually solve this by wrapping the logic in a shell script and setting path: to a shell (/bin/sh or /bin/bash) with corresponding args (the path to the script).
run:
  path: /bin/sh
  args:
  - my-repo_resource/some-ci-folder/build_script.sh
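The wrapped script itself is not shown in the answer; as a sketch (assuming the repo resource is available as my-repo-resource in the working directory, as in the question), build_script.sh could be:

#!/bin/sh
set -e

# Run the helper script shipped inside the Docker image...
/get-git-context.sh

# ...then run the Gradle build from inside the cloned repo.
cd my-repo-resource
./gradlew build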
The other option would be to define two tasks and pass the resources through the job's workspace, but we usually have more steps than just two, and this would result in complex pipelines:
plan:
- task: task1
  config:
    ...
    outputs:
    - name: taskOutput
    run:
      path: /get-git-context.sh
- task: task2
  config:
    inputs:
    ## directory defined in task1
    - name: taskOutput
    run:
      path: ./gradlew
      args:
      - build
I would like to tag my git commits with the name of the environment as they are deployed to the various environments in my Concourse pipeline. For example, in my UAT deployment job, I would like to do something like:
- put: master-resource   # a git resource
  params:
    repository: master   # the resource's local directory
    tag: 'uat'
    force: true          # replace the tag, if it already exists
    tag_only: true
This would seem like a common, or at least simple, thing to do. However, the value of the 'tag' parameter can only be the path to a file; there is no option to pass a constant/literal value.
I see two possible solutions, but neither of them seems 'simple' enough:
1. Create the file myself, but to do that (ideally?) I wish there were some kind of file resource that I could use to create it.
2. Create a custom task, and even there I was struggling to find a way to pass the name of the tag as a parameter.
Any suggestions on the best way to accomplish my goal in the simplest way, or alternatively on how to implement options 1 or 2?
Thanks!
The reason that tag takes in a file is so that you can dynamically set the tag of the commit based on information you produce during the course of the pipeline.
So, the best way I can see to do something like this would be workflow #2 that you described above.
So you would want something like this:
- task: generate-git-tag
  params:
    TAG: {{some-passed-in-tag}}
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ruby
    outputs:
    - name: tag-file
    params:
      TAG:
    run:
      path: /bin/bash
      args:
      - -c
      - |
        echo "${TAG}" >> tag-file/tag.txt
- put: master-resource   # a git resource
  params:
    repository: master   # the resource's local directory
    tag: tag-file/tag.txt
    force: true          # replace the tag, if it already exists
    tag_only: true
I have a job with many tasks like this:
- name: main-job
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
    - get: <git-resource-3>
  - task: <task-1>
    file: <git-resource>/<path>/<task-1-no-db>.yml
  - task: <task-2>
    tags: ['<specific-tag>']
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
  - task: <task-2>
    tags: ['<specific-tag>']
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
The problem for me is that I have to call literally the same job, but instead of the DATABASE param being my-db-1, I want it to be my-db-2.
The only way I am able to do this is by adding a new job and passing the params, literally copying the entire set of lines. My job is too fat, as in it has too many tasks in it, so although copying it is the obvious solution, I am wondering if there's a way to reuse it: either by having multiple pipelines and one main pipeline that essentially calls these pipelines with the DATABASE param passed in, or by having two small jobs that call this main job with different params, something like this:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
- name: <call-main-job-with-db-2>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    params:
      DATABASE: <my-db-2>
I am not sure if this is even possible since I didn't find any example of this.
Remember you are using YAML, so you can use YAML features like anchors.
You will find some additional information about anchors at this link; look for "EXTRA YAML FEATURES":
YAML also has a handy feature called 'anchors', which let you easily duplicate content across your document. Both of these keys will have the same value:

anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name

# Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name
foo: &foo
  <<: *base
  age: 10
bar: &bar
  <<: *base
  age: 20
Try this for your Concourse Pipeline:
common:
  db_common: &db_common
    serial: true
    plan:
    - aggregate:
      - get: <git-resource>
        passed: [previous-job]
        trigger: true
    - task: <call-main-job-task>
      params:

jobs:
- name: <call-main-job-with-db-1>
  <<: *db_common
  DATABASE_HOST: <file>
  DATABASE: <my-db-1>
- name: <call-main-job-with-db-2>
  <<: *db_common
  DATABASE: <my-db-2>
NOTE: Remember that you can have as many anchors as you want; you can define two or more anchors for the same job/task/resource, etc.
You just need to copy and paste the task as you do in the question description. Concourse expects the YAML to be spelled out in full; there is no branching or logic allowed. If you don't want to copy and paste so much YAML, you can do some YAML-generation magic to simplify what you look at and work with, but Concourse will want the full YAML with each job defined separately.
Concourse has a fan-in/fan-out paradigm, where you want to keep jobs simple and short. Use a scripting language, e.g. Python or Ruby, to make your pipeline creation more flexible.
Personally, I use one pipeline.yml.erb file in which I render different job templates. I try to keep my job.yml.erb files as generic as possible so I can reuse them for different pipelines.
To take it to the next level, you could specify a meta config.yml and use that config inside your templates to generate your pipeline depending on what you specified there.
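As a rough sketch of that idea without ERB (the job layout, resource names, and task file path below are placeholders, not from this answer), a plain shell loop can stamp out one structurally identical job per database and hand the result to fly:

#!/bin/sh
# Generate a pipeline with one job per database, then upload it.
# A template engine (ERB, Jinja, ...) does the same thing more cleanly.
set -e

echo "jobs:" > pipeline.yml

for db in my-db-1 my-db-2; do
  cat >> pipeline.yml <<EOF
- name: main-job-${db}
  serial: true
  plan:
  - get: git-resource
    trigger: true
  - task: run-against-db
    file: git-resource/ci/task.yml
    params:
      DATABASE: ${db}
EOF
done

fly -t my-target set-pipeline -p my-pipeline -c pipeline.yml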
If a task file (file: task.yml) referenced in the pipeline config (pipeline.yml) needs to contain some {{properties}}, what is the proper way to add them?
In my case, I want to use a custom Docker image from a repository that requires authentication, and I don't want to hardcode/commit the credentials in the task YAML itself.
Is there a way to do that currently without moving the task config into the main pipeline YAML?
Clarification: I want to parameterize the task.yml file itself (for example, input: {{input_name}}).
In your task.yml you can specify required params, e.g.:
params:
  USERNAME:
  PASSWORD:
And then provide them in pipeline.yml:
jobs:
- name: my-job
  plan:
  - get: ci-files
  - task: my-task
    file: ci-files/task.yml
    params:
      USERNAME: {{username}}
      PASSWORD: {{password}}
Configure the pipeline as:
fly set-pipeline -p pipeline-name -c pipeline.yml -v username=my-username -v password=my-password
Then these params will be available to you as environment variables inside your task.
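To make that concrete, here is a hedged sketch of what such a task.yml might look like end to end (the alpine image and the echo in the run script are illustrative assumptions; the point is simply that USERNAME and PASSWORD arrive as environment variables):

platform: linux
image_resource:
  type: docker-image
  source:
    repository: alpine

params:
  USERNAME:
  PASSWORD:

run:
  path: sh
  args:
  - -ec
  - |
    # USERNAME and PASSWORD were provided by the pipeline's params
    # and are available here as environment variables.
    echo "authenticating as ${USERNAME}"
    # ...use them with curl, a registry login, etc. (placeholder only)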