I have a Concourse job that pulls a repo into a Docker image and then executes a command on it. Now I need to execute a script that comes from the Docker image and, after it is done, execute a command inside the repo, something like this:
run:
  dir: my-repo-resource
  path: /get-git-context.sh && ./gradlew
  args:
    - build
get-git-context.sh is the script coming from my Docker image and ./gradlew is the standard Gradle wrapper inside my repo, called with the build argument. I am getting the following error with this approach:
./gradlew: no such file or directory
Meaning the job stayed in / when executing the first command; executing only one command works just fine.
I've also tried adding two run sections:
run:
  path: /get-git-context.sh
run:
  dir: my-repo-resource
  path: ./gradlew
  args:
    - build
But only the second part is executed. What is the correct way to concatenate these two commands?
We usually solve this by wrapping the logic in a shell script and setting path to a shell (/bin/sh or /bin/bash) with the script path as the argument:
run:
  path: /bin/sh
  args:
    - my-repo-resource/some-ci-folder/build_script.sh
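A minimal sketch of what that wrapper script could look like, using the names from the question (/get-git-context.sh from the image, the my-repo-resource input); it only runs inside the task container, so it is a pipeline fragment rather than a standalone script:

```shell
#!/bin/sh
set -e                   # abort the task if any step fails
/get-git-context.sh      # script baked into the Docker image
cd my-repo-resource      # the repo input, relative to the task's working directory
./gradlew build
```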
The other option would be to define two tasks and pass the resources through the job's workspace, but we usually have more than two steps, and this would result in complex pipelines:
plan:
  - task: task1
    config:
      ...
      outputs:
        - name: taskOutput
      run:
        path: /get-git-context.sh
  - task: task2
    config:
      inputs:
        ## directory defined in task1
        - name: taskOutput
      run:
        path: ./gradlew
        args:
          - build
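Alternatively, since `path:` is exec'd directly without a shell (which is why `&&` in it fails), you can keep everything inline in the task YAML by invoking a shell yourself; a sketch assuming the names from the question:

```yaml
run:
  path: /bin/sh
  args:
    - -ec
    - |
      /get-git-context.sh
      cd my-repo-resource
      ./gradlew build
```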
I have the following template in my .gitlab-ci.yml file:
x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/
I want to extend the script part to make actual builds. For example:
android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
It works well, but it has a problem: GitLab CI runs both jobs instead of ignoring the template.
Is there a way to run only the "android-stage-build" job, which pulls in the template only when it is needed?
In order to make GitLab CI ignore the template entry, you need to add a dot (.) in front of its name:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/
android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
Besides that, based on what I read here, I don't think you want to use after_script.
I think you want the setup commands in a before_script in the template, and the actual build command under the script key of the build job.
The main difference is that after_script also runs when the script fails, which does not look like something you would want here.
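A sketch of that split, reusing the keys from the snippets above (only the setup moves to before_script and the build command becomes the job's script):

```yaml
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  before_script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  script:
    - ./gradlew :app:assembleDebug
```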
Yes. You're simply missing a . :)
See https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#anchors
It should work if you write it like this:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/
android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
I know it's not quite simple to do this; I've tried to explore many approaches, but either I couldn't understand them properly or they didn't work for me.
I have a Concourse job which runs an Angular build (ng build) and creates a /dist folder. This works well.
jobs:
  - name: cache
    plan:
      - get: source
        trigger: true
      - get: npm-cache
  - name: build
    plan:
      - get: source
        trigger: true
        passed: [cache]
      - get: npm-cache
        passed: [cache]
      - task: run build
        file: source/ci/build.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-sass # temporary fix
npm run build_prod
cp -R dist ../artifact/
I have declared artifact as an output, and that is where I store the dist content.
But when I try to use it in the next job, it fails with a missing input error.
Here is the next job that is supposed to consume the dist folder:
jobs:
  ...
  - name: list
    plan:
      - get: npm-cache
        passed: [cache, test, build]
        trigger: true
      - task: list-files
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: busybox }
          inputs:
            - name: artifact
          run:
            path: ls
            args: ['-la', 'artifact/']
Can anyone please help me with this? How can I use the dist folder in the job above?
I'm not quite sure why you would want different plan definitions for each task, but here is the simplest way of doing what you want to do:
jobs:
  - name: deploying-my-app
    plan:
      - get: source
        trigger: true
        passed: []
      - get: npm-cache
        passed: []
      - task: run build
        file: source/ci/build.yml
      - task: list-files
        file: source/ci/list-files.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
list-files.yml
---
platform: linux
image_resource:
  type: registry-image
  source: { repository: busybox }
inputs:
  - name: artifact
run:
  path: ls
  args: ['-la', 'artifact/']
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-sass # temporary fix
npm run build_prod
cp -R dist ../artifact/
Typically you would pass folders as inputs and outputs between TASKS rather than JOBS (although there are some alternatives).
Concourse is stateless by design. If you want to pass something between jobs, the only way to do that is through a Concourse resource; depending on the nature of the project, that could be anything from a git repo to an S3 bucket, a Docker image, etc. You can also create your own custom resource types.
Using something like the S3 Concourse resource, for example, you can push your artifact to external storage and then fetch it again in the next job's get step. But that may just add unnecessary complexity, given that what you want to do is pretty straightforward.
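For completeness, a rough sketch of that resource approach; the bucket name, regexp, and ((vars)) are placeholders, and it assumes the task tars its dist output into the artifact folder:

```yaml
resources:
  - name: dist-artifact
    type: s3
    source:
      bucket: my-artifact-bucket          # hypothetical bucket
      regexp: dist/dist-(.*).tgz
      access_key_id: ((aws_access_key))
      secret_access_key: ((aws_secret_key))

jobs:
  - name: build
    plan:
      - get: source
        trigger: true
      - task: run build
        file: source/ci/build.yml
      - put: dist-artifact                # upload the tarball produced by the task
        params: { file: artifact/dist-*.tgz }
  - name: list
    plan:
      - get: dist-artifact                # downloaded into dist-artifact/ here
        trigger: true
        passed: [build]
```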
In my experience, the visual layout of a job plan in the Concourse dashboard sometimes gives the impression that a job plan should be task-atomic, which is not always needed.
Hope that helps.
I'm experimenting with building a Gradle-based Java app. My pipeline looks like this:
---
resources:
  - name: hello-concourse-repo
    type: git
    source:
      uri: https://github.com/ractive/hello-concourse.git

jobs:
  - name: gradle-build
    public: true
    plan:
      - get: hello-concourse-repo
        trigger: true
      - task: build
        file: hello-concourse-repo/ci/build.yml
      - task: find
        file: hello-concourse-repo/ci/find.yml
The build.yml looks like:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: java
    tag: openjdk-8
inputs:
  - name: hello-concourse-repo
outputs:
  - name: output
run:
  path: hello-concourse-repo/ci/build.sh
caches:
  - path: .gradle/
And the build.sh:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo
./gradlew --no-daemon build
mkdir -p output
cp build/libs/*.jar output
cp src/main/docker/* output
ls -l output
And finally find.yml
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: busybox}
inputs:
  - name: output
run:
  path: ls
  args: ['-alR']
The output of ls at the end of the build.sh script shows me that the output folder contains the expected files, but the find task only shows empty folders:
What am I doing wrong that the output folder that I'm using as an input in the find task is empty?
The complete example can be found here with the concourse files in the ci subfolder.
You need to remember some things:
There is an initial working directory for your tasks; let's call it '.' (unless you specify dir). In this initial directory you will find a directory for each of your inputs and outputs, i.e.:
./hello-concourse-repo
./output
When you declare an output, there's no need to create the 'output' folder from your script; it is created automatically.
If you navigate to a different folder in your script, you need to return to the initial working directory, or use relative (or absolute) paths to reach the other folders.
Below you will find the updated script with some comments to fix the problem:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo # You changed directory here, so the 'output' folder is now at ../output
./gradlew --no-daemon build
# mkdir -p output <- not required: Concourse already created the declared output
cp build/libs/*.jar ../output
cp src/main/docker/* ../output
ls -l ../output
Since you define the ROOT_FOLDER variable, you can also use it to navigate: you are still inside hello-concourse-repo, and output sits one level up.
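To illustrate why the absolute path works from anywhere, here is a small standalone simulation of that task layout; a temporary directory and a dummy jar stand in for the real worker and Gradle build:

```shell
#!/bin/bash
set -e
# Simulate the task's initial working directory with one input and one output,
# as the Concourse worker would pre-create them
ROOT_FOLDER=$(mktemp -d)
cd "$ROOT_FOLDER"
mkdir -p hello-concourse-repo/build/libs output
touch hello-concourse-repo/build/libs/app.jar

cd hello-concourse-repo                        # the build script changes directory here
cp build/libs/*.jar "${ROOT_FOLDER}/output/"   # absolute path reaches output from anywhere
ls "${ROOT_FOLDER}/output"
```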
What's the best way to pass parameters between concourse tasks and jobs? For example; if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing between tasks within the same job, you can use outputs (https://concourse-ci.org/running-tasks.html#outputs); if you are passing between jobs, you can use resources (like putting it in git or S3). For example, if you are passing between tasks, you can have a task file:
---
platform: linux
image_resource: # ...
outputs:
  - name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
The script fill-in-output.sh then puts the file containing the unique ID into the unique-id/ path. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and use that unique ID file.
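A hypothetical fill-in-output.sh along those lines; the ID scheme is made up, and the mkdir is only needed when running outside Concourse, which pre-creates declared outputs:

```shell
#!/bin/sh
set -e
mkdir -p unique-id           # Concourse creates this for the declared output
date +%s-$$ > unique-id/id   # hypothetical unique ID: timestamp plus process ID
cat unique-id/id
```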
In addition to task outputs, resources place their files in the task's working directory automatically for you.
For example, I have a pipeline job as follows:
jobs:
  - name: build
    plan:
      - get: git-some-repo
      - put: push-some-image
        params:
          build: git-some-repo/the-image
      - task: Use-the-image-details
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: alpine
          inputs:
            - name: push-some-image
          run:
            path: sh
            args:
              - -exc
              - |
                ls -lrt push-some-image
                cat push-some-image/repository
                cat push-some-image/digest
You'll see the details of the pushed image from push-some-image:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data between a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For the case between jobs, when simple data (e.g. a string) has to be passed and using a git repo is overkill, the 'keyval' resource [1] seems to be a good solution.
The readme describes that the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource
I'm following the Stark & Wayne tutorial and ran into a problem:
The pipeline fails with
hijack: Backend error: Exit status: 500, message {"Type":"","Message": "
runc exec: exit status 1: exec failed: container_linux.go:247:
starting container process caused \"exec format error\"\n","Handle":""}
I have one git resource and one job with one task:
- task: test
  file: resource-ci/ci/test.yml
test.yml file:
platform: linux
image_resource:
  type: docker-image
  source:
    repository: busybox
    tag: latest
inputs:
  - name: resource-tool
run:
  path: resource-tool/scripts/deploy.sh
deploy.sh is a simple dummy file with one echo command:
echo [+] Testing in the process ...
So what could it be?
This error means that the shell your script tries to invoke is unavailable in the container running your task.
Busybox doesn't come with bash; it only comes with /bin/sh. Check the shebang in deploy.sh, making sure it looks like:
#!/bin/sh
# rest of script
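The kernel only rejects such a file when it is exec'd directly, which is what Concourse does; interactive shells fall back to /bin/sh on the same error, which can hide the bug. A standalone demonstration (python3 is used here only to call execv without the shell's fallback):

```shell
#!/bin/sh
# A script whose shebang is missing the '!': not a valid interpreter line
printf '#/bin/sh\necho hi\n' > /tmp/demo.sh
chmod +x /tmp/demo.sh
# Direct exec fails with ENOEXEC ("exec format error"), as in the Concourse task
out=$(python3 -c "import os; os.execv('/tmp/demo.sh', ['demo.sh'])" 2>/dev/null || echo "exec failed")
echo "$out"
```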
I also ran into this error when I forgot to add a ! at the top of my pipeline's shell script:
#/bin/bash
# rest of script