I am trying to run a sed command to replace a placeholder with an environment variable indicating the current build id. According to the documentation, CIRCLE_BUILD_NUM contains the value I'm looking for, and according to this example it should be super easy to use in a command.
Below is the config file, below that the build.gradle file the sed command is executed on, and below that the result. As you can see, the sed command simply treats ${CIRCLE_BUILD_NUM} as a literal string instead of substituting the build number.
config.yml
version: 2.1
orbs:
  android: circleci/android@0.2.0
jobs:
  build:
    executor: android/android
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: testing env vars
          command: sed 's/${BUILD_NUM_1}/${CIRCLE_BUILD_NUM}/g' -i build.gradle
build.gradle
// Top-level build file where you can add configuration options common to all sub-projects/modules.
def buildNum = ${BUILD_NUM_1}
output
// Top-level build file where you can add configuration options common to all sub-projects/modules.
def buildNum = ${CIRCLE_BUILD_NUM}
The following sed command worked: the double quotes let the shell expand ${CIRCLE_BUILD_NUM} before sed runs, and the placeholder in build.gradle is plain text (_buildNum) rather than shell syntax.
- run:
    name: testing env vars
    command: sed "s/_buildNum/${CIRCLE_BUILD_NUM}/g" -i build.gradle
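The difference comes down to shell quoting: inside single quotes the shell never expands ${CIRCLE_BUILD_NUM}, so sed receives (and inserts) that literal text, while double quotes make the shell substitute the value before sed even runs. A minimal local sketch, where the _buildNum placeholder and the value 123 are stand-ins for the real build number:

```shell
# Create a sample build.gradle containing a plain-text placeholder.
printf 'def buildNum = _buildNum\n' > build.gradle

# Stand-in for CircleCI's build number in this demo.
CIRCLE_BUILD_NUM=123

# Double quotes: the shell expands ${CIRCLE_BUILD_NUM} to 123 first,
# so sed sees "s/_buildNum/123/g". With single quotes, sed would have
# received the literal text ${CIRCLE_BUILD_NUM} instead.
sed -i "s/_buildNum/${CIRCLE_BUILD_NUM}/g" build.gradle

cat build.gradle   # def buildNum = 123
```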
Related
I was trying to do the following:
echo ${{secrets.key}} > myfile
But unfortunately this doesn't work: myfile was empty afterwards when I checked.
How do I save the content of a GitHub secret into a file?
You can first pass the secret to an env var, then echo it to the file. Remember to quote the env var, so that a line feed in the secret doesn't break the command.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: print secrets
        run: |
          echo "$MY_SECRET" >> config.ini
          cat config.ini
        shell: bash
        env:
          MY_SECRET: ${{ secrets.KEY }}
Check that secrets.KEY actually exists before doing the above.
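The quoting advice matters because a secret may span several lines, and an unquoted expansion word-splits them onto a single line. A quick local illustration, with an ordinary variable standing in for the secret:

```shell
# A stand-in for a multi-line secret value.
MY_SECRET='line1
line2'

# Unquoted: word splitting collapses the newline to a space.
echo $MY_SECRET > flattened.ini

# Quoted: the newline survives intact.
echo "$MY_SECRET" > config.ini

cat flattened.ini   # line1 line2
```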
I have a Concourse job that pulls a repo into a docker image and then executes a command on it. Now I need to execute a script that comes from the docker image and, after it is done, execute a command inside the repo, something like this:
run:
  dir: my-repo-resource
  path: /get-git-context.sh && ./gradlew
  args:
    - build
get-git-context.sh is the script coming from my docker image and gradlew is the standard gradle wrapper inside my repo, invoked with the build argument. I am getting the following error with this approach:
./gradlew: no such file or directory
Meaning the job stayed in / when executing the first command; executing only one command works just fine.
I've also tried adding two run sections:
run:
  path: /get-git-context.sh
run:
  dir: my-repo-resource
  path: ./gradlew
  args:
    - build
But only the second one is executed. What is the correct way to chain these two commands?
We usually solve this by wrapping the logic in a shell script and setting path: to a shell (here /bin/sh), passing the path to the script in args:
run:
  path: /bin/sh
  args:
    - my-repo-resource/some-ci-folder/build_script.sh
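A build_script.sh for this setup could simply chain the two steps. In the sketch below, /get-git-context.sh and gradlew are replaced by stubs so it is runnable anywhere; the real script would call the actual binaries:

```shell
#!/bin/sh
set -e  # abort the task if either step fails

# Stub for /get-git-context.sh, which the real task gets from the image.
echo "git context ready"

# Stand-in for the my-repo-resource input, with a fake gradlew wrapper.
mkdir -p my-repo-resource
printf '#!/bin/sh\necho "BUILD OK"\n' > my-repo-resource/gradlew
chmod +x my-repo-resource/gradlew

# The actual wrapper logic: run the image script, then build inside the repo.
cd my-repo-resource
./gradlew build
```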
The other option would be to define two tasks and pass the files between them through task outputs, but we usually have more than two steps and this would result in complex pipelines:
plan:
  - task: task1
    config:
      ...
      outputs:
        - name: taskOutput
      run:
        path: /get-git-context.sh
  - task: task2
    config:
      inputs:
        # directory defined in task1
        - name: taskOutput
      run:
        path: ./gradlew
        args:
          - build
My aim is to create a GitHub workflow for publishing the plugin. I have to enter some command-line arguments after executing a command. Can someone please tell me whether there is a way to set command-line arguments in a GitHub Action?
If I understand you correctly, you're trying to build your own GitHub Action and don't know how to pass arguments to it. If so, you can use the inputs mechanism to pass arguments to your action. For example:
JavaScript Action
action.yml
...
inputs:
  version:
    description: Plugin version
    required: true
runs:
  using: 'node12'
  main: index.js
...
index.js
const core = require('@actions/core');
const version = core.getInput('version');
Docker Action
action.yml
...
inputs:
  version:
    description: Plugin version
    required: true
runs:
  using: 'docker'
  args:
    - ${{ inputs.version }}
...
docker-entrypoint.sh
#!/bin/sh -l
echo "$1" # your version
Usage:
workflow.yml
...
steps:
  - name: Your action usage
    uses: mysuperaction@master
    with:
      version: 1
...
I'm experimenting with building a gradle based java app. My pipeline looks like this:
---
resources:
  - name: hello-concourse-repo
    type: git
    source:
      uri: https://github.com/ractive/hello-concourse.git
jobs:
  - name: gradle-build
    public: true
    plan:
      - get: hello-concourse-repo
        trigger: true
      - task: build
        file: hello-concourse-repo/ci/build.yml
      - task: find
        file: hello-concourse-repo/ci/find.yml
The build.yml looks like:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: java
    tag: openjdk-8
inputs:
  - name: hello-concourse-repo
outputs:
  - name: output
run:
  path: hello-concourse-repo/ci/build.sh
caches:
  - path: .gradle/
And the build.sh:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo
./gradlew --no-daemon build
mkdir -p output
cp build/libs/*.jar output
cp src/main/docker/* output
ls -l output
And finally find.yml
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: busybox}
inputs:
  - name: output
run:
  path: ls
  args: ['-alR']
The output of ls at the end of the build.sh script shows me that the output folder contains the expected files, but the find task only sees empty folders.
What am I doing wrong that the output folder I'm using as an input in the find task is empty?
The complete example can be found here with the concourse files in the ci subfolder.
You need to remember some things:
There is an initial working directory for your tasks; let's call it '.' (unless you specify dir). In this initial directory you will find a directory for each of your inputs and outputs,
i.e.:
./hello-concourse-repo
./output
When you declare an output, there is no need to create the 'output' folder from your script; it will be created automatically.
If you navigate to a different folder in your script, you need to return to the initial working directory or use relative paths to find the other folders.
Below you will find the updated script with some comments to fix the problem:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo # You changed directory here, so the 'output' folder is now at ../output
./gradlew --no-daemon build
# Either cd back to "$ROOT_FOLDER" here, or address the output folder as
# ../output (or $ROOT_FOLDER/output) as the commands below do.
# mkdir -p output <- not required: the declared output folder already exists
cp build/libs/*.jar ../output
cp src/main/docker/* ../output
ls -l ../output
Since you define the ROOT_FOLDER variable, you can use it to navigate.
You are still inside hello-concourse-repo, so the output folder is one level up.
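The relationship between ROOT_FOLDER, the repo directory, and the output directory can be sketched like this, with a fake jar standing in for the real build artifact:

```shell
#!/bin/sh
set -e

# Recreate the task's initial working directory layout.
ROOT_FOLDER=$(pwd)
mkdir -p hello-concourse-repo/build/libs output
touch hello-concourse-repo/build/libs/app.jar   # fake build artifact

cd hello-concourse-repo            # '.' is now the repo; output lives at ../output
cp build/libs/app.jar ../output               # relative path back to the output
cp build/libs/app.jar "$ROOT_FOLDER/output"   # equivalent, via the saved root
ls ../output
```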
What's the best way to pass parameters between concourse tasks and jobs? For example; if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing data between tasks within the same job, you can use task outputs (https://concourse-ci.org/running-tasks.html#outputs); if you are passing between jobs, you need a resource (such as putting the value in git or S3). For the between-tasks case, you can have a task file:
---
platform: linux
image_resource: # ...
outputs:
  - name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
And the script fill-in-output.sh will put the file that contains the unique ID into path unique-id/. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and use that unique id file.
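A hypothetical fill-in-output.sh could be as small as this; the file name id and the timestamp-based generator are illustrative choices, not anything Concourse mandates:

```shell
#!/bin/sh
set -e

# In a real task Concourse creates the declared 'unique-id' output
# directory for us; mkdir -p keeps this sketch runnable on its own.
mkdir -p unique-id

# Generate an ID and write it where the next task, which lists
# 'unique-id' as an input, can read it.
date +%s%N > unique-id/id

cat unique-id/id
```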
In addition to task outputs, resources automatically place their files in the task's working directory for you.
For example I have a pipeline job as follows
jobs:
  - name: build
    plan:
      - get: git-some-repo
      - put: push-some-image
        params:
          build: git-some-repo/the-image
      - task: Use-the-image-details
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: alpine
          inputs:
            - name: push-some-image
          run:
            path: sh
            args:
              - -exc
              - |
                ls -lrt push-some-image
                cat push-some-image/repository
                cat push-some-image/digest
You'll see the details of the image push in the push-some-image input:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data between a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For passing between jobs, when only simple (e.g. string) data has to be passed and using git would be overkill, the 'keyval' resource [1] is a good solution.
Its readme describes that the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource