I want to execute a task in an Argo workflow if a string starts with a particular substring.
For example, my string is tests/dev-or.yaml and I want to execute the task if the string starts with tests/.
Here is my workflow, but the condition is not being evaluated properly:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "tests/dev-or.yaml"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "{{inputs.parameters.should-print }} startsWith 'tests/'"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
Below is the error it gives when I run the workflow:
WorkflowFailed 7s workflow-controller Invalid 'when' expression 'tests/dev-or.yaml startsWith 'tests/'': Unable to access unexported field 'yaml' in token 'or.yaml'
It seems it is not accepting -, .yaml, and / while evaluating the when condition.
Am I making a mistake in my workflow? What's the right way to use this condition?
tl;dr - use this: when: "'{{inputs.parameters.should-print}}' =~ '^tests/'"
Parameter substitution happens before the when expression is evaluated. So the when expression is actually tests/dev-or.yaml startsWith 'tests/'. As you can see, the first string needs quotation marks.
But even if you had when: "'{{inputs.parameters.should-print}}' startsWith 'tests/'" (single quotes added), the expression would fail with this error: Cannot transition token types from STRING [tests/dev-or.yaml] to VARIABLE [startsWith].
Argo Workflows conditionals are evaluated as govaluate expressions. govaluate does not have any built-in functions, and Argo Workflows does not augment it with any functions. So startsWith is not defined.
Instead, you should use govaluate's regex comparator. The expression will look like this: when: "'{{inputs.parameters.should-print}}' =~ '^tests/'".
This is the functional Workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "tests/dev-or.yaml"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "'{{inputs.parameters.should-print}}' =~ '^tests/'"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
We have a Tekton pipeline and want to replace the image tag in our deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: microservice-api-spring-boot
spec:
replicas: 3
revisionHistoryLimit: 3
selector:
matchLabels:
app: microservice-api-spring-boot
template:
metadata:
labels:
app: microservice-api-spring-boot
spec:
containers:
- image: registry.gitlab.com/jonashackt/microservice-api-spring-boot#sha256:5d8a03755d3c45a3d79d32ab22987ef571a65517d0edbcb8e828a4e6952f9bcd
name: microservice-api-spring-boot
ports:
- containerPort: 8098
imagePullSecrets:
- name: gitlab-container-registry
Our Tekton pipeline uses the yq Task from Tekton Hub to replace .spec.template.spec.containers[0].image with "$(params.IMAGE):$(params.SOURCE_REVISION)" like this:
- name: substitute-config-image-name
taskRef:
name: yq
runAfter:
- fetch-config-repository
workspaces:
- name: source
workspace: config-workspace
params:
- name: files
value:
- "./deployment/deployment.yml"
- name: expression
value: .spec.template.spec.containers[0].image = \"$(params.IMAGE)\":\"$(params.SOURCE_REVISION)\"
Sadly the yq Task doesn't seem to work: it reports a green "Step completed successfully", but shows the following errors:
16:50:43 safelyRenameFile [ERRO] Failed copying from /tmp/temp3555913516 to /workspace/source/deployment/deployment.yml
16:50:43 safelyRenameFile [ERRO] open /workspace/source/deployment/deployment.yml: permission denied
Here's also a screenshot from our Tekton Dashboard:
Any idea on how to solve the error?
The problem seems to be related to how the Dockerfile of https://github.com/mikefarah/yq now handles file permissions (for example this fix, among others). Version 0.3 of the Tekton yq Task uses the image https://hub.docker.com/layers/mikefarah/yq/4.16.2/images/sha256-c6ef1bc27dd9cee57fa635d9306ce43ca6805edcdab41b047905f7835c174005, which produces the error.
One workaround is to use version 0.2 of the yq Task, which you can apply via:
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/yq/0.2/yq.yaml
This one uses the older docker.io/mikefarah/yq:4@sha256:34f1d11ad51dc4639fc6d8dd5ade019fe57cf6084bb6a99a2f11ea522906033b and works without the error.
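If you want to verify which catalog version of the Task actually ended up in your cluster, you can look at its labels (assuming the installed Task carries the catalog's standard app.kubernetes.io/version label, as Tekton catalog tasks do):
kubectl get task yq --show-labels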
Alternatively, you can create your own yq-based Task that won't have the problem, like this:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: replace-image-name-with-yq
spec:
workspaces:
- name: source
description: A workspace that contains the file which need to be dumped.
params:
- name: IMAGE_NAME
description: The image name to substitute
- name: FILE_PATH
description: The file path relative to the workspace dir.
- name: YQ_VERSION
description: Version of https://github.com/mikefarah/yq
default: v4.2.0
steps:
- name: substitute-with-yq
image: alpine
workingDir: $(workspaces.source.path)
command:
- /bin/sh
args:
- '-c'
- |
set -ex
echo "--- Download yq & add to path"
wget https://github.com/mikefarah/yq/releases/download/$(params.YQ_VERSION)/yq_linux_amd64 -O /usr/bin/yq &&\
chmod +x /usr/bin/yq
echo "--- Run yq expression"
yq e ".spec.template.spec.containers[0].image = \"$(params.IMAGE_NAME)\"" -i $(params.FILE_PATH)
echo "--- Show file with replacement"
cat $(params.FILE_PATH)
resources: {}
This custom Task simply uses the alpine image as its base and installs yq via the plain binary download with wget. It also invokes yq exactly as you would on the command line locally, which makes developing your expression much easier!
As a bonus it outputs the file contents, so you can check the replacement results right in the Tekton pipeline!
You need to apply it with
kubectl apply -f tekton-ci-config/replace-image-name-with-yq.yml
You should now be able to use it like this:
- name: replace-config-image-name
taskRef:
name: replace-image-name-with-yq
runAfter:
- dump-contents
workspaces:
- name: source
workspace: config-workspace
params:
- name: IMAGE_NAME
value: "$(params.IMAGE):$(params.SOURCE_REVISION)"
- name: FILE_PATH
value: "./deployment/deployment.yml"
Inside the Tekton dashboard it will look something like this and output the processed file:
I have a Docker image that receives an env var named SINCE_DATE.
I have created a CronJob to run that container, and I want to pass it the current date.
How can I do it?
Trying this, I get the literal string date -d "yesterday 23:59":
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-cron
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: my-cron
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: SINCE_DATE
value: $(date -d "yesterday 23:59")
You could achieve it by overriding the container's entrypoint command and setting the environment variable there.
In your case it would look like this:
containers:
- name: my-cron
image: nginx
#imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- bash
- -c
- |
export SINCE_DATE=`date -d "yesterday 23:59"`
exec /docker-entrypoint.sh
Note:
The Nginx docker-entrypoint.sh is located in /. If your image uses a different path, use that instead, for example exec /usr/local/bin/docker-entrypoint.sh
A very similar use case can be found in this Stack Overflow question.
What does this solution do?
It overrides the default script set in the container ENTRYPOINT with the same script, but first sets the environment variable dynamically.
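If you are unsure which entrypoint script your own image uses, you can check it locally before wiring up the exec call; a quick sketch, assuming Docker is available locally and using a placeholder image name:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' my-registry/my-cron-image:latest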
I solved the same problem recently using KubeMod, which patches resources as they are created/updated in K8S. It is nice for this use case since it requires no modification to the original job specification.
In my case I needed to insert a date into the middle of a previously existing string in the spec, but it's the same concept.
For example, this matches a specific job by regex, and alters the second argument of the first container in the spec.
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
name: 'name-of-your-modrule'
namespace: default
spec:
type: Patch
match:
- select: '$.metadata.name'
matchRegex: 'regex-that-matches-your-job-name'
- select: '$.kind'
matchValue: 'Job'
patch:
- op: replace
path: '/spec/template/spec/containers/0/args/1'
select: '$.spec.template.spec.containers[0].args[1]'
value: '{{ .SelectedItem | replace "Placeholder Value" (cat "The time is" (now | date "2006-01-02T15:04:05Z07:00")) | squote }}'
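To use this approach, KubeMod has to be installed in the cluster, and the ModRule must be applied before the Job gets created; the filename below is just a placeholder:
kubectl apply -f name-of-your-modrule.yaml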
I'm exploring an easy way to read K8s resources in an Argo workflow. The current documentation focuses mainly on create/patch with conditions (https://argoproj.github.io/argo/examples/#kubernetes-resources), while I'm curious if it's possible to perform "action: get", extract part of the resource state (or the full resource), and pass it downstream as an artifact or result output. Any ideas?
action: get is not a feature available from Argo.
However, it's easy to use kubectl from within a Pod and then send the JSON output to an output parameter. The example below uses a Bash script to send the JSON to the result output parameter, but an explicit output parameter or an output artifact are also viable options.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: kubectl-bash-
spec:
entrypoint: kubectl-example
templates:
- name: kubectl-example
steps:
- - name: generate
template: get-workflows
- - name: print
template: print-message
arguments:
parameters:
- name: message
value: "{{steps.generate.outputs.result}}"
- name: get-workflows
script:
image: bitnami/kubectl:latest
command: [bash]
source: |
some_workflow=$(kubectl get workflows -n argo | sed -n 2p | awk '{print $1;}')
kubectl get workflow "$some_workflow" -ojson
- name: print-message
inputs:
parameters:
- name: message
container:
image: alpine:latest
command: [sh, -c]
args: ["echo result was: '{{inputs.parameters.message}}'"]
Keep in mind that kubectl will run with the permissions of the Workflow's ServiceAccount. Be sure to submit the Workflow using a ServiceAccount which has access to the resource you want to get.
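For example, to let the script above list and read Workflow resources, the ServiceAccount needs RBAC permissions along these lines (a minimal sketch; workflow-reader and argo-workflow-sa are placeholder names):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-reader            # placeholder name
  namespace: argo
rules:
- apiGroups: ["argoproj.io"]
  resources: ["workflows"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-reader            # placeholder name
  namespace: argo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-reader
subjects:
- kind: ServiceAccount
  name: argo-workflow-sa           # placeholder ServiceAccount
  namespace: argo
The Workflow can then reference that ServiceAccount via serviceAccountName: argo-workflow-sa in its spec, or you can pass --serviceaccount argo-workflow-sa to argo submit.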
I need to define an env var whose name contains '.' characters, and Kubernetes does not seem to like it.
spec:
containers:
env:
- name: "com.my.app.dir"
value: "/myapp/subdir/"
I tried single quotes, double quotes, backslashes, double backslashes, and many other ways, but I still cannot make it work. Does anyone know a way to escape the '.' characters? Thanks in advance.
Kubernetes doesn't have a problem setting an environment variable with a . in its name.
Here's a simple spec that logs the environment by directly running the node executable:
apiVersion: v1
kind: Pod
metadata:
name: env-node
spec:
containers:
- image: 'node:12-slim'
name: env-node
command:
- node
- '-pe'
- process.env
env:
- name: OTHER
value: here
- name: 'ONE_two-Three.four'
value: 'diditwork'
And here is the environment output (with some Kubernetes default vars removed for brevity):
{
PATH: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
HOSTNAME: 'env-node',
NODE_VERSION: '12.16.1',
OTHER: 'here',
'ONE_two-Three.four': 'diditwork',
HOME: '/root'
}
Most shells (sh, bash, zsh) won't accept environment variables with a . in them. POSIX defines [a-zA-Z_][a-zA-Z0-9_]* as the allowed characters in the name of an environment variable.
So running the same node process via a shell:
spec:
containers:
- image: 'node:12-slim'
name: nodeenvtest-simple-shell
command:
- sh
- '-c'
- 'node -e "console.log(process.env)"'
env:
- name: 'ONE_two-Three.four'
value: 'diditwork'
- name: 'OTHER'
value: 'here'
Results in a missing environment variable:
{
NODE_VERSION: '12.16.1',
HOSTNAME: 'env-shell',
HOME: '/root',
OTHER: 'here',
PATH: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
PWD: '/'
}
If there is no shell between the container and the app running, a . after the first character in the environment variable should be fine.
I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say I have a 3-step workflow and the workflow failed at step 2. I'd like to resubmit the workflow from step 2, using the artifact from the successful step 1. How can I achieve this? I couldn't find guidance anywhere in the documentation.
I think you should consider using Conditions and Artifact passing in your steps.
Conditionals provide a way to affect the control flow of a
workflow at runtime, depending on parameters. In this example
the 'print-hello' template may or may not be executed depending
on the input parameter, 'should-print'. When submitted with
$ argo submit examples/conditionals.yaml
the step will be skipped since 'should-print' will evaluate false.
When submitted with:
$ argo submit examples/conditionals.yaml -p should-print=true
the step will be executed since 'should-print' will evaluate true.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "false"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "{{inputs.parameters.should-print}} == true"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
If you use conditions in each step, you will be able to start from whichever step you like by setting the appropriate condition.
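For the artifact-passing half of that suggestion, the standard Argo mechanism looks roughly like this (a minimal sketch; template, artifact, and file names are illustrative):
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    - - name: generate
        template: produce-artifact
    - - name: consume
        template: consume-artifact
        arguments:
          artifacts:
          - name: data
            from: "{{steps.generate.outputs.artifacts.result}}"
  - name: produce-artifact
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo 'step 1 output' > /tmp/result.txt"]
    outputs:
      artifacts:
      - name: result                 # artifact produced by step 1
        path: /tmp/result.txt
  - name: consume-artifact
    inputs:
      artifacts:
      - name: data                   # artifact consumed by step 2
        path: /tmp/data.txt
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/data.txt"]
Combined with when conditions like the example above, this gives you the building blocks for controlling which steps run and how their outputs flow to later steps.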
Also have a look at the article Argo: Workflow Engine for Kubernetes, where the author explains the use of conditions with the coin-flip example.
You can see many examples on their GitHub page.