I created a ClusterWorkflowTemplate that runs some tasks, and I want to use the last step's output as a parameter in the calling workflow. When I reference this template, I don't know how to get the output from the ClusterWorkflowTemplate's task/step.
Cluster Workflow Template
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: gen-params
spec:
  templates:
    - name: tasks
      steps:
        - - name: prepare
            template: prepare
        - - name: gen-params
            template: gen-params
...
Workflow
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: demo
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: gen-params
            templateRef:
              name: gen-params
              template: tasks
              clusterScope: true
        - - name: calculate
            template: calculate
            arguments:
              parameters:
                - name: params
                  value: "{{steps.gen-params.steps.gen-params.outputs.result}}" # does not work
...
Your issue is likely less about the usage of a WorkflowTemplate/ClusterWorkflowTemplate and more to do with the fact that you are attempting to access output from a "nested" workflow step.
You can achieve this by defining an output parameter of the top-level tasks template in your ClusterWorkflowTemplate which takes its value from the output result of the last step in that tasks template.
Your ClusterWorkflowTemplate would look like this:
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: gen-params
spec:
  templates:
    - name: tasks
      steps:
        - - name: prepare
            template: prepare
        - - name: gen-params
            template: gen-params
      outputs:
        parameters:
          - name: "nested-gen-params-result"
            valueFrom:
              parameter: "{{steps.gen-params.outputs.result}}"
After making that change, you'll be able to reference the output of the ClusterWorkflowTemplate-defined step from your top-level Workflow using {{steps.gen-params.outputs.parameters.nested-gen-params-result}}
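In the calling Workflow, the calculate step would then consume that parameter instead of the nested result. A minimal sketch of just that step, assuming the step and template names from the question:

```yaml
# Second step of the calling Workflow's main template, consuming the
# output parameter defined on the ClusterWorkflowTemplate's tasks template.
- - name: calculate
    template: calculate
    arguments:
      parameters:
        - name: params
          value: "{{steps.gen-params.outputs.parameters.nested-gen-params-result}}"
```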
Argo's nested-workflow example shows some other similar patterns.
templateRef is simply a link, used to populate the YAML of the Workflow step. You should interact with the gen-params step in the same way that you would if you'd just copy/pasted the YAML from the gen-params ClusterWorkflowTemplate directly into your new Workflow.
In this case, you should access the result of the gen-params step with this: {{steps.gen-params.outputs.result}}.
Related
I have a Argo WorkflowTemplate that looks like this:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: test-container-command
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: command
      container:
        image: alpine:latest
        command: "{{ inputs.parameters.command }}"
        env: # some predefined env
What I want to do is to create a WorkflowTemplate that can execute an arbitrary command specified by the input parameter command. That way, users of this WorkflowTemplate can supply the parameter command with an array of strings and then execute it like:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: test-run-
spec:
  workflowTemplateRef:
    name: test-container-command
  entrypoint: main
  arguments:
    parameters:
      - name: command
        value:
          - echo
          - hello
However, when I try to save this WorkflowTemplate, the Argo server gives me this error message:
Bad Request: json: cannot unmarshal string into Go struct field Container.workflow.spec.templates.container.command of type []string
It seems that Argo expects the field .spec.templates.container.command to be an array of strings, but it treats "{{ inputs.parameters.command }}" as a plain string, even though I'm trying to supply the parameter command with an array of strings.
Is there any way to achieve what I was trying to do as the WorkflowTemplate test-container-command, i.e. provide a WorkflowTemplate for the user to execute arbitrary commands with a predefined container and env?
As the error says, command should be a list of strings. You need to reformat your template to:
container:
  image: alpine:latest
  command: ["{{ inputs.parameters.command }}"]
Argo should now create your workflow without any problems.
You could also use
container:
  image: alpine:latest
  command:
    - "{{ inputs.parameters.command }}"
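Note that with either form the whole parameter becomes a single element of the command array, so the caller has to pass one executable name rather than an array. A sketch of the invoking Workflow under that assumption:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: test-run-
spec:
  workflowTemplateRef:
    name: test-container-command
  entrypoint: main
  arguments:
    parameters:
      - name: command
        value: ls  # one string, one argv entry; "echo hello" would NOT split into two
```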
Edit
Since you want to run arbitrary commands within the image, it would be much better to use a script template instead of a container:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: test-container-command
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: command
          - name: extraEnv
      script:
        image: alpine:latest
        command: [ "sh" ]
        env:
          - { name: ENV1, value: "foo" }
          - { name: ENV2, value: "{{ inputs.parameters.extraEnv }}" }
        source: |
          {{ inputs.parameters.command }}
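A Workflow invoking this script template might then look like the following sketch; the command value is now passed as a plain (possibly multi-line) shell string, and the extraEnv parameter is assumed from the template above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: test-run-
spec:
  workflowTemplateRef:
    name: test-container-command
  entrypoint: main
  arguments:
    parameters:
      - name: command
        value: |
          echo hello
          echo world
      - name: extraEnv
        value: "bar"
```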
Workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
Template:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
        parameters:
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
            name: "{{inputs.parameters.configmap}}"
          - secretRef:
            name: "{{inputs.parameters.secret}}"
When deployed through the Argo UI I receive the following error from Kubernetes when starting the pod:
spec.containers[1].envFrom: Invalid value: \"\": must specify one of: `configMapRef` or `secretRef`
Using envFrom is supported and documented in the Argo documentation: https://argoproj.github.io/argo-workflows/fields/. Why is Kubernetes complaining here?
As mentioned in the comments, there are a couple issues with your manifests. They're valid YAML, but that YAML does not deserialize into valid Argo custom resources.
In the WorkflowTemplate, you have duplicated the parameters key in spec.templates[0].inputs.
Also in the WorkflowTemplate, you have placed the name keys at the same level as configMapRef and secretRef. configMapRef and secretRef are objects, so the name key should be nested under each of them.
These are the corrected manifests:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
              name: "{{inputs.parameters.configmap}}"
          - secretRef:
              name: "{{inputs.parameters.secret}}"
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
Argo Workflows supports IDE-based validation which should help you find/avoid these issues.
I want to be able to POST a big piece of data to a webhook in Argo. In my Sensor definition I get the data from the request and put it into a "raw" artifact on the Workflow. Since the data is base64 encoded, I use a Sprig template to decode the encoded data.
Unfortunately when I use a large amount of data Kubernetes refuses to process the generated Workflow-definition.
Example with raw data
This example works for small amounts of data.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: input-dep
      eventSourceName: webhook-datapost
      eventName: datapost
  triggers:
    - template:
        name: webhook-datapost-trigger
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: webhook-datapost-
              spec:
                entrypoint: basefile
                imagePullSecrets:
                  - name: regcred
                arguments:
                  artifacts:
                    - name: filecontents
                      raw:
                        data: ""
                templates:
                  - name: basefile
                    serviceAccountName: argo-events-sa
                    inputs:
                      artifacts:
                        - name: filecontents
                          path: /input.file
                    container:
                      image: alpine:latest
                      command: ["ls"]
                      args: ["/input.file"]
          parameters:
            - src:
                dependencyName: input-dep
                dataTemplate: "{{ .Input.body.basedata | b64dec }}"
              dest: spec.arguments.artifacts.0.raw.data
Error with larger dataset
When I trigger the example above with a small dataset, this works as expected. But when I use a large dataset, I get an error:
Pod "webhook-datapost-7rwsm" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
I understand that this is due to copying the entire raw data into the Workflow-template. This large template is then rejected by Kubernetes.
I am looking for a method to copy the data from a webhook POST-request into an artifact, without the entire payload being copied into the Workflow-template. Is there a possibility with Argo?
Why does the workflow just end on an arrow pointing down?
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflow-template-whalesay-template
spec:
  templates:
    - name: whalesay-template
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
This is the WorkflowTemplate I'm using. I applied it to the cluster before the next step.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: workflow-template-dag-diamond
  generateName: workflow-template-dag-diamond-
spec:
  entrypoint: diamond
  templates:
    - name: diamond
      dag:
        tasks:
          - name: A
            templateRef:
              name: workflow-template-whalesay-template
              template: whalesay-template
            arguments:
              parameters:
                - name: message
                  value: A
This workflow references the previous template. The workflow does what it's supposed to do, but I can't see the green dots in the UI.
Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:${PASSWORD}@domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.
As @Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - custom-env.yaml
  - replica-and-rollout-strategy.yaml
secretGenerator:
  - literals:
      - db-password=12345
    name: sl-demo-app
    type: Opaque
Running kustomize build in your project directory will create, among others, the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
You can find more details in the article.
If we want to use this secret from our deployment, we just have, like before, to add a new layer definition which uses the secret. For example, this file will mount the db-password value as environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: "DB_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: sl-demo-app
                  key: db-password
In your Deployment definition file it may look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          env:
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: git-secret
                  key: git.password
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:$(PASSWORD)@domain.de  # $(VAR) syntax is expanded by Kubernetes from the container's env
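For this Deployment to find git-secret at runtime, your kustomization.yaml also needs a matching generator; a minimal sketch, with the secret name and key assumed from the Deployment above (kustomize's name-reference transformer rewrites the hashed generated name into the Deployment automatically):

```yaml
secretGenerator:
  - name: git-secret
    literals:
      - git.password=replace-me  # overwritten by `kustomize edit add secret` in your build script
```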