How to put an Argo webhook trigger parameter into an artifact? - kubernetes

I want to be able to POST a big piece of data to a webhook in Argo. In my Sensor definition I get the data from the request and put it into a "raw" artifact on the Workflow. Since the data is base64-encoded, I use a Sprig template to decode it.
Unfortunately, when I send a large amount of data, Kubernetes refuses to process the generated Workflow definition.
Example with raw data
This example works for small amounts of data.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: webhook
spec:
template:
serviceAccountName: argo-events-sa
dependencies:
- name: input-dep
eventSourceName: webhook-datapost
eventName: datapost
triggers:
- template:
name: webhook-datapost-trigger
k8s:
group: argoproj.io
version: v1alpha1
resource: workflows
operation: create
source:
resource:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: webhook-datapost-
spec:
entrypoint: basefile
imagePullSecrets:
- name: regcred
arguments:
artifacts:
- name: filecontents
raw:
data: ""
templates:
- name: basefile
serviceAccountName: argo-events-sa
inputs:
artifacts:
- name: filecontents
path: /input.file
container:
image: alpine:latest
command: ["ls"]
args: ["/input.file"]
parameters:
- src:
dependencyName: input-dep
dataTemplate: "{{ .Input.body.basedata | b64dec }}"
dest: spec.arguments.artifacts.0.raw.data
Error with larger dataset
When I trigger the example above with a small dataset, this works as expected. But when I use a large dataset, I get an error:
Pod "webhook-datapost-7rwsm" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
I understand that this is because the entire raw payload is copied into the Workflow template, and the resulting template is too large for Kubernetes to accept.
I am looking for a method to get the data from a webhook POST request into an artifact without the entire payload being copied into the Workflow template. Is there a way to do this with Argo?
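One direction I am considering (an untested sketch, assuming the payload could instead be uploaded somewhere reachable by URL and only a reference is posted in the event body, here a hypothetical body.fileurl field): pass the reference as the trigger parameter and let the Workflow fetch it as an HTTP artifact, so the payload itself never ends up in the Workflow template:
  arguments:
    artifacts:
      - name: filecontents
        http:
          url: ""                     # filled in by the trigger parameter below
  ...
  parameters:
    - src:
        dependencyName: input-dep
        dataKey: body.fileurl         # hypothetical field carrying a URL, not the data itself
      dest: spec.arguments.artifacts.0.http.url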

Related

error: unable to recognize "a.yaml": no matches for kind "Sensor" in version "argoproj.io/v1alpha1"

I got the following YAML from:
https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/sensors/webhook.yaml
and saved it as a.yaml.
However, when I do
kubectl apply -f a.yaml
I get:
error: unable to recognize "a.yaml": no matches for kind "Sensor" in version "argoproj.io/v1alpha1"
I'm not sure why Sensor is not a valid "Kind".
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: webhook
spec:
template:
serviceAccountName: operate-workflow-sa
dependencies:
- name: test-dep
eventSourceName: webhook
eventName: example
triggers:
- template:
name: webhook-workflow-trigger
k8s:
operation: create
source:
resource:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: webhook-
spec:
entrypoint: whalesay
arguments:
parameters:
- name: message
# the value will get overridden by event payload from test-dep
value: hello world
templates:
- name: whalesay
inputs:
parameters:
- name: message
container:
image: docker/whalesay:latest
command: [cowsay]
args: ["{{inputs.parameters.message}}"]
parameters:
- src:
dependencyName: test-dep
dataKey: body
dest: spec.arguments.parameters.0.value
The Kubernetes API can be extended. By default, Kubernetes does not know this kind; you have to install it. Check this with your system administrator in a production environment.
There are two requirements for this to work:
You need a cluster with Alpha Features Enabled:
https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-alpha-cluster
AND
You need Argo Events installed:
kubectl create ns argo-events
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/namespace-install.yaml
Then you can install a webhook Sensor.
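To verify that the Argo Events CRDs were installed correctly (a quick check, assuming the default namespace install above), you can ask the API server which kinds it now knows about:
# the Sensor and EventSource CRDs should be listed
kubectl get crd sensors.argoproj.io eventsources.argoproj.io
# or list everything in the argoproj.io API group
kubectl api-resources --api-group=argoproj.io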

Argo Workflow: "Bad Request: json: cannot unmarshal string into Go struct field"

I have an Argo WorkflowTemplate that looks like this:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: test-container-command
spec:
entrypoint: main
templates:
- name: main
inputs:
parameters:
- name: command
container:
image: alpine:latest
command: "{{ inputs.parameters.command }}"
env: # some predefined env
What I want to do is to create a WorkflowTemplate that can execute an arbitrary command specified by the input parameter command. That way, users of this WorkflowTemplate can supply the parameter command with an array of strings and then execute it like:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: test-run-
spec:
workflowTemplateRef:
name: test-container-command
entrypoint: main
arguments:
parameters:
- name: command
value:
- echo
- hello
However, when I try to save this WorkflowTemplate, the Argo server gave me this error message:
Bad Request: json: cannot unmarshal string into Go struct field Container.workflow.spec.templates.container.command of type []string
It seems that Argo expects the field .spec.templates.container.command to be an array of strings, but it treats "{{ inputs.parameters.command }}" as a string, even though I'm trying to supply the parameter command with an array of strings.
Is there any way to achieve what I was trying to do as the WorkflowTemplate test-container-command, i.e. provide a WorkflowTemplate for the user to execute arbitrary commands with a predefined container and env?
As the error says, command should be a list of strings. You need to reformat your template to:
container:
image: alpine:latest
command: ["{{ inputs.parameters.command }}"]
Now Argo should create your workflow without any problems.
You could also use
container:
image: alpine:latest
command:
- "{{ inputs.parameters.command }}"
Edit
Since you want to run commands within the image, it would be much better to use a script template instead of a container template:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: test-container-command
spec:
entrypoint: main
templates:
- name: main
inputs:
parameters:
- name: command
- name: extraEnv
script:
image: alpine:latest
command: [ "sh" ]
env:
- { name: ENV1, value: "foo" }
- { name: ENV2, value: "{{ inputs.parameters.extraEnv }}" }
source: |
{{ inputs.parameters.command }}
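A Workflow using this template could then pass the commands as a plain (multi-line) string instead of an array; a minimal sketch based on the Workflow from the question (the command and extraEnv values here are just examples):
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: test-run-
spec:
  workflowTemplateRef:
    name: test-container-command
  entrypoint: main
  arguments:
    parameters:
      - name: command
        value: |
          echo "hello"
          echo "$ENV2"
      - name: extraEnv
        value: "bar"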

How do I access a private Container Registry from IBM Cloud Delivery Pipeline (Tekton)

I am trying to use a container image from a private container registry in one of my tasks.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: echo-hello-world
spec:
steps:
- name: echo
image: de.icr.io/reporting/status:latest
command:
- echo
args:
- "Hello World"
But when I run this task within an IBM Cloud Delivery Pipeline (Tekton), the image cannot be pulled:
message: 'Failed to pull image "de.icr.io/reporting/status:latest": rpc error: code = Unknown desc = failed to pull and unpack image "de.icr.io/reporting/status:latest": failed to resolve reference "de.icr.io/reporting/status:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized'
I have read several tutorials and blog posts, but so far I couldn't find a solution. This is probably what I need to accomplish so that the IBM Cloud Delivery Pipeline (Tekton) can access my private container registry: https://tekton.dev/vault/pipelines-v0.15.2/auth/#basic-authentication-docker
So far I have created a secret.yaml file in my .tekton directory:
apiVersion: v1
kind: Secret
metadata:
name: basic-user-pass
annotations:
tekton.dev/docker-0: https://de.icr.io # Described below
type: kubernetes.io/basic-auth
stringData:
username: $(params.DOCKER_USERNAME)
password: $(params.DOCKER_PASSWORD)
I am also creating a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: default-runner
secrets:
- name: basic-user-pass
And in my trigger definition I am telling the pipeline to use the default-runner ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
name: theTemplateTrigger
spec:
resourcetemplates:
- apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun-$(uid)
spec:
serviceAccountName: default-runner
pipelineRef:
name: hello-goodbye
I found a way to pass my API key to my IBM Cloud Delivery Pipeline (Tekton) and the tasks in my pipeline are now able to pull container images from my private container registry.
This is my working trigger template:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
name: theTemplateTrigger
spec:
params:
- name: pipeline-dockerconfigjson
description: dockerconfigjson for images used in .pipeline-config.yaml
default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
resourcetemplates:
- apiVersion: v1
kind: Secret
data:
.dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
metadata:
name: pipeline-pull-secret
type: kubernetes.io/dockerconfigjson
- apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun-$(uid)
spec:
pipelineRef:
name: hello-goodbye
podTemplate:
imagePullSecrets:
- name: pipeline-pull-secret
It first defines a parameter called pipeline-dockerconfigjson:
params:
- name: pipeline-dockerconfigjson
description: dockerconfigjson for images used in .pipeline-config.yaml
default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
The second part turns the value passed into this parameter into a Kubernetes secret:
- apiVersion: v1
kind: Secret
data:
.dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
metadata:
name: pipeline-pull-secret
type: kubernetes.io/dockerconfigjson
And this secret is then referenced in the imagePullSecrets field of the pod template.
The last step is to populate the parameter with a valid dockerconfigjson; this can be done in the Delivery Pipeline UI (IBM Cloud UI).
To create a valid dockerconfigjson for my registry de.icr.io I had to use the following kubectl command:
kubectl create secret docker-registry mysecret \
--dry-run=client \
--docker-server=de.icr.io \
--docker-username=iamapikey \
--docker-password=<MY_API_KEY> \
--docker-email=<MY_EMAIL> \
-o yaml
and then within the output there is a valid base64 encoded .dockerconfigjson field.
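If you only want the raw base64 value (so you can paste it straight into the pipeline property), a jsonpath output should work as well, for example:
kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o jsonpath='{.data.\.dockerconfigjson}'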
Please also note that there is a public catalog of sample tekton tasks:
https://github.com/open-toolchain/tekton-catalog/tree/master/container-registry
More on IBM Cloud Continuous Delivery Tekton:
https://www.ibm.com/cloud/blog/ibm-cloud-continuous-delivery-tekton-pipelines-tools-and-resources
Tektonized Toolchain Templates: https://www.ibm.com/cloud/blog/toolchain-templates-with-tekton-pipelines
The secret you created (type basic-auth) will not allow the kubelet to pull your Pods' images.
The docs mention that those secrets are meant to provision configuration inside your tasks' container runtime, which can then be used during your build jobs when pulling or pushing images to registries.
The kubelet, however, needs a different kind of configuration (e.g. a secret of type dockercfg) to authenticate when pulling images and starting containers.
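For completeness, a minimal sketch of that kubelet-side configuration, using the dockerconfigjson variant (the secret name registry-pull-secret is only an example) and attaching it to the ServiceAccount via imagePullSecrets rather than secrets:
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret          # example name
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config for de.icr.io>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
imagePullSecrets:
  - name: registry-pull-secret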

Rancher k3sv1.22.5 supports which apiversion

I want to know which apiVersions are compatible with Rancher k3s v1.22.5 and which attributes they support, and where I can find those details. I have created a pipeline YAML with some attributes, but I want to add more, so could you please help me?
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: quarkus-setup-pl
spec:
params:
- name: deployment-name
type: string
description: the unique name for this deployment
tasks:
- name: quarkus-setup-build-task
taskRef:
name: quarkus-setup-build-task
resources: {}
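For reference, the API versions and fields a given cluster actually serves can be listed with kubectl (a quick sketch; the Tekton group is shown as an example and assumes the Tekton CRDs are installed and publish a schema):
# list every apiVersion the cluster serves
kubectl api-versions
# list the kinds in a specific API group, e.g. Tekton
kubectl api-resources --api-group=tekton.dev
# show which fields a kind supports
kubectl explain pipeline.spec --api-version=tekton.dev/v1beta1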

How can I get sub workflow steps/tasks output?

I created a ClusterWorkflowTemplate that runs some tasks, and I want to use the last step's output as a parameter in the current workflow. When I reference this template, I don't know how to get the output from the ClusterWorkflowTemplate's task/step.
Cluster Workflow Template
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
name: gen-params
spec:
templates:
- name: tasks
steps:
- - name: prepare
template: prepare
- - name: gen-params
template: gen-params
...
Workflow
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
name: demo
spec:
entrypoint: main
templates:
- name: main
steps:
- - name: gen-params
templateRef:
name: gen-params
template: tasks
clusterScope: true
- - name: calculate
template: calculate
arguments:
parameters:
- name: params
value: "{{steps.gen-params.steps.gen-params.outputs.result}}" # not work
...
Your issue is likely less about the usage of a WorkflowTemplate/ClusterWorkflowTemplate and more to do with the fact that you are attempting to access output from a "nested" workflow step.
You can achieve this by defining an output parameter of the top-level tasks template in your ClusterWorkflowTemplate which takes its value from the output result of the last step in that tasks template.
Your ClusterWorkflowTemplate would look like this:
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
name: gen-params
spec:
templates:
- name: tasks
steps:
- - name: prepare
template: prepare
- - name: gen-params
template: gen-params
outputs:
parameters:
- name: "nested-gen-params-result"
valueFrom:
parameter: "{{steps.gen-params.outputs.result}}"
After making that change, you'll be able to reference the output of the ClusterWorkflowTemplate-defined step in your top-level Workflow using {{steps.gen-params.outputs.parameters.nested-gen-params-result}}.
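For example, the calculate step from the Workflow above would then look something like this:
- - name: calculate
    template: calculate
    arguments:
      parameters:
        - name: params
          value: "{{steps.gen-params.outputs.parameters.nested-gen-params-result}}"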
Argo's nested-workflow example shows some other similar patterns.
templateRef is simply a link, used to populate the YAML of the Workflow step. You should interact with the gen-params step in the same way that you would if you'd just copy/pasted the YAML from the gen-params ClusterWorkflowTemplate directly into your new Workflow.
In this case, you should access the result of the gen-params step with this: {{steps.gen-params.outputs.result}}.