error: unable to recognize "a.yaml": no matches for kind "Sensor" in version "argoproj.io/v1alpha1"

I got the following YAML from:
https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/sensors/webhook.yaml
and saved it at a.yaml
However, when I do
kubectl apply -f a.yaml
I get:
error: unable to recognize "a.yaml": no matches for kind "Sensor" in version "argoproj.io/v1alpha1"
I'm not sure why Sensor is not a valid kind.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
    - name: test-dep
      eventSourceName: webhook
      eventName: example
  triggers:
    - template:
        name: webhook-workflow-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: webhook-
              spec:
                entrypoint: whalesay
                arguments:
                  parameters:
                    - name: message
                      # the value will get overridden by event payload from test-dep
                      value: hello world
                templates:
                  - name: whalesay
                    inputs:
                      parameters:
                        - name: message
                    container:
                      image: docker/whalesay:latest
                      command: [cowsay]
                      args: ["{{inputs.parameters.message}}"]
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body
              dest: spec.arguments.parameters.0.value

The Kubernetes API can be extended with custom resources. By default, Kubernetes does not know the Sensor kind; you have to install the CRD that defines it. On a production cluster, check with your system administrator first.

There are two requirements for this to work:
You need a cluster with alpha features enabled (for GKE, see https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-alpha-cluster)
AND
you need Argo Events installed:
kubectl create ns argo-events
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/namespace-install.yaml
Then you can install a webhook Sensor.
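Once Argo Events is installed, you can verify (assuming kubectl access to the cluster) that the Sensor CRD is actually registered before re-applying a.yaml:

```shell
# The CRD installed by Argo Events; this should print a sensors.argoproj.io entry.
kubectl get crd sensors.argoproj.io
# Or list every resource kind served under the argoproj.io API group.
kubectl api-resources --api-group=argoproj.io
```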

Related

Configuring Argo output artifacts defined in a WorkflowTemplate from a Workflow

I have the following WorkflowTemplate with an output artifact named messagejson, and I am trying to configure it to use S3 from a Workflow:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: file-output
spec:
  entrypoint: writefile
  templates:
    - name: writefile
      container:
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json"]
      outputs:
        artifacts:
          - name: messagejson
            path: /tmp/message.json
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: read-file-
spec:
  entrypoint: read-file
  templates:
    - name: read-file
      steps:
        - - name: print-file-content
            templateRef:
              name: file-output
              template: writefile
            arguments:
              artifacts:
                - name: messagejson
                  s3:
                    endpoint: 1.2.3.4
                    bucket: mybucket
                    key: "/rabbit/message.json"
                    insecure: true
                    accessKeySecret:
                      name: my-s3-credentials
                      key: accessKey
                    secretKeySecret:
                      name: my-s3-credentials
                      key: secretKey
However, I get Error (exit code 1): You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/. The same works if I try to configure input artifacts from a Workflow but not output artifacts.
Any idea?
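One option worth trying (not from the original thread): output artifact locations are resolved by the workflow controller, so the error usually means no default artifact repository is configured on the controller side. A minimal sketch of that configuration, assuming the controller runs in the argo namespace and a my-s3-credentials Secret exists there:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      endpoint: 1.2.3.4
      bucket: mybucket
      insecure: true
      accessKeySecret:
        name: my-s3-credentials
        key: accessKey
      secretKeySecret:
        name: my-s3-credentials
        key: secretKey
```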

Argo Workflow error when using envFrom field

Workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
Template:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
        parameters:
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
            name: "{{inputs.parameters.configmap}}"
          - secretRef:
            name: "{{inputs.parameters.secret}}"
When deployed through the Argo UI, I receive the following error from Kubernetes when starting the pod:
spec.containers[1].envFrom: Invalid value: "": must specify one of: `configMapRef` or `secretRef`
Using envFrom is supported and documented in the Argo documentation: https://argoproj.github.io/argo-workflows/fields/. Why is Kubernetes complaining here?

As mentioned in the comments, there are a couple of issues with your manifests. They're valid YAML, but that YAML does not deserialize into valid Argo custom resources.
In the WorkflowTemplate, you have duplicated the parameters key in spec.templates[0].inputs.
Also in the WorkflowTemplate, you have placed the configMapRef and secretRef names at the same level as those keys. configMapRef and secretRef are objects, so the name key should be nested under each of them.
These are the corrected manifests:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-template
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: configmap
          - name: secret
      container:
        image: my-image:1.2.3
        envFrom:
          - configMapRef:
              name: "{{inputs.parameters.configmap}}"
          - secretRef:
              name: "{{inputs.parameters.secret}}"
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-workflow-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: configmap
        value: my-configmap
      - name: secret
        value: my-secret
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: my-template
              template: main
            arguments:
              parameters:
                - name: configmap
                  value: "{{workflow.parameters.configmap}}"
                - name: secret
                  value: "{{workflow.parameters.secret}}"
Argo Workflows supports IDE-based validation which should help you find/avoid these issues.
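Besides IDE validation, the Argo CLI (if you have it installed) can lint manifests client-side before submission; a quick sketch, where the filenames are placeholders:

```shell
# Validates the manifests against the Argo resource schemas and reports
# problems like duplicated keys or misnested fields.
argo lint my-template.yaml my-workflow.yaml
```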

How to put an Argo webhook trigger parameter into an artifact?

I want to be able to POST a large piece of data to a webhook in Argo. In my Sensor definition I take the data from the request and put it into a "raw" artifact on the Workflow. Since the data is base64-encoded, I use a Sprig template to decode it.
Unfortunately, when I post a large amount of data, Kubernetes refuses to process the generated Workflow definition.
Example with raw data
This example works for small amounts of data.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: input-dep
      eventSourceName: webhook-datapost
      eventName: datapost
  triggers:
    - template:
        name: webhook-datapost-trigger
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: webhook-datapost-
              spec:
                entrypoint: basefile
                imagePullSecrets:
                  - name: regcred
                arguments:
                  artifacts:
                    - name: filecontents
                      raw:
                        data: ""
                templates:
                  - name: basefile
                    serviceAccountName: argo-events-sa
                    inputs:
                      artifacts:
                        - name: filecontents
                          path: /input.file
                    container:
                      image: alpine:latest
                      command: ["ls"]
                      args: ["/input.file"]
          parameters:
            - src:
                dependencyName: input-dep
                dataTemplate: "{{ .Input.body.basedata | b64dec }}"
              dest: spec.arguments.artifacts.0.raw.data
Error with larger dataset
When I trigger the example above with a small dataset, this works as expected. But with a large dataset, I get an error:
Pod "webhook-datapost-7rwsm" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
I understand that this is because the entire raw payload is copied into the Workflow template; the resulting oversized manifest is then rejected by Kubernetes.
I am looking for a way to copy the data from a webhook POST request into an artifact without the whole payload ending up in the Workflow template. Is this possible with Argo?
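As a back-of-the-envelope check (not from the original post): the generated Workflow ends up in a pod annotation capped at 262144 bytes, so the decoded payload plus the rest of the manifest must fit under that limit. A rough sketch, where the 2048-byte overhead for the rest of the spec is a made-up illustration figure:

```python
import base64

# Kubernetes caps each annotation value at 262144 bytes (256 KiB).
ANNOTATION_LIMIT = 262144

def fits_in_workflow(encoded_payload: bytes, overhead: int = 2048) -> bool:
    """Rough check: would the decoded payload, inlined into the generated
    Workflow manifest (plus `overhead` bytes for the rest of the spec),
    stay under the annotation size limit?"""
    decoded_len = len(base64.b64decode(encoded_payload))
    return decoded_len + overhead <= ANNOTATION_LIMIT

small = base64.b64encode(b"x" * 1024)     # ~1 KiB payload
large = base64.b64encode(b"x" * 300_000)  # ~293 KiB payload
print(fits_in_workflow(small))  # True
print(fits_in_workflow(large))  # False
```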

How can I make steps/tasks from WorkflowTemplate show up in Argo workflows UI?

Why does the workflow just end on an arrow pointing down?
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflow-template-whalesay-template
spec:
  templates:
    - name: whalesay-template
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
This is the WorkflowTemplate I'm using. I applied it to Kubernetes before the next step.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: workflow-template-dag-diamond
  generateName: workflow-template-dag-diamond-
spec:
  entrypoint: diamond
  templates:
    - name: diamond
      dag:
        tasks:
          - name: A
            templateRef:
              name: workflow-template-whalesay-template
              template: whalesay-template
            arguments:
              parameters:
                - name: message
                  value: A
This Workflow references the previous template. The Workflow does what it's supposed to do, but I can't see the green dots in the UI.

applying task in k8s pod

I'm trying to run kubectl apply -f pod.yaml but am getting this error. Any hint?
error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-10.0.1
  namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
    - name: main
      image: bded587f4604
      imagePullSecrets: ["testo", "awsecr-cred"]
      nodeSelector:
        kubernetes.io/hostname: 11-4730
      tasks:
        - name: traind
          command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
          completions: 1
          inputs:
            datasets:
              - name: poa
                version: 2018-
                mountPath: /in/0
You have an indentation error in your pod.yaml: imagePullSecrets belongs under spec, not under the container, and each entry needs a - name: key. It should look something like this:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
    - name: main
      image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
    - name: testawsecr-cred
  ...
Note that imagePullSecrets: is plural and an array, so you can specify credentials for multiple registries.
If you are using Docker, you can also specify credentials for multiple registries in ~/.docker/config.json.
If the same registry has credentials both in imagePullSecrets: and in ~/.docker/config.json, the credentials are merged.
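For reference, ~/.docker/config.json stores credentials per registry under an auths key; in this sketch the registry name is a placeholder and the auth value is the base64 encoding of user:password:

```json
{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}
```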