Argo Workflow Stuck in Progressing - kubernetes

I've created a test Argo Workflow to help me understand how I can use a CI/CD approach to deploy an Ansible playbook. When I create the app in Argo CD, it syncs fine, but then it just gets stuck on Progressing and never leaves that state.
I tried digging around for any indication in the logs, but I'm fairly new to Argo. It never even gets to the point of creating pods for any of the steps.
Thoughts?
Here is my workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: ansible-test
spec:
  entrypoint: ansible-test-ci
  arguments:
    parameters:
    - name: repo
      value: ****
    - name: revision
      value: '1.6'
  templates:
  - name: ansible-test-ci
    steps:
    - - name: checkout
        template: checkout
    #- - name: test-playbook
    #    template: test-playbook
    #    arguments:
    #      artifacts:
    #      - name: source
    #        from: "{{steps.checkout.outputs.artifacts.source}}"
    - - name: deploy
        template: deploy
        arguments:
          artifacts:
          - name: source
            from: "{{steps.checkout.outputs.artifacts.source}}"
  - name: checkout
    inputs:
      artifacts:
      - name: source
        path: /src
        git:
          repo: "{{workflow.parameters.repo}}"
          #revision: "{{workflow.parameters.revision}}"
          #sshPrivateKeySecret:
          #  name: my-secret
          #  key: ssh-private-key # kubectl create secret generic my-secret --from-file=ssh-private-key=~/.ssh/id_rsa2
    outputs:
      artifacts:
      - name: source
        path: /src
    container:
      image: alpine/git:latest
      command: ["/bin/sh", "-c"]
      args: ["cd /src && git status && ls -l"]
  #- name: test-playbook
  #  inputs:
  #    artifacts:
  #    - name: source
  #      path: /ansible/
  #  container:
  #    image: ansible/ansible-runner:latest
  #    command: ["/bin/sh", "-c"]
  #    args: ["
  #      cd /ansible &&
  #      ansible-playbook playbook.yaml -i inventory
  #    "]
  - name: deploy
    inputs:
      artifacts:
      - name: source
        path: /ansible/
    container:
      image: ansible/ansible-runner:latest
      command: ["/bin/sh", "-c"]
      args: ["
        cd /ansible &&
        ansible-playbook playbook.yaml -i inventory
      "]
(Screenshots of the app stuck in Progressing in the Argo CD UI omitted.)
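For anyone debugging a similar hang: the Argo CD UI rarely shows the cause, so inspecting the Workflow object and the workflow-controller logs is usually more informative. A minimal sketch, assuming a standard Argo Workflows install in the argo namespace (adjust names to your setup):

# Conditions in the Workflow status often name the problem
kubectl describe workflow ansible-test

# Errors from the controller that manages Workflow pods
kubectl -n argo logs deploy/workflow-controller --tail=100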

I ended up solving this by adding a ServiceAccount and RBAC resources in the namespace the Workflow was trying to run in.
Here are the Role and RoleBinding I added:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-role
rules:
# pod get/watch is used to identify the container IDs of the current pod
# pod patch is used to annotate the step's outputs back to controller (e.g. artifact location)
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - patch
# logs get/watch are used to get the pods logs for script outputs, and for log archival
- apiGroups:
  - ""
  resources:
  - pods/log
  verbs:
  - get
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-role
subjects:
- kind: ServiceAccount
  name: default
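Note that the RoleBinding above grants the Role to the default ServiceAccount, which is what a Workflow uses when spec.serviceAccountName is unset. If you prefer a dedicated account instead of widening default, here is a sketch (the name workflow-sa is my own, not from the original setup):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: workflow-sa

Then point the RoleBinding's subject at workflow-sa and reference it from the Workflow:

spec:
  entrypoint: ansible-test-ci
  serviceAccountName: workflow-sa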

error: unable to recognize "a.yaml": no matches for kind "Sensor" in version "argoproj.io/v1alpha1"

I got the following YAML from:
https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/sensors/webhook.yaml
and saved it as a.yaml
However, when I do
kubectl apply -f a.yaml
I get:
error: unable to recognize "a.yaml": no matches for kind "Sensor" in version "argoproj.io/v1alpha1"
Not sure why Sensor is not a valid kind. Here's the file:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: operate-workflow-sa
  dependencies:
  - name: test-dep
    eventSourceName: webhook
    eventName: example
  triggers:
  - template:
      name: webhook-workflow-trigger
      k8s:
        operation: create
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: webhook-
            spec:
              entrypoint: whalesay
              arguments:
                parameters:
                - name: message
                  # the value will get overridden by event payload from test-dep
                  value: hello world
              templates:
              - name: whalesay
                inputs:
                  parameters:
                  - name: message
                container:
                  image: docker/whalesay:latest
                  command: [cowsay]
                  args: ["{{inputs.parameters.message}}"]
        parameters:
        - src:
            dependencyName: test-dep
            dataKey: body
          dest: spec.arguments.parameters.0.value
The Kubernetes API can be extended, and by default Kubernetes does not know this kind; you have to install the CRDs that define it. Check with your system admin before doing this on a production cluster.
There are two requirements for this to work:
You need a cluster with alpha features enabled:
https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-alpha-cluster
AND
You need Argo Events installed:
kubectl create ns argo-events
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/namespace-install.yaml
Then you can install a webhook Sensor.
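To confirm the install actually registered the CRDs before retrying the apply, plain kubectl is enough:

kubectl get crd | grep argoproj.io
kubectl api-resources --api-group=argoproj.io

If sensors.argoproj.io appears in the CRD list, kubectl apply -f a.yaml should now recognize the Sensor kind.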

How do I differentiate the two yml files in a GitLab CI/CD yaml script?

I have this GitLab CI/CD yaml script:
services:
  - docker:19.03.11-dind

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage" || ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage" || ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
      when: never

stages:
  - build
  - Publish
  - deploy

cache:
  paths:
    - .m2/repository
    - target

build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar

docker_build:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service-dev.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service-dev.yml
  only:
    - developer

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  before_script:
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" provider-service-stage.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - kubectl apply -f provider-service-stage.yml
  only:
    - stage
This yaml script combines the pipelines for two branches, i.e. developer and stage, but I have two separate manifest files, one per branch (developer and stage):
provider-service-dev.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: stellacenter-dev
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: stellacenter-dev
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
provider-service-stage.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-app
  namespace: stellacenter-stage-uat
  labels:
    app: provider-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider-app
  template:
    metadata:
      labels:
        app: provider-app
    spec:
      containers:
        - name: provider-app
          image: registry.gitlab.com/stella-center/backend-services/provider-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8092
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: provider-service
  namespace: stellacenter-stage-uat
spec:
  type: NodePort
  selector:
    app: provider-app
  ports:
    - port: 8092
      targetPort: 8092
I reference them separately in the GitLab CI/CD script, but the job fails with an error like this:
$ kubectl apply -f provider-service-dev.yml
Error from server (NotFound): error when creating "provider-service-dev.yml": namespaces "stellacenter-dev" not found
Error from server (NotFound): error when creating "provider-service-dev.yml": namespaces "stellacenter-dev" not found
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The error occurs on the last line of the script. It says the namespaces can't be found, even though they are right there in the manifests. I don't know how to sort this out; please help.
The error is saying your namespace is missing. You must first create the namespace stellacenter-dev.
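Either create it once by hand, or make it an idempotent step in the deploy job's before_script (a sketch; adjust to however you manage cluster-level resources):

# one-off
kubectl create namespace stellacenter-dev

# idempotent variant, safe to run on every pipeline execution
kubectl create namespace stellacenter-dev --dry-run=client -o yaml | kubectl apply -f -

The same applies to stellacenter-stage-uat for the deploy_stage job.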

Configuring Argo output artifacts defined in a WorkflowTemplate from a Workflow

I have the following WorkflowTemplate with an output artifact named messagejson, and I am trying to configure it to use S3 from a Workflow:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: file-output
spec:
  entrypoint: writefile
  templates:
  - name: writefile
    container:
      image: alpine:latest
      command: ["/bin/sh", "-c"]
      args: ["echo hello | tee /tmp/message.json; ls -l /tmp; cat /tmp/message.json"]
    outputs:
      artifacts:
      - name: messagejson
        path: /tmp/message.json
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: read-file-
spec:
  entrypoint: read-file
  templates:
  - name: read-file
    steps:
    - - name: print-file-content
        templateRef:
          name: file-output
          template: writefile
        arguments:
          artifacts:
          - name: messagejson
            s3:
              endpoint: 1.2.3.4
              bucket: mybucket
              key: "/rabbit/message.json"
              insecure: true
              accessKeySecret:
                name: my-s3-credentials
                key: accessKey
              secretKeySecret:
                name: my-s3-credentials
                key: secretKey
However, I get: Error (exit code 1): You need to configure artifact storage. More information on how to do this can be found in the docs: https://argoproj.github.io/argo-workflows/configure-artifact-repository/. The same approach works if I configure input artifacts from a Workflow, but not output artifacts.
Any idea?

Set node label to the pod environment variable

How do I set a node label as a pod environment variable? I need to know the value of the topology.kubernetes.io/zone label inside the pod.
The Downward API currently does not support exposing node labels to pods/containers. There is an open issue about this on GitHub, but it is unclear when, if ever, it will be implemented.
That leaves only one option: fetch the node labels from the Kubernetes API, just as kubectl does. It is not trivial to implement, especially if you want the labels as environment variables. I'll give you an example of how it can be done with an initContainer, curl, and jq, but if possible, I suggest you implement this in your application instead, as it will be easier and cleaner.
To request labels, you need permission to do so. Therefore, the example below creates a service account with permission to get ("describe") nodes. The script in the initContainer then uses the service account token to make the request and extract the labels from the JSON response. The test container sources the environment variables from the file and echoes one.
Example:
# Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: describe-nodes
  namespace: <insert-namespace-name-where-the-app-is>
---
# Create a cluster role that is allowed to perform describe ("get") over ["nodes"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: describe-nodes
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: describe-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: describe-nodes
subjects:
- kind: ServiceAccount
  name: describe-nodes
  namespace: <insert-namespace-name-where-the-app-is>
---
# Proof of concept pod
apiVersion: v1
kind: Pod
metadata:
  name: get-node-labels
spec:
  # Service account to get node labels from Kubernetes API
  serviceAccountName: describe-nodes
  # A volume to keep the extracted labels
  volumes:
  - name: node-info
    emptyDir: {}
  initContainers:
  # The container that extracts the labels
  - name: get-node-labels
    # The image needs 'curl' and 'jq' apps in it
    # I used the curl image and run it as root to install 'jq'
    # during runtime
    # THIS IS A BAD PRACTICE UNSUITABLE FOR PRODUCTION
    # Make an image where both are present.
    image: curlimages/curl
    # Remove securityContext if you have an image with both curl and jq
    securityContext:
      runAsUser: 0
    # It'll put labels here
    volumeMounts:
    - mountPath: /node
      name: node-info
    env:
    # pass node name to the environment
    - name: NODENAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: APISERVER
      value: https://kubernetes.default.svc
    - name: SERVICEACCOUNT
      value: /var/run/secrets/kubernetes.io/serviceaccount
    - name: SCRIPT
      value: |
        set -eo pipefail
        # install jq; you don't need this line if the image has it
        apk add jq
        TOKEN=$(cat ${SERVICEACCOUNT}/token)
        CACERT=${SERVICEACCOUNT}/ca.crt
        # Get node labels into a json
        curl --cacert ${CACERT} \
          --header "Authorization: Bearer ${TOKEN}" \
          -X GET ${APISERVER}/api/v1/nodes/${NODENAME} | jq .metadata.labels > /node/labels.json
        # Extract 'topology.kubernetes.io/zone' from json
        NODE_ZONE=$(jq '."topology.kubernetes.io/zone"' -r /node/labels.json)
        # and save it into a file in the format suitable for sourcing
        echo "export NODE_ZONE=${NODE_ZONE}" > /node/zone
    command: ["/bin/ash", "-c"]
    args:
    - 'echo "$$SCRIPT" > /tmp/script && ash /tmp/script'
  containers:
  # A container that needs the label value
  - name: test
    image: debian:buster
    command: ["/bin/bash", "-c"]
    # source ENV variable from file, echo NODE_ZONE, and keep running doing nothing
    args: ["source /node/zone && echo $$NODE_ZONE && cat /dev/stdout"]
    volumeMounts:
    - mountPath: /node
      name: node-info
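To try this out, apply the manifest and read the zone back from the test container; a usage sketch (get-node-labels.yaml is simply whatever file you saved the manifest above into):

kubectl apply -f get-node-labels.yaml
kubectl exec get-node-labels -c test -- bash -c 'source /node/zone && echo $NODE_ZONE'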
You could use an initContainer:
...
spec:
  initContainers:
    - name: node2pod
      image: <image-with-k8s-access>
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
...
Ref: Node Label to Pod
Update: a similar solution is described in Inject node labels into Kubernetes pod.

Using generateName for a yaml file to be installed by Helm

I have an upload.yaml file, which uploads a script to mongo; I package it with Helm.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: upload-strategy-to-mongo-v2
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: upload-strategy-to-mongo
    spec:
      volumes:
      - name: upload-strategy-to-mongo-scripts-volume
        configMap:
          name: upload-strategy-to-mongo-scripts-v3
      containers:
      - name: upload-strategy-to-mongo
        image: mongo
        env:
        - name: MONGODB_URI
          value: ####
        - name: MONGODB_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-user
              key: ####
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-user
              key: #####
        volumeMounts:
        - mountPath: /scripts
          name: upload-strategy-to-mongo-scripts-volume
        command: ["mongo"]
        args:
        - $(MONGODB_URI)/ravnml
        - --username
        - $(MONGODB_USERNAME)
        - --password
        - $(MONGODB_PASSWORD)
        - --authenticationDatabase
        - admin
        - /scripts/upload.js
      restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: upload-strategy-to-mongo-scripts-v3
data:
  upload.js: |
    // Read the object from file and parse it
    var data = cat('/scripts/strategy.json');
    var obj = JSON.parse(data);
    // Upsert strategy
    print(db.strategy.find());
    db.strategy.replaceOne(
      { name: obj.name },
      obj,
      { upsert: true }
    )
    print(db.strategy.find());
  strategy.json: {{ .Files.Get "strategy.json" | quote }}
Now I am using generateName to generate a custom name every time I install it, because I need to install multiple copies of this package and therefore need the name to be dynamic.
Error
When I install this script with helm install <name> <tar.gz file> -n <namespace> I get the following error
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
but I am able to install if I don't use generateName. Any ideas?
I looked at various resources, but they don't seem to answer how to install via Helm.
References consulted:
Add random string on Kubernetes pod deployment name: https://github.com/kubernetes/kubernetes/issues/44501
https://zknill.io/posts/kubernetes-generated-names/
This seems to be a known issue: Helm doesn't work with generateName. For unique names, you can use Helm's built-in properties such as Revision or Name. See the following link for reference:
https://github.com/helm/helm/issues/3348#issuecomment-482369133
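For example, interpolating the release name into metadata.name yields a unique Job per helm install without generateName; a minimal sketch of the change to the template above:

apiVersion: batch/v1
kind: Job
metadata:
  # unique per release name passed to `helm install <name> ...`
  name: upload-strategy-to-mongo-{{ .Release.Name }}

If the name must also change across helm upgrade runs, {{ .Release.Revision }} can be appended the same way.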