Unable to update or delete existing Argo Events Sensor and EventSource - Kubernetes

Experiencing an issue while modifying or deleting an existing Argo Events Sensor.
Tried to modify a sensor
I applied changes to an existing Sensor, but the new changes are not taking effect: when the Sensor gets triggered, it still uses the old triggers.
Tried to delete a sensor
Unable to delete it. kubectl delete hangs forever; the only way out is to delete the whole namespace.
Using:
Argo Events version - v1.7.5
Kubernetes - v1.24.4+k3s1 (testing locally - docker-desktop with k3d)
Since deleting everything and redoing it is not an option when working in a production environment, I'd like to know whether this is a known issue with argo-events or whether I am doing something wrong.

As of release v1.7.5, there is a bug in the default Sensor & EventSource Kubernetes resource YAML values.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  ....
  finalizers:
    - sensor-controller
  ....
It has finalizers set to sensor-controller.
In v1.7.0+, the Argo Events team merged the sensor controller & event source controller into argo-events-controller-manager.
I believe the Sensor and EventSource finalizers are therefore pointing to the wrong controller; they should ideally be pointing to argo-events-controller.
To work around this issue until the bug is fixed in the argo-events Kubernetes charts:
Update your Sensor & EventSource definitions to set finalizers to an empty array, as in the example below.
# example sensor with empty finalizers
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: minio
  finalizers: [] # <-- this one
spec:
  dependencies:
    - name: test-dep
      eventSourceName: minio
      eventName: example
  triggers:
    - template:
        name: http-trigger
        http:
          url: http://http-server.argo-events.svc:8090/hello
          payload:
            - src:
                dependencyName: test-dep
                dataKey: notification.0.s3.bucket.name
              dest: bucket
            - src:
                dependencyName: test-dep
                contextKey: type
              dest: type
          method: POST
      retryStrategy:
        steps: 3
        duration: 3s
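If a Sensor or EventSource is already stuck in Terminating because of the stale finalizer, clearing the finalizer lets the pending delete complete without dropping the whole namespace. A minimal sketch, assuming the Sensor and EventSource are both named minio in the argo-events namespace:
# remove the stale finalizer so the pending delete can finish
kubectl patch sensor minio -n argo-events --type merge -p '{"metadata":{"finalizers":null}}'
# same idea for a stuck EventSource
kubectl patch eventsource minio -n argo-events --type merge -p '{"metadata":{"finalizers":null}}'
Once the finalizer is gone, the object is removed and the updated manifest can be re-applied cleanly.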

Related

ArgoCD stuck in deleting resource

I'm having an issue where ArgoCD gets stuck when deleting resources because it tries to delete the children first and then the parents.
This works well in some cases, but I have cases where it doesn't, for instance certificates: it deletes the certificate request, but because the certificate still exists, the certificate request is recreated.
And it just stays there deleting and recreating :/
Is there a way to specify an order, or just tell Argo to delete it all at once?
Thanks!
Yup so... here is the whole thing:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: previews
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: previews
  source:
    repoURL: git@github.com:myrepo.git
    targetRevision: HEAD
    path: helm
  destination:
    server: https://kubernetes.default.svc
    namespace: previews
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
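For context on the manifest above: the resources-finalizer.argocd.argoproj.io finalizer is what tells ArgoCD to cascade-delete every managed resource before removing the Application itself. If that cascade wedges, a common escape hatch is a non-cascading delete or clearing the finalizer by hand; a rough sketch, assuming the previews Application shown above:
# delete the Application without cascading to its child resources
argocd app delete previews --cascade=false
# or strip the finalizer so an already-pending kubectl delete can complete
kubectl patch application previews -n argocd --type merge -p '{"metadata":{"finalizers":null}}'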

argocd - stuck at deleting but resources are already deleted

argoproj/argocd:v1.8.7
I have helm charts (one with an ingress, one with a deployment/service/cm).
They have automated sync policies (prune and self-heal). When I try to delete them from the ArgoCD dashboard, the resources are getting deleted (no longer on the k8s cluster), however the status on the dashboard stays stuck at Deleting.
If I try to click sync, it shows -> Unable to deploy revision: application is deleting.
Any ideas why it's stuck in Deleting status even though all resources have already been deleted? Is there a way to refresh the status in the dashboard to reflect the actual state?
Thanks!
================
Update:
After doing a cascade delete, this is the screenshot (I've removed the app names, which is why part of it is white).
Running kubectl get all -A shows the resources aren't present anymore (even the cm, svc, deploy, etc.).
I was actually able to make this work by updating the Application yaml:
Add spec.syncPolicy.allowEmpty: true
Remove metadata.finalizers
The working version, which no longer gets stuck in Deleting status:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: service-name
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  project: proj-name
  source:
    path: service-name
    repoURL: ssh://...git
    targetRevision: dev
    helm:
      valueFiles:
        - ../values.yaml
        - ../values_version_dev.yaml
  syncPolicy:
    automated:
      prune: true
      allowEmpty: true
      selfHeal: true
This has happened to me several times. In every case it was because I had two Application declarations with the same name.
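A quick way to see whether an Application is wedged like this is to check its deletionTimestamp and any finalizers still attached; a sketch, assuming the service-name Application above:
kubectl get application service-name -n argocd -o jsonpath='{.metadata.deletionTimestamp}{"  "}{.metadata.finalizers}{"\n"}'
A non-empty deletionTimestamp with finalizers still listed means something is blocking the delete; if the finalizers are already empty but the UI is still stuck, the duplicate-Application cause mentioned above is worth checking.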

How to inject secrets from Google Secret Manager into K8s pod?

What is the best practice for injecting a secret from Google Secret Manager into a Kubernetes deployment? I have stored the admin password for my Grafana instance in Google Secret Manager. The Grafana instance was deployed using a helm chart on Google Kubernetes Engine. I did try using kube-secrets-init, which is a Kubernetes mutating admission webhook that mutates any K8s Pod referencing a secret in Google Secret Manager. I followed the instructions, but when I deploy my Grafana instance, I get the following error:
Internal error occurred: failed calling webhook "secrets-init.doit-intl.com": expected webhook response of admission.k8s.io/v1, Kind=AdmissionReview, got /, Kind=
This is the file used to deploy the mutating webhook:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-secrets-init-webhook-cfg
  labels:
    app: secrets-init-webhook
webhooks:
  - name: secrets-init.doit-intl.com
    clientConfig:
      service:
        name: secrets-init-webhook-svc
        namespace: default
        path: "/pods"
      caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQVAyQ3BnQjlEVGpZbk5xSVBlM01aTTB3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa09ETTVPVFptTW1ZdE1qSmtPQzAwT0RaakxUazNaVGt0TXpsbE0yWXlObUV5T0RaagpNQjRYRFRJeE1ETXhNREF5TWpZMU0xb1hEVEkyTURNd09UQXpNalkxTTFvd0x6RXRNQ3NHQTFVRUF4TWtPRE01Ck9UWm1NbVl0TWpKa09DMDBPRFpqTFRrM1pUa3RNemxsTTJZeU5tRXlPRFpqTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeTVRU2ZDTitoWERoTG04ZGRrN2Zzbk1HMG84bm9ZeVhUaC9ZeW1UZApiUGRrRGFTT3g1eU9weWtGb1FHV3RNaWR6RTNzd2FWd0x6WjFrdkpCaHZIWm43YzBsRDBNKytJbmNGV2dqZjEzCjdUS2RYZjI1TEFDNkszUVl3MmlzMTc5L1U1U2p2TUVCUFdzMkpVcFZ1L2s2Vm50UmZkMWtLSmpST2tVVTVMWlgKajVEZncyb2prNlZTeSs3MDh4aTBlZU14bjNzUzU1Q3hUSGJzNkdBWTRlOXVRUVhpT2dCWXl4UG90Nlk2Vk9WSApDcW1yTXQ3V1ZTQ0xvOVJDb1V3NjlLSnQ5aWVDdW13QnpWMW4xNXF5bExhNXF0MWdWa3h2RkF3MDRweUxWMnBCCmowSFNXdVQ3L2w4Q1dYeXhMdnlNNFoxeEc3VFQva3FSMElHRyt5YWI4Snk3cFFJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVRFSjFJN3phSGJkRQp0amxTUEtJdGU2VlhXbTB3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1YaU9BbGcweDlqTE5Zbm1IS3MrNW1ECmVhbnhUdmZxWHFCSlphK1ZPZnFYNm4xMzBncGk2cnM2K2lxUnh6bkVtVUJVZWJGM2s4c2VSUFBzenFvRzh4OFMKOXNHaE1idlBkVjZleTByMUR0VGFQQnk2blFXUUdqTzhXV2JNd21uakRlODhCQzZzckp1UEpCM0ZLVVYwSWF1NQo5bHhKaW5OamtoMXp4OHpYNVV2anNXeXg2dk5Ha0lZQy9KR2pTem5NRGhzZEVEbmE0R2FqOHR0TUlPWjduRG9JCkdkeWxCNnFwVUgvZmVsMURoTGlRWFIvL0cyRkdHRERQc3BUc1ZmczV5N2p3d2NURGgwYnpSZmpjdjhBRXR1cDQKWjlQSU9hNUdhN0NXbVJIY0FHSXBnSGdzUTZ5VC95N3krVVluM1pmVW44NEYwdERFMi9HbnN5ekRWZDM4cHZBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - operations: [ "CREATE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["pods"]
If I understood everything correctly, the current problem in your case is that there is no AdmissionReview v1 support in kube-secrets-init.
There has been a related GitHub issue open since last year: Add support for v1 AdmissionReview.
If we dig a bit deeper, we see that kube-secrets-init uses slok/kubewebhook as its upstream,
and kubewebhook has its own open issue: Add support for v1 AdmissionReview #72.
Current state:
As per the author's comment, a new v2.0.0-beta.1 has been released, but it seems no one has tested it yet.
In the kube-secrets-init issue there is a proposal to update and release a V2 as well.
So:
kube-secrets-init is not compatible with the v1 AdmissionReview version yet. The fix is pending until someone gives feedback on version v2.0.0-beta.1 of its upstream project (slok/kubewebhook).
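One possible stopgap, assuming the cluster still serves the v1beta1 AdmissionReview API (it was removed in Kubernetes 1.22): list v1beta1 in admissionReviewVersions so the API server sends the webhook the older review version it actually understands. Sketched against the configuration above:
kubectl patch mutatingwebhookconfiguration mutating-secrets-init-webhook-cfg --type='json' -p='[{"op":"replace","path":"/webhooks/0/admissionReviewVersions","value":["v1beta1"]}]'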

ArgoCD Helm chart - Repository not accessible

I'm trying to add a helm chart (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) to ArgoCD.
When I do this, I get the following error:
Unable to save changes: application spec is invalid: InvalidSpecError: repository not accessible: repository not found
Can you guys help me out please? I think I did everything right but it seems something's wrong...
Here's the Application yaml.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prom-oper
  namespace: argocd
spec:
  project: prom-oper
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: "13.2.1"
    path: prometheus-community/kube-prometheus-stack
    helm:
      # Release name override (defaults to application name)
      releaseName: prom-oper
      version: v3
      values: |
        ... redacted
    directory:
      recurse: false
  destination:
    server: https://kubernetes.default.svc
    namespace: prom-oper
  syncPolicy:
    automated: # automated sync by default retries failed attempts 5 times with following delays between attempts ( 5s, 10s, 20s, 40s, 80s ); retry controlled using `retry` field.
      prune: false # Specifies if resources should be pruned during auto-syncing ( false by default ).
      selfHeal: false # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
      allowEmpty: false # Allows deleting all application resources during automatic syncing ( false by default ).
    syncOptions: # Sync options which modifies sync behavior
      - CreateNamespace=true # Namespace Auto-Creation ensures that namespace specified as the application destination exists in the destination cluster.
    # The retry feature is available since v1.7
    retry:
      limit: 5 # number of failed sync attempt retries; unlimited number of attempts if less than 0
      backoff:
        duration: 5s # the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
        factor: 2 # a factor to multiply the base duration after each failed retry
        maxDuration: 3m # the maximum amount of time allowed for the backoff strategy
and also the configmap where I added the helm repo
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
data:
  admin.enabled: "false"
  repositories: |
    - type: helm
      url: https://prometheus-community.github.io/helm-charts
      name: prometheus-community
You are getting this error because of the way the Application is defined: Argo thinks it's a Git repository instead of a Helm repository.
Define the source object with a "chart" property instead of "path", like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prom-oper
  namespace: argocd
spec:
  project: prom-oper
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: "13.2.1"
    chart: kube-prometheus-stack
You can see it defined on line 128 in Argo's application-crd.yaml
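If in doubt, you can confirm the chart name and the available versions directly with the Helm CLI before wiring it into ArgoCD; a quick sanity check (the repo alias here is just an example):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm search repo prometheus-community/kube-prometheus-stack --versions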

Why does my kubernetes webhook only get called on create and not on update?

I have a working mutating admission webhook for Kubernetes. It is called when I first deploy an app using helm, but it is not called when I update with helm. It will in fact be called if I change the version number for the deployment; if only the content changed, the hook is skipped.
How can I make it always call the hook for any deployment?
Here is my hook config:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: appcfg-mutator
webhooks:
  - name: appcfg-mutator.devops.primerica.com
    clientConfig:
      service:
        name: appcfg-mutator
        namespace: appcfg-mutator
        path: "/"
      caBundle: {{ .Values.webhook.caBundle }}
    rules:
      - operations: ["*"]
        apiGroups: [""]
        apiVersions: ["v1","v1beta1","v1beta2"]
        resources: ["pod","deployments","namespaces","services"]
    failurePolicy: Fail
I log all requests as soon as they arrive, before deserializing the HTTP request body, so I can see it's not getting called on each update, only on create, delete, or when the version field in the yaml is changed.
Turns out I had a typo in my mutator config: "pod" instead of "pods". Plus, I was misunderstanding and expecting to see "deployments" updates since I was actually changing the "Deployment" kind yaml; it's just "pods" that I needed.
Here is the correction:
resources: ["pods","deployments","namespaces","services"]