Validating Webhook for configmap - kubernetes

I have a ValidatingWebhook that is triggered when certain CRD resources get [CREATE, UPDATE] operations.
I want to add to that one specific ConfigMap that should also trigger the validating webhook.
Under the same namespace I have multiple CRDs and ConfigMaps, but I want the webhook to be triggered for just one of the ConfigMaps as well.
These are the ValidatingWebhook properties (admissionregistration.k8s.io/v1beta1).
I guess the namespaceSelector is not a perfect match for my needs, since it triggers for any ConfigMap under that namespace. I also tried to figure out whether objectSelector is a good solution, but couldn't fully understand it.
This is the relevant part of my webhook configuration:
webhooks:
  - name: myWebhook.webhook
    clientConfig:
      ***
    failurePolicy: ***
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: ***
        apiVersions: ***
        resources: [CRD_resource_1, CRD_resource_2]
So I guess my question is: how can I pick just one of the multiple ConfigMaps to trigger my validation webhook?
Many thanks.

You should definitely use objectSelector in order to act only on specific ConfigMaps.
Make sure you put a specific label on those ConfigMaps and configure your webhook accordingly:
objectSelector:
  matchLabels:
    myCoolConfigMaps: "true"
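Note that the webhook only ever sees ConfigMaps if its rules also match the configmaps resource, and an objectSelector applies to everything its webhook entry matches (so the CRD objects would need the label too). One way around that is to give the ConfigMap its own entry; a sketch, where the second entry name is a made-up placeholder:
webhooks:
  - name: myWebhook.webhook             # existing entry for the CRD resources, unchanged
    ***
  - name: myConfigmapWebhook.webhook    # hypothetical extra entry just for the labelled ConfigMap
    clientConfig:
      ***
    objectSelector:
      matchLabels:
        myCoolConfigMaps: "true"
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: ['']
        apiVersions: ['v1']
        resources: ['configmaps']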

Related

How to trigger webhook to modify pod after pod finishes scheduling by kube-scheduler

I have a project that needs to mutate a Pod and add a label after the scheduler has assigned a node to it.
I trigger on Pod UPDATE using a MutatingWebhookConfiguration:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-webhook-cfg
  labels:
    app: mutate-webhook
webhooks:
  - name: add-label
    ...
    ...
    rules:
      - operations: [ "UPDATE" ]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
But how can I tell that a given update is the one that assigns a node to the Pod? My idea is to check whether the nodeName field is empty, but there seems to be no mechanism similar to a Predicate that gives me the Pod objects before and after the change for comparison. So all I can do is reduce unnecessary processing by checking nodeName and whether the Pod already contains the label to be added:
if pod.Spec.NodeName != "" && !isContains(pod.Labels, "custom-label") {
    ...
    ...
}
Is there any other way to determine whether the current update is the one that assigns a node to the Pod, so that unnecessary webhook calls or internal processing can be reduced?
I really appreciate any help with this.
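One thing that can at least cut down on unnecessary calls at the API-server level is an objectSelector that skips Pods already carrying the label. A sketch, assuming the label key really is custom-label (note this filters on the label only; it cannot look at nodeName):
webhooks:
  - name: add-label
    objectSelector:
      matchExpressions:
        - key: custom-label   # assumed label key
          operator: DoesNotExist
    rules:
      - operations: [ "UPDATE" ]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
For what it is worth, on UPDATE the AdmissionReview request does carry both object and oldObject, so the old and new nodeName can be compared inside the webhook itself.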

How to inject secrets from Google Secret Manager into K8s pod?

What is the best practice for injecting a secret from Google Secret Manager into a Kubernetes deployment? I have stored the admin password for my Grafana instance in Google Secret Manager. The Grafana instance was deployed using a Helm chart on Google Kubernetes Engine. I did try kube-secrets-init, a Kubernetes mutating admission webhook that mutates any K8s Pod referencing a secret in Google Secret Manager. I followed the instructions, but when I deploy my Grafana instance I get the following error:
Internal error occurred: failed calling webhook "secrets-init.doit-intl.com": expected webhook response of admission.k8s.io/v1, Kind=AdmissionReview, got /, Kind=
This is the file used to deploy the mutating webhook:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-secrets-init-webhook-cfg
  labels:
    app: secrets-init-webhook
webhooks:
  - name: secrets-init.doit-intl.com
    clientConfig:
      service:
        name: secrets-init-webhook-svc
        namespace: default
        path: "/pods"
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQVAyQ3BnQjlEVGpZbk5xSVBlM01aTTB3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa09ETTVPVFptTW1ZdE1qSmtPQzAwT0RaakxUazNaVGt0TXpsbE0yWXlObUV5T0RaagpNQjRYRFRJeE1ETXhNREF5TWpZMU0xb1hEVEkyTURNd09UQXpNalkxTTFvd0x6RXRNQ3NHQTFVRUF4TWtPRE01Ck9UWm1NbVl0TWpKa09DMDBPRFpqTFRrM1pUa3RNemxsTTJZeU5tRXlPRFpqTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeTVRU2ZDTitoWERoTG04ZGRrN2Zzbk1HMG84bm9ZeVhUaC9ZeW1UZApiUGRrRGFTT3g1eU9weWtGb1FHV3RNaWR6RTNzd2FWd0x6WjFrdkpCaHZIWm43YzBsRDBNKytJbmNGV2dqZjEzCjdUS2RYZjI1TEFDNkszUVl3MmlzMTc5L1U1U2p2TUVCUFdzMkpVcFZ1L2s2Vm50UmZkMWtLSmpST2tVVTVMWlgKajVEZncyb2prNlZTeSs3MDh4aTBlZU14bjNzUzU1Q3hUSGJzNkdBWTRlOXVRUVhpT2dCWXl4UG90Nlk2Vk9WSApDcW1yTXQ3V1ZTQ0xvOVJDb1V3NjlLSnQ5aWVDdW13QnpWMW4xNXF5bExhNXF0MWdWa3h2RkF3MDRweUxWMnBCCmowSFNXdVQ3L2w4Q1dYeXhMdnlNNFoxeEc3VFQva3FSMElHRyt5YWI4Snk3cFFJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVRFSjFJN3phSGJkRQp0amxTUEtJdGU2VlhXbTB3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1YaU9BbGcweDlqTE5Zbm1IS3MrNW1ECmVhbnhUdmZxWHFCSlphK1ZPZnFYNm4xMzBncGk2cnM2K2lxUnh6bkVtVUJVZWJGM2s4c2VSUFBzenFvRzh4OFMKOXNHaE1idlBkVjZleTByMUR0VGFQQnk2blFXUUdqTzhXV2JNd21uakRlODhCQzZzckp1UEpCM0ZLVVYwSWF1NQo5bHhKaW5OamtoMXp4OHpYNVV2anNXeXg2dk5Ha0lZQy9KR2pTem5NRGhzZEVEbmE0R2FqOHR0TUlPWjduRG9JCkdkeWxCNnFwVUgvZmVsMURoTGlRWFIvL0cyRkdHRERQc3BUc1ZmczV5N2p3d2NURGgwYnpSZmpjdjhBRXR1cDQKWjlQSU9hNUdhN0NXbVJIY0FHSXBnSGdzUTZ5VC95N3krVVluM1pmVW44NEYwdERFMi9HbnN5ekRWZDM4cHZBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - operations: [ "CREATE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["pods"]
If I understood everything correctly, the current problem in your case is that kube-secrets-init has no support for the v1 AdmissionReview API.
There has been a related open GitHub issue since last year: Add support for v1 AdmissionReview.
If we dig a bit deeper, we see that kube-secrets-init uses slok/kubewebhook as its upstream,
and kubewebhook has its own open issue: Add support for v1 AdmissionReview #72.
Current state:
As per the author's comment, a new v2.0.0-beta.1 has been released, but it seems no one has tested it yet.
In the kube-secrets-init issue there is a proposal to update to it and release a v2 as well.
So:
kube-secrets-init is not compatible with the v1 AdmissionReview API yet. The fix is pending until someone gives feedback on the upstream project (slok/kubewebhook) release v2.0.0-beta.1.
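For context, the error in the question is about the response envelope: with admission.k8s.io/v1 the webhook has to return an AdmissionReview that sets apiVersion and kind explicitly and echoes the request UID. The expected shape is roughly the following (written as YAML here; on the wire it is JSON):
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<uid copied from .request.uid>"
  allowed: true
  patchType: JSONPatch              # only when a mutation patch is returned
  patch: <base64-encoded JSON Patch>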

Can OPA Gatekeeper be used to audit K8s PodDisruptionBudget status fields?

We are looking to use OPA gatekeeper to audit K8s PodDisruptionBudget (PDB) objects. In particular, we are looking to audit the number of disruptionsAllowed within the status field.
I believe this field will not be available at point of admission since it is calculated and added by the apiserver once the PDB has been applied to the cluster.
It appears that for Pods, for example, the status field is passed as part of the AdmissionReview object [1], although in that particular example only the pre-admission status fields seem to make it into the AdmissionReview object.
1.) Is it possible to audit on the current in-cluster status fields in the case of PDBs?
2.) Given the intended use of OPA Gatekeeper as an admission controller, would this be considered an anti-pattern?
[1] https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/
This is actually quite reasonable, and is one of the use cases of Audit. You just need to make sure audit is enabled and spec.enforcementAction: dryrun is set in the Constraint.
Here is an example of what the ConstraintTemplate's Rego (its violation rule) would look like; you can prototype the logic in the OPA Playground.
violation[{"msg": msg}] {
  maxDisruptionsAllowed := input.parameters.maxDisruptionsAllowed
  value := input.review.object.status.disruptionsAllowed
  value > maxDisruptionsAllowed
  msg := sprintf("status.disruptionsAllowed must be <%v> or fewer; found <%v>", [maxDisruptionsAllowed, value])
}
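For completeness, a sketch of the ConstraintTemplate this rule would live in; the kind name K8sAllowedPodDisruptions and the integer parameter schema are assumptions chosen to match the Constraint below:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedpoddisruptions
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedPodDisruptions
      validation:
        openAPIV3Schema:
          properties:
            maxDisruptionsAllowed:
              type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedpoddisruptions
        # the violation rule shown above goes here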
In the specific Constraint, make sure to set enforcementAction to dryrun so the Constraint does not prevent k8s from updating the status field. For example:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedPodDisruptions
metadata:
  name: max-disruptions
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: ["policy"]
        kinds: ["PodDisruptionBudget"]
    namespaces:
      - "default"
  parameters:
    maxDisruptionsAllowed: 10
If you forget to set enforcementAction, k8s will be unable to update the status field of the PodDisruptionBudget.

Mutating Admission Controller doesn't get called on Deployment Creates/Updates

I have a ValidatingWebhookConfiguration monitoring Pods, which is working fine. I also have a MutatingWebhookConfiguration monitoring (and eventually mutating) Deployment objects.
Both controllers are written in Go; the mutating one is pretty much a clone of the validating one.
On the ValidatingWebhookConfiguration the triggering rule is:
- operations: ["CREATE","UPDATE"]
  apiGroups: [""]
  apiVersions: ["v1"]
  resources: ["pods"]
It is getting triggered fine.
On the MutatingWebhookConfiguration the triggering rule is:
- operations: ["CREATE","UPDATE"]
  apiGroups: [""]
  apiVersions: ["v1beta1"]
  resources: ["deployments"]
I can see that the webhook server is starting, but I cannot get the webhook to trigger.
I have tried changing v1beta1 to extensions/v1beta1 and still have no luck.
Any ideas on what I am doing wrong?
I would appreciate any help.
Thanks,
-Sreeni
If you want to act on Deployments, you need to specify the API group.
For Deployments it is apps.
You can get a list of all resources in Kubernetes and their corresponding API groups with the following command:
$ kubectl api-resources
NAME          SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                              true         Binding
...
deployments   deploy       apps       true         Deployment
...
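With the right group in place, the mutating rule for Deployments would look roughly like this (apps/v1 assumed as the served version):
rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: ["apps"]
    apiVersions: ["v1"]
    resources: ["deployments"]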

Why does my kubernetes webhook only get called on create and not on update?

I have a working mutating admission hook for Kubernetes. It is called when I first deploy an app using Helm, but it is not called when I update using Helm. It will in fact be called if I change the version number for the deployment, but if only the content changed, it skips calling the hook.
How can I make it always call the hook for any deployment?
Here is my hook config:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: appcfg-mutator
webhooks:
  - name: appcfg-mutator.devops.primerica.com
    clientConfig:
      service:
        name: appcfg-mutator
        namespace: appcfg-mutator
        path: "/"
      caBundle: {{ .Values.webhook.caBundle }}
    rules:
      - operations: ["*"]
        apiGroups: [""]
        apiVersions: ["v1","v1beta1","v1beta2"]
        resources: ["pod","deployments","namespaces","services"]
    failurePolicy: Fail
I log all requests as soon as they arrive, before deserializing the HTTP request body, so I can see that it is not getting called on each update: only on create, delete, or when the version field in the YAML is changed.
Turns out I had a typo in my mutator config: "pod" instead of "pods". Plus, I was misunderstanding things and expecting to see "deployments" updates, since I was actually changing Deployment-kind YAML; it's just "pods" that I needed.
Here is the correction:
resources: ["pods","deployments","namespaces","services"]