Mutating Admission Controller doesn't get called on Deployment Create/Updates - kubernetes

I have a ValidatingWebhookConfiguration monitoring Pods, which is working fine. I also have a MutatingWebhookConfiguration monitoring (and eventually mutating) Deployment objects.
Both controllers are written in Go; the code for the mutating one is pretty much a clone of the validating one.
On the ValidatingWebhookConfiguration the triggering rule is:
- operations: ["CREATE", "UPDATE"]
  apiGroups: [""]
  apiVersions: ["v1"]
  resources: ["pods"]
It is getting triggered fine.
On the MutatingWebhookConfiguration the triggering rule is:
- operations: ["CREATE", "UPDATE"]
  apiGroups: [""]
  apiVersions: ["v1beta1"]
  resources: ["deployments"]
I am able to see that the webhook is getting started, but I cannot get it to trigger.
I have tried changing v1beta1 to extensions/v1beta1 and still have no luck.
Any ideas on what I am doing wrong?
I would appreciate any help.
Thanks,
-Sreeni

If you want to take action on deployments, you need to specify the API group.
For deployments it is apps.
You can get a list of all resources in Kubernetes and their corresponding API groups with the following command:
$ kubectl api-resources
NAME          SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                              true         Binding
...
deployments   deploy       apps       true         Deployment
...
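For example, the rule on the MutatingWebhookConfiguration would need to target the apps group, roughly like this (a sketch; on older clusters Deployments may also be served from the extensions group):

- operations: ["CREATE", "UPDATE"]
  apiGroups: ["apps"]
  apiVersions: ["*"]          # or pin a version, e.g. "v1"
  resources: ["deployments"]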

Related

Assign permissions to a set of K8s namespaces under the same regex?

We're creating dynamic test environments for our developers. Each environment goes into one namespace called test-<something>, where <something> is entered by the developer when creating the environment (we use Gitlab-CI for the automation).
We want to grant them limited access to the K8s API to see deployments, exec into pods, and so on. So the plan is to apply a (cluster)role (yet to be decided) like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: "{{ ns }}"
  name: "test-{{ ns }}"
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
However we preferably don't want to apply it for all namespaces but only the test-* ones.
We could add the creation of the namespaced role and rolebinding during the app deploy, but that would mean granting our Gitlab-CI runner the permission to create and delete roles and rolebindings. We're concerned by the security implications of this and the possible privilege escalations.
Is it possible to create a clusterrolebinding limited to a regexp-ed set of namespaces?
Alternatively, if we want to grant the permissions via the automation, is it possible to limit the namespaces where the rolebindings can be created by the runner?
We looked at the docs but couldn't find such a capability.
The worst-case scenario is probably to go with the clusterrolebinding route and not give too many privileges to the automation, so we are asking whether there is a better way.
Thanks in advance
I also stumbled into this problem, and Hierarchical Namespaces seem like a decent solution, as you can grant the permissions in a single "static" parent namespace; every namespace created underneath it afterwards will inherit the permissions. Hope it helps.
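A rough sketch of that approach, assuming the Hierarchical Namespace Controller (HNC) is installed (the hnc.x-k8s.io/v1alpha2 API version and all names here are illustrative and may differ by release): create the binding once in a static parent namespace, then create each test environment as a subnamespace so it inherits the binding.

# RoleBinding created once in the static parent namespace "test"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: test-devs
subjects:
  - kind: Group
    name: devs                      # placeholder group of developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: test-dev-access             # placeholder role with the limited permissions
  apiGroup: rbac.authorization.k8s.io
---
# Each test-<something> environment is created as a subnamespace of "test"
# and inherits the RoleBinding above
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  namespace: test
  name: test-something              # placeholder for the dynamic namespace name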

MountVolume.SetUp failed for volume "<volume-name>-token-m4rtn" : failed to sync secret cache: timed out waiting for the condition

I am having an issue on GKE where this error is being spewed from all namespaces. I am not sure what the issue might be or how to troubleshoot it.
message: "MountVolume.SetUp failed for "volume-name-token-m4rtn" : failed to sync secret cache: timed out waiting for the condition"
It occurs for almost all pods in all namespaces. Has anyone come across this or have any ideas on how I can troubleshoot?
The error you are receiving points to a problem with RBAC (Role-Based Access Control) permissions; it looks like the service account used by the Pod does not have enough permissions.
Hence, the default service account within the namespace you are deploying to is not authorized to mount the secret that you are trying to mount into your Pod.
You can find further information at the following link: Using RBAC Authorization
You can also take a look at Google's documentation.
For example, the following Role grants read access (get, watch, and list) to all pods in the accounting Namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: accounting
  name: pod-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
You can also take a look at the following similar cases: Case in Reddit, StackOverflow case

Validating Webhook for configmap

I have a ValidatingWebhook that triggers when certain CRD resources receive [CREATE, UPDATE] operations.
I want to add a specific configmap that will also trigger that validating webhook.
Under the same namespace I have multiple CRDs and configmaps, but I want the webhook to be triggered for one of the configmaps as well.
These are the ValidatingWebhook v1beta1 admissionregistration.k8s.io properties.
I guess the namespaceSelector is not the perfect match for my needs, since it triggers for any configmap under that namespace. I also tried to understand whether objectSelector is a good solution, but couldn't fully understand it.
This is the relevant part of my webhook configuration:
webhooks:
  - name: myWebhook.webhook
    clientConfig:
      ***
    failurePolicy:
      ***
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: ***
        apiVersions: ***
        resources: [CRD_resource_1, CRD_resource_2]
So I guess my question is: how can I pick one of the multiple configmaps to trigger my validation webhook?
Many thanks.
You definitely should use objectSelector in order to act only on specific configMaps.
Make sure you put a specific label on those configMaps and configure your webhook accordingly:
objectSelector:
  matchLabels:
    myCoolConfigMaps: "true"
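One caveat worth noting (an assumption based on how admission webhooks combine selectors and rules): objectSelector applies to the whole webhook entry, not to individual rules, and the rules must also match configmaps for the webhook to fire on them. If the CRD resources should keep triggering regardless of labels, a clean way is to add a second webhook entry in the same configuration just for the labeled configmaps, sketched here:

webhooks:
  # ... existing webhook entry for the CRD resources ...
  - name: myWebhook-configmaps.webhook   # illustrative name
    clientConfig:
      ***
    objectSelector:
      matchLabels:
        myCoolConfigMaps: "true"
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: [""]
        apiVersions: ['v1']
        resources: ['configmaps']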

Automatically create Kubernetes resources after namespace creation

I have 2 teams:
devs: they create a new Kubernetes namespace each time they deploy a branch/tag of their app
ops: they manage access control to the cluster with (cluster)roles and (cluster)rolebindings
The problem is that 'devs' cannot kubectl their namespaces until 'ops' have created RBAC resources. And 'devs' cannot create RBAC resources themselves as they don't have the list of subjects to put in the rolebinding resource (sharing the list is not an option).
I have read the official documentation about Admission webhooks but what I understood is that they only act on the resource that triggered the webhook.
Is there a native and/or simple way in Kubernetes to apply resources whenever a new namespace is created?
I've come up with a solution by writing a custom controller.
With the following custom resource deployed, the controller injects the role and rolebinding in namespaces matching dev-.* and fix-.*:
kind: NamespaceResourcesInjector
apiVersion: blakelead.com/v1alpha1
metadata:
  name: nri-test
spec:
  namespaces:
    - dev-.*
    - fix-.*
  resources:
    - |
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: dev-role
      rules:
        - apiGroups: [""]
          resources: ["pods", "pods/portforward", "services"]
          verbs: ["list", "get"]
        - apiGroups: ["apps"]
          resources: ["deployments"]
          verbs: ["list", "get"]
        - apiGroups: ["networking.k8s.io"]
          resources: ["ingresses"]
          verbs: ["list", "get"]
        - apiGroups: [""]
          resources: ["pods/portforward"]
          verbs: ["create"]
        - apiGroups: [""]
          resources: ["namespaces"]
          verbs: ["list", "get"]
    - |
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: dev-rolebinding
      subjects:
        - kind: User
          name: dev
      roleRef:
        kind: Role
        name: dev-role
        apiGroup: rbac.authorization.k8s.io
The controller is still in early stages of development but I'm using it successfully in more and more clusters.
Here it is for those interested: https://github.com/blakelead/nsinjector
Yes, there is a native way but not an out of the box feature.
You can do what you have described by using/creating an operator. Essentially extending Kubernetes APIs for your need.
As an operator is just an open pattern that can implement things in many ways, in the scenario you gave, one way the control flow could look is:
An operator with privileges to create RBAC resources is deployed and subscribed to changes to the Kubernetes Namespace object kind
Devs create a namespace containing an agreed label (see the sketch after this list)
The operator is notified about the changes to the cluster
The operator checks namespace validation (this can also be done by a separate admission webhook)
The operator creates the RBAC resources in the newly created namespace
If the RBAC resources are cluster-wide, the same operator can clean them up once the namespace is deleted
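For the second step of this flow, the namespace created by the devs could carry the agreed label that the operator watches for; the label key and value here are only placeholders:

apiVersion: v1
kind: Namespace
metadata:
  name: dev-feature-x               # placeholder namespace name
  labels:
    rbac-bootstrap: "enabled"       # agreed label the operator reacts to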
It's kind of related to how the user is authenticated to the cluster and how they get a kubeconfig file. You can put a group in the client certificate or the bearer token that kubectl uses from the kubeconfig. Ahead of time, you can define a clusterrole with a clusterrolebinding to that group which gives them permission to certain verbs on certain resources (for example, the ability to create namespaces).
Additionally you can use an admission webhook to validate if the user is supposed to be part of that group or not.
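For the group-based approach above, a minimal sketch of the ClusterRole and ClusterRoleBinding (the group name dev-team is only an illustration; it must match the group carried in the client certificate or bearer token):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-creator-binding
subjects:
  - kind: Group
    name: dev-team                  # group from the client certificate / bearer token
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io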

Why does my kubernetes webhook only get called on create and not on update?

I have a working mutating admission hook for Kubernetes. It is called when I first deploy an app using Helm, but it is not called when I update using Helm. It will, in fact, call it if I change the version number for the deployment. But if only the content changed, it skips calling the hook.
How can I make it always call the hook for any deployment?
Here is my hook config:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: appcfg-mutator
webhooks:
  - name: appcfg-mutator.devops.primerica.com
    clientConfig:
      service:
        name: appcfg-mutator
        namespace: appcfg-mutator
        path: "/"
      caBundle: {{ .Values.webhook.caBundle }}
    rules:
      - operations: ["*"]
        apiGroups: [""]
        apiVersions: ["v1","v1beta1","v1beta2"]
        resources: ["pod","deployments","namespaces","services"]
    failurePolicy: Fail
I log all requests as soon as they arrive, before deserializing the HTTP request body, so I can see it is not getting called on each update; only on create, delete, or when the version field in the YAML is changed.
Turns out I had a typo in my mutator config: "pod" instead of "pods". Plus, I was misunderstanding and expecting to see "deployments" updates since I was actually changing the "Deployment" kind YAML. It's just "pods" that I needed.
Here is the correction:
resources: ["pods","deployments","namespaces","services"]