I'm using K8s OPA to enforce policies.
From the debugging section of the official documentation (https://open-policy-agent.github.io/gatekeeper/website/docs/debug), I created the ConstraintTemplate below.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyall
spec:
  crd:
    spec:
      names:
        kind: K8sDenyAll
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sdenyall

      violation[{"msg": msg}] {
        msg := sprintf("REVIEW OBJECT: %v", [input.review])
      }
I also created the constraint below.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyAll
metadata:
  name: deny-all-namespaces
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
I thought that every operation on namespaces would be denied. However, whereas kubectl create ns test1 is denied successfully, kubectl delete ns test2 isn't. Any ideas why? I'm experiencing this issue not only with namespaces but also with other K8s resources such as pods.
Sounds like you need to Enable Validation of Delete Operations?
To enable Delete operations for the validation.gatekeeper.sh admission webhook, add "DELETE" to the list of operations in the gatekeeper-validating-webhook-configuration ValidatingWebhookConfiguration [..]
operations:
- CREATE
- UPDATE
- DELETE
You can now check for deletes.
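For reference, a sketch of roughly where that list lives in the webhook configuration; apart from the added DELETE entry, the surrounding fields are abbreviated here and may differ in your Gatekeeper version:

# kubectl edit validatingwebhookconfiguration gatekeeper-validating-webhook-configuration
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook-configuration
webhooks:
- name: validation.gatekeeper.sh
  rules:
  - operations:
    - CREATE
    - UPDATE
    - DELETE        # added so DELETE requests are sent to Gatekeeper
    apiGroups: ["*"]
    apiVersions: ["*"]
    resources: ["*"]
  # ...the rest of the webhook spec stays unchanged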
I used the following guide to set up my chaostoolkit cluster: https://chaostoolkit.org/deployment/k8s/operator/
I am attempting to kill a pod using Kubernetes, however I get the following error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\" cannot list resource \"pods\" in API group \"\" in the namespace \"task-dispatcher\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
I set my serviceAccountName to a service account with an RBAC role that I created, but for some reason Kubernetes defaults to "system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb".
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-run
data:
  experiment.yaml: |
    ---
    version: 1.0.0
    title: Terminate Pod Experiment
    description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time.
    tags: ["kubernetes"]
    secrets:
      k8s:
        KUBERNETES_CONTEXT: "docker-desktop"
    method:
    - type: action
      name: terminate-k8s-pod
      provider:
        type: python
        module: chaosk8s.pod.actions
        func: terminate_pods
        arguments:
          label_selector: ''
          name_pattern: my-release-rabbitmq-[0-9]$
          rand: true
          ns: default
---
apiVersion: chaostoolkit.org/v1
kind: ChaosToolkitExperiment
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-crd
spec:
  serviceAccountName: test-user
  automountServiceAccountToken: false
  pod:
    image: chaostoolkit/chaostoolkit:full
    imagePullPolicy: IfNotPresent
    experiment:
      configMapName: my-chaos-exp
      configMapExperimentFileName: experiment.yaml
    restartPolicy: Never
The error you shared shows the run using the default "chaostoolkit" service account. It looks like the role associated with it does not have the proper permissions.
The service account "test-user" used in the ChaosToolkitExperiment definition should have a role that allows it to list and delete pods.
Please specify a service account with the proper role access.
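For example, a minimal sketch of a Role and RoleBinding that would give the test-user service account the pod permissions terminate_pods needs; the role name and namespaces below are illustrative and should match where the run executes and where the target pods live:

# Role in the namespace whose pods the experiment touches (illustrative name)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaostoolkit-pod-access
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "delete"]
---
# Bind it to the test-user service account used by the experiment run
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaostoolkit-pod-access
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: chaostoolkit-pod-access
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: chaostoolkit-run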
I am trying to establish the namespace "sandbox" in Kubernetes and have been using it for several days without issue. Today I got the below error.
I have checked to make sure that I have all of the requisite configmaps in place.
Is there a log or something where I can find what this is referring to?
panic: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I did find this (MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered) thread and have applied the below patch to my service account, but I am still getting the same error.
automountServiceAccountToken: false
UPDATE:
In answer to #p10l: I am working on a bare-metal cluster, version 1.23.0. No Terraform.
I am getting closer, but still not there.
This appears to be another RBAC problem, but the error does not make sense to me.
I have a user "dma." I am running workflows in the "sandbox" namespace using the context dma#kubernetes
The error now is
Create request failed: workflows.argoproj.io is forbidden: User "dma" cannot create resource "workflows" in API group "argoproj.io" in the namespace "sandbox"
but that user indeed appears to have the correct permissions.
This is the output of
kubectl get role dma -n sandbox -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"dma","namespace":"sandbox"},"rules":[{"apiGroups":["","apps","autoscaling","batch","extensions","policy","rbac.authorization.k8s.io","argoproj.io"],"resources":["pods","configmaps","deployments","events","pods","persistentvolumes","persistentvolumeclaims","services","workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2021-12-21T19:41:38Z"
  name: dma
  namespace: sandbox
  resourceVersion: "1055045"
  uid: 94191881-895d-4457-9764-5db9b54cdb3f
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  - argoproj.io
  - workflows.argoproj.io
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
This is the output of kubectl get rolebinding -n sandbox dma-sandbox-rolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"dma-sandbox-rolebinding","namespace":"sandbox"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"dma"},"subjects":[{"kind":"ServiceAccount","name":"dma","namespace":"sandbox"}]}
  creationTimestamp: "2021-12-21T19:56:06Z"
  name: dma-sandbox-rolebinding
  namespace: sandbox
  resourceVersion: "1050593"
  uid: d4d53855-b5fc-4f29-8dbd-17f682cc91dd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dma
subjects:
- kind: ServiceAccount
  name: dma
  namespace: sandbox
The issue you are describing is a recurring one, described here and here, where your cluster lacks the KUBECONFIG environment variable.
First, run echo $KUBECONFIG on all your nodes to see if it's empty.
If it is, look for the config file in your cluster, copy it to all the nodes, and export this variable by running export KUBECONFIG=/path/to/config. This file can usually be found at ~/.kube/config, or at /etc/kubernetes/admin.conf on master nodes.
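For example, on each node (the admin.conf path is an assumption; adjust it to wherever your config actually lives):

echo $KUBECONFIG                                                  # check whether it is already set
export KUBECONFIG=/etc/kubernetes/admin.conf                      # point it at the cluster config
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc  # persist it across sessions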
Let me know if this solution works in your case.
I have a ConstraintTemplate YAML policy that prevents deletion of all namespaces.
Now I want to create a policy that prevents deletion only of non-empty namespaces, i.e., namespaces containing resources such as pods, ingresses, PVs, PVCs, secrets, services, etc.
So an empty namespace should be deletable, but a namespace that contains content should not be.
Any suggestions on this?
Template:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8snamespacecannotbedeleted
spec:
  crd:
    spec:
      names:
        kind: K8sNamespaceCannotBeDeleted
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package kubernetes.admission

      violation[{"msg": msg}] {
        input.review.kind.kind == "Namespace"
        input.review.operation == "DELETE"
        msg := "[OPA] Namespace deletions are not permitted"
      }
Constraint:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNamespaceCannotBeDeleted
metadata:
  name: namespace-cannot-be-deleted
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
It's probably a bit too late, but this is how I would approach this problem:
First step:
create a CronJob in your favourite language that calls the API server; run it every x minutes to check for non-empty namespaces and add an annotation to those namespaces, like "can_delete": "no".
Second step:
edit your Rego code to check whether the namespace has an annotation called "can_delete"; if it does and it is set to "no", prevent the deletion (see the sketch below).
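A rough, untested sketch of what that Rego check could look like, building on the template above; the "can_delete" annotation name is simply whatever the CronJob writes:

package kubernetes.admission

violation[{"msg": msg}] {
  input.review.kind.kind == "Namespace"
  input.review.operation == "DELETE"
  # on DELETE the object being removed arrives in oldObject
  annotations := input.review.oldObject.metadata.annotations
  annotations["can_delete"] == "no"
  msg := "[OPA] Non-empty namespaces cannot be deleted"
}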
I have a working mutating admission hook for Kubernetes. It is called when I first deploy an app using Helm, but it is not called when I update using Helm. It is, in fact, called if I change the version number for the deployment; if only the content changed, the hook is skipped.
How can I make it always call the hook for any deployment?
Here is my hook config:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: appcfg-mutator
webhooks:
- name: appcfg-mutator.devops.primerica.com
  clientConfig:
    service:
      name: appcfg-mutator
      namespace: appcfg-mutator
      path: "/"
    caBundle: {{ .Values.webhook.caBundle }}
  rules:
  - operations: ["*"]
    apiGroups: [""]
    apiVersions: ["v1","v1beta1","v1beta2"]
    resources: ["pod","deployments","namespaces","services"]
  failurePolicy: Fail
I log all requests as soon as they arrive, before deserializing the HTTP request body, so I can see it is not getting called on each update; only on create, delete, or when the version field in the YAML is changed.
Turns out I had a typo in my mutator config: "pod" instead of "pods". I was also misunderstanding and expecting to see "deployments" updates, since I was actually changing the Deployment kind YAML; it's just "pods" that I needed.
Here is the correction:
resources: ["pods","deployments","namespaces","services"]
I've registered a custom resource definition in K8s:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: resources.example.com
  labels:
    service: "my-resource"
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: resources
    singular: resource
    kind: MYRESOURCE
    shortNames:
    - res
Now, on attempting to get an 'explain' for my custom resource with:
kubectl explain resource
I get the following error:
group example.com has not been registered
How can I add explain information to my custom resource definition, or is this not supported for CRDs?
explain works using OpenAPI schema information published by the server. Prior to v1.15, CRDs did not have the ability to publish that info.
In 1.15+, CRDs that specify structural schemas and enable pruning publish OpenAPI data and work with explain.
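For example, a sketch of the same CRD migrated to apiextensions.k8s.io/v1 with a structural schema; the spec fields and descriptions below are illustrative, not part of your original definition:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: resources.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: resources
    singular: resource
    kind: MYRESOURCE
    shortNames:
    - res
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        description: MYRESOURCE is an example custom resource.
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
                description: Illustrative field; this text is what kubectl explain prints.

After applying something like that, kubectl explain resource and kubectl explain resource.spec should show the documented fields.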