Prevent non-empty namespace deletion

I have a ConstraintTemplate YAML policy that prevents deletion of all namespaces.
Now I want to create a policy that prevents deletion of only non-empty namespaces, i.e. namespaces that contain resources such as Pods, Ingresses, PVs, PVCs, Secrets, Services, etc.
In other words, an empty namespace should be deletable, but a namespace that still contains resources should not be.
Any suggestions on this?
Template:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8snamespacecannotbedeleted
spec:
  crd:
    spec:
      names:
        kind: K8sNamespaceCannotBeDeleted
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package kubernetes.admission
        violation[{"msg": msg}] {
          input.review.kind.kind == "Namespace"
          input.review.operation == "DELETE"
          msg := "[OPA] Namespace deletions are not permitted"
        }
Constraint:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNamespaceCannotBeDeleted
metadata:
  name: namespace-cannot-be-deleted
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]

It's probably a bit too late, but this is how I would approach this problem:
First step: create a CronJob in your favourite language that calls the API server. Run it every X minutes to check for non-empty namespaces and annotate those namespaces with something like "can_delete": "no".
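For example, a minimal sketch of such a CronJob done with plain kubectl in a shell script rather than a full program; the name, image, schedule, ServiceAccount and the resource types checked are all placeholders to adapt, and the ServiceAccount needs RBAC to list resources and annotate namespaces:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mark-nonempty-namespaces
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ns-marker   # assumed to be allowed to list resources and annotate namespaces
          restartPolicy: OnFailure
          containers:
            - name: marker
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  # annotate every namespace as deletable or not, based on whether it still holds resources
                  for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
                    count=$(kubectl get pods,services,ingresses,persistentvolumeclaims,secrets -n "$ns" --no-headers 2>/dev/null | wc -l)
                    if [ "$count" -gt 0 ]; then
                      kubectl annotate namespace "$ns" can_delete=no --overwrite
                    else
                      kubectl annotate namespace "$ns" can_delete=yes --overwrite
                    fi
                  done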
Second step: edit your Rego code to check for an annotation called "can_delete"; if it exists and is set to "no", deny the deletion.
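A rough sketch of what the template could then look like, assuming a Kubernetes version that sends the object being deleted as oldObject in the admission request:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8snamespacecannotbedeleted
spec:
  crd:
    spec:
      names:
        kind: K8sNamespaceCannotBeDeleted
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package kubernetes.admission
        violation[{"msg": msg}] {
          input.review.kind.kind == "Namespace"
          input.review.operation == "DELETE"
          # only block namespaces the CronJob has marked as non-empty;
          # namespaces without the annotation fall through and can be deleted
          input.review.oldObject.metadata.annotations["can_delete"] == "no"
          msg := "[OPA] Non-empty namespaces cannot be deleted"
        }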

Related

K8s OPA Gatekeeper doesn't block DELETE operation

I'm using K8s OPA Gatekeeper to enforce policies.
Following the debugging section of the official documentation (https://open-policy-agent.github.io/gatekeeper/website/docs/debug), I created the ConstraintTemplate below.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyall
spec:
  crd:
    spec:
      names:
        kind: K8sDenyAll
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyall
        violation[{"msg": msg}] {
          msg := sprintf("REVIEW OBJECT: %v", [input.review])
        }
I also created the constraint below.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyAll
metadata:
  name: deny-all-namespaces
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
I thought that every operation on namespaces would be denied. However, while kubectl create ns test1 is denied successfully, kubectl delete ns test2 is not. Any ideas why? I'm experiencing this issue not only with namespaces, but with other k8s resources such as pods.
Sounds like you need to Enable Validation of Delete Operations?
To enable Delete operations for the validation.gatekeeper.sh admission webhook, add "DELETE" to the list of operations in the gatekeeper-validating-webhook-configuration ValidatingWebhookConfiguration [..]
operations:
- CREATE
- UPDATE
- DELETE
You can now check for deletes.
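For reference, after that edit (e.g. via kubectl edit validatingwebhookconfiguration gatekeeper-validating-webhook-configuration) the entry for the validation.gatekeeper.sh webhook ends up looking roughly like this; fields other than the rules are omitted here and the exact defaults depend on your Gatekeeper version:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook-configuration
webhooks:
  - name: validation.gatekeeper.sh
    rules:
      - operations: ["CREATE", "UPDATE", "DELETE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]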

How to trigger a webhook to modify a pod after kube-scheduler finishes scheduling it

I have a project that needs to modify a pod by adding a label after the pod has been assigned a node by the scheduler.
Trigger on pod update using MutatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-webhook-cfg
  labels:
    app: mutate-webhook
webhooks:
  - name: add-label
    ...
    ...
    rules:
      - operations: ["UPDATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
But how do I distinguish an update that is assigning a node to the pod? My idea is to check whether the nodeName field is empty, but there seems to be nothing similar to a scheduler Predicate that would give me the pod objects before and after the change for comparison. So all I can do to reduce unnecessary work is to check nodeName and whether the pod already contains the label to be added:
// only act on pods that have been scheduled and do not yet carry the label
if pod.Spec.NodeName != "" && !isContains(pod.Labels, "custom-label") {
    ...
    ...
}
Is there any other way to determine whether the current update is assigning a node to the pod, so that unnecessary webhook calls or internal processing can be reduced?
I really appreciate any help with this.
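One way to at least reduce the number of webhook calls might be an objectSelector on the webhook, so the API server skips pods that already carry the label; only unlabelled pods then reach the handler, where the nodeName check still applies. A sketch only (custom-label is a placeholder for the real label key):
webhooks:
  - name: add-label
    objectSelector:
      matchExpressions:
        - key: custom-label
          operator: DoesNotExist
    rules:
      - operations: ["UPDATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]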

Helm pre-install yaml for config

My k8s YAML config files depend on a priority class, and I need it to be installed before any of the other YAML inside the templates folder. The priority class:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
value: 1000
globalDefault: false
After reading the Helm docs, it seems that I can use the pre-install hook.
I've changed my YAML and added an annotations section with the pre-install hook, but it still doesn't work. Any idea what I'm missing here?
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    "helm.sh/hook": pre-install
value: 1000
globalDefault: false
The YAML is located inside the templates folder.
You put quotation marks around the helm.sh/hook annotation key, which is incorrect; quotation marks should only be added around the values.
You can also add a description field to your configuration file. Remember that this field is an arbitrary string; it is meant to tell users of the cluster when they should use this PriorityClass.
Your PriorityClass should look like this:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
value: 1000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
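The rest of the chart can then consume the class by name, for example in a pod template; this is a fragment only, and the container name and image are placeholders:
spec:
  template:
    spec:
      priorityClassName: ocritical
      containers:
        - name: app
          image: example/app:1.0.0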
More information about the proper configuration of a PriorityClass can be found here: PriorityClass.
More information about installation hooks can be found here: helm-hooks.
I hope it helps.

Changing public url in knative service definition

I'm currently playing around with Knative and bootstrapped a simple installation using Gloo and glooctl. Everything worked fine out of the box. However, I was wondering whether there is a way to change the generated URL at which the service is made available.
I already changed the domain, but I want to know whether I could select a domain name that does not contain the namespace, so that helloworld-go.namespace.mydomain.com would become helloworld-go.mydomain.com.
The current YAML definition looks like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  labels:
    name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: Go Sample v1
Thank you for your help!
This is configurable via the ConfigMap named config-network in the namespace knative-serving. See the ConfigMap in the deployment resources:
apiVersion: v1
data:
  _example: |
    ...
    # domainTemplate specifies the golang text template string to use
    # when constructing the Knative service's DNS name. The default
    # value is "{{.Name}}.{{.Namespace}}.{{.Domain}}". And those three
    # values (Name, Namespace, Domain) are the only variables defined.
    #
    # Changing this value might be necessary when the extra levels in
    # the domain name generated is problematic for wildcard certificates
    # that only support a single level of domain name added to the
    # certificate's domain. In those cases you might consider using a value
    # of "{{.Name}}-{{.Namespace}}.{{.Domain}}", or removing the Namespace
    # entirely from the template. When choosing a new value be thoughtful
    # of the potential for conflicts - for example, when users choose to use
    # characters such as `-` in their service, or namespace, names.
    # {{.Annotations}} can be used for any customization in the go template if needed.
    # We strongly recommend keeping namespace part of the template to avoid domain name clashes
    # Example '{{.Name}}-{{.Namespace}}.{{ index .Annotations "sub"}}.{{.Domain}}'
    # and you have an annotation {"sub":"foo"}, then the generated template would be {Name}-{Namespace}.foo.{Domain}
    domainTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
    ...
kind: ConfigMap
metadata:
  labels:
    serving.knative.dev/release: "v0.8.0"
  name: config-network
  namespace: knative-serving
Therefore, your config-network should look like this:
apiVersion: v1
data:
  domainTemplate: "{{.Name}}.{{.Domain}}"
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
You can also have a look at, and customize, the config-domain ConfigMap to configure the domain name that is appended to your services.
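For example, a minimal config-domain that makes mydomain.com the cluster-wide default domain would look roughly like this (mydomain.com stands in for your own domain):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  mydomain.com: ""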
Assuming you're running knative over an istio service mesh, there's an example of how to use an Istio Virtual Service to accomplish this at the service level in the knative docs.

Why does my kubernetes webhook only get called on create and not on update?

I have a working mutating admission webhook for Kubernetes. It is called when I first deploy an app using Helm, but it is not called when I update using Helm. It will in fact be called if I change the version number of the deployment, but if only the content changed, it skips calling the hook.
How can I make it always call the hook for any deployment?
Here is my hook config:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: appcfg-mutator
webhooks:
  - name: appcfg-mutator.devops.primerica.com
    clientConfig:
      service:
        name: appcfg-mutator
        namespace: appcfg-mutator
        path: "/"
      caBundle: {{ .Values.webhook.caBundle }}
    rules:
      - operations: ["*"]
        apiGroups: [""]
        apiVersions: ["v1","v1beta1","v1beta2"]
        resources: ["pod","deployments","namespaces","services"]
    failurePolicy: Fail
I log all requests as soon as they arrive, before deserializing the HTTP request body, so I can see it is not getting called on each update, only on create, delete, or when the version field in the YAML is changed.
Turns out I had a typo in my mutator config: "pod" instead of "pods". Plus, I was misunderstanding things and expecting to see "deployments" updates, since I was actually changing the "Deployment" kind YAML; it's just "pods" that I needed.
Here is the correction:
resources: ["pods","deployments","namespaces","services"]
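In context, the rules section of the webhook configuration then reads (unchanged apart from that fix):
rules:
  - operations: ["*"]
    apiGroups: [""]
    apiVersions: ["v1","v1beta1","v1beta2"]
    resources: ["pods","deployments","namespaces","services"]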