Gatekeeper/OPA constraints on a subset of namespaces without using labels - kubernetes

I'm using gatekeeper/OPA to create constraints for various services I have running in specific namespaces. To do so, I'm relying on namespaceSelectors to match the constraint to only a set of namespaces. My CI/CD process is responsible for labeling all my custom namespaces with the required labels that my constraint will be looking for.
However, I now need to make sure that no new namespace is created without the required labels (otherwise that namespace will ignore all my constraints). The fact that my CI/CD tooling applies these labels does not guarantee that no other namespace has been created in my cluster without them.
If I apply the k8srequiredlabels[2] constraint template to all namespaces, it will find violations in system namespaces such as kube-system. Gatekeeper constraints allow you to specify any of the following to match your constraint[1]:
labelSelector
namespaceSelector
namespaces list
Ideally I'd like to be able to say that all namespaces must have x labels on them, except the namespaces in an exclusion list (e.g. kube-system). However, there's no option to use the above 'namespaces' list in an exclusive way, and the other two options require someone to manually add labels to newly created namespaces (which opens up room for error).
Any suggestions on how you can ensure that a subset of your cluster's namespaces have x labels without having to manually label them and use a label/namespaceSelector?
How would you prevent a namespace from being created using OPA & Gatekeeper if it does not meet certain criteria, such as having x label on it?
[1] https://github.com/open-policy-agent/gatekeeper/pull/131/files
[2] https://github.com/open-policy-agent/gatekeeper/blob/master/demo/agilebank/templates/k8srequiredlabels_template.yaml

Problem 1 can be solved with OPA itself. You can write a mutating webhook using OPA (https://github.com/open-policy-agent/opa/issues/943) to add labels to your newly created namespaces, or you can write a mutating controller (in Golang). Under the hood, both do the same thing.
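For illustration, the registration for such a webhook might look roughly like this. This is a minimal sketch: the service name, namespace, and webhook name are assumptions about how you deploy OPA, and OPA itself must serve a mutating policy behind that endpoint:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: opa-namespace-labeler        # hypothetical name
webhooks:
  - name: labeler.opa.example.com    # hypothetical webhook name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["namespaces"]
    clientConfig:
      service:
        namespace: opa               # assumed namespace where OPA runs
        name: opa                    # assumed Service fronting OPA
      caBundle: <base64-encoded CA>  # CA that signed OPA's serving certificate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail              # reject namespace creation if OPA is down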
For the second problem, you need to add a validation rule to your Rego policies that runs on namespace creation and verifies that the label exists.
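Also worth checking: newer Gatekeeper releases accept an excludedNamespaces field in a constraint's match block, which expresses exactly the exclusion list asked for above. A sketch against the linked k8srequiredlabels template, assuming your Gatekeeper version supports that field (the required label key is an assumption):
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: all-ns-must-have-owner       # hypothetical name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
    excludedNamespaces:              # the exclusion list from the question
      - kube-system
      - kube-public
  parameters:
    labels: ["owner"]                # example required label key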
Extra relevant information: to act only on specific namespaces based on a label, you can add a namespaceSelector to your validating/mutating webhook configuration.

You can use Helm to dynamically assign labels to specific namespaces.
The namespace value can be derived from the --namespace parameter, i.e. the namespace the Helm chart is deployed to; in the chart's templates it is then accessed as {{ .Release.Namespace }}. Alternatively, you can set these namespaces with --set when deploying the chart with helm upgrade. If you have a few environments, you can model them as aliases in values.yaml and then set the namespace value for each of them like this (a sketch of the matching values.yaml follows the command):
helm upgrade \
<chart_name> \
<path_to_the_chart> \
--set <environment_one>.namespace=namespace1 \
--set <environment_two>.namespace=namespace2 \
...
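For reference, a values.yaml backing those --set flags could be shaped like this (the environment names are placeholders), with templates reading e.g. {{ .Values.environment_one.namespace }}:
environment_one:
  namespace: namespace1
environment_two:
  namespace: namespace2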
Please take a look at: dynamic-namespace-variable.
To check whether a specific namespace has the proper labels, use a webhook admission controller.
Here you can find more information: webhook-admission-controller.

Related

Helmfile with additional resource without chart

I know this is maybe a weird question, but I want to ask if it's possible to also manage single resources (e.g. a ConfigMap or Secret) without a separate chart.
For example, I'm trying to install nginx-ingress and would additionally like to apply a Secret that contains HTTP basic authentication data.
I can reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate Helm chart just to apply that basic-auth Secret?
I have many releases which need a single additional resource (like a JSON ConfigMap or a single Secret), and it would be cumbersome to always need a separate chart for each release.
Thank you!
Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster) and so you could in principle kubectl apply a file in a hook. Helmfile also has some integration with Kustomize and it might be possible to add resources this way. As you've noted you can also write local charts and put whatever YAML you need in those.
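To illustrate the hook route, a presync hook can kubectl apply a manifest before the release is synced. A minimal sketch, assuming a basic-auth-secret.yaml file next to the helmfile and kubectl available on the host:
releases:
  - name: nginx-ingress
    chart: ingress-nginx/ingress-nginx
    hooks:
      - events: ["presync"]          # run before this release is synced
        showlogs: true
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]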
The occasional chart does support including either arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile values: to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...

Is there any mechanism in kubernetes to automatically add annotation to new pods in a specific namespace?

I have a namespace where new short-lived pods (< 1 minute) are constantly created by Apache Airflow. I want all those new pods to be annotated with aws.amazon.com/cloudwatch-agent-ignore: true automatically, so that no CloudWatch metrics (Container Insights) are created for those pods.
I know that I can achieve that from airflow side with pod mutation hook but for the sake of the argument let's say that I have no control over the configuration of that airflow instance.
I have seen MutatingAdmissionWebhook, and it seems it could do the trick, but it looks like considerable effort to set up. So I'm looking for a more off-the-shelf solution; I want to know if there is some "standard" admission controller that can handle this specific use case, without me having to deploy a web server and implement the API required by MutatingAdmissionWebhook.
Is there any way to add that annotation from the Kubernetes side at pod creation time? The annotation must be there "from the beginning", not added 5 seconds later; otherwise cwagent might pick the pod up between its creation and the annotation being added.
You have to use the aws.amazon.com/cloudwatch-agent-ignore: true annotation: a pod that has it will be ignored by amazon-cloudwatch-agent / cwagent.
Here is an excerpt of the solution for adding this annotation in Apache Airflow:
(...) In order to force Apache Airflow to add the aws.amazon.com/cloudwatch-agent-ignore: true annotation to the task/worker pods and to the pods created by the KubernetesPodOperator, you will need to add the following to your Helm values.yaml (assuming that you are using the "official" Helm chart for Airflow 2.2.3):
airflowPodAnnotations:
  aws.amazon.com/cloudwatch-agent-ignore: "true"
airflowLocalSettings: |-
  def pod_mutation_hook(pod):
      pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
If you are not using the helm chart then you will need to change the pod_template_file yourself to add the annotation and you will also need to modify the airflow_local_settings.py to include the pod_mutation_hook.
You can try this repo, which is a mutating admission webhook that does exactly this. To date there is no built-in Kubernetes support for automatically annotating pods in a specific namespace.
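For reference, whichever webhook implementation you pick, the registration that scopes it to the Airflow namespace is short, and the interesting part is the namespaceSelector. A sketch (names and the selecting label are assumptions; the webhook backend itself still has to build the JSON patch that adds the annotation):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-annotator                # hypothetical name
webhooks:
  - name: annotator.example.com      # hypothetical webhook name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    namespaceSelector:               # only act on namespaces with this label
      matchLabels:
        cloudwatch-ignore: "enabled" # hypothetical label on the Airflow namespace
    clientConfig:
      service:
        namespace: webhooks          # assumed location of the webhook server
        name: pod-annotator
    admissionReviewVersions: ["v1"]
    sideEffects: None
Because admission runs before the object is persisted, the annotation is present from the very first write, which satisfies the "from the beginning" requirement.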

How to run multiple IngressController with same IngressClass?

Is it possible to run multiple IngressController in the same Namespace with the same IngressClass?
I have multiple IngressController with different LoadBalancer IP Addresses and would like to continue with this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm, which prevents you from overwriting resources (your IngressClass) that belong to another Helm deployment.
One way to work around this may be to use the helm --dry-run option: once you have the list of objects written into a file, remove the IngressClass, then apply that file.
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik helm chart, I know that we would install IngressClasses named after the Traefik deployment we operate. The chart you're using, for Nginx, apparently does not implement support for that scenario. Which doesn't mean it shouldn't work.
Now, answering your first question (is it possible to run multiple IngressControllers in the same namespace with the same IngressClass): yes.
You may have several ingress controllers, one that watches for Ingresses in namespace A, another in namespace B, both sets of Ingresses referencing the same class. Deploying those controllers into the same namespace is possible, although implementing NetworkPolicies and isolating your controllers into their own namespaces would help in distinguishing who's who.
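With the ingress-nginx chart, that per-namespace watch can be expressed through values roughly like this (a sketch; the controller.scope.* keys exist in recent chart versions, so verify against the chart version you actually run):
controller:
  scope:
    enabled: true
    namespace: team-a                # this controller only watches namespace team-a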
An option that works for me when deploying multiple ingress controllers with Helm is setting controller.ingressClassResource.enabled: false in every Helm deployment except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create the IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
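In practice the override for the second and subsequent releases is tiny; everything else can stay at the chart defaults shown above:
controller:
  ingressClassResource:
    enabled: false   # reuse the IngressClass created by the first release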

Change the Spring Boot Admin registry unique ID

I have a requirement where my client applications have almost the same properties, and even the URL is the same since they run behind a load balancer; the only difference between them is a particular set of environment properties.
Is it possible to register them uniquely based on that property?
I would say there are a few approaches.
One would be loading environment variables from a Kubernetes Secret.
The second is using Helm (https://helm.sh/).
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you went with the Secret option, you would probably create two separate Secrets with the env variables you need and load them based on the app name; if the apps are set up in different namespaces, copy the Secret into each of them, since Secrets cannot be referenced across namespaces.
If you went with Helm, you would write your own chart and put the env variables into values.yaml, or mix both approaches and load the Secret from inside Kubernetes.
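To make the Secret option concrete, here is a minimal sketch (all names are assumptions): one Secret per client instance holding the differing property, loaded into that instance's Deployment via envFrom:
apiVersion: v1
kind: Secret
metadata:
  name: client-a-env                 # hypothetical: one Secret per instance
stringData:
  INSTANCE_NAME: "client-a"          # the environment property that differs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-a
spec:
  selector:
    matchLabels:
      app: client-a
  template:
    metadata:
      labels:
        app: client-a
    spec:
      containers:
        - name: app
          image: myorg/client:latest # hypothetical image
          envFrom:
            - secretRef:
                name: client-a-env   # expose the Secret's keys as env variables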
This will work on Kubernetes; I do not know (based on your tags) if it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.

What's the best way for stage-specific K8s config?

Let's say we have to manage a database connection string for stages test, int and prod.
What are the patterns here for Kubernetes?
I would handle general configuration via ConfigMaps: create a configuration for each environment and have your pods/deployments consume the values via environment variables.
This approach allows you to decouple your configuration from your k8s object definitions and gives you the ability to inject the required config per environment.
For sensitive data, which might include a username and password in a connection string for example, consider using Secrets instead.
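A minimal sketch of that split (names and keys are assumptions): one ConfigMap per stage for the plain settings, a Secret for the sensitive part, both consumed as environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                   # same name in each stage's namespace/cluster
data:
  DB_HOST: "test-db.example.com"     # value differs per stage (test/int/prod)
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
stringData:
  DB_PASSWORD: "change-me"           # sensitive part of the connection string
The pod then consumes both via envFrom, e.g. envFrom: [{configMapRef: {name: app-config}}, {secretRef: {name: app-secrets}}] in the container spec.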
The best way in my experience is to use a higher-level construct like a Helm chart. This way you manage all your manifests in a platform-agnostic way and make them configurable during chart install/upgrade.
That way you can use ConfigMaps, Secrets, or env vars, and populate them from values set during install/upgrade. With Helm, you would do it somewhat like this:
helm install -f values.yaml <chart>, where values.yaml contains all your non-default values (e.g. the db password)
helm upgrade <release> --reuse-values --set image.tag=1.0.1 to, say, release a new version while keeping all other values defined during the initial install
For non-default components, e.g. a development database, you can use a value like devdb.enabled, defaulting to false, and set it to true only in the dev environment where you want to launch the devdb pod and point your database Service there (all the logic for this lives in the manifest templates of the Helm chart).
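A sketch of that toggle (all names are assumptions): the flag lives in values.yaml and gates the dev database manifests inside the chart's templates:
# values.yaml
devdb:
  enabled: false                     # set to true only in the dev environment
# templates/devdb.yaml
{{- if .Values.devdb.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: devdb                        # hypothetical dev database Service
spec:
  selector:
    app: devdb
  ports:
    - port: 5432                     # assuming a PostgreSQL dev database
{{- end }}
On dev you would then install with --set devdb.enabled=true.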