I have a Grafana dashboard which shows logs, and I created some variables.
How can I define dependencies between them so that when I select a namespace in the Namespace variable, Grafana shows only the containers that belong to the selected namespace?
You can find a relevant metric and change the label_values query to something like this:
label_values(kube_pod_container_info{namespace="$namespace"}, container)
The query above makes the container variable depend on the selected namespace.
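For completeness, both variable queries could be defined against the same kube-state-metrics metric. This is a sketch; the variable names are assumptions, and $namespace must match the name of your Namespace variable:

```promql
# "namespace" variable: list all namespaces that have containers
label_values(kube_pod_container_info, namespace)

# "container" variable: only containers in the selected namespace
label_values(kube_pod_container_info{namespace="$namespace"}, container)
```

In the dashboard settings, the container variable will then refresh whenever the namespace selection changes.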
I have added some labels to the Kubernetes namespace metadata, and now I want to scrape the namespace as well as those labels using Prometheus. I am trying to create a Grafana dashboard and want to categorise namespaces based on labels. I tried using kubernetes_sd_configs: I am able to get the namespace, but not the labels of those namespaces. Does anyone know of a way to scrape the labels along with the namespace?
In case somebody is also looking for an answer: you can use kube-state-metrics.
It exposes the kube_namespace_labels metric, which carries the namespace's labels as well as its name.
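As a sketch, you can then join other metrics against kube_namespace_labels to group by a namespace label. The team label is an assumption here; note that kube-state-metrics exposes namespace labels with a label_ prefix (and newer versions require the labels to be allowlisted via --metric-labels-allowlist):

```promql
# Count pods per value of the namespace's "team" label
count(
  kube_pod_info
  * on (namespace) group_left (label_team)
  kube_namespace_labels
) by (label_team)
```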
Problem:
I want every pod created in my cluster to hold/point to the same data,
e.g. let's say I want all of them to have an env var like OWNER=MYNAME.
There are multiple users in my cluster, and I don't want them to have to change their YAMLs and manually add OWNER=MYNAME to env.
Is there a way to have all current and future pods automatically assigned a predefined value, or to mount a ConfigMap so that the same information is available in every single pod?
Can this be done at the cluster level? At the namespace level?
I want it to be transparent to the user: a user would apply whatever pod to the cluster, and the info would be available to them without even asking.
Thanks, everyone!
Pod Preset might help you partially achieve what you need. The PodPreset resource allows injecting additional runtime requirements into a Pod at creation time. You use label selectors to specify the Pods to which a given PodPreset applies.
Check this to know how pod preset works.
First, you need to enable pod presets in your cluster.
You can use a PodPreset to inject env variables or volumes into your pods.
You can also inject a ConfigMap into your pods.
Put a common label on all the pods that should share the config, and reference that label in your PodPreset resource.
Unfortunately there are plans to remove Pod Presets altogether in coming releases, but you can still use them in current releases. There are also other implementations similar to Pod Presets that you can try.
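A minimal PodPreset could look like the following sketch. The role: app selector label, the resource name, and the OWNER value are all assumptions; PodPreset lives in the settings.k8s.io/v1alpha1 API and must be enabled in the cluster as noted above:

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: owner-preset          # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      role: app               # applied to every pod carrying this label
  env:
    - name: OWNER
      value: MYNAME
```

Any pod created in that namespace with the label role: app would then get the OWNER env var injected without any change to its own YAML.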
I'm using gatekeeper/OPA to create constraints for various services I have running in specific namespaces. To do so, I'm relying on namespaceSelectors to match the constraint to only a set of namespaces. My CI/CD process is responsible for labeling all my custom namespaces with the required labels that my constraint will be looking for.
However I now need to make sure that no new namespace is created without the required labels (otherwise this namespace will ignore all my constraints). The fact that my CI/CD tooling applies these labels does not allow me to be certain that no other namespace has been created in my cluster without these labels.
If I apply the k8srequiredlabels[2] constraint template to all namespaces, it will find violations in system namespaces such as kube-system. The Gatekeeper constraints allow you to specify any of the following to match your constraint[1]:
labelSelector
namespaceSelector
namespaces list
Ideally I'd like to be able to say that all namespaces must have x labels on them, except those in an exclusion list (e.g. kube-system). However, there's no option to use the above namespaces list in an exclusive way, and the other two options require someone to manually add labels to newly created namespaces (which opens up room for error).
Any suggestions on how you can ensure that a subset of your cluster's namespaces have x labels without having to manually label them and use a label/namespaceSelector?
How would you prevent a namespace from being created using OPA & Gatekeeper if it does not meet certain criteria, such as having x label on it?
[1] https://github.com/open-policy-agent/gatekeeper/pull/131/files
[2] https://github.com/open-policy-agent/gatekeeper/blob/master/demo/agilebank/templates/k8srequiredlabels_template.yaml
Problem 1 can be solved by using OPA itself. You can write a mutating webhook using OPA (https://github.com/open-policy-agent/opa/issues/943) to add labels to your newly created namespaces, or you can write a mutating controller (in Golang). Under the hood, both do the same thing.
For the second problem, you need to add a validation rule to your Rego files that runs on namespace creation and verifies that the labels exist.
Extra relevant information: to perform actions on specific namespaces based on a label, you can add a namespaceSelector to your validating/mutating webhook configuration.
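As a sketch, the validation rule can follow the same shape as the k8srequiredlabels template [2] referenced in the question (the parameter name labels is taken from that template):

```rego
package k8srequiredlabels

violation[{"msg": msg}] {
  # labels actually present on the incoming namespace
  provided := {label | input.review.object.metadata.labels[label]}
  # labels the constraint demands
  required := {label | label := input.parameters.labels[_]}
  missing := required - provided
  count(missing) > 0
  msg := sprintf("you must provide labels: %v", [missing])
}
```

A constraint built from this template and scoped to the Namespace kind will reject any namespace created without the required labels.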
You can use Helm to dynamically assign labels to specific namespaces.
The namespace value can be derived from the --namespace parameter, i.e. the namespace the Helm chart is deployed to; in templates it is then accessed as {{ .Release.Namespace }}. Alternatively, you can set the namespaces with --set when deploying the chart with helm upgrade. If there are a few environments, you can define them as aliases in values.yaml and then set their namespace values like this:
helm upgrade \
<chart_name> \
<path_to_the_chart> \
--set <environment_one>.namespace=namespace1 \
--set <environment_two>.namespace=namespace2 \
...
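The corresponding values.yaml could then look like this sketch (the environment names are placeholders matching the command above):

```yaml
environment_one:
  namespace: namespace1
environment_two:
  namespace: namespace2
```

Templates would reference these values as {{ .Values.environment_one.namespace }}, with the --set flags overriding the defaults at deploy time.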
Please take a look on: dynamic-namespace-variable.
To check whether a specific namespace has the proper labels, use a webhook admission controller.
Here you can find more information: webhook-admission-controller.
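A sketch of the registration side of such a webhook is below; the webhook name, service name, and path are all assumptions, and the service itself (which receives AdmissionReview requests and checks the labels) still has to be implemented separately:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-namespace-labels        # hypothetical name
webhooks:
  - name: namespaces.labels.example.com # hypothetical
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["namespaces"]
    clientConfig:
      service:
        name: label-checker             # hypothetical service doing the check
        namespace: default
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```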
I would like to deploy a container to one specific namespace, let's call it dev, and easily promote it through the test --> acc --> prod namespaces.
The reason we use dev --> test --> acc --> prod in one cluster is mainly testing and integration with external parties.
You can easily deploy a container to any namespace. In fact, you must set the namespace to which you are going to deploy your container, but you cannot move a container from one namespace to another: once spawned, the container stays in one namespace until it dies. The best way to achieve your goal is to use the image version. You can deploy image version 1.0.1 to the dev namespace, work on it, and then use the same image for the container in the test namespace. As a result, you will have the same container, but in the new namespace.
You can get the image currently deployed in your dev namespace (assuming your current namespace is dev):
kubectl describe pods
Look at the Image field, copy the image name with its version, and update the image in test:
kubectl set image deployment/<your-deployment> <your-image-name>=<paste-here-image-with-version> --namespace=<your-test-namespace>
One way I can think of is to set an environment variable whose value is the namespace of the Pod when defining the Pod.
Getting the namespace dynamically, without requiring changes to the Pod, would be better because it lessens the burden of constructing a Pod.
So is there a way to get current namespace in a Pod?
Try the file:
/var/run/secrets/kubernetes.io/serviceaccount/namespace
You don't need to set a static namespace env variable in the pod spec. If you want to use env variables, you can use the Downward API to let Kubernetes fill it in dynamically with the current namespace. See https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
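A minimal sketch of that Downward API usage in a container spec (the env var name POD_NAMESPACE is an assumption; the fieldPath is the documented field):

```yaml
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```

The container can then read its own namespace from $POD_NAMESPACE at runtime, with no per-namespace changes to the manifest.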