My question is really short and hopefully simple: is it possible to add a cacerts file automatically to every pod in a specific Kubernetes cluster?
According to this article it's possible by creating a ConfigMap and mounting it at the path /etc/ssl/certs/. But is it possible to achieve this at a higher level, so that all pods in a Kubernetes cluster get this cacerts file?
You can add a MutatingAdmissionWebhook for pods, which adds the certificate volume to each pod by default. Check out the docs about MutatingAdmissionWebhooks and writing an admission webhook.
This way you add a "service" that mutates the pod config before the scheduler handles it. Check out this for a quick example.
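As a minimal sketch, the registration side could look like this; the webhook name, Service, namespace, and caBundle are placeholders for your own deployment, not an existing project:

```yaml
# Sketch of a MutatingWebhookConfiguration that sends every pod CREATE
# to your injector service; all names here are placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: cacerts-injector
webhooks:
  - name: cacerts-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore          # don't block pod creation if the webhook is down
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: cacerts-injector     # Service in front of your webhook Deployment
        namespace: default
        path: /mutate
      caBundle: <base64-encoded CA certificate>
```

The webhook backend then returns a JSONPatch that appends the cacerts ConfigMap volume and a volumeMount at /etc/ssl/certs/ to every container, so each pod gets the file without any change to its own manifest.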
I'm using hardware-dependent pods; in my K8s cluster, I instantiate my pods with a DaemonSet.
Now I want to access those pods with a URL like https://domain/{pod-hostname}/
My use case is a bit more involved than this one: my pods' names are not predefined.
Moreover, I also need a REST entry point to list my pods' names or hostnames.
I published a Docker image to solve my issue: urielch/dyn-ingress
My YAML configuration is in the Docker doc.
This container adds a label to each pod, uses that label to create a Service per pod, and then updates an existing Ingress so that each pod is reachable at a path like /{pod-hostname}/, as sketched below.
Feel free to test it; the source code is here.
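Roughly, the objects such a controller manages per pod look like the following; the label key, names, and ports are made up for illustration, so check the Docker doc for the real configuration:

```yaml
# Illustrative only: a Service selecting exactly one labeled pod...
apiVersion: v1
kind: Service
metadata:
  name: pod-abc12                  # assumed: derived from the pod's name
spec:
  selector:
    dyn-ingress/pod: pod-abc12     # assumed: label the controller added to that pod
  ports:
    - port: 80
      targetPort: 8080
---
# ...and a path added to the existing Ingress for that Service,
# yielding https://domain/{pod-hostname}/:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: existing-ingress
spec:
  rules:
    - host: domain
      http:
        paths:
          - path: /pod-abc12/
            pathType: Prefix
            backend:
              service:
                name: pod-abc12
                port:
                  number: 80
```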
I am using Spark, which has a predefined script to create a pod in my Kubernetes cluster.
After the pod is created and running, I want to check if it's still alive. I could do this by using a livenessProbe; however, this is configured in the configuration file for the Pod, which I do not have control over, as my pod is created by Spark and I cannot change its config file.
So my question is: after the pod has already been created and is running, how can I change its configuration so that it uses a livenessProbe?
Or is there any other way to check the liveness of the pod?
I am a beginner to Kubernetes, sorry for this question!
After a Pod is created you can't change the livenessProbe definition.
You could use a second Pod to report on the status of your workload, if that works for your use case.
The other option is to use a Mutating Admission Controller to modify the Pod definition that your Spark script creates, though I would consider this not exactly beginner-friendly.
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook
https://www.trion.de/news/2019/04/25/beispiel-kubernetes-mutating-admission-controller.html
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
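If you go the webhook route, the stanza it would patch into the Spark-created container might look like this; the endpoint and port are assumptions about your Spark image, not Spark defaults:

```yaml
# Hypothetical probe injected by a mutating webhook; adjust path/port
# to whatever health endpoint your container actually exposes.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
```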
Summarize the problem:
Is there any way to add an ENV variable to a pod, or to new pods, in Kubernetes?
For example, I want to add HTTP_PROXY to many pods, and to the new pods that Kubeflow 1.4 will generate, so that these pods can access the internet.
Describe what you’ve tried:
I searched and found that Istio might be able to do that, but it's too complex for me.
Second, there are too many YAMLs in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add the ENV directly in each of them.
So, does anyone have a good, simple way to do this? For example, by doing it in the Kubernetes configuration.
Use "PodPreset" object to inject common environment variables and other params to all the matching pods.
Please follow below article
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
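A minimal sketch for the proxy use case could look like this; the namespace, label, and proxy URL are placeholders, and the settings.k8s.io/v1alpha1 API only exists on clusters up to v1.19 with the feature enabled:

```yaml
# PodPreset sketch: every pod in the namespace matching the label
# selector gets these environment variables injected at admission time.
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: proxy-env
  namespace: kubeflow              # placeholder namespace
spec:
  selector:
    matchLabels:
      inject-proxy: "true"         # placeholder label your pods must carry
  env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"   # placeholder proxy address
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
```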
PodPreset was indeed removed in v1.20, so on newer clusters you need a webhook.
You will have to run an additional service in your cluster that will change the configuration of the pods.
Here is an example on which I based my own webhook, which changed the configuration of pods in the cluster. In this example the developer adds a sidecar to the pod, but you can adapt the logic to inject the required ENV instead:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
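The heart of such a webhook is the JSONPatch it returns in the AdmissionReview response; to inject an env var instead of a sidecar, the patch would look roughly like this (rendered as YAML, with a placeholder proxy URL):

```yaml
# JSONPatch (RFC 6902) appending HTTP_PROXY to the first container's
# env list; if the container has no env list yet, the webhook must
# first add one with: {op: add, path: /spec/containers/0/env, value: []}
- op: add
  path: /spec/containers/0/env/-
  value:
    name: HTTP_PROXY
    value: "http://proxy.example.com:3128"
```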
I have used some Bitnami charts in my Kubernetes app. In my pod, there is a file whose path is /etc/settings/test.html. I want to override that file. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the existing pod? Many of the examples create a new pod and use the created ConfigMap, but I don't want to create a new pod; I want to use the existing pod.
Thanks
If not all, then almost all pod spec fields are immutable, meaning that you can't change them without destroying the old pod and creating a new one with the desired parameters. There is no way to edit a pod's volume list without recreating it.
The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned/destroyed according to scheduler needs. In general, you need a workload object that does pod management for you (a Deployment, StatefulSet, Job, or DaemonSet, depending on deployment strategy and application nature).
There are two ways to edit a file in an existing pod: either use kubectl exec and console commands to edit the file in place, or kubectl cp to copy an already edited file into the pod. I advise against both, because the change is not permanent: the file reverts as soon as the pod is recreated. Better to back up the necessary data, switch to a Deployment with one replica, and then mount a ConfigMap as you read on the internet.
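For reference, once you control the pod template through a Deployment, overriding a single file works with a subPath mount; a minimal sketch, assuming a ConfigMap named test-html and a placeholder container name:

```yaml
# Pod template fragment: mount one ConfigMap key over a single file,
# leaving the rest of /etc/settings untouched.
spec:
  containers:
    - name: app                          # placeholder container name
      volumeMounts:
        - name: settings
          mountPath: /etc/settings/test.html
          subPath: test.html             # mount only this key as a file
  volumes:
    - name: settings
      configMap:
        name: test-html                  # e.g. kubectl create configmap test-html --from-file=test.html
```

Note that subPath mounts do not pick up later ConfigMap updates; the pod has to be recreated for changes to appear.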
I'm new to K8s. In the process of configuring OpenStack Cinder as a K8s StorageClass, I have to add some flags to my kube-controller-manager, and that turned out to be a big problem for me.
I'm using K8s 1.11 in VMs, and my K8s cluster has a kube-controller-manager pod, but I don't know how to add these flags to it.
After hours of searching, I found that a lot of tasks require adding flags to kube-controller-manager, but no document explains exactly how to do that. Please share the way to get past this.
Thank you.
You can check the /etc/kubernetes/manifests dir on your master nodes.
This dir contains the YAML files for the master components.
These are also known as static pods.
More info: https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Update these files and you will see your changes take effect, as the kubelet restarts the pod on file change.
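For example, on a kubeadm-style cluster the manifest contains a command list you can extend; the two OpenStack flags below are what you would append for the in-tree Cinder provider, while the other values are placeholders from a typical 1.11 setup:

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-controller-manager.yaml;
# the last two flags are the ones added for OpenStack/Cinder.
spec:
  containers:
    - name: kube-controller-manager
      command:
        - kube-controller-manager
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
        - --leader-elect=true
        - --cloud-provider=openstack                  # added
        - --cloud-config=/etc/kubernetes/cloud.conf   # added (your OpenStack credentials file)
```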
As a more long-term solution, you will need to incorporate the flags into the tooling that you use to generate your k8s cluster.