How to add a flag to the Kubernetes controller manager

I'm new to K8s. While configuring OpenStack Cinder as a K8s StorageClass, I have to add some flags to my kube-controller-manager, and that has turned out to be a real problem.
I'm using K8s 1.11 in VMs, and my cluster has a kube-controller-manager pod, but I don't know how to add these flags to it.
After hours of searching, I found that many tasks require adding flags to kube-controller-manager, but no document explains exactly how to do that. Please point me in the right direction.
Thank you.

You can check the /etc/kubernetes/manifests directory on your master nodes.
This directory contains the YAML files for the master components.
These are also known as static pods.
More info: https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Update these files and you will see your changes take effect, since the kubelet restarts a static pod whenever its manifest file changes.
As a longer-term solution, you will need to incorporate the flags into the tooling you use to generate your k8s cluster.
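For example, here is a trimmed sketch of what a kubeadm-style /etc/kubernetes/manifests/kube-controller-manager.yaml can look like; the image tag, the existing flags, and the cloud.conf path are illustrative, so keep whatever is already in your file and only append the flags you need (here, the ones the Cinder/cloud-provider setup typically asks for):

# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt, kubeadm layout)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.11.0   # illustrative tag
    command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf   # existing flags stay as-is
    - --leader-elect=true
    # Flags appended for the OpenStack Cinder / cloud-provider setup
    # (the cloud.conf path is an assumption, adjust to your layout):
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf

After saving the file, the kubelet notices the change and recreates the static pod, so the new flags should show up in the output of kubectl -n kube-system describe pod for the controller manager.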

Related

Add `cacerts` file to all pods in a Kubernetes cluster

Well, my question is really short and hopefully simple: is it possible to add a cacerts file automatically to every pod in a specific Kubernetes cluster?
According to this article it's possible by creating a ConfigMap and mounting it at the path /etc/ssl/certs/. But is it possible to achieve this at a higher level, so that all pods in a Kubernetes cluster get this cacerts file?
You can add a MutatingAdmissionWebhook for pods, which adds the folder as a volume to each pod by default. Check out the docs about MutatingAdmissionWebhooks and writing an admission webhook.
This way you add a "service" that mutates the pod configuration before the scheduler handles it. Check out this quick example.
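For reference, here is a rough sketch of the registration side of such a webhook; the webhook server itself (which returns the JSON patch adding the volume and volumeMount) is not shown, and the names, Service, and CA bundle below are hypothetical placeholders:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: cacerts-injector
webhooks:
- name: cacerts-injector.example.com        # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore                     # don't block pod creation if the webhook is down
  clientConfig:
    service:
      name: cacerts-injector                # hypothetical Service fronting your webhook server
      namespace: kube-system
      path: /mutate
    caBundle: <base64-encoded-CA-bundle>    # placeholder
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]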

Any way we can add an ENV to a pod or to new pods in Kubernetes?

Summarize the problem:
Is there any way to add an ENV to an existing pod or to new pods in Kubernetes?
For example, I want to add HTTP_PROXY to many pods, and to the new pods that Kubeflow 1.4 will generate, so that these pods can access the internet.
Describe what you've tried:
I searched and found that Istio might be able to do that, but it's too complex for me.
Second, there are too many YAMLs in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add the ENV to each of them.
Does anyone have a good, simple way to do this, ideally in the Kubernetes configuration itself?
Use the PodPreset object to inject common environment variables and other parameters into all matching pods.
Please follow the article below:
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
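A minimal sketch based on the PodPreset API from the linked v1.19 docs (settings.k8s.io/v1alpha1, an alpha API that was removed in v1.20, see the next answer); the namespace, label selector, and proxy addresses are assumptions:

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: proxy-env                 # hypothetical name
  namespace: kubeflow             # PodPresets are namespaced; adjust as needed
spec:
  selector:
    matchLabels:
      inject-proxy: enabled       # only pods carrying this label are mutated
  env:
  - name: HTTP_PROXY
    value: "http://proxy.example.com:3128"
  - name: HTTPS_PROXY
    value: "http://proxy.example.com:3128"
  - name: NO_PROXY
    value: "10.0.0.0/8,.svc,.cluster.local"

Note that the PodPreset admission plugin also has to be enabled on the API server for this to take effect.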
If PodPreset is indeed removed from v1.20, then you will need a webhook.
You will have to run an additional service in your cluster that changes the configuration of the pods.
Here is the example on which I based my own webhook for changing pod configuration in the cluster; in that example the author adds a sidecar to the pod, but you can adapt the logic to inject the required ENV instead:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
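Whatever framework you build the webhook with, its core output is the patch returned in the AdmissionReview response; in JSON Patch (RFC 6902) terms, injecting a proxy variable into a pod's first container looks roughly like the sketch below (shown in YAML form, values are placeholders, and a real webhook also has to handle containers that have no env list yet and patch every container):

# Illustrative JSON Patch a mutating webhook could return to append an env var
# to the first container of the pod being created.
- op: add
  path: /spec/containers/0/env/-
  value:
    name: HTTP_PROXY
    value: "http://proxy.example.com:3128"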

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS in a CrashLoopBackOff, which turned out to be because Ubuntu was configured to use systemd-resolved, so resolv.conf had a loopback resolver configured. After reading the CoreDNS docs, I found that a solution would be to change the resolvConf parameter for the kubelet, either via command-line arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation, I didn't really get a clue, because it seems to be tailored to the case of initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize it again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several infos:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/config/kubelet.yaml that I could change, but AFAIU this would be overriden by kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success by some combination of changing aforementioned CM, running kubeadm upgrade something afterwards and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was to impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config, how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in the official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but some additional steps are needed to activate it. Keep in mind that the kubelet's --dynamic-config-dir flag must be set to a writable directory on the node.
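As a sketch of the first step, the configuration file you wrap in the ConfigMap is a KubeletConfiguration object; for the CoreDNS/systemd-resolved case above, the relevant field is resolvConf (the path below is the usual Ubuntu location of the real, non-loopback resolv.conf, so verify it on your node before using it):

# Sketch of a KubeletConfiguration to be wrapped in a ConfigMap, e.g.
#   kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet-config.yaml
# and then referenced from the Node object's spec.configSource.
# Only the field relevant to the CoreDNS loop is shown; merge with your existing settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point the kubelet at the resolvers managed by systemd-resolved instead of the
# 127.0.0.53 stub in /etc/resolv.conf:
resolvConf: /run/systemd/resolve/resolv.conf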

Is it possible to add/modify kubernetes container spec based on clusterwide setting

I have a kubernetes-based application that uses an operator to build and deploy containers in pods. Sometimes I'd like to run containers in privileged mode to enable performance tracing, but since I'm not deploying the pod/containers directly from a manifest, I cannot simply add privileged mode and the debugfs filesystem mount.
That leaves me to fork the operator code, change where it builds the container spec, and redeploy with the modified operator. Doable, but awkward.
So my question is: is it possible to impose additional attributes on container specs based on some cluster-wide setting, either before the pods are deployed by the operator, or by modifying the container spec after deployment? I tried the latter with kubectl edit pod mypod, but that didn't work.
This is on a physical cluster installed with kubespray.
There are three things to consider:
Your operator can create a controller (e.g. a Deployment) instead of a bare Pod, which allows modifications in the Pod spec area and triggers a Deployment rollout (see the rolling update strategy).
Use a MutatingAdmissionWebhook,
so that before the Pod is created, its manifest is modified/overwritten on the fly.
More info regarding MutatingAdmissionWebhook can be found here and here.
A workaround in the form of modifying the supplied spec and swapping out the pod.
More about this was discussed here.
Please let me know if any of the above helped.
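Whichever of those routes you take, the change you ultimately need to land in the container spec would look roughly like the fragment below (container name, image, and mount are illustrative placeholders for whatever your operator generates):

# Illustrative fragment of the mutated pod spec needed for performance tracing:
# privileged mode plus a debugfs hostPath mount.
spec:
  containers:
  - name: my-app               # placeholder for the container your operator builds
    image: my-app:latest       # placeholder image
    securityContext:
      privileged: true
    volumeMounts:
    - name: debugfs
      mountPath: /sys/kernel/debug
  volumes:
  - name: debugfs
    hostPath:
      path: /sys/kernel/debug
      type: Directory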

How to change --horizontal-pod-autoscaler-sync-period field in kube-controller-manager to 5sec in gke

I am trying to set up horizontal pod autoscaling in GKE. I have found no proper documentation on reducing --horizontal-pod-autoscaler-sync-period to 5 seconds in kube-controller-manager.
The link below says there is a possibility of changing the flags:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
Are there any proper implementation steps for this?
You are not able to do this on GKE, EKS, or other managed clusters.
In order to change or add flags for kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node and the ability to modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS, and other managed clusters are managed solely by their providers, who do not give you access to the master nodes.
But you can create a cluster with kubeadm init and configure/change it as you like; see the sketch below.
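On such a self-managed (kubeadm) cluster the change boils down to one extra flag in the static pod manifest, roughly like this excerpt (keep the flags already present in your file; only the last line is new, and the default for this flag is 15s):

# Excerpt of /etc/kubernetes/manifests/kube-controller-manager.yaml on a kubeadm master node.
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf   # existing flags stay unchanged
    - --leader-elect=true
    # Added: run the HPA control loop every 5 seconds instead of the default 15s.
    - --horizontal-pod-autoscaler-sync-period=5s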
You can stop your minikube cluster and start it with your extra config:
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=5s'
For more details, you can go through https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults