minikube start - how to modify the KubeletConfiguration passed to kubeadm? - kubernetes

I would like to set the value KubeletConfiguration.cpuCFSQuota = false in the config.yaml passed to kubeadm when launching minikube, to turn off CPU resource checking, but I have not managed to find an option for this in the documentation here: https://minikube.sigs.k8s.io/docs/handbook/config/ . The closest solution I have found is the option --extra-config=kubelet.cpu-cfs-quota=false, but the kubelet's --cpu-cfs-quota flag has been deprecated and no longer has any effect.
Any ideas appreciated.
Environment:
Ubuntu 20.04
Minikube 1.17.1
Kubernetes 1.20.2
Driver docker (20.10.2)
Thanks,
Piers.

Using the --extra-config=kubelet. flag alongside minikube start is a good approach, but you would also need to set the Kubelet parameters via a config file.
As you already noticed, the --cpu-cfs-quota flag is described as:
Enable CPU CFS quota enforcement for containers that specify CPU
limits (DEPRECATED: This parameter should be set via the config file
specified by the Kubelet's --config flag.)
So you need to set that parameter by creating a kubelet config file:
The configuration file must be a JSON or YAML representation of the
parameters in this struct. Make sure the Kubelet has read permissions
on the file.
Here is an example of what this file might look like:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"
Now you can use that config file to set cpuCFSQuota = false:
// cpuCFSQuota enables CPU CFS quota enforcement for containers that
// specify CPU limits.
// Dynamic Kubelet Config (beta): If dynamically updating this field, consider that
// disabling it may reduce node stability.
// Default: true
// +optional
CPUCFSQuota *bool `json:"cpuCFSQuota,omitempty"`
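So a minimal sketch of the kubelet config file for this question might look like this (the evictionHard entry is just carried over from the example above):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable CPU CFS quota enforcement for containers that specify CPU limits
cpuCFSQuota: false
evictionHard:
  memory.available: "200Mi"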
and then call minikube with --extra-config=kubelet.config=/path/to/config.yaml
Alternatively, you can start your minikube without the --extra-config flag and then start the Kubelet with the --config flag set to the path of the Kubelet's config file. The Kubelet will then load its config from this file.
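For example (the path is a placeholder; on kubeadm-managed nodes this flag is usually set in the kubelet's systemd unit rather than on an interactive command line):
kubelet --config=/path/to/config.yaml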
I know these are a few more steps than you expected, but setting the kubelet parameters via a config file is the recommended approach because it simplifies node deployment and configuration management.

Related

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, and it briefly explains how to allocate compute resources to a node using the kubelet command and the flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube for macOS and, as far as I can tell, minikube is set up to be used with the kubectl command alongside the minikube command.
For local learning purposes on minikube I don't need to have this set (maybe it can't even be done on minikube), but how could this be done, say, in a Kubernetes development environment on a node?
This could be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file.
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
and can include additional configuration types like:
kind: KubeletConfiguration
In order to get a basic config file you can use kubeadm config print init-defaults.
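A rough sketch of such a combined file, assuming the kubeadm.k8s.io/v1beta2 API (the apiVersion varies with your kubeadm version) and with purely illustrative reservation values:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# reservations for Kubernetes daemons and OS system daemons
kubeReserved:
  cpu: 100m
  memory: 256Mi
systemReserved:
  cpu: 100m
  memory: 256Mi
# hard eviction threshold
evictionHard:
  memory.available: "200Mi"
You would then pass this file to kubeadm init --config <file>.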
2. For a live cluster, consider reconfiguring the current cluster using the steps "Generate the configuration file" and "Push the configuration file to the control plane" as described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for minikube please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
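For example, an illustrative invocation (the keys here are just examples of valid kubelet and apiserver settings, not specific to this question):
minikube start --extra-config=kubelet.max-pods=100 --extra-config=apiserver.v=4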
Hope this helped.
Additional community resources:
Memory usage in kubernetes cluster

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS to be in a CrashLoopBackoff which turned out to be the case because Ubuntu was configured to use systemd-resolved and so the resolv.conf had a loopback resolver configured. After reading the docs for coredns, I found out that a solution for that would be to change the resolvConf parameter for kubelet - either via commandline arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation I didn't really get a clue, because it seems to be tailored to the case of initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize it again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several infos:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/kubelet/config.yaml that I could change, but AFAIU this would be overridden by the kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success with some combination of changing the aforementioned CM, running kubeadm upgrade something afterwards, and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config - how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in official documentation.
The basic workflow for configuring a Kubelet is as follows:
1. Write a YAML or JSON configuration file containing the Kubelet's configuration.
2. Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
3. Update the Kubelet's corresponding Node object to use this ConfigMap.
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. Remember that the Kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.
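As a sketch of step 3 (the ConfigMap name here is a placeholder), the Node object would reference the ConfigMap via its spec:
spec:
  configSource:
    configMap:
      name: my-kubelet-config
      namespace: kube-system
      kubeletConfigKey: kubelet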

Change proxy mode in kubernetes

I have installed a kubernetes cluster using this tutorial on Ubuntu 16.
Everything works, but I need to change the proxy mode to ipvs, and I don't know how to change the kube-proxy mode using kubectl or something else.
kubectl is more for managing the kubernetes workload. You need to modify the control plane itself. Since you created your cluster with kubeadm, you can use that to enable ipvs. You'd add this to your config file for kubeadm init.
...
kubeProxy:
  config:
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
...
Here's an article from github.com/kubernetes with more detailed instructions. Depending on your kubernetes version, you can pass it as a flag to kubeadm init instead of using the above configuration.
Edit: Here's a link on how to use kubeadm to edit an existing cluster: How to use kubeadm upgrade to change some features in kubeadm-config
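To confirm the mode afterwards, one option (assuming you can reach kube-proxy's metrics port on a node; 10249 is its default) is to query the /proxyMode endpoint:
curl http://localhost:10249/proxyMode
# expected output after the change: ipvs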

Changing the CPU Manager Policy in Kubernetes

I'm trying to change the CPU Manager Policy for a Kubernetes cluster that I manage, as described here; however, I've run into numerous issues while doing so.
The cluster is running in DigitalOcean and here is what I've tried so far.
1. Since the article mentions that --cpu-manager-policy is a kubelet option, I assume that I cannot change it via the API Server and have to change it manually on each node. (Is this assumption correct, BTW?)
2. I ssh into one of the nodes (droplets in DigitalOcean lingo) and run the kubelet --cpu-manager-policy=static command as described in the kubelet CLI reference here. It gives me the message: Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
3. So I check the file pointed at by the --config flag by running ps aux | grep kubelet, and find that it's /etc/kubernetes/kubelet.conf.
4. I edit the file and add a line cpuManagerPolicy: static to it, and also kubeReserved and systemReserved because they become required fields if specifying cpuManagerPolicy.
5. Then I kill the kubelet process and restart it. A couple of other things came up (deleting this file, draining the node, etc.) that I was able to work through, and I was ultimately able to restart the kubelet.
I'm a little lost about the following things
Do I need to do this for all nodes? My cluster has 12 of them, and doing all of these steps for each seems very inefficient.
Is there any way I can set these params globally, i.e. cluster-wide, rather than doing it node by node?
How can I even confirm that what I did actually changed the CPU Manager policy?
One issue with Dynamic Configuration is that if the node fails to restart, the API does not give a reasonable response back that tells you what you did wrong; you'll have to ssh into the node and tail the kubelet logs. Plus, you have to ssh into every node and set the --dynamic-config-dir flag anyway.
The following worked best for me.
SSH into the node. Edit
vim /etc/systemd/system/kubelet.service
Add the following lines
--cpu-manager-policy=static \
--kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
--system-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
We need to set the --kube-reserved and --system-reserved flags because they're prerequisites to setting the --cpu-manager-policy flag.
Then drain the node and delete the following file:
rm -rf /var/lib/kubelet/cpu_manager_state
Restart the kubelet
sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl start kubelet
Uncordon the node and check the policy. This assumes that you're running kubectl proxy on port 8001.
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | grep cpuManager
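For reference, the drain and uncordon steps mentioned above might look like this (the node name is a placeholder):
kubectl drain my-node --ignore-daemonsets
# ...delete the state file and restart the kubelet as above...
kubectl uncordon my-node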
If you use a newer k8s version and the kubelet is configured via a kubelet configuration file, e.g. config.yml, you can follow the same steps mentioned above by #satnam. But instead of adding --kube-reserved, --system-reserved and --cpu-manager-policy flags, you add kubeReserved, systemReserved and cpuManagerPolicy to your config.yml. For example:
systemReserved:
  cpu: "1"
  memory: "100Mi"
kubeReserved:
  cpu: "1"
  memory: "100Mi"
cpuManagerPolicy: "static"
Meanwhile, be sure the CPUManager feature gate is enabled.
It might not be the global way of doing stuff, but I think it will be much more comfortable than what you are currently doing.
First you need to run
kubectl proxy --port=8001 &
Download the configuration:
NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME}
Edit it accordingly, and push the configuration to the control plane. You will see a valid response if everything went well. Then you will have to edit the Node's configuration so it starts to use the new ConfigMap. There are many more possibilities; for example, you can go back to the default settings if anything goes wrong.
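A sketch of the push step (--append-hash appends a content hash to the ConfigMap name, so the generated name will differ from the one given here):
kubectl -n kube-system create configmap my-node-config --append-hash --from-file=kubelet=kubelet_configz_${NODE_NAME}
# then edit the Node so spec.configSource.configMap points at the
# generated ConfigMap (namespace kube-system, kubeletConfigKey "kubelet")
kubectl edit node ${NODE_NAME}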
This process is described with all the details in this documentation section.
Hope this helps.

Kubernetes cluster name change

I'm creating a cluster with kubeadm init --with-stuff (Kubernetes 1.8.4, for reasons). I can set up nodes, weave, etc. But I have a problem setting the cluster name. When I open the admin.conf or a different config file I see:
name: kubernetes
When I run kubectl config get-clusters:
NAME
kubernetes
Which is the default. Is there a way to set the cluster name during init (there is no command line parameter)? Or is there a way to change this after the init? The current name is referenced in many files in /etc/kubernetes/
Best Regards
Kamil
You can now do so using kubeadm's config file. PR here:
https://github.com/kubernetes/kubernetes/pull/60852
Using the kubeadm config, you just set the following at the top level:
clusterName: kubernetes
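For example, a minimal sketch (the apiVersion depends on your kubeadm version):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: my-cluster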
No, you cannot change the name of a running cluster; it is used for discovery inside the cluster, and changing it would require a near-simultaneous update across the whole cluster.
Sadly, you also cannot change the name of the cluster before init. Here is the issue on GitHub.
Update: Since version 1.12, kubeadm allows you to change the cluster name before the "init" stage.
To do it (verified for versions >= 1.15; for lower versions the commands can differ, as they changed at some point between versions 1.12 and 1.15), you need to set the clusterName value in a cluster configuration file like this:
Save the default configuration to a file (the cluster config is optional, so we need to do this step first so as not to write it from scratch) with the kubeadm config print init-defaults > init-config.yaml command.
Set clusterName value in the config.
Run kubeadm init with a config argument: kubeadm init --config init-config.yaml
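Put together, the whole procedure might look like this:
kubeadm config print init-defaults > init-config.yaml
# edit init-config.yaml and set, e.g.: clusterName: my-cluster
kubeadm init --config init-config.yaml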