How to enable a feature gate on the fly in Kubernetes?

I want to enable the feature gate VolumeSubpathEnvExpansion on a Kubernetes 1.13 cluster.
I could not find any information about enabling a feature gate on a running cluster.

You can pass --feature-gates= to the kubelet. Find the kubelet unit file for systemd (or whichever init system you are using), edit the kubelet start line to add VolumeSubpathEnvExpansion=true, and restart the kubelet service. The flag takes a comma-separated list, so you may find that your kubelet already has --feature-gates= passed to it with other values.
https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
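For example, on a kubeadm-provisioned node a systemd drop-in is the usual place for this; a minimal sketch, assuming your 10-kubeadm.conf honours KUBELET_EXTRA_ARGS (the drop-in file name is illustrative):

# /etc/systemd/system/kubelet.service.d/20-feature-gates.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--feature-gates=VolumeSubpathEnvExpansion=true"

Then reload and restart:

systemctl daemon-reload
systemctl restart kubelet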

Modify Kubernetes cluster criSocket setting

I have a Kubernetes lab environment for studying an online course.
I missed a step in the installation instructions and didn't change the criSocket setting.
How can I change this setting and keep the rest of the cluster configuration?
I don't want to regenerate the default cluster config, as I did when I installed Kubernetes:
kubeadm config print init-defaults | tee ClusterConfiguration.yaml
The cluster contains 1 control plane node and 3 worker nodes.
cri-socket is a setting for the kubelet.
If you have already done the CRI-specific setup for the runtime you want to use, I guess you can switch to the other CRI by editing /var/lib/kubelet/kubeadm-flags.env.
After stopping the kubelet, add or modify --container-runtime-endpoint=... in that file and restart the kubelet. The kubelet will then use the CRI endpoint specified there.
This article may help you: https://dev.to/stack-labs/how-to-switch-container-runtime-in-a-kubernetes-cluster-1628
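For example, to point the kubelet at containerd (the socket path below is the common containerd default; verify it on your nodes):

systemctl stop kubelet
# In /var/lib/kubelet/kubeadm-flags.env, extend the existing KUBELET_KUBEADM_ARGS line, e.g.:
# KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
systemctl start kubelet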

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, which briefly explains how to reserve compute resources on a node using the kubelet flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube on macOS and, as far as I can tell, Minikube is driven through the kubectl command alongside the minikube command.
For local learning purposes on Minikube I don't need this set (maybe it can't even be done on Minikube), but:
How could this be done on a node in, say, a Kubernetes development environment?
This can be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (see the sketch after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
To get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, please consider reconfiguring the current cluster using the steps "Generate the configuration file" and "Push the configuration file to the control plane" as described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for Minikube please take a look at the note below:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
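As referenced in item 1, here is a minimal sketch of such a kubeadm config file with the reservations from the question (the apiVersion values depend on your kubeadm version, and the reservation amounts are purely illustrative):

apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# reserve resources for Kubernetes system daemons and OS daemons
kubeReserved:
  cpu: 100m
  memory: 256Mi
systemReserved:
  cpu: 100m
  memory: 256Mi
# hard eviction threshold
evictionHard:
  memory.available: 100Mi

Apply it at initialization with:
kubeadm init --config ClusterConfiguration.yaml

And for item 3, the same settings should be passable to Minikube's kubelet via --extra-config (I haven't verified this exact key):
minikube start --extra-config=kubelet.kube-reserved=cpu=100m,memory=256Mi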
Hope this helps.
Additional community resources:
Memory usage in kubernetes cluster

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS to be in a CrashLoopBackoff which turned out to be the case because Ubuntu was configured to use systemd-resolved and so the resolv.conf had a loopback resolver configured. After reading the docs for coredns, I found out that a solution for that would be to change the resolvConf parameter for kubelet - either via commandline arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation I didn't really get a clue, because it seems to be tailored to the case of initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize it again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several infos:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/kubelet/config.yaml that I could change, but AFAIU this would be overridden by the kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success with some combination of changing the aforementioned CM, running kubeadm upgrade something afterwards and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config - how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in the official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. Keep in mind that the Kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.
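A minimal sketch of those three steps, using the resolvConf fix from the question (the ConfigMap and node names are illustrative):

# kubelet.yaml -- the configuration to push
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /run/systemd/resolve/resolv.conf

# wrap it in a ConfigMap and point the Node object at it
kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet.yaml
kubectl patch node <node-name> -p '{"spec":{"configSource":{"configMap":{"name":"my-kubelet-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'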

Do initializers (initializerConfiguration) work on k8s 1.10?

I tried (unsuccessfully) to set up an initializer admission controller on k8s 1.10, running in minikube. kubectl does not show 'initializerconfiguration' as a valid object type, and attempting kubectl create -f init.yaml with a file containing an InitializerConfiguration object (similar to the example found here: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly) returns this:
no matches for kind "InitializerConfiguration" in version "admissionregistration.k8s.io/v1alpha1"
(I tried with /v1beta1 as well, because kubectl api-versions doesn't show admissionregistration.k8s.io/v1alpha1 but does have .../v1beta1; no luck with that, either).
"Initializers" is enabled in the --admission-control option for kube-apiserver and all possible APIs are also turned on by default in minikube - so it should have worked, according to the k8s documentation.
According to the document mentioned in question:
Enable initializers alpha feature
Initializers is an alpha feature, so it is disabled by default. To turn it on, you need to:
Include “Initializers” in the --enable-admission-plugins flag when starting kube-apiserver. If you have multiple kube-apiserver replicas, all should have the same flag setting.
Enable the dynamic admission controller registration API by adding admissionregistration.k8s.io/v1alpha1 to the --runtime-config flag passed to kube-apiserver, e.g. --runtime-config=admissionregistration.k8s.io/v1alpha1. Again, all replicas should have the same flag setting.
NOTE: For those looking to use this on minikube, use this to pass runtime-config to the apiserver:
minikube start --vm-driver=none --extra-config=apiserver.runtime-config=admissionregistration.k8s.io/v1alpha1=true
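With both flags in place, an InitializerConfiguration like the one from the linked docs should then apply; a minimal sketch (the initializer name below is the illustrative one from the docs example):

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-config
initializers:
  # initializer names must be fully qualified, e.g. a domain you control
  - name: podimage.example.com
    rules:
      # applies to pods in all API groups and versions
      - apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["pods"]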

Kubernetes change kubelet config at all cluster

I need to add the argument --authentication-token-webhook to the kubelet. I can change the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on every node by hand, step by step, but that is not much fun. How can I change kubelet arguments from a single point?
You can either
configure your Kubernetes workers via tools like Puppet or Ansible. Write your service drop-in once and deploy it via the tool to all nodes. Make sure you don't restart all kubelets at once (keyword: serial for Ansible). Also, don't change 10-kubeadm.conf; drop in another file like 20-kubeadm-extra-args.conf and set the environment variable KUBELET_EXTRA_ARGS (see the sketch below).
or use a Kubernetes feature called DynamicKubeletConfig. Beware that this is an alpha feature (as of Kubernetes 1.10) and has to be enabled by hand. I wouldn't recommend this method (yet, as long as it's an alpha feature), but it might become a feasible option in the future.
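A minimal sketch of that drop-in plus an Ansible play to roll it out one node at a time (the host group and file layout are illustrative):

# files/20-kubeadm-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--authentication-token-webhook=true"

# playbook.yaml
- hosts: kube_workers
  serial: 1        # restart one kubelet at a time
  become: true
  tasks:
    - name: Install kubelet drop-in
      copy:
        src: files/20-kubeadm-extra-args.conf
        dest: /etc/systemd/system/kubelet.service.d/20-kubeadm-extra-args.conf
    - name: Restart kubelet with reloaded units
      systemd:
        name: kubelet
        state: restarted
        daemon_reload: true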