I'm creating a cluster with kubeadm init --with-stuff (Kubernetes 1.8.4, for reasons). I can set up nodes, Weave, etc. But I have a problem setting the cluster name. When I open admin.conf or a different config file I see:
name: kubernetes
When I run kubectl config get-clusters:
NAME
kubernetes
Which is the default. Is there a way to set the cluster name during init (there is no command-line parameter for it)? Or is there a way to change it after init? The current name is referenced in many files in /etc/kubernetes/.
Best Regards
Kamil
You can now do so using kubeadm's config file. PR here:
https://github.com/kubernetes/kubernetes/pull/60852
Using the kubeadm config file, you just set the following at the top level:
clusterName: kubernetes
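For example, a minimal config file could look like this (the apiVersion depends on your kubeadm version, e.g. kubeadm.k8s.io/v1beta1 for 1.13+, and my-cluster is a placeholder name):

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: my-cluster

and then pass it to init with kubeadm init --config <file>.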
No, you cannot change the name of a running cluster; it is used for discovery inside the cluster, and changing it would require near-simultaneous updates across the whole cluster.
Sadly, you also cannot change the name of the cluster before init. Here is the issue on GitHub.
Update: as of version 1.12, kubeadm allows you to set the cluster name before the "init" stage.
To do it (verified for versions >= 1.15; for lower versions the commands may differ, as they changed somewhere between 1.12 and 1.15), you need to set the clusterName value in a cluster configuration file, like this:
Save the default configuration to a file (the cluster config is optional, so we do this step first so we don't have to write it from scratch) with the kubeadm config print init-defaults > init-config.yaml command.
Set the clusterName value in the config.
Run kubeadm init with a config argument: kubeadm init --config init-config.yaml
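Put together, the three steps look roughly like this (my-cluster is a hypothetical name):

kubeadm config print init-defaults > init-config.yaml
# edit init-config.yaml and set, in its ClusterConfiguration document:
#   clusterName: my-cluster
kubeadm init --config init-config.yaml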
Next.js version 12.1.5
Kubernetes version 1.20
Until now I've copied a .env file into the Next.js image and managed the variables that way, but now a new requirement came in: hold the env variables in Kubernetes Secrets.
After adding the needed change, that is, creating the Secret and adding secretKeyRef entries to the env section of my deployment file, logging process.env within the next.config.js file gives me undefined.
The strange thing is that the variables are present when using ssh to enter the Kubernetes pod and running printenv.
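For reference, the env section of my deployment looks roughly like this (my-secret and API_KEY are placeholder names):

env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: API_KEY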
I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, which briefly explains how to reserve compute resources on a node using the kubelet command and the flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube on macOS and, as far as I can tell, Minikube is set up to be driven with the kubectl command alongside the minikube command.
For local learning purposes on Minikube I don't need to set this (maybe it can't even be done on Minikube), but how could this be done, say, in a Kubernetes development environment on a real node?
This can be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (a sketch of such a config file follows the note below).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
In order to get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, consider reconfiguring it using the steps "Generate the configuration file" and "Push the configuration file to the control plane" as described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for Minikube please take a look at this:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
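As mentioned in point 1 above, here is a sketch of a kubeadm config file that reserves resources via a KubeletConfiguration document (the apiVersion matches kubeadm >= 1.15, and all resource values are purely illustrative):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Reserve resources for Kubernetes system daemons (--kube-reserved)
kubeReserved:
  cpu: 500m
  memory: 512Mi
# Reserve resources for OS system daemons (--system-reserved)
systemReserved:
  cpu: 500m
  memory: 512Mi
# Hard eviction threshold (--eviction-hard)
evictionHard:
  memory.available: 200Mi

You would then initialize the cluster with kubeadm init --config <file>.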
Hope this helps.
Additional community resources:
Memory usage in kubernetes cluster
I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS in a CrashLoopBackOff, which turned out to be because Ubuntu was configured to use systemd-resolved, so resolv.conf had a loopback resolver configured. After reading the CoreDNS docs, I found that a solution would be to change the resolvConf parameter for the kubelet - either via command-line arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation I didn't really get a clue, because it seems tailored to initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper, I found several things:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/kubelet/config.yaml that I could change, but AFAIU this would be overwritten by the kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success with some combination of changing the aforementioned CM, running kubeadm upgrade something afterwards, and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config - how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation; I just didn't find the right clues myself.
TIA
What you are looking for is well described in the official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. Remember that the kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.
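A rough sketch of those steps with kubectl (my-kubelet-config, my-node and kubelet-config.yaml are placeholder names):

# Wrap the kubelet configuration file in a ConfigMap on the control plane
kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet-config.yaml
# Point the Node object at that ConfigMap so the kubelet picks it up dynamically
kubectl patch node my-node -p '{"spec":{"configSource":{"configMap":{"name":"my-kubelet-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'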
I have installed a Kubernetes cluster using this tutorial on Ubuntu 16.
Everything works, but I need to change the proxy mode to ipvs, and I don't know how to change the kube-proxy mode using kubectl or something else.
kubectl is more for managing the Kubernetes workload. You need to modify the control plane itself. Since you created your cluster with kubeadm, you can use that to enable ipvs. You'd add this to your config file for kubeadm init:
...
kubeProxy:
  config:
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
...
Here's an article from github.com/kubernetes with more detailed instructions. Depending on your Kubernetes version, you can pass it as a flag to kubeadm init instead of using the above configuration.
Edit: Here's a link on how to use kubeadm to edit an existing cluster: How to use kubeadm upgrade to change some features in kubeadm-config
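Once the change is in place, one way to verify which mode kube-proxy is actually running in is to look at its ConfigMap or its logs (k8s-app=kube-proxy is the label kubeadm puts on the kube-proxy pods):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier

The second command should print something like "Using ipvs Proxier" if ipvs is active.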
My Docker for Windows ~/.kube/config file was replaced when setting up access to a cloud-based K8s cluster.
Is there a way to re-create it without having to restart Docker for Windows Kubernetes?
Update
My current ~/.kube/config file now points to a GKE cluster. I don't want to reset Docker for Windows Kubernetes and clobber it. Instead I want to create a separate kubeconfig file for Docker for Windows, i.e. place it in some other location than ~/.kube/config.
You probably want to back up your ~/.kube/config for GKE and then disable/reenable Kubernetes on Docker for Windows. Pull up a Windows command prompt:
copy <where-your-.kube-is>\config <where-your-.kube-is>\config.bak
Then follow this. In essence, uncheck the box, wait for a few minutes and check it again.
You can re-create it without disabling/re-enabling Kubernetes on Docker, but you will have to know exactly where your API server is and what your credentials (certificates, etc.) are:
kubectl config set-context ...
kubectl config use-context ...
What's odd is that you are specifying ~/.kube/config, where the ~ (tilde) is a Unix/Linux thing; maybe what you mean is $HOME.
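If you go the set-context route, the full sequence looks roughly like this (the names, paths and server address are placeholders; Docker Desktop's API server is commonly reachable at https://localhost:6443):

kubectl config set-cluster docker-local --server=https://localhost:6443 --certificate-authority=C:\path\to\ca.crt
kubectl config set-credentials docker-user --client-certificate=C:\path\to\client.crt --client-key=C:\path\to\client.key
kubectl config set-context docker-local --cluster=docker-local --user=docker-user
kubectl config use-context docker-local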
I just want to add to this, in case you are using WSL as your kubectl/docker client, as I am.
You can find your local Kubernetes config in C:\Users\username\.kube\config.
You can then use that to create a new Kubernetes context for Docker.
For instance:
cp /mnt/c/Users/username/.kube/config ~/.kube/docker-k8s.config
docker context create local-k8s --default-stack-orchestrator=kubernetes --kubernetes config-file=/home/username/.kube/docker-k8s.config --docker host=tcp://localhost:2375
Note: I have exposed the Docker engine on port 2375. The default settings for the Unix-socket type of connection can be found at the link above. You need to use the absolute path to the kubeconfig; you can't use '~'.
Then you can use docker context use <context name> to switch between your local docker-desktop kubernetes cluster and an external cloud env cluster with your docker client.
docker context ls will show the local existing contexts.
You basically want to access multiple clusters. One option is to play around with the KUBECONFIG environment variable. Here is the documentation.
The KUBECONFIG environment variable is a list of paths to configuration files. The list is colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have a KUBECONFIG environment variable, familiarize yourself with the configuration files in the list.
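For example, on Linux/Mac (the second path is a placeholder for your Docker-specific kubeconfig):

export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/docker-k8s.config
kubectl config get-contexts

On Windows the same idea applies with set KUBECONFIG=... and a semicolon between the paths.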
Or, you can pass a --kubeconfig option inline:
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
And then use use-context.
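For example, to switch to the dev-frontend context defined above:

kubectl config --kubeconfig=config-demo use-context dev-frontend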