kubeadm init: recommended value for clusterDNS IP - kubernetes

When using the following settings in the kubeadm config file:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- fd10::4:5
I see the following warning when initializing my cluster:
[root@k8s-ansible-2 ansible]# kubeadm init --config /home/ansible/kubeadm-config-new.yaml
W0414 05:52:56.598882 1454 defaults.go:186] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [fd10::4:a]; the provided value is: [fd10::4:5]
and kubeadm actually does configure the recommended value:
[ansible@k8s-ansible-2 ~]$ kubectl get service kube-dns -n kube-system -o yaml | grep clusterIP
clusterIP: fd10::4:a
The Kubelet systemd config file is the following:
[root@k8s-ansible-3 ~]# more /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
ExecStartPre=/bin/sleep 15
In my environment, I need to control which ClusterIP is assigned to which service, so my questions are:
Is this a bug or a known limitation? I was not able to find anything related to this specific behavior.
It seems kubeadm assigns the 10th address of the serviceCIDR to the coredns service. Could someone confirm that this is the default implementation and not a random address from the serviceCIDR pool?
Thanks for your support.
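As far as I can tell, this is kubeadm's default behavior rather than a bug: kubeadm reserves the 10th address of the service CIDR for the cluster DNS service and warns if clusterDNS in the KubeletConfiguration does not match it. A minimal sketch of a consistent config, assuming a hypothetical serviceSubnet of fd10::4:0/112 (adjust the kubeadm apiVersion and the subnet to your environment):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: fd10::4:0/112   # hypothetical service CIDR; its 10th address is fd10::4:a
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- fd10::4:a   # matches the 10th address of the serviceSubnet above
With a layout like this the warning should go away, because the provided clusterDNS equals the value kubeadm derives itself.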

Related

Export current Kubernetes cluster config during upgrade to 1.21

I upgraded a self-hosted lab environment from Kubernetes 1.20.1 to 1.21.14.
I ran the command:
sudo kubeadm upgrade plan v1.21.14
Then, I had to provide current Kubernetes cluster config from ConfigMap or file.
I'm trying to figure out:
Is it possible to get the Kubernetes cluster config yaml file, in case I don't have the file I used to initialize the cluster?
It also turned out that it didn't exist in the ConfigMap.
The above command output was:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade/config] In order to upgrade, a ConfigMap called "kubeadm-config" in the kube-system namespace must exist.
[upgrade/config] Without this information, 'kubeadm upgrade' won't know how to configure your upgraded cluster.
[upgrade/config] Next steps:
- OPTION 1: Run 'kubeadm config upload from-flags' and specify the same CLI arguments you passed to 'kubeadm init' when you created your control-plane.
- OPTION 2: Run 'kubeadm config upload from-file' and specify the same config file you passed to 'kubeadm init' when you created your control-plane.
- OPTION 3: Pass a config file to 'kubeadm upgrade' using the --config flag.
[upgrade/config] FATAL: the ConfigMap "kubeadm-config" in the kube-system namespace used for getting configuration information was not found
To see the stack trace of this error execute with --v=5 or higher
I tried:
kubeadm config view
The output:
Command "view" is deprecated, This command is deprecated and will be removed in a future release, please use 'kubectl get cm -o yaml -n kube-system kubeadm-config' to get the kubeadm config directly.
configmaps "kubeadm-config" not found
I ran:
kubectl -n kube-system get cm kubeadm-config -o yaml
The output was:
Error from server (NotFound): configmaps "kubeadm-config" not found
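If neither the original file nor the kubeadm-config ConfigMap exists, one possible workaround (a sketch, not an official recovery path) is to generate a default config, hand-edit it to match the values the cluster was actually created with, and pass it to the upgrade via --config (OPTION 3 from the output above):
# Print a default ClusterConfiguration as a starting point.
kubeadm config print init-defaults > kubeadm-config.yaml
# Edit kubeadm-config.yaml so that kubernetesVersion, networking.podSubnet,
# networking.serviceSubnet, etc. match the existing cluster, then:
sudo kubeadm upgrade plan v1.21.14 --config kubeadm-config.yaml
Getting those values wrong can misconfigure the upgrade, so double-check them against the running control-plane manifests in /etc/kubernetes/manifests before proceeding.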

Where to find the Kubernetes Scheduler Configuration file on the local system

I am currently working in a Minikube cluster and looking to change some flags of the Kubernetes scheduler configuration, but I can't find it. The file looks something like:
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider
...
disablePreemption: true
What is its name and where can I find it?
Posting this answer as a community wiki to set a baseline and to provide additional resources/references rather than giving a definitive solution.
Feel free to edit and expand.
I haven't found the file that you are referencing (KubeSchedulerConfiguration) in minikube.
The minikube provisioning process neither creates it nor references it in the configuration files (/etc/kubernetes/manifests/kube-scheduler.yaml and the --config=PATH parameter).
I'd reckon you could take a look at other Kubernetes solutions where you can configure how your cluster is created (how kube-scheduler is configured). Some of the options are:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster and also:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Control plane flags
Github.com: Kubernetes sigs: Kubespray
A side note!
Both kubespray and minikube use kubeadm as a bootstrapper!
I would also consider creating an additional scheduler that would be responsible for scheduling your workload (by referencing it in the YAML manifests, as sketched below):
Kubernetes.io: Docs: Tasks: Extend Kubernetes: Configure multiple schedulers
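For illustration, a workload opts into an additional scheduler through spec.schedulerName. A minimal sketch, assuming a hypothetical scheduler deployed under the name my-custom-scheduler:
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom-scheduler
spec:
  schedulerName: my-custom-scheduler   # must match the name the extra scheduler registers with
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
If schedulerName does not match any running scheduler, the Pod simply stays Pending, which makes it easy to verify the wiring.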
I haven't tested it extensively or over the long term, but I've managed to make the kube-scheduler load the YAML manifest that you are referencing.
Disclaimers!
Please consider the example below as a workaround!
The method described below is not persistent.
Steps:
Start your minikube instance with the --extra-config
Connect to your minikube instance and edit/add files:
/etc/kubernetes/manifests/kube-scheduler.yaml
newly created KubeSchedulerConfiguration
Delete the failing kube-scheduler Pod and wait for it to be recreated.
Start your minikube instance with the --extra-config
As previously said, you can add additional parameters to $ minikube start that are passed down to the provisioning process.
In this setup you can either pass it with $ minikube start ... or do it manually later on.
$ minikube start --extra-config=scheduler.config="/etc/kubernetes/sched.yaml"
The above parameter adds - --config=/etc/kubernetes/sched.yaml to the command of your kube-scheduler, which will then look for the file in that location.
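In other words, after provisioning, the command section of /etc/kubernetes/manifests/kube-scheduler.yaml on the minikube node should look roughly like this (most flags elided; the exact set depends on the Kubernetes version, and the last line is the one added by --extra-config):
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --config=/etc/kubernetes/sched.yaml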
Connect to your minikube instance ($ minikube ssh) and edit/add files:
Your kube-scheduler will fail because you've passed a --config argument that points to a file which does not exist yet. To work around this you will need to:
add: /etc/kubernetes/sched.yaml with your desired configuration (a minimal sketch is shown after this list)
modify: /etc/kubernetes/manifests/kube-scheduler.yaml:
add to: volumeMounts:
  - mountPath: /etc/kubernetes/sched.yaml
    name: scheduler
    readOnly: true
add to: volumes:
  - hostPath:
      path: /etc/kubernetes/sched.yaml   # same file you created in the previous step
      type: FileOrCreate
    name: scheduler
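For completeness, a minimal sketch of what /etc/kubernetes/sched.yaml could contain, based on the v1alpha1 example from the question (field names vary between API versions, so adjust to the version your cluster serves):
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # kubeconfig the static kube-scheduler Pod already mounts
algorithmSource:
  provider: DefaultProvider
disablePreemption: true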
Delete the failing kube-scheduler Pod and wait for it to be recreated.
You will need to redeploy the modified scheduler to get its new config running:
$ kubectl delete pod -n kube-system kube-scheduler-minikube
After some time you should see your kube-scheduler in Ready state.
Additional resources:
Kubernetes.io: Docs: Concepts: Scheduling eviction: Kube-scheduler
Kubernetes.io: Docs: Reference: Command line tools reference: Kube-scheduler

Where can I find kubeadm-config.yaml on my Kubernetes cluster

There is a single-master k8s node that I need to back up and restore.
I googled this topic and found a solution:
https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/
Everything looked easy, so I followed the instructions and got a copy of the certificates and a snapshot of the etcd database.
But in the end, I was not able to find kubeadm-config.yaml on my master server.
Where to find this file?
During kubeadm init, kubeadm uploads the ClusterConfiguration object to your cluster in a ConfigMap called kubeadm-config in the kube-system namespace. You can get it from that ConfigMap and take a backup:
kubectl get cm kubeadm-config -n kube-system -o yaml > kubeadm-config.yaml
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/
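If you only need the ClusterConfiguration document itself (the YAML you could feed back to kubeadm) rather than the whole ConfigMap wrapper, a sketch of extracting just that key (named ClusterConfiguration in recent kubeadm versions):
kubectl get cm kubeadm-config -n kube-system -o jsonpath='{.data.ClusterConfiguration}' > kubeadm-config.yaml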

How to add flag to kubelet

I want to deploy rook for Kubernetes. I use 1 master and 3 workers, and the hosts run Ubuntu on bare metal, but the container is stuck in the creating state. After a lot of searching, I understand I should use this document https://github.com/rook/rook/blob/master/Documentation/flexvolume.md#most-common-readwrite-flexvolume-path which says:
Configuring the Rook operator
You must provide the above found FlexVolume path when deploying the rook-operator by setting the environment variable FLEXVOLUME_DIR_PATH. For example:
env:
[...]
- name: FLEXVOLUME_DIR_PATH
  value: "/var/lib/kubelet/volumeplugins"
(In the operator.yaml manifest replace with the path, or if you use helm set the agent.flexVolumeDirPath to the FlexVolume path.)
Configuring the Kubernetes kubelet
You need to add the flexvolume flag with the path to the kubelet on all nodes in the Kubernetes cluster:
--volume-plugin-dir=PATH_TO_FLEXVOLUME (where PATH_TO_FLEXVOLUME is the above found FlexVolume path)
The question is: how can I add the flexvolume flag with the path to the kubelet on all nodes?
@yasin lachini,
If you deploy a Kubernetes cluster on bare metal, you don't need to configure anything. That is because /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ is the kubelet's default FlexVolume path, and Rook assumes the default FlexVolume path if it is not set differently.
My env:
rook-ceph/operator.yml (use default FLEXVOLUME_DIR_PATH) :
...
# Set the path where the Rook agent can find the flex volumes
# - name: FLEXVOLUME_DIR_PATH
#   value: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
...
After deploying, on the node:
# ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
ceph.rook.io~rook ceph.rook.io~rook-ceph-system rook.io~rook rook.io~rook-ceph-system
There are two options.
I. Set
KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volumeplugins
within the file
/etc/default/kubelet
And restart the kubelet service:
sudo systemctl restart kubelet
II. You can set kubelet parameters via a config file.
For example:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
volumePluginDir: /var/lib/kubelet/volumeplugins   # equivalent of the --volume-plugin-dir flag
Then, you just start the Kubelet with the --config flag set to the path of the Kubelet’s config file
sudo kubelet --config=/etc/default/kubelet/custom-conf.config
https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
If you initialized your cluster with kubeadm, you can add flags in this file: /var/lib/kubelet/kubeadm-flags.env, and then restart the kubelet with sudo systemctl restart kubelet
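For illustration, kubeadm-flags.env holds a single KUBELET_KUBEADM_ARGS assignment that the kubelet systemd unit sources, so the flag is simply appended inside the quotes. A sketch (the other flags are only examples of what kubeadm may have written there; keep whatever your file already contains):
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --volume-plugin-dir=/var/lib/kubelet/volumeplugins"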

How to change name of a kubernetes node

I have a running node in a kubernetes cluster. Is there a way I can change its name?
I have tried to
delete the node using kubectl delete
change the name in the node's manifest
add the node back.
But the node won't start.
Anyone know how it should be done?
Thanks
Usually it's the kubelet that is responsible for registering the node under a particular name, so you should make changes to your node's kubelet configuration, and then it should pop up as a new node.
Changing the node's name is not possible at the moment, it requires you to remove and rejoin the node.
You need to make sure the hostname is changed to the new name, remove the node, reset it and rejoin it.
(You will notice that with the command kubectl edit node, you will get an error if you try to change and save the name:
A copy of your changes has been stored to "/tmp/kubectl-edit-qlh54.yaml"
error: At least one of apiVersion, kind and name was changed
)
Ideally you have already removed the running pods from it.
You can try to run kubectl drain <node_name_to_rename>, as sketched below. Proceed at your own risk if that doesn't complete. --ignore-daemonsets can be used to skip DaemonSet-managed pods, which cannot be evicted.
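A sketch of the drain, assuming current kubectl flag names (older releases call the second flag --delete-local-data):
kubectl drain <node_name_to_rename> --ignore-daemonsets
# add --delete-emptydir-data if pods on the node use emptyDir volumes and you accept losing that data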
In short, for a node that has been renamed and is out of the cluster on CentOS 7:
kubectl delete node <original-nodename>
Then on the node that you want to rejoin, as root:
kubeadm reset
Check the output and see if it applies to your setup (for potential further cleanup).
Now generate the join command on the master node:
export KUBECONFIG=/etc/kubernetes/admin.conf #(or wherever you have it)
kubeadm token create --print-join-command
Run the output on the worker node you have just reset:
kubeadm join <masternode_ip_address>:6443 --token somegeneratedtoken --discovery-token-ca-cert-hash sha256:somesha256hashthatyougotfromtheabovecommand
If you run kubectl get nodes, it should now show up with the new name.
output in my case:
W0220 10:43:23.286109 11473 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Enjoy your renamed node!
Based on source: https://www.youtube.com/watch?v=TqoA9HwFLVU