kubeadm init --config issue - kubernetes

I'm trying to init a kubernetes cluster using kubeadm. I followed the instructions found here for the stacked control plane/etcd nodes. For the container runtime installation, it is recommended that the runtime (in my case, containerd) and kubelet use the same cgroup driver (in my case, systemd).
Still following the instructions, I added
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
to /etc/containerd/config.toml and then restarted containerd using sudo systemctl restart containerd
So far so good, everything works as expected. Then I get to the part where I have to configure the cgroup driver used by the kubelet. It says I have to call kubeadm init with a config file, something like sudo kubeadm init --config <some-file>.yaml. Here is my problem: I found very little information about what that config file should look like. The documentation says kubeadm config print init-defaults should print the default config, but the default config has nothing to do with my setup (i.e., it uses Docker instead of containerd). I found an example of a file here, but so far I've not managed to adapt it to my setup and I've found very little documentation on how to do so. There has to be a simpler way than rewriting an entire config file just for one attribute change, right? How come I can't print the literal config file used by kubeadm init?

My solution was to restart containerd with default config (no systemd cgroup), then run kubeadm init as I would normally. Once the cluster was started, I printed the config to a file with: kubeadm config view. I then modified that file to add the required parameters to set systemd cgroup. Finally, I configured containerd to use systemd and ran kubeadm init with the newly created config file. Everything worked.
Warning: The command kubeadm config view says that the "view" command is deprecated and to use kubectl get cm -o yaml -n kube-system kubeadm-config instead, but the output of that command is not a valid kubeadm config file, so I don't recommend doing that.
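For reference, here is a minimal sketch of what such a config file can look like when the only goal is to set the kubelet cgroup driver to systemd (the API versions and the kubernetesVersion value are illustrative; adjust them to your release):

kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

Save it as e.g. kubeadm-config.yaml and pass it with sudo kubeadm init --config kubeadm-config.yaml.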

Related

Why does crictl pull from a private registry not need an account/password?

I initialized the latest Kubernetes v1.25.2 with kubeadm, using containerd as the runtime.
Then I configured /etc/containerd/certs.d/my_registry:5000/hosts.toml in order to pull images from the private registry.
Command like this:
$ crictl pull my_registry:5000/hello-world:latest
The pull succeeds! But my registry requires an account/password when using 'docker pull'.
Why does this happen?
crictl only talks to your container runtime; in your case it is containerd that actually performs the pull. That means that if containerd is already configured to authenticate, it works out of the box with crictl.
How authentication for containerd works is outlined here, and you can check which runtime crictl is actually using with the following command:
cat /etc/crictl.yaml
If that file does not exist, you will use the defaults, which are deprecated.
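For illustration, a typical /etc/crictl.yaml pointing crictl at containerd looks roughly like this (the socket path may differ on your system):

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false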

Kubernetes: do we need to set http_proxy and no_proxy in the apiserver manifest?

My cluster is behind a corporate proxy, and I have manually set http_proxy=myproxy, https_proxy=myproxy and no_proxy=10.96.0.0/16,10.244.0.0/16,<nodes-ip-range> in the three kubernetes core manifests (kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml). Now, I want to upgrade kubernetes with kubeadm. But I know kubeadm will regenerate these manifests from the kubeadm-config configmap when upgrading, so without these environment variables. I can't find an extraEnvs key in kubeadm-config configmap (like extraArgs and extraVolumes).
Do I really need to set these variables in all kubernetes manifests? If not, I think kubeadm will throw a warning because all communications will use the proxy (and I don't want that).
How can I pass these variables to kubeadm when upgrading ?
There is no such flag available for kubeadm at the moment. You may want to open a GitHub feature request for it.
You can use the approach described here or here and export the variables:
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
Then use sudo -E bash so the current environment is preserved:
$ sudo -E bash -c "kubeadm init... "
An alternative would be to set those variables inline in the command, as shown here:
NO_PROXY=master-ip,node-ip,127.0.0.1 HTTPS_PROXY=http://proxy-ip:port/ sudo kubeadm init --pod-network-cidr=192.168.0.0/16...
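The same pattern should also work for the upgrade itself, for example (v1.x.y is a placeholder for your target version):
$ sudo -E bash -c "kubeadm upgrade apply v1.x.y"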

error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

I have a kubeconfig inside my .kube/ folder but I am still facing this issue.
I have also tried
kubectl config set-context ~/.kube/kubeconfig.yml
kubectl config use-context ~/.kube/kubeconfig.yml
No luck! Still the same.
Answering my own question
Initially, I lost a lot of time on this error, but later I found that my kubeconfig did not have the correct context.
I tried the same steps:
kubectl config set-context ~/.kube/kubeconfig1.yml
kubectl config use-context ~/.kube/kubeconfig1.yml
or add a KUBECONFIG line to your ~/.bashrc file:
export KUBECONFIG=~/.kube/<kubeconfig_env>.yml
Also, if you want to add multiple kubeconfig files, add them to your ~/.bashrc like this:
export KUBECONFIG=~/.kube/kubeconfig_dev.yml:~/.kube/kubeconfig_dev1.yml:~/.kube/kubeconfig_dev2.yml
with different Kube contexts, and it worked well.
Ideally, the error message would be simpler and point directly to the kubecontext problem, but after some digging it worked.
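If you run into the same error, it can also help to check which contexts your kubeconfig actually defines and which one is active, for example:
$ kubectl config get-contexts
$ kubectl config current-context
$ kubectl config use-context <context-name>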
If you are running k3s, you can put the following line in your ~/.zshrc / ~/.bashrc:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
By default, kubectl looks for a file named config in the $HOME/.kube directory. So you have to rename your config file or set its location properly.
Look here for a more detailed explanation.
I had the same problem because I used the Linux subsystem on Windows.
If you use the kubectl command in the subsystem, you must copy your config file to the Linux part of the filesystem.
For example, if your username on Windows is jammy, and your username is also jammy in the Linux subsystem, you must copy the config file
from:
/mnt/c/Users/jammy/.kube/config
to:
/home/jammy/.kube/
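Using the paths from above, the copy can be done with something like:
$ cp /mnt/c/Users/jammy/.kube/config /home/jammy/.kube/config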

Kubelet config yaml is missing after restarting the worker node docker service

When I restart the docker service on the worker node, the kubelet logs on the master node report a "no such file" error.
# on the worker node
# systemctl restart docker.service
# on the master node
# journalctl -u kubelet
# failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Arghya is right but I would like to add some info you should be aware of:
You can execute kubeadm init phase kubelet-start to only invoke a particular step that will write the kubelet configuration file and environment file and then start the kubelet.
After performing a restart there is a chance that swap will be re-enabled. Make sure to run swapoff -a in order to turn it off.
If you encounter any token validation problems, simply run kubeadm token create --print-join-command and then do the join process with the provided info. Remember that tokens expire after 24 hours by default.
If you wish to know more about kubeadm init phase you can find it here and here.
Please let me know if that helped.
You might have done kubeadm reset, which cleans up all files.
Just do kubeadm reset --force to reset the node, then kubeadm init on the master node and kubeadm join on the worker node thereafter.
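As a rough sketch of that sequence (the IP, token and hash are placeholders that kubeadm init and kubeadm token create will print for you):
# on the master node
$ sudo kubeadm reset --force
$ sudo kubeadm init
# on the worker node
$ sudo kubeadm reset --force
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>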

How to pass --pod-manifest-path to the kubelet quickly, without creating a new configuration file?

Running kubelet --pod-manifest-path=/newdir returns errors.
It's not clear to me where I can add the --pod-manifest-path to a systemd file on Ubuntu. I know for v1.12 there is the KubeletConfiguration type but I am using v1.11.
You can find this in the documentation:
Configure your kubelet daemon on the node to use this directory by running it with --pod-manifest-path=/etc/kubelet.d/ argument. On Fedora edit /etc/kubernetes/kubelet to include this line:
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
Instructions for other distributions or Kubernetes installations may vary.
Restart kubelet. On Fedora, this is:
[root@my-node1 ~]$ systemctl restart kubelet
If you want to use --pod-manifest-path you can define it in the kubelet configuration.
Usually it is stored in /etc/kubernetes/kubelet, /etc/default/kubelet, or /etc/systemd/system/kubelet.service.
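For example, on a kubeadm-based Ubuntu node the flag can usually be added through the KUBELET_EXTRA_ARGS variable; a sketch assuming /etc/default/kubelet is the environment file sourced by your kubelet systemd unit:

KUBELET_EXTRA_ARGS="--pod-manifest-path=/etc/kubelet.d/"

Then reload and restart the kubelet:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet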