I want to change the kubelet logs directory location. To achieve this, I modified the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file contents as follows (as mentioned in "how to change kubelet working dir to somewhere else"):
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_EXTRA_ARGS=--root-dir=/D/kubelet-files/ --log-dir=/D/kubelet-logs/"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_EXTRA_ARGS $KUBELET_KUBEADM_ARGS
After this I executed the following commands:
systemctl daemon-reload
systemctl restart kubelet
I even restarted kubeadm, but the logs directory location is still not changed and the kubelet keeps writing to the default /var/lib/kubelet directory. I am using Kubernetes v1.11.2. What might be the issue?
I have tried this on some of my machines on GCloud with v1.11.2 and I noticed the same problem: the --log-dir parameter of the kubelet seems to have no effect.
It is worth opening an issue in the kubelet project.
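Before filing an issue, it may be worth confirming that the flags actually reach the running kubelet. A quick diagnostic sketch using standard systemd and process tools (nothing kubelet-specific is assumed here):
systemctl cat kubelet     # prints the merged unit, including the 10-kubeadm.conf drop-in
ps -ef | grep [k]ubelet   # the live command line should contain --root-dir and --log-dir
If both flags appear on the process command line but logs still land in the default location, the problem is in the kubelet itself rather than in the systemd configuration.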
I'm trying to init a kubernetes cluster using kubeadm. I followed the instructions found here for the stacked control plane/etcd nodes. For the container runtime installation, it is recommended that the runtime (in my case, containerd) and kubelet use the same cgroup driver (in my case, systemd).
Still following the instructions, I added
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
to /etc/containerd/config.toml and then restarted containerd using sudo systemctl restart containerd
So far so good, everything works as expected. Then I get to the part where I have to configure the cgroup driver used by the kubelet. It says I have to call kubeadm init with a config file, something like: sudo kubeadm init --config config.yaml. Here is my problem: I found very little information about what that config file should look like. The documentation says kubeadm config print init-defaults should print the default config, but the default config has nothing to do with my setup (i.e., it uses docker instead of containerd). I found an example of a file here, but so far I've not managed to adapt it to my setup and I've found very little documentation for doing so. There has to be a simpler way than rewriting an entire config file just for one attribute change, right? How come I can't print the literal config file used by kubeadm init?
My solution was to restart containerd with default config (no systemd cgroup), then run kubeadm init as I would normally. Once the cluster was started, I printed the config to a file with: kubeadm config view. I then modified that file to add the required parameters to set systemd cgroup. Finally, I configured containerd to use systemd and ran kubeadm init with the newly created config file. Everything worked.
Warning: The command kubeadm config view says that the "view" command is deprecated and that you should use kubectl get cm -o yaml -n kube-system kubeadm-config instead, but the output of that command is not a valid kubeadm config file, so I don't recommend doing that.
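For reference, a config that overrides only the cgroup driver can be quite short. A minimal sketch, assuming the kubeadm v1beta2 API and the KubeletConfiguration kind (the kubernetesVersion value is a placeholder for your own version):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
Fields you omit keep their defaults, so only the attributes you actually want to change need to appear in the file.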
I'm attempting to deploy a k8s master node using kubeadm from a fork of the Kubernetes repository, branch release-1.19. What configuration is necessary ahead of running kubeadm init {opts...}?
The kubeadm guide recommends installing kubeadm, kubectl and kubelet using apt. The guide states that, following installation, "The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do."
From a local repository I'm compiling the Kubernetes binaries (kubeadm, kubectl and kubelet) using the 'make all' method, then scp'ing them to the master node at /usr/local/bin with exec permissions.
Executing kubeadm init fails since the kubelet is not running/configured. However, initialising the required kubelet.service from the kubelet binary seems to require the certs (ca.pem) and configs (kubelet.config.yaml) that I assumed kubeadm generates. So there is a chicken-and-egg situation between kubeadm and the kubelet.
The question then is, what additional configurations does the apt installation complete for initialising the kubelet.service?
Is there a minimal config & service template kubelet can be started with ahead of kubeadm init?
Does kubeadm replace the certs used by the pre-initialised kubelet?
Any help/direction would be hugely appreciated. Online docs/threads for building from source are sparse.
For anyone searching, I found the solution to this:
Install dependencies through apt: apt-transport-https, conntrack, socat, ipset
Move kubelet, kubeadm, kubectl binaries to /usr/local/bin and give exec perms
Write the systemd kubelet.service file to /etc/systemd/system:
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Write the kubelet drop-in config file to /etc/systemd/system/kubelet.service.d:
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Build cni plugins
https://github.com/containernetworking/plugins
i.e., for Linux, run build_linux.sh
Copy cni plugin binaries to /opt/cni
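As a rough sketch, the build-and-copy step can look like this (it assumes git and a working Go toolchain are installed; note that the kubelet conventionally looks for CNI binaries under /opt/cni/bin):
git clone https://github.com/containernetworking/plugins.git
cd plugins
./build_linux.sh          # places the compiled plugin binaries under ./bin
mkdir -p /opt/cni/bin
cp bin/* /opt/cni/bin/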
Start Kubelet
systemctl daemon-reload
systemctl enable kubelet --now
systemctl start kubelet
Now kubeadm init can run
In short, this initialises the kubelet.service systemd unit prior to kubeadm init, with some default/minimal configs; kubeadm init then modifies those configs on execution.
When I restart the docker service on a worker node, the kubelet logs on the master node report a "no such file" error.
# on the worker node
systemctl restart docker.service
# on the master node
journalctl -u kubelet
# which reports:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Arghya is right but I would like to add some info you should be aware of:
You can execute kubeadm init phase kubelet-start to only invoke a particular step that will write the kubelet configuration file and environment file and then start the kubelet.
After a restart there is a chance that swap gets re-enabled. Make sure to run swapoff -a in order to turn it off.
If you encounter any token validation problems, simply run kubeadm token create --print-join-command and then do the join with the provided command. Remember that tokens expire after 24 hours by default.
If you wish to know more about kubeadm init phase you can find it here and here.
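To make the recovery concrete, the tips above boil down to something like this on the affected node (run as root; the join command itself comes from the token create output):
swapoff -a                                  # make sure swap is off again after the reboot
kubeadm init phase kubelet-start            # rewrites the kubelet config/env files and starts the kubelet
kubeadm token create --print-join-command   # on the master, if a worker needs to rejoin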
Please let me know if that helped.
You might have done kubeadm reset, which cleans up all of these files.
Just do kubeadm reset --force to reset the node, then kubeadm init on the master node and kubeadm join on the worker node thereafter.
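Roughly, the sequence looks like this (the join arguments below are placeholders; use the exact command printed by kubeadm init):
# on the master node
kubeadm reset --force
kubeadm init
# on the worker node
kubeadm join <master-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>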
Running kubelet --pod-manifest-path=/newdir returns errors.
It's not clear to me where I can add --pod-manifest-path to a systemd file on Ubuntu. I know that v1.12 has the KubeletConfiguration type, but I am using v1.11.
You can find this in the documentation:
Configure your kubelet daemon on the node to use this directory by running it with the --pod-manifest-path=/etc/kubelet.d/ argument. On Fedora edit /etc/kubernetes/kubelet to include this line:
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
Instructions for other distributions or Kubernetes installations may vary.
Restart kubelet. On Fedora, this is:
[root@my-node1 ~]$ systemctl restart kubelet
If you want to use --pod-manifest-path you can define it in the kubelet configuration.
Usually it is stored in /etc/kubernetes/kubelet, /etc/default/kubelet, or /etc/systemd/system/kubelet.service.
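On Ubuntu with the kubeadm drop-in quoted elsewhere in this thread, one approach that should work (a sketch, not verified against every package version) is to put the flag into KUBELET_EXTRA_ARGS, which the drop-in sources from /etc/default/kubelet:
# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--pod-manifest-path=/etc/kubelet.d/
Then reload and restart:
systemctl daemon-reload
systemctl restart kubelet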
I was trying to set up a Kubernetes cluster based on the documentation: https://kubernetes.io/docs/tasks/tools/install-kubeadm/
I install kubeadm by running:
yum install -y kubeadm
I was about to update the 10-kubeadm.conf file as mentioned in the doc, but the file looks completely different; it was like this: https://github.com/kubernetes/kubernetes/blob/master/build/rpms/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
It does not have a cgroup driver variable. In this case, how should we proceed with the installation?
First of all, ensure that besides kubeadm you have also installed kubelet and kubectl. If not, install them:
yum install -y kubelet kubectl
Check whether Docker has been started with the systemd cgroup driver:
docker info | grep -i cgroup
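If Docker is using systemd, the output should include a line like this (exact formatting varies between Docker versions):
Cgroup Driver: systemd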
Modify your 10-kubeadm.conf file and add a new line:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Additionally, you have to add the $KUBELET_CGROUP_ARGS variable to the ExecStart line (see the sketch after these steps).
And as a final step, reload the systemd manager configuration and restart the kubelet service as described here:
systemctl daemon-reload && service kubelet restart
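After the edit, the relevant lines of 10-kubeadm.conf would look roughly like this (a sketch based on the file quoted in the question):
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS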
UPDATE
Since version 1.11, kubeadm automatically detects the right cgroup driver, so you can skip the step about setting the cgroup driver.
That is from the changelog:
kubeadm now detects the Docker cgroup driver and starts the kubelet with the matching driver. This eliminates a common error experienced by new users when the Docker cgroup driver is not the same as the one set for the kubelet due to different Linux distributions setting different cgroup drivers for Docker, making it hard to start the kubelet properly. (#64347, @neolit123)