When I try to deploy a Kubernetes cluster on a CentOS 7 server, I get the error below. I tried deploying on a different server and the same error happened, so kindly help me fix this issue.
Adding the Kubernetes repo with the command below on Rocky Linux 8 (like CentOS 8) worked for me!
# adding google kubernetes repository for amd64 (x86_64) architecture
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Run this command to check that the repo works:
yum search kubeadm
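If the search returns the kubeadm package, the repo is working and you can install the usual components (a sketch; package names follow the upstream Kubernetes docs, and this repo file has no exclude= line to work around):
# install and start the Kubernetes packages
sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet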
I have a running k8s cluster created with kops. The autoscaling policy terminated the master machine and recreated a new one, and since then every kubectl command returns "The connection to the server was refused - did you specify the right host or port?". I tried to SSH into the master machine but did not find any of the k8s services, so I think the autoscaling policy did not configure the master node correctly. What should I do in this situation?
Update: I also found this log entry in the syslog file:
E: Package 'ebtables' has no installation candidate
Jun 25 12:03:33 ip-172-20-35-193 nodeup[7160]: I0625 12:03:33.389286 7160 executor.go:145] No progress made, sleeping before retrying 2 failed task(s)
The issue was that kops was unable to install ebtables and conntrack, so I installed them manually:
sudo apt-get -o Acquire::Check-Valid-Until=false update
sudo apt-get install -y ebtables --allow-unauthenticated
sudo apt-get install --yes conntrack
and everything is running fine now.
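If you hit the same problem, a couple of standard sanity checks (output will vary by cluster) confirm that the master recovered:
# kubelet should be active on the recreated master
sudo systemctl status kubelet
# the new master should register and eventually report Ready
kubectl get nodes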
I installed a Ceph cluster with the ceph-deploy tool, and I want to install ceph-mgr-dashboard, which was split out of the ceph-mgr modules. The official Ceph documentation says that if you want to use the dashboard, you must install the ceph-mgr-dashboard package and enable it.
I installed the package with apt install ceph-mgr-dashboard, but it doesn't seem to do anything. The Ceph documentation is frustrating here: there are instructions, but I can't get them to work.
Install the dashboard RPM on all the mgr servers:
yum install ceph-mgr-dashboard.noarch
Generate a self-signed certificate (key point: in my installations, the dashboard does not run without a certificate):
ceph dashboard create-self-signed-cert
Enable the dashboard:
sudo ceph mgr module enable dashboard
Check the status:
sudo ceph mgr services
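If the module came up, ceph mgr services should list the dashboard endpoint, something along these lines (host name illustrative; 8443 is the default SSL port for the dashboard in my installations):
{
    "dashboard": "https://mgr-host:8443/"
}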
I have set up my master node and I am trying to join a worker node as follows:
kubeadm join 192.168.30.1:6443 --token 3czfua.os565d6l3ggpagw7 --discovery-token-ca-cert-hash sha256:3a94ce61080c71d319dbfe3ce69b555027bfe20f4dbe21a9779fd902421b1a63
However, the command hangs forever in the following state:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Since this is just a warning, why does it actually fail?
Edit: I noticed the following in my /var/log/syslog:
Mar 29 15:03:15 ubuntu-xenial kubelet[9626]: F0329 15:03:15.353432 9626 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.
First, if you want to see more detail while your worker joins the master, use:
kubeadm join 192.168.1.100:6443 --token m3jfbb.wq5m3pt0qo5g3bt9 --discovery-token-ca-cert-hash sha256:d075e5cc111ffd1b97510df9c517c122f1c7edf86b62909446042cc348ef1e0b --v=2
Using the above command I could see that my worker could not establish a connection with the master, so I just stopped the firewall:
systemctl stop firewalld
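Stopping firewalld entirely is a blunt fix; assuming the firewall really was the blocker, a narrower alternative is to open just the ports the control plane needs (port numbers from the Kubernetes install docs):
# API server and kubelet ports
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload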
This can be solved by creating a new token with this command:
kubeadm token create --print-join-command
and then running the printed join command on the nodes you want to add to the cluster.
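You can also check that the token exists and has not expired before joining:
kubeadm token list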
The problem had to do with kubeadm not installing a CNI-compatible networking solution out of the box; without this step, the Kubernetes nodes/master are unable to establish any form of communication. The following Ansible task addressed the issue:
- name: kubernetes.yml --> Install Flannel
  shell: kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
  become: yes
  environment:
    KUBECONFIG: "/etc/kubernetes/admin.conf"
  when: inventory_hostname in (groups['masters'] | last)
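After the play runs, a quick check that Flannel is up (the app=flannel label is assumed from that manifest) would be:
# one flannel pod per node, all Running
kubectl -n kube-system get pods -l app=flannel
# nodes should flip to Ready once the CNI is working
kubectl get nodes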
I got the same error on CentOS 7, but in my case the join command worked without problems, so it was indeed just a warning.
> [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
> [preflight] Reading configuration from the cluster...
> [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
> [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
As the official documentation mentions, one of the common issues that makes init hang (and I guess it also applies to the join command) is that the default cgroup driver configuration for the kubelet differs from the one used by Docker. Check the system log file (e.g. /var/log/messages) or examine the output of journalctl -u kubelet for messages like the warning above.
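If the drivers really do differ, one common fix (a sketch of the documented approach: point Docker at the systemd cgroup driver, then restart both daemons) is:
# switch Docker to the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
sudo systemctl restart kubelet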
First try the steps from the official documentation, and if that does not work, please provide more information so we can troubleshoot further if needed.
I had a bunch of k8s deployment scripts that broke recently with this same error message... it looks like Docker changed its install. Try this --
previous install:
apt-get install docker-ce
updated install:
apt-get install docker-ce docker-ce-cli containerd.io
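If apt cannot find those packages at all, the Docker repository itself may be missing; this sketch follows Docker's documented Ubuntu setup (adjust for your distribution):
# add Docker's key and repo, then install
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io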
How is /var/lib/kubelet/config.yaml created?
Regarding the /var/lib/kubelet/config.yaml: no such file or directory error.
Below are steps that should occur on the worker node in order for the mentioned file to be created.
1) The creation of the /var/lib/kubelet/ folder. It is created when the kubelet service is installed, as mentioned here:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
2) The creation of config.yaml. The kubeadm join flow has to take place: when you run kubeadm join, kubeadm uses the Bootstrap Token credential to perform a TLS bootstrap, which fetches the credential needed to download the kubelet-config-1.X ConfigMap and writes it to /var/lib/kubelet/config.yaml.
After a successful execution you should see the logs below:
.
.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
.
.
So, after these 2 steps you should have /var/lib/kubelet/config.yaml in place.
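A quick way to confirm both steps succeeded on the worker (standard commands; paths taken from the logs above):
# both files should exist after a successful join
ls -l /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
# the kubelet should be active once the join completes
sudo systemctl status kubelet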
Failure of the kubeadm join flow
In your case, it seems that the kubeadm join flow failed, which might happen for multiple reasons - bad configuration of iptables, ports that are already in use, a container runtime that is not installed properly, etc. - as described here and here.
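Before retrying the join, it can help to clear any half-finished state and verify that the kubelet's ports are free (a sketch; port numbers per the kubeadm requirements):
# wipe the partial join state
sudo kubeadm reset -f
# should print nothing if the ports are free
sudo ss -tlnp | grep -E ':10250|:10248'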
As far as I know, the fact that no CNI-compatible networking solution was in place should not affect the creation of /var/lib/kubelet/config.yaml:
A) We can see under the kubeadm preflight checks which issues will cause the join phase to fail.
B) I also tested this by removing the networking solution I used (Calico) and running kubeadm reset and kubeadm join again: no errors appeared in the kubeadm logs (I got the successful execution logs mentioned above) and /var/lib/kubelet/config.yaml was created properly.
(*) Of course, the cluster can't function in this state - I just wanted to emphasize that I think the problem was one of the options mentioned in A.
I'm trying to re-export Ceph over iSCSI, but I can't. It looks like the EPEL package scsi-target-utils in CentOS 7 was compiled without rbd support.
When I run:
$ sudo tgtadm --lld iscsi --mode system --op show
System:
    State: ready
    debug: off
LLDs:
    iscsi: ready
    iser: error
Backing stores:
    sheepdog
    bsg
    sg
    null
    ssc
    smc (bsoflags sync:direct)
    mmc (bsoflags sync:direct)
    rdwr (bsoflags sync:direct)
    aio
Device types:
    disk
    cd/dvd
    osd
    controller
    changer
    tape
    passthrough
iSNS:
    iSNS=Off
    iSNSServerIP=
    iSNSServerPort=3205
    iSNSAccessControl=Off
I don't see any Ceph-related strings. As noted on the Ceph site, the rbd support patch has been accepted into the mainline of the tgt repository.
How do I enable rbd support in the scsi-target-utils package on CentOS 7?
As I investigated, rbd support is indeed disabled in the scsi-target-utils package. You can see this if you install its SRPM and look at the SPEC file of the package.
Here are lines 7-8 of that file:
# Disable rbd on epel7 b/c deps are not present
%{!?rhel:%global with_rbd 1}
Because the rhel macro is defined on EPEL7 builds, with_rbd never gets set, so rbd support is left out. There is also an additional dependency for this backstore in scsi-target-utils: you will need to install the ceph-devel package (it can be fetched from the Ceph repos).
So, to install scsi-target-utils with rbd support, you need to do the following:
Add official ceph repository
Add epel repository
Install build environment
Download and install scsi-target-utils SRPM
Set the global with_rbd flag in the SRPM's spec file
Build SRPM
Install dependent packages for scsi-target-utils
Install built scsi-target-utils and scsi-target-utils-rbd packages
Or, in Bash:
cd /tmp
sudo yum install -y epel-release
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
sudo yum install -y http://download.ceph.com/rpm/rhel7/noarch/ceph-release-1-1.el7.noarch.rpm
sudo yum install -y yum-utils rpm-build redhat-rpm-config make gcc
yumdownloader --source scsi-target-utils
rpm -i scsi-target-utils*.src.rpm
cd ~/rpmbuild
sed -i -e 's/%{!?rhel:%global with_rbd 1}/%global with_rbd 1/' SPECS/scsi-target-utils.spec
sudo yum install -y libxslt docbook-style-xsl libaio-devel systemd-devel libibverbs-devel librdmacm-devel ceph-devel glusterfs-api-devel
rpmbuild -ba SPECS/scsi-target-utils.spec
sudo yum install -y ./RPMS/x86_64/scsi-target-utils-1.*.rpm ./RPMS/x86_64/scsi-target-utils-rbd-1.*.rpm
After the installation finishes, start the tgtd daemon and check the available components:
$ sudo systemctl enable tgtd.service
$ sudo systemctl start tgtd.service
$ sudo tgtadm --lld iscsi --mode system --op show
System:
    State: ready
    debug: off
LLDs:
    iscsi: ready
    iser: error
Backing stores:
    rbd (bsoflags sync:direct)
    sheepdog
    bsg
    sg
    null
    ssc
    smc (bsoflags sync:direct)
    mmc (bsoflags sync:direct)
    rdwr (bsoflags sync:direct)
    aio
Device types:
    disk
    cd/dvd
    osd
    controller
    changer
    tape
    passthrough
iSNS:
    iSNS=Off
    iSNSServerIP=
    iSNSServerPort=3205
    iSNSAccessControl=Off