When mounting a GlusterFS volume on servers where Kubernetes was installed via Kubespray, an error occurs:
Mount failed. Please check the log file for more details.
[2020-12-20 11:40:42.845231] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --volfile-server=kube-pv01 --volfile-id=/replicated /mnt/replica/)
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2020-12-20 11:40:42
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.8.8
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x7e)[0x7f084d99337e]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x334)[0x7f084d99cac4]
/lib/x86_64-linux-gnu/libc.so.6(+0x33060)[0x7f084bfe2060]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_ports_reserved+0x13a)[0x7f084d99d12a]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_process_reserved_ports+0x8e)[0x7f084d99d35e]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(+0xc09b)[0x7f08481ef09b]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(client_bind+0x9d)[0x7f08481ef48d]
/usr/lib/x86_64-linux-gnu/glusterfs/3.8.8/rpc-transport/socket.so(+0x98d3)[0x7f08481ec8d3]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_reconnect+0xc9)[0x7f084d75e0f9]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_start+0x39)[0x7f084d75e1c9]
/usr/sbin/glusterfs(glusterfs_mgmt_init+0x159)[0x5604fe77df79]
/usr/sbin/glusterfs(glusterfs_volumes_init+0x44)[0x5604fe778e94]
/usr/sbin/glusterfs(main+0x811)[0x5604fe7754b1]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f084bfcf2e1]
/usr/sbin/glusterfs(_start+0x2a)[0x5604fe7755ea]
---------
[11:41:47] [root#kube01.unix.local ~ ]# lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux 9.12 (stretch)
Release: 9.12
Codename: stretch
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
On servers without Kubespray, the same volume mounts successfully.
How do I fix this error?
Solved: upgrading to Debian 10 fixed it.
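For reference, a minimal sketch of the stretch-to-buster upgrade that resolved it (hedged: it assumes only the standard apt sources and uses the volume and mount point from the log above; back up first):
# Switch the apt sources from stretch to buster (also check /etc/apt/sources.list.d/)
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
$ sudo apt update && sudo apt full-upgrade
$ sudo reboot
# Afterwards the newer gluster client shipped with Debian 10 mounts cleanly
$ sudo mount -t glusterfs kube-pv01:/replicated /mnt/replica/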
Related
I installed Minikube on my Debian 10, but when I try to start it, I get these errors:
$ minikube start
* minikube v1.25.2 on Debian 10.1
* Unable to pick a default driver. Here is what was considered, in preference order:
- docker: Not healthy: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
- docker: Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker' <https://docs.docker.com/engine/install/linux-postinstall/>
- kvm2: Not healthy: /usr/bin/virsh domcapabilities --virttype kvm failed:
error: failed to get emulator capabilities
error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
exit status 1
- kvm2: Suggestion: Follow your Linux distribution instructions for configuring KVM <https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/>
* Alternatively you could install one of these drivers:
- podman: Not installed: exec: "podman": executable file not found in $PATH
- vmware: Not installed: exec: "docker-machine-driver-vmware": executable file not found in $PATH
- virtualbox: Not installed: unable to find VBoxManage in $PATH
I added my user to the docker group using:
sudo usermod -aG docker $USER
and I installed kvm without any apparent problems, as far as I understand:
kvm --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8~deb10u1)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
$ lsmod | grep kvm
kvm 729088 0
irqbypass 16384 1 kvm
$ sudo virsh list --all
Id Name State
-----------------------------
1 debian10-MK running
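In case it matters, this is roughly how I understand the docker group change and the KVM support can be double-checked (a hedged sketch; kvm-ok comes from the cpu-checker package and may not be installed):
# Re-evaluate the docker group membership without logging out
$ newgrp docker
$ docker version --format '{{.Server.Version}}'
# Hardware acceleration needs kvm_intel or kvm_amd, not just the kvm module
$ lsmod | grep -E 'kvm_(intel|amd)'
$ sudo kvm-ok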
What could be the problem and solution then?
Thanks,
Tamar
I am trying to deploy a k8s cluster on OpenStack Rocky, but after a long time it fails. I've checked the orchestration stack and see that the kube_minions resource never completes. Checking the log output of all the instances created:
[ 196.817505] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 215.082433] random: crng init done
Fedora 27 (Atomic Host)
Kernel 4.14.18-300.fc27.x86_64 on an x86_64 (ttyS0)
host-10-0-0-3 login: [ 691.438618] bridge: filtering via
arp/ip/ip6tables is no longer available by default. Update your scripts
to load br_netfilter if you need this.
[ 691.516277] Bridge firewalling registered
[ 692.149217] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[ 701.932912] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Digging deeper into the instances, I found that the master node cannot start the heat-container-agent service...
_prefix=docker.io/openstackmagnum/
atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:rocky-stable
systemctl start heat-container-agent
Failed to start heat-container-agent.service: Unit heat-container-agent.service not found.
2019-04-04 14:57:40,238 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-013 [5]
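To narrow down whether the system-container install itself failed, a few checks may help (a hedged sketch; it assumes the atomic CLI and the default cloud-init log locations on Fedora Atomic Host):
# Was the heat-container-agent system container actually installed?
$ sudo atomic containers list
# Does systemd know about the unit at all?
$ systemctl list-unit-files | grep -i heat
# What did cloud-init log while running the failing script?
$ sudo tail -n 50 /var/log/cloud-init-output.log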
I am trying to stand up a kubeadm cluster with three masters. I get this error from my init command:
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
But I do not use cgroupfs; I use systemd as the cgroup driver.
And my kubelet complains that it does not know its node name:
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.251885 5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.352932 5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.453895 5620 kubelet.go:2266] node "master01" not found
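In case it helps, this is roughly how I verify that both Docker and the kubelet use the systemd cgroup driver (a hedged check; the file locations are the kubeadm and Docker defaults and may differ on your setup):
$ docker info 2>/dev/null | grep -i 'cgroup driver'
$ cat /var/lib/kubelet/kubeadm-flags.env
$ cat /etc/docker/daemon.json    # may not exist if the driver was never overridden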
Please let me know where the issue is.
The issue can be caused by the Docker version, as only Docker versions up to 18.06 are supported by the latest Kubernetes release, i.e. v1.13.x.
I actually hit the same issue, and it was resolved after downgrading Docker from 18.09 to 18.06.
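For illustration, a downgrade on Ubuntu could look roughly like this (a hedged sketch; the exact package version string comes from the apt-cache madison output and differs per distribution):
# List the available 18.06 builds, then pin one of them
$ apt-cache madison docker-ce | grep 18.06
$ sudo apt-get install -y docker-ce=<version string from the previous command>
$ sudo apt-mark hold docker-ce    # keep apt from upgrading it again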
If the problem is not related to Docker, it might be because the kubelet service failed to establish a connection to the API server.
I would first check the status of the kubelet with systemctl status kubelet and consider restarting it with systemctl restart kubelet.
If this doesn't help, try re-installing kubeadm or running kubeadm init with another version (use the --kubernetes-version=X.Y.Z flag).
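For example, a retry pinned to a specific release could look like this (a sketch; the version number is purely illustrative):
# Wipe the failed control-plane state, then re-run init against a pinned version
$ sudo kubeadm reset -f
$ sudo systemctl restart kubelet
$ sudo kubeadm init --kubernetes-version=v1.13.4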
In my case, my k8s version is 1.21.1 and my Docker version is 19.03. I solved this by upgrading Docker to version 20.7.
What is installed for minikube:
$ ls -al /usr/local/bin/
-rwxr-xr-x 1 root root 26406912 Jun 14 12:05 docker-machine
-rwxrwxr-x 1 me libvirtd 11889064 Jun 14 12:07 docker-machine-driver-kvm
-rwxrwxr-x 1 me me 70232912 Jun 14 11:58 kubectl
-rwxrwxr-x 1 me me 82512696 Jun 14 11:57 minikube
Trying to start the cluster with minikube:
$ minikube start --vm-driver=kvm
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
E0614 12:07:39.515994 14655 start.go:127] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ').
Retrying.
E0614 12:07:39.517076 14655 start.go:133] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ')
I am new to kubernetes. Any idea how to fix it? Thanks
UPDATE
sudo /usr/sbin/kvm-ok
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
and then hard poweroff/poweron your system
KVM acceleration can NOT be used
$ dmesg | grep kvm
[ 2.114855] kvm: disabled by bios
[ 2.327746] kvm: disabled by bios
[ 120.423249] kvm: disabled by bios
[ 222.250977] kvm: disabled by bios
My update already points to the solution: enable virtualization in the BIOS.
1. Power on your PC and enter the BIOS setup.
2. Go to the security section and enable Virtualization Technology (VT).
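Once virtualization is enabled, the earlier checks should come back clean, roughly like this (a hedged sketch reusing the commands from the question; kvm-ok is from the cpu-checker package):
$ sudo modprobe kvm_intel
$ lsmod | grep kvm_intel    # should now show the module loaded
$ sudo kvm-ok               # should report that KVM acceleration can be used
$ minikube start --vm-driver=kvm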
You also need to install the KVM packages; refer to the driver documentation:
https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm-driver
# Install libvirt and qemu-kvm on your system, e.g.
# Debian/Ubuntu
$ sudo apt install libvirt-bin qemu-kvm
# Fedora/CentOS/RHEL
$ sudo yum install libvirt-daemon-kvm kvm
# Add yourself to the libvirtd group (use libvirt group for rpm based distros) so you don't need to sudo
# Debian/Ubuntu (NOTE: For Ubuntu 17.04 change the group to `libvirt`)
$ sudo usermod -a -G libvirtd $(whoami)
# Fedora/CentOS/RHEL
$ sudo usermod -a -G libvirt $(whoami)
# Update your current session for the group change to take effect
# Debian/Ubuntu (NOTE: For Ubuntu 17.04 change the group to `libvirt`)
$ newgrp libvirtd
# Fedora/CentOS/RHEL
$ newgrp libvirt
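Once the group change has taken effect, a quick sanity check before retrying (hedged; on some Debian releases the service may be named libvirt-bin rather than libvirtd):
$ systemctl status libvirtd
$ virsh list --all    # should work without sudo once the group membership is active
After that, minikube start --vm-driver=kvm should be able to create the VM.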
I have this Docker command:
docker run -d mongo
This will pull and run a MongoDB server in a Docker container.
However, I get an error:
no space left on device
I am on macOS and using a newer version of Docker, which uses a lightweight hypervisor (HyperKit/xhyve) instead of VirtualBox (I think that's correct).
Here is the exact error message from the mongo container:
$ docker logs efee16702c5756659d563b98d4ae0f58ecf1f1bba8a54f63443c0ae4b520ab4e
about to fork child process, waiting until server is ready for connections.
forked process: 21
2017-05-04T20:23:51.412+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2017-05-04T20:23:51.430+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.Lo035QkbfL: No space left on device
ERROR: child process failed, exited with error number 1
Any idea how to fix this and prevent it from happening in future?
As suggested, the output of df -h is:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 116Gi 349Gi 25% 1963838 4293003441 0% /
devfs 183Ki 183Ki 0Bi 100% 634 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
Output of docker info is:
$ docker info
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 741
Server Version: 17.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.13-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952 GiB
Name: moby
ID: OR4L:WYWW:FFAP:IDX3:B6UK:O2AN:UVTO:EPH6:GYSV:4GV4:L5WP:BQTH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 17
Goroutines: 30
System Time: 2017-05-04T20:45:27.056157913Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
As you state in the comments to the question, ls -altrh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 returns the following:
-rw-r--r--# 1 alexamil staff 53G
This is a known bug on macOS (and not only there), and an official dev comment can be found here. One caveat: I have read that different people hit different size limits. In the comment it is 64 GB, but for another person it was 20 GB.
There are a couple of workarounds, but no definitive solution that I could find.
The manual one
Run docker ps -a and manually remove all unused containers. Then run docker images and manually remove all the intermediate and unused images.
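For reference, the same cleanup can be scripted roughly like this (hedged: this removes all stopped containers and all dangling images in one go, so double-check before running it):
# Remove every stopped container, then every dangling image
$ docker rm $(docker ps -aq -f status=exited)
$ docker rmi $(docker images -qf dangling=true)
# On Docker 1.13+ (your 17.03 qualifies), docker system prune does much the same interactively
$ docker system prune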
The simplest one
Delete the Docker.qcow2 file entirely. But you will lose all images and containers. Completely.
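If you go this route, the file in question is the one from your comment above, and Docker for Mac has to be restarted so it can recreate an empty image (a sketch; the app name and path are the defaults):
$ osascript -e 'quit app "Docker"'    # stop Docker for Mac first
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
$ open -a Docker                      # a fresh, empty Docker.qcow2 is created on startup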
The less simple one
Another way is to run docker volume prune, which will remove all unused volumes.
The resizing one (keeps the data)
Another idea that comes to me is to expand the disk image size with QEMU or something like it:
$ brew install qemu
$ /Applications/Docker.app/Contents/MacOS/qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +5G
After you have expanded the image, you will need to boot a VM in which you run GParted against Docker.qcow2 and grow the partition to use the added space. You could use the GParted Live ISO for that:
$ qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 -cdrom ~/Downloads/gparted-live.iso -boot d -device usb-mouse -usb
Some people report this either doesn't work or doesn't help.
Yet another resizing one (wipes the data)
Create a substitute image with the desired size (here 120 GB):
$ qemu-img create -f qcow2 ~/data.qcow2 120G
$ cp ~/data.qcow2 /Applications/Docker.app/Contents/Resources/moby/data.qcow2
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
data.qcow2 is copied to ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 when you restart docker.
This workaround comes from this comment.
Hope this helps. Good luck!