I built a Ceph cluster with Kubernetes, and it created an OSD block on the sdb disk.
I deleted the Ceph cluster and cleaned up all the Kubernetes resources the cluster had created, but that did not delete the OSD block that is mounted on sdb.
I am a beginner with Kubernetes. How can I remove the OSD block from sdb?
And why does the OSD block take up all of the disk space?
I found a way to remove the OSD block from the disk on Ubuntu 18.04:
Use this command to show the logical volume information:
$ sudo lvm lvdisplay
The output lists each logical volume; note the LV Path of the OSD block volume.
Then execute this command to remove the OSD block volume:
$ sudo lvm lvremove <LV Path>
Check that the volume has been removed successfully:
$ lsblk
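For reference, the same steps as a single sketch with placeholder names (ceph-volume typically names the logical volume osd-block-<uuid> inside a ceph-<uuid> volume group; use the exact LV Path reported by lvdisplay on your system):
$ sudo lvm lvdisplay | grep "LV Path"
$ sudo lvm lvremove /dev/ceph-<vg-uuid>/osd-block-<osd-uuid>
$ lsblk
After lvremove, lsblk should no longer show the Ceph logical volume under sdb.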
I changed log_driver to "local" in the daemon.json Docker configuration file, because a high level of activity in the RADOS Gateway logs had saturated the disk space. My intention was to switch to journald so I could use logrotate. Unfortunately, after restarting the Docker daemon, many Ceph services disappeared, as did their container images. That node now causes a HEALTH_ERR because it lost 1 mgr, 1 mon and 3 OSD services at the same time.
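For reference, the switch is made in /etc/docker/daemon.json; a minimal sketch of the journald variant I was aiming for (the relevant key is log-driver, other settings omitted):
{
  "log-driver": "journald"
}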
I have tried to run some ceph commands inside the cephadm shell (on another node), but it freezes and nothing happens. What can I try in order to restore the node's services and the cluster health?
I'm trying to understand how to find out the current, real disk usage of a Ceph cluster, and I noticed that the output of rbd du is very different from the output of df -h once that RBD is mounted as a disk.
Example:
Inside the ToolBox I have the following:
$ rbd du replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a
warning: fast-diff map is not enabled for csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a. operation may be slow.
2021-09-01T13:53:23.482+0000 7f8c56ffd700 -1 librbd::object_map::DiffRequest: 0x557402c909c0 handle_load_object_map: failed to load object map: rbd_object_map.8cdeb6e704c7e0
NAME PROVISIONED USED
csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a 100 GiB 95 GiB
But inside the Pod that is mounting this RBD, I have:
$ k exec -it -n monitoring prometheus-prometheus-operator-prometheus-1 -- sh
Defaulting container name to prometheus.
Use 'kubectl describe pod/prometheus-prometheus-operator-prometheus-1 -n monitoring' to see all of the containers in this pod.
/prometheus $ df -h
Filesystem Size Used Available Use% Mounted on
overlay 38.0G 19.8G 18.2G 52% /
...
/dev/rbd5 97.9G 23.7G 74.2G 24% /prometheus
...
Is there a reason for the two results to be so different? Can this be a problem when Ceph tracks the total space used by the cluster to know how much space is still available?
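As an aside, the warning and the object-map error in the rbd du output suggest the fast-diff feature is not enabled on this image, which also makes rbd du fall back to a slow full scan. A sketch of enabling it from the ToolBox, reusing the image name from above (check the current feature set with rbd info first; object-map requires exclusive-lock):
$ rbd info replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a
$ rbd feature enable replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a object-map fast-diff
$ rbd object-map rebuild replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a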
I tried this, but it didn't work:
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user/app1:/minikube-host/app1 \
--mount-string /home/user/app2:/minikube-host/app2
but only /home/user/app2 was mounted.
You can run multiple mount commands after starting minikube to mount the different folders:
minikube mount /home/user/app1:/minikube-host/app1
minikube mount /home/user/app2:/minikube-host/app2
This will mount multiple folders in minikube.
In your case there is no need to pass multiple mounts when starting minikube.
Note that each minikube mount run after start needs its own terminal, which must stay open.
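To confirm the mounts from inside the VM, something like this should work (paths taken from the question; minikube ssh runs a command inside the VM):
minikube ssh -- ls /minikube-host/app1
minikube ssh -- ls /minikube-host/app2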
You can mount /home/user -> /minikube-host. All the folders inside /home/user will then be available inside the VM at /minikube-host:
/home/user/app1 will be available inside the VM as /minikube-host/app1
/home/user/app2 will be available inside the VM as /minikube-host/app2
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user:/minikube-host
Hope this helps!
Currently there is no way. Even using minikube mount you need to run each command in a separate terminal, which is completely unusable.
I created three CephFS file systems and tried to mount a specific one on a client node, but could not find a way to do it. I tried
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
but it fails. Is there another way to mount multiple file systems on the client node using the kernel driver (mount.ceph) or ceph-fuse?
It is possible to specify which CephFS to mount with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... ceph-fuse
I am pretty sure that -o mds_namespace did not work because of an old kernel version. If you are using CentOS 7, please test with ceph-fuse 12.2.4 or a later version (using --client_mds_namespace). It worked fine in my environment.
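For illustration, a sketch of both variants against the webfs file system from the question (the monitor address, mount point, and secret/keyring paths are placeholders):
mount -t ceph mon-node:6789:/ /mnt/apachefs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
ceph-fuse /mnt/apachefs -n client.admin --client_mds_namespace=webfs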
If you are using a Debian-based system, you can install the ceph-fs-common package with apt: apt-get install -y ceph-fs-common.
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=openshift,noatime,_netdev 0 2
I am running local Ceph (version 10.2.7) and Kubernetes v1.6.5 in separate clusters. Using a PV and a PV Claim, I was able to mount the RBD device into the pod.
When I configure Ceph StorageClasses for dynamic provisioning, it gives the error below for the PV claim.
E0623 00:22:30.520160 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
W0623 00:22:45.513291 1 rbd_util.go:364] failed to create rbd image, output
E0623 00:22:45.513308 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
W0623 00:22:45.516768 1 rbd_util.go:364] failed to create rbd image, output
E0623 00:22:45.516830 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
I have installed the ceph-common package on all the Kubernetes cluster nodes. All the nodes run CentOS 7.
How can I fix this error message?
Thanks
SR
Well, the internal kubernetes.io/rbd provisioner does not work; this has been known for a very long time and is, for example, discussed here.
One should use an external provisioner like the one mentioned here.
Kubelet is trying to run rbd create ....
The rbd command needs to be in the PATH of the kubelet binary.
Kubelet usually runs as root. Check if you can run rbd create as root. If not, add it to root's PATH, or to the environment of whatever script or unit (systemd?) is starting Kubelet.
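A quick way to check, assuming ceph-common plus the cluster's /etc/ceph/ceph.conf and admin keyring are present on the node (the pool and image names below are just examples):
$ sudo -i which rbd
$ sudo rbd create kube/test-img --size 128 -k /etc/ceph/ceph.client.admin.keyring
$ sudo rbd rm kube/test-img -k /etc/ceph/ceph.client.admin.keyring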
You need to define a new provisioner, rbd-provisioner. See this issue.
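If you go the external-provisioner route, the StorageClass then points at that provisioner instead of kubernetes.io/rbd. A rough sketch (the secret names, pool, and monitor address are placeholders and depend on how rbd-provisioner is deployed):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: <mon-host>:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret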