How to explain Ceph space usage - kubernetes

I looked up RBD disk space usage, but found different statistics from Ceph and the host which mounts the disk.
From Ceph:
$ rbd -p rbd du
NAME PROVISIONED USED
kubernetes-dynamic-pvc-13a2d932-6be0-11e9-b53a-0a580a800339 40GiB 37.8GiB
From the host which mounts the disk
$ df -h
Filesystem Size Used Available Use% Mounted on
/dev/rbd0 39.2G 26.6G 10.6G 72% /data
How could I explain the difference?

Check the mount options of the /dev/rbd0 device; most likely there is no 'discard' option. Without that option the filesystem cannot report reclaimed space back to Ceph, so Ceph has no idea how much space is actually occupied on the RBD volume. This is not a big problem and can be safely ignored; you can rely on the stats reported by the kubelet.
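For example, you could verify the mount options and trigger a manual trim so the filesystem reports freed blocks back to Ceph. This is only a sketch: the device /dev/rbd0 and mount point /data are taken from the question, and fstrim does on demand what the discard mount option would do continuously.
$ mount | grep rbd0
$ sudo fstrim -v /data
$ rbd -p rbd du
After the trim, the USED column of rbd du should move closer to the value reported by df.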

Related

Ceph RBD real space usage is much larger than disk usage once mounted

I'm trying to understand how to find the current, real disk usage of a Ceph cluster, and I noticed that the output of rbd du is very different from the output of df -h once that RBD image is mounted as a disk.
Example:
Inside the ToolBox I have the following:
$ rbd du replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a
warning: fast-diff map is not enabled for csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a. operation may be slow.
2021-09-01T13:53:23.482+0000 7f8c56ffd700 -1 librbd::object_map::DiffRequest: 0x557402c909c0 handle_load_object_map: failed to load object map: rbd_object_map.8cdeb6e704c7e0
NAME PROVISIONED USED
csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a 100 GiB 95 GiB
But, inside the Pod that is mounting this rbd, I have:
$ k exec -it -n monitoring prometheus-prometheus-operator-prometheus-1 -- sh
Defaulting container name to prometheus.
Use 'kubectl describe pod/prometheus-prometheus-operator-prometheus-1 -n monitoring' to see all of the containers in this pod.
/prometheus $ df -h
Filesystem Size Used Available Use% Mounted on
overlay 38.0G 19.8G 18.2G 52% /
...
/dev/rbd5 97.9G 23.7G 74.2G 24% /prometheus
...
Is there a reason for the two results to be so different? Can this be a problem when Ceph tracks the total space used by the cluster to determine how much space is available?
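The same discard explanation applies here: blocks freed inside the pod's filesystem are not returned to Ceph unless they are discarded, so rbd du keeps counting them. The warning about the missing fast-diff map only makes the calculation slow, and the "failed to load object map" error suggests the image's object map needs a rebuild. A rough sketch of cleaning that up, assuming the image already has the exclusive-lock feature and that changing features on a CSI-managed volume is acceptable in your environment:
$ rbd feature enable replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a object-map fast-diff
$ rbd object-map rebuild replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a
$ rbd du replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a
Even then, USED will stay above the df figure until deleted blocks are discarded (mount with discard or run fstrim against the mounted filesystem).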

how to drop ceph osd block?

I built a Ceph cluster with Kubernetes and it created an OSD block device on the sdb disk.
I deleted the Ceph cluster and cleaned up all the Kubernetes resources it had created, but that did not delete the OSD block device mounted on sdb.
I am a beginner with Kubernetes. How can I remove the OSD block device from sdb?
And why does the OSD block device take up all of the disk space?
I found a way to remove the OSD block device from the disk on Ubuntu 18.04:
Use this command to show the logical volume information:
$ sudo lvm lvdisplay
The output lists each logical volume, including its LV Path.
Then execute this command to remove the OSD block volume:
$ sudo lvm lvremove <LV Path>
Check whether the volume has been removed successfully:
$ lsblk
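If the goal is to reuse sdb for a new cluster, removing the logical volume alone may not be enough, because LVM and Ceph signatures can remain on the device and block a new OSD from being created. A sketch of the extra cleanup, assuming the OSD really lived on /dev/sdb and nothing on that disk is still needed (these commands destroy data):
$ sudo vgs
$ sudo vgremove <VG name>
$ sudo wipefs --all /dev/sdb
$ sudo sgdisk --zap-all /dev/sdb
$ lsblk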

Limit Disk usage in Docker+MongoDB

I am using the official mongo Docker image to start a MongoDB container on a host whose boot disk is limited (e.g. 10G). I configured Docker to use the Google Cloud Logging driver and was hoping Google would store all the logs and spare my local disk space. However, I notice the disk continues to grow:
$ df -h
/dev/sda1 9.9G 4.5G 4.9G 49%
As I dug deeper, I realized that the size of the Docker containers seems to be growing over time.
$ sudo du -sh /var/lib/docker/
3.6G /var/lib/docker/
However, I can't go further because I can't access the directories inside it.
If I go inside the container and run du -sh on the root filesystem, I don't find any suspicious directories occupying space.
So my problem is: how do I find out where the disk space is being used, and how do I reclaim it?
My Docker startup command (shown without project options):
docker run -d --log-driver=gcplogs mongo mongod
EDIT: I noticed that the growth has stopped at 4.5GB (up from ~3GB) for a while, so I suppose it has reached some equilibrium now.
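To see where the space under /var/lib/docker actually goes, Docker's own accounting is usually enough; the commands below are a sketch and assume a Docker release new enough to have docker system df (1.13+):
$ docker system df -v
$ docker ps -s
$ docker volume ls
docker ps -s shows each container's writable-layer size, and docker system df -v breaks usage down into images, container layers and volumes. With the official mongo image the database files normally land in an anonymous volume under /var/lib/docker/volumes, which grows with the data rather than with the logs.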

Docker container: MongoDb Insufficient free space for journal files

I am running MongoDB inside a Docker (version 1.10.1, on OSX) container and it is giving this error:
MongoDb Insufficient free space for journal files
I am not able to find out whether the issue is on the host, in the container, or in VirtualBox.
However, on my host I have:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 75Gi 389Gi 17% 19777401 102066309 16% /
And on the docker container:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 1218224 742474 475750 61% /
I have also mounted a volume from the host with:
docker run -it -v /Users/foobar/Projects/compose:/data/db mongoImage:0.1 /bin/bash
Thanks for the comments @andy; the issue did seem to be within the VirtualBox environment.
I was able to resolve the issue by:
- backing up all Docker images
- cloning the default VirtualBox ISO (as a backup)
- deleting the default VirtualBox ISO and all associated files
- restarting Docker; a new default VirtualBox ISO was created, and this resolved the issue (which I expect to have again at some point)
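For anyone hitting the same thing: the filesystem mongod sees is the one inside the boot2docker/docker-machine VM, not the Mac's, so that is the disk to check. A quick sketch, assuming Docker Toolbox with a machine named default:
$ docker-machine ssh default "df -h"
$ docker save -o images-backup.tar mongoImage:0.1
The first command shows how full the VM's own disk (where /var/lib/docker lives) is; the second is one way to back an image up to the host before deleting and recreating the VM, repeated per image.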

Can't start emulator environment (error NAND: could not write file...file exists)

I'm trying to start developing in android but have had problems setting up the development environment:
I am running Ubuntu 11.04, have installed Eclipse Juno 4.2.0, and have updated the Android SDK tools to the latest version.
When I try to run an Android emulator I get the error "NAND: Could not write file...file exists". When I searched for this error, one answer said I needed to free up some space on my hard drive. I have since freed up a few gigabytes, but I still get the same error. Another site said to delete all emulator environments and create new ones from scratch. I tried this, but when I had just one environment listed in the AVD manager and tried to delete it, an error message popped up saying I can't because the emulator is currently running. Even when I reboot the computer, open the AVD manager and try to delete it, I still get the same error.
I have tried
adb devices
to find the device that is running, but no devices are listed.
I get this error whether I run the AVD manager from Eclipse or from the command line. Does anyone know why I am getting the "NAND: Could not write file...file exists" error, or why I always get the message about the emulator running?
Regards,
John
Try checking the free space on your hard drive; this error is usually due to low storage space.
Try running df -h repeatedly while the emulator is starting up. You may see something like this:
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 3.7G 2.7G 1.1G 72% /tmp
...
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 3.7G 3.6G 191M 95% /tmp
...
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 3.7G 3.6G 160M 96% /tmp
...
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 3.7G 3.6G 112M 98% /tmp
...
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 3.7G 3.7G 8.8M 100% /tmp
...
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
tmpfs 3.7G 2.7G 1.1G 72% /tmp
...
That is, the partition fills up, then you get the error message and then the partition frees up.
The solution is either to remount the tmpfs at /tmp with a larger space allocation (5 GB should be enough), using sudo mount -o remount,size=5G tmpfs /tmp/, or to tell the AVD to put its temp directory somewhere else, as described in "How to change the Android emulator temporary directory" and https://code.google.com/p/android/issues/detail?id=15716
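If you take the remount route, note that the larger /tmp lasts only until the next reboot. A sketch of making it permanent, assuming /tmp is a tmpfs (as in the df output above) configured via /etc/fstab: add or adjust this line, then remount or reboot.
tmpfs /tmp tmpfs defaults,size=5G 0 0
Pointing the emulator at a different temporary directory avoids the problem entirely; tmpfs is backed by RAM and swap, though it only consumes what is actually written, so the larger cap by itself costs nothing.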