Ceph RBD real space usage is much larger than disk usage once mounted - Kubernetes

I'm trying to understand how to find out the current, real disk usage of a Ceph cluster, and I noticed that the output of rbd du is very different from the output of df -h once that RBD image is mounted as a disk.
Example:
Inside the ToolBox I have the following:
$ rbd du replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a
warning: fast-diff map is not enabled for csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a. operation may be slow.
2021-09-01T13:53:23.482+0000 7f8c56ffd700 -1 librbd::object_map::DiffRequest: 0x557402c909c0 handle_load_object_map: failed to load object map: rbd_object_map.8cdeb6e704c7e0
NAME                                          PROVISIONED  USED
csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a  100 GiB      95 GiB
But inside the Pod that mounts this RBD, I have:
$ k exec -it -n monitoring prometheus-prometheus-operator-prometheus-1 -- sh
Defaulting container name to prometheus.
Use 'kubectl describe pod/prometheus-prometheus-operator-prometheus-1 -n monitoring' to see all of the containers in this pod.
/prometheus $ df -h
Filesystem  Size   Used   Available  Use%  Mounted on
overlay     38.0G  19.8G  18.2G      52%   /
...
/dev/rbd5   97.9G  23.7G  74.2G      24%   /prometheus
...
Is there a reason for the two results to be so different? Can this be a problem when Ceph tracks the total space used by the cluster to determine how much free space is available?
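As an aside, the "fast-diff map is not enabled" warning in the rbd du output above can be addressed with the standard rbd feature commands from the same toolbox. This is only a hedged sketch: it reuses the image name from the question and assumes the image already has exclusive-lock enabled (add it to the enable command if it does not).
$ rbd feature enable replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a object-map fast-diff
$ rbd object-map rebuild replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a   # populate the map for existing data
$ rbd du replicapool/csi-vol-da731ad9-eebe-11eb-9fbd-f2c976e9e23a                   # now runs fast, no warning
With the object map rebuilt, the USED column reflects the objects Ceph has actually allocated, which still will not match df -h unless deleted blocks are discarded (see the answer further down about the 'discard' mount option).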

Related

How to drop a Ceph OSD block?

I built a Ceph cluster with Kubernetes and it created an OSD block device on the sdb disk.
I have deleted the Ceph cluster and cleaned up all the Kubernetes resources it had created, but that didn't delete the OSD block device which is mounted on sdb.
I am a beginner in Kubernetes. How can I remove the OSD block device from sdb?
And why does the OSD block device take up all of the disk space?
I found a way to remove the OSD block device from the disk on Ubuntu 18.04.
Use this command to show the logical volume information:
$ sudo lvm lvdisplay
The output lists each logical volume, including its LV Path.
Then execute this command to remove the OSD block volume:
$ sudo lvm lvremove <LV Path>
Check that the volume was removed successfully:
$ lsblk
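For completeness, here is a hedged one-pass version of the same cleanup; it assumes the OSD's volume group name contains "ceph" (the ceph-volume default) and that sdb holds nothing else you want to keep, so verify with lvdisplay first.
$ sudo lvs --noheadings -o lv_path | grep ceph   # find the OSD logical volume(s)
$ sudo lvremove -y <LV Path from the previous command>
$ sudo wipefs -a /dev/sdb                        # optional: clear leftover LVM/Ceph signatures on sdb
$ lsblk                                          # sdb should no longer show a child volume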

How to differentiate between RAM and heap usage in Logstash?

I am running a Logstash Kubernetes pod and have set LS_JVM_OPTS to -Xmx1g -Xms500m. Monitoring it with Prometheus and Grafana, I see 3.2 GiB of memory usage. May I know what is happening here?
You are probably seeing the container's memory usage, not the heap size; there are other things in the JVM, such as the GC, that require memory. That said, 3.2 GB seems a bit excessive for that heap 😲, so you might want to check 🔬 that the Logstash JVM does indeed have those heap options:
$ kubectl exec -t <pod-name> -c <container-name> -- /bin/ps -Af | grep java
You can also check 🕵️ what resource requests/limits you have on your container (under spec.containers in the pod spec), to see if you are requesting 3.2 GB to begin with.
$ kubectl get pod <logstash-pod-name> -o yaml
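A hedged cross-check of the real heap, assuming the Logstash monitoring API is enabled on its default port 9600 and curl is available in the container:
$ kubectl exec -t <pod-name> -c <container-name> -- curl -s localhost:9600/_node/stats/jvm
# In the JSON response, jvm.mem.heap_used_in_bytes and heap_max_in_bytes show the
# actual heap, independent of what the container cgroup (and therefore Grafana) reports.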

How to explain Ceph space usage

I looked up RBD disk space usage, but found different statistics from Ceph and the host which mounts the disk.
From Ceph:
$ rbd -p rbd du
NAME                                                          PROVISIONED  USED
kubernetes-dynamic-pvc-13a2d932-6be0-11e9-b53a-0a580a800339  40GiB        37.8GiB
From the host which mounts the disk
$ df -h
Filesystem  Size   Used   Available  Use%  Mounted on
/dev/rbd0   39.2G  26.6G  10.6G      72%   /data
How could I explain the difference?
You can check the mount options of the /dev/rbd0 device: there is probably no 'discard' option. Without that option the filesystem cannot report reclaimed space back to Ceph, so Ceph has no idea how much space is actually occupied on the RBD volume. This is not a big problem and can be safely ignored; you can rely on the stats reported by the kubelet.
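A hedged way to verify and work around this from the client that maps /dev/rbd0 (the mount point and pool are the ones from the question):
$ mount | grep rbd0          # check whether 'discard' appears in the mount options
$ sudo fstrim -v /data       # manually return unused blocks to Ceph
$ rbd -p rbd du              # USED should drop once the trim completes
fstrim works even when the filesystem is not mounted with 'discard', as long as the underlying device supports it, so it can be run periodically instead of paying the per-delete cost of the discard option.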

Limit Disk usage in Docker+MongoDB

I am using the official mongo Docker image to start a MongoDB container on a host whose boot disk is limited (e.g. 10G). I configured Docker to run with the Google Cloud Logging driver, hoping Google would store all the logs and save my local disk space. However, I notice the disk usage continues to grow:
$ df -h
/dev/sda1 9.9G 4.5G 4.9G 49%
As I dug deeper, I realized that the size of the Docker containers seems to be growing over time.
$ sudo du -sh /var/lib/docker/
3.6G /var/lib/docker/
However, I can't go further because I can't access the directories within.
If I go inside the container and run du -sh on the root, I don't find any suspicious directories occupying space.
So my problem is: how do I find out where the disk space is being used, and how do I free it?
My Docker startup command (shown without project options):
docker run -d --log-driver=gcplogs mongo mongod
EDIT: I noticed the growth has stopped at 4.5 GB (up from ~3 GB) for a while now, so I suppose it has reached some equilibrium.
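A hedged way to narrow down where the space is going (docker system df needs Docker 1.13 or newer; the paths below are the Docker defaults):
$ docker system df -v                       # usage broken down by images, containers and volumes
$ docker ps -s                              # per-container writable-layer size
$ sudo du -sh /var/lib/docker/volumes/*     # the mongo image stores /data/db in an anonymous volume
$ sudo du -sh /var/lib/docker/containers/*  # container metadata and any local log files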

Docker container: MongoDb Insufficient free space for journal files

I am running MongoDB inside a Docker container (Docker 1.10.1, on OS X) and it is giving this error:
MongoDb Insufficient free space for journal files
I am not able to find out whether the issue is on the host, in the container, or in VirtualBox.
However, on my host I have:
Filesystem  Size   Used  Avail  Capacity  iused     ifree      %iused  Mounted on
/dev/disk1  465Gi  75Gi  389Gi  17%       19777401  102066309  16%     /
And on the docker container:
Filesystem  Inodes   IUsed   IFree   IUse%  Mounted on
none        1218224  742474  475750  61%    /
I have also mounted a volume from the host with:
docker run -it -v /Users/foobar/Projects/compose:/data/db mongoImage:0.1 /bin/bash
Thanks for the comments @andy, the issue did indeed seem to be within the VirtualBox environment.
I was able to resolve the issue by:
- backing up all Docker images
- cloning the default VirtualBox image (as a backup)
- deleting the default VirtualBox image and all associated files
- restarting Docker; a new default VirtualBox image was created, which resolved the issue (and which I expect to hit again at some point)
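For reference, a hedged sketch of the same recovery using docker-machine (the Docker Toolbox tooling of that era); "default" is the usual machine name and the disk size is only an example:
$ docker save -o images-backup.tar $(docker images -q)   # back up all local images first
$ docker-machine rm default                              # deletes the VirtualBox VM and its disk
$ docker-machine create -d virtualbox --virtualbox-disk-size 40000 default
$ eval $(docker-machine env default)                     # point the docker CLI at the new VM
$ docker load -i images-backup.tar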