Why does it show I have no disk space left while I still have a lot of space available? - CentOS

I am on a CentOS system, and df shows that I have plenty of disk space available.
See this session:
$ git pull
fatal: write error: No space left on device
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 4.2G 24G 15% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 63G 435M 63G 1% /run
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda2 30G 28G 0 100% /usr
/dev/sda7 148G 24G 118G 17% /data0
/dev/sda6 30G 1.3G 27G 5% /var
/dev/sda5 30G 45M 28G 1% /tmp
/dev/sdc1 3.9T 462G 3.3T 13% /data1
/dev/sdb1 274G 107G 154G 42% /data2
tmpfs 13G 0 13G 0% /run/user/60422
And I am currently running the git pull command under /data1, which still has 87% of its space free.
Why is that?
EDIT:
$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 1.9M 14K 1.9M 1% /
devtmpfs 16M 610 16M 1% /dev
tmpfs 16M 1 16M 1% /dev/shm
tmpfs 16M 1022 16M 1% /run
tmpfs 16M 16 16M 1% /sys/fs/cgroup
/dev/sda2 1.9M 344K 1.6M 18% /usr
/dev/sda7 9.5M 58K 9.4M 1% /data0
/dev/sda6 1.9M 14K 1.9M 1% /var
/dev/sda5 1.9M 35 1.9M 1% /tmp
/dev/sdc1 251M 160K 251M 1% /data1
/dev/sdb1 18M 1.2K 18M 1% /data2
tmpfs 16M 1 16M 1% /run/user/60422

Maybe you are running out of inodes? Check with df -ih.
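If df -ih does show a filesystem at 100% IUse%, counting files per directory can reveal where the inodes are going. A minimal sketch, assuming a POSIX shell with find and GNU sort available (the /data1 path is only an example, not taken from the original post):

for d in /data1/*/; do
    # Count every file and directory beneath each top-level directory;
    # an unusually large count points at the inode hog.
    printf '%s\t%s\n' "$(find "$d" | wc -l)" "$d"
done | sort -n | tail -10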

Related

How to remove Kubernetes pod-related files from the file system?

I am new to Kubernetes.
I created a few pods, then deleted them all with
kubectl delete pods --all
But the output of df -h still shows disk space consumed by Kubernetes.
Filesystem Size Used Avail Use% Mounted on
/dev/root 194G 19G 175G 10% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 1.6G 2.2M 1.6G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop2 56M 56M 0 100% /snap/core18/2246
/dev/loop1 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop3 56M 56M 0 100% /snap/core18/2253
/dev/loop4 68M 68M 0 100% /snap/lxd/21835
/dev/loop5 44M 44M 0 100% /snap/snapd/14295
/dev/loop6 62M 62M 0 100% /snap/core20/1242
/dev/loop7 43M 43M 0 100% /snap/snapd/14066
/dev/loop8 68M 68M 0 100% /snap/lxd/21803
/dev/loop9 62M 62M 0 100% /snap/core20/1270
tmpfs 1.6G 20K 1.6G 1% /run/user/123
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/a2054657-e24d-434f-8ba5-b93813a405fc/volumes/kubernetes.io~secret/local-path-provisioner-service-account-token-4hkj6
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/fa06c678-814f-4f98-8d2d-806e85923830/volumes/kubernetes.io~secret/metrics-server-token-pjbwh
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/956d3b341a87e4232792ebf1ad0925f07c180d6d86de149a6ec801f74c0b47f8/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/babfe080e5ec18297a219e65f99d6156fbd8b8651950a63052606ffebd7a618a/rootfs
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/4e3b15c1-f051-42eb-a3d1-9b3de38dae12/volumes/kubernetes.io~secret/default-token-lnpwv
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/df53096e-f89b-4fc7-ab8a-672d841ac44f/volumes/kubernetes.io~secret/coredns-token-sxtjn
tmpfs 7.8G 8.0K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/ssl
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/traefik-token-46qmp
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/shm
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/rootfs
overlay 194G 19G 175G 10% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/39b88e479947c9240a7c5233555c7a19b29f3ccc7bd1da117251c8e8959aca3c/rootfs
shm 64M 0 64M 0%
What are these mounts shown in df -h? How do I free up this space?
EDIT:
I noticed that the pods restart after I delete them.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mylab-airflow-redis-0 1/1 Running 0 33m
mylab-airflow-postgresql-0 1/1 Running 0 34m
mylab-postgresql-0 1/1 Running 0 34m
mylab-keyclo-0 1/1 Running 0 34m
mylab-keycloak-postgres-0 1/1 Running 0 34m
mylab-airflow-scheduler-788f7f4dd6-ppg6v 2/2 Running 0 34m
mylab-airflow-worker-0 2/2 Running 0 34m
mylab-airflow-flower-6d8585794d-s2jzd 1/1 Running 0 34m
mylab-airflow-webserver-859766684b-w9zcm 1/1 Running 0 34m
mylab-5f7d84fcbc-59mkf 1/1 Running 0 34m
EDIT 2:
So I deleted the deployments.
kubectl delete deployment --all
Now, there are no deployments.
$ kubectl get deployment
No resources found in default namespace.
After that, I stopped the cluster.
systemctl stop k3s
The disk space is still not released. Here is the latest disk usage output:
Filesystem Size Used Avail Use% Mounted on
/dev/root 194G 35G 160G 18% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 1.6G 2.5M 1.6G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/loop0 34M 34M 0 100% /snap/amazon-ssm-agent/3552
/dev/loop2 56M 56M 0 100% /snap/core18/2246
/dev/loop1 25M 25M 0 100% /snap/amazon-ssm-agent/4046
/dev/loop3 56M 56M 0 100% /snap/core18/2253
/dev/loop4 68M 68M 0 100% /snap/lxd/21835
/dev/loop5 44M 44M 0 100% /snap/snapd/14295
/dev/loop6 62M 62M 0 100% /snap/core20/1242
/dev/loop7 43M 43M 0 100% /snap/snapd/14066
/dev/loop8 68M 68M 0 100% /snap/lxd/21803
/dev/loop9 62M 62M 0 100% /snap/core20/1270
tmpfs 1.6G 20K 1.6G 1% /run/user/123
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/a2054657-e24d-434f-8ba5-b93813a405fc/volumes/kubernetes.io~secret/local-path-provisioner-service-account-token-4hkj6
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/fa06c678-814f-4f98-8d2d-806e85923830/volumes/kubernetes.io~secret/metrics-server-token-pjbwh
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/daceb65d912a45e87d29955b499aff1d7fbc40584eade7903a75a2c5a317325a/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/956d3b341a87e4232792ebf1ad0925f07c180d6d86de149a6ec801f74c0b47f8/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/374537a007565bba5b00824576d35e2f2ee8835c354205748117b6622dc68a6d/rootfs
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/4e3b15c1-f051-42eb-a3d1-9b3de38dae12/volumes/kubernetes.io~secret/default-token-lnpwv
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/df53096e-f89b-4fc7-ab8a-672d841ac44f/volumes/kubernetes.io~secret/coredns-token-sxtjn
tmpfs 7.8G 8.0K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/ssl
tmpfs 7.8G 12K 7.8G 1% /var/lib/kubelet/pods/415a1140-5813-48cf-bd88-17b647bd955c/volumes/kubernetes.io~secret/traefik-token-46qmp
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/d29d1a4a1ac25c92618ff9294e9045a1e2333899f64c3935c5e9955b7d1b3e61/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/2ad63b79faa95666c75dfa397524c4ed5464acfebf577c388e19ae5fc349c0c8/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/39b88e479947c9240a7c5233555c7a19b29f3ccc7bd1da117251c8e8959aca3c/rootfs
shm 64M 0 64M 0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/6eddeab3511cf326a530dd042f5348978c6ba98bf8d595c2936cb6f56e30f754/shm
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/6eddeab3511cf326a530dd042f5348978c6ba98bf8d595c2936cb6f56e30f754/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/78568d4850964c9c7b8ca5df11bf532a477492119813094631641132aadd23a0/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/14d87054e0c7a2a86ae64be70a79f94e2d193bc4739d97e261e85041c160f3bc/rootfs
overlay 194G 35G 160G 18% /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/0971fe44fc6f0f5c9e0b8c1a0e3279c20b3bc574e03d12607644e1e7d427ff65/rootfs
tmpfs 1.6G 4.0K 1.6G 1% /run/user/1000
Output of ctr container list, which shows no remaining containers:
# ctr container list
CONTAINER IMAGE RUNTIME
Some data must be maintained while a cluster is running (e.g. the default service account tokens), and the corresponding mounts exist for as long as the cluster is up. They are released when you shut the cluster down (e.g. systemctl stop k3s), not when you merely delete pods.
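If mounts and container data still linger after stopping the service, k3s ships helper scripts that tear everything down. A hedged sketch, assuming a standard script-based k3s install (the /usr/local/bin path is where the install script normally places it; adjust for your system):

# Stops all k3s processes and containers and cleans up the pod mounts:
sudo /usr/local/bin/k3s-killall.sh
# Remove container images no pod references any more (k3s bundles crictl;
# --prune requires a reasonably recent crictl version):
sudo k3s crictl rmi --prune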

Is it possible to decrease swap partition space in CentOS 7?

Some of my friends told me that a large swap partition is very bad for a server that should handle thousands of web hits per minute. My swap space is 16 GB, and I installed CentOS 7 with CWP (Control Web Panel) with CSF enabled. Should I consider decreasing the swap partition, and is that possible without reformatting the server? Or is there a way to keep this space from harming the server?
[root@server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 201M 16G 2% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 1.8T 256G 1.5T 15% /
/dev/sdb1 1.8T 275G 1.5T 16% /backup
/dev/sda5 16G 83M 15G 1% /tmp
/dev/sda1 969M 187M 716M 21% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/0
tmpfs 3.2G 0 3.2G 0% /run/user/1075
[root@server ~]#
I found that a swap file is more effective and easier to set up in my case, so I disabled the swap partition and created a swap file instead.
Here is how to do it:
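A minimal sketch of the procedure (the 4 GB size and the /swapfile path are examples, not values from the original post):

# Disable the existing swap partition:
sudo swapoff -a
# (also comment out the swap partition's line in /etc/fstab so it
#  does not come back on reboot)
# Create and activate a 4 GB swap file:
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it permanent:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab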

OpenShift v3 space consumption check

Can anyone tell me how to check the available volume space and the space consumed on my MongoDB pod on the new OpenShift Online platform? After allocating around 4 GB of space, I am unclear about how much of the volume has been consumed so far.
Any light thrown on this would help. Thank you.
You can run oc exec <mongodb_pod> -- df -H.
For example, I get the following output when I run it for the sample python app in my cluster:
$ oc exec os-sample-python-1-hrbq7 -- df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-94278-dbe79cf53785ab8a1b083f53c88088ab667a01f45e8a8725b26fbff82eef2a33 11G 689M 11G 7% /
tmpfs 4.2G 0 4.2G 0% /dev
tmpfs 4.2G 0 4.2G 0% /sys/fs/cgroup
/dev/vda1 11G 3.0G 7.8G 28% /etc/hosts
shm 68M 0 68M 0% /dev/shm
tmpfs 4.2G 17k 4.2G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 4.2G 0 4.2G 0% /proc/scsi
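To look at just the persistent data volume rather than every mount in the container, point df or du at the volume's mount path. A hedged example, assuming the data is mounted at /var/lib/mongodb/data (the default in OpenShift's MongoDB image; adjust to your deployment):

$ oc exec <mongodb_pod> -- df -h /var/lib/mongodb/data
$ oc exec <mongodb_pod> -- du -sh /var/lib/mongodb/data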

How to get more space on /dev/mapper/centos-root

The following is the df -h result; /dev/mapper/centos-root is at 100%.
/dev/mapper/centos-root 50G 50G 20K 100% /
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.9G 8.0K 3.9G 1% /dev/shm
tmpfs 3.9G 8.9M 3.8G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/centos-home 406G 5.0G 401G 2% /home
/dev/sda3 497M 303M 195M 61% /boot
tmpfs 780M 0 780M 0% /run/user/0
tmpfs 780M 0 780M 0% /run/user/1000
I removed a lot of large files (several GB) in $HOME and $HOME/Downloads, but there is no change in the df -h result.
Could you tell me where I should remove files to free space on /dev/mapper/centos-root?
Your home directory is mounted from a separate partition, /dev/mapper/centos-home, so deleting files under $HOME does not free any space on /.
Try deleting unimportant files that actually live on the root filesystem, for example under /tmp/.
If you want to examine file sizes, have a look at the du command.
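For example, a minimal sketch (assumes GNU du and sort; the -x flag keeps du on the root filesystem, so /home and the other separate mounts are not counted):

sudo du -xh --max-depth=1 / | sort -h | tail -15

The biggest entries at the bottom show where to start cleaning up.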
The following is the df -i result; /dev/mapper/vg00-lv_var is at 100% inode usage on /var:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg00-lv_root 49856512 220435 49636077 1% /
devtmpfs 16444850 583 16444267 1% /dev
tmpfs 16448811 6 16448805 1% /dev/shm
tmpfs 16448811 986 16447825 1% /run
tmpfs 16448811 16 16448795 1% /sys/fs/cgroup
/dev/sda2 32768 35 32733 1% /boot
/dev/sda1 0 0 0 - /boot/efi
/dev/mapper/vg00-lv_var 6553600 6553600 0 100% /var
tmpfs 16448811 17 16448794 1% /run/user/42
tmpfs 16448811 1 16448810 1% /run/user/0
tmpfs 16448811 1 16448810 1% /run/user/99

Raspberry Pi Filesystem

I have a problem which I can't find the answer to anywhere online, so I thought I would ask it here. Below is the df output from my Raspberry Pi. I have a 16 GB SD card installed in the Pi, yet the total size of all those filesystems is only about 5 GB. When I try to install anything it says "Out of Memory", and I have already cleared all the log files and so on to try to free up space.
But the real question is: why does it say the card is full when it is 16 GB and I have only installed apache2 and php5 on it?
edward#raspberrypi:/ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 3.6G 3.5G 0 100% /
devtmpfs 214M 0 214M 0% /dev
tmpfs 218M 0 218M 0% /dev/shm
tmpfs 218M 4.6M 213M 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 218M 0 218M 0% /sys/fs/cgroup
/dev/mmcblk0p1 60M 20M 41M 34% /boot
tmpfs 44M 0 44M 0% /run/user/1000
tmpfs 44M 0 44M 0% /run/user/1001
Thanks for any help.
Run raspi-config; a menu pops up with an option to expand your filesystem. Your root partition only covers 3.6 GB of the 16 GB card, and expanding it will claim the rest.
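A short sketch of the steps (the exact menu location varies between raspi-config versions; on older images "Expand Filesystem" is a top-level entry, on newer ones it is under "Advanced Options"):

$ sudo raspi-config
# Choose "Expand Filesystem", finish, then reboot:
$ sudo reboot
# Afterwards df -h should show /dev/root at roughly the full card size.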