Disk usage in a Kubernetes pod

I am trying to debug the storage usage in my Kubernetes pod. The pod was evicted because of disk pressure. When I log in to the running pod, I see the following:
Filesystem Size Used Avail Use% Mounted on
overlay 30G 21G 8.8G 70% /
tmpfs 64M 0 64M 0% /dev
tmpfs 14G 0 14G 0% /sys/fs/cgroup
/dev/sda1 30G 21G 8.8G 70% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 14G 12K 14G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 14G 0 14G 0% /proc/acpi
tmpfs 14G 0 14G 0% /proc/scsi
tmpfs 14G 0 14G 0% /sys/firmware
root@deploy-9f45856c7-wx9hj:/# du -sh /
du: cannot access '/proc/1142/task/1142/fd/3': No such file or directory
du: cannot access '/proc/1142/task/1142/fdinfo/3': No such file or directory
du: cannot access '/proc/1142/fd/4': No such file or directory
du: cannot access '/proc/1142/fdinfo/4': No such file or directory
227M /
root@deploy-9f45856c7-wx9hj:/# du -sh /tmp
11M /tmp
root@deploy-9f45856c7-wx9hj:/# du -sh /dev
0 /dev
root@deploy-9f45856c7-wx9hj:/# du -sh /sys
0 /sys
root@deploy-9f45856c7-wx9hj:/# du -sh /etc
1.5M /etc
root@deploy-9f45856c7-wx9hj:/#
As df shows, 21G is consumed, but when I run du -sh / it returns just 227M. I would like to find out which directory is consuming the space.

According to the Node Conditions docs, DiskPressure is about conditions on the node that cause the kubelet to evict pods. It doesn't necessarily mean it's your pod that caused those conditions:
DiskPressure
Available disk space and inodes on either the node’s root filesystem
or image filesystem has satisfied an eviction threshold
You may want to investigate what's happening on the node instead.
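A minimal sketch of that node-level investigation, assuming you can SSH into the node and it runs Docker (containerd-based nodes keep image data under /var/lib/containerd instead):
$ df -h /
$ sudo du -xh --max-depth=1 / | sort -rh | head -20
$ sudo du -sh /var/lib/docker
The -x flag keeps du on a single filesystem, so pseudo-mounts like /proc and /sys are skipped; the last command checks container images and writable layers, a common culprit for node disk pressure.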

It looks like process 1142 is still running and holding file descriptors, and perhaps some disk space (you may have other processes with unreleased file descriptors too). Is it the kubelet? To alleviate the problem, you can verify that it's running and then kill it:
$ ps -Af | grep 1142
$ kill -9 1142
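Before killing it, you can check whether the process is holding deleted-but-open files; their space is counted by df but invisible to du, which would explain the 21G vs 227M gap (a sketch using the PID from the du errors above):
$ ls -l /proc/1142/fd | grep deleted
$ lsof -p 1142 2>/dev/null | grep deleted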
P.S. You need to provide more information about the processes and what's running on that node.

Related

Is it possible to decrease swap partition space in CentOS 7?

Some of my friends told me that a large swap partition is very bad for a server that should handle thousands of web hits per minute.
The swap space is 16 GB, and I installed CentOS 7 with CWP (Control Web Panel) and CSF enabled.
Should I consider decreasing the swap partition space, if that is possible without reformatting the server, and how? Or is there a solution to keep this space from harming the server?
[root@server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 201M 16G 2% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 1.8T 256G 1.5T 15% /
/dev/sdb1 1.8T 275G 1.5T 16% /backup
/dev/sda5 16G 83M 15G 1% /tmp
/dev/sda1 969M 187M 716M 21% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/0
tmpfs 3.2G 0 3.2G 0% /run/user/1075
[root@server ~]#
I found that a swap file is more effective and easier to set up in my case, so I disabled the swap partition and created a swap file instead.
Here is how you do it.
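A sketch of the usual procedure (the 16G size and the /swapfile path are my assumptions; also comment out the old swap partition line in /etc/fstab so it is not re-enabled on boot):
$ swapoff -a
$ dd if=/dev/zero of=/swapfile bs=1M count=16384
$ chmod 600 /swapfile
$ mkswap /swapfile
$ swapon /swapfile
To make it permanent, add this line to /etc/fstab:
/swapfile swap swap defaults 0 0
On most filesystems, fallocate -l 16G /swapfile is a faster alternative to the dd step.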

How to attach an extra volume on a CentOS 7 server

I have created additional volume on my server.
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 3.4G 15G 19% /
devtmpfs 874M 0 874M 0% /dev
tmpfs 896M 0 896M 0% /dev/shm
tmpfs 896M 17M 879M 2% /run
tmpfs 896M 0 896M 0% /sys/fs/cgroup
tmpfs 180M 0 180M 0% /run/user/0
/dev/sdb 25G 44M 24G 1% /mnt/HC_Volume_1788024
How can I attach /dev/sdb either to the whole server (I mean merge it with /dev/sda1) or assign it to a specific directory on the server, such as /var/lib, without overwriting the current /var/lib?
You will not be able to "merge" them, as you are using standard sdX devices rather than something like LVM for your filesystems.
As root, you can manually run:
mount /dev/sdb /var/lib/
The original content of /var/lib will still be there (taking up space on your / filesystem)
To make permanent, (carefully) edit your /etc/fstab and add a line like:
/dev/sdb /var/lib FILESYSTEM_OF_YOUR_SDB_DISK defaults 0 0
You will need to replace "FILESYSTEM_OF_YOUR_SDB_DISK" with the correct filesystem type ("df -T", "blkid" or "lsblk -f" will show the type)
You should test the correctness of your /etc/fstab by first unmounting (if you previously mounted):
umount /var/lib
Then run:
mount -a
df -T
You should see the mount point in place, and mount -a should not have produced any errors.
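One caveat: mounting over /var/lib hides, but does not free, whatever is already there, and services expect their existing data after the switch. A hedged sketch of migrating the contents first (the /mnt/newdisk staging path is an assumption, and services writing to /var/lib should be stopped before copying):
mkdir -p /mnt/newdisk
mount /dev/sdb /mnt/newdisk
rsync -aAX /var/lib/ /mnt/newdisk/
umount /mnt/newdisk
mount /dev/sdb /var/lib
The rsync -aAX flags preserve permissions, ACLs and extended attributes, which matters for directories like /var/lib.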

/dev/mapper/centos_server2-root is full

I have a problem with CentOS and DirectAdmin: I can't log in to the panel because there is no space available to create a session.
When I run the df -h command, I get the following result:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_server2-root 50G 50G 20K 100% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 8.6M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda1 1014M 143M 872M 15% /boot
/dev/mapper/centos_server2-home 192G 124G 68G 65% /home
tmpfs 1.6G 0 1.6G 0% /run/user/0
How can I free up /dev/mapper/centos_server2-root?
You can remove the old logs from your system.
For removing old log files, are your logs being rotated, e.g. do you have /var/log/messages, /var/log/messages.1, /var/log/messages.2.gz, etc, or maybe /var/log/messages-20101221, /var/log/messages-20101220.gz, etc?
The obvious way to remove those is by age, e.g.
# find /var/log -type f -mtime +14 -print
# find /var/log -type f -mtime +14 -exec rm '{}' \;
Found out here https://serverfault.com/questions/215179/centos-100-disk-full-how-to-remove-log-files-history-etc (by mikel)
Note: use the rm command with caution.
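Before deleting anything, it can help to confirm where the space actually went. A minimal sketch (the -x/-xdev options keep the scan on the root filesystem, so the separate /home volume is not counted):
# du -xh --max-depth=1 / | sort -rh | head -15
# find / -xdev -type f -size +100M -exec ls -lh '{}' \;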

Openshift v3 space consumption check

Can anyone tell me how to check the currently available volume and consumed space on my MongoDB pod on the new OpenShift Online platform? After allocating around 4GB of space, I am unclear about how much of that volume has been consumed so far.
Any light thrown on this will help. Thank you.
You can run oc exec <mongodb_pod> -- df -H.
For example, I get the following output when I run it for the sample python app in my cluster:
$ oc exec os-sample-python-1-hrbq7 -- df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-94278-dbe79cf53785ab8a1b083f53c88088ab667a01f45e8a8725b26fbff82eef2a33 11G 689M 11G 7% /
tmpfs 4.2G 0 4.2G 0% /dev
tmpfs 4.2G 0 4.2G 0% /sys/fs/cgroup
/dev/vda1 11G 3.0G 7.8G 28% /etc/hosts
shm 68M 0 68M 0% /dev/shm
tmpfs 4.2G 17k 4.2G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 4.2G 0 4.2G 0% /proc/scsi
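If you want the usage of the MongoDB persistent volume specifically, rather than the whole container filesystem, you can point df and du at its mount path. A sketch, assuming the volume is mounted at /var/lib/mongodb/data (check the pod spec or oc describe pod to confirm the path):
$ oc exec <mongodb_pod> -- df -H /var/lib/mongodb/data
$ oc exec <mongodb_pod> -- du -sh /var/lib/mongodb/data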

Raspberry Pi Filesystem

I have a problem which I can't find the answer to anywhere online, so I thought I would ask it here. Below is the df output from my Raspberry Pi. I have a 16GB SD card installed in my Pi, yet the total size of all those filesystems is only about 5GB. When I try to install anything it says "Out of Memory", and I have already cleared all log files, etc., to try to free up space.
But the real question is: why does it say full when it has a 16GB card and I've only installed apache2 and php5 on it?
edward#raspberrypi:/ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 3.6G 3.5G 0 100% /
devtmpfs 214M 0 214M 0% /dev
tmpfs 218M 0 218M 0% /dev/shm
tmpfs 218M 4.6M 213M 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 218M 0 218M 0% /sys/fs/cgroup
/dev/mmcblk0p1 60M 20M 41M 34% /boot
tmpfs 44M 0 44M 0% /run/user/1000
tmpfs 44M 0 44M 0% /run/user/1001
Thanks for any help.
Try typing in raspi-config; a menu pops up with the option to expand your file system. Fresh Raspbian images ship with a root partition that covers only part of the card, which is why df reports a 3.6G root filesystem on a 16GB card.
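A sketch of the non-interactive equivalent, using raspi-config's scripting mode (the nonint interface exists on recent Raspbian versions; verify it on yours):
$ sudo raspi-config nonint do_expand_rootfs
$ sudo reboot
After the reboot, df -h / should show the root filesystem spanning the whole card.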