Resizing disk on Google Cloud Kubernetes

Hi, I'm trying to resize a disk for a pod in my Kubernetes cluster, following the steps in the docs. I SSH into the instance node to run them, but it gives me an error:
sudo growpart /dev/sdb 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/sdb
/dev/sdb: device contains a valid 'ext4' signature; it is strongly recommended to wipe the device with wipefs(8)
if this is unexpected, in order to avoid possible collisions
sfdisk: failed to dump partition table: Success
FAILED: failed to dump sfdisk info for /dev/sdb
I tried running the commands from inside the pod, but it doesn't even locate the disk even though it's there:
root@rc-test-r2cfg:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 59G 2.5G 56G 5% /
/dev/sdb 49G 22G 25G 47% /var/lib/postgresql/data
root@rc-test-r2cfg:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 96G 0 disk /var/lib/postgresql/data
sda 8:0 0 60G 0 disk
└─sda1 8:1 0 60G 0 part /etc/hosts
root@rc-test-r2cfg:/# growpart /dev/sdb 1
FAILED: /dev/sdb: does not exist
where /dev/sdb is the disk location

The growpart error is expected here: the GCE persistent disk has the ext4 filesystem written directly onto /dev/sdb with no partition table, so there is no partition 1 to grow. The resize can now be done easily by updating the storage specification of the PersistentVolumeClaim directly. See these posts for reference:
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
https://dev.to/bzon/resizing-persistent-volumes-in-kubernetes-like-magic-4f96 (GKE example)
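For the GKE case, a minimal sketch of the PVC route (the StorageClass name standard and the PVC name postgres-data are placeholders for whatever your cluster actually uses; expansion only works once allowVolumeExpansion is enabled on the StorageClass):
kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
kubectl patch pvc postgres-data -p '{"spec":{"resources":{"requests":{"storage":"96Gi"}}}}'
Depending on the Kubernetes version, the filesystem is grown online or on the next pod restart; kubectl describe pvc postgres-data shows the resize conditions.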

Related

Is there a default size limit for volume in Docker?

I had a one-time problem with a database container on a Docker machine:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2023-01-30 06:07:50.651 GMT [1] FATAL: could not write lock file "postmaster.pid": No space left on device
The error seems simple, but there is space left on the device:
docker exec -it database_container df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         394G   17G  361G   5% /
tmpfs            64M     0   64M   0% /dev
shm              64M  2.0M   63M   3% /dev/shm
/dev/vda1       394G   17G  361G   5% /data/postgres
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /sys/firmware
But after a simple reboot of the machine it worked again.
Could it be a one-off bug, or is it a misconfiguration?
The storage driver is overlay2, so I don't think that's the problem.
And now the problem has disappeared.
What can I do to resolve this one-time problem?
Thank you.

CentOS Partition Resize

I'm struggling with resizing a CentOS partition on a server. I found some steps, but I'm not sure which circumstances I'm facing or what the correct approach is, and I definitely cannot mess this up.
The space should already be available, but the partition is not resized as far as I can tell.
The goal is to extend the partition /dev/sdb1 from 197 GB to 1 TB.
Below are the lsblk, df -h and fdisk -l results, which should show my current situation.
[ ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 3.7G 0 part [SWAP]
└─sda3 8:3 0 45.3G 0 part /
sdb 8:16 0 1T 0 disk
└─sdb1 8:17 0 1024G 0 part /var/www/vhosts
sdc 8:32 0 50G 0 disk
└─sdc1 8:33 0 50G 0 part /var/lib/psa
sr0 11:0 1 680M 0 rom
[ ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 12M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda3 45G 7.0G 36G 17% /
/dev/sda1 976M 135M 775M 15% /boot
/dev/sdc1 50G 53M 47G 1% /var/lib/psa
/dev/sdb1 197G 126G 62G 68% /var/www/vhosts
tmpfs 1.6G 0 1.6G 0% /run/user/0
[ ~]# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009c4b4
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 9910271 3905536 82 Linux swap / Solaris
/dev/sda3 9910272 104855551 47472640 83 Linux
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8e948ef1
Device Boot Start End Blocks Id System
/dev/sdb1 2048 2147483647 1073740800 83 Linux
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7677284e
Device Boot Start End Blocks Id System
/dev/sdc1 2048 104857599 52427776 83 Linux
I found this answer on an external page, but I'm not familiar with the commands and cannot tell if that's the right way to go (if allowed, I can paste the URL). The partition paths have not been updated to match mine.
There are three steps to take:
alter your partition table so sda2 ends at the end of the disk
re-read the partition table (will require a reboot)
resize your LVM PV using pvresize
Step 1 - Partition table: Run fdisk /dev/sda. Issue p to print your current partition table and copy that output to some safe place. Now issue d followed by 2 to remove the second partition. Issue n to create a new second partition. Make sure the start equals the start of the old second partition from the table you printed earlier. Make sure the end is at the end of the disk (usually the default). Issue t followed by 2 followed by 8e to set the partition type of your new second partition to 8e (Linux LVM).
Issue p to review your new partition layout and make sure the start of the new second partition is exactly where the old second partition was.
If everything looks right, issue w to write the partition table to disk. You will get an error message from partprobe that the partition table couldn't be re-read (because the disk is in use).
Step 2 - Reboot your system: This step is necessary so the partition table gets re-read.
Step 3 - Resize the LVM PV: After your system has rebooted, invoke pvresize /dev/sda2. Your LVM physical volume will now span the rest of the drive, and you can create or extend logical volumes into that space.
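Condensed into commands, the quoted steps look roughly like this (a sketch only, using the /dev/sda example from that answer rather than the disks on this system, and only valid when the partition holds an LVM physical volume):
fdisk /dev/sda        # interactively: p, d, 2, n (keep the old start, default end), t, 2, 8e, p, w
reboot                # so the kernel re-reads the changed partition table
pvresize /dev/sda2    # grow the LVM physical volume into the newly added space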
The question is: is that the right way to increase the partition size without losing any data on it for a CentOS system?
Thank you
As you can see the partition
sdb 8:16 0 1T 0 disk
└─sdb1 8:17 0 1024G 0 part /var/www/vhosts
is already 1 TB, so you only need to extend the filesystem. If your filesystem is ext4, you can grow it with resize2fs, which takes the block device:
resize2fs /dev/sdb1
If your filesystem is xfs, use xfs_growfs, which takes the mount point:
xfs_growfs /var/www/vhosts
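If you are not sure which filesystem is on the partition, check it before growing (the device and mount point here match the ones from the question):
lsblk -f /dev/sdb1       # FSTYPE column shows ext4, xfs, ...
df -T /var/www/vhosts    # Type column for the mounted filesystem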

Pod killed with "The node was low on resource: inodes" but df -i shows 29% usage only

Sometimes, when running certain builds via self-hosted GitLab CI in a Kubernetes cluster, the runners fail with the K8s event The node was low on resource: inodes. Upon checking the respective node's inodes I don't see any problem. This occurs only on specific CI runs, which seem to install a lot of files with npm. Sadly the Node Exporter always reports zero free inodes with its metric container_fs_inodes_free.
Since the inode usage on the node itself is very low, I don't know how to debug this problem further. Do you have ideas/hints I can follow?
All nodes are set up with VMware.
Here is the build-log:
$ npx lerna run typecheck --scope=myproduct --stream
lerna notice cli v3.22.1
lerna info versioning independent
lerna info ci enabled
lerna notice filter including "myproduct"
lerna info filter [ 'myproduct' ]
lerna info Executing command in 1 package: "yarn run typecheck"
myproduct: yarn run v1.22.5
myproduct: $ tsc --noEmit
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 137
Here is the output of df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 4079711 454 4079257 1% /dev
tmpfs 4083138 1 4083137 1% /dev/shm
tmpfs 4083138 1738 4081400 1% /run
tmpfs 4083138 17 4083121 1% /sys/fs/cgroup
/dev/mapper/vg00-lv_root 327680 93629 234051 29% /
/dev/mapper/vg00-lv_opt 131072 18148 112924 14% /opt
/dev/mapper/vg00-lv_var 327680 16972 310708 6% /var
/dev/sda1 128016 351 127665 1% /boot
/dev/mapper/vg00-lv_home 32768 73 32695 1% /home
/dev/mapper/vg00-lv_tmp 65536 53 65483 1% /tmp
/dev/mapper/vg01-lv_docker 3211264 409723 2801541 13% /var/lib/docker
/dev/mapper/vg00-lv_varlog 65536 283 65253 1% /var/log
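One thing worth checking when the numbers don't add up is the kubelet's own view of the filesystems it evicts on (nodefs and imagefs), since those can differ from what df -i on / reports; a sketch, with <node-name> as a placeholder:
df -i /var/lib/kubelet /var/lib/docker
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" | grep -i inode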

Ran out of Docker disk space

I have this Docker command:
docker run -d mongo
This will pull the image and run a MongoDB server in a Docker container.
However, I get an error:
no space left on device
I am on macOS, using the newer versions of Docker which use a native hypervisor instead of VirtualBox (I think that's correct).
Here is the exact error message from the mongo container:
$ docker logs efee16702c5756659d563b98d4ae0f58ecf1f1bba8a54f63443c0ae4b520ab4e
about to fork child process, waiting until server is ready for connections.
forked process: 21
2017-05-04T20:23:51.412+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2017-05-04T20:23:51.430+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.Lo035QkbfL: No space left on device
ERROR: child process failed, exited with error number 1
Any idea how to fix this and prevent it from happening in future?
As suggested, the output of df -h is:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 116Gi 349Gi 25% 1963838 4293003441 0% /
devfs 183Ki 183Ki 0Bi 100% 634 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
Output of docker info is:
$ docker info
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 741
Server Version: 17.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.13-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952 GiB
Name: moby
ID: OR4L:WYWW:FFAP:IDX3:B6UK:O2AN:UVTO:EPH6:GYSV:4GV4:L5WP:BQTH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 17
Goroutines: 30
System Time: 2017-05-04T20:45:27.056157913Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
As you state in the comments to the question, ls -altrh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 returns the following:
-rw-r--r--@ 1 alexamil staff 53G
This is a known bug on macOS (and actually not only there), and an official dev comment can be found here. Except for one thing: I read that different people get different size limits. In the comment it is 64 GB, but for another person it was 20 GB.
There are a couple of workarounds, but no definitive solution that I could find.
The manual one
Run docker ps -a and manually remove all unused containers, then run docker images and manually remove all intermediate and unused images.
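Roughly the same cleanup can be done with the built-in prune commands (Docker 1.13 and later); they delete stopped containers and dangling images, so review what is listed before confirming:
docker system df          # show how much space containers, images and volumes use
docker container prune    # remove all stopped containers
docker image prune        # remove dangling images; add -a to remove every unused image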
The simplest one
Delete the Docker.qcow2 file entirely. But you will lose all images and containers. Completely.
The less simple one
Another way is to run docker volume prune, which will remove all unused volumes.
The resizing one (keeps the data)
Another idea that comes to me is to expand the disk image size with QEMU or something like it:
$ brew install qemu
$ /Applications/Docker.app/Contents/MacOS/qemu-img resize ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 +5G
After you have expanded the image, you will need to run a VM in which you run GParted against Docker.qcow2 and expand the partition to use the added space. You could use the GParted Live ISO for that:
$ qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 -cdrom ~/Downloads/gparted-live.iso -boot d -device usb-mouse -usb
Some people report this either doesn't work or doesn't help.
Yet another resizing one (wipes the data)
Create a substitute image with the desired size (120 GB):
$ qemu-img create -f qcow2 ~/data.qcow2 120G
$ cp ~/data.qcow2 /Applications/Docker.app/Contents/Resources/moby/data.qcow2
$ rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
data.qcow2 is copied to ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 when you restart docker.
This workaround comes from this comment.
Hope this helps. Good luck!

Error "Disk quota exceeded" even though quotas are disabled and there is enough disk space

Recently a strange thing started happening on CentOS: the user cannot create files in their own directory:
[deployer@server ~]$ echo test > test.file
-bash: echo: write error: Disk quota exceeded
Although no quotas have been set:
[deployer@server ~]$ quota
Disk quotas for user deployer (uid 500): none
Disk space is sufficient:
[deployer@server ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vzfs 9.6G 6.9G 2.8G 72% /
none 256M 4.0K 256M 1% /dev
There should not be a problem with inodes either:
[deployer@server ~]$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vzfs 10000000 130959 9869041 2% /
none 65536 95 65441 1% /dev
Can you please tell me what the problem could be?
You most likely also have a group quota enabled for that user's group. Check both:
repquota -avug | grep username
repquota -avug | grep groupname
Edit the quota for the group:
edquota -g groupname
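If you prefer a non-interactive way once the offending group is found, setquota can set or clear the limits directly (a sketch with placeholder values; groupname and the filesystem are assumptions):
repquota -g /                        # report group usage and limits on /
setquota -g groupname 0 0 0 0 /      # clear block and inode limits for that group on /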