Limit Disk usage in Docker+MongoDB - mongodb

I am using the official mongo Docker image to start a MongoDB container on a machine whose boot disk is limited (e.g. 10G). I configured Docker to run with the Google Cloud Logging driver, hoping Google would store all the logs and save my local disk space. However, I notice the disk usage continues to grow:
$ df -h
/dev/sda1 9.9G 4.5G 4.9G 49%
As I dug deeper, I realized the size of the Docker containers seems to be growing over time.
$ sudo du -sh /var/lib/docker/
3.6G /var/lib/docker/
However, I can't dig further because I can't access the directories within.
If I go inside the container and run du -sh on the root, I don't find any suspicious directories occupying space.
So my problem is: how do I find out where the disk space is being used, and how do I reclaim it?
My Docker startup command (shown without project options):
docker run -d --log-driver=gcplogs mongo mongod
EDIT: I noticed the growth has stopped at 4.5GB (up from ~3GB) for a while now, so I suppose it has reached some equilibrium.
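For anyone investigating the same question, the space is usually not the log files when a remote driver like gcplogs is in use; note also that the official mongo image declares an anonymous volume for /data/db, so the database files end up under /var/lib/docker/volumes rather than in the container's writable layer. A minimal diagnostic sketch, assuming a reasonably recent Docker and the default /var/lib/docker location (the storage-driver directory may be aufs or overlay2 depending on your setup):
# overview of images, containers, local volumes and build cache
docker system df -v
# per-container size of the writable layer
docker ps --size
# break down /var/lib/docker itself (run as root; adjust overlay2 to your storage driver)
sudo du -sh /var/lib/docker/overlay2 /var/lib/docker/volumes /var/lib/docker/containers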

Related

is it possible to shrink the spaces of io.containerd.snapshotter.v1.overlayfs folder in kubernetes

Today I found that the folder io.containerd.snapshotter.v1.overlayfs on the Kubernetes (v1.21.3) host takes up too much space:
[root@k8smasterone kubernetes.io~nfs]# pwd
/var/lib/kubelet/pods/8aafe99f-53c1-4bec-8cb8-abd09af1448f/volumes/kubernetes.io~nfs
[root@k8smasterone kubernetes.io~nfs]# duc ls -Fg /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/
13.5G snapshots/ [++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++]
2.2M metadata.db [
It takes 13.5GB of disk space. Is it possible to shrink this folder?
The directory /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs is where the various container and image layers are persisted by containerd. These layers are downloaded based on the containers running on the node. If the node starts running out of space, the kubelet has the ability to garbage collect unused images, which will reduce the size of this directory. You can also configure a larger boot disk for the node pools if needed.
It is expected that this directory grows from the time a node is created. However, when the node's disk usage goes above 85%, garbage collection will attempt to identify images that can be removed. It may not be able to remove images if they are currently in use by a container running on the node or if they have been pulled recently.
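If you want to check whether the node is actually approaching that pressure threshold before pruning anything, a quick check might look like the following (the node name is taken from the prompt above and is only a placeholder; this assumes kubectl and crictl are available on the machine):
# disk-pressure condition as reported by the kubelet
kubectl describe node k8smasterone | grep -i pressure
# how much space the image filesystem is using, as seen by containerd
crictl imagefsinfo
# list images so you can see what garbage collection could reclaim
crictl images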
If you want to remove unused container images with just containerd, you can use the below command:
$ crictl rmi --prune
You can also use the docker image prune command to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container.
To remove all images which are not used by existing containers, use the -a flag:
$ docker image prune -a
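As a broader note, not from the original answer: on a Docker node (rather than a containerd-only one), docker system prune goes further than docker image prune, removing stopped containers, unused networks, unused images and build cache; with --volumes it also removes unused local volumes, so treat that flag as destructive for any data kept in volumes:
$ docker system prune -a
$ docker system prune -a --volumes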

Unable to resize /dev/sda1 of GCP postgres

I created a Postgres VM in GCP using these instructions https://joncloudgeek.com/blog/deploy-postgres-container-to-compute-engine/#create-a-compute-instance-running-a-postgres-container with a 10GB disk. Everything has worked fine for the last couple of months, but I seem to have run out of space on /dev/sda1. So I increased the disk size to 400GB, but I can't resize /dev/sda1 using the standard command "sudo growpart /dev/sda 1"; I keep getting "command not found".
Solution for me:
Create a machine image of the VM running the container.
Spin up a new VM based on the machine image created.
Delete old VM.
This created a new Postgres VM with a 400GB disk.
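For reference, an in-place resize may also be possible depending on the image: on a standard Debian/Ubuntu guest (an assumption; the linked guide uses a container-optimized image, where this does not apply), growpart is provided by the cloud-guest-utils package, so the usual flow would be something like:
# install the tool that provides growpart (Debian/Ubuntu assumption)
sudo apt-get update && sudo apt-get install -y cloud-guest-utils
# grow partition 1 of /dev/sda to fill the enlarged disk
sudo growpart /dev/sda 1
# grow the filesystem on it (assumes ext4)
sudo resize2fs /dev/sda1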

Google cloud after VM Import no free space available on root drive

I created a Postgres server locally in VirtualBox using Ubuntu 16.04. Using the import tool to move it to Google Cloud seemed to work fine, but the root drive shows 100% full. None of the disk-expansion instructions (including creating a snapshot and recreating the boot drive) seem to make any space available.
There seem to be a boot partition and a root partition. The root partition shows it is completely used; the boot partition shows space available, but the disk should be 15G in size, not 720M.
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 5.3M 365M 2% /run
/dev/mapper/techredo--vg-root 2.5G 2.5G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sdb1 720M 121M 563M 18% /boot
tmpfs 370M 0 370M 0% /run/user/406485188
I checked whether it is possible to use LVM in GCP instances, and found that you are free to use it, but it is not supported by Google Cloud since instances don't use LVM by default.
On the other hand, you need to make sure that the Linux Guest Environment is installed in your instance so you can get the automatic resizing feature. Please follow this guide to learn how to validate it: https://cloud.google.com/compute/docs/images/install-guest-environment#wgei
Since your root partition is full and you're not able to install more programs, I suggest two workarounds:
Workaround 1: Create a new VirtualBox VM and import it again. Please note that your root partition is pretty small (2.5G), so this time create a partition of at least 10GB and avoid using LVM during the installation.
After your instance is ready in GCP, check whether the Linux Guest Environment is installed; if not, install it: https://cloud.google.com/compute/docs/images/install-guest-environment
Workaround 2: Check which directory is causing problems and which files are consuming your disk space, delete them to regain space, install the Guest Environment, and try to resize your instance.
a) To check the directory and file sizes, follow these steps:
There are several tools that can display your disk usage graphically, but since your root partition is full you'll have to get the information by running commands (old-school style):
1. Please go to the root directory:
cd /
2. Please run this command to get the size of the main subdirectories under the root partition:
sudo du -aBM -d 1 . | sort -nr | head -20
NOTE: Identify which directory is eating your root partition.
3. Please run this command to get a full list of the files and their sizes:
du -k * | sort -nr | cut -f2 | xargs -d '\n' du -sh
4. NOTE: The above command displays all files and directories too fast to read, so to scroll down slowly, run the same command with "less" appended:
du -k * | sort -nr | cut -f2 | xargs -d '\n' du -sh | less
Press the spacebar to scroll down.
5. Please keep in mind that you have to go to the directory you want to analyze before running the commands in step 3 or 4 (in case you want to analyze another directory).
In addition to this, you can run "apt-get clean" to clear the cached downloaded packages (.deb files), which often consume a good portion of your disk.
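A few related cleanup commands that often free space on Debian/Ubuntu systems (whether they help depends on what is actually filling the disk):
# remove cached .deb package files
sudo apt-get clean
# remove packages that were installed as dependencies and are no longer needed
sudo apt-get autoremove --purge
# cap the systemd journal size, if journald logs are large (systemd-based systems only)
sudo journalctl --vacuum-size=100M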
b) To resize your instance, you have 2 options:
Resize your VM instance "primary-server" by following this guide[1].
NOTE: The steps included in this option are pretty easy to follow; if this doesn't work, try the second option, which requires more advanced Linux skills.
Create a snapshot from the VM instance "primary-server".
2.1 Create a new instance based on a Linux distribution.
2.2 Once it's been created, stop the instance.
2.3 Follow this guide to add an additional disk[2].
NOTE: Basically, you have to edit the instance "primary-server" and add an additional disk. Don't forget to select the snapshot option from the "Source type" list and click on the snapshot you just created.
2.4 Start the instance.
2.5 Mount the disk by following this guide[3].
NOTE: Please skip step 4 of that guide. The additional disk is actually a boot disk, so it has already been formatted; don't format it, just mount it.
2.6 Check the permissions of the file "/etc/fstab".
NOTE: The permissions should be "-rw-r--r--" and the owner "root"
2.6.1 Delete files to reduce the disk size.
2.7 Unmount the disk at OS level.
2.8 Stop the instance.
2.9 Detach the additional disk from the new instance in GCP.
NOTE: Please follow this guide[4] and instead of clicking on X next to the boot disk, please click on X next to the additional disk.
2.10 Create a new instance, and instead of using an image in the "Boot disk" section, use the disk you just restored.
NOTE: For this, go to the "Boot disk" section and click the "Change" button, then go to the "Existing" tab and select the disk you just restored.
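For completeness, roughly the same snapshot/attach/detach flow can be driven from the gcloud CLI. The disk, snapshot, zone, and rescue-instance names below are hypothetical placeholders, and the mount step inside the rescue VM depends on the guest layout (with LVM you would first activate the volume group with vgchange -ay):
gcloud compute disks snapshot primary-server --snapshot-names=primary-server-snap --zone=us-central1-a
gcloud compute disks create restored-disk --source-snapshot=primary-server-snap --zone=us-central1-a
gcloud compute instances attach-disk rescue-vm --disk=restored-disk --zone=us-central1-a
# ...mount the disk inside rescue-vm, delete files, unmount, then detach...
gcloud compute instances detach-disk rescue-vm --disk=restored-disk --zone=us-central1-a
gcloud compute instances create new-server --disk=name=restored-disk,boot=yes --zone=us-central1-a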
REFERENCES:
[1] https://cloud.google.com/compute/docs/disks/add-persistent-disk#inaccessible_instance
[2] https://cloud.google.com/compute/docs/disks/add-persistent-disk#create_disk
[3] https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
[4] https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk#detach_disk
Please let me know the results.

IBM Cloud: How do I see the newly attached block storage in a virtual server?

I created a virtual server but found that the space was too small, so I wanted to add additional disk space to it. After attaching it, I'm not able to see the newly attached disk using df -h in the virtual server. How can I see the newly attached disk?
There are several commands that you can use: fdisk -l and lsblk.
The first disk drive should appear as xvda. Usually, there is a second disk drive xvdb that is used for swap. Your new disk will appear as xvdc. Note that naming is OS-specific.
You will need to partition the new disk, format a file system and then mount the file system to a directory. The exact steps are OS-dependent.
Note: The following steps may vary depending on the OS. The example below is specific to an IBM VSI running Ubuntu 18.04 LTS.
First, confirm the disk is attached to the VSI (virtual server instance) and check the disks on the node with the fdisk command:
fdisk -l | grep xvdc
Create a directory where the new partition will be mounted:
mkdir /data1
Create the partition using the fdisk command:
fdisk /dev/xvdc
fdisk will then prompt for input:
n # and then hit Enter to accept the defaults until the partition is created
Format the new partition using mkfs.ext4:
mkfs.ext4 /dev/xvdc1
Mount the partition on the new directory:
mount /dev/xvdc1 /data1
Check the disk and its mount point:
df -h
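To make the mount survive a reboot, you would typically also add an /etc/fstab entry. A sketch, run as root like the steps above and assuming the partition created earlier is /dev/xvdc1 formatted as ext4:
# find the partition's UUID (more stable than the device name)
blkid /dev/xvdc1
# append an fstab entry; replace the UUID with the value printed above
echo 'UUID=replace-with-uuid /data1 ext4 defaults,nofail 0 2' >> /etc/fstab
# verify the entry mounts cleanly
mount -a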

Kubernetes in vmware vsphere issues

I am following this guide to set up my cluster. It all works fine.
However, when I install fabric8 in this cluster I run out of disk space on the minions. The image, kube.vmdk, is only about 6GB. It is /var/lib/docker that gets filled up. How do I solve this?
In the VMware GUI, the option to resize the disk is greyed out.
Should I attach a second disk to the minions and then mount this disk? Where should I mount it? /var/lib/docker?
I would appreciate any input.
Docker's images are stored in /var/lib/docker (more precisely, in the storage driver's directory, e.g. /var/lib/docker/aufs when using the aufs storage driver), so when Kubernetes reports that the disk is filling up, check that directory.
So you can (see the sketch after this list):
Remove all the images in Docker (not strictly necessary; you can instead copy everything to the new directory).
Stop the Docker daemon.
Mount your new disk at /var/lib/docker (or just at the storage driver's directory, e.g. /var/lib/docker/aufs).
Start the Docker daemon.
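A rough sketch of those steps on a systemd-based node, run as root and assuming the new disk shows up as /dev/sdb (the device name and filesystem choice are assumptions, not from the original answer):
# stop Docker so nothing writes to /var/lib/docker while it is moved
systemctl stop docker
# put a filesystem on the new disk and mount it temporarily
mkfs.ext4 /dev/sdb
mkdir -p /mnt/docker-new
mount /dev/sdb /mnt/docker-new
# copy the existing data (optional; you can also start from an empty directory)
rsync -aHAX /var/lib/docker/ /mnt/docker-new/
# remount the disk at the real location and make it permanent
umount /mnt/docker-new
mount /dev/sdb /var/lib/docker
echo '/dev/sdb /var/lib/docker ext4 defaults 0 2' >> /etc/fstab
systemctl start docker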
If you are not sure which storage driver your Docker is using, run docker info on the node; the output will contain something like this:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 139
Dirperm1 Supported: true
It seems that you have run out of disk space. You can remove all the files in /var/lib/docker, mount the second disk there, and finally restart dockerd.