Docker container: MongoDB "Insufficient free space for journal files"

I am running MongoDB inside a Docker container (Docker 1.10.1, on OS X) and it is giving this error:
MongoDb Insufficient free space for journal files
I am not able to find out whether the issue is on the host, in the container, or in VirtualBox.
However, on my host I have:
Filesystem  Size   Used  Avail  Capacity  iused     ifree      %iused  Mounted on
/dev/disk1  465Gi  75Gi  389Gi  17%       19777401  102066309  16%     /
And on the docker container:
Filesystem  Inodes   IUsed   IFree   IUse%  Mounted on
none        1218224  742474  475750  61%    /
I have also mounted a volume from the host with:
docker run -it -v /Users/foobar/Projects/compose:/data/db mongoImage:0.1 /bin/bash
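(Worth noting: with Docker 1.10 on OS X, the daemon runs inside a boot2docker VirtualBox VM, so the VM's own disk can fill up even when the Mac has plenty of free space. A quick check, assuming a Docker Toolbox machine named "default":)
$ docker-machine ssh default
docker@default:~$ df -h /mnt/sda1    # boot2docker keeps /var/lib/docker on this persistent disk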

Thanks for the comments @andy; the issue did seem to be within the VirtualBox environment.
I was able to resolve the issue by:
1. backing up all Docker images (see the sketch below)
2. cloning the default VirtualBox ISO (as a backup)
3. deleting the default VirtualBox ISO and all associated files
4. restarting Docker; a new default VirtualBox ISO was created
This resolved the issue (which I expect to have again at some point).
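For the backup step, docker save/load is one way to carry images across the VM rebuild (a minimal sketch; the tarball name is illustrative, the image name is from the question):
$ docker save -o mongo-backup.tar mongoImage:0.1    # write the image to a tarball on the host
$ # ...recreate the VM, then restore:
$ docker load -i mongo-backup.tar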

Related

Unable to resize /dev/sda1 of GCP postgres

I created a Postgres VM in GCP using these instructions https://joncloudgeek.com/blog/deploy-postgres-container-to-compute-engine/#create-a-compute-instance-running-a-postgres-container with a 10GB disk. Everything has worked fine for the last couple of months, but I seem to have run out of space on /dev/sda1. So I increased the disk size to 400GB, but I can't resize /dev/sda1 using the standard command "sudo growpart /dev/sda 1": I keep getting "command not found" (see the note after the solution).
Solution for me:
1. Create a machine image of the container.
2. Spin up a new VM based on the machine image created.
3. Delete the old VM.
This created a new Postgres VM with a 400GB disk.
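As an aside on the "command not found" error: growpart is not installed by default on many images; on Debian/Ubuntu it ships in the cloud-guest-utils package. A hedged sketch, assuming an ext4 root filesystem:
$ sudo apt-get install -y cloud-guest-utils
$ sudo growpart /dev/sda 1      # extend partition 1 to fill the enlarged disk
$ sudo resize2fs /dev/sda1      # grow the ext4 filesystem to match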

Postgres & Docker OS disk cache

I'm using docker-compose to run my database container, which has a fairly small memory limit (set via the mem_limit setting).
Will the Postgres container use the Docker host's disk cache (which has no memory limit), or should I make sure the container itself has enough free memory for disk caching?
The host is Debian Linux (kernel 4.9.246-2).
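For reference, mem_limit in docker-compose corresponds to the --memory flag on docker run; a minimal sketch with an illustrative 256m limit:
$ docker run -d --memory=256m --name db postgres    # cap the container's memory cgroup at 256 MiB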

How to explain Ceph space usage

I looked up RBD disk space usage, but found different statistics from Ceph and the host which mounts the disk.
From Ceph:
$ rbd -p rbd du
NAME                                                         PROVISIONED  USED
kubernetes-dynamic-pvc-13a2d932-6be0-11e9-b53a-0a580a800339  40GiB        37.8GiB
From the host which mounts the disk
$ df -h
Filesystem  Size   Used   Available  Use%  Mounted on
/dev/rbd0   39.2G  26.6G  10.6G      72%   /data
How could I explain the difference?
You can check the mount options of the /dev/rbd0 device; there is likely no 'discard' option. Without that option the filesystem cannot report reclaimed space back to Ceph, so Ceph has no idea how much space is actually occupied on the RBD volume. This is not a big problem and can safely be ignored. You can rely on the stats reported by the kubelet.
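To verify, and to hand freed blocks back to Ceph when discard is supported by the kernel's rbd driver, something like this should work (the mount point is from the question):
$ findmnt -no OPTIONS /data     # look for 'discard' among the mount options
$ sudo fstrim -v /data          # one-off trim; reports how many bytes were discarded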

Kubernetes in vmware vsphere issues

I am following this guide to set up my cluster, and it all works fine.
However, when I install fabric8 in this cluster I run out of disk on the minions. The image, kube.vmdk, is only about 6GB; it is /var/lib/docker that fills up. How do I solve this?
Using the vSphere GUI, the option to resize the disk is greyed out.
Should I attach a second disk to the minions and then mount this disk? Where should I mount it? /var/lib/docker?
I would appreciate any input.
Docker's images are stored in /var/lib/docker (more precisely, in the storage driver's directory, e.g. /var/lib/docker/aufs when using the aufs storage driver), so when Kubernetes reports that the disk is filling up, that is the directory to check.
So you can (as sketched below):
1. Remove all the images in Docker (not strictly necessary; you can instead copy everything to the new disk).
2. Stop the Docker daemon.
3. Mount your new disk at /var/lib/docker (or at the storage driver's subdirectory).
4. Start the Docker daemon.
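A hedged sketch of those steps (assumes systemd; /dev/sdb1 is an illustrative name for the newly attached disk):
$ sudo systemctl stop docker
$ sudo mkfs.ext4 /dev/sdb1                            # format the new disk
$ sudo mv /var/lib/docker /var/lib/docker.old         # keep the old contents around
$ sudo mkdir /var/lib/docker
$ sudo mount /dev/sdb1 /var/lib/docker                # add an /etc/fstab entry to make this permanent
$ sudo cp -a /var/lib/docker.old/. /var/lib/docker/   # optional: carry existing images over
$ sudo systemctl start docker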
If you are not sure which storage driver your Docker is using, run docker info on your node; the output will contain something like this:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 139
Dirperm1 Supported: true
It seems that you have run out of disk space. You can remove all the files in /var/lib/docker and mount the second disk there. Finally, restart dockerd.

Limit Disk usage in Docker+MongoDB

I am using the official mongo Docker image to start a MongoDB container on a machine whose boot disk is limited (e.g. 10G). I configured Docker to run with the Google Cloud Logging driver, hoping Google would store all the logs and save my local disk space. However, I notice the disk continues to grow:
$ df -h
/dev/sda1 9.9G 4.5G 4.9G 49%
As I dug deeper, I realized the size of the Docker containers seems to be growing over time.
$ sudo du -sh /var/lib/docker/
3.6G /var/lib/docker/
However, I can't go further, as I somehow can't access the directories within.
If I go inside the container and run du -sh on the root filesystem, I don't find any suspicious directories occupying space.
So my problem is: how do I find out where the disk space is being used, and how do I reclaim it?
My Docker startup command (shown without project options):
docker run -d --log-driver=gcplogs mongo mongod
EDIT: I noticed the growth has stopped at 4.5GB (from ~3GB) for a while, so I suppose it has reached some equilibrium now.
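To see where the space is going, run du as root on the host (which avoids the permission errors), and on Docker 1.13+ docker system df gives a per-category breakdown; a sketch:
$ sudo du -h --max-depth=2 /var/lib/docker | sort -hr | head    # biggest subdirectories first
$ docker system df                                              # images / containers / volumes summary (Docker 1.13+)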