MongoDB Docker container: ERROR: Cannot write pid file to /tmp/tmp.aLmNg7ilAm: No space left on device - mongodb

I started a MongoDB container like so:
docker run -d -p 27017:27017 --net=cdt-net --name cdt-mongo mongo
I saw that my MongoDB container exited:
0e35cf68a29c mongo "docker-entrypoint.s…" Less than a second ago Exited (1) 3 seconds ago cdt-mongo
I checked my Docker logs, and I see:
$ docker logs 0e35cf68a29c
about to fork child process, waiting until server is ready for connections.
forked process: 21
2018-01-12T23:42:03.413+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-01-12T23:42:03.417+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.aLmNg7ilAm: No space left on device
ERROR: child process failed, exited with error number 1
Does anyone know what this error is about? Not enough space in the container?

I had to delete old Docker images to free up space; here are the commands I used:
# remove all unused / orphaned images
echo -e "Removing unused images..."
docker rmi -f $(docker images --no-trunc | grep "<none>" | awk "{print \$3}") 2>&1 | cat;
echo -e "Done removing unused images"
# clean up stuff -> using these instructions https://lebkowski.name/docker-volumes/
echo -e "Cleaning up old containers..."
docker ps --filter status=dead --filter status=exited -aq | xargs docker rm -v 2>&1 | cat;
echo -e "Cleaning up old volumes..."
docker volume ls -qf dangling=true | xargs docker volume rm 2>&1 | cat;

We've experienced this problem recently while using docker-compose with mongo and a bunch of other services. There are two fixes which have worked for us.
Clear down unused stuff
# close down all services
docker-compose down
# clear unused docker images
docker system prune
# press y
Increase the disk image size available to Docker - this will depend on your installation of Docker. On Mac, for example, it defaults to 64 GB and we doubled it to 128 GB via the UI.
We've had this problem in both Windows and Mac and the above fixed it.
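Before resizing, it can help to confirm where the space is actually going. A quick check (docker system df needs Docker 1.13 or newer; the du path assumes a Linux host with Docker's default data root):
# summary of space used by images, containers, and volumes
docker system df
# total size of Docker's data root
sudo du -sh /var/lib/docker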

Related

mkdir: /data: Read-only file system

I installed MongoDB using these instructions:
https://github.com/mongodb/homebrew-brew
However, I cannot connect to a Mongo shell when I run mongo.
How do I connect to the Mongo shell?
I was made aware of mongosh; when I run the mongosh command after installing it, I get connect ECONNREFUSED.
When I run ps -ef | grep mongod | grep -v grep | wc -l | tr -d '' it shows zero processes running, which would logically explain why I cannot connect. But then what do I do when brew services start mongodb-community says it started, yet there is zero process when I look into it?
I don't know if sudo mkdir -p /data/db is what is missing for this to run correctly, but when I attempt to run that step, I get mkdir: /data: Read-only file system.
So I placed the directory in /Users/<username>/data/db and then ran mongod --dbpath=/Users/<username>/data/db and nothing happened.
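One way to debug this (a diagnostic sketch rather than a definitive fix; the paths follow the question above) is to run mongod in the foreground so any startup error prints straight to the terminal, since brew services can report a start even when mongod exits immediately:
# run in the foreground against the user-owned directory
mongod --dbpath /Users/<username>/data/db --port 27017
# in a second terminal, connect explicitly
mongosh --port 27017
# and check what launchd thinks of the Homebrew service
brew services list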

cannot connect to "workspaceMount" at container launch from vscode

Using VS Code and WSL2, I tried to launch a container using the default method and no customization. This generated the same error as below.
So, following the vscode docs, I set a "workspaceMount" in devcontainer.json:
"workspaceMount": "source=${localWorkspaceFolder},target=/workspaces/myRepo,type=bind,consistency=delegated",
"workspaceFolder": "/workspaces",
I select Reopen in Container; the launch sequence happens, but an error is generated:
a mount config is invalid, make sure it has the right format and a source folder that exists on the machine where the Docker daemon is running
The log error is:
Command failed: docker run -a STDOUT -a STDERR --mount source=d:\git\myRepo,target=/workspaces/myRepo,type=bind,consistency=delegated --mount type=volume,src=vscode,dst=/vscode -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=d:\git\myRepo --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --entrypoint /bin/sh vsc-myRepo-a878aa9edbcf04f717c76e764dabcde6 -c echo Container started ; trap "exit 0" 15; while sleep 1 & wait $!; do :; done
By launching the container from Docker Desktop I can confirm:
cd /workspaces
ls -l
drwxr-xr-x 2 root root 4096 Dec 3 11:48 myRepo
Is this issue due to the owner being root:root?
Should this be changed by chown in the Dockerfile? If so, could you provide sample code to do this? Is it done with RUN chown ...?
I guess you followed the documentation in https://code.visualstudio.com/docs/remote/containers-advanced
The source should contain the subfolder "myRepo" and the target only "/workspaces":
"workspaceMount": "source=${localWorkspaceFolder}/myRepo,target=/workspaces,type=bind,consistency=delegated",
"workspaceFolder": "/workspaces",

howto: elastic beanstalk + deploy docker + graceful shutdown

Hi great people of stackoverflow,
We're hosting a docker container on EB with Node.js-based code running on it.
When redeploying our docker container we'd like the old one to do a graceful shutdown.
I've found help & guides on how our code could receive a SIGTERM signal produced by the 'docker stop' command.
However, further investigation on the EB machine running Docker, at:
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current to the new staged container, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to docker stop?
Or in general a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self answering as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script will make sure whatever is inside the container shuts down gracefully.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon it.
This is totally unrelated to Docker; you can test it running anywhere (locally, for example).
There is a lot of info on the net about getting this kind of behaviour for different kinds of apps (be it Ruby, Node.js, etc.).
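As an illustration, here is a minimal shell sketch of the pattern (illustrative only, not part of the original setup; a Node.js app would register a SIGINT handler via process.on to the same effect):
#!/bin/sh
# toy long-running worker that exits cleanly on SIGINT
cleanup() {
  echo "caught SIGINT, finishing in-flight work..."
  # flush queues / close connections here
  exit 0
}
trap cleanup INT
while :; do
  sleep 1
done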
Second,
your EB/Docker-based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put two custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # find currently running docker
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO GRACEFUL SHUTDOWN FOUND"
        exit 0
      fi
      # give graceful kill command to all running .js files (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"
      # wait (max 5 mins) until processes are done and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          let TRIES-=1
          sleep 3
        fi
      done
      echo "failed to graceful shutdown, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small util that backs up the original 01flip and deletes our custom script (if it exists).
gracefulshutdown_02.config is where the magic happens.
It creates a 05gracefulshutdown enact script and makes sure the flip will happen afterwards by renaming 01flip to 10flip.
05gracefulshutdown, the custom script, basically does this:
find the currently running docker container
find all processes that need to be sent a SIGINT (for us, processes with 'workers' in their name)
send a SIGINT to the above processes
loop:
check if the processes from before were killed
continue looping for a set number of tries
if the tries run out, exit with status 1 and don't continue to 10flip; manual intervention is needed.
This assumes you only have one docker container running on the machine, and that you are able to manually hop on to check what's wrong in case it fails (for us it has never happened yet).
I imagine it can also be improved in many ways, so have fun.
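To sanity-check the result after a deploy, listing the enact hooks should show the new ordering (paths as used above):
ls /opt/elasticbeanstalk/hooks/appdeploy/enact
# expect 05gracefulshutdown.sh to sort before the renamed 10flip.sh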

Boot2Docker (on Windows) running Mongo with shared folder (This file system is not supported)

I am trying to start a Mongo container using shared folders on Windows using Boot2Docker. When starting it with docker run -it -v /c/Users/310145787/Desktop/mongo:/data/db mongo, I get a warning message inside the container saying:
WARNING: This file system is not supported.
After starting, mongo shuts down immediately.
Any hints or tips on how to solve this?
Apparently, according to this gist and Sev (sevastos), mongo doesn't support volumes mounted through VirtualBox shared folders:
See the MongoDB Production Notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this operation.
The easiest solution of all, and a proper way to get data persistence, is data volumes:
Assuming you have a container that has VOLUME ["/data"]
# Create a data volume
docker create -v /data --name yourData busybox true
# and use
docker run --volumes-from yourData ...
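On current Docker versions the same idea is usually expressed with a named volume instead of a data-only container; a minimal sketch (the volume name mongo-data is arbitrary):
docker volume create mongo-data
docker run -d --name mongo -v mongo-data:/data/db mongo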
This isn't always ideal, though (the following is for Mac, by Edward Chu (chuyik)):
I don't think it's a good solution, because the data just moved to another container, right?
But it's still inside a container rather than the local system (Mac disk).
I found another solution: use sshfs to map data between the boot2docker VM and your Mac, which may be better since the data is not stored inside a Linux container.
Create a directory to store data inside boot2docker:
boot2docker ssh
mkdir -p /mnt/sda1/dev
Use sshfs to link boot2docker and the Mac:
echo tcuser | sshfs docker@localhost:/mnt/sda1/dev <your mac dir path> -p 2022 -o password_stdin
Run image with mongo installed:
docker run -v /mnt/sda1/dev:/data/db <mongodb-image> mongod
The corresponding boot2docker issue points to docker issue 12590 (Problem with -v shared folders in 1.6 #12590), which suggests the workaround of using a double slash.
Using a double slash seems to work; I checked it locally and it works.
docker run -d -v //c/Users/marco/Desktop/data:/data <image name>
It also works with:
docker run -v /$(pwd):/data
As a workaround I just copy from a folder before the mongo daemon starts. Also, in my case I don't care about journal files, so I only copy database files.
I've used this command in my docker-compose.yml:
command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
And every time before stopping the container I use:
docker exec <container_name> bash -c 'cd /data/db && cp $(ls *.* | grep -v *.lock) /prev'
Note: /prev is set as a volume. path/to/your/prev:/prev
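Put together, the relevant part of the docker-compose.yml would look something like this (a sketch assembled from the command and volume above; the host path is illustrative):
services:
  mongo:
    image: mongo
    command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
    volumes:
      - ./path/to/your/prev:/prev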
Another workaround is to use mongodump and mongorestore.
in docker-compose.yml: command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
in terminal: docker exec <container_name> mongodump
Note: I use sleep because I want to make sure that mongo started, and it takes a while.
I know this involves manual work etc, but I am happy that at least I got mongo with existing data running on my Windows 10 machine, and still can work on my Macbook when I want.
It seems like you don't need the data directory for MongoDB; removing those lines from your docker-compose.yml should make it run without problems.
The data directory is only used by mongo for cache.

How to analyze disk usage of a Docker container

I can see that Docker takes 12GB of my filesystem:
2.7G /var/lib/docker/vfs/dir
2.7G /var/lib/docker/vfs
2.8G /var/lib/docker/devicemapper/mnt
6.3G /var/lib/docker/devicemapper/devicemapper
9.1G /var/lib/docker/devicemapper
12G /var/lib/docker
But, how do I know how this is distributed over the containers?
I tried to attach to the containers by running (the new v1.3 command)
docker exec -it <container_name> bash
and then running 'df -h' to analyze the disk usage. It seems to be working, but not with containers that use 'volumes-from'.
For example, I use a data-only container for MongoDB, called 'mongo-data'.
When I run docker run -it --volumes-from mongo-data busybox, and then df -h inside the container, it says that the filesystem mounted on /data/db (my 'mongo-data' data-only container) uses 11.3G, but when I do du -h /data/db, it says that it uses only 2.1G.
So, how do I analyze a container/volume disk usage? Or, in my case, how do I find out the 'mongo-data' container size?
To see the file size of your containers, you can use the --size argument of docker ps:
docker ps --size
Since 1.13.0, Docker includes a new command, docker system df, to show Docker disk usage.
$ docker system df
TYPE            TOTAL  ACTIVE  SIZE      RECLAIMABLE
Images          5      1       2.777 GB  2.647 GB (95%)
Containers      1      1       0 B       0 B
Local Volumes   4      1       3.207 GB  2.261 GB (70%)
To show more detailed information on space usage:
$ docker system df --verbose
Posting this as an answer because my comments above got hidden:
List the size of a container:
du -d 2 -h /var/lib/docker/devicemapper | grep `docker inspect -f "{{.Id}}" <container_name>`
List the sizes of a container's volumes:
docker inspect -f "{{.Volumes}}" <container_name> | sed 's/map\[//' | sed 's/]//' | tr ' ' '\n' | sed 's/.*://' | xargs sudo du -d 1 -h
Edit:
List all running containers' sizes and volumes:
for d in `docker ps -q`; do
  d_name=`docker inspect -f {{.Name}} $d`
  echo "========================================================="
  echo "$d_name ($d) container size:"
  sudo du -d 2 -h /var/lib/docker/devicemapper | grep `docker inspect -f "{{.Id}}" $d`
  echo "$d_name ($d) volumes:"
  docker inspect -f "{{.Volumes}}" $d | sed 's/map\[//' | sed 's/]//' | tr ' ' '\n' | sed 's/.*://' | xargs sudo du -d 1 -h
done
NOTE: Change 'devicemapper' according to your Docker storage driver (e.g. 'aufs').
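If you are not sure which storage driver your daemon uses, it can be read from docker info:
docker info --format '{{.Driver}}'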
The volume part did not work anymore, so if anyone is interested, I just changed the above script a little bit:
for d in `docker ps | awk '{print $1}' | tail -n +2`; do
  d_name=`docker inspect -f {{.Name}} $d`
  echo "========================================================="
  echo "$d_name ($d) container size:"
  sudo du -d 2 -h /var/lib/docker/aufs | grep `docker inspect -f "{{.Id}}" $d`
  echo "$d_name ($d) volumes:"
  for mount in `docker inspect -f "{{range .Mounts}} {{.Source}}:{{.Destination}}
{{end}}" $d`; do
    size=`echo $mount | cut -d':' -f1 | sudo xargs du -d 0 -h`
    mnt=`echo $mount | cut -d':' -f2`
    echo "$size mounted on $mnt"
  done
done
I use docker stats $(docker ps --format={{.Names}}) --no-stream to get:
CPU usage,
Mem usage / total mem allocated to the container (can be allocated with the docker run command),
Mem %,
Block I/O,
Net I/O
Improving Maxime's answer:
docker ps --size
You'll see something like this:
+---------------+---------------+--------------------+
| CONTAINER ID | IMAGE | SIZE |
+===============+===============+====================+
| 6ca0cef8db8d | nginx | 2B (virtual 183MB) |
| 3ab1a4d8dc5a | nginx | 5B (virtual 183MB) |
+---------------+---------------+--------------------+
When starting a container, the image that the container is started from is mounted read-only (virtual).
On top of that, a writable layer is mounted, in which any changes made to the container are written.
So the Virtual size (183MB in the example) is used only once, regardless of how many containers are started from the same image - I can start 1 container or a thousand; no extra disk space is used.
The "Size" (2B in the example) is unique per container though, so the total space used on disk is:
183MB + 5B + 2B
Be aware that the size shown does not include all disk space used for a container.
Things that are not included currently are:
- volumes
- swapping
- checkpoints
- disk space used for log files generated by the container
https://github.com/docker/docker.github.io/issues/1520#issuecomment-305179362
(this answer is not useful, but leaving it here since some of the comments may be)
docker images will show the 'virtual size', i.e. the total including all the lower layers. So there is some double-counting if you have containers that share the same base image.
documentation
You can use
docker history IMAGE_ID
to see how the image size is distributed between its various sub-components.
Keep in mind that docker ps --size may be an expensive command, taking more than a few minutes to complete. The same applies to container list API requests with size=1. It's better not to run it too often.
Take a look at alternatives we compiled, including the du -hs option for the docker persistent volume directory.
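For example, for volumes on the default local driver, something like this gives a per-volume breakdown (the path assumes Docker's default data root):
sudo du -hs /var/lib/docker/volumes/*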
Alternative to docker ps --size
As "docker ps --size" produces heavy IO load on host, it is not feasable running such command every minute in a production environment. Therefore we have to do a workaround in order to get desired container size or to be more precise, the size of the RW-Layer with a low impact to systems perfomance.
This approach gathers the "device name" of every container and then checks size of it using "df" command. Those "device names" are thin provisioned volumes that a mounted to / on each container. One problem still persists as this observed size also implies all the readonly-layers of underlying image. In order to address this we can simple check size of used container image and substract it from size of a device/thin_volume.
One should note that every image layer is realized as a kind of a lvm snapshot when using device mapper. Unfortunately I wasn't able to get my rhel system to print out those snapshots/layers. Otherwise we could simply collect sizes of "latest" snapshots. Would be great if someone could make things clear. However...
After some tests, it seems that creating a container always adds an overhead of approx. 40 MiB (tested with containers based on the image "httpd:2.4.46-alpine"):
docker run -d --name apache httpd:2.4.46-alpine  // now get the device name from docker inspect and look it up using df
df -T -> 90 MB, whereas "Virtual Size" from "docker ps --size" states 50 MB and a very small payload of 2 bytes -> mysterious overhead of 40 MB
curl/download of a 100 MB file within the container
df -T -> 190 MB, whereas "Virtual Size" from "docker ps --size" states 150 MB and a payload of 100 MB -> overhead of 40 MB
The following shell script prints results (in bytes) that match the results from "docker ps --size" (but keep in mind the mentioned overhead of 40 MB):
for c in $(docker ps -q); do \
  container_name=$(docker inspect -f "{{.Name}}" ${c} | sed 's/^\///g' ); \
  device_n=$(docker inspect -f "{{.GraphDriver.Data.DeviceName}}" ${c} | sed 's/.*-//g'); \
  device_size_kib=$(df -T | grep ${device_n} | awk '{print $4}'); \
  device_size_byte=$((1024 * ${device_size_kib})); \
  image_sha=$(docker inspect -f "{{.Image}}" ${c} | sed 's/.*://g' ); \
  image_size_byte=$(docker image inspect -f "{{.Size}}" ${image_sha}); \
  container_size_byte=$((${device_size_byte} - ${image_size_byte})); \
  \
  echo my_node_dm_device_size_bytes\{cname=\"${container_name}\"\} ${device_size_byte}; \
  echo my_node_dm_container_size_bytes\{cname=\"${container_name}\"\} ${container_size_byte}; \
  echo my_node_dm_image_size_bytes\{cname=\"${container_name}\"\} ${image_size_byte}; \
done
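(Note that the echo lines are formatted as Prometheus-style metrics, i.e. metric_name{label="value"} value, presumably so the output can be fed straight to a monitoring scraper; the my_node_dm_* names are the author's own.)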
Further reading about device mapper: https://test-dockerrr.readthedocs.io/en/latest/userguide/storagedriver/device-mapper-driver/
The docker system df command displays information regarding the amount of disk space used by the docker daemon.
docker system df -v