How to specify a tmpfs volume size in a docker-compose.yml v2.x file?
If the head of your yml file is:
version: "2.1"
services:
service_name_foobar:
...
volumes:
- foo:/tmp
you may specify the tmpfs options like this, for a 7 GB volume:
volumes:
  foo:
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=7g"
Related
Is it possible to store an environment variable from an .env file directly and securely (without it showing up in docker inspect) in a tmpfs storage?
version: "3"
services:
app:
build:
context: .
tmpfs:
- /var/tmp
command: "echo \"${SECRET}\" > /var/tmp/test.txt && ./app -c /var/tmp/test.txt"
My current approach leads to the container restarting itself over and over again.
I want to resize the postgres container's shared memory from the default 64M, so I add:
build:
  context: .
  shm_size: '2gb'
I'm using version 3.6 of the compose file; here is the postgres service definition:
version: "3.6"
services:
#other services go here..
postgres:
restart: always
image: postgres:10
hostname: postgres
container_name: fiware-postgres
expose:
- "5432"
ports:
- "5432:5432"
networks:
- default
environment:
- "POSTGRES_PASSWORD=password"
- "POSTGRES_USER=postgres"
- "POSTGRES_DB=postgres"
volumes:
- ./postgres-data:/var/lib/postgresql/data
build:
context: .
shm_size: '2gb'
However, this change doesn't take effect even though I restart the service with docker-compose down and then up. As soon as I start interacting with postgres to display some data on the dashboard, I get a shared memory error.
Before launching the dashboard:
$ docker exec -it fiware-postgres df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-107615-1541c55e4c3d5e03a7716d5418eea4c520b6556a6fd179c6ab769afd0ce64d9f 10G 266M 9.8G 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup
/dev/vda1 197G 52G 136G 28% /etc/hosts
shm 64M 8.0K 64M 1% /dev/shm
tmpfs 1.4G 0 1.4G 0% /proc/acpi
tmpfs 1.4G 0 1.4G 0% /proc/scsi
tmpfs 1.4G 0 1.4G 0% /sys/firmware
After launching the dashboard:
$ docker exec -it fiware-postgres df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-107615-1541c55e4c3d5e03a7716d5418eea4c520b6556a6fd179c6ab769afd0ce64d9f 10G 266M 9.8G 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup
/dev/vda1 197G 52G 136G 28% /etc/hosts
shm 64M 50M 15M 78% /dev/shm
tmpfs 1.4G 0 1.4G 0% /proc/acpi
tmpfs 1.4G 0 1.4G 0% /proc/scsi
tmpfs 1.4G 0 1.4G 0% /sys/firmware
postgres error log:
2019-07-01 17:27:58.802 UTC [47] ERROR: could not resize shared memory segment "/PostgreSQL.1145887853" to 12615680 bytes: No space left on device
What's going on here?
You set shm_size under build, which only affects the build stage; you need to set it at the service level as well, like this:
docker-compose.yaml:
version: "3.6"
services:
#other services go here..
postgres:
restart: always
image: postgres:10
hostname: postgres
container_name: fiware-postgres
expose:
- "5432"
ports:
- "5432:5432"
networks:
- default
environment:
- "POSTGRES_PASSWORD=password"
- "POSTGRES_USER=postgres"
- "POSTGRES_DB=postgres"
volumes:
- ./postgres-data:/var/lib/postgresql/data
build:
context: .
shm_size: 256mb
shm_size: 512mb
Dockerfile:
FROM postgres:10
RUN df -h | grep shm
Then run docker-compose up -d --build to start it and check:
shubuntu1@shubuntu1:~/66$ docker-compose --version
docker-compose version 1.24.0, build 0aa59064
shubuntu1@shubuntu1:~/66$ docker-compose up -d --build
Building postgres
Step 1/2 : FROM postgres:10
---> 0959974989f8
Step 2/2 : RUN df -h | grep shm
---> Running in 25d341cfde9c
shm 256M 0 256M 0% /dev/shm
Removing intermediate container 25d341cfde9c
---> 1637f1afcb81
Successfully built 1637f1afcb81
Successfully tagged postgres:10
Recreating fiware-postgres ... done
shubuntu1@shubuntu1:~/66$ docker exec -it fiware-postgres df -h | grep shm
shm 512M 8.0K 512M 1% /dev/shm
You can see that at build time it shows 256M, but the runtime container shows 512M.
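If you want to double-check the runtime value without opening a shell in the container, one option (a sketch; HostConfig.ShmSize is reported in bytes, and 536870912 bytes is 512MB) is:
docker inspect --format '{{.HostConfig.ShmSize}}' fiware-postgres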
This happened because Postgres wrote more than 64MB to shared memory (/dev/shm under Linux). With Docker's default settings, a container's maximum shared memory size is 64MB.
Verification
We can use the following command to verify this:
0d807385d325:/usr/src# df -h | grep shm
shm 64.0M 0 64.0M 0% /dev/shm
We can manually adjust the maximum shared memory size by configuring docker-compose.yml like this (here setting it to 4MB, https://docs.docker.com/compose/compose-file/compose-file-v3/#shm_size):
services:
  my_service:
    ....
    tty: true
    shm_size: '4mb'
After changing it to 4MB, verify it again:
0d807385d325:/usr/src# df -h | grep shm
shm 4.0M 0 4.0M 0% /dev/shm
After this, we cannot write more than 4MB of data to the shared memory device:
# write 4MB (1kb * 4096) to /dev/shm/test file succeeds
0d807385d325:/usr/src# dd if=/dev/zero of=/dev/shm/test bs=1024 count=4096
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0042619 s, 984 MB/s
# write 4.001MB (1kb * 4097) to /dev/shm/test file fails
0d807385d325:/usr/src# dd if=/dev/zero of=/dev/shm/test bs=1024 count=4097
dd: error writing '/dev/shm/test': No space left on device
4097+0 records in
4096+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0041456 s, 1.0 GB/s
Fix for this issue
The fix is simply to raise this limit to a larger value.
For docker run, we can use the --shm-size to adjust it (https://docs.docker.com/engine/reference/commandline/run/)
For docker-compose, we can use the shm_size option in docker-compose file to adjust it like above (https://docs.docker.com/compose/compose-file/compose-file-v3/#shm_size)
For Kubernetes, we have to use the emptyDir option (https://kubernetes.io/docs/concepts/storage/volumes/#emptydir). Basically, we need to (a combined Pod sketch follows after these steps):
3.1) add a new emptyDir volume with "Memory" as medium
volumes:
  - name: dshm
    emptyDir:
      medium: Memory
3.2) Mount it at /dev/shm for the stonewave container
volumeMounts:
  - mountPath: /dev/shm
    name: dshm
According to the Kubernetes documentation (https://kubernetes.io/docs/concepts/storage/volumes/#emptydir), it will use 50% of the host's memory as the maximum by default.
If the SizeMemoryBackedVolumes feature gate is enabled, you can specify a size for memory backed volumes. If no size is specified, memory backed volumes are sized to 50% of the memory on a Linux host.
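Putting 3.1 and 3.2 together, a minimal Pod sketch could look like this (the pod name, container name, and size are placeholders; sizeLimit is only honored when the SizeMemoryBackedVolumes feature gate is enabled):
apiVersion: v1
kind: Pod
metadata:
  name: shm-demo                 # placeholder name
spec:
  containers:
    - name: app                  # placeholder container
      image: postgres:10
      volumeMounts:
        - mountPath: /dev/shm
          name: dshm
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory
        sizeLimit: 1Gi           # upper bound for the memory-backed tmpfs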
References
https://docs.openshift.com/container-platform/3.11/dev_guide/shared_memory.html
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://docs.docker.com/compose/compose-file/compose-file-v3/#shm_size
My MongoDB gets stuck and returns the following error:
2019-01-28T18:28:53.419+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [1548700133:419188][1:0x7feecb0ae700], file:WiredTiger.wt, WT_SESSION.checkpoint: /data/db/WiredTiger.turtle.set: handle-open: open: No space left on device
2019-01-28T18:28:53.419+0000 E STORAGE [WTCheckpointThread] WiredTiger error (22) [1548700133:419251][1:0x7feecb0ae700], file:WiredTiger.wt, WT_SESSION.checkpoint: WiredTiger.wt: the checkpoint failed, the system must restart: Invalid argument
2019-01-28T18:28:53.419+0000 E STORAGE [WTCheckpointThread] WiredTiger error (-31804) [1548700133:419260][1:0x7feecb0ae700], file:WiredTiger.wt, WT_SESSION.checkpoint: the process must exit and restart: WT_PANIC: WiredTiger library panic
2019-01-28T18:28:53.419+0000 F - [WTCheckpointThread] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2019-01-28T18:28:53.419+0000 F - [WTCheckpointThread]
***aborting after fassert() failure
2019-01-28T18:28:53.444+0000 F - [WTCheckpointThread] Got signal: 6 (Aborted).
However, my disk has space:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 992M 0 992M 0% /dev
tmpfs 200M 5.7M 195M 3% /run
/dev/xvda1 39G 26G 14G 66% /
tmpfs 1000M 1.1M 999M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1000M 0 1000M 0% /sys/fs/cgroup
tmpfs 200M 0 200M 0% /run/user/1000
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 253844 322 253522 1% /dev
tmpfs 255835 485 255350 1% /run
/dev/xvda1 5120000 5090759 29241 100% /
tmpfs 255835 10 255825 1% /dev/shm
tmpfs 255835 3 255832 1% /run/lock
tmpfs 255835 16 255819 1% /sys/fs/cgroup
tmpfs 255835 4 255831 1% /run/user/1000
And this is my docker-compose:
version: "3"
services:
# MariaDB
mariadb:
container_name: mariadb
image: mariadb
ports: ['3306:3306']
restart: always
volumes:
- /home/ubuntu/mysql:/var/lib/mysql
environment:
- "MYSQL_ROOT_PASSWORD=PasswordGoesHere"
command:
# - --memory=1536M
- --wait_timeout=28800
- --innodb_buffer_pool_size=1g
- --innodb_buffer_pool_instances=4
# - --innodb_buffer_pool_chunk_size=1073741824
# APACHE
apache:
container_name: apache
image: apache-php7.1
ports: ['80:80', '443:443']
restart: always
entrypoint: tail -f /dev/null
volumes:
- /home/ubuntu/apache2/apache-config:/etc/apache2/sites-available/
- /home/ubuntu/apache2/www:/var/www/html/
# MONGODB
mongodb:
container_name: mongodb
image: mongo
ports: ['27017:27017']
restart: always
command:
- --auth
volumes:
- /home/ubuntu/moongodb:/data/db
Could it be a problem with my docker-compose.yml, given that I'm using a physical disk and not a virtual one? I can run the applications, but after 1-2 hours Mongo fails again.
Clean docker cache - volumes & containers
For me this works:
docker system prune
and then
docker volume prune
or in one line:
docker system prune --volumes
To see all volumes: docker volume ls
To show docker disk usage: docker system df
If you are running this on CentOS/RHEL/Amazon Linux, you should know that the devicemapper storage driver has major issues with releasing inodes in Docker.
Even if you prune the entire docker system, it will still hang on to a lot of inodes; the only way to really solve this is to basically implode Docker:
service docker stop
rm -rf /var/lib/docker
service docker start
This should release all your inodes.
I've spent a lot of time on this. Docker really only fully supports overlay2 on Ubuntu, and devicemapper, although it works, is technically not supported.
It looks like 100% of your inodes are in use (from the df -i output). Try looking for dangling volumes and cleaning them up. Also, it would be a good idea to make sure the docker daemon is using a production-grade storage driver (about storage drivers, choosing a storage driver).
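A quick way to do both checks (a sketch using standard Docker CLI commands; the removal only does something if dangling volumes exist):
# list dangling (unreferenced) volumes, then remove them
docker volume ls -qf dangling=true
docker volume rm $(docker volume ls -qf dangling=true)
# show which storage driver the daemon is using
docker info --format '{{.Driver}}'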
I'm using MongoDB with replication on Azure, and I have attached an SSD disk to the MongoDB node and mounted it at a specific path.
I changed the MongoDB data and log paths to that path in the mongodb.conf file (/opt/bitnami/mongodb/conf/mongodb.conf).
But when I restart the MongoDB server using the sudo service bitnami restart command, it gives me an error like:
ERROR Unable to start com.bitnami.mongodb: Cannot find pid file
'/opt/bitnami/mong...b.pid'.
bitnami@mymongodb0:/tmp$ sudo service bitnami status
● bitnami.service - LSB: Bitnami Init Script
Loaded: loaded (/etc/init.d/bitnami)
Active: failed (Result: exit-code) since Fri 2017-08-04 05:49:58 UTC; 2min 15s ago
Process: 26654 ExecStop=/etc/init.d/bitnami stop (code=exited, status=0/SUCCESS)
Process: 92099 ExecStart=/etc/init.d/bitnami start (code=exited, status=1/FAILURE)
Aug 04 05:46:53 mymongodb0 bitnami[92099]: 2017-08-04T05:46:53.376Z - info: Saving configuration info to disk
Aug 04 05:46:53 mymongodb0 bitnami[92099]: 2017-08-04T05:46:53.987Z - info: Performing service start operation for mongodb
Aug 04 05:49:58 mymongodb0 bitnami[92099]: nami ERROR Unable to start com.bitnami.mongodb: Cannot find pid file '/opt/bitnami/mong...b.pid'.
Aug 04 05:49:58 mymongodb0 bitnami[92099]: 2017-08-04T05:49:58.249Z - error: Unable to perform start operation nami command exited wi... code 1
Aug 04 05:49:58 mymongodb0 systemd[1]: bitnami.service: control process exited, code=exited status=1
Aug 04 05:49:58 mymongodb0 systemd[1]: Failed to start LSB: Bitnami Init Script.
Aug 04 05:49:58 mymongodb0 systemd[1]: Unit bitnami.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.
root@mymongodb0:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 1.8G 27G 7% /
udev 10M 0 10M 0% /dev
tmpfs 3.2G 8.4M 3.2G 1% /run
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sdd1 1007G 272M 956G 1% /data
/dev/sdb1 32G 48M 30G 1% /mnt/resource
/dev/sdc1 50G 33M 50G 1% /bitnami
tmpfs 1.6G 0 1.6G 0% /run/user/1000
/opt/bitnami/mongodb/conf/mongodb.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /data/db
  journal:
    enabled: true
#  engine:
#  mmapv1:
#    smallFiles: true
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongodb.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
  unixDomainSocket:
    enabled: true
    pathPrefix: /opt/bitnami/mongodb/tmp

# replica set options
replication:
  replSetName: replicaset

# process management options
processManagement:
  fork: false
  pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
After a long discussion, the root cause is that /data did not have the right permissions.
chown mongo:mongo -R /data
The /data directory needs to be owned by the mongo user and group.
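A quick way to apply and verify this (a sketch; /data/db and /data/logs are the paths referenced in the mongodb.conf above):
chown -R mongo:mongo /data
ls -ld /data /data/db /data/logs   # should now show mongo:mongo as owner and group
sudo service bitnami restart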
Follow-up for an older image: bitnami/mongodb:3.2.7-r5
I mounted a directory /Users/sb/mongodata on my host to /bitnami/mongodb and had to manually create this directory structure:
- conf
    mongodb.conf
- data
    - db
- logs
- tmp
Then chmod -R g+rwX /Users/sb/mongodata
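One way to create that layout up front (a sketch, using the host path from above; the brace expansion needs bash or a similar shell):
mkdir -p /Users/sb/mongodata/{conf,data/db,logs,tmp}
chmod -R g+rwX /Users/sb/mongodata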
I want to use the rbd volume config to mount a folder from a Ceph image.
But it seems the container mounts a host path instead.
I used the example from "https://github.com/kubernetes/kubernetes/tree/master/examples/rbd".
The pod and container start successfully.
I used docker exec to log in to the container and looked at the /mnt folder.
root@test-rbd-read-01:/usr/local/tomcat# findmnt /mnt
TARGET SOURCE FSTYPE OPTIONS
/mnt /dev/vda1[/var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd] xfs rw,relatime,attr2,inode64,noquota
root@test-rbd-read-01:/usr/local/tomcat# ls /mnt/
root@test-rbd-read-01:/usr/local/tomcat#
Then I looked at the host path that is mounted from Ceph. The file 1.txt already existed on the Ceph image.
[20:52 root@mongodb:/home] # mount |grep kubelet
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/wujianlin-image-zlh_test type ext4 (ro,relatime,stripe=1024,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd type ext4 (ro,relatime,stripe=1024,data=ordered)
[20:53 root@mongodb:/home] # ll /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd
total 20K
drwx------ 2 root root 16K Mar 18 09:49 lost+found
-rw-r--r-- 1 root root 4 Mar 18 09:53 1.txt
[20:53 root@mongodb:/home] # rbd showmapped
id pool image snap device
0 wujianlin zlh_test - /dev/rbd0
I expected the container folder /mnt to be the same as the host path /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd, but it was not.
When I try to write a file to /mnt, I also cannot see any change in /var/lib/kubelet/pods/****/volumes/kubernetes.io~rbd/rbd.
So is some of my config wrong, or am I misunderstanding something?
k8s version: Release v1.2.0
Here is my config:
apiVersion: v1
kind: Pod
metadata:
  name: test-rbd-read-01
spec:
  containers:
    - name: tomcat-read-only-01
      image: tomcat
      volumeMounts:
        - name: rbd
          mountPath: /mnt
  volumes:
    - name: rbd
      rbd:
        monitors:
          - 10.63.90.177:6789
        pool: wujianlin
        image: zlh_test
        user: wujianlin
        secretRef:
          name: ceph-client-admin-keyring
        keyring: /etc/ceph/ceph.client.wujianlin.keyring
        fsType: ext4
        readOnly: true
What did you do when you restarted Docker? Are you able to reproduce this issue after Docker is restarted and the pod is recreated?
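For anyone debugging a similar mismatch, a few checks worth running (a sketch, using the pod name from the config above):
# on the node: confirm the rbd device is mounted under the pod's volume path
mount | grep kubernetes.io~rbd
# from the API server: confirm the pod's volume really is the rbd volume
kubectl describe pod test-rbd-read-01
# inside the container: see what /mnt is actually mounted from
kubectl exec test-rbd-read-01 -- findmnt /mnt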