Postgres & docker os disk cache - postgresql

I'm using docker-compose to run my database container, which has a fairly small memory limit (set via the mem_limit setting).
Will the Docker host's disk cache (which has no memory limit) be used by the postgres container, or should I make sure the container itself has enough free memory for disk caching?
The host is Debian Linux, kernel 4.9.246-2.
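A minimal sketch of the kind of compose fragment I mean (service name, image tag, and limit value here are placeholders, not my actual file):

version: "2"
services:
  db:
    image: postgres:13
    mem_limit: 512m            # cgroup memory limit applied to the container
    environment:
      POSTGRES_PASSWORD: example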

Related

Unable to resize /dev/sda1 of GCP postgres

I created a Postgres VM in GCP using these instructions https://joncloudgeek.com/blog/deploy-postgres-container-to-compute-engine/#create-a-compute-instance-running-a-postgres-container with a 10GB disk. Everything has worked fine for the last couple of months, but I seem to have run out of space on /dev/sda1. So I increased the disk size to 400GB, but I can't resize /dev/sda1 with the standard command "sudo growpart /dev/sda 1" - I keep getting "command not found".
Solution for me:
Create a machine image of the container.
Spin up a new VM based on the machine image created.
Delete the old VM.
This created a new Postgres VM with a 400GB disk.
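For anyone who hits the same "command not found": a sketch of the more conventional in-place resize, assuming a Debian/Ubuntu-based image with an ext4 root filesystem on /dev/sda1 (package and device names may differ on other images):

sudo apt-get update && sudo apt-get install -y cloud-guest-utils   # provides growpart
sudo growpart /dev/sda 1      # grow partition 1 to fill the enlarged disk
sudo resize2fs /dev/sda1      # grow the ext4 filesystem to match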

Slow query time with Postgres 10 inside Docker vs bare-metal for AWS Linux 2

I've been trying to deploy Postgres within Docker for portability reasons, and noticed that query performance as measured by "explain analyze" has been painfully slow compared to bare metal.
For a table with 1.7 million rows, a query on bare-metal Postgres takes about 1.2 sec vs 4.8 sec on Dockerized Postgres - a 4x slowdown. The comparison uses the same mounted volume for both bare metal and Docker (for Docker, via the -v option). The volume is a 60GB gp2 volume attached through the AWS console.
A couple of things I tried:
Increasing the shared buffers option in postgresql.conf, which had a negligible effect
Tried several volume mapping options (delegated, cached, consistent)
Upgrading Docker from 17.06-ce to 17.12-ce
This is all done on an AWS Linux 2 instance. At this point I'm hoping to get more suggestions on what to do to improve performance.
The docker run command I use:
docker run -p 5432:5432 --name postgres -v /vol/pgsql/10.0/data:/var/lib/postgresql/data postgres:latest
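For comparison, a hedged variant of that command that pins the image version, raises the container's /dev/shm size, and passes a couple of memory settings explicitly - the values are illustrative, not tuned recommendations:

docker run -p 5432:5432 --name postgres \
  --shm-size=1g \
  -v /vol/pgsql/10.0/data:/var/lib/postgresql/data \
  postgres:10 \
  postgres -c shared_buffers=1GB -c effective_cache_size=3GB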

Limit Disk usage in Docker+MongoDB

I am using the official mongo Docker image to start a MongoDB container where my boot disk is limited (e.g. 10G). I configured Docker to run with the Google Cloud Logging driver, hoping Google would store all the logs and save my local disk space. However, I notice the disk continues to grow:
$ df -h
/dev/sda1 9.9G 4.5G 4.9G 49%
As I dug deeper, I realized the size of the Docker containers seems to be growing over time.
$ sudo du -sh /var/lib/docker/
3.6G /var/lib/docker/
However, I can't go further because somehow I can't access the directories within.
If I go inside the container and du -sh the root, I don't find any suspicious directories occupying space.
So my problem is: how do I find out where the disk space is being used, and how do I reclaim it?
My docker startup command (shown without project options)
docker run -d --log-driver=gcplogs mongo mongod
EDIT: I noticed the growth has stopped at 4.5GB (up from ~3GB) for a while, so I suppose it has reached some equilibrium now.
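In case it helps, a sketch of the commands I would use to break the usage down (docker system df requires Docker 1.13+; the /var/lib/docker paths are the defaults):

docker system df -v            # per-image / per-container / per-volume usage summary
docker ps -s                   # the SIZE column shows each container's writable layer
sudo du -sh /var/lib/docker/*  # usage by subdirectory (containers, volumes, overlay2, ...)
sudo du -sh /var/lib/docker/containers/*/*-json.log   # any local JSON log files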

Docker container: MongoDb Insufficient free space for journal files

I am running MongoDB inside a Docker container (Docker version 1.10.1, on OSX) and it is giving this error:
MongoDb Insufficient free space for journal files
I am not able to find out whether the issue is on the host, in the container, or in VirtualBox.
However, on my host I have:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 75Gi 389Gi 17% 19777401 102066309 16% /
And on the docker container:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 1218224 742474 475750 61% /
I have also mounted a volume from the host with:
docker run -it -v /Users/foobar/Projects/compose:/data/db mongoImage:0.1 /bin/bash
Thanks for the comments @andy; the issue did seem to be within the VirtualBox environment.
I was able to resolve the issue by:
backing up all Docker images
cloning the default VirtualBox ISO (as a backup)
deleting the default VirtualBox ISO and all associated files
restarting Docker; a new default VirtualBox ISO was created. This resolved the issue (which I expect to have again at some point)
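For completeness, a docker-machine sketch of the same "recreate the VM with a bigger disk" fix, assuming the VM is the usual "default" machine and a roughly 50GB disk is wanted (the size flag is in MB):

docker-machine rm default
docker-machine create --driver virtualbox --virtualbox-disk-size 50000 default
eval "$(docker-machine env default)"   # point the docker CLI at the new VM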

How to restrict cpu usage from host to docker container

I have a VM host on one physical server with many Docker containers inside.
Here is a fragment of my fig.yml:
pg:
  image: pg...
redis:
  image: redis...
mongodb:
  image: mongodb...
app:
  image: myapp...
I would like the pg container to use only 25% of the host CPU, the app container to use only 50% of the host CPU, and so on.
Could I do this with fig, or with docker run and managing the links by hand?
In my case, when one of these containers is running a costly task it affects the CPU performance of the others. And when the same physical server also hosts other VMs with a similar deployment inside, the problem increases dramatically.
For now, Fig doesn't support setting CPU and memory limits. Maybe it will in the future.
I encourage you to experiment with docker run -m for memory limits and docker run -c for CPU shares. These flags allow you to set memory and CPU values when starting a container. Read more about the flags you can use with docker run here:
https://docs.docker.com/reference/commandline/cli/#run
But these can only be set when you create a new container.
After the container is created, you cannot change the values.
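A sketch of what that looks like in practice (image names and values here are placeholders; note that -c sets relative CPU shares, a weight that only takes effect under contention, not a hard cap):

docker run -d --name pg -c 256 -m 512m postgres   # roughly a 25% share when CPUs are contended
docker run -d --name app -c 512 -m 1g myapp       # roughly a 50% share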