Postgres DB not starting in Docker - Mac - postgresql

I've been using Postgres in my development environment, and the last time I pushed my Spring microservices, the Postgres DB would not start. The following is the Docker output from Kitematic:
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
2017-10-17T08:37:47.562145630Z
Data page checksums are disabled.
2017-10-17T08:37:47.562162938Z
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: could not create directory "/var/lib/postgresql/data/pg_xlog": No space left on device
initdb: removing contents of data directory "/var/lib/postgresql/data"
Does anyone have an idea about this? I couldn't find a solution.

Hi, if you have a good amount of free space on your machine, then this problem is caused by dangling images and dangling volumes. Remove the volumes with
docker volume rm $(docker volume ls -f dangling=true -q) or docker volume prune
and the images with
docker rmi $(docker images -f dangling=true -q)
You can also run docker system prune to clean up Docker completely.
If you still face this problem, the only remaining option is to reset Docker completely, as if it were a fresh installation.
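The cleanup steps above can be sketched as a small helper script. Note the APPLY switch is my own addition for safety (it is not a docker flag): by default the script only prints the commands it would run.

```shell
#!/bin/sh
# Sketch of the cleanup steps above. By default this only PRINTS the
# commands; set APPLY=1 to actually execute them. The APPLY switch is
# an addition for safety, not part of the docker CLI.
set -e

run() {
  if [ "${APPLY:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

run docker system df        # show how much space images, containers and volumes use
run docker volume prune -f  # remove dangling (unreferenced) volumes
run docker image prune -f   # remove dangling images
```

Run it once without APPLY to review the commands, then with APPLY=1 to reclaim the space; docker system prune remains the heavier all-in-one alternative.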

Related

Why must the postgresql docker-compose volume setting be /var/lib/postgresql/data?

I am a beginner Docker user.
I installed Docker and PostgreSQL on Mac OS.
Why do most documents mention the directory
/var/lib/postgresql/data as the volume setting?
In my local directory, /var/lib/postgresql does not exist.
Is it a default option, or am I missing something?
Yes, correct: /var/lib/postgresql does not exist on your local computer, but it does exist inside the created container. The volumes parameter associates the local data with the container data, in order to preserve the data in case the container crashes.
For example:
volumes:
- ./../database/main:/var/lib/postgresql/data
Above, we link the local directory on the left side to the container directory on the right.
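For context, a minimal docker-compose.yml using that same bind mount might look like this (the service name, image tag, and password value are illustrative):

```yaml
version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # illustrative value
    volumes:
      # host path (left side) is mapped onto the container's data directory (right side)
      - ./../database/main:/var/lib/postgresql/data
```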
If you are using the official PostgreSQL image from Docker Hub, you can check the contents of its Dockerfile. E.g. here is the fragment of the postgres:15 image responsible for the data directory:
ENV PGDATA /var/lib/postgresql/data
# this 777 will be replaced by 700 at runtime (allows semi-arbitrary "--user" values)
RUN mkdir -p "$PGDATA" && chown -R postgres:postgres "$PGDATA" && chmod 777 "$PGDATA"
VOLUME /var/lib/postgresql/data
As you can see, Postgres is configured to keep its data in that directory. And to persist the data even if the container is stopped and removed, a volume is created. Volumes have a lifetime independent of the container, which allows them to "survive".

Postgres on Docker exits immediately and deletes data in external filesystem

I'm following instructions in this article to pull a Postgres Docker image and run it.
$ mkdir -p $HOME/docker/volumes/postgres
$ docker pull postgres:9.5
$ docker run --name pg-docker -e POSTGRES_PASSWORD=postgres -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres:9.5
I see Postgres files and data getting created in the path I mounted with -v above, but the container exits soon after starting, and the files vanish.
My questions:
Why does the container exit at all?
Why does it delete all the files? The whole purpose of -v is to preserve data outside the container.
Update 1:
Here are the logs from the container that I was able to get while the container was briefly up (after the container exited, I only got the error message Error: No such container: postgres:9.5, but no logs):
$ docker logs pg-docker
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
Update 2:
Running docker without -d produced more output, which reveals the cause of this situation:
creating template1 database in /var/lib/postgresql/data/base/1 ...
LOG: could not link file "pg_xlog/xlogtemp.36" to "pg_xlog/000000010000000000000001": Operation not supported
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
Does it matter that I'm on a Windows system inside a Git Bash shell?

Trying to create a Docker image with a MongoDB database, but the container does not have it even though it loaded successfully

The data in the database is intended to be surfaced by an API in another container. Previously, I successfully loaded the database at run time using this suggestion. However, my database is quite large (10 GB), and ideally I would not have to load it again each time I start a new container; I want the database to be loaded at build time. To accomplish this, I tried the following Dockerfile:
FROM mongo:4.0.6-xenial
COPY dump /data/dump
RUN mongod --fork --logpath /var/log/mongod.log \
&& mongorestore /data/dump \
&& mongo --eval "db.getSiblingDB('db').createUser({user:'user',pwd:'pwd',roles:['readWrite']})" \
&& mongod --shutdown
I expected the database to be in the container when I ran this image, but it was not, nor does the user exist. However, the log file /var/log/mongod.log indicates that the database loaded successfully as far as I can tell. Why did this not work?
The official mongo Docker image writes the database data in a docker volume.
At run time (thus in a Docker container), keep in mind that files written to volumes do not end up written to the container file system. This is done to persist your data so that it survives container deletion, but, more importantly in the context of databases, for performance reasons: to get good I/O performance, disk operations must be done on a volume, not on the container file system itself.
At build time (thus when creating a Docker image), if RUN/ADD/COPY directives in your Dockerfile write files to a location which is already declared as a volume, those files will be discarded. However, if you write the files to a directory first, and only afterwards declare that directory as a volume, then the volume will keep those files unless you start your container with a volume specified via the docker run -v option.
This means that when your own Dockerfile is built FROM mongo, the /data location is already declared as a volume, so writing files to that location is pointless.
What can be done?
Make your own mongo image from scratch
Knowing how volumes work, you could copy the contents of the Dockerfile of the official mongo Docker image and insert a RUN/ADD/COPY directive that writes the files you want to the /data/db location before the VOLUME /data/db /data/configdb directive.
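As a sketch of that approach, the relevant tail of a rebuilt Dockerfile could look like this (the dump location is an assumption, and the real image needs all the setup from the official mongo Dockerfile before this point):

```dockerfile
# Sketch only: everything from the official mongo Dockerfile that
# normally precedes the VOLUME directive must come before this.
COPY dump /data/dump
RUN mongod --fork --logpath /var/log/mongod.log \
    && mongorestore /data/dump \
    && mongod --shutdown
# Declare the volume only AFTER the data has been written;
# files written to an already-declared volume would be discarded.
VOLUME /data/db /data/configdb
```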
Override the entrypoint
Assuming you have a tar archive named mongo-data-db.tar with the contents of the /data/db location from a mongo container that has all the databases and collections you want, you can use the following Dockerfile and copy-initial-data-entry-point.sh to build an image which will copy those data to the /data/db location every time the container is started. This only makes sense in a use case where such a container serves a test suite that requires the very same initial data every time it starts, since previous data are replaced with the initial data at each start.
Dockerfile:
FROM mongo
COPY ./mongo-data-db.tar /mongo-data-db.tar
COPY ./copy-initial-data-entry-point.sh /
RUN chmod +x /copy-initial-data-entry-point.sh
ENTRYPOINT [ "/copy-initial-data-entry-point.sh"]
CMD ["mongod"]
copy-initial-data-entry-point.sh:
#!/bin/bash
set -e
tar xf /mongo-data-db.tar -C /
exec /usr/local/bin/docker-entrypoint.sh "$@"
In order to extract the contents of /data/db from the volume of a mongo container named my-mongo-container, proceed as follows:
stop the mongo container: docker stop my-mongo-container
create a temporary container to produce the tar archive from the volume: docker run --rm --volumes-from my-mongo-container -v $(pwd):/out ubuntu tar cvf /out/mongo-data-db.tar /data/db
Note that this archive will be quite large, as it contains the full contents of the mongo server data, including indexes, as described in the mongo documentation.

Run postgresql docker image with persistent data returns permission error

I've been running my Docker images on my Vagrant machine (the box is Ubuntu 14.04) without any big issues, but the following error is racking my brains. I hope you can help me.
When I run this:
$ docker run -it -v /vagrant/postgresql/data:/var/lib/postgresql/data postgres:9.2
I get this error
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: could not create directory "/var/lib/postgresql/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
I've tried opening up all the permissions on /vagrant/postgresql without success. Maybe this is a problem with the official Docker image.
EDIT:
I've just noticed that a lot of people are facing the same problem as me: https://github.com/docker-library/postgres/issues/26
And, as someone asked about this in the comments, here it goes:
$ ls -l /vagrant/postgresql/data
total 0
If you are just concerned about persisting the data, I would recommend using a data volume instead of a host volume.
Run docker volume create --name pgdata
Then connect it to your container with:
docker run --rm --name pg -v pgdata:/var/lib/postgresql/data postgres:9.2
Even after that container is gone, you can start a new one connected to the volume and your data will be there.
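The same named-volume approach in a docker-compose.yml would look roughly like this (service and volume names are illustrative):

```yaml
version: "3.8"
services:
  pg:
    image: postgres:9.2
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: {}   # named volume managed by Docker; it survives container removal
```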
Just make sure your Vagrant user has permission to access that directory:
ls -ld /vagrant/postgresql/data
And, as deinspanjer said, you can use a named volume for persisting the data.

Postgres Docker Image: Failed to map database to host

I'm using the stock official Postgres image from Docker Hub (docker pull postgres). I wanted to map the data directory in the Postgres container to my OS X host, so I tried this:
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass -v `pwd`/data:/var/lib/postgresql/data postgres
This resulted in the Postgres container failing to launch correctly.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... initdb: could not create directory "/var/lib/postgresql/data/global": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
The goal I'm trying to achieve is to have my database data stored on the host machine, so that I can start a postgres container and have it read (or load) the database from a previous instance. Am I on the right track or is this a stupid way to achieve database persistence?
According to the official documentation, you should use boot2docker to resolve the issue. Without it, you won't be able to mount the host directory into the container.