Postgres Docker Image: Failed to map database to host - postgresql

I'm using the stock official Postgres image from Docker Hub (docker pull postgres). I want to map the data directory in the Postgres container to my OS X host, so I tried this:
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass -v `pwd`/data:/var/lib/postgresql/data postgres
This resulted in the Postgres container failing to launch correctly:
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... initdb: could not create directory "/var/lib/postgresql/data/global": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
The goal I'm trying to achieve is to have my database data stored on the host machine, so that I can start a postgres container and have it read (or load) the database from a previous instance. Am I on the right track or is this a stupid way to achieve database persistence?

According to the official documentation, on OS X you should use boot2docker to resolve the issue; without it, you won't be able to mount a host directory into the container.
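If direct access to the files from OS X isn't required and the goal is only persistence across container runs, a named Docker volume is a possible alternative (a sketch; the volume name pgdata is arbitrary):
docker volume create pgdata
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass -v pgdata:/var/lib/postgresql/data postgres
The data then lives inside the Docker/boot2docker VM rather than in an OS X folder, which sidesteps the shared-folder permission problem.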

Related

Docker Postgres data host volume mapping

I'm trying to containerize a PostgreSQL server with Docker, and this container will have many other applications as well. The requirement is that the PostgreSQL server data should be mapped to a host volume, so that when the container is stopped we don't lose the data, and the next time we start the container the same directory can be mapped again and Postgres can use the old data. Below is the Dockerfile. Note that I'm using Ubuntu 22.04 on the host.
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y postgresql
ENTRYPOINT ["tail", "-f", "/dev/null"]
Docker image is built using the command
docker build -t pg_test .
and the container is run using the command
docker run --name test -v /home/me/data:/var/lib/postgresql/14/main pg_test
'/home/me/data' is the empty host directory where I want to map the Postgres server data. '/var/lib/postgresql/14/main' is the directory inside the Docker container where Postgres is supposed to store the data.
Once the docker container starts, I enter the docker container using the command
docker exec -it test bash
and once I'm inside, I try to start the PostgreSQL service. But PostgreSQL fails to start, as there is no data in the '/var/lib/postgresql/14/main' directory. I understand that since I have mapped an empty host directory to '/var/lib/postgresql/14/main', Postgres doesn't have the files required to start.
I understand that I'm doing it the wrong way, but I couldn't find a way around it. Can anyone please help me do this the right way, if there is one?
Any help would be appreciated.
You should use the postgres Docker image; it will set up the database for you when you start the container. You can find instructions at https://hub.docker.com/_/postgres
If you must use a custom image, you will need to initialize the database yourself, usually by running initdb or whatever your system provides.
But really you should use the appropriate Docker image, and if you need more services, start each of them in its own container and connect them to the Postgres one.
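As a rough sketch of that layout (the service and image names here are placeholders, not taken from the question), a docker-compose.yml could look like:
version: '3'
services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: mypass
    volumes:
      - /home/me/data:/var/lib/postgresql/data
  app:
    image: my-app-image   # placeholder for your other applications
    depends_on:
      - postgres
With the official image, the database is initialized on first start and stored under /var/lib/postgresql/data, so the bind mount keeps it on the host across container restarts.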

Postgresql as docker container not starting with data from mapped volume

On my MacBook I have PostgreSQL running in a Docker container, and I use a mapped volume to persist the data. This works perfectly locally. However, when I try to do the same on the Ubuntu server, the 'initial' data from the mapped volume is not picked up: Postgres starts up in an 'empty' initial state.
However, when I add a table and data in that table in the default postgres database it IS persistent. So the volume mapping seems to work.
Furthermore, it is interesting to note that I'm getting an error when I try to create a table in a new database. The new database is persistent as well, but the table can't be saved as an error is thrown:
could not open file "base/16384/2611": No such file or directory
This is expected as the folder base/16384 doesn't exist.
To me this seems like a user/permissions issue, but I have no clue how to fix it.
I tried running the container as root, which didn't help.
Any suggestions?
I'm starting the container either with docker-compose or from the command line using:
docker run --rm --name pg -e POSTGRES_PASSWORD=[password] -d -p 5432:5432 -v /root/docker/volumes/postgres:/var/lib/postgresql/data postgres -c listen_addresses='*'
Instead of moving the actual data folder around, I used pg_dump and pg_restore within the Docker containers, as suggested on the Docker forums. This did the trick.
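For reference, a dump-and-restore between two containers can be sketched like this (the container names old_pg/new_pg and the database mydb are illustrative, not from the answer):
docker exec old_pg pg_dump -U postgres -Fc mydb > mydb.dump
docker exec new_pg createdb -U postgres mydb
docker exec -i new_pg pg_restore -U postgres -d mydb < mydb.dump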

Map host user into postgres container

I was trying to run postgres 12.0 alpine with an arbitrary user in an attempt to have easier access to the mounted drives. However, I get the following error. I was following the instructions from the official Docker Hub page here.
docker run -it --user "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro postgres:12.0-alpine
I get: initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
Then I tried initializing the target directory separately, which needs a restart in between. This also does not work and gives me the same error, but this time the container starts as the root user.
Has anyone had success running the postgres alpine container with an arbitrary user?
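One commonly suggested workaround (a sketch, assuming a host directory such as $HOME/pgdata is acceptable) is to pre-create the data directory so it is already owned by the arbitrary user, and bind-mount it:
mkdir -p "$HOME/pgdata"
docker run -it --user "$(id -u):$(id -g)" -e POSTGRES_PASSWORD=mypass -v /etc/passwd:/etc/passwd:ro -v "$HOME/pgdata":/var/lib/postgresql/data postgres:12.0-alpine
Since the directory already belongs to that user, initdb can change its permissions without needing root.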

Creating a running Postgres service inside a docker container

I'm a bit new to Docker.
I have two containers running using docker-compose.
One is the API and the other is the actual application.
I want to add a new DB container using the Postgres official image.
It's a bit hard to find a simple tutorial on how to create the container and populate it with a predefined sql file (of schemas and data).
When I start with "CMD /etc/init.d/postgresql start" in the Dockerfile I get an error saying: "No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning)."
Since it's taking me too much time to get things going, I was wondering if it might be better to take an Ubuntu image and install Postgres on my own, since the only source on how to use the image is Docker Hub and I don't seem to understand it that well.
Any ideas or simple steps on how to compose and 'configure' this image?
If you want to populate your database from a file, a simple way to do this is:
How to extend this image
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files and source any *.sh scripts
found in that directory to do further initialization before starting
the service.
Dockerfile
FROM postgres:alpine
COPY init.sql /docker-entrypoint-initdb.d/init.sql
docker-compose.yml
version: '3'
services:
  app:
    # your app definition
  postgres:
    build: .
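And a minimal init.sql, just as an illustration (the table and rows are made up):
CREATE TABLE users (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
INSERT INTO users (name) VALUES ('alice'), ('bob');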
Pull the postgres image
docker pull postgres:14.2
Create the service with the below command
docker service create --name postgres --network my_overlay --env "POSTGRES_PASSWORD=password" --publish 5432:5432 postgres:14.2
Try to connect to the default postgres database, using postgres as the username and password as the password.
jdbc:postgresql://127.0.0.1:5432/postgres // JDBC connection
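Or, assuming the psql client is installed on the host, the same connection from the command line:
psql -h 127.0.0.1 -p 5432 -U postgres -d postgres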

docker postgres, fail to map volume in windows

I wish to store my persistent data in my local D:\dockerData\postgres9.6. Below are my docker commands
docker pull postgres
docker run -d -v /d/dockerData/postgres9.6:/var/lib/postgresql/data -p 5432:5432 postgres
It successfully creates a container and I can use pgAdmin to connect and create a database.
But I found that there are no files in my D:\dockerData\postgres9.6. When I exec bash into the container, there are at least 20+ files inside /var/lib/postgresql/data.
Can anyone point out which part went wrong?
It depends on which kind of Docker you are using on Windows:
Docker Toolbox with VirtualBox: only C:\Users\mylogin is shared by default. D:\ is not mounted.
Docker for Windows with Hyper-V: only C:\ is mounted by default. Make sure D:\ is added as a shared drive in the Docker for Windows settings.
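With Docker Toolbox, a path under the already-shared C:\Users tree will at least reach the host filesystem, for example (the user folder is a placeholder):
docker run -d -v /c/Users/mylogin/dockerData/postgres9.6:/var/lib/postgresql/data -p 5432:5432 postgres
Whether Postgres is happy with the VirtualBox shared-folder permissions is a separate question, so a named Docker volume may still be the more reliable option for the data directory.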