Map host user into postgres container - postgresql

I was trying to run postgres 12.0 alpine with an arbitrary user, in an attempt to have easier access to the mounted drives, following the instructions from the official Docker Hub page here. However, when I run
docker run -it --user "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro postgres:12.0-alpine
I get: initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
Then I tried initializing the target directory separately, which requires a restart in between. That also fails with the same error, but this time the container starts as the root user.
Has anyone had success running the postgres alpine container with an arbitrary user?
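One workaround that is often suggested for running the image as an arbitrary user is to bind-mount a host directory that this user already owns as the data directory, so initdb never has to change permissions on the image's anonymous volume. A rough sketch, untested against 12.0-alpine, with the host path chosen only as an example:
# create a data directory owned by the current (arbitrary) user
mkdir -p "$HOME/pgdata"
docker run -it --user "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v "$HOME/pgdata":/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mysecretpassword \
  postgres:12.0-alpine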

Related

Docker Postgres data host volume mapping

I'm trying to containerize a PostgreSQL server, and this container will run many other applications as well. The requirement is that the PostgreSQL data should be mapped to a host volume so that we don't lose it when the container is stopped, and that the next time the container starts, the same directory can be mapped again and Postgres can reuse the old data. Below is the Dockerfile. Note that I'm using Ubuntu 22.04 on the host.
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y postgresql
ENTRYPOINT ["tail", "-f", "/dev/null"]
The Docker image is built with the command
docker build -t pg_test .
and the container is run using the command
docker run --name test -v /home/me/data:/var/lib/postgresql/14/main pg_test
'/home/me/data' is an empty host directory where I want to map the Postgres server data. '/var/lib/postgresql/14/main' is the directory inside the container where Postgres is supposed to store its data.
Once the container starts, I enter it using the command
docker exec -it test bash
and once I'm inside, I try to start the PostgreSQL service. But PostgreSQL fails to start, as there is no data in the '/var/lib/postgresql/14/main' directory. I understand that since I have mapped an empty host directory over '/var/lib/postgresql/14/main', Postgres doesn't have the files it needs to start.
I understand that I'm doing it the wrong way, but I couldn't find a way around it. Can anyone please help me to do this the right way, if there is one?
Any help would be appreciated.
You should use the official postgres Docker image; it will set up the database for you when you start the container. You can find instructions at https://hub.docker.com/_/postgres
If you must use a custom image, you will need to initialize the database yourself, usually by running initdb or whatever your system provides.
But really, you should use the appropriate Docker image, and if you need more services, start each one in its own container and connect them to the Postgres one.
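As a rough sketch of what that looks like with the official image (the image tag, password, and reuse of the host path from the question are assumptions, not something verified here):
docker run --name pg \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v /home/me/data:/var/lib/postgresql/data \
  -d postgres:14
# other applications go in their own containers and reach Postgres over a shared network
docker network create mynet
docker network connect mynet pg
docker run --network mynet my-app    # my-app is a placeholder for your application image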

How to reconnect to same postgres database on Docker

I'm very new to using docker and I've created a postgres container using
docker run --name mytrainingdb -e POSTGRES_PASSWORD=mysecretpassword -d postgres. Then I connected to it with docker exec -it <container-id> bash and then psql.
Then I stop the container.
My question is, what do I do to reconnect to the same database? I tried to run the same docker run command, but it says the name 'mytrainingdb' is already in use, which means it is trying to create the container afresh, which is not what I want. Is my expectation right that when I restart my laptop or resume work, I can just restart the same container and my data/config will be preserved?
The documentation also mentions that we can link a host directory to a volume of the pg container to make the stored data accessible to us, but I'm OK with Docker managing the storage for that database.
You get an error when you re-run the same command because Docker is trying to create a new container with the same name as the previous one, "mytrainingdb". If you close Docker and reopen it, you will still find your container, but it is not running; you can start it again with docker start mytrainingdb, or remove it with docker rm mytrainingdb.
However, don't restart Docker just because you want to create a new container with the same name! If you want to start a new container with the same name while your container is still running, first stop it with docker stop mytrainingdb and remove it with docker rm mytrainingdb, or just do docker rm -f mytrainingdb (this force-removes the running container) and then create the new one.
As for volumes, you already created one by default; its name is a kind of hash, and it lives under /var/lib/docker/volumes/ on the host. Images such as PostgreSQL, and databases in general, declare a volume for their data. The volume gets created when the container runs and is handy for saving persistent data, whether you start the container with -v or not.
The volume you mentioned in your question is usually called a bind mount: you bind a specific directory or file from the host (outside) to a path inside the container, for example
docker run -v /hostdir:/containerdir, which in your case would be docker run -v /hostdir:/var/lib/postgresql/data
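Put together with the original run command, that would look something like this (the host path is just an example):
docker run --name mytrainingdb \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v /home/me/pgdata:/var/lib/postgresql/data \
  -d postgres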
If you restart Docker or your computer, running containers won't be restarted automatically. You can start your container again with docker start mytrainingdb (related question), then connect with your docker exec command.
(one tip: instead of running bash, then psql, you can directly run psql, e.g. docker exec -it mytrainingdb psql --user postgres)
Your understanding of data persistence is correct; Docker will manage the data and it will still be around.
From the postgres image documentation
There are several ways to store data used by applications that run in Docker containers. We encourage users of the postgres images to familiarize themselves with the options available, including:
Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
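If you want to see where that Docker-managed volume actually lives on the host, something like the following should work (as noted above, the volume name will be a long hash):
# list volumes, then show the host path backing the container's data volume
docker volume ls
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Source }}{{ println }}{{ end }}' mytrainingdb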
You can add the --rm argument so that whenever you stop the container manually, or the container stops for any reason (its task is done or it fails), the container is removed.
In your case, you can use this:
docker run --name mytrainingdb --rm -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Postgres docker error: FATAL: password authentication failed for user

I created a Postgres container with docker
sudo docker run -d \
--name dev-postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=test \
-v ${HOME}/someplace/:/var/lib/postgresql/data \
-p 666:5432 \
postgres
I give the Postgres instance test as both the username and the password, as specified in the docs.
The Postgres port (5432) inside the container is mapped to port 666 on the host.
Now I want to try this out using psql
psql --host=localhost --port=666 --username=test
I'm prompted to enter the password for user test and after entering test, I get
psql: error: FATAL: password authentication failed for user "test"
There are different problems that can cause this:
The version of Postgres on the host and in the container might not be the same.
If you change the Postgres version used by the container, make sure the container with the new version is not crashing. Keeping the same data directory while changing the Postgres version can cause problems, because the directory was initialized by a different version.
You can use docker logs [container name] to debug if it crashes
If you changed the env parameters, there might be a problem with the volumes used by Docker: the data directory keeps the credentials it was first initialized with, so changing the environment variables later does not create the new user (see the sketch after this list). A blunt fix is to remove everything and start fresh:
docker stop $(docker ps -qa) && docker system prune -af --volumes
If you have problems with libraries that use Postgres, you might need to install extra packages for those libraries to work. These two are the ones Stack Overflow answers often reference:
sudo apt install libpq-dev postgresql-client
Other failures seem to come down to problems with the Docker configuration.
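For the volume case specifically: POSTGRES_USER and POSTGRES_PASSWORD only take effect when the data directory is initialized for the first time, so if ${HOME}/someplace/ was populated by an earlier run with different credentials, the new values are ignored. A sketch of a reset, assuming the old data is disposable (this deletes the database):
docker rm -f dev-postgres           # remove the old container
sudo rm -rf ${HOME}/someplace/*     # wipe the previously initialized data directory
sudo docker run -d \
  --name dev-postgres \
  -e POSTGRES_PASSWORD=test \
  -e POSTGRES_USER=test \
  -v ${HOME}/someplace/:/var/lib/postgresql/data \
  -p 666:5432 \
  postgres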

Error when attempting to mount another volume to clickhouse docker container

I've been trying to mount a volume to a Docker container running ClickHouse, specifically on Docker Desktop for Windows 10, following the documentation:
https://hub.docker.com/r/yandex/clickhouse-server/
I have no problem setting up the Docker container on my C drive, which is in my $HOME path, loading data into it, etc. I now want to mount a custom volume on my E:/ drive, which is larger, since the database will continue to grow. I am getting an error when I run this:
docker run -d -p 8123:8123 --name clickhousedb --ulimit nofile=262144:262144 --volume=/E:/ch/clickhousedb:/var/lib/clickhouse yandex/clickhouse-server
specifically this:
Error response from daemon: invalid mode: /var/lib/clickhouse.
Any ideas what might be the issue?
The issue is the "/" character right after " --volume=", which tells the docker CLI to split the string as:
empty string (directory to be mounted)
E:/ch/clickhousedb (mounting point inside the container)
/var/lib/clickhouse (mounting mode)
Docker thought "/var/lib/clickhouse" was the mode for the volume mount, hence the error message.
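Based on that explanation, dropping the leading slash so the drive letter is parsed as the host path should work (not verified here):
docker run -d -p 8123:8123 --name clickhousedb --ulimit nofile=262144:262144 \
  --volume=E:/ch/clickhousedb:/var/lib/clickhouse yandex/clickhouse-server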
It seemed to be a permission issue. I was able to mount the root of the E drive:
docker run -d -p 8134:8123 --name clickhousedb --ulimit nofile=262144:262144 --volume=E:/:/var/lib/clickhouse yandex/clickhouse-server

Postgres Docker Image: Failed to map database to host

I'm using the stock official Postgres image from Docker Hub (docker pull postgres). I wanted to map the data directory of the Postgres container to my OS X host, so I tried this:
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass -v `pwd`/data:/var/lib/postgresql/data postgres
This resulted in the Postgres container failing to launch correctly.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... initdb: could not create directory "/var/lib/postgresql/data/global": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
The goal I'm trying to achieve is to have my database data stored on the host machine, so that I can start a postgres container and have it read (or load) the database from a previous instance. Am I on the right track or is this a stupid way to achieve database persistence?
According to the official documentation, you should use boot2docker to resolve the issue; without it, you won't be able to mount the host directory into the container.
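As an alternative that avoids host filesystem permission issues on OS X entirely, you could let Docker manage a named volume instead of a host directory (the name pgdata is just an example):
docker volume create pgdata
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass \
  -v pgdata:/var/lib/postgresql/data postgres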