Configure postgres in a dockerfile [duplicate] - postgresql

This question already has answers here: How to create User/Database in script for Docker Postgres.
I'm feeling a bit stupid, but...
All I want is a Dockerfile that lets me configure the db and the user, to replace a script like:
docker run --name $CONTAINER_NAME \
-p 5432:5432 \
-e POSTGRES_PASSWORD=mypass \
-e POSTGRES_DB=mydb \
-d postgres:13.5-alpine
###
### artificial wait here
###
docker exec $CONTAINER_NAME bash -c "psql --username postgres --command \"CREATE USER foo WITH SUPERUSER ENCRYPTED PASSWORD 'bar';\""
I.e. something like this (I thought):
FROM postgres:13.5-alpine
EXPOSE 5432
ENV POSTGRES_DB mydb
ENV POSTGRES_PASSWORD mypass
CMD psql --username postgres --command "CREATE USER foo WITH SUPERUSER ENCRYPTED PASSWORD 'bar';"
except this results in
psql: error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
So what's the proper way to do this?

The Dockerfile is for building the image, and the CMD runs when the container is run. So the reason that
docker run --name $CONTAINER_NAME \
-p 5432:5432 \
-e POSTGRES_PASSWORD=mypass \
-e POSTGRES_DB=mydb \
-d postgres:13.5-alpine
###
### artificial wait here
###
docker exec $CONTAINER_NAME bash -c "psql --username postgres --command \"CREATE USER foo WITH SUPERUSER ENCRYPTED PASSWORD 'bar';\""
this worked is that the container (db) is already up and running before you send the SQL query to it. But if you try to do it from the Dockerfile, it will literally try to run the psql command before the db is up and running, hence the connection error.
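If you do keep the docker run + docker exec approach, you can at least replace the artificial wait with a poll on pg_isready, which ships inside the postgres image. A minimal sketch, reusing the variables from the question:
# keep polling until the server accepts connections
until docker exec $CONTAINER_NAME pg_isready --username postgres >/dev/null 2>&1; do
  sleep 1
done
docker exec $CONTAINER_NAME psql --username postgres --command "CREATE USER foo WITH SUPERUSER ENCRYPTED PASSWORD 'bar';"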
Your best bet, though, is to use Docker Compose and add an initialization script:
version: "3"
services:
  db:
    image: postgres:13.5-alpine
    environment:
      POSTGRES_PASSWORD: mypass
      POSTGRES_DB: mydb
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
Something like this (excuse any typos, I'm typing from memory here).
See https://hub.docker.com/_/postgres for more info on initialising scripts
It's basically a SQL or bash script that you write and then mount into the container's /docker-entrypoint-initdb.d directory; the image runs it on first container startup.
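For example, a minimal sketch following the pattern from the postgres image docs (the file name init-user.sh is my own choice): any *.sh or *.sql file in /docker-entrypoint-initdb.d runs once, when the data directory is first initialized:
#!/bin/bash
# /docker-entrypoint-initdb.d/init-user.sh - runs only on first startup,
# after the server already accepts local connections.
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<EOSQL
CREATE USER foo WITH SUPERUSER ENCRYPTED PASSWORD 'bar';
EOSQL
Mount it with an extra entry under the service's volumes, e.g. - ./init-user.sh:/docker-entrypoint-initdb.d/init-user.sh.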

Related

How to combine exec commands in shell script

I am running PostgreSQL on Docker.
In order to access the container I run
winpty docker exec -it postgres_db bash
Now that I am inside the docker container, I run this to connect as user 'postgres' to the postgres server:
psql -U postgres
Then what I want to do is create a database
create database test;
How can I do this sequentially in a script?
I want something like this:
#!/bin/bash
winpty docker run -p 8005:5432 --name postgres_db -e POSTGRES_PASSWORD=password -d postgres
sleep 5
winpty docker exec -it postgres_db bash -c "psql -U postgres" [-c "create database test;"]
create database cannot be executed if I'm not inside "psql -U postgres". Obviously the last part, -c "create database test;", is wrong; it's just to make you understand what I want to do.
winpty docker exec -it postgres_db bash -c "psql -U postgres -c 'CREATE DATABASE test;'"
Both bash -c and psql -c take one shell word as input. A single or double quoted string is one word. You need to nest the quotes, just like the -c commands are nested.
Alternative:
winpty docker exec -it postgres_db bash -c "psql -U postgres -c \"CREATE DATABASE test;\""
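If the nesting gets deeper than one level, a heredoc sidesteps it entirely. A sketch (the -t flag is dropped so stdin can carry the script; winpty may not pass stdin through cleanly):
docker exec -i postgres_db psql -U postgres <<'SQL'
CREATE DATABASE test;
SQL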

docker-compose can't find file to inject to Postgresql when running a command

I'm trying to restore a .backup file to a Postgresql database. For that, I use a docker-compose file to launch a postgres docker container:
docker-compose.yml
postgresql:
  image: postgres
  restart: always
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER:postgres
    # - PGDATA:/var/lib/postgresql/data/pgdata/
  volumes:
    - ${PWD}/project/data/MX/bkup/data:/var/lib/postgresql
  command:
    - pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup
When I run my docker-compose file using the command:
docker-compose up postgresql
I get the error:
(virtual) med@nid:~/projects/project/pkg$ docker-compose up postgresql
Recreating pkg_postgresql_1 ...
Recreating pkg_postgresql_1 ... done
Attaching to pkg_postgresql_1
postgresql_1 | /usr/local/bin/docker-entrypoint.sh: line 176: /pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup: No such file or directory
pkg_postgresql_1 exited with code 127
This happens even though the file is inside the volume
med@nid:~/projects/project/pkg$ docker-compose exec postgresql bash
root@ab7dbe2b0232:/# cd /var/lib/postgresql/
root@ab7dbe2b0232:/var/lib/postgresql# ls -l
total 1054780
drwx------ 19 postgres postgres 4096 Oct 2 08:51 data
-r--r--r-- 1 postgres ssl-cert 1080082803 Sep 27 15:40 ph.backup
I tried to use the -h argument of pg_restore in the docker-compose command:
pg_restore -h tcp://`docker-machine ip default`:5432 -U postgres -d postgres /var/lib/postgresql/ph.backup
What works:
If I comment out the command target in the docker-compose.yml, launch the docker container, and run the command inside it, the data gets injected!
Is there a fix for this? Meaning, is there a way to make the command work directly from the docker-compose.yml file?
There are two forms of the Docker Compose command: directive. You should move the command up onto the same line:
command: pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup
The form you have spells out each argument individually (as a YAML list); for example:
command:
  - /bin/ls
  - -l
  - -r
  - -t
(Also consider just installing the PostgreSQL client tools on your host and running this outside of Docker; use localhost as the host name and the first number from the database container's ports: as the port number.)
The error comes from the form of the command configuration. If you inspect the finished postgres container (docker inspect <container-id>) the entrypoint and command look like this:
"Cmd": [
"pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup"
],
"Entrypoint": [
"docker-entrypoint.sh"
]
That practically means that the default entrypoint script docker-entrypoint.sh is executed with one argument, which is the whole pg_restore command. At line 176 the script execs the passed-in arguments (exec "$@").
The exec command needs a command and a list of arguments:
exec [command [arguments]]
but in this case the command is the full string formed by pg_restore and its arguments, which obviously is not a valid file.
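You can reproduce this in any shell; a quick illustration:
bash -c 'exec "ls -l"'   # fails: "ls -l" is looked up as a single program name
bash -c 'exec ls -l'     # works: ls is the command, -l its argument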
Now, if you change command in docker-compose.yml to:
command: pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup
inspecting the container shows the following:
"Cmd": [
"pg_restore",
"-U",
"postgres",
"-d",
"postgres",
"/var/lib/postgresql/ph.backup"
]
which means that exec will run pg_restore as the command, passing the rest as arguments, and everything works as expected.
Alternatively, you could override the entrypoint in the docker-compose file to execute the command in a shell:
entrypoint: /bin/bash -c
command:
  - pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup
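With that override, the single list item under command: becomes the script string that bash executes; in other words, it is equivalent to running:
/bin/bash -c 'pg_restore -U postgres -d postgres /var/lib/postgresql/ph.backup'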

Customize the configuration of the official PostgreSQL docker image

I am using the official postgresql docker image (version 9.4). I have extended the Dockerfile so I can alter the settings in postgresql.conf etc. using a bash script. It successfully adds and runs the script on entrypoint for a single sed command. But when I put 2 or more sed commands, I get the following error:
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/config.sh
: No such file or directoryread
/var/lib/postgresql/data/postgresql.conf
I am trying on Windows 10, in combination with Vagrant and VirtualBox, using NFS file system on shared folders, via the vagrant-winnfsd plugin.
Why is this happening? How can I alter my bash script in order to work with more configuration settings? Is there a better way?
Dockerfile:
FROM postgres:9.4
RUN echo "Europe/Athens" > /etc/timezone \
&& dpkg-reconfigure -f noninteractive tzdata
RUN localedef -i el_GR -c -f UTF-8 -A /usr/share/locale/locale.alias el_GR.UTF-8
ADD config.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/config.sh
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
config.sh:
#!/bin/bash
sed -i -e"s/^#logging_collector = off.*$/logging_collector = on/" /var/lib/postgresql/data/postgresql.conf
sed -i -e"s/^max_connections = 100.*$/max_connections = 1000/" /var/lib/postgresql/data/postgresql.conf
database.yml
postgres:
container_name: postgres-9.4
image: ***/postgres-9.4
volumes_from:
- postgres_data
ports:
- 5432:5432
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
- POSTGRES_DB=database
- USERMAP_UID=999
- USERMAP_GID=999
postgres_data:
container_name: postgres_data
image: ***/postgres-9.4
volumes:
- ./services/postgres:/etc/postgresql
- ./services/postgres:/var/lib/postgresql
- ./services/postgres/logs:/var/log/postgresql
command: "true"
You might want to try using a RUN statement to execute your bash script or just run sed directly with both commands combined with a semicolon:
RUN sed -i -e 's/^#\(logging_collector = \).*/\1on/; s/^\(max_connections = \).*/\11000/' \
/var/lib/postgresql/data/postgresql.conf
A more scalable solution would be to put the sed program in an external file, then use these statements:
ADD postgres-edit.sed /var/local
RUN sed -i -f /var/local/postgres-edit.sed /var/lib/postgresql/data/postgresql.conf
postgres-edit.sed:
# sed script to edit postgresql configuration
s/^#\(logging_collector = \).*/\1on/
s/^\(max_connections = \).*/\11000/
Seems like a duplicate of How to customize the configuration file of the official PostgreSQL docker image?.
Copy-paste of my answer at https://stackoverflow.com/a/40598124/385548.
Inject custom postgresql.conf into postgres Docker container
The default postgresql.conf file lives within the PGDATA dir (/var/lib/postgresql/data), which makes things more complicated, especially when running the postgres container for the first time, since the docker-entrypoint.sh wrapper invokes the initdb step to initialize the PGDATA dir.
To customize PostgreSQL configuration in Docker consistently, I suggest using config_file postgres option together with Docker volumes like this:
Production database (PGDATA dir as Persistent Volume)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-v $CUSTOM_DATADIR:/var/lib/postgresql/data \
-e POSTGRES_USER=postgres \
-p 5432:5432 \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Testing database (PGDATA dir will be discarded after docker rm)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-e POSTGRES_USER=postgres \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Debugging
Remove the -d (detach) option from the docker run command to see the server logs directly.
Connect to the postgres server with a psql client and query the configuration:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c 'exec psql -h $POSTGRES_PORT_5432_TCP_ADDR -p $POSTGRES_PORT_5432_TCP_PORT -U postgres'
psql (9.6.0)
Type "help" for help.
postgres=# SHOW all;
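To check a single setting, e.g. that the mounted file was actually picked up, SHOW works the same way; it should report the path passed via -c config_file:
postgres=# SHOW config_file;
     config_file
----------------------
 /etc/postgresql.conf
(1 row)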

How to start Phoenix by using PostgreSQL through container?

I tried:
$ alias psql="docker exec -ti pg-hello-phoenix sh -c 'exec psql -h localhost -p 5432 -U postgres'"
$ mix ecto.create
but got:
** (RuntimeError) could not find executable psql in path, please guarantee it is available before running ecto commands
lib/ecto/adapters/postgres.ex:106: Ecto.Adapters.Postgres.run_with_psql/2
lib/ecto/adapters/postgres.ex:83: Ecto.Adapters.Postgres.storage_up/1
lib/mix/tasks/ecto.create.ex:34: anonymous fn/2 in Mix.Tasks.Ecto.Create.run/1
(elixir) lib/enum.ex:604: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:604: Enum.each/2
(mix) lib/mix/cli.ex:58: Mix.CLI.run_task/2
(elixir) lib/code.ex:363: Code.require_file/2
Also I tried to create a wrapper script at /usr/local/bin/psql:
#!/usr/bin/env bash
docker exec -ti pg-hello-phoenix sh -c "exec psql -h localhost -p 5432 -U postgres $@"
and then:
$ sudo chmod +x /usr/local/bin/psql
check:
$ which psql
/usr/local/bin/psql
$ psql --version
psql (PostgreSQL) 9.5.1
run again:
$ mix ecto.create
** (Mix) The database for HelloPhoenix.Repo couldn't be created, reason given: cannot enable tty mode on non tty input
Container with PostgreSQL is launched:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
013464d7227e postgres "/docker-entrypoint.s" 47 minutes ago Up 47 minutes 5432/tcp pg-hello-phoenix
I was able to do this by going into /config/<env>.exs (in my case it was development, so /config/dev.exs), leaving the hostname as localhost but adding another setting, port: 32768, because that's the port that docker exposed.
Make sure to put a space between port: and the number (not a string). Otherwise it won't work.
Worked as usual after that. The natural assumption is that the username/password matches on the container as well.
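If you are unsure which host port Docker mapped, you can ask Docker directly and put that number in config/dev.exs. A sketch, assuming the container port was actually published with -p or -P (the container name is the one from the question):
$ docker port pg-hello-phoenix 5432
0.0.0.0:32768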
In my case, I did the following:
sudo docker exec -it postgres-db bash
After I got the interactive shell
psql -h localhost -p 5432 -U postgres
Then I create my db manually:
CREATE DATABASE cars_dev;
Then finally:
mix ecto.migrate
Everything worked fine after that :) Hope it helps.
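Those interactive steps can also be collapsed into one non-interactive command, in the same spirit as the quoting answer above:
sudo docker exec postgres-db psql -U postgres -c "CREATE DATABASE cars_dev;"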

How to generate a Postgresql Dump from a Docker container?

I would like to have a way to enter into the Postgresql container and get a data dump from it.
Use the following command from a UNIX or a Windows terminal:
docker exec <container_name> pg_dump <database_name> > backup
The following command will dump only inserts from all tables:
docker exec <container_name> pg_dump --column-inserts --data-only <database_name> > inserts.sql
I have a container named postgres with a mounted volume -v /backups:/backups.
To back up the gzipped DB my_db I use:
docker exec postgres pg_dump -U postgres -F t my_db | gzip >/backups/my_db-$(date +%Y-%m-%d).tar.gz
Now I have
user@my-server:/backups$ ls
my_db-2016-11-30.tar.gz
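For completeness, the matching restore would be something like the following sketch; it assumes the same container and mount, and that my_db already exists:
docker exec postgres bash -c "gunzip -c /backups/my_db-2016-11-30.tar.gz | pg_restore -U postgres -d my_db"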
Although the mountpoint solution above looked promising, the following is the only solution that worked for me after multiple iterations:
docker run -it -e PGPASSWORD=my_password postgres:alpine pg_dump -h hostname -U my_user my_db > backup.sql
What was unique in my case: I have a password on the database that needs to be passed in; I needed to pass in the tag (alpine); and finally the host's version of the psql tools differed from the docker version's.
This one, using the container name instead of the database schema, works for me:
docker exec {container_name} pg_dump -U {user_name} > {backup_file_name}
In my instance, the database name, user, and password are declared in docker-compose.yaml.
I hope it helps someone.
For those who struggled with permissions, I used the following command with success to perform my dump:
docker exec -i MY_CONTAINER_NAME /bin/bash -c "PGPASSWORD=MY_PASSWORD pg_dump -Fc -h localhost -U postgres MY_DB_NAME" > /home/MY_USER/db-$(date +%d-%m-%y).backup
This will mount the current working directory and include your environment variables:
docker run -it --rm \
--env-file <(env) \
-w /working \
--volume $(pwd):/working \
postgres:latest /usr/bin/pg_dump -Fc -h localhost -U postgres MY_DB_NAME > /working/db-$(date +%d-%m-%y).backup
Another workaround is to start postgresql with a mountpoint to the location of the dump in docker,
like docker run -v <location of the files>.
Then perform a docker inspect on the running container:
docker inspect <container_id>
You can find a "Volumes" tag inside and a corresponding location. Go to that location and you will find all the postgresql/mysql files. It worked for me. Let us know if it works for you also.
Good luck
To run the dump against a container that has a Postgres user and password, you need the credentials available as environment variables for the client (PGPASSWORD is what pg_dump reads):
For example:
docker run -it --rm --link <container_name>:<data_container_name> -e PGPASSWORD=<password> postgres /usr/bin/pg_dump -h <data_container_name> -d <database_name> -U <postgres_username> > dump.sql
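And to load such a plain-SQL dump back in, the same linked-container pattern works with psql reading the file from stdin; a sketch with the same placeholders:
docker run -i --rm --link <container_name>:<data_container_name> -e PGPASSWORD=<password> postgres psql -h <data_container_name> -U <postgres_username> -d <database_name> < dump.sql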