I have a postgres:9.5.6-alpine container, and another container, named web, which has to be linked to it.
I want to run a script named create_db.sh in the postgres container after it has started and docker-entrypoint.sh has run, in order to create a database and a user and restore a backup.
My docker-compose.yml (postgres part):
postgres:
  build: ./postgres
  container_name: postgres
  volumes:
    - /shared_folder/postgresql:/var/lib/postgresql
  ports:
    - "5432:5432"
  command: sh /home/create_db.sh
The content of create_db.sh is:
#!/bin/sh
psql -d template1 -U postgres
psql --command "CREATE USER user WITH PASSWORD 'userpassword';"
psql --command "CREATE DATABASE userdb;"
psql --command "GRANT ALL PRIVILEGES ON DATABASE userdb to user;"
psql --command "\q;"
psql -U user -d userdb -f /var/lib/postgresql/backup.sql
exit
When I run docker-compose build and then docker-compose up, I get this:
Attaching to postgres, web
postgres | psql: could not connect to server: No such file or directory
postgres | Is the server running locally and accepting
postgres | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've understood this is because the postgres server is not ready when I launch create_db.sh, so how can I run it after the container's docker-entrypoint.sh?
You are overriding the original command, and you do not start postgres in this script, which is why your database is not available.
You can put your database initialization into the container's entrypoint directory, /docker-entrypoint-initdb.d. The entrypoint executes all *.sh and *.sql files in this directory and does not touch the original command.
All files in this directory are executed automatically, in alphabetical order, on container creation. Therefore, create a volume to add your scripts / SQL files to the entrypoint directory and let the container execute them. This is described in the official postgres documentation, section "How to extend this image".
Your compose file then changes to something like this:
postgres:
  build: ./postgres
  volumes:
    - /shared_folder/postgresql:/var/lib/postgresql
    - ./db-init-scripts:/docker-entrypoint-initdb.d
  ports:
    - "5432:5432"
where a local directory, e.g. db-init-scripts, contains your initialization scripts (rename it if you want). Copy create_db.sh into this folder and it will be executed automatically when you create a new container; a sketch of what it could look like follows below.
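For reference, a sketch of create_db.sh adapted for the init directory (a sketch, not your exact script: the role name appuser is only an example, since user is a reserved word in Postgres, and the backup path assumes the shared volume above; the entrypoint already has a temporary server running, so psql can connect right away):

#!/bin/sh
set -e

# Create the application role and database; $POSTGRES_USER is provided by the image's entrypoint.
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname template1 <<EOSQL
CREATE USER appuser WITH PASSWORD 'userpassword';
CREATE DATABASE userdb OWNER appuser;
GRANT ALL PRIVILEGES ON DATABASE userdb TO appuser;
EOSQL

# Restore the backup into the freshly created database.
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname userdb -f /var/lib/postgresql/backup.sql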
Several database images watch this entrypoint directory, which is very convenient.
Your container_name: postgres seems redundant.
If you are getting an error like /bin/sh: bad interpreter: Permission denied, add execute permission to your script file first, since Docker copies permissions over:
chmod +x your/script.sh
I have a very complex environment: there are two services, named service A and service B, which depend on the postgresql service, but A and B use different superusers. So once the postgresql service starts, I need to create another superuser for service B.
This is part of the docker compose file:
postgres:
  image: postgres:13.4
  container_name: postgresql
  hostname: postgresql
  volumes:
    - 'postgres_data:/var/lib/postgresql/data'
  environment:
    POSTGRES_DB: keycloak
    POSTGRES_USER: keycloak
    POSTGRES_PASSWORD: password
  command: ["CREATE ROLE postgres LOGIN SUPERUSER; | ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"]
In the environment section I already set up the default user, password and DB in the postgresql service for service A, but I'd like to create another superuser for service B once the postgresql service has started. Does anyone know how to write the commands? Many thanks.
I think the syntax here is wrong: command: ["CREATE ROLE postgres LOGIN SUPERUSER; | ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"]
The issue with running commands on container startup is that you need the database to be up to be able to run the commands. By overriding the command: on the container, you replace the normal command which is to start Postgres. So with a new command, Postgres will never start.
You can run the command from a different container once Postgres is started. The command-runner container uses psql to connect to the database and run the command. It also uses the postgres docker image, but because it overrides the command on the container, this container doesn't start a database. It only runs the command.
Here's what I've come up with.
version: "3"
services:
postgres:
image: postgres
environment:
- POSTGRES_USER=keycloak
- POSTGRES_PASSWORD=password
- POSTGRES_DB=keycloak
command-runner:
image: postgres
command: /bin/sh -c 'sleep 10 && PGPASSWORD=password psql -U keycloak -h postgres -d keycloak -c "CREATE ROLE postgres LOGIN SUPERUSER; ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"'
depends_on:
- postgres
I had to put a sleep 10 in the command-runner container so that it waits for Postgres to be ready to accept connections.
I also removed the | you had in your command before ALTER USER; I got a syntax error on it.
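If the fixed sleep 10 feels fragile, one variation (just a sketch, not tested against your exact stack) is to poll with pg_isready, which ships in the postgres image, until the server accepts connections:

  command-runner:
    image: postgres
    command: /bin/sh -c 'until pg_isready -h postgres -U keycloak; do sleep 1; done; PGPASSWORD=password psql -U keycloak -h postgres -d keycloak -c "CREATE ROLE postgres LOGIN SUPERUSER; ALTER USER postgres CREATEDB CREATEROLE LOGIN INHERIT REPLICATION BYPASSRLS;"'
    depends_on:
      - postgres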
I have a Docker container that's not accepting credentials for a user that is defined in the docker-compose YAML file. When I go to the Docker console for the container and check users, it only lists postgres. Not sure what I am missing - here's the YAML file:
version: '3.8'
services:
  db:
    container_name: drewreport_container
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: mpassword
      POSTGRES_USER: thedrewreport
      POSTGRES_DB: thedrewreportdb
    ports:
      - "5432:5432"
    volumes:
      - thedrewreportdata:/var/lib/postgresql/data/
volumes:
  thedrewreportdata:
Any ideas?
I can't reproduce your problem. Running docker-compose up, I see:
Creating network "docker_default" with the default driver
Creating volume "docker_thedrewreportdata" with default driver
Creating docker_client_1 ...
Creating docker_db_1 ...
Creating docker_client_1 ... done
Creating docker_db_1 ... done
Attaching to docker_db_1, docker_client_1
[...]
db_1 | 2021-07-19 23:03:39.676 UTC [1] LOG: database system is ready to accept connections
If I then connect with psql, I can authenticate using the username
and password you've defined in your docker-compose.yml:
# psql -h localhost -U thedrewreport thedrewreportdb
Password for user thedrewreport:
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
thedrewreportdb=# \du
List of roles
Role name | Attributes | Member of
---------------+------------------------------------------------------------+-----------
thedrewreport | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
thedrewreportdb=#
Note that any volumes specified in your docker-compose.yml will
persist between a docker-compose down and a docker-compose up, so
if you ever brought your stack up with different credentials, those
will never be replaced unless you explicitly destroy the volume by
running docker-compose down -v.
You can tell that docker-compose is re-using a volume if you don't
see a message like this when you run docker-compose up:
Creating volume "docker_thedrewreportdata" with default driver
If it already exists, the DB data you mount to /var/lib/postgresql/data/ takes precedence over the environment variables, which are only used to initialize a new, empty data directory.
You have 2 options:
Update the existing DB data to add your user / password / database. To do so, you can use docker compose exec db bash and then connect with the psql command to make your changes (see the sketch after these options).
Delete or move your existing thedrewreportdata local volume, for instance updating it to ./thedrewreportdata_postgres:/var/lib/postgresql/data/
Once done, you can use docker compose exec db psql --username thedrewreport --dbname thedrewreportdb to double-check that you can connect to the updated DB with your credentials.
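For the first option, a minimal sketch (assuming the existing cluster was initialized with the default postgres superuser, which matches the single role you saw; the names and password are the ones from your compose file, so adjust as needed):

docker compose exec db psql -U postgres -c "CREATE USER thedrewreport WITH PASSWORD 'mpassword';"
docker compose exec db psql -U postgres -c "CREATE DATABASE thedrewreportdb OWNER thedrewreport;"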
I'm trying to initialize a database without using the entry point directory.
Here is a minimal Dockerfile:
FROM postgres:latest
ENV POSTGRES_DB db
ENV POSTGRES_USER user
ENV POSTGRES_PASSWORD password
ADD db.sql /directory/
ADD script.sh /directory/
CMD ["sh", "/directory/script.sh"]
# Or ENTRYPOINT ["/directory/script.sh"]?
And script.sh:
psql -d db -U user < /directory/db.sql
This does not work because postgres isn't up when the script is run.
How can I run db.sql without using /docker-entrypoint-initdb.d?
If you look at the standard postgres image's entrypoint script, the mechanism supporting the /docker-entrypoint-initdb.d directory is pretty intricate: it needs to bootstrap the database directory and the initial user and database, then it starts the database server in the background and runs everything in that directory, and finally it runs the database for real. If you're trying to replicate this setup, you have to do all of these steps yourself; a rough sketch follows below.
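Very roughly, a sketch of what those steps look like (untested; it assumes the cluster in $PGDATA has already been initialized, and it relies on the gosu binary shipped in the Debian-based image; the real entrypoint also handles initdb, passwords, pg_hba.conf and more):

#!/bin/sh
set -e
# Start a temporary server in the background as the postgres user, local socket only.
gosu postgres pg_ctl -D "$PGDATA" -o "-c listen_addresses=''" -w start
# Load the SQL file.
gosu postgres psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f /directory/db.sql
# Stop the temporary server, then run the real server in the foreground.
gosu postgres pg_ctl -D "$PGDATA" -m fast -w stop
exec gosu postgres postgres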
There are other ways to set up a database, though. You can create an empty database and then run your application's migrations normally to create the initial schema. If you have an SQL file that you'd normally run against a running database with the psql client tool, you can do exactly the same thing with Docker:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret --name postgres postgres:12
psql -h localhost -U postgres < db.sql
I want to download the DVD Rental sample database from http://www.postgresqltutorial.com/postgresql-sample-database/
The database file is in zip format (dvdrental.zip), so I need to extract it to dvdrental.tar. I have no .tar program on my computer. How can I extract the zip file to dvdrental.tar?
So, I came across the same problem. The way I fixed it was to download:
"The Unarchiver"
Go to Downloads and open your .zip file with "The Unarchiver.app" (you should now see a .tar file).
Go to your terminal and run this command: pg_restore -U postgres -d dvdrental /Users/username/Downloads/dvdrental.tar
Change username to your username; you might also have to change the path depending on where your .tar file now is.
You should now see all the tables.
If you use the default Mac unzip program, the dvdrental.tar file will disappear after the unzip finishes. Try another unzip app, such as "The Unarchiver"; it will leave the dvdrental.tar file in your folder.
If you are trying to restore the dvdrental database, use psql. After unarchiving the zip file, run the command below. For example, if you have unarchived it to the ./Downloads folder:
psql -h localhost -U postgres < ./Downloads/dvdrental/restore.sql
Another option:
Download the dvdrental.zip to a directory
from the terminal, go to that directory
then type:
unzip dvdrental.zip
dvdrental.tar should now appear.
For future readers who want to practice with PostgreSQL in Docker, here's how I did it.
For the overall guide to running a postgres container, see the postgres page on Docker Hub.
Source of the sample database: the PostgreSQL tutorial page.
create docker volume
$ docker volume create dvdrental
run docker container
$ docker run --name dvdrental -e POSTGRES_PASSWORD=postgres --restart=always -v dvdrental:/var/lib/postgresql/data/ -p 5433:5432 -d postgres:13
I intentionally bound the external port (5433) to the internal port (5432), since I'm already using another database instance on port 5432. If you don't have any other postgresql instance, feel free to change the external port to 5432.
copy dvdrental.tar inside the running container
$ docker cp ~/Downloads/dvdrental.tar dvdrental:/dvdrental.tar
The syntax for copy is docker cp <file_name> <container_name>:/<file_name>. The same works for a directory.
Thanks to Mosd for the hints on generating the .tar file from the .zip.
Create database
$ docker exec -it dvdrental psql -U postgres
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
postgres=# CREATE DATABASE dvdrental;
CREATE DATABASE
postgres=# exit
I got the hint for creating the database in this step from the PostgreSQL tutorial.
pg_restore to populate data
$ docker exec -it dvdrental pg_restore -U postgres -d dvdrental /dvdrental.tar
Your dvdrental database is now populated.
You can now connect with tools such as DBeaver using:
database name: dvdrental
user: postgres
password: postgres
port: 5433
Remember that your external port is 5433, as you have set above.
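If you prefer psql to a GUI, the equivalent connection (using the same credentials and the 5433 port mapping chosen above) would be:

psql -h localhost -p 5433 -U postgres -d dvdrental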
I have a working Postgres Dockerfile that I modify, and unfortunately after applying the modifications the Postgres container stops working as expected. I'd like to ask you to explain what I'm doing wrong.
Working example
Here's the Postgres Dockerfile that works and which I modify:
# Use ubuntu image
FROM ubuntu
# Install database
RUN apt-get update && apt-get install -y postgresql-9.3
# Switch to postgres user.
USER postgres
# Create database and user with all privileges to the database.
RUN /etc/init.d/postgresql start && \
psql --command "CREATE DATABASE docker;" && \
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
psql --command "GRANT ALL PRIVILEGES ON DATABASE docker TO docker;"
# Allow remote connections to the database.
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I build it like that:
docker build --tag postgres-image .
Then I create a container:
docker run -d -it -p 32768:5432 --name=postgres postgres-image
And I connect with database:
psql -h localhost -p 32768 -d docker -U docker --password
First modification
I don't need any volumes because I'm going to use a data-only container that will store all Postgres data. When I remove the line:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and do all the steps as in the working example, I get the following error after entering the password in the last step:
psql: FATAL: the database system is starting up
FATAL: the database system is starting up
So the question is: Why do I need VOLUME instruction in the Dockerfile?
Second modification
This modification doesn't include the first one; both modifications are independent.
The parameters used in the CMD instruction point to the default Postgres data directory and configuration file, so I wanted to simplify it by setting CMD to the command I always use to start Postgres:
service postgres start
After setting CMD to:
CMD ["service", "postgres", "start"]
and doing all the steps as in the working example, I get the following error after entering the password in the last step:
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 32768?
The question is: why does a command that works on my host system not work in the Docker container?
I'm not sure about the first problem. It may be that Postgres doesn't like running on top of the UFS.
The second problem is just that a container will exit when its main process ends. So the command service postgres start runs, starts Postgres in the background, then immediately exits, and the container halts. The first version works because Postgres stays running in the foreground.
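If you really wanted a service-style start, you would have to chain it with a long-running foreground process so the container does not exit. A workaround sketch, reusing the init script your RUN step already calls (the log path is the Ubuntu default for 9.3, so adjust if yours differs), although keeping postgres itself in the foreground as in your original CMD is cleaner:

CMD /etc/init.d/postgresql start && tail -F /var/log/postgresql/postgresql-9.3-main.log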
But why are you doing this? Why not just use the official Postgres image?