I'm trying to initialize a database without using the entry point directory.
Here is a minimal Dockerfile:
FROM postgres:latest
ENV POSTGRES_DB db
ENV POSTGRES_USER user
ENV POSTGRES_PASSWORD password
ADD db.sql /directory/
ADD script.sh /directory/
CMD ["sh", "/directory/script.sh"]
# Or ENTRYPOINT ["/directory/script.sh"]?
And script.sh:
psql -d db -U user < /directory/db.sql
This does not work because postgres isn't up when the script is run.
How can I run db.sql without using /docker-entrypoint-initdb.d?
If you look at the standard postgres image's entrypoint script, the mechanism behind the /docker-entrypoint-initdb.d directory is fairly intricate: it bootstraps the data directory and the initial user and database, starts the database server in the background, runs everything in that directory, and finally starts the database for real. If you're trying to replicate this setup, you have to do all of these steps yourself.
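If you really want to bypass that directory, your script has to take over those responsibilities itself. A rough sketch of what script.sh would have to do, assuming it runs as the postgres user and the data directory has already been initialized (both things the official entrypoint normally takes care of):
#!/bin/sh
# start a temporary server in the background and wait for it to accept connections
pg_ctl -D "$PGDATA" -w start
# load the seed data while the temporary server is up
psql -d db -U user -f /directory/db.sql
# stop the temporary server and replace this shell with the real, foreground server
pg_ctl -D "$PGDATA" -m fast -w stop
exec postgres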
There are other ways to set up a database, though. You can create an empty database and then run your application's migrations normally to create the initial schema. If you have an SQL file you'd normally run against the database using the psql client tool, you can do exactly the same thing with Docker:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret --name postgres postgres:12
psql -h localhost -U postgres < db.sql
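Note that the container needs a moment before it starts accepting connections, so you may have to wait before running psql; a minimal sketch using pg_isready, which ships with the postgres client tools:
until pg_isready -h localhost -p 5432; do sleep 1; done
psql -h localhost -U postgres < db.sql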
Related
I'm currently trying to run Fossology in GitLab CI. Fossology requires an external database that can be set up from a schema created using pg_dump. When I try to use psql I get the error in the title.
At the moment, I have a script that sets up a container running the required version of postgres (9.6). It then tries to run an .sql script via psql in the postgres container via docker exec, at which point it gets the error in the title.
I have tried specifying both a port and a host when issuing the psql statement, neither of which worked. I have tried using localhost, 127.0.0.1, the IP address of the postgres container, and the name of the container as the host. I have tried rewriting things in different scripts, but nothing seems to work.
After extensive Google searching, many people seem to get the same error message, but not for the same reasons, and not usually when using a Docker container to host the database.
When I run the contents of my script on the command line, I do not get this error; the script works fine and I can connect to Fossology. The issue only arises when trying to do the same in GitLab CI.
The sequence of steps (i.e. pasted line by line) that works when using the command line on Mac:
# creates blank database and hosts it in a docker container
docker run -d --name fossdb -p 5432:5432 postgres:9.6
docker cp /fossology_db_schema.sql fossdb:/fossy.sql
docker exec -it fossdb bash
psql postgres -U postgres
# creates user needed for database to work with fossology
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
\q
# builds the fossology database in the hosted blank database
psql fossology < fossy.sql
psql postgres -U postgres
\connect fossology
exit
What I am attempting in GitLab CI:
# creates container with postgres image
docker run -d --name fossdb -p 5432:5432 --network foss-net postgres:9.6
# creates blank database (error occurs here)
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) -f ./createBlank.sql -U postgres
# builds fossology database from schema
docker exec fossdb psql -h $(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fossdb) fossology < ./schema.sql -U postgres
createBlank.sql:
create user fossy with password 'fossy';
create database fossology;
grant all privileges on database fossology to fossy;
Expected results: runs createBlank.sql to create a blank database called fossology, then builds the fossology database from the schema
Actual results: psql: could not connect to server: Connection refused
Is the server running on host "172.19.0.2" and accepting
TCP/IP connections on port 5432?
Are you sure you set up postgres completely?
A few quick checks you can perform:
Suggestion 1: Did you tell postgres there is a user with a password? (createuser command)
https://www.postgresql.org/docs/9.2/app-createuser.html
(Excuse me, you DID do that. Goto suggestion 2.)
Suggestion 2: Did you tell postgres that user can connect, and how? (TCP or local sockets)
https://www.postgresql.org/docs/9.2/auth-pg-hba-conf.html
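For suggestion 2, an entry in pg_hba.conf that allows password-authenticated TCP connections from anywhere looks like this (md5 was the usual method in these older releases; narrow the address range as appropriate):
host all all 0.0.0.0/0 md5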
I have a postgres:9.5.6-alpine container, and another container, named web, which has to be linked to it.
I want to run a script named create_db.sh in the postgres container after it has started and docker-entrypoint.sh has been executed, in order to create a db and a user and restore a backup.
My docker-compose.yml (postgres part):
postgres:
build: ./postgres
container_name: postgres
volumes:
- /shared_folder/postgresql:/var/lib/postgresql
ports:
- "5432:5432"
command: sh /home/create_db.sh
The content of create_db.sh is:
#!/bin/sh
psql -d template1 -U postgres
psql --command "CREATE USER user WITH PASSWORD 'userpassword';"
psql --command "CREATE DATABASE userdb;"
psql --command "GRANT ALL PRIVILEGES ON DATABASE userdb to user;"
psql --command "\q;"
psql -U user -d userdb -f /var/lib/postgresql/backup.sql
exit
When I run docker-compose build and then docker-compose up I get this:
Attaching to postgres, web
postgres | psql: could not connect to server: No such file or directory
postgres | Is the server running locally and accepting
postgres | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I understand this is because the postgres server is not ready when create_db.sh is launched, but then how can I run it after the container's docker-entrypoint.sh?
You are overriding the original command, and you do not start postgres in this script, which is why your database is not available.
You can put your database initialization into the container's entrypoint directory, /docker-entrypoint-initdb.d. The entrypoint executes all *.sh and *.sql files in this directory and does not touch the original command.
All files in this directory are executed automatically, in alphabetical order, on container creation. Therefore, create a volume to add your scripts / sql files to the entrypoint and let the container execute them. This is described in the official postgres documentation, in the section "How to extend this image".
Your compose file then changes to something like this:
postgres:
build: ./postgres
volumes:
- /shared_folder/postgresql:/var/lib/postgresql
- ./db-init-scripts:/docker-entrypoint-initdb.d
ports:
- "5432:5432"
where a local directory, e.g. db-init-scripts, contains your initialization scripts (rename it as you like). Copy create_db.sh into this folder and it will be executed automatically when you create a new container.
Several database-images watch this entrypoint-directory, which is very convenient.
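For comparison, inside /docker-entrypoint-initdb.d the server is already running and scripts talk to it over the local socket, so create_db.sh can shrink to something like this sketch (it assumes the default postgres superuser; note that user is a reserved word in SQL, so that name has to be quoted):
#!/bin/sh
set -e
# the entrypoint has already started the server; just talk to it
psql -v ON_ERROR_STOP=1 -U postgres <<'EOSQL'
CREATE USER "user" WITH PASSWORD 'userpassword';
CREATE DATABASE userdb;
GRANT ALL PRIVILEGES ON DATABASE userdb TO "user";
EOSQL
# restore the backup into the new database
psql -v ON_ERROR_STOP=1 -U postgres -d userdb -f /var/lib/postgresql/backup.sql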
Your container_name: postgres seems redundant.
If you are getting an error /bin/sh: bad interpreter: Permission denied, change the execution permission on your script file first, since Docker copies permissions over:
chmod +x your/script.sh
I want to download the DVD Rental sample database from http://www.postgresqltutorial.com/postgresql-sample-database/
The database file is in zip format (dvdrental.zip), so I need to extract it to dvdrental.tar. I have no .tar program on my computer. How can I extract the zip file to dvdrental.tar?
So, I came across the same problem. The way I fixed it was to download:
"The Unarchiver"
Go to Downloads and select your .zip file, open with: "The Unarchiver.app" (you should now see a .tar file)
Go to your terminal and run this command: pg_restore -U postgres -d dvdrental /Users/username/Downloads/dvdrental.tar
Change username to your username; you might have to change the path depending on where your .tar file now is.
You should now see all the tables.
If you use the default Mac unzip program, the dvdrental.tar file will disappear after the unzip finishes. Try another unzip app, such as "The Unarchiver"; it will leave the dvdrental.tar file in your folder.
If you are trying to restore the dvdrental database, use psql. After unarchiving the zip file, run the command below. For example, if you have unarchived it to your Downloads folder:
psql -h localhost -U postgres < ./Downloads/dvdrental/restore.sql
Another option:
Download dvdrental.zip into a directory
From the terminal, go to that directory
then type:
unzip dvdrental.zip
dvdrental.tar should now appear.
For future readers who want to practice with psql in Docker, here's how I did it.
For an overall guide to making a psql container, see the postgresql page on Docker Hub.
Source of the sample database: the PostgreSQL Tutorial page.
create docker volume
$ docker volume create dvdrental
run docker container
$ docker run --name dvdrental -e POSTGRES_PASSWORD=postgres --restart=always -v dvdrental:/var/lib/postgresql/data/ -p 5433:5432 -d postgres:13
I intentionally bound the external port (5433) to the internal port (5432), since I'm already using another database instance on port 5432. If you don't have any postgresql db instance, feel free to change the external one to 5432.
copy dvdrental.tar inside the running container
$ docker cp ~/Downloads/dvdrental.tar dvdrental:/dvdrental.tar
The syntax for copy is docker cp <local_path> <container_name>:/<path_in_container>. The same works for directories.
I thank Mosd for dropping hints on generating the tar file from the zip file.
Create database
$ docker exec -it dvdrental psql -U postgres
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
postgres=# CREATE DATABASE dvdrental;
CREATE DATABASE
postgres=# exit
I got the hint for creating the database in this process from the PostgreSQL tutorial.
pg_restore to populate data
$ docker exec -it dvdrental pg_restore -U postgres -d dvdrental /dvdrental.tar
Your dvdrental database is now populated.
You can make a connection with tools such as DBeaver, with:
database name: dvdrental
user: postgres
password: postgres
port: 5433
Remember that your external port is 5433, as you have set above.
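The same details work from the command line, assuming psql is installed on the host:
psql -h localhost -p 5433 -U postgres -d dvdrental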
I'm trying to build a Debian image in Docker that contains nginx, postgresql and php-fpm. I've managed to get nginx and php-fpm working. Postgres is also working, but I can't add the schema to the database I have created.
The code from the Dockerfile relating to postgres (I got it from the Docker website) is the following:
# Add database
RUN apt-get update && apt-get install -y postgresql-9.4 postgresql-client-9.4 postgresql-contrib-9.4
# Run the rest of the commands as the ``postgres`` user created by the ``postgresql-9.4`` package when it was ``apt-get installed``
USER postgres
# Create a PostgreSQL role named ``user_name`` with ``user_password`` as the password and
# then create a database ``database_name`` owned by the ``user_name`` role.
# Note: here we use ``&&\`` to run commands one after the other - the ``\``
# allows the RUN command to span multiple lines.
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER user_name WITH SUPERUSER PASSWORD 'user_password';" &&\
createdb -O user_name database_name
# Adjust PostgreSQL configuration so that remote connections to the
# database are possible.
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.4/main/pg_hba.conf
# And add ``listen_addresses`` to ``/etc/postgresql/9.4/main/postgresql.conf``
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.4/main/postgresql.conf
# Reload postgres configuration
RUN /etc/init.d/postgresql stop && /etc/init.d/postgresql start
# Add database schema
COPY ./postgresql/database_name.sql /tmp/database_name.sql
RUN psql -U user_name -d database_name -a -f /tmp/database_name.sql
The error I get is
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Is there another way to do this that I have not seen? Do I need to do something more?
The reason you can't connect to the server is that it isn't running. Each instruction in a Dockerfile is processed in a new container, so any processes started by earlier instructions are no longer running, but changes to the filesystem persist (provided they weren't made to a volume).
You could start the database in the same RUN instruction as the psql command, but normally this sort of thing would be done in an ENTRYPOINT or CMD script that runs when the container is started.
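A sketch of that single-RUN approach for the schema import, mirroring the user-creation step earlier in the Dockerfile (it connects as the postgres superuser here, which may be needed to satisfy local socket authentication; adjust to your pg_hba.conf):
RUN /etc/init.d/postgresql start &&\
    psql --username postgres -d database_name -a -f /tmp/database_name.sql &&\
    /etc/init.d/postgresql stop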
By far the best plan is to do as #h3nrik suggests and use the official postgres image. Even if you don't want to do this, it's worth looking at the Dockerfile and scripts (e.g. https://github.com/docker-library/postgres/tree/master/9.5) used to build the official image, to understand how they tackled the same problems.
I would not put the nginx and postgres installations into one single Docker container. I would create a separate postgres container and link to it from the nginx/php-fpm container.
I would base the postgres container on the official one. It also describes how you can add your custom schema to the postgres installation (please have a look at justfalter's comment there).
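A rough docker-compose sketch of that split, in the same v1 syntax used by the compose files above (image tag, credentials and ports are illustrative):
postgres:
  image: postgres:9.4
  environment:
    - POSTGRES_USER=user_name
    - POSTGRES_PASSWORD=user_password
    - POSTGRES_DB=database_name
  volumes:
    - ./postgresql:/docker-entrypoint-initdb.d
web:
  build: .
  links:
    - postgres
  ports:
    - "80:80"
This way the ./postgresql directory holding database_name.sql from the question maps straight onto the entrypoint mechanism described earlier.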
I have a working Postgres Dockerfile that I modify, and unfortunately after applying the modifications the Postgres container stops working as expected. I'd like to ask for an explanation of what I'm doing wrong.
Working example
Here's the Postgres Dockerfile that works and which I modify:
# Use ubuntu image
FROM ubuntu
# Install database
RUN apt-get update && apt-get install -y postgresql-9.3
# Switch to postgres user.
USER postgres
# Create database and user with all privileges to the database.
RUN /etc/init.d/postgresql start && \
psql --command "CREATE DATABASE docker;" && \
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
psql --command "GRANT ALL PRIVILEGES ON DATABASE docker TO docker;"
# Allow remote connections to the database.
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I build it like that:
docker build --tag postgres-image .
Then I create a container:
docker run -d -it -p 32768:5432 --name=postgres postgres-image
And I connect with database:
psql -h localhost -p 32768 -d docker -U docker --password
First modification
I don't need any volumes, because I'm going to use a data-only container to store all the Postgres data. When I remove the line:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and do all the steps as in the working example, I get the following error after entering the password in the last step:
psql: FATAL: the database system is starting up
FATAL: the database system is starting up
So the question is: why do I need the VOLUME instruction in the Dockerfile?
Second modification
This modification doesn't include the first one; the two modifications are independent.
The parameters used in the CMD instruction point to the default Postgres data directory and configuration file, so I wanted to simplify it by setting CMD to the command I always use to start Postgres:
service postgres start
After setting CMD to:
CMD ["service", "postgres", "start]
and doing all the steps as in the working example, I get the following error after entering the password in the last step:
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 32768?
The question is: why does a command that works on my host system not work in the Docker container?
I'm not sure about the first problem. It may be that Postgres doesn't like running on top of a union filesystem (UFS).
The second problem is just that a container exits when its main process ends. The command service postgres start runs, starts Postgres in the background, then immediately exits, and the container halts. The first version works because Postgres stays running in the foreground.
But why are you doing this? Why not just use the official Postgres image?