I am trying to launch a Docker container that holds 2 PostgreSQL databases. I am executing the .sql scripts by injecting them into /docker-entrypoint-initdb.d/, but only one script gets executed.
Dockerfile:
FROM postgres
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY /appdb.sql /docker-entrypoint-initdb.d/
COPY /userdb.sql /docker-entrypoint-initdb.d/
Here only userdb.sql executes. How can I change it so that both scripts execute in one go?
Related
My application needs 3 databases, all PostgreSQL.
How can I launch all 3 databases in a single container in one shot? All 3 have different tables and scripts; for each of them, *.sql files are executed by copying them in the Dockerfile.
I tried the conventional way, but it didn't work.
Dockerfile:
FROM postgres
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV POSTGRES_DB my_db_dev
COPY /devdb.sql /docker-entrypoint-initdb.d/
ENV POSTGRES_DB my_db_test
COPY /testdb.sql /docker-entrypoint-initdb.d/
ENV POSTGRES_DB my_db_prod
COPY /proddb.sql /docker-entrypoint-initdb.d/
Here, only the last DB (my_db_prod) comes up.
How can I make all 3 available at once?
The ENV Dockerfile instruction sets an environment variable at build time, and each later ENV for the same name overwrites the earlier value, so the final image ends up with POSTGRES_DB=my_db_prod.
The COPY instructions also run at build time; each one adds its file to /docker-entrypoint-initdb.d/, so all three .sql files are present in the image.
The database itself is not created at build time. It is created at run time by the entrypoint, which reads POSTGRES_DB (my_db_prod, the last value set) and then runs the scripts in /docker-entrypoint-initdb.d/. That is why only one database comes up.
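If it helps, you can verify this by inspecting the built image (the tag is just an example):
docker build -t three-db-attempt .
docker image inspect three-db-attempt --format '{{.Config.Env}}'
The output contains a single POSTGRES_DB=my_db_prod entry, even though all three .sql files are present in /docker-entrypoint-initdb.d/.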
To create multiple databases, you can merge the three scripts into a single one, or better, keep one script per DB and have the scripts read different ENVs:
COPY ./create_second_db.sql /docker-entrypoint-initdb.d/create_second_db.sql
COPY ./create_third_db.sql /docker-entrypoint-initdb.d/create_third_db.sql
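For illustration, here is a minimal sketch of such a script (the file name and the POSTGRES_MULTIPLE_DATABASES variable are hypothetical, e.g. set to "appdb,userdb") that creates one database per comma-separated entry:
#!/bin/bash
# create-multiple-dbs.sh -- place in /docker-entrypoint-initdb.d/
set -e
for db in $(echo "$POSTGRES_MULTIPLE_DATABASES" | tr ',' ' '); do
  echo "Creating database: $db"
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<EOSQL
CREATE DATABASE $db;
GRANT ALL PRIVILEGES ON DATABASE $db TO $POSTGRES_USER;
EOSQL
done
The entrypoint runs executable *.sh files in that directory after initdb, so every database exists before the server starts accepting outside connections.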
Here is a complete example that can save some time.
I followed this approach and it worked for me.
My docker-compose.yml:
version: '3.7'
services:
  db:
    image: postgres:10.5
    container_name: pg
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - /initdb:/docker-entrypoint-initdb.d
    ports:
      - "5432:5432"
I mount a folder named initdb from the Docker host into the postgres container as above.
Inside initdb there are 2 .sql files, and both scripts ran successfully.
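To confirm both scripts ran, you can list the databases inside the container (the container name pg comes from the compose file above):
docker exec -it pg psql -U postgres -l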
My init.sql file is:
CREATE USER postgres WITH PASSWORD '123qwe';
CREATE DATABASE gmta_database ;
GRANT ALL PRIVILEGES ON DATABASE gmta_database TO postgres;
And the Dockerfile is:
FROM postgres:latest
COPY init.sql /docker-entrypoint-initdb.d/
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD 123qwe
ENV POSTGRES_DB docker_pg
EXPOSE 5432
COPY gtma_latest.sql /
VOLUME /var/lib/postgresql/data
RUN pg_restore -U postgres -d docker_pg < gtma_latest.sql
When I run the command docker build -t gmta-test-vol:1.0.0 ., I get an error like this:
Sending build context to Docker daemon 358.4kB
Step 1/7 : FROM postgres:latest
---> b97bae343e06
Step 2/7 : COPY init.sql /docker-entrypoint-initdb.d/
---> Using cache
---> 6f275b44db01
Step 3/7 : ENV POSTGRES_USER postgres
---> Using cache
---> 039924093b36
Step 4/7 : ENV POSTGRES_PASSWORD 123qwe
---> Using cache
---> 5e636686a2f7
Step 5/7 : ENV POSTGRES_DB docker_pg
---> Using cache
---> 9c0a773c138c
Step 6/7 : COPY gtma_latest.sql /
---> Using cache
---> 8dd99f79b403
Step 7/7 : RUN pg_restore -U postgres -d docker_pg < gtma_latest.sql
---> Running in 1e0a85650eb1
pg_restore: error: connection to database "docker_pg" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I want to create the database and restore the dump data with a single docker command.
How can I solve the problem?
Is it possible to RUN the pg_restore command without creating the docker container first?
You should not import data at build time: the DB server is not running then, and because the data directory is declared as a VOLUME, the import would not persist into subsequent layers anyway.
All you need is to add this:
COPY gtma_latest.sql /docker-entrypoint-initdb.d/
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.
postgres init
How can I solve the problem?
Just place the SQL file into /docker-entrypoint-initdb.d/ and the Postgres container will take care of it.
Is it possible to RUN the pg_restore command without creating the docker container first?
No. Place the COPY at build time; the entrypoint will then create and populate the database whenever a container is started.
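One way to verify the fix, reusing the names from the question: drop the RUN pg_restore line, rebuild, start a container, and query it once initialization finishes:
docker build -t gmta-test-vol:1.0.0 .
docker run -d --name pg-test gmta-test-vol:1.0.0
docker exec -it pg-test psql -U postgres -d docker_pg -c '\dt'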
Docker builds the image in steps, so when you RUN pg_restore there is no live server socket at that moment. You would need to combine it with starting the server in the same step, for example:
RUN /etc/init.d/postgresql restart && pg_restore -U postgres -d docker_pg < gtma_latest.sql
I want to create a database in PostgreSQL and restore a backup in a docker container. I am able to create the database, run the docker container, and then run pg_restore to restore the backup.
My Dockerfile is:
FROM postgres:latest
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD 123qwe
ENV POSTGRES_DB docker_pg
COPY createTable.sql /docker-entrypoint-initdb.d/
VOLUME /var/lib/postgresql/data
Then I run the command to restore the backup:
docker exec -i 0d96d6b59d74 pg_restore -U postgres -d docker_pg < backup_latest.sql
It is working fine.
But my requirement is that when I run the command to create the docker container, both the database creation and the backup restore are done at the same time, i.e. at container creation.
How can I do this?
But my requirement is that when I run the command to create the docker container, both the database creation and the backup restore are done at the same time, i.e. at container creation.
Both tasks can be performed by the Docker container; all you need is to also place the restore script in the docker-entrypoint-initdb.d folder.
COPY createTable.sql /docker-entrypoint-initdb.d/a_createTable.sql
COPY backup_latest.sql /docker-entrypoint-initdb.d/
Since createTable.sql is renamed to a_createTable.sql, it sorts first: the tables are created first and the backup is restored afterwards.
These initialization files will be executed in sorted name order as
defined by the current locale
Initialization scripts
Or the other option is to create a single SQL file where the order is
ALL DDL
# then
ALL DML
so something like
COPY db.sql /docker-entrypoint-initdb.d/
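Assuming backup_latest.sql is a plain-SQL dump (which the COPY approach above already requires), a minimal way to produce that single file from the two existing scripts is:
cat createTable.sql backup_latest.sql > db.sql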
I'm using a Dockerfile to build an Ubuntu image that installs PostgreSQL, but I can't get the build to wait for the postgresql service to be up.
FROM ubuntu:18.04
....
RUN apt-get update && apt-get install -y postgresql-11
RUN service postgresql start
RUN su postgres
RUN psql
RUN CREATE USER kong; CREATE DATABASE kong OWNER kong;
RUN \q
RUN exit
Everything seems okay, but RUN su postgres throws an error because the postgresql service is no longer running by the time that instruction executes, even though RUN service postgresql start came before it. How can I make this work?
First thing: each RUN command in a Dockerfile runs in a separate shell, and RUN should be used for installation or configuration, not for starting a process. The process should be started in CMD or the entrypoint.
RUN vs CMD
Better to use the official postgres docker image:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The default postgres user and database are created in the entrypoint with initdb.
Or you can build your image based on postgres:
FROM postgres:11
ENV POSTGRES_USER=kong
ENV POSTGRES_PASSWORD=example
COPY seed.sql /docker-entrypoint-initdb.d/seed.sql
This will create an image with the user and password set, and the entrypoint will also insert the seed data when the container starts.
POSTGRES_USER
This optional environment variable is used in conjunction with
POSTGRES_PASSWORD to set a user and its password. This variable will
create the specified user with superuser power and a database with the
same name. If it is not specified, then the default user of postgres
will be used.
Some advantages of the official docker image:
Create DB from ENV
Create DB user from ENV
Start container with seed data
Will wait for postgres to be up and running
All you need
# Use postgres/example user/password credentials
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
Initialization scripts
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files, run any executable *.sh
scripts, and source any non-executable *.sh scripts found in that
directory to do further initialization before starting the service.
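As a sketch of such a script (the file name and table are made up for illustration), an executable *.sh placed in that directory could seed data like this:
#!/bin/bash
# 01-seed.sh -- runs after initdb has created $POSTGRES_USER and $POSTGRES_DB
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<EOSQL
CREATE TABLE IF NOT EXISTS seed_check (id serial PRIMARY KEY, note text);
INSERT INTO seed_check (note) VALUES ('initialized');
EOSQL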
I'm looking to build Dockerfiles that represent company databases that already exist. Similarly, I'd like to create a Dockerfile that starts by restoring a psql dump.
I have my psql_dump.sql in the . directory.
FROM postgres
ADD . /init_data
run "createdb" "--template=template0" "my_database"
run "psql" "-d" "my_database" --command="create role my_admin superuser"
run "psql" "my_database" "<" "init_data/psql_dump.sql"
I thought this would be good enough to do it. I'd like to avoid solutions that use a .sh script. Like this solution.
I use template0 since the psql documentation says you need the same users created that were in the original database, and you need to create the database with template0 before you restore.
However, it gives me an error:
createdb: could not connect to database template1: could not connect to server: No such file or directory
Is the server running locally and accepting
I'm also using docker-compose for the overall application, so if solving this problem in docker-compose is better, I'd be happy to use the base postgres image and do this through docker-compose.
According to the usage guide for the official PostgreSQL Docker image, all you need is:
Dockerfile
FROM postgres
ENV POSTGRES_DB my_database
COPY psql_dump.sql /docker-entrypoint-initdb.d/
The POSTGRES_DB environment variable will instruct the container to create a my_database database on first run, and any .sql file found in the container's /docker-entrypoint-initdb.d/ directory will be executed.
If you want to execute .sh scripts, you can also provide them in the /docker-entrypoint-initdb.d/ directory.
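Putting it together (the image tag is an example, and note that recent postgres images also require POSTGRES_PASSWORD or another auth setting on first run):
docker build -t company-db .
docker run -d --name company-db -e POSTGRES_PASSWORD=postgres -p 5432:5432 company-db
docker logs -f company-db
The logs show the entrypoint creating my_database and executing psql_dump.sql on first start.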
As said in the comments, @Thomasleveil's answer is great and simple if your schema recreation is fast.
But in my case it's slow, and I wanted to use docker volumes, so here is what I did:
First, build a docker image as in @Thomasleveil's answer to create a postgres container with all the schema initialization
Dockerfile:
FROM postgres
WORKDIR /docker-entrypoint-initdb.d
ADD psql_dump.sql /docker-entrypoint-initdb.d
EXPOSE 5432
Then run it, and create a new local dir containing the postgres data after it has been populated from the psql_dump.sql file:
docker cp mypg:/var/lib/postgresql/data ./postgres-data
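Spelled out as commands (the image name and password are placeholders, and the entrypoint must finish initializing before the copy):
docker build -t pg-schema-image .
docker run -d --name mypg -e POSTGRES_PASSWORD=postgres pg-schema-image
# wait for "database system is ready to accept connections" in docker logs mypg
docker cp mypg:/var/lib/postgresql/data ./postgres-data
docker rm -f mypg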
Copy the data to a temp data folder, and start a new postgres docker-compose container whose volume is at the new temp data folder:
startPostgres.sh:
rm -r ./temp-postgres-data/data
mkdir -p ./temp-postgres-data/data
cp -r ./postgres-data/data ./temp-postgres-data/
docker-compose -p mini-postgres-project up
and the docker-compose.yml file is:
version: '3'
services:
  postgres:
    container_name: mini-postgres
    image: postgres:9.5
    ports:
      - "5432:5432"
    volumes:
      - ./temp-postgres-data/data:/var/lib/postgresql/data
Now you can run steps #1 and #2 on a new machine, or whenever your psql_dump.sql changes. Each time you want a fresh (but already initialized) db, just run startPostgres.sh from step #3.
And it still uses docker volumes.
@Thomasleveil's answer will re-create the database schema at runtime, which is fine for most cases.
If you want to recreate the database schema at build time instead (i.e. if your schema initialization is really slow), you can invoke the stock docker-entrypoint.sh from within your Dockerfile.
However, since docker-entrypoint.sh is designed to start a long-running database server, you have to add an extra script that exits the process after database initialization but before the long-running server boots.
Dockerfile (with build time database initialization)
# STAGE 1 - Equivalent to @Thomasleveil's answer
FROM postgres AS runtime_init
ENV POSTGRES_DB my_database
COPY 1-psql_dump.sql /docker-entrypoint-initdb.d/
# STAGE 2 - Initialize the database during the build
FROM runtime_init AS buildtime_init_builder
RUN echo "exit 0" > /docker-entrypoint-initdb.d/100-exit_before_boot.sh
ENV PGDATA=/pgdata
RUN docker-entrypoint.sh postgres
# STAGE 3 - Copy the initialized db to a new image to reduce size.
FROM postgres AS buildtime_init
ENV PGDATA=/pgdata
COPY --chown=postgres:postgres --from=buildtime_init_builder /pgdata /pgdata
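To use this (the tag is an example), build only the final stage and run it; since /pgdata is already populated, the entrypoint skips initialization and boots straight into the server:
docker build --target buildtime_init -t my_database:prebuilt .
docker run -d --name mydb -p 5432:5432 my_database:prebuilt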
Important Notes
The stock postgres image will run initialization scripts in alphabetical order, so ensure that your database restoration scripts appear earlier than the exit_before_boot.sh script created in the Dockerfile.
This is demonstrated by the 1 and 100 prefixes shown above. Modify them to your liking.
Database updates to a running instance of this image will not be persisted across reboots since the PGDATA path where the database files are stored no longer maps to a volume mounted from the host machine.
Further Reading
Instructions from the authors of the official postgres image about writing your own custom_entrypoint.sh. This is arguably the more "official" way to solve this problem, but I personally find my approach easier to understand and implement.
A demo of this concept for PostgreSQL 9, which uses the --help flag to exit docker-entrypoint.sh before the long-running server boots. Unfortunately, this no longer works as of December 3, 2019.
Two discussions (1) (2) of this same question from the official docker postgres repository.