I am trying to run a Dockerfile that runs PostgreSQL and then adds some tables to it.
This is my Dockerfile:
FROM postgres:9.6
MAINTAINER agomez#ikerlan.es
USER postgres
ADD ./backup.sql /backup.sql
ADD ./deploy/import_database.sh /import_database.sh
USER root
RUN chmod +x /import_database.sh
USER postgres
ENTRYPOINT ./docker-entrypoint.sh postgres
CMD ./import_database.sh
The entrypoint doesn't finish because it runs the postgres server.
How can I run the CMD, for example, 20 seconds after the ENTRYPOINT starts, but without terminating the ENTRYPOINT?
Is it possible?
This is not how CMD and ENTRYPOINT work: when both are set, CMD supplies default arguments to the ENTRYPOINT rather than running as a second command. I'd recommend reading https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact and checking the chart there for a better understanding of how ENTRYPOINT and CMD interact with each other.
That said, if all you want to do is import a SQL file or run a script at runtime, the official PostgreSQL image already has you covered. See https://github.com/docker-library/docs/tree/master/postgres#how-to-extend-this-image for information on how to do this.
An example in the case of your Dockerfile (if you just wanted to import the SQL file) would be to do something like:
FROM postgres:9.6
ADD ./backup.sql /docker-entrypoint-initdb.d/backup.sql
When starting a container from an image built from this Dockerfile, the default ENTRYPOINT script will start a temporary PostgreSQL instance, wait for it to be ready, import your data, and then restart to serve connections.
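A minimal usage sketch, assuming the image is tagged my-postgres (the tag and password here are placeholders):

docker build -t my-postgres .
docker run -d --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword my-postgres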
Related
I'm using a Dockerfile to build an Ubuntu image that installs PostgreSQL, but I can't wait for the postgresql service status to be OK.
FROM ubuntu:18.04
....
RUN apt-get update && apt-get install -y postgresql-11
RUN service postgresql start
RUN su postgres
RUN psql
RUN CREATE USER kong; CREATE DATABASE kong OWNER kong;
RUN \q
RUN exit
Everything seems okay, but RUN su postgres throws an error because the postgresql service is no longer running by the time it executes, even after RUN service postgresql start. How can I make this work?
First, each RUN command in a Dockerfile runs in a separate shell, and RUN should be used for installation or configuration, not for starting a process. The process should be started in CMD or the ENTRYPOINT.
RUN vs CMD
It's better to use the official postgres Docker image:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The default postgres user and database are created in the entrypoint with initdb.
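For example, once the container from the command above is running, you can open a psql session against that default database inside the container:

docker exec -it some-postgres psql -U postgres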
Or you can build your own image based on postgres:
FROM postgres:11
ENV POSTGRES_USER=kong
ENV POSTGRES_PASSWORD=example
COPY seed.sql /docker-entrypoint-initdb.d/seed.sql
This will create an image with the user and password already set, and the entrypoint will also insert the seed data when the container starts.
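For illustration, seed.sql might contain ordinary DDL and data; the table below is made up. Note that init scripts run against the kong database created from POSTGRES_USER:

-- seed.sql: hypothetical seed data
CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT);
INSERT INTO config (key, value) VALUES ('version', '1');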
POSTGRES_USER
This optional environment variable is used in conjunction with
POSTGRES_PASSWORD to set a user and its password. This variable will
create the specified user with superuser power and a database with the
same name. If it is not specified, then the default user of postgres
will be used.
Some advantages of the official Docker image:
Create DB from ENV
Create DB user from ENV
Start container with seed data
Will wait for postgres to be up and running
All you need
# Use postgres/example user/password credentials
version: '3.1'

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
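Then bring it up and connect; the service name db comes from the file above:

docker-compose up -d
docker-compose exec db psql -U postgres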
Initialization scripts
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files, run any executable *.sh
scripts, and source any non-executable *.sh scripts found in that
directory to do further initialization before starting the service.
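A minimal sketch of such a *.sh script, following the psql pattern from the official readme (the table is made up):

#!/bin/bash
set -e

# runs during initdb; connects over the local socket as the bootstrap user
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<EOSQL
CREATE TABLE example (id SERIAL PRIMARY KEY);
EOSQL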
I am using the "plain" postgres:alpine docker image, but have to schedule a daily database backup. I think this is a pretty common task.
I created a script named backup, stored it in the container in /etc/periodic/15min, and made it executable:
bash-4.4# ls -l /etc/periodic/15min/
total 4
-rwxr-xr-x 1 root root 95 Mar 2 15:44 backup
I tried executing it manually, that works fine.
My problem is getting crond to run automatically.
If I run docker exec my-postgresql-container crond, the daemon is started and cron works, but I would like to embed this into my Dockerfile:
FROM postgres:alpine
# my backup script, MUST NOT have .sh extension
COPY backup.sh /etc/periodic/15min/backup
RUN chmod a+x /etc/periodic/15min/backup
RUN crond # <- doesn't work
I have no idea how to rewrite or overwrite the commands in the official image. For update reasons I also would like to stay on these images, if possible.
Note: this option is for when you would like to run multiple services in the same container.
Install supervisord, which lets you run both crond and postgresql. The Dockerfile will be as follows:
FROM postgres:alpine
RUN apk add --no-cache supervisor
RUN mkdir /etc/supervisor.d
COPY postgres_cron.ini /etc/supervisor.d/postgres_cron.ini
ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
And postgres_cron.ini will be as the following:
[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
loglevel=info ; (log level;default info; others: debug,warn,trace)
nodaemon=true ; (start in foreground if true;default false)
[program:postgres]
command=/usr/local/bin/docker-entrypoint.sh postgres
autostart=true
autorestart=true
[program:cron]
command=/usr/sbin/crond -f
autostart=true
autorestart=true
Then you can start the docker build process and run a container from your new image. Feel free to modify the Dockerfile or postgres_cron.ini as needed.
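For example (the image tag is up to you):

docker build -t postgres-cron .
docker run -d --name my-postgresql-container -e POSTGRES_PASSWORD=mysecretpassword postgres-cron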
I had the exact same problem a few months ago. The key aspect is that a container can have only one main process, defined by the ENTRYPOINT and/or CMD in your Dockerfile.
You cannot just swap out postgres for crond, otherwise your database isn't running. It is generally recommended to separate areas of concern by using one service per container.
With that in mind, either use a separate container that runs nothing but crond, so Docker can track its lifecycle and restart it when/if it fails, when the machine restarts, etc.
Or run the jobs via cron on your host using docker exec, for example:
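A hypothetical host crontab entry, reusing the container name and script path from the question:

# run the backup script inside the running container every 15 minutes
*/15 * * * * docker exec my-postgresql-container /etc/periodic/15min/backup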
The third, and in my opinion best (but also most advanced), solution is pg_cron. It is a Postgres extension and therefore runs the jobs in the same database container. Your challenge would be to adapt its configuration and installation.
The easy part should be the postgresql.conf:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
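Once the extension is installed in the image, jobs are scheduled with plain SQL; a sketch (the schedule and command are illustrative):

-- one-time setup in the database configured above
CREATE EXTENSION pg_cron;
-- run VACUUM every day at 03:00
SELECT cron.schedule('0 3 * * *', 'VACUUM');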
Next, you need to add the pg_cron extension to your image by adjusting the Dockerfile, which you can derive from the official alpine postgres image. The installation of it is described here.
I'm a bit new to Docker.
I have two containers running using docker-compose.
One is the API and the other is the actual application.
I want to add a new DB container using the Postgres official image.
It's a bit hard to find a simple tutorial on how to create the container and populate it with a predefined sql file (of schemas and data).
When I start with "CMD /etc/init.d/postgresql start" in the Dockerfile I get an error saying: "No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning)."
Since it is taking me too much time to get things going, I was wondering if it might be better to take an Ubuntu image and install Postgres on my own, since there is only one source on how to use the image (Docker Hub) and I don't seem to understand it that well.
Any ideas or simple steps on how to compose and 'configure' this image?
If you want to populate your database from a file, a simple way to do this is described in the official image docs:
How to extend this image
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files and source any *.sh scripts
found in that directory to do further initialization before starting
the service.
Dockerfile
FROM postgres:alpine
COPY init.sql /docker-entrypoint-initdb.d/init.sql
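For illustration, init.sql could hold your predefined schema and data; the content below is made up:

-- init.sql: hypothetical schema and seed data
CREATE TABLE items (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
INSERT INTO items (name) VALUES ('first'), ('second');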
docker-compose.yml
version: '3'
services:
  app:
    # your app definition
  postgres:
    build: .
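Then build and start everything with:

docker-compose up --build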
Pull the postgres image
docker pull postgres:14.2
Create the service with the command below:
docker service create --name postgres --network my_overlay --env "POSTGRES_PASSWORD=password" --publish 5432:5432 postgres:14.2
Try to connect to the default postgres database using postgres as the username and password as the password.
jdbc:postgresql://127.0.0.1:5432/postgres // JDBC connection
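If you'd rather test from the command line than over JDBC, the equivalent psql invocation would be:

psql -h 127.0.0.1 -p 5432 -U postgres postgres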
I'm trying to extend the official docker postgres image in order to install a custom Python module so I can use it from within a plpython3 stored procedure.
Here's my Dockerfile:
FROM postgres:9.5
RUN apt-get update && apt-get install -y postgresql-plpython3-9.5 python3
ADD ./repsug/ /opt/smtnel/repsug/
WORKDIR /opt/smtnel/repsug/
RUN ["python3", "setup.py", "install"]
WORKDIR /
My question is: do I need to add ENTRYPOINT and CMD commands to my Dockerfile? Or are they "inherited" FROM the base image?
The example in the official readme.md shows a Dockerfile which only changes the locale, without ENTRYPOINT or CMD.
I also read in the readme that I can extend the image by executing custom sh and/or sql scripts. Should I use this feature instead of creating my custom image? The question in this case is: how do I make sure the scripts run only once, at "install time", and not every time? I mean, if the database is already created and populated, I would not want to overwrite it.
Thanks,
Awer
If you define a new ENTRYPOINT in your Dockerfile it will override the inherited ENTRYPOINT. In that case, Postgres will not be initialized automatically (unless you write the same ENTRYPOINT).
https://docs.docker.com/engine/reference/builder/#entrypoint
Also, the official postgres image lets you add .sql/.sh files in the /docker-entrypoint-initdb.d folder; they are executed only when the database is first initialized (an already-populated data directory is left untouched), which answers your "run only once" concern.
Finally, if you don't want Postgres to lose your data, you can mount a volume mapping the /var/lib/postgresql/data folder to a local folder in each docker run ... command so it persists, for example:
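A sketch of such a command, with the image name as a placeholder:

docker run -d -e POSTGRES_PASSWORD=mysecretpassword -v "$PWD/pgdata:/var/lib/postgresql/data" my-extended-postgres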
So this might be my Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y mysql-server-5.6
RUN service mysql start
RUN service mysql status
It throws an error during the build that MySQL is not running, even though the previous command finished successfully. Daemons do not seem to keep running between different commands in the Dockerfile.
This is an artificial example, but in my real Dockerfile I have lines that configure the database, and they need a daemon running in the background. The only solution I found to get around this is to run:
RUN service mysql start && ./database_configure1.sh
RUN service mysql start && ./do_something_else_with_db.sh
and so on
But this is probably not the way to do it. Is there any better way to go about this?
Each RUN command within your Dockerfile runs within a different container, so here's the actual sequence of events:
service mysql start starts MySQL.
Then the container is stopped (MySQL is stopped).
Then a snapshot is taken.
Then a new container is launched using that snapshot.
service mysql status is run in the new container.
Of course, mysql isn't actually running in the latter container, so that fails.
So, instead, you need to do everything in a single build step. Usually, you'll want to do this by running a shell script within your container.
Here goes.
Your directory tree should look like this:
Dockerfile
do_stuff_with_mysql.sh
Then, in your Dockerfile, do:
ADD do_stuff_with_mysql.sh /
RUN chmod 755 /do_stuff_with_mysql.sh
RUN /do_stuff_with_mysql.sh
And, in do_stuff_with_mysql.sh, you should have something that looks like this:
#!/bin/bash
set -o errexit
set -o nounset
service mysql start
./database_configure1.sh
./do_something_else_with_db.sh
service mysql stop
# you should loop on `service mysql status` to confirm MySQL is done shutting down
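# a possible sketch of that loop, assuming `service mysql status`
# returns non-zero once MySQL has fully stopped:
while service mysql status >/dev/null 2>&1; do
    sleep 1
done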