I am using the "plain" postgresql:alpine docker image, but have to schedule a database backup daily. I think this is a pretty common task.
I created a script named backup, stored it in the container under /etc/periodic/15min, and made it executable:
bash-4.4# ls -l /etc/periodic/15min/
total 4
-rwxr-xr-x 1 root root 95 Mar 2 15:44 backup
I tried executing it manually, and that works fine.
My problem is getting crond to run automatically.
If I run docker exec my-postgresql-container crond, the daemon is started and cron works, but I would like to embed this into my Dockerfile:
FROM postgres:alpine
# my backup script, MUST NOT have .sh extension
COPY backup.sh /etc/periodic/15min/backup
RUN chmod a+x /etc/periodic/15min/backup
RUN crond # <- doesn't work
I have no idea how to override or extend the commands of the official image. To keep receiving updates I would also like to stay on the official images, if possible.
Note: use this option if you would like to run multiple services in the same container.
Install supervisord, which lets you run both crond and postgres. The Dockerfile will look like the following:
FROM postgres:alpine
RUN apk add --no-cache supervisor
RUN mkdir /etc/supervisor.d
COPY postgres_cron.ini /etc/supervisor.d/postgres_cron.ini
ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
And postgres_cron.ini will look like the following:
[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
loglevel=info ; (log level;default info; others: debug,warn,trace)
nodaemon=true ; (start in foreground if true;default false)
[program:postgres]
command=/usr/local/bin/docker-entrypoint.sh postgres
autostart=true
autorestart=true
[program:cron]
command=/usr/sbin/crond -f
autostart=true
autorestart=true
Then you can start the docker build process and run a container from your new image. Feel free to modify the Dockerfile or postgres_cron.ini as needed.
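For example, a minimal sketch of those two steps (the image name, container name, and password are just placeholders):
docker build -t postgres-with-cron .
docker run -d --name my-postgresql-container -e POSTGRES_PASSWORD=change-me postgres-with-cron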
I had the exact same problem a few months ago. The key aspect is that a container can have only one main process, defined by the ENTRYPOINT and/or CMD in your Dockerfile.
You cannot just swap out postgres for crond, otherwise your database isn't running. It is generally recommended to separate areas of concern by using one service per container.
With that in mind, you can either use a separate container which runs nothing but crond, so that Docker can track its lifecycle and restart it when/if it fails, when the machine restarts, etc.
Or run the jobs via cron on your host using docker exec.
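A rough sketch of that host-side variant (container name, database user, and backup path are assumptions, not taken from your setup):
# /etc/cron.d/pg-backup on the host
0 3 * * * root docker exec my-postgresql-container pg_dump -U postgres postgres > /var/backups/postgres-$(date +\%F).sql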
The third and in my opinion best (but also most advanced) solution is pg_cron. It is a postgres extension and therefore runs the jobs in the same database container. Your challenge would be to adapt its configuration and installation.
The easy part should be the postgresql.conf:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
Next, you need to add the pg_cron extension to your image by adjusting the Dockerfile, which you can derive from the official alpine postgres image. The installation of it is described here.
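Once the extension is built into the image and preloaded, scheduling a job is done with plain SQL. A hedged sketch, run through psql (container name, user, schedule, and the scheduled command are made up):
docker exec -it my-postgresql-container psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS pg_cron;"
docker exec -it my-postgresql-container psql -U postgres -c "SELECT cron.schedule('0 3 * * *', 'VACUUM');"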
Related
I'm trying to containerize a PostgreSQL server with Docker, and this container will have many other applications as well. The requirement is that the PostgreSQL server data should be mapped to a host volume so that when the container is stopped, we won't lose the data. Also, the next time we start the container, the same directory can be mapped again and postgres can use the old data. Below is the Dockerfile. Note that I'm using Ubuntu 22.04 on the host.
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y postgresql
ENTRYPOINT ["tail", "-f", "/dev/null"]
Docker image is built using the command
docker build -t pg_test .
and the container is run using the command
docker run --name test -v /home/me/data:/var/lib/postgresql/14/main pg_test
'/home/me/data' is an empty host directory where I want to map the postgres server data. '/var/lib/postgresql/14/main' is the directory inside the container where postgres is supposed to store the data.
Once the docker container starts, I enter the docker container using the command
docker exec -it test bash
and once I'm inside, I try to start the PostgreSQL service. But PostgreSQL fails to start because there is no data in the '/var/lib/postgresql/14/main' directory. I understand that since I have mapped an empty host directory to '/var/lib/postgresql/14/main', postgres doesn't have the files required to start.
I understand that I'm doing it the wrong way, but I couldn't find a way around it. Can anyone please help me to do this the right way, if there is one?
Any help would be appreciated.
You should use the official postgres Docker image; it will set up the database for you when you start the container. You can find instructions at https://hub.docker.com/_/postgres
If you must use a custom image, you will need to initialize the db yourself, usually by running initdb or whatever your system provides.
But really you should use the appropriate Docker image, and if you need more services, start them in their own containers and connect them to the postgres one.
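As a rough sketch of that approach (the password is a placeholder): with the official image the data directory defaults to /var/lib/postgresql/data, so the bind mount targets that path instead:
docker run -d --name test -e POSTGRES_PASSWORD=change-me -v /home/me/data:/var/lib/postgresql/data postgres:14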
I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray on the same Docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services which serve only analytical/logging functions. I already have a logstash container I'm not happy about; my feeling is that apps themselves should be able to do this sort of stuff.
While we have the Dockerhub image of the X-Ray Daemon, you can absolutely run the daemon in the same docker container as your application - that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is using supervisord (see link for an example of that), but I ended up just making a very simple script:
# start.sh
/usr/bin/xray &
$CATALINA_HOME/bin/catalina.sh run
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
RUN apt-get update && apt-get install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
I am trying to run a Dockerfile that runs PostgreSQL and then adds some tables to it.
This is my Dockerfile:
FROM postgres:9.6
MAINTAINER agomez@ikerlan.es
USER postgres
ADD ./backup.sql /backup.sql
ADD ./deploy/import_database.sh /import_database.sh
USER root
RUN chmod +x /import_database.sh
USER postgres
ENTRYPOINT ./docker-entrypoint.sh postgres
CMD ./import_database.sh
The entrypoint doesn't finish because it runs the postgres server.
How can I run the CMD for example, after 20 seconds of running the ENTRYPOINT, but without finishing the ENTRYPOINT?
Is it possible?
This is not how the CMD and ENTRYPOINT work. I'd recommend reading https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact and checking the chart for a better understanding of how the ENTRYPOINT and CMD interact with each other.
That said, if all you want to do is import a SQL file or run a script at runtime, the official PostgreSQL image already has you covered. See https://github.com/docker-library/docs/tree/master/postgres#how-to-extend-this-image for information on how to do this.
An example in the case of your Dockerfile (if you just wanted to import the SQL file) would be to do something like:
FROM postgres:9.6
ADD ./backup.sql /docker-entrypoint-initdb.d/backup.sql
When starting a container from the image build on this Dockerfile, the default ENTRYPOINT script will start a temporary PostgreSQL instance, wait for it to be ready, import your data, and then restart to serve connections.
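A quick sketch of using the resulting image (the image name, container name, and password are placeholders, not from the question):
docker build -t my-postgres .
docker run -d --name my-postgres -e POSTGRES_PASSWORD=secret my-postgres
# backup.sql is only imported on the very first start, while the data directory is still empty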
I'm on windows 10 with docker version 1.9.1 using docker toolbox
I wanted to put up a quick postgres container, something I've done before with a dockerfile I had laying around.
FROM postgres
ADD create-db.sql /tmp/
ADD drop_create_table.sql /tmp/
ADD db.sql /tmp/
ADD create-db.sh /docker-entrypoint-initdb.d/
It's pretty simple.
And when I run the resulting image, it starts fine.
However at the end it says:
...
server started
ALTER ROLE
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/create-db.sh: No such file or directory
If I try to do docker run -it <imagename> //bin/bash I can see that the file is indeed there:
root@xxxx:/docker-entrypoint-initdb.d# ls
create-db.sh
but whenever I run it, it tells me it's not.
The container promptly stops when it doesn't find the file, so I can't try to ssh into the running container.
So this might be my Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y mysql-server-5.6
RUN service mysql start
RUN service mysql status
It throws an error during the build that MySQL is not running, even though the previous command finished successfully. Daemons don't seem to keep running between different commands in the Dockerfile.
This is an artificial example, but in my real Dockerfile I have lines which configure the database, and they need a daemon running in the background. The only way around this that I found is to run:
RUN service mysql start && ./database_configure1.sh
RUN service mysql start && ./do_something_else_with_db.sh
and so on
But this is probably not the way to do it. Is there any better way to go about this?
Each RUN command within your Dockerfile runs within a different container, so here's the actual sequence of events:
service mysql start starts MySQL.
Then the container is stopped (MySQL is stopped).
Then a snapshot is taken.
Then a new container is launched using that snapshot.
service mysql status is run in the new container.
Of course, mysql isn't actually running in the latter container, so that fails.
So, instead, you need to do everything in a single build step. Usually, you'll want to do this by running a shell script within your container.
Here goes.
Your directory tree should look like this:
Dockerfile
do_stuff_with_mysql.sh
Then, in your Dockerfile, do:
ADD do_stuff_with_mysql.sh /
RUN chmod 755 /do_stuff_with_mysql.sh
RUN /do_stuff_with_mysql.sh
And, in do_stuff_with_mysql.sh, you should have something that looks like this:
#!/bin/bash
set -o errexit
set -o nounset
service mysql start
./database_configure1.sh
./do_something_else_with_db.sh
service mysql stop
# you should loop on `service mysql status` to confirm MySQL is done shutting down
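That final check could be a small loop along these lines (just a sketch; it assumes service mysql status exits non-zero once mysqld has fully stopped):
# wait until mysqld has shut down before the RUN step ends and the layer is snapshotted
while service mysql status >/dev/null 2>&1; do
    sleep 1
done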