docker build does not sustain processes - service

So this might be my Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y mysql-server-5.6
RUN service mysql start
RUN service mysql status
It throws an error during the build that MySQL is not running, even though the previous command finished successfully. Daemons do not seem to keep running between different commands in the Dockerfile.
This is an artificial example, but in my real Dockerfile I have lines which configure the database, and they need a daemon running in the background. The only workaround I have found is to run:
RUN service mysql start && ./database_configure1.sh
RUN service mysql start && ./do_something_else_with_db.sh
and so on
But this is probably not the way to do it. Is there any better way to go about this?

Each RUN command within your Dockerfile runs within a different container, so here's the actual sequence of events:
service mysql start starts MySQL.
Then the container is stopped (MySQL is stopped).
Then a snapshot is taken.
Then a new container is launched using that snapshot.
service mysql status is run in the new container.
Of course, mysql isn't actually running in the latter container, so that fails.
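You can demonstrate the filesystem-vs-process distinction with a trivial Dockerfile (a throwaway sketch; the file and process names are made up):
FROM ubuntu:latest
RUN echo hello > /tmp/marker   # the file survives: it is captured in the layer snapshot
RUN cat /tmp/marker            # works, prints "hello"
RUN sleep 300 &                # the process dies when this step's container exits
RUN pgrep sleep                # fails the build: no sleep process in this fresh container
Files persist between RUN steps because each step snapshots the filesystem; processes do not, because each step gets a brand-new container.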
So, instead, you need to do everything in a single build step. Usually, you'll want to do this by running a shell script within your container.
Here goes.
Your directory tree should look like this:
Dockerfile
do_stuff_with_mysql.sh
Then, in your Dockerfile, do:
ADD do_stuff_with_mysql.sh /
RUN chmod 755 /do_stuff_with_mysql.sh
RUN /do_stuff_with_mysql.sh
And, in do_stuff_with_mysql.sh, you should have something that looks like this:
#!/bin/bash
set -o errexit
set -o nounset
service mysql start
./database_configure1.sh
./do_something_else_with_db.sh
service mysql stop
# you should loop on `service mysql status` to confirm MySQL is done shutting down
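That loop might look like this (a sketch; the 30-second cap is arbitrary):
# wait for MySQL to finish shutting down before the build step ends
for i in $(seq 1 30); do
    service mysql status || break   # non-zero status means mysqld has stopped
    sleep 1
done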

Can I run aws-xray on the same ECS container?

I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray in the same docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services that serve only analytical/logging functions; I already have a logstash container I'm not happy about. My feeling is that apps themselves should be able to do this sort of thing.
While we do publish a Docker Hub image of the X-Ray daemon, you can absolutely run the daemon in the same docker container as your application - that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is using supervisord (see link for an example), but I ended up just making a very simple script:
#!/bin/sh
# start.sh: launch the X-Ray daemon in the background, then hand pid 1 to Tomcat
/usr/bin/xray &
exec $CATALINA_HOME/bin/catalina.sh run
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
RUN apt-get update && apt-get install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
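For reference, the supervisord equivalent of start.sh might look roughly like this (a sketch, untested; it assumes supervisor is installed in the image, the CMD points at supervisord, and the Tomcat path is the official image's /usr/local/tomcat):
[supervisord]
nodaemon=true

[program:xray]
command=/usr/bin/xray
autorestart=true

[program:tomcat]
command=/usr/local/tomcat/bin/catalina.sh run
autorestart=true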

How to run systemctl in a pod

Getting an access denied error while running the systemctl command in a pod.
Whenever I try to start any service, for example MySQL or a tomcat server, in a pod, it gives an access denied error.
Is there any way I can run systemctl within a pod?
This is a problem related to Docker, not Kubernetes.
According to the page Run multiple services in a container in docker docs:
It is generally recommended that you separate areas of concern by using one service per container
However if you really want to use a process manager, you can try supervisord, which allows you to use supervisorctl commands, similar to systemctl. The page above explains how to do that:
Here is an example Dockerfile using this approach, that assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
That's a rather short question. The systemctl command does try to talk to the systemd daemon, which is not running in a pod by default (it could be, however). Running multiple services is yet another question, about service management. In both cases it could help to use a tool like docker-systemctl-replacement, overwriting /usr/bin/systemctl and registering it as the init CMD of the container.

Cron in postgresql:alpine docker container

I am using the "plain" postgresql:alpine docker image, but have to schedule a database backup daily. I think this is a pretty common task.
I created a script backup, stored it in the container in /etc/periodic/15min, and made it executable:
bash-4.4# ls -l /etc/periodic/15min/
total 4
-rwxr-xr-x 1 root root 95 Mar 2 15:44 backup
I tried executing it manually, that works fine.
My problem is getting crond to run automatically.
If I run docker exec my-postgresql-container crond, the daemon is started and cron works, but I would like to embed this into my Dockerfile:
FROM postgres:alpine
# my backup script, MUST NOT have .sh extension
COPY backup.sh /etc/periodic/15min/backup
RUN chmod a+x /etc/periodic/15min/backup
RUN crond # <- doesn't work
I have no idea how to rewrite or overwrite the commands in the official image. For update reasons I also would like to stay on these images, if possible.
Note: this option is for when you would like to run multiple services in the same container.
Install supervisord, which enables you to run both crond and postgresql. The Dockerfile will be as follows:
FROM postgres:alpine
RUN apk add --no-cache supervisor
RUN mkdir /etc/supervisor.d
COPY postgres_cron.ini /etc/supervisor.d/postgres_cron.ini
ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
And postgres_cron.ini will be as the following:
[supervisord]
logfile=/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
loglevel=info ; (log level;default info; others: debug,warn,trace)
nodaemon=true ; (start in foreground if true;default false)
[program:postgres]
command=/usr/local/bin/docker-entrypoint.sh postgres
autostart=true
autorestart=true
[program:cron]
command=/usr/sbin/crond -f
autostart=true
autorestart=true
Then you can start the docker build process and run a container from your new image. Feel free to modify the Dockerfile or postgres_cron.ini as needed.
I had the exact same problem a few months ago. The key aspect is that a container can have only one main process, defined by the ENTRYPOINT and/or CMD in your Dockerfile.
You cannot just swap out postgres for crond, otherwise your database isn't running. It is generally recommended to separate areas of concern by using one service per container.
With that in mind, one option is a separate container which runs nothing but crond; that way Docker can track its lifecycle and restart it when/if it fails, the machine restarts, etc.
Or run the jobs via cron on your host using docker exec.
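For example, a host-side crontab entry could reuse the backup script from the question (a sketch; the container name is made up):
# on the host: run the in-container backup script daily at 03:00
0 3 * * * docker exec my-postgresql-container /etc/periodic/15min/backup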
The third and, in my opinion, best (but also advanced) solution is pg_cron. It is a Postgres extension and therefore runs the jobs in the same database container. Your challenge would be to adapt its configuration and installation.
The easy part should be the
postgresql.conf:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
Next, you need to add the pg_cron extension to your image by adjusting the Dockerfile, which you can derive from the official alpine postgres image. Its installation is described here.
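Once pg_cron is installed and preloaded as above, scheduling work is plain SQL (a sketch; note pg_cron runs SQL commands, so the job below is illustrative rather than a file-level backup):
-- run once, in the database named in cron.database_name
CREATE EXTENSION pg_cron;
-- vacuum nightly at 03:00
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM');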

Connection refused when running mongo DB command in docker

I'm new to docker and mongoDB, so I expect I'm missing some steps. Here's what I have in my Dockerfile so far:
FROM python:2.7
RUN apt-get update \
&& apt-get install -y mongodb \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /data/db
RUN service mongodb start
RUN mongod --fork --logpath /var/log/mongodb.log
RUN mongo db --eval 'db.createUser({user:"dbuser",pwd:"dbpass",roles:["readWrite","dbAdmin"]})'
The connection fails on the last command:
Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
How can I connect successfully? Should I change the host/IP, and to what, in which commands?
Several things are going wrong here. First, the commands you're running:
RUN service mongodb start
RUN mongod --fork --logpath /var/log/mongodb.log
Each of these will run to create a layer in docker. Once the command being run returns, the temporary container that was started is stopped, and any files changed in the container are captured to make a new layer. No persistent processes last between these commands.
These commands also run the background versions of the startup commands. In docker this is problematic: if you use such a command as your container's command, the container dies as soon as the command finishes. Pid 1 in the container has the same role as pid 1 on a linux OS; once it dies, so does everything else.
The second issue I'm seeing is mixing data into your image by initializing the database in the last RUN command. This fails since there's no database running (see above). I'd recommend instead making an entrypoint that configures the database if one does not already exist, and then using a volume in your docker-compose.yml or on your docker run command line to persist data between containers.
If you absolutely must initialize the data as part of your image, then you can try merging the various commands into a single run:
RUN mongod --fork --logpath /var/log/mongodb.log \
&& mongo db --eval 'db.createUser({user:"dbuser",pwd:"dbpass",roles:["readWrite","dbAdmin"]})'
I think you misunderstood what Dockerfiles are used for.
As the Dockerfile reference points out, a Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
The whole concept of an image is to derive running containers from it, which are then filled with data and queried (in the case of a database), called by an external container / host (in the case of a web service), or put to many other possible usages.
To answer your question I'll assume that:
You want to use a mongo database to store data.
You have some python code which needs access to mongo.
You want some initial data in your database.
To do so:
Run a mongo database
docker run --name my-mongo -d mongo
Note: There is no need to write a custom image. Use the official mongo image!
Create a python image which contains your script
a) Write your Dockerfile
FROM python:3-alpine
ADD my_script.py /
RUN pip install any-dependency-you-might-need
CMD [ "python", "./my_script.py" ]
b) Write your my_script.py
Insert your application stuff here. It will be executed in the python container, and as mongo will be linked, you can use something like client = MongoClient('mongodb://mongo:27017/') to get started.
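A minimal my_script.py might look like this (a sketch; it assumes pymongo was installed by the pip line above and the mongo link alias from step 3 below):
# my_script.py: connect to the linked mongo container and write a test document
from pymongo import MongoClient

client = MongoClient('mongodb://mongo:27017/')  # 'mongo' resolves thanks to --link
db = client['mydb']                             # hypothetical database name
db.items.insert_one({'hello': 'world'})         # store a sample document
print(db.items.count_documents({}))             # confirm the write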
Run your python container with a link to mongo
a) Build it:
docker build -t my-python-magic .
b) Run it:
docker run -d --name python-magic-container --link my-mongo:mongo my-python-magic
Note: The --link here links the running container named my-mongo so it can be reached inside python-magic-container as mongo. That's why you can use it in your python script.
I hope this helped you - don't hesitate to ask or modify your question if I misunderstood you.

Can't run Mongo DB deamon in docker container

I'm running a docker container on OSX using boot2docker. It is the latest Ubuntu image with mongo installed the official way from the mongodb-org package.
I can perfectly run mongod from command line, but can't run it as a service.
When I'm trying to do sudo service mongod start it returns
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
I have tried start mongod, which doesn't produce any output. I have tried everything I found on Google, but no luck.
Meanwhile, I have tried to install MySQL using apt-get, and I can perfectly well run it as a service.
I have also tried to install Mongo from Ubuntu's mongodb package, which is an older version. Again, no problem running it as a service.
I suspect that there is something wrong with the /etc/init.d/mongod script, but I don't know exactly what.
Appreciate any help.
The init-related commands on the Docker Ubuntu image are dummied out / not working because Upstart (/sbin/init) is not the first process started on the machine.
In general, any service which initializes using Upstart will not run properly in a Docker container unless you start the container with /sbin/init (you probably have to be using the ubuntu-upstart image, and make a bunch of tweaks to it too.)
If you really need to do it this way, write a traditional init script for mongo and register it using update-rc.d. Then starting it with /sbin/service should work.
Why not just have the Docker image run mongod instead of init/shell/etc.? "One process per container", right?
Use a Dockerfile to create your image, and set the CMD to:
CMD ["/usr/bin/mongod", "-f", "/etc/mongod.conf"]