Is there a way to daemonize docker-compose after you've started it with sudo docker-compose up? Sending some signal to it?
I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray in the same docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services that serve only analytical/logging functions. I already have a logstash container I'm not happy about; my feeling is that apps themselves should be able to do this sort of thing.
While we have the Docker Hub image of the X-Ray daemon, you can absolutely run the daemon in the same docker container as your application - that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is using supervisord (see the link for an example of that), but I ended up just making a very simple script:
# start.sh
# run the X-Ray daemon in the background
/usr/bin/xray &
# run Tomcat in the foreground as the container's main process
$CATALINA_HOME/bin/catalina.sh run
And then this Dockerfile:
FROM tomcat:9-jdk11-openjdk
# unzip is needed to extract the X-Ray daemon archive
RUN apt-get update && apt-get install -y unzip
# download the X-Ray daemon and put the binary on the PATH
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
I am getting an access denied error while running the systemctl command in a pod.
Whenever I try to start any service, for example MySQL or a Tomcat server, in a pod, it gives an access denied error.
Is there any way I can run systemctl within a pod?
This is a problem related to Docker, not Kubernetes.
According to the page Run multiple services in a container in docker docs:
It is generally recommended that you separate areas of concern by using one service per container
However, if you really want to use a process manager, you can try supervisord, which allows you to use supervisorctl commands, similar to systemctl. The page above explains how to do that:
Here is an example Dockerfile using this approach, that assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
That's a rather short question. The systemctl command does try to talk to the systemd daemon, which is not running in a pod by default (it could be, however). Running multiple services is yet another question about service management. In both cases it could help to use a tool like the docker-systemctl-replacement, overwriting /usr/bin/systemctl and registering it as the init CMD of the container.
I have set up a docker container based on openSUSE 12, installed some additional files and copied some installer binaries into the container. So far, everything is fine.
From inside a running image of the container I now need to run the aforementioned setup program, but this needs uuid.socket up and running - uuid.socket in turn needs systemctl to work correctly, and that fails with an error like this:
hxehost:/usr/sap/SRCFiles # systemctl
Failed to get D-Bus connection: Unknown error -1
I started the docker container like this:
docker run -h hxehost -i -t f3096b0aa964 /bin/bash
Which, according to some postings, should start a machine container as opposed to an application container.
Can anyone tell me what I'm doing wrong here? How do I get systemctl to work inside a docker container?
I tried to start the container with this command, which according to the linked hints should work, but to no avail:
docker run --privileged --rm -ti -e 'container=docker' -h hxehost --network="bridge" --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro siliconchris/hxe:v0.0.2 /bin/bash
If I do this, systemctl still gives the exact same error.
If I start /sbin/init instead of /bin/bash, I can see that quite a lot of services are started (some, like wicked, login and module, fail). In the end, the container presents me with a login. After login, I can now execute systemctl and it shows all services with their respective states.
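For reference, that variant is simply the same run command as above with /sbin/init in place of /bin/bash (shown here just so the two attempts are easy to compare):

docker run --privileged --rm -ti -e 'container=docker' -h hxehost --network="bridge" --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro siliconchris/hxe:v0.0.2 /sbin/init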
Now my next question is: is this approach feasible at all?
Best regards,
Chris
You can find the repo for this image at SAP HANA Express Edition inside docker.
Most current Linux systems depend on systemd running, and systemctl will send requests to it. However, most applications installed easily once I replaced the systemctl binary with a script that just interprets start/stop/status/enable commands. As another benefit, the resulting image would no longer need those complicated startup commands to get systemd mapped into the container. Maybe that would help you? Please have a look at the docker-systemctl-replacement.
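A minimal sketch of that approach could look like the Dockerfile below (assumptions: the replacement script systemctl3.py has been downloaded from that repo next to the Dockerfile, and the base image and the uuidd service are just placeholders matching the earlier question):

FROM opensuse/leap:15
# python3 is needed to run the replacement script; uuidd is the service from the question above
RUN zypper --non-interactive install python3 uuidd
# overwrite systemctl with the replacement script so systemctl calls work without a running systemd
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# the replacement interprets enable/start/stop/status commands itself
RUN systemctl enable uuidd
# register the replacement as the init CMD so enabled services are started when the container starts
CMD ["/usr/bin/systemctl"]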
So this might be my Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get install -y mysql-server-5.6
RUN service mysql start
RUN service mysql status
It throws an error during the build that MySQL is not running, even though the previous command finished successfully. Daemons don't seem to keep running between different commands in the Dockerfile.
This is an artificial example, but in my real Dockerfile I have lines which configure the database, and they need a daemon running in the background. The only solution I found to get around this is to run:
RUN service mysql start && ./database_configure1.sh
RUN service mysql start && ./do_something_else_with_db.sh
and so on
But this is probably not the way to do it. Is there any better way to go about this?
Each RUN command within your Dockerfile runs within a different container, so here's the actual sequence of events:
service mysql start starts MySQL.
Then the container is stopped (MySQL is stopped).
Then a snapshot is taken.
Then a new container is launched using that snapshot.
service mysql status is run in the new container.
Of course, mysql isn't actually running in the latter container, so that fails.
So, instead, you need to do everything in a single build step. Usually, you'll want to do this by running a shell script within your container.
Here goes.
Your directory tree should look like this:
Dockerfile
do_stuff_with_mysql.sh
Then, in your Dockerfile, do:
ADD do_stuff_with_mysql.sh /
RUN chmod 755 /do_stuff_with_mysql.sh
RUN /do_stuff_with_mysql.sh
And, in do_stuff_with_mysql.sh, you should have something that looks like this:
#!/bin/bash
set -o errexit
set -o nounset
service mysql start
./database_configure1.sh
./do_something_else_with_db.sh
service mysql stop
# you should loop on `service mysql status` to confirm MySQL is done shutting down
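If you want to make that last step explicit, a small wait loop might look like this (a sketch, assuming service mysql status exits non-zero once the server has stopped):

while service mysql status > /dev/null 2>&1; do
    sleep 1
done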
I have installed MongoDB in a docker container together with OpenSSH on Ubuntu 14.04. The container is running with SSH, but when I SSH into the container I get the following error when trying to start mongod.
root@430f9502ba2d:~# service mongod start
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
Also, start mongod does not do anything.
I also tried looking at Mongo daemon doesn't run by service mongod start, without it helping.
mongod --config /your/path/to/mongod.conf doesn't seem to work either; it just locks up.
The error below is expected, as of course there is no mongod server running.
root@430f9502ba2d:/# mongo
MongoDB shell version: 2.6.9
connecting to: test
2015-05-07T20:49:56.213+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-05-07T20:49:56.214+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
The problem here is your approach. Docker does not have an init system like you are used to on traditional systems. What docker does is replace PID 1 with the process you specify in the CMD or ENTRYPOINT Dockerfile commands. For now, ignore ENTRYPOINT, because it replaces what your CMD is run with (normally, it's /bin/sh -c). You need to instruct docker to start your mongod service in your Dockerfile with the CMD command, like:
CMD ["/usr/bin/mongod"]
And when you run your container, mongod will be your PID 1. Now, you're probably wondering at this point "But what about my SSH server?" and the answer is: Don't run an SSH server on your docker containers. There are some use cases where running an SSH server is okay, but almost all of the "normal" reasons (debug, C&C, etc) are nullified with the "best practice" for getting a shell on your container:
docker exec -it myContainer /bin/bash
This will drop you into a shell on your running container. The recommendation here for managing configuration and changes in your docker container is to use something like Ansible. However, remember that docker containers are ephemeral, and you shouldn't be restarting services and changing configuration state on them. If you need a config change, change the Dockerfile or config data, and then start a new container. Good luck! Here is a little more information on Dockerizing MongoDB, but keep in mind that the method described there alters the ENTRYPOINT in the Dockerfile, which is a little more involved and requires a better understanding of what's going on in Dockerfiles.
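To tie that together, a minimal Dockerfile along those lines might look like this (a sketch only; the mongodb package name and data directory are assumptions based on the Ubuntu 14.04 setup in the question, not part of the original answer):

FROM ubuntu:14.04
# install the MongoDB server from the Ubuntu repositories
RUN apt-get update && apt-get install -y mongodb
# run mongod in the foreground as PID 1, using the Ubuntu package's default data directory
CMD ["/usr/bin/mongod", "--dbpath", "/var/lib/mongodb"]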
This is really helpful. I was trying to make old Ansible playbooks work with Docker by creating several blank containers and letting Ansible do the rest.
It works with the command:
mongod --dbpath /var/lib/mongodb --smallfiles