Docker does not run init, so services are not started during startup. LXC runs init during lxc-start. Since Docker uses LXC, why doesn't it run init? What are the advantages of not running init and depending on supervisord for daemonization?
I think that running /sbin/init is just the default behaviour of lxc-start; it expects a command to run. There is no such default command parameter for the run command in Docker.
You can run init explicitly in docker:
docker run ubuntu /sbin/init
Personally, I like this behaviour - I prefer to use a container for my few app-related processes, and I do not need init to be started.
The advantage is simply to keep your container lightweight. You decide which processes to run, and no more than that. That way, Docker can start a container really fast.
By the way, you don't depend on supervisord: you could, for instance, write a shell script that starts your processes and use it as the container's command, as sketched below.
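A minimal sketch of such a script (the script name and the two processes it starts are hypothetical):
#!/bin/sh
# start.sh (hypothetical): start a helper in the background, then the main app in the foreground
/usr/local/bin/helper-daemon &
exec /usr/local/bin/main-app
You would then pass it as the command, e.g. docker run myimage /start.sh (image name hypothetical).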
One of the applications of Docker is to set it up as an executable, e.g. you can make images that run unit or integration tests. Now, you wouldn't want each of those to run dozens of services that you don't use, right?
Is there a way to run docker_compose with parameters?
Something like the following:
docker-compose run --rm app_service python init_script
Now I use shell module for this.
Can I use the docker_compose module instead?
The documentation for the docker_compose module suggests that it can only do the equivalents of docker-compose up, down, and build. None of the other Ansible Docker modules connect to Compose at all.
You could use docker_container as an equivalent to a separate docker run command, but this has the same drawbacks as trying to docker run a separate container in a mostly-Compose environment (you don't get networks or volumes or dependencies declared in the docker-compose.yml file).
Falling back to shell is probably your best option here.
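For reference, a minimal sketch of that shell-module fallback (the task name and the chdir path are assumptions; the command comes from the question):
- name: Run one-off init script via docker-compose  # illustrative task name
  ansible.builtin.shell: docker-compose run --rm app_service python init_script
  args:
    chdir: /path/to/compose/project  # assumed directory containing docker-compose.yml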
I run a CentOS 8 distro in Docker and I would like to have bash TAB completion with the dnf package manager. According to other posts, I did the following once my Docker container was started:
dnf clean all && rm -r /var/cache/dnf && dnf upgrade -y && dnf update -y
and then
dnf install bash-completion sqlite -y
After doing that I restart the container, but there is still no bash completion. I also tried to source the bash completion file directly by doing:
source /etc/profile.d/bash_completion.sh
but without any better effect.
Would you know what I am doing wrong?
You shouldn't need bash completion in a Docker container. The only time you should manually connect to a shell inside a Linux container is to troubleshoot why the process running in the container is behaving abnormally. In fact, some container design advice goes as far as suggesting you not include a shell inside your base OS at all!
The reason this isn't working for you is due to the way Linux containers operate. A container is simply a namespaced process managed by the kernel installed on the host OS. That process cannot be modified or interrupted, or the container will be destroyed, since the process will be sent a SIGTERM. When you attempt to source the bash_completion.sh script, you are attempting to pass new configuration to your existing namespaced process managed by Docker.
If you really wanted to do this, the best way would be to create a new Docker container image based on the original CentOS 8 base image, install the bash-completion package there, and add an echo command that appends the source line to your user's .bashrc file.
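A minimal sketch of such a Dockerfile (the image tag and the root user's home directory are assumptions):
FROM centos:8
# install bash completion in a new image layer
RUN dnf install -y bash-completion && dnf clean all
# have interactive root shells pick it up (the script is provided by the bash-completion package)
RUN echo 'source /etc/profile.d/bash_completion.sh' >> /root/.bashrc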
EDIT:
With regard to the additional question asked by the OP in the comments of this answer, I have added more information below.
Why shouldn't I need bash completion in a container?
The reason you do not need bash completion in a container is that containers are not meant to be attached to with a shell. A container is simply supposed to be a single instance of a process running under specific, configured criteria. Containers aren't meant to be used as dev environments for you to connect to; they're meant to run processes and applications in software infrastructure.
Manually updating & installing packages
You mention that one of the first things you do when you spin up a container is install packages. This is also alarming, because you are not supposed to interact with a container manually at all, and that includes package installation. Instead, you should generate a new container image from the older base image and add RUN statements to the Dockerfile to update the system and install the desired packages.
Cannot believe it is not possible
It is possible if you create a new Dockerfile that installs it in a new layer on top of the base image and produces a new container image for you to use. BUT the point is that you shouldn't be connecting to Docker containers in the first place, so you should never get to a point where you need something like bash completion!
Here is a great summary on the difference between a container and a virtual machine that might help clarify some of this for you. In a nutshell, containers are supposed to run, and only run, processes.
crond is not running by default in the official postgres alpine image. How can I define my Dockerfile to make sure the daemon runs in the background? I want it to run by default, if possible even when the container gets restarted.
I tried to add CMD ["/usr/sbin/crond"] to my Dockerfile but I didn't succeed. Any thoughts how to run this in combination with postgres?
Update
I have added tianon's answer:
[...]
If you must run crond inside a container, I'd recommend instead using a separate container which runs nothing but crond (and thus Docker can both track its lifecycle, and restart it when/if it fails, the machine restarts, etc). You should be able to connect to the PostgreSQL instance from a second container, but if absolutely necessary, one could use things like --network container:some-postgres in order to join the network namespace of the database container directly.
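A minimal sketch of that approach (my-cron-image is a hypothetical image whose entrypoint runs crond in the foreground; some-postgres is the database container named in the quote):
docker run -d --name pg-cron --restart unless-stopped --network container:some-postgres my-cron-image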
pg_cron must be added to shared_preload_libraries. Per the docs:
# add to postgresql.conf:
shared_preload_libraries = 'pg_cron'
and you must then restart PostgreSQL.
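In a container, a common way to apply this without editing postgresql.conf is to append the setting to the server command line; a minimal sketch using the official image (tag, names, and password are assumptions, and the pg_cron extension itself must already be installed in the image):
docker run -d --name some-postgres -e POSTGRES_PASSWORD=mysecret postgres:alpine -c shared_preload_libraries=pg_cron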
I need to do a dataimport from a PostgreSQL container running inside docker to a Solr server also running inside of Docker.
In my docker run command I specify the --link option which creates the environment variable $POSTGRESQL_PORT_5432_TCP_ADDR inside the solr docker container, and I need to pass this into Solr to use in my solrconfig.xml file.
I've heard that this is possible by passing JVM environment variables to the Solr startup command, but docker run starts Solr automatically. The only workaround I've found is doing something like:
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores makuk66/docker-solr /bin/true
Starting the container with /bin/true so it does nothing, and then
docker exec -it solr /bin/bash
to get into the container, finally running the solr startup command myself with the flag
-Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR
However this is an involved manual process, and I'm wondering if there's a better way.
Looking at the page Taking Solr to Production, you see:
The bin/solr script simply passes options starting with -D on to the JVM during startup. For running in production, we recommend setting these properties in the SOLR_OPTS variable defined in the include file. Keeping with our soft-commit example, in /var/solr/solr.in.sh, you would do:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
So all you need to do is edit the SOLR_OPTS environment variable in solr.in.sh.
It's a bit different for Docker because you don't have direct access to solr.in.sh, but after some trial and error, it was as easy as adding this to my Dockerfile:
RUN echo 'SOLR_OPTS="$SOLR_OPTS -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR"' >> /opt/solr/bin/solr.in.sh
Then you can use it in the solrconfig.xml file as
${solr.database.ip}
An important thing to note is that you can call the JVM environment variable whatever you want as long as you make sure not to overwrite anything important. I could have called it
-Dsolr.potato
if I wanted to.
For some reason the solr.in.cmd file looks exactly the same as solr.in.sh, which confused me about how to set variables there. In Windows containers, the command to accomplish the same thing from a Dockerfile would be:
RUN Add-Content C:\solr\bin\solr.in.cmd 'set SOLR_OPTS=%SOLR_OPTS% -Dsolr.database.ip=%POSTGRESQL_PORT_5432_TCP_ADDR%'
I'm running a Docker container on OS X using boot2docker. It is the latest Ubuntu image with Mongo installed the official way from the mongodb-org package.
I can run mongod perfectly from the command line, but can't run it as a service.
When I try to do sudo service mongod start it returns:
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
I have tried start mongod, which produces no output. I have tried everything I found on Google, but no luck.
Meanwhile, I have tried to install MySQL using apt-get and I can perfectly run it as a service.
I have also tried installing Mongo from Ubuntu's mongodb package, which is an older version. Again, no problem running it as a service.
I suspect that there is something wrong with /etc/init.d/mongod script, but don't know exactly what.
Appreciate any help.
The init-related commands in the Docker Ubuntu image are dummied out / not working because Upstart (/sbin/init) is not the first process started in the container.
In general, any service which initializes using Upstart will not run properly in a Docker container unless you start the container with /sbin/init (you probably have to be using the ubuntu-upstart image, and make a bunch of tweaks to it too.)
If you really needed to do it this way, write a traditional init script for Mongo and install it using update-rc.d. Then starting it with /sbin/service should work.
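A minimal sketch, assuming you have written a SysV-style script and installed it as /etc/init.d/mongod:
update-rc.d mongod defaults
service mongod start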
Why not just have the Docker image run mongod instead of init/a shell/etc.? "One process per container", right?
Use a Dockerfile to create your image, and set the CMD to:
CMD ["/usr/bin/mongod", "-f", "/etc/mongod.conf"]
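A minimal sketch of such a Dockerfile (the base image tag is an assumption, and the mongodb-org repository setup is omitted):
FROM ubuntu:14.04
# repository setup for the mongodb-org package is omitted here; see MongoDB's install docs
RUN apt-get update && apt-get install -y mongodb-org
# run mongod directly as the container's single foreground process
CMD ["/usr/bin/mongod", "-f", "/etc/mongod.conf"]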