In Ansible, can I run docker_compose with parameters?

Is there a way to run docker_compose with parameters?
Something like the following:
docker-compose run --rm app_service python init_script
Currently I use the shell module for this.
Can I use the docker_compose module instead?

The documentation for the docker_compose module suggests that it can only do the equivalents of docker-compose up, down, and build. None of the other Ansible Docker modules connect to Compose at all.
You could use docker_container as an equivalent to a separate docker run command, but this has the same drawbacks as trying to docker run a separate container in a mostly-Compose environment (you don't get the networks, volumes, or dependencies declared in the docker-compose.yml file).
Falling back to shell is probably your best option here.
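For reference, a minimal sketch of that shell fallback, assuming the playbook targets the host where the Compose project lives (the chdir path is a placeholder, not from the question):

- name: Run the init script in a one-off app_service container
  shell: docker-compose run --rm app_service python init_script
  args:
    chdir: /path/to/compose/project  # placeholder: directory containing docker-compose.yml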

Related

I can't enter the MongoDB CLI in my Docker project

I am learning Docker, and during my project I can't enter the MongoDB CLI with this command:
mongo -u "username" -p "mypassword"
It throws me this error:
bash: mongo: command not found
I am not sure what the issue is. I have installed the community edition of MongoDB, and I also tried different terminals, but I can't enter the db.
Any suggestions?
Thanks in advance!
I assume you did the following: created the docker-compose.yml as you wrote before, then ran docker compose up. This starts a container on your system with mongodb installed in it. It will not affect your "normal" system outside this container. (You can imagine it as a kind of virtual machine, though it is not really the same.) So, if you did not install mongodb on your local host system as well, the error you encounter is quite explicable.
If you want to access the mongodb running within the container, you have two possibilities:
1. From outside the container (which is the more common use case)
You will have to install mongo on your regular PC (or anywhere you want to access your db from) as well. Then you would issue mongo 127.0.0.1:3000. The 3000 is important because your docker-compose.yml says mongo is listening on port 3000. Note that you might have to adapt your network configuration before this works, especially from other PCs, where 127.0.0.1 won't be correct.
2. From within the container
Once your container is started, you can execute a command inside it, like this: docker exec -it ${container_id} /bin/bash. You'll have to find out the container's ID beforehand, using something like docker-compose ps -q. This will start a bash shell inside the container and "connect" you to it. (If there's no /bin/bash installed in the container, this will not work. Try e.g. /bin/sh instead.) Now your terminal will be inside the container, and you will only be able to use the commands present there. So, to get back to your local PC, don't forget to issue exit.
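Putting both steps together, assuming just that one container is running (so docker-compose ps -q returns a single ID):

docker exec -it $(docker-compose ps -q) /bin/bash

and then, inside the container's shell (assuming the mongo client is present in the image):

mongo -u "username" -p "mypassword"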
Conclusion
IMHO, the crucial point is that the physical PC you are working in front of and the container running inside it are almost completely different systems, connected only by the Docker daemon and some virtual network access. You'll have to keep that in mind and decide what you want to do/run inside the container and what to do outside, on the host.
Here is a little further reference that might help you. And this answer is about how to find out your container ID in an automated way. (Assuming that you are running just that one container!)

VS Code and Remote containers - how to pass a variable?

I am using VS Code with the Remote - Containers extension.
My container accepts JSON files that override specific configurations it supports.
For example, if I run it from a terminal, the command looks something like this:
$ docker run -it my-container --Flag1 /path/to/file.json --Flag2 /path/to/file2.json
However, I can't find a way to pass these flags using either:
devcontainer.json
or
devcontainer.env
I believe they should go in the devcontainer.env file - but I can't seem to find a way to specify it.
If I use this in my devcontainer.env:
Flag1=/path/to/file.json
Flag2=/path/to/file2.json
The container will not start.

How to run ad hoc docker compose commands in Ansible?

I have to run several docker-compose run commands for my Phoenix web app project. From the terminal I have to run this:
$ sudo docker-compose run web mix do deps.get, compile
$ sudo docker-compose run web mix ecto.create
$ sudo docker-compose run web mix ecto.migrate
While this works fine, I would like to automate it using Ansible. I'm well aware there is the docker_service Ansible module that consumes the docker-compose API, and I'm also aware of the definition option that makes it easy to integrate the configuration from docker-compose.yml into my playbook.
What I don't know is how to ensure that the commands above are run before starting the containers. Can anyone help me with this issue?
I faced a situation similar to yours, finding no way to run docker-compose run commands via the dedicated Docker modules for Ansible. However, I ended up using Ansible's shell module with success for my purposes. Here are some examples, adapted for your situation.
One by one, explicit way
- name: Run mix deps.get and compile
  shell: docker-compose run web mix do deps.get, compile
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True  # because you're using sudo

- name: Run mix ecto.create
  shell: docker-compose run web mix ecto.create
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True

- name: Run mix ecto.migrate
  shell: docker-compose run web mix ecto.migrate
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True
Equivalent way, but shorter
- name: Run mix commands
  shell: docker-compose run web mix "{{ item }}"
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  loop:
    - "do deps.get, compile"
    - "ecto.create"
    - "ecto.migrate"
  become: True
To run those commands before starting the other containers defined in the docker-compose.yml file, maybe a combination of these points can help:
Use docker volumes to persist the results of getting dependencies, compilation and Ecto commands
Use the depends_on configuration option inside the docker-compose.yml file
Use the service parameter of Ansible's docker_service module in your playbook to run only a subset of containers
Use disposable containers with your docker-compose run commands, via the --rm option and possibly with the --no-deps option
In your playbook, execute your docker-compose run commands before the docker_service task, as sketched below
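A rough sketch of that ordering, assuming Ansible 2.5's docker_service module with project_src pointing at the same directory used above:

- name: Run one-off mix tasks in disposable containers
  shell: docker-compose run --rm --no-deps web mix "{{ item }}"
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  loop:
    - "do deps.get, compile"
    - "ecto.create"
    - "ecto.migrate"
  become: True

- name: Start the services defined in docker-compose.yml
  docker_service:
    project_src: /path/to/directory/having/your/docker-compose.yml
  become: True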
Some notes:
I'm using Ansible 2.5 at the moment of writing this answer.
I'm assuming that the docker-compose binary is already installed, working fine, and available on the standard system PATH on the managed host.
The docker-compose.yml file already exists and has the path /path/to/directory/having/your/docker-compose.yml, as used in the examples. A variable for that file path could also be used.
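For instance (the variable name here is my own choice, not from the question):

vars:
  compose_project_dir: /path/to/directory/having/your/docker-compose.yml

Each task would then use chdir: "{{ compose_project_dir }}" instead of the literal path.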
That's it!

How do you pass an environment variable to Solr running inside Docker when the environment variable only exists inside the container?

I need to do a dataimport from a PostgreSQL container running inside Docker to a Solr server also running inside Docker.
In my docker run command I specify the --link option, which creates the environment variable $POSTGRESQL_PORT_5432_TCP_ADDR inside the Solr Docker container, and I need to pass this into Solr to use in my solrconfig.xml file.
I've heard that this is possible by passing JVM environment variables to the Solr startup command, but docker run starts Solr automatically. The only workaround I've found is doing something like:
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores makuk66/docker-solr /bin/true
Starting the container with /bin/true so it does nothing, and then
docker exec -it solr /bin/bash
to get into the container, finally running the solr startup command myself with the flag
-Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR
However this is an involved manual process, and I'm wondering if there's a better way.
Looking at the page Taking Solr to Production, you see:
The bin/solr script simply passes options starting with -D on to the JVM during startup. For running in production, we recommend setting these properties in the SOLR_OPTS variable defined in the include file. Keeping with our soft-commit example, in /var/solr/solr.in.sh, you would do:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
So all you need to do is edit the SOLR_OPTS environment variable in solr.in.sh.
It's a bit different for Docker because you don't directly have access to solr.in.sh, but after some trial and error, it was as easy as adding this to my Dockerfile:
RUN echo 'SOLR_OPTS="$SOLR_OPTS -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR"' >> /opt/solr/bin/solr.in.sh
(Note the single quotes in that RUN line: they prevent $POSTGRESQL_PORT_5432_TCP_ADDR from being expanded at image build time, so it is resolved when solr.in.sh is sourced at container startup.) Then you can use it in the solrconfig.xml file as
${solr.database.ip}
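For example, in a DataImportHandler data config the substituted property could end up in the JDBC URL; the database name and credentials below are placeholders, not values from the question:

<dataSource type="JdbcDataSource"
            driver="org.postgresql.Driver"
            url="jdbc:postgresql://${solr.database.ip}:5432/mydb"
            user="postgres"
            password="postgres" />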
An important thing to note is that you can name the JVM system property whatever you want, as long as you make sure not to overwrite anything important. I could have called it
-Dsolr.potato
if I wanted to.
For some reason the solr.in.cmd file looks exactly the same as solr.in.sh, which confused me about how to set variables there. In Windows containers, the command to accomplish the same thing from a Dockerfile would be:
RUN Add-Content C:\solr\bin\solr.in.cmd 'set SOLR_OPTS=%SOLR_OPTS% -Dsolr.database.ip=%POSTGRESQL_PORT_5432_TCP_ADDR%'

Why is Docker not executing init like LXC?

Docker is not running init, so services are not started during startup. LXC runs init during lxc-start. Since Docker is using LXC, why is it not running init? What are the advantages of not running init and depending on supervisord for daemonization?
I think that running /sbin/init is just the default behaviour of lxc-start when it is given no other command to run. There is no such default command parameter for docker run.
You can run init explicitly in docker:
docker run ubuntu /sbin/init
Personally, I like this behaviour: I prefer to use a container for my few app-related processes, and I do not need init to be started.
The advantage simply is to keep your container light-weight. You decide which processes to run, and no more than that. That way, docker can start a container really really fast.
By the way, you don't have to depend on supervisord: you could, for instance, write a shell script that you put in your command.
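A minimal sketch of such a script; the process names are placeholders, not anything from the question:

#!/bin/sh
# Start a helper process in the background, then exec the main
# process so that it becomes the container's PID 1.
my_background_daemon &
exec my_main_app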
One of the applications of Docker is to set a container up as an executable; e.g. you can make images that run unit or integration tests. Now, you wouldn't want each of those to start several dozen services that you don't use, right?