I have to run several docker-compose run commands for my Phoenix web app project. From the terminal, I run:
$ sudo docker-compose run web mix do deps.get, compile
$ sudo docker-compose run web mix ecto.create
$ sudo docker-compose run web mix ecto.migrate
While this works fine, I would like to automate it using Ansible. I'm well aware there is the docker_service Ansible module that consumes the docker-compose API, and I'm also aware of the definition option that makes it easy to integrate the configuration from docker-compose.yml into my playbook.
What I don't know is how to ensure that the commands above run before the containers start. Can anyone help me with this?
I faced a situation similar to yours and found no way to run docker-compose run commands via the dedicated Docker modules for Ansible. However, I ended up using Ansible's shell module successfully for my purposes. Here are some examples, adapted to your situation.
One by one, explicit way
- name: Run mix deps.get and compile
  shell: docker-compose run web mix do deps.get, compile
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True # because you're using sudo

- name: Run mix ecto.create
  shell: docker-compose run web mix ecto.create
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True

- name: Run mix ecto.migrate
  shell: docker-compose run web mix ecto.migrate
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  become: True
Equivalent way, but shorter
- name: Run mix commands
  shell: docker-compose run web mix {{ item }}
  args:
    chdir: /path/to/directory/having/your/docker-compose.yml
  loop:
    - "do deps.get, compile"
    - "ecto.create"
    - "ecto.migrate"
  become: True
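Either variant runs with a normal playbook invocation, e.g. (the inventory and playbook names here are just placeholders):

$ ansible-playbook -i inventory deploy.yml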
To run those commands before starting the other containers defined in the docker-compose.yml file, a combination of these points can help (there is a playbook sketch after this list):
Use docker volumes to persist the results of getting dependencies, compilation and Ecto commands
Use the depends_on configuration option inside the docker-compose.yml file
Use the service parameter of Ansible's docker_service module in your playbook to run only a subset of containers
Use disposable containers with your docker-compose run commands, via the --rm option and possibly with the --no-deps option
In your playbook, execute your docker-compose run commands before the docker_service task
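For the last point, a minimal playbook sketch might look like this (the hosts value and the path are placeholders, and docker_service parameters may differ slightly across Ansible versions):

- hosts: app_servers
  become: True
  vars:
    compose_dir: /path/to/directory/having/your/docker-compose.yml
  tasks:
    # 1. One-off setup commands in disposable containers, before anything is started
    - name: Run mix setup commands
      shell: docker-compose run --rm web mix {{ item }}
      args:
        chdir: "{{ compose_dir }}"
      loop:
        - "do deps.get, compile"
        - "ecto.create"
        - "ecto.migrate"

    # 2. Then bring up all services defined in docker-compose.yml
    - name: Start the compose services
      docker_service:
        project_src: "{{ compose_dir }}"
        state: present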
Some notes:
I'm using Ansible 2.5 at the moment of writing this answer.
I'm assuming the docker-compose binary is already installed, working, and available on the system PATH of the managed host.
The docker-compose.yml file already exists at /path/to/directory/having/your/docker-compose.yml, as used in the examples. A variable could also be used for that path, as in the sketch above.
That's it!
Related
I am starting to use Kubernetes/Minikube to deploy my application, which is currently running in Docker containers.
Docker version: 19.03.7
Minikube version: v1.25.2
From what I read I gather that first of all I need to build my frontend/backend images inside minikube.
The image is available on the server and I can see it using:
$ docker image ls
The first step, as far as I understand, is to use the "docker build" command:
$ docker build -t my-image .
However, as I understand it, the dot at the end means it looks for a Dockerfile in the current directory, and indeed I get an error:
unable to evaluate symlinks in Dockerfile path: lstat
/home/dep/k8s-config/Dockerfile: no such file or directory
So, where do I get this Dockerfile so that "docker build" succeeds?
Thanks
My misunderstanding...
I have the Dockerfile now, so I can put it anywhere and run docker build from there.
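For the record: docker build can be pointed at a Dockerfile anywhere with -f, and for Minikube the image should be built against Minikube's Docker daemon (the paths and image name below are placeholders):

# Point the docker CLI at Minikube's Docker daemon so the image is visible inside the cluster
$ eval $(minikube docker-env)

# -f selects the Dockerfile; the last argument is the build context directory
$ docker build -f /path/to/Dockerfile -t my-image /path/to/context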
I have 15 microservices and I want to run them using Bazel and docker-compose in my local environment.
To keep Bazel's output on the host, I mount /tmp/senna_build_output:/tmp/build_output in my docker-compose file and pass the --output_user_root=/tmp/build_output flag to Bazel in my Dockerfile command.
Something like this:
version: "3.8"
services:
senna-identity:
image: app/identity:latest
build:
context: .
dockerfile: Dockerfile
command: ["--output_user_root=/tmp/build_output","run","//services/identity"]
volumes:
- .:/app
- /tmp/senna_build_output:/tmp/build_output
# Other services are just same as above service
and my Dockerfile:
FROM l.gcr.io/google/bazel:3.5.0
WORKDIR /app
EXPOSE 9000
# The base image's entrypoint is bazel, so CMD supplies bazel's startup flag and its command
CMD ["--output_user_root=/tmp/build_output","run","//services/identity"]
But I have two problems:
All microservices use Bazel to run their targets, but Bazel doesn't allow two commands to run simultaneously, so we get the following error when running docker-compose up:
Another command holds the client lock:
pid=1
owner=client
cwd=/app
Waiting for it to complete...
and a second later my other microservice fails because of this error:
Server terminated abruptly (error code: 14, error message: 'Socket closed',
My second problem is the time docker-compose takes to mount the build output path: it takes about 80 seconds just for two microservices, so I don't think this solution can work for all of them. Am I doing something wrong? Should I develop my microservices locally with Bazel in some other way, or use another tool instead of docker-compose?
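For context, one workaround I am considering (just a sketch, assuming Bazel is installed on the host; I have not verified it) is to let a single Bazel invocation build all targets before docker-compose starts, so only one client ever holds the lock:

# One bazel invocation builds every service up front (a single client, so no lock contention),
# writing into the host side of the mounted output root
$ bazel --output_user_root=/tmp/senna_build_output build //services/...

# Then start the containers, which should find the build results already cached
$ docker-compose up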
Is there a way to run docker_compose with parameters?
Something like the following:
docker-compose run --rm app_service python init_script
Right now I use the shell module for this.
Can I use the docker_compose module instead?
The documentation for the docker_compose module suggests that it can only do the equivalents of docker-compose up, down, and build. None of the other Ansible Docker modules connect to Compose at all.
You could use docker_container as an equivalent to a separate docker run command, but this has the same drawbacks as trying to docker run a separate container in a mostly-Compose environment (you don't get networks or volumes or dependencies declared in the docker-compose.yml file).
Falling back to shell is probably your best option here.
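For example, a minimal shell task for that command might look like this (the chdir path is a placeholder):

- name: Run init script in a disposable container
  shell: docker-compose run --rm app_service python init_script
  args:
    chdir: /path/to/your/compose/project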
I am using docker-compose for a development project. I have 6 services defined in my docker compose file. I have been using the below script to rebuild the images whenever I make a change.
#!/bin/bash
# file: rebuild.sh
docker-compose down
docker-compose build
docker-compose up
I am looking for a way to reduce the build time, since building and restarting all the services seems unnecessary when I am usually only changing one module. I see in the docker-compose docs that you can run commands for individual services by specifying the service name afterwards, e.g. docker-compose build myservice.
In another terminal window I tried docker-compose build myservice && docker-compose restart myservice while leaving ./rebuild.sh running in the original terminal. In the ./rebuild.sh terminal window I see all the initialization messages reprinted to stdout, so I know that service is restarting, but the code changes aren't there. What am I doing wrong? I just want to rebuild and restart a single service.
Try:
docker-compose up -d --force-recreate --build myservice
Note that:
-d is for detached mode,
--force-recreate recreates containers even if your code did not change,
--build builds your images before starting containers,
and lastly, myservice is the name of your service.
Take a look at the docker-compose documentation for details.
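If you prefer to keep a script, a small variation of your rebuild.sh could take the service name as an argument (a sketch; rebuild-one.sh is a hypothetical name):

#!/bin/bash
# file: rebuild-one.sh
# Usage: ./rebuild-one.sh myservice
set -e
docker-compose up -d --force-recreate --build "$1"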
I think I don't get it. First, I created docker-machine:
$ docker-machine create -d virtualbox dev
$ eval $(docker-machine env dev)
Then I wrote a Dockerfile and a docker-compose.yml:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    restart: always
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db
Finally, I built the images and started the containers:
$ docker-compose build --no-cache
$ docker-compose start
I checked the IP of my virtual machine:
$ docker-machine ip dev
and successfully opened the site in my browser. But when I made some changes in my code, nothing happened. So I logged into the "dev" machine:
$ docker-machine ssh dev
and I didn't find my code there! So I logged into the docker "web" container:
$ docker exec -it project_web_1 bash
and the code was there, but unchanged.
What is docker-machine for? What is the point? Why doesn't Docker sync files after changes? It looks like docker + docker-machine + docker-compose are a pain in the a...s for local development :-)
Thanks.
Docker is the command-line tool that uses containerization to manage multiple images, containers, volumes, and such; a container is basically a lightweight virtual machine. See https://docs.docker.com/ for extensive documentation.
Until recently Docker didn't run natively on Mac or Windows, so another tool was created, Docker Machine, which creates a virtual machine (using yet another tool, e.g. Oracle VirtualBox), runs Docker on that VM, and helps coordinate between the host OS and the Docker VM.
Since Docker isn't running on your actual host OS, docker-machine needs to deal with IP addresses and ports and volumes and such. And its settings are saved in environment variables, which means you have to run commands like this every time you open a new shell:
eval $(docker-machine env default)
docker-machine ip default
Docker-Compose is essentially a higher-level scripting interface on top of Docker itself, making it easier (ostensibly) to manage launching several containers simultaneously. Its config file (docker-compose.yml) is confusing since some of its settings are passed down to the lower-level docker process, and some are used only at the higher level.
I agree that it's a mess; my advice is to start with a single Dockerfile and get it running either with docker-machine or with the new beta native Mac/Windows Docker, and ignore docker-compose until you feel more comfortable with the lower-level tools.
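For example, a bare-bones single-container workflow (the image name and port are placeholders) can be as simple as:

# Build the image from the Dockerfile in the current directory
$ docker build -t myapp .

# Run it, publishing the app's port to the host
$ docker run --rm -p 8000:8000 myapp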