Currently I have a configuration similar to the one below:
services:
  gunicorn:
    build:
      context: .
      dockerfile: Dockerfile.gunicorn
    restart: always
    ports:
      - 8001
    command: "something"
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - 80:80
    command: "/usr/sbin/nginx"
We want our gunicorn containers to scale dynamically, and the nginx service's configuration to be updated automatically for the new containers.
Ideally, if I scale using the command below, the configuration should be added to nginx automatically:
docker-compose scale gunicorn=2
I read about https://github.com/jwilder/nginx-proxy for docker-compose, but as I understand it, the containers need the VIRTUAL_HOST environment variable set for nginx-proxy to add them to the nginx config dynamically.
Please suggest a solution.
You might be better off trying to use a tool designed for this purpose. https://github.com/containous/traefik is one example.
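For reference, a minimal sketch of what that could look like with Traefik (v1 label syntax; the hostname and label values here are assumptions, not taken from your setup):

services:
  traefik:
    image: traefik:1.7
    command: --docker --docker.exposedbydefault=false
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Traefik watch containers come and go
  gunicorn:
    build:
      context: .
      dockerfile: Dockerfile.gunicorn
    restart: always
    labels:
      - traefik.enable=true
      - traefik.frontend.rule=Host:app.example.com   # hypothetical hostname
      - traefik.port=8001
    command: "something"

Traefik watches the Docker event stream, so docker-compose scale gunicorn=2 adds the new container to the backend pool without any restart or config rewrite.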
I'm trying to containerize my application. I use MongoDB and two more microservices.
As you can see in the docker-compose file below, I have some problems.
My requirements:
main_image needs to connect to MongoDB.
gui_image needs to connect to MongoDB.
gui_image needs to show its GUI on port 8080 (Can use another port as well)
gui_image has to read and write to a file inside my computer.
MongoDB has to access a volume inside my computer.
main_image needs access to the internet.
Here are my questions:
1- Is exposing ports in a Dockerfile the same thing as exposing them in docker-compose?
2- What is the best practice for mounting a volume for MongoDB?
3- How do I accomplish the requirements above in docker-compose?
Here is my docker-compose file:
version: "3"
services:
mongo:
image: mongo:latest
ports:
- 27017:27017
main_image:
build:
context: .
dockerfile: .\my_project\dockerfile
depends_on:
- mongo
gui_image:
build:
context: .
dockerfile: .\my_gui\dockerfile
ports:
- 8080:8080
- 27017:27017
depends_on:
- mongo
Here is my Dockerfile under the my_gui directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
RUN pip install dash
EXPOSE 8080
EXPOSE 27017
ENTRYPOINT [ "python","gui_script.py"]
And lastly, here is my Dockerfile under the my_project directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
EXPOSE 27017
ENTRYPOINT [ "python","main_script.py"]
1. The EXPOSE instruction in a Dockerfile informs Docker that the container listens on the specified network ports at runtime; it is essentially documentation and does not publish the port by itself. Publishing to the host happens at run time, with docker run -p or the ports section in compose.
So images like nginx or apache, which always listen on port 80 inside the container, declare EXPOSE 80 in the Dockerfile itself, while an image whose port is dynamic, for instance controlled through an environment variable, leaves the mapping to docker run or the compose file:
some_webapi:
  environment:
    - ASPNETCORE_URLS=http://*:80   # the app chooses its container-side port via an env var
  build:
    context: .
    dockerfile: ./Dockerfile
2. As documented on the Docker Hub page for the mongo image (https://hub.docker.com/_/mongo/), you can use
volumes:
  - '/path/to/your/pc/folder:/path/inside/docker'
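For MongoDB specifically, the image stores its databases under /data/db inside the container, so a concrete sketch (the host-side path is an assumption) looks like:

mongo:
  image: mongo:latest
  ports:
    - 27017:27017
  volumes:
    - ./mongo-data:/data/db   # /data/db is where the mongo image keeps its data

As a best practice, a named volume (declared under a top-level volumes: key) is usually preferred over a bind mount, because Docker then manages the storage location and permissions for you.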
3. And for the last question, you may want to use Networking in Compose.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Services can join networks like this:
gui_image:
  build:
    context: .
    dockerfile: .\my_gui\dockerfile
  ports:
    - 8080:8080   # no need to publish 27017 here; gui_image reaches mongo over the network
  depends_on:
    - mongo
  networks:
    - gui
You also have to define all the networks used by services at the top level of the compose file:
version: '3'
services:
  # ... your service definitions ...
networks:
  gui:
After that, containers will be able to reach each other even by their container_name, which you can define per service:
mongo:
  image: mongo:latest
  ports:
    - 27017:27017
  container_name: gui_mogno
Then you will be able to connect to mongo with a connection string like mongodb://gui_mogno:27017/.
You can get more information about networking in the Compose networking documentation (https://docs.docker.com/compose/networking/).
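To tie the three answers together, here is a sketch of a compose file covering all of the requirements above (the host path for the shared file and the named volume are assumptions; adjust them to your layout):

version: "3"
services:
  mongo:
    image: mongo:latest
    volumes:
      - mongo-data:/data/db              # MongoDB persists its databases to a volume
  main_image:
    build:
      context: .
      dockerfile: .\my_project\dockerfile
    depends_on:
      - mongo                            # reachable at mongodb://mongo:27017/
  gui_image:
    build:
      context: .
      dockerfile: .\my_gui\dockerfile
    ports:
      - 8080:8080                        # GUI served on host port 8080
    volumes:
      - ./shared:/app/shared             # hypothetical host folder for the read/write file
    depends_on:
      - mongo
volumes:
  mongo-data:

Note that 27017 never needs to be published to the host: both services reach mongo through the default compose network, and outbound internet access for main_image works out of the box.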
So I currently can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to use, say, docker-compose up app, or something like that, to run everything besides testing, so that I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which brings up only postgres and the test container. However, I'd like to be able to run just my app without having to start my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file just for testing.
For those that don't want to look it up: simply create a new YAML file, like so:
docker-compose.dev.yml
Replace dev with whatever you like, except override, which docker-compose up picks up automatically unless told otherwise.
To run the new file, simply call:
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific compose file to run. You can combine multiple files to set up different environments.
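For example, the files can be layered; later files override values from earlier ones:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up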
Appreciate the help
docker-compose up <service_name> will start only the service you have specified, plus its dependencies (those listed in its depends_on option).
You may also name multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does it mean to "start the service and its dependencies"?
Usually your production services (containers) are attached to each other via the depends_on chain, so you only need to start the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: my-testing-image   # placeholder image name
    depends_on:
      - frontend
All the services are chained via the depends_on option, with the testing container hanging below the frontend. So when you run docker-compose up frontend, Docker will start fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made for exactly that! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
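If you prefer, a profile can also be activated through the documented COMPOSE_PROFILES environment variable instead of the flag, which is handy in CI:

COMPOSE_PROFILES=test docker-compose up   # same effect as --profile test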
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains combining multiple configuration files to reuse configs for different use cases (test, production, etc.).
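As a sketch of that pattern (the file name and service content here are just examples): keep the app services in the base docker-compose.yml and move the test service into its own file, e.g.

# docker-compose.test.yml -- only the extra pieces needed for testing
version: '3'
services:
  test:
    build: ./test
    depends_on:
      - postgres

Then run docker-compose up for the app alone, or docker-compose -f docker-compose.yml -f docker-compose.test.yml up to include the tests.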
I have a docker-compose.yml file like the one below (I omitted only volumes, environment, and networks). I would like to add a new port to the logstash service without restarting all 3 services. I ran docker-compose build logstash --no-cache, but it didn't add the port.
docker@ubuntu-elastic:~/docker-elk$ cat docker-compose.yml
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    build:
      context: logstash/
    ports:
      - "11514:11514/udp"
      - "8514:8514/udp"
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
This will do the trick:
docker-compose up -d logstash
If you do not change the other sections, this should also update only logstash:
docker-compose up -d
To make sure that only logstash gets updated, even if the other sections were updated too, use the first command.
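So the full workflow, assuming you first add the new mapping under logstash's ports section, would be (the port number is only an example):

# 1. edit docker-compose.yml and add the new mapping, e.g. "5044:5044"
# 2. recreate only the logstash container, leaving its dependencies untouched
docker-compose up -d --no-deps logstash

Published ports cannot be changed on a running container, so a recreate of logstash is unavoidable, but elasticsearch and kibana keep running throughout. docker-compose build only rebuilds the image; the new port takes effect once the container is recreated by up.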
For example, I defined two services in my docker-compose file:
backend:
  env_file:
    - .env.backend
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8686:8686"
frontend:
  env_file:
    - .env.front_end
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "3000:3000"
In the frontend app, I defined this endpoint in the environment:
BACKEND_ENDPOINT=http://www.backend.com/api/v1/backend/
The problem I don't know how to solve is: how can I convert the above endpoint into a relative endpoint based on the backend service? For example, if you run only the backend locally, the URL would be localhost:8686/api/v1/backend. So the URL of the backend in the above docker-compose file should be [backend_address]:8686/api/v1/backend. How can I map the address and port automatically here?
Thanks
With docker-compose, all services are always resolvable by their names.
In your example, you should add a depends_on from your frontend to your backend, and simply use the name backend to address the backend server.
frontend:
  depends_on:
    - 'backend'
The frontend should be able to resolve the backend at backend:8686. Note that the depends_on isn't required for name resolution, but it is required to ensure that the port is open when your frontend container is created.
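Concretely, the frontend's environment entry would then use the service name instead of a public hostname:

BACKEND_ENDPOINT=http://backend:8686/api/v1/backend/

One caveat: this only works for requests made from inside the frontend container (server-side). If the user's browser calls the backend directly, it sits outside the compose network and still needs a host-reachable address such as http://localhost:8686/.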
I am using docker compose to scale the docker containers. Is there any way to create the links dynamically?
I am using the --force-recreate option, but I think it creates a new container. I want to switch the link (HAProxy) to some other container dynamically.
Any kind of help is appreciated.
Thanks,
Sanjiv
Yes, but you need a Docker-aware load balancer that is configured to do so.
dockercloud/haproxy
Docker produces and open-sources its own HAProxy image that does support this, and it does not require you to --force-recreate. It does require version 2 of the docker-compose file format.
https://github.com/docker/dockercloud-haproxy
version: '2'
services:
  web:
    image: dockercloud/hello-world
  lb:
    image: dockercloud/haproxy
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
Once the stack is up, you can scale the web service using docker-compose scale web=3. dockercloud/haproxy will automatically reload its configuration.
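In practice the sequence looks like this (dockercloud/haproxy balances round-robin by default):

docker-compose up -d           # start web and the load balancer
docker-compose scale web=3     # add two more web containers
curl http://localhost/         # requests are now spread across all three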