Scale service with multiple ports and pass these ports to another service - docker-compose

I'm trying to run a system with a controller and multiple workers. During a UDP health check, the controller acts as a UDP client and the workers act as servers. The ports the controller/client should send the UDP requests to are passed as arguments via Docker's ENTRYPOINT.
Each worker also receives the port it listens on as an argument, again via ENTRYPOINT.
I now want to scale the number of workers dynamically using
docker-compose up --scale worker=[numberOfWorkers]
but I couldn't find anything on how to pass the ports that are now created dynamically from a range of ports. Here is my docker-compose.yml:
version: '3'
services:
  controller:
    build: controller
    container_name: controller1
    volumes:
      - network-log-volume:/var/controller_logs
    entrypoint: [/controller, worker:80]
  worker:
    build: worker
    ports:
      - "1001-1005:80"
    entrypoint: [/worker, "80", ""]
The previous, hard-coded version (which worked perfectly) looked like this:
version: '3'
services:
  controller:
    build: controller
    container_name: controller1
    volumes:
      - network-log-volume:/var/controller_logs
    entrypoint: [/controller, worker1:1231, worker2:1232, worker3:1233, worker4:1234, worker5:1235, worker6:1236]
  worker1:
    build: worker
    container_name: worker1
    expose:
      - "1231"
    ports:
      - "1231:1231"
    entrypoint: [/worker, "1231", ""]
  worker2:
    build: worker
    container_name: worker2
    expose:
      - "1232"
    ports:
      - "1232:1232"
    entrypoint: [/worker, "1232", ""]
  worker3:
    build: worker
    container_name: worker3
    expose:
      - "1233"
    ports:
      - "1233:1233"
    entrypoint: [/worker, "1233", ""]
  worker4:
    build: worker
    container_name: worker4
    expose:
      - "1234"
    ports:
      - "1234:1234"
    entrypoint: [/worker, "1234", ""]
  worker5:
    build: worker
    container_name: worker5
    expose:
      - "1235"
    ports:
      - "1235:1235"
    entrypoint: [/worker, "1235", ""]
So basically I want the above docker-compose.yml to scale dynamically rather than spelling out every worker like that. Thanks!
Edit: Due to the way my lecture is structured, I cannot change much about how the controller and the workers work in general. It is supposed to work with only the docker-compose.yml.
I have read a bit about using swarms, but I have also read that they are deprecated and should not be used anymore.
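A minimal sketch of the direction I think this could take, assuming the controller only needs container-to-container traffic: every scaled replica listens on the same container port, and Compose's internal DNS resolves the service name worker to the IPs of all replicas, so a single worker:80 endpoint may be enough (the host port range then only matters for reaching the workers from outside):
version: '3'
services:
  controller:
    build: controller
    container_name: controller1
    volumes:
      - network-log-volume:/var/controller_logs
    # 'worker' resolves via Compose's internal DNS to every scaled
    # replica, each listening on container port 80
    entrypoint: [/controller, worker:80]
  worker:
    build: worker
    # no 'ports:' entry is needed for controller -> worker traffic;
    # "1001-1005:80" is only required for access from the host
    entrypoint: [/worker, "80", ""]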

Related

I am trying to stand up 2 Ghost containers with MySQL on the back end and eeacms/haproxy as the load balancer, in Docker containers: error 503

I have tried many configurations and scenarios based around this, which is mostly a tutorial that stops at one Ghost instance. I am trying to scale it to 2 with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers, they work, but port 80 returns a 503.
version: "3.1"
volumes:
mysql-volume:
ghost-volume:
networks:
ghost-network:
services:
mysql:
image: mysql:5.7
container_name: mysql
volumes:
- mysql-volume:/var/lib/mysql
networks:
- ghost-network
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: db
MYSQL_USER: blog-user
MYSQL_PASSWORD: supersecret
ghost:
build: ./ghost
image: laminar/ghost:3.0
volumes:
- ghost-volume:/var/lib/ghost/content
networks:
- ghost-network
restart: always
ports:
- "2368"
environment:
database__client: mysql
database__connection__host: mysql
database__connection__user: blog-user
database__connection__password: supersecret
database__connection__database: db
depends_on:
- mysql
entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
command: ["node", "current/index.js"]
haproxy:
image: eeacms/haproxy
depends_on:
- ghost
ports:
- "80:5000"
- "1936:1936"
environment:
BACKENDS: "ghost"
DNS_ENABLED: "true"
LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error. This particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.
I needed to add a backend URL to the environment and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
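A hedged sketch of what that change could look like in the compose file above; Ghost reads its public address from the url configuration key, and the value mirrors the one quoted in this answer:
ghost:
  environment:
    # Ghost's public address setting; the value comes from the
    # answer above
    url: http://localhost:5050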

Containers launched with Docker-Compose cannot connect to each other

I have a beginner question about Docker Compose. I am trying to extend the docker-compose-slim.yml example file from the Zipkin GitHub repository.
I need to change it so that it includes a simple FastAPI app that I have written. Unfortunately, I cannot make them connect to each other. FastAPI gets rejected when it attempts to send a POST request to the Zipkin container, even though they are both connected to the same network, with explicit links and port mappings defined in the YAML file. However, I am able to connect to both of them from the host.
Could you please tell me what I have done wrong?
Here is the error message:
Error emitting zipkin trace. ConnectionError(MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=9411): Max retries exceeded with url: /api/v2/spans (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fce354711c0>: Failed to establish a new connection: [Errno 111] Connection refused'))"))
Here is the Docker Compose YAML file:
version: '2.4'
services:
  zipkin:
    image: openzipkin/zipkin-slim
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mem
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
    depends_on:
      - storage
  storage:
    image: busybox:1.31.0
    container_name: fake_storage
  myfastapi:
    build: .
    ports:
      - 8000:8000
    links:
      - zipkin
    depends_on:
      - zipkin
  dependencies:
    image: busybox:1.31.0
    container_name: fake_dependencies
networks:
  default:
    name: foo_network
Here is the Dockerfile:
FROM python:3.8.5
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["uvicorn", "wsgi:app", "--host", "0.0.0.0", "--port", "8000"]
You must attach the containers to the network "foo_network". The external: false flag means that Compose creates the network itself instead of expecting a pre-existing one. Of course you don't have to do it this way, but I thought it would make a good example.
And regarding the "links" option, look here: Link
version: '2.4'
services:
  zipkin:
    image: openzipkin/zipkin-slim
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mem
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
    depends_on:
      - storage
    networks:
      - foo_network
  storage:
    image: busybox:1.31.0
    container_name: fake_storage
    networks:
      - foo_network
  myfastapi:
    build: .
    ports:
      - 8000:8000
    links:
      - zipkin
    depends_on:
      - zipkin
    networks:
      - foo_network
  dependencies:
    image: busybox:1.31.0
    container_name: fake_dependencies
    networks:
      - foo_network
networks:
  foo_network:
    external: false
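Note also what the error message shows: the app is dialing 127.0.0.1:9411 from inside its own container, where nothing listens on that port. Inside the Compose network the Zipkin container is reachable by its service name instead. As a sketch, assuming the app reads its collector address from an environment variable (the variable name here is hypothetical):
myfastapi:
  build: .
  environment:
    # hypothetical variable; the app would read this instead of
    # hard-coding 127.0.0.1
    ZIPKIN_ENDPOINT: http://zipkin:9411/api/v2/spans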

Integrate elasticsearch with multiple mongodb instances in docker-compose

I have implemented a microservice architecture with several servers and databases. I have installed Elasticsearch with Docker, and when I do docker-compose up, everything seems to run fine.
However, I would like to integrate Elasticsearch with the several databases (2 MongoDB instances in the sample below) in the system. How do I sync the two MongoDB instances, in two different containers, with Elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an elasticsearch section to any docker-compose file and start it; these are all independent Docker containers, and as long as their exposed ports on the host don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to the Elasticsearch multi-Docker installation using a Docker file for more info.
NOTE: You have not mentioned what exact issue you are facing; you have mentioned that everything, i.e. all the Docker containers, is running fine, so please explain in detail what exactly you are trying to solve.
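As a small illustration of the point about independent containers: every service on the shared backend network can reach the single Elasticsearch container by its service name, regardless of which host ports are published. A hedged sketch (the variable name is hypothetical):
weather-service:
  environment:
    # hypothetical variable; in-network traffic uses the container
    # port 9200, not a published host port
    ELASTICSEARCH_URL: http://elasticsearch:9200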

How to access each docker container

In my docker-compose file, I have 2 containers.
version: '3.7'
services:
  app:
    container_name: myApp
    image: myImage
    restart: always
    depends_on:
      - mongodb
    networks:
      - myNetwork
    ports:
      - 9000:3000
    environment:
      ROOT_URL: http://192.168.0.122:9000
      MONGO_URL: mongodb://mongodb/pos-db
      PORT: 3000
      METEOR_SETTINGS: '{ "private": { "APP_NAME": "pos-db" } }'
  mongodb:
    container_name: mongodb
    image: mongo
    restart: always
    networks:
      myNetwork:
        aliases:
          - mongodb
    ports:
      - 4001:27017
networks:
  myNetwork:
    external:
      name: myNetwork
I want to access the mongodb container from the myApp container in order to back up data.
How can I make these 2 containers reach each other?
You can put both services on the same network. When they are on the same network, you can reach each one by its container name.
version: '3.7'
services:
  app:
    container_name: myApp
    image: myImage
    restart: always
    depends_on:
      - mongodb
    ports:
      - 9000:3000
    environment:
      ROOT_URL: http://192.168.0.122:9000
      # in-network connections use the container port (27017),
      # not the published host port (4001)
      MONGO_URL: mongodb://mongodb:27017/pos-db
      PORT: 3000
      METEOR_SETTINGS: '{ "private": { "APP_NAME": "pos-db" } }'
  mongodb:
    container_name: mongodb
    image: mongo
    restart: always
    ports:
      - 4001:27017
You don't need to specify the network. Docker containers in one docker-compose file can see each other out of the box. You can access Mongo from the app container at mongodb://mongodb:27017 (the container port, not the published host port).
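For the backup mentioned in the question, one hedged option is a one-off service on the same default network that runs mongodump against the container port (the service name and paths here are illustrative):
backup:
  image: mongo
  depends_on:
    - mongodb
  volumes:
    - ./backup:/backup
  # reaches the database by service name on the container port
  command: mongodump --host mongodb --port 27017 --out /backup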

RabbitMq refuses connection when run in docker

My docker-compose file looks like this:
version: '2'
services:
  explore:
    image: explore
    build:
      context: ./Explore
      dockerfile: VsDockerfile
    environment:
      - "ElasticUrl=http://localhost:9200"
      - "RabbitMq/Host=localhost"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    networks:
      - localnet
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
    volumes:
      - ./esdata:/usr/share/elasticsearch/data
    networks:
      - localnet
  rabbit:
    image: rabbitmq:3.6.7-management
    hostname: rabbit
    ports:
      - 15672:15672
      - 5672:5672
    networks:
      - localnet
networks:
  localnet:
    external:
      name: localnet
If I type http://localhost:15672 in the browser, I get the RabbitMQ interface, but if I try to connect from my Explore project like this:
public SqlToRabbitProcessor(SqlToRabbitRepository sqlToRabbitRepository)
{
    _sqlToRabbitRepository = sqlToRabbitRepository;
    var factory = new ConnectionFactory
    {
        HostName = Environment.GetEnvironmentVariable("RabbitMq/Host"),
        UserName = Environment.GetEnvironmentVariable("RabbitMq/Username"),
        Password = Environment.GetEnvironmentVariable("RabbitMq/Password")
    };
    var rabbit = factory.CreateConnection();
    channel = rabbit.CreateModel();
}
Then it breaks at the line
var rabbit = factory.CreateConnection();
with the error saying
ExtendedSocketException: Connection refused 127.0.0.1:5672
System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
ConnectFailureException: Connection failed
RabbitMQ.Client.EndpointResolverExtensions.SelectOne(IEndpointResolver resolver, Func selector)
BrokerUnreachableException: None of the specified endpoints were reachable
RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, string clientProvidedName)
As my comment under the question suggested, it's because the "localhost" defined in the web application part is the container's own localhost, not the Docker host.
I just needed to change
- "ElasticUrl=http://localhost:9200"
- "RabbitMq/Host=localhost"
to
- "ElasticUrl=http://elasticsearch:9200"
- "RabbitMq/Host=rabbit"
I had the same issue with docker-compose.
I solved it by setting a hostname:
rabbit:
  hostname: rabbit
  command: sh -c "rabbitmq-plugins enable rabbitmq_management; rabbitmq-server"
  image: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: admin
    RABBITMQ_DEFAULT_PASS: admin
  ports:
    - 5672:5672
    - 15672:15672
Follow the instructions in this post.
Just to benefit people who stumble upon this question: the --link feature is now considered legacy and is a prime candidate for deprecation by Docker.
The easiest way is to use
depends_on:
In order to do this, it's recommended to first create a network like so:
docker network create <network_name>
Then use docker-compose up to spawn services that bind to each other. Look at the example below, where I've bound my Spring Boot app to RabbitMQ. You can clone my repo from here.
version: "3.1"
services:
rabbitmq-container:
image: rabbitmq:3.5.3-management
hostname: rabbitmq-container
ports:
- 5673:5673
- 5672:5672
- 15672:15672
networks:
- resolute
resolute-container:
build: .
ports:
- 8080:8080
environment:
- spring_rabbitmq_host=rabbitmq-container
- spring_rabbitmq_port=5672
- spring_rabbitmq_username=guest
- spring_rabbitmq_password=guest
- resolute_rabbitmq_publishQueueName=resolute-run-request
- resolute_rabbitmq_exchange=resolute
depends_on:
- rabbitmq-container
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- resolute
networks:
resolute:
external:
name: resolute
See how I've created a network called resolute and bound the apps to that same network. I've also given my rabbitmq-container an explicit hostname. This is because Docker now prefixes generated container names, which makes it difficult to bind to services by name.
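One usage note: because the resolute network is declared external here, it must exist before docker-compose up is run, so it is created once beforehand with docker network create resolute, the same command pattern shown above.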