Wrong IP in serialized dask.distributed.Queue() objects: Failed to deserialize - docker-compose

I am running a small dask distributed cluster using docker-compose (also tested using docker swarm), and connecting to it from my local laptop.
I'm trying to run a small test using dask.distributed.Queue(). The queues do not deserialize properly on the workers, because they are serialized with the ip:port my laptop uses to reach the scheduler, not an address that resolves inside the docker compose virtual network.
How should I connect to the cluster, or set the cluster up, so that Queue() objects serialize with the proper scheduler address?
Error messages:
distributed.protocol.pickle - INFO - Failed to deserialize b'\x80\x04\x95\x9b\x00\x00\x00\x00\x00\x00\x00\x8c\x12distributed.queues\x94\x8c\x05Queue\x94\x93\x94)\x81\x94\x8c&queue-7f344bb49fac489e9b330c7ec5ebb736\x94\x8c\x14tcp://localhost:8786\x94\x86\x94bh\x02)\x81\x94\x8c&queue-896fde1ea60a426f98846735372b33c7\x94h\x05\x86\x94b\x86\x94.'
OSError: Timed out trying to connect to 'tcp://localhost:8786' after 10 s: in <distributed.comm.tcp.TCPConnector object at 0x7ffa5cc49588>: OSError: [Errno 99] Cannot assign requested address
Test code:
import dask.distributed
import dask.bag as db

client = dask.distributed.Client('localhost:8786')
q = dask.distributed.Queue()

def foo(q1, q2):
    while True:
        d = q1.get()
        print(d)
        q2.put(d)

q1 = dask.distributed.Queue()
q2 = dask.distributed.Queue()
client.submit(foo, q1, q2)
docker-compose.yml
version: '3'
services:
  scheduler:
    hostname: scheduler
    image: daskdev/dask
    command: dask-scheduler
    ports:
      - "8786:8786"
      - "8787:8787"
    volumes:
      - /:/host
  worker-1:
    hostname: worker-1
    image: daskdev/dask
    command: dask-worker scheduler:8786
    depends_on:
      - scheduler
    volumes:
      - /:/host
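A possible workaround (a sketch, untested; it assumes you can add a hosts entry on the laptop): connect through the compose service name, so the address baked into the Queue resolves both outside and inside the network.

import dask.distributed

# Assumes "127.0.0.1 scheduler" has been added to /etc/hosts on the laptop,
# so the service name resolves locally through the published port 8786.
client = dask.distributed.Client('scheduler:8786')

# The Queue is now serialized with tcp://scheduler:8786, an address the
# workers can also resolve inside the compose network.
q = dask.distributed.Queue()

Running the client inside the same compose network would avoid the mismatch entirely.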

Related

Docker compose multi-container with zookeeper, kafka and python script on Azure container instances not able to connect to kafka

I am trying to get a zookeeper/kafka non-clustered setup to be able to talk to containers with python scripts. I want to be able to run a zookeeper/kafka container and 2 or more containers with python scripts communicating to the zookeeper/kafka, all running in containers or container groups on Azure.
To test this, I have created the below docker container group, with zookeeper and kafka as 2 services and a 3rd service that starts a simple python script to produce a steady pace of messages to a kafka topic. The docker-compose.yml that I am using is as follows:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
    networks:
      - my-network
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    networks:
      - my-network
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka_producer:
    build: ../kafka_producer
    image: annabotkafka.azurecr.io/kafka_producer:v1
    container_name: kafka_producer
    depends_on:
      - kafka
    volumes:
      - .:/usr/src/kafka_producer
    networks:
      - my-network
    environment:
      KAFKA_SERVERS: kafka:9092
networks:
  my-network:
    driver: bridge
The kafka_producer.py script is as follows:
import os
from time import sleep
import json

from confluent_kafka import Producer

def acked(err, msg):
    if err is not None:
        print("Failed to deliver message: {0}: {1}"
              .format(msg.value(), err.str()))
    else:
        print("Message produced: {0}".format(msg.value()))

# Function to send a status message out on the status topic
def send_status(producer, counter):
    msg = {'counter': counter}
    json_dump = json.dumps(msg)
    producer.produce("counter", json_dump.encode('utf-8'), callback=acked)
    producer.poll()

# Define kafkaProducer to push messages to the status topic
producer = Producer({'bootstrap.servers': 'kafka:9092'})

for j in range(9999):
    print("Iteration", j)
    send_status(producer, j)
    sleep(2)
When I 'docker-compose up' this on my Ubuntu 20.04 dev machine, I get the expected behaviour: a steady stream of messages sent to the kafka topic.
After I 'docker-compose push' this to Azure Container Instances and create a container group in Azure from the image, the kafka_producer script appears to no longer be able to connect to the kafka broker at kafka:9092.
These are the logs from the container group after startup:
Iteration 0
%3|1629363616.468|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Failed to resolve 'kafka:9092': Name or service not known (after 25ms in state CONNECT)
%3|1629363618.465|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Failed to resolve 'kafka:9092': Name or service not known (after 22ms in state CONNECT, 1 identical error(s) suppressed)
Iteration 1
Iteration 2
I had understood that the container group runs on a single host and a shared network subnet, so I expected this to behave the same as it does locally on my dev machine.
My next step will be to have separate containers with different python scripts that communicate with kafka in this container group. Keeping the producer script within the same container group is not my long-term plan, but I believed this simpler setup should work.
Any suggestions for where I am going wrong?
From the Azure documentation:
Within a container group, container instances can reach each other via localhost on any port, even if those ports aren't exposed externally on the group's IP address or from the container.
This makes it sound like the containers are using a host network, not a Docker bridge like you've set up in Compose (where your code works fine).
Therefore, you ought to connect with localhost:29092
If you don't actually need message persistence, then I'd suggest using sockets via HTTP, gRPC or ZeroMQ between your scripts rather than a Kafka container.
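For example, the one-line change to the producer (a sketch, assuming the container group really does behave like a shared localhost, as the quoted documentation says):

from confluent_kafka import Producer

# Inside the Azure container group, reach the broker through the
# PLAINTEXT_HOST listener advertised as localhost:29092.
producer = Producer({'bootstrap.servers': 'localhost:29092'})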

Unable to connect to the mongodb instance running in a docker container from inside the container

I have built a docker compose setup consisting of a mongodb database and an azure http-triggered function. The following yaml is the docker compose file:
version: '3.4'
services:
  mongo:
    image: mongo
    container_name: mongodb
    restart: always
    ports:
      - 37017:27017
  storage.emulator:
    image: "mcr.microsoft.com/azure-storage/azurite:latest"
    container_name: storage.emulator
    ports:
      - 20000:10000
      - 20001:10001
      - 20002:10002
  my.functions:
    image: ${DOCKER_REGISTRY-}myfunctions
    build:
      context: .
      dockerfile: domain/My.Functions/Dockerfile
    ports:
      - 9080:80
    depends_on:
      - storage.emulator
      - mongo
The mongodb instance runs well, and I am able to connect to it from outside the containers with the mongodb://localhost:37017 connection string to seed some data.
The Azure Function running inside its container is supposed to communicate with the mongodb instance via the mongodb://localhost:27017 connection string, but it fails with the following error message:
"Unspecified/localhost:27017", ReasonChanged: "Heartbeat", State:
"Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown",
HeartbeatException: "MongoDB.Driver.MongoConnectionException: An
exception occurred while opening a connection to the server.
my.functions_1 | ---> System.Net.Sockets.SocketException (99):
Cannot assign requested address
How can I address this problem? Why is the mongodb instance unavailable to the azure function running in the same compose setup?
Because docker containers run on isolated networks of their own. So when you connect to localhost, you are actually connecting to the my.functions container itself, which obviously has no mongodb service running on that port.
You should use the docker-compose service name instead:
mongodb://mongo:27017
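For illustration, a minimal pymongo sketch (the question's function uses the .NET driver, but the host change is the same there):

from pymongo import MongoClient

# Use the compose service name "mongo" and the container-internal port
# 27017, not the published host port 37017.
client = MongoClient("mongodb://mongo:27017")
print(client.server_info()["version"])  # simple connectivity check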

Fedora: Application Container Cannot establish connection to DB Container

I am having trouble connecting to a db container from the application container on a Fedora host. I have verified that I can connect to the database with the same credentials via the psql command line interface; using the same information in my application does not work.
Here is my docker compose file:
version: '3.3'
services:
  postgrestest:
    build: ./vrs
    command: python3 app.py
    volumes:
      - ./vrs/:/appuser/
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER={{user}}
      - POSTGRES_PASSWORD={{password}}
      - POSTGRES_DB=sharepointvrs
volumes:
  postgres_data:
This is the code used to connect to the container, from within the application container:
import psycopg2  # import added for completeness

dbconfig = environment["database"]

try:
    connection = psycopg2.connect(
        dbname=dbconfig["dbname"],  # sharepointvrs
        user=dbconfig["user"],
        password=dbconfig["password"],
        host=dbconfig["host"],  # tried 0.0.0.0, localhost, and the IP from docker inspect
        port=dbconfig["port"],  # 5432
    )
    connection.autocommit = True
except:
    print("Database initialization failed.")
I've tried 0.0.0.0, localhost, and the IP address obtained from running docker inspect; none of them worked.
In your app's config, set the database host to 'db'.
That name exists as a DNS alias, available in the other containers, based on the service name in your compose file:
services:
  db:
    # ...
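A minimal sketch of that change (user and password are placeholders; dbname and port come from the compose file and the comment above):

import psycopg2

connection = psycopg2.connect(
    dbname="sharepointvrs",
    user="<user>",          # placeholder
    password="<password>",  # placeholder
    host="db",              # the compose service name, resolvable from other containers
    port=5432,              # the container-internal port
)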
The issue was due to firewall interface policies configured by the docker installation on Fedora.
Docker must be added to the trusted zone before it gets installed.
More information here

Docker container communication with other container on different host/server

I have two servers (CentOS 8).
On server1 I have a mysql-server container, and on server2 I have the zabbix front-end, i.e. zabbix-web-apache-mysql (container name zabbixfrontend).
I am trying to connect to mysql-server from the zabbixfrontend container and getting this error:
bash-4.4$ mysql -h <MYSQL_SERVER_IP> -P 3306 -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to MySQL server on '<MYSQL_SERVER_IP>' (115)
When I do nc from the zabbixfrontend container to my mysql-server IP, I get a "No route to host" error message.
bash-4.4$ nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: No route to host.
NOTE: I can successfully nc to the mysql-server container from the host machine (server2).
docker-compose.yml
version: '3.5'
services:
  zabbix-web-apache-mysql:
    image: zabbix/zabbix-web-apache-mysql:centos-8.0-latest
    container_name: zabbixfrontend
    #network_mode: host
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/ssl/apache2:/etc/ssl/apache2:ro
      - ./usr/share/zabbix/:/usr/share/zabbix/
    env_file:
      - .env_db_mysql
      - .env_web
    secrets:
      - MYSQL_USER
      - MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD
    # zbx_net_frontend:
    sysctls:
      - net.core.somaxconn=65535
secrets:
  MYSQL_USER:
    file: ./.MYSQL_USER
  MYSQL_PASSWORD:
    file: ./.MYSQL_PASSWORD
  MYSQL_ROOT_PASSWORD:
    file: ./.MYSQL_ROOT_PASSWORD
The docker logs zabbixfrontend output is as below:
** Deploying Zabbix web-interface (Apache) with MySQL database
** Using MYSQL_USER variable from ENV
** Using MYSQL_PASSWORD variable from ENV
********************
* DB_SERVER_HOST: <MYSQL_SERVER_IP>
* DB_SERVER_PORT: 3306
* DB_SERVER_DBNAME: zabbix
********************
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
The nc message is telling the truth: No route to host.
This happens because when you deploy your front-end container on the docker bridge network, its IP address belongs to the 172.18.0.0/16 subnet, and you are trying to reach a database whose IP address belongs to a different subnet (10.0.0.0/16).
On the other hand, when you deploy your front-end container on the host network, you no longer face that problem, because the container literally uses the IP address of the host machine, 10.0.0.2, and no route has to be explicitly created to reach 10.0.0.3.
Now the problem you face is that you can no longer access the web-ui via the browser. This happens because (I assume) you kept the ports: option in your docker-compose.yml and tried to access the service on localhost:80/443. The source and destination ports do not need to be specified when you run the container on the host network; the container just listens directly on the host, on the port opened inside the container.
Try to run the front-end container with this config and then access it on localhost:8080 and localhost:8443:
    ...
    network_mode: host
    # ports:
    #   - "80:8080"
    #   - "443:8443"
    volumes:
    ...
Running containers on the host network is not something I would usually recommend, but since your setup is quite special, with one container running on one docker host and another container running on an independent docker host, I assume you don't want to create an overlay network and register the two docker hosts to a swarm.
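As a quick sanity check after switching to host networking (a sketch, assuming the check runs on server2 itself):

import urllib.request

# With network_mode: host the web UI listens directly on the host's ports,
# so it should answer on 8080 (and 8443 for TLS) instead of port 80.
response = urllib.request.urlopen("http://localhost:8080")
print(response.status)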

How to connect to cockroachdb container from other container using docker-compose?

I've tried to mimic the [MySQL way][1] to do this, but it didn't work for me.
I've also tried several variations, ranging from adding a network interface to explicitly specifying the container IP; none of them worked (since the container IP always changed).
The error message is:
"could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"
}
Here is my code:
My flask app:
import os
import psycopg2  # imports added for completeness

connection = psycopg2.connect(
    database=os.environ.get("DB_NAME"),
    user=os.environ.get("DB_USER"),
    password=os.environ.get("DB_PASSWORD"),
    sslmode=os.environ.get("DB_SSL"),
    host=os.environ.get("DB_HOST"),
    port=os.environ.get("DB_PORT"),
)
My docker-compose file:
version: '3'
services:
  flask-api:
    image: flask-api:0.7.0
    ports:
      - '5000:5000'
    environment:
      - DB_NAME = knotdb
      - DB_HOST = roach1
      - DB_PORT = 26257
      - DB_USER = root
      - DB_SSL = disable
    links:
      - roach1
  roach1:
    image: cockroachdb/cockroach:v1.1.3
    command: start --insecure --host=127.0.0.1
    ports:
      - "26257:26257"
    volumes:
      - ./cockroach-data/roach1:/cockroach/cockroach-data
My other various attempts:
# adding a network to both services: didn't work
# using an ip instead of an alias: didn't work
links:
  - roach1:127.0.0.1
1: How to handle IP addresses when linking docker containers with each other using docker-compose?
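Two things worth checking in the compose file above (observations, not confirmed fixes). First, compose may not strip the spaces in entries like - DB_HOST = roach1, so the variables might not be set under the names the app reads; psycopg2 then falls back to the local Unix socket, which matches the "/tmp/.s.PGSQL.5432" error shown. Second, command: start --insecure --host=127.0.0.1 binds CockroachDB to loopback inside its own container, so other containers could not reach it even with a correct hostname. A quick diagnostic, run from inside the flask-api container:

import os

# If the compose entry was "- DB_HOST = roach1" (with spaces around "="),
# the variable "DB_HOST" itself is likely unset, and psycopg2 defaults to
# the local Unix socket.
print(repr(os.environ.get("DB_HOST")))
print([key for key in os.environ if key.strip() == "DB_HOST"])  # reveals odd names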