Docker error: "socket.gaierror: [Errno -3] Temporary failure in name resolution" when running Celery in a Docker image with docker-compose

docker-compose.yml:
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: sunilsuthar/sim
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4004:80"
    networks:
      - webnet
  rabbit:
    hostname: rabbit
    image: sunilsuthar/query_with_rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=rvihzpae
      - RABBITMQ_DEFAULT_PASS=Z0AWdEAbJpjvy1btDRYqTq2lDoJcXHv7
    links:
      - rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
    tty: true
  celery:
    image: sunilsuthar/query_with_rabbitmq
    command: celery worker -l info -A app.celery
    user: nobody
    volumes:
      - '.:/app'
networks:
  webnet:

Check whether your Docker container is on the correct network and whether you can ping the RabbitMQ server from inside it. In my case the firewall settings had been reset and the local network was unreachable from within the container; restarting the Docker daemon resolved the issue.
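The exception itself comes from Python's resolver. A minimal sketch of the failure mode (assuming the broker's service name is rabbit, as in the compose file; can_resolve is an illustrative helper, not from the question):

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if hostname resolves to an address, False on resolution failure."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # "[Errno -3] Temporary failure in name resolution" is raised as socket.gaierror
        return False

# From inside the celery container, can_resolve("rabbit") should be True
# when DNS on the compose network is healthy.
```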

Related

Docker compose GrayLog

I created a Graylog 4 stack with Docker Compose and it deployed successfully. I can reach it through the browser, but the page is blank: it identifies itself as the Graylog Web Interface, yet the authentication screen never appears. Does anyone know what the problem could be?
version: '3'
services:
  mongo:
    image: mongo:4.2
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.0
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
In your screenshot the IP address ends in 6, but Graylog is bound to 127.0.0.1. Set http_bind_address to 127.0.0.1 and http_publish_uri or http_external_uri to the interface IP that ends in 6.
ref: Graylog docs
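As a sketch of that change (assuming Graylog 4's environment-variable spellings of those settings, and with <interface-ip> standing in for the address from the screenshot), the graylog service's environment would gain:

```yaml
environment:
  - GRAYLOG_HTTP_BIND_ADDRESS=127.0.0.1:9000
  - GRAYLOG_HTTP_PUBLISH_URI=http://<interface-ip>:9000/
```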

Clickhouse Client - Code: 62. DB::Exception: Empty query

I'm trying to run clickhouse-server and clickhouse-client services using Docker and Docker Compose. Based on the ClickHouse docker-compose file and another compose sample, I created the services in my docker-compose.yml file as you can see below:
docker-compose.yml:
ch_server:
  container_name: myapp_ch_server
  image: yandex/clickhouse-server
  ports:
    - "8181:8123"
    - "9000:9000"
    - "9009:9009"
  ulimits:
    nproc: 65535
    nofile:
      soft: 262144
      hard: 262144
  volumes:
    - ./ch_db_data:/var/lib/clickhouse/
    - ./ch_db_logs:/val/log/clickhouse-server/
  networks:
    - myapp-network
ch_client:
  container_name: myapp_ch_client
  image: yandex/clickhouse-client
  command: ['--host', 'ch_server']
  networks:
    - myapp-network
When I run docker-compose up command, the following exception occurs from clickhouse-client service:
myapp_ch_client | Code: 62. DB::Exception: Empty query
myapp_ch_client exited with code 62
Do you have any idea how to fix this error?
You just need to pass the SQL query in the command params:
version: "2.4"
services:
  ch_server:
    container_name: myapp_ch_server
    image: yandex/clickhouse-server
    ports:
      - "8123:8123"
      - "9000:9000"
      - "9009:9009"
    ulimits:
      nproc: 65535
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - ./ch_db_data:/var/lib/clickhouse/
      - ./ch_db_logs:/var/log/clickhouse-server/
    networks:
      - myapp-network
    healthcheck:
      test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
      interval: 2s
      timeout: 2s
      retries: 16
  ch_client:
    container_name: myapp_ch_client
    image: yandex/clickhouse-client
    command: ['--host', 'ch_server', '--query', 'select * from system.functions order by name limit 4']
    networks:
      - myapp-network
    depends_on:
      ch_server:
        condition: service_healthy
networks:
  myapp-network:
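The healthcheck above just polls the /ping endpoint until it answers. The same wait can be sketched in Python (a plain illustration, not part of the answer's setup; wait_until_healthy is a hypothetical helper name):

```python
import time
import urllib.request
import urllib.error

def wait_until_healthy(url: str, retries: int = 16, interval: float = 2.0) -> bool:
    """Poll url until it returns HTTP 200, mirroring the compose healthcheck loop."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry after the interval
        time.sleep(interval)
    return False

# e.g. wait_until_healthy("http://localhost:8123/ping") before firing queries
```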
It doesn't make sense to define clickhouse-client in docker-compose; clickhouse-client is usually run outside of the docker-compose file:
define a docker-compose file for the servers (such as ClickHouse cluster nodes, ZooKeeper, Apache Kafka, etc.). For example, consider a config with one ClickHouse node:
version: "2.4"
services:
  ch_server:
    container_name: myapp_ch_server
    image: yandex/clickhouse-server
    ports:
      - "8123:8123"
      - "9000:9000"
      - "9009:9009"
    ulimits:
      nproc: 65535
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - ./ch_db_data:/var/lib/clickhouse/
      - ./ch_db_logs:/var/log/clickhouse-server/
    networks:
      - myapp-network
    healthcheck:
      test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
      interval: 2s
      timeout: 2s
      retries: 16
networks:
  myapp-network:
in a separate terminal, run clickhouse-client:
cd _folder_where_docker-compose_located
docker-compose exec ch_server clickhouse-client
A 2021 version, as in this tutorial: https://dev.to/titronium/clickhouse-server-in-1-minute-with-docker-4gf2
clickhouse-client:
  image: yandex/clickhouse-client:latest
  depends_on:
    - clickhouse-server
  links:
    - clickhouse-server
  entrypoint:
    - /bin/sleep
  command:
    - infinity
The last line, command: - infinity, means the container will wait forever for you to connect.
Actually, you already have access to a ClickHouse client on the command line of the ClickHouse server.
You can simply connect to the server container and call
clickhouse-client

Containers launched with Docker-Compose cannot connect to each other

I have a beginner question with Docker Compose. I am trying to extend the docker-compose-slim.yml example file from Zipkin GitHub repository.
I need to change it so that it includes a simple FastAPI app that I have written. Unfortunately, I cannot make them connect to each other: FastAPI's POST requests to the Zipkin container are rejected, even though both are connected to the same network with explicit links and port mappings defined in the YAML file. However, I am able to connect to both of them from the host.
Could you please tell me what I have done wrong?
Here is the error message:
Error emitting zipkin trace. ConnectionError(MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=9411): Max retries exceeded with url: /api/v2/spans (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fce354711c0>: Failed to establish a new connection: [Errno 111] Connection refused'))"))
Here is the Docker Compose YAML file:
version: '2.4'
services:
  zipkin:
    image: openzipkin/zipkin-slim
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mem
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
    depends_on:
      - storage
  storage:
    image: busybox:1.31.0
    container_name: fake_storage
  myfastapi:
    build: .
    ports:
      - 8000:8000
    links:
      - zipkin
    depends_on:
      - zipkin
  dependencies:
    image: busybox:1.31.0
    container_name: fake_dependencies
networks:
  default:
    name: foo_network
Here is the Dockerfile:
FROM python:3.8.5
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["uvicorn", "wsgi:app", "--host", "0.0.0.0", "--port", "8000"]
You must attach each container to the network "foo_network". The external: false flag tells Compose to create the network itself rather than expect one that already exists; strictly speaking you don't have to set it, but it makes the example explicit. Note also that your error shows host='127.0.0.1': inside a container that address refers to the container itself, so the FastAPI app should reach Zipkin via its service name (zipkin, port 9411).
And regarding the "links" option, see here: Link
version: '2.4'
services:
  zipkin:
    image: openzipkin/zipkin-slim
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mem
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
    depends_on:
      - storage
    networks:
      - foo_network
  storage:
    image: busybox:1.31.0
    container_name: fake_storage
    networks:
      - foo_network
  myfastapi:
    build: .
    ports:
      - 8000:8000
    links:
      - zipkin
    depends_on:
      - zipkin
    networks:
      - foo_network
  dependencies:
    image: busybox:1.31.0
    container_name: fake_dependencies
    networks:
      - foo_network
networks:
  foo_network:
    external: false
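Since the error targets host='127.0.0.1', which inside a container is the container itself, the app should build its span endpoint from the Compose service name instead. A small hedged sketch (the helper name and the ZIPKIN_HOST variable are illustrative, not from the question):

```python
import os

def zipkin_endpoint(default_host: str = "zipkin", port: int = 9411) -> str:
    """Build the Zipkin span-ingest URL from the service name, not 127.0.0.1."""
    host = os.environ.get("ZIPKIN_HOST", default_host)
    return f"http://{host}:{port}/api/v2/spans"

# Inside the myfastapi container this resolves over the shared foo_network:
# zipkin_endpoint()  ->  "http://zipkin:9411/api/v2/spans"
```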

jfrog-artifactory docker image - halting due to nofile usage

I have a docker-compose topology with Jenkins, GitLab, and Artifactory, and I am using the jfrog-artifactory Docker image from JFrog:
https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker
here is my docker-compose file:
version: "3"
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home
  artifactory:
    container_name: artifactory
    image: docker.bintray.io/jfrog/artifactory-oss:6.16.0
    ports:
      - "8081:8081"
    volumes:
      - artifactory_data:/var/opt/jfrog/artifactory
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
volumes:
  jenkins_home:
  artifactory_data:
At first I got the error ERROR: Max number of open files 1024, is too low. Cannot run Artifactory!
After setting the ulimit in docker-compose the container comes up and runs, but the Artifactory container then exits with the following log:
/opt/jfrog/artifactory/bin/artifactory.sh: line 185: 230 Killed $TOMCAT_HOME/bin/catalina.sh run

Cannot connect from inside docker swarm cluster to external mongodb service

If I run a single Docker container of my backend, it runs well and connects to MongoDB running on the host. But when I run my backend using docker-compose, it doesn't connect to MongoDB and prints to the console:
MongoError: failed to connect to server [12.345.678.912:27017] on first connect [MongoError: connection 0 to 12.345.678.912:27017 timed out]
docker-compose.yml contents:
version: "3"
services:
  web:
    image: __BE-IMAGE__
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 2048M
    ports:
      - "1337:8080"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "1340:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
How I run the single Docker container:
docker run -p 1337:8080 BE-IMAGE
You need to publish the mongo port as well, since localhost is not the same from inside versus outside the containers:
ports:
  - "1337:8080"
  - "27017:27017"
In a port definition the left-hand side is the outside (host) port and the right-hand side is internal to your container. Your error says that, from inside the container, port 27017 cannot be reached; the mapping above simply publishes that mongo port so the container can access it outside of Docker.
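To confirm which side of the mapping is actually reachable, a quick TCP probe can help (a generic sketch; port_open is an illustrative helper, and the mongo host/port come from the question):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from inside the web container to see whether mongod on the host
# is reachable, e.g. port_open("<host-ip>", 27017)
```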