Orion Reports Error While Provisioning a Device - fiware-orion

I am following the FIWARE IoT Agent LWM2M tutorial available here, trying to pre-provision an LWM2M device. I cloned the GitHub repo, installed the dependencies, and created a docker-compose.yml file. All containers (including lightweightm2m-iotagent) started successfully.
However, when I tried provisioning the device using:
(curl localhost:4041/iot/devices -s -S --header 'Content-Type: application/json' \
--header 'Accept: application/json' --header 'fiware-service: factory' --header 'fiware-servicepath: /robots' \
-d @- | python -m json.tool) <<EOF
{
"devices": [
{
"device_id": "robot1",
"entity_type": "Robot",
"attributes": [
{
"name": "Battery",
"type": "number"
}
],
"lazy": [
{
"name": "Message",
"type": "string"
}
],
"commands": [
{
"name": "Position",
"type": "location"
}
],
"internal_attributes": {
"lwm2mResourceMapping": {
"Battery" : {
"objectType": 7392,
"objectInstance": 0,
"objectResource": 1
},
"Message" : {
"objectType": 7392,
"objectInstance": 0,
"objectResource": 2
},
"Position" : {
"objectType": 7392,
"objectInstance": 0,
"objectResource": 3
}
}
}
}
]
}
EOF
I get the following error:
{
"message": "Request error connecting to the Context Broker: {\"code\":\"400\",\"reasonPhrase\":\"Bad Request\",\"details\":\"JSON Parse Error: unknown field: /contextRegistrations/contextRegistration/attributes/attribute/isDomain\"}",
"name": "BAD_REQUEST"
}
I am not sure how to debug this. Any idea how to fix it?
Question Edited: Below is the docker-compose file I'm using.
version: "3.1"
services:
mongo:
image: mongo:3.6
command: --nojournal
ports:
- "27017:27017"
expose:
- "27017"
orion:
image: fiware/orion
links:
- mongo
ports:
- "1026:1026"
command: -dbhost mongo -logLevel DEBUG
depends_on:
- mongo
expose:
- "1026"
lightweightm2m-iotagent:
image: telefonicaiot/lightweightm2m-iotagent
hostname: idas
links:
- orion
expose:
- "4041"
- "5684"
ports:
- "4041:4041"
- "5684:5684/udp"
mosquitto:
image: ansi/mosquitto
ports:
- "1883:1883"
expose:
- "1883"

This is a known issue, already fixed in the master branch.
The fix landed on November 7th, 2018, while telefonicaiot/lightweightm2m-iotagent:latest (and telefonicaiot/lightweightm2m-iotagent defaults to latest) was last updated on November 13th, 2018 at the time of writing, so it should include the fix.
Your local telefonicaiot/lightweightm2m-iotagent image is probably out of date. Pulling it again from Docker Hub should solve the problem.
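For example, something along these lines should refresh the image and recreate the IoT Agent container (assuming the service name lightweightm2m-iotagent from the compose file above):
docker pull telefonicaiot/lightweightm2m-iotagent:latest
# recreate the container so the stack picks up the freshly pulled image
docker-compose up -d --force-recreate lightweightm2m-iotagent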

Related

ECONNREFUSED 127.0.0.1:27017 - Compass <> MongoDB Replication Set connection challenges

I'm using Docker Compose to test a replica set on my local machine and am having trouble getting Compass connected to it. Everything has deployed fine and I can see the exposed ports listening on the host machine. I just keep getting ECONNREFUSED.
Here is the Docker Compose file.
services:
mongo1:
hostname: mongo1
container_name: mongo1
image: mongo:latest
networks:
- mongo-network
expose:
- 27017
ports:
- 30001:27017
volumes:
- md1:/data/db
restart: always
command: mongod --replSet mongo-net
mongo2:
hostname: mongo2
container_name: mongo2
image: mongo:latest
networks:
- mongo-network
expose:
- 27017
ports:
- 30002:27017
volumes:
- md2:/data/db
restart: always
command: mongod --replSet mongo-net
mongo3:
hostname: mongo3
container_name: mongo3
image: mongo:latest
networks:
- mongo-network
expose:
- 27017
ports:
- 30003:27017
volumes:
- md3:/data/db
restart: always
command: mongod --replSet mongo-net
mongoinit:
image: mongo
restart: "no"
depends_on:
- mongo1
- mongo2
- mongo3
command: >
sh -c "mongosh --host mongo1:27017 --eval
'
db = (new Mongo("localhost:27017")).getDB("test");
config = {
"_id" : "mongo-net",
"members" : [
{
"_id" : 0,
"host" : "localhost:27017",
"priority": 1
},
{
"_id" : 1,
"host" : "localhost:27017",
"priority": 2
},
{
"_id" : 2,
"host" : "localhost:27017",
"priority": 3
}
]
};
rs.initiate(config);
'"
networks:
mongo-network:
volumes:
md1:
md2:
md3:
The containers and replica set deploy fine and are communicating. I can see the defined ports exposed and listening.
My issue is connecting to the replica set with Compass: I get ECONNREFUSED.
What's odd is that I can actually see the client connecting in the primary's logs, but I get no other information about why the connection was refused/disconnected (see the check after the log excerpt below).
{
"attr": {
"connectionCount": 7,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "listener",
"id": 22943,
"msg": "Connection accepted",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.032+00:00"
}
}
{
"attr": {
"client": "conn129",
"doc": {
"application": {
"name": "MongoDB Compass"
},
"driver": {
"name": "nodejs",
"version": "4.8.1"
},
"os": {
"architecture": "arm64",
"name": "darwin",
"type": "Darwin",
"version": "22.1.0"
},
"platform": "Node.js v16.5.0, LE (unified)|Node.js v16.5.0, LE (unified)"
},
"remote": "192.168.32.1:61508"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 51800,
"msg": "client metadata",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.034+00:00"
}
}
{
"attr": {
"connectionCount": 6,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 22944,
"msg": "Connection ended",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.044+00:00"
}
}
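One thing worth checking: Compass connects to whatever host:port entries the replica set advertises in its configuration, so members registered as localhost:27017 will not be reachable from the host machine. The advertised members can be inspected from one of the containers, for example (assuming the container names above):
# Print the host:port entries the replica set advertises to clients;
# Compass retries against these addresses after the initial handshake.
docker exec -it mongo1 mongosh --quiet --eval 'rs.conf().members.map(m => m.host)'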

Containers on the same network not communicating with each other

I have a MongoDB container which I named e-learning, and I have a Docker image which should connect to the MongoDB container to update my database, but it's not working. I get this error:
Unknown, Last error: connection() error occurred during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }
Here's my Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18
WORKDIR /go/src/github.com/worker
COPY go.mod go.sum main.go ./
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM jrottenberg/ffmpeg:4-alpine
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
ENV LD_LIBRARY_PATH=/usr/local/lib
COPY --from=jrottenberg/ffmpeg / /
COPY app.env /root
COPY --from=0 /go/src/github.com/worker/app .
CMD ["./app"]
Here's my docker-compose file:
version: "3.9"
services:
worker:
image: worker
environment:
- MONGO_URI="mongodb://localhost:27017/"
- MONGO_DATABASE=e-learning
- RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
- RABBITMQ_QUEUE=upload
networks:
- app_network
external_links:
- e-learning
- rabbitmq
volumes:
- worker:/go/src/github.com/worker:rw
networks:
app_network:
external: true
volumes:
worker:
Here's the output of docker network inspect:
[
{
"Name": "app_network",
"Id": "f688edf02a194fd3b8a2a66076f834a23fa26cead20e163cde71ef32fc1ab598",
"Created": "2022-06-27T12:18:00.283531947+03:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2907482267e1f6e42544e5e8d852c0aac109ec523c6461e003572963e299e9b0": {
"Name": "rabbitmq",
"EndpointID": "4b46e091e4d5a79782185dce12cb2b3d79131b92d2179ea294a639fe82a1e79a",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"8afd004a981715b8088af53658812215357e156ede03905fe8fdbe4170e8b13f": {
"Name": "e-learning",
"EndpointID": "1c12d592a0ef6866d92e9989f2e5bc3d143602fc1e7ad3d980efffcb87b7e076",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"ad026f7e10c9c1c41071929239363031ff72ad1b9c6765ef5c977da76f24ea31": {
"Name": "video-transformation-worker-1",
"EndpointID": "ce3547014a6856725b6e815181a2c3383d307ae7cf7132e125c58423f335b73f",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Change MONGO_URI="mongodb://localhost:27017/" to MONGO_URI="mongodb://e-learning:27017/" (working on the assumption that e-learning is the mongo container).
Within a container attached to a bridge network (the default), localhost (127.0.0.1) is the container itself. So your app container is trying to access the database at port 27017 on itself (not on the host or on the db container). The easiest solution is to use the automatic DNS resolution between containers that Docker provides.
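As a quick sanity check, the service name should resolve and the MongoDB port should be reachable from inside the worker container (container names taken from the network inspect output above):
# Reach the mongo container by name over the shared bridge network;
# localhost here would only loop back to the worker container itself.
docker exec -it video-transformation-worker-1 nc -zv e-learning 27017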
I added extra_hosts and changed my Mongo URI to host.docker.internal, and that solved my problem (see the note after the compose file below).
version: "3.9"
services:
worker:
image: worker
environment:
- MONGO_URI="mongodb://host.docker.internal:27017/"
- MONGO_DATABASE=e-learning
- RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
- RABBITMQ_QUEUE=upload
networks:
- app_network
external_links:
- e-learning
- rabbitmq
volumes:
- worker:/go/src/github.com/worker:rw
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
app_network:
external: true
volumes:
worker:
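For context, the extra_hosts entry maps host.docker.internal to the Docker host's gateway address, so the container reaches MongoDB through the port published on the host rather than over the compose network. A rough way to verify the mapping from inside the running worker container:
# host.docker.internal should now resolve to the host-gateway address
docker-compose exec worker ping -c 1 host.docker.internal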

How to insert data using docker compose

Overview: I would like to insert the data in file.json into the database from docker-compose using curl.
My attempt: I mounted file.json inside the FROST container, then ran the curl command from docker-compose:
curl -X POST -H "Content-Type: application/json" -d file.json http://localhost:8080/FROST-Server/v1.1/Things
As a result, the docker-compose YAML is as follows:
version: '2'
services:
web:
image: fraunhoferiosb/frost-server:latest
environment:
- serviceRootUrl=http://localhost:8080/FROST-Server
- http_cors_enable=true
- http_cors_allowed.origins=*
- persistence_db_driver=org.postgresql.Driver
- persistence_db_url=jdbc:postgresql://database:5432/sensorthings
- persistence_db_username=sensorthings
- persistence_db_password=ChangeMe
- persistence_autoUpdateDatabase=true
ports:
- 8080:8080
- 1883:1883
volumes:
- ./file.json:file.json
depends_on:
- database
command:
- curl
- -X
- POST
- --data-binary
- ./file.json
- -H
- "Content-Type: application/json" # <-- YAML string quoting
- -H
- "Accept: application/json"
- "http://localhost:8080/FROST-Server/v1.1/Things"
database:
image: postgis/postgis:14-3.2-alpine
environment:
- POSTGRES_DB=sensorthings
- POSTGRES_USER=sensorthings
- POSTGRES_PASSWORD=ChangeMe
volumes:
- postgis_volume:/var/lib/postgresql/data
volumes:
postgis_volume:
After running docker-compose up, I got this error:
Creating network "frost_default" with the default driver
Creating frost_database_1 ... done
Creating frost_web_1 ... done
Attaching to frost_database_1, frost_web_1
database_1 |
database_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
database_1 |
database_1 | 2022-06-13 11:24:06.420 UTC [1] LOG: starting PostgreSQL 14.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
database_1 | 2022-06-13 11:24:06.421 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2022-06-13 11:24:06.421 UTC [1] LOG: listening on IPv6 address "::", port 5432
database_1 | 2022-06-13 11:24:06.458 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2022-06-13 11:24:06.486 UTC [22] LOG: database system was shut down at 2022-06-13 09:58:59 UTC
database_1 | 2022-06-13 11:24:06.498 UTC [1] LOG: database system is ready to accept connections
web_1 | % Total % Received % Xferd Average Speed Time Time Time Current
web_1 | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
web_1 | curl: (7) Failed to connect to localhost port 8080: Connection refused
frost_web_1 exited with code 7
Note: docker-compose.yml and file.json are in the same directory.
file.json
{
"name": "Car",
"description": "Smart connected car equipped with sensors.",
"properties": {
"vin": "5YJ3EDEB5KF223462"
},
"Locations": [
{
"name": "Parking lot",
"description": "The parking lot of the fictive company.",
"encodingType": "application/vnd.geo+json",
"location": {
"type": "Point",
"coordinates": [8.419432640075684, 49.01395040719586]
}
}
],
"Datastreams": [
{
"name": "Oil pump",
"description": "Measuring the motor oil pressure.",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"unitOfMeasurement": {
"name": "Bar",
"symbol": "bar",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#Bar"
},
"Sensor": {
"name": "OIL_PRES_SENSOR2",
"description": "Oil pressure sensor",
"encodingType": "application/pdf",
"metadata": "..."
},
"ObservedProperty": {
"name": "Pressure",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#Pressure",
"description": "The oil pressure."
},
"Observations": [
{
"phenomenonTime": "2022-01-10T10:00:00Z",
"result": 2.1
},
{
"phenomenonTime": "2022-01-10T10:01:10Z",
"result": 2.3
},
{
"phenomenonTime": "2022-01-10T10:02:20Z",
"result": 2.7
},
{
"phenomenonTime": "2022-01-10T10:03:30Z",
"result": 2.9
},
{
"phenomenonTime": "2022-01-10T10:04:40Z",
"result": 4.1
},
{
"phenomenonTime": "2022-01-10T10:05:50Z",
"result": 3.7
}
]
},
{
"name": "Motor Temperature",
"description": "The temperature of the motor.",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"unitOfMeasurement": {
"name": "Centigrade",
"symbol": "C",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#DegreeCentigrade"
},
"Sensor": {
"name": "DHT22/Temperature",
"description": "Temperature sensor of a DHT22",
"encodingType": "application/pdf",
"metadata": "https://www.sparkfun.com/datasheets/Sensors/Temperature/DHT22.pdf"
},
"ObservedProperty": {
"name": "Temperature",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#ThermodynamicTemperature",
"description": "The temperature."
},
"Observations": [
{
"phenomenonTime": "2019-03-14T10:00:00Z",
"result": 21.0
},
{
"phenomenonTime": "2019-03-14T10:01:00Z",
"result": 23.1
},
{
"phenomenonTime": "2019-03-14T10:02:00Z",
"result": 40.5
},
{
"phenomenonTime": "2019-03-14T10:03:00Z",
"result": 47.1
},
{
"phenomenonTime": "2019-03-14T10:04:00Z",
"result": 32.2
},
{
"phenomenonTime": "2019-03-14T10:05:00Z",
"result": 30.3
}
]
},
{
"name": "Access",
"description": "The access state (e.g., open, closed) of the vehicle.",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"unitOfMeasurement": {
"name": "Centimeter",
"symbol": "cm",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#Centimeter"
},
"Sensor": {
"name": "Distance Sensor",
"description": "Measure the distance.",
"encodingType": "application/pdf",
"metadata": "..."
},
"ObservedProperty": {
"name": "Length",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#Length",
"description": "The length of the measured distance."
},
"Observations": [
{
"phenomenonTime": "2019-03-14T10:00:00Z",
"result": 0.0
},
{
"phenomenonTime": "2019-03-14T10:01:00Z",
"result": 0.0
},
{
"phenomenonTime": "2019-03-14T10:02:00Z",
"result": 21.4
},
{
"phenomenonTime": "2019-03-14T10:03:00Z",
"result": 21.7
},
{
"phenomenonTime": "2019-03-14T10:04:00Z",
"result": 20.9
},
{
"phenomenonTime": "2019-03-14T10:05:00Z",
"result": 0.0
}
]
}
]
}
Any other solution is welcome too :)
I think you're executing your curl command before your web container starts. If I were you, I would run web on 0.0.0.0 instead of localhost, and use a separate container that sleeps for some time and then executes the curl command (don't forget about the container entrypoint and override it if required).
I ran your docker-compose and executed the curl manually, but I got a 400:
curl -vvv -X POST --data-binary ./file.json -H "Content-Type: application/json" -H "Accept: application/json" "http://localhost:8080/FROST-Server/v1.1/Things"
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 127.0.0.1:8080...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /FROST-Server/v1.1/Things HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Content-Type: application/json
> Accept: application/json
> Content-Length: 11
>
* upload completely sent off: 11 out of 11 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 400
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: Location
< Content-Length: 224
< Date: Tue, 14 Jun 2022 12:43:06 GMT
< Connection: close
<
* Closing connection 0
{"code":400,"type":"error","message":"Unexpected character ('.' (code 46)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\n at [Source: (BufferedReader); line: 1, column: 2]"}
Since I'm not familiar with this service, I don't know how to fix this, but you can test the docker-compose below and see if you can get it working:
version: '2'
services:
curl:
image: alpine/curl:latest
entrypoint: /bin/sh
command:
- -c
- sleep 10
- curl
- -vvv
- -X
- POST
- --data-binary
- /file.json
- -H
- "Content-Type: application/json" # <-- YAML string quoting
- -H
- "Accept: application/json"
- "http://web:8080/FROST-Server/v1.1/Things"
volumes:
- ./file.json:/file.json
depends_on:
- web
web:
image: fraunhoferiosb/frost-server:latest
environment:
- serviceRootUrl=http://0.0.0.0:8080/FROST-Server
- http_cors_enable=true
- http_cors_allowed.origins=*
- persistence_db_driver=org.postgresql.Driver
- persistence_db_url=jdbc:postgresql://database:5432/sensorthings
- persistence_db_username=sensorthings
- persistence_db_password=ChangeMe
- persistence_autoUpdateDatabase=true
ports:
- 8080:8080
- 1883:1883
# volumes:
# - ./file.json:/usr/local/tomcat/file.json
depends_on:
- database
# command:
# - curl
# - -X
# - POST
# - --data-binary
# - ./file.json
# - -H
# - "Content-Type: application/json" # <-- YAML string quoting
# - -H
# - "Accept: application/json"
# - "http://localhost:8080/FROST-Server/v1.1/Things"
database:
image: postgis/postgis:14-3.2-alpine
environment:
- POSTGRES_DB=sensorthings
- POSTGRES_USER=sensorthings
- POSTGRES_PASSWORD=ChangeMe
volumes:
- postgis_volume:/var/lib/postgresql/data
volumes:
postgis_volume:
Edit 1: the volume was missing from the "curl" container.
The end solution to the problem. I would like to thank @RoohAllahGodazgar for helping me out.
Explanation: we need another container, based on a curl image, that runs the curl request after the FROST server is up. In the curl service we first mount the dummy data into the container, then sleep 20s to make sure the FROST server is running, and then run the curl command. After that the container exits. (A verification step follows the compose file below.)
version: '2'
services:
curl:
image: alpine/curl:latest
command: sh -c "sleep 20 && curl -X POST -d #/file.json http://web:8080/FROST-Server/v1.1/Things"
volumes:
- ./file.json:/file.json
depends_on:
- web
web:
image: fraunhoferiosb/frost-server:latest
environment:
- serviceRootUrl=http://0.0.0.0:8080/FROST-Server
- http_cors_enable=true
- http_cors_allowed.origins=*
- persistence_db_driver=org.postgresql.Driver
- persistence_db_url=jdbc:postgresql://database:5432/sensorthings
- persistence_db_username=sensorthings
- persistence_db_password=ChangeMe
- persistence_autoUpdateDatabase=true
ports:
- 8080:8080
- 1883:1883
depends_on:
- database
database:
image: postgis/postgis:14-3.2-alpine
environment:
- POSTGRES_DB=sensorthings
- POSTGRES_USER=sensorthings
- POSTGRES_PASSWORD=ChangeMe
volumes:
- postgis_volume:/var/lib/postgresql/data
volumes:
postgis_volume:
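To verify the import, bring the stack up and query the Things collection once the one-shot curl container has exited:
docker-compose up -d
# the posted entity should then be visible through the published port
curl http://localhost:8080/FROST-Server/v1.1/Things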

Unable to connect to Mongodb container from flask container

#Docker compose file
version: "3.4" # optional since v1.27.0
services:
flaskblog:
build:
context: flaskblog
dockerfile: Dockerfile
image: flaskblog
restart: unless-stopped
environment:
APP_ENV: "prod"
APP_DEBUG: "False"
APP_PORT: 5000
MONGODB_DATABASE: flaskdb
MONGODB_USERNAME: flaskuser
MONGODB_PASSWORD:
MONGODB_HOSTNAME: mongodbuser
volumes:
- appdata:/var/www
depends_on:
- mongodb
networks:
- frontend
- backend
mongodb:
image: mongo:4.2
#container_name: mongodb
restart: unless-stopped
command: mongod --auth
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD:
MONGO_INITDB_DATABASE: flaskdb
MONGODB_DATA_DIR: /data/db2
MONDODB_LOG_DIR: /dev/null
volumes:
- mongodbdata:/data/db2
#- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
networks:
- backend
networks:
backend:
driver: bridge
volumes:
mongodbdata:
driver: local
appdata:
driver: local
#Flask container file
FROM python:3.8-slim-buster
#python:3.6-stretch
LABEL MAINTAINER="Shekhar Banerjee"
ENV GROUP_ID=1000 \
USER_ID=1000 \
SECRET_KEY="4" \
EMAIL_USER="dankml.com" \
EMAIL_PASS="nzw"
RUN mkdir /home/logix3
WORKDIR /home/logix3
ADD . /home/logix3/
RUN pip install -r requirements.txt
RUN pip install gunicorn
RUN groupadd --gid $GROUP_ID www
RUN useradd --create-home -u $USER_ID --shell /bin/sh --gid www www
USER www
EXPOSE 5000/tcp
#CMD ["python","run.py"]
CMD [ "gunicorn", "-w", "4", "--bind", "127.0.0.1:5000", "run:app"]
These are my docker-compose file and Dockerfile for an application. The MongoDB and Flask containers work fine individually, but when I try to run them together I get no response on localhost, and there are no error messages in the logs or the containers.
curl on the host gives this:
$ curl -i http://127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
Can anyone suggest how I can debug this?
**Only the backend network is functional
[
{
"Name": "mlspace_backend",
"Id": "e7e37cad0058330c99c55ffea2b98281e0c6526763e34550db24431b30030b77",
"Created": "2022-05-15T22:52:25.0281001+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"12c50f5f70d18b4c7ffc076177b59ff063a8ff81c4926c8cae0bf9e74dc2fc83": {
"Name": "mlspace_mongodb_1",
"EndpointID": "8278f672d9211aec9b539e14ae4eeea5e8f7aaef95448f44aab7ec1a8c61bb0b",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"75cde7a34d6b3c4517c536a06349579c6c39090a93a017b2c280d987701ed0cf": {
"Name": "mlspace_flaskblog_1",
"EndpointID": "20489de8841d937f01768170a89f74c37ed049d241b096c8de8424c51e58704c",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "backend",
"com.docker.compose.project": "mlspace",
"com.docker.compose.version": "1.23.2"
}
}
]
I checked the logs of the containers: no errors found. The Flask engine seems to be running, but the site won't load, and even curl gives no output.
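One way to narrow this down: check whether gunicorn answers from inside the container first. If it does, but the host cannot reach it, the likely causes are the bind address (the Dockerfile binds gunicorn to 127.0.0.1, which is not reachable from outside the container) and the missing ports: mapping for the flaskblog service in the compose file. A minimal check, using the container name from the network inspect output (curl/wget are typically not present in python:3.8-slim, so Python itself is used):
# Hit gunicorn on the container's own loopback interface from inside it.
docker exec -it mlspace_flaskblog_1 python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:5000').status)"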

I had a problem running Kafka and Zookeeper on Docker Compose

This is my docker-compose-single-broker.yml file.
version: '2'
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
networks:
my-network:
ipv4_address: 172.18.0.100
kafka:
image: wurstmeister/kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: 172.18.0.101
KAFKA_CREATE_TOPICS: "test:1:1"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- zookeeper
networks:
my-network:
ipv4_address: 172.18.0.101
networks:
my-network:
name: ecommerce-network # 172.18.0.1 ~
And I executed the command:
docker-compose -f docker-compose-single-broker.yml up -d
I checked my network with the command:
docker network inspect ecommerce-network
[
{
"Name": "ecommerce-network",
"Id": "f543bd92e299455454bd1affa993d1a4b7ca2c347d576b24d8f559d0ac7f07c2",
"Created": "2021-05-23T12:42:01.804785417Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"cad97d79a92ea8e0f24b000c8255c2db1ebc64865fab3d7cda37ff52a8755f14": {
"Name": "kafka-docker_kafka_1",
"EndpointID": "4c867d9d5f4d28e608f34247b102f1ff2811a9bbb2f78d30b2f55621e6ac6187",
"MacAddress": "02:42:ac:12:00:65",
"IPv4Address": "172.18.0.101/16",
"IPv6Address": ""
},
"f7df5354b9e114a1a849ea9d558d8543ca5cb02800c5189d9f09ee1b95a517d6": {
"Name": "kafka-docker_zookeeper_1",
"EndpointID": "b304581db258dd3da95e15fb658cae0e40bd38440c1f845b09936d9b69d4fb23",
"MacAddress": "02:42:ac:12:00:64",
"IPv4Address": "172.18.0.100/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Then I entered the kafka container and executed the command to look up the topic list; however, I couldn't get the topic list even though I waited indefinitely.
These are my kafka container's logs.
What should I do to solve this problem?
It's unclear why you think you'll need IP addresses; remove those.
For example, KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 works out of the box with Docker networking.
KAFKA_ADVERTISED_HOST_NAME can simply be localhost if you don't plan on connecting from outside that container; otherwise, it can be the Docker service name you've set, kafka.
And you don't need to mount the Docker socket.
Related - Connect to Kafka running in Docker
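Once the static IPs are removed, the broker should be able to reach Zookeeper by service name, and the topic list can be checked from inside the kafka container, roughly like this (the exact flags depend on the Kafka version in the image; older versions use --zookeeper zookeeper:2181 instead of --bootstrap-server):
# The auto-created "test" topic from KAFKA_CREATE_TOPICS should appear
# once the broker has registered with Zookeeper.
docker-compose -f docker-compose-single-broker.yml exec kafka \
  kafka-topics.sh --bootstrap-server localhost:9092 --list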
For local development purposes, you could check out my containerized Kafka, which uses a single image for both Zookeeper and Kafka, if you're interested.
You can find it via
https://github.com/howiesynguyen/Java-basedExamples/tree/main/DockerContainerizedKafka4Dev
or
https://hub.docker.com/repository/docker/howiesynguyen/kafka4dev
I tested it with one of my Spring Cloud Stream examples and it worked for me. Hopefully someone finds it helpful.
Just in case it's useful to anybody: I had a pretty similar setup to the question author's, and in my case Kafka didn't see Zookeeper (but I was using a different image, from Debezium). In my case I figured out that the environment variable KAFKA_ZOOKEEPER_CONNECT should be named just ZOOKEEPER_CONNECT.
The solution that worked for me (the network can be omitted, it's not necessary; see the quick check after the compose file):
version: "3.9"
services:
zookeeper:
image: debezium/zookeeper
ports:
- 2181:2181
- 2888:2888
- 3888:3888
networks:
- main
kafka:
image: debezium/kafka
ports:
- 9092:9092
environment:
ZOOKEEPER_CONNECT: zookeeper
depends_on:
- zookeeper
networks:
- main
networks:
main:
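After docker-compose up, a quick check that the broker actually reached Zookeeper is to list topics through it (assuming the debezium/kafka image ships the standard Kafka CLI in its working directory):
docker-compose up -d
# If the broker cannot reach Zookeeper this will hang or time out
# instead of returning a (possibly empty) topic list.
docker-compose exec kafka bin/kafka-topics.sh --bootstrap-server localhost:9092 --list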