Unable to load data from Kafka topic to Postgres using JdbcSinkConnector

I have dockerized Kafka and Postgres. I use the JDBC Sink connector to load data from a Kafka topic into a Postgres table. First, I create a topic and a stream on top of it with the AVRO value format:
CREATE STREAM TEST01 (ROWKEY VARCHAR KEY, COL1 INT, COL2 VARCHAR)
WITH (KAFKA_TOPIC='test01', PARTITIONS=1, VALUE_FORMAT='AVRO');
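For testing, a row can be pushed through the stream from the ksqlDB CLI. This snippet is my addition, not from the original post; it assumes a ksqldb-cli container like the one in the compose file later on this page, and the values are made up:

docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
ksql> INSERT INTO TEST01 (ROWKEY, COL1, COL2) VALUES ('A', 1, 'FOO');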
This is the command that creates the sink connector:
curl -X PUT http://localhost:8083/connectors/sink-jdbc-postgre-01/config \
    -H "Content-Type: application/json" -d '{
        "connector.class"                    : "io.confluent.connect.jdbc.JdbcSinkConnector",
        "connection.url"                     : "jdbc:postgresql://postgres:5432/",
        "topics"                             : "test01",
        "key.converter"                      : "org.apache.kafka.connect.storage.StringConverter",
        "value.converter"                    : "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        "connection.user"                    : "postgres",
        "connection.password"                : "********",
        "auto.create"                        : true,
        "auto.evolve"                        : true,
        "insert.mode"                        : "insert",
        "pk.mode"                            : "record_key",
        "pk.fields"                          : "MESSAGE_KEY"
    }'
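(A debugging step I'd add here: the Connect REST API reports whether the connector and its task are actually RUNNING or FAILED, which surfaces the error below without reading the worker logs. jq is optional:)

curl -s http://localhost:8083/connectors/sink-jdbc-postgre-01/status | jq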
Then I check in Postgres whether any data has arrived from Kafka by using the \dt command, and it returns the following: Did not find any relations.
Then I check the kafka-connect logs, and they show the following:
[2021-03-30 10:05:07,546] INFO Attempting to open connection #2 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider)
connect | [2021-03-30 10:05:07,577] INFO Unable to connect to database on attempt 2/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider)
connect | org.postgresql.util.PSQLException: The connection attempt failed.
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:296)
connect | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
connect | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
connect | at org.postgresql.Driver.makeConnection(Driver.java:459)
connect | at org.postgresql.Driver.connect(Driver.java:261)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:664)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:208)
connect | at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:224)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:93)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:62)
connect | at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
connect | at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | Caused by: java.net.UnknownHostException: postgres
connect | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
connect | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
connect | at java.net.Socket.connect(Socket.java:589)
connect | at org.postgresql.core.PGStream.<init>(PGStream.java:81)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:92)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:196)
connect | ... 22 more
connect | [2021-03-30 10:05:17,578] INFO Attempting to open connection #3 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider)
connect | [2021-03-30 10:05:17,732] ERROR WorkerSinkTask{id=sink-jdbc-postgre-01-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: org.postgresql.util.PSQLException: The connection attempt failed. (org.apache.kafka.connect.runtime.WorkerSinkTask)
connect | org.apache.kafka.connect.errors.ConnectException: org.postgresql.util.PSQLException: The connection attempt failed.
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:69)
connect | at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
connect | at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:296)
connect | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
connect | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
connect | at org.postgresql.Driver.makeConnection(Driver.java:459)
connect | at org.postgresql.Driver.connect(Driver.java:261)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:664)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:208)
connect | at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:224)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:93)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:62)
connect | ... 13 more
connect | Caused by: java.net.UnknownHostException: postgres
connect | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
connect | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
connect | at java.net.Socket.connect(Socket.java:589)
connect | at org.postgresql.core.PGStream.<init>(PGStream.java:81)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:92)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:196)
connect | ... 22 more
connect | [2021-03-30 10:05:17,734] ERROR WorkerSinkTask{id=sink-jdbc-postgre-01-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
I suspected that the problem might be a missing PostgreSQL JDBC driver .jar in /usr/share/java/kafka-connect-jdbc, but it is there:
root@connect:/usr/share/java/kafka-connect-jdbc# ls -l
total 8412
-rw-r--r-- 1 root root 17555 Apr 18 2020 common-utils-5.5.0.jar
-rw-r--r-- 1 root root 317816 Apr 18 2020 jtds-1.3.1.jar
-rw-r--r-- 1 root root 230113 Apr 18 2020 kafka-connect-jdbc-5.5.0.jar
-rw-r--r-- 1 root root 927447 Apr 18 2020 postgresql-42.2.10.jar
-rw-r--r-- 1 root root 41139 Apr 18 2020 slf4j-api-1.7.26.jar
-rw-r--r-- 1 root root 7064881 Apr 18 2020 sqlite-jdbc-3.25.2.jar
What could be the solution to this problem?

Thanks to @Robin Moffatt's tutorial and @OneCricketeer's tip, I've found the way to solve this problem: Kafka Connect and Postgres should be in one docker-compose.yml file, so that they share a Docker network. I attach the docker-compose.yml below. Hope this helps people who run into the same problem:
---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:5.5.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.3.2-5.5.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.5.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:5.5.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"

  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:5.5.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true

  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.5.0
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
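Not quoted from the original answer, but with everything in one Compose project a quick end-to-end check looks like this (service names as in the file above; psql ships in the postgres image):

docker-compose up -d
# after the stream and connector are created, the auto-created table
# (named after the topic) should appear:
docker-compose exec postgres psql -U postgres -c '\dt'
docker-compose exec postgres psql -U postgres -c 'SELECT * FROM test01;'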

Here is the error from your stack trace:
java.net.UnknownHostException: postgres
This means that the machine running your Kafka Connect worker cannot resolve the hostname postgres.
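Merging the two stacks into one docker-compose.yml (as in the answer above) fixes this because Compose puts all services of a project on one network, where Docker's embedded DNS resolves service names. If the stacks must stay separate, attaching the Postgres container to the Connect container's network should also work; a sketch with assumed container names:

# see which network the Connect worker is on
docker inspect connect --format '{{json .NetworkSettings.Networks}}'
# attach the Postgres container (assumed to be named "postgres") to that network
docker network connect <connect-network> postgres
# the hostname should now resolve from inside the worker (if getent is available in the image)
docker exec connect getent hosts postgres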

Related

airflow runs connections check against metadata db before init db

I am running Airflow locally based on a Dockerfile, .env, docker-compose.yaml and entrypoint.sh, started with "docker-compose -f docker-compose.yaml up".
And just after "airflow init db" in entrypoint.sh I am getting the following error:
(After that, everything is fine and I can run Airflow, but this drives me crazy. Can anyone help me resolve it, please?)
What's strange is that the service queries the tables even before the DB has been initialized:
airflow_webserver | initiating db
airflow_webserver | DB: postgresql://airflow:***@airflow_metadb:5432/airflow
airflow_webserver | [2022-02-22 13:52:26,318] {db.py:929} INFO - Dropping tables that exist
airflow_webserver | [2022-02-22 13:52:26,570] {migration.py:201} INFO - Context impl PostgresqlImpl.
airflow_webserver | [2022-02-22 13:52:26,570] {migration.py:204} INFO - Will assume transactional DDL.
airflow_metadb | 2022-02-22 13:52:26.712 UTC [71] ERROR: relation "connection" does not exist at character 55
airflow_metadb | 2022-02-22 13:52:26.712 UTC [71] STATEMENT: SELECT connection.conn_id AS connection_conn_id
airflow_metadb | FROM connection GROUP BY connection.conn_id
airflow_metadb | HAVING count(*) > 1
airflow_metadb | 2022-02-22 13:52:26.714 UTC [72] ERROR: relation "connection" does not exist at character 55
airflow_metadb | 2022-02-22 13:52:26.714 UTC [72] STATEMENT: SELECT connection.conn_id AS connection_conn_id
airflow_metadb | FROM connection
airflow_metadb | WHERE connection.conn_type IS NULL
airflow_webserver | [2022-02-22 13:52:26,733] {db.py:921} INFO - Creating tables
airflow 2.2.3
postgres 13
in dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
in docker-compose.yaml:
webserver:
  env_file: ./.env
  image: airflow
  container_name: airflow_webserver
  restart: always
  depends_on:
    - postgres
  environment:
    <<: *env_common
    AIRFLOW__CORE__LOAD_EXAMPLES: ${AIRFLOW__CORE__LOAD_EXAMPLES}
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: ${AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION}
    EXECUTOR: ${EXECUTOR}
    _AIRFLOW_DB_UPGRADE: ${_AIRFLOW_DB_UPGRADE}
    _AIRFLOW_WWW_USER_CREATE: ${_AIRFLOW_WWW_USER_CREATE}
    _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME}
    _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD}
    _AIRFLOW_WWW_USER_ROLE: ${_AIRFLOW_WWW_USER_ROLE}
    _AIRFLOW_WWW_USER_EMAIL: ${_AIRFLOW_WWW_USER_EMAIL}
  logging:
    options:
      max-size: 10m
      max-file: "3"
  volumes:
    - ./dags:bla-bla
    - ./logs:bla-bla
  ports:
    - ${AIRFLOW_WEBSERVER_PORT}:${AIRFLOW_WEBSERVER_PORT}
  command: webserver
  healthcheck:
    test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
    interval: 30s
    timeout: 30s
    retries: 3
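No accepted fix is shown here, but a common mitigation is to have entrypoint.sh wait for the metadata DB before running any airflow db commands. A minimal sketch, assuming pg_isready (from postgresql-client) is available in the image and using the host/user names from the log above:

#!/usr/bin/env bash
set -e
# block until Postgres accepts connections
until pg_isready -h airflow_metadb -p 5432 -U airflow; do
  echo "waiting for postgres..."
  sleep 2
done
airflow db init
exec "$@"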

Unable to connect to mongodb as docker compose service from another service

When I launch my application using docker-compose, I get an error that my application cannot connect to the database, although the port is exposed and they are on the same network...
This is my docker-compose.yml file:
version: '3'
volumes:
  db-data:
    driver: local
  mongo-config:
    driver: local
services:
  pulseq-mongodb:
    image: mongo:latest
    container_name: server-mongodb
    restart: always
    networks:
      - server-net
    ports:
      - "27017:27017"
    expose:
      - 27017
    volumes:
      - db-data:/data/db
      - mongo-config:/data/configdb
  server:
    image: my-server:0.0.1-pre-alpha.1
    container_name: server
    restart: always
    networks:
      - server-net
    ports:
      - "8080:8080"
    depends_on:
      - server-mongodb
networks:
  server-net:
    driver: bridge
I'm getting the following error on startup:
server | 2021-11-01 13:05:10.409 INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
server |
server | com.mongodb.MongoSocketOpenException: Exception opening socket
server | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.2.3.jar:na]
server | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:143) ~[mongodb-driver-core-4.2.3.jar:na]
server | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:188) ~[mongodb-driver-core-4.2.3.jar:na]
server | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144) ~[mongodb-driver-core-4.2.3.jar:na]
server | at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
I have tried many solutions, but nothing helped. Any answer will be helpful.
Thanks.
I fixed this by using the container name instead of localhost in the application configuration.
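For example, with a Spring Boot application (which the stack trace suggests), the connection URI can be overridden via an environment variable so it points at the Mongo container instead of localhost. This snippet is illustrative, not from the original answer; the database name is made up:

# Spring's relaxed binding maps this variable to spring.data.mongodb.uri
export SPRING_DATA_MONGODB_URI="mongodb://server-mongodb:27017/mydb"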

Kafka connect cannot find broker in docker

I have a docker-compose file to set up a Kafka infrastructure. The problem is: when I run the docker-compose for the first time on a machine, all the containers run fine, but when I STOP/REMOVE the containers and re-run the docker-compose, the container for the image "cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0" sometimes exits. I just wanted to add that this problem is not constant, and I cannot figure out the root cause.
Could you please help me with the above issue with the "cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0" container? By looking at the logs, I understand that there is an unknown host exception, but I don't know why it is trying to find the IP 172.18.0.4. It's not even my IP.
p.s. I am a novice at Kafka docker setup, please help!
Following are the error logs from the cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0 container.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 5.4.0-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: f4201a82bea68cc7
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1585904457838
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
java.net.UnknownHostException: broker: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:969)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1184)
at java.lang.Thread.run(Thread.java:748)
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
java.net.UnknownHostException: broker
Following is the docker-compose.yml file.
I execute the following docker-compose.yml file using the command
docker-compose up -d
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:5.4.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:5.4.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

  kafka-topics-ui:
    image: landoop/kafka-topics-ui:0.9.4
    hostname: kafka-topics-ui
    ports:
      - "8000:8000"
    environment:
      KAFKA_REST_PROXY_URL: "http://rest-proxy:8082/"
      PROXY: "true"
    depends_on:
      - zookeeper
      - broker
      - schema-registry
      - rest-proxy

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.4.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------------------
broker /etc/confluent/docker/run Up 0.0.0.0:9092->9092/tcp
connect /etc/confluent/docker/run Up 0.0.0.0:8083->8083/tcp, 9092/tcp
topics-ui /run.sh Up 0.0.0.0:8000->8000/tcp
rest-proxy /etc/confluent/docker/run Up 0.0.0.0:8082->8082/tcp
schema-registry /etc/confluent/docker/run Up 0.0.0.0:8081->8081/tcp
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
docker-compose ps output after sometime
Name Command State Ports
-------------------------------------------------------------------------------------------------------------
broker /etc/confluent/docker/run Up 0.0.0.0:9092->9092/tcp
connect /etc/confluent/docker/run Exit 137
topics-ui /run.sh Up 0.0.0.0:8000->8000/tcp
rest-proxy /etc/confluent/docker/run Up 0.0.0.0:8082->8082/tcp
schema-registry /etc/confluent/docker/run Up 0.0.0.0:8081->8081/tcp
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
We can see that the 'connect' container is in the Exit 137 state.
I don't know why it is trying to find 172.18.0.4 IP. It's not even my IP.
That is a Docker-internal address: 172.18.0.x addresses come from the bridge network that Compose creates, and 172.18.0.4 was the IP assigned to the broker container on it.
Overall, it seems like you might be running into a memory problem, because you are connecting to the correct addresses (CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS could actually be localhost:29092), but the Docker network is becoming unstable. (Run docker-compose rm -sf and docker system prune to get to a clean(er) state.)
Solutions
Increase your Docker memory in the settings
Don't run containers you don't need (REST Proxy or Topics UI)
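Exit code 137 means the container received SIGKILL, which on a memory-starved Docker host is usually the OOM killer. A couple of checks can confirm this before changing any settings:

# was the connect container OOM-killed?
docker inspect connect --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}'
# current memory usage per running container
docker stats --no-stream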

Debezium error when connecting to kafka multi-broker in docker swarm

When I set up my Swarm with this stack (Kafka multi-broker, ZooKeeper, Debezium), Kafka and ZooKeeper are working: I can create topics, consumers and producers. But Debezium shows the error: org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties. I'm not modifying anything, just the default config in the docker-stack file below:
version: '3.6'
services:
  zoo:
    image: wurstmeister/zookeeper
    ports:
      - '2181:2181'
    volumes:
      - zoo-data:/tmp/zookeeper
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.type==zoo

  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    environment:
      HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2"
      KAFKA_ZOOKEEPER_CONNECT: zoo:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      #KAFKA_CREATE_TOPICS: "Topic1:1:2,Topic2:1:1:compact"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - kafka-data:/tmp/kafka-logs
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.name==kafka
    depends_on:
      - zoo

  debezium:
    image: debezium/connect:0.8
    hostname: connect
    ports:
      - '8083:8083'
    environment:
      BOOTSTRAP_SERVERS: kafka:9094
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: my_connect_configs
      OFFSET_STORAGE_TOPIC: my_connect_offsets
    deploy:
      placement:
        constraints:
          - node.labels.type==dbz
    depends_on:
      - kafka

volumes:
  kafka-data:
  zoo-data:
When I check the logs with docker service logs debezium, it shows this error:
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | 2018-08-03 04:33:27,034 ERROR || Stopping due to error [org.apache.kafka.connect.cli.ConnectDistributed]
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:77)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:258)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | ... 2 more
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1 | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
Can anyone show me how to fix this error? I'm new to this stack, and after a few days of research I can't figure it out. Thank you so much!
You can change this line:
HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2"
to:
HOSTNAME_COMMAND: "docker info | grep 'Node Address:' | cut -d' ' -f 4"
or you can just use this docker-compose file:
version: '3.2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    deploy:
      mode: global
    volumes:
      - /shared_data/zoo1/data:/data
      - /shared_data/zoo1/datalog:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zookeeper:2888:3888

  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    deploy:
      mode: global
    environment:
      HOSTNAME_COMMAND: "docker info | grep 'Node Address:' | cut -d' ' -f 4"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /shared_data/kafka:/var/lib/kafka/data
    depends_on:
      - zookeeper
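After redeploying, it can be worth confirming what the brokers actually advertise on the published port. A quick check with kafkacat (my addition, not part of the original answer; substitute a real swarm node address for the placeholder):

# -L lists cluster metadata, including each broker's advertised host:port;
# the OUTSIDE listener should now show the node address, not an unresolvable name
kafkacat -b <swarm-node-ip>:9094 -L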

kafka-manager not able to read from kafka docker JMX port exposed

I am not able to get my kafka-manager to show message data/metrics.
I am running kafka-manager locally with the below command,
bin/kafka-manager -Dkafka-manager.zkhosts="localhost:2181"
Also I am checking enable JMX polling on start option.
I am publishing messages to the Kafka broker on a topic: test. The kafka-manager view is able to show the topic "test" but does not show the message count/metrics etc. The kafka-manager application throws an exception which says:
[error] k.m.a.c.OffsetCachePassive - [topic=logstash_topic] An error has occurred while getting topic offsets from broker List((BrokerIdentity(1,kafka-1,9092,9999,false),0))
java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:415) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:412) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_161]
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.c.BrokerViewCacheActor - Updating broker view...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[error] k.m.j.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://kafka-1:9999/jmxrmi
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: kafka-1; nested exception is:
java.net.ConnectException: Operation timed out (Connection timed out)]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369) ~[na:1.8.0_161]
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[na:1.8.0_161]
at kafka.manager.jmx.KafkaJMX$.doWithConnection(KafkaJMX.scala:57) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
And my ZooKeeper and Kafka instances are running via docker-compose up -d.
Below is my docker-compose.yml file.
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  hostname: zookeeper
  environment:
    ZOOKEEPER_SERVER_ID: 1
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  ports:
    - "2181:2181"

kafka-1:
  image: confluentinc/cp-kafka:latest
  hostname: kafka-1
  depends_on:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
    KAFKA_JMX_HOSTNAME: kafka-1
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1 -
      Dcom.sun.management.jmxremote.local.only=false -
      Dcom.sun.management.jmxremote.rmi.port=9999 -
      Dcom.sun.management.jmxremote.port=9999 -
      Dcom.sun.management.jmxremote.authenticate=false -
      Dcom.sun.management.jmxremote.ssl=false"
  ports:
    - "9092:9092"
    - "9999:9999"
Really stuck at this one. Appreciate any kind of help.
Thanks.
Thank you @user7222071, I managed to get your code to work after I moved the "-" to the next line in KAFKA_JMX_OPTS:
kafka-1:
  image: confluentinc/cp-kafka:5.1.2
  volumes:
    - kafka-1:/var/lib/kafka/data
  ports:
    - 19092:19092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:19092
    KAFKA_JMX_HOSTNAME: "kafka-1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1
      -Dcom.sun.management.jmxremote.local.only=false
      -Dcom.sun.management.jmxremote.rmi.port=9999
      -Dcom.sun.management.jmxremote.port=9999
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false"
  extra_hosts:
    - "moby:127.0.0.1"

kafka-manager:
  image: kafkamanager/kafka-manager:latest
  ports:
    - 9000:9000
  environment:
    ZK_HOSTS: zookeeper-1:2218
FWIW, in my scenario I wanted to monitor Kafka Connect and my connectors specifically -
connect:
  image: confluentinc/cp-kafka-connect-base:latest
  # ...
  ports:
    - "9999:9999"
  # ...
  environment:
    KAFKA_JMX_HOSTNAME: "127.0.0.1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9999"
Then you can access it using a JMX client on the Docker host:
java -jar jmxterm-1.0.2-uber.jar --url 127.0.0.1:9999
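Once connected, a couple of illustrative jmxterm commands (assuming the worker exposes the usual kafka.connect JMX domain):

$> domains
$> domain kafka.connect
$> beans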