ksql-test-runner throwing JMX connector server communication error - apache-kafka

The ksql-test-runner tool is throwing a JMX connector server communication error when I run it.
Below is the error stack trace that I am receiving.
sh-4.4$ ksql-test-runner -s /testing/ksql-statements-enhanced.ksql -i /testing/input.json -o /testing/output.json
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Error: JMX connector server communication error: service:jmx:rmi://primary-ksqldb-server:1089
jdk.internal.agent.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 1089; nested exception is:
java.net.BindException: Address already in use (Bind failed)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:820)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:479)
at jdk.management.agent/jdk.internal.agent.Agent.startAgent(Agent.java:447)
at jdk.management.agent/jdk.internal.agent.Agent.startAgent(Agent.java:599)
Caused by: java.rmi.server.ExportException: Port already in use: 1089; nested exception is:
java.net.BindException: Address already in use (Bind failed)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:243)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:412)
at java.rmi/sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
at java.rmi/sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:234)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap$PermanentExporter.exportObject(ConnectorBootstrap.java:203)
at java.management.rmi/javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:153)
at java.management.rmi/javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:138)
at java.management.rmi/javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:473)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
... 3 more
Caused by: java.net.BindException: Address already in use (Bind failed)
at java.base/java.net.PlainSocketImpl.socketBind(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:436)
at java.base/java.net.ServerSocket.bind(ServerSocket.java:395)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:257)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:149)
at java.rmi/sun.rmi.transport.tcp.TCPDirectSocketFactory.createServerSocket(TCPDirectSocketFactory.java:45)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:670)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:324)
... 12 more
Below is the configuration of the ksqlDB server in the docker-compose.yml file:
primary-ksqldb-server:
  image: ${KSQL_IMAGE_BASE}confluentinc/ksqldb-server:${KSQL_VERSION}
  hostname: primary-ksqldb-server
  container_name: primary-ksqldb-server
  depends_on:
    - kafka
    - schema-registry
  ports:
    - "8088:8088"
    - "1099:1099"
    - "1089:1089"
  logging:
    driver: local
    options:
      max-size: "10m"
      max-file: "3"
  environment:
    KSQL_CONFIG_DIR: "/etc/ksql"
    KSQL_LISTENERS: http://0.0.0.0:8088
    KSQL_BOOTSTRAP_SERVERS: kafka:9092
    KSQL_KSQL_ADVERTISED_LISTENER: http://localhost:8088
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081, http://secondary-schema-registry:8082
    KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
    KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
    KSQL_KSQL_EXTENSION_DIR: "/usr/ksqldb/ext/"
    KSQL_KSQL_SERVICE_ID: "nrt_"
    KSQL_KSQL_STREAMS_NUM_STANDBY_REPLICAS: 1
    KSQL_KSQL_QUERY_PULL_ENABLE_STANDBY_READS: "true"
    KSQL_KSQL_HEARTBEAT_ENABLE: "true"
    KSQL_KSQL_LAG_REPORTING_ENABLE: "true"
    KSQL_KSQL_QUERY_PULL_MAX_ALLOWED_OFFSET_LAG: 100
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER: "org.apache.kafka.log4jappender.KafkaLog4jAppender"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_LAYOUT: "io.confluent.common.logging.log4j.StructuredJsonLayout"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_BROKERLIST: localhost:9092
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_TOPIC: KSQL_LOG
    KSQL_LOG4J_LOGGER_IO_CONFLUENT_KSQL: INFO,kafka_appender
    KSQL_KSQL_QUERY_PULL_METRICS_ENABLED: "true"
    KSQL_JMX_OPTS: >
      -Djava.rmi.server.hostname=localhost
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port=1099
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false
      -Dcom.sun.management.jmxremote.rmi.port=1089
Enabling JMX metrics for the ksqlDB server is causing this issue; without the JMX configuration, the ksql-test-runner tool works fine.
I would like to enable JMX metrics and still be able to run the ksql-test-runner tool without any issues.

The issue occurred because I was running the command inside the Docker container of the ksqlDB server: the server's JVM had already bound the JMX ports from KSQL_JMX_OPTS, so the test runner's JVM, which picks up the same options, failed with Address already in use. The problem was resolved by running the command in the ksql-cli container instead, as below.
docker exec ksqldb-cli ksql-test-runner -s script.sql -i input.json -o output.json
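If the tool has to run inside the server container itself, a possible alternative (an assumption on my part: it relies on ksql-test-runner launching its JVM through the same scripts that honor KSQL_JMX_OPTS) is to blank that variable for the one invocation, so the second JVM never tries to bind ports 1099/1089:
# override KSQL_JMX_OPTS with an empty value for just this process
docker exec -e KSQL_JMX_OPTS= primary-ksqldb-server \
  ksql-test-runner -s /testing/ksql-statements-enhanced.ksql -i /testing/input.json -o /testing/output.json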

Related

How to load rejson.so with docker-compose

I want to store JSON in Redis, so I set it up to use the RedisJSON module with docker-compose, but it keeps failing to run. The code is below.
I also tried using a redis.conf filled with the same parameters as the command, but a segmentation fault occurred.
What is wrong with my setup?
docker-compose.yml
version: '3.8'
services:
  redis:
    container_name: redis
    hostname: redis
    image: redis:7.0.0-alpine
    command: redis-server --loadmodule /etc/redis/modules/rejson.so
    volumes:
      - /etc/redis/redis.conf:/etc/redis/redis.conf
      - /etc/redis/modules/rejson.so:/etc/redis/modules/rejson.so
Environment
Node.js Version: 16.14.1
Node Redis Version: 4.0.6
Platform: Mac OS 12.3.1
Edited
The segmentation fault was caused by a non-existent includes option.
The messages below were repeated.
What does Exec format error mean?
# oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
# Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
# Configuration loaded
* monotonic clock: POSIX clock_gettime
# Warning: Could not create server TCP listening socket ::1:6380: bind: Address not available
* Running mode=standalone, port=6380.
# Server initialized
# Module /etc/redis/modules/rejson.so failed to load: Error loading shared library /etc/redis/modules/rejson.so: Exec format error
# Can't load module from /etc/redis/modules/rejson.so: server aborting
After a lot of trial and error, I found out that, as of this time, the Redis v7 Docker images seem to lack the rejson module. (The Exec format error itself means the .so was built for a different CPU architecture than the one the container runs on, which is common when an x86-64 binary is loaded on an Apple Silicon host.)
I am now using redis/redis-stack-server, which already includes all modules (including rejson). It is based upon Redis v6; see https://hub.docker.com/r/redis/redis-stack or https://hub.docker.com/r/redis/redis-stack-server
My compose.yaml now looks like this:
version: "3"
services:
  redis:
    image: redis/redis-stack
    volumes:
      - redis_data:/data:rw
    ports:
      - 6379:6379
    restart: unless-stopped
volumes:
  redis_data:
In the redis-cli you can then give it a try:
127.0.0.1:6379> JSON.SET employee_profile $ '{ "employee": { "name": "alpha", "age": 40,"married": true } } '
OK
127.0.0.1:6379> JSON.GET employee_profile
"{\"employee\":{\"name\":\"alpha\",\"age\":40,\"married\":true}}"
If you still have Docker volumes from previous installations, you should delete them first. Otherwise your new container might have problems reading an older database.
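Assuming you are in the Compose project directory, the simplest way is to let Compose remove its own named volumes (note: this deletes the data stored in redis_data, so use it deliberately):
# stop the stack, remove containers and the named volumes it declared
docker compose down --volumes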

Getting error messages when running the docker-compose up command on Linux mint

Using the Linux terminal, I'm trying to run the command docker-compose up, but instead I get a lot of errors. The following is what is contained in my docker-compose file:
version: "3.3"
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - ./gitignore/postgresql:/var/lib/postgresql/data
    ports:
      - 5432:5432
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
and when I run the command on the terminal, these are the errors that I encounter:
~/Desktop/dockerPostgreSQL/docker-composer$ docker-compose up
Creating network "dockercomposer_default" with the default driver
Creating dockercomposer_adminer_1 ...
Creating dockercomposer_postgres_1 ...
Creating dockercomposer_adminer_1
Creating dockercomposer_adminer_1 ... error
ERROR: for dockercomposer_adminer_1 Cannot start service adminer: driver failed programming external connectivity on endpoint dockercomposer_adminer_1 (5d1ce2f6a1e6c4600e8482e87be3e60547955cdadbe64a2fb82facca14491291): Bind for 0.0.0.0:8080 failed: port is already allocated
Creating dockercomposer_postgres_1 ... error
ERROR: for dockercomposer_postgres_1 Cannot start service postgres: driver failed programming external connectivity on endpoint dockercomposer_postgres_1 (e339a5e928066244eed6f119cec4bb9863376264b58e38242034f20da72c7422): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
ERROR: for adminer Cannot start service adminer: driver failed programming external connectivity on endpoint dockercomposer_adminer_1 (5d1ce2f6a1e6c4600e8482e87be3e60547955cdadbe64a2fb82facca14491291): Bind for 0.0.0.0:8080 failed: port is already allocated
ERROR: for postgres Cannot start service postgres: driver failed programming external connectivity on endpoint dockercomposer_postgres_1 (e339a5e928066244eed6f119cec4bb9863376264b58e38242034f20da72c7422): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Could anyone please assist me as to what I could be doing wrong and what I can do to fix these errors?
A service is already using port 8080.
You have to choose another one in your docker-compose file (8081, for example):
ports:
  - 8081:8080
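The postgres error is the same problem on port 5432, most likely a PostgreSQL server already running on the host. To see which process is actually holding a port before picking a new one, something like this should work on Linux Mint (the port numbers are just the two from the errors above):
sudo ss -ltnp | grep -E ':8080|:5432'   # list listening sockets with their owning processes
sudo lsof -i :8080 -i :5432             # alternative using lsof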

Mirror maker thread failure

I have a swarm network configured. When I try to run Kafka MirrorMaker from a service inside a Docker stack, I get an error in the service logs:
mirroring_mirror_maker | [2019-01-10 12:08:49,322] ERROR [mirrormaker-thread-1] Mirror maker thread failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
mirroring_mirror_maker | java.lang.IllegalStateException: No entry found for connection 2147482646
My consumer.properties:
bootstrap.servers=kafka:9094
group.id=mm-consumer-group
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
Take a look at the producer.properties below:
bootstrap.servers=10.1.1.10:9094
compression.type=none
The command being run from the Docker stack:
kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --num.streams 2 --producer.config /opt/kafka/config/producer.properties --whitelist=".*"
If I run the same command from my host machine, it works.
More logging output:
[2019-01-10 16:14:33,470] ERROR [mirrormaker-thread-0] Mirror maker thread failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
java.lang.IllegalStateException: No entry found for connection 2147482646
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:885)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:276)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:548)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:655)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:635)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:575)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:389)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:481)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:370)
[2019-01-10 16:14:33,487] ERROR [mirrormaker-thread-0] Mirror maker thread exited abnormally, stopping the whole mirror maker. (kafka.tools.MirrorMaker$MirrorMakerThread)
After some research, the solution was quite simple: I had to add a host mapping with extra_hosts in my docker-compose.yml file.
version: '3.6'
x-proxy: &proxy
  http_proxy: ${http_proxy}
  https_proxy: ${https_proxy}
  no_proxy: ${no_proxy}
services:
  mirror_maker:
    image: wurstmeister/kafka:2.12-2.1.0
    configs:
      - source: mm-consumer-config
        target: /opt/kafka/config/consumer.properties
      - source: mm-producer-config
        target: /opt/kafka/config/producer.properties
    environment:
      <<: *proxy
    command: kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --num.streams 2 --producer.config /opt/kafka/config/producer.properties --whitelist=.*
    networks:
      - workshop
    extra_hosts:
      - outside:10.1.1.10
      - manager:10.1.1.11
    restart: always
    depends_on:
      - zookeeper
      - kafka
    deploy:
      # replicas: 2
      placement:
        constraints:
          - node.role == manager
configs:
  mm-consumer-config:
    file: ./services/mirror_maker/config/consumer.properties
  mm-producer-config:
    file: ./services/mirror_maker/config/producer.properties
networks:
  workshop:
    name: workshop
    external: true
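To confirm the mapping took effect, you can inspect /etc/hosts inside a running task; the container name below is a placeholder, use docker ps to find the real one:
# extra_hosts entries are written into the container's /etc/hosts
docker exec -it <mirror-maker-container> cat /etc/hosts
# expected to contain:
# 10.1.1.10    outside
# 10.1.1.11    manager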

KafkaProducer cannot be created due to missing java.security.auth.login.config

I am attempting to create a KafkaProducer using the akka-stream-kafka library.
INFRASTRUCTURE
Uses docker-compose, showing the kafka and zookeeper instances only.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I can report that I have been able to connect to the cluster using kafka-console-consumer and kafka-console-producer on the CLI, and via the REST API, with no issues.
CONFIG
This is my Typesafe config; I attempt to use a plaintext connection to the broker. I am trying to connect to the Kafka broker without any authentication.
bootstrap.servers="localhost:29092"
acks = "all"
retries = 2
batch.size = 16384
linger.ms = 1
buffer.memory = 33554432
max.block.ms = 5000
CODE
val config = ConfigFactory.load().getConfig("akka.kafka.producer")
val stringSerializer = new StringSerializer()
ProducerSettings[String, String](config, stringSerializer, stringSerializer)
// some code has been omitted here.
Producer.plainSink(producerSettings)
EXCEPTION
This is the stack trace that I receive; it tells me that there is no JAAS config:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
How can I connect to the Kafka Cluster using no auth as required to run locally?
I have tried adding KAFKA_OPTS as an environment variable to the kafka service in docker-compose, as well as adding the following to application.conf:
sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username='confluent' password='confluent-secret';"
In the former case, some associated services such as the kafka-rest API failed. In the latter case, I get the following exception:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:125)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297)
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:52)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:89)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:114)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
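For what it's worth, with no authentication the client should not need sasl.jaas.config at all: kafka-clients only consults the JAAS login configuration when security.protocol is SASL_PLAINTEXT or SASL_SSL. A minimal sketch of the client section, assuming the properties above live under akka.kafka.producer.kafka-clients (where alpakka-kafka's ProducerSettings reads them), would be:
akka.kafka.producer {
  kafka-clients {
    bootstrap.servers = "localhost:29092"
    # PLAINTEXT is the kafka-clients default; stating it explicitly guards
    # against a SASL setting leaking in from another config layer
    security.protocol = PLAINTEXT
  }
}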

kafka-manager not able to read from kafka docker JMX port exposed

I am not able to get my kafka-manager to show message data/metrics.
I am running kafka-manager locally with the command below:
bin/kafka-manager -Dkafka-manager.zkhosts="localhost:2181"
I am also checking the enable JMX polling on start option.
I am publishing messages to the Kafka broker on the topic test. The kafka-manager view is able to show the topic "test" but does not show the message counts, metrics, etc. The kafka-manager application throws an exception which says:
[error] k.m.a.c.OffsetCachePassive - [topic=logstash_topic] An error has occurred while getting topic offsets from broker List((BrokerIdentity(1,kafka-1,9092,9999,false),0))
java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:415) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:412) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_161]
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.c.BrokerViewCacheActor - Updating broker view...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[error] k.m.j.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://kafka-1:9999/jmxrmi
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: kafka-1; nested exception is:
java.net.ConnectException: Operation timed out (Connection timed out)]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369) ~[na:1.8.0_161]
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[na:1.8.0_161]
at kafka.manager.jmx.KafkaJMX$.doWithConnection(KafkaJMX.scala:57) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
My zookeeper and kafka instances are running via docker-compose up -d.
Below is my docker-compose.yml file:
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  hostname: zookeeper
  environment:
    ZOOKEEPER_SERVER_ID: 1
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  ports:
    - "2181:2181"
kafka-1:
  image: confluentinc/cp-kafka:latest
  hostname: kafka-1
  depends_on:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
    KAFKA_JMX_HOSTNAME: kafka-1
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1 -
    Dcom.sun.management.jmxremote.local.only=false -
    Dcom.sun.management.jmxremote.rmi.port=9999 -
    Dcom.sun.management.jmxremote.port=9999 -
    Dcom.sun.management.jmxremote.authenticate=false -
    Dcom.sun.management.jmxremote.ssl=false"
  ports:
    - "9092:9092"
    - "9999:9999"
Really stuck on this one; I'd appreciate any kind of help.
Thanks.
Thank you user7222071, I managed to get your code to work after I moved each "-" to the start of the next line in KAFKA_JMX_OPTS:
kafka-1:
  image: confluentinc/cp-kafka:5.1.2
  volumes:
    - kafka-1:/var/lib/kafka/data
  ports:
    - 19092:19092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:19092
    KAFKA_JMX_HOSTNAME: "kafka-1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1
    -Dcom.sun.management.jmxremote.local.only=false
    -Dcom.sun.management.jmxremote.rmi.port=9999
    -Dcom.sun.management.jmxremote.port=9999
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false"
  extra_hosts:
    - "moby:127.0.0.1"
kafka-manager:
  image: kafkamanager/kafka-manager:latest
  ports:
    - 9000:9000
  environment:
    ZK_HOSTS: zookeeper-1:2218
FWIW, in my scenario I wanted to monitor Kafka Connect and my connectors specifically -
connect:
  image: confluentinc/cp-kafka-connect-base:latest
  .
  .
  .
  ports:
    - "9999:9999"
  .
  .
  .
  environment:
    KAFKA_JMX_HOSTNAME: "127.0.0.1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9999"
Then you can access it using a JMX client on the Docker host:
java -jar jmxterm-1.0.2-uber.jar --url 127.0.0.1:9999
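Once connected, a quick sanity check using jmxterm's built-in commands (the domain name is illustrative; the exact list depends on what the JVM exposes):
$> domains                 # list the JMX domains the process exposes
$> beans -d kafka.connect  # list the MBeans in one domain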