I have a Swarm network configured. When I try to run Kafka MirrorMaker from a service inside a Docker stack, I get this error in the service logs:
mirroring_mirror_maker | [2019-01-10 12:08:49,322] ERROR [mirrormaker-thread-1] Mirror maker thread failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
mirroring_mirror_maker | java.lang.IllegalStateException: No entry found for connection 2147482646
My consumer.properties file:
bootstrap.servers=kafka:9094
group.id=mm-consumer-group
partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
Take a look at the producer.properties file below:
bootstrap.servers=10.1.1.10:9094
compression.type=none
The command run from the Docker stack:
kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --num.streams 2 --producer.config /opt/kafka/config/producer.properties --whitelist=".*"
If I run the same command from my host machine, it works.
More logging output:
[2019-01-10 16:14:33,470] ERROR [mirrormaker-thread-0] Mirror maker thread failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
java.lang.IllegalStateException: No entry found for connection 2147482646
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:885)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:276)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:548)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:655)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:635)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:575)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:389)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
at kafka.tools.MirrorMaker$ConsumerWrapper.receive(MirrorMaker.scala:481)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:370)
[2019-01-10 16:14:33,487] ERROR [mirrormaker-thread-0] Mirror maker thread exited abnormally, stopping the whole mirror maker. (kafka.tools.MirrorMaker$MirrorMakerThread)
After some research, the solution turned out to be quite simple: I had to add host mappings with extra_hosts in my docker-compose.yml file, since the remote brokers advertise hostnames (outside, manager) that were not resolvable from inside the overlay network.
version: '3.6'

x-proxy: &proxy
  http_proxy: ${http_proxy}
  https_proxy: ${https_proxy}
  no_proxy: ${no_proxy}

services:
  mirror_maker:
    image: wurstmeister/kafka:2.12-2.1.0
    configs:
      - source: mm-consumer-config
        target: /opt/kafka/config/consumer.properties
      - source: mm-producer-config
        target: /opt/kafka/config/producer.properties
    environment:
      <<: *proxy
    command: kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --num.streams 2 --producer.config /opt/kafka/config/producer.properties --whitelist=.*
    networks:
      - workshop
    extra_hosts:
      - outside:10.1.1.10
      - manager:10.1.1.11
    restart: always
    depends_on:
      - zookeeper
      - kafka
    deploy:
      # replicas: 2
      placement:
        constraints:
          - node.role == manager

configs:
  mm-consumer-config:
    file: ./services/mirror_maker/config/consumer.properties
  mm-producer-config:
    file: ./services/mirror_maker/config/producer.properties

networks:
  workshop:
    name: workshop
    external: true
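To verify the fix, one quick check (assuming the stack is deployed under the name mirroring, matching the log prefix above) is to confirm that the mappings landed in the container's /etc/hosts:

docker stack deploy -c docker-compose.yml mirroring
docker exec $(docker ps -q -f name=mirroring_mirror_maker) cat /etc/hosts
# The output should include:
# 10.1.1.10    outside
# 10.1.1.11    manager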
The ksql-test-runner is throwing a JMX connector server communication error while running the tool.
Below is the error stack trace that I am receiving.
sh-4.4$ ksql-test-runner -s /testing/ksql-statements-enhanced.ksql -i /testing/input.json -o /testing/output.json
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Error: JMX connector server communication error: service:jmx:rmi://primary-ksqldb-server:1089
jdk.internal.agent.AgentConfigurationError: java.rmi.server.ExportException: Port already in use: 1089; nested exception is:
java.net.BindException: Address already in use (Bind failed)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:820)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.startRemoteConnectorServer(ConnectorBootstrap.java:479)
at jdk.management.agent/jdk.internal.agent.Agent.startAgent(Agent.java:447)
at jdk.management.agent/jdk.internal.agent.Agent.startAgent(Agent.java:599)
Caused by: java.rmi.server.ExportException: Port already in use: 1089; nested exception is:
java.net.BindException: Address already in use (Bind failed)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:243)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:412)
at java.rmi/sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
at java.rmi/sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:234)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap$PermanentExporter.exportObject(ConnectorBootstrap.java:203)
at java.management.rmi/javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:153)
at java.management.rmi/javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:138)
at java.management.rmi/javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:473)
at jdk.management.agent/sun.management.jmxremote.ConnectorBootstrap.exportMBeanServer(ConnectorBootstrap.java:816)
... 3 more
Caused by: java.net.BindException: Address already in use (Bind failed)
at java.base/java.net.PlainSocketImpl.socketBind(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:436)
at java.base/java.net.ServerSocket.bind(ServerSocket.java:395)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:257)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:149)
at java.rmi/sun.rmi.transport.tcp.TCPDirectSocketFactory.createServerSocket(TCPDirectSocketFactory.java:45)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:670)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:324)
... 12 more
Below is the configuration of the ksqlDB server in the docker-compose.yml file:
primary-ksqldb-server:
  image: ${KSQL_IMAGE_BASE}confluentinc/ksqldb-server:${KSQL_VERSION}
  hostname: primary-ksqldb-server
  container_name: primary-ksqldb-server
  depends_on:
    - kafka
    - schema-registry
  ports:
    - "8088:8088"
    - "1099:1099"
    - "1089:1089"
  logging:
    driver: local
    options:
      max-size: "10m"
      max-file: "3"
  environment:
    KSQL_CONFIG_DIR: "/etc/ksql"
    KSQL_LISTENERS: http://0.0.0.0:8088
    KSQL_BOOTSTRAP_SERVERS: kafka:9092
    KSQL_KSQL_ADVERTISED_LISTENER: http://localhost:8088
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081, http://secondary-schema-registry:8082
    KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
    KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
    KSQL_KSQL_EXTENSION_DIR: "/usr/ksqldb/ext/"
    KSQL_KSQL_SERVICE_ID: "nrt_"
    KSQL_KSQL_STREAMS_NUM_STANDBY_REPLICAS: 1
    KSQL_KSQL_QUERY_PULL_ENABLE_STANDBY_READS: "true"
    KSQL_KSQL_HEARTBEAT_ENABLE: "true"
    KSQL_KSQL_LAG_REPORTING_ENABLE: "true"
    KSQL_KSQL_QUERY_PULL_MAX_ALLOWED_OFFSET_LAG: 100
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER: "org.apache.kafka.log4jappender.KafkaLog4jAppender"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_LAYOUT: "io.confluent.common.logging.log4j.StructuredJsonLayout"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_BROKERLIST: localhost:9092
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_TOPIC: KSQL_LOG
    KSQL_LOG4J_LOGGER_IO_CONFLUENT_KSQL: INFO,kafka_appender
    KSQL_KSQL_QUERY_PULL_METRICS_ENABLED: "true"
    KSQL_JMX_OPTS: >
      -Djava.rmi.server.hostname=localhost
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port=1099
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false
      -Dcom.sun.management.jmxremote.rmi.port=1089
Enabling JMX metrics for the ksqlDB server is what causes this issue; without the JMX configuration, the ksql-test-runner tool works fine.
I would like to enable JMX metrics and still be able to run the ksql-test-runner tool without any issues.
The issue occurred because I was running the command inside the ksqlDB server's Docker container, where ksql-test-runner inherits KSQL_JMX_OPTS and tries to bind port 1089, which the server JVM already holds. The problem was resolved by running the command in the ksqldb-cli container instead, as below:
docker exec ksqldb-cli ksql-test-runner -s script.sql -i input.json -o output.json
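If you do need to run the tool inside the server container, clearing the inherited JMX options for that one command should also avoid the port clash; a sketch, assuming the ksql launcher scripts honour an empty KSQL_JMX_OPTS (as the Confluent run-class scripts generally do):

# Run the test runner with JMX disabled for this process only, so it does
# not try to bind ports 1089/1099 that the server JVM already holds
docker exec -e KSQL_JMX_OPTS="" primary-ksqldb-server \
  ksql-test-runner -s /testing/ksql-statements-enhanced.ksql \
  -i /testing/input.json -o /testing/output.json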
I installed standalone ksqlDB on my local computer.
While running the ksql server, I am getting the following error:
Remote server at http://ksqldb-server:8088 does not appear to be a valid KSQL
server. Please ensure that the URL provided is for an active KSQL server.
The server responded with the following error:
Error issuing GET to KSQL server. path:/info
Caused by: java.net.UnknownHostException: failed to resolve 'ksqldb-server'
after 2 queries
Caused by: failed to resolve 'ksqldb-server' after 2 queries
PS: my docker-compose.yml looks as follows:
---
version: '2'
services:
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.23.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: localhost:9092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.23.1
    container_name: ksqldb-cli
    depends_on:
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
While running docker-compose up, it gives the following error:
[2022-01-11 07:21:54,280] INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:235)
ksqldb-server | org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: fetchMetadata
ksqldb-server | [2022-01-11 07:21:54,282] ERROR Failed to start KSQL (io.confluent.ksql.rest.server.KsqlServerMain:68)
ksqldb-server | java.lang.RuntimeException: Failed to get Kafka cluster information
ksqldb-server | at io.confluent.ksql.services.KafkaClusterUtil.getKafkaClusterId(KafkaClusterUtil.java:107)
ksqldb-server | at io.confluent.ksql.rest.server.KsqlRestApplication.buildApplication(KsqlRestApplication.java:669)
ksqldb-server | at io.confluent.ksql.rest.server.KsqlServerMain.createExecutable(KsqlServerMain.java:154)
ksqldb-server | at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:61)
ksqldb-server | Caused by: java.util.concurrent.TimeoutException
ksqldb-server | at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
ksqldb-server | at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
ksqldb-server | at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
ksqldb-server | at io.confluent.ksql.services.KafkaClusterUtil.getKafkaClusterId(KafkaClusterUtil.java:105)
ksqldb-server | ... 3 more
ksqldb-server exited with code 255
^CGracefully stopping... (press Ctrl+C again to force)
Stopping ksqldb-cli ... done
In the server.properties file, I added

listeners=PLAINTEXT://localhost:9092

and restarted Kafka, but it did not solve the problem.
The Kafka consumer and producer are working fine; the problem is only with ksqlDB.
Any help is appreciated.
There are two things wrong in this config:
You need to specify, in the ksqldb-server YAML, the name of the service you used to expose the Kafka cluster; if you used Strimzi, it usually ends with "kafka-bootstrap".
The second error is that you also need to expose the ksqldb-server itself so that the ksql-cli can connect to it; KSQL_BOOTSTRAP_SERVERS, in turn, must point at that Kafka service rather than localhost.
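In the docker-compose file from the question, that means pointing KSQL_BOOTSTRAP_SERVERS at the broker's service name instead of localhost; a minimal sketch, assuming the broker service is reachable as kafka:9092 on the compose network:

ksqldb-server:
  image: confluentinc/ksqldb-server:0.23.1
  environment:
    KSQL_LISTENERS: http://0.0.0.0:8088
    # "localhost" inside a container refers to the container itself, so the
    # broker must be addressed by its service name
    KSQL_BOOTSTRAP_SERVERS: kafka:9092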
If you want, there is a template ksqlDB Service from the Helm chart you can use:
apiVersion: v1
kind: Service
metadata:
  name: my-ksql-db-service
  namespace: default
spec:
  ports:
    - name: http
      port: 8088
      protocol: TCP
      targetPort: ksqldb-server
  selector:
    app.kubernetes.io/name: ksqldb-server
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
You need to do a similar thing for Kafka, if you haven't already, exposing port 9092.
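A minimal sketch of such a Service for the broker (the name and selector here are illustrative; note that Strimzi normally creates its own <cluster>-kafka-bootstrap Service for you):

apiVersion: v1
kind: Service
metadata:
  name: my-kafka-bootstrap
  namespace: default
spec:
  ports:
    - name: tcp-clients
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    app.kubernetes.io/name: kafka
  type: ClusterIP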
However, this process can be error-prone since you have to write a lot of YAML by hand. There is no need to reinvent the wheel: use Helm charts and operators!
Install Kafka using Strimzi
Install ksqlDB using its Helm chart (see the example commands after this list)
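For example (illustrative commands; exact operator and chart versions may differ):

# Install the Strimzi operator, then deploy a Kafka cluster in the kafka namespace
kubectl create namespace kafka
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
# Add Confluent's chart repository and install the ksqlDB server chart
helm repo add confluentinc https://confluent.github.io/cp-helm-charts
helm install my-ksql-db confluentinc/cp-ksql-server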
You can use this job template to execute the query:
apiVersion: batch/v1
kind: Job
metadata:
  namespace: default
  name: first-query
spec:
  template:
    spec:
      containers:
        - name: ksqldb-query
          imagePullPolicy: IfNotPresent
          image: confluentinc/cp-ksqldb-cli
          command: ['/bin/bash']
          args:
            - -c
            - >-
              ksql --execute "select * from your_source emit changes" --config-file ksql-server.properties <KSQL_BOOTSTRAP_SERVERS>
          envFrom:
            - configMapRef:
                name: my-ksql-db-4e876644-ksqldb
      restartPolicy: 'Never'
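To run the Job and follow its output (the file name here is hypothetical):

kubectl apply -f first-query.yaml
kubectl logs -f job/first-query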
You could template the Job YAML more cleanly, but I did not want to complicate things.
Let me know if something is not clear.
I am trying to set up a local kafka-connect stack with docker-compose, and I have a problem with my Scala producer that is supposed to send Avro messages to a Kafka topic using the Schema Registry.
In my producer (Scala) code I do the following:
import java.util.Properties

val kafkaBootstrapServer = "kafka:9092"
val schemaRegistryUrl = "http://schema-registry:8081"
val topicName = "test"
val props = new Properties()
props.put("bootstrap.servers", kafkaBootstrapServer)
props.put("schema.registry.url", schemaRegistryUrl)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("acks", "1")
and my docker-compose script reads:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:5.5.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: "test:1:1"
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
  producer:
    image: producer-app:1.0
    depends_on:
      - schema-registry
      - kafka
EDIT: now the schema registry seems to be up:
schema-registry | [2021-01-17 22:27:27,704] INFO HV000001: Hibernate Validator 6.0.17.Final (org.hibernate.validator.internal.util.Version)
kafka | [2021-01-17 22:27:27,918] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,919] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,923] DEBUG [Controller id=1001] Topics not in preferred replica for broker 1001 Map() (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,924] TRACE [Controller id=1001] Leader imbalance ratio for broker 1001 is 0.0 (kafka.controller.KafkaController)
schema-registry | [2021-01-17 22:27:28,010] INFO JVM Runtime does not support Modules (org.eclipse.jetty.util.TypeUtil)
schema-registry | [2021-01-17 22:27:28,011] INFO Started o.e.j.s.ServletContextHandler#22d6f11{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema-registry | [2021-01-17 22:27:28,035] INFO Started o.e.j.s.ServletContextHandler#15eebbff{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema-registry | [2021-01-17 22:27:28,058] INFO Started NetworkTrafficServerConnector#2698dc7{HTTP/1.1,[http/1.1]}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector)
schema-registry | [2021-01-17 22:27:28,059] INFO Started #4137ms (org.eclipse.jetty.server.Server)
schema-registry | [2021-01-17 22:27:28,059] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
but prior to this, during the execution of the script I get:
schema-registry | ===> Launching ...
schema-registry | ===> Launching schema-registry ...
producer_1 | [main] ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed to send HTTP request to endpoint: http://schema-registry:8081/subjects/test-value/versions
producer_1 | java.net.ConnectException: Connection refused (Connection refused)
Could this be due to some dependency issue? It looks like the producer is run before the schema-registry has completely started! I did put depends_on: - schema-registry for the producer ...
It looks like the cause here was that your app was trying to call the Schema Registry before it had finished starting up; note that depends_on only waits for the container to start, not for the service inside it to be ready. Perhaps your app should include some error handling for this condition and maybe retry after a backoff period on the first x failures?
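A minimal sketch of that retry idea in Scala (withRetry and the backoff values here are illustrative, not part of the original code):

import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

// Retry `op` up to `retries` more times with a fixed backoff, to ride out
// transient startup failures such as "Connection refused" from the registry.
@tailrec
def withRetry[T](retries: Int, backoffMs: Long)(op: => T): T =
  Try(op) match {
    case Success(result) => result
    case Failure(e) if retries > 0 =>
      println(s"Attempt failed (${e.getMessage}), retrying in $backoffMs ms")
      Thread.sleep(backoffMs)
      withRetry(retries - 1, backoffMs)(op)
    case Failure(e) => throw e
  }

// e.g. withRetry(5, 3000) { producer.send(record).get() }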
For anyone else going through this: I was getting a connection refused error with my schema registry for the longest time. I have my schema-registry running in a Docker container, with a separate container for a REST API that is trying to use the schema-registry container. What fixed it for me was changing my connection URL in the REST container from http://localhost:8081 to http://schema-registry-server:8081.
Connecting to the Schema Registry in the REST container:
schema_registry = SchemaRegistryClient(
    url={
        "url": "http://schema-registry-server:8081"
    },
)
Here's the schema registry part of my docker-compose file:
# docker-compose.yml
schema-registry-server:
  image: confluentinc/cp-schema-registry
  hostname: schema-registry-server
  container_name: schema-registry-server
  depends_on:
    - kafka
    - zookeeper
  ports:
    - 8081:8081
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:32181
    - SCHEMA_REGISTRY_HOST_NAME=schema-registry-server
    - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    - SCHEMA_REGISTRY_DEBUG=true
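A quick sanity check from the other container (the container name here is hypothetical) is to hit the registry's /subjects endpoint by service name:

docker exec <rest-api-container> curl -s http://schema-registry-server:8081/subjects
# Any JSON array in response (even an empty []) means the registry is reachable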
I am attempting to create a KafkaProducer using the akka-stream-kafka library.
INFRASTRUCTURE
This uses docker-compose; only the kafka and zookeeper services are shown.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-enterprise-kafka:5.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I can report that I have been able to connect to the cluster using kafka-console-consumer and kafka-console-producer on the CLI, and via the REST API, with no issues.
CONFIG
This is my Typesafe config. I attempt to use a plaintext connection to the broker; I am trying to connect to the Kafka broker without any authentication.
bootstrap.servers="localhost:29092"
acks = "all"
retries = 2
batch.size = 16384
linger.ms = 1
buffer.memory = 33554432
max.block.ms = 5000
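For reference, Alpakka's ProducerSettings reads the actual Kafka client properties from a kafka-clients subsection of the config block passed to it; a minimal sketch of that nesting (assuming the properties above are meant as Kafka client properties), with the security protocol pinned explicitly for an unauthenticated local broker:

akka.kafka.producer {
  kafka-clients {
    bootstrap.servers = "localhost:29092"
    # Explicitly request an unauthenticated plaintext connection
    security.protocol = PLAINTEXT
    acks = "all"
  }
}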
CODE
val config = ConfigFactory.load().getConfig("akka.kafka.producer")
val stringSerializer = new StringSerializer()
val producerSettings = ProducerSettings[String, String](config, stringSerializer, stringSerializer)
// some code has been omitted here.
Producer.plainSink(producerSettings)
EXCEPTION
This is the stack trace that I receive; it tells me that there is no JAAS config:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:84)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:119)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
How can I connect to the Kafka cluster without any authentication, as required to run locally?
I have tried adding KAFKA_OPTS as an environment variable to the kafka service in docker-compose, as well as adding it to application.conf:
sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username='confluent' password='confluent-secret';"
In the former case, some associated services such as the kafka-rest API failed. In the latter case, I get the following exception:
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:456)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
at akka.stream.stage.GraphStage.createLogicAndMaterializedValue(GraphStage.scala:93)
at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:125)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
at akka.kafka.ProducerSettings.createKafkaProducer(ProducerSettings.scala:226)
at akka.kafka.scaladsl.Producer$.$anonfun$flexiFlow$1(Producer.scala:155)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:41)
at akka.kafka.internal.ProducerStage$DefaultProducerStage.createLogic(ProducerStage.scala:33)
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297)
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:87)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:52)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:89)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:114)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:140)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:65)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:413)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:318)
I am not able to get my kafka-manager to show message data/metrics.
I am running kafka-manager locally with the command below:
bin/kafka-manager -Dkafka-manager.zkhosts="localhost:2181"
I am also checking the "Enable JMX polling on start" option.
I am publishing messages to the Kafka broker on a topic named test. The kafka-manager view shows the topic "test" but does not show the message count/metrics etc. The kafka-manager application throws an exception which says:
[error] k.m.a.c.OffsetCachePassive - [topic=logstash_topic] An error has occurred while getting topic offsets from broker List((BrokerIdentity(1,kafka-1,9092,9999,false),0))
java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:415) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:412) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_161]
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.c.BrokerViewCacheActor - Updating broker view...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[error] k.m.j.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://kafka-1:9999/jmxrmi
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: kafka-1; nested exception is:
java.net.ConnectException: Operation timed out (Connection timed out)]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369) ~[na:1.8.0_161]
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[na:1.8.0_161]
at kafka.manager.jmx.KafkaJMX$.doWithConnection(KafkaJMX.scala:57) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
My zookeeper and kafka instances are running via docker-compose up -d.
Below is my docker-compose.yml file.
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  hostname: zookeeper
  environment:
    ZOOKEEPER_SERVER_ID: 1
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  ports:
    - "2181:2181"
kafka-1:
  image: confluentinc/cp-kafka:latest
  hostname: kafka-1
  depends_on:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
    KAFKA_JMX_HOSTNAME: kafka-1
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1 -
      Dcom.sun.management.jmxremote.local.only=false -
      Dcom.sun.management.jmxremote.rmi.port=9999 -
      Dcom.sun.management.jmxremote.port=9999 -
      Dcom.sun.management.jmxremote.authenticate=false -
      Dcom.sun.management.jmxremote.ssl=false"
  ports:
    - "9092:9092"
    - "9999:9999"
Really stuck at this one. Appreciate any kind of help.
Thanks.
Thank you user7222071, I managed to get your code to work after I moved the "-" to the next line for KAFKA_JMX_OPTS:
kafka-1:
  image: confluentinc/cp-kafka:5.1.2
  volumes:
    - kafka-1:/var/lib/kafka/data
  ports:
    - 19092:19092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:19092
    KAFKA_JMX_HOSTNAME: "kafka-1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1
      -Dcom.sun.management.jmxremote.local.only=false
      -Dcom.sun.management.jmxremote.rmi.port=9999
      -Dcom.sun.management.jmxremote.port=9999
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false"
  extra_hosts:
    - "moby:127.0.0.1"
kafka-manager:
  image: kafkamanager/kafka-manager:latest
  ports:
    - 9000:9000
  environment:
    ZK_HOSTS: zookeeper-1:22181  # must match the zookeeper-1 client port used above
FWIW, in my scenario I wanted to monitor Kafka Connect and my connectors specifically:
connect:
  image: confluentinc/cp-kafka-connect-base:latest
  # ...
  ports:
    - "9999:9999"
  # ...
  environment:
    KAFKA_JMX_HOSTNAME: "127.0.0.1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9999"
Then you can access it using a JMX client on the Docker host:
java -jar jmxterm-1.0.2-uber.jar --url 127.0.0.1:9999
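For example, inside the jmxterm session (the MBean and attribute here are Kafka Connect's documented worker metrics; adjust for your own connectors):

$> domains
$> beans -d kafka.connect
$> get -b kafka.connect:type=connect-worker-metrics connector-count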