Can't produce messages to Apache Kafka from outside Hortonworks HDP sandbox 2.5

I have a problem producing messages to an Apache Kafka topic from outside the Hortonworks HDP Sandbox. Right now I am using version 2.5 deployed to Azure, but I saw similar behaviour with HDP 2.6 on a local VirtualBox VM. I was able to open port 6667 and confirm that the TCP connection gets through to the VM, and I can also retrieve the list of topics.
Using listeners=PLAINTEXT://0.0.0.0:6667
C:\GIT\kafka\bin\windows>kafka-console-producer.bat --broker-list MY_PUBLIC_IP:6667 --topic test1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/core/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/tools/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/api/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/file/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/json/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>sadasdas
>[2017-11-04 23:06:14,672] WARN [Producer clientId=console-producer] Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-11-04 23:06:15,174] ERROR Error when sending message to topic test1 with key: null, value: 8 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test1-0: 1508 ms has passed since batch creation plus linger time
[2017-11-04 23:06:15,724] WARN [Producer clientId=console-producer] Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-11-04 23:06:16,875] WARN [Producer clientId=console-producer] Connection to node 1001 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
With the following configuration entries added:
advertised.port=6667
advertised.listeners=PLAINTEXT://MY_PUBLIC_IP:6667
advertised.host.name=MY_PUBLIC_IP
the result is:
C:\GIT\kafka\bin\windows>kafka-console-producer.bat --broker-list MY_PUBLIC_IP:6667 --topic test1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/core/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/tools/build/dependant-libs-2.11.11/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/api/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/file/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/GIT/kafka/connect/json/build/dependant-libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>asdasd
[2017-11-04 22:37:11,713] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 2 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
Could you give me any advice as to what may be wrong? I'd like to stick with the sandbox for now.
UPDATE
I finally got it working. This was the solution:
advertised.listeners=PLAINTEXT://sandbox.hortonworks.com:6667
It seems my client can only reach the broker when the advertised listener is a hostname rather than an IP address...
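In case it helps anyone else: a hostname-based advertised listener only works if the client can resolve that hostname. A minimal sketch of the client-side mapping, assuming MY_PUBLIC_IP is the sandbox's public address (on Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts):
# Hypothetical: map the advertised hostname to the sandbox's public IP
# so the client machine can resolve sandbox.hortonworks.com (Linux/macOS).
echo "MY_PUBLIC_IP  sandbox.hortonworks.com" | sudo tee -a /etc/hosts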

Related

ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)

~/kafka$ bin/kafka-server-start.sh config/server.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/boitran/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/boitran/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/boitran/kafka/libs/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
[2022-12-29 13:46:12,977] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2022-12-29 13:46:13,359] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Ljava/lang/Object;
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:43) ~[kafka_2.13-3.3.1.jar:?]
at kafka.Kafka$.main(Kafka.scala:86) [kafka_2.13-3.3.1.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.13-3.3.1.jar:?]
Exception in thread "main" java.lang.NoSuchMethodError: scala.Option.orNull(Lscala/$less$colon$less;)Ljava/lang/Object;
at kafka.utils.Exit$.exit(Exit.scala:28)
at kafka.Kafka$.main(Kafka.scala:122)
at kafka.Kafka.main(Kafka.scala)
I installed and started Kafka exactly as the Apache site describes, but I get this error.
This error is common when the Scala version on your machine doesn't match the one your Kafka build was compiled against; kafka_2.13-3.3.1 in the stack trace expects Scala 2.13. Either download the Scala 2.12 build of Kafka 3.3.1 (kafka_2.12-3.3.1), for example, or upgrade Scala.
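For example, a sketch of switching to the Scala 2.12 build (the URL below is the standard Apache archive path):
# Fetch the Kafka 3.3.1 build compiled against Scala 2.12
# instead of the kafka_2.13 build from the stack trace.
wget https://archive.apache.org/dist/kafka/3.3.1/kafka_2.12-3.3.1.tgz
tar -xzf kafka_2.12-3.3.1.tgz
cd kafka_2.12-3.3.1
bin/kafka-server-start.sh config/server.properties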

Kafka set up 2 authentications SASL_PLAINTEXT and SASL_SSL

I intend to set up two authentication modes, SASL_PLAINTEXT and SASL_SSL. SASL_PLAINTEXT will be used between the brokers and ZooKeeper, and SASL_SSL will be used with external producers and consumers.
Each one works on its own, but I can't get both working at the same time.
Right now the broker can authenticate with ZooKeeper, but I can't get a producer to authenticate to the broker via SASL_SSL on port 9093.
server.properties
listeners=SASL_PLAINTEXT://172.22.10.21:9092,SASL_SSL://172.22.10.21:9093
advertised.listeners=SASL_PLAINTEXT://172.22.10.21:9092,SASL_SSL://172.22.10.21:9093
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
ssl.truststore.location=/home/aaapi/ssl/kafka.server.truststore.jks
ssl.truststore.password=serversecret
ssl.keystore.location=/home/aaapi/ssl/kafka.server.keystore.jks
ssl.keystore.password=serversecret
ssl.key.password=serversecret
ssl.enabled.protocols=TLSv1.2
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512,PLAIN
sasl.mechanism=SCRAM-SHA-512
server_jaas.conf
sasl_ssl.KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="adminssl"
    password="adminssl-secret";
};

sasl_plaintext.KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_kafkabroker1="kafkabroker1-secret";
};

Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret";
};
zookeeper_jaas.conf
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin-secret";
};
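(For completeness: JAAS files like the two above are normally handed to the JVM as a system property. A sketch, with the file paths assumed rather than taken from the question:)
# Assumed paths; both start scripts go through kafka-run-class.sh,
# which picks up KAFKA_OPTS.
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/aaapi/server_jaas.conf"
bin/kafka-server-start.sh config/server.properties

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/aaapi/zookeeper_jaas.conf"
bin/zookeeper-server-start.sh config/zookeeper.properties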
client_ssl.properties
security.protocol=SASL_SSL
#bootstrap.servers=172.22.10.21:9093
sasl.mechanism=SCRAM-SHA-512
ssl.enabled.protocols=TLSv1.2
ssl.endpoint.identification.algorithm=
ssl.truststore.location=/home/aaapi/ssl/kafka.client.truststore.jks
ssl.truststore.password=clientsecret
ssl.keystore.location=/home/aaapi/ssl/kafka.server.keystore.jks
ssl.keystore.password=serversecret
ssl.key.password=serversecret
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="adminssl" \
password="adminssl-secret";
Error
/opt/kafka/bin/kafka-console-producer.sh --broker-list 172.22.10.21:9093 --topic test1 --producer.config /home/aaapi/client_config/consumer/client_ssl.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/tools/build/dependant-libs-2.13.6/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/trogdor/build/dependant-libs-2.13.6/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/connect/runtime/build/dependant-libs/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/kafka-3.2.1-src/connect/mirror/build/dependant-libs/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
>[2022-11-11 19:45:06,787] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:06,788] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:07,388] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:07,388] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:08,323] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:08,323] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:09,724] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:09,724] WARN [Producer clientId=console-producer] Bootstrap broker 172.22.10.21:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-11-11 19:45:11,149] ERROR [Producer clientId=console-producer] Connection to node -1 (ip-172-22-10-21.ap-southeast-1.compute.internal/172.22.10.21:9093) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)

Mirror Maker2 not able to connect to target cluster broker

I have two Kafka clusters on AWS MSK (in the same environment and region). I have a Kafka Connect cluster set up on the destination cluster and have set up a MirrorMaker connector to run. The connector submission is fine and there are no errors.
When I check the status of the connector, it says RUNNING:
{"name":"mirror-maker-test-connector","connector":{"state":"RUNNING","worker_id":"<ip>:<port>"},"tasks":[task_list],"type":"source"}
However, in the Connect worker logs I see the following exception:
[2022-01-12 19:46:33,772] DEBUG [Producer clientId=connector-producer-mirror-maker-test-connector-0] Connection with b-2.<broker_ip> disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:120)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-01-12 19:46:33,773] DEBUG [Producer clientId=connector-producer-mirror-maker-test-connector-0] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2022-01-12 19:46:33,773] WARN [Producer clientId=connector-producer-mirror-maker-test-connector-0] Bootstrap broker b-2.<broker_ip>:9094 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
I am able to connect to the specified broker with netcat from within the Kafka Connect k8s pod.
Has anyone faced this issue before?
I got it to work - had to add SSL properties for both consumer and producer when submitting the Mirror Maker connector.
"target.cluster.security.protocol": "SSL",
"target.cluster.ssl.truststore.location":"<certs_path>",
"target.cluster.ssl.truststore.password": "<password>"
"source.cluster.security.protocol": "SSL",
"source.cluster.ssl.truststore.location": "<certs_path>",
"source.cluster.ssl.truststore.password": "<password>"

Native Apache Kafka and Zookeeper with Confluent components?

Could you please tell me about the compatibility of Apache Kafka and ZooKeeper (the native Apache distributions) with some of Confluent's components? I already have Kafka and ZooKeeper installed in my environment as multi-node clusters, but now I need to add schema-registry and kafka-connect.
So I tried to deploy the Confluent Schema Registry from their official Docker image. I logged in and was able to successfully telnet to the Kafka broker on port 9093:
root@schema-0:/usr/bin# telnet kafka-0.kafka-hs 9093
Trying 10.244.3.47...
Connected to kafka-0.kafka-hs.log-platform.svc.cluster.local.
Escape character is '^]'.
Then I tried to run some tests:
# /usr/bin/kafka-avro-console-producer \
  --broker-list localhost:9093 --topic bar \
  --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
Add some values:
{"f1": "value1"}
But no luck :(. I got the following errors:
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2018-01-28 11:23:23,561] INFO Kafka version : 1.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-01-28 11:23:23,561] INFO Kafka commitId : ec61c5e93da662df (org.apache.kafka.common.utils.AppInfoParser){"f1": "value1"}
[2018-01-28 11:23:36,233] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-28 11:23:36,335] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-28 11:23:36,486] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
The entire system is running on Kubernetes.
Confluent Platform is Apache Kafka, but with additional components (such as Schema Registry) bundled with it.
The error you're getting is a network-configuration issue. You need to make sure your broker is reachable from other nodes, including Schema Registry. You've specified --broker-list localhost:9093, but inside the Schema Registry container localhost is the container itself, not your Kafka broker; use the broker address that worked with telnet (kafka-0.kafka-hs:9093). In addition, as Dmitry Minkovsky mentions, make sure you've set the advertised listeners in your broker. This article might help.
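Concretely, the idea is to point the producer at the address that already worked with telnet instead of localhost; a sketch based on the command from the question:
# Use the broker's service DNS name (the one telnet reached), not localhost.
/usr/bin/kafka-avro-console-producer \
  --broker-list kafka-0.kafka-hs:9093 --topic bar \
  --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'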

Kafka client can't receive messages

I have Kafka and ZooKeeper set up on a remote machine. On that machine, the test from the official website works:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic listings-incoming
This is a message
This is another message
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic listings-incoming --from-beginning
This is a message
This is another message
but when I run the consumer script from my local machine it doesn't work:
bin/kafka-console-consumer.sh --bootstrap-server X.X.X.X:9092 --topic listings-incoming --from-beginning --consumer-property group.id=group2
No messages show up; what I get instead is:
[2017-08-11 14:39:56,425] WARN Auto-commit of offsets {listings-incoming-4=OffsetAndMetadata{offset=0, metadata=''}, listings-incoming-2=OffsetAndMetadata{offset=0, metadata=''}, listings-incoming-3=OffsetAndMetadata{offset=0, metadata=''}, listings-incoming-0=OffsetAndMetadata{offset=0, metadata=''}, listings-incoming-1=OffsetAndMetadata{offset=0, metadata=''}} failed for group group1: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
UPDATE
My ZooKeeper and Kafka are running on the same machine; right now my advertised.listeners configuration is this:
advertised.listeners=PLAINTEXT://the.machine.ip.address:9092
I tried to change it to:
advertised.listeners=PLAINTEXT://my.client.ip.address:9092
and then ran the client-side consumer script, which gives this error:
[2017-08-11 15:49:01,591] WARN Error while fetching metadata with correlation id 3 : {listings-incoming=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-08-11 15:49:22,106] WARN Bootstrap broker 10.161.128.238:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-11 15:49:22,232] WARN Error while fetching metadata with correlation id 7 : {listings-incoming=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-08-11 15:49:22,340] WARN Error while fetching metadata with correlation id 8 : {listings-incoming=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-08-11 15:49:40,453] WARN Bootstrap broker 10.161.128.238:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2017-08-11 15:49:40,531] WARN Error while fetching metadata with correlation id 12 : {listings-incoming=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
You probably have not configured advertised.listeners properly in the broker's server.properties file.
From https://kafka.apache.org/documentation/
advertised.listeners: Listeners to publish to ZooKeeper for clients to use, if different than the listeners above. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used.
and in the same documentation
listeners: Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists:
PLAINTEXT://myhost:9092,SSL://:9091
CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093
So if advertised.listeners is not set and listeners is only listening on localhost:9092, 127.0.0.1:9092, or 0.0.0.0:9092, then clients will be told to connect to localhost when they make a metadata request to the bootstrap server. That works when the client runs on the same machine as the broker, but it fails when you connect remotely.
You should set advertised.listeners to be a fully qualified domain name or public IP address for the host that the broker is running on.
For example
advertised.listeners=PLAINTEXT://kafkabrokerhostname.confluent.io:9092
or
advertised.listeners=PLAINTEXT://192.168.1.101:9092
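A quick way to confirm what a broker actually advertises is to query it from the client machine; a sketch reusing the example address above:
# The metadata this returns lists each broker by its advertised
# listener address, i.e. what clients will be told to connect to.
bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.1.101:9092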