I'm trying to set up my Kafka cluster to accept SASL_SSL / SCRAM authentication.
First of all, everything is actually working: clients connect using SASL_SSL / GSSAPI. My Zk servers are also configured for SASL authentication and TLS.
I'm using the Confluent Docker images for Kafka and Zk:
confluentinc/cp-kafka:6.0.1
confluentinc/cp-zookeeper:5.5.3-3
So I just modified my setup to allow SCRAM-SHA-512 in Kafka: KAFKA_SASL_ENABLED_MECHANISMS=GSSAPI,SCRAM-SHA-512
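For reference, with the Confluent images this is just an environment variable on the broker container. A minimal sketch of the relevant container environment entries (the inter-broker line is an assumption for illustration, not taken from my actual setup):
# enable both mechanisms on the broker
KAFKA_SASL_ENABLED_MECHANISMS=GSSAPI,SCRAM-SHA-512
# hypothetical: inter-broker traffic can stay on GSSAPI while SCRAM is rolled out
KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=GSSAPI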
Following these instructions I now want to add the users in Zk and this is where the problems start (from the Kafka node):
[root@kafka1 [RCI] ~]# /usr/bin/podman exec kafka kafka-configs --zk-tls-config-file /etc/kafka/secrets/zk-ssl.properties --zookeeper Zk:3181 --alter --entity-type topics --entity-name test_jerome --add-config 'retention.ms=1'
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
[2021-01-21 17:44:14,807] WARN zookeeper.ssl.keyStore.location not specified (org.apache.zookeeper.common.X509Util)
Error while executing config command with args '--zk-tls-config-file /etc/kafka/secrets/zk-ssl.properties --zookeeper Zk:3181 --alter --entity-type topics --entity-name test_jerome --add-config retention.ms=1'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/changes
at org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1646)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1568)
at kafka.zk.KafkaZkClient.createConfigChangeNotification(KafkaZkClient.scala:395)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:385)
at kafka.zk.AdminZkClient.changeTopicConfig(AdminZkClient.scala:342)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:278)
at kafka.admin.ConfigCommand$.alterConfigWithZk(ConfigCommand.scala:167)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:118)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:92)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Error: non zero exit code: 1: OCI runtime error
ZK logs are not really helpful:
[2021-01-21 17:58:08,333] INFO Successfully authenticated client: authenticationID=admin; authorizationID=admin. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,333] INFO Successfully authenticated client: authenticationID=admin; authorizationID=admin. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,333] INFO Setting authorizedID: admin (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,333] INFO Setting authorizedID: admin (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-01-21 17:58:08,334] INFO adding SASL authorization for authorizationID: admin (org.apache.zookeeper.server.ZooKeeperServer)
[2021-01-21 17:58:08,334] INFO adding SASL authorization for authorizationID: admin (org.apache.zookeeper.server.ZooKeeperServer)
As you can see, the SASL authentication works and the admin user is correctly identified.
If I log into the Zk shell (from the Zk node), you can see that the ACLs are fully open:
getAcl /config/users
'world,'anyone
: cdrwa
getAcl /config
'world,'anyone
: cdrwa
getAcl /
'world,'anyone
: cdrwa
If I create the directory inside the Zk shell (from the Zk node) it works:
create /config/users/topicctl
Created /config/users/topicctl
There are no logs on the Zk server when I do this, as I do not authenticate.
I have now spent the afternoon on this problem without any progress.
What could be the problem?
I finally found my issue thanks to this post: Kafka not starting up if zookeeper.set.acl is set to true
I just added this info to the KAFKA_OPTS env variable: "-Dzookeeper.kerberos.removeHostFromPrincipal=true -Dzookeeper.kerberos.removeRealmFromPrincipal=true -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dzookeeper.requireClientAuthScheme=sasl"
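For anyone hitting the same thing, a minimal sketch of the export, assuming you set KAFKA_OPTS in the broker's environment before start (the line breaks are just for readability):
export KAFKA_OPTS="-Dzookeeper.kerberos.removeHostFromPrincipal=true \
  -Dzookeeper.kerberos.removeRealmFromPrincipal=true \
  -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
  -Dzookeeper.requireClientAuthScheme=sasl"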
My main question now is: why was SASL authentication working before that?
Related
Version of Confluent Platform: 5.4.1
I followed the documentation and a previous question to set up SCRAM authentication:
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_scram.html#
kafka SASL/SCRAM Failed authentication
After I modified my configuration, SASL authentication to the ZooKeeper server succeeds, but the Kafka server still fails. The log messages and my related configuration are shown below; please advise.
zookeeper server output:
[2020-07-18 23:53:42,917] INFO Successfully authenticated client: authenticationID=adminuser; authorizationID=adminuser. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:43,143] INFO Setting authorizedID: adminuser (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:43,143] INFO adding SASL authorization for authorizationID: adminuser (org.apache.zookeeper.server.ZooKeeperServer)
[2020-07-18 23:53:51,162] INFO Successfully authenticated client: authenticationID=adminuser; authorizationID=adminuser. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:51,162] INFO Setting authorizedID: adminuser (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2020-07-18 23:53:51,162] INFO adding SASL authorization for authorizationID: adminuser (org.apache.zookeeper.server.ZooKeeperServer)
kafka server error message:
org.apache.kafka.common.errors.DisconnectException: Cancelled fetchMetadata request with correlation id 11 due to node -1 being disconnected
[2020-07-19 00:23:59,921] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2020-07-19 00:24:00,095] WARN [Producer clientId=confluent-metrics-reporter] Bootstrap broker 192.168.20.10:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-07-19 00:24:00,403] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2020-07-19 00:24:00,597] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2020-07-19 00:24:00,805] INFO [SocketServer brokerId=0] Failed authentication with /192.168.20.10 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
zookeeper_server_jaas.conf:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_adminuser="adminuserpwd";
};
zookeeper.properties:
server.001=192.168.20.10:2888:3888
authProvider.001=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
zookeeper-server-start:
...
export ZK_AUTH_ARGS=$base_dir/../data/zookeeper_server_jaas.conf
exec $base_dir/kafka-run-class $EXTRA_ARGS -Djava.security.auth.login.config=$ZK_AUTH_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"
Added user:
bin/kafka-configs --zookeeper 192.168.20.10:2181 --alter --add-config 'SCRAM-SHA-256=[password=adminuserpwd],SCRAM-SHA-512=[password=adminuserpwd]' --entity-type users --entity-name adminuser
bin/kafka-configs --zookeeper 192.168.20.10:2181 --describe --entity-type users --entity-name adminuser
Configs for user-principal 'adminuser' are SCRAM-SHA-512=salt=MTdxamZocWJlY2F2dDFhZGc0dmluZm5hcmo=,stored_key=o21ptVzTVZoR/hafmOgTSYmr2F1TORPo6xDaZGAph+6OncE1pw/AyLRwduCx0Qx97bKoPWmlYShfXtbug6u8kg==,server_key=1B/1/CzPTpMBO9MpfKZb504JFLZUia0D6LatAllSYkrTa8XWbaISDGQ29Yf4UU+jQmo+iQgK0jX+KaV+fUV6XA==,iterations=4096,SCRAM-SHA-256=salt=MWlrZGs5dHd4dDhiZmdqZGxnN2cwOGpuaGs=,stored_key=vSJ83eDvilj4JyQyehPaGmG3EZISRRfo3j8iY8uiWLU=,server_key=Bu/KfHnv6bSay/n4dO/h55O9WLLaAjiLtJQzfpr4cs0=,iterations=4096
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="adminuser"
password="adminuserpwd";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="adminuser"
password="adminuserpwd";
};
kafka server.properties:
...
listeners=SASL_PLAINTEXT://192.168.20.10:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256
advertised.listeners=SASL_PLAINTEXT://192.168.20.10:9092
zookeeper.connect=192.168.20.10:2181
authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
super.users=User:adminuser
allow.everyone.if.no.acl.found=false
...
kafka-server-start:
...
KAFKA_AUTH_ARGS=$base_dir/../data/kafka_server_jaas.conf
exec $base_dir/kafka-run-class $EXTRA_ARGS -Djava.security.auth.login.config=$KAFKA_AUTH_ARGS io.confluent.support.metrics.SupportedKafka "$@"
I am using kafka_2.3.0 on Ubuntu 16.04.
Below are the configurations for the Kafka brokers and the ZooKeeper nodes. I am currently testing this on a single machine, so the IP stays the same throughout and only the ports differ.
kafka-broker 1 configuration.
broker.id=1
listeners=SASL_PLAINTEXT://192.168.1.172:9092
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9092
log.dirs=/home/emgda/data/kafka/1/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 2 configuration.
broker.id=2
listeners=SASL_PLAINTEXT://192.168.1.172:9093
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9093
log.dirs=/home/emgda/data/kafka/2/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 3 configuration.
broker.id=3
listeners=SASL_PLAINTEXT://192.168.1.172:9094
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9094
log.dirs=/home/emgda/data/kafka/3/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka_jaas.config
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
zookeeper_jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret";
};
Zookeeper-node 1 configuration
dataDir=/home/emgda/data/zookeeper/1/
clientPort=2181
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 2 configuration
dataDir=/home/emgda/data/zookeeper/2/
clientPort=2182
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 3 configuration
dataDir=/home/emgda/data/zookeeper/3/
clientPort=2183
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
The ZooKeeper nodes in the cluster start properly, and Kafka is also able to authenticate to ZooKeeper. The ZooKeeper logs below show what happens when the first Kafka broker comes up:
[2019-12-30 13:35:29,465] INFO Accepted socket connection from /192.168.1.172:42362 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-12-30 13:35:29,480] INFO Client attempting to establish new session at /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,487] INFO Established session 0x10000d285210003 with negotiated timeout 6000 for client /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,529] INFO Successfully authenticated client: authenticationID=super; authorizationID=super. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,529] INFO Setting authorizedID: super (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,530] INFO adding SASL authorization for authorizationID: super (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:36:54,999] INFO Closed socket connection for client /192.168.1.172:42362 which had sessionid 0x10000d285210003 (org.apache.zookeeper.server.NIOServerCnxn)
The following error occurs while starting the first Kafka broker:
[2019-12-30 13:35:58,417] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 (192.168.1.172/192.168.1.172:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2019-12-30 13:35:58,421] INFO [SocketServer brokerId=1] Failed authentication with /192.168.1.172 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
While trying to create a Kafka broker user with the command below, I get the following error:
emgda@ubuntu:~/softwares/kafka_2.12-2.3.0$ ./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Error while executing config command with args '--zookeeper 192.168.1.172:2181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret] --entity-type users --entity-name admin'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/users/admin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:124)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.createOrSet$1(KafkaZkClient.scala:357)
at kafka.zk.KafkaZkClient.setOrCreateEntityConfigs(KafkaZkClient.scala:367)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:378)
at kafka.zk.AdminZkClient.changeUserOrUserClientIdConfig(AdminZkClient.scala:312)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:276)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:153)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:104)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:80)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Question:
What am I missing so that kafka-broker authenticates in a proper way?
I get the Kafka error "SSL peer is not authenticated, returning ANONYMOUS instead" when a client connects to the broker's SASL port; connections on the PLAINTEXT or SSL ports work fine.
I have Kafka 2.2.0 on Windows with SSL enabled, where the broker's PLAINTEXT listener runs on 9092 and SSL on 9093. On top of that, I configured SASL with the SCRAM mechanism on listener port 9094, and I end up with the error mentioned in the problem summary when running the producer as kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
Here are the SASL configurations; other configuration, such as the basic and SSL settings, is omitted.
zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
server.properties
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_SSL://0.0.0.0:9094
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093,SASL_SSL://localhost:9094
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
zookeeper_server_jaas.conf
Server {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd"
user_admin="admin-pwd"
user_other1="other1-pwd"
user_other2="other2-pwd";
};
producer.properties
security.protocol=SSL
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-pwd";
};
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Start ZooKeeper as
SET ZOO_LOG_DIR=C:/Work/kafka_2.11-2.2.0-for-ssl/zookeeper-data
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/zookeeper_server_jaas.conf
zookeeper-server-start.bat %KAFKA_HOME%/config/zookeeper.properties
Start Kafka as
set KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_server_jaas.conf
kafka-server-start.bat %KAFKA_HOME%/config/server.properties
Start the Producer as
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_client_jaas.conf
kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
The producer only works if I use broker port 9092. Did I miss something and end up with a misconfiguration? Any inputs?
Updated:
Here is the error while connecting the producer/consumer
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL handshake completed successfully with peerHost '127.0.0.1' peerPort 63848 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_DHE_DSS_WITH_AES_256_CBC_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Handling Kafka request API_VERSIONS during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to FAILED during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] INFO [SocketServer brokerId=0] Failed authentication with 127.0.0.1/127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
I had the same problem. Authentication with SASL SCRAM wasn't working on Kafka versions 2.2.x and 2.3.x. On 2.1 it was OK.
In the end I resolved the issue by providing the ZooKeeper chroot path (/kafkaTest) when creating the principals:
./kafka-configs --zookeeper zookeeper-01:2181/kafkaTest --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
It seems that when the credentials are created in the ZooKeeper root path, Kafka can't find them to validate against.
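To make that concrete, a minimal sketch of the broker side, assuming your broker uses the same /kafkaTest chroot as the kafka-configs command above (host name mirrors the example):
# server.properties: the chroot here must match the one used with kafka-configs,
# otherwise the broker looks for credentials in a znode tree where they were never written
zookeeper.connect=zookeeper-01:2181/kafkaTest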
I hope it will solve your issue as well!
I have installed confluent-oss-5.0.0 on an Azure VM and exposed all necessary ports for access using the public IP address.
I tried changing the following things in etc/kafka/server.properties to achieve this, but no luck.
Approach - 1
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://<publicIP>:9092
--------------------------------------
Approach - 2
advertised.listeners=PLAINTEXT://<publicIP>:9092
--------------------------------------
Approach - 3
listeners=PLAINTEXT://<publicIP>:9092
I experienced the errors below:
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-producer --broker-list <publicIp>:9092 --topic pj_test123
>dfsds
[2019-03-25 19:13:38,784] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-producer --broker-list <publicIp>:9092 --topic pj_test123
>message1
>message2
>[2019-03-25 19:20:13,216] ERROR Error when sending message to topic pj_test123 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for pj_test123-0: 1503 ms has passed since batch creation plus linger time
[2019-03-25 19:20:13,218] ERROR Error when sending message to topic pj_test123 with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-consumer --bootstrap-server <publicIp>:9092 --topic pj_test123 --from-beginning
[2019-03-25 19:29:27,742] WARN [Consumer clientId=consumer-1, groupId=console-consumer-42352] Error while fetching metadata with correlation id 2 : {pj_test123=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
pj@pj-HP-EliteBook-840-G1:~/confluent-kafka/confluent-oss-5.0.0/bin$ kafka-console-consumer --bootstrap-server <publicIp>:9092 --topic pj_test123 --from-beginning
[2019-03-25 19:27:06,589] WARN [Consumer clientId=consumer-1, groupId=console-consumer-33252] Connection to node 0 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
All other services like ZooKeeper, Kafka Connect and the REST API are working fine using <PublicIP>:<port>.
kafka-topics --zookeeper 13.71.115.20:2181 --list --- this is working
Ref:
Not able to access messages from confluent kafka on EC2
https://kafka.apache.org/documentation/#brokerconfigs
Why I cannot connect to Kafka from outside?
Solutions
Thanks @Robin Moffatt, it works for me. I made the changes below, along with allowing all Kafka-related ports in Azure networking:
kafka@kafka:~/confluent-oss-5.0.0$ sudo vi etc/kafka/server.properties
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://<privateIp>:9092,EXTERNAL://<publicIp>:19092
inter.broker.listener.name=INTERNAL
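With this in place, an external client simply points at the EXTERNAL listener; for example, repeating the producer test from above against the new port:
kafka-console-producer --broker-list <publicIp>:19092 --topic pj_test123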
You need to configure both internal and external listeners for your broker. This article details how: https://rmoff.net/2018/08/02/kafka-listeners-explained/.
You will also have to give public access to port 9092 (your broker). To do that (an equivalent CLI sketch follows this list):
Go to your Virtual machine in Azure portal
Select Networking under settings in the left menu
Add inbound port rule
Add port 9092 to be accessible from anywhere
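Equivalently, a minimal sketch using the Azure CLI, with hypothetical resource group and VM names:
# opens inbound port 9092 on the VM's network security group (names are placeholders)
az vm open-port --resource-group my-resource-group --name my-kafka-vm --port 9092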
I have an environment problem.
I want to use ZooKeeper and a Kafka cluster to solve my problem.
My ZooKeeper version is 3.4.12 and Kafka is 2.12-2.1.0.
I also changed zoo.cfg in ZooKeeper:
dataDir=D:/WEBSOCKET/zookeeper-3.4.12/data
and server.properties in Kafka:
log.dirs=D:/WEBSOCKET/kafka_2.12-2.1.0/logs
I followed all the tutorials and did it exactly the same way, opening ZooKeeper and then Kafka.
These are my commands:
1) open zookeeper (zkServer.cmd)
2) in kafka
.\bin\windows\kafka-server-start.bat .\config\server.properties
3) create topic
.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello
4) create a producer
.\bin\windows\kafka-console-producer.bat --bootstrap-server localhost:2181 --topic hello
5) create a consumer
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:2181 --topic hello
When I get to step 5, it always fails.
ZooKeeper gives me a lot of console output like:
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception causing close of session 0x0: null
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /127.0.0.1:55192
and
2019-01-08 17:05:24,822 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed socket connection for client /127.0.0.1:50874 (no session established for client)
2019-01-08 17:05:25,783 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /127.0.0.1:56089
I don't know how to fix it. I have been googling for two days...
When I start Kafka in step 2, my ZooKeeper sometimes doesn't give any response, or shows me this:
[ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x1000058f8960000 type:multi cxid:0x36 zxid:0x69 txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
I also googled this, but it was not helpful.
I previously set this in Kafka:
advertised.host.name = localhost
listeners=PLAINTEXT://127.0.0.1:9092
My hosts file has:
127.0.0.1 localhost
Please help me create a local server; I want to code my project.
Thank you for reading all of this.
Producers and consumers need to use port 9092 (Kafka).
You are seeing ZooKeeper logs and errors because you are pointing --bootstrap-server / --broker-list at port 2181 (ZooKeeper).
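For example, keeping everything else the same, steps 4 and 5 become:
.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic hello
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic hello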
Check the quickstart guide again.