Creating topics in SASL/GSSAPI (Kerberos) based Kafka Cluster

We have a SASL/GSSAPI (Kerberos) based authentication scheme in our Kafka cluster. Brokers are configured to authenticate with Zookeeper and with each other. We added a principal to the "Super Users" list on all the brokers so that we can create topics using that principal; however, topic creation is failing, seemingly because of a lack of privileges:
[2019-09-11 02:16:30,905] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-09-11 02:16:30,912] INFO Waiting for keeper state SaslAuthenticated (org.I0Itec.zkclient.ZkClient)
[2019-09-11 02:16:31,157] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,161] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-09-11 02:16:31,164] INFO Opening socket connection to server broker101.prod/13.14.15.16:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,177] INFO Socket connection established to broker101.prod/13.14.15.16:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,179] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,193] INFO TGT valid starting at: Tue Aug 20 02:16:31 UTC 2019 (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,194] INFO TGT expires: Wed Aug 21 02:16:31 UTC 2019 (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,194] INFO TGT refresh sleeping until: Tue Aug 20 21:34:57 UTC 2019 (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,203] INFO Session establishment complete on server broker101.prod/13.14.15.16:2181, sessionid = 0x16c60b863b00035, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,204] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2019-09-11 02:16:31,214] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-09-11 02:16:31,214] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,215] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2019-09-11 02:16:31,215] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
Exception in thread "main" org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:103)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:85)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:58)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Is it even possible to create topics with a principal other than the one used by the brokers to authenticate with Zookeeper? If yes, then how?
We can successfully create topics using the principal which the brokers use to authenticate with Zookeeper. We were thinking that a Super User could perhaps do anything on the cluster, including creating new topics. Is that perception incorrect?
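For reference, the kind of invocation that produces the trace above looks roughly like the sketch below; the keytab path, principal, topic name and partition/replication settings are placeholders rather than the actual values from our setup:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/client_jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf"
/opt/kafka/bin/kafka-topics.sh --create --zookeeper broker101.prod:2181 --topic test-topic --partitions 3 --replication-factor 3
with client_jaas.conf containing a Client section for the super-user principal, for example:
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/path/to/superuser.keytab"
principal="superuser@EXAMPLE.COM";
};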

Related

Kafka stretched cluster stopped when second DC went down

My Kafka version:
/opt/kafka/bin/kafka-topics.sh --version
2.4.1 (Commit:c57222ae8cd7866b)
My Kafka cluster configuration looks like this:
6-node Kafka cluster
6 x Zookeeper, i.e. one installed on each node/broker
2 DCs, with 3 nodes in each DC
the rack-awareness feature is enabled on each node:
node1 DC1:
broker.id=1
broker.rack=dc1
node2 DC1:
broker.id=2
broker.rack=dc1
node3 DC1:
broker.id=3
broker.rack=dc1
node1 DC2:
broker.id=4
broker.rack=dc2
node2 DC2:
broker.id=5
broker.rack=dc2
node3 DC2:
broker.id=6
broker.rack=dc2
When the whole of DC2 went down, the Kafka cluster stopped and node1 in DC1 showed errors like this:
[2022-03-16 07:38:45,422] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,549] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,787] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:45,787] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:45,787] INFO Opening socket connection to server dc2kafkabr2/A.B.C.72:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,788] INFO Socket error occurred: dc2kafkabr2/A.B.C.72:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,503] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,503] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,503] INFO Opening socket connection to server dc1kafkabr1/A.B.C.68:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,504] INFO Socket connection established, initiating session, client: /A.B.C.68:35796, server: dc1kafkabr1/A.B.C.68:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,505] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,616] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,617] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,617] INFO Opening socket connection to server dc1kafkabr2/A.B.C.69:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,617] INFO Socket connection established, initiating session, client: /A.B.C.68:38936, server: dc1kafkabr2/A.B.C.69:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,619] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,896] INFO Client successfully logged in. (org.apache.zookeeper.Login)
However, when the Kafka nodes in DC2 are stopped gracefully with the systemctl command, the Kafka cluster keeps working properly on the nodes in DC1.
The question is: why does the Kafka cluster stop working when DC2 is turned off? How can it be prevented? Any idea?
Best Regards,
Dan
Dear all,
After further tests I know that the problem is on the Zookeeper side, because when I turn off two brokers in DC2 the Kafka cluster still works. After turning off kafka.service on the last broker in DC2, the Kafka cluster still works. But when I turn off zookeeper.service on the last broker in DC2, the cluster becomes unresponsive.
This is my zookeeper's configuration:
cat zookeeper.properties
tickTime=2000
dataDir=/opt/zookeeper/data
#dataLogDir=/var/log/zookeeper
clientPort=2181
initLimit=5
syncLimit=3
############## HARDENING #################
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
###########################################
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
Any idea what is wrong in this configuration?
Best Regards,
Dan
Zookeeper quorum is not ensured, and this is the reason: a 6-server ensemble needs a majority of 4 servers to keep a quorum, so when DC2 goes down only 3 of the 6 remain, the ensemble stops serving requests, and Kafka stops with it.
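A quick way to see this on a surviving node is to ask ZooKeeper for its state, for example with the stat four-letter command (a sketch; newer ZooKeeper versions only answer if the command is listed in 4lw.commands.whitelist):
echo stat | nc A.B.C.68 2181
With only 3 of the 6 servers left there is no majority, so instead of the usual statistics the node replies with an error along the lines of "This ZooKeeper instance is not currently serving requests".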

Kafka SASL_SSL No JAAS configuration section named 'Client' was found in specified JAAS configuration file

I'm trying to enable authentication using SASL/PLAIN in my Kafka broker.
The JAAS configuration file is the following:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
I launch the Kafka service using the following commands:
export KAFKA_OPTS="-Djava.security.auth.login.config=<PATH>kafka_server_jaas.conf"
/bin/kafka-server-start.sh /config/server.properties
The Kafka service does not start properly and I get these errors in the log:
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/kafka/kafka/config/kafka_server_jaas.conf'.
at org.apache.zookeeper.client.ZooKeeperSaslClient.<init>(ZooKeeperSaslClient.java:189)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,612] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,752] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2022-03-16 12:13:16,786] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2022-03-16 12:13:16,788] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2022-03-16 12:13:16,957] INFO Cluster ID = 6WTadNCMRAW4dHoc_JUnIg (kafka.server.KafkaServer)
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
I already added the following lines to server.properties
listeners=SASL_SSL://localhost:9092
security.protocol=SASL_SSL
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
This issue occurs when there is a mismatch between the cluster ID stored in Zookeeper and the one stored in the Kafka data directory of the broker.
In this case, the cluster ID of the broker stored in:
Zookeeper data is 6WTadNCMRAW4dHoc_JUnIg
Kafka meta.properties is RJXzPwJeRfawIa_yA0B26A
Reason: the Zookeeper data directory got deleted.
Deleting the Zookeeper dataDir and restarting both the Zookeeper and Kafka services will not work, because Zookeeper creates a new cluster ID and assigns it to the broker when it registers if there is no entry already. This new cluster ID will be different from the one in meta.properties.
This issue can be fixed by one of the steps below:
delete both the Kafka log.dirs and the Zookeeper dataDir - results in data loss; both the Zookeeper and Kafka services need to be restarted
delete meta.properties in the Kafka log.dirs directory (a short sketch follows below) - no data loss; the Kafka service needs to be started anyway
update the cluster ID in meta.properties with the value stored in the Zookeeper data; in this case, replace RJXzPwJeRfawIa_yA0B26A with 6WTadNCMRAW4dHoc_JUnIg - no data loss; the Kafka service needs to be started anyway
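A minimal sketch of the second option (delete meta.properties), assuming log.dirs points at /tmp/kafka-logs and reusing the paths from the question; check server.properties for the actual directory:
grep log.dirs /config/server.properties
cat /tmp/kafka-logs/meta.properties    # shows cluster.id=RJXzPwJeRfawIa_yA0B26A
rm /tmp/kafka-logs/meta.properties
/bin/kafka-server-start.sh /config/server.properties
On the next start the broker re-creates meta.properties with the cluster ID taken from ZooKeeper (6WTadNCMRAW4dHoc_JUnIg).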
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file
The Client section is used to authenticate a SASL connection with ZooKeeper. The javax.security.auth.login.LoginException above is effectively a warning, and Kafka will connect to the Zookeeper server without SASL authentication if Zookeeper allows it.
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
The KafkaServer section is used by the broker and provides SASL configuration options for inter-broker connections. The username and password are used by the broker to initiate connections to other brokers. The set of user_<username> properties defines the passwords for all users that connect to the broker.
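If SASL towards ZooKeeper is wanted as well, a Client section has to be added next to the existing KafkaServer section in the same JAAS file. A minimal sketch, assuming ZooKeeper is configured for DIGEST-MD5; the user name and password are placeholders and must match the credentials defined on the ZooKeeper server side:
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="zkclient"
password="zkclient-secret";
};
If ZooKeeper does not enforce authentication, the warning can simply be ignored, as described above.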

kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING with remote host

I installed a Zookeeper server and one Kafka broker in one of my cloud server instances, and they are working well. But when trying to connect to the remote Zookeeper server, the local Kafka brokers are not able to reach that IP address and port number. The firewall is also inactive.
The summary is:
one zookeeper server - in cloud instance [146.646.64.66*]
one Kafka broker server - in cloud instance [146.646.64.66*]
two Kafka broker server - in my local PC [localhost]
I have updated the zookeeper.connect property of the local Kafka broker server's property file as follows:
zookeeper.connect=146.646.64.66*:2181
The following is the error that the CMD shows:
[2021-06-17 19:47:01,443] INFO Initiating client connection, connectString=174.138.31.159:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#6736fa8d (org.apache.zookeeper.ZooKeeper)
[2021-06-17 19:47:01,468] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-06-17 19:47:01,545] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:01,553] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-06-17 19:47:19,557] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2021-06-17 19:47:21,663] INFO Opening socket connection to server 146.646.64.66*/146.646.64.66*:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:21,801] WARN Client session timed out, have not heard from server in 20251ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:21,929] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2021-06-17 19:47:21,929] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:21,934] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2021-06-17 19:47:21,944] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:271)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:267)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:125)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1948)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:431)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:456)
at kafka.server.KafkaServer.startup(KafkaServer.scala:191)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
[2021-06-17 19:47:21,982] INFO shutting down (kafka.server.KafkaServer)
Please help me solve this problem.
Remove all cached log files, or change the log directory path in the server.properties file of the broker you are going to run. The cached log files' data can be affected by your server history.
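Independently of the log directories, it is also worth verifying basic reachability of the remote ZooKeeper from the PC running the local brokers. A quick sketch (substitute the real ZooKeeper address for the masked one; on ZooKeeper 3.5+ the ruok command must be whitelisted via 4lw.commands.whitelist):
nc -vz 146.646.64.66* 2181
echo ruok | nc 146.646.64.66* 2181
A client session timeout with session id 0x0, as in the log above, usually means the broker never completed the ZooKeeper handshake, which points at a cloud firewall or security group not exposing port 2181 to your PC.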

Kafka unable to connect to zookeeper ensemble on EKS

I am trying to run a Kafka cluster on an AWS EKS cluster (v1.16). I am using the Bitnami Helm charts.
https://github.com/bitnami/charts/tree/master/bitnami/kafka
https://github.com/bitnami/charts/tree/master/bitnami/zookeeper
I have deployed the zookeeper ensemble successfully using the command below:
helm install zookeeper bitnami/zookeeper --set replicaCount=3 --set auth.enabled=false --set allowAnonymousLogin=true --set persistence.storageClass=ebs --set persistence.accessModes={ReadWriteOnce} --set persistence.size=1Gi --set podLabels."app\.kubernetes\.io/version"="1.0"
It outputs:
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
zookeeper.pulse.svc.cluster.local
Now, I am trying to deploy the Kafka cluster with the command below:
helm install kafka bitnami/kafka --set replicaCount=3 --set zookeeper.enabled=false --set externalZookeeper.servers=zookeeper.pulse.svc.cluster.local --set autoCreateTopicsEnable=true --set persistence.storageClass=ebs --set persistence.accessModes={ReadWriteOnce} --set persistence.size=1Gi --set podLabels."app\.kubernetes\.io/version"="1.0"
It outputs:
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.pulse.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-0.kafka-headless.pulse.svc.cluster.local
kafka-1.kafka-headless.pulse.svc.cluster.local
kafka-2.kafka-headless.pulse.svc.cluster.local
It creates 3 pods but none of the pods is able to connect to zookeeper. I am not able to figure out what the issue is here.
Kafka pod logs:
2020-07-06T11:22:40.506134648Z 11:22:40.50 Welcome to the Bitnami kafka container
2020-07-06T11:22:40.507301179Z 11:22:40.50 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
2020-07-06T11:22:40.508519907Z 11:22:40.50 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
2020-07-06T11:22:40.51039472Z 11:22:40.50
2020-07-06T11:22:40.511630347Z 11:22:40.51 INFO ==> ** Starting Kafka setup **
2020-07-06T11:22:40.55379314Z 11:22:40.55 WARN ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
2020-07-06T11:22:40.561203295Z 11:22:40.56 INFO ==> Initializing Kafka...
2020-07-06T11:22:40.565054949Z 11:22:40.56 INFO ==> No injected configuration files found, creating default config files
2020-07-06T11:22:40.723721499Z 11:22:40.72 INFO ==> Configuring Kafka for inter-broker communications with PLAINTEXT authentication.
2020-07-06T11:22:40.726161543Z 11:22:40.72 WARN ==> Inter-broker communications are configured as PLAINTEXT. This is not safe for production environments.
2020-07-06T11:22:40.727497832Z 11:22:40.72 INFO ==> Configuring Kafka for client communications with PLAINTEXT authentication.
2020-07-06T11:22:40.731790674Z 11:22:40.73 WARN ==> Client communications are configured using PLAINTEXT listeners. For safety reasons, do not use this in a production environment.
2020-07-06T11:22:40.73699684Z 11:22:40.73 INFO ==> ** Kafka setup finished! **
2020-07-06T11:22:40.737001986Z
2020-07-06T11:22:40.746297253Z 11:22:40.74 INFO ==> ** Starting Kafka **
2020-07-06T11:22:41.512303802Z [2020-07-06 11:22:41,511] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
2020-07-06T11:22:42.008231959Z [2020-07-06 11:22:42,007] INFO starting (kafka.server.KafkaServer)
2020-07-06T11:22:42.009112085Z [2020-07-06 11:22:42,008] INFO Connecting to zookeeper on zookeeper.pulse.svc.cluster.local (kafka.server.KafkaServer)
2020-07-06T11:22:42.028233655Z [2020-07-06 11:22:42,028] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper.pulse.svc.cluster.local. (kafka.zookeeper.ZooKeeperClient)
2020-07-06T11:22:42.032763227Z [2020-07-06 11:22:42,032] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.032776511Z [2020-07-06 11:22:42,032] INFO Client environment:host.name=kafka-0.kafka-headless.pulse.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03283528Z [2020-07-06 11:22:42,032] INFO Client environment:java.version=11.0.7 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.032984414Z [2020-07-06 11:22:42,032] INFO Client environment:java.vendor=BellSoft (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033005091Z [2020-07-06 11:22:42,032] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03312054Z [2020-07-06 11:22:42,032] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-cli-1.4.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-client-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.12-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.28.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.5
.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.12-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.5.0-sources.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.12.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/scala-collection-compat_2.12-2.1.3.jar:/opt/bitnami/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.12.10.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.12.10.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/bitnami/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.5.7.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.4-7.jar (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033182063Z [2020-07-06 11:22:42,033] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033188827Z [2020-07-06 11:22:42,033] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03322714Z [2020-07-06 11:22:42,033] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033253354Z [2020-07-06 11:22:42,033] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033280159Z [2020-07-06 11:22:42,033] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033324405Z [2020-07-06 11:22:42,033] INFO Client environment:os.version=4.14.181-140.257.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033355581Z [2020-07-06 11:22:42,033] INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033399449Z [2020-07-06 11:22:42,033] INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03340494Z [2020-07-06 11:22:42,033] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033468947Z [2020-07-06 11:22:42,033] INFO Client environment:os.memory.free=1015MB (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033509114Z [2020-07-06 11:22:42,033] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033536891Z [2020-07-06 11:22:42,033] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.035251257Z [2020-07-06 11:22:42,035] INFO Initiating client connection, connectString=zookeeper.pulse.svc.cluster.local sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#6ee6f53 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.038953719Z [2020-07-06 11:22:42,038] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2020-07-06T11:22:42.043407452Z [2020-07-06 11:22:42,043] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:42.045196444Z [2020-07-06 11:22:42,045] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
2020-07-06T11:22:42.053941415Z [2020-07-06 11:22:42,053] INFO Opening socket connection to server zookeeper.pulse.svc.cluster.local/172.20.162.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:42.057906383Z [2020-07-06 11:22:42,057] INFO Socket connection established, initiating session, client: /100.64.5.213:52738, server: zookeeper.pulse.svc.cluster.local/172.20.162.36:2181 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:42.061035524Z [2020-07-06 11:22:42,060] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:43.632054003Z [2020-07-06 11:22:43,631] INFO Opening socket connection to server zookeeper.pulse.svc.cluster.local/172.20.162.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:43.632596098Z [2020-07-06 11:22:43,632] INFO Socket connection established, initiating session, client: /100.64.5.213:52756, server: zookeeper.pulse.svc.cluster.local/172.20.162.36:2181 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:43.634993004Z [2020-07-06 11:22:43,634] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:44.760870715Z [2020-07-06 11:22:44,760] INFO Opening socket connection to server zookeeper.pulse.svc.cluster.local/172.20.162.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:44.761283232Z [2020-07-06 11:22:44,761] INFO Socket connection established, initiating session, client: /100.64.5.213:52772, server: zookeeper.pulse.svc.cluster.local/172.20.162.36:2181 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:44.763353195Z [2020-07-06 11:22:44,763] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:23:12.738834004Z [2020-07-06 11:23:12,738] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:23:12.738918322Z [2020-07-06 11:23:12,738] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:23:12.740751654Z [2020-07-06 11:23:12,740] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
2020-07-06T11:23:12.745313347Z [2020-07-06 11:23:12,743] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
2020-07-06T11:23:12.745331011Z kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
2020-07-06T11:23:12.745335139Z at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262)
2020-07-06T11:23:12.745338245Z at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258)
2020-07-06T11:23:12.745340837Z at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
2020-07-06T11:23:12.745343374Z at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1863)
2020-07-06T11:23:12.745345577Z at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:378)
2020-07-06T11:23:12.745347726Z at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:403)
2020-07-06T11:23:12.745349947Z at kafka.server.KafkaServer.startup(KafkaServer.scala:210)
2020-07-06T11:23:12.745352077Z at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
2020-07-06T11:23:12.745354263Z at kafka.Kafka$.main(Kafka.scala:82)
2020-07-06T11:23:12.745368643Z at kafka.Kafka.main(Kafka.scala)
2020-07-06T11:23:12.745806818Z [2020-07-06 11:23:12,745] INFO shutting down (kafka.server.KafkaServer)
2020-07-06T11:23:12.752833659Z [2020-07-06 11:23:12,752] INFO shut down completed (kafka.server.KafkaServer)
2020-07-06T11:23:12.753305908Z [2020-07-06 11:23:12,753] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
2020-07-06T11:23:12.757961524Z [2020-07-06 11:23:12,757] INFO shutting down (kafka.server.KafkaServer)
Also, from the Kafka pod, curl gives the below output:
istio-proxy@kafka-0:/$ curl zookeeper.pulse.svc.cluster.local:2181
curl: (52) Empty reply from server
istio-proxy@kafka-0:/$
Note: I am running Istio sidecars with mTLS disabled.
Please help.
UPDATE
It turned out to be an Istio proxy issue. I uninstalled Istio and it worked.
https://github.com/bitnami/bitnami-docker-kafka/issues/38#issuecomment-451381003
This works fine for me on my local cluster. Since you are using EKS, you are most likely using the AWS CNI(?). The CNI allocates IP addresses in your VPC, and if your security groups do not allow access to the VPC range, you will not be able to reach the pods (172.20.162.36:2181 looks like a VPC address, for example).
Another thing you can check is whether you have some NetworkPolicy preventing access:
kubectl get netpol
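Another quick check, assuming the releases live in the pulse namespace as the DNS names above suggest, is whether the zookeeper Service actually has ready endpoints behind it:
kubectl get svc,endpoints zookeeper -n pulse
If the endpoints list is empty, the ZooKeeper pods are not passing their readiness probes and Kafka has nothing to reach.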
It's kind of odd that you get the expected response from Zookeeper:
curl zookeeper.pulse.svc.cluster.local:2181
curl: (52) Empty reply from server
So it is possible that zookeeper.pulse.svc.cluster.local is resolving to an accessible :2181 after all. In any case, it looks like a firewall/network-policy issue.
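Following up on the update above: if removing Istio entirely is not desirable, excluding the Kafka and ZooKeeper pods from sidecar injection is a less drastic alternative. A sketch, assuming injection was enabled through the istio-injection namespace label and that the Bitnami charts apply the standard app.kubernetes.io/name labels (the per-pod annotation sidecar.istio.io/inject: "false" achieves the same for individual workloads):
kubectl label namespace pulse istio-injection-
kubectl delete pod -n pulse -l app.kubernetes.io/name=zookeeper
kubectl delete pod -n pulse -l app.kubernetes.io/name=kafka
Deleting the pods makes the StatefulSets recreate them without the istio-proxy container.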

Kafka with Kerberos

I'm encountering the following errors while configuring Kafka with Kerberos authentication.
Can somebody please let me know what could be going wrong here and how to get it fixed? I have tried various options, but nothing seems to be working for me.
I can see that Zookeeper gets connected, and then on the next exchange it fails:
[2019-10-09 05:06:07,942] INFO Initiating client connection, connectString=kafka-d1.example.com:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#6adbc9d (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:07,945] DEBUG zookeeper.disableAutoWatchReset is false (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:07,959] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:07,961] DEBUG JAAS loginContext is: Client (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,252] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,253] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,254] DEBUG Client principal is "kafka/kafka-d1.example.com#EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,261] DEBUG Server principal is "krbtgt/EXAMPLE.COM#EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT valid starting at: Wed Oct 09 05:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT expires: Wed Oct 09 15:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT refresh sleeping until: Wed Oct 09 13:06:47 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,265] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,265] DEBUG creating sasl client: Client=kafka/kafka-d1.example.com#EXAMPLE.COM;service=zookeeper;serviceHostname=kafka-d1.example.com (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,272] INFO Opening socket connection to server kafka-d1.example.com/10.14.61.17:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,277] INFO Socket connection established to kafka-d1.example.com/10.14.61.17:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,278] DEBUG Session establishment request sent on kafka-d1.example.com/10.14.61.17:2181 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,286] INFO Session establishment complete on server kafka-d1.example.com/10.14.61.17:2181, sessionid = 0x16dafa306f20009, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,288] DEBUG ClientCnxn:sendSaslPacket:length=0 (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] DEBUG saslClient.evaluateChallenge(len=0) (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,300] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,300] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,300] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,350] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:546)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1559)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1480)
at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1472)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:373)
at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-10-09 05:06:08,354] INFO shutting down (kafka.server.KafkaServer)
[2019-10-09 05:06:08,356] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,357] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:08,359] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,361] INFO shut down completed (kafka.server.KafkaServer)
[2019-10-09 05:06:08,361] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-10-09 05:06:08,364] INFO shutting down (kafka.server.KafkaServer)
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab=/etc/keytabs/zookeeper.keytab
storeKey=true
useTicketCache=false
principal=zookeeper/kafka-d1.EXAMPLE.COM@EXAMPLE.COM;
};
cat /etc/kafka/jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/keytabs/kafka-d1.keytab"
principal="kafka/kafka-d1.EXAMPLE.COM#EXAMPLE.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/keytabs/kafka-d1.keytab"
principal="kafka/kafka-d1.EXAMPLE.COM#EXAMPLE.COM";
};
/etc/krb5.conf
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts
default_tkt_enctypes = aes256-cts
permitted_enctypes = aes256-cts
udp_preference_limit = 1
kdc_timeout = 3000
ignore_acceptor_hostname = true
[realms]
EXAMPLE.COM = {
kdc = srv-kerb.example.com
admin_server = srv-kerb.example.com
kdc = srv-kerb.example.com
}
[domain_realm]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and socketChannel.socket().getInetAddress().getHostName() must match the hostname in principal/hostname#realm Kafka Client will go to AUTHENTICATION_FAILED state.
I had the same problem. Changing the zookeeper host value from the IP address to the FQDN (hostname), and also adding the hostname to /etc/hosts, fixed the problem for me.
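A minimal sketch of that change, reusing the hostname and IP that appear in the logs above (adjust to the real environment):
In server.properties, point zookeeper.connect at the FQDN rather than the IP:
zookeeper.connect=kafka-d1.example.com:2181
and make sure the name resolves, e.g. via /etc/hosts:
10.14.61.17 kafka-d1.example.com
The "Server not found in Kerberos database" error typically means that the service principal the client constructs from the server name (here zookeeper/<hostname>) does not exist in the KDC, so the fix is to make the resolved hostname line up with the host part of the zookeeper service principal registered in the KDC.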