Can't start Kylin on kerberized cluster - kerberos

I have installed Kerberos and Sentry on the CDH cluster successfully (cm-5.8.4), and now I need to use Apache Kylin for some data analysis, but I can't start Kylin or run sample.sh. When I run sh sample.sh, I get this error message:
2017-08-02 14:44:44,805 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/hadoop/lib/native/:/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/hadoop/lib/native/:/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/hadoop/lib/native:/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/hbase/bin/../lib/native/Linux-amd64-64
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-229.el7.x86_64
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=root
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/root
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/data1/apache-kylin-1.6.0-cdh5.7-bin
2017-08-02 14:44:44,806 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x3cc79c020x0, quorum=localhost:2181, baseZNode=/hbase
2017-08-02 14:44:44,823 INFO [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-08-02 14:44:44,829 WARN [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-08-02 14:44:45,934 INFO [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-08-02 14:44:45,935 WARN [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Does anyone know how to solve this problem, or can someone point me to a guide for using Kylin on a kerberized cluster? Thanks!
The version of Kylin is apache-kylin-1.6.0-cdh5.7-bin.

Installing Kylin on a kerberized cluster is quite simple.
Step 1. Make sure you install Kylin on a CDH gateway node.
Step 2. Set up the environment correctly, e.g. HADOOP_HOME, HIVE_CONF, HBASE_CONF, etc.
Step 3. Follow the installation instructions on the Kylin website.
If you need more help, leave me a message and I can show you more detailed documentation on the first two steps.
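For step 2, here is a minimal sketch of what the environment on a CDH gateway might look like (the config paths, keytab path and principal below are illustrative assumptions, not values from the question). Note that connectString=localhost:2181 in the log above usually means the HBase client configuration is not being picked up, which is exactly what steps 1 and 2 address:
export KYLIN_HOME=/data1/apache-kylin-1.6.0-cdh5.7-bin
export HADOOP_CONF_DIR=/etc/hadoop/conf      # client configs deployed by Cloudera Manager on the gateway
export HIVE_CONF=/etc/hive/conf
export HBASE_CONF_DIR=/etc/hbase/conf        # must point at the real cluster, not localhost
# on a kerberized cluster, obtain a ticket before running check-env.sh, kylin.sh or sample.sh
kinit -kt /path/to/kylin.keytab kylin@YOUR-REALM.COM
$KYLIN_HOME/bin/check-env.sh
$KYLIN_HOME/bin/kylin.sh start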

Related

Kafka Authentication with SASL_PLAINTEXT fails

Here's the log:
kafka 16:54:47.56
kafka 16:54:47.57 Welcome to the Bitnami kafka container
kafka 16:54:47.57 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
kafka 16:54:47.57 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
kafka 16:54:47.57
kafka 16:54:47.57 INFO ==> ** Starting Kafka setup **
kafka 16:54:47.62 DEBUG ==> Validating settings in KAFKA_* env vars...
kafka 16:54:47.64 WARN ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
kafka 16:54:47.64 INFO ==> Initializing Kafka...
kafka 16:54:47.65 INFO ==> No injected configuration files found, creating default config files
kafka 16:54:47.89 INFO ==> Configuring Kafka for inter-broker communications with SASL_PLAINTEXT authentication.
kafka 16:54:47.89 INFO ==> Configuring Kafka for client communications with SASL_PLAINTEXT authentication.
kafka 16:54:47.91 INFO ==> Generating JAAS authentication file
kafka 16:54:47.93 INFO ==> ** Kafka setup finished! **
kafka 16:54:47.95 INFO ==> ** Starting Kafka **
[2022-05-29 16:54:49,343] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2022-05-29 16:54:49,988] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2022-05-29 16:54:50,157] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2022-05-29 16:54:50,163] INFO starting (kafka.server.KafkaServer)
[2022-05-29 16:54:50,164] INFO Connecting to zookeeper on sharif-zookeeper (kafka.server.KafkaServer)
[2022-05-29 16:54:50,188] INFO [ZooKeeperClient Kafka server] Initializing a new session to sharif-zookeeper. (kafka.zookeeper.ZooKeeperClient)
[2022-05-29 16:54:50,194] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,196] INFO Client environment:host.name=sharif-kafka-0.sharif-kafka-headless.default.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,196] INFO Client environment:java.version=11.0.12 (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,196] INFO Client environment:java.vendor=BellSoft (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,196] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,197] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-cli-1.4.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-client-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.12-2.10.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.6.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.27.0-GA.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.31.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.31.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.31.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.31.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.31.jar:/opt/bitnami/kafka/bin/../libs/jersey-media-jaxb-2.31.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.31.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-ajax-9.4.39.v20210325.jar:/opt/bitnami/kafka/bin/../libs/jline-3.12.1.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.8.0.jar:/opt/bitnami/kaf
ka/bin/../libs/kafka-metadata-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-raft-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-shell-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.12-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.8.0-sources.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.8.0.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/netty-buffer-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-codec-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-common-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-handler-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-resolver-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.12.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/bitnami/kafka/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.12.13.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.12.13.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.8.1.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.5.9.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-jute-3.5.9.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,197] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,198] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,198] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,198] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,198] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,198] INFO Client environment:os.version=5.4.190-107.353.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,199] INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,199] INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,199] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,199] INFO Client environment:os.memory.free=1011MB (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,200] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,200] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,203] INFO Initiating client connection, connectString=sharif-zookeeper sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper)
[2022-05-29 16:54:50,210] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2022-05-29 16:54:50,216] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2022-05-29 16:54:50,218] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2022-05-29 16:54:50,367] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-05-29 16:54:50,371] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-05-29 16:54:50,381] INFO Opening socket connection to server sharif-zookeeper/10.100.190.137:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-05-29 16:54:50,389] INFO Socket connection established, initiating session, client: /192.168.34.166:57652, server: sharif-zookeeper/10.100.190.137:2181 (org.apache.zookeeper.ClientCnxn)
[2022-05-29 16:54:50,398] INFO Session establishment complete on server sharif-zookeeper/10.100.190.137:2181, sessionid = 0x100003ea9fd0008, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2022-05-29 16:54:50,406] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-05-29 16:54:50,420] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:312)
at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:275)
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:882)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:103)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:365)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
[2022-05-29 16:54:50,431] INFO Unable to read additional data from server sessionid 0x100003ea9fd0008, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-05-29 16:54:50,430] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2022-05-29 16:54:50,452] INFO EventThread shut down for session: 0x100003ea9fd0008 (org.apache.zookeeper.ClientCnxn)
My kafka_jaas file on the Kafka server:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="user_kafka"
password="secret";
};
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="CMwvKfeVJociGkSToMZQ"
user_kafka="CMwvKfeVJociGkSToMZQ"
user_user_kafka="secret";
org.apache.kafka.common.security.scram.ScramLoginModule required;
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="2q0T4HFZwa21DCRlfqxX";
};
My zoo_jaas file on the ZooKeeper server:
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="secret";
};
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_kafka="secret";
};
Any help/suggestion would be really beneficial. Thanks.
The following setup worked for me:
Kafka jaas file:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
Zookeeper jaas file:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="admin-secret";
};
Kafka producer/consumer client properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
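If you put those three properties in a file (say client.properties, a filename chosen here just for illustration), the standard console clients can use them directly; for example (replace localhost:9092 with your broker address):
kafka-console-producer.sh --bootstrap-server localhost:9092 --producer.config client.properties --topic test
kafka-console-consumer.sh --bootstrap-server localhost:9092 --consumer.config client.properties --topic test --from-beginning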
I noticed that you were running Kafka with the Bitnami Kafka container. It can be run successfully using Docker Compose.
First, create a docker-compose.yml file like this:
version: '3'
services:
  zookeeper:
    image: 'bitnami/zookeeper:3.6'
    ports:
      - '2181:2181'
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=kafka
      - ZOO_SERVER_PASSWORDS=secret
      - ZOO_CLIENT_USER=kafka
      - ZOO_CLIENT_PASSWORD=secret
  kafka:
    image: 'bitnami/kafka:2.8.1'
    ports:
      - '9093:9093'
    environment:
      - ALLOW_PLAINTEXT_LISTENER=no
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9093,
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,CLIENT://localhost:9093
      - KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,CLIENT:SASL_PLAINTEXT
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      # Client credentials
      - KAFKA_CLIENT_USERS=user_kafka
      - KAFKA_CLIENT_PASSWORDS=secret
      # Interbroker credentials
      - KAFKA_INTER_BROKER_USER=kafka
      - KAFKA_INTER_BROKER_PASSWORD=CMwvKfeVJociGkSToMZQ
      # Zookeeper credentials
      - KAFKA_ZOOKEEPER_PROTOCOL=SASL
      - KAFKA_ZOOKEEPER_USER=kafka
      - KAFKA_ZOOKEEPER_PASSWORD=secret
    depends_on:
      - zookeeper
And then launch the containers using:
$ docker-compose up -d
# list the containers
$ docker-compose ps
Finally, you will see the kafka and zookeeper containers running.
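Once the containers are up, the broker log should show a SASL-authenticated ZooKeeper session followed by the broker start-up; a quick, rough check:
$ docker-compose logs kafka | grep -E 'Session establishment complete|started'
A client on the host can then connect to localhost:9093 with SASL_PLAINTEXT/PLAIN properties like the ones shown in the previous answer, using the user_kafka / secret credentials defined by KAFKA_CLIENT_USERS and KAFKA_CLIENT_PASSWORDS.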

Kafka unable to connect to zookeeper ensemble on EKS

I am trying to run a Kafka cluster on an AWS EKS cluster (v1.16). I am using the Bitnami Helm charts.
https://github.com/bitnami/charts/tree/master/bitnami/kafka
https://github.com/bitnami/charts/tree/master/bitnami/zookeeper
I have deployed zookeeper ensemble successfully using below command:
helm install zookeeper bitnami/zookeeper --set replicaCount=3 --set auth.enabled=false --set allowAnonymousLogin=true --set persistence.storageClass=ebs --set persistence.accessModes={ReadWriteOnce} --set persistence.size=1Gi --set podLabels."app\.kubernetes\.io/version"="1.0"
It outputs:
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
zookeeper.pulse.svc.cluster.local
Now, I am trying to deploy Kafka cluster with below command:
helm install kafka bitnami/kafka --set replicaCount=3 --set zookeeper.enabled=false --set externalZookeeper.servers=zookeeper.pulse.svc.cluster.local --set autoCreateTopicsEnable=true --set persistence.storageClass=ebs --set persistence.accessModes={ReadWriteOnce} --set persistence.size=1Gi --set podLabels."app\.kubernetes\.io/version"="1.0"
It outputs:
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.pulse.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-0.kafka-headless.pulse.svc.cluster.local
kafka-1.kafka-headless.pulse.svc.cluster.local
kafka-2.kafka-headless.pulse.svc.cluster.local
It creates 3 pods, but none of them is able to connect to ZooKeeper. I don't understand what the issue is here.
Kafka pod logs:
2020-07-06T11:22:40.506134648Z 11:22:40.50 Welcome to the Bitnami kafka container
2020-07-06T11:22:40.507301179Z 11:22:40.50 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
2020-07-06T11:22:40.508519907Z 11:22:40.50 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
2020-07-06T11:22:40.51039472Z 11:22:40.50
2020-07-06T11:22:40.511630347Z 11:22:40.51 INFO ==> ** Starting Kafka setup **
2020-07-06T11:22:40.55379314Z 11:22:40.55 WARN ==> You set the environment variable ALLOW_PLAINTEXT_LISTENER=yes. For safety reasons, do not use this flag in a production environment.
2020-07-06T11:22:40.561203295Z 11:22:40.56 INFO ==> Initializing Kafka...
2020-07-06T11:22:40.565054949Z 11:22:40.56 INFO ==> No injected configuration files found, creating default config files
2020-07-06T11:22:40.723721499Z 11:22:40.72 INFO ==> Configuring Kafka for inter-broker communications with PLAINTEXT authentication.
2020-07-06T11:22:40.726161543Z 11:22:40.72 WARN ==> Inter-broker communications are configured as PLAINTEXT. This is not safe for production environments.
2020-07-06T11:22:40.727497832Z 11:22:40.72 INFO ==> Configuring Kafka for client communications with PLAINTEXT authentication.
2020-07-06T11:22:40.731790674Z 11:22:40.73 WARN ==> Client communications are configured using PLAINTEXT listeners. For safety reasons, do not use this in a production environment.
2020-07-06T11:22:40.73699684Z 11:22:40.73 INFO ==> ** Kafka setup finished! **
2020-07-06T11:22:40.737001986Z
2020-07-06T11:22:40.746297253Z 11:22:40.74 INFO ==> ** Starting Kafka **
2020-07-06T11:22:41.512303802Z [2020-07-06 11:22:41,511] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
2020-07-06T11:22:42.008231959Z [2020-07-06 11:22:42,007] INFO starting (kafka.server.KafkaServer)
2020-07-06T11:22:42.009112085Z [2020-07-06 11:22:42,008] INFO Connecting to zookeeper on zookeeper.pulse.svc.cluster.local (kafka.server.KafkaServer)
2020-07-06T11:22:42.028233655Z [2020-07-06 11:22:42,028] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper.pulse.svc.cluster.local. (kafka.zookeeper.ZooKeeperClient)
2020-07-06T11:22:42.032763227Z [2020-07-06 11:22:42,032] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.032776511Z [2020-07-06 11:22:42,032] INFO Client environment:host.name=kafka-0.kafka-headless.pulse.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03283528Z [2020-07-06 11:22:42,032] INFO Client environment:java.version=11.0.7 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.032984414Z [2020-07-06 11:22:42,032] INFO Client environment:java.vendor=BellSoft (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033005091Z [2020-07-06 11:22:42,032] INFO Client environment:java.home=/opt/bitnami/java (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03312054Z [2020-07-06 11:22:42,032] INFO Client environment:java.class.path=/opt/bitnami/kafka/bin/../libs/activation-1.1.1.jar:/opt/bitnami/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/bitnami/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/bitnami/kafka/bin/../libs/commons-cli-1.4.jar:/opt/bitnami/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/bitnami/kafka/bin/../libs/connect-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-basic-auth-extension-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-file-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-json-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-mirror-client-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-runtime-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/connect-transforms-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jackson-module-scala_2.12-2.10.2.jar:/opt/bitnami/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/bitnami/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/bitnami/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/bitnami/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/bitnami/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/bitnami/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/bitnami/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/bitnami/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/bitnami/kafka/bin/../libs/jersey-client-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-common-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/bitnami/kafka/bin/../libs/jersey-server-2.28.jar:/opt/bitnami/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/bitnami/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/bitnami/kafka/bin/../libs/kafka-clients-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-log4j-appender-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-examples-2.5
.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-scala_2.12-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-streams-test-utils-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka-tools-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.5.0-sources.jar:/opt/bitnami/kafka/bin/../libs/kafka_2.12-2.5.0.jar:/opt/bitnami/kafka/bin/../libs/log4j-1.2.17.jar:/opt/bitnami/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/bitnami/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/bitnami/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/bitnami/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/opt/bitnami/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/bitnami/kafka/bin/../libs/paranamer-2.8.jar:/opt/bitnami/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/bitnami/kafka/bin/../libs/reflections-0.9.12.jar:/opt/bitnami/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/bitnami/kafka/bin/../libs/scala-collection-compat_2.12-2.1.3.jar:/opt/bitnami/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/opt/bitnami/kafka/bin/../libs/scala-library-2.12.10.jar:/opt/bitnami/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/bitnami/kafka/bin/../libs/scala-reflect-2.12.10.jar:/opt/bitnami/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/bitnami/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/bitnami/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-3.5.7.jar:/opt/bitnami/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/opt/bitnami/kafka/bin/../libs/zstd-jni-1.4.4-7.jar (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033182063Z [2020-07-06 11:22:42,033] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033188827Z [2020-07-06 11:22:42,033] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03322714Z [2020-07-06 11:22:42,033] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033253354Z [2020-07-06 11:22:42,033] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033280159Z [2020-07-06 11:22:42,033] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033324405Z [2020-07-06 11:22:42,033] INFO Client environment:os.version=4.14.181-140.257.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033355581Z [2020-07-06 11:22:42,033] INFO Client environment:user.name=? (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033399449Z [2020-07-06 11:22:42,033] INFO Client environment:user.home=? (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.03340494Z [2020-07-06 11:22:42,033] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033468947Z [2020-07-06 11:22:42,033] INFO Client environment:os.memory.free=1015MB (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033509114Z [2020-07-06 11:22:42,033] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.033536891Z [2020-07-06 11:22:42,033] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.035251257Z [2020-07-06 11:22:42,035] INFO Initiating client connection, connectString=zookeeper.pulse.svc.cluster.local sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6ee6f53 (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:22:42.038953719Z [2020-07-06 11:22:42,038] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2020-07-06T11:22:42.043407452Z [2020-07-06 11:22:42,043] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:42.045196444Z [2020-07-06 11:22:42,045] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
2020-07-06T11:22:42.053941415Z [2020-07-06 11:22:42,053] INFO Opening socket connection to server zookeeper.pulse.svc.cluster.local/172.20.162.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:42.057906383Z [2020-07-06 11:22:42,057] INFO Socket connection established, initiating session, client: /100.64.5.213:52738, server: zookeeper.pulse.svc.cluster.local/172.20.162.36:2181 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:42.061035524Z [2020-07-06 11:22:42,060] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:43.632054003Z [2020-07-06 11:22:43,631] INFO Opening socket connection to server zookeeper.pulse.svc.cluster.local/172.20.162.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:43.632596098Z [2020-07-06 11:22:43,632] INFO Socket connection established, initiating session, client: /100.64.5.213:52756, server: zookeeper.pulse.svc.cluster.local/172.20.162.36:2181 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:43.634993004Z [2020-07-06 11:22:43,634] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:44.760870715Z [2020-07-06 11:22:44,760] INFO Opening socket connection to server zookeeper.pulse.svc.cluster.local/172.20.162.36:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:44.761283232Z [2020-07-06 11:22:44,761] INFO Socket connection established, initiating session, client: /100.64.5.213:52772, server: zookeeper.pulse.svc.cluster.local/172.20.162.36:2181 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:22:44.763353195Z [2020-07-06 11:22:44,763] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:23:12.738834004Z [2020-07-06 11:23:12,738] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
2020-07-06T11:23:12.738918322Z [2020-07-06 11:23:12,738] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
2020-07-06T11:23:12.740751654Z [2020-07-06 11:23:12,740] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
2020-07-06T11:23:12.745313347Z [2020-07-06 11:23:12,743] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
2020-07-06T11:23:12.745331011Z kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
2020-07-06T11:23:12.745335139Z at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262)
2020-07-06T11:23:12.745338245Z at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258)
2020-07-06T11:23:12.745340837Z at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
2020-07-06T11:23:12.745343374Z at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1863)
2020-07-06T11:23:12.745345577Z at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:378)
2020-07-06T11:23:12.745347726Z at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:403)
2020-07-06T11:23:12.745349947Z at kafka.server.KafkaServer.startup(KafkaServer.scala:210)
2020-07-06T11:23:12.745352077Z at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
2020-07-06T11:23:12.745354263Z at kafka.Kafka$.main(Kafka.scala:82)
2020-07-06T11:23:12.745368643Z at kafka.Kafka.main(Kafka.scala)
2020-07-06T11:23:12.745806818Z [2020-07-06 11:23:12,745] INFO shutting down (kafka.server.KafkaServer)
2020-07-06T11:23:12.752833659Z [2020-07-06 11:23:12,752] INFO shut down completed (kafka.server.KafkaServer)
2020-07-06T11:23:12.753305908Z [2020-07-06 11:23:12,753] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
2020-07-06T11:23:12.757961524Z [2020-07-06 11:23:12,757] INFO shutting down (kafka.server.KafkaServer)
Also, from the Kafka pod, curl gives the output below:
istio-proxy@kafka-0:/$ curl zookeeper.pulse.svc.cluster.local:2181
curl: (52) Empty reply from server
istio-proxy@kafka-0:/$
Note: I have Istio sidecars with mTLS disabled.
Please help.
UPDATE
It turned out to be an Istio proxy issue. I uninstalled Istio and it worked.
https://github.com/bitnami/bitnami-docker-kafka/issues/38#issuecomment-451381003
This works fine for me on my local cluster. Since you are using EKS, you are most likely using the AWS VPC CNI(?). The CNI allocates IP addresses in your VPC, and if your security groups do not allow access to the VPC range, the pods will not be reachable (172.20.162.36:2181 looks like a VPC address, for example).
Another thing you can check is whether you have some NetworkPolicy preventing access:
kubectl get netpol
It's kind of odd that you get the expected response from Zookeeper:
curl zookeeper.pulse.svc.cluster.local:2181
curl: (52) Empty reply from server
So it is possible that zookeeper.pulse.svc.cluster.local is resolving to something that is reachable on :2181. In any case, it looks like a firewall/network policy issue.
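A couple of quick checks along those lines (the namespace and pod name are taken from the question; the container name kafka and the availability of bash/nc in the image are assumptions):
kubectl -n pulse get netpol
# raw TCP reachability from the kafka pod to the zookeeper service, bypassing Kafka itself
kubectl -n pulse exec kafka-0 -c kafka -- bash -c 'nc -vz -w 2 zookeeper.pulse.svc.cluster.local 2181'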

Strimzi topic operator not working - Openshift

I deployed the Strimzi topic operator on OpenShift using the following documentation: https://strimzi.io/docs/0.12.2/full.html#deploying-the-topic-operator-standalone-deploying. This does not work, and the error is shown below:
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:java.io.tmpdir=/tmp
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:java.compiler=<NA>
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:os.name=Linux
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:os.arch=amd64
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:os.version=3.10.0-1062.12.1.el7.x86_64
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:user.name=?
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:user.home=?
2020-02-26 13:40:30 INFO ZooKeeper:100 - Client environment:user.dir=/opt/strimzi
2020-02-26 13:40:30 INFO ZooKeeper:442 - Initiating client connection, connectString=172.30.93.216:2181 sessionTimeout=500000 watcher=org.I0Itec.zkclient.ZkClient@74e374fc
2020-02-26 13:40:31 INFO ZkClient:936 - Waiting for keeper state SyncConnected
2020-02-26 13:40:31 INFO ClientCnxn:1025 - Opening socket connection to server soumo-zookeeper-client.kube-system.svc.cluster.local/172.30.93.216:2181. Will not attempt to authenticate using SASL (unknown error)
2020-02-26 13:40:31 INFO ClientCnxn:879 - Socket connection established to soumo-zookeeper-client.kube-system.svc.cluster.local/172.30.93.216:2181, initiating session
2020-02-26 13:40:31 WARN ClientCnxn:1164 - Session 0x0 for server soumo-zookeeper-client.kube-system.svc.cluster.local/172.30.93.216:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_242]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:1.8.0_242]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:1.8.0_242]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_242]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:377) ~[?:1.8.0_242]
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68) ~[org.apache.zookeeper.zookeeper-3.4.14.jar:3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366) ~[org.apache.zookeeper.zookeeper-3.4.14.jar:3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141) [org.apache.zookeeper.zookeeper-3.4.14.jar:3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf]
2020-02-26 13:40:34 INFO ClientCnxn:1025 - Opening socket connection to server soumo-zookeeper-client.kube-system.svc.cluster.local/172.30.93.216:2181. Will not attempt to authenticate using SASL (unknown error)
2020-02-26 13:40:34 INFO ClientCnxn:879 - Socket connection established to soumo-zookeeper-client.kube-system.svc.cluster.local/172.30.93.216:2181, initiating session
2020-02-26 13:40:34 WARN ClientCnxn:1164 - Session 0x0 for server soumo-zookeeper-client.kube-system.svc.cluster.local/172.30.93.216:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer

Kafka with Kerberos

I'm encountering the following errors while configuring Kafka with Kerberos authentication.
Can somebody please let me know what could be going wrong here and how to get it fixed? I have tried various options, but nothing seems to work for me.
I can see that ZooKeeper gets connected, and then the next step fails:
[2019-10-09 05:06:07,942] INFO Initiating client connection, connectString=kafka-d1.example.com:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6adbc9d (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:07,945] DEBUG zookeeper.disableAutoWatchReset is false (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:07,959] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:07,961] DEBUG JAAS loginContext is: Client (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,252] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,253] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,254] DEBUG Client principal is "kafka/kafka-d1.example.com@EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,261] DEBUG Server principal is "krbtgt/EXAMPLE.COM@EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT valid starting at: Wed Oct 09 05:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT expires: Wed Oct 09 15:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT refresh sleeping until: Wed Oct 09 13:06:47 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,265] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,265] DEBUG creating sasl client: Client=kafka/kafka-d1.example.com@EXAMPLE.COM;service=zookeeper;serviceHostname=kafka-d1.example.com (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,272] INFO Opening socket connection to server kafka-d1.example.com/10.14.61.17:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,277] INFO Socket connection established to kafka-d1.example.com/10.14.61.17:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,278] DEBUG Session establishment request sent on kafka-d1.example.com/10.14.61.17:2181 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,286] INFO Session establishment complete on server kafka-d1.example.com/10.14.61.17:2181, sessionid = 0x16dafa306f20009, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,288] DEBUG ClientCnxn:sendSaslPacket:length=0 (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] DEBUG saslClient.evaluateChallenge(len=0) (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,300] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,300] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,300] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,350] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:546)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1559)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1480)
at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1472)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:373)
at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-10-09 05:06:08,354] INFO shutting down (kafka.server.KafkaServer)
[2019-10-09 05:06:08,356] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,357] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:08,359] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,361] INFO shut down completed (kafka.server.KafkaServer)
[2019-10-09 05:06:08,361] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-10-09 05:06:08,364] INFO shutting down (kafka.server.KafkaServer)
My ZooKeeper JAAS file:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab=/etc/keytabs/zookeeper.keytab
storeKey=true
useTicketCache=false
principal=zookeeper/kafka-d1.EXAMPLE.COM@EXAMPLE.COM;
};
cat /etc/kafka/jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/keytabs/kafka-d1.keytab"
principal="kafka/kafka-d1.EXAMPLE.COM#EXAMPLE.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/keytabs/kafka-d1.keytab"
principal="kafka/kafka-d1.EXAMPLE.COM#EXAMPLE.COM";
};
/etc/krb5.conf
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts
default_tkt_enctypes = aes256-cts
permitted_enctypes = aes256-cts
udp_preference_limit = 1
kdc_timeout = 3000
ignore_acceptor_hostname = true
[realms]
EXAMPLE.COM = {
kdc = srv-kerb.example.com
admin_server = srv-kerb.example.com
kdc = srv-kerb.example.com
}
[domain_realm]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and socketChannel.socket().getInetAddress().getHostName() must match the hostname in principal/hostname#realm Kafka Client will go to AUTHENTICATION_FAILED state.
I had the same problem. Changing the ZooKeeper host value from an IP address to the FQDN (hostname), and also adding the hostname to /etc/hosts, fixed the problem for me.
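Concretely, that means something along these lines (the hostname and IP are taken from the question's logs; treat this as a sketch rather than the poster's exact config):
# /etc/hosts on the Kafka host
10.14.61.17   kafka-d1.example.com   kafka-d1
# server.properties: reference ZooKeeper by FQDN, not by IP
zookeeper.connect=kafka-d1.example.com:2181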

How to Connect to Specific Zookeeper Clusters?

I am implementing a Scala function which parses a JSON file and loads the data into HBase. It works well on a local machine whose ZooKeeper server is localhost.
Now, for scaling purposes, I am trying to run this on a Hadoop cluster with three specific ZooKeeper nodes: node00, node01, solr.
I was wondering how to access the specific cluster through my code. Do I need to connect to the nodes beforehand, since I always get the following error?
17/12/27 18:20:17 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting to reconnect
java.net.ConnectException: Connection refused
Please get back to me on this
Edit1:
Thanks a lot for your response.
I couldn't check using the methods you suggested, as I don't have sufficient permissions.
Following is the code which takes the input JSON and writes it to HBase.
val input_base = ""
val files = List("ffff")
for (collection_name <- files) {
  val input_filename = input_base + collection_name + ".json"
  var col_name = collection_name
  var collection_Id = "1007"
  // build the Hadoop/HBase client configuration (tableName, sc and parse_json are defined elsewhere)
  @transient val hadoopConf = new Configuration()
  @transient val conf = HBaseConfiguration.create(hadoopConf)
  conf.set(TableOutputFormat.OUTPUT_TABLE, tableName)
  @transient val jobConfig: JobConf = new JobConf(conf, this.getClass)
  jobConfig.setOutputFormat(classOf[TableOutputFormat])
  jobConfig.set(TableOutputFormat.OUTPUT_TABLE, tableName)
  // parse each JSON line and write the result to HBase
  sc.textFile(input_filename).map(l => parse_json(l, col_name, collection_Id)).saveAsHadoopDataset(jobConfig)
}
I need help with passing those nodes as parameters in my code.
The ZooKeeper quorum has solr2.dlrl, node02.dlrl, node00.dlrl as nodes and 2181 as the port.
Please help me with this.
If I run the RowCounter job, it is able to connect to the cluster properly:
hbase org.apache.hadoop.hbase.mapreduce.RowCounter ram
The output is shown as follows:
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-696.6.3.el6.x86_64
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:user.name=cs5604f17_cta
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/cs5604f17_cta
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/cs5604f17_cta/ram/cmt/CMT/cs5604f17_cmt
18/01/02 14:52:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=solr2.dlrl:2181,node02.dlrl:2181,node00.dlrl:2181 sessionTimeout=60000 watcher=hconnection-0x524af2a60x0, quorum=solr2.dlrl:2181,node02.dlrl:2181,node00.dlrl:2181, baseZNode=/hbase
18/01/02 14:52:41 INFO zookeeper.ClientCnxn: Opening socket connection to server solr2.dlrl/10.0.0.125:2181. Will not attempt to authenticate using SASL (unknown error)
18/01/02 14:52:41 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.0.0.127:54862, server: solr2.dlrl/10.0.0.125:2181
18/01/02 14:52:41 INFO zookeeper.ClientCnxn: Session establishment complete on server solr2.dlrl/10.0.0.125:2181, sessionid = 0x160aa7b7c311ea2, negotiated timeout = 60000
18/01/02 14:52:41 INFO util.RegionSizeCalculator: Calculating region sizes for table "ram-irma".
18/01/02 14:52:42 INFO client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
18/01/02 14:52:42 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x160aa7b7c311ea2
18/01/02 14:52:42 INFO zookeeper.ZooKeeper: Session: 0x160aa7b7c311ea2 closed
18/01/02 14:52:42 INFO zookeeper.ClientCnxn: EventThread shut down
18/01/02 14:52:42 INFO mapreduce.JobSubmitter: number of splits:12
18/01/02 14:52:42 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
18/01/02 14:52:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1514688768112_0007
18/01/02 14:52:43 INFO impl.YarnClientImpl: Submitted application application_1514688768112_0007
18/01/02 14:52:43 INFO mapreduce.Job: The url to track the job: http://node02.dlrl:8088/proxy/application_1514688768112_0007/
18/01/02 14:52:43 INFO mapreduce.Job: Running job: job_1514688768112_0007
18/01/02 14:52:48 INFO mapreduce.Job: Job job_1514688768112_0007 running in uber mode : false