My Kafka version:
/opt/kafka/bin/kafka-topics.sh --version
2.4.1 (Commit:c57222ae8cd7866b)
My Kafka cluster configuration looks like:
a 6-node Kafka cluster
6 ZooKeeper instances, i.e. one installed on each node/broker
2 DCs, with 3 nodes in each DC
the rack-awareness feature is enabled on each node:
node1 DC1:
broker.id=1
broker.rack=dc1
node2 DC1:
broker.id=2
broker.rack=dc1
node3 DC1:
broker.id=3
broker.rack=dc1
node1 DC2:
broker.id=4
broker.rack=dc2
node2 DC2:
broker.id=5
broker.rack=dc2
node3 DC2:
broker.id=6
broker.rack=dc2
When the whole of DC2 went down, the Kafka cluster stopped working and node1 in DC1 showed errors like this:
[2022-03-16 07:38:45,422] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,549] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,787] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:45,787] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:45,787] INFO Opening socket connection to server dc2kafkabr2/A.B.C.72:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,788] INFO Socket error occurred: dc2kafkabr2/A.B.C.72:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,503] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,503] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,503] INFO Opening socket connection to server dc1kafkabr1/A.B.C.68:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,504] INFO Socket connection established, initiating session, client: /A.B.C.68:35796, server: dc1kafkabr1/A.B.C.68:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,505] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,616] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,617] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,617] INFO Opening socket connection to server dc1kafkabr2/A.B.C.69:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,617] INFO Socket connection established, initiating session, client: /A.B.C.68:38936, server: dc1kafkabr2/A.B.C.69:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,619] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,896] INFO Client successfully logged in. (org.apache.zookeeper.Login)
However, when the Kafka nodes in DC2 are stopped gracefully with the systemctl command, the Kafka cluster keeps working properly on the nodes in DC1.
The question is: why does the Kafka cluster stop working when DC2 is turned off? How can this be prevented? Any idea?
Best Regards,
Dan
Dear all,
After further tests I know that the problem is on the ZooKeeper side, because when I turn off two brokers in DC2 the Kafka cluster still works. After turning off kafka.service on the last broker in DC2 the Kafka cluster still works. But when I turn off zookeeper.service on the last broker in DC2, the cluster becomes unresponsive.
This is my zookeeper's configuration:
cat zookeeper.properties
tickTime=2000
dataDir=/opt/zookeeper/data
#dataLogDir=/var/log/zookeeper
clientPort=2181
initLimit=5
syncLimit=3
############## HARDENING #################
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
###########################################
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
Any idea what is wrong with this configuration?
Best Regards,
Dan
The ZooKeeper quorum is not maintained, and that is the reason. A 6-server ensemble needs a majority of 4 voters; when all of DC2 goes down, only 3 voters remain, so the ensemble loses quorum and Kafka becomes unresponsive. Stopping only kafka.service in DC2 leaves its ZooKeeper servers running, which is why that case keeps working.
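One common way to survive the loss of an entire DC is to keep an odd number of ZooKeeper voters, with a tie-breaker outside the two main DCs. A minimal sketch of the server list, assuming a hypothetical host dc3zk1 in a third location reachable from both DCs (the existing entries are unchanged):
# 7 voters in total, majority = 4, so losing all of DC1 or all of DC2 still leaves 4 voters
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
server.7=dc3zk1:2888:3888
Without such a tie-breaker, a 3+3 split can never form a majority in either DC on its own.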
Related
I'm trying to activate authentication using SASL/PLAIN on my Kafka broker.
The JAAS configuration file is as follows:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
I launch the Kafka service using the following commands:
export KAFKA_OPTS="-Djava.security.auth.login.config=<PATH>kafka_server_jaas.conf"
/bin/kafka-server-start.sh /config/server.properties
The Kafka service does not start properly, and I get these errors in the log:
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/kafka/kafka/config/kafka_server_jaas.conf'.
at org.apache.zookeeper.client.ZooKeeperSaslClient.<init>(ZooKeeperSaslClient.java:189)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,612] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,752] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2022-03-16 12:13:16,786] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2022-03-16 12:13:16,788] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2022-03-16 12:13:16,957] INFO Cluster ID = 6WTadNCMRAW4dHoc_JUnIg (kafka.server.KafkaServer)
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
I have already added the following lines to server.properties:
listeners=SASL_SSL://localhost:9092
security.protocol=SASL_SSL
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
This issue occurs when there is a mismatch between the cluster ID stored in ZooKeeper and the one stored in the Kafka data directory for the broker.
In this case, the cluster ID of the broker stored in:
ZooKeeper data is 6WTadNCMRAW4dHoc_JUnIg
Kafka meta.properties is RJXzPwJeRfawIa_yA0B26A
Reason:
The ZooKeeper data directory got deleted.
Deleting the ZooKeeper dataDir and restarting both the ZooKeeper and Kafka services will not work, because ZooKeeper creates a new cluster ID and assigns it to the broker when it registers, if there is no entry already. This new cluster ID will differ from the one in meta.properties.
This issue can be fixed by following one of the steps below (a sketch follows the list):
delete both the Kafka log.dirs and the ZooKeeper dataDir - results in data loss; both the Kafka and ZooKeeper services need to be restarted
delete meta.properties in the Kafka log.dirs directory - no data loss; the Kafka service needs to be restarted anyway
update the cluster ID in meta.properties with the value stored in ZooKeeper data; in this case, replace RJXzPwJeRfawIa_yA0B26A with 6WTadNCMRAW4dHoc_JUnIg - no data loss; the Kafka service needs to be restarted anyway
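A minimal shell sketch of the last two options, assuming log.dirs=/var/lib/kafka/data and a systemd unit named kafka (both are placeholders; adjust to your server.properties and service names):
# inspect the stored cluster ID
cat /var/lib/kafka/data/meta.properties
# option 2: remove the file and let the broker recreate it on startup
rm /var/lib/kafka/data/meta.properties
# option 3: instead of deleting, rewrite cluster.id with the value held in ZooKeeper
sed -i 's/^cluster.id=.*/cluster.id=6WTadNCMRAW4dHoc_JUnIg/' /var/lib/kafka/data/meta.properties
# restart the broker afterwards
systemctl restart kafka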
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file
The Client section is used to authenticate a SASL connection with ZooKeeper. The javax.security.auth.login.LoginException above is effectively a warning: Kafka will connect to the ZooKeeper server without SASL authentication if ZooKeeper allows it.
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
The KafkaServer section is used by the broker and provides SASL configuration options for inter-broker connections. The username and password are used by the broker to initiate connections to other brokers. The set of user_<username> properties defines the passwords for all users that connect to the broker.
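If you do want the broker to authenticate to ZooKeeper with SASL as well, a Client section can be added to the same JAAS file. A minimal sketch, assuming ZooKeeper is configured for DIGEST-MD5 client authentication (the zkclient username and password are placeholders and must match what the ZooKeeper server expects):
KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_alice="alice-secret";
};
Client {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="zkclient"
   password="zkclient-secret";
};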
I installed ZooKeeper and one Kafka broker server on one of my cloud server instances, and they are working well. But when trying to connect to the remote ZooKeeper server, the Kafka brokers on my local PC are not able to reach that IP address and port number. The firewall is also inactive.
The summary is:
one ZooKeeper server - in a cloud instance [146.646.64.66*]
one Kafka broker server - in a cloud instance [146.646.64.66*]
two Kafka broker servers - on my local PC [localhost]
I have updated the zookeeper.connect property in the local Kafka brokers' properties files as follows:
zookeeper.connect=146.646.64.66*:2181
The following is the error shown in the console:
[2021-06-17 19:47:01,443] INFO Initiating client connection, connectString=174.138.31.159:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#6736fa8d (org.apache.zookeeper.ZooKeeper)
[2021-06-17 19:47:01,468] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-06-17 19:47:01,545] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:01,553] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-06-17 19:47:19,557] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2021-06-17 19:47:21,663] INFO Opening socket connection to server 146.646.64.66*/146.646.64.66*:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:21,801] WARN Client session timed out, have not heard from server in 20251ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:21,929] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2021-06-17 19:47:21,929] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
[2021-06-17 19:47:21,934] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2021-06-17 19:47:21,944] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:271)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:267)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:125)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1948)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:431)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:456)
at kafka.server.KafkaServer.startup(KafkaServer.scala:191)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
[2021-06-17 19:47:21,982] INFO shutting down (kafka.server.KafkaServer)
Please help me solve this problem.
Remove all cached log files, or change the log directory path in the server.properties file that you are going to run. The cached log files' data can be affected by your server's history.
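A minimal sketch of that suggestion, assuming the broker's data directory is /tmp/kafka-logs (a placeholder; check log.dirs in your server.properties):
# either point the broker at a fresh data directory in server.properties:
#   log.dirs=/tmp/kafka-logs-new
# or clear the existing cached data before starting the broker again
rm -rf /tmp/kafka-logs/*
bin/kafka-server-start.sh config/server.properties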
I have set up a 3-node Kafka cluster and a 3-node ZooKeeper cluster on separate nodes. Using Kafka I can produce and consume messages successfully and run commands like kafka-topics.sh to get topic lists and their information from ZooKeeper, but there are some errors in the Kafka server.log file. The following warning appears continuously:
[2018-02-18 21:50:01,241] WARN Client session timed out, have not heard from server in 320190154ms for sessionid 0x161a94b101f0001 (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:01,242] INFO Client session timed out, have not heard from server in 320190154ms for sessionid 0x161a94b101f0001, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:01,343] INFO zookeeper state changed (Disconnected) (org.I0Itec.zkclient.ZkClient)
[2018-02-18 21:50:01,989] INFO Opening socket connection to server zookeeper3/192.168.1.206:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:02,008] INFO Socket connection established to zookeeper3/192.168.1.206:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:02,042] INFO Session establishment complete on server zookeeper3/192.168.1.206:2181, sessionid = 0x161a94b101f0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-02-18 21:50:02,042] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2018-02-18 21:59:31,570] INFO [Group Metadata Manager on Broker 102]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
It seems the Kafka sessions in ZooKeeper expire periodically!
The ZooKeeper logs contain the following warnings, too:
2018-02-18 18:20:06,149 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#368] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x161a94b101f0001, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)
2018-02-18 18:20:06,151 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /192.168.1.203:43162 which had sessionid 0x161a94b101f0001
2018-02-18 18:20:06,781 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#368] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x161a94b101f0002, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)
2018-02-18 18:20:06,782 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /192.168.1.201:45330 which had sessionid 0x161a94b101f0002
2018-02-18 18:37:29,127 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#192] - Accepted socket connection from /192.168.1.202:52480
2018-02-18 18:37:29,139 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#942] - Client attempting to establish new session at /192.168.1.202:52480
2018-02-18 18:37:29,143 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer#687] - Established session 0x161a94b101f0003 with negotiated timeout 30000 for client /192.168.1.202:52480
2018-02-18 18:37:29,432 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1044] - Closed socket connection for client /192.168.1.202:52480 which had sessionid 0x161a94b101f0003
I think it's because ZooKeeper can't get heartbeats from the Kafka nodes. The following is the ZooKeeper zoo.cfg:
tickTime=2000
dataDir=/var/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
and the customized Kafka server.properties settings:
broker.id=1
listeners = PLAINTEXT://kafka1:9092
num.partitions=24
delete.topic.enable=true
default.replication.factor=2
log.dirs=/data/kafka/data
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
log.retention.hours=168
I use the same ZooKeeper cluster for Hadoop HA without any problem. I think there is something wrong with the Kafka properties listeners and advertised.listeners. I read the Kafka documentation but couldn't understand their meaning.
In the hosts file of all the machines, hostnames zookeeper1 to zookeeper3 and kafka1 to kafka3 are defined and reachable via the ping command. I removed the following lines from the hosts files:
127.0.0.1 localhost
127.0.1.1 hostname
I don't think that could cause the problem.
Kafka version: 0.11
Zookeeper version: 3.4.10
Can anyone help?
We were facing a similar issue with Kafka. As @Soheil pointed out, it was due to a major GC running.
When a major GC runs, Kafka is sometimes not able to send its heartbeat to ZooKeeper. For us the major GC was running almost once every 15 seconds. On taking a heap dump, we realized it was due to a metrics memory leak in Kafka.
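A minimal sketch of how such GC pressure can be confirmed and a heap dump captured, assuming the broker's PID is known (jstat and jmap ship with the JDK; the PID and file path are placeholders):
# watch the FGC (full GC count) and FGCT (full GC time) columns grow across samples
jstat -gcutil <kafka-pid> 1000
# capture a heap dump of live objects for offline analysis
jmap -dump:live,format=b,file=/tmp/kafka-heap.hprof <kafka-pid>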
I did a fresh install of both ZooKeeper and Kafka and started them both. Then, when I want to see the list of topics with this command:
bin/kafka-topics.sh --list --zookeeper localhost 2181
it gives me a closed socket connection. Here is the output:
darpanshah@darpan-ubuntu:/opt/Kafka$ bin/kafka-topics.sh --list --zookeeper localhost 2181
[2016-08-17 10:44:44,053] INFO Accepted socket connection from /127.0.0.1:48452 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-08-17 10:44:44,059] INFO Client attempting to establish new session at /127.0.0.1:48452 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-17 10:44:44,069] INFO Established session 0x15698f5ac360001 with negotiated timeout 30000 for client /127.0.0.1:48452 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-08-17 10:44:44,095] INFO Processed session termination for sessionid: 0x15698f5ac360001 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-08-17 10:44:44,105] INFO Closed socket connection for client /127.0.0.1:48452 which had sessionid 0x15698f5ac360001 (org.apache.zookeeper.server.NIOServerCnxn)
darpanshah@darpan-ubuntu:/opt/Kafka$
Thanks in advance. Got stuck.
That is not an error. The topic details are fetched from ZooKeeper, so the client (invoked by kafka-topics.sh) first connects to ZooKeeper, establishes a session, gets the data, and then disconnects at the end.
This is the expected behavior of any client that fetches data from ZooKeeper.
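As a side note (an assumption about the intended invocation), the ZooKeeper address is normally passed as a single host:port argument rather than two separate tokens:
bin/kafka-topics.sh --list --zookeeper localhost:2181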
How to programmatically detect which server in a ZooKeeper ensemble a client is connected to?
I'm using the Apache Curator API and I am listening for connection state changes by registering a ConnectionStateListener. I would like to know which server in the ensemble the client is connected to after it reconnects, if the server it was previously connected to goes down.
You can see this in the logs produced by Curator. In the example output below, the CuratorFramework client has been given 4 different ZooKeeper instances in the connection string that it can connect to. As can be seen in the log, it chooses the first:
21:13:45.384 [main] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
21:13:45.386 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183,127.0.0.1:2184 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState#2876f0c
21:13:45.388 [main-SendThread(127.0.0.1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
21:13:45.388 [main-SendThread(127.0.0.1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:2181, initiating session
21:13:45.392 [main-SendThread(127.0.0.1:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x14aac461eb70004, negotiated timeout = 40000
If the ZooKeeper server that the client is connected to crashes, you will also see in the logs the new server that the client connects to:
21:23:03.675 [main-SendThread(127.0.0.1:2182)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2182. Will not attempt to authenticate using SASL (unknown error)
21:23:03.677 [main-SendThread(127.0.0.1:2182)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:2182, initiating session
21:23:03.697 [main-SendThread(127.0.0.1:2182)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:2182, sessionid = 0x14aac461eb70004, negotiated timeout = 40000
21:23:03.697 [main-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager - State change: RECONNECTED
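If you want to detect the connected server programmatically rather than from the logs, one possible approach (a sketch, not a definitive solution) is to inspect the underlying ZooKeeper handle from the listener: Curator exposes it via getZookeeperClient().getZooKeeper(), and the handle's string form typically includes the remote server address of the current session. The connect string and ports below are placeholders:
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.framework.state.ConnectionStateListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ConnectedServerLogger {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183",
                new ExponentialBackoffRetry(1000, 3));

        client.getConnectionStateListenable().addListener(new ConnectionStateListener() {
            @Override
            public void stateChanged(CuratorFramework cf, ConnectionState newState) {
                if (newState == ConnectionState.CONNECTED || newState == ConnectionState.RECONNECTED) {
                    try {
                        // The ZooKeeper handle's toString() usually contains the session id
                        // and the remote server the client is currently talking to.
                        System.out.println(newState + " -> "
                                + cf.getZookeeperClient().getZooKeeper());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        });

        client.start();
        client.blockUntilConnected();
        Thread.sleep(Long.MAX_VALUE); // keep the process alive to observe reconnects
    }
}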