Mesos master setup - CentOS

I have already set up ZooKeeper, a Mesos master and a slave on my computer and it works, but when I use the same configuration on dedicated servers, I run into an issue with the master election.
I have just one master and 2 slaves, so the quorum = 1.
The main issue is that when I start the master, it is not elected as leader, because there is already a master with IP 127.0.0.1.
I use this command:
./mesos-master.sh --ip=172.16.10.11 --work_dir=/var/lib/mesos --zk=zk://172.16.10.11:2181/mesos --quorum=1 --hostname=172.16.10.11
The log for the master:
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0608 11:41:27.994071 6163 logging.cpp:188] INFO level logging started!
I0608 11:41:27.994287 6163 main.cpp:237] Build: 2016-06-07 17:52:23 by root
I0608 11:41:27.994300 6163 main.cpp:239] Version: 0.28.2
I0608 11:41:27.994307 6163 main.cpp:242] Git tag: 0.28.2
I0608 11:41:27.994313 6163 main.cpp:246] Git SHA: ceecad69bd9656cf405ca7378ad021c4ad51aaed
I0608 11:41:27.994349 6163 main.cpp:260] Using 'HierarchicalDRF' allocator
I0608 11:41:28.008420 6163 leveldb.cpp:174] Opened db in 13.989064ms
I0608 11:41:28.010076 6163 leveldb.cpp:181] Compacted db in 1.581918ms
I0608 11:41:28.010134 6163 leveldb.cpp:196] Created db iterator in 18237ns
I0608 11:41:28.010195 6163 leveldb.cpp:202] Seeked to beginning of db in 48811ns
I0608 11:41:28.010273 6163 leveldb.cpp:271] Iterated through 3 keys in the db in 64139ns
I0608 11:41:28.010313 6163 replica.cpp:779] Replica recovered with log positions 33 -> 34 with 0 holes and 0 unlearned
I0608 11:41:28.011041 6187 log.cpp:236] Attempting to join replica to ZooKeeper group
I0608 11:41:28.011241 6188 recover.cpp:447] Starting replica recovery
I0608 11:41:28.011683 6163 main.cpp:471] Starting Mesos master
I0608 11:41:28.014093 6183 recover.cpp:473] Replica is in VOTING status
I0608 11:41:28.014225 6183 recover.cpp:462] Recover process terminated
I0608 11:41:28.014282 6182 master.cpp:375] Master 36976def-b8f7-40e8-b843-895cf276bcd2 (172.16.10.11) started on 172.16.10.11:5050
I0608 11:41:28.014310 6182 master.cpp:377] Flags at startup: --allocation_interval="1secs" --allocator="HierarchicalDRF" --authenticate="false" --authenticate_http="false" --authenticate_slaves="false" --authenticators="crammd5" --authorizers="local" --framework_sorter="drf" --help="false" --hostname="172.16.10.11" --hostname_lookup="true" --http_authenticators="basic" --initialize_driver_logging="true" --ip="172.16.10.11" --log_auto_initialize="true" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --max_completed_frameworks="50" --max_completed_tasks_per_framework="1000" --max_slave_ping_timeouts="5" --port="5050" --quiet="false" --quorum="1" --recovery_slave_removal_limit="100%" --registry="replicated_log" --registry_fetch_timeout="1mins" --registry_store_timeout="20secs" --registry_strict="false" --root_submissions="true" --slave_ping_timeout="15secs" --slave_reregister_timeout="10mins" --user_sorter="drf" --version="false" --webui_dir="/usr/share/mesos/webui" --work_dir="/var/lib/mesos" --zk="zk://172.16.10.11:2181/mesos" --zk_session_timeout="10secs"
I0608 11:41:28.014503 6182 master.cpp:424] Master allowing unauthenticated frameworks to register
I0608 11:41:28.014511 6182 master.cpp:429] Master allowing unauthenticated slaves to register
I0608 11:41:28.014521 6182 master.cpp:467] Using default 'crammd5' authenticator
W0608 11:41:28.014533 6182 authenticator.cpp:511] No credentials provided, authentication requests will be refused
I0608 11:41:28.014755 6182 authenticator.cpp:518] Initializing server SASL
I0608 11:41:28.017192 6181 group.cpp:349] Group process (group(1)@172.16.10.11:5050) connected to ZooKeeper
I0608 11:41:28.017233 6181 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0608 11:41:28.017242 6181 group.cpp:427] Trying to create path '/mesos/log_replicas' in ZooKeeper
I0608 11:41:28.017684 6186 master.cpp:1650] Successfully attached file '/var/log/mesos/mesos-master.INFO'
I0608 11:41:28.017796 6186 contender.cpp:147] Joining the ZK group
I0608 11:41:28.018542 6186 group.cpp:349] Group process (group(4)@172.16.10.11:5050) connected to ZooKeeper
I0608 11:41:28.018571 6186 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0608 11:41:28.018579 6186 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I0608 11:41:28.018890 6183 group.cpp:349] Group process (group(3)@172.16.10.11:5050) connected to ZooKeeper
I0608 11:41:28.018918 6183 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0)
I0608 11:41:28.018928 6183 group.cpp:427] Trying to create path '/mesos' in ZooKeeper
I0608 11:41:28.019326 6184 group.cpp:349] Group process (group(2)@172.16.10.11:5050) connected to ZooKeeper
I0608 11:41:28.019338 6184 group.cpp:831] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0)
I0608 11:41:28.019343 6184 group.cpp:427] Trying to create path '/mesos/log_replicas' in ZooKeeper
I0608 11:41:28.019915 6187 network.hpp:413] ZooKeeper group memberships changed
I0608 11:41:28.019966 6186 detector.cpp:152] Detected a new leader: (id='21')
I0608 11:41:28.019968 6181 group.cpp:700] Trying to get '/mesos/log_replicas/0000000020' in ZooKeeper
I0608 11:41:28.020023 6186 group.cpp:700] Trying to get '/mesos/json.info_0000000021' in ZooKeeper
I0608 11:41:28.022219 6182 contender.cpp:263] New candidate (id='29') has entered the contest for leadership
I0608 11:41:28.022341 6181 group.cpp:700] Trying to get '/mesos/log_replicas/0000000027' in ZooKeeper
I0608 11:41:28.022517 6188 detector.cpp:479] A new leading master (UPID=master@127.0.0.1:5050) is detected
I0608 11:41:28.022569 6188 master.cpp:1711] The newly elected leader is master@127.0.0.1:5050 with id d34a6527-cd0d-4f74-8d5a-784488918f0c
I0608 11:41:28.023035 6185 network.hpp:461] ZooKeeper group PIDs: { log-replica(1)@127.0.0.1:5050, log-replica(1)@172.16.10.11:5050 }
E0608 11:41:28.023324 6189 process.cpp:1958] Failed to shutdown socket with fd 24: Transport endpoint is not connected
I0608 11:41:28.023620 6181 network.hpp:413] ZooKeeper group memberships changed
I0608 11:41:28.023685 6182 group.cpp:700] Trying to get '/mesos/log_replicas/0000000020' in ZooKeeper
I0608 11:41:28.024274 6182 group.cpp:700] Trying to get '/mesos/log_replicas/0000000027' in ZooKeeper
I0608 11:41:28.024601 6182 group.cpp:700] Trying to get '/mesos/log_replicas/0000000028' in ZooKeeper
I0608 11:41:28.024883 6182 network.hpp:461] ZooKeeper group PIDs: { log-replica(1)@127.0.0.1:5050, log-replica(1)@172.16.10.11:5050 }
E0608 11:41:28.025148 6189 process.cpp:1958] Failed to shutdown socket with fd 24: Transport endpoint is not connected
I0608 11:41:36.008740 6183 network.hpp:413] ZooKeeper group memberships changed
I0608 11:41:36.008821 6186 group.cpp:700] Trying to get '/mesos/log_replicas/0000000020' in ZooKeeper
I0608 11:41:36.009698 6186 group.cpp:700] Trying to get '/mesos/log_replicas/0000000028' in ZooKeeper
I0608 11:41:36.010126 6181 network.hpp:461] ZooKeeper group PIDs: { log-replica(1)@127.0.0.1:5050, log-replica(1)@172.16.10.11:5050 }
E0608 11:41:36.010527 6189 process.cpp:1958] Failed to shutdown socket with fd 26: Transport endpoint is not connected
I don't have any other Mesos instance on the network, and my master has ID 35069631-aace-4733-9440-60d9bc620d9a.
Edit: I have added the complete log.
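Since the detected leader advertises 127.0.0.1, it looks like a stale registration from an earlier run (for example a master started without --ip, which binds to the loopback address) is still present in ZooKeeper. A minimal way to check, assuming zkCli.sh from your ZooKeeper installation and using the znode name that appears in the log above:
# Ask the master which leader it currently sees
curl http://172.16.10.11:5050/master/state.json
# List the candidate registrations; the entry with the lowest sequence number wins the election
zkCli.sh -server 172.16.10.11:2181
ls /mesos
get /mesos/json.info_0000000021
If the lowest-numbered json.info_* entry contains 127.0.0.1, removing it (or clearing the /mesos path while all masters are stopped) should let the master at 172.16.10.11 win the election.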

Related

mysql table record not being consumed by Kafka

I just started learning Kafka and I am running Kafka 2.13-2.8.0 on Windows Server 2012 R2. I started ZooKeeper using the following:
zookeeper-server-start.bat ../../config/zookeeper.properties
I started kafka using the following:
kafka-server-start.bat ../../config/server.properties
I started a connector with the following:
connect-standalone.bat ../../config/connect-standalone.properties ../../config/mysql.properties
The content of my mysql.properties file is as follows:
name=test-source-mysql-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://127.0.0.1:3306/DBName?user=username&password=userpassword
mode=incrementing
incrementing.column.name=id
topic.prefix=test-mysql-jdbc-
I started a consumer with and without a partition option:
kafka-console-consumer.bat --topic test-mysql-jdbc-groups --bootstrap-server localhost:9092 --from-beginning [--partition 0]
Everything seemingly started without issues, but when I add a record to my MySQL table called groups, I do not see it in my consumer. I checked all the various logs. The only error messages I saw were in state-change.log and they looked like the following:
ERROR [Broker id=0] Ignoring StopReplica request (delete=true) from controller 0 with correlation id 5 epoch 1 for partition mytopic-2 as the local replica for the partition is in an offline log directory (state.change.logger)
ERROR [Broker id=0] Ignoring StopReplica request (delete=true) from controller 0 with correlation id 5 epoch 1 for partition mytopic-1 as the local replica for the partition is in an offline log directory (state.change.logger)
ERROR [Broker id=0] Ignoring StopReplica request (delete=true) from controller 0 with correlation id 5 epoch 1 for partition mytopic-0 as the local replica for the partition is in an offline log directory (state.change.logger)
ERROR [Broker id=0] Received LeaderAndIsrRequest with correlation id 1 from controller 0 epoch 2 for partition mytopic-0 (last update controller epoch 1) but cannot become follower since the new leader -1 is unavailable. (state.change.logger)
ERROR [Broker id=0] Received LeaderAndIsrRequest with correlation id 1 from controller 0 epoch 2 for partition mytopic-1 (last update controller epoch 1) but cannot become follower since the new leader -1 is unavailable. (state.change.logger)
ERROR [Broker id=0] Received LeaderAndIsrRequest with correlation id 1 from controller 0 epoch 2 for partition mytopic-2 (last update controller epoch 1) but cannot become follower since the new leader -1 is unavailable. (state.change.logger)
I also noticed this message in ZooKeeper:
INFO Expiring session timeout of exceeded (org.apache.zookeeper.server.ZooKeeperServer)
Please could anyone give me pointers as to what I could be doing wrong? Thanks
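One way to narrow this down, as a sketch (assuming the default Connect REST port 8083 and the connector name from mysql.properties above):
# Check whether the connector and its task are actually RUNNING
curl http://localhost:8083/connectors/test-source-mysql-jdbc-autoincrement/status
# Confirm the expected topic exists
kafka-topics.bat --list --bootstrap-server localhost:9092
If the task reports FAILED, its trace usually names the cause (for example a missing MySQL JDBC driver on the plugin path). The "offline log directory" errors above also suggest the broker's log.dirs (whatever path server.properties sets on Windows) may need to be cleaned out before those partitions can come back online.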

How to run confluent-5.3.2-2.12 platform?

Environment:
CentOS7
openjdk version "1.8.0_181"
I downloaded confluent-5.3.2-2.12.tar.gz and extracted to /opt/confluent.
I am following "Installing and Running KSQL | Level Up your KSQL by Confluent" (https://youtu.be/icwHpPm-TCA).
Executed the following commands:
[root@srvr0 ~]# cd /opt/confluent/confluent-5.3.2/bin/
[root@srvr0 bin]# confluent start
bash: confluent: command not found...
Update1:
With reference to https://docs.confluent.io/current/quickstart/ce-quickstart.html, I executed the following commands:
curl -L https://cnfl.io/cli | sh -s -- -b /opt/confluent/confluent-5.3.2/bin
/opt/confluent/confluent-5.3.2/bin/confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest
/opt/confluent/confluent-5.3.2/bin/confluent local start
Logs:
[root@srvr0 ~]# curl -L https://cnfl.io/cli | sh -s -- -b /opt/confluent/confluent-5.3.2/bin
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 162 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 10288 100 10288 0 0 3567 0 0:00:02 0:00:02 --:--:-- 16176
confluentinc/cli info checking S3 for latest tag
confluentinc/cli info found version: latest for latest/linux/amd64
confluentinc/cli info NOTICE: see licenses located in /tmp/tmp.h8m7jASeAh/confluent
confluentinc/cli info installed /opt/confluent/confluent-5.3.2/bin/confluent
confluentinc/cli info please ensure /opt/confluent/confluent-5.3.2/bin is in your PATH
[root@srvr0 ~]# cp /tmp/tmp.h8m7jASeAh/confluent
cp: missing destination file operand after ‘/tmp/tmp.h8m7jASeAh/confluent’
Try 'cp --help' for more information.
[root@srvr0 ~]# cp -a /tmp/tmp.h8m7jASeAh/confluent /opt/confluent
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest
Running in a "--no-prompt" mode
Implicit acceptance of the license below:
Apache License 2.0
https://www.apache.org/licenses/LICENSE-2.0
Downloading component Kafka Connect Datagen 0.2.0, provided by Confluent, Inc. from Confluent Hub and installing into /opt/confluent/confluent-5.3.2/share/confluent-hub-components
Adding installation directory to plugin path in the following files:
/opt/confluent/confluent-5.3.2/etc/kafka/connect-distributed.properties
/opt/confluent/confluent-5.3.2/etc/kafka/connect-standalone.properties
/opt/confluent/confluent-5.3.2/etc/schema-registry/connect-avro-distributed.properties
/opt/confluent/confluent-5.3.2/etc/schema-registry/connect-avro-standalone.properties
Completed
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.R3YJZ2UC
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
|control-center failed to start
control-center is [DOWN]
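When a service ends up [DOWN] like this, the CLI keeps per-service logs under the CONFLUENT_CURRENT directory printed above; a sketch for inspecting them (the log subcommand depends on your CLI version):
/opt/confluent/confluent-5.3.2/bin/confluent local log control-center
# or read the files directly
ls /tmp/confluent.R3YJZ2UC/control-center/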
Update2:
Logs:
[root@srvr0 confluent-5.3.2]# cat ./logs/controller.log
[2020-01-16 12:20:40,220] DEBUG preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@66d3c617, name=log4j:logger=kafka.controller (kafka.controller)
[2020-01-16 12:21:40,097] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:21:40,174] INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
[2020-01-16 12:21:40,176] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController)
[2020-01-16 12:21:40,182] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController)
[2020-01-16 12:21:40,193] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController)
[2020-01-16 12:21:40,197] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController)
[2020-01-16 12:21:40,361] INFO [Controller id=0] Initialized broker epochs cache: Map(0 -> 24) (kafka.controller.KafkaController)
[2020-01-16 12:21:40,370] DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController)
[2020-01-16 12:21:40,384] DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager)
[2020-01-16 12:21:40,444] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread)
[2020-01-16 12:21:40,445] INFO [Controller id=0] Partitions being reassigned: Map() (kafka.controller.KafkaController)
[2020-01-16 12:21:40,447] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController)
[2020-01-16 12:21:40,448] INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
[2020-01-16 12:21:40,448] INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
[2020-01-16 12:21:40,449] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController)
[2020-01-16 12:21:40,456] INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,456] INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,457] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController)
[2020-01-16 12:21:40,458] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: Set() (kafka.controller.TopicDeletionManager)
[2020-01-16 12:21:40,459] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController)
[2020-01-16 12:21:40,485] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,487] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,518] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,519] DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> Map() (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,523] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:21:40,525] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:21:40,535] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:40,539] DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> Map() (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:21:40,540] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2020-01-16 12:21:40,542] INFO [Controller id=0] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController)
[2020-01-16 12:21:40,543] INFO [Controller id=0] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController)
[2020-01-16 12:21:40,550] INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,551] INFO [Controller id=0] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,553] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,554] INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,555] INFO [Controller id=0] Starting preferred replica leader election for partitions (kafka.controller.KafkaController)
[2020-01-16 12:21:40,593] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController)
[2020-01-16 12:21:40,637] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:40,738] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
...
[2020-01-16 12:21:41,559] INFO [Controller id=0] New topics: [Set(__confluent.support.metrics)], deleted topics: [Set()], new partition replica assignment [Map(__confluent.support.metrics-0 -> Vector(0))] (kafka.controller.KafkaController)
[2020-01-16 12:21:41,559] INFO [Controller id=0] New partition creation callback for __confluent.support.metrics-0 (kafka.controller.KafkaController)
[2020-01-16 12:21:41,653] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:41,754] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
...
[2020-01-16 12:21:45,596] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2020-01-16 12:21:45,597] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-01-16 12:21:45,601] DEBUG [Controller id=0] Preferred replicas by broker Map(0 -> Map(__confluent.support.metrics-0 -> Vector(0))) (kafka.controller.KafkaController)
[2020-01-16 12:21:45,605] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2020-01-16 12:21:45,609] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2020-01-16 12:21:45,616] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:45,717] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
...
[2020-01-16 12:22:10,944] INFO [ControllerEventThread controllerId=0] Shutting down (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:22:10,946] INFO [ControllerEventThread controllerId=0] Stopped (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:22:10,946] INFO [ControllerEventThread controllerId=0] Shutdown completed (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:22:10,947] DEBUG [Controller id=0] Resigning (kafka.controller.KafkaController)
[2020-01-16 12:22:10,948] DEBUG [Controller id=0] Unregister BrokerModifications handler for Set(0) (kafka.controller.KafkaController)
[2020-01-16 12:22:10,951] INFO [PartitionStateMachine controllerId=0] Stopped partition state machine (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:22:10,953] INFO [ReplicaStateMachine controllerId=0] Stopped replica state machine (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:22:10,955] INFO [RequestSendThread controllerId=0] Shutting down (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,956] TRACE [RequestSendThread controllerId=0] shutdownInitiated latch count reached zero. Shutdown called. (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,956] INFO [RequestSendThread controllerId=0] Stopped (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,956] INFO [RequestSendThread controllerId=0] Shutdown completed (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,960] INFO [Controller id=0] Resigned (kafka.controller.KafkaController)
Update 3:
Now it's even worse... only ZooKeeper is starting; the other services are failing to start...
Logs:
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.R3YJZ2UC
Starting zookeeper
zookeeper is [UP]
Starting kafka
-Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment
Error: exit status 127
[root@srvr0 ~]#
Update 4:
confluent local start, zookeeper-server-start and kafka-server-start logs:
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent local start
Updates are available for confluent. To install them, please run:
$ confluent update
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.xgVLokw7
Starting zookeeper
zookeeper is [UP]
Starting kafka
|Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment
Error: exit status 127
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/zookeeper-server-start
USAGE: /opt/confluent/confluent-5.3.2/bin/zookeeper-server-start [-daemon] zookeeper.properties
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/kafka-server-start
USAGE: /opt/confluent/confluent-5.3.2/bin/kafka-server-start [-daemon] server.properties [--override property=value]*
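Both scripts expect the path to a properties file as their argument; for reference, a sketch using the config files shipped in the tarball:
/opt/confluent/confluent-5.3.2/bin/zookeeper-server-start /opt/confluent/confluent-5.3.2/etc/kafka/zookeeper.properties
/opt/confluent/confluent-5.3.2/bin/kafka-server-start /opt/confluent/confluent-5.3.2/etc/kafka/server.properties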
server.properties hasn't been edited; its contents are as follows:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics kafka cluster.
#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
#confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
#confluent.metrics.reporter.topic.replicas=1
##################### Confluent Proactive Support ######################
# If set to true, and confluent-support-metrics package is installed
# then the feature to collect and report support metrics
# ("Metrics") is enabled. If set to false, the feature is disabled.
#
confluent.support.metrics.enable=true
# The customer ID under which support metrics will be collected and
# reported.
#
# When the customer ID is set to "anonymous" (the default), then only a
# reduced set of metrics is being collected and reported.
#
# Confluent customers
# -------------------
# If you are a Confluent customer, then you should replace the default
# value with your actual Confluent customer ID. Doing so will ensure
# that additional support metrics will be collected and reported.
#
confluent.support.customer.id=anonymous
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
############################# Confluent Authorizer Settings #############################
# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
# Semi-colon separated list of super users in the format <principalType>:<principalName>
#super.users=
# Specify a valid Confluent license. By default free-tier license will be used
#confluent.license=
# Replication factor for the topic used for licensing. Default is 3.
confluent.license.topic.replication.factor=1
# Uncomment the following lines and specify values where required to enable RBAC
# Enable RBAC provider
#confluent.authorizer.access.rule.providers=ACL,RBAC
# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
# Replication factor for the metadata topic used for authorization. Default is 3.
confluent.metadata.topic.replication.factor=1
# Listeners for metadata server
#confluent.metadata.server.listeners=http://0.0.0.0:8090
# Advertised listeners for metadata server
#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
Please help me in resolving the issue!
It's not clear where srvr0:9092 is defined; I suggest reviewing your server.properties file to fix the connection strings.
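For illustration, a sketch of that kind of fix (the values are examples, not the definitive settings): either advertise an address that resolves, or make the hostname resolvable on the machine itself.
# In /opt/confluent/confluent-5.3.2/etc/kafka/server.properties
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
# Or map the hostname locally
echo "127.0.0.1 srvr0" >> /etc/hosts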
You don't need to run confluent at all. You can follow the base Apache Kafka guides for running both Zookeeper and Kafka
zookeeper-server-start + kafka-server-start
Or you can use Confluent's APT/YUM repos rather than just extracting tarballs, then use systemctl to control services.
Or, using Docker is another way to get started quickly.
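For the package route, a sketch (assuming Confluent's YUM repository has been added per their install docs; package and unit names here are for the 5.3.x line):
sudo yum install confluent-platform-2.12
sudo systemctl start confluent-zookeeper
sudo systemctl start confluent-kafka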

Can't change kafka broker-id in Incubator Helm chart?

I have one Zookeeper server (say xx.xx.xx.xxx:2181) running on one GCP Compute Instance VM separately.
I have 3 GKE clusters, all in different regions, on which I am trying to install Kafka broker nodes so that all nodes connect to one Zookeeper server (xx.xx.xx.xxx:2181).
I installed the Zookeeper server on the VM following this guide with zookeeper properties looking like below:
dataDir=/tmp/data
clientPort=2181
maxClientCnxns=0
initLimit=5
syncLimit=2
tickTime=2000
# list of servers
server.1=0.0.0.0:2888:3888
I am using this Incubator Helm Chart to deploy the brokers on GKE clusters.
As per the README.md I am trying to install with the below command:
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-kafka \
--set replicas=1,zookeeper.enabled=false,configurationOverrides."broker\.id"=1,configurationOverrides."zookeeper\.connect"="xx.xx.xx.xxx:2181" \
incubator/kafka
Error
When I deploy as described above on all three GKE clusters, only one of the brokers gets connected to the Zookeeper server and the other two pods just restart infinitely.
When I check the Zookeeper log (on the VM), it looks something like below:
...
[2019-10-30 14:32:30,930] INFO Accepted socket connection from /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-10-30 14:32:30,936] INFO Client attempting to establish new session at /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-10-30 14:32:30,938] INFO Established session 0x100009621af0057 with negotiated timeout 6000 for client /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-10-30 14:32:32,335] INFO Got user-level KeeperException when processing sessionid:0x100009621af0057 type:create cxid:0xc zxid:0x422 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:34,472] INFO Got user-level KeeperException when processing sessionid:0x100009621af0057 type:create cxid:0x14 zxid:0x424 txntype:-1 reqpath:n/a Error Path:/brokers/ids/0 Error:KeeperErrorCode = NodeExists for /brokers/ids/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:35,126] INFO Processed session termination for sessionid: 0x100009621af0057 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:35,127] INFO Closed socket connection for client /xx.xx.xx.xxx:54978 which had sessionid 0x100009621af0057 (org.apache.zookeeper.server.NIOServerCnxn)
[2019-10-30 14:36:49,123] INFO Expiring session 0x100009621af003b, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
...
I am sure I have created firewall rules to open the necessary ports, and that is not a problem because one of the broker nodes is able to connect (the one that reaches ZooKeeper first).
To me, this seems like the brokerId is not getting changed for some reason, and that is why Zookeeper is rejecting the connections.
I say this because kubectl logs pod/my-kafka-n outputs something like below:
...
[2019-10-30 19:56:24,614] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
...
[2019-10-30 19:56:24,627] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
...
As we can see, the output above says brokerId=0 for all of the pods in all 3 clusters.
However, when I do kubectl exec -ti pod/my-kafka-n -- env | grep BROKER, I can see the environment variable KAFKA_BROKER_ID is changed to 1, 2 and 3 for different brokers as I set.
What am I doing wrong? What is the correct way to change the kafka-broker id or to make all brokers connect to one Zookeeper instance?
make all brokers connect to one Zookeeper instance?
Seems like you are doing that okay via the configurationOverrides option. That'll deploy all pods with the same configuration.
That being said, the broker ID should not be the same per pod. If you inspect the statefulset YAML, it appears that the broker ID is calculated based on the POD_NAME variable.
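For illustration, the ordinal-based derivation in the chart's statefulset template looks roughly like this (a sketch, not the chart's exact script):
# POD_NAME is like my-kafka-0, my-kafka-1, ...; the trailing ordinal becomes the broker id
export KAFKA_BROKER_ID=${POD_NAME##*-}
exec kafka-server-start /etc/kafka/server.properties --override broker.id=${KAFKA_BROKER_ID}
Note that each of your three clusters runs its own StatefulSet whose ordinals start at 0, so the first pod in every cluster would derive broker ID 0, which would match the NodeExists for /brokers/ids/0 errors in the ZooKeeper log.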
Sidenote
3 GKE clusters all in different regions on which I am trying to install Kafka broker nodes so that all nodes connect to one Zookeeper server
It's not clear to me how you would be able to deploy to 3 separate clusters in one API call. But this architecture isn't recommended by the Kafka, Zookeeper, or Kubernetes communities unless these regions are "geographically close"

Kafka gives Invalid receive size with Hyperledger Fabric Orderer connection

I was setting up a new cluster for Hyperledger Fabric on EKS. The cluster has 4 Kafka nodes, 3 Zookeeper nodes, 4 peers, 3 orderers, and 1 CA. All the containers come up individually, and the Kafka/Zookeeper backend is also stable. I can SSH into any Kafka/Zookeeper node and check for connections to any other nodes, create topics, post messages, etc. Kafka is accessible via Telnet from all orderers.
When I try to create a channel I get the following error from the orderer:
2019-04-25 13:34:17.660 UTC [orderer.common.broadcast] ProcessMessage -> WARN 025 [channel: channel1] Rejecting broadcast of message from 192.168.94.15:53598 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
2019-04-25 13:34:17.660 UTC [comm.grpc.server] 1 -> INFO 026 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.94.15:53598 grpc.code=OK grpc.call_duration=14.805833ms
2019-04-25 13:34:17.661 UTC [common.deliver] Handle -> WARN 027 Error reading from 192.168.94.15:53596: rpc error: code = Canceled desc = context canceled
2019-04-25 13:34:17.661 UTC [comm.grpc.server] 1 -> INFO 028 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.94.15:53596 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.987468ms
And the Kafka leader reports the following error:
[2019-04-25 14:07:09,453] WARN [SocketServer brokerId=2] Unexpected error from /192.168.89.200; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295617 larger than 104857600)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)
at org.apache.kafka.common.network.Selector.poll(Selector.java:398)
at kafka.network.Processor.poll(SocketServer.scala:535)
at kafka.network.Processor.run(SocketServer.scala:452)
at java.lang.Thread.run(Thread.java:748)
[2019-04-25 14:13:53,917] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
The error indicates that you are receiving messages larger than the permitted maximum size, which defaults to ~100 MB. Try increasing the following property in the server.properties file so that it can fit a larger receive (in this case at least 369295617 bytes):
# Set to 500MB
socket.request.max.bytes=500000000
and then restart your Kafka Cluster.
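If the channel is expected to carry large blocks, the related broker limits are usually raised together; a sketch, where the values are illustrative and should track the orderer's AbsoluteMaxBytes:
message.max.bytes=103809024
replica.fetch.max.bytes=103809024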
If this doesn't work for you, then I guess you are trying to connect to a non-SSL listener. Therefore, you'd have to verify that the broker's SSL listener port is 9092 (or the corresponding port in case you are not using the default one). The following should do the trick:
listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL

Flink: HA mode - killing the leading JobManager terminates the standby JobManagers

I am trying to get Flink to run in HA mode using Zookeeper, but when I test it by killing the leading JobManager, all my standby JobManagers get killed too.
So instead of a standby JobManager taking over as the new leader, they all get killed, which isn't supposed to happen.
My setup:
4 servers, 3 of which have Zookeeper running, but only 1 server will host all the JobManagers.
ad011.local: Zookeeper + Jobmanagers
ad012.local: Zookeeper + Taskmanager
ad013.local: Zookeeper
ad014.local: nothing interesting
My masters file looks like this:
ad011.local:8081
ad011.local:8082
ad011.local:8083
My flink-conf.yaml:
jobmanager.rpc.address: ad011.local
blob.server.port: 6130,6131,6132
jobmanager.heap.mb: 512
taskmanager.heap.mb: 128
taskmanager.numberOfTaskSlots: 4
parallelism.default: 2
taskmanager.tmp.dirs: /var/flink/data
metrics.reporters: jmx
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.jmx.port: 8789,8790,8791
high-availability: zookeeper
high-availability.zookeeper.quorum: ad011.local:2181,ad012.local:2181,ad013.local:2181
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.path.cluster-id: /cluster-one
high-availability.storageDir: /var/flink/recovery
high-availability.jobmanager.port: 50000,50001,50002
When I run Flink using the start-cluster.sh script, I see my 3 JobManagers running, and going to the WebUI they all point to ad011.local:8081, which is the leader. Which is okay, I guess?
I then try to test the failover by killing the leader using kill, and then all my other standby JobManagers stop too.
This is what I see in my standby JobManager logs:
2017-09-29 08:08:41,590 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager at akka.tcp://flink@ad011.local:50002/user/jobmanager.
2017-09-29 08:08:41,590 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService@72d546c8.
2017-09-29 08:08:41,598 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Starting with JobManager akka.tcp://flink@ad011.local:50002/user/jobmanager on port 8083
2017-09-29 08:08:41,598 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService.
2017-09-29 08:08:41,645 INFO org.apache.flink.runtime.webmonitor.JobManagerRetriever - New leader reachable under akka.tcp://flink@ad011.local:50000/user/jobmanager:f7dc2c48-dfa5-45a4-a63e-ff27be21363a.
2017-09-29 08:08:41,651 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService.
2017-09-29 08:08:41,722 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - Received leader address but not running in leader ActorSystem. Cancelling registration.
2017-09-29 09:26:13,472 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@ad011.local:50000] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
2017-09-29 09:26:14,274 INFO org.apache.flink.runtime.jobmanager.JobManager - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
2017-09-29 09:26:14,284 INFO org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:6132
Any help would be appreciated.
Solved it by running my cluster using ./bin/start-cluster.sh instead of using service files (which call the same script); the service file apparently kills the other JobManagers.
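For anyone verifying a setup like this, a quick check, as a sketch (the znode path follows from the high-availability.zookeeper.path.root and path.cluster-id settings above), is to watch the leader latch in ZooKeeper while killing the leading JobManager:
# From any of the ZooKeeper hosts
./bin/zkCli.sh -server ad011.local:2181
ls /flink/cluster-one/leaderlatch
A surviving standby should re-acquire the latch, and the WebUI should fail over to it.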