JBoss/Keycloak cluster waits 1 minute before voting for a coordinator

I have a 3-node Keycloak cluster. If one node goes down, JBoss starts coordinator selection after 1 minute. Is it possible to decrease this timeout to reduce downtime? How can I configure the failed-node detection timeout?
[root@keycloak-01 ~]# date; systemctl stop keycloak
Tue May 25 11:35:46 MSK 2021
11:36:58,629 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-43,ejb,keycloak-02) ISPN000094: Received new cluster view for channel ejb: [keycloak-02|24] (2) [keycloak-02, keycloak-03]
11:36:58,630 INFO [org.infinispan.CLUSTER] (VERIFY_SUSPECT.TimerThread-43,ejb,keycloak-02) ISPN100001: Node keycloak-01 left the cluster
11:36:58,772 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p7-t1) [Context=quartz] ISPN100007: After merge (or coordinator change), recovered members [keycloak-01, keycloak-02, keycloak-03] with topology id 104
11:36:58,774 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p7-t1) [Context=quartz] ISPN100008: Updating cache members list [keycloak-02, keycloak-03], topology id 105
11:36:58,808 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p15-t2) [Context=offlineClientSessions] ISPN100002: Starting rebalance with members [keycloak-02, keycloak-03], phase READ_OLD_WRITE_ALL, topology id 106

I suggest paying closer attention to the 'Failure detection' protocols (FD and FD_ALL) in the JGroups docs. I solved my problem with the following:
<protocol type="FD_ALL">
    <property name="timeout">5000</property>
    <property name="interval">3000</property>
    <property name="timeout_check_interval">2000</property>
</protocol>
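For context, this FD_ALL override lives inside the JGroups stack used by the 'ejb' channel in standalone-ha.xml. A minimal sketch of the placement (the subsystem namespace version and the neighbouring protocols are assumptions; keep whatever your distribution ships):

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:7.0">
    <channels default="ee">
        <channel name="ee" stack="udp" cluster="ejb"/>
    </channels>
    <stacks>
        <stack name="udp">
            <transport type="UDP" socket-binding="jgroups-udp"/>
            <!-- ...PING, MERGE3... -->
            <!-- Suspect a silent node after ~5 s instead of the default -->
            <protocol type="FD_ALL">
                <property name="timeout">5000</property>
                <property name="interval">3000</property>
                <property name="timeout_check_interval">2000</property>
            </protocol>
            <!-- ...VERIFY_SUSPECT, pbcast.NAKACK2, pbcast.GMS... -->
        </stack>
    </stacks>
</subsystem>
```

Bear in mind that lower timeouts trade faster failover for a higher chance of false suspicions during long GC pauses or brief network hiccups, so don't go far below values you have load-tested. Note also that VERIFY_SUSPECT (visible as VERIFY_SUSPECT.TimerThread in the log above) adds its own delay before the view change is finalized.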


How to run confluent-5.3.2-2.12 platform?

Environment:
CentOS7
openjdk version "1.8.0_181"
I downloaded confluent-5.3.2-2.12.tar.gz and extracted it to /opt/confluent.
I am following "Installing and Running KSQL | Level Up your KSQL by Confluent" (https://youtu.be/icwHpPm-TCA).
Executed the following commands:
[root@srvr0 ~]# cd /opt/confluent/confluent-5.3.2/bin/
[root@srvr0 bin]# confluent start
bash: confluent: command not found...
Update 1:
With reference to https://docs.confluent.io/current/quickstart/ce-quickstart.html, I executed the following commands:
curl -L https://cnfl.io/cli | sh -s -- -b /opt/confluent/confluent-5.3.2/bin
/opt/confluent/confluent-5.3.2/bin/confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest
/opt/confluent/confluent-5.3.2/bin/confluent local start
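The installer will also remind you to put the bin directory on PATH; a minimal sketch for a bash shell, reusing the install prefix from the commands above:

```shell
# Make the Confluent CLI and wrapper scripts resolvable without full paths
CONFLUENT_HOME=/opt/confluent/confluent-5.3.2
export PATH="$CONFLUENT_HOME/bin:$PATH"

# After this, `confluent local start` can be run from any directory.
```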
Logs:
[root@srvr0 ~]# curl -L https://cnfl.io/cli | sh -s -- -b /opt/confluent/confluent-5.3.2/bin
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 162 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 10288 100 10288 0 0 3567 0 0:00:02 0:00:02 --:--:-- 16176
confluentinc/cli info checking S3 for latest tag
confluentinc/cli info found version: latest for latest/linux/amd64
confluentinc/cli info NOTICE: see licenses located in /tmp/tmp.h8m7jASeAh/confluent
confluentinc/cli info installed /opt/confluent/confluent-5.3.2/bin/confluent
confluentinc/cli info please ensure /opt/confluent/confluent-5.3.2/bin is in your PATH
[root@srvr0 ~]# cp /tmp/tmp.h8m7jASeAh/confluent
cp: missing destination file operand after ‘/tmp/tmp.h8m7jASeAh/confluent’
Try 'cp --help' for more information.
[root@srvr0 ~]# cp -a /tmp/tmp.h8m7jASeAh/confluent /opt/confluent
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest
Running in a "--no-prompt" mode
Implicit acceptance of the license below:
Apache License 2.0
https://www.apache.org/licenses/LICENSE-2.0
Downloading component Kafka Connect Datagen 0.2.0, provided by Confluent, Inc. from Confluent Hub and installing into /opt/confluent/confluent-5.3.2/share/confluent-hub-components
Adding installation directory to plugin path in the following files:
/opt/confluent/confluent-5.3.2/etc/kafka/connect-distributed.properties
/opt/confluent/confluent-5.3.2/etc/kafka/connect-standalone.properties
/opt/confluent/confluent-5.3.2/etc/schema-registry/connect-avro-distributed.properties
/opt/confluent/confluent-5.3.2/etc/schema-registry/connect-avro-standalone.properties
Completed
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.R3YJZ2UC
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
|control-center failed to start
control-center is [DOWN]
Update 2:
Logs:
[root@srvr0 confluent-5.3.2]# cat ./logs/controller.log
[2020-01-16 12:20:40,220] DEBUG preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer#66d3c617, name=log4j:logger=kafka.controller (kafka.controller)
[2020-01-16 12:21:40,097] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:21:40,174] INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
[2020-01-16 12:21:40,176] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController)
[2020-01-16 12:21:40,182] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController)
[2020-01-16 12:21:40,193] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController)
[2020-01-16 12:21:40,197] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController)
[2020-01-16 12:21:40,361] INFO [Controller id=0] Initialized broker epochs cache: Map(0 -> 24) (kafka.controller.KafkaController)
[2020-01-16 12:21:40,370] DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController)
[2020-01-16 12:21:40,384] DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager)
[2020-01-16 12:21:40,444] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread)
[2020-01-16 12:21:40,445] INFO [Controller id=0] Partitions being reassigned: Map() (kafka.controller.KafkaController)
[2020-01-16 12:21:40,447] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController)
[2020-01-16 12:21:40,448] INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
[2020-01-16 12:21:40,448] INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
[2020-01-16 12:21:40,449] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController)
[2020-01-16 12:21:40,456] INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,456] INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,457] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController)
[2020-01-16 12:21:40,458] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: Set() (kafka.controller.TopicDeletionManager)
[2020-01-16 12:21:40,459] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController)
[2020-01-16 12:21:40,485] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,487] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,518] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,519] DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> Map() (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:21:40,523] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:21:40,525] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:21:40,535] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:40,539] DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> Map() (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:21:40,540] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2020-01-16 12:21:40,542] INFO [Controller id=0] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController)
[2020-01-16 12:21:40,543] INFO [Controller id=0] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController)
[2020-01-16 12:21:40,550] INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,551] INFO [Controller id=0] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,553] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,554] INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
[2020-01-16 12:21:40,555] INFO [Controller id=0] Starting preferred replica leader election for partitions (kafka.controller.KafkaController)
[2020-01-16 12:21:40,593] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController)
[2020-01-16 12:21:40,637] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:40,738] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
...
[2020-01-16 12:21:41,559] INFO [Controller id=0] New topics: [Set(__confluent.support.metrics)], deleted topics: [Set()], new partition replica assignment [Map(__confluent.support.metrics-0 -> Vector(0))] (kafka.controller.KafkaController)
[2020-01-16 12:21:41,559] INFO [Controller id=0] New partition creation callback for __confluent.support.metrics-0 (kafka.controller.KafkaController)
[2020-01-16 12:21:41,653] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:41,754] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
...
[2020-01-16 12:21:45,596] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
[2020-01-16 12:21:45,597] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
[2020-01-16 12:21:45,601] DEBUG [Controller id=0] Preferred replicas by broker Map(0 -> Map(__confluent.support.metrics-0 -> Vector(0))) (kafka.controller.KafkaController)
[2020-01-16 12:21:45,605] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController)
[2020-01-16 12:21:45,609] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController)
[2020-01-16 12:21:45,616] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:295)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:249)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2020-01-16 12:21:45,717] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker srvr0:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to srvr0:9092 (id: 0 rack: null) failed.
...
[2020-01-16 12:22:10,944] INFO [ControllerEventThread controllerId=0] Shutting down (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:22:10,946] INFO [ControllerEventThread controllerId=0] Stopped (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:22:10,946] INFO [ControllerEventThread controllerId=0] Shutdown completed (kafka.controller.ControllerEventManager$ControllerEventThread)
[2020-01-16 12:22:10,947] DEBUG [Controller id=0] Resigning (kafka.controller.KafkaController)
[2020-01-16 12:22:10,948] DEBUG [Controller id=0] Unregister BrokerModifications handler for Set(0) (kafka.controller.KafkaController)
[2020-01-16 12:22:10,951] INFO [PartitionStateMachine controllerId=0] Stopped partition state machine (kafka.controller.ZkPartitionStateMachine)
[2020-01-16 12:22:10,953] INFO [ReplicaStateMachine controllerId=0] Stopped replica state machine (kafka.controller.ZkReplicaStateMachine)
[2020-01-16 12:22:10,955] INFO [RequestSendThread controllerId=0] Shutting down (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,956] TRACE [RequestSendThread controllerId=0] shutdownInitiated latch count reached zero. Shutdown called. (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,956] INFO [RequestSendThread controllerId=0] Stopped (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,956] INFO [RequestSendThread controllerId=0] Shutdown completed (kafka.controller.RequestSendThread)
[2020-01-16 12:22:10,960] INFO [Controller id=0] Resigned (kafka.controller.KafkaController)
Update 3:
Now it's even worse... only zookeeper is starting; the other services are failing to start...
Logs:
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.R3YJZ2UC
Starting zookeeper
zookeeper is [UP]
Starting kafka
-Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment
Error: exit status 127
[root@srvr0 ~]#
Update 4:
confluent local start, zookeeper-server-start and kafka-server-start logs:
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/confluent local start
Updates are available for confluent. To install them, please run:
$ confluent update
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.xgVLokw7
Starting zookeeper
zookeeper is [UP]
Starting kafka
|Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment
Error: exit status 127
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/zookeeper-server-start
USAGE: /opt/confluent/confluent-5.3.2/bin/zookeeper-server-start [-daemon] zookeeper.properties
[root@srvr0 ~]# /opt/confluent/confluent-5.3.2/bin/kafka-server-start
USAGE: /opt/confluent/confluent-5.3.2/bin/kafka-server-start [-daemon] server.properties [--override property=value]*
server.properties hasn't been edited; its contents are as follows:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics Kafka cluster.
#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
#confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
#confluent.metrics.reporter.topic.replicas=1
##################### Confluent Proactive Support ######################
# If set to true, and confluent-support-metrics package is installed
# then the feature to collect and report support metrics
# ("Metrics") is enabled. If set to false, the feature is disabled.
#
confluent.support.metrics.enable=true
# The customer ID under which support metrics will be collected and
# reported.
#
# When the customer ID is set to "anonymous" (the default), then only a
# reduced set of metrics is being collected and reported.
#
# Confluent customers
# -------------------
# If you are a Confluent customer, then you should replace the default
# value with your actual Confluent customer ID. Doing so will ensure
# that additional support metrics will be collected and reported.
#
confluent.support.customer.id=anonymous
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
############################# Confluent Authorizer Settings #############################
# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
# Semi-colon separated list of super users in the format <principalType>:<principalName>
#super.users=
# Specify a valid Confluent license. By default free-tier license will be used
#confluent.license=
# Replication factor for the topic used for licensing. Default is 3.
confluent.license.topic.replication.factor=1
# Uncomment the following lines and specify values where required to enable RBAC
# Enable RBAC provider
#confluent.authorizer.access.rule.providers=ACL,RBAC
# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
# Replication factor for the metadata topic used for authorization. Default is 3.
confluent.metadata.topic.replication.factor=1
# Listeners for metadata server
#confluent.metadata.server.listeners=http://0.0.0.0:8090
# Advertised listeners for metadata server
#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
Please help me in resolving the issue!
It's not clear where srvr0:9092 is defined; I suggest reviewing your server.properties file to fix the connection strings.
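If the listener does turn out to be the culprit, here is a hedged sketch of the relevant lines in etc/kafka/server.properties (the hostname is a placeholder, not taken from this environment):

```properties
# Bind and advertise an address that both the controller and clients can resolve;
# for a single-node dev box, localhost is usually safe.
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
```

If srvr0 is the machine's hostname, also check that it resolves locally (for example via an /etc/hosts entry), since the controller log shows the broker failing to connect back to srvr0:9092.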
You don't need to run confluent at all. You can follow the base Apache Kafka guides and run both ZooKeeper and Kafka directly with zookeeper-server-start and kafka-server-start.
Or you can use Confluent's APT/YUM repos rather than just extracting tarballs, then use systemctl to control services.
Or, using Docker is another way to get started quickly.

Can't change kafka broker-id in Incubator Helm chart?

I have one Zookeeper server (say xx.xx.xx.xxx:2181) running on one GCP Compute Instance VM separately.
I have 3 GKE clusters all in different regions on which I am trying to install Kafka broker nodes so that all nodes connect to one Zookeeper server(xx.xx.xx.xxx:2181).
I installed the Zookeeper server on the VM following this guide with zookeeper properties looking like below:
dataDir=/tmp/data
clientPort=2181
maxClientCnxns=0
initLimit=5
syncLimit=2
tickTime=2000
# list of servers
server.1=0.0.0.0:2888:3888
I am using this Incubator Helm Chart to deploy the brokers on GKE clusters.
As per the README.md I am trying to install with the below command:
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-kafka \
--set replicas=1,zookeeper.enabled=false,configurationOverrides."broker\.id"=1,configurationOverrides."zookeeper\.connect"="xx.xx.xx.xxx:2181" \
incubator/kafka
Error
When I deploy as described above on all three GKE clusters, only one of the brokers gets connected to the Zookeeper server and the other two pods just restart infinitely.
When I check the Zookeeper log (on the VM), it looks something like below:
...
[2019-10-30 14:32:30,930] INFO Accepted socket connection from /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-10-30 14:32:30,936] INFO Client attempting to establish new session at /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-10-30 14:32:30,938] INFO Established session 0x100009621af0057 with negotiated timeout 6000 for client /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-10-30 14:32:32,335] INFO Got user-level KeeperException when processing sessionid:0x100009621af0057 type:create cxid:0xc zxid:0x422 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:34,472] INFO Got user-level KeeperException when processing sessionid:0x100009621af0057 type:create cxid:0x14 zxid:0x424 txntype:-1 reqpath:n/a Error Path:/brokers/ids/0 Error:KeeperErrorCode = NodeExists for /brokers/ids/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:35,126] INFO Processed session termination for sessionid: 0x100009621af0057 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:35,127] INFO Closed socket connection for client /xx.xx.xx.xxx:54978 which had sessionid 0x100009621af0057 (org.apache.zookeeper.server.NIOServerCnxn)
[2019-10-30 14:36:49,123] INFO Expiring session 0x100009621af003b, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
...
I am sure I have created firewall rules to open necessary ports and that is not a problem because one of the broker nodes is able to connect (the one who reaches first).
To me, this seems like the brokerId is not getting changed for some reason, and that is why Zookeeper is rejecting the connections.
I say this because kubectl logs pod/my-kafka-n outputs something like below:
...
[2019-10-30 19:56:24,614] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
...
[2019-10-30 19:56:24,627] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
...
As we can see, the logs above say brokerId=0 for all of the pods in all 3 clusters.
However, when I do kubectl exec -ti pod/my-kafka-n -- env | grep BROKER, I can see the environment variable KAFKA_BROKER_ID is changed to 1, 2 and 3 for different brokers as I set.
What am I doing wrong? What is the correct way to change the kafka-broker id or to make all brokers connect to one Zookeeper instance?
make all brokers connect to one Zookeeper instance?
Seems like you are doing that okay via the configurationOverrides option. That'll deploy all pods with the same configuration.
That being said, the broker ID should not be the same per pod. If you inspect the statefulset YAML, it appears that the broker ID is calculated based on the POD_NAME variable.
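That derivation can be sketched as follows (an illustration of the idea, not the chart's verbatim init script): the StatefulSet pod name ends in an ordinal, and the broker ID is taken from it.

```shell
# StatefulSet pods are named <release>-<n>: my-kafka-0, my-kafka-1, ...
POD_NAME=my-kafka-2             # normally injected via the Kubernetes downward API
BROKER_ID="${POD_NAME##*-}"     # strip everything up to the last dash, leaving the ordinal
echo "broker.id=${BROKER_ID}"   # prints broker.id=2
```

Since each of your three clusters runs its own StatefulSet, every first pod has ordinal 0, so all three brokers compute broker.id=0 and collide on /brokers/ids/0, which would explain the NodeExists error in the ZooKeeper log regardless of what KAFKA_BROKER_ID is set to.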
Sidenote
3 GKE clusters all in different regions on which I am trying to install Kafka broker nodes so that all nodes connect to one Zookeeper server
It's not clear to me how you would be able to deploy to 3 separate clusters in one API call. But this architecture isn't recommended by the Kafka, Zookeeper, or Kubernetes communities unless the regions are "geographically close".

Kafka brokers not starting up

I previously had a 2-broker Kafka 0.10.1 cluster running correctly on my development servers with Zookeeper 3.3.6.
I recently tried upgrading the brokers to the latest Kafka, 2.3.0, but they didn't start.
Not much has changed in the configuration.
Can anybody point me to what could possibly be wrong? Why are the brokers not starting?
Changed server.properties on broker server 1
broker.id=1
log.dirs=/mnt/kafka_2.11-2.3.0/logs
zookeeper.connect=local1:2181,local2:2181
listeners=PLAINTEXT://local1:9092
advertised.listeners=PLAINTEXT://local1:9092
Changed server.properties on broker server 2
broker.id=2
log.dirs=/mnt/kafka_2.11-2.3.0/logs
zookeeper.connect=local1:2181,local2:2181
listeners=PLAINTEXT://local2:9092
advertised.listeners=PLAINTEXT://local2:9092
NOTE:
1. Zookeeper is running on both servers
2. Kafka directories namely /brokers, /brokers/ids, /consumers etc are getting created.
3. Nothing is getting registered under /brokers/ids. Zookeeper CLI get /brokers/ids returns
[]
4. Command lsof -i tcp:9092 returns
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 18290 cass 118u IPv6 52133 0t0 TCP local2:9092 (LISTEN)
5. logs/server.log has no errors logged.
6. No more logs are getting appended to server.log.
Server logs
[2019-07-01 10:56:14,534] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-07-01 10:56:14,801] INFO Awaiting socket connections on local2:9092. (kafka.network.Acceptor)
[2019-07-01 10:56:14,829] INFO [SocketServer brokerId=1] Created data-plane acceptor and processors for endpoint : EndPoint(local2,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2019-07-01 10:56:14,830] INFO [SocketServer brokerId=1] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2019-07-01 10:56:14,850] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,851] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,851] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,852] INFO [ExpirationReaper-1-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,860] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-07-01 10:56:14,892] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
From the docs regarding ZooKeeper
Stable version
The current stable branch is 3.4 and the latest release of that branch is 3.4.9.
Upgrading ZooKeeper to the latest version, 3.5.5, helped and the Kafka brokers started correctly.
It would have been great if the docs had stated the incompatibility with previous ZooKeeper versions.
PS: Answer added to help anyone stuck with a similar issue because of the ZooKeeper version.

Kafka gives Invalid receive size with Hyperledger Fabric Orderer connection

I was setting up a new cluster for Hyperledger Fabric on EKS. The cluster has 4 kafka nodes, 3 zookeeper nodes, 4 peers, 3 orderers, 1 CA. All the containers come up individually, and the kafka/zookeeper backend is also stable. I can SSH into any kafka/zookeeper and check for connections to any other nodes, create topics, post messages etc. The kafka is accessible via Telnet from all orderers.
When I try to create a channel I get the following error from the orderer:
2019-04-25 13:34:17.660 UTC [orderer.common.broadcast] ProcessMessage -> WARN 025 [channel: channel1] Rejecting broadcast of message from 192.168.94.15:53598 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
2019-04-25 13:34:17.660 UTC [comm.grpc.server] 1 -> INFO 026 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.94.15:53598 grpc.code=OK grpc.call_duration=14.805833ms
2019-04-25 13:34:17.661 UTC [common.deliver] Handle -> WARN 027 Error reading from 192.168.94.15:53596: rpc error: code = Canceled desc = context canceled
2019-04-25 13:34:17.661 UTC [comm.grpc.server] 1 -> INFO 028 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.94.15:53596 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.987468ms
And the Kafka leader reports the following error:
[2019-04-25 14:07:09,453] WARN [SocketServer brokerId=2] Unexpected error from /192.168.89.200; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295617 larger than 104857600)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)
at org.apache.kafka.common.network.Selector.poll(Selector.java:398)
at kafka.network.Processor.poll(SocketServer.scala:535)
at kafka.network.Processor.run(SocketServer.scala:452)
at java.lang.Thread.run(Thread.java:748)
[2019-04-25 14:13:53,917] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
The error indicates that you are receiving messages larger than the permitted maximum size, which defaults to ~100MB. Try increasing the following property in the server.properties file so that it can fit a larger receive (in this case at least 369295617 bytes):
# Set to 500MB
socket.request.max.bytes=500000000
and then restart your Kafka Cluster.
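One way to apply that change on each broker could look like the sketch below (the file path is illustrative and the pre-existing value is simulated by writing a dummy file; your real server.properties lives wherever your installation keeps it):

```shell
# Raise socket.request.max.bytes in a server.properties file.
CONF="./server.properties"                                # illustrative path
printf 'socket.request.max.bytes=104857600\n' > "$CONF"   # simulate the ~100MB default
# GNU sed; on macOS use `sed -i ''` instead of `sed -i`.
sed -i 's/^socket\.request\.max\.bytes=.*/socket.request.max.bytes=500000000/' "$CONF"
grep '^socket.request.max.bytes' "$CONF"
# prints: socket.request.max.bytes=500000000
```

After editing, each broker has to be restarted for the new limit to take effect.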
If this doesn't work for you, then I guess you are trying to connect to a non-SSL listener. In that case, you'd have to verify that the broker's SSL listener port is 9092 (or the corresponding port in case you are not using the default one). The following should do the trick:
listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL
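As a side note (my own observation, not part of the original answer): the oddly specific size in the error supports the TLS guess. A typical TLS ClientHello record begins with 0x16 (handshake) 0x03 0x01 (record version) followed by a two-byte record length whose high byte is usually 0x01; a plaintext Kafka listener reads those first four bytes as a big-endian message size:

```shell
# First four bytes of a typical TLS ClientHello, interpreted by a plaintext
# Kafka listener as a big-endian int32 "message size":
printf '%d\n' 0x16030101
# prints: 369295617
```

That matches the 369295617 in the log exactly, which strongly suggests a TLS client hitting a plaintext port.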

Flink: HA mode killing leading jobmanager terminating standby jobmanagers

I am trying to get Flink to run in HA mode using Zookeeper, but when I try to test it by killing the leader JobManager all my standby jobmanagers get killed too.
So instead of a standby jobmanager taking over as the new Leader, they all get killed which isn't supposed to happen.
My setup:
4 servers, 3 of those servers have Zookeeper running, but only 1 server will host all the JobManagers.
ad011.local: Zookeeper + Jobmanagers
ad012.local: Zookeeper + Taskmanager
ad013.local: Zookeeper
ad014.local: nothing interesting
My masters file looks like this:
ad011.local:8081
ad011.local:8082
ad011.local:8083
My flink-conf.yaml:
jobmanager.rpc.address: ad011.local
blob.server.port: 6130,6131,6132
jobmanager.heap.mb: 512
taskmanager.heap.mb: 128
taskmanager.numberOfTaskSlots: 4
parallelism.default: 2
taskmanager.tmp.dirs: /var/flink/data
metrics.reporters: jmx
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.jmx.port: 8789,8790,8791
high-availability: zookeeper
high-availability.zookeeper.quorum: ad011.local:2181,ad012.local:2181,ad013.local:2181
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.path.cluster-id: /cluster-one
high-availability.storageDir: /var/flink/recovery
high-availability.jobmanager.port: 50000,50001,50002
When I run Flink using the start-cluster.sh script I see my 3 JobManagers running, and going to the WebUI they all point to ad011.local:8081, which is the leader. Which is okay, I guess?
I then try to test failover by killing the leader using kill, but then all my other standby JobManagers stop too.
This is what I see in my standby JobManager logs:
2017-09-29 08:08:41,590 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager at akka.tcp://flink#ad011.local:50002/user/jobmanager.
2017-09-29 08:08:41,590 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService#72d546c8.
2017-09-29 08:08:41,598 INFO org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Starting with JobManager akka.tcp://flink#ad011.local:50002/user/jobmanager on port 8083
2017-09-29 08:08:41,598 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService.
2017-09-29 08:08:41,645 INFO org.apache.flink.runtime.webmonitor.JobManagerRetriever - New leader reachable under akka.tcp://flink#ad011.local:50000/user/jobmanager:f7dc2c48-dfa5-45a4-a63e-ff27be21363a.
2017-09-29 08:08:41,651 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService.
2017-09-29 08:08:41,722 INFO org.apache.flink.runtime.clusterframework.standalone.StandaloneResourceManager - Received leader address but not running in leader ActorSystem. Cancelling registration.
2017-09-29 09:26:13,472 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink#ad011.local:50000] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
2017-09-29 09:26:14,274 INFO org.apache.flink.runtime.jobmanager.JobManager - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
2017-09-29 09:26:14,284 INFO org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:6132
Any help would be appreciated.
Solved it by starting my cluster with ./bin/start-cluster.sh directly instead of via service files (which call the same script); apparently the service file kills the other JobManagers.