kafka_2.12-2.3.0 broker shutdown in Windows 10 - apache-kafka

Below is the error I am getting in the console while trying to start the Kafka server with the kafka-server-start command in Command Prompt.
ERROR Error while creating log for kafka_example-0 in dir
C:\Users\user11\Softwares\kafka_2.12-2.3.0\kafka_logs (kafka.server.LogDirFailureChannel)
java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
at java.io.RandomAccessFile.setLength(Native Method)
at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:188)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:174)
at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:240)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:240)
INFO [ReplicaManager broker=0] Stopping serving replicas in dir
C:\Users\user11\Softwares\kafka_2.12-2.3.0\kafka_logs (kafka.server.ReplicaManager)
ERROR [ReplicaManager broker=0] Error while making broker the leader for partition Topic: kafka_example; Partition: 0; Leader: None; AllReplicas: ; InSyncReplicas: in dir None (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.KafkaStorageException: Error while creating log for kafka_example-0 in dir C:\Users\user11\Softwares\kafka_2.12-2.3.0\kafka_logs
Caused by: java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
at java.io.RandomAccessFile.setLength(Native Method)
at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:188)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:174)
at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:240)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:240)
at kafka.log.LogSegment.recover(LogSegment.scala:397)
at kafka.log.Log.recoverSegment(Log.scala:493)
at kafka.log.Log.recoverLog(Log.scala:608)
at kafka.log.Log.$anonfun$loadSegments$3(Log.scala:568)
INFO Replica loaded for partition --from-beginning-0 with initial high watermark 0 (kafka.cluster.Replica)
INFO [Partition --from-beginning-0 broker=0] --from-beginning-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(__consumer_offsets-0, --from-beginning-0, Kafka_Example-0) (kafka.server.ReplicaFetcherManager)
INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(__consumer_offsets-0, --from-beginning-0, Kafka_Example-0) (kafka.server.ReplicaAlterLogDirsManager)
INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions __consumer_offsets-0,--from-beginning-0,Kafka_Example-0 and stopped moving logs for partitions because they are in the failed log directory C:\Users\user11\Softwares\kafka_2.12-2.3.0\kafka_logs. (kafka.server.ReplicaManager)
INFO Stopping serving logs in dir C:\Users\user11\Softwares\kafka_2.12-2.3.0\kafka_logs (kafka.log.LogManager)
ERROR Shutdown broker because all log dirs in C:\Users\user11\Softwares\kafka_2.12-2.3.0\kafka_logs have failed (kafka.log.LogManager)
On my local machine I am using Java 8, and the Kafka version is the one mentioned in the subject (kafka_2.12-2.3.0).
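For reference, the start sequence being run is roughly the following; the install path is an assumption inferred from the failed log directory in the error above:
rem assumed install dir, inferred from the log directory in the error
cd C:\Users\user11\Softwares\kafka_2.12-2.3.0
rem server.properties is assumed to set log.dirs to the kafka_logs directory shown above
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
.\bin\windows\kafka-server-start.bat .\config\server.properties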

Related

Kafka - Failed to clean up log for __consumer_offsets-10 in dir

I am seeing the following exception in one of the broker log files.
Setup: 3 brokers.
I am OK with removing the files in the C:\tmp directory. However, I am a little curious why this broker got into this state.
log4j:ERROR Failed to rename [C:\confluent-5.5.0/logs/log-cleaner.log] to [C:\confluent-5.5.0/logs/log-cleaner.log.2020-06-18-09].
[2020-06-18 14:10:41,361] ERROR Failed to clean up log for __consumer_offsets-10 in dir C:\tmp\kafka-logs-3 due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: C:\tmp\kafka-logs-3\__consumer_offsets-10\00000000000000000000.timeindex.cleaned -> C:\tmp\kafka-logs-3\__consumer_offsets-10\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:834)
at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:207)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
at kafka.log.Log.$anonfun$replaceSegments$4(Log.scala:2288)
at kafka.log.Log.$anonfun$replaceSegments$4$adapted(Log.scala:2288)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.Log.replaceSegments(Log.scala:2288)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:605)
at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:530)
at kafka.log.Cleaner.doClean(LogCleaner.scala:529)
at kafka.log.Cleaner.clean(LogCleaner.scala:503)
at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:372)
at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:345)
at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:325)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:314)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
Suppressed: java.nio.file.FileSystemException: C:\tmp\kafka-logs-3\__consumer_offsets-10\00000000000000000000.timeindex.cleaned -> C:\tmp\kafka-logs-3\__consumer_offsets-10\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:831)
... 15 more
[2020-06-18 14:10:41,441] WARN [ReplicaManager broker=3] Stopping serving replicas in dir C:\tmp\kafka-logs-3 (kafka.server.ReplicaManager)
[2020-06-18 14:10:41,445] INFO [ReplicaFetcherManager on broker 3] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-4, stock-prices-2, __consumer_offsets-7, __consumer_offsets-46, stock-prices-1, __consumer_offsets-25, __consumer_offsets-49, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-37, stock-prices-0, __consumer_offsets-19, stock_topic-0, __consumer_offsets-13, __consumer_offsets-43, __consumer_offsets-1, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
[2020-06-18 14:10:41,448] INFO [ReplicaAlterLogDirsManager on broker 3] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-4, stock-prices-2, __consumer_offsets-7, __consumer_offsets-46, stock-prices-1, __consumer_offsets-25, __consumer_offsets-49, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-37, stock-prices-0, __consumer_offsets-19, stock_topic-0, __consumer_offsets-13, __consumer_offsets-43, __consumer_offsets-1, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-40) (kafka.server.ReplicaAlterLogDirsManager)
[2020-06-18 14:10:41,492] WARN [ReplicaManager broker=3] Broker 3 stopped fetcher for partitions __consumer_offsets-22,__consumer_offsets-4,stock-prices-2,__consumer_offsets-7,__consumer_offsets-46,stock-prices-1,__consumer_offsets-25,__consumer_offsets-49,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-37,stock-prices-0,__consumer_offsets-19,stock_topic-0,__consumer_offsets-13,__consumer_offsets-43,__consumer_offsets-1,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-40 and stopped moving logs for partitions because they are in the failed log directory C:\tmp\kafka-logs-3. (kafka.server.ReplicaManager)
[2020-06-18 14:10:41,494] WARN Stopping serving logs in dir C:\tmp\kafka-logs-3 (kafka.log.LogManager)
[2020-06-18 14:10:41,576] ERROR Shutdown broker because all log dirs in C:\tmp\kafka-logs-3 have failed (kafka.log.LogManager)
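For what it's worth, if you do go ahead and delete the files, a minimal sketch of that cleanup (assuming the broker is fully stopped first, and that replicated partitions will be restored from the other two brokers):
rem run only after the broker process on this machine is stopped
rmdir /s /q C:\tmp\kafka-logs-3
rem recreate the empty log dir so the broker can start cleanly
mkdir C:\tmp\kafka-logs-3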

Shutdown broker because all log dirs have failed

[2019-10-29 10:09:36,903] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions __consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-46,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-36,__consumer_offsets-42,topic-0,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-11,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-39,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-10 and stopped moving logs for partitions because they are in the failed log directory C:\tmp\kafka-logs. (kafka.server.ReplicaManager)
[2019-10-29 10:09:36,908] INFO Stopping serving logs in dir C:\tmp\kafka-logs (kafka.log.LogManager)
[2019-10-29 10:09:36,952] ERROR Shutdown broker because all log dirs in C:\tmp\kafka-logs have failed (kafka.log.LogManager)
I have started ZooKeeper, Kafka, and the producer. But when I try to consume data, this error immediately appears on Windows.
Command: .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic topic
I had a similar issue and had to resort to trial and error. What I eventually did was disable the other JRE versions and leave only one enabled. See image attached. This seems to have resolved my problem, since my broker doesn't crash anymore.
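As a quick way to confirm which Java installation the broker actually picks up (standard Windows commands; nothing Kafka-specific assumed):
rem lists every java.exe on the PATH, in resolution order
where java
rem prints the version of the one that wins
java -version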

Kafka broker shutdown while cleaning up log files

We have a Kafka cluster with 3 brokers (version 1.1.0) that had been running well for over 6 months.
After 2018/12/12 we increased every topic from 3 to 48 partitions, and since then the brokers have been shutting down every 5-10 days.
We then upgraded the brokers from 1.1.0 to 2.1.0, but they still keep shutting down every 5-10 days.
Each time, one broker shuts down after the following error log; several minutes later the other 2 brokers shut down too, with the same error but for other partition log files.
[2019-01-11 17:16:36,572] INFO [ProducerStateManager partition=__transaction_state-11] Writing producer snapshot at offset 807760 (kafka.log.ProducerStateManager)
[2019-01-11 17:16:36,572] INFO [Log partition=__transaction_state-11, dir=/kafka/logs] Rolled new log segment at offset 807760 in 4 ms. (kafka.log.Log)
[2019-01-11 17:16:46,150] WARN Resetting first dirty offset of __transaction_state-35 to log start offset 194404 since the checkpointed offset 194345 is invalid. (kafka.log.LogCleanerManager$)
[2019-01-11 17:16:46,239] ERROR Failed to clean up log for __transaction_state-11 in dir /kafka/logs due to IOException (kafka.server.LogDirFailureChannel)
java.nio.file.NoSuchFileException: /kafka/logs/__transaction_state-11/00000000000000807727.log
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:222)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:488)
at kafka.log.Log.asyncDeleteSegment(Log.scala:1838)
at kafka.log.Log.$anonfun$replaceSegments$6(Log.scala:1901)
at kafka.log.Log.$anonfun$replaceSegments$6$adapted(Log.scala:1896)
at scala.collection.immutable.List.foreach(List.scala:388)
at kafka.log.Log.replaceSegments(Log.scala:1896)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:583)
at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:515)
at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:514)
at scala.collection.immutable.List.foreach(List.scala:388)
at kafka.log.Cleaner.doClean(LogCleaner.scala:514)
at kafka.log.Cleaner.clean(LogCleaner.scala:492)
at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:353)
at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:319)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:300)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Suppressed: java.nio.file.NoSuchFileException: /kafka/logs/__transaction_state-11/00000000000000807727.log -> /kafka/logs/__transaction_state-11/00000000000000807727.log.deleted
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:806)
... 17 more
[2019-01-11 17:16:46,245] INFO [ReplicaManager broker=2] Stopping serving replicas in dir /kafka/logs (kafka.server.ReplicaManager)
[2019-01-11 17:16:46,314] INFO Stopping serving logs in dir /kafka/logs (kafka.log.LogManager)
[2019-01-11 17:16:46,326] ERROR Shutdown broker because all log dirs in /kafka/logs have failed (kafka.log.LogManager)
If you have not changed the log.retention.bytes, log.retention.hours, log.retention.minutes, or log.retention.ms configs, Kafka tries to delete logs after 7 days. So, based on the exception, Kafka wants to clean up the file /kafka/logs/__transaction_state-11/00000000000000807727.log, but there is no such file in the Kafka log directory, and it throws an exception that causes the broker to shut down.
If you are able to shut down the cluster and ZooKeeper, do so and clean up /kafka/logs/__transaction_state-11 manually.
Note: I don't know whether it is harmful or not, but you can follow posts on how to safely remove a Kafka topic.
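For reference, those retention settings live in server.properties; a sketch showing the stock defaults behind the 7-day window mentioned above:
# time-based retention; log.retention.ms overrides log.retention.minutes, which overrides log.retention.hours
log.retention.hours=168
# size-based retention per partition; -1 means no size limit
log.retention.bytes=-1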

Monitor kafka under Prometheus and Grafana

I wish to monitor Kafka with Prometheus and Grafana.
I have downloaded kafka_2.11-0.10.0.0
cd kafka_2.11-0.10.0.0
and downloaded:
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.6/jmx_prometheus_javaagent-0.6.jar
wget https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-0-8-2.yml
Started ZooKeeper using:
nohup bin/zookeeper-server-start.sh config/zookeeper.properties >> zookeeper.log &
Set KAFKA_OPTS using:
KAFKA_OPTS="$KAFKA_OPTS -javaagent:$PWD/jmx_prometheus_javaagent-0.6.jar=7071:$PWD/kafka-0-8-2.yml"
Started Kafka using:
nohup bin/kafka-server-start.sh config/server.properties >> kafka.log &
Logs of ZooKeeper:
INFO Got user-level KeeperException when processing sessionid:0x15b18c79a630075 type:create cxid:0x81f216 zxid:0x2b87c2 txntype:-1 reqpath:n/a Error Path:/consumers/logstash/ids/logstash_wavescore-staging-1490769576466-92cd1041 Error:KeeperErrorCode = NodeExists for /consumers/logstash/ids/logstash_wavescore-staging-1490769576466-92cd1041 (org.apache.zookeeper.server.PrepRequestProcessor)
INFO Got user-level KeeperException when processing sessionid:0x15b18c79a630075 type:create cxid:0x81f219 zxid:0x2b87c3 txntype:-1 reqpath:n/a Error Path:/consumers/logstash/ids/logstash_wavescore-staging-1490769576466-92cd1041 Error:KeeperErrorCode = NodeExists for /consumers/logstash/ids/logstash_wavescore-staging-1490769576466-92cd1041 (org.apache.zookeeper.server.PrepRequestProcessor)
INFO Got user-level KeeperException when processing sessionid:0x15b18c79a630075 type:create cxid:0x81f21c zxid:0x2b87c4 txntype:-1 reqpath:n/a Error Path:/consumers/logstash/ids/logstash_wavescore-staging-1490769576466-92cd1041 Error:KeeperErrorCode = NodeExists for /consumers/logstash/ids/logstash_wavescore-staging-1490769576466-92cd1041 (org.apache.zookeeper.server.PrepRequestProcessor)
Logs of Kafka:
INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,36] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,39] (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,39] in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,42] (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,42] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,45] (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,45] in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from [__consumer_offsets,48] (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from [__consumer_offsets,48] in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
When I check netstat on the server, port 7071 is not open, and curl localhost:7071 results in curl: (7) couldn't connect to host.
Reference link:
https://www.robustperception.io/monitoring-kafka-with-prometheus/
In this link they were using Kafka version kafka_2.11-0.10.1.0.tgz, whereas I have downloaded kafka_2.11-0.10.0.0.
It might be the wrong variable. I have managed to monitor my Kafka broker with the Jolokia agent and Metricbeat instead, but the approach should be the same. Here is the script I am using to start the broker:
export KAFKA_JMX_OPTS=-javaagent:/opt/kafka/jolokia-jvm-1.3.7-agent.jar=port=8778,host=localhost
./bin/kafka-server-start.sh -daemon config/server_cluster.properties
Hope it can help.
NB: Make sure the owner of the agent jar is the same as the user you use to launch the broker.
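If you would rather keep the jmx_exporter setup from the question, note that a bare KAFKA_OPTS=... assignment on its own line is not inherited by kafka-server-start.sh, which runs as a child process; the variable has to be exported (or set on the same command line). A sketch reusing the paths from the question:
export KAFKA_OPTS="$KAFKA_OPTS -javaagent:$PWD/jmx_prometheus_javaagent-0.6.jar=7071:$PWD/kafka-0-8-2.yml"
nohup bin/kafka-server-start.sh config/server.properties >> kafka.log &
# if the agent loaded, this should now return Prometheus-format metrics
curl -s localhost:7071/metrics | head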

Kafka, Unable to produce and consume events

While trying to set up Kafka on 2 replica boxes and 1 master box, I ran into a weird condition where I was not able to consume from or produce to a topic.
I am using MirrorMaker to sync data between replica <--> master, and I am getting the following logs endlessly:
[2016-08-26 14:28:33,897] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:43,515] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:45,118] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:46,721] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:48,324] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:49,927] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
[2016-08-26 14:28:53,029] WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
The only way I could recover was by restarting Kafka, which produced logs like these:
[2016-08-26 14:30:54,856] WARN Found a corrupted index file, /tmp/kafka-logs/__consumer_offsets-43/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2016-08-26 14:30:54,856] INFO Recovering unflushed segment 0 in log __consumer_offsets-43. (kafka.log.Log)
[2016-08-26 14:30:54,857] INFO Completed load of log __consumer_offsets-43 with log end offset 0 (kafka.log.Log)
[2016-08-26 14:30:54,860] WARN Found a corrupted index file, /tmp/kafka-logs/__consumer_offsets-26/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2016-08-26 14:30:54,860] INFO Recovering unflushed segment 0 in log __consumer_offsets-26. (kafka.log.Log)
[2016-08-26 14:30:54,861] INFO Completed load of log __consumer_offsets-26 with log end offset 0 (kafka.log.Log)
[2016-08-26 14:30:54,864] WARN Found a corrupted index file, /tmp/kafka-logs/__consumer_offsets-35/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)

ERROR Error when sending message to topic dr_ubr_analytics_limits with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
This is my test phase, so I was able to restart and recover from the master box, but I want to know what caused this issue and how it can be avoided. Is there a way to debug this issue?
Trying to achieve the following via Kafka.
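As for the debugging question: one low-tech first step is to check, from the box running MirrorMaker, whether a broker is actually reachable at the address it keeps disconnecting from (a sketch; assumes the stock Kafka 0.10-era CLI tools and ZooKeeper on the default localhost:2181):
# is anything listening where MirrorMaker expects a broker?
nc -vz localhost 9092
# can the cluster answer metadata queries?
bin/kafka-topics.sh --describe --zookeeper localhost:2181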