I am getting the following error:
20/02/15 07:31:33 INFO producer.SyncProducer: Connected to localhost:2181 for producing
20/02/15 07:31:33 INFO producer.SyncProducer: Disconnecting from localhost:2181
20/02/15 07:31:33 WARN client.ClientUtils$: Fetching topic metadata with correlation id 0 for topics [Set(flkf)] from broker [id:0,host:localhost,port:2181] failed
java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
You've pointed Flume at ZooKeeper, not a Kafka broker. Try port 9092.
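As a quick sanity check (the topic name flkf is taken from the log above, and the broker is assumed to be local on the default port), the console producer should connect cleanly once you point at 9092:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic flkf
If that works, point the Flume Kafka sink's broker list at the same host:9092 address instead of the ZooKeeper port.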
I am getting an error while migrating data between Kafka brokers.
I am using the kafka-reassign-partitions tool to reassign partitions to a different broker without any throttling (throttling didn't work with the command below). There were around 400 partitions across 50 topics.
Apache Kafka 1.1.0
Confluent Docker image tag: 4.1.0
Command:
kafka-reassign-partitions --zookeeper IP:2181 --reassignment-json-file proposed.json --execute --throttle 100000000
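(For context, a proposal file like proposed.json is normally produced beforehand with the tool's --generate mode; the topics file name and broker list below are placeholders:)
kafka-reassign-partitions --zookeeper IP:2181 --topics-to-move-json-file topics.json --broker-list "4" --generate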
After some time, I start seeing the error below continuously on the target broker.
[2019-09-21 11:24:07,625] INFO [ReplicaFetcher replicaId=4, leaderId=0, fetcherId=0] Error sending fetch request (sessionId=514675011, epoch=INITIAL) to node 0: java.io.IOException: Connection to 0 was disconnected before the response was read. (org.apache.kafka.clients.FetchSessionHandler)
[2019-09-21 11:24:07,626] WARN [ReplicaFetcher replicaId=4, leaderId=0, fetcherId=0] Error in response for fetch request (type=FetchRequest, replicaId=4, maxWait=500, minBytes=1, maxBytes=10485760, fetchData={TOPIC-4=(offset=4624271, logStartOffset=4624271, maxBytes=1048576), TOPIC-2=(offset=1704819, logStartOffset=1704819, maxBytes=1048576), TOPIC-8=(offset=990485, logStartOffset=990485, maxBytes=1048576), TOPIC-1=(offset=1696764, logStartOffset=1696764, maxBytes=1048576), TOPIC-7=(offset=991507, logStartOffset=991507, maxBytes=1048576), TOPIC-5=(offset=988660, logStartOffset=988660, maxBytes=1048576)}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=514675011, epoch=INITIAL)) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 0 was disconnected before the response was read
at org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:97)
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:96)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:220)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:43)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:146)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Zookeeper status:
ls /admin/reassign_partitions
[]
I am using t2.medium EC2 instances and 120 GB gp2 EBS volumes.
I am able to connect to ZooKeeper from all brokers.
[zk: localhost:2181(CONNECTED) 3] ls /brokers/ids
[0, 1, 2, 3]
I am using IP addresses for all brokers, so a DNS mismatch is not the issue either.
Also, I do not see any topic scheduled for reassignment in ZooKeeper.
[zk: localhost:2181(CONNECTED) 2] ls /admin/reassign_partitions
[]
Interestingly, I can see data piling up for the partitions that are not listed above, but the partitions listed in the error are not being migrated at the moment.
I am using confluent kafka docker image.
Kafka Broker Setting:
https://gist.github.com/ethicalmohit/cd44f580356ca02250760a307d90b54d
If you can give us some more details on your topology, maybe we can understand the problem better.
Some thoughts:
- Can you connect via zookeeper-cli at kafka-0:2181? Does kafka-0 resolve to the correct host?
- If a reassignment is in progress, you either have to stop it manually by deleting the appropriate key in ZooKeeper (warning: this may leave some topics or partitions broken), or you have to wait for the job to finish. Can you monitor the ongoing reassignment and share some info about it? See the sketch after this list.
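A rough sketch of both checks, assuming the stock Kafka CLI tools and the same JSON file as above.
To inspect the reassignment key directly:
zookeeper-shell kafka-0:2181 get /admin/reassign_partitions
To check per-partition progress of the ongoing reassignment:
kafka-reassign-partitions --zookeeper kafka-0:2181 --reassignment-json-file proposed.json --verify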
This has been solved by increasing the value of replica.socket.receive.buffer.bytes on all destination brokers.
After changing the above parameter and restarting the brokers, I was able to see data in the above-mentioned partitions.
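For reference, the relevant line in server.properties looks like this (the value below is only an example, not the exact one used here; the default is 64 KB, so anything larger should be tuned to your network):
replica.socket.receive.buffer.bytes=1048576
With the Confluent Docker image, the same setting should be reachable as the environment variable KAFKA_REPLICA_SOCKET_RECEIVE_BUFFER_BYTES, since the image maps KAFKA_* variables onto server.properties keys.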
I'm facing an issue while finding the depth of a topic in Kafka on an SSL-enabled port using kafka.tools.GetOffsetShell, although I am able to consume messages from the same port.
However, I am able to execute the command to get the depth of a topic on the PLAINTEXT port.
Below is the error stack for the SSL case:
sshuser@wn0:~$ sudo bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list wn0.internal.cloudapp.net:9093 --topic TP.TOPIC --time -1 --offsets 1 --security-protocol SSL
{metadata.broker.list=wn0.internal.cloudapp.net:9093, request.timeout.ms=1000, client.id=GetOffsetShell, security.protocol=SSL}
[2017-09-26 19:21:59,026] WARN Fetching topic metadata with correlation id 0 for topics [Set(TP.TOPIC)] from broker [BrokerEndPoint(0,wn0.internal.cloudapp.net:9093)] failed (kafka.client.ClientUtils$)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:140)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:131)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:84)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:81)
at kafka.producer.SyncProducer.send(SyncProducer.scala:126)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:96)
at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:98)
at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)
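For comparison, the PLAINTEXT variant of the same GetOffsetShell call, which does work in this setup, looks roughly like this (the 9092 port is an assumption about the broker's plaintext listener):
sudo bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list wn0.internal.cloudapp.net:9092 --topic TP.TOPIC --time -1 --offsets 1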
kafka.common.KafkaException: fetching topic metadata for topics [Set(tweets)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
at kafka.utils.Utils$.swallow(Utils.scala:172)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:45)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at com.aail.kafka.KafkaConnection.r(KafkaConnection.java:141)
at com.aail.kafka.Postgresconnection.main(Postgresconnection.java:40)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 10 more
[2016-02-17 09:02:36,876] ERROR Failed to send requests for topics tweets with correlation ids in [0,32]
How do I resolve this issue? Any help would be appreciated.
Thanks in advance.
I would follow the points below to investigate, though you may have already checked some of them.
Check if the console producer and console consumer are working as expected. The following are the commands to run after the ZooKeeper and Kafka servers are started and the topic has been created.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Check if advertised.host.name has been set to the correct value in the server.properties file.
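For example, in server.properties (the hostname below is a placeholder; it must be a name or IP that remote clients can actually resolve and reach):
advertised.host.name=your.broker.hostname
advertised.port=9092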
Please do let me know if any of these helps solve the issue, or if you were able to solve it with some other changes.
I tried to run the Kafka producer on one machine and the consumer on another machine.
Set the following properties:
advertised.host.name
advertised.port
But I am getting the following error on the console consumer:
bin/kafka-console-consumer.sh --zookeeper ip:2181 --topic topic --from-beginning
[2016-01-18 16:38:00,939] WARN Fetching topic metadata with correlation id 2112 for topics [Set(topic)] from broker [id:0,host:user-Desktop,port:9092] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
[2016-01-18 16:38:00,939] WARN [console-consumer-82496_gopikrishnan-B85M-D3H-A-1453114849146-e6661d41-leader-finder-thread], Failed to find leader for Set([topic,0]) (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics [Set(topic)] from broker [ArrayBuffer(id:0,host:user-Desktop,port:9092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 3 more
What has to be done to fix the issue? Thanks in advance.
Adding the host address in /etc/hosts fixed the issue.
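For example, an entry like the following in /etc/hosts on the consumer machine (the IP is a placeholder; the hostname user-Desktop comes from the error above) makes the advertised broker hostname resolvable:
192.168.0.10   user-Desktop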