I am trying to integrate MongoDB and Storm-Kafka. The Kafka producer produces data from MongoDB, but the consumer side fails to fetch it.
Kafka version: 0.10.*
Storm version: 1.2.1
Do I need to add any functionality on the consumer side?
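Since the question doesn't say which spout module is in use, here is a minimal sketch of the consumer side using the storm-kafka-client module (which works with Kafka 0.10.x and Storm 1.2.1); the broker address, topic name, and group id are placeholders, not values from the question:

```
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class MongoKafkaTopology {
    public static void main(String[] args) {
        // Placeholder broker address, topic and group id -- replace with your own.
        KafkaSpoutConfig<String, String> spoutConfig = KafkaSpoutConfig
                .builder("localhost:9092", "mongo-topic")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "storm-consumer-group")
                // Always start from the earliest available offset,
                // handy for verifying the pipeline end to end.
                .setFirstPollOffsetStrategy(KafkaSpoutConfig.FirstPollOffsetStrategy.EARLIEST)
                .build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 1);
        // Attach your processing bolt to the spout's default stream
        // (fields: topic, partition, offset, key, value).
    }
}
```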
Related
I wanted to use Apache Spark Structured Streaming along with Kafka. Spark Structured Streaming supports Kafka 0.10 and above, but my Kafka cluster uses version 0.8.2.1. I want to replicate some of the topics from the current Kafka 0.8.2.1 cluster to a new Kafka cluster based on 2.2.0.
To do this, I tried using kafka-console-consumer from the Kafka 2.2.0 cluster to listen for messages from the Kafka 0.8.2.1 cluster and piped its output into kafka-console-producer on the Kafka 2.2.0 cluster. But that didn't work: the kafka-console-consumer from the Kafka 2.2.0 distribution was not able to receive any messages.
As of now I have solved this problem by reading the data from the Kafka 0.8.2.1 cluster using the Java client APIs and writing the data read from the older Kafka cluster (0.8.2.1) to the newer Kafka cluster (2.2.0) using the client APIs.
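A rough sketch of that workaround, assuming the 0.8.x high-level (ZooKeeper-based) consumer for the old cluster and the Java producer for the new one; the host names, topic name, and group id below are placeholders:

```
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleMirror {
    public static void main(String[] args) {
        // Old 0.8.2.1 cluster: the high-level consumer is configured via ZooKeeper.
        Properties consumerProps = new Properties();
        consumerProps.put("zookeeper.connect", "old-zk:2181");        // placeholder
        consumerProps.put("group.id", "mirror-group");                // placeholder
        consumerProps.put("auto.offset.reset", "smallest");
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(consumerProps));

        // New 2.2.0 cluster: newer brokers accept older client protocols,
        // so the producer only needs the new cluster's bootstrap servers.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "new-kafka:9092");     // placeholder
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps);

        String topic = "my-topic";                                     // placeholder
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap(topic, 1));

        // Forward every record from the old cluster to the same topic on the new one.
        for (MessageAndMetadata<byte[], byte[]> record : streams.get(topic).get(0)) {
            producer.send(new ProducerRecord<>(topic, record.key(), record.message()));
        }
    }
}
```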
Can anyone suggest some better ways to mirror two Kafka clusters running different versions of Kafka?
What are the advantages of using Apache Storm's KafkaBolt in Apache Storm 1.2.2 instead of using the Kafka producer APIs directly from a bolt in the topology to publish to downstream Kafka topics?
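For reference, a sketch of what the KafkaBolt wiring looks like, as opposed to constructing and managing a KafkaProducer inside your own bolt; the broker address, topic, upstream component name, and tuple field names are placeholders:

```
import java.util.Properties;

import org.apache.storm.kafka.bolt.KafkaBolt;
import org.apache.storm.kafka.bolt.mapper.FieldNameBasedTupleToKafkaMapper;
import org.apache.storm.kafka.bolt.selector.DefaultTopicSelector;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaBoltExample {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");   // placeholder
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // KafkaBolt manages the producer lifecycle and tuple acking;
        // you only declare where the data goes and how tuples map to records.
        KafkaBolt<String, String> bolt = new KafkaBolt<String, String>()
                .withProducerProperties(producerProps)
                .withTopicSelector(new DefaultTopicSelector("downstream-topic")) // placeholder
                .withTupleToKafkaMapper(
                        new FieldNameBasedTupleToKafkaMapper<>("key", "message"));

        TopologyBuilder builder = new TopologyBuilder();
        // "upstream-bolt" is a placeholder for whatever component emits
        // tuples with "key" and "message" fields.
        builder.setBolt("kafka-bolt", bolt, 1).shuffleGrouping("upstream-bolt");
    }
}
```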
Apache MetaModel Kafka consumer not working with ZooKeeper offset storage.
I am using Apache MetaModel 5.1 and Kafka version 0.10.2.1. I am facing an issue with the Kafka consumer (MetaModel's internal consumer): it is not consuming any messages from the topic.
In my test environment, the Kafka offset storage is ZooKeeper. When I tried changing the offset storage to Kafka (on a different environment), the consumer worked fine.
For now I don't want to change the offset storage to Kafka, so is there any other way to fix this issue on the Apache MetaModel Kafka consumer side?
I am using the Kafka client library that comes with Kafka 0.11.0.1. I noticed that KafkaConsumer no longer needs ZooKeeper to be configured. Does that mean the ZooKeeper server will automatically be located via the Kafka bootstrap servers?
Since Kafka 0.9, the KafkaConsumer implementation stores offset commits and consumer group information in the Kafka brokers themselves. This eliminates the ZooKeeper dependency and increases the scalability of the consumers.
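A minimal sketch of such a consumer, with placeholder broker list, group id, and topic, to illustrate that only the brokers (bootstrap servers) are configured and there is no zookeeper.connect property anywhere:

```
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NoZkConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Only the brokers are listed; the new consumer never talks to ZooKeeper.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");  // placeholder
        props.put("group.id", "my-group");                            // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));    // placeholder topic
        while (true) {
            // Offsets are committed to the brokers (the __consumer_offsets topic),
            // not to ZooKeeper.
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%d: %s%n", record.offset(), record.value());
            }
        }
    }
}
```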
We are planning to upgrade the Kafka client from 0.8.0 to 0.10.0.1. Since consumer offsets are stored in ZooKeeper in 0.8.0 but in the broker in 0.10.0.1, if we start a 0.10.0.1 consumer with the same group and client id as the 0.8.0 consumer, will the new consumer fetch messages from where the old consumer stopped consuming? If data loss is going to happen, can we migrate the offsets from ZooKeeper to the broker and then start our new consumer?
You can continue storing offsets in ZooKeeper on 0.10. In fact, if you just upgrade the client binaries, you won't see any change in the offset commit behavior. Where you will have to start thinking about migrating data and offsets is when you move to the new consumer API in your application. At that point you will need to stop your old application instance based on the old API, check the offsets stored in ZooKeeper, and then start the new consumer API implementation from those offsets to avoid data loss or duplication.
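If you do migrate, the cut-over step could look roughly like this: read the last committed offset for each partition from ZooKeeper (under /consumers/&lt;group&gt;/offsets/&lt;topic&gt;/&lt;partition&gt;), stop the old consumer, and seek the new consumer to those offsets before its first poll. The broker, topic, group id, and offset value below are placeholders for whatever you read out of your own cluster:

```
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetMigrationStart {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");  // placeholder
        props.put("group.id", "my-group");               // same group id as the old consumer
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Placeholder: the offset previously read from ZooKeeper for partition 0.
        long offsetFromZookeeper = 42L;
        TopicPartition partition = new TopicPartition("my-topic", 0);

        // Manually assign the partition and position the consumer where the
        // old (ZooKeeper-based) consumer left off; subsequent commits are
        // then stored in the brokers.
        consumer.assign(Collections.singletonList(partition));
        consumer.seek(partition, offsetFromZookeeper);

        // consumer.poll(...) from here on continues from that position.
    }
}
```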