I am new to Kafka Streams.
I would like to connect to Kafka Cluster and read all Stream Topologies.
Is there an API that would allow doing that?
I am looking at the Topology class, is there a way to list all Topologies?
https://docs.confluent.io/5.5.0/streams/javadocs/index.html
That is not possible. Brokers don't know anything about Kafka Streams applications.
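Within your own application, though, you can describe its topology locally via Topology#describe(). A minimal sketch with placeholder topic names:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class DescribeTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // "input-topic" and "output-topic" are placeholder names
        builder.stream("input-topic").to("output-topic");

        Topology topology = builder.build();
        // Prints the sources, processors, and sinks of this application's topology
        System.out.println(topology.describe());
    }
}
```

But that only works inside each application's own JVM; there is no cluster-wide registry of topologies to query.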
I want to build a Kafka connector which needs to read from a Kafka topic, make a call to a gRPC service to get some data, and write the whole result to another Kafka topic.
I have written a Kafka sink connector which reads from a topic and calls a gRPC service, but I'm not sure how to redirect this data into a Kafka topic.
Kafka Streams can read from topics, call external services as necessary, then forward this data to a new topic in the same cluster.
MirrorMaker 2 can be used between different clusters, but using Connect transforms to call external services is generally not recommended.
Or you could make your gRPC service into a Kafka producer.
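As a rough sketch of the Streams approach (the topic names, application id, and the enrich() method standing in for your gRPC client stub are all placeholders):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class GrpcEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "grpc-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");

        // mapValues calls the external service for each record,
        // then the enriched result is forwarded to the output topic
        input.mapValues(GrpcEnricher::enrich)
             .to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder for a blocking call to your gRPC client stub
    private static String enrich(String value) {
        return value; // call the gRPC service here and merge its response
    }
}
```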
Basically I want to send messages from an MQTT (Mosquitto) broker to a Knative event source (Kafka). With a plain Kafka broker I could use Confluent's Kafka Connect, but in this case it's a Knative event source rather than a broker. The problem lies with the conversion to CloudEvents.
Since you have an MQTT broker that can already read/write to Kafka, you might use the KafkaSource to convert the Kafka messages to CloudEvents and send them to your service.
If you're asking about how to connect the MQTT broker with Kafka, I'd suggest either finding or writing an MQTT source or using something outside the Knative ecosystem.
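For reference, a minimal KafkaSource manifest might look like the following (the resource name, bootstrap server, topic, and sink service are placeholders):

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: mqtt-bridge-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - mqtt-messages
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```

The KafkaSource wraps each Kafka record in a CloudEvent before delivering it to the sink, which is the conversion you were asking about.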
I'm currently planning the architecture for an application that reads from a Kafka topic and, after some conversion, puts data into RabbitMQ.
I'm kind of new to Kafka Streams, and it looks like a good fit for my task. The problem is that the Kafka cluster is hosted by another vendor, so I can't even install the Kafka Connect RabbitMQ sink plugin.
Is it possible to write a Kafka Streams application that doesn't have any sink points but just processes the input stream? I could push to RabbitMQ in a foreach operation, but I'm not sure whether the stream will even work without a sink point.
foreach is a Sink action, so to answer your question directly, no.
However, Kafka Streams is intended for Kafka-to-Kafka communication only.
Kafka Connect can be installed and run anywhere, if that is what you wanted to use. You can also use other Apache tools such as Camel, Spark, NiFi, or Flink to write to RabbitMQ after consuming from Kafka, or write an application in a language of your choice. For example, the Spring Integration and Spring Cloud Stream frameworks allow a single contract across many communication channels.
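That said, if you do take the foreach route from your question, the stream will run, since foreach itself terminates the topology. A rough sketch, with the RabbitMQ host, queue name, and topic names as placeholders:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class RabbitForwarder {
    public static void main(String[] args) throws Exception {
        // RabbitMQ connection; host and queue are placeholders
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.com");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "rabbit-forwarder");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic")
               // foreach is terminal, so it acts as the "sink" of this topology
               .foreach((key, value) -> {
                   try {
                       channel.basicPublish("", "my-queue", null, value.getBytes());
                   } catch (Exception e) {
                       throw new RuntimeException(e);
                   }
               });

        new KafkaStreams(builder.build(), props).start();
    }
}
```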
I created an application which uses Kafka. How can I find out how many MB/sec my consumers read?
My topics have only one partition
Are you using the Java Kafka consumer API? If so, it exposes JMX metrics, including some specific to consumer fetching. You can see them here:
https://kafka.apache.org/documentation/#consumer_fetch_monitoring
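If JMX is awkward in your setup, the same metrics are also available programmatically from the consumer itself; a rough sketch looking up bytes-consumed-rate (assuming an already-running consumer instance):

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

import java.util.Map;

public class ConsumerThroughput {
    // Call this periodically with your existing consumer instance
    static void printBytesConsumedRate(KafkaConsumer<?, ?> consumer) {
        Map<MetricName, ? extends Metric> metrics = consumer.metrics();
        for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
            // "bytes-consumed-rate" is the average bytes consumed per second;
            // divide by 1024 * 1024 to get MB/sec
            if ("bytes-consumed-rate".equals(entry.getKey().name())) {
                System.out.println(entry.getKey().group() + "/"
                        + entry.getKey().name() + " = "
                        + entry.getValue().metricValue());
            }
        }
    }
}
```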
A similar question has been answered before, but the solution doesn't work for my use case.
We run two Kafka clusters, one in each of two separate DCs. Our overall incoming traffic is split between these two DCs.
I'd be running a separate Kafka Streams app in each DC to transform that data, and I want to write to a Kafka topic in a single DC.
How can I achieve that?
Ultimately we'd be indexing the Kafka topic data in Druid. It's not possible to run separate Druid clusters, since we are trying to aggregate the data.
I've read that this isn't possible with a single Kafka Streams app. Is there a way I can use another Kafka Streams app to read from the DC1 Kafka cluster and write to the DC2 Kafka cluster?
As you wrote yourself, you cannot use the Kafka Streams API to read from Kafka cluster A and write to a different Kafka cluster B.
Instead, if you want to move data between Kafka clusters (whether in the same DC or across DCs), you should use a tool such as Apache Kafka's MirrorMaker or Confluent Replicator.
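For example, a minimal MirrorMaker 2 configuration replicating topics from DC1 to DC2 might look like the sketch below (the cluster aliases, bootstrap servers, and topic pattern are placeholders); it is passed to bin/connect-mirror-maker.sh:

```properties
# connect-mirror-maker.properties
clusters = dc1, dc2

dc1.bootstrap.servers = dc1-broker1:9092
dc2.bootstrap.servers = dc2-broker1:9092

# Replicate from dc1 to dc2 only
dc1->dc2.enabled = true
dc1->dc2.topics = transformed-topic.*

dc2->dc1.enabled = false
```

Your Streams app in each DC would then write to its local cluster, and MirrorMaker 2 would move DC1's output topic over to DC2, where Druid can index everything from one place.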