I am looking for a Kafka REST API that will list the same details as
kafka-consumer-groups.sh --describe
would return. Basically, I am trying to get the offset of each partition and its lag for a particular consumer group.
There are several tools outside of Kafka that do this:
Remora https://github.com/zalando-incubator/remora
LinkedIn Burrow
Various Prometheus Kafka lag exporters
Hi, I have a consumer with multiple listeners and a concurrency of 3. Each listener consumes one topic. I'm trying to get the consumer lag of the containers at the Prometheus metrics endpoint. I checked the default metrics, and only the listener-related success and failure count and sum are available. Is there any option to get consumer lag exposed as a Prometheus metric?
EDIT
I'm using spring-kafka, and its documentation says I can simply get the listener- and broker-related metrics (https://docs.spring.io/spring-kafka/reference/html/#monitoring-listener-performance). What I did was call the Prometheus endpoint (myUrl/prometheus), and I was able to see the listener metrics.
So is there a way to view consumer lag like that?
As far as I know, Spring does not provide an easy way to do this (see the Monitoring documentation for more details).
You can use one of the existing solutions to avoid depending on your consumer implementations. For example:
Kafka Lag Exporter.
It provides metrics like kafka_consumergroup_group_lag with the labels cluster_name, group, topic, partition, member_host, consumer_id, and client_id.
It is easy to set up and can run anywhere, and it also provides features for running on Kubernetes clusters.
Kafka Exporter
It provides a wider range of Kafka metrics. For lag, it exposes the kafka_consumergroup_lag metric with the labels consumergroup, partition, and topic.
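As a rough illustration of what these exporters do internally, here is a hedged sketch with the prometheus_client Python library: it registers a gauge shaped like Kafka Exporter's kafka_consumergroup_lag and sets it from offsets you would fetch from the brokers. The offset fetching itself is omitted; the function arguments are placeholders:

```python
# Sketch: exposing consumer lag as a Prometheus gauge, in the shape that
# Kafka Exporter uses. Offset values are supplied by the caller here; a real
# exporter would fetch them from the brokers.
from prometheus_client import CollectorRegistry, Gauge, generate_latest

registry = CollectorRegistry()
lag_gauge = Gauge(
    "kafka_consumergroup_lag",
    "Consumer group lag per partition (illustrative)",
    ["consumergroup", "topic", "partition"],
    registry=registry,
)


def record_lag(group, topic, partition, committed_offset, end_offset):
    # lag = log-end offset minus the group's committed offset
    lag = end_offset - committed_offset
    lag_gauge.labels(consumergroup=group, topic=topic,
                     partition=str(partition)).set(lag)
    return lag
```

Serving the registry via prometheus_client's start_http_server (or mounting generate_latest behind your web framework) makes the metric scrapeable by Prometheus.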
Is there a way to get number of messages that arrived on a Kafka topic in a day?
I'm looking for a solution that can fetch the number of messages that arrived on a topic for a particular day.
PS: we have Confluent Enterprise and are also using Prometheus and Grafana for metrics.
I'm not familiar with Confluent Enterprise, but we collect Kafka metrics into Prometheus using Burrow and use this query to show the incoming message rate:
sum(rate(kafka_burrow_topic_partition_offset{cluster="Kafka1",topic="myTopic"}[5m]))
Also read https://www.datadoghq.com/blog/collecting-kafka-performance-metrics/#monitor-consumer-health-with-burrow
From the Kafka Monitoring docs:
kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
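If that MBean is already scraped into Prometheus, a per-day count can be derived with increase() over a 1-day window. The metric name below follows the JMX exporter's common naming convention and is an assumption; check what your exporter actually emits:

```
sum(increase(kafka_server_brokertopicmetrics_messagesin_total{topic="myTopic"}[1d]))
```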
I need to write an API in Java to display consumer metrics. I thought of using the Kafka AdminClient. Is there any way to retrieve consumer metrics information?
I checked the AdminClient code. I have the Kafka consumer class and its metrics method, but it is not holding the information.
A Kafka consumer can report a lot of metrics through JMX; you just need to add a few Java properties to your consumer, either in code or on the command line. For example, for the console consumer:
$ KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=9398 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" kafka-console-consumer ...
How can I check whether a specific group ID is currently listening to a Kafka topic, or get the number of group IDs listening to a specific topic, with any REST API?
If you have kafka-manager installed alongside your Kafka cluster, you can use the REST API that kafka-manager provides to get the consumer groups under a topic.
For the kafka-manager REST API, see:
https://github.com/yahoo/kafka-manager/blob/master/conf/routes
There is the following consumer code:
from kafka.client import KafkaClient
from kafka.consumer import SimpleConsumer
kafka = KafkaClient("localhost", 9092)
consumer = SimpleConsumer(kafka, "my-group", "my-topic")
consumer.seek(0, 2)
for message in consumer:
    print(message)
kafka.close()
Then I produce a message with the script:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic
The thing is that when I start the consumers as two different processes, each process receives the new messages. However, I want each message to be delivered to only one consumer, not broadcast to all.
The Kafka documentation (https://kafka.apache.org/documentation.html) says:
If all the consumer instances have the same consumer group, then this
works just like a traditional queue balancing load over the consumers.
I see that group for these consumers is the same - my-group.
How can I make it so that a new message is read by exactly one consumer instead of being broadcast?
The consumer-group API was not officially supported until Kafka v0.8.1 (released March 12, 2014). For earlier server versions, consumer groups do not work correctly. And as of this post, the kafka-python library does not attempt to send group offset data:
https://github.com/mumrah/kafka-python/blob/c9d9d0aad2447bb8bad0e62c97365e5101001e4b/kafka/consumer.py#L108-L115
It's hard to tell from the example above what your ZooKeeper configuration is, or whether there is one at all. You'll need a ZooKeeper cluster for the consumer group information to be persisted with respect to which consumer within each group has consumed up to a given offset.
A solid example is here:
Official Kafka documentation - Consumer Group Example
This should not happen. Make sure that both consumers are registered under the same consumer group in the ZooKeeper znodes. Each message published to a topic should be consumed by a consumer group exactly once, so only one consumer in the group should receive it, not what you are experiencing. What version of Kafka are you using?