How to monitor kafka-offset lag without consuming it - apache-kafka

I am working for a retail giant as a Kafka resource, and they want to monitor consumer lag without running a consumer.
I found some tools like Burrow, but it looks Linux-specific, while I have to test it on Windows first and then apply it.
Any suggestions will be much appreciated.

I assume you cannot run a Linux VM or container?
Burrow is written in Golang, so it can be compiled to run on Windows. And Burrow does consume the consumer offsets topic and compute statistics on it...
There are also other tools out there, like ones written by Lightbend, Zalando, and Confluent, and likely others such as a Prometheus lag exporter project on GitHub, because lag is an important metric to track in any industry...
Consuming the group information doesn't alter anything.
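If you only need the numbers and don't want to stand up a separate tool, the Java AdminClient can read a group's committed offsets and the partitions' log-end offsets without ever joining the group, so nothing is consumed or changed, and it runs the same on Windows as on Linux since it's just a JVM. A minimal sketch, assuming Kafka clients 2.5+ (for listOffsets) and placeholder bootstrap server and group id:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;

    public class LagCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            String groupId = "my-group";                                             // placeholder group

            try (AdminClient admin = AdminClient.create(props)) {
                // Offsets the group has committed, read from the broker; no consumer joins the group
                Map<TopicPartition, OffsetAndMetadata> committed =
                        admin.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();

                // Current log-end offsets for the same partitions
                Map<TopicPartition, OffsetSpec> latest = committed.keySet().stream()
                        .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
                Map<TopicPartition, ListOffsetsResultInfo> ends = admin.listOffsets(latest).all().get();

                // Lag per partition = log-end offset minus committed offset
                committed.forEach((tp, meta) ->
                        System.out.printf("%s lag=%d%n", tp, ends.get(tp).offset() - meta.offset()));
            }
        }
    }

This is the same calculation that kafka-consumer-groups.sh --describe does for you, if a one-off CLI check is enough.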

Related

Where to run Kafka stream processor?

I'm playing around with Apache Kafka a bit and have a functional multi-node cluster configured. I now want to introduce a Kafka Streams processor. I'll just do something simple, but here's my question: where do I run it? I know I can run it as a standalone JAR on any machine, but is that the correct place to run it? Do I run it on a worker node? Can I run it via the distributed Kafka Connect worker API? I saw documentation that says multiple instances of the same processor will be aware of each other... how? Is that handled in the Java Kafka libraries behind the scenes?
Basically, how do I deploy a processor at scale? Presumably I wouldn't manually start 10 (or 100 or 1000) instances of the same processor.
Assume I am NOT using Kubernetes for this, please. Also assume I am using the community-only packages for the Confluent Platform.
Kafka Connect does not run Kafka Streams applications.
ksqlDB, on the other hand, offers an abstraction layer for Kafka Streams applications and offers an embedded Connect worker.
Otherwise, yes, you simply run the Kafka Streams JAR files, anywhere that has network access to your Kafka cluster. Ideally, not on the cluster itself as it'll be competing for RAM and disk space.
And none of the above require Confluent Platform.
how do I deploy a processor at scale? Presumably I wouldn't manually start 10 (or 100 or 1000) instances of the same processor.
Well, you can only have as many active threads as there are partitions across your processor's input topics, and you control the thread count via num.stream.threads and the number of Streams processes.
If you're not deploying into Kubernetes, then you can still use other options like Puppet, Ansible, Supervisor, Hashicorp Nomad's Java Driver, etc.
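To make the "instances are aware of each other" part concrete: every instance started with the same application.id joins the same consumer group, and the Kafka client libraries split the input partitions across all threads of all instances automatically, so scaling out really is just launching more copies of the same JAR. A minimal sketch, with placeholder broker, topic names, and thread count:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class MyProcessor {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-processor");    // same id on every instance
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder broker
            props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);             // threads per instance
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("input-topic")                      // placeholder topics
                   .mapValues(v -> v.toUpperCase())
                   .to("output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

If the input topic had, say, 10 partitions, anything beyond 10 threads in total across all running instances would simply sit idle.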

View consumer and producer statistics on shell : kafka

I am new to Kafka. I have been given a task to send 2 KB messages with optimized throughput and latency. I really don't know how to benchmark these two metrics or how to set up my cluster. I do not have any cluster monitoring tool to use, only the statistics printed on the terminal when I start the producer and consumer. Can anyone please tell me which script I can use to see the relevant statistics on the consumer end while the data flow is in progress?
Make sure you check the command-line tools that come with the Apache Kafka installation (the bin/ directory). Those include kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh, which can help you test your cluster's performance.
This article includes good examples: https://community.cloudera.com/t5/Community-Articles/Kafka-2-3-Performance-testing/ta-p/284767
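If you also want numbers from your own consumer while it runs, rather than from the perf-test scripts, the Java client exposes throughput and latency figures through its metrics() map, which you can print to the terminal while the data flow is in progress. A rough sketch with placeholder broker, group, and topic names:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import java.util.Set;

    public class ConsumerStats {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "perf-check");               // placeholder group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            // Built-in client metrics worth watching for throughput and latency
            Set<String> watched = Set.of("records-consumed-rate", "bytes-consumed-rate", "fetch-latency-avg");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("test-topic"));                          // placeholder topic
                while (true) {
                    consumer.poll(Duration.ofSeconds(1));
                    // Per-topic variants of the same metrics also exist; the tags distinguish them
                    consumer.metrics().forEach((name, metric) -> {
                        if (watched.contains(name.name())) {
                            System.out.printf("%s %s = %s%n", name.tags(), name.name(), metric.metricValue());
                        }
                    });
                }
            }
        }
    }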

Kafka Connect instead of Flume Ingestion

I have been looking into the concepts and applications of Kafka Connect, and I even touched a project based on it during one of my internships. In my current working scenario, I am considering replacing the architecture of our real-time data ingestion platform, which is currently based on Flume -> Kafka, with Kafka Connect and Kafka.
The reasons why I am considering the switch are mainly these:
If we use Flume we need to install an agent on each remote machine, which generates a lot of extra devops workload, especially where I work, since access to machines is managed rigidly and maintaining utilities on machines belonging to other departments is difficult.
Another reason is that the machines' OS environments vary: if we install Flume on a variety of machines, some with different OSes and JDKs (I have met some with the IBM JDK), Flume just doesn't work well, which in the worst case can result in zero data ingestion.
It looks like with Kafka Connect we can deploy it in a centralized way alongside our Kafka cluster so that the devops cost goes down. Besides, we can avoid installing Flume on machines belonging to others and avoid the risk of incompatible environments, ensuring stable ingestion of data from every remote machine.
Also, the main ingestion scenario is only to ingest log text files written in real time on remote machines (on Linux and Unix file systems) into Kafka topics, that's it. So I won't need advanced connectors that are not supported in the Apache version of Kafka.
But I am not sure if I am understanding the usage and scenarios of Kafka Connect the right way. I am also wondering whether Kafka Connect should be deployed on the same machines as the data sources, or whether it is OK for them to reside on different machines. If they can be different, then why does Flume require the agent to run on the same machine as the data source? I hope someone more experienced can shed some light on that.
Is Kafka Connect appropriate for ingesting data into Kafka? Yes.
Does Kafka Connect run local to the data source? Only if it has to (e.g. reading a local file with the Kafka Connect spooldir plugin, the FilePulse plugin, etc.).
Should you rip out something that works and replace it with Kafka Connect? Not unless it's fixing a problem that you have.
If you're not using either yet, should you use Kafka Connect instead of Flume? Quite possibly.
Learn more about Kafka Connect here: https://dev.to/rmoff/crunchconf-2019-from-zero-to-hero-with-kafka-connect-81o
For file ingest alone there are other tools too, like Filebeat.
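To make the "deploy it centrally and configure it remotely" idea concrete: a distributed Kafka Connect cluster is driven entirely over its REST API, so adding a file-reading connector is just an HTTP call to a worker. A rough sketch using the FileStreamSource connector that ships with Apache Kafka; the Connect host, file path, and topic are placeholder assumptions, and spooldir or FilePulse configs follow the same pattern with a different connector.class and properties. Keep in mind that a file connector still has to run on a worker that can actually see the file, which is the "only if it has to" point above:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegisterFileConnector {
        public static void main(String[] args) throws Exception {
            // Connector name plus its config map, in the shape the Connect REST API expects
            String json = """
                {
                  "name": "app-log-source",
                  "config": {
                    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
                    "tasks.max": "1",
                    "file": "/var/log/app/app.log",
                    "topic": "app-logs"
                  }
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://connect-host:8083/connectors"))   // placeholder worker address
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

The same request is more commonly made with curl; the point is simply that configuration lives in one place, on the Connect cluster, rather than on every source machine.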

Kafka 2.0 - Multiple Kerberos Principals in KafkaConnect Connectors

We are currently using HDF (Hortonworks Dataflow) 3.3.1, which bundles Kafka 2.0.0. The problem is with running multiple connectors with different configurations (Kerberos principals) on the same Kafka Connect cluster.
In this Kafka version, all connectors are supposed to use the same consumer/producer properties, which are set in the worker configuration with the consumer.* or producer.* prefix. But as I stated, we have multiple users (apps) running their own connectors, and we can't use a single Kerberos principal to allow reads on all topics.
So I just wanted to check with the experts whether there is any way this security limitation can be overcome. The option I can think of is to run a different Kafka Connect cluster for each Kafka user (different principals), but what implications could that have if we run many Kafka Connect clusters on the same nodes? Will it have any impact in terms of resources (Java heap etc.), or is this the only way (standard procedure) to handle this?
PS: In later releases (2.3+) this problem is fixed via KAFKA-8265 and these settings can be overridden, but even if we try upgrading to the latest HDF we will only get Kafka 2.1, which will not solve this issue.
Thanks for your help !!
I think upgrading is your best option to get the linked feature. As I commented, you can go get latest kafka versions on your own... Hortonworks/Cloudera doesn't offer support for Connect anyway. They'd rather you use Spark/Flink/NiFi (I think Storm is no longer around?)
What implications could it have if we run many Kafka Connect clusters on the same nodes? Will it cause any impact in terms of resources (Java heap etc.)?
Heap is the main one (for batching, sink connectors). Network and CPU load could also come into account, depending on rate of messages.
As long as the advertised REST ports of each cluster's processes aren't colliding, and each cluster gets its own group id and its own internal config/offset/status topics, running several Connect clusters side by side on the same nodes works fine.
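For reference, once you are on 2.3+ the override from the linked ticket works by opting in on the worker (connector.client.config.override.policy=All or Principal) and then giving each connector its own producer.override.* / consumer.override.* client settings, which is where a per-connector Kerberos principal goes. A sketch of just the connector-side config, with placeholder connector, topic, keytab, and principal names:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class PerConnectorPrincipal {
        public static void main(String[] args) {
            // Connector config as it would be submitted to the Connect REST API.
            // The worker must be started with connector.client.config.override.policy=All (or Principal).
            Map<String, String> config = new LinkedHashMap<>();
            config.put("connector.class", "org.apache.kafka.connect.file.FileStreamSinkConnector"); // placeholder
            config.put("topics", "team-a-topic");                                                   // placeholder
            config.put("file", "/tmp/team-a.out");                                                  // placeholder
            // Per-connector Kerberos credentials for this sink's consumer
            // (a source connector would use producer.override.* instead)
            config.put("consumer.override.sasl.jaas.config",
                    "com.sun.security.auth.module.Krb5LoginModule required "
                    + "useKeyTab=true keyTab=\"/etc/security/keytabs/team-a.keytab\" "
                    + "principal=\"team-a@EXAMPLE.COM\";");

            config.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }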

Kafka and IIDR CDC

I am trying to build a CDC pipeline using: DB2 -> IBM CDC -> Kafka,
and I am trying to figure out the right way to set this up.
I have tried the following:
1. Set up a 3-node Kafka cluster on Linux, on premises.
2. Installed the IIDR CDC software on Linux, on premises, using the setup-iidr-11.4.0.1-5085-linux-x86.bin file. The CDC instance is up and running.
The various online documentation suggests installing the IIDR Management Console to configure the source datastore, the CDC server configuration, and the Kafka subscription configuration to build the pipeline.
Currently I do not have the Management Console installed.
A few questions on this:
1. Is there any alternative to the IBM CDC Management Console for setting up the Kafka-CDC pipeline?
2. How can I get the IIDR Management Console? And if we install it on our local Windows desktop and try to connect to CDC/Kafka on remote Linux servers, will it work?
3. Is there any other method to set up the data ingestion from IIDR CDC to Kafka?
I am fairly new to CDC/IIDR, please help!
I own the development of the IIDR Kafka target for our CDC Replication product.
Management Console is the best way to set up the subscription initially. You can install it on a Windows box.
Technically, I believe you can use our scripting language, called CHCCLP, to set up a subscription as well. But I recommend using the GUI.
Here are links to our resources on our IIDR (CDC) Kafka Target. Search for the "Kafka" section.
"https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W8d78486eafb9_4a06_a482_7e7962f5ac59/page/IIDR%20Wiki"
An example of setting up a subscription and replicating is this video
https://ibm.box.com/s/ur8jokg6tclsx5fcav5g86a3n57mqtd5
Management Console and Access Server can be obtained from IBM Fix Central.
I have installed MC/Access Server on my VM and on my personal Windows box to use against my Linux VMs. You will need connectivity, of course.
You can definitely follow up with our Support and they'll be able to sort you out. Plus we have docs in our Knowledge Center on MC, starting here: https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.mcadminguide.doc/concepts/overview_of_cdc.html
You'll find our Kafka target is very flexible: it comes with five different formats for writing data into Kafka, and you can choose to capture data in an audit format, or in the Kafka-compaction-compatible style where a key with a null value signals a delete.
Additionally, you can even use the product to write several records to several different topics, in several formats, from a single insert operation. This is useful if some of your consumer apps want JSON and others binary Avro. You can also use this to put all of the data into more secure topics, and write out just some of the data to topics that more people have access to.
We even have customers who encrypt columns in flight when replicating.
Finally, the product's transformations can be parallelized even if you choose to use only one producer to write out the data.
Actually, one more finally: we additionally provide the option to use a special consumer which restores database ACID semantics for data written into Kafka and spread across topics and partitions. It re-orders the data; we call it the transactionally consistent consumer. It provides operation order, bookmarks for restarting applications, and allows parallelism in performance while delivering ordered, exactly-once, deduplicated consumption of data.
From my talk at the Kafka Summit...
https://www.confluent.io/kafka-summit-sf18/a-solution-for-leveraging-kafka-to-provide-end-to-end-acid-transactions