Why doesn't Apache Kafka support ephemeral topics? - apache-kafka

Having a distributed log is great, but I see good use cases for ephemeral topics as well, so I wonder: why doesn't Apache Kafka support ephemeral topics?

Actually, Kafka can keep messages in memory (in the OS page cache) and flush them to disk only after a specific number of messages or amount of time has passed; see log.flush.interval.messages and log.flush.interval.ms in the broker configs.
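For illustration, a minimal sketch of those settings in the broker's server.properties; the values here are arbitrary examples, not recommendations (by default Kafka leaves flushing to the operating system):

# Flush to disk after 10,000 messages or 1,000 ms, whichever comes first
log.flush.interval.messages=10000
log.flush.interval.ms=1000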

Related

Kafka disk-write time in broker logs

Running some performance tests on Kafka, we are seeing bad latency on the producer.
Checking the Kafka broker logs, I can see this line:
[Topic] Wrote producer snapshot at offset 331258 with 62 producer ids in 860 ms. (kafka.utils.Logging)
I don't know if this is the time it takes to write to disk, or to the replicas before acking to the producer (acks=all), but those ~860 ms seem like a lot to me.
This needs a detailed analysis. Here are a few things you can do:
Monitor Kafka broker and producer resources such as CPU and memory to see whether any particular resource is nearing 100% usage.
If a broker resource is saturating, then for your load you might need to give the broker more resources. The same logic applies to your Kafka producer.
If resources are not saturating, you will need to experiment with your Kafka producer configuration. Calculate the rough throughput of your producers (in messages/sec as well as bytes/sec) and check the producer config defaults to see if you can find a probable cause. There are a lot of producer configs, such as batch.size, buffer.memory, linger.ms and max.request.size, any of which might be the reason your producer is not performing optimally; see the sketch after this list.
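As a starting point, here is a minimal sketch of a Java producer with those knobs set explicitly; the broker address and all values are illustrative assumptions to benchmark against, not recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Batch up to 64 KB per partition before sending (default is 16 KB)
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
        // Wait up to 5 ms for a batch to fill before sending (default is 0)
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        // Total memory available for buffering unsent records (default is 32 MB)
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64L * 1024 * 1024);
        // acks=all waits for the full ISR, trading latency for durability
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send test records here and measure end-to-end latency
        }
    }
}

Raising linger.ms and batch.size generally improves throughput at the cost of per-record latency, so with acks=all it is worth measuring in both directions.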

Kafka Connect best practices for topic compaction

I am using Debezium, which makes use of Kafka Connect.
Kafka Connect exposes a couple of topics that need to be created:
OFFSET_STORAGE_TOPIC
This environment variable is required when running the Kafka Connect service. Set this to the name of the Kafka topic where the Kafka Connect services in the group store connector offsets. The topic should have many partitions, be highly replicated (e.g., 3x or more) and should be configured for compaction.
STATUS_STORAGE_TOPIC
This environment variable should be provided when running the Kafka Connect service. Set this to the name of the Kafka topic where the Kafka Connect services in the group store connector status. The topic can have multiple partitions, should be highly replicated (e.g., 3x or more) and should be configured for compaction.
Does anyone have any specific recommended compaction configs for these topics?
e.g.
is it enough to set just:
cleanup.policy: compact
unclean.leader.election.enable: true
or also:
min.compaction.lag.ms: 60000
segment.ms: 1800000
min.cleanable.dirty.ratio: 0.01
delete.retention.ms: 100
The defaults should be fine, and Connect will create and configure those topics on its own unless you pre-create them with your own settings.
These are the only cases I can think of when you would adjust the compaction settings:
a Connect group's data lingering in the topic longer than you want it to; for example, a source connector doesn't start immediately after a long downtime because it is still processing the offsets topic
your Connect cluster doesn't accurately report its state, or the tasks do not rebalance appropriately (because the status topic is in a bad state)
The __consumer_offsets topic (which is already compacted) is what sink connectors use for their offsets, and it is configured separately for all consumers, not only Connect.
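If you do decide to pre-create the topics yourself, here is a minimal sketch using the Java AdminClient; the topic name connect-offsets, the partition/replication counts (which mirror the Connect defaults for the offsets topic) and the lag/segment values are assumptions taken from the question, not recommendations:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateConnectOffsetsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            // Compaction is the one setting Connect actually requires
            configs.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
            // Optional knobs from the question; the defaults are usually fine
            configs.put(TopicConfig.MIN_COMPACTION_LAG_MS_CONFIG, "60000");
            configs.put(TopicConfig.SEGMENT_MS_CONFIG, "1800000");
            NewTopic offsets = new NewTopic("connect-offsets", 25, (short) 3).configs(configs);
            admin.createTopics(Collections.singleton(offsets)).all().get();
        }
    }
}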

How to find the byte rate of Kafka consumers

I created an application which uses Kafka. How can I find how many MB/sec my consumers read?
My topics have only one partition.
Are you using the Java Kafka consumer API? If so, it exposes some JMX metrics, including some specific to consumer fetching. You can see them here:
https://kafka.apache.org/documentation/#consumer_fetch_monitoring
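If you want the number programmatically rather than through JMX, the same metrics are available from the consumer object itself. A minimal sketch, assuming a recent Java client (metricValue() requires 1.0+, poll(Duration) requires 2.0+), a broker on localhost and a hypothetical topic name:

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerByteRate {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "byte-rate-demo");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("my-topic")); // hypothetical topic
            while (true) {
                consumer.poll(Duration.ofSeconds(1));
                for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
                    MetricName name = e.getKey();
                    // bytes-consumed-rate is reported in bytes/sec; convert to MB/sec
                    if (name.group().equals("consumer-fetch-manager-metrics")
                            && name.name().equals("bytes-consumed-rate")) {
                        double mbPerSec = ((Number) e.getValue().metricValue()).doubleValue() / (1024 * 1024);
                        System.out.printf("%s %s = %.3f MB/sec%n", name.name(), name.tags(), mbPerSec);
                    }
                }
            }
        }
    }
}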

Does a Kafka consumer machine need to run Zookeeper?

So my question is this: if I have a server running Kafka (and Zookeeper), and another machine that only consumes messages, does the consumer machine need to run Zookeeper too? Or does the server take care of it all?
No.
The role of Zookeeper in Kafka is:
Broker registration (cluster membership), with a heartbeat mechanism to keep the list current
Storing topic configuration: which topics exist, how many partitions each has, where the replicas are, who the preferred leader is, and the list of in-sync replicas (ISR) for each partition
Electing the controller: the controller is one of the brokers and is responsible for maintaining the leader/follower relationship for all partitions.
So Zookeeper is required only for the Kafka broker. There is no need to run Zookeeper on the producer or consumer side.
The consumer does not need Zookeeper.
You have not mentioned which version of Kafka or the clients you're using.
Kafka consumers on version 0.8 store their offsets in Zookeeper, so it is required for them. However, no, you would not run Zookeeper and consumers on the same server.
From 0.9 onward, clients no longer need Zookeeper (unless you want to manage your own external connections to Zookeeper for storing data).
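To make that concrete, a minimal sketch of a 0.9+ style consumer configuration; note that it only names the brokers, and there is no zookeeper.connect anywhere (the broker addresses, group and topic are assumptions, and poll(Duration) assumes a 2.0+ client):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NoZookeeperConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // brokers only
        props.put("group.id", "demo-group"); // offsets go to __consumer_offsets, not Zookeeper
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}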

Increase number of partitions in a Kafka topic from a Kafka client

I'm a new user of Apache Kafka and I'm still getting to know the internals.
In my use case, I need to increase the number of partitions of a topic dynamically from the Kafka Producer client.
I found other similar questions about increasing the number of partitions, but they use the Zookeeper configuration. My KafkaProducer has only the Kafka broker config, not the Zookeeper config.
Is there any way I can increase the number of partitions of a topic from the Producer side? I'm running Kafka version 0.10.0.0.
As of Kafka 0.10.0.1 (the latest release at the time of writing): as Manav said, it is not possible to increase the number of partitions from the Producer client.
Looking ahead (next releases): in an upcoming version of Kafka, clients will be able to perform some topic management actions, as outlined in KIP-4. A lot of the KIP-4 functionality is already complete and available in Kafka's trunk; the code in trunk as of today allows clients to create and delete topics. Unfortunately, for your use case, increasing the number of partitions is still not possible -- it is in scope for KIP-4 (see Alter Topics Request) but not completed yet.
TL;DR: The next versions of Kafka will allow you to increase the number of partitions of a Kafka topic, but this functionality is not yet available.
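For readers on newer versions: this did eventually land in the client APIs. A sketch of what the partition increase looks like with the AdminClient shipped in later Kafka releases (1.0+); the topic name and broker address are assumptions:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Grow "my-topic" to 4 partitions total; partition counts can only increase
            admin.createPartitions(
                    Collections.singletonMap("my-topic", NewPartitions.increaseTo(4))
            ).all().get();
        }
    }
}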
It is not possible to increase the number of partitions from the Producer client.
Is there any specific use case why you cannot use the broker to achieve this?
My KafkaProducer has only the Kafka broker config, not the Zookeeper config.
I don't think any client will let you change the broker config. At most, you can read the server-side config.
Your producer can provide different keys for its ProducerRecords, and the partitioner will place them in different partitions. For example, if you need two partitions, use keys "abc" and "xyz".
This can be done in version 0.9 as well.
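A minimal sketch of that keyed-record approach, using the example keys from the answer (broker and topic are assumptions; note that keys are hashed, so two distinct keys are not strictly guaranteed to land on two distinct partitions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records sharing a key always go to the same partition;
            // different keys are spread across the existing partitions by hash
            producer.send(new ProducerRecord<>("my-topic", "abc", "payload-1"));
            producer.send(new ProducerRecord<>("my-topic", "xyz", "payload-2"));
        }
    }
}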