Metadata requests in Kafka producer

How many metadata requests will a Kafka producer make? One per message, one per batch, or one per partition?
How will the acknowledgements be sent to the producer by Kafka? One at a time, as a whole list, or one list per leader?

The first time the producer makes a metadata request is when it connects to the bootstrap servers set in the client configuration. That list can contain just one broker or several, but it does not have to include every broker in the cluster (so there is no metadata request per broker). From the metadata response, the producer learns which brokers lead the partitions of the topics it wants to send messages to.
During its life, the producer issues further metadata requests when it gets an error talking to the leader of a partition it is writing to; in that case it needs to find out which broker is the new leader, connect to it (if it is not already connected for other topics), and resume sending.
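To make the bootstrap behaviour concrete, here is a minimal sketch using the Java client; the broker addresses and topic name are placeholders. Listing two bootstrap addresses is enough for the client to discover the rest of the cluster from the metadata response:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MetadataBootstrapSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // A subset of brokers is enough; the metadata response lists the whole cluster.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The first send() to a topic triggers a metadata request if the producer
            // does not yet know which brokers lead that topic's partitions.
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```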

How many metadata requests will a Kafka producer make? One per message, one per batch, or one per partition?
Usually one per broker, to find out the partition leaders.
Could be more if the whole send process takes a lot of time and the metadata on the producer side expires (the relevant property is metadata.max.age.ms).
How will the acknowledgements be sent to the producer by Kafka? One at a time, as a whole list, or one list per leader?
Produce requests are sent only to leaders.
As they contain batches of records, you will get one ProduceResponse per Produce request, carrying a result for each partition batch that request contained.
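To illustrate how those acknowledgements surface in the Java client: the callback attached to each send() fires once the response covering that record's batch comes back (or the send finally fails). Broker address and topic name are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AckCallbackSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        // Invoked when the ProduceResponse covering this record's
                        // batch arrives, or when retries are exhausted.
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("acked: partition=%d offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}
```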

Related

Difference between kafka batch and kafka request

I was not able to find a satisfactory answer anywhere; sorry if this question looks trivial:
In Kafka, on the producer side, can a request contain multiple batches destined for different partitions?
I see the words batch and request used as synonyms in the documentation, and I was hoping to find some clarity on this.
If yes, how does this affect the ack policy?
Are acks sent on a per-batch or per-request basis?
A Kafka request (and response) is a message sent over the network between a Kafka client and broker. The Kafka protocol uses many types of requests, you can find them all in the Kafka protocol documentation.
The Produce and Fetch requests are used to exchange records. They both contain Kafka batches, it's the RECORDS field in the protocol description. A Kafka batch is used to group several records together and saves some bytes by sharing the metadata for all records. You can find the exact format of a batch in the documentation.
TLDR:
Requests/responses are the full messages exchanged between Kafka clients and brokers. Some requests contain Kafka batches that are groups of records.
I'm not sure whether you are asking about the producer or consumer side, so here is some info for both that might answer your question.
On producer side:
By default, the Kafka producer accumulates records in a batch of up to 16 KB (batch.size).
By default, the producer will have up to 5 requests in flight, meaning that 5 batches can be sent to Kafka at the same time; meanwhile, the producer starts accumulating data for the next batches.
The acks config controls the number of brokers required to answer in order to consider each request successful (a config sketch with these defaults spelled out follows below).
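As an illustration, a minimal sketch that makes those producer defaults explicit (the broker address is a placeholder, and the values shown are the defaults, not recommendations):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerDefaultsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("batch.size", 16384);                         // up to 16 KB per partition batch
        props.put("max.in.flight.requests.per.connection", 5);  // up to 5 unacknowledged requests
        props.put("acks", "1");                                 // brokers required to acknowledge

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records as usual; batching happens transparently in the background
        }
    }
}
```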
On consumer side:
By default, the Kafka consumer regularly calls poll() and gets a maximum of 500 records per poll (max.poll.records).
Also by default, the Kafka consumer commits offsets every 5 seconds (auto.commit.interval.ms).
Meaning that the consumer will commit all the records that have been polled during the last 5 seconds on the subsequent calls to poll(); see the sketch after this list.
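And a matching consumer-side sketch with those defaults made explicit (broker address, group id, and topic are placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerDefaultsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("max.poll.records", 500);          // at most 500 records per poll()
        props.put("enable.auto.commit", true);       // commit offsets automatically...
        props.put("auto.commit.interval.ms", 5000);  // ...roughly every 5 seconds

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```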
Hope this helps!

If I use Kafka as a simple message queue, is it really worth it?

=== Assume everything is from the consumer's point of view ===
I was reading a couple of Kafka articles and I saw that the number of partitions is coupled to the number of micro-service instances... E.g., if I have 1 topic with 1 partition for my serviceA: the producer pushes messages to topic T1, partition P1, and on the consumer side (serviceA1) I can read from T1/P1. If I spin up a new pod (serviceA2) to get higher throughput, the second instance will never receive any message, because Kafka assigns each partition to a single consumer in the group and partition P1 is already taken by serviceA1. So serviceA2 and any further instances stay idle... To avoid this hassle, Kafka recommends adding more partitions, so that the number of consumers can be increased or decreased based on need.
I was also able to test this through the command line, and service2 never consumed any message. If I shut down service1, then service2 was able to pick up new messages... So spinning up more pods increases fail-safety/availability, but throughput stays the same...
Is my assumption correct? Am I missing anything? Now I feel like any standard messaging system will have the same problem... How do you scale message-oriented systems in general?
Every topic has partitions; by default it comes with only one partition if you don't define the partition count. In your case, you have a consumer group that consists of two consumers. Each consumer reads the log from a partition. Here, the first consumer reads the log from the first (and only) partition, and there is no partition left for the second consumer to read from, so it becomes idle. Only once the first consumer goes down does the second consumer start reading data from the first partition, from the last committed offset.
Please check the blogs and videos below. They explain topics, consumers, and consumer groups in Kafka.
https://www.javatpoint.com/apache-kafka-consumer-and-consumer-groups
http://cloudurable.com/blog/kafka-architecture-consumers/index.html
https://docs.confluent.io/platform/current/clients/consumer.html
https://www.youtube.com/watch?v=lAdG16KaHLs
I hope this gives you an idea about consumers and consumer groups.
A broad solution to this is to decouple consumption of a message (i.e. receiving a message from Kafka and perhaps deserializing it and validating that it conforms to the schema) from processing it (interpreting the message). If consumption is simple enough, being limited to no more consuming instances than there are partitions need not be a constraint.
One way to accomplish this is to have a Kafka consumption service which sends an HTTP request (perhaps through a load balancer or whatever) to a processing service which has arbitrarily many members.
Note that depending on what you're using Kafka for, there may be a requirement that certain messages always be in the same partition as one another in order to ensure that they get handled in a deterministic order (since ordering across partitions is not guaranteed). A typical example of this would be if the messages are change events for a particular record. If you're accomplishing this via some hash of the message key (or a portion of the key if using a custom partitioner), then simply changing the number of partitions might not be viable (you would need to introduce some sort of migration or have the producers know which records have to be routed to the old partitions and only route to the new partitions if the record has never been seen before).
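For illustration, this is roughly what the default partitioner does for keyed records; the sketch borrows murmur2/toPositive from the Kafka clients' Utils class and is an approximation of the real code path, not a drop-in replacement. It shows why adding partitions can remap existing keys:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class KeyToPartitionSketch {
    // Hash the serialized key and take it modulo the partition count.
    // Changing the partition count changes the modulus, so a key that
    // used to land on one partition may land on another afterwards.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("record-42", 3)); // partition with 3 partitions
        System.out.println(partitionFor("record-42", 6)); // may differ with 6 partitions
    }
}
```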
We just started replacing messaging with Kafka.
In a traditional MQ setup there is a cluster with one or more queues inside.
The MQ cluster/coordinator service delivers the messages to clients.
There can be 10 services/clients consuming messages from a single queue, so if there are 10 messages in the queue, each service/consumer/client can read/process 1 message.
This is not possible in Kafka, which I now understand is by design.
To achieve similar functionality in Kafka, I have to add at least as many partitions as there are clients/consumers/pods (a sketch of growing a topic's partition count follows below).
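For reference, growing a topic's partition count can be done programmatically with the Admin API; a minimal sketch, where the topic name and target count are made up. Note that partition counts can only be increased, and doing so remaps keyed records as discussed above:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");

        try (Admin admin = Admin.create(props)) {
            // Grow "my-topic" to 10 partitions so that up to 10 consumers
            // in one group can read in parallel.
            admin.createPartitions(Map.of("my-topic", NewPartitions.increaseTo(10)))
                 .all()
                 .get();
        }
    }
}
```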

If the partition a Kafka producer tries to send messages to goes offline, can the producer try to send to a different partition?

My Kafka cluster has 5 brokers and the replication factor is 3 for topics. At some point some partitions went offline, but eventually they came back online. My questions are:
Given that there were offline partitions, how many brokers does that indicate were down? I think that with the cluster setup above I can afford to lose 2 brokers at the same time. However, if 2 brokers were down, some partitions would no longer have a quorum; would those partitions go offline in that case?
If there are offline partitions and a Kafka producer tries to send messages to them and fails, will the producer try a different partition that may be online? The messages have no key in them.
I am not sure I understood your question completely, but I have the impression that you are mixing up partitions and replicas. At the very least, your question cannot be looked at in isolation on the producer: as soon as one broker is down, some things will happen on the cluster.
Each TopicPartition has one partition leader, and your clients (e.g. producer and consumer) only communicate with this one leader, independent of the number of replicas.
In the case where two out of five brokers are not available, Kafka will move the partition leaders as well as the replicas to healthy brokers. In that scenario you should therefore not get into trouble, although it might take some time and a few retries for the new leaders to be elected and the new replicas to be created on the healthy brokers. Leader election can happen quickly because you have set the replication factor to three, so even if two brokers go down, one broker should still have the complete data (assuming all partitions were in sync). However, creating two new replicas could take some time depending on the amount of data. For that scenario you need to look into the topic-level configuration min.insync.replicas and the KafkaProducer configuration acks (see below).
I think the following are the most important configurations for your KafkaProducer to handle such a situation:
bootstrap.servers: If you are anticipating regular connection problems with your brokers, you should list all five of them. Although it is sufficient to mention only one address (that broker will tell the client about all the other brokers in the cluster), it is safer to have them all listed in case one or even two brokers are unavailable.
acks: This defaults to 1 (in clients before Kafka 3.0; newer clients default to all) and defines the number of acknowledgments the producer requires the partition leader to have received before considering a request successful. Possible values are 0, 1 and all.
retries: This defaults to 2147483647 and causes the client to resend any record whose send fails with a potentially transient error, until delivery.timeout.ms is reached.
delivery.timeout.ms: An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms.
You will find more details in the documentation on the producer configs.
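Putting those configs together, a sketch of a producer set up to ride out broker failures; the addresses are placeholders and the values are illustrative, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ResilientProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // List all brokers so the client can still bootstrap when some are down.
        props.put("bootstrap.servers",
                "broker1:9092,broker2:9092,broker3:9092,broker4:9092,broker5:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                  // wait for the in-sync replicas
        props.put("retries", Integer.MAX_VALUE);   // retry transient failures...
        props.put("delivery.timeout.ms", 120000);  // ...within this total time bound

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // produce as usual; leader failover is absorbed by retries
        }
    }
}
```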

Under what circumstances will Kafka overflow?

I am testing a POC where data will be sent from 50+ servers over UDP to a Kafka client (using a UDP-to-Kafka bridge). Each server generates ~2000 messages/sec, which adds up to 100K+ messages/sec across all 50+ servers. My questions are:
What will happen if Kafka is not able to ingest all those messages?
Will that create a backlog on my 50+ servers?
If I have a Kafka cluster, how do I decide which server sends messages to which broker?
1) It depends on what you mean by "not able" to ingest. If you mean it's bottlenecked by the network, then you'll likely just get timeouts when trying to produce some of your messages. If you have retries set to anything other than the default (0), the producer will attempt to send each message that many times. You can also customise request.timeout.ms, whose default value is 30 seconds. If you mean in terms of disk space, the broker will probably crash.
2) You do this in a roundabout way. When you create a topic with N partitions, the brokers will create those partitions, one by one, on whichever servers have the least usage at that time. This means that, if you have 10 brokers and you create a 10-partition topic, each broker should receive one partition. Now, you can specify which partition to send a message to, but you can't specify which broker that partition will be on. It is possible to find out which broker a partition is on after the topic has been created, so you could build a map of partitions to brokers and use that to choose which broker to send to (see the sketch below).
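For example, the Java producer exposes that partition-to-leader mapping via partitionsFor(); a sketch with placeholder topic and broker address:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringSerializer;

public class PartitionLeaderMapSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The cluster metadata says which broker currently leads each partition.
            for (PartitionInfo info : producer.partitionsFor("my-topic")) {
                System.out.printf("partition %d -> leader %s%n",
                        info.partition(), info.leader());
            }
        }
    }
}
```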

Apache Kafka- Algorithm/Strategy used to pull messages from different partitions of a same topic by a Single Consumer

I have been studying Apache Kafka for a while now.
Let's consider the following example.
Consider I have a topic with 3 partitions, a single producer, and a single consumer. I am producing my messages without specifying the key attribute.
So I know that on the producer side, when I publish a message, the strategy Kafka uses to assign it to one of those partitions is round-robin.
Now, what I want to know is: when I start a single consumer belonging to a certain consumer group listening to that same topic, what strategy will it use to pull the messages from the different partitions (as there are 3)?
Would it follow a similar round-robin model, where it sends a fetch request to the leader of partition 1, waits for a response, gets the response, and returns the records to process, then sends a fetch request to the leader of partition 2, and so on?
If it follows some other strategy/algorithm, I would love to know what it is.
Thank you in advance.
There is no ordering guarantee outside of a partition, so in a way the algorithm used is moot to the end user and subject to change.
Today, there is nothing terribly complex that happens in this instance. The protocol shows that a fetch request specifies which partitions to read from, so fetching effectively happens per partition, and the resulting order depends on the consumer. A partition won't be starved, because fetch requests will happen for all partitions assigned to the consumer.
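Seen from the application side, a single poll() can return records from several of the consumer's partitions, and the consumer API lets you inspect the result per partition. A minimal sketch (broker address, group id, and topic are placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PerPartitionFetchSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            // Records can come from any assigned partition; ordering is only
            // guaranteed within each partition, not across them.
            for (TopicPartition tp : records.partitions()) {
                for (ConsumerRecord<String, String> r : records.records(tp)) {
                    System.out.printf("%s offset=%d value=%s%n", tp, r.offset(), r.value());
                }
            }
        }
    }
}
```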