How do I usually set "batch.size" and "buffer.memory" in Apache Kafka? [duplicate] - apache-kafka

I send String-messages to Kafka V. 0.8 with the Java Producer API.
If the message size is about 15 MB I get a MessageSizeTooLargeException.
I have tried to set message.max.bytes to 40 MB, but I still get the exception. Small messages worked without problems.
(The exception appears in the producer; I don't have a consumer in this application.)
What can I do to get rid of this exception?
My example producer config
private ProducerConfig kafkaConfig() {
    Properties props = new Properties();
    props.put("metadata.broker.list", BROKERS);
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1");
    props.put("message.max.bytes", "" + 1024 * 1024 * 40);
    return new ProducerConfig(props);
}
Error-Log:
4709 [main] WARN kafka.producer.async.DefaultEventHandler - Produce request with correlation id 214 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
4869 [main] WARN kafka.producer.async.DefaultEventHandler - Produce request with correlation id 217 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
5035 [main] WARN kafka.producer.async.DefaultEventHandler - Produce request with correlation id 220 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
5198 [main] WARN kafka.producer.async.DefaultEventHandler - Produce request with correlation id 223 failed due to [datasift,0]: kafka.common.MessageSizeTooLargeException
5305 [main] ERROR kafka.producer.async.DefaultEventHandler - Failed to send requests for topics datasift with correlation ids in [213,224]
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
at kafka.producer.Producer.send(Unknown Source)
at kafka.javaapi.producer.Producer.send(Unknown Source)

You need to adjust three (or four) properties:
Consumer side: fetch.message.max.bytes - this will determine the largest size of a message that can be fetched by the consumer.
Broker side: replica.fetch.max.bytes - this will allow for the replicas in the brokers to send messages within the cluster and make sure the messages are replicated correctly. If this is too small, then the message will never be replicated, and therefore, the consumer will never see the message because the message will never be committed (fully replicated).
Broker side: message.max.bytes - this is the largest size of the message that can be received by the broker from a producer.
Broker side (per topic): max.message.bytes - this is the largest size of the message the broker will allow to be appended to the topic. This size is validated pre-compression. (Defaults to broker's message.max.bytes.)
I found out the hard way about number 2 (replica.fetch.max.bytes) - you don't get ANY exceptions, messages, or warnings from Kafka, so be sure to consider this when you are sending large messages.
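As a hedged illustration of where these settings would go (the 40 MB value matches the question and is only an example):
# broker (config/server.properties)
message.max.bytes=41943040
replica.fetch.max.bytes=41943040
# per-topic alternative to the broker-wide limit (set as a topic-level config, not in server.properties)
# max.message.bytes=41943040
# consumer (old high-level consumer) - must be at least as large as the largest message
fetch.message.max.bytes=41943040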

Minor changes required for Kafka 0.10 and the new consumer compared to laughing_man's answer:
Broker: No changes, you still need to increase properties message.max.bytes and replica.fetch.max.bytes. message.max.bytes has to be equal to or smaller(*) than replica.fetch.max.bytes.
Producer: Increase max.request.size to send the larger message.
Consumer: Increase max.partition.fetch.bytes to receive larger messages.
(*) Read the comments to learn more about message.max.bytes<=replica.fetch.max.bytes
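As a hedged illustration of the two client-side settings for the new (0.9+/0.10) clients, assuming a 15 MB limit; the broker address, group id, and serializer choices are placeholders, and the broker-side message.max.bytes / replica.fetch.max.bytes still go in server.properties as noted above:
// producer
Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "broker1:9092");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("max.request.size", "15728640");           // allow larger produce requests

// consumer
Properties consumerProps = new Properties();
consumerProps.put("bootstrap.servers", "broker1:9092");
consumerProps.put("group.id", "big-message-group");
consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
consumerProps.put("max.partition.fetch.bytes", "15728640");  // allow larger fetches per partition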

The answer from laughing_man is quite accurate. But still, I want to give a recommendation which I learned from Kafka expert Stephane Maarek. We actively applied this solution in our live systems.
Kafka isn’t meant to handle large messages.
Your API should use cloud storage (for example, AWS S3) and simply push a reference to that object to Kafka or any other message broker. You'll need to find a place to store your data, whether that's a network drive or something else entirely, but it shouldn't be a message broker.
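A minimal sketch of that reference-passing pattern, assuming the upload to S3 (or any other store) is handled by whatever storage client you use; the URI, topic, key, and broker address are made-up placeholders:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ReferencePublisherSketch {
    public static void main(String[] args) {
        // 1. upload the large payload to object storage (S3, HDFS, a network share, ...) with your
        //    storage client of choice; this URI is a hypothetical example of what that upload returns
        String payloadUri = "s3://my-bucket/messages/payload-123.bin";

        // 2. publish only the small reference to Kafka
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("datasift", "payload-123", payloadUri));
        }
        // 3. the consumer reads the URI and fetches the actual payload from the store itself
    }
}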
If you don't want to proceed with the recommended and reliable solution above:
The default maximum message size in Apache Kafka is 1 MB (the setting in your brokers is called message.max.bytes). If you really need it badly, you could increase that size, and make sure to also increase the network buffers for your producers and consumers.
And if you really care about splitting your message, make sure each message split has the exact same key so that it gets pushed to the same partition, and your message content should report a “part id” so that your consumer can fully reconstruct the message.
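A hedged sketch of that splitting approach; it assumes an already-configured KafkaProducer<String, byte[]> named producer (ByteArraySerializer for values) and a hypothetical loadLargePayload() helper, and the topic, key, and header names are illustrative:
byte[] payload = loadLargePayload();
int chunkSize = 900 * 1024;                                    // stay safely below the 1 MB default limit
int totalParts = (payload.length + chunkSize - 1) / chunkSize;
for (int part = 0; part < totalParts; part++) {
    int from = part * chunkSize;
    int to = Math.min(from + chunkSize, payload.length);
    ProducerRecord<String, byte[]> chunk =
            new ProducerRecord<>("datasift", "message-123", java.util.Arrays.copyOfRange(payload, from, to));
    // same key => same partition and preserved ordering; the part index lets the consumer reassemble
    chunk.headers().add("part", Integer.toString(part).getBytes(java.nio.charset.StandardCharsets.UTF_8));
    chunk.headers().add("total-parts", Integer.toString(totalParts).getBytes(java.nio.charset.StandardCharsets.UTF_8));
    producer.send(chunk);
}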
If the message is text-based, try compressing the data, which may reduce the data size, but not magically.
Again, you have to use an external system to store that data and just push an external reference to Kafka. That is a very common and widely accepted architecture, and the one you should go with.
Keep in mind that Kafka works best when messages are huge in number, not in size.
Source: https://www.quora.com/How-do-I-send-Large-messages-80-MB-in-Kafka

The idea is to support an equal message size all the way from the Kafka Producer to the Kafka Broker and on to the Kafka Consumer, i.e.
Kafka producer --> Kafka Broker --> Kafka Consumer
Suppose the requirement is to send a 15 MB message; then the Producer, the Broker, and the Consumer, all three, need to be in sync.
Kafka Producer sends 15 MB --> Kafka Broker Allows/Stores 15 MB --> Kafka Consumer receives 15 MB
The settings therefore should be:
a) on Broker:
message.max.bytes=15728640
replica.fetch.max.bytes=15728640
b) on Consumer:
fetch.message.max.bytes=15728640

You need to override the following properties:
Broker Configs ($KAFKA_HOME/config/server.properties):
replica.fetch.max.bytes
message.max.bytes
Consumer Configs ($KAFKA_HOME/config/consumer.properties):
fetch.message.max.bytes
(This step didn't work for me via consumer.properties; I added it in the consumer application instead and it worked fine.)
Restart the server.
Look at this documentation for more info:
http://kafka.apache.org/08/configuration.html
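If you go the consumer-application route mentioned above, here is a hedged sketch of setting it programmatically with the 0.8 high-level consumer; the ZooKeeper address, group id, and the 40 MB value are placeholders:
Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181");
props.put("group.id", "big-message-group");
props.put("fetch.message.max.bytes", "" + 1024 * 1024 * 40);  // must be >= the broker's message.max.bytes
kafka.javaapi.consumer.ConsumerConnector consumer =
        kafka.consumer.Consumer.createJavaConsumerConnector(new kafka.consumer.ConsumerConfig(props));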

I think most of the answers here are somewhat outdated or not entirely complete.
Referring to the answer from Sacha Vetter (with the update for Kafka 0.10), I'd like to provide some additional information and links to the official documentation.
Producer Configuration:
max.request.size (Link) has to be increased for files bigger than 1 MB, otherwise they are rejected
Broker/Topic configuration:
message.max.bytes (Link) may be set if one wants to increase the message size at the broker level. But, from the documentation: "This can be set per topic with the topic level max.message.bytes config."
max.message.bytes (Link) may be increased if only one topic should be able to accept larger files. The broker configuration does not need to be changed in that case.
I'd always prefer a topic-restricted configuration, because I can configure the topic myself as a client of the Kafka cluster (e.g. with the admin client), whereas I may not have any influence on the broker configuration itself.
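For example, a hedged sketch of setting the topic-level max.message.bytes with the admin client; the broker address, topic name, and the 15 MB value are placeholders, and it assumes a client version recent enough to support incrementalAlterConfigs:
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicMaxMessageBytesSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-big-message-topic");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("max.message.bytes", "15728640"), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(topic, Collections.singletonList(op));
            admin.incrementalAlterConfigs(updates).all().get();  // apply the topic-level override
        }
    }
}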
In the answers above, some more configurations are mentioned as necessary:
replica.fetch.max.bytes (Link) (Broker config)
From the documentation: "This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made."
max.partition.fetch.bytes (Link) (Consumer config)
From the documentation: "Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress."
fetch.max.bytes (Link) (Consumer config; not mentioned above, but same category)
From the documentation: "Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress."
Conclusion: The fetch-related configurations do not need to be changed in order to process messages larger than their default values (I tested this in a small setup). In that case the consumer may simply always get batches of size 1. However, the two configurations from the first block (producer and broker/topic) have to be set, as mentioned in the answers before.
This clarification says nothing about performance and is not a recommendation to set or not to set these configurations. The best values have to be evaluated individually, depending on the planned throughput and data structures.

One key thing to remember is that the message.max.bytes attribute must be in sync with the consumer's fetch.message.max.bytes property. The fetch size must be at least as large as the maximum message size; otherwise there could be a situation where producers can send messages larger than the consumer can consume/fetch. It might be worth taking a look at that.
Which version of Kafka are you using? Also, provide some more details of the trace you are getting. Is there something like ... payload size of xxxx larger than 1000000 coming up in the log?

For people using Landoop Kafka:
You can pass the config values in the environment variables like:
docker run -d --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 \
  -e KAFKA_TOPIC_MAX_MESSAGE_BYTES=15728640 -e KAFKA_REPLICA_FETCH_MAX_BYTES=15728640 landoop/fast-data-dev:latest
This sets topic.max.message.bytes and replica.fetch.max.bytes on the broker.
And if you're using rdkafka then pass the message.max.bytes in the producer config like:
const producer = new Kafka.Producer({
  'metadata.broker.list': 'localhost:9092',
  'message.max.bytes': '15728640',
  'dr_cb': true
});
Similarly, for the consumer,
const kafkaConf = {
  "group.id": "librd-test",
  "fetch.message.max.bytes": "15728640",
  ... ..
};

Here is how I successfully sent data of up to 100 MB using kafka-python==2.0.2:
Consumer:
consumer = KafkaConsumer(
    ...
    max_partition_fetch_bytes=max_bytes,
    fetch_max_bytes=max_bytes,
)
Producer (See final solution at the end):
producer = KafkaProducer(
    ...
    max_request_size=KafkaSettings.MAX_BYTES,
)
Then:
producer.send(topic, value=data).get()
After sending data like this, the following exception appeared:
MessageSizeTooLargeError: The message is n bytes when serialized which is larger than the total memory buffer you have configured with the buffer_memory configuration.
Finally I increased buffer_memory (default 32 MB), and the message went through and was received on the other end.
producer = KafkaProducer(
    ...
    max_request_size=KafkaSettings.MAX_BYTES,
    buffer_memory=KafkaSettings.MAX_BYTES * 3,
)

Related

kafka stream state store issue with max.request.size paramater

We are using the Kafka Streams state store in the project, and we want to store more than 1 MB of data, but we got the exception below:
The message is 1760923 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
Then I followed the link Add prefix to StreamsConfig to enable setting default internal topic configs and added the following config:
topic.max.request.size=50000000
Then the application works fine, as long as the state store internal topic has already been created. But when Kafka was restarted and the state store topic was lost/deleted, the Kafka Streams processor needed to create the internal state store topic automatically on application start, and at that moment it threw an exception which says:
"Aorg.apache.kafka.streams.errors.StreamsException: Could not create topic data-msg-seq-state-store-changelog. at org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:148)....
.....
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774) Caused by: org.apache.kafka.common.errors.InvalidConfigurationException: Unknown topic config name: max.request.size".
One workaround is to manually create the internal topic, but that should not be the right solution.
Can you help me on this issue? If there is any config I have missed?
Thanks very much.
17 June 2020 update: still not resolve the issue. anyone can help?
The solution that you are looking for lies in the Kafka Streams configuration properties that you set before starting the stream.
props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "5242880");
The value I used here is 5 MB in bytes. You can change the value to suit your needs.
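For completeness, a hedged sketch of how such a Streams configuration might look; the application id, broker address, and the 5 MB value are placeholders, and it assumes an existing StreamsBuilder named builder. Note that max.request.size is a producer config and not a valid topic config (hence the "Unknown topic config" error); the topic-level counterpart is max.message.bytes:
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "data-msg-seq-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// raise the request limit of the producer embedded in the Streams application
props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "5242880");
// topic-level override applied to internal topics (changelog/repartition) created by Streams
props.put(StreamsConfig.topicPrefix(TopicConfig.MAX_MESSAGE_BYTES_CONFIG), "5242880");
KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();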
I don't see a topic configuration named max.request.size. Maybe it is max.message.bytes (see the topic configuration reference). So you may try setting that.
You can refer to the broker setting message.max.bytes and increase it. It is set at the broker level.
Documentation states:
The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.
Default: 1048588 (~1 MB) (Confluent Kafka)
Also refer to the following Stack Overflow answer.

Weird behavior with partition fetch max bytes with Kafka consumer

I have a topic, A, with 12 partitions. I have 3 Kafka brokers in a cluster. There are 4 partitions per broker for topic A. I haven't created any replicas as I am not concerned with resiliency.
I have a simple Java Consumer using the kafka-clients library. I have set the following properties:
Properties properties = new Properties();
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-serverA:9092,kafka-serverB:9092,kafka-serverC:9092");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupID);
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
properties.setProperty("max.partition.fetch.bytes", "100000");
There is more code for ConsumerRecord and print the records, which is working fine. I have 12 messages in the topic and I have verified through "kafka-run-class.sh kafka.admin.ConsumerGroupCommand" that there is a message in each partition. The message size is 100000 bytes, exactly equal to the max.partition.fetch.bytes limit.
When I poll, I should see 12 messages come back as a response. However, the response is very erratic. Sometimes I see messages from 4 partitions, indicating that only one broker is responding to the consumer request, or sometimes I see 8. I never got a response from all 12 partitions. Just for testing, I removed the max.partition.fetch.bytes property. I observed the same behavior.
Am I missing anything? It seems the server1, server2, server3 entries in the bootstrap config are not picking all 3 brokers when serving the request.
Any help is greatly appreciated. I am running the brokers and the consumer on separate machines and they are adequately sized.
Am I missing anything? It seems the server1, server2, server3 entries in the bootstrap config are not picking all 3 brokers when serving the request.
In your Kafka bootstrap.servers property you have listed all your brokers, which is good.
One of the brokers will be picked for fetching metadata, which is basically the information about how many partitions the topic has and which broker is the leader for those partitions.
No matter which server is picked, any one of them can give information about the others.
Check that all of your brokers know about each other, i.e. that they belong to the same Kafka cluster and point to the same ZooKeeper instance(s).
All of the broker IPs that you mention must be accessible to your consumer. Therefore, ensure that you have set the appropriate advertised.listeners property.
For example, if advertised.listeners=PLAINTEXT://1.2.3.4:9092 then 1.2.3.4:9092 must be accessible by the consumer.
Also, by default Kafka offsets are automatically committed periodically after some time, and if messages have been read by a consumer with a particular group.id, they are not consumed again because their offsets are committed. So you may also want to try changing the group.id property and re-check.
Also, check whether you are running multiple consumers with the same group id; in that case, some partitions will be assigned to one consumer and some to the other.
You can troubleshoot this by using kafka-console-consumer with --from-beginning flag giving the topic and see if all the messages are consumed.
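For example (a hedged sketch; the broker address and topic name are taken from the question, and the fresh group id is made up so that previously committed offsets don't hide messages):
bin/kafka-console-consumer.sh --bootstrap-server kafka-serverA:9092 --topic A --group debug-check-1 --from-beginning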
You may also want to check default.api.timeout.ms parameter and try increasing the value, in case there is any network congestion causing the client to switch from one bootstrap server to other.

Producer side compression in apache kafka

I have enabled snappy compression on the producer side with a batch size of 64 KB, I am processing messages of 1 KB each, and I have set the linger time to infinity. Does this mean that until I process 64 messages, the producer won't send the messages to the Kafka output topic?
In other words, will the producer send each message to Kafka individually, or wait for 64 messages and send them in a single batch?
Because the offsets are increasing one by one rather than in multiples of 64.
Edit - using flink-kafka connectors
Messages are batched by the producer so that network usage is minimized, not so that they are written "as a batch" into Kafka's commit log. What you are seeing is correct behavior by Kafka, as each message needs to be accounted for individually: its key/partition relationship is identified, it is appended to the commit log, and then the offset is incremented. Unless the first two steps are done, the offset is not incremented.
There is also data replication to be taken care of (based on your configuration), and message tracking needs to be updated for each message received (to support the lag APIs).
Also note that the batch.size parameter considers the ready-to-ship message size, i.e. after it has been compressed and serialized by your favorite serializer.
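To make the batching behaviour concrete, here is a hedged sketch of a producer configured roughly as described in the question; the broker address and topic name are placeholders. A batch is sent when it reaches batch.size, when linger.ms expires, or when the producer is flushed or closed:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchingProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);        // 64 KB batches
        props.put(ProducerConfig.LINGER_MS_CONFIG, Integer.MAX_VALUE); // effectively "wait until the batch fills"

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 64; i++) {
                // ~1 KB messages accumulate into one compressed batch per partition
                producer.send(new ProducerRecord<>("out-topic", Integer.toString(i), "1 KB payload ..."));
            }
        } // close() flushes any open batch even if linger.ms has not elapsed
    }
}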

kafka broker is not gzipping my bigger size message , even though i specified compression type in the producer configuration

Below is my producer configuration; as you can see, the compression type is gzip. Even though I specified the compression type, why is the message not publishing, and why is it failing with the error below?
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, edi856KafkaConfig.getBootstrapServersConfig());
props.put(ProducerConfig.RETRIES_CONFIG, edi856KafkaConfig.getRetriesConfig());
props.put(ProducerConfig.BATCH_SIZE_CONFIG, edi856KafkaConfig.getBatchSizeConfig());
props.put(ProducerConfig.LINGER_MS_CONFIG, edi856KafkaConfig.getIntegerMsConfig());
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, edi856KafkaConfig.getBufferMemoryConfig());
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(Edi856KafkaProducerConstants.SSL_PROTOCOL, edi856KafkaConfig.getSslProtocol());
props.put(Edi856KafkaProducerConstants.SECURITY_PROTOCOL, edi856KafkaConfig.getSecurityProtocol());
props.put(Edi856KafkaProducerConstants.SSL_KEYSTORE_LOCATION, edi856KafkaConfig.getSslKeystoreLocation());
props.put(Edi856KafkaProducerConstants.SSL_KEYSTORE_PASSWORD, edi856KafkaConfig.getSslKeystorePassword());
props.put(Edi856KafkaProducerConstants.SSL_TRUSTSTORE_LOCATION, edi856KafkaConfig.getSslTruststoreLocation());
props.put(Edi856KafkaProducerConstants.SSL_TRUSTSTORE_PASSWORD, edi856KafkaConfig.getSslTruststorePassword());
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
and the error I am getting is below:
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1170632 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
2017-12-07_12:34:10.037 [http-nio-8080-exec-1] ERROR c.tgt.trans.producer.Edi856Producer - Exception while writing mesage to topic= '{}'
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.RecordTooLargeException: The message is 1170632 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
Also, what consumer configuration do we need to use if I want a string representation of the Kafka message on the consumer side?
Unfortunately you're encountering a rather odd issue with the new Producer implementation in Kafka.
Although the message size limit applied by Kafka at the broker level applies to a single compressed record set (potentially multiple messages), the new producer currently applies the max.request.size limit to the record prior to any compression.
This has been captured in https://issues.apache.org/jira/browse/KAFKA-4169 (created 14/Sep/16 and unresolved at time of writing).
If you are certain that the compressed size of your message (plus any overhead of the record set) will be smaller than the broker's configured max.message.bytes, you may be able to get away with increasing the value of the max.request.size property on your Producer without having to change any configuration on the broker. This would allow the Producer code to accept the size of the pre-compression payload, which would then be compressed and sent to the broker.
However it is important to note that should the Producer try to send a request that is too large for the configuration of the broker, the broker will reject the message and it will be up to your application to handle this correctly.
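For example, adding something like the following to the producer configuration shown in the question (the 2 MB value is only illustrative and assumes the compressed batch still stays within the broker's limits):
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "2097152"); // e.g. 2 MB, enough for the 1170632-byte record above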
Just read the error message :)
The message is 1170632 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration
The message is > 1 MB, which is the default maximum allowed by Apache Kafka. To allow larger messages, check the answers in How can I send large messages with Kafka (over 15MB)?

Kafka Producer Quotas

Here is the inbound messaging flow in our IoT platform:
Device ---(MQTT)---> RabbitMQ Broker ---(AMQP)---> Apache Storm ---> Kafka
I'm looking to implement a solution which effectively limits/throttles the amount of data published to Kafka per second on a per-client basis.
The current strategy in place utilizes Guava's RateLimiter, where each device gets its own locally cached instance. When a device message is received, the RateLimiter mapped to that deviceId is fetched from cache and the tryAcquire() method is invoked. If a permit was successfully acquired, then the tuple is forwarded to Kafka as usual; otherwise the quota is exceeded and the message is discarded silently. This approach is rather cumbersome and at some point doomed to fail or become a bottleneck.
I've been reading up on Kafka's byte-rate quotas and believe this would work perfectly in our case especially since Kafka clients can be configured dynamically. When a virtual device is created in our platform then a new client.id should be added where client.id == deviceId.
Let's assume the following use case as an example:
Admin creates 2 virtual devices: humidity & temp sensor
A rule is fired to create new user/clientId entries in Kafka for above devices
Set their producer quota values via Kafka CLI
Both devices emit an inbound event message
...?
Here's my question. If using a single Producer instance, is it possible to specify a client.id in the ProducerRecord or somewhere in the Producer prior to calling send()? If a Producer is allowed only a single client.id, does this mean each device must have its own Producer? If only a one-to-one mapping is allowed then would it be wise to cache potentially hundreds, if not thousands, of Producer instances, one for each device? Is there a better approach I'm not aware of yet?
Note: Our platform is an "open door system" meaning clients never get sent back an error response such as "Rate Exceeded" or any error for that matter. It's all transparent to the end user. For this reason, I can't interfere with data in RabbitMQ or re-route messages to different queues.. my only option to integrate this stuff lies in between Storm or Kafka.
You can configure the client.id per application: properties.put("client.id", "humidity") or properties.put("client.id", "temp")
For each client.id you can then set the quota values, e.g.:
producer_byte_rate=1024, consumer_byte_rate=2048, request_percentage=200
One doubt I have in relation to this configuration (producer_byte_rate=1024, consumer_byte_rate=2048, request_percentage=200): the Producer does not seem to pick up the inserted configuration, while the Consumer is working properly.
While you can specify client.id on the Producer object, remember that Producers are heavyweight, and you might not be willing to create multiple instances of them (especially on a one-per-device basis).
Regarding reducing the number of Producers, have you considered creating one on a per-user rather than a per-device basis, or even having a finite shared pool of them? Kafka message headers could then be used to discern which device actually produced the data. The drawback is that you would need to throttle message production on your side, so that one device does not grab all resources from the others.
However, you can limit the users on the Kafka broker side, with a configuration applying to the default user/client:
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
Updated config for entity: default client-id.
See https://kafka.apache.org/documentation/#design_quotas for more examples and an in-depth explanation.
How the messages are discerned depends on your architecture; possible solutions include:
a topic / partition per user (e.g. data-USERABCDEF)
if you decide to use common topics, then you can put producer data into message headers - https://kafka.apache.org/0110/javadoc/index.html?org/apache/kafka/common/header/Headers.html - or you can put it into the payload itself (see the sketch below)
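A hedged sketch of the per-user producer with a device-identifying header; the broker address, client id, topic, key, and header names are made-up placeholders, and the broker-side quota (producer_byte_rate etc.) is assumed to be keyed on this client.id:
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuotedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // broker-side quotas are enforced per client.id, so one shared producer per user gets that user's quota
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "user-abcdef");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("data-USERABCDEF", "humidity-sensor-1", "{\"humidity\": 42}");
            // a header identifies which device actually produced the data
            record.headers().add("device-id", "humidity-sensor-1".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}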