My messages are about 25KB each on average, and I am getting only 15 to 18 records on each poll. How can I get more records per poll?
Below is my Kafka consumer configuration:
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "ConsumerGrp");
properties.put("request.timeout.ms", 120000);
properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 7000);
properties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 4194304); // 4MB
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
ConsumerRecords<String, Object> crs = this.consumer.poll(Duration.ofMillis(10000));
System.out.println("Polled count is "+crs.count());
I am using a JSR223 sampler to post a JSON message to Kafka using the Kafka client jar. When I post, the message arrives in Kafka as null, and the application also receives it as null. Can someone tell me what I am missing? Below is my code.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;
import java.nio.charset.StandardCharsets;
import groovy.json.JsonSlurper;
import java.util.ArrayList;
import org.apache.jmeter.threads.JMeterVariables;
Properties props = new Properties();
props.put("bootstrap.servers", "lxkfkbkomsstg01.lowes.com:9093,lxkfkbkomsstg02.lowes.com:9093,lxkfkbkomsstg03.lowes.com:9093,lxkfkbkomsstg04.lowes.com:9093,lxkfkbkomsstg05.lowes.com:9093");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("compression.type", "none");
props.put("batch.size", "16384");
props.put("linger.ms", "0");
props.put("buffer.memory", "33554432");
props.put("acks", "1");
props.put("send.buffer.bytes", "131072");
props.put("receive.buffer.bytes", "32768");
props.put("security.protocol", "SSL");
//props.put("sasl.kerberos.service.name", "kafka");
//props.put("sasl.mechanism", "GSSAPI");
//props.put("ssl.keystore.type", "JKS");
props.put("ssl.truststore.location", "/Users/rajkumar/Documents/EOMS/eoms-truststore-stage.jks");
props.put("ssl.truststore.password", "4DxYJnVDcPi6E8w3uCS63qoa");
props.put("ssl.endpoint.identification.algorithm", "");
props.put("ssl.protocol", "SSL");
props.put("ssl.truststore.type", "JKS");
String eventType="orbit_pick";
KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
List<Header> headers = new ArrayList<Header>();
Header header = new RecordHeader("event_type",eventType.getBytes(StandardCharsets.UTF_8));
headers.add(header);
// headers.add(new RecordHeader("event_type",eventType.getBytes(StandardCharsets.UTF_8)));
Date latestdate = new Date();
ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>("orbit.shipment.lfs.inbound.prf", 1, latestdate.getTime(), "702807441", "{\"SellerOrganizationCode\":\"LOWES\",\"ShipNode\":\"0224\",\"IsShortage\":\"N\",\"ShipmentKey\":\"2022021708351092902896763\",\"ShipmentNo\":\"702807441\",\"Extn\":{\"ExtnPickingHasStartedFlag\":\"Y\",\"ExtnSourceSystem\":\"StoreOrderSvc\",\"ExtnPickerId\":\"98977\",\"ExtnOperation\":\"pick\",\"ExtnInPickupLocker\":\"N\"},\"Instructions\":{\"Instruction\":{\"InstructionText\":\"picking\"},\"Replace\":\"Y\"},\"ShipmentLines\":{\"ShipmentLine\":[{\"BackroomPickedQuantity\":\"1\",\"Quantity\":\"3\",\"CodeValue\":\"\",\"ShipmentLineNo\":\"1\",\"ShipmentSubLineNo\":\"0\",\"ShortageQty\":\"\",\"ItemID\":\"1505\",\"NewShipNode\":\"\",\"Extn\":null}]},\"MessageID\":\"8970549709qqaachwejhk\",\"MessageTimeStamp\":\\"${__time(yyyy-MM-dd'T'hh:mm:ss)}\",\"eventType\":\"orbit_multiple_shipments_customer_pickup\"}", headers);
producer.send(producerRecord);
producer.close();
I don't see any code, so it's expected that it doesn't send anything to Kafka.
Here is a sample you can use as a reference:
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
def props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("acks", "all")
props.put("retries", 0)
props.put("batch.size", 16384)
props.put("linger.ms", 1)
props.put("buffer.memory", 33554432)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
def producer = new KafkaProducer<>(props)
producer.send(new ProducerRecord<>("your-topic", "your-key", "your-json-here"))
producer.close()
P.S. Are you aware of the Pepper-Box - Kafka Load Generator plugin? You may find it easier to configure and use. Check out the Apache Kafka - How to Load Test with JMeter article for more information.
I'm trying to make a producer using Kafka and Spring Boot.
I created a new application to produce messages on a topic, to be consumed by another application. When I start the server, the topic is not recognized right from the start. The error is shown below:
2021-09-03 15:33:20.024 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=lims-public-helper] Error while fetching metadata with correlation id 9 : { sms.requests=UNKNOWN_TOPIC_OR_PARTITION}
2021-09-03 15:33:20.026 WARN 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-1, groupId=lims-public-helper] The following subscribed topics are not assigned to any members: [ sms.requests]
I tried another server and it works fine with the same configuration; on this server it gives the exception.
Kafka topics
$ kaf topics
NAME                 PARTITIONS   REPLICAS
__consumer_offsets   50           3
__trace              9            1
sms.requests         3            1
sms.status           1            3
test                 1            3
Consumer code:
public ConsumerFactory<String, OtpDTO> otpConsumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
props.put(ConsumerConfig.GROUP_ID_CONFIG, limsGroupId);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(JsonSerializer.TYPE_MAPPINGS, "otpDTO:com.lims.helper.dto.OtpDTO");
props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.lims.helper.dto.OtpDTO");
props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(OtpDTO.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, OtpDTO> otpKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, OtpDTO> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(otpConsumerFactory());
return factory;
}
Consumer listener details:
@KafkaListener(topics = "${spring.kafka.topic.lims.sms.otp}", containerFactory = "otpKafkaListenerContainerFactory")
public void otpTopicMessage(@Payload OtpDTO otpDTO) {
log.info(String.format("--------##### otp topic consumer: %s", otpDTO));
}
Topic properties:
spring.kafka.topic.lims.sms.otp=sms.requests
spring.kafka.topic.lims.sms.status=sms.status
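An editorial aside, not part of the original post: the broker log above prints the topic as " sms.requests" with a leading space, so it is worth confirming that the configured topic name carries no stray whitespace. If the topic should also be created or verified on startup, a hedged sketch using spring-kafka's TopicBuilder (partition and replica counts taken from the kaf topics output above) could look like this:
// Sketch only: declares the topic as a bean so spring-kafka's KafkaAdmin can create or verify it.
// Uses org.apache.kafka.clients.admin.NewTopic and org.springframework.kafka.config.TopicBuilder.
@Bean
public NewTopic smsRequestsTopic() {
    return TopicBuilder.name("sms.requests")
            .partitions(3)
            .replicas(1)
            .build();
}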
I am using the following configuration for Kafka message consumption.
As soon as I stop producing messages, the consumer goes idle for a few seconds and then starts reading old messages.
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, Boolean.TRUE);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer");
props.put("auto.commit.interval.ms", "100");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100000");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "500");
props.put("sasl.mechanism", "PLAIN");
I want to read only recent messages.
Please add the below property to the props:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
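Note that auto.offset.reset only applies when the group has no valid committed offsets; otherwise the consumer resumes from its last commit. If the intent is to always jump to the newest records whenever the consumer (re)joins the group, one alternative (a sketch, not part of the original answer; the topic name is a placeholder) is to seek to the end on assignment:
// Sketch only: skip to the log end for every newly assigned partition.
// Uses ConsumerRebalanceListener, TopicPartition, java.util.Collection and java.util.Collections.
consumer.subscribe(Collections.singletonList("your-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do before the rebalance in this sketch
    }
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        consumer.seekToEnd(partitions);
    }
});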
Hi, I'm new to Kafka and I have a quick question.
I implemented a Kafka producer and consumer.
ZooKeeper and the producer are running on one server (192.168.10.233).
The consumer is running on another server (192.168.10.234).
Both are on the same local network.
The problem is:
The consumer connects but does not receive any messages; if I move the listening part to the same server (192.168.10.233), it receives the messages.
This is my consumer code:
def listen(): Unit = {
  import scala.collection.JavaConverters._

  val props = new Properties()
  props.put("bootstrap.servers", "192.168.10.233:9092")
  props.put("group.id", "groupId")
  props.put("enable.auto.commit", "true")
  props.put("auto.commit.interval.ms", "1000")
  props.put("session.timeout.ms", "30000")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val consumer = new KafkaConsumer[String, String](props)
  println("calling ---- but yet to receive the message")
  consumer.subscribe(List("test").asJava)
  while (true) {
    val records = consumer.poll(100)
    for (record <- records.asScala)
      println(s"offset = ${record.offset()}, key = ${record.key()}, value = ${record.value()}")
  }
}
I also checked 192.168.10.233:9092 from outside to make sure the port is not blocked by anything.
Most likely you have to set advertised.host.name in your kafka/config/server.properties to a value that is routable from outside.
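For reference, a hedged sketch of what that could look like in kafka/config/server.properties (the IP and port are taken from the question; advertised.host.name is the legacy name the answer refers to, while newer brokers use advertised.listeners):
# Legacy setting (older brokers):
advertised.host.name=192.168.10.233
# Equivalent on newer brokers:
advertised.listeners=PLAINTEXT://192.168.10.233:9092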
I set up a single-node Kafka and tried a simple pub/sub pattern like this:
From my laptop I produce some messages with this code:
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.23.152:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 10; i++)
producer.send(new ProducerRecord<String, String>("tp3", Integer.toString(i), "hello " + Integer.toString(i)));
producer.close();
and I also wrote a simple consumer:
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.23.152:9092");
props.put("group.id", "g1");
props.put("client.id","client1");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("auto.offset.reset", "latest");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("tp3"));
while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records)
System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
TimeUnit.SECONDS.sleep(1000);
}
But the consumer did not retrieve anything.
Can anyone please explain what happened?
I'm sure the producer works well, because I used the console command to retrieve the messages and it worked perfectly (I attached an image as proof).
Any help is appreciated :(
According to the Kafka FAQ:
Why does my consumer never get any data?
By default, when a consumer is started for the very first time, it ignores all existing data in a topic and will only consume new data coming in after the consumer is started. If this is the case, try sending some more data after the consumer is started. Alternatively, you can configure the consumer by setting auto.offset.reset to "earliest" for the new consumer in 0.9 and "smallest" for the old consumer.
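In the consumer shown above, auto.offset.reset is explicitly set to "latest", so applying the FAQ's suggestion is a one-line change (a sketch; whether "earliest" is the desired behaviour depends on the use case):
// Start from the beginning of the topic when the group has no committed offsets yet.
props.put("auto.offset.reset", "earliest");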