I am using the following configuration for Kafka message consumption.
As soon as I stop sending messages on the producer side, the consumer goes idle for a few seconds and then starts reading old messages.
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, Boolean.TRUE);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer");
props.put("auto.commit.interval.ms", "100");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100000");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "500");
props.put("sasl.mechanism", "PLAIN");
I want to read only recent messages.
Add the following property to the props:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
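For reference, a minimal sketch of how that property fits into the consumer setup from the question (only the settings shown in the question are repeated here; note that auto.offset.reset only takes effect when the group has no committed offset yet):
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// start from the end of the partition when no committed offset exists for this group
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);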
I am using a JSR223 sampler to post a JSON message to Kafka using the Kafka client jar. When I post, the message arrives in Kafka as null. Can someone tell me what I am missing? The message actually reaches the application as null. Below is my code.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;
import java.nio.charset.StandardCharsets;
import groovy.json.JsonSlurper;
import java.util.ArrayList;
import org.apache.jmeter.threads.JMeterVariables;
Properties props = new Properties();
props.put("bootstrap.servers", "lxkfkbkomsstg01.lowes.com:9093,lxkfkbkomsstg02.lowes.com:9093,lxkfkbkomsstg03.lowes.com:9093,lxkfkbkomsstg04.lowes.com:9093,lxkfkbkomsstg05.lowes.com:9093");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("compression.type", "none");
props.put("batch.size", "16384");
props.put("linger.ms", "0");
props.put("buffer.memory", "33554432");
props.put("acks", "1");
props.put("send.buffer.bytes", "131072");
props.put("receive.buffer.bytes", "32768");
props.put("security.protocol", "SSL");
//props.put("sasl.kerberos.service.name", "kafka");
//props.put("sasl.mechanism", "GSSAPI");
//props.put("ssl.keystore.type", "JKS");
props.put("ssl.truststore.location", "/Users/rajkumar/Documents/EOMS/eoms-truststore-stage.jks");
props.put("ssl.truststore.password", "4DxYJnVDcPi6E8w3uCS63qoa");
props.put("ssl.endpoint.identification.algorithm", "");
props.put("ssl.protocol", "SSL");
props.put("ssl.truststore.type", "JKS");
String eventType="orbit_pick";
KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
List<Header> headers = new ArrayList<Header>();
Header header = new RecordHeader("event_type",eventType.getBytes(StandardCharsets.UTF_8));
headers.add(header);
// headers.add(new RecordHeader("event_type",eventType.getBytes(StandardCharsets.UTF_8)));
Date latestdate = new Date();
ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>("orbit.shipment.lfs.inbound.prf", 1, latestdate.getTime(), "702807441", "{\"SellerOrganizationCode\":\"LOWES\",\"ShipNode\":\"0224\",\"IsShortage\":\"N\",\"ShipmentKey\":\"2022021708351092902896763\",\"ShipmentNo\":\"702807441\",\"Extn\":{\"ExtnPickingHasStartedFlag\":\"Y\",\"ExtnSourceSystem\":\"StoreOrderSvc\",\"ExtnPickerId\":\"98977\",\"ExtnOperation\":\"pick\",\"ExtnInPickupLocker\":\"N\"},\"Instructions\":{\"Instruction\":{\"InstructionText\":\"picking\"},\"Replace\":\"Y\"},\"ShipmentLines\":{\"ShipmentLine\":[{\"BackroomPickedQuantity\":\"1\",\"Quantity\":\"3\",\"CodeValue\":\"\",\"ShipmentLineNo\":\"1\",\"ShipmentSubLineNo\":\"0\",\"ShortageQty\":\"\",\"ItemID\":\"1505\",\"NewShipNode\":\"\",\"Extn\":null}]},\"MessageID\":\"8970549709qqaachwejhk\",\"MessageTimeStamp\":\\"${__time(yyyy-MM-dd'T'hh:mm:ss)}\",\"eventType\":\"orbit_multiple_shipments_customer_pickup\"}", headers);
producer.send(producerRecord);
producer.close();
I don't see any code, so it's expected that it doesn't send anything to Kafka.
Here is a sample you can use as a reference:
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
def props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("acks", "all")
props.put("retries", 0)
props.put("batch.size", 16384)
props.put("linger.ms", 1)
props.put("buffer.memory", 33554432)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
def producer = new KafkaProducer<>(props)
producer.send(new ProducerRecord<>("your-topic", "your-key", "your-json-here"))
producer.close()
P.S. Are you aware of the Pepper-Box - Kafka Load Generator plugin? You may find it easier to configure and use. Check out the Apache Kafka - How to Load Test with JMeter article for more information.
My messages are about 25 KB each on average, and I am getting only 15 to 18 records on each poll. How can I get more records per poll?
Below is my Kafka consumer configuration:
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "ConsumerGrp");
properties.put("request.timeout.ms", 120000);
properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 7000);
properties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 4194304); // 4MB
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
ConsumerRecords<String, Object> crs = this.consumer.poll(Duration.ofMillis(10000));
System.out.println("Polled count is "+crs.count());
Hi, I'm new to Kafka and I have a quick question.
I implemented a Kafka producer and consumer.
ZooKeeper and the producer are running on one server (192.168.10.233).
The consumer is running on another server (192.168.10.234).
Both are on the same local network.
The problem is:
The consumer connects but does not receive any messages; however, if I move the consuming part to the same server (192.168.10.233), it receives the messages.
This is my code for the consumer:
def listen(): Unit = {
  val props = new Properties();
  props.put("bootstrap.servers", "192.168.10.233:9092");
  props.put("group.id", "groupId");
  props.put("enable.auto.commit", "true");
  props.put("auto.commit.interval.ms", "1000");
  props.put("session.timeout.ms", "30000");
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  val consumer = new KafkaConsumer(props);
  println("calling ---- but yet to receive the message")
  consumer.subscribe(List("test"));
  while (true) {
    val records = consumer.poll(100);
    for (record <- records)
      println("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
  }
}
I also checked 192.168.10.233:9092 from the outside to verify that the port is not blocked by anything.
Most likely you have to set advertised.host.name in your kafka/config/server.properties to a value that is routable from outside.
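For example, something along these lines in kafka/config/server.properties (the host value is taken from the question; advertised.host.name applies to older broker versions, newer ones use advertised.listeners instead):
# kafka/config/server.properties
advertised.host.name=192.168.10.233
advertised.port=9092
# on newer broker versions the equivalent setting is:
# advertised.listeners=PLAINTEXT://192.168.10.233:9092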
I set up a single-node Kafka and am trying a simple pub/sub pattern like this:
From my laptop I produce some messages with this code:
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.23.152:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 10; i++)
producer.send(new ProducerRecord<String, String>("tp3", Integer.toString(i), "hello " + Integer.toString(i)));
producer.close();
and I also wrote a simple consumer:
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.23.152:9092");
props.put("group.id", "g1");
props.put("client.id","client1");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("auto.offset.reset", "latest");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("tp3"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
    TimeUnit.SECONDS.sleep(1000);
}
But the consumer does not retrieve anything.
Can anyone explain what happened?
I'm sure the producer works well, because when I use the console consumer command it retrieves the messages perfectly.
Any help is appreciated.
According to the Kafka FAQ:
Why does my consumer never get any data?
By default, when a consumer is started for the very first time, it ignores all existing data in a topic and will only consume new data coming in after the consumer is started. If this is the case, try sending some more data after the consumer is started. Alternatively, you can configure the consumer by setting auto.offset.reset to "earliest" for the new consumer in 0.9 and "smallest" for the old consumer.
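Applied to the consumer code in the question, that means changing the existing auto.offset.reset line:
props.put("auto.offset.reset", "earliest"); // was "latest"; "earliest" reads existing data on first start (new consumer, 0.9+); the old consumer uses "smallest"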
--Consumer
Properties props = new Properties();
String groupId = "consumer-tutorial-group";
List<String> topics = Arrays.asList("consumer-tutorial");
props.put("bootstrap.servers", "192.168.1.75:9092");
props.put("group.id", groupId);
props.put("enable.auto.commit", "true");
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());
KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
try {
    consumer.subscribe(topics);
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
    }
} catch (Exception e) {
    System.out.println(e.toString());
} finally {
    consumer.close();
}
I am trying to run the above code. It is simple consumer code which tries to read from a topic, but I get a weird exception that I can't handle.
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 1139567, only 45 bytes available
I also include my producer code.
--Producer
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.1.7:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<String, String>(props);
for(int i = 0; i < 100; i++)
producer.send(new ProducerRecord<String, String>("consumer-tutorial", Integer.toString(i), Integer.toString(i)));
producer.close();
Here are my Kafka configs:
--Start zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
--Start Kafka Server
bin/kafka-server-start.sh config/server.properties
-- Create a topic
bin/kafka-topics.sh --create --topic consumer-tutorial --replication-factor 1 --partitions 3 --zookeeper 192.168.1.75:2181
--Kafka 0.10.0
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.10.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.0.0</version>
</dependency>
I also got the same issue when using the kafka_2.11 artifact with version 0.10.0.0, but it was resolved once I changed the Kafka server to 0.10.0.0. Earlier I was pointing to 0.9.0.1. It looks like the server version and your pom version should be in sync.
I solved my problem by downgrading to Kafka 0.9.0, but it is still not an ideal solution for me. If someone knows a better way to fix this on Kafka 0.10.0, feel free to post it. Until then, this is my solution:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.9.0.0</version>
</dependency>
I had the same issue: a client jar compatibility problem, as I was using Kafka server 0.9.0.0 and Kafka client 0.10.0.0. Basically, Kafka 0.10.0 introduced a new message format, and the newer client is not able to read the topic metadata from the older broker version.
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.0.0.RELEASE</version> <!-- downgraded to match the lower version of the Kafka server -->
</dependency>