Not able to send messages to kafka topic through java code - apache-kafka

I am using Kafka. This is my code for sending messages to the Kafka server. The topic name is "west" and the message is "message1". I don't get any error, but I never see the sent messages in the topic. Is there anything wrong here?
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

class SimpleProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "172.xxxxxxxxx:9092");
        // serializer.class belongs to the old Scala producer and is ignored by
        // the new Java producer; the key/value serializers below are what count
        props.put("serializer.class", "kafka.serializer.DefaultEncoder");
        props.put("acks", "1");
        props.put("retries", 1);
        props.put("batch.size", 16384);
        props.put("linger.ms", 0);
        props.put("client.id", "foo");
        props.put("buffer.memory", 33554432);
        props.put("timeout.ms", "500");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty(ProducerConfig.MAX_BLOCK_MS_CONFIG, "500");
        props.setProperty(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "100");
        System.out.println("ready to send msg");
        try {
            Producer<String, String> producer = new KafkaProducer<String, String>(props);
            producer.send(new ProducerRecord<String, String>("west", "message1"));
            System.out.println("Message sent successfully");
            producer.close();
        } catch (Exception e) {
            System.out.println("Message wasn't sent successfully");
            e.printStackTrace();
        }
    }
}

The send() API you used is asynchronous: it returns as soon as the record is queued locally, not when it reaches the broker. Use the two-argument form of send(); the second argument is a Callback that tells you whether the send really worked or failed somewhere.

producer.send(yourRecord,
    new Callback() {
        public void onCompletion(RecordMetadata metadata, Exception e) {
            if (e != null) {
                e.printStackTrace();
            } else {
                System.out.println("The offset of the record we just sent is: " + metadata.offset());
            }
        }
    });
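If you just want to confirm delivery in a quick test, you can instead block on the Future that send() returns; a minimal sketch (this trades throughput for simplicity, and get() will throw if the send failed):

RecordMetadata metadata = producer.send(
        new ProducerRecord<String, String>("west", "message1")).get(); // blocks until acked or failed
System.out.println("Persisted to partition " + metadata.partition()
        + " at offset " + metadata.offset());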

Related

Kafka Consumer not reading more than one record in unit tests

I have a Kafka consumer:
consumer.subscribe(statusTopicList);
try {
    ConsumerRecords<String, String> consumerRecords =
            consumer.poll(Duration.ofSeconds(60));
    System.out.println("SIZE IS: " + consumerRecords.count());
    for (ConsumerRecord<String, String> record : consumerRecords) {
        System.out.println("Record is: " + record.value());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    consumer.close();
}
And in my unit tests:
((MockConsumer<String, String>) consumer)
        .addRecord(new ConsumerRecord<>(
                topic, 1, 0, "test-application1", record1));
((MockConsumer<String, String>) consumer)
        .addRecord(new ConsumerRecord<>(
                topic, 1, 0, "test-application2", record2));
The size of the consumerRecords is still 1 though I’m adding 2 records. How can I read both the messages in one poll?
My consumer properties are:
private static Consumer<String, SecondaryJoinStatus> createStatusConsumer() {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put("schema.registry.url", schemaRegistryUrl);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put("max.partition.fetch.bytes", 5242880);
    return new KafkaConsumer<>(props);
}
Use the MAX_POLL_RECORDS_CONFIG setting to allow more records in a single poll:
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5);
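Separately, it's worth double-checking the MockConsumer setup in the test. Below is a minimal sketch (using MockConsumer, OffsetResetStrategy, and TopicPartition from kafka-clients; topic, statusTopicList, record1, and record2 are the names from your code) of a consumer wired up so that a single poll can return both records. Note the two records get distinct offsets: MockConsumer tracks a position per partition and only returns records at or beyond it, so two records both added at offset 0 can come back as a single record:

// needs org.apache.kafka.clients.consumer.{MockConsumer, OffsetResetStrategy, ConsumerRecord}
// and org.apache.kafka.common.TopicPartition
MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
TopicPartition tp = new TopicPartition(topic, 1);

consumer.subscribe(statusTopicList);
consumer.rebalance(Collections.singletonList(tp));                 // simulate the partition assignment
consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));

// distinct, consecutive offsets (0 and 1)
consumer.addRecord(new ConsumerRecord<>(topic, 1, 0L, "test-application1", record1));
consumer.addRecord(new ConsumerRecord<>(topic, 1, 1L, "test-application2", record2));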

Apache Kafka Producer Consumer API Issue

I am trying to create a simple Java API to test a Kafka producer and consumer. When I run the producer and consumer in separate terminals on my Mac, everything works fine. But when I try to connect to the Kafka server from Java code, I get this error:
Exception in thread "main" java.lang.NullPointerException
at kafkatest2.ProducerTest.main(ProducerTest.java:34)
Producer Code :
package kafkatest2;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = null;
        try {
            producer = new KafkaProducer<>(props);
            for (int i = 0; i < 10; i++) {
                String msg = "Message " + i;
                producer.send(new ProducerRecord<String, String>("tested", msg));
                System.out.println("Sent:" + msg);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            System.out.println("last");
            producer.close(); // line 34 from the stack trace: NPE here if the constructor threw
        }
    }
}
Consumer Code :
package kafkatest2;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "127.0.0.1:2181");
        props.put("group.id", "test-consumer-group");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "earliest");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> kafkaConsumer1 = new KafkaConsumer<>(props);
        kafkaConsumer1.subscribe(Arrays.asList("tested"));
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer1.poll(10);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Partition: " + record.partition() + " Offset: " + record.offset() + " Value: " + record.value() + " ThreadID: " + Thread.currentThread().getId());
            }
        }
    }
}
Please let me know what I am missing. Is there some issue with the configuration values?
Thanks,
Vipul
The stack trace indicates an NPE in your producer code:
Exception in thread "main" java.lang.NullPointerException at
kafkatest2.ProducerTest.main(ProducerTest.java:34)
This can happen if producer = new KafkaProducer<>(props); throws an exception. In that case, when you enter the finally block, the local variable producer is still null, so calling producer.close() throws the NPE. Simply wrap the call to close() in an if (producer != null) check.
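A minimal sketch of that guard (and since KafkaProducer implements Closeable, a try-with-resources block is an equally good alternative):

Producer<String, String> producer = null;
try {
    producer = new KafkaProducer<>(props);
    producer.send(new ProducerRecord<String, String>("tested", "Message 0"));
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (producer != null) { // the constructor may have thrown before assignment
        producer.close();
    }
}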

Unable to get number of messages in kafka topic

I am fairly new to Kafka. I have created a sample producer and consumer in Java. Using the producer I was able to send data to a Kafka topic, but I am not able to get the number of records in the topic using the following consumer code.
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.log4j.BasicConfigurator;

public class ConsumerTests {
    public static void main(String[] args) throws Exception {
        BasicConfigurator.configure();
        String topicName = "MobileData";
        String groupId = "TestGroup";
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("group.id", groupId);
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Arrays.asList(topicName));
        try {
            while (true) {
                ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(100);
                System.out.println("Record count is " + consumerRecords.count());
            }
        } catch (WakeupException e) {
            // ignore for shutdown
        } finally {
            kafkaConsumer.close();
        }
    }
}
I don't get any exception in the console, but consumerRecords.count() always returns 0, even if there are messages in the topic. Please let me know if I am missing something needed to get the record details.
The poll(...) call should normally be in a loop. It's always possible for the initial poll(...) to return no data (depending on the timeout) while the partition assignment is in progress. Here's an example:
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        System.out.println("Record count is " + records.count());
    }
} catch (WakeupException e) {
    // ignore for shutdown
} finally {
    consumer.close();
}

Kafka consumer API is not working properly

I am new to Kafka and have just started working with it, and I am facing the issue below; please help me solve it, thanks in advance.
First I wrote the producer API, which works fine, but in the consumer API the messages are not displayed.
My code is like this:
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerGroup {
    public static void main(String[] args) throws Exception {
        String topic = "Hello-Kafka";
        String group = "myGroup";
        Properties props = new Properties();
        props.put("bootstrap.servers", "XXX.XX.XX.XX:9092");
        props.put("group.id", group);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        try {
            consumer.subscribe(Arrays.asList(topic));
            System.out.println("Subscribed to topic " + topic);
            ConsumerRecords<String, String> records = consumer.poll(100);
            System.out.println("records ::" + records);
            System.out.println(records.toString());
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Record::" + record.offset());
                System.out.println(record.key());
                System.out.println(record.value());
            }
            consumer.commitSync();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.commitSync();
            consumer.close();
        }
    }
}
Response:
Subscribed to topic Hello-Kafka
records ::org.apache.kafka.clients.consumer.ConsumerRecords@76b0bfab
org.apache.kafka.clients.consumer.ConsumerRecords@76b0bfab
The offset, key, and value are not printed; control never reaches the for (ConsumerRecord record : records) loop. Please help me.
You are trying to print empty records; the only output you get is records.toString(), which is essentially the class name and hash code. I made some changes to your code and got it working. Have a look and see if this helps.
public class ConsumerGroup {
    public static void main(String[] args) throws Exception {
        String topic = "Hello-Kafka";
        String group = "myGroup";
        Properties props = new Properties();
        props.put("bootstrap.servers", "xx.xx.xx.xx:9092");
        props.put("group.id", group);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        try {
            consumer.subscribe(Arrays.asList(topic));
            System.out.println("Subscribed to topic " + topic);
            while (true) {
                // keep polling; any single poll may legitimately return nothing
                ConsumerRecords<String, String> records = consumer.poll(1000);
                if (!records.isEmpty()) {
                    System.out.println("records ::" + records);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("Record::" + record.offset());
                        System.out.println(record.key());
                        System.out.println(record.value());
                    }
                    consumer.commitSync();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.commitSync();
            consumer.close();
        }
    }
}

kafka consumer group is rebalancing

I am using Kafka 0.9 with the new Java consumer, polling inside a loop. I get a CommitFailedException because of a group rebalance when the code tries to execute consumer.commitSync(). Note that I am setting session.timeout.ms to 30000 and heartbeat.interval.ms to 10000 on the consumer, and polling definitely happens within 30000 ms. Can anyone help me out? Please let me know if any more information is needed.
Here is the code:

Properties props = new Properties();
props.put("bootstrap.servers", {allthreeservers});
props.put("group.id", groupId);
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", ObjectSerializer.class.getName());
props.put("auto.offset.reset", "earliest");
props.put("enable.auto.commit", false);
props.put("session.timeout.ms", 30000);
props.put("heartbeat.interval.ms", 10000);
props.put("request.timeout.ms", 31000);
props.put("kafka.consumer.topic.name", topic);
props.put("max.partition.fetch.bytes", 1000);

while (true) {
    Boolean isPassed = true;
    try {
        ConsumerRecords<Object, Object> records = consumer.poll(1000);
        if (records.count() > 0) {
            ConsumeEventInThread consumerEventInThread = new ConsumeEventInThread(records, consumerService);
            FutureTask<Boolean> futureTask = new FutureTask<>(consumerEventInThread);
            executorServiceForAsyncKafkaEventProcessing.execute(futureTask);
            try {
                isPassed = (Boolean) futureTask.get(Long.parseLong(props.getProperty("session.timeout.ms")) - Long.parseLong("5000"), TimeUnit.MILLISECONDS);
            } catch (Exception ex) {
                logger.warn("Time out after waiting till session time out");
            }
            consumer.commitSync();
            logger.info("Successfully committed offset for topic " + Arrays.asList(props.getProperty("kafka.consumer.topic.name")));
        } else {
            logger.info("Failed to process consumed messages, will not commit and will consume again");
        }
    } catch (Exception e) {
        logger.error("Unhandled exception while consuming messages " + Arrays.asList(props.getProperty("kafka.consumer.topic.name")), e);
    }
}
The CommitFailedException is thrown when the commit cannot be completed because the group has been rebalanced. This is the main thing we have to be careful of when using the Java client. Since all network IO (including heartbeating) and message processing is done in the foreground, it is possible for the session timeout to expire while a batch of messages is being processed. To handle this, you have two choices.
First, you can adjust the session.timeout.ms setting to ensure that the handler has enough time to finish processing messages, and tune max.partition.fetch.bytes to limit the amount of data returned in a single batch, though you will have to consider how many partitions are in the subscribed topics; a sketch follows.
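For illustration, here is a sketch of that first option; the values are made up and would need tuning against your actual per-batch processing time:

// illustrative values only: give each batch more processing headroom
props.put("session.timeout.ms", 60000);         // more time between polls before the coordinator evicts this member
props.put("request.timeout.ms", 61000);         // must stay above session.timeout.ms
props.put("max.partition.fetch.bytes", 65536);  // smaller fetches mean smaller batches and faster turnaround
// on 0.10+ clients the batch size can be capped directly instead:
// props.put("max.poll.records", 100);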
The second option is to do the message processing in a separate thread, but you will have to manage flow control so that the threads can keep up; the working example at the end of this answer takes this approach by blocking on worker futures, and a pause()/resume() sketch follows.
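Another way to manage that flow control is the consumer's pause()/resume() pair, sketched below (workersCaughtUp() is a hypothetical check on your processing threads, and the collection-taking pause() signature assumes a 0.10+ client; on 0.9 it is varargs):

consumer.pause(consumer.assignment());  // subsequent poll() calls heartbeat but return no records
while (!workersCaughtUp()) {            // hypothetical: check whether the workers have drained the backlog
    consumer.poll(100);                 // keeps the member alive without fetching more data
}
consumer.resume(consumer.assignment());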
You can also set session.timeout.ms large enough that commit failures from rebalances are rare. The only drawback is a longer delay before partitions can be reassigned in the event of a hard failure.
For more info, please see the official Kafka consumer documentation.
Here is a working example.
----Worker code-----
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class Worker implements Callable<Boolean> {

    private final ConsumerRecord record;

    public Worker(ConsumerRecord record) {
        this.record = record;
    }

    public Boolean call() {
        Map<String, Object> data = new HashMap<>();
        try {
            data.put("partition", record.partition());
            data.put("offset", record.offset());
            data.put("value", record.value());
            Thread.sleep(10000); // simulate slow processing
            System.out.println("Processing Thread---" + Thread.currentThread().getName() + " data: " + data);
            return Boolean.TRUE;
        } catch (Exception e) {
            e.printStackTrace();
            return Boolean.FALSE;
        }
    }
}
---------Execution code------------------
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.*;
import java.util.concurrent.*;

public class AsyncConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("enable.auto.commit", false);
        props.put("session.timeout.ms", 30000);
        props.put("heartbeat.interval.ms", 10000);
        props.put("request.timeout.ms", 31000);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("Test1", "Test2"));

        int poolSize = 10;
        ExecutorService es = Executors.newFixedThreadPool(poolSize);
        CompletionService<Boolean> completionService = new ExecutorCompletionService<Boolean>(es);

        try {
            while (true) {
                System.out.println("Polling................");
                ConsumerRecords<String, String> records = consumer.poll(1000);
                // note: records left over when fewer than poolSize remain are
                // not processed or committed by this example
                List<ConsumerRecord> recordList = new ArrayList<>();
                for (ConsumerRecord<String, String> record : records) {
                    recordList.add(record);
                    if (recordList.size() == poolSize) {
                        int taskCount = poolSize;
                        // process the batch on the worker pool
                        recordList.forEach(recordTobeProcess -> completionService.submit(new Worker(recordTobeProcess)));
                        while (taskCount > 0) {
                            try {
                                Future<Boolean> futureResult = completionService.poll(1, TimeUnit.SECONDS);
                                if (futureResult != null) {
                                    boolean result = futureResult.get().booleanValue();
                                    taskCount = taskCount - 1;
                                }
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                        recordList.clear();
                        // commit the offset of the last record in the batch, per partition
                        Map<TopicPartition, OffsetAndMetadata> commitOffset = Collections.singletonMap(
                                new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                        consumer.commitSync(commitOffset);
                    }
                }
            }
        } finally {
            consumer.close();
        }
    }
}
You need to follow some rules:
1) Pass a fixed number of records (for example 10) to ConsumeEventInThread.
2) Create more threads for processing instead of one, and submit all the tasks to a CompletionService.
3) Poll all the submitted tasks and verify they completed.
4) Then commit (use the parametric commitSync method instead of the no-argument one).
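Regarding rule 4, the parametric commitSync commits explicit per-partition offsets rather than everything returned by the last poll; isolated from the example above:

consumer.commitSync(Collections.singletonMap(
        new TopicPartition(record.topic(), record.partition()),
        new OffsetAndMetadata(record.offset() + 1))); // commit the *next* offset to be consumed

This way only what has actually been processed gets committed, so an ill-timed rebalance costs at most a reprocessing of the in-flight batch.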