Kafka consumer not receiving messages from a remote broker

Hi, I'm new to Kafka and I have a quick question.
I implemented a Kafka producer and a consumer.
ZooKeeper and the producer run on one server (192.168.10.233); the consumer runs on another server (192.168.10.234). Both are on the same local network.
The problem: the consumer connects to the broker but never receives any messages. If I move the consumer to the same server as the producer (192.168.10.233), it receives the messages fine.
This is my consumer code:
import java.util.{Arrays, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.collection.JavaConverters._

def listen(): Unit = {
  val props = new Properties()
  props.put("bootstrap.servers", "192.168.10.233:9092")
  props.put("group.id", "groupId")
  props.put("enable.auto.commit", "true")
  props.put("auto.commit.interval.ms", "1000")
  props.put("session.timeout.ms", "30000")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val consumer = new KafkaConsumer[String, String](props)
  println("calling ---- but yet to receive the message")
  consumer.subscribe(Arrays.asList("test"))
  while (true) {
    val records = consumer.poll(100)
    for (record <- records.asScala)
      printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value())
  }
}
I also checked 192.168.10.233:9092 from outside to verify that the port is not blocked by anything.

Most likely you have to set advertised.host.name in your kafka/config/server.properties to a value that is routable from the consumer's machine. By default the broker can advertise a hostname that only resolves locally, so a remote consumer reaches the bootstrap server but then fails to connect to the address the broker hands back.
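A minimal sketch of the relevant part of kafka/config/server.properties, assuming 192.168.10.233 is the address the consumer can reach (on newer brokers, advertised.listeners is the equivalent setting):

# advertise an address that remote clients can actually reach
advertised.host.name=192.168.10.233
advertised.port=9092
# on newer brokers the equivalent is:
# advertised.listeners=PLAINTEXT://192.168.10.233:9092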

Related

Facing issue while consuming or producing data to a topic on specific server VPC

I am trying to make a producer using Kafka and Spring Boot.
I created a new application to produce messages on a topic, to be consumed by another application. When the application starts, the topic is not recognized at all. The error is shown below:
2021-09-03 15:33:20.024 WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=lims-public-helper] Error while fetching metadata with correlation id 9 : { sms.requests=UNKNOWN_TOPIC_OR_PARTITION}
2021-09-03 15:33:20.026 WARN 1 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-1, groupId=lims-public-helper] The following subscribed topics are not assigned to any members: [ sms.requests]
I tried the same configuration against another server and it works fine; only this server throws the exception.
Kafka topics
$ kaf topics
NAME                 PARTITIONS   REPLICAS
__consumer_offsets   50           3
__trace              9            1
sms.requests         3            1
sms.status           1            3
test                 1            3
Consumer code:
public ConsumerFactory<String, OtpDTO> otpConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, limsGroupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(JsonSerializer.TYPE_MAPPINGS, "otpDTO:com.lims.helper.dto.OtpDTO");
    props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.lims.helper.dto.OtpDTO");
    props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
    props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(OtpDTO.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, OtpDTO> otpKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, OtpDTO> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(otpConsumerFactory());
    return factory;
}
Consumer listener details:
@KafkaListener(topics = "${spring.kafka.topic.lims.sms.otp}", containerFactory = "otpKafkaListenerContainerFactory")
public void otpTopicMessage(@Payload OtpDTO otpDTO) {
    log.info(String.format("--------##### otp topic consumer: %s", otpDTO));
}
Topic properties:
spring.kafka.topic.lims.sms.otp=sms.requests
spring.kafka.topic.lims.sms.status=sms.status

Read only the latest Kafka messages

I am using the following configuration for Kafka message consumption.
As soon as I stop the producer, the consumer goes idle for a few seconds and then starts reading old messages.
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, Boolean.TRUE);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer");
props.put("auto.commit.interval.ms", "100");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100000");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "500");
props.put("sasl.mechanism", "PLAIN");
I want to read only the most recent messages.
Please add the below property to the props:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
Note that auto.offset.reset only takes effect when the group has no committed offsets.
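If the group does have committed offsets and you always want to skip straight to the newest data, one option (a sketch, assuming consumer is the KafkaConsumer built from the props above, and "my-topic" is a placeholder topic name) is to seek to the end of the partitions on every assignment:

consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do on revoke
    }
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // jump past everything already in the log, ignoring committed offsets
        consumer.seekToEnd(partitions);
    }
});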

Kafka Producer Send String Not Working

I am trying to make a Kafka producer that sends the string "This program is running" to a Kafka topic. I am not sure why it is not working; the code is below. I am not on a Cloudera distribution.
package kafka_test;

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DataMovement {
    public static void main(String[] args) {
        String kafkaTopic = args[0];

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "server:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(kafkaTopic, null, "This program is running.");
        producer.send(producerRecord);
        producer.close();
    }
}
I don't get an error message, just a timeout.
It also outputs a lot of information about the Kafka client: ssl, password, client.id, etc.
16/10/31 10:25:46 INFO utils.AppInfoParser: Kafka version : 0.9.0.1
16/10/31 10:25:46 INFO utils.AppInfoParser: Kafka commitId : commitid
16/10/31 10:26:46 INFO producer.KafkaProducer: Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
The code works fine. The problem was that server:9092 should be the address of the designated broker of the Kafka cluster (I had it pointed at the active broker, and they are different).
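As a safeguard, it is common to list more than one broker in bootstrap.servers so the client can discover the cluster from whichever broker answers. A sketch; broker1 and broker2 are placeholder hostnames:

// placeholders: replace broker1/broker2 with real, client-routable broker addresses
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");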

High-level Kafka consumer API not working

I set up a single-node Kafka and am trying a simple pub/sub pattern like this:
From my laptop I produce some messages with this code:
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.23.152:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 10; i++)
    producer.send(new ProducerRecord<String, String>("tp3", Integer.toString(i), "hello " + Integer.toString(i)));
producer.close();
I also wrote a simple consumer:
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.23.152:9092");
props.put("group.id", "g1");
props.put("client.id","client1");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("auto.offset.reset", "latest");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("tp3"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    TimeUnit.SECONDS.sleep(1000);
}
But the consumer does not retrieve anything. Can anyone explain what is happening?
I'm sure the producer works, because I can retrieve the messages with the console consumer and it works perfectly (I attached a screenshot as proof).
Any help is appreciated.
According to the Kafka FAQ:
Why does my consumer never get any data?
By default, when a consumer is started for the very first time, it ignores all existing data in a topic and will only consume new data coming in after the consumer is started. If this is the case, try sending some more data after the consumer is started. Alternatively, you can configure the consumer by setting auto.offset.reset to "earliest" for the new consumer in 0.9 and "smallest" for the old consumer.
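For the consumer above (new consumer API, currently set to "latest"), that means changing one line; note it only takes effect for a group.id with no committed offsets:

// read from the beginning of the topic when the group has no committed offsets yet
props.put("auto.offset.reset", "earliest");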

Apache Kafka 0.9.0.0 Show all Topics with Partitions

I'm currently evaluating Apache Kafka and I have a simple consumer that is supposed to read messages from a specific topic partition. Here is my client:
public static void main(String args[]) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "test");
    props.put("enable.auto.commit", "false");
    props.put("auto.commit.interval.ms", "1000");
    props.put("session.timeout.ms", "30000");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

    TopicPartition partition0 = new TopicPartition("test_topic", Integer.parseInt(args[0]));
    ArrayList<TopicPartition> topicAssignment = new ArrayList<TopicPartition>();
    topicAssignment.add(partition0);
    consumer.assign(topicAssignment);
    //consumer.subscribe(Arrays.asList("test_topic"));

    int commitInterval = 200;
    List<ConsumerRecord<String, String>> buffer = new ArrayList<ConsumerRecord<String, String>>();
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            buffer.add(record);
            if (buffer.size() >= commitInterval) {
                process(buffer);
                consumer.commitSync();
                buffer.clear();
            }
        }
    }
}

static void process(List<ConsumerRecord<String, String>> buffers) {
    for (ConsumerRecord<String, String> buffer : buffers) {
        System.out.println(buffer);
    }
}
Here is the command that I use to start Apache Kafka:
bin/kafka-server-start.sh config/server.properties & bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic test_topic
As you can see here, I'm creating the topic with 2 partitions (p0 and p1)!
I'm then starting two instances of my consumer with the following commands:
For Consumer 1:
java -cp target/scala-2.11/kafka-consumer-0.1.0-SNAPAHOT.jar com.test.api.consumer.KafkaConsumer09Java 0
For Consumer 2:
java -cp target/scala-2.11/kafka-consumer-0.1.0-SNAPAHOT.jar com.test.api.consumer.KafkaConsumer09Java 1
Where 0 and 1 are the partitions I want each consumer to read from.
But what happens is that only Consumer 1 gets all the messages. I was under the impression that the producer's messages would be distributed evenly across the partitions.
I used the following command to see how many partitions I have for my topic test_topic:
Joes-MacBook-Pro:kafka_2.11-0.9.0.0 joe$ bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --broker-info --group test --topic test_topic --zookeeper localhost:2181
[2016-01-14 13:36:48,831] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
Group   Topic        Pid   Offset   logSize   Lag   Owner
test    test_topic   0     10000    10000     0     none
BROKER INFO
0 -> 172.22.4.34:9092
Why is there only one partition even though I said to Kafka to create 2 partitions for the test_topic?
Here is my producer:
def main(args: Array[String]) {
  //val conf = new SparkConf().setAppName("VPP metrics producer")
  //val sc = new SparkContext(conf)
  val props: Properties = new Properties()
  props.put("metadata.broker.list", "localhost:9092,localhost:9093")
  props.put("serializer.class", "kafka.serializer.StringEncoder")

  val config = new ProducerConfig(props)
  val producer = new Producer[String, String](config)

  1 to 10000 map { case i =>
    val jsonStr = getRandomTsDataPoint().toJson.toString
    println(s"sending message $i to kafka")
    producer.send(new KeyedMessage[String, String]("test_topic", jsonStr))
    println(s"sent message $i to kafka")
  }
}
I'm not sure why you would have 1 partition if you created the topic with 2. Never happened to me, that's for sure.
Can you try this:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test_topic
That should show you how many partitions are really there.
Then, if there's really 1 partition, maybe you could start over by creating a new topic with:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic test_topic_2
And then try:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test_topic_2
... and report back the findings.
You are only consuming from partition 0, but you also need to consume from partition 1; once you consume from partition 1 and commit, you will see a row for pid 1 in the offset checker output as well. You also need a producer that actually writes to partition 1: since your KeyedMessages carry no key, the old producer does not necessarily spread them evenly and can stick to a single partition for stretches of time.
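To get data onto both partitions, you can either give each record a key (so the partitioner spreads messages by hash) or name the partition explicitly. A sketch with the newer Java producer API, not the old Scala producer used above; here producer is assumed to be an org.apache.kafka.clients.producer.KafkaProducer<String, String>, and someKey and jsonStr are placeholders:

// Option 1: explicitly target partition 1 of test_topic (key may be null)
producer.send(new ProducerRecord<String, String>("test_topic", 1, null, jsonStr));
// Option 2: provide a key and let the default partitioner hash it across partitions
producer.send(new ProducerRecord<String, String>("test_topic", someKey, jsonStr));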