Error reading field 'topic_metadata' in Kafka - apache-kafka

I am trying to connect to my broker on AWS with auto.create.topics.enable=true in my server.properties file. But when I connect to the broker using the Java client producer, I get the following error.
1197 [kafka-producer-network-thread | producer-1] ERROR org.apache.kafka.clients.producer.internals.Sender - Uncaught error in kafka producer I/O thread:
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 619631, only 37 bytes available
    at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
    at org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380)
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:229)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134)
    at java.lang.Thread.run(Unknown Source)
The following is my client producer code.
public static void main(String[] argv) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "http://XX.XX.XX.XX:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("batch.size", 16384);
    props.put("linger.ms", 0);
    props.put("buffer.memory", 33554432);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("block.on.buffer.full", true);

    Producer<String, String> producer = new KafkaProducer<String, String>(props);
    try {
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("topicjava", Integer.toString(i), Integer.toString(i)));
            System.out.println("Tried sending:" + i);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    producer.close();
}
Can someone help me resolve this?

I have faced a similar issue. The problem occurs when the kafka-clients version in the POM file does not match the Kafka server version. I was using kafka-clients 0.10.0.0_1 but the Kafka server was still on 0.9.0.0, so I upgraded the Kafka server to 0.10 and the issue was resolved.
<dependency>
    <groupId>org.apache.servicemix.bundles</groupId>
    <artifactId>org.apache.servicemix.bundles.kafka-clients</artifactId>
    <version>0.10.0.0_1</version>
</dependency>
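If you are not sure which version the broker is actually running, one quick check (the same trick another answer below uses) is to look at the kafka jar in the broker's libs directory. The KAFKA_HOME path here is an assumption; use wherever your broker is installed:
# Hypothetical install path; adjust to your broker's location.
ls $KAFKA_HOME/libs | grep kafka_
# e.g. kafka_2.11-0.9.0.0.jar means the broker is on 0.9.0.0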

It looks like I was setting the wrong properties on the client side, and my server.properties file had properties that were not meant for the client I was using. So I decided to change the Java client to version 0.9.0 using Maven.
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.9.0.0</version>
</dependency>
My server.properties file is as below:
broker.id=0
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=9000
delete.topic.enable=true
advertised.host.name=<aws public Ip>
advertised.port=9092
My producer code looks like this:
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloKafkaProducer {
    public static void main(String args[]) throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "IP:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        boolean sync = false;
        String topic = "loader1";
        String key = "mykey";
        for (int i = 0; i < 1000; i++) {
            String value = "myvaluehasbeensent" + i + i;
            ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(topic, key, value);
            if (sync) {
                // Block until the broker acknowledges the record.
                producer.send(producerRecord).get();
            } else {
                // Fire-and-forget; errors surface on the producer I/O thread.
                producer.send(producerRecord);
            }
        }
        producer.close();
    }
}
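To verify the records actually landed, you can read them back with the console consumer that ships with the broker. This is a sketch assuming the 0.9-era tooling and the localhost ZooKeeper from the server.properties above:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic loader1 --from-beginning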

Make sure that you use the correct versions. Let's say you use the following Maven dependency:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.8_2.10</artifactId>
    <version>${flink.version}</version>
</dependency>
So the artifact is flink-connector-kafka-0.8_2.10, i.e. it expects Kafka 0.8 with Scala 2.10.
Now check whether you are running a matching Kafka version:
cd /KAFKA_HOME/libs
and look for kafka_YOUR-VERSION-sources.jar.
In my case I have kafka_2.10-0.8.2.1-sources.jar, so it matches. :)
If the versions differ, just change the Maven dependencies OR download the matching Kafka version.

I solved this problem by editing the /etc/hosts file.
Check your hosts file to make sure the IPs of ZooKeeper and the other brokers are listed there.
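For illustration, entries would look something like the following; the hostnames and IPs are hypothetical, so substitute whatever your brokers advertise:
# /etc/hosts (hypothetical entries)
10.0.0.10   zookeeper-1
10.0.0.11   kafka-broker-1
10.0.0.12   kafka-broker-2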

Related

JSR223 sampler sending null JSON to Kafka topic

I am using a JSR223 sampler to post a JSON message to Kafka using the kafka-clients jar. When I post, the message arrives in Kafka as null. Can someone tell me what I am missing? The message actually arrives as null in the application. Below is my code.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;
import java.nio.charset.StandardCharsets;
import groovy.json.JsonSlurper;
import java.util.ArrayList;
import org.apache.jmeter.threads.JMeterVariables;
Properties props = new Properties();
props.put("bootstrap.servers", "lxkfkbkomsstg01.lowes.com:9093,lxkfkbkomsstg02.lowes.com:9093,lxkfkbkomsstg03.lowes.com:9093,lxkfkbkomsstg04.lowes.com:9093,lxkfkbkomsstg05.lowes.com:9093");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("compression.type", "none");
props.put("batch.size", "16384");
props.put("linger.ms", "0");
props.put("buffer.memory", "33554432");
props.put("acks", "1");
props.put("send.buffer.bytes", "131072");
props.put("receive.buffer.bytes", "32768");
props.put("security.protocol", "SSL");
//props.put("sasl.kerberos.service.name", "kafka");
//props.put("sasl.mechanism", "GSSAPI");
//props.put("ssl.keystore.type", "JKS");
props.put("ssl.truststore.location", "/Users/rajkumar/Documents/EOMS/eoms-truststore-stage.jks");
props.put("ssl.truststore.password", "4DxYJnVDcPi6E8w3uCS63qoa");
props.put("ssl.endpoint.identification.algorithm", "");
props.put("ssl.protocol", "SSL");
props.put("ssl.truststore.type", "JKS");
String eventType="orbit_pick";
KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
List<Header> headers = new ArrayList<Header>();
Header header = new RecordHeader("event_type",eventType.getBytes(StandardCharsets.UTF_8));
headers.add(header);
// headers.add(new RecordHeader("event_type",eventType.getBytes(StandardCharsets.UTF_8)));
Date latestdate = new Date();
ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>("orbit.shipment.lfs.inbound.prf", 1, latestdate.getTime(), "702807441", "{\"SellerOrganizationCode\":\"LOWES\",\"ShipNode\":\"0224\",\"IsShortage\":\"N\",\"ShipmentKey\":\"2022021708351092902896763\",\"ShipmentNo\":\"702807441\",\"Extn\":{\"ExtnPickingHasStartedFlag\":\"Y\",\"ExtnSourceSystem\":\"StoreOrderSvc\",\"ExtnPickerId\":\"98977\",\"ExtnOperation\":\"pick\",\"ExtnInPickupLocker\":\"N\"},\"Instructions\":{\"Instruction\":{\"InstructionText\":\"picking\"},\"Replace\":\"Y\"},\"ShipmentLines\":{\"ShipmentLine\":[{\"BackroomPickedQuantity\":\"1\",\"Quantity\":\"3\",\"CodeValue\":\"\",\"ShipmentLineNo\":\"1\",\"ShipmentSubLineNo\":\"0\",\"ShortageQty\":\"\",\"ItemID\":\"1505\",\"NewShipNode\":\"\",\"Extn\":null}]},\"MessageID\":\"8970549709qqaachwejhk\",\"MessageTimeStamp\":\\"${__time(yyyy-MM-dd'T'hh:mm:ss)}\",\"eventType\":\"orbit_multiple_shipments_customer_pickup\"}", headers);
producer.send(producerRecord);
producer.close();
I don't see any code, so it's expected that it doesn't send anything to Kafka. Here is a sample you can use as a reference:
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
def props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("acks", "all")
props.put("retries", 0)
props.put("batch.size", 16384)
props.put("linger.ms", 1)
props.put("buffer.memory", 33554432)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
def producer = new KafkaProducer<>(props)
producer.send(new ProducerRecord<>("your-topic", "your-key", "your-json-here"))
producer.close()
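If the send seems to succeed but the application still sees null, it can also help to block on the future that send() returns, so broker-side errors surface in the JMeter log instead of being lost on the producer I/O thread. A minimal sketch (send() returning a Future<RecordMetadata> and the JSR223 log variable are standard behavior):
// Blocks until the broker acknowledges; exceptions are thrown here.
def metadata = producer.send(new ProducerRecord<>("your-topic", "your-key", "your-json-here")).get()
log.info("Sent to partition " + metadata.partition() + " at offset " + metadata.offset())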
P.S. Are you aware of the Pepper-Box - Kafka Load Generator plugin? You may find it easier to configure and use. Check out the Apache Kafka - How to Load Test with JMeter article for more information.

Maintaining the original traceId while passing messages through Kafka with quarkus and opentracing

I'm trying to create the most basic working example with two Quarkus (2.4.1.Final) microservices (a producer and a consumer) that communicate through Kafka and are traced with OpenTracing.
I have followed the Kafka and OpenTracing tutorials, ran the producer and consumer in dev mode (so they create the Redpanda Kafka broker), and then attempted to emit a POJO and log the traceId in both the consumer and producer. As far as I understand, this should work out of the box.
The POJO is sent, serialized and deserialized without a hitch. The Kafka message header that the consumer receives even has the correct original trace and span ids injected into it via the uber-trace-id field (I've checked by debugging both the producer and consumer).
But, for some reason, the logged trace ids don't match. It's as if the tracing context "forgets" about the span it receives via Kafka. Note that my business need is just to print the traceId on every log line so we can follow logs through Kibana.
The producer:
package com.example;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import org.jboss.logging.Logger;

@Path("/hello")
public class ExampleResource {

    private static final Logger log = Logger.getLogger(ExampleResource.class);

    private final Emitter<Person> peopleEmitter;

    public ExampleResource(@Channel("people") Emitter<Person> peopleEmitter) {
        this.peopleEmitter = peopleEmitter;
    }

    @GET
    @Path("/{name}")
    @Produces(MediaType.APPLICATION_JSON)
    public Person hello(@PathParam("name") String name) {
        var p = new Person();
        p.name = name;
        log.info("Produced " + p.name);
        peopleEmitter.send(p);
        return p;
    }
}
The consumer:
package com.example;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.opentracing.Traced;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.jboss.logging.Logger;

@ApplicationScoped
public class PeopleConsumer {

    private static final Logger log = Logger.getLogger(PeopleConsumer.class);

    @Traced
    @Incoming("people")
    public void process(Person person) {
        log.info("received " + person.name);
    }
}
The POJO:
package com.example;

public class Person {
    public String name;
}
App config for the producer:
quarkus.application.name=producer
quarkus.http.port=8090
quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, [%c{2.}] (%t) %s%e%n
mp.messaging.outgoing.people.connector=smallrye-kafka
mp.messaging.outgoing.people.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor
And the consumer:
quarkus.http.port=8091
quarkus.application.name=consumer
quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, [%c{2.}] (%t) %s%e%n
mp.messaging.incoming.people.connector=smallrye-kafka
mp.messaging.incoming.people.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor
Serializer/deserializer:
public class PersonDeserializer extends ObjectMapperDeserializer<Person> {
    public PersonDeserializer() {
        super(Person.class);
    }
}

public class PersonSerializer extends ObjectMapperSerializer<Person> {
}
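For reference, these would typically be wired to the channel in application.properties as below; this is an assumption about the setup, since recent Quarkus versions can also auto-detect serializers for channel types:
mp.messaging.outgoing.people.value.serializer=com.example.PersonSerializer
mp.messaging.incoming.people.value.deserializer=com.example.PersonDeserializer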
The dependencies (same for both):
<dependencies>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-smallrye-opentracing</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-smallrye-reactive-messaging-kafka</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-smallrye-context-propagation</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-arc</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-resteasy-reactive-jackson</artifactId>
</dependency>
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-kafka-client</artifactId>
</dependency>
</dependencies>
Basic use case:
Put http://localhost:8090/hello/John into a browser. You will see this log at the producer:
18:43:24 INFO traceId=00019292a43349df, spanId=19292a43349df, [co.ex.ExampleResource] (executor-thread-0) Produced John
And at the consumer
18:43:25 INFO traceId=1756e0a24c740fa6, spanId=1756e0a24c740fa6, [co.ex.PeopleConsumer] (pool-1-thread-1) received John
Notice that the trace ids are different. I am not sure what else I'm supposed to be doing/configuring...

What will be used in place of KafkaUtils in the latest Kafka version 0.10.1.1?

I have upgraded my Kafka and also Kafka-Spark streaming, but I am facing some challenges with the methods that changed: KafkaUtils throws an error, and Iterator also throws an error. My Kafka version is 0.10.1.1.
If anyone has an idea of how to fix these, that would be great.
Thanks
KafkaUtils is part of Apache Spark Streaming, not part of Apache Kafka:
org.apache.spark.streaming.kafka.KafkaUtils
The previous package of KafkaUtils was org.apache.spark.streaming.kafka. The latest package is org.apache.spark.streaming.kafka010.
For setting the Kafka params and topic details, check the following code snippet:
import java.util.*;

import org.apache.spark.SparkConf;
import org.apache.spark.TaskContext;
import org.apache.spark.api.java.*;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka010.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import scala.Tuple2;

Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "localhost:9092,anotherhost:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "use_a_separate_group_id_for_each_stream");
kafkaParams.put("auto.offset.reset", "latest");
kafkaParams.put("enable.auto.commit", false);

Collection<String> topics = Arrays.asList("topicA", "topicB");

final JavaInputDStream<ConsumerRecord<String, String>> stream =
    KafkaUtils.createDirectStream(
        streamingContext,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
    );

stream.mapToPair(
    new PairFunction<ConsumerRecord<String, String>, String, String>() {
        @Override
        public Tuple2<String, String> call(ConsumerRecord<String, String> record) {
            return new Tuple2<>(record.key(), record.value());
        }
    });
For further reference, visit the following link https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
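The matching Maven coordinates for the 0.10 integration look like the following sketch; the Scala suffix and the version placeholder must match your own Spark build:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <!-- placeholder: substitute your Spark version -->
    <version>${spark.version}</version>
</dependency>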

Kafka Producer Send String Not Working

I am trying to make a Kafka producer that sends the string "This program is running" to a Kafka topic. I am not sure why it is not working. Below is the code. I am on the Cloudera distribution.
package kafka_test;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DataMovement {
    public static void main(String[] args) {
        String kafkaTopic = args[0];

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "server:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(kafkaTopic, null, "This program is running.");
        producer.send(producerRecord);
        producer.close();
    }
}
I don't get an error message, but a timeout. It also outputs a lot of information about the Kafka client: ssl, password, client.id, etc.
16/10/31 10:25:46 INFO utils.AppInfoParser: Kafka version : 0.9.0.1
16/10/31 10:25:46 INFO utils.AppInfoParser: Kafka commitId : commitid
16/10/31 10:26:46 INFO producer.KafkaProducer: Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
The code works fine. The problem was that server:9092 should be the address of the designated broker of the Kafka cluster (I had it pointed at the active broker; they are different).
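As a general debugging aid for this kind of silent timeout, blocking on the future returned by send() makes connection problems throw immediately instead of vanishing with the fire-and-forget call. A sketch using only the standard producer API, dropped into the main method above:
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.RecordMetadata;

// Inside main, replace the fire-and-forget send with a blocking one:
try {
    // get() rethrows broker/connection errors that otherwise only show up
    // as log noise on the producer I/O thread.
    RecordMetadata metadata = producer.send(producerRecord).get();
    System.out.println("Written to partition " + metadata.partition() + " at offset " + metadata.offset());
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}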

Error reading field 'topic_metadata': Error reading array of size 1139567, only 45 bytes available

--Consumer
Properties props = new Properties();
String groupId = "consumer-tutorial-group";
List<String> topics = Arrays.asList("consumer-tutorial");
props.put("bootstrap.servers", "192.168.1.75:9092");
props.put("group.id", groupId);
props.put("enable.auto.commit", "true");
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());

KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
try {
    consumer.subscribe(topics);
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
    }
} catch (Exception e) {
    System.out.println(e.toString());
} finally {
    consumer.close();
}
I am trying to run the above code. It's a simple consumer that tries to read from a topic, but I get a weird exception that I can't handle:
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 1139567, only 45 bytes available
I also quote my producer code below.
--Producer
Properties props = new Properties();
props.put("bootstrap.servers", "192.168.1.7:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<String, String>(props);
for (int i = 0; i < 100; i++)
    producer.send(new ProducerRecord<String, String>("consumer-tutorial", Integer.toString(i), Integer.toString(i)));
producer.close();
Here are my Kafka configs:
--Start zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
--Start Kafka Server
bin/kafka-server-start.sh config/server.properties
-- Create a topic
bin/kafka-topics.sh --create --topic consumer-tutorial --replication-factor 1 --partitions 3 --zookeeper 192.168.1.75:2181
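To confirm the topic came out as intended, the same CLI can describe it (a sketch, reusing the ZooKeeper address from the create command):
bin/kafka-topics.sh --describe --topic consumer-tutorial --zookeeper 192.168.1.75:2181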
--Kafka 0.10.0
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.0.0</version>
</dependency>
I also got the same issue when using the kafka_2.11 artifact with version 0.10.0.0, but it was resolved once I changed the Kafka server to 0.10.0.0. Earlier I was pointing to 0.9.0.1. It looks like the server and your POM versions should be in sync.
I solved my problem with a downgrade to Kafka 0.9.0, but it is still not an efficient solution for me. If someone knows an efficient way to fix this on Kafka 0.10.0, feel free to post it. Until then, this is my solution:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.9.0.0</version>
</dependency>
I had the same issue: a client jar compatibility problem, as I was using Kafka server 0.9.0.0 and Kafka client 0.10.0.0. Basically, Kafka 0.10.0 introduced a new message format, and the newer client is not able to read the topic metadata from the older-version server.
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.0.0.RELEASE</version> <!-- changed due to the lower version of the kafka server -->
</dependency>