Spring Kafka Key serializer not working for object - apache-kafka

I'm unable to reproduce the documentation or sample code in order to have a non-String key serialized.
My goal is to use the key (a field) to pass control actions alongside the data.
Classes ControlChannel and SchedulerEntry are regular POJOs.
Environment is:
Java 11
Spring Boot 2.4.1
Kafka 2.6.0
Expected code to Serialize/Deserialize:
Listener and Template
@KafkaListener(topics = "Scheduler", groupId = "scheduler", containerFactory = "schedulerKafkaListenerContainerFactory")
public void listenForScheduler(
@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) ControlChannel control,
@Header(KafkaHeaders.RECEIVED_TIMESTAMP) long timestamp,
@Payload SchedulerEntry entry) {
log.info("received data KEY ='{}'", control);
log.info("received data PAYLOAD = '{}'", entry);
/* ... */
}
@Bean
public KafkaTemplate<ControlChannel, SchedulerEntry> schedulerKafkaTemplate() {
return new KafkaTemplate<>(schedulerProducerFactory());
}
**First Try - Consumer and Producer (Type Mapping and Trusted Packages)**
@Bean
public ProducerFactory<ControlChannel, SchedulerEntry> schedulerProducerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(JsonSerializer.TYPE_MAPPINGS, "key:io.infolayer.aida.ControlChannel, value:io.infolayer.aida.entity.SchedulerEntry");
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
return new DefaultKafkaProducerFactory<>(props,
new JsonSerializer<ControlChannel>(),
new JsonSerializer<SchedulerEntry>());
}
public ConsumerFactory<ControlChannel, SchedulerEntry> consumerFactory(String groupId) {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(JsonDeserializer.REMOVE_TYPE_INFO_HEADERS, false);
props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
props.put(JsonDeserializer.TYPE_MAPPINGS, "key:io.infolayer.aida.ControlChannel, value:io.infolayer.aida.entity.SchedulerEntry");
JsonDeserializer<ControlChannel> k = new JsonDeserializer<ControlChannel>();
k.configure(props, true);
JsonDeserializer<SchedulerEntry> v = new JsonDeserializer<SchedulerEntry>();
k.configure(props, true);
return new DefaultKafkaConsumerFactory<>(props, k, v);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<ControlChannel, SchedulerEntry> schedulerKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<ControlChannel, SchedulerEntry> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory("scheduler"));
return factory;
}
Exception:
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition Scheduler-0 at offset 25. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalStateException: No type information in headers and no default type provided
**Second Try - Consumer and Producer (just setting the key serializer/deserializer to JSON)**
@Bean
public ProducerFactory<ControlChannel, SchedulerEntry> schedulerProducerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
public ConsumerFactory<ControlChannel, SchedulerEntry> consumerFactory(String groupId) {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props, new JsonDeserializer<>(ControlChannel.class), new JsonDeserializer<>(SchedulerEntry.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<ControlChannel, SchedulerEntry> schedulerKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<ControlChannel, SchedulerEntry> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory("scheduler"));
return factory;
}
Exception:
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException:
Listener method 'public void io.infolayer.aida.scheduler.KafkaSchedulerListener.listenForScheduler(io.infolayer.aida.ControlChannel,long,io.infolayer.aida.entity.SchedulerEntry)'
threw exception; nested exception is org.springframework.core.convert.ConverterNotFoundException:
No converter found capable of converting from type [io.infolayer.aida.entity.SchedulerEntry] to type [@org.springframework.messaging.handler.annotation.Header io.infolayer.aida.ControlChannel]; nested exception is org.springframework.core.convert.ConverterNotFoundException:
No converter found capable of converting from type [io.infolayer.aida.entity.SchedulerEntry] to type [@org.springframework.messaging.handler.annotation.Header io.infolayer.aida.ControlChannel]

There are several problems with your first attempt:
- you need to call configure() on the serializers, with add type info = true
- you are calling configure() on k twice and never configuring v (the deserializers)
This works as expected...
@SpringBootApplication
public class So65501295Application {
private static final Logger log = LoggerFactory.getLogger(So65501295Application.class);
public static void main(String[] args) {
SpringApplication.run(So65501295Application.class, args);
}
@Bean
public ProducerFactory<ControlChannel, SchedulerEntry> schedulerProducerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, true);
props.put(JsonSerializer.TYPE_MAPPINGS,
"key:com.example.demo.So65501295Application.ControlChannel, "
+ "value:com.example.demo.So65501295Application.SchedulerEntry");
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
JsonSerializer<ControlChannel> k = new JsonSerializer<ControlChannel>();
k.configure(props, true);
JsonSerializer<SchedulerEntry> v = new JsonSerializer<SchedulerEntry>();
v.configure(props, false);
return new DefaultKafkaProducerFactory<>(props, k, v);
}
public ConsumerFactory<ControlChannel, SchedulerEntry> consumerFactory(String groupId) {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(JsonDeserializer.REMOVE_TYPE_INFO_HEADERS, false);
props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
props.put(JsonDeserializer.TYPE_MAPPINGS,
"key:com.example.demo.So65501295Application.ControlChannel, "
+ "value:com.example.demo.So65501295Application.SchedulerEntry");
JsonDeserializer<ControlChannel> k = new JsonDeserializer<ControlChannel>();
k.configure(props, true);
JsonDeserializer<SchedulerEntry> v = new JsonDeserializer<SchedulerEntry>();
v.configure(props, false);
return new DefaultKafkaConsumerFactory<>(props, k, v);
}
@KafkaListener(topics = "Scheduler", groupId = "scheduler", containerFactory = "schedulerKafkaListenerContainerFactory")
public void listenForScheduler(
@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) ControlChannel control,
@Header(KafkaHeaders.RECEIVED_TIMESTAMP) long timestamp,
@Payload SchedulerEntry entry) {
log.info("received data KEY ='{}'", control);
log.info("received data PAYLOAD = '{}'", entry);
/* ... */
}
@Bean
public KafkaTemplate<ControlChannel, SchedulerEntry> schedulerKafkaTemplate() {
return new KafkaTemplate<>(schedulerProducerFactory());
}
@Bean
public ConcurrentKafkaListenerContainerFactory<ControlChannel, SchedulerEntry> schedulerKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<ControlChannel, SchedulerEntry> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory("scheduler"));
return factory;
}
@Bean
public ApplicationRunner runner(KafkaTemplate<ControlChannel, SchedulerEntry> template) {
return args -> {
template.send("Scheduler", new ControlChannel(), new SchedulerEntry());
};
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("Scheduler").partitions(1).replicas(1).build();
}
public static class ControlChannel {
String foo;
public String getFoo() {
return this.foo;
}
public void setFoo(String foo) {
this.foo = foo;
}
}
public static class SchedulerEntry {
String foo;
public String getFoo() {
return this.foo;
}
public void setFoo(String foo) {
this.foo = foo;
}
}
}
2021-01-04 11:42:25.026 INFO 23905 --- [ntainer#0-0-C-1] com.example.demo.So65501295Application
: received data KEY ='com.example.demo.So65501295Application$ControlChannel@44a72886'
2021-01-04 11:42:25.026 INFO 23905 --- [ntainer#0-0-C-1] com.example.demo.So65501295Application
: received data PAYLOAD = 'com.example.demo.So65501295Application$SchedulerEntry@74461c59'

Related

When I seek to a timestamp using the offsetsForTimes method it gives me a timeout exception

I created a class called ConsumerConfig and a Service class that contains a function which gets records from a topic.
Locally it works just fine, but when I add the brokers it stops working, I get a timeout exception, and it takes a long time. Can anybody help me?
Here is the code of the ConsumerConfig class and the function that gets records from a specific topic:
@Component
public class ConsumerConfig {
private static Integer numberOfConsumer = 0;
private KafkaConsumer consumer;
private Map<String, Object> buildDefaultConfig() {
final Map<String, Object> defaultClientConfig = new HashMap<>();
defaultClientConfig.put("bootstrap.servers", (String) getbrokersConfigurationFromFile().get("spring.kafka.bootstrap-servers"));
defaultClientConfig.put("client.id", "test-consumer-id" + (++numberOfConsumer));
defaultClientConfig.put("request.timeout.ms",60000);
return defaultClientConfig;
}
@Bean
@RequestScope
public <K, V> KafkaConsumer<K, V> getKafkaConsumer() {
// Build config
final Map<String, Object> kafkaConsumerConfig = buildDefaultConfig();
kafkaConsumerConfig.put("key.deserializer", StringDeserializer.class);
kafkaConsumerConfig.put("value.deserializer", StringDeserializer.class);
kafkaConsumerConfig.put("auto.offset.reset", "earliest");
kafkaConsumerConfig.put("max.poll.records",5);
kafkaConsumerConfig.put("default.api.timeout.ms", 6000000);
kafkaConsumerConfig.put("max.block.ms ",60000000);
kafkaConsumerConfig.put("auto.offset.reset", "earliest");
kafkaConsumerConfig.put("enable.partition.eof", "false");
kafkaConsumerConfig.put("enable.auto.commit", "true");
kafkaConsumerConfig.put("auto.commit.interval.ms", "1000");
kafkaConsumerConfig.put("session.timeout.ms", "30000");
//fetch.max.byte The maximum amount of data the server should return for a fetch request.
// Create and return Consumer.
return consumer = new KafkaConsumer<K, V>(kafkaConsumerConfig);
}
}
/**
* read from a specific offset timestamp
*/
public <K, V> List<Record> consumeFromTime(String topicName, Long timestampMs) {
// Get the list of partitions
List<PartitionInfo> partitionInfos = consumer.partitionsFor(topicName);
// Transform PartitionInfo into TopicPartition
List<TopicPartition> topicPartitionList = partitionInfos
.stream()
.map(info -> new TopicPartition(topicName, info.partition()))
.collect(Collectors.toList());
// Assign the consumer to these partitions
consumer.assign(topicPartitionList);
// Look for offsets based on timestamp
Map<TopicPartition, Long> partitionTimestampMap = topicPartitionList.stream()
.collect(Collectors.toMap(tp -> tp, tp -> System.currentTimeMillis() - timestampMs * 1000));
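// note: this computes "now minus timestampMs seconds" as the lookup time, so despite
// its name the parameter is treated as a number of seconds to look back; the
// offsetsForTimes() call below expects absolute epoch milliseconds per partition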
Map<TopicPartition, OffsetAndTimestamp> partitionOffsetMap = consumer.offsetsForTimes(partitionTimestampMap);
// Force the consumer to seek for those offsets
List<ConsumerRecord<K, V>> allRecords = new ArrayList<>();
partitionOffsetMap.forEach((tp, offsetAndTimestamp) -> {
if (!Objects.isNull(offsetAndTimestamp)) {
consumer.seek(tp, offsetAndTimestamp.offset());
ConsumerRecords<K, V> records;
records = consumer.poll(Duration.ofMillis(100L));
records.forEach(allRecords::add);
}
}
);
return allRecords.stream().map(v -> Record.builder().values(v.value()).offset(v.offset()).partition(v.partition()).build()).collect(Collectors.toList());
}

Kafka listener not able to consume messages unless written differently

LEVEL II EXPERIMENT
If I use return new DefaultKafkaConsumerFactory<String, ConsumerEnrollmentSyncMessage>(props, new StringDeserializer(), new JsonDeserializer()); the consumer does not fetch any messages.
But if I use the following, the consumer works:
JsonDeserializer<ConsumerEnrollmentSyncMessage> deserializer = new JsonDeserializer<>(ConsumerEnrollmentSyncMessage.class);
deserializer.setRemoveTypeHeaders(false);
deserializer.addTrustedPackages("*");
deserializer.setUseTypeMapperForKey(true);
return new DefaultKafkaConsumerFactory<String, ConsumerEnrollmentSyncMessage>(props, new StringDeserializer(), deserializer);
My sole aim is to make a single ConsumerFactory receive three different types of payload, such as A.class, B.class and C.class (see the sketch below).
Thanks.
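One way to get there is a single JsonDeserializer<Object> configured with token:type mappings, so one factory can resolve all three payload types. A minimal sketch, assuming the producer sets matching tokens and that A, B and C live in com.example (every class name, address and group id below is a placeholder, not code from this question):

public ConsumerFactory<String, Object> multiTypeConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "multi-type-group");        // placeholder
    // map short tokens (carried in the __TypeId__ header) to the concrete classes
    Map<String, Object> deserProps = new HashMap<>();
    deserProps.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    deserProps.put(JsonDeserializer.TYPE_MAPPINGS,
            "a:com.example.A, b:com.example.B, c:com.example.C");
    JsonDeserializer<Object> valueDeserializer = new JsonDeserializer<>();
    valueDeserializer.configure(deserProps, false); // false = configure as value deserializer
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), valueDeserializer);
}

A class-level @KafkaListener with one @KafkaHandler method per payload type can then route each deserialized record to the matching handler.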
PLEASE IGNORE THE SECTION BELOW
The Kafka listener is not able to consume messages unless it is written in a different way.
Listener 1 - not able to consume messages when I use type mapping (token:type); the producer and the consumer are standalone applications
public ConsumerFactory<String, Object> consumerFactory() {
JsonDeserializer<Object> jsonDeserializer = new JsonDeserializer<>();
Map<String, Object> deserProps = new HashMap<>();
deserProps.put(JsonDeserializer.TYPE_MAPPINGS, applicationConfig.getTypeMapping());
Map<String, Object> props = new HashMap<String, Object>();
props.put(ConsumerConfig.GROUP_ID_CONFIG, applicationConfig.getKafkaGroupId());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, LATEST);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, FALSE);
props.put("security.protocol", applicationConfig.getKafkaSslProtocol());
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, applicationConfig.getKafkaSslTrustStoreLocation());
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,JsonDeserializer.class);
props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, applicationConfig.getInterceptorClassConfig());
props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
jsonDeserializer.configure(deserProps, false);
return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), jsonDeserializer);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setErrorHandler(new CustomSeekToCurrentErrorHandler());
factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
factory.getContainerProperties().setAssignmentCommitOption(AssignmentCommitOption.NEVER);
factory.setConcurrency(2);
return factory;
}
@KafkaListener(id = "welcomeConsumerIdOne", autoStartup = "false", topics = "#{appConfig.getWelcomeConfirmedEventTopic()}", containerFactory = "kafkaListenerContainerFactory")
public void consumeWelcomeMessage(ConsumerEnrollmentSyncMessage message, @Headers MessageHeaders messageHeaders, Acknowledgment ack) {
//message received
}
Listener 2 - Able to consume messages without any type mapping.
@KafkaListener(topics = "bev3_welcome_confirmed_topic_dev", containerFactory = "kafkaListenerContainerFactory")
public void consumePublishedEvents(ConsumerEnrollmentSyncMessage message) {
System.out.println("consumed message: "+message);
}
@Bean
public ConsumerFactory<String, ConsumerEnrollmentSyncMessage> consumerFactory() {
JsonDeserializer<ConsumerEnrollmentSyncMessage> deserializer = new JsonDeserializer<>(ConsumerEnrollmentSyncMessage.class);
deserializer.setRemoveTypeHeaders(false);
deserializer.addTrustedPackages("*");
deserializer.setUseTypeMapperForKey(true);
Map<String, Object> props = new HashMap<String, Object>();
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, deserializer);
props.put(ConsumerConfig.GROUP_ID_CONFIG, applicationConfig.getKafkaGroupId());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, LATEST);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, TRUE);
props.put("security.protocol", applicationConfig.getKafkaSslProtocol());
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, applicationConfig.getKafkaSslTrustStoreLocation());
props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
return new DefaultKafkaConsumerFactory<String, ConsumerEnrollmentSyncMessage>(props, new StringDeserializer(), deserializer);
}
/**
* Kafka Listener container factory
*
* @return ConcurrentKafkaListenerContainerFactory
*
*/
@Bean
public ConcurrentKafkaListenerContainerFactory<String, ConsumerEnrollmentSyncMessage> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, ConsumerEnrollmentSyncMessage> factory = new ConcurrentKafkaListenerContainerFactory<String, ConsumerEnrollmentSyncMessage>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
I am using LATEST for auto.offset.reset.
I am not sure of the cause of this problem. Is there a problem with the group id, or does something have to be fixed on the Kafka broker end? I tried changing the listener id and the group id, but no luck. Thanks.
What version are you using?
Before 2.8, when instantiating the deserializer yourself, you had to fully configure it yourself; the consumer properties were not applied.
https://github.com/spring-projects/spring-kafka/pull/1907
Adding the deserializer class to the properties is redundant because you are supplying an instance directly to the factory.
In any case, this makes no sense:
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, deserializer);
it must be a class name or class, not an instance.
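So, on versions before 2.8, the instance itself has to carry the full configuration, and the deserializer entries can be dropped from the consumer properties entirely. A minimal sketch following the working variant from the question (the group id and the rest of the props map are placeholders):

// pre-2.8: consumer properties are NOT applied to this instance,
// so everything must be set on the deserializer itself
JsonDeserializer<ConsumerEnrollmentSyncMessage> deserializer =
        new JsonDeserializer<>(ConsumerEnrollmentSyncMessage.class);
deserializer.setRemoveTypeHeaders(false);
deserializer.addTrustedPackages("*");
deserializer.setUseTypeMapperForKey(true);
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.GROUP_ID_CONFIG, "some-group"); // placeholder
// no KEY_/VALUE_DESERIALIZER_CLASS_CONFIG entries: the supplied instances are used
return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), deserializer);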

How to read the Header values in the Batch listener error handling scenario

I am trying to handle the exception at the listener
@KafkaListener(id = PropertiesUtil.ID,
topics = "#{'${kafka.consumer.topic}'}",
groupId = "${kafka.consumer.group.id.config}",
containerFactory = "containerFactory",
errorHandler = "errorHandler")
public void receiveEvents(@Payload List<ConsumerRecord<String, String>> recordList,
Acknowledgment acknowledgment) {
try {
log.info("Consuming the batch of size {} from kafka topic {}", consumerRecordList.size(),
consumerRecordList.get(0).topic());
processEvent(consumerRecordList);
incrementOffset(acknowledgment);
} catch (Exception exception) {
throwOrHandleExceptions(exception, recordList, acknowledgment);
.........
}
}
The Kafka container config:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
containerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConcurrency(this.numberOfConsumers);
factory.getContainerProperties().setAckOnError(false);
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
factory.setConsumerFactory(getConsumerFactory());
factory.setBatchListener(true);
return factory;
}
}
the listener error handler impl
@Bean
public ConsumerAwareListenerErrorHandler errorHandler() {
return (m, e, c) -> {
MessageHeaders headers = m.getHeaders();
List<String> topics = headers.get(KafkaHeaders.RECEIVED_TOPIC, List.class);
List<Integer> partitions = headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, List.class);
List<Long> offsets = headers.get(KafkaHeaders.OFFSET, List.class);
Map<TopicPartition, Long> offsetsToReset = new HashMap<>();
for (int i = 0; i < topics.size(); i++) {
int index = i;
offsetsToReset.compute(new TopicPartition(topics.get(i), partitions.get(i)),
(k, v) -> v == null ? offsets.get(index) : Math.min(v, offsets.get(index)));
}
...
};
}
When I run the same code without batch processing I am able to fetch the partition, topic and offset values, but when I enable batch processing and test it, I get only two values inside the headers (id and timestamp) and the other values are not set. Am I missing anything here?
What version are you using? I just tested it with Boot 2.2.4 (Spring Kafka 2.3.5) and it works fine...
@SpringBootApplication
public class So60152179Application {
public static void main(String[] args) {
SpringApplication.run(So60152179Application.class, args);
}
@KafkaListener(id = "so60152179", topics = "so60152179", errorHandler = "eh")
public void listen(List<String> in) {
throw new RuntimeException("foo");
}
@Bean
public ConsumerAwareListenerErrorHandler eh() {
return (m, e, c) -> {
System.out.println(m);
return null;
};
}
@Bean
public ApplicationRunner runner(KafkaTemplate<String, String> template) {
return args -> {
template.send("so60152179", "foo");
};
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so60152179").partitions(1).replicas(1).build();
}
}
spring.kafka.listener.type=batch
spring.kafka.consumer.auto-offset-reset=earliest
GenericMessage [payload=[foo], headers={kafka_offset=[0], kafka_nativeHeaders=[RecordHeaders(headers = [], isReadOnly = false)], kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@2f2e787f, kafka_timestampType=[CREATE_TIME], kafka_receivedMessageKey=[null], kafka_receivedPartitionId=[0], kafka_receivedTopic=[so60152179], kafka_receivedTimestamp=[1581351585253], kafka_groupId=so60152179}]

Spring Kafka - do not retry uncommitted offsets

How can I stop Spring Kafka from retrying messages that have not been consumed from the topic? For example, if I kill the application and then restart it, my consumer starts consuming the not-yet-consumed messages. How can I prevent that?
@Bean
public ConsumerFactory<String, String> manualConsumerFactory() {
Map<String, Object> configs = consumerConfigs();
configs.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return new DefaultKafkaConsumerFactory<>(configs);
}
/**
* Kafka manual ack listener container factory kafka listener container factory.
*
* @return the kafka listener container factory
*/
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaManualAckListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(manualConsumerFactory());
ContainerProperties props = factory.getContainerProperties();
props.setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return factory;
}
@Override
@EventListener
public void processSettlementFile(final Notification notification) {
LOG.info("Handling message [{}]", notification);
try {
final Map<String, JobParameter> parameters = new HashMap<>();
parameters.put("fileName", new JobParameter("1-101-D-2017-212-volume-per-transaction.csv"));
parameters.put("bucket", new JobParameter("bucket-name-can-be-passed-also-from-kafka-todo"));
final JobParameters jobParameters = new JobParameters(parameters);
final JobExecution execution = jobLauncher.run(succeededTransactionCsvFileToDatabaseJob, jobParameters);
LOG.info("Job Execution Status: " + execution.getStatus());
} catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
LOG.error("Failed to process job..", e);
}
}
@KafkaListener(topics = "topic", groupId = "processor-service", clientIdPrefix = "string", containerFactory = "kafkaManualAckListenerContainerFactory")
public void listenAsString(#Payload final String payload, Acknowledgment acknowledgment, final ConsumerRecord<String, String> consumerRecord) throws TopicEventException {
applicationEventPublisher.publishEvent(object);
acknowledgment.acknowledge();
}
You can add a ConsumerAwareRebalanceListener to the container configuration and call consumer.seekToEnd(partitions) in onPartitionsAssigned().
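A minimal sketch of that suggestion, placed inside the kafkaManualAckListenerContainerFactory() bean above before returning the factory (the callback signature comes from ConsumerAwareRebalanceListener):

// skip anything published while the application was down by jumping to the
// end of each newly assigned partition on every rebalance
factory.getContainerProperties().setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        consumer.seekToEnd(partitions);
    }

});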

Error serializing message when sending to a Kafka topic

I need to test a message which contains headers, so I need to use MessageBuilder, but I cannot serialize it.
I tried adding the serialization settings on the producer props, but it did not work.
Can someone help me?
This is the error:
org.apache.kafka.common.errors.SerializationException: Can't convert value of class org.springframework.messaging.support.GenericMessage to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer
My test class:
public class TransactionMastercardAdapterTest extends AbstractTest{
@Autowired
private KafkaTemplate<String, Message<String>> template;
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1);
@BeforeClass
public static void setUp() {
System.setProperty("spring.kafka.bootstrap-servers", embeddedKafka.getBrokersAsString());
System.setProperty("spring.cloud.stream.kafka.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
}
@Test
public void sendTransactionCommandTest(){
String payload = "{\"o2oTransactionId\" : \"" + UUID.randomUUID().toString().toUpperCase() + "\","
+ "\"cardId\" : \"11\","
+ "\"transactionId\" : \"20110405123456\","
+ "\"amount\" : 200.59,"
+ "\"partnerId\" : \"11\"}";
Map<String, Object> props = KafkaTestUtils.producerProps(embeddedKafka);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
Producer<String, Message<String>> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<String, Message<String>> ("notification_topic", MessageBuilder.withPayload(payload)
.setHeader("status", "RECEIVED")
.setHeader("service", "MASTERCARD")
.build()));
Map<String, Object> configs = KafkaTestUtils.consumerProps("test1", "false", embeddedKafka);
configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
ConsumerFactory<byte[], byte[]> cf = new DefaultKafkaConsumerFactory<>(configs);
Consumer<byte[], byte[]> consumer = cf.createConsumer();
consumer.subscribe(Collections.singleton("transaction_topic"));
ConsumerRecords<byte[], byte[]> records = consumer.poll(10_000);
consumer.commitSync();
assertThat(records.count()).isEqualTo(1);
}
}
I'd say the error is obvious:
Can't convert value of class org.springframework.messaging.support.GenericMessage to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer
Your value here is a GenericMessage, but StringSerializer can only work with strings.
What you need is a JavaSerializer, which does not exist, but is not difficult to write:
public class JavaSerializer implements Serializer<Object> {
@Override
public byte[] serialize(String topic, Object data) {
try {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
ObjectOutputStream objectStream = new ObjectOutputStream(byteStream);
objectStream.writeObject(data);
objectStream.flush();
objectStream.close();
return byteStream.toByteArray();
}
catch (IOException e) {
throw new IllegalStateException("Can't serialize object: " + data, e);
}
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public void close() {
}
}
And configure it for that value.serializer property.
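For example, reusing the producer props from the test above (a sketch; com.example.JavaSerializer stands for wherever you place the class):

Map<String, Object> props = KafkaTestUtils.producerProps(embeddedKafka);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// point value.serializer at the JavaSerializer shown above (package is a placeholder)
props.put("value.serializer", "com.example.JavaSerializer");
Producer<String, Message<String>> producer = new KafkaProducer<>(props);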
private void configureProducer() {
Properties props = new Properties();
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.ByteArraySerializer");
producer = new KafkaProducer<String, byte[]>(props);
}
This will do the job.
In my case I am using Spring Cloud and had not added the below property in the properties file:
spring.cloud.stream.kafka.binder.configuration.value.serializer=org.apache.kafka.common.serialization.StringSerializer
This is what I used, and it worked for me:
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.springframework.kafka.support.serializer.JsonSerializer.class);
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.IntegerSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
annotate the JSON class with @XmlRootElement