By default, Spring Kafka cleans up the local state maintained on the filesystem when a Kafka Streams application is stopped. This seems to be governed by the CleanupConfig passed to the StreamsBuilderFactoryBean.
I tried to disable this as follows:
@SpringBootApplication
@EnableKafka
@EnableKafkaStreams
public class App {

    public static final String STORE_NAME = "store";
    public static final String INPUT_TOPIC = "input";

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @Bean
    public StreamsBuilderFactoryBean defaultKafkaStreamsBuilder(KafkaStreamsConfiguration streamsConfig) {
        return new StreamsBuilderFactoryBean(streamsConfig, new CleanupConfig(false, false));
    }

    @Bean
    public KTable<String, Integer> kTable(StreamsBuilder kStreamBuilder) {
        return kStreamBuilder.table(
                INPUT_TOPIC,
                Consumed.with(new Serdes.StringSerde(), new Serdes.IntegerSerde()),
                Materialized.as(STORE_NAME)
        );
    }
}
Still, when the application is stopped, all state in /tmp/kafka-streams/ is deleted. Any idea how this should be done correctly?
I'm reading a ton of questions and answers about this topic, but I can't solve my problem.
I initialized a Spring Boot project with Kafka and spring-data-jdbc.
What I'm trying to do is:
1. Configure a Kafka JDBC Connector in order to push record changes from a PostgreSQL DB into a Kafka topic.
2. Set up a Kafka consumer that consumes the records pushed into the topic by inserting them into another PostgreSQL DB.
Point 1 works fine.
For point 2 I'm having some problems.
This is how the project is organized:
com.migration
- MigrationApplication.java
com.migration.config
- KafkaConsumerConfig.java
com.migration.db
- JDBCConfig.java
- RecordRepository.java
com.migration.listener
- MessageListener.java
com.migration.model
- Record.java
- AbstractRecord.java
- PostgresRecord.java
This is the MessageListener class
@EnableJdbcRepositories("com.migration.db")
@Transactional
@Configuration
public class MessageListener {

    @Autowired
    private RecordRepository repository;

    @KafkaListener(topics = {"author"}, groupId = "migrator", containerFactory = "migratorKafkaListenerContainerFactory")
    public void listenGroupMigrator(Record record) {
        repository.insert(record);
        throw new RuntimeException();
    }
}
I think it's pretty clear: it sets up a Kafka consumer that listens on the "author" topic and consumes each record by inserting it into the DB.
As you can see, inside the listenGroupMigrator() method the record is inserted into the DB and then a RuntimeException is thrown, because I'm checking whether @Transactional works and the rollback is performed.
But no, the rollback is not performed, even though the class is annotated with @Transactional.
For completeness, here are the other classes.
RecordRepository class:
@Repository
public class RecordRepository {

    public RecordRepository() {}

    public void insert(Record record) {
        JDBCConfig jdbcConfig = new JDBCConfig();
        SimpleJdbcInsert messageInsert = new SimpleJdbcInsert(jdbcConfig.postgresDataSource());
        messageInsert.withTableName(record.tableName()).execute(record.content());
    }
}
JDBCConfig class
@Configuration
public class JDBCConfig {

    @Bean
    public DataSource postgresDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.postgresql.Driver");
        dataSource.setUrl("jdbc:postgresql://localhost:5432/db");
        dataSource.setUsername("postgres");
        dataSource.setPassword("root");
        return dataSource;
    }
}
KafkaConsumerConfig class:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrap-server}")
    private String bootstrapServer;

    private <T extends Record> ConsumerFactory<String, T> consumerFactory(String groupId, Class<T> clazz) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(clazz));
    }

    private <T extends Record> ConcurrentKafkaListenerContainerFactory<String, T> kafkaListenerContainerFactory(String groupId, Class<T> clazz) {
        ConcurrentKafkaListenerContainerFactory<String, T> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory(groupId, clazz));
        return factory;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, PostgresRecord> migratorKafkaListenerContainerFactory() {
        return kafkaListenerContainerFactory("migrator", PostgresRecord.class);
    }
}
MigrationApplication class
@SpringBootApplication
public class MigrationApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(MigrationApplication.class, args);
        MessageListener listener = context.getBean(MessageListener.class);
    }
}
How can I make the listenGroupMigrator method transactional?
I have a Spring application with a Kafka consumer using a @KafkaListener annotation. The topic being consumed is log-compacted and we might hit the scenario where we must consume the topic's messages again. What's the best way to achieve this programmatically? We don't control the Kafka topic configuration.
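You can inject the Consumer into the listener method and keep a flag; when the flag is set, seek all assigned partitions back to the beginning: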
@KafkaListener(...)
public void listen(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
    System.out.println(in);
    if (this.resetNeeded) {
        consumer.seekToBeginning(consumer.assignment());
        this.resetNeeded = false;
    }
}
If you want to reset when the listener is idle (no records), you can enable idle events and perform the seeks by listening for a ListenerContainerIdleEvent in an ApplicationListener or @EventListener method.
The event has a reference to the consumer.
EDIT
@SpringBootApplication
public class So58769796Application {

    public static void main(String[] args) {
        SpringApplication.run(So58769796Application.class, args);
    }

    @KafkaListener(id = "so58769796", topics = "so58769796")
    public void listen1(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
        System.out.println("One:" + key + ":" + value);
    }

    @KafkaListener(id = "so58769796a", topics = "so58769796")
    public void listen2(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
        System.out.println("Two:" + key + ":" + value);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so58769796")
                .compact()
                .partitions(1)
                .replicas(1)
                .build();
    }

    boolean reset;

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so58769796", "foo", "bar");
            System.out.println("Hit enter to rewind");
            System.in.read();
            this.reset = true;
        };
    }

    @EventListener
    public void listen(ListenerContainerIdleEvent event) {
        System.out.println(event);
        if (this.reset && event.getListenerId().startsWith("so58769796-")) {
            event.getConsumer().seekToBeginning(event.getConsumer().assignment());
        }
    }
}
and set this property:
spring.kafka.listener.idle-event-interval=5000
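With this set, the container publishes a ListenerContainerIdleEvent every 5 seconds while no records are being received, which is what drives the @EventListener method above.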
EDIT2
Here's another technique - in this case we rewind each time the app starts (and on demand)...
@SpringBootApplication
public class So58769796Application implements ConsumerSeekAware {

    public static void main(String[] args) {
        SpringApplication.run(So58769796Application.class, args);
    }

    @KafkaListener(id = "so58769796", topics = "so58769796")
    public void listen(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
        System.out.println(key + ":" + value);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so58769796")
                .compact()
                .partitions(1)
                .replicas(1)
                .build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            KafkaListenerEndpointRegistry registry) {
        return args -> {
            template.send("so58769796", "foo", "bar");
            System.out.println("Hit enter to rewind");
            System.in.read();
            registry.getListenerContainer("so58769796").stop();
            registry.getListenerContainer("so58769796").start();
        };
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
    }
}
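Stopping and restarting the container forces the partitions to be reassigned, so onPartitionsAssigned() runs again and the seeks to the beginning are repeated; the same happens on every application start.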
I would like to collect metrics with Vert.x Micrometer Metrics, so I need to set the proper options on VertxOptions. I run Vert.x with the Launcher, and there is a beforeDeployingVerticle hook, but when I override it, it's not called.
I overrode the Launcher class and the beforeDeployingVerticle method, but this method is never executed.
public class LauncherTest {

    public static class SimpleVerticle extends AbstractVerticle {

        @Override
        public void start(Future<Void> startFuture) throws Exception {
            System.out.println("verticle started");
        }
    }

    public static class LauncherWithHook extends Launcher {

        @Override
        public void beforeDeployingVerticle(DeploymentOptions deploymentOptions) {
            System.out.println("before deploying");
        }
    }

    public static void main(String[] args) {
        new LauncherWithHook().execute("run", SimpleVerticle.class.getName());
    }
}
As a result I only get "verticle started", but I also expect to see "before deploying". Should I register this hook in some different way?
Change your main method like this:
public static void main(String[] args) {
    String[] argz = {"run", "your.namespace.LauncherTest$SimpleVerticle"};
    LauncherWithHook launcher = new LauncherWithHook();
    launcher.dispatch(argz);
}
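If I read the Launcher code correctly, the VertxLifecycleHooks callbacks (including beforeDeployingVerticle) are only consulted when the command is started through dispatch(), which registers the launcher instance as the hook target; calling execute() directly bypasses that, so the hook never fires.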
In Spring Batch it would be great to keep track of the executing thread through logging. However, MDC does not seem to work:
MDC.put("process", "batchJob");
logger.info("{}; status={}", getJobName(), batchStatus.name());
Has anyone got MDC working in Spring Batch?
I solved it by adding a JobExecutionListener like this:
public class Slf4jBatchJobListener implements JobExecutionListener {

    private static final String DEFAULT_MDC_UUID_TOKEN_KEY = "Slf4jMDCFilter.UUID";

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Override
    public void beforeJob(JobExecution jobExecution) {
        String token = UUID.randomUUID().toString().toUpperCase();
        MDC.put(DEFAULT_MDC_UUID_TOKEN_KEY, token);
        logger.info("Job {} with id {} starting...", jobExecution.getJobInstance().getJobName(), jobExecution.getId());
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        logger.info("Job {} with id {} ended.", jobExecution.getJobInstance().getJobName(), jobExecution.getId());
        MDC.remove(DEFAULT_MDC_UUID_TOKEN_KEY);
    }
}
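The listener still has to be registered on the job itself. A minimal sketch of that wiring, assuming the classic JobBuilderFactory API; the job name and the migrationStep bean are made-up placeholders:

@Bean
public Job loggingJob(JobBuilderFactory jobs, Step migrationStep) {
    // Attach the listener so beforeJob()/afterJob() set and clear the MDC token
    return jobs.get("loggingJob")
            .listener(new Slf4jBatchJobListener())
            .start(migrationStep)
            .build();
}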
Because some jobs are multi-threaded, I also had to add a TaskDecorator in order to copy the MDC from the parent thread to the worker threads, like this:
public class Slf4JTaskDecorator implements TaskDecorator {

    @Override
    public Runnable decorate(Runnable runnable) {
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return () -> {
            try {
                MDC.setContextMap(contextMap);
                runnable.run();
            } finally {
                MDC.clear();
            }
        };
    }
}
Set the TaskDecorator on the TaskExecutor:
@Bean
public TaskExecutor taskExecutor() {
    SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor("spring_batch");
    taskExecutor.setConcurrencyLimit(maxThreads);
    taskExecutor.setTaskDecorator(new Slf4JTaskDecorator());
    return taskExecutor;
}
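This executor is then the one the multi-threaded step uses. A rough sketch, assuming the classic StepBuilderFactory API; the step name and the reader/writer beans are hypothetical:

@Bean
public Step multiThreadedStep(StepBuilderFactory steps, TaskExecutor taskExecutor,
        ItemReader<String> reader, ItemWriter<String> writer) {
    // Each chunk runs on a decorated thread, so the MDC token is propagated
    return steps.get("multiThreadedStep")
            .<String, String>chunk(100)
            .reader(reader)
            .writer(writer)
            .taskExecutor(taskExecutor)
            .build();
}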
And lastly, update the logging pattern in properties:
logging:
  pattern:
    level: "%5p %X{Slf4jMDCFilter.UUID}"
I wrote a sample Spring AMQP producer that sends messages to a RabbitMQ server, and I consume those messages with a MessageListener, also using Spring AMQP. I want to set the queue and message durability to false. Could anyone help me with how to set the "durable" flag to false using annotations?
Here is the sample code:
@Configuration
public class ProducerConfiguration {

    protected final String queueName = "hello.queue";

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.queueName);
        template.setQueue(this.queueName);
        return template;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("guest");
        connectionFactory.setPassword("guest");
        return connectionFactory;
    }
}
public class Producer {

    public static void main(String[] args) throws Exception {
        new Producer().send();
    }

    public void send() {
        ApplicationContext context = new AnnotationConfigApplicationContext(
                ProducerConfiguration.class);
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);
        for (int i = 1; i <= 10; i++) {
            rabbitTemplate.convertAndSend(i);
        }
    }
}
Thanks in Advance.
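Declare the queue as a non-durable Queue bean and add a RabbitAdmin so it gets declared on the broker: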
@Configuration
public class Config {

    @Bean
    public ConnectionFactory connectionFactory() {
        return new CachingConnectionFactory();
    }

    @Bean
    public Queue foo() {
        return new Queue("foo", false);
    }

    @Bean
    public RabbitAdmin rabbitAdmin() {
        return new RabbitAdmin(connectionFactory());
    }
}
The RabbitAdmin will declare the queue the first time the connection is opened. Note that you can't change an existing queue from durable to non-durable; delete it first.
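Since the question asks about annotations specifically: on the consumer side you can also declare a non-durable queue directly on a @RabbitListener. A sketch, assuming Spring AMQP 2.x with @EnableRabbit and a RabbitAdmin in the context; the handler method is a made-up example and reuses the queue name from the question:

@RabbitListener(queuesToDeclare = @Queue(value = "hello.queue", durable = "false"))
public void receive(Integer message) {
    // The queue is declared as non-durable when the listener container starts
    System.out.println("Received: " + message);
}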