Spring kafka setErrorHandler deprecated replacement (boot 2.6.4) - apache-kafka

On Spring Boot 2.6.4, this method is deprecated.
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    configurer.configure(factory, consumerFactory());
    // deprecated
    factory.setErrorHandler(new GlobalErrorHandler());
    return factory;
}
The global error handler class
public class GlobalErrorHandler implements ConsumerAwareErrorHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalErrorHandler.class);

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> data, Consumer<?, ?> consumer) {
        // my custom global logic (e.g. notify ops team via slack)
    }
}
What is the replacement for this? The docs say I should use setCommonErrorHandler, but how do I implement the CommonErrorHandler interface, given that it has no abstract methods to override?
Point is, I have to send a Slack notification to the ops team based on a certain condition (the message type, which is available in the Kafka message header).
This is not blocking, just an annoying deprecation warning.
Thanks

See the Spring for Apache Kafka documentation; legacy error handlers are replaced with CommonErrorHandler implementations.
What's New?
https://docs.spring.io/spring-kafka/docs/current/reference/html/#x28-eh
The legacy GenericErrorHandler and its sub-interface hierarchies for record and batch listeners have been replaced by a new single interface CommonErrorHandler with implementations corresponding to most legacy implementations of GenericErrorHandler. See Container Error Handlers for more information.
Container Error Handlers
https://docs.spring.io/spring-kafka/docs/current/reference/html/#error-handlers
Starting with version 2.8, the legacy ErrorHandler and BatchErrorHandler interfaces have been superseded by a new CommonErrorHandler. These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener. CommonErrorHandler implementations to replace most legacy framework error handler implementations are provided and the legacy error handlers deprecated. The legacy interfaces are still supported by listener containers and listener container factories; they will be deprecated in a future release.
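If you do not need custom logic, a framework-provided CommonErrorHandler can simply be plugged in. A minimal sketch based on the factory from the question (DefaultErrorHandler is the replacement for the old SeekToCurrentErrorHandler):
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    configurer.configure(factory, consumerFactory());
    // replacement for the deprecated setErrorHandler(...)
    factory.setCommonErrorHandler(new DefaultErrorHandler());
    return factory;
}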

I was facing exactly the same problem, so I replaced ConsumerAwareErrorHandler with CommonErrorHandler and implemented handleRecord as described in the docs, and it works!
public class GlobalErrorHandler implements CommonErrorHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalErrorHandler.class);

    @Override
    public void handleRecord(
            Exception thrownException,
            ConsumerRecord<?, ?> record,
            Consumer<?, ?> consumer,
            MessageListenerContainer container) {
        log.warn("Global error handler for message: {}", record.value().toString());
    }
}
In KafkaConfig.class
@Bean(value = "kafkaListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer) {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, consumerFactory());
    factory.setCommonErrorHandler(new GlobalErrorHandler());
    return factory;
}
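To apply the condition from the question (notify the ops team only for certain message types), the type can be read from the record headers inside handleRecord. A minimal sketch; the "messageType" header name, the "IMPORTANT" value and the SlackNotifier helper are all assumptions, not part of the original code:
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.springframework.kafka.listener.CommonErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;

public class GlobalErrorHandler implements CommonErrorHandler {

    // Hypothetical helper; replace with whatever client you use to reach Slack
    private final SlackNotifier slackNotifier = new SlackNotifier();

    @Override
    public void handleRecord(Exception thrownException, ConsumerRecord<?, ?> record,
            Consumer<?, ?> consumer, MessageListenerContainer container) {
        // "messageType" is an assumed header name; adjust it to whatever your producers set
        Header typeHeader = record.headers().lastHeader("messageType");
        String messageType = typeHeader != null
                ? new String(typeHeader.value(), StandardCharsets.UTF_8)
                : "unknown";
        if ("IMPORTANT".equals(messageType)) {   // assumed notification condition
            slackNotifier.notifyOps(messageType, thrownException);
        }
    }
}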

Related

Axon Framework - Configuring Multiple EventStores in Axon Configuration

We have a use case wherein each aggregate root should have a different event store. We have used the following configuration, where currently only one event store is configured, as below:
@Configuration
@EnableDiscoveryClient
public class AxonConfig {

    private static final String DOMAIN_EVENTS_COLLECTION_NAME = "coll-capture.domainEvents";
    // private static final String DOMAIN_EVENTS_COLLECTION_NAME_TEST = "coll-capture.domainEvents-test";

    @Value("${mongodb.database}")
    private String databaseName;

    @Value("${spring.application.name}")
    private String appName;

    @Bean
    public RestTemplate restTemplate() {
        CloseableHttpClient httpClient = HttpClientBuilder.create().build();
        HttpComponentsClientHttpRequestFactory clientHttpRequestFactory =
                new HttpComponentsClientHttpRequestFactory(httpClient);
        return new RestTemplate(clientHttpRequestFactory);
    }

    @Bean
    @Profile({"uat", "prod"})
    public CommandRouter springCloudHttpBackupCommandRouter(DiscoveryClient discoveryClient,
            Registration localInstance,
            RestTemplate restTemplate,
            @Value("${axon.distributed.spring-cloud.fallback-url}") String messageRoutingInformationEndpoint) {
        return new SpringCloudHttpBackupCommandRouter(discoveryClient,
                localInstance,
                new AnnotationRoutingStrategy(),
                serviceInstance -> appName.equalsIgnoreCase(serviceInstance.getServiceId()),
                restTemplate,
                messageRoutingInformationEndpoint);
    }

    @Bean
    public Repository<TestEnquiry> testEnquiryRepository(EventStore eventStore) {
        return new EventSourcingRepository<>(TestEnquiry.class, eventStore);
    }

    @Bean
    public Repository<Test2Enquiry> test2enquiryRepository(EventStore eventStore) {
        return new EventSourcingRepository<>(Test2Enquiry.class, eventStore);
    }

    @Bean
    public EventStorageEngine eventStorageEngine(MongoClient client) {
        MongoTemplate mongoTemplate = new DefaultMongoTemplate(client, databaseName)
                .withDomainEventsCollection(DOMAIN_EVENTS_COLLECTION_NAME);
        return new MongoEventStorageEngine(mongoTemplate);
    }
}
Now, we want to configure "DOMAIN_EVENTS_COLLECTION_NAME_TEST" (just for example) as well in the EventStorageEngine. How can we achieve support for multiple event stores and select, for each tracking processor, which collection it should read from?
If you are going the route of segregating the event streams, then combining them from an event handling perspective could become a necessity indeed. Especially when having several bounded contexts, segregating the event streams into distinct storage solutions is reasonable.
If you want to define which [message source / event store] is used by a TrackingEventProcessor, you will have to deal with the EventProcessingConfigurer. More specifically, you should invoke the EventProcessingConfigurer#registerTrackingEventProcessor(String, Function<Configuration, StreamableMessageSource<TrackedEventMessage<?>>>) method. The first String parameter is the name of the processor you want to configure as being "tracking". The second parameter defines a Function which gives you the message source to be used by this TrackingEventProcessor (TEP). It is here where you should provide the event store you want this TEP to ingest events from.
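For example, a minimal sketch of such a registration, placed in the AxonConfig from the question (it assumes a second EventStore bean, here called testEventStore, backed by the test collection; the processor name is illustrative):
@Autowired
public void configureProcessors(EventProcessingConfigurer configurer,
                                @Qualifier("testEventStore") EventStore testEventStore) {
    // This TEP tracks events from the test event store instead of the default one
    configurer.registerTrackingEventProcessor(
            "test-processor",
            configuration -> testEventStore);
}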
Pairing them up at a later stage could also occur of course, which is also supported by Axon Framework. This boils down to a specific form of StreamableMessageSource implementation.
More specifically, you can use the MultiStreamableMessageSource, where you can connect any number of StreamableMessageSources together.
Note that Axon's EmbeddedEventStore is in essence an implementation of a StreamableMessageSource. Once you have built the MultiStreamableMessageSource, you will of course have to specify it as the messageSource for your TrackingEventProcessors.
Last note: know that this solution can only be used with TrackingEventProcessors, as those are the only Event Processors provided by Axon that ingest a StreamableMessageSource as the source for their events.
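A minimal sketch of combining two stores and assigning the combined source to a processor (the bean and processor names are illustrative):
MultiStreamableMessageSource combinedSource = MultiStreamableMessageSource.builder()
        .addMessageSource("mainStore", mainEventStore)   // default store
        .addMessageSource("testStore", testEventStore)   // store backed by the test collection
        .build();

configurer.registerTrackingEventProcessor(
        "combined-processor",
        configuration -> combinedSource);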

Overridden RabbitSourceConfiguration (app starters) does not work with Spring Cloud Edgware

I'm testing an upgrade of my Spring Cloud DataFlow services from Spring Cloud Dalston.SR4/Spring Boot 1.5.9 to Spring Cloud Edgware/Spring Boot 1.5.9. Some of my services extend source (or sink) components from the app starters. I've found this does not work with Spring Cloud Edgware.
For example, I have overridden org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration and bound my app to my overridden version. This has previously worked with Spring Cloud versions going back almost a year.
With Edgware, I get the following (whether the app is run standalone or within dataflow):
***************************
APPLICATION FAILED TO START
***************************
Description:
Field channels in org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration required a bean of type 'org.springframework.cloud.stream.messaging.Source' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.cloud.stream.messaging.Source' in your configuration.
I get the same behaviour with the 1.3.0.RELEASE and 1.2.0.RELEASE of spring-cloud-starter-stream-rabbit.
I override RabbitSourceConfiguration so I can set a header mapper on the AmqpInboundChannelAdapter, and also to perform a connectivity test prior to starting up the container.
My subclass is bound to the Spring Boot application with @EnableBinding(HeaderMapperRabbitSourceConfiguration.class). A cut-down version of my subclass is:
public class HeaderMapperRabbitSourceConfiguration extends RabbitSourceConfiguration {

    public HeaderMapperRabbitSourceConfiguration(final MyHealthCheck healthCheck,
            final MyAppConfig config) {
        // ...
    }

    @Bean
    @Override
    public AmqpInboundChannelAdapter adapter() {
        final AmqpInboundChannelAdapter adapter = super.adapter();
        adapter.setHeaderMapper(new NotificationHeaderMapper(config));
        return adapter;
    }

    @Bean
    @Override
    public SimpleMessageListenerContainer container() {
        if (config.performConnectivityCheckOnStartup()) {
            if (LOGGER.isInfoEnabled()) {
                LOGGER.info("Attempting connectivity with ...");
            }
            final Health health = healthCheck.health();
            if (health.getStatus() == Status.DOWN) {
                LOGGER.error("Unable to connect .....");
                throw new UnableToLoginException("Unable to connect ...");
            } else if (LOGGER.isInfoEnabled()) {
                LOGGER.info("Connectivity established with ...");
            }
        }
        return super.container();
    }
}
You really should never do stuff like healthCheck.health(); within a @Bean definition. The application context is not yet fully baked or started; it may, or may not, work depending on the order in which beans are created.
If you want to prevent the app from starting, add a bean that implements SmartLifecycle, put the bean in a late phase (a high phase value) so it's started after everything else, and put your code in start(). autoStartup must be true.
In this case, it's being run before the stream infrastructure has created the channel.
Some ordering might have changed from the earlier release but, in any case, performing activity like this in a #Bean definition is dangerous.
You just happened to be lucky before.
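A minimal sketch of that suggestion (the class name is illustrative; MyHealthCheck comes from the question):
import org.springframework.boot.actuate.health.Status;
import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

@Component
public class ConnectivityStartupCheck implements SmartLifecycle {

    private final MyHealthCheck healthCheck;
    private volatile boolean running;

    public ConnectivityStartupCheck(MyHealthCheck healthCheck) {
        this.healthCheck = healthCheck;
    }

    @Override
    public void start() {
        // Runs after the rest of the context, including the binding infrastructure, has started
        if (healthCheck.health().getStatus() == Status.DOWN) {
            throw new IllegalStateException("Unable to connect ...");
        }
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;   // must be true so start() is invoked automatically
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE;   // late phase: started after everything else
    }
}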
EDIT
I just noticed your @EnableBinding is wrong; it should be Source.class. I can't see how that would ever have worked - that's what creates the bean for the channels field of type Source.
This works fine for me after updating stream and the binder to 1.3.0.RELEASE...
@Configuration
public class MySource extends RabbitSourceConfiguration {

    @Bean
    @Override
    public AmqpInboundChannelAdapter adapter() {
        AmqpInboundChannelAdapter adapter = super.adapter();
        adapter.setHeaderMapper(new MyMapper());
        return adapter;
    }
}
and
@SpringBootApplication
@EnableBinding(Source.class)
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
If that doesn't work, please edit the question to show your POM.

Error using "condition parameter header" @StreamListener of new release Chelsea.RC1

I am trying to use the event filter to reduce the number of topics the application uses, via the new feature available in the new Spring Cloud Stream release (Chelsea.RC1). The message is being created with the correct header; however, inspecting the contents of the message in the queue, the message does not contain the header, only the body with the payload.
public void sendEnroll(EnrollCommand data) {
    // MessageChannel
    outputEnroll.send(MessageBuilder
            .withPayload(data)
            .setHeader("brand", "MASTERCARD")
            .setHeader("operation", Operation.ENROLL)
            .build());
}
Consumer
@Service
@EnableBinding(Channel.class)
public class EnrollConsumer {

    @Autowired
    private EnrollService service;

    @StreamListener(target = Channel.INPUT_ENROLL, condition = "headers['brand']=='MASTERCARD'")
    public void enrollConsumer(@Payload String command) {
        System.out.println(command);
        // service.enrollment(command);
    }
}
In the consumer service, it gives the following warning:
WARN -kafka-listener-1 o.s.c.s.b.DispatchingStreamListenerMessageHandler:62 - Cannot find a #StreamListener matching for message with id: 7baae934-7484-a7fd-91b0-ba906558bb13
You have to map your custom headers:
spring.cloud.stream.kafka.binder.headers = brand,operation
That information is present in the documentation.

Using a Producer Method To Choose a Bean Implementation

I followed the example here for dynamically selecting the implementation to inject at run time. I then tried to implement it based on my understanding, but my code always returns the default implementation.
Here is my code
@Stateless
public class MemberRegistration {

    @Inject
    private Logger log;

    @Inject
    private EntityManager em;

    @Inject
    private Event<Member> memberEventSrc;

    @Inject
    @Switch
    IHandler handler;

    private int handlerValue;

    public String testCDI(int value) {
        handlerValue = value;
        log.info("handling " + value);
        log.info("handling " + handlerValue);
        return handler.handle();
    }

    @Produces
    @RequestScoped
    @Switch
    public IHandler produceHandler(@New Handler0 handler0,
            @New Handler1 handler1) {
        log.info("Calling producer method with handler: " + handlerValue);
        switch (handlerValue) {
            case 1:
                log.info("returning one");
                return handler1;
            case 0:
                log.info("returning 0");
                return handler0;
            default:
                log.info("returning default");
                return handler1;
        }
    }
}
When I call the method testCDI, I update handlerValue so that my producer method can use that value. What am I missing here to ensure that the producer method is called when the right value is available?
The code is running on Wildfly 8.2.0
The instance injected isn't going to be resolved when you call the method, but at the time of injection of the bean (the stateless session bean in this case). As a result, handlerValue will be 0.
You can however use an Instance<IHandler> to defer the injection. Use an annotation literal instead of your switch to do something like
@Inject
@Any
private Instance<IHandler> handlerInst;
Then in your code
IHandler handler = handlerInst.select(new SwitchLiteral(value)).get();
Then do work against that instance; note that in your producer you need to use the InjectionPoint class to read the Switch annotation represented by the SwitchLiteral.
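A minimal sketch of such a literal, assuming the Switch qualifier has a single int member named value (adjust to your actual qualifier):
import javax.enterprise.util.AnnotationLiteral;

public class SwitchLiteral extends AnnotationLiteral<Switch> implements Switch {

    private final int value;

    public SwitchLiteral(int value) {
        this.value = value;
    }

    @Override
    public int value() {
        return value;
    }
}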
You are running into a cyclic dependency here with your simplified code. Fields injected with plain @Inject need to be resolved BEFORE MemberRegistration is created, but the handler field can only be created by a producer method AFTER MemberRegistration is created (beans with producer methods are created according to the same rules as other CDI beans).
There are 2 solutions:
Either you create a separate HandlerProducer class, which contains the produceHandler() method and also the handlerValue field. You should mark the class as @ApplicationScoped in order to reuse the same instance all over (a sketch follows after the solution 2 example below).
Or you not only produce IHandler dynamically, but also use (inject) it dynamically, only when really needed in MemberRegistration - this way the handler is produced not BEFORE MemberRegistration is created, but after, or never if not required. You do this by injecting an Instance object and then using its get() method to retrieve the handler when needed. Anyway, I am not sure whether CDI will create a new instance every time or reuse the existing EJB. The scopes of EJBs and plain CDI beans are completely different and, in general, I would not use an EJB as a bean with producer methods. It is better to always create a separate bean for producer methods, as in solution 1.
Example for solution 2 follows:
@Inject
@Switch
Instance<IHandler> handlerInjector;

private int handlerValue;

public String testCDI(int value) {
    handlerValue = value;
    log.info("handling " + value);
    log.info("handling " + handlerValue);
    return handlerInjector.get().handle();
}

Cannot remove a JPA entity using Spring Integration

When I try to remove an entity using the Outbound Channel Adapter, I always get a "removing a detached instance" exception.
I know that an entity should be retrieved and deleted in the same transaction to avoid this exception, but how can I achieve it with Spring Integration?
To demonstrate the problem I modified the JPA sample:
PersonService.java
public interface PersonService {
...
void deletePerson(Person person);
}
Main.java
private static void deletePerson(final PersonService service) {
final List<Person> people = service.findPeople();
Person p1 = people.get(0);
service.deletePerson(p1);
}
spring-integration-context.xml
<int:gateway id="personService"
service-interface="org.springframework.integration.samples.jpa.service.PersonService"
default-request-timeout="5000" default-reply-timeout="5000">
<int:method name="createPerson" request-channel="createPersonRequestChannel"/>
<int:method name="findPeople" request-channel="listPeopleRequestChannel"/>
<int:method name="deletePerson" request-channel="deletePersonChannel"/>
</int:gateway>
<int:channel id="deletePersonChannel"/>
<int-jpa:outbound-channel-adapter entity-manager-factory="entityManagerFactory"
channel="deletePersonChannel" persist-mode="DELETE" >
<int-jpa:transactional transaction-manager="transactionManager" />
</int-jpa:outbound-channel-adapter>
When I call deletePerson I get the exception:
Exception in thread "main" java.lang.IllegalArgumentException:
Removing a detached instance
org.springframework.integration.samples.jpa.Person#1001
UPDATE:
Apparently I should've chosen a sample closer to my actual project, because here you can just create a new transaction programmatically and wrap both retrieve and delete function calls in it, as Artem did.
In my project I have a transformer connected to an outbound-channel-adapter. The transformer retrieves an entity and the outbound-channel-adapter removes it. How can I get the transformer and the outbound-channel-adapter to use the same transaction in this case?
To get it working you should wrap all the operations in deletePerson in a transaction, e.g.
private static void deletePerson(final PersonService service) {
    new TransactionTemplate(transactionManager)
            .execute(new TransactionCallbackWithoutResult() {

                protected void doInTransactionWithoutResult(TransactionStatus status) {
                    final List<Person> people = service.findPeople();
                    Person p1 = people.get(0);
                    service.deletePerson(p1);
                }
            });
}
In this case you should also somehow provide the transactionManager bean to your method.
UPDATE:
I showed you a sample for the use case in the original question.
Now re. <transformer> -> <jpa:outbound-channel-adapter>.
In this case you should understand where your message flow starts:
If it is an <inbound-channel-adapter> with a poller, just make the <poller> <transactional>.
If it is a <gateway> that calls the <transformer>, it's enough to mark the gateway's method with @Transactional (see the sketch below).
Here is one more transactional advice trick: Keep transaction within Spring Integration flow
In all cases you should get rid of <transactional> from your <jpa:outbound-channel-adapter>
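For example, when the flow starts from the PersonService gateway shown in the question, a minimal sketch (assuming direct channels, so the transformer and the adapter run on the gateway's calling thread) could be:
public interface PersonService {

    List<Person> findPeople();

    // Starts a transaction spanning the whole downstream flow:
    // gateway -> transformer -> <int-jpa:outbound-channel-adapter>
    @Transactional
    void deletePerson(Person person);
}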