I'm trying to set up a simple Spring Cloud Stream Sink but keep running into the following error.
I've tried several binders and they all give the same error.
"SEVERE","logNameSource":"org.springframework.boot.diagnostics.LoggingFailureAnalysisReporter","message":"
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method binderAwareRouterBeanPostProcessor in org.springframework.cloud.stream.config.BindingServiceConfiguration required a bean of type '[Lorg.springframework.integration.router.AbstractMappingMessageRouter;' that could not be found.
Action:
Consider defining a bean of type '[Lorg.springframework.integration.router.AbstractMappingMessageRouter;' in your configuration.
I'm trying to use a simple Sink to log an incoming message from a Kafka topic:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
public class ReadEMPMesage {

    private static Logger logger = LoggerFactory.getLogger(ReadEMPMesage.class);

    public ReadEMPMesage() {
        System.out.println("In constructor");
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(String ccpEvent) {
        logger.info("Received" + ccpEvent);
    }
}
and my configuration is as follows
# Test consumer properties
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=testEmbeddedKafkaApplication
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
# Binding properties
spring.cloud.stream.bindings.output.destination=testEmbeddedOut
spring.cloud.stream.bindings.input.destination=testEmbeddedIn
spring.cloud.stream.bindings.output.producer.headerMode=raw
spring.cloud.stream.bindings.input.consumer.headerMode=raw
spring.cloud.stream.bindings.input.group=embeddedKafkaApplication
and my pom
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
TL;DR - check your version of Spring Boot and try upgrading it a few minor revs.
I ran into this problem on a project after upgrading from Spring Cloud DALSTON.RELEASE to Spring Cloud Edgware.SR4 -- it was strange because other projects worked fine but there was a single one that didn't.
After further investigation, I realized that the troublemaker project was using Spring Boot 1.5.3.RELEASE while the others were using 1.5.9.RELEASE.
After upgrading Spring Boot to 1.5.9.RELEASE, things started working.
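For reference, the fix amounted to bumping the Boot version; if you manage it through the starter parent, it looks roughly like this (a sketch; adjust if you pin the Boot version another way):
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.9.RELEASE</version>
    <relativePath/>
</parent>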
I'm trying to use Spring Cloud Stream to process messages sent to an Azure Event Hub instance. Those messages should be routed to a tenant-specific topic determined at runtime, based on message content, on a Kafka cluster. For development purposes, I'm running Kafka locally via Docker.
I've done some research about bindings not known at configuration time and have found that dynamic destination resolution might be exactly what I need for this scenario.
However, the only way I have gotten my solution working is to use StreamBridge (a rough sketch of that workaround follows below). I would rather use the dynamic destination header spring.cloud.stream.sendto.destination, so that the processor could be written as a Function<> instead of a Consumer<> (it is not really a sink). The main concern about this approach is that, since the final solution will be deployed with Spring Cloud Data Flow, I'm afraid I will have trouble configuring the streams if I use StreamBridge.
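A simplified sketch of the kind of StreamBridge workaround I mean (not my actual code; tenant resolution is hard-coded and the destination name is built the same way as in the function below):

@Bean
public Consumer<Message<String>> routeMessageToTenantDestination(StreamBridge streamBridge) {
    return msg -> {
        final String tenantId = "test"; // tenant resolution stripped for brevity
        final String destination = String.format("%s.gateway-report", tenantId);
        // StreamBridge resolves/creates the output binding for the destination at runtime
        streamBridge.send(destination, msg.getPayload());
    };
}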
Moving on to the code, this is the processor function; I've stripped away the unrelated parts.
private static final String OUTPUT_DESTINATION_TEMPLATE = "%s.gateway-report";
private static final String STREAM_DESTINATION_HEADER = "spring.cloud.stream.sendto.destination";
private static final String TENANT_ID_HEADER = "tenant-id";
@Bean
public Function<Message<String>, Message<String>> routeMessageToTenantDestination(
        TenantGatewayDeviceService gatewayDeviceService) {
    return msg -> {
        final String tenantId = "test";
        final String destination = String.format(OUTPUT_DESTINATION_TEMPLATE, tenantId);
        return MessageBuilder.withPayload(msg.getPayload())
                .setHeader(STREAM_DESTINATION_HEADER, destination)
                .setHeader(TENANT_ID_HEADER, tenantId)
                .build();
    };
}
and this is my application.yml
spring:
  cloud:
    stream:
      bindings:
        routeMessageToTenantDestination-in-0:
          binder: kafka-evthub
          destination: gateway-report
          group: report-processor
      dynamic-destinations:
      binders:
        kafka-ioc:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:29092
        kafka-evthub:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: xxxxxxxxxxx.servicebus.windows.net:9093
              configuration:
                sasl:
                  jaas:
                    config: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://xxxxxxxxxxx.servicebus.windows.net/;SharedAccessKeyName=*******;SharedAccessKey=********";
                  mechanism: PLAIN
                security.protocol: SASL_SSL
      default-binder: kafka-ioc
My relevant dependencies in pom.xml
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
This is the exception I get each time the function fires
2022-01-20 10:56:18.848 ERROR 2258917 --- [container-0-C-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageHandlingException: error occurred in message handler [... stripped away ...]
at org.springframework.integration.support.utils.IntegrationUtils.wrapInHandlingExceptionIfNecessary(IntegrationUtils.java:191)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:65)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:208)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:385)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:79)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:442)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:416)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:125)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:255)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:119)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:42)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2588)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2569)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2483)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2405)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2284)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1958)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1353)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1344)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1236)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.NullPointerException
at org.springframework.cloud.stream.function.StreamBridge.resolveDestination(StreamBridge.java:276)
at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.doSendMessage(FunctionConfiguration.java:604)
at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.handleMessageInternal(FunctionConfiguration.java:597)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56)
... 32 more
I've tried different things, for instance manually creating the destination topic and configuring an explicit destination binding with the same name assigned in the header (not a definitive solution, just for testing), but I keep getting this exception. I've also tried providing a NewDestinationBindingCallback<> (sketched below), and I can see from a log statement that the framework enters the method, but I nevertheless keep getting the same error.
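For reference, the callback registration was along these lines (a rough sketch, not the exact code; the log statement is how I could tell the framework enters the method):

@Bean
public NewDestinationBindingCallback<KafkaProducerProperties> dynamicConfigurer() {
    return (destinationName, channel, producerProperties, extendedProperties) -> {
        // this line is printed when the new binding is created, yet the NPE still occurs
        log.info("Configuring new destination binding for {}", destinationName);
    };
}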
This also happens with the other approach for integrating Spring Cloud Stream with Event Hubs, namely the library azure-spring-cloud-stream-binder-eventhubs.
As I said previously, I've found a workaround in relying on StreamBridge, but this solution seems less desirable to me and I would like to understand what I'm missing.
EDIT: I made a small step forward and managed to make it work by downgrading the Spring Boot starter parent version from 2.6.2 to 2.4.4
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.4</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
and setting
<properties>
<spring-cloud.version>2020.0.2</spring-cloud.version>
</properties>
instead of 2021.0.0 in pom.xml, as found in the sample provided by sobychacko. However, this looks like either a regression or something missing in my configuration; what is needed to make this work with the most recent versions?
Not sure what exactly is causing the issues you have. I just created a basic sample app demonstrating the sendto.destination header and verified that the app works as expected. It is a multi-binder application with two Kafka clusters connected. The function will consume from the first cluster and then using the sendto header, produce the output to the second cluster. Compare the code/config in this sample with your app and see what is missing.
I see references to StreamBridge in the stacktrace you shared. However, when using the sendto.destination header, it shouldn't go through StreamBridge.
I'm trying to add a Camel route to a working project with Spring Boot for using MongoDB. I've been using Mongo with Spring Boot autoconfiguration, and it worked pretty easily.
I was confused about how to specify the bean that Spring Boot generates, but I finally found an answer to a related question on SO that said the name of the bean is "mongo". So I changed my route to .to("mongodb:mongo?....
Now Spring is trying to connect with default parameters, localhost and 27017, etc. So how do I figure out what properties to specify in application.properties to set the connection parameters? The documentation isn't being helpful here.
{Edit: I managed to figure this out. The below works now}
Here are the Maven dependencies I added:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-mongodb</artifactId>
<version>${camel-version}</version>
</dependency>
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-mongodb-starter</artifactId>
<version>${camel-version}</version>
</dependency>
And here are the additions to my application.properties file
spring.data.mongodb.host=<IP>
spring.data.mongodb.port=27017
spring.data.mongodb.database=dev
spring.data.mongodb.username=test
spring.data.mongodb.password=password
And the Camel route:
package Order;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;
@Component
public class OrderRouter extends RouteBuilder {

    @Override
    public void configure() {
        // Process message
        from("jms:topic:order")
            .log("JMS Message: ${body}")
            .choice()
                .when().jsonpath("$.[?(@.type=='partial')]")
                    .to("mongodb:mongo?database=dev&collection=order&operation=insert");
    }
}
Does this mean I need to define a bean when connecting with Camel? Looking at the documentation, it seems that it should generate a bean by adding camel-mongodb-starter along with the application.properties:
https://camel.apache.org/components/latest/mongodb-component.html#_spring_boot_auto_configuration
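If the starter did not auto-configure it, explicitly defining the client bean would look roughly like this (a sketch assuming Camel 3.x, where the component expects a com.mongodb.client.MongoClient; the connection string is a placeholder):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoClientConfig {

    // The bean name "mongo" is what the endpoint URI mongodb:mongo?... refers to
    @Bean(name = "mongo")
    public MongoClient mongo() {
        return MongoClients.create("mongodb://test:password@<IP>:27017/dev?authSource=admin");
    }
}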
I found the right Spring property prefix, but only by looking around for examples...
spring.data.mongodb
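As an aside, one way to see which Mongo-related beans were actually auto-configured (for instance, to confirm the "mongo" bean name used in the route) is to dump the bean names at startup; a minimal sketch (the class name is purely illustrative):

import java.util.Arrays;

import org.springframework.beans.factory.ListableBeanFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class MongoBeanNamePrinter implements CommandLineRunner {

    private final ListableBeanFactory beanFactory;

    public MongoBeanNamePrinter(ListableBeanFactory beanFactory) {
        this.beanFactory = beanFactory;
    }

    @Override
    public void run(String... args) {
        // list every bean whose name mentions "mongo"
        Arrays.stream(beanFactory.getBeanDefinitionNames())
              .filter(name -> name.toLowerCase().contains("mongo"))
              .forEach(System.out::println);
    }
}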
In my application I am using two data modules (spring-boot-starter-data-jpa and spring-boot-starter-data-redis).
I have a method annotated with @CachePut to store data in the cache. This method is called once, but the actual CachePut operation happens twice; when I debug, there are two proxies created which intercept the same method.
When I use only the spring-boot-starter-data-redis module it works as expected, because there is only one proxy created which intercepts the method carrying the @CachePut annotation.
But as per our requirements our application needs to use both data modules (spring-boot-starter-data-jpa for DB-related stuff and spring-boot-starter-data-redis for handling some cache-related stuff). If I add spring-boot-starter-data-jpa then the cache operation gets executed twice (due to multiple proxies).
Is there any way to disable proxy creation with @EnableCaching for the JPA module but enable it only for the Redis module?
Code Snippets
Configuration class:
@Configuration
@EnableCaching
public class MyConfiguration {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory, @NotNull RedisCacheProperties properties) {
        ....
    }
}
Dependencies
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
<version>1.5.9.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
<version>1.5.9.RELEASE</version>
</dependency>
CacheProcessor (Interface and Implementation classes)
public interface MyCacheProcessor {
MyData store(final MyData myData);
}
public class MyCacheProcessorImpl implements MyCacheProcessor {

    @CachePut(cacheNames = {"my-data"}, key = "#myData.id")
    public MyData store(final MyData myData) {
        log.debug("storing in redis"); // This is printed only once (but the actual cache put operation happens twice)
        return myData;
    }
}
yaml configs
spring:
  redis:
    timeout: 500
    host: devserver
    port: 26379
  datasource:
    url: jdbc:h2:mem:testdb;Mode=Oracle;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
redis.cache.enabled: true
application:
  redis:
    cache:
      my-data:
        ttl: 15m
        serializer:
          type: org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer
          handles: com.my.sample.model.MyData
I expect the cache operation to be executed only once even if I use both data modules; currently it is executed twice.
Any guidance would be appreciated.
I have developed a microservice in Spring Boot and it is deployed in Cloud Foundry. MongoDB is a service created in PCF and it is a replica-set type service. The MongoDB service is bound to the microservice in PCF. I am using Spring Cloud Connectors to automatically fetch the connection string for the MongoDB service when deployed in the cloud, using the following code.
@Configuration
@Profile("cloud")
public class CloudFoundryDatabaseConfig extends AbstractCloudConfig {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    @Bean
    public MongoDbFactory mongoFactory() {
        return connectionFactory().mongoDbFactory();
    }
}
This code works perfectly fine when the MongoDB service is a standalone type. However, if it is a replica set, I get an UnknownHostException. Since the MongoDB URI contains comma-separated host names, they don't seem to be resolved.
An example of the MongoDB URI is below.
"mongodb://username:password@101.23.65.41:28000,101.23.65.43:28000,101.23.65.45:28000/default?authSource=admin"
Error:
com.mongodb.MongoSocketException: mongod-node-0-310d0fd1.mongodb.internal: Name or service not known}, caused by {java.net.UnknownHostException
Pom.xml:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-cloudfoundry-connector</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-spring-service-connector</artifactId>
</dependency>
Please help.
For the past 6 months, I have been the dev for our solution written on top of kafka-0.8.1.1. It has been stable for us. We thought we would upgrade to kafka-0.9.0.1.
With the server upgrade, we did not face any issues.
We have our own solution built to extract the messages and write them to different destinations; the messages are also read by Storm. For our unit tests we were using the following Maven artifact:
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.9.2</artifactId>
<version>0.8.1.1</version>
I could not find a 0.9.0.1 version for kafka_2.9.2, hence I moved to kafka_2.11 first. This is the artifact used:
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.9.0.1</version>
I was running into the following issues:
scala.ScalaObject not found issue
java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
KafkaConfig<init> issue with NoSuchMethodError (Ljava/util/Map;)Ljava/util/Map;
Also, most of the time I would run into a KafkaServerStartable hang issue (both with kafka_2.10-0.9.0.1 and kafka_2.11-0.9.0.1). But with the same unit tests, I never ran into a Kafka server hang with kafka_2.9.2.
Could you please help me with my problem?
Am I missing anything?
This is not an answer; just following up on my question.
This is the Kafka config that the existing code uses to start the test server (shown after the dependencies).
Dependencies that I tried:
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId> <!-- Scala 2.11.4 and 2.11.7 used alternately to verify -->
<version>0.9.0.1</version>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId> <!-- Scala 2.10.4 is used here -->
<version>0.9.0.1</version>
public KafkaTestServer(int port, ZookeeperTestServer zkServer, String brokerId, int defaultPartitionCount) throws Exception {
    this.zkServer = zkServer;
    // start an embedded Kafka broker against the test ZooKeeper instance
    KafkaConfig config = getKafkaConfig(zkServer.getConnectString(), port, brokerId, defaultPartitionCount);
    kafkaServer = new KafkaServerStartable(config);
    kafkaServer.startup();
    // old (Scala) producer API carried over from the 0.8.x client
    ProducerConfig conf = new ProducerConfig(getProducerConfig(getKafkaBrokerString()));
    producer = new Producer<>(conf);
}

private KafkaConfig getKafkaConfig(String zkConnectString, int port, String brokerId, int defaultPartitionCount) {
    Properties props = new Properties();
    props.setProperty("zookeeper.connect", zkConnectString);
    props.setProperty("broker.id", brokerId);
    props.setProperty("port", Integer.toString(port));
    createKafkaDataDirectory();
    props.setProperty("log.dirs", dataDirectory.getAbsolutePath());
    props.setProperty("num.partitions", Integer.toString(defaultPartitionCount));
    props.setProperty("retry.backoff.ms", "500");
    return new KafkaConfig(props, false);
}