Redis Spring Data with Lettuce: com.lambdaworks.redis.RedisCommandExecutionException: MOVED error

I'm using AWS ElastiCache (Redis) in cluster mode. I have two implementations for connecting to ElastiCache: one uses the native Lettuce driver directly, and the other uses Spring Data with Lettuce as the underlying driver. AWS ElastiCache provides a cluster configuration endpoint, which I want to use to connect. I can connect successfully with the cluster endpoint using the native Lettuce implementation, but I get the error below when using Spring Data with the cluster endpoint.
Spring data: 1.8.9-RELEASE (Using a higher version of Spring is not an option)
Lettuce: 4.5.0-FINAL
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory();
    lettuceConnectionFactory.setHostName("<cluster_endpoint>");
    lettuceConnectionFactory.setPort(port);
    lettuceConnectionFactory.setUseSsl(Boolean.valueOf(useSsl));
    //lettuceConnectionFactory.setPassword(password);
    return lettuceConnectionFactory;
}
Error:
Caused by: org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is com.lambdaworks.redis.RedisCommandExecutionException: MOVED 12894 cache---.usw2.cache.amazonaws.com:6379
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:50)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:48)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41)
It works fine in the following two scenarios:
1) I use the cluster node endpoints and inject them into LettuceConnectionFactory via RedisClusterConfiguration.
2) I use Lettuce directly (not through Spring Data) with a StatefulRedisClusterConnection.
What could be the reason for the error above? I would prefer to use Lettuce with Spring Data using the cluster configuration endpoint.
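For reference, the working native-Lettuce path (scenario 2) typically looks like the sketch below with the Lettuce 4.x API; the endpoint, key, and value are placeholders:
import com.lambdaworks.redis.RedisURI;
import com.lambdaworks.redis.cluster.RedisClusterClient;
import com.lambdaworks.redis.cluster.api.StatefulRedisClusterConnection;

public class NativeLettuceClusterExample {
    public static void main(String[] args) {
        // The cluster client discovers the topology from the configuration endpoint
        // and follows MOVED redirects on its own.
        RedisClusterClient clusterClient = RedisClusterClient.create(
                RedisURI.Builder.redis("<cluster_endpoint>", 6379).withSsl(true).build());
        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        connection.sync().set("key", "value");
        connection.close();
        clusterClient.shutdown();
    }
}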

Below is the solution for connecting to an ElastiCache Redis cluster using Spring Data.
Using ElastiCache Cluster Configuration Endpoint:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration();
    clusterConfiguration.clusterNode("host", port); // the cluster configuration endpoint
    return new LettuceConnectionFactory(clusterConfiguration);
}
Using ElastiCache Node endpoints:
@Bean
public RedisClusterConfiguration redisClusterConfiguration() {
    RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration()
            .clusterNode("redis-cluster----0001-001.redis-cluster---.usw2.cache.amazonaws.com", 6379)
            .clusterNode("redis-cluster----0001-002.redis-cluster---.usw2.cache.amazonaws.com", 6379)
            .clusterNode("redis-cluster----0001-003.redis-cluster---.usw2.cache.amazonaws.com", 6379)
            .clusterNode("redis-cluster----0001-004.redis-cluster---.usw2.cache.amazonaws.com", 6379);
    return clusterConfiguration;
}
Thanks to Mark Paluch, who responded to my issue in the Spring Data issue tracker.
Here are the details: https://jira.spring.io/browse/DATAREDIS-898
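For completeness, here is a minimal sketch (not part of the original answer) of consuming the resulting LettuceConnectionFactory through a RedisTemplate; the configuration class name and serializer choices are illustrative.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisTemplateConfiguration {

    // Wires the cluster-aware connection factory from the bean above into a RedisTemplate.
    @Bean
    public RedisTemplate<String, String> redisTemplate(LettuceConnectionFactory connectionFactory) {
        RedisTemplate<String, String> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        return template;
    }
}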

Related

Spring Cloud Data Flow - Partition Batch Job using Spring Cloud Kubernetes Deployer: Environment properties not getting passed to worker pods

I modified the dataflow sample app partitioned-batch-job to deploy it in a Kubernetes cluster via the SCDF server running in that cluster, and launched it as a task from the dashboard UI. I modified the code for the partitionHandler() (DeployerPartitionHandler bean) as shown below. The issue I'm facing is that the worker pods are launched without the Spring datasource properties from the master pod's environment. I confirmed that the master step's environment has the right values for these properties and that it is set on the DeployerPartitionHandler bean as below:
partitionHandler
.setEnvironmentVariablesProvider(new SimpleEnvironmentVariablesProvider(this.environment));
I worked around it for now by passing these as command-line arguments, as shown in the code below. I'd appreciate any input on why the environment properties are not available in the worker pods.
I also confirmed that the "spring cloud deployer local" flavor of this app runs on my local machine without this problem via "java -jar target/<app.jar>".
@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer,
        TaskRepository taskRepository) throws Exception {
    DockerResourceLoader dockerResourceLoader = new DockerResourceLoader();
    Resource resource = dockerResourceLoader.getResource(config.getDockerResourceLocation());
    logger.info("Docker Resource URI: " + config.getDockerResourceLocation());
    DeployerPartitionHandler partitionHandler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, "workerStep", taskRepository);
    List<String> commandLineArgs = new ArrayList<>(8);
    commandLineArgs.add("--spring.profiles.active=worker");
    commandLineArgs.add("--spring.cloud.task.initialize-enabled=false");
    commandLineArgs.add("--spring.batch.initializer.enabled=false");
    // Passing these properties in command line as worker tasks are not getting them from the environment
    commandLineArgs.add("--spring.datasource.url=jdbc:mysql://10.141.22.143:3306/scdf?useSSL=false");
    commandLineArgs.add("--spring.datasource.username=dbuser");
    commandLineArgs.add("--spring.datasource.password=dbpassword");
    commandLineArgs.add("--spring.datasource.driverClassName=org.mariadb.jdbc.Driver");
    partitionHandler
            .setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
    partitionHandler
            .setEnvironmentVariablesProvider(new SimpleEnvironmentVariablesProvider(this.environment));
    partitionHandler.setMaxWorkers(2);
    partitionHandler.setApplicationName("PartitionedBatchJobTask");
    return partitionHandler;
}
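As an aside (not part of the original question), the hard-coded datasource literals above could be copied from the master's Environment at runtime instead; a sketch of a helper for the same configuration class, assuming the same property keys are present in the master environment:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.springframework.core.env.Environment;

// Builds the worker command-line args from the master's Environment
// rather than hard-coding credentials in the bean above.
private List<String> datasourceArgs(Environment environment) {
    List<String> args = new ArrayList<>();
    for (String key : Arrays.asList(
            "spring.datasource.url",
            "spring.datasource.username",
            "spring.datasource.password",
            "spring.datasource.driverClassName")) {
        String value = environment.getProperty(key);
        if (value != null) {
            args.add("--" + key + "=" + value);
        }
    }
    return args;
}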
Included below is the worker pod configuration YAML, which shows SCDF is already passing the datasource properties on the command line, but they are still not being used (the datasource defaults to embedded H2 rather than the MySQL URL the property points at). The properties are bunched up together (as part of the sun.java.command property), if that matters.
- --sun.boot.library.path=/opt/openjdk/lib/amd64
- --KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
- --sun.java.command=io.spring.PartitionedBatchJobApplication --management.metrics.tags.service=task-application
--spring.datasource.username=dbuser --spring.datasource.url=jdbc:mysql://10.141.22.143:3306/scdf?useSSL=false
--spring.datasource.driverClassName=org.mariadb.jdbc.Driver --management.metrics.tags.application=partitioned-batch-job-89
--docker-resource-location=docker:vrajkuma/partitioned-batch-job:2.3.1-SNAPSHOT
--spring.cloud.task.name=partitioned-batch-job --spring.datasource.password=dbpassword
--spring.cloud.task.executionid=89
- --sun.cpu.endian=little
- --MY_SCDF_SPRING_CLOUD_DATAFLOW_SKIPPER_SERVICE_PORT_HTTP=80

Eureka property to display datacenter and environment in the Eureka dashboard

I am working with the latest 2020.0.3 version of Spring Cloud Eureka server. Which property in the application configuration will display the datacenter and environment in the Eureka dashboard? It is not clear in the documentation.
If you are working in AWS, you'll need this bean.
@Bean
@Primary
@Profile("aws")
public EurekaInstanceConfigBean eurekaInstanceConfig(InetUtils inetUtils) {
    EurekaInstanceConfigBean config = new EurekaInstanceConfigBean(inetUtils);
    AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
    config.setDataCenterInfo(info);
    return config;
}
more info -> https://cloud.spring.io/spring-cloud-netflix/multi/multi__service_discovery_eureka_clients.html#_using_eureka_on_aws
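Outside of AWS, the "Environment" and "Data center" fields on the dashboard come from Eureka's deployment context, which the Netflix documentation populates via the eureka.environment and eureka.datacenter system properties; whether plain Spring application properties are bridged through to that context depends on your version, so treat the following as a sketch to verify rather than a definitive answer (the values and jar name are examples):
java -Deureka.environment=production -Deureka.datacenter=my-datacenter -jar eureka-server.jar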

Spring Cloud Dataflow errorChannel not working

I'm attempting to create a custom exception handler for my Spring Cloud Dataflow stream to route some errors to be requeued and others to be DLQ'd.
To do this I'm utilizing the global Spring Integration "errorChannel" and routing based on exception type.
This is the code for the Spring Integration error router:
package com.acme.error.router;

import com.acme.exceptions.DlqException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.Router;
import org.springframework.integration.transformer.MessageTransformationException;
import org.springframework.messaging.Message;

@MessageEndpoint
@EnableBinding({ ErrorMessageChannels.class })
public class ErrorMessageMappingRouter {

    private static final Logger LOGGER = LoggerFactory.getLogger(ErrorMessageMappingRouter.class);

    public static final String ERROR_CHANNEL = "errorChannel";

    @Router(inputChannel = ERROR_CHANNEL)
    public String onError(Message<Object> message) {
        LOGGER.debug("ERROR ROUTER - onError");
        if (message.getPayload() instanceof MessageTransformationException) {
            MessageTransformationException exception = (MessageTransformationException) message.getPayload();
            Message<?> failedMessage = exception.getFailedMessage();
            if (exceptionChainContainsDlq(exception)) {
                return ErrorMessageChannels.DLQ_QUEUE_NAME;
            }
            return ErrorMessageChannels.REQUEUE_CHANNEL;
        }
        return ErrorMessageChannels.DLQ_QUEUE_NAME;
    }
    ...
}
The error router is picked up by each of the stream apps through a component scan on each app's Spring Boot application class:
@ComponentScan(basePackages = { "com.acme.error.router" })
@SpringBootApplication
public class StreamApp {}
When this is deployed and run with the local Spring Cloud Dataflow server (version 1.5.0-RELEASE), and a DlqException is thrown, the message is successfully routed to the onError method in the errorRouter and then placed into the dlq topic.
However, when this is deployed as a docker container with SCDF Kubernetes server (also version 1.5.0-RELEASE), the onError method is never hit. (The log statement at the beginning of the router is never output)
In the startup logs for the stream apps, it looks like the bean is picked up correctly and registers as a listener for the errorChannel, but for some reason, when exceptions are thrown they do not get handled by the onError method in our router.
Startup Logs:
o.s.i.endpoint.EventDrivenConsumer : Adding {router:errorMessageMappingRouter.onError.router} as a subscriber to the 'errorChannel' channel
o.s.i.channel.PublishSubscribeChannel : Channel 'errorChannel' has 1 subscriber(s).
o.s.i.endpoint.EventDrivenConsumer : started errorMessageMappingRouter.onError.router
We are using all default settings for the Spring Cloud Stream and Kafka binder configurations:
spring.cloud:
  stream:
    binders:
      kafka:
        type: kafka
        environment.spring.cloud.stream.kafka.binder.brokers: brokerlist
        environment.spring.cloud.stream.kafka.binder.zkNodes: zklist
Edit: Added pod args from kubectl describe <pod>
Args:
--spring.cloud.stream.bindings.input.group=delivery-stream
--spring.cloud.stream.bindings.output.producer.requiredGroups=delivery-stream
--spring.cloud.stream.bindings.output.destination=delivery-stream.enricher
--spring.cloud.stream.binders.xdkafka.environment.spring.cloud.stream.kafka.binder.zkNodes=<zkNodes>
--spring.cloud.stream.binders.xdkafka.type=kafka
--spring.cloud.stream.binders.xdkafka.defaultCandidate=true
--spring.cloud.stream.binders.xdkafka.environment.spring.cloud.stream.kafka.binder.brokers=<brokers>
--spring.cloud.stream.bindings.input.destination=delivery-stream.config-enricher
One other idea we attempted was to use the Spring Cloud Stream / Spring Integration error-channel support to send errors to a broker topic, but since messages don't seem to be landing in the global Spring Integration errorChannel at all, that didn't work either.
Is there anything special we need to do in SCDF Kubernetes to enable the global Spring Integration errorChannel?
What am I missing here?
Update with the solution from the comments:
After reviewing your configuration I am now pretty sure I know what the issue is. You have a multi-binder configuration scenario. Even if you only deal with a single binder instance, the existence of spring.cloud.stream.binders.... is what's going to make the framework treat it as multi-binder. Basically this is a bug - github.com/spring-cloud/spring-cloud-stream/issues/1384. As you can see it was fixed, but you need to upgrade to Elmhurst.SR2 or grab the latest snapshot (we're in RC2 and 2.1.0.RELEASE is in a few weeks anyway) – Oleg Zhurakousky
This was indeed the problem with our setup. Instead of upgrading, we just eliminated our multi-binder usage for now and the issue was resolved.
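For illustration, eliminating the multi-binder usage amounts to configuring the single Kafka binder directly instead of under spring.cloud.stream.binders.*; a sketch with placeholder broker and ZooKeeper values:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: <brokers>
          zkNodes: <zkNodes>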

Discovery / Registration only works for K8s pods already running when Spring Boot Admin starts.

Thanks for Spring Boot Admin!
I am using it with Spring Cloud Kubernetes; our k8s pods only get discovered when Spring Boot Admin is started after the service pods have already been started.
Looking at InstanceDiscoveryListener, it seems the discovery of clients happens based on events, such as ApplicationReadyEvent (on startup) and InstanceRegisteredEvent.
Is it correct to say that Spring Boot Admin will not try to discover periodically? If so, how do I make sure an event is fired from the application so that Spring Boot Admin picks it up and registers the instance?
In particular, I want to make sure that instances are registered when they are started after Spring Boot Admin was started (the order in which k8s pods are started is arbitrary and hard to control, and something we generally don't want to control).
Thank You!
Christophew
Version:
springBootAdminVersion = '2.0.1'
springCloudVersion = 'Finchley.RELEASE'
springCloudK8s = '0.3.0.RELEASE'
Not sure if this is the best way to solve it but seems to work:
class TimedInstanceDiscoveryListener extends InstanceDiscoveryListener {

    private static final Logger log = LoggerFactory.getLogger(TimedInstanceDiscoveryListener.class);

    public TimedInstanceDiscoveryListener(DiscoveryClient discoveryClient, InstanceRegistry registry, InstanceRepository repository) {
        super(discoveryClient, registry, repository);
        log.info("Starting custom TimedInstanceDiscoveryListener");
    }

    @Scheduled(fixedRate = 5000)
    public void periodicDiscovery() {
        log.info("Discovering new pod / services");
        super.discover();
    }
}
@Bean
@ConfigurationProperties(prefix = "spring.boot.admin.discovery")
public InstanceDiscoveryListener instanceDiscoveryListener(ServiceInstanceConverter serviceInstanceConverter,
        DiscoveryClient discoveryClient,
        InstanceRegistry registry,
        InstanceRepository repository) {
    InstanceDiscoveryListener listener = new TimedInstanceDiscoveryListener(discoveryClient, registry, repository);
    listener.setConverter(serviceInstanceConverter);
    return listener;
}
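One thing to double-check with this approach: @Scheduled methods only run if scheduling is enabled somewhere in the context, for example with a small configuration class like the sketch below (the class name is illustrative).
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

// Enables processing of @Scheduled annotations such as the one on periodicDiscovery() above.
@Configuration
@EnableScheduling
public class SchedulingConfiguration {
}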

Container managed MongoDB Connection in Liberty + Spring Data

We have developed an application with Spring Boot + Spring Data (backend) + MongoDB, and we use IBM WebSphere Liberty as the application server. We used an "Application Managed DB Connection" in a yml file and enjoyed the benefit of Spring Boot autoconfiguration.
Due to policy changes, we now need to manage our DB connection in the Liberty server (using the mongo feature), in server.xml. I spent a whole day looking for a good example of this, but could not find any example of Spring with a "Container Managed MongoDB Connection" in IBM WebSphere Liberty Server.
Can someone please help here?
Check out this other stackoverflow solution. The following is an extension of how you would use that in your Spring Boot app.
You should be able to inject your datasource the same way. You could even inject it into your configuration and wrap it in a Spring DelegatingDataSource.
@Configuration
public class DataSourceConfiguration {

    // This is the last code section from that link above
    @Resource(lookup = "jdbc/oracle")
    DataSource ds;

    @Bean
    public DataSource mySpringManagedDS() {
        return new DelegatingDataSource(ds);
    }
}
Then you should be able to inject the mySpringManagedDS DataSource into your Component, Service, etc.
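For example, a minimal usage sketch (the service class, qualifier, and query are illustrative, not from the linked answer):
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderRepository {

    private final JdbcTemplate jdbcTemplate;

    // Injects the Spring-managed wrapper bean (mySpringManagedDS) defined above.
    public OrderRepository(@Qualifier("mySpringManagedDS") DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int countOrders() {
        // Example query against a hypothetical table.
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM orders", Integer.class);
    }
}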
In the past, Liberty had a dedicated mongodb-2.0 feature for the server.xml; however, this feature provided pretty minimal benefit, since you still needed to bring your own MongoDB libraries. Also, over time MongoDB made significant breaking changes to their API, including how MongoDB gets configured.
Since the MongoDB API is changing so drastically between releases, we found it better to not provide any new MongoDB features in Liberty and instead suggest that users simply use a CDI producer like this:
CDI producer (holds any configuration too):
@ApplicationScoped
public class MongoProducer {

    @Produces
    public MongoClient createMongo() {
        return new MongoClient(new ServerAddress(), new MongoClientOptions.Builder().build());
    }

    @Produces
    public MongoDatabase createDB(MongoClient client) {
        return client.getDatabase("testdb");
    }

    public void close(@Disposes MongoClient toClose) {
        toClose.close();
    }
}
Example usage:
@Inject
MongoDatabase db;

@POST
@Path("/add")
@Consumes(MediaType.APPLICATION_JSON)
public void add(CrewMember crewMember) {
    MongoCollection<Document> crew = db.getCollection("Crew");
    Document newCrewMember = new Document();
    newCrewMember.put("Name", crewMember.getName());
    newCrewMember.put("Rank", crewMember.getRank());
    newCrewMember.put("CrewID", crewMember.getCrewID());
    crew.insertOne(newCrewMember);
}
This is just the basics, but the following blog post goes into much greater detail along with code examples:
https://openliberty.io/blog/2019/02/19/mongodb-with-open-liberty.html