eureka property to display datacenter and environment in the Eureka dashboard - netflix-eureka

I am working with the latest 2020.0.3 version of the Spring Cloud Eureka server. Which property in the application configuration will display the datacenter and environment in the Eureka dashboard? This is not clear from the documentation.

If you are working in AWS, you'll need this bean:
@Bean
@Primary
@Profile("aws")
public EurekaInstanceConfigBean eurekaInstanceConfig(InetUtils inetUtils) {
    EurekaInstanceConfigBean config = new EurekaInstanceConfigBean(inetUtils);
    AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
    config.setDataCenterInfo(info);
    return config;
}
More info: https://cloud.spring.io/spring-cloud-netflix/multi/multi__service_discovery_eureka_clients.html#_using_eureka_on_aws
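As for the original question about the dashboard values themselves: as far as I know, the Eureka dashboard reads the "Environment" and "Data center" fields from the eureka.environment and eureka.datacenter properties (defaulting to "test" and "default"), so setting them in the server's application configuration should be enough. A minimal sketch with placeholder values, worth verifying against your Spring Cloud release:
eureka.datacenter=primary-dc
eureka.environment=production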

Related

Spring Cloud Data Flow - Partition Batch Job using Spring Cloud Kubernetes Deployer : Environment properties not getting passed to worker pods

I modified the dataflow sample app partitioned-batch-job to deploy it in a Kubernetes cluster via the SCDF server running in the cluster, and used the dashboard UI to launch the app as a task. I modified the code for the partitionHandler() (DeployerPartitionHandler bean) as shown below. The issue is that the worker pods are launched without the Spring datasource properties from the master pod's environment. I confirmed that the master step's environment has the right values for these properties and that they are being set on the DeployerPartitionHandler bean as below:
partitionHandler
    .setEnvironmentVariablesProvider(new SimpleEnvironmentVariablesProvider(this.environment));
I have worked around it for now by passing these as command-line arguments, as shown in the code below. I would appreciate any input on why the environment properties are not available in the worker pods.
I also confirmed that the "spring cloud deployer local" flavor of this app runs on my local machine without this problem via "java -jar target/<app.jar>".
@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer,
        TaskRepository taskRepository) throws Exception {
    DockerResourceLoader dockerResourceLoader = new DockerResourceLoader();
    Resource resource = dockerResourceLoader.getResource(config.getDockerResourceLocation());
    logger.info("Docker Resource URI: " + config.getDockerResourceLocation());
    DeployerPartitionHandler partitionHandler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, "workerStep", taskRepository);
    List<String> commandLineArgs = new ArrayList<>(8);
    commandLineArgs.add("--spring.profiles.active=worker");
    commandLineArgs.add("--spring.cloud.task.initialize-enabled=false");
    commandLineArgs.add("--spring.batch.initializer.enabled=false");
    // Passing these properties on the command line because the worker tasks are not getting them from the environment
    commandLineArgs.add("--spring.datasource.url=jdbc:mysql://10.141.22.143:3306/scdf?useSSL=false");
    commandLineArgs.add("--spring.datasource.username=dbuser");
    commandLineArgs.add("--spring.datasource.password=dbpassword");
    commandLineArgs.add("--spring.datasource.driverClassName=org.mariadb.jdbc.Driver");
    partitionHandler
            .setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
    partitionHandler
            .setEnvironmentVariablesProvider(new SimpleEnvironmentVariablesProvider(this.environment));
    partitionHandler.setMaxWorkers(2);
    partitionHandler.setApplicationName("PartitionedBatchJobTask");
    return partitionHandler;
}
Included below is the worker pod configuration YAML, which shows that SCDF is already passing the datasource properties on the command line, yet they are still not being used (the datasource defaults to embedded H2 rather than the MySQL instance the URL property points to). The properties are bunched together (as part of the sun.java.command property), if that matters.
- --sun.boot.library.path=/opt/openjdk/lib/amd64
- --KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
- --sun.java.command=io.spring.PartitionedBatchJobApplication --management.metrics.tags.service=task-application
--spring.datasource.username=dbuser --spring.datasource.url=jdbc:mysql://10.141.22.143:3306/scdf?useSSL=false
--spring.datasource.driverClassName=org.mariadb.jdbc.Driver --management.metrics.tags.application=partitioned-batch-job-89
--docker-resource-location=docker:vrajkuma/partitioned-batch-job:2.3.1-SNAPSHOT
--spring.cloud.task.name=partitioned-batch-job --spring.datasource.password=dbpassword
--spring.cloud.task.executionid=89
- --sun.cpu.endian=little
- --MY_SCDF_SPRING_CLOUD_DATAFLOW_SKIPPER_SERVICE_PORT_HTTP=80
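One direction worth trying (not from the original post; a sketch against the spring-cloud-task EnvironmentVariablesProvider interface, with a hypothetical class name and the same placeholder values used above) is to hand the workers only the variables they need, instead of relying on the whole master environment being copied:
import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.cloud.task.batch.partition.EnvironmentVariablesProvider;
import org.springframework.core.env.Environment;

// Hypothetical provider: copies only the datasource settings from the master's
// environment into the worker pods (Spring Boot's relaxed binding maps
// SPRING_DATASOURCE_URL back to spring.datasource.url on the worker side).
public class DataSourceEnvironmentVariablesProvider implements EnvironmentVariablesProvider {

    private final Environment environment;

    public DataSourceEnvironmentVariablesProvider(Environment environment) {
        this.environment = environment;
    }

    @Override
    public Map<String, String> getEnvironmentVariables(ExecutionContext executionContext) {
        Map<String, String> vars = new HashMap<>();
        vars.put("SPRING_DATASOURCE_URL", environment.getProperty("spring.datasource.url"));
        vars.put("SPRING_DATASOURCE_USERNAME", environment.getProperty("spring.datasource.username"));
        vars.put("SPRING_DATASOURCE_PASSWORD", environment.getProperty("spring.datasource.password"));
        vars.put("SPRING_DATASOURCE_DRIVERCLASSNAME", environment.getProperty("spring.datasource.driverClassName"));
        return vars;
    }
}
It would then replace the SimpleEnvironmentVariablesProvider in the partitionHandler bean: partitionHandler.setEnvironmentVariablesProvider(new DataSourceEnvironmentVariablesProvider(this.environment));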

Redis Spring data with Lettuce: com.lambdaworks.redis.RedisCommandExecutionException: MOVED error

I'm using AWS ElastiCache (Redis) in cluster mode. I have two implementations to connect to ElastiCache: one uses the native Lettuce driver directly, and the other uses Spring Data with Lettuce as the underlying driver. AWS ElastiCache provides a cluster configuration endpoint, and I want to use this endpoint to connect. I can connect successfully using the cluster endpoint with the native Lettuce implementation, but I get the error below when using Spring Data with the cluster endpoint.
Spring Data: 1.8.9-RELEASE (using a higher version of Spring is not an option)
Lettuce: 4.5.0-FINAL
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory();
    lettuceConnectionFactory.setHostName(<cluster_endpoint>);
    lettuceConnectionFactory.setPort(port);
    lettuceConnectionFactory.setUseSsl(Boolean.valueOf(useSsl));
    //lettuceConnectionFactory.setPassword(password);
    return lettuceConnectionFactory;
}
Error:
Caused by: org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is com.lambdaworks.redis.RedisCommandExecutionException: MOVED 12894 cache---.usw2.cache.amazonaws.com:6379
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:50)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:48)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41)
It works fine in the two scenarios below:
1) I use the cluster node endpoints and inject them into LettuceConnectionFactory via RedisClusterConfiguration.
2) I use Lettuce directly (not through Spring Data) via StatefulRedisClusterConnection.
What could be the reason for the error above? I would prefer to use Lettuce with Spring Data via the cluster configuration endpoint.
Below is the solution for connecting to an ElastiCache Redis cluster using Spring Data.
Using the ElastiCache cluster configuration endpoint:
@Bean
public RedisClusterConfiguration redisClusterConfiguration() {
    // "host" is the ElastiCache cluster configuration endpoint
    RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration();
    clusterConfiguration.clusterNode("host", port);
    return clusterConfiguration;
}

@Bean
public LettuceConnectionFactory redisConnectionFactory(RedisClusterConfiguration clusterConfiguration) {
    return new LettuceConnectionFactory(clusterConfiguration);
}
Using the ElastiCache node endpoints:
@Bean
public RedisClusterConfiguration redisClusterConfiguration() {
    RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration()
            .clusterNode("redis-cluster----0001-001.redis-cluster---.usw2.cache.amazonaws.com", 6379)
            .clusterNode("redis-cluster----0001-002.redis-cluster---.usw2.cache.amazonaws.com", 6379)
            .clusterNode("redis-cluster----0001-003.redis-cluster---.usw2.cache.amazonaws.com", 6379)
            .clusterNode("redis-cluster----0001-004.redis-cluster---.usw2.cache.amazonaws.com", 6379);
    return clusterConfiguration;
}
Thanks to Mark Paluch who responded to my issue in Spring Data forum.
Here is the detail - https://jira.spring.io/browse/DATAREDIS-898
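For completeness, here is a minimal usage sketch (standard Spring Data Redis wiring, not part of the original answer) showing the cluster-aware connection factory being consumed by a template:
@Bean
public StringRedisTemplate redisTemplate(LettuceConnectionFactory redisConnectionFactory) {
    // The template inherits the cluster-aware (MOVED-following) connections from the factory above
    return new StringRedisTemplate(redisConnectionFactory);
}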

Discovery / registration only works for K8s pods already running when Spring Boot Admin starts.

Thanks for Spring Boot Admin!
I am using it with Spring Cloud Kubernetes; our k8s pods only get discovered if Spring Boot Admin is started after the service pods have been started.
Looking at InstanceDiscoveryListener, it seems the discovery of clients happens based on events, such as ApplicationReadyEvent (at startup) and InstanceRegisteredEvent.
Is it correct to say that Spring Boot Admin will not try to discover periodically? If so, how do I make sure an event is fired from the application so that Spring Boot Admin picks it up and registers the instance?
In particular, I want to make sure that instances are registered even when they are started after Spring Boot Admin was started. (The order in which k8s pods are started is arbitrary and hard to control, and something we generally don't want to control.)
Thank you!
Christophew
Version:
springBootAdminVersion = '2.0.1'
springCloudVersion = 'Finchley.RELEASE'
springCloudK8s = '0.3.0.RELEASE'
Not sure if this is the best way to solve it, but it seems to work:
class TimedInstanceDiscoveryListener extends InstanceDiscoveryListener {

    private static final Logger log = LoggerFactory.getLogger(TimedInstanceDiscoveryListener.class);

    public TimedInstanceDiscoveryListener(DiscoveryClient discoveryClient, InstanceRegistry registry, InstanceRepository repository) {
        super(discoveryClient, registry, repository);
        log.info("Starting custom TimedInstanceDiscoveryListener");
    }

    // Re-run discovery every 5 seconds (requires scheduling to be enabled, e.g. via @EnableScheduling)
    @Scheduled(fixedRate = 5000)
    public void periodicDiscovery() {
        log.info("Discovering new pod / services");
        super.discover();
    }
}
@Bean
@ConfigurationProperties(prefix = "spring.boot.admin.discovery")
public InstanceDiscoveryListener instanceDiscoveryListener(ServiceInstanceConverter serviceInstanceConverter,
        DiscoveryClient discoveryClient,
        InstanceRegistry registry,
        InstanceRepository repository) {
    InstanceDiscoveryListener listener = new TimedInstanceDiscoveryListener(discoveryClient, registry, repository);
    listener.setConverter(serviceInstanceConverter);
    return listener;
}
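One thing to watch: the @Scheduled poll above only runs if scheduling is enabled somewhere in the application, for example via a configuration class like this (standard Spring, not part of the original answer):
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

// Enables Spring's scheduler infrastructure so @Scheduled methods,
// including periodicDiscovery() above, actually get invoked.
@Configuration
@EnableScheduling
public class SchedulingConfig {
}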

How to redirect Spring Cloud Task logs into the Spring Cloud Task sink application?

I have a Spring Cloud Task sink application that triggers a Spring Cloud Task.
@SpringBootApplication
@EnableBinding(Sink.class)
@RestController
@EnableScheduling
@EnableTaskLauncher
@Slf4j
public class FileTaskLauncherApp {

    @Autowired
    private Sink sink;

    @Value("${spring.task.artifactory.url}")
    private String uri;

    @Value("${spring.task.name:file_task_launcher}")
    private String taskName;

    @GetMapping("/triggerTask")
    public String publishTask() {
        log.info("Publishing task with task launcher request...");
        Map<String, String> prop = new HashMap<>();
        prop.put("server.port", "0");
        Map<String, String> deployProp = new HashMap<>();
        deployProp.put("deployer.*.local.inheritLogging", "true");
        TaskLaunchRequest request = new TaskLaunchRequest(
                uri, null,
                prop,
                deployProp, taskName);
        GenericMessage<TaskLaunchRequest> message = new GenericMessage<TaskLaunchRequest>(request);
        this.sink.input().send(message);
        return "SUCCESS";
    }
}
The Spring Cloud Task sink launches Spring Cloud Tasks, and each task is a short-lived microservice with its own functionality. I want to redirect the application logs of the launched tasks into the task sink application.
This is my application.properties:
server.port=8084
spring.cloud.stream.kafka.binder.brokers= localhost:2181
spring.cloud.stream.bindings.input.destination=fileTask
spring.task.artifactory.url=maven://com.tgt.fulfillment:file-generation-task:1.0.1-SNAPSHOT
spring.task.name=file_task_launcher
deployer.*.local.inheritLogging=true
These are the logs from the task sink application:
12:40:39.057 [http-nio-8084-exec-1] INFO o.s.c.task.launcher.TaskLauncherSink - Launching Task for the following uri maven://com.test:file-generation-task:1.0.1-SNAPSHOT
12:40:39.140 [http-nio-8084-exec-1] INFO o.s.c.d.spi.local.LocalTaskLauncher - Command to be executed: /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/jre/bin/java -jar /Users/z003c1v/.m2/repository/com/test/file-generation-task/1.0.1-SNAPSHOT/file-generation-task-1.0.1-SNAPSHOT.jar
12:40:39.153 [http-nio-8084-exec-1] INFO o.s.c.d.spi.local.LocalTaskLauncher - launching task file_task_launcher-2c630ad9-acbb-43e0-8140-3ce49506f8e2
Logs will be in /var/folders/y5/hr2vrk411wdg_3xl3_10r295rp30bg/T/file_task_launcher7177051446839079310/1539587439103/file_task_launcher-2c630ad9-acbb-43e0-8140-3ce49506f8e2
As per the Spring documentation below, enabling deployer.*.local.inheritLogging=true in the deployment properties should redirect the application logs to the server logs, but this is not happening.
Reference: http://docs.spring.io/spring-cloud-dataflow/docs/1.4.0.RELEASE/reference/htmlsingle/#_logging
Could somebody please help me resolve this issue?
Can you share the stream definition that contains the task launcher sink?
The inheritLogging property is a local deployer property, so it should be specified when deploying the stream, not as the app-level property you mentioned above. Something like:
stream deploy --name mystream --properties "deployer.*.local.inheritLogging=true"

Annotation @RibbonClient does not work together with RestTemplate

I am trying out Ribbon configuration with RestTemplate based on the bookmark service example, but without luck. Here is my code:
@SpringBootApplication
@RestController
@RibbonClient(name = "foo", configuration = SampleRibbonConfiguration.class)
public class BookmarkServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(BookmarkServiceApplication.class, args);
    }

    @Autowired
    RestTemplate restTemplate;

    @RequestMapping("/hello")
    public String hello() {
        String greeting = this.restTemplate.getForObject("http://foo/hello", String.class);
        return String.format("%s!", greeting);
    }
}
with error page as below:
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Tue Mar 22 19:59:33 GMT+08:00 2016
There was an unexpected error (type=Internal Server Error, status=500).
No instances available for foo
but if I remove the @RibbonClient annotation, everything is fine:
@RibbonClient(name = "foo", configuration = SampleRibbonConfiguration.class)
and here is the SampleRibbonConfiguration implementation:
public class SampleRibbonConfiguration {

    @Autowired
    IClientConfig ribbonClientConfig;

    @Bean
    public IPing ribbonPing(IClientConfig config) {
        return new PingUrl();
    }

    @Bean
    public IRule ribbonRule(IClientConfig config) {
        return new AvailabilityFilteringRule();
    }
}
Is it because @RibbonClient cannot work together with RestTemplate?
A second question: can Ribbon settings such as the load-balancing rule be configured via the application.yml configuration file? The Ribbon wiki suggests that parameters like NFLoadBalancerClassName and NFLoadBalancerRuleClassName can be configured in a property file; does Spring Cloud also support this?
I'm going to assume you're using Eureka for Service Discovery.
Your particular error:
No instances available for foo
can happen for a couple of reasons
1.) All services are down
All of the instances of your foo service could legitimately be DOWN.
Solution: Try visiting your Eureka Dashboard and ensure all the services are actually UP.
If you're running locally, the Eureka Dashboard is at http://localhost:8761/
2.) Waiting for heartbeats
When you very first register a service via Eureka, there's a period of time where the service is UP but not available. From the documentation:
"A service is not available for discovery by clients until the instance, the server and the client all have the same metadata in their local cache (so it could take 3 heartbeats)"
Solution: Wait a good 30 seconds after starting your foo service before you try calling it via your client.
In your particular case I'm going to guess #2 is likely what's happening to you. You're probably starting the service and trying to call it immediately from the client.
When it doesn't work, you stop the client, make some changes and restart. By that time though, all of the heartbeats have completed and your service is now available.
For your second question, look at the "Customizing the Ribbon Client using properties" section in the reference documentation. (link)
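For illustration, a hedged sketch of what that property-based configuration can look like in application.yml, using the client name foo from the question and a rule class from the Ribbon wiki; verify the exact keys against your Spring Cloud version:
foo:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.WeightedResponseTimeRule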