Spring Cloud: How to use OpenFeign without load-balancing but with Eureka service discovery? - spring-cloud

I do not have spring-cloud-starter-loadbalancer on the classpath on purpose since I want to explain Feign functionality to students without the additional load-balancing part.
When annotating with
@FeignClient(name = "product")
I get the following exception:
Caused by: java.lang.IllegalStateException: No Feign Client for loadBalancing defined. Did you forget to include spring-cloud-starter-loadbalancer?
at org.springframework.cloud.openfeign.FeignClientFactoryBean.loadBalance(FeignClientFactoryBean.java:382) ~[spring-cloud-openfeign-core-3.1.3.jar:3.1.3]
Then I tried putting the service name into the "url" field:
@FeignClient(name = "product", url = "productservice")
But that just tries to resolve "productservice" as a host name:
java.net.UnknownHostException: productservice
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220) ~[na:na]
I know I could paste the actual URL into that field, but what I want is to have it resolved through Eureka service discovery.
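One workaround I can think of is to look the service up via Spring's DiscoveryClient myself and build the client with the plain Feign builder; a rough, untested sketch (interface, endpoint and bean names are just placeholders):

import java.util.List;

import feign.Feign;
import feign.Param;
import feign.RequestLine;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProductClientConfig {

    // Plain Feign contract (not Spring MVC annotations), since the client is built manually.
    public interface ProductClient {

        @RequestLine("GET /products/{id}")
        String getProduct(@Param("id") Long id);
    }

    @Bean
    ProductClient productClient(DiscoveryClient discoveryClient) {
        // Ask Eureka (through the DiscoveryClient abstraction) for the registered "product" instances.
        List<ServiceInstance> instances = discoveryClient.getInstances("product");
        if (instances.isEmpty()) {
            throw new IllegalStateException("No 'product' instance registered with Eureka");
        }
        // Resolved once at startup; always the first instance, no load balancing by design.
        String baseUrl = instances.get(0).getUri().toString();

        return Feign.builder().target(ProductClient.class, baseUrl);
    }
}

The obvious trade-off is that the URL is resolved once at startup and only the first registered instance is used, which actually matches the "no load balancing" behaviour I want to demonstrate.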
Thanks for any help.
Versions:
Spring Boot 2.7.1
Spring Cloud 2021.0.3

Related

started geode spring boot and save to remote region but failed to start bean gemfireClusterSchemaObjectInitializer

With a simple client app, I make an object and an object repository, connect to a Geode cluster, and then run a @Bean ApplicationRunner to put some data into a remote region.
@ClientCacheApplication(name = "Web", locators = @Locator, logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache {

    private static final Logger log = LoggerFactory.getLogger(MyCache.class);

    @Bean
    ApplicationRunner StartedUp(MyRepository myRepo) {
        log.info("In StartedUp");
        return args -> {
            String guid = UUID.randomUUID().toString().substring(0, 8).toUpperCase();
            MyObject msg = new MyObject(guid, "Started");
            myRepo.save(msg);
            log.info("Out StartedUp");
        };
    }
}
The "save" put fails with
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://localhost:7070/gemfire/v1/regions": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
The answer to "Problem creating region and persist region to disk Geode Gemfire Spring Boot" helped. The problem is the @EnableClusterConfiguration(useHttp = true) annotation.
This annotation makes the remote cluster appear to be on localhost. If I remove it altogether, the put works.
If I remove just the useHttp = true, there is another error:
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)### The function is not registered for function id CreateRegionFunction
In a nutshell, the SDG @EnableClusterConfiguration annotation (details available here) enables configuration metadata defined and declared on the client (i.e. your Spring [Boot] Data GemFire/Geode application) to be pushed from the client side to the cluster of GemFire/Geode servers.
I say "enable" because it depends on the client-side configuration metadata (i.e. Spring bean definitions you have explicitly or implicitly defined/declared). Explicit configuration is configuration you defined with a bean definition (in XML, or JavaConfig with #Bean, etc). Implicit configuration is auto-configuration or using SDG annotations like #EnableEntityDefinedRegions or #EnableCachingDefinedRegions, etc.
By default, the @EnableClusterConfiguration annotation assumes the cluster of GemFire or Geode servers was configured and bootstrapped with Spring, specifically using the SDG annotation configuration model. When the GemFire or Geode servers are configured and bootstrapped with Spring, SDG goes on to register some provided, canned GemFire Functions that the @EnableClusterConfiguration annotation calls (by default, and as a fallback).
NOTE: See the appendix in the SBDG reference documentation on configuring and bootstrapping a GemFire or Geode server, or even a cluster of servers, with Spring. This certainly simplifies local development and debugging as opposed to using Gfsh. You can do all sorts of interesting combinations: Gfsh Locator with Spring servers, Spring [embedded|standalone] Locator with both Gfsh and Spring servers, etc.
Most of the time, users are using Spring on the client and Gfsh to (partially) configure and bootstrap their cluster (of servers). When this is the case, Spring is generally not on the servers' classpath, and the "provided, canned" Functions I referred to above are neither present nor automatically registered. In that case, you must rely on GemFire/Geode's internal Management REST API (something I know a thing or two about ;-) to send the configuration metadata from the client to the server/cluster. This is why the useHttp attribute on the @EnableClusterConfiguration annotation must be set to true.
This is why you saw the Exception...
org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer';
nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on #.#.#.#(Web:9408:loner)###
The function is not registered for function id CreateRegionFunction
The CreateRegionFunction is the canned Function provided by SDG out of the box, but only when Spring is used to both configure and bootstrap the servers in the cluster.
This generally works well for CI/CD environments, and especially our own test infrastructure, since we typically do not have full installations of either Apache Geode or Pivotal GemFire available to test with in those environments. For one, those artifacts must be resolvable from an artifact repository like Maven Central. The Apache Geode (and especially Pivotal GemFire) distributions are not; the JARs are, but the full distros aren't. Anyway...
Hopefully, all of this makes sense up to this point.
I do have a recommendation if I may.
Given your application class definition begins with...
@ClientCacheApplication(name = "Web", locators = @Locator,
    logLevel = "debug", subscriptionEnabled = true)
@EnableClusterDefinedRegions
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class MyCache { ... }
I would highly recommend simply using Spring Boot for Apache Geode (and Pivotal GemFire), i.e. SBDG, in place of SDG directly.
Your application class could then be simplified to:
@SpringBootApplication
@EnableClusterAware
@EnableClusterDefinedRegions
public class MyCache { ... }
You can then externalize some of the hard coded configuration settings using the Spring Boot application.properties file:
spring.application.name=Web
spring.data.gemfire.cache.log-level=debug
spring.data.gemfire.pool.subscription-enabled=true
NOTE: @EnableClusterAware is a much more robust and capable extension of @EnableClusterConfiguration. See additional details here.
Here are a few resources to get you going:
Project Overview
Getting Started Sample Guide
Use Case Driven Guides/Samples
Useful resources in the Appendix TOC.
Detailed information on SBDG provided Auto-configuration.
Detailed information on Declarative Configuration.
Detailed information on Externalized Configuration.
In general, SBDG, which is based on SDG, SSDG and STDG, is the preferred/recommended starting point for all things Spring for Apache Geode and Pivotal GemFire (or now, Pivotal Cloud Cache).
Hope this helps.

java connection refused error with spring boot data geode remote locator

Per my question Apache Geode Web framework, I've checked through various Spring guides (from here) and Spring Data Geode samples (from here) and written a short Spring Data Geode application, but it cannot connect to the remote, Gfsh-started Geode locator. The application class is:
package cm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication.Locator;
import org.springframework.data.gemfire.config.annotation.EnablePdx;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = { CmRequest.class })
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
and in the resources directory application.properties I've set up the remote locator:
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Build and run the application and it discovers the locator (which it returns as the server name)
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : AutoConnectionSource discovered new locators [UAT:10334]
A couple of seconds later it throws the error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : locator UAT:10334 is not running.
and
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_232]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_232]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_232]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_232]
at java.net.Socket.connect(Socket.java:607) ~[na:1.8.0_232]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:958) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:899) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:888) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.getServerVersion(TcpClient.java:290) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:184) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocatorUsingConnection(AutoConnectionSourceImpl.java:209) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocator(AutoConnectionSourceImpl.java:199) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryLocators(AutoConnectionSourceImpl.java:287) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl$UpdateLocatorListTask.run2(AutoConnectionSourceImpl.java:500) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1371) [geode-core-1.9.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_232]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_232]
at org.apache.geode.internal.ScheduledThreadPoolExecutorWithKeepAlive$DelegatingScheduledFuture.run(ScheduledThreadPoolExecutorWithKeepAlive.java:276) [geode-core-1.9.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
After a lot of investigation I thought the issue was that the Spring Data Geode client expects a Spring Boot Geode server, per Connecting GemFire using Spring Boot and Spring Data GemFire, so I downloaded the ListRegionsOnServerFunction jar and deployed it on the Gfsh server (I have not yet restarted the server...), but I get the same result, i.e. the same error condition.
Following Spring-Data-Gemfire - Unable to contact a Locator service. Operation either timed out or Locator does not exist, if I try to change the application.properties from
spring.data.gemfire.pool.locators=1.2.3.4[10334]
to
spring.gemfire.locators=1.2.3.4[10334]
or other variations then the app can't find the remote locator and throws:
[Timer-DEFAULT-3] o.a.g.c.c.i.AutoConnectionSourceImpl : locator localhost/127.0.0.1:10334 is not running.
While writing this question I finally found How to connect a remote-locator in Geode, and I also can't ping the Gfsh server from the Spring app. However, the server bind address is set up properly for remote locator clients, and various other services and a UI using a locally built Geode Native Client for Geode v1.10 can connect. I suspect ping may be disabled across this (semi-internal) network by default. I also disabled the firewall rules for ports 10334, 1099 and 40404 to allow all traffic, but I still get the same error condition.
It turns out, from the repeated INFO messages in the Spring Boot app after the connection refused error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList changing locator list: loc form: LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false] ,loc to: UAT:10334
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList locator list from:[UAT:10334, /1.2.3.4:10334] to: [LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false], LocatorAddress [socketInetAddress=/1.2.3.4:10334, hostname=1.2.3.4, isIpString=true]]
and from then running list clients on the server, that the connection from the Spring Boot app to the Geode v1.10 server is in fact established. Arrrgh!
So the locator logic is working, but that doesn't explain why, after the first connection, there is a java.net.ConnectException: Connection refused: connect error. Any ideas?
One quick note about your Spring Boot application class...
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = { CmRequest.class })
@EnablePdx
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}
The following statements are true iff you are using Spring Boot for Apache Geode (or Pivotal GemFire), which is highly recommended.
When using SBDG (by declaring the org.springframework.geode:spring-geode-starter dependency on your application classpath), you do not need to explicitly declare the @ClientCacheApplication, @EnableGemfireRepositories or @EnablePdx annotations: SBDG auto-configures a ClientCache instance by default, auto-configures Spring Data Repositories (particularly when all entity classes are in the same package or a sub-package of the Spring Boot app), and auto-configures PDX by default as well.
The locators = @Locator attribute just specifies that the "DEFAULT" GemFire/Geode Pool, when configured via the ClientCacheFactory, should connect to the cluster via Locators on localhost, using the default Locator port, 10334. Therefore, this attribute is mostly redundant, and I would recommend the new @EnableClusterAware annotation from SBDG instead (see here).
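Put together, a sketch of what the application class could shrink to under those assumptions (the EnableClusterAware import shown is the SBDG one; double-check the package against the SBDG docs):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.geode.config.annotation.EnableClusterAware;

// ClientCache, Spring Data Repositories and PDX all come from SBDG auto-configuration,
// so the explicit @ClientCacheApplication, @EnableGemfireRepositories and @EnablePdx
// annotations are no longer needed.
@SpringBootApplication
@EnableClusterAware
public class CmWeb {

    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}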
The other attributes can be configured via Spring Boot application.properties, like so:
spring.application.name=CmWeb
spring.data.gemfire.pool.subscription-enabled=true
TIP: You can configure subscription on individually "named" Pools, even via properties, if you are using more than 1 Pool (of connections) in your application, perhaps to route different payloads based on workflows to different "grouped" servers in your cluster, etc.
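As a rough illustration of that TIP (the pool name below is made up, and the exact property prefix for named Pools is an assumption to verify against the SDG annotation-based configuration reference):

# Assumed key format for a "named" Pool; "analyticsPool" is a made-up name.
# Double-check the exact prefix in the SDG annotation-based configuration docs.
spring.data.gemfire.pool.analyticsPool.subscription-enabled=true
spring.data.gemfire.pool.analyticsPool.locators=1.2.3.4[10334]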
You started to configure the "DEFAULT" Pool in application.properties already with...
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Regarding...
After a lot of investigation I thought it was that the spring data geode client expects a spring boot geode server
No, SDG does not expect the cluster (of servers) to be configured or bootstrapped with Spring at all. Using Gfsh is perfectly valid. For instance, if the ListRegionsOnServerFunction is not available, SDG falls back to other means (provided by GemFire/Geode itself, which Gfsh also knows and uses).
All the messages you are seeing in the Spring Boot app logs are coming from Geode itself, i.e. nothing to do with Spring. In a nutshell, and FWIW, SDG/SBDG is a facade around the Apache Geode (Pivotal GemFire) API and Java client driver. SDG/SBDG is at the mercy of this client doing the right thing, which of course, is partially dependent on proper configuration. Still... I am really just thinking out loud now since I suspect you are already well aware of (or have discovered) all of this.
I would also say the Java client and the Native Client are not exactly an apples-to-apples comparison either. Meaning, if you developed a client using purely the Apache Geode (Pivotal GemFire) API without Spring, you would have the exact same problem.
I have never seen a case where the first connection is established but subsequent connections get a "Connection refused". o.O #argh
Have you tried this same configuration/arrangement with older Geode versions, e.g. 1.9?
Sorry for your troubles. I will think on this more.

Zuul forwarding error

I am facing issues using Zuul and Ribbon. I am also using Eureka for the microservice registry.
- I have ribbon-service (port 9000) communicating with user-service using the REST API
- user-service has 2 instances (on ports 8081 and 8091)
- on ribbon-service I have implemented client-side load balancing using Hystrix and a Feign client (a sketch of such a client follows below)
- I consume the ribbon-service API via a Zuul route, and that API then triggers the user-service API
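For reference, a minimal sketch of the kind of Feign client with a Hystrix fallback I mean (interface, endpoint and fallback names are placeholders, not my actual code; the @FeignClient import assumes the openfeign starter):

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// "user-service" is the Eureka service id; Ribbon balances calls across its two instances.
// The fallback kicks in via Hystrix when user-service cannot be reached
// (requires feign.hystrix.enabled=true).
@FeignClient(name = "user-service", fallback = UserClientFallback.class)
public interface UserClient {

    @GetMapping("/users/{id}")
    String getUser(@PathVariable("id") Long id);
}

@Component
class UserClientFallback implements UserClient {

    @Override
    public String getUser(Long id) {
        return "user-service is currently unavailable";
    }
}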
When I start my microservice ecosystem and try to consume the ribbon-service API (zuulservice:8761/ribbon-service/), I get the following error:
com.netflix.zuul.exception.ZuulException: Forwarding error
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.handleException(RibbonRoutingFilter.java:189) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:164) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:112) ~[spring-cloud-netflix-zuul-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:117) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:193) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.FilterProcessor.runFilters(FilterProcessor.java:157) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.FilterProcessor.route(FilterProcessor.java:118) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.ZuulRunner.route(ZuulRunner.java:96) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.http.ZuulServlet.route(ZuulServlet.java:116) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.http.ZuulServlet.service(ZuulServlet.java:81) ~[zuul-core-1.3.1.jar:1.3.1]
at org.springframework.web.servlet.mvc.ServletWrappingController.handleRequestInternal(ServletWrappingController.java:165) [spring-webmvc-5.0.5.RELEASE.jar:5.0.5.RELEASE]
at org.springframework.cloud.netflix.zuul.web.ZuulController.handleRequest(ZuulController.java:44) [spring-cloud-netflix-zuul-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:52) [spring-webmvc-5.0.5.RELEASE.jar:5.0.5.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) [spring-webmvc-5.0.5.RELEASE.jar:5.0.5.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) [spring-webmvc-5.0.5.RELEASE.jar:5.0.5.RELEASE]................
Caused by: com.netflix.client.ClientException: Load balancer does not have available server for client: ribbon-service
at com.netflix.loadbalancer.LoadBalancerContext.getServerFromLoadBalancer(LoadBalancerContext.java:483) ~[ribbon-loadbalancer-2.2.5.jar:2.2.5]
at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:184) ~[ribbon-loadbalancer-2.2.5.jar:2.2.5]
at com.netflix.loadbalancer.reactive.LoadBalancerCommand$1.call(LoadBalancerCommand.java:180) ~[ribbon-loadbalancer-2.2.5.jar:2.2.5]
at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar:1.3.8]
at rx.internal.operators.OnSubscribeConcatMap.call(OnSubscribeConcatMap.java:94) ~[rxjava-1.3.8.jar:1.3.8]
at rx.internal.operators.OnSubscribeConcatMap.call(OnSubscribeConcatMap.java:42) ~[rxjava-1.3.8.jar:1.3.8]
at rx.Observable.unsafeSubscribe(Observable.java:10327) ~[rxjava-1.3.8.jar:1.3.8]
This error remains for some time, and after that time I get the following output:
2018-07-16 12:55:43.260 INFO 19233 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: ribbon-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
After that output everything works great again.
When I hit eurekaservice:8765/eureka/apps, I can see both ribbon-service and all the user-service instances registered.
This is my zuul service application.properties:
#Service port
server.port=8765
#Service name
spring.application.name=zuul-service
# Discovery Server Access
eureka.client.service-url.defaultZone:http://localhost:8761/eureka/
eureka.instance.lease-renewal-interval-in-seconds=3
#User service configuration
zuul.routes.user-service.path:/user-service/**
zuul.routes.user-service.serviceId:user-service
#Product service configuration
zuul.routes.product-service.path:/product-service/**
zuul.routes.product-service.serviceId:product-service
#Shopping cart service configuration
zuul.routes.shoppingcart-service.path:/shoppingcart-service/**
zuul.routes.shoppingcart-service.serviceId:shoppingcart-service
#Payment service configuration
zuul.routes.payment-service.path:/payment-service/**
zuul.routes.payment-service.serviceId:payment-service
#Ribbon service configuration
zuul.routes.ribbon-service.path:/ribbon-service/**
zuul.routes.ribbon-service.serviceId:ribbon-service
This is my zuul-service bootstrap.properties:
#Application name
spring.application.name=zuul-service
#hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 12600
ribbon.ConnectTimeout: 6000
ribbon.ReadTimeout: 60000
robbon.eureka.enabled: true
hystrix.command.default.execution.timeout.enabled=false
I am using Spring Boot 2.0.1 and Spring Cloud Finchley.RELEASE.
Can somebody explain what is going on?
Thank you!
The way the DiscoveryClient works is that it fetches the list of changes from Eureka (service discovery) at a defined interval, the heartbeat. Until changes to the registered applications propagate from service discovery to your Zuul instance, it is bound to produce errors.
The output you see in the logs is just the confirmation that it has received this information.
You may also consider adding strip-prefix configuration to your Zuul routes:
add stripPrefix=false in every route config (see the example below).
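For example, for the ribbon-service route (the same flag can be added to each of the other routes), together with an optional tighter registry fetch interval for the propagation delay described above:

# Do not strip the /ribbon-service prefix when forwarding (repeat for the other routes)
zuul.routes.ribbon-service.strip-prefix=false

# Optional: fetch the Eureka registry more often than the 30-second default, so newly
# registered instances are picked up by Zuul/Ribbon sooner
eureka.client.registry-fetch-interval-seconds=5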

How to integrate spring batch admin console

I'm using JHipster and I want to integrate the Spring Batch Admin console into my webapp (http://localhost:8080/batch-console, for example).
I tried to apply the answer to "Is there a way to integrate spring-batch-admin and spring-boot properly?" in my JHipster webapp, but I get the following error (the properties do not seem to be loaded):
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'loggingConfiguration': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'spring.application.name' in string value "${spring.application.name}"
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:355)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1219)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:751)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:861)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:541)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:761)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:371)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
at com.mycompany.myapp.JhipsterApp.main(JhipsterApp.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'spring.application.name' in string value "${spring.application.name}"
at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174)
at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126)
at org.springframework.beans.factory.config.PropertyPlaceholderConfigurer$PlaceholderResolvingStringValueResolver.resolveStringValue(PropertyPlaceholderConfigurer.java:258)
at org.springframework.beans.factory.support.AbstractBeanFactory.resolveEmbeddedValue(AbstractBeanFactory.java:813)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1076)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1056)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:566)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:349)
... 20 common frames omitted
If I hard-code these properties for testing, my JHipster application starts, but I get:
Request URL:http://localhost:8080/
Request Method:GET
Status Code:403 Forbidden
What exactly do I need to do to integrate Spring Batch Admin into my JHipster app?
Do I need to add the following dependency to my pom?
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>
Spring boot version : 1.4.1.RELEASE
Jhipster version : 3.8.0
Thanks
You don't have to add Spring Boot; JHipster is already a Spring Boot app. As for spring-batch-admin, from what I see in the documentation it is a web UI built on Spring MVC, which is actually also part of the JHipster stack. However, JHipster's client is a single-page AngularJS app that consumes REST services, so you may run into issues when integrating your batch-admin app.
Your
Request URL:http://localhost:8080/ Request Method:GET Status Code:403 Forbidden
indicates that something has changed in your authorization, since http://localhost:8080/ is the normal entry point in a JHipster app.
As a solution I would recommend using the microservice approach. The spring-batch-admin documentation indicates that there is a RESTful (JSON) API, which fits perfectly into a microservices app.
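If you go that route, your JHipster app would simply call the separately deployed admin app over HTTP. A minimal sketch, assuming the admin app is reachable at http://localhost:8081/batch-admin and exposes a JSON job listing at /jobs.json (both are assumptions to check against the spring-batch-admin docs):

import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class BatchAdminClient {

    // Assumed base URL of the separately deployed Spring Batch Admin app.
    private static final String BATCH_ADMIN_URL = "http://localhost:8081/batch-admin";

    private final RestTemplate restTemplate = new RestTemplate();

    // Fetch the job list as raw JSON; the /jobs.json path is an assumption to verify
    // against the spring-batch-admin REST API documentation.
    public String listJobs() {
        return restTemplate.getForObject(BATCH_ADMIN_URL + "/jobs.json", String.class);
    }
}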

spring cloud, zuul proxy and ribbon healthcheck issue without eureka

I have configured my Zuul proxy to work with multiple instances of my microservice. The only thing I have done is add the ribbon.listOfServers property to my configuration.
It is working fine with the round-robin policy.
But when I shut down one of the microservice instances, Ribbon still sends requests to it and returns errors to the clients.
How can I enable a health-check feature inside the Zuul proxy?
My zuul configuration is shown below:
shared.microservice.customer.service1.url=ip1:port1/shared/microservice/customer/
shared.microservice.customer.service2.url=ip2:port2/shared/microservice/customer/
ribbon.eureka.enabled = false
zuul.routes.customer-micro-service.path: /shared/microservice/customer/**
zuul.routes.customer-micro-service.serviceId: customers
customers.ribbon.listOfServers = ip1:port1/shared/microservice/customer/,ip2:port2/shared/microservice/customer/
My Main Spring class has the following annotations:
@EnableZuulProxy
@SpringBootApplication
@ComponentScan(basePackages = { "com.my.gateway" })
public class ZuulProxyApplication
And I am getting the exception below:
com.netflix.zuul.exception.ZuulException: Forwarding error
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:143) ~[spring-cloud-netflix-core-1.0.1.RELEASE.jar!/:1.0.1.RELEASE]
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:107) ~[spring-cloud-netflix-core-1.0.1.RELEASE.jar!/:1.0.1.RELEASE]
at com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:112) [zuul-core-1.0.28.jar!/:?]
at com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:197) [zuul-core-1.0.28.jar!/:?]
at com.netflix.zuul.FilterProcessor.runFilters(FilterProcessor.java:161) [zuul-core-1.0.28.jar!/:?]
at com.netflix.zuul.FilterProcessor.route(FilterProcessor.java:120) [zuul-core-1.0.28.jar!/:?]
at com.netflix.zuul.ZuulRunner.route(ZuulRunner.java:84) [zuul-core-1.0.28.jar!/:?]
at com.netflix.zuul.http.ZuulServlet.route(ZuulServlet.java:111) [zuul-core-1.0.28.jar!/:?]
at com.netflix.zuul.http.ZuulServlet.service(ZuulServlet.java:77) [zuul-core-1.0.28.jar!/:?]
Caused by: com.netflix.hystrix.exception.HystrixRuntimeException: customersRibbonCommand failed and no fallback available.
at com.netflix.hystrix.AbstractCommand$16.call(AbstractCommand.java:782) ~[hystrix-core-1.4.4.jar!/:1.4.4]
at com.netflix.hystrix.AbstractCommand$16.call(AbstractCommand.java:769) ~[hystrix-core-1.4.4.jar!/:1.4.4]
at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$1.onError(OperatorOnErrorResumeNextViaFunction.java:77) ~[rxjava-1.0.7.jar!/:1.0.7]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.7.jar!/:1.0.7]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.7.jar!/:1.0.7]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.7.jar!/:1.0.7]
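A direction that is often suggested for Ribbon without Eureka is to give the "customers" client an IPing implementation so that servers which are shut down get marked as dead and dropped from the rotation; a rough, untested sketch (the ping path is illustrative):

import com.netflix.loadbalancer.IPing;
import com.netflix.loadbalancer.PingUrl;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Referenced from the Zuul application class with:
// @RibbonClient(name = "customers", configuration = CustomersRibbonPingConfiguration.class)
// Per the Spring Cloud docs, keep this class outside the packages picked up by
// @ComponentScan so it only applies to the "customers" Ribbon client.
@Configuration
public class CustomersRibbonPingConfiguration {

    @Bean
    public IPing ribbonPing() {
        // PingUrl issues a real HTTP GET against each listed server;
        // false = plain HTTP, and the appended path below is illustrative.
        return new PingUrl(false, "/shared/microservice/customer/");
    }
}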