Setting management.context-path = /admin and using @EnableCircuitBreaker makes the Hystrix endpoint /admin/hystrix.stream.
This becomes an issue when using Turbine to aggregate metrics, because when discovering instances via Eureka it looks for <instance>:<port>/hystrix.stream.
Any suggestions?
Full config for turbine:
server.port=8082
spring.application.name=turbine
management.endpoint.health.enabled=true
management.endpoints.jmx.exposure.include=*
management.endpoints.web.exposure.include=*
management.endpoints.web.base-path=/actuator
management.endpoints.web.cors.allowed-origins=true
management.endpoint.health.show-details=always
eureka.client.serviceUrl.defaultZone=${EUREKA_URI:http://localhost:8761/eureka}
eureka.instance.lease-expiration-duration-in-seconds=5
eureka.instance.lease-renewal-interval-in-seconds=5
turbine.aggregator.cluster-config=default
turbine.app-config=google
turbine.cluster-name-expression= new String("default")
turbine.combine-host-port=true
turbine.instanceUrlSuffix.default=actuator/hystrix.stream
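For reference, the knob that has to line up with wherever the stream actually lives is Turbine's per-cluster instance URL suffix. A sketch follows; the /admin/hystrix.stream value is only an assumption based on the context path described above, not a confirmed fix:

# illustrative only: use whatever prefix the stream is actually served under
turbine.instanceUrlSuffix.default=/admin/hystrix.stream
turbine.combine-host-port=true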
My application is based on Spring Boot 2.2.2.RELEASE and PostgreSQL. I am relying on Spring's AutoConfiguration as far as persistence is concerned. My application.properties file contains the following:
# Persistence
dbVendor=postgresql
# Basic connection options
spring.dataSource.driver-class-name=org.postgresql.Driver
spring.dataSource.url=jdbc:postgresql://is-0001/<database>
spring.dataSource.username=<username>
spring.dataSource.password=<password>
# Hibernate options
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.properties.hibernate.format_sql=false
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.show_sql=false
spring.jpa.properties.hibernate.implicit_naming_strategy=<package>.ImplicitNamingStrategyImpl
spring.jpa.properties.hibernate.physical_naming_strategy=<package>.PhysicalNamingStrategyImpl
# Options to create sql scripts
spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create
spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=/development/projects/<project>/backend/sql/setup/createDb.sql
#spring.jpa.properties.javax.persistence.schema-generation.scripts.drop-target=/development/projects/<project>/backend/sql/setup/dropDb.sql
For some reason the spring.jpa.properties.hibernate.hbm2ddl.auto=create setting is ignored by Spring, regardless of its value, whereas the spring.jpa.properties.javax... properties are applied correctly, which is easy to verify by looking at the generated SQL files (createDb.sql and dropDb.sql).
Does anybody have any idea what the reason for this behaviour could be? I would really be thankful, as I have been trying to find the root cause of this issue for more than a day now.
Just as a side note: Spring Boot 2.0.5.RELEASE behaves the same.
You can have a look here:
https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto-initialize-a-database-using-hibernate
You need to set the property: spring.jpa.hibernate.ddl-auto=create
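In application.properties that is (a minimal sketch, reusing the create value from the question; Spring Boot translates this into Hibernate's hbm2ddl setting):

# Spring Boot's own property drives Hibernate's DDL handling
spring.jpa.hibernate.ddl-auto=create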
I can't figure out whether spring-cloud-gateway supports reading routes from the Consul registry, as Zuul does.
I added the spring-cloud-starter-consul-discovery dependency and @EnableDiscoveryClient, and configured the Consul properties in application.yml; however, /actuator/gateway/routes doesn't show any routes from Consul.
I also tried to set spring.cloud.gateway.discovery.locator.enabled: true, but it didn't change anything.
Sample example below:
spring:
  cloud:
    consul:
      discovery:
        register: false
        locator:
          enabled: true
        acl-token: d3ee84e2-c99a-5d84-e4bf-b2cefd7671ba
        enabled: true
So the main question: is this even supposed to work?
EDIT: I probably should have mentioned that this is spring-cloud-gateway 2.0.0.M5, with Spring Boot 2.0.0.M7.
I also launched with --debug, and the auto-configuration report contains this:
GatewayDiscoveryClientAutoConfiguration#discoveryClientRouteDefinitionLocator:
   Did not match:
      - @ConditionalOnBean (types: org.springframework.cloud.client.discovery.DiscoveryClient; SearchStrategy: all) did not find any beans of type org.springframework.cloud.client.discovery.DiscoveryClient (OnBeanCondition)
   Matched:
      - @ConditionalOnProperty (spring.cloud.gateway.discovery.locator.enabled) matched (OnPropertyCondition)
I could solve it by declaring the following bean, DiscoveryClientRouteDefinitionLocator (reference):
@Configuration
@EnableDiscoveryClient
public class AutoRouting {

    // Builds gateway route definitions from the services returned by the DiscoveryClient (Consul here)
    @Bean
    public DiscoveryClientRouteDefinitionLocator discoveryClientRouteDefinitionLocator(DiscoveryClient discoveryClient, DiscoveryLocatorProperties properties) {
        return new DiscoveryClientRouteDefinitionLocator(discoveryClient, properties);
    }
}
P.S.: You need to include "spring-cloud-consul".
I have a service that uses 3 Feign clients. Each time I start my application, I get a TimeoutException on the first call to any Feign client.
I have to trigger each Feign client at least once before everything is stable. From looking around online, the problem is that something inside Feign or Hystrix is lazily loaded, and the suggested solution was to make a configuration class that overrides the Spring defaults. I've tried that with the code below and it is still not helping; I still see the same issue. Does anyone know a fix for this? Is the only solution to call the Feign client twice via a Hystrix callback?
// On the Feign client interface, pointing at the configuration class below:
@FeignClient(value = "SERVICE-NAME", configuration = ServiceFeignConfiguration.class)

@Configuration
public class ServiceFeignConfiguration {

    @Value("${service.feign.connectTimeout:60000}")
    private int connectTimeout;

    @Value("${service.feign.readTimeOut:60000}")
    private int readTimeout;

    @Bean
    public Request.Options options() {
        return new Request.Options(connectTimeout, readTimeout);
    }
}
Spring Cloud - Brixton.SR4
Spring Boot - 1.4.0.RELEASE
This is all running in Docker:
Ubuntu - 12.04
Docker - 1.12.1
Docker-Compose - 1.8
I found the solution to be that the default Hystrix properties are not good. They have a very small timeout window, so the request always times out on the first try. I added these properties to my application.yml file in my config service, and now all of my services can use Feign with no problems and I don't have to code around the first-time timeout:
hystrix:
  threadpool.default.coreSize: "20"
  threadpool.default.maxQueueSize: "500000"
  threadpool.default.keepAliveTimeMinutes: "2"
  threadpool.default.queueSizeRejectionThreshold: "500000"
  command:
    default:
      fallback.isolation.semaphore.maxConcurrentRequests: "20"
      execution:
        timeout:
          enabled: "false"
        isolation:
          strategy: "THREAD"
          thread:
            timeoutInMilliseconds: "30000"
I suspect this is an issue; can anyone help check?
In my sidecar application, I have this application.yml:
server:
  port: 5678
spring:
  application:
    name: nodeservice
sidecar:
  port: ${nodeServer.instance.port:3000}
  health-uri: http://localhost:${nodeServer.instance.port:3000}/app/health.json
eureka:
  instance:
    hostname: ${host.instance.name:localhost}
    leaseRenewalIntervalInSeconds: 5 #default is 30, recommended to keep default
    metadataMap:
      instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
And in my main spring config app, I have:
String url_node = "";
try {
    InstanceInfo instance = discoveryClient.getNextServerFromEureka("nodeservice", false);
    // InstanceInfo instance = discoveryClient.getNextServerFromEureka("foo", false);
    url_node = instance.getHomePageUrl();
} catch (Exception e) {
}
Now I start my Node.js server, and in the Spring app I get:
url for nodeService is: http://SJCC02MT0NUFD58.local:3000/
This is perfect, but after I shut down my Node.js server,
the http://localhost:3000/app/health.json URL is completely down, BUT in the main Java Spring app I still see the same output there.
So it seems that even though the Node.js service is no longer available, Eureka still remembers it in memory.
Is anything wrong with my configuration?
Another question: why is the URL discovered by Spring http://SJCC02MT0NUFD58.local:3000/ and not http://localhost:3000? I already configured Eureka.server.instance.host to be localhost.
Thanks
You are seeing the appropriate behavior. Eureka and Ribbon are built to be very resilient (AP in CAP). In the case you described, a service had at least one instance and then there were none, and the Ribbon Eureka client keeps the last known list of servers around as a last resort. You're just printing the names; if you try to connect to that service, it will fail. This is where you use the Hystrix circuit breaker, which can provide a fallback in the case that no instances are up.
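A minimal sketch of that fallback pattern (assuming the javanica @HystrixCommand annotation from spring-cloud-starter-hystrix; the class and method names here are hypothetical):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class NodeServiceClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // The remote call runs inside a Hystrix command; if no instance is reachable
    // (or the circuit is open), the fallback below is returned instead.
    @HystrixCommand(fallbackMethod = "healthFallback")
    public String fetchHealth(String homePageUrl) {
        return restTemplate.getForObject(homePageUrl + "app/health.json", String.class);
    }

    // Same signature as the command method, as Hystrix requires for fallbacks.
    public String healthFallback(String homePageUrl) {
        return "{\"status\":\"DOWN\"}";
    }
}

With @EnableCircuitBreaker on the application class, calls go through this command and fall back cleanly instead of failing against a stale instance from the last known server list.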
I am using a front-end Spring Cloud application (microservice) acting as a Zuul proxy (@EnableZuulProxy) to route requests from an external source to other internal microservices written using Spring Cloud (Spring Boot).
The Zuul server is straight out of the applications in the samples section:
@SpringBootApplication
@Controller
@EnableZuulProxy
@EnableDiscoveryClient
public class ZuulServerApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(ZuulServerApplication.class).web(true).run(args);
    }
}
I ran this set of services locally and it all seems to work fine, but if I run it on a network with some load, or through a VPN, then I start to see Zuul forwarding errors, which show up as client timeouts in the logs.
Is there any way to change the timeout on the Zuul forwards so that I can eliminate this issue from my immediate concerns? What accessible parameter settings are there for this?
In my case I had to change the following property:
zuul.host.socket-timeout-millis=30000
The properties to set are: ribbon.ReadTimeout in general and <service>.ribbon.ReadTimeout for a specific service, in milliseconds. The Ribbon wiki has some examples. This javadoc has the property names.
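For example (a sketch; users-service is a hypothetical service id, and the values are in milliseconds):

# global default for all Ribbon clients
ribbon.ReadTimeout=60000
# override for one specific service
users-service.ribbon.ReadTimeout=60000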
I have experienced the same problem: on long requests, Zuul's Hystrix command kept timing out after around a second in spite of setting ribbon.ReadTimeout=10000.
I solved it by disabling timeouts completely:
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false
An alternative that also works is to change Zuul's Hystrix isolation strategy to THREAD:
hystrix:
  command:
    default:
      execution:
        isolation:
          strategy: THREAD
          thread:
            timeoutInMilliseconds: 10000
This worked for me; I had to set the connection and socket timeouts in application.yml:
zuul:
  host:
    connect-timeout-millis: 60000 # starting the connection
    socket-timeout-millis: 60000 # monitor the continuous incoming data flow
I had to alter two timeouts to force Zuul to stop timing out long-running requests. Even if Hystrix timeouts are disabled, Ribbon will still time out.
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false
ribbon:
  ReadTimeout: 100000
  ConnectTimeout: 100000
If Zuul uses service discovery, you need to configure these timeouts with the ribbon.ReadTimeout and ribbon.SocketTimeout Ribbon properties.
If you have configured Zuul routes by specifying URLs, you need to use zuul.host.connect-timeout-millis and zuul.host.socket-timeout-millis
By routes I mean:
zuul:
  routes:
    dummy-service:
      path: /dummy/**
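For the URL-specified case, a sketch of how the route and the zuul.host timeouts fit together (the url value here is purely hypothetical):

zuul:
  host:
    connect-timeout-millis: 10000
    socket-timeout-millis: 60000
  routes:
    dummy-service:
      path: /dummy/**
      url: http://localhost:8081   # hypothetical backend address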
I had a similar issue: I was trying to set the timeout globally, and the order in which the Hystrix and Ribbon timeouts are set also matters.
After spending plenty of time, I ended up with this solution. My service was taking up to 50 seconds because of a huge volume of data.
Points to consider before changing the default timeout values:
The Hystrix timeout should be greater than the combined Ribbon ReadTimeout and ConnectTimeout.
Set it for the specific service only; don't set it globally (that doesn't work).
I mean use this:
command:
  your-service-name:
instead of this:
command:
  default:
Working solution:
hystrix:
  command:
    your-service-name:
      execution:
        isolation:
          strategy: THREAD
          thread:
            timeoutInMilliseconds: 95000
your-service-name:
  ribbon:
    ConnectTimeout: 30000
    ReadTimeout: 60000
    MaxTotalHttpConnections: 500
    MaxConnectionsPerHost: 100
Reference
Only these settings in application.yml worked for me:
ribbon:
  ReadTimeout: 90000
  ConnectTimeout: 90000
  eureka:
    enabled: true
zuul:
  host:
    max-total-connections: 1000
    max-per-route-connections: 100
  semaphore:
    max-semaphores: 500
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 1000000
Hope it helps someone!