These are the default values for the Spring Cloud Consul retry settings:
spring.cloud.consul.retry.initial-interval=1000
spring.cloud.consul.retry.max-attempts=6
spring.cloud.consul.retry.max-interval=2000
spring.cloud.consul.retry.multiplier=1.1
When these values are customized, the application picks them up, but the retry behavior does not match the configuration. Suppose max-interval=10000 and max-attempts=10 are given and Consul is down when the application starts. The application should then keep retrying the Consul connection for up to about 100 seconds (10 attempts with intervals capped at 10 seconds). But even with those values provided, the application only tries to connect to Consul for about 5 seconds, roughly what the defaults would produce.
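For reference, here is how the customized values from the question would be set. One thing worth checking (an assumption, depending on your Spring Cloud version): these retry settings are consumed while the application bootstraps its Consul connection, so they may need to live in bootstrap.properties rather than application.properties to take effect.
# Customized values from the question; other retry settings left at their defaults.
# Assumption to verify: in many Spring Cloud versions these must be placed in
# bootstrap.properties, since the Consul retry happens during the bootstrap phase.
spring.cloud.consul.retry.max-attempts=10
spring.cloud.consul.retry.max-interval=10000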
We are running nearly 100 instances in production on a Kubernetes cluster and use a Prometheus server to drive a Grafana dashboard. To monitor disk usage, the query below is used:
(sum(node_filesystem_size_bytes{instance=~"$Instance"}) - sum(node_filesystem_free_bytes{instance=~"$Instance"})) / sum(node_filesystem_size_bytes{instance=~"$Instance"})
Since $Instance is substituted with each instance IP and we have nearly 80 instances, the expanded query fails with "Request URI too large". Can someone help fix this issue?
You only need to specify the instances once; aggregate by instance and use the on matching operator to pull in the matching series. Note the sums need by(instance), otherwise the instance label is aggregated away and there is nothing left to match on, and the result is one ratio per instance rather than a single total:
(sum by(instance) (node_filesystem_size_bytes{instance=~"$Instance"})
- on(instance) sum by(instance) (node_filesystem_free_bytes))
/ on(instance) sum by(instance) (node_filesystem_size_bytes)
Consider also adding a unifying label to your time series so you can do something like ...{instance_type="group-A"} instead of explicitly specifying instances.
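For illustration, with such a label in place (instance_type here is a hypothetical label you would attach via relabeling at scrape time), the query needs no long instance regex at all:
# instance_type is a hypothetical label added through relabeling
(sum(node_filesystem_size_bytes{instance_type="group-A"})
- sum(node_filesystem_free_bytes{instance_type="group-A"}))
/ sum(node_filesystem_size_bytes{instance_type="group-A"})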
I have many Java microservices running in a Kubernetes cluster. All of them run APM agents sending data to an APM server in our Elastic Cloud cluster.
Everything was working fine, but suddenly every microservice started receiving the error shown below in the logs.
I tried restarting the cluster, increasing the hardware capacity, and following the hints, but with no success.
Note: the disk is almost empty and memory usage is OK.
Everything is on version 7.5.2.
I deleted all the APM-related indices and everything worked again after a few minutes.
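For reference, that can be done through the Elasticsearch API, roughly like this (a sketch: the apm-* pattern assumes the default APM index naming in 7.x, the host and credentials are placeholders, and the delete is destructive):
# Destructive: removes all APM indices; APM Server recreates them as new data arrives.
curl -X DELETE "https://YOUR-ES-HOST:9200/apm-*" -u elastic:YOUR-PASSWORD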
For better performance you can fine-tune these fields in the apm-server.yml file:
queue.mem.events - the internal queue size; increase it, e.g. to output.elasticsearch.worker * output.elasticsearch.bulk_max_size (default is 4096)
output.elasticsearch.worker - increase; default is 1
output.elasticsearch.bulk_max_size - increase; default is 50, which is very low
Example: for my use case I used the following settings for 2 apm-server nodes and 3 Elasticsearch nodes (1 master, 2 data nodes):
queue.mem.events=40000
output.elasticsearch.worker=4
output.elasticsearch.bulk_max_size=10000
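In apm-server.yml form those settings sit roughly like this (a sketch; the hosts value is a placeholder for your own cluster):
# Internal queue: buffer more events while the workers drain to Elasticsearch
queue.mem.events: 40000
output.elasticsearch:
  hosts: ["https://YOUR-ES-HOST:9200"]  # placeholder
  worker: 4               # parallel bulk writers (default 1)
  bulk_max_size: 10000    # events per bulk request (default 50)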
I am working on two Spring Boot applications. I am using spring-cloud-starter-hystrix for circuit breaking and fallback methods via @EnableCircuitBreaker.
Now I also want a Hystrix dashboard with metrics, which can be achieved with a Turbine server using @EnableTurbine and @EnableHystrixDashboard.
AFAIK the Turbine service gets the application URLs from a Eureka instance: in the Turbine server's application.properties we give the names of the other apps, so that Turbine can look up each app's url:port with Eureka.
In my case, I am not using Eureka. So how can I make a Turbine service fetch metric streams from manually hardcoded application URLs and display the metrics dashboard?
So basically, in the Turbine server, can I disable the connection to Eureka and hardcode URLs to fetch metrics from?
I have searched for a few hours and couldn't find a solution. Any help is appreciated.
Download the turbine-web WAR file from HERE and deploy it on any server, say Tomcat, with a JVM runtime argument specifying the location of your Turbine config file. Something like:
-Darchaius.configurationSource.additionalUrls=file:///etc/files/turbine-archaius.properties
In that file, add configuration such as your server IPs and the URI of the Hystrix stream servlet. See HERE for more help.
Here's my sample config file for better understanding:
turbine.aggregator.clusterConfig=<cluster-name>
turbine.instanceUrlSuffix.<cluster-name>=/hystrix.stream
# I am using a separate file to list all the server IPs that Turbine needs to aggregate data from
turbine.FileBasedInstanceDiscovery.filePath=/etc/files/turbine-server-list
InstanceDiscovery.impl=com.netflix.turbine.discovery.FileBasedInstanceDiscovery
turbine.InstanceMonitor.eventStream.skipLineLogic.enabled=false
The other file, turbine-server-list, contains the server IPs from which to aggregate metrics, something like:
APPLICATION-IP1:PORT,<cluster-name>,up
APPLICATION-IP2:PORT,<cluster-name>,up
Find your aggregated Turbine metrics at http://TURBINE-SERVER-IP:PORT/turbine/turbine.stream?cluster=<cluster-name>
I'm a bit confused by this configuration. My Spring Boot app with @EnableDiscoveryClient has spring.cloud.consul.host set to localhost. I'm running a Consul agent on the host where my Boot app is running, but I have a few questions (I can't seem to find the answers in the documentation).
Can this config accept multiple values?
If so, I'd prefer to set the values to a list of Consul server addresses (but then, what's the point of running Consul Agents at all, so this doesn't seem practical, which means I'm not understanding something here)
If not, are we expected to run a Consul agent on every node where a Boot app with @EnableDiscoveryClient runs? (This feels wrong as well; for one, it looks like a single point of failure, even though one agent should be able to tell everything about the cluster. What if I can't contact that one agent?)
What's the best practice for this configuration?
Actually, this is exactly what Consul itself solves. An agent runs on every server to handle clustering, failures, data sharing, autodiscovery, etc. for you, so that you don't need to know the other hosts in your Spring Boot configuration. The Spring Boot app always connects to the agent running on the same machine.
See https://www.consul.io/docs/agent/basics.html
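So the Spring Boot side stays minimal and identical on every node; a sketch (8500 is Consul's default HTTP API port):
# Always talk to the local agent; the agent takes care of cluster
# membership, failures and discovery across the other hosts.
spring.cloud.consul.host=localhost
spring.cloud.consul.port=8500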
I am switching all my service infrastructure from Eureka to Consul.
With Eureka I have multiple services with the same name, and Eureka differentiates them via the Application and its Instances.
With Consul, if I keep this naming scheme, does Spring Cloud generate unique ids under the covers?
I read that Consul will use the id and name synonymously unless you register services under unique ids.
So you can have service 1 as (name=myservice, id=xxx) and service 2 as (name=myservice, id=yyy).
In that way Consul preserves uniqueness. What does Spring Cloud do under the covers?
OK, so it appears that the question is not clear.
I know that I can specify uniqueness when I define the services, but I don't.
I have a large microservices-based system in production. We run multiple copies of each microservice for both redundancy and scaling, and we do not specifically set uniqueness on the services.
We don't, because Eureka does this for us. Say I have a CustomerAccountService with 5 instances: when I request the customer account service, I can see all 5. Looking at the Eureka data model, we see one Application and 5 Instances of it.
So I am planning on moving to Consul and want to preserve a similar mode of operation: many instances of the same type of service.
What I really want to know is how Spring Cloud Consul registration works under the covers, and whether I have to do something special for this.
I do know that Consul defines a name and an id, and that they can be the same or different.
So can 5 instances share the same name while their ids vary? If so, how does that happen in the Spring Cloud Consul version of this?
Any applications registered in Consul with the same spring.application.name via Spring Cloud will be grouped together, just like in Eureka.
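As an illustration, all instances can share one name while each registration id is made unique via spring.cloud.consul.discovery.instance-id (a sketch; the random suffix is one common pattern, not the only option):
# Five instances with this same name show up in Consul as one service
spring.application.name=customer-account-service
# Optional: make each instance's registration id explicit and unique
spring.cloud.consul.discovery.instance-id=${spring.application.name}:${random.value}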