Turbine Dashboard Metrics without Eureka - spring-cloud

I am working on two Spring Boot applications. I am using spring-cloud-starter-hystrix for circuit breaking and fallback methods via @EnableCircuitBreaker.
Now I also want a Hystrix dashboard with metrics, which can be achieved with a Turbine server using @EnableTurbine and @EnableHystrixDashboard.
AFAIK the Turbine service gets the application URLs from a Eureka instance: in the Turbine server's application.properties we give the names of the other apps, and Turbine asks Eureka for each app's url:port.
In my case I am not using Eureka. So how can I make a Turbine service use manually hardcoded application URLs to fetch the metric streams and display the metrics dashboard?
So basically: in the Turbine server, can I disable the connection to Eureka and hardcode the URLs to fetch metrics from?
I have searched for a few hours and couldn't find a solution. Any help is appreciated.

Download the turbine-web war file from HERE and deploy it on any server, say Tomcat, with a JVM runtime argument specifying the location of your Turbine config file. Something like:
-Darchaius.configurationSource.additionalUrls=file:///etc/files/turbine-archaius.properties
In that file, add configuration such as your server IPs and the URI of the Hystrix stream servlet. See HERE for more details.
Here's my sample config file for better understanding:
turbine.aggregator.clusterConfig=<cluster-name>
turbine.instanceUrlSuffix.<cluster-name>=/hystrix.stream
# A separate file lists all the server IPs that Turbine needs to aggregate data from
turbine.FileBasedInstanceDiscovery.filePath=/etc/files/turbine-server-list
InstanceDiscovery.impl=com.netflix.turbine.discovery.FileBasedInstanceDiscovery
turbine.InstanceMonitor.eventStream.skipLineLogic.enabled=false
The other file, turbine-server-list, contains the server IPs to aggregate metrics from, one per line:
APPLICATION-IP1:PORT,<cluster-name>,up
APPLICATION-IP2:PORT,<cluster-name>,up
Find your aggregated Turbine metrics at http://TURBINE-SERVER-IP:PORT/turbine/turbine.stream?cluster=<cluster-name>
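If you are running Turbine embedded in a Spring Boot app via @EnableTurbine rather than the standalone war, a similar result can be had by defining your own InstanceDiscovery bean: Spring Cloud registers its Eureka-based discovery only when no such bean is present. A minimal sketch, where the hostnames, ports and cluster name are placeholders, and the "port" attribute is an assumption about how your Spring Cloud Netflix version builds the stream URL (alternatively, bake the port into turbine.instanceUrlSuffix):

import java.util.Arrays;

import com.netflix.turbine.discovery.Instance;
import com.netflix.turbine.discovery.InstanceDiscovery;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HardcodedTurbineDiscovery {

    @Bean
    public InstanceDiscovery instanceDiscovery() {
        // Return a fixed list of instances instead of asking Eureka
        return () -> {
            Instance one = new Instance("host1", "MY-CLUSTER", true);
            one.getAttributes().put("port", "8080");
            Instance two = new Instance("host2", "MY-CLUSTER", true);
            two.getAttributes().put("port", "8080");
            return Arrays.asList(one, two);
        };
    }
}

You would still set turbine.aggregator.clusterConfig and turbine.instanceUrlSuffix.<cluster-name> as in the config file above.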

Related

How do I get CPU metrics in Dynatrace API for a service?

Dynatrace has a REST API for statistics. In the console I can go to a host and view the running processes, for instance WebLogic. From there I can drill in and see Spring Boot web controllers and their CPU utilization as a percentage. These are also available through the "Services" section of the admin portal.
I'm trying to automate reading these services and their CPU usage so I can compare multiple environments. Even though I can see the service's percentage in the Dynatrace admin portal front end, I can't seem to find the right API to get the data.
I can see the results at the process level with /api/v2/metrics/query?metricSelector=builtin:tech.generic.cpu.usage, but I can't seem to get them at the service level. For instance, WebLogic is one server and one process group instance, but it has multiple web controllers (sometimes several per web application) and I would like to see the CPU usage for each.
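No answer was posted here, but for reference, the v2 metrics endpoint also takes an entitySelector, so a service-scoped query would look roughly like the sketch below. The metric key builtin:service.cpu.time is an assumption; list the keys your environment actually exposes via GET /api/v2/metrics. The environment URL and token handling are placeholders.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DynatraceServiceCpuQuery {

    public static void main(String[] args) throws Exception {
        String base = "https://YOUR-ENV.live.dynatrace.com";   // placeholder environment URL
        String token = System.getenv("DT_API_TOKEN");          // token needs the metrics.read scope

        // Assumed service-level CPU metric key; verify against GET /api/v2/metrics
        String metric = URLEncoder.encode("builtin:service.cpu.time", StandardCharsets.UTF_8);
        String entities = URLEncoder.encode("type(SERVICE)", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(base + "/api/v2/metrics/query?metricSelector=" + metric
                        + "&entitySelector=" + entities))
                .header("Authorization", "Api-Token " + token)
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // one data series per service entity
    }
}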

Using Logstash, Config Server and Eureka with Spring Cloud Task and Data Flow

We have an existing microservice environment with Logstash, config and Eureka servers. We are now setting up a Spring Cloud Data Flow (Kubernetes) environment, initially primarily to run tasks/batch jobs.
Ideally we would like the tasks to use the existing logstash, config and eureka servers via the standard spring boot configuration (annotations etc) to support the following scenarios:
Logstash: when a task runs, its logs are sent to Logstash and are viewable from Kibana.
Config Server: to support changing configuration properties for tasks, e.g. a periodic task's configuration can be tweaked by altering the values on the configuration server, and the next time the task runs it will use the new values.
My understanding is that config server properties override properties in the task definition, which override properties in the internal application.properties.
Eureka: each task would register itself in Eureka. The main reason for this is that our tasks have web actuator endpoints exposed, and we can then use Spring Boot Admin (which can discover services via Eureka) to access the actuator endpoints and information while a task is running.
(Some of our tasks can take hours to run, and this would enable us to monitor them, adjust logging, etc.)
Is this a sensible approach, or are there any potential issues to look out for here (e.g. short-lived tasks with Eureka)? I can't find any discussion of this in the existing Spring Cloud Data Flow or Spring Cloud Task documentation.
You may try logstash-logback-encoder for SCDF integration with the ELK stack. It works fine for our SCDF-on-YARN stream applications.
Config Server should work for any Spring Boot application.
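For the Logstash piece, here is a minimal logback-spring.xml sketch using logstash-logback-encoder; the host, port and the assumption that your Logstash has a TCP input are placeholders for your setup:

<configuration>
  <!-- Ships JSON-encoded log events to a Logstash TCP input; host/port are placeholders -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash-host:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>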

How do I make the Eureka server highly available?

I am new to Spring Cloud. Currently I want to build a new microservice based on Spring Cloud. It is very easy to build a new Eureka server, but my question is how to make it highly available. For example, I could create two Eureka servers and a load balancer, so that when one Eureka server goes down the system still works. But I don't know how to keep the registered information consistent between the two Eureka servers.
I have already asked something similar in the spring cloud gitter channel.
Because of the CAP theorem, a distributed service discovery system has to choose whether to favor availability or consistency, with a trade-off against the other.
In short, quoting Spencer Gibb:
Eureka favors availability over consistency
So it is highly available, while the registered services may no longer be accurate.
As Spencer suggested, if you need consistency more than availability, try Consul together with Spring Cloud Consul instead.
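For completeness: the standard way to replicate registrations between two Eureka servers is peer awareness, where each server registers with the other. A minimal sketch following the pattern in the Spring Cloud docs; hostnames and port are placeholders:

# application.properties on peer1 (mirror on peer2, swapping the hostnames)
spring.application.name=eureka-server
server.port=8761
eureka.instance.hostname=peer1
# register with, and fetch the registry from, the other peer
eureka.client.serviceUrl.defaultZone=http://peer2:8761/eureka/

Each peer then replicates the registrations it receives to the other, so clients pointed at either server (or at the load balancer) see the same registry, subject to the eventual consistency described above.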

Spring Cloud Consul service names

I am switching all my service infrastructure from Eureka to Consul.
In the Eureka case I have multiple services with the same name, and Eureka handles this via the Application and its Instances to differentiate them.
In the Consul case, if I keep this naming scheme, does Spring Cloud generate unique ids under the covers?
I read that Consul will use the id and name synonymously unless you register services under unique ids.
So you can have service 1 as (name=myservice, id=xxx) and service 2 as (name=myservice, id=yyy).
In that way Consul preserves uniqueness. What does Spring Cloud do under the covers?
OK, so it appears that the question is not clear.
I know that I can specify unique ids when I define the services, but I don't.
I have a large microservices-based system in production. We have multiple instances of each microservice, for both redundancy and scaling, and we do not explicitly set unique ids on the services.
We don't, because Eureka does this for us. Say I have a CustomerAccountService with 5 instances; when I request the customer account service I can see 5 instances. Looking at the Eureka data model, we see one Application and 5 Instances of it.
So I am planning on moving to Consul and want to preserve a similar mode of operation: many instances of the same type of service.
What I really want to know is how the Spring Cloud Consul registration works under the covers, or whether I have to do something special for this.
I do know that Consul defines a name and an id, and that they can be the same or they can be different.
So can I have the name for 5 instances be the same and have the ids vary? If so, how does that happen in the Spring Cloud Consul version of this?
Any applications registered with the same spring.application.name in Consul using Spring Cloud will be grouped together, just like in Eureka.
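In other words, you normally don't have to do anything special. If you do want to control the instance id yourself, here is a sketch; the exact default id format depends on your Spring Cloud Consul version (it is typically derived from the application name and port, and only has to be unique per Consul agent):

spring.application.name=myservice
# Optional override: make each instance's id explicitly unique
spring.cloud.consul.discovery.instance-id=${spring.application.name}-${random.value}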

Spring Cloud Configuration Server Through Sidecar

We are using Spring Cloud Sidecar with a Node.js application. It would be extremely useful if we could serve up configuration from the Spring configuration server and make that configuration available to the Node application.
I would like the sidecar to resolve any property placeholders on behalf of the Node application.
The sidecar already hits the configuration server, and I know that the Environment in the sidecar WILL resolve all the property placeholders. My problem is: how do I efficiently expose all those properties to the Node application? I could create a simple REST endpoint that accepts a key and returns environment.getProperty(key), but that would be extremely inefficient.
I am thinking that I could iterate over all property sources (I know that not all property sources can be enumerated), collect a unique set of the names, and then turn around and call environment.getProperty() for each name...
But is there a better way?
I have to imagine this is functionality that others have needed when using Spring Cloud in a polyglot environment?
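A sketch of the enumeration approach described above, exposing one JSON map that the Node app can fetch in a single call; the endpoint path is made up, and non-enumerable property sources are skipped as noted:

import java.util.Map;
import java.util.TreeMap;

import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.EnumerablePropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ResolvedPropertiesEndpoint {

    private final ConfigurableEnvironment environment;

    public ResolvedPropertiesEndpoint(ConfigurableEnvironment environment) {
        this.environment = environment;
    }

    @GetMapping("/resolved-properties")
    public Map<String, String> resolvedProperties() {
        Map<String, String> result = new TreeMap<>();
        for (PropertySource<?> source : environment.getPropertySources()) {
            if (source instanceof EnumerablePropertySource) {
                for (String name : ((EnumerablePropertySource<?>) source).getPropertyNames()) {
                    // getProperty resolves placeholders and honors source precedence,
                    // so the effective value is the same whichever source listed the name
                    result.putIfAbsent(name, environment.getProperty(name));
                }
            }
        }
        return result;
    }
}

One caveat: environment.getProperty() throws for values whose placeholders cannot be resolved, so you may want to catch and skip those entries.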