Autoscaling of Spring Cloud OSS microservices - spring-cloud

I have developed several microservices using the Spring Cloud Netflix stack (Eureka, Zuul, Zipkin, Config Server, etc.). Is there any open-source / free solution available to spin microservice instances up or down? E.g. if CPU usage is above 90% for 3 consecutive checks, it adds one more instance.

Related

Is it possible to auto-refresh properties for Spring Cloud clients in a **multi-pod** environment

Is it possible to auto-refresh properties for Spring Cloud clients in a multi-pod environment (Google Kubernetes Engine)?
I found several workarounds:
- Using Spring Cloud Bus (too heavy a solution).
- Running a refresh inside the code using RefreshEvent and @Scheduled (not recommended by Spring); see the first sketch after this list.
- Creating a new endpoint in the Config Server that triggers a refresh on all Spring Cloud clients; see the second sketch after this list.
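A minimal sketch of the second workaround, assuming a Spring Cloud Config client with scheduling enabled via @EnableScheduling. It uses ContextRefresher (which is what handling a RefreshEvent ultimately invokes) rather than publishing the event directly; the interval property name is made up for illustration:

```java
import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.Set;

// Hypothetical example: poll the Config Server by refreshing the context on a timer.
// Spring advises against this pattern; it is shown only to illustrate the workaround above.
@Component
public class PeriodicConfigRefresher {

    private final ContextRefresher contextRefresher;

    public PeriodicConfigRefresher(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    // The property name and default interval below are assumptions, not Spring defaults.
    @Scheduled(fixedDelayString = "${config.refresh-interval-ms:60000}")
    public void refresh() {
        // Re-fetches the environment from the Config Server and rebinds @RefreshScope beans;
        // returns the property keys that changed.
        Set<String> changedKeys = contextRefresher.refresh();
        System.out.println("Refreshed keys: " + changedKeys);
    }
}
```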
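For the third workaround, here is a rough sketch of what such a hand-rolled endpoint on the Config Server could look like. This is not an existing Config Server feature: it assumes the Config Server is itself registered with the discovery server, that every client exposes POST /actuator/refresh, and the controller class and endpoint path are hypothetical names:

```java
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Hypothetical custom endpoint: walk the service registry and POST to each
// instance's /actuator/refresh. All names here are illustrative only.
@RestController
public class RefreshAllController {

    private final DiscoveryClient discoveryClient;
    private final RestTemplate restTemplate = new RestTemplate();

    public RefreshAllController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    @PostMapping("/refresh-all") // assumed endpoint path
    public String refreshAll() {
        int refreshed = 0;
        for (String serviceId : discoveryClient.getServices()) {
            for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
                try {
                    // Assumes every client exposes the refresh actuator endpoint over HTTP.
                    restTemplate.postForEntity(instance.getUri() + "/actuator/refresh", null, String.class);
                    refreshed++;
                } catch (Exception e) {
                    // Best effort: skip instances that are unreachable or do not expose the endpoint.
                }
            }
        }
        return "Triggered refresh on " + refreshed + " instances";
    }
}
```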

Spring Cloud Data Flow with Azure Event Hub limitations?

We plan to use Spring Cloud Data Flow on Azure, using Azure Event Hub as the messaging binder.
On Azure Event Hub, there are hard limits:
- 100 namespaces
- 10 topics per namespace
The Spring Cloud Azure Event Hub Stream Binder seems to be able to configure only one namespace, so how can we manage multiple namespaces?
Maybe we should use multiple binders, so as to have multiple instances of the Spring Cloud Azure Event Hub Stream Binder?
Does anyone have any ideas, or documentation we did not find?
Regards,
Rémi
Spring Cloud Data Flow and Spring Cloud Skipper support the concept of "platform accounts". Using that, you can set up multiple accounts: one for each namespace, or even for other Kubernetes clusters. This opens up a lot of flexibility to work around these hard limits in the Azure stack.
We have a recipe on multi-platform deployments.
When deploying streams from SCDF, you'd pick and choose the platform account (i.e., the namespace or other configs), so the deployed stream apps (with the Azure binder on the classpath) would automatically run in different namespaces, effectively dodging the limits enforced in Azure.
The provenance tracking of where the apps run and the audit trail are also captured automatically in SCDF, so at any given time you'd know who did what and in which namespace.

Spring Cloud Data Flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Apart from host-A, all the other hosts run the same tasks.
Should I build on top of the Spring Cloud Data Flow Local Server, the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of deployed app instances with the count deployer property during stream deployment (e.g. deployer.<app-name>.count). I am not sure what you mean by the agents here.

How to make the Eureka server strong?

I am new to Spring Cloud. Currently, I want to build a new microservice based on Spring Cloud. It is very easy to build a new Eureka server, but my question is how to make it highly available. For example, I create two Eureka servers and a load balancer, so that when one of the Eureka servers is down the system still works well. But I don't know how to keep the registered information consistent between the two Eureka servers.
I have already asked something similar in the Spring Cloud Gitter channel.
Because of the CAP theorem, a distributed service discovery system has to decide whether to provide availability or stronger consistency, trading one off against the other.
In short, quoting Spencer Gibb:
Eureka favors availability over consistency
So it is highly available, while the registered services may no longer be up to date. In practice you run the Eureka servers as peers that register with each other, so registrations are replicated between them on a best-effort basis.
As Spencer suggested, if consistency is something you need more than availability, try Consul together with Spring Cloud Consul instead.

WAR application and RDBMS for high-volume transactions

Can you please suggest some industry best practices (even of the "it depends" kind) for a WAR (web application archive) deployed into a J2EE container that connects to an RDBMS?
In the current scenario, we deploy the .war file to an Apache Tomcat instance that connects to a PostgreSQL database instance. We are required to scale up both in terms of data storage and transactions (multiple concurrent reads and writes).
It seems we have two choices.
Build a load-balancing system
First, we start with Apache Tomcat clustering as described here, with a short graphic (from their website) below.
```
              DNS Round Robin
                     |
               Load Balancer
               /           \
        Cluster1           Cluster2
        /      \           /      \
   Tomcat1  Tomcat2   Tomcat3  Tomcat4
```
Then, build the database cluster as described here, whose graphic is shown below.
Both clusters (Tomcat and PostgreSQL) are front-ended by HAProxy, as shown in the graphic below.
Use an IaaS provider
Or, perhaps, just set up the servers in AWS and let AWS manage all the balancing while we pay per use, for example as described here for a typical setup and here for AWS PostgreSQL.