Feign/Ribbon/Eureka - a RestClient backed by an Apache HttpClient pool is created but never used - spring-cloud

We are using Feign on top of Ribbon and Eureka.
We noticed that a com.netflix.niws.client.http.RestClient instance is automatically created for each Feign client but never used. Instead, the Feign.Builder creates a feign.ribbon.RibbonClient that delegates the actual HTTP call to a feign.Client.Default instance. The latter uses the standard Java HttpURLConnection, without any connection pooling.
Unfortunately, each of these apparently useless RestClient instances (one per Feign client) comes with its own Apache HttpClient, its own connection pool, a housekeeping thread and metrics machinery...
A quick look at the /metrics actuator endpoint shows metrics like:
counter.servo.<client name>_createnew: 0
counter.servo.<client name>_delete: 0
counter.servo.<client name>_release: 0
counter.servo.<client name>_request: 0
counter.servo.<client name>_reuse: 0
Those metrics are created by com.netflix.http4.NamedConnectionPool. Their values stay at 0 regardless of activity.
Has anyone experienced the same behaviour?
Why are these RestClient instances created for each feign client and never used?

The observed behaviour was caused by issue https://github.com/spring-cloud/spring-cloud-netflix/issues/312.
The fix is scheduled to be included in Spring Cloud 1.0.2.
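In the meantime, if you want Feign calls to actually go through a pooled Apache HttpClient instead of feign.Client.Default, you can hand one to the builder yourself via the feign-httpclient module. A minimal sketch, assuming feign-core and feign-httpclient are on the classpath (the UserApi interface, pool sizes and target URL are made up; note this plugs the pooled client into plain Feign and bypasses Ribbon load balancing):

import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.httpclient.ApacheHttpClient;
import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClients;

public class PooledFeignSketch {

    // Hypothetical Feign target interface, for illustration only.
    interface UserApi {
        @RequestLine("GET /users/{id}")
        String user(@Param("id") String id);
    }

    public static void main(String[] args) {
        // Apache HttpClient with an explicit connection pool.
        HttpClient httpClient = HttpClients.custom()
                .setMaxConnTotal(200)    // pool-wide connection limit
                .setMaxConnPerRoute(50)  // per-host connection limit
                .build();

        // Wire the pooled client into Feign instead of the default
        // HttpURLConnection-based feign.Client.Default.
        UserApi api = Feign.builder()
                .client(new ApacheHttpClient(httpClient))
                .target(UserApi.class, "http://user-service.example.com");

        System.out.println(api.user("42"));
    }
}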

Related

Why am I experiencing endless connection timeouts using the Quarkus MicroProfile reactive REST client

At some point in my Quarkus app's life (under Kubernetes) it starts getting endless connection timeouts from multiple different hosts (the timeout is configured to be 1 second). From that point on, the app never recovers until I restart the k8s pod.
These endless connection timeouts are not due to the hosts, since other apps in the cluster do not suffer from this; also, a restart of my app fixes the problem.
I am declaring multiple hosts (base-uri) through the Quarkus application.properties. (Maybe it's using a single Vert.x/Netty event loop and that's wrong?)
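For reference, this is roughly what such a per-host declaration looks like with the MicroProfile REST client; a minimal sketch where the InventoryClient interface, the inventory-api config key and the URL are made up:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Hypothetical rest client interface, one per remote host.
@RegisterRestClient(configKey = "inventory-api")
@Path("/items")
public interface InventoryClient {
    @GET
    String listItems();
}

And the matching application.properties entries, one base-uri plus timeouts (in milliseconds) per client:

inventory-api/mp-rest/url=http://inventory.example.internal
inventory-api/mp-rest/connectTimeout=1000
inventory-api/mp-rest/readTimeout=1000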

Configuring wait time of SOAP request node and SOAP input node in IIB

I am using IIB V10. Can we increase the max client wait time of the SOAP Input node beyond the 180-second default in IIB? Also, can we configure the request timeout of the SOAP Request node from the 120-second default to a higher number?
The IIB documentation describes these timeouts in detail here:
maxClientWaitTime of SOAP Input node.
requestTimeout of SOAP Request node.
You can configure these values either directly in the flow, as properties of the nodes, or via BAR overrides before deployment, as sketched below.
There is also a general chapter called Configuring message flows to process timeouts, which describes the timeout handling of these synchronous nodes.
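For example, a BAR override of both timeouts with mqsiapplybaroverride could look like this; a sketch where the BAR file name, flow name and node labels are made up, and the property names are the documented ones above (values in seconds):

mqsiapplybaroverride -b MyApp.bar -o MyApp.overridden.bar -m "MyFlow#SOAP Input.maxClientWaitTime=300;MyFlow#SOAP Request.requestTimeout=300"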

Spring Cloud Gateway blocking requests for route discovery

I'm using Spring Cloud Gateway from spring-cloud-starter-gateway version 2.1.0.RELEASE, and I need to understand why the gateway blocks requests to perform the DiscoveryClientRouteDefinitionLocator process.
Spring Cloud Version: Greenwich.RELEASE.
I have two environments: staging and production.
In production we have a working gateway with the following latency for /actuator/health call:
I was investigating why those spikes occur on a simple health call, and I figured out that the gateway sometimes blocks requests (even health or real microservice calls) to perform route discovery for all my microservices.
We use Consul as the discovery server, and I tried to test this latency in my staging environment (with far fewer hardware resources for Consul). The impact of this blocking is clear:
After improving the Consul hardware resources we have no more big spikes, but the latency for a health call is still not perfect (with minor spikes while discovering all routes):
I need to ask: why is Spring Cloud Gateway blocking requests even though it has a caching feature? Shouldn't this process run in the background? What am I doing wrong? Is it really an issue with Spring Cloud Gateway?
Thank you.
As discussed here, previous versions of Spring Cloud Gateway used a blocking discovery client.
Using versions newer than 2.1.5.RELEASE will result in a more asynchronous gateway that doesn't make as many blocking calls.
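For context, the discovery-driven route mapping in question is the DiscoveryClientRouteDefinitionLocator, which is typically switched on with properties like these (a minimal sketch; only the first property is strictly required, the second is illustrative):

spring.cloud.gateway.discovery.locator.enabled=true
spring.cloud.gateway.discovery.locator.lower-case-service-id=true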

Does gRPC server spin up a new thread for each request?

I tried profiling a gRPC Java server, and I mainly see the following sets of threads:
grpc-default-executor threads: one created for each incoming request.
grpc-default-worker-ELG threads: presumably these listen for incoming gRPC requests and hand them off to the "grpc-default-executor" threads above.
Overall, is the gRPC Java server Netty-style or Jetty/Tomcat-style? Or can it be configured to run either way?
The gRPC Java server is exposed in a style closer to Jetty/Tomcat, except that it is asynchronous. That is, in normal Servlets each request consumes a thread until it is complete. While newer Servlet versions let you detach from the dedicated thread and continue work asynchronously (freeing the thread for other use), that is more uncommon. In gRPC you are free to work in either style. Note that gRPC uses a cachedThreadPool by default to reuse threads; on the server side it's a good idea to replace the default executor with your own, generally fixed-size, pool via ServerBuilder.executor().
Internally gRPC Java uses the Netty style, which means fully non-blocking. You may use ServerBuilder.directExecutor() to run on the Netty threads, although in that case you may want to specify NettyServerBuilder.bossEventLoopGroup(), workerEventLoopGroup(), and, for compatibility, channelType().
As far as I know, you can specify directExecutor() when building the gRPC server/client, which ensures all work is done on the I/O thread and so threads are shared. The default is not to do this for safety reasons, as you need to be very careful about what you do on the I/O thread (for example, you should never block there).
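A minimal sketch of both options discussed above, using the public ServerBuilder API (the port and pool size are made up, and the commented addService() line is a placeholder for your generated service implementation):

import java.util.concurrent.Executors;
import io.grpc.Server;
import io.grpc.ServerBuilder;

public class GrpcServerSketch {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(8443)
                // Replace the default cachedThreadPool with a bounded,
                // fixed-size pool for application-level handler work.
                .executor(Executors.newFixedThreadPool(16))
                // Alternative: .directExecutor() runs handlers directly on
                // the Netty event-loop threads; never block if you do this.
                // .addService(new MyServiceImpl()) // hypothetical generated impl
                .build()
                .start();
        server.awaitTermination();
    }
}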

How to make Vertx server to serve request in parallel?

How do I make a Vert.x server serve requests in parallel? If, let's say, there are 50 users submitting HTTP requests to the Vert.x server, I want all user requests to be served in parallel.
Asking in the context of the Vert.x 2 manual.
As far as I know, it is the same as Vert.x 3: HTTP servers handle requests in parallel, unless you block the event loop.
For all 50 user requests to be served in parallel, run your verticle with an increased number of instances, which will, in short, scale your application.
Run 50 instances of the Java verticle:
vertx run MyVerticle.java -instances 50
From the Vert.x manual:
-instances : The number of instances of the verticle to instantiate in the Vert.x server. Each verticle instance is strictly single threaded so to scale your application across available cores you might want to deploy more than one instance.
Analogy: one user request / one Vert.x instance.
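In Vert.x 3's Java API the same scaling is done with DeploymentOptions; a minimal sketch, where the verticle class name is made up:

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Scale {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy 50 single-threaded instances of the verticle;
        // Vert.x spreads them across its event-loop threads.
        vertx.deployVerticle("com.example.MyVerticle",
                new DeploymentOptions().setInstances(50));
    }
}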