Spring Cloud Feign: Is Feign efficient enough compared with RestTemplate?

I quickly browsed the source code of Feign and found that it uses the JDK's HttpURLConnection to issue HTTP requests and closes the connection when the request finishes, without using a connection pool. I doubt the efficiency of this approach. Then I read the documentation of Spring's RestTemplate, which says RestTemplate can switch to Apache HttpClient or OkHttp to send HTTP requests:
Note: by default the RestTemplate relies on standard JDK facilities to establish HTTP connections. You can switch to use a different HTTP library such as Apache HttpComponents, Netty, and OkHttp through the HttpAccessor.setRequestFactory(org.springframework.http.client.ClientHttpRequestFactory) property.
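For reference, switching the request factory looks roughly like this (a sketch using Apache HttpComponents 4.x; the pool sizes are arbitrary values picked for illustration):

```java
import org.apache.http.impl.client.HttpClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    @Bean
    public RestTemplate pooledRestTemplate() {
        // Apache HttpClient with a connection pool instead of the default
        // JDK HttpURLConnection-based request factory.
        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(
                HttpClients.custom()
                        .setMaxConnTotal(200)      // illustrative pool sizes
                        .setMaxConnPerRoute(50)
                        .build()));
    }
}
```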
Does this mean RestTemplate is better than Feign from a performance point of view?

Old question, but probably worth mentioning here that as of Spring 5 the RestTemplate is in maintenance mode in favor of the WebClient.

In comparison with Feign, RestTemplate has the advantage of the default client's performance (although that client is known to have issues with connection resets on Java 11), but it loses on the ease of integrating logging libraries, and its programmatic approach is more verbose and harder to test.
Another good point in favor of Feign is how easy it is to implement fallback strategies in combination with Hystrix, or by implementing a custom ErrorDecoder.
If you want to go deeper into how Feign is implemented, take a look at this article.
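A minimal sketch of the custom ErrorDecoder idea mentioned above (the exception type is made up for illustration):

```java
import feign.Response;
import feign.codec.ErrorDecoder;

public class CustomErrorDecoder implements ErrorDecoder {

    private final ErrorDecoder fallback = new ErrorDecoder.Default();

    @Override
    public Exception decode(String methodKey, Response response) {
        // Map selected HTTP statuses to domain exceptions; delegate the rest.
        if (response.status() == 404) {
            return new MemberNotFoundException(methodKey);  // hypothetical exception type
        }
        return fallback.decode(methodKey, response);
    }
}
```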
Talking about performance, another point to consider about RestTemplate is that it uses the Java Servlet API, which is based on the thread-per-request model. This means the thread blocks until the web client receives the response, which can degrade performance and waste resources such as memory and CPU cycles, especially when communicating with slow services. Feign, on the other hand, does not suffer from this limitation, because it can be used with asynchronous, non-blocking clients.
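And regarding the connection-pooling concern from the question: Feign's underlying client is pluggable as well. A rough sketch using the feign-okhttp module (the target interface and URL are made up), which gives you OkHttp's connection pool instead of a fresh HttpURLConnection per request:

```java
import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.okhttp.OkHttpClient;

// Hypothetical target API, declared the plain-Feign way.
interface UserApi {
    @RequestLine("GET /users/{id}")
    String findUser(@Param("id") long id);
}

public class FeignOkHttpExample {
    public static void main(String[] args) {
        // OkHttp keeps a connection pool under the hood, unlike the default
        // HttpURLConnection-based client.
        UserApi api = Feign.builder()
                .client(new OkHttpClient())
                .target(UserApi.class, "http://localhost:8080");

        System.out.println(api.findUser(42L));
    }
}
```

With Spring Cloud OpenFeign, the equivalent switch is usually done by adding the feign-okhttp dependency and enabling it in configuration rather than through the builder.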

Related

Vertx WebClient shared vs single across multiple verticles?

I am using Vert.x as an API gateway to route calls to downstream services.
As of now, I am using a single WebClient instance which is shared across multiple verticles (injected through Guice).
Does it make sense for each verticle to have its own WebClient? Will it help in boosting performance? (Each of my gateway instances runs 64 verticles and handles approximately 1,000 requests per second.)
What are the pros and cons of each approach?
Can someone help to figure out what's the ideal strategy for the same?
Thanks
Vert.x is optimized for using a single WebClient per-Verticle. Sharing a single WebClient instance between threads might work, but it can negatively affect performance, and could lead to some code running on the "wrong" event-loop thread, as described by Julien Viet, the lead developer of Vert.x:
So if you share a web client between verticles, then your verticle might reuse a connection previously opened (because of pooling) and you will get callbacks on the event loop you won't expect. In addition there is synchronization in the web client that might become contended when used intensively from different threads.
Additionally, the Vert.x documentation for HttpClient, which is the underlying object used by WebClient, explicitly states not to share it between Vert.x Contexts (each Verticle gets its own Context):
The HttpClient can be used in a Verticle or embedded. When used in a Verticle, the Verticle should use its own client instance. More generally a client should not be shared between different Vert.x contexts as it can lead to unexpected behavior. For example a keep-alive connection will call the client handlers on the context of the request that opened the connection, and subsequent requests will use the same context.
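A minimal sketch of the per-verticle approach (Vert.x 3.x callback style; the host, port and health-check path are placeholders):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.client.WebClient;

public class GatewayVerticle extends AbstractVerticle {

    private WebClient client;

    @Override
    public void start() {
        // Created inside start(), so the client is bound to this verticle's
        // context and its callbacks run on this verticle's event loop.
        client = WebClient.create(vertx);

        client.get(8081, "downstream-service", "/health")   // placeholder target
              .send(ar -> {
                  if (ar.succeeded()) {
                      System.out.println("Downstream status: " + ar.result().statusCode());
                  } else {
                      ar.cause().printStackTrace();
                  }
              });
    }
}
```

Deploying this verticle with DeploymentOptions().setInstances(64) then gives each of the 64 instances its own client rather than one shared instance.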

Will WebFlux have any bottlenecks in such architecture?

We're about to migrate from a monolithic design to a microservice architecture, trying to choose the best way to replace JAX-WS with REST, and we are considering Spring WebFlux.
We currently have a JAX-WS endpoint deployed on Tomcat EE serving requests from third-party clients. The web service endpoint makes a long-running blocking call to the database and then sends a SOAP response to the client with the data retrieved from the DB (Oracle).
The Oracle DB will be replaced with a NoSQL database soon (possibly MongoDB). Since MongoDB supports asynchronous calls, we're considering replacing the current implementation with a microservice exposing a REST endpoint based on WebFlux.
We have about 2,500 req/sec at peak, so the current endpoint often goes down with an OutOfMemoryError. That was the root cause that pushed us towards the migration.
My idea is to create a non-blocking endpoint that calls MongoDB asynchronously and sends a REST response to the client. So I have a few questions concerning the basic features that WebFlux provides:
1. As far as I understand, WebFlux provides built-in backpressure control at the business level (not TCP flow control), and it generally works via Reactive Streams. Since our clients are not reactive, does that mean this kind of backpressure control cannot be applied here?
2. Suppose that calls to the new database remain long-running in the new architecture. Since Netty uses an event loop to serve incoming requests, is the following situation possible: the microservice has accepted all incoming HTTP connections, invoked an async call to the DB and subscribed the resulting Mono on a scheduler, but, because the number of requests keeps growing explosively, the application keeps creating new workers in the scheduler pools until it crashes? Is this a realistic scenario?
3. Suppose that calls to the database remain synchronous. Is there a way to handle them with WebFlux in such a way that the microservice stays reachable under load? (See the sketch after this list.)
4. Which bottlenecks can be found in such a design? Does this solution look adequate?
5. Does Netty (or Reactor Netty, or whatever) have a tool to limit the number of requests processed simultaneously? Say I want to limit the endpoint to serve no more than 100 parallel requests and reject all requests above that point; is that possible?
6. Suppose I create a huge number of threads serving async (or maybe sync) calls to the DB. Where is the breaking point at which the application will crash or stop responding to incoming HTTP requests? What will happen there: will we run out of memory, or...?
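For the third question, the kind of thing I have in mind is pushing the remaining blocking calls onto Reactor's bounded worker pool so they never occupy a Netty event-loop thread (just a sketch; BlockingReportRepository and Report are hypothetical placeholders):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Hypothetical blocking DAO and result type, only here to make the sketch compile.
interface BlockingReportRepository {
    Report findById(String id);
}
class Report { }

public class ReportHandler {

    private final BlockingReportRepository repository;

    public ReportHandler(BlockingReportRepository repository) {
        this.repository = repository;
    }

    public Mono<Report> findReport(String id) {
        // fromCallable defers the blocking call; subscribeOn moves it off the
        // Netty event loop. boundedElastic() is capped (10 x CPU cores by
        // default) and queues excess tasks instead of growing without limit.
        return Mono.fromCallable(() -> repository.findById(id))
                .subscribeOn(Schedulers.boundedElastic())
                .timeout(Duration.ofSeconds(2));
    }
}
```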
Finally, there were no major performance issues during our pilot project. But unfortunately we didn't take into account some Linux-specific (and also OpenShift) TCP tuning properties.
They can significantly affect overall performance; in our case we handled about 10 times more requests after tuning.
So pay attention to net.core.somaxconn and the other related parameters.
I've summarized our experience in the article.

JavaFX interactivity with Spring MVC Restful

I am building a JavaFX client application communicating with a Spring MVC RESTful server (Spring Boot 1.4.1) application, which works as expected.
Some features require fast interaction with the server to validate limits and availability before proceeding to the next input, for example checking whether an entered member number is valid and whether it has exceeded its insert limit, while records are being accumulated (each confirmed record is temporarily stored in a TableView before being sent to the server for storage) and before the records are actually saved.
Within the scope of JavaFX and the Spring Framework (on both frontend and backend), how can such features be made to look more interactive (or live) than the usual "let-me-wait-for-response" approach?
If the question is not clear, just ask; otherwise I think it is.
It appears that the only interaction you have between the client (JavaFX) and server (Spring Boot) is through a REST API. This will make short bursts of data (such as validation) take longer.
Switching to another communication mechanism (for example gRPC or Netty with Msgpack) could help. Note that once you open the door for non-REST calls it'll make you re-think the use of REST in the first place.
Non-REST communication may not be an option depending on your requirements (firewalls, etc) or may need additional setup in order to surmount other obstacles, in other words, there's no free lunch.
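Whatever transport you choose, the JavaFX side usually feels "live" when the validation call runs off the FX Application Thread and only the result touches the UI. A sketch with javafx.concurrent.Task (the REST client, controls and wiring are hypothetical placeholders for your own code):

```java
import javafx.concurrent.Task;
import javafx.scene.control.Label;
import javafx.scene.control.TextField;

public class MemberFormController {

    // Hypothetical blocking REST client, e.g. a thin wrapper around RestTemplate.
    interface MemberRestClient {
        boolean isMemberValid(String memberNumber);
    }

    private final TextField memberNumberField = new TextField();   // injected via FXML in a real app
    private final Label statusLabel = new Label();
    private final MemberRestClient restClient;

    public MemberFormController(MemberRestClient restClient) {
        this.restClient = restClient;
    }

    // Call this when the member number field loses focus or changes.
    void validateMemberNumber() {
        String memberNumber = memberNumberField.getText();

        Task<Boolean> validation = new Task<Boolean>() {
            @Override
            protected Boolean call() {
                // The blocking REST call runs on a background thread,
                // not on the JavaFX Application Thread.
                return restClient.isMemberValid(memberNumber);
            }
        };
        validation.setOnSucceeded(e ->
                statusLabel.setText(validation.getValue() ? "OK" : "Limit exceeded"));
        validation.setOnFailed(e ->
                statusLabel.setText("Validation service unavailable"));

        new Thread(validation, "member-validation").start();
    }
}
```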

Designing a REST API with req/resp and pub/sub requirements

I'm currently designing a REST interface for a distributed system. It is a client/server architecture with two message exchange patterns:
req/resp: the most RESTful approach; it would be a CRUD interface to access/create/modify/delete objects on the server.
pub/sub: this is my main doubt. I need the server to send asynchronous notifications to the client as soon as possible.
Searching the web, I found that one solution could be to implement REST servers on both the server and the client: Publish/subscribe REST-HTTP Simple Protocol web services architecture?
Another alternative would be to implement blocking REST, so the client doesn't need to listen on a specific port: Using blocking REST requests to implement publish/subscribe
I would like to know which options you would consider to implement an interface like this one. Thanks!
WebSockets can provide a channel for the service to update web clients live. There are other techniques, like HTTP long polling, where the client makes a "blocking" request (as you referred to it): the service holds the request for a period shorter than a timeout (say 50 seconds) and writes a response when it has data. The web client then immediately issues another request. This loop creates a continuous channel through which messages can be "sent" from the server to the client, but it's initiated from the client (which plays nicely with firewalls, proxies, etc.).
There are libraries such as Socket.IO, SignalR and many others that wrap this logic, gracefully fall back from WebSockets to long polling for you, and abstract away the details.
I would recommend writing some sample WebSocket and long-polling examples just to understand them, but then relying on libraries like those mentioned above to get it right.
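For the long-polling variant in Spring MVC specifically, a rough sketch with DeferredResult (the Notification type, endpoint path, timeout and the way results get completed are all illustrative assumptions):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class NotificationController {

    // Hypothetical notification payload.
    public static class Notification {
        public String message;
    }

    // Waiting long-poll clients; completed when an event arrives.
    private final Queue<DeferredResult<Notification>> waiting = new ConcurrentLinkedQueue<>();

    @GetMapping("/notifications/poll")
    public DeferredResult<Notification> poll() {
        // Hold the request open for up to 50 seconds; the client re-issues it
        // as soon as it gets a response or times out.
        DeferredResult<Notification> result = new DeferredResult<>(50_000L);
        result.onCompletion(() -> waiting.remove(result));
        waiting.add(result);
        return result;
    }

    // Called by whatever produces events in your system (hypothetical hook).
    public void publish(Notification notification) {
        DeferredResult<Notification> client;
        while ((client = waiting.poll()) != null) {
            client.setResult(notification);
        }
    }
}
```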

Microservices, AMQP or REST [duplicate]

A little background.
Very big monolithic Django application. All components use the same database. We need to separate services so we can independently upgrade some parts of the system without affecting the rest.
We use RabbitMQ as a broker to Celery.
Right now we have two options:
HTTP Services using a REST interface.
JSON-RPC over AMQP to an event-loop service.
My team is leaning towards HTTP because that's what they are familiar with, but I think the advantages of using RPC over AMQP far outweigh that familiarity.
AMQP provides us with the capabilities to easily add in load balancing, and high availability, with guaranteed message deliveries.
Whereas with HTTP we have to create client HTTP wrappers to work with the REST interfaces, we have to put in a load balancer and set up that infrastructure in order to have HA etc.
With AMQP I can just spawn another instance of the service, it will connect to the same queue as the other instances and bam, HA and load balancing.
Am I missing something with my thoughts on AMQP?
First, a few basics:
REST and RPC are architectural patterns; AMQP is a wire-level protocol and HTTP is an application protocol, both of which run on top of TCP/IP.
AMQP is a specialized protocol, while HTTP is a general-purpose one; thus, HTTP has a damn high overhead compared to AMQP.
AMQP is asynchronous by nature, whereas HTTP is synchronous by nature.
Both REST and RPC use data serialization; which format is up to you, and it depends on your infrastructure. If you are using Python everywhere, I think you can use Python's native serialization, pickle, which should be faster than JSON or any other format.
Both HTTP+REST and AMQP+RPC can run in a heterogeneous and/or distributed environment.
So if you are choosing between HTTP+REST and AMQP+RPC, the answer really comes down to infrastructure complexity and resource usage. Without any specific requirements, both solutions will work fine, but I would rather build some abstraction to be able to switch between them transparently.
You said that your team is familiar with HTTP but not with AMQP. If development time is an important factor, you have your answer.
If you want to build an HA infrastructure with minimal complexity, I guess the AMQP protocol is what you want.
I have experience with both of them, and the advantages of RESTful services are:
they map well onto a web interface
people are familiar with them
easy to debug (due to the general-purpose nature of HTTP)
easy to provide an API to third-party services.
Advantages of AMQP-based solution:
damn fast
flexible
cost-effective (in terms of resource usage)
Note that you can provide a RESTful API to third-party services on top of your AMQP-based API, since REST is not a protocol but rather a paradigm; you just have to keep that in mind when building your AMQP RPC API. I have done it this way to provide an API to external third-party services and to provide access to the API from those parts of the infrastructure that run on an old codebase or where it is not possible to add AMQP support.
If I am right, your question is about how to better organize communication between different parts of your software, not how to provide an API to end users.
If you have a high-load project, RabbitMQ is a damn good piece of software and you can easily add any number of workers running on different machines. It also has mirroring and clustering out of the box. And one more thing: RabbitMQ is built on top of Erlang/OTP, which is a highly reliable, stable platform ... (bla-bla-bla); that is good not only for marketing but for engineers too. I had an issue with RabbitMQ only once, when nginx logs took up all the disk space on the same partition where RabbitMQ was running.
UPD (May 2018):
Saurabh Bhoomkar posted a link to the MQ vs. HTTP article written by Arnold Shoon on June 7th, 2012; here's a copy of it:
I was going through my old files and came across my notes on MQ and thought I’d share some reasons to use MQ vs. HTTP:
If your consumer processes at a fixed rate (i.e. can’t handle floods to the HTTP server [bursts]) then using MQ provides the flexibility for the service to buffer the other requests vs. bogging it down.
Time independent processing and messaging exchange patterns — if the thread is performing a fire-and-forget, then MQ is better suited for that pattern vs. HTTP.
Long-lived processes are better suited for MQ as you can send a request and have a separate thread listening for responses (note WS-Addressing allows HTTP to process in this manner but requires both endpoints to support that capability).
Loose coupling where one process can continue to do work even if the other process is not available vs. HTTP having to retry.
Request prioritization where more important messages can jump to the front of the queue.
XA transactions – MQ is fully XA compliant – HTTP is not.
Fault tolerance – MQ messages survive server or network failures – HTTP does not.
MQ provides for 'assured' delivery of messages once and only once; HTTP does not.
MQ provides the ability to do message segmentation and message grouping for large messages; HTTP does not have that ability as it treats each transaction separately.
MQ provides a pub/sub interface whereas HTTP is point-to-point.
UPD (Dec 2018):
As noted by Kevin in the comments below, it's questionable whether RabbitMQ scales better than RESTful services. My original answer was based on simply adding more workers, which is just one part of scaling, and as long as a single AMQP broker's capacity is not exceeded that holds true; beyond that point it requires more advanced techniques like Highly Available (Mirrored) Queues, which means both HTTP- and AMQP-based services take on some non-trivial complexity to scale at the infrastructure level.
After careful thought I also removed the claim that maintaining an AMQP broker (RabbitMQ) is simpler than any HTTP server: the original answer was written in June 2013 and a lot has changed since then, but the main change is that I have gained more insight into both approaches, so the best I can say now is that "your mileage may vary".
Also note that comparing HTTP and AMQP is apples to oranges to some extent, so please do not interpret this answer as the ultimate guidance to base your decision on, but rather take it as one source, or as a reference for further research to find out which solution will match your particular case.
The irony of the solution the OP had to accept is that AMQP and other MQ solutions are often used to insulate callers from the inherent unreliability of HTTP-only services: to provide some level of timeout and retry logic and message persistence so the caller doesn't have to implement its own HTTP insulation code. A very thin HTTP gateway or adapter layer over a reliable AMQP core, with the option to go straight to AMQP using a more reliable client protocol like JSON-RPC, would often be the best solution for this scenario.
Your thoughts on AMQP are spot on!
Furthermore, since you are transitioning from a monolithic to a more distributed architecture, adopting AMQP for communication between the services is a better fit for your use case. Here is why…
Communication via a REST interface, and by extension HTTP, is synchronous in nature. This synchronous nature of HTTP makes it a not-so-great option as the communication pattern in a distributed architecture like the one you describe. Why?
Imagine you have two services, service A and service B, in your Django application that communicate via REST API calls. These API calls usually play out this way: service A makes an HTTP request to service B, waits idly for the response, and only proceeds to the next task after getting a response from service B. In essence, service A is blocked until it receives a response from service B.
This is problematic because one of the goals of microservices is to build small autonomous services that are always available even if one or more services are down: no single point of failure. The fact that service A connects directly to service B and, in fact, waits for a response introduces a level of coupling that detracts from the intended autonomy of each service.
AMQP, on the other hand, is asynchronous in nature; this asynchronous nature of AMQP makes it great for use in your scenario and others like it.
If you go down the AMQP route, instead of service A making requests to service B directly, you can introduce an AMQP based MQ between these two services. Service A will add requests to the Message Queue. Service B then picks up the request and processes it at its own pace.
This approach decouples the two services and, by extension, makes them autonomous. This is true because:
If service B fails unexpectedly, service A will keep accepting requests and adding them to the queue as though nothing happened. The requests would always be in the queue for service B to process them when it’s back online.
If service A experiences a spike in traffic, service B won't even notice, because it only picks up requests from the message queue at its own pace.
This approach also has the added benefit of being easy to scale: you can add more queues or create copies of service B to process more requests.
Lastly, service A does not have to wait for a response from service B, and the end users don't have to wait long either; this leads to improved performance and, by extension, a better user experience.
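If the services are Spring-based, the queue-in-the-middle pattern described above can look roughly like this with Spring AMQP (the exchange, queue and payload names are made up; broker and queue declarations are omitted):

```java
import java.io.Serializable;

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

// Hypothetical message payload; Serializable so the default converter can handle it.
class OrderPayload implements Serializable {
    String orderId;
}

@Service
public class OrderPublisher {            // "service A" side

    private final RabbitTemplate rabbitTemplate;

    public OrderPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void submit(OrderPayload order) {
        // Fire-and-forget: service A is not blocked waiting for service B.
        rabbitTemplate.convertAndSend("orders-exchange", "orders.created", order);
    }
}

@Service
class OrderWorker {                      // "service B" side

    @RabbitListener(queues = "orders-queue")
    public void handle(OrderPayload order) {
        // Processed at service B's own pace; messages simply wait in the queue
        // if service B is down or busy.
    }
}
```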
Just in case you are considering moving from HTTP to AMQP in your distributed architecture and you are not sure how to go about it, you can check out this 7-part beginner guide on message queues and microservices. It shows you how to use a message queue in a distributed architecture by walking you through a demo project.