Vert.x WebClient: shared across multiple verticles vs. one per verticle?

I am using Vert.x as an API gateway to route calls to downstream services.
At the moment I am using a single WebClient instance that is shared across multiple verticles (injected through Guice).
Does it make sense for each verticle to have its own WebClient? Will it help boost performance? (Each of my gateway instances runs 64 verticles and handles approximately 1000 requests per second.)
What are the pros and cons of each approach?
Can someone help me figure out the ideal strategy here?
Thanks

Vert.x is optimized for using a single WebClient per verticle. Sharing a single WebClient instance between threads might work, but it can hurt performance and can lead to code running on the "wrong" event-loop thread, as described by Julien Viet, the lead developer of Vert.x:
So if you share a web client between verticles, then your verticle might reuse a connection previously open (because of pooling) and you will get callbacks on the event loop you won't expect. In addition there is synchronization in the web client that might become contended when used intensively from different threads.
Additionally, the Vert.x documentation for HttpClient, which is the underlying object used by WebClient, explicitly states not to share it between Vert.x Contexts (each Verticle gets its own Context):
The HttpClient can be used in a Verticle or embedded.
When used in a Verticle, the Verticle should use its own client instance.
More generally a client should not be shared between different Vert.x contexts as it can lead to unexpected behavior.
For example a keep-alive connection will call the client handlers on the context of the request that opened the connection, subsequent requests will use the same context.
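For illustration, here is a minimal sketch of the per-verticle approach (the class name, pool size, host, port and path are placeholders, assuming Vert.x 4): each verticle creates its own WebClient in start(), so its connection pool and callbacks stay on that verticle's event loop.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;

public class GatewayVerticle extends AbstractVerticle {

    // One client per verticle, created on this verticle's context.
    private WebClient client;

    @Override
    public void start(Promise<Void> startPromise) {
        // Creating the client inside start() ties its connections and
        // callbacks to this verticle's event loop.
        WebClientOptions options = new WebClientOptions();
        options.setMaxPoolSize(20); // per-verticle pool size, arbitrary value
        client = WebClient.create(vertx, options);
        startPromise.complete();
    }

    private void forward() {
        // Hypothetical downstream call made from this verticle.
        client.get(8080, "downstream-service", "/api/resource")
              .send()
              .onSuccess(resp -> { /* write resp.bodyAsBuffer() to the gateway response */ })
              .onFailure(err -> { /* map the failure to a 502/504 for the caller */ });
    }
}
```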

Related

Starting with reactive DB access in a blocking monolith

In a DB-heavy monolith based on WildFly, does it make sense to convert the DB access to reactive calls as a first step? Should I expect performance benefits?
Also, the DB is Sybase, and the only 'generic' JDBC client I know of is the one from Vert.x, which implies that I will have to embed Vert.x inside my WildFly application. I understand that they are sort of alternatives, but I can't find any other options.
I would love to hear your thoughts on the two points I'm raising. In general, I can't commit to a full transition from WildFly to Quarkus/Vert.x from the get-go, as it would take a lot of resources, so I thought I could start smaller...
Vert.x is a toolkit, which means, for example, that you do not need to use the web server it provides, nor any other module. It's also very lightweight, so you will only add a few more dependencies to your application. So yes, it can make sense to integrate Vert.x.
vertx-jdbc-client, however, cannot magically transform blocking calls into non-blocking calls. Instead, it off-loads the blocking calls onto Vert.x's worker thread pool. That leads to another effect: the DB call you used to wait for will now return immediately, leaving you with nothing but a Future. That Future will eventually hold the expected result.
Going further upstream in your code (the direction your user's request came from), this means that you will have to either
(1) defer processing of the result via Future.map() or Future.compose(), or
(2) block the thread to get the result immediately.
You will win nothing with (2), so rule that out.
When you go for (1), you must defer all further processing, up to the point where the incoming request is originally handled. If that is, for example, a Servlet, you have to use Asynchronous Processing to make sure that WildFly does not commit the response after the doGet, doPost, etc. method exits.
The result of all this will be that WildFly now handles your request asynchronously, with Vert.x managing the DB interaction. You can do that. But it would be more idiomatic to your current setup to just use Asynchronous Processing (or Spring's @Async feature) and wrap all of your code in a Runnable. Neither approach will speed up request processing itself, because the processing depends on the slower DB. However, WildFly will be able to process more requests, because the threads it assigns to requests will no longer be blocked.
Having said all that, if you want to migrate to Quarkus in small steps, you should do it service by service. Identify the Servlets (or Controllers) which do the work, and port them one by one to Quarkus. If sessions are your problem, you could possibly share them between WildFly and Quarkus using Infinispan.
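As a rough sketch of option (1), assuming the Vert.x 4 JDBCPool API (the Sybase JDBC URL, credentials, table and query are placeholders): the blocking JDBC work runs on Vert.x worker threads, and the caller composes the returned Future instead of waiting on it.

```java
import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.jdbcclient.JDBCPool;

public class ReactiveDbAccess {

    private final JDBCPool pool;

    public ReactiveDbAccess(Vertx vertx) {
        // The pool wraps a regular JDBC driver; the blocking calls are
        // executed on Vert.x worker threads, not on the caller's thread.
        pool = JDBCPool.pool(vertx, new JsonObject()
                .put("url", "jdbc:sybase:Tds:db-host:5000/mydb") // placeholder URL
                .put("user", "app")
                .put("password", "secret"));
    }

    public Future<Integer> countOrders() {
        // Returns immediately with a Future; the result arrives later.
        return pool.query("SELECT COUNT(*) AS total FROM orders")
                   .execute()
                   .map(rows -> rows.iterator().next().getInteger("total"));
    }
}
```

Whatever calls countOrders() then continues via map()/compose()/onSuccess(), and in a Servlet you would only complete the AsyncContext once that Future completes.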

Will WebFlux have any bottlenecks in such an architecture?

We're currently about to migrate from a monolithic design to a microservice architecture, trying to choose the best way to replace JAX-WS with REST, and considering using Spring WebFlux.
We currently have a JAX-WS endpoint deployed on Tomcat EE serving requests from third-party clients. The web service endpoint makes a long-running blocking call to the database and then sends a SOAP response to the client with the data retrieved from the DB (Oracle).
The Oracle DB will be replaced with one of the NoSQL databases soon (possibly MongoDB). Since MongoDB supports asynchronous calls, we're considering substituting the current implementation with a microservice exposing a REST endpoint based on WebFlux.
We have about 2500 req/sec at peak, so the current endpoint often goes down with an OutOfMemoryError. That was the root cause that pushed us towards the migration.
My idea is to create a non-blocking endpoint which will call MongoDB asynchronously and send a REST response to the client. So I have a few questions about the basic features that WebFlux provides:
1. As far as I understand, WebFlux has built-in backpressure control at the business level (not TCP flow control), and it generally works via Reactive Streams. Since our clients are not reactive, does that mean this kind of backpressure control is not applicable here?
2. Suppose that calls to the new database remain long-running in the new architecture. Since Netty uses an EventLoop to serve incoming requests, is the following situation possible: the microservice has accepted all incoming HTTP connections, invoked async calls to the DB and subscribed the resulting Monos on a scheduler, but, since the request volume keeps growing explosively, the application keeps creating new workers in the scheduler pools until it crashes? Is this a realistic scenario?
3. Suppose that calls to the database remain synchronous. Is there a way to handle them with WebFlux in such a way that the microservice remains reachable under load?
4. Which bottlenecks can be found in such a design? Does this solution look adequate?
5. Does Netty (or Reactor Netty, or whatever) have a tool to limit the number of requests processed simultaneously? Say I want to limit the endpoint to serving no more than 100 parallel requests and reject all requests above that limit; is that possible?
6. Suppose I create a huge number of threads serving async (or maybe sync) calls to the DB. Where is the breaking point at which the application will crash or stop responding to incoming HTTP requests? What will happen there: will we run out of memory, or...?
Finally, there were no major performance issues during our pilot project. But unfortunately we didn't take into account some specific Linux (and also OpenShift) TCP tuning properties.
They can significantly affect overall performance; in our case we gained about 10 times more requests after tuning.
So pay attention to net.core.somaxconn and the other related parameters.
I've summarized our findings in the article.
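One concrete example of how the OS setting and the application interact (a hedged sketch for a Spring Boot WebFlux app on Reactor Netty; 4096 is an arbitrary value): the accept backlog the server requests via SO_BACKLOG is capped by net.core.somaxconn, so the two usually need to be raised together.

```java
import io.netty.channel.ChannelOption;
import org.springframework.boot.web.embedded.netty.NettyReactiveWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class NettyTuningConfig {

    @Bean
    public WebServerFactoryCustomizer<NettyReactiveWebServerFactory> nettyBacklogCustomizer() {
        // Ask Netty for a larger accept backlog. The effective value is still
        // capped by net.core.somaxconn, so that sysctl has to be raised on the
        // host / OpenShift node as well (e.g. net.core.somaxconn=4096).
        return factory -> factory.addServerCustomizers(
                httpServer -> httpServer.option(ChannelOption.SO_BACKLOG, 4096));
    }
}
```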

Throttle API calls to an external service using Scala

I have a service exposing a REST endpoint that, after a couple of transformations, calls a third-party service also via its REST endpoint.
I would like to implement some sort of throttling in my service to avoid being throttled by this third-party service. Note that my service's endpoint accepts only a single request, not a list of them. I'm using Play, and we also have Akka Streams as a dependency.
My first thought was to have my service save the requests into a database table and then have an Akka Streams Source, leveraging the throttle function, pick up the tasks, apply the transformations, and call the external service.
Is this a reasonable approach, or does it have any severe drawbacks?
Thanks!
Why save the requests to the database? Does the queue need to survive restarts, and/or do you run a load-balanced setup that needs to synchronize the requests somehow?
If you don't need the above, I'd think using just a Source.queue to hold the task data would work just as well.
And maybe you have already thought of this: if you want to make your endpoint more resilient, you should allow your API to send a 'sorry, busy' response and drop the request instead of queuing it once your queue grows beyond a certain size.
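A minimal sketch of that idea using the Akka Streams Java DSL (the same operators exist in the Scala DSL); the queue size, the rate and callThirdParty are placeholder assumptions, not your actual values.

```java
import akka.actor.ActorSystem;
import akka.stream.OverflowStrategy;
import akka.stream.QueueOfferResult;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.stream.javadsl.SourceQueueWithComplete;

import java.time.Duration;
import java.util.concurrent.CompletionStage;

public class ThrottledForwarder {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("throttler");

        // Bounded in-memory queue; new offers are dropped once it is full,
        // so the endpoint can answer "sorry, busy" instead of queuing forever.
        SourceQueueWithComplete<String> queue =
            Source.<String>queue(100, OverflowStrategy.dropNew())
                  // At most 10 downstream calls per second.
                  .throttle(10, Duration.ofSeconds(1))
                  // callThirdParty is a stand-in for the real REST call.
                  .to(Sink.foreach(ThrottledForwarder::callThirdParty))
                  .run(system);

        // Inside the Play action: offer the request and inspect the result.
        CompletionStage<QueueOfferResult> offered = queue.offer("request-payload");
        offered.thenAccept(result -> {
            if (result == QueueOfferResult.enqueued()) {
                // reply 202 Accepted (or wait for the downstream response)
            } else {
                // reply 503 "sorry, busy"
            }
        });
    }

    private static void callThirdParty(String payload) {
        // apply the transformations and call the external REST endpoint here
    }
}
```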

What happens to messages that arrive at a server implementing stream processing once the source has reached its bound?

I'm learning Akka Streams, but obviously this is relevant to any streaming framework :)
Quoting the Akka documentation:
Reactive Streams is just to define a common mechanism of how to move data across an asynchronous boundary without losses, buffering or resource exhaustion
Now, from what I understand: before streams, taking an HTTP server as an example, requests would come in, and when the receiver wasn't finished with a request, the new incoming requests would be collected in a buffer holding the waiting requests. The problem is that this buffer has an unknown size, and at some point, if the server is overloaded, we can lose requests that were waiting.
Then stream processing came into play and bounded this buffer so it is controllable... so we can predefine the number of messages (requests, in my example) we want to have in line and handle them one at a time.
My question: if we declare that a source in our server can hold at most 3 messages, what happens when the 4th one arrives?
I mean, when another server calls us and we are already handling 3 requests... what will happen to its request?
What you're describing is not actually the main problem that Reactive Streams implementations solve.
Backpressure in terms of the number of requests is solved with regular networking tools. For example, in Java you can configure the thread pool of a networking library (for example Netty) to some parallelism level, and the library will take care of accepting as many requests as possible. Or, if you use the synchronous sockets API, it is even simpler: you can postpone calling accept() on the server socket until all of the currently connected clients are served (a sketch of this is below). In either case, there is no "buffer" on either side; until the server accepts a connection, the client will simply be blocked (either inside a system call for blocking APIs, or in an event loop for async APIs).
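As a toy illustration of that second option (port, pool size and request handling are arbitrary assumptions): accept() is only called when a worker slot is free, so excess clients simply wait in the OS accept queue.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedAcceptServer {
    public static void main(String[] args) throws IOException, InterruptedException {
        ServerSocket server = new ServerSocket(9000);
        ExecutorService workers = Executors.newFixedThreadPool(4);
        Semaphore slots = new Semaphore(4);      // serve at most 4 clients at once

        while (true) {
            slots.acquire();                     // postpone accept() until a slot is free
            Socket client = server.accept();     // further clients wait in the OS backlog
            workers.submit(() -> {
                try {
                    // read the request and write the response here
                } finally {
                    try { client.close(); } catch (IOException ignored) { }
                    slots.release();
                }
            });
        }
    }
}
```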
What Reactive Streams implementations solve is how to handle backpressure inside a higher-level data pipeline. Reactive streams implementations (e.g. akka-streams) provide a way to construct a pipeline of data in which, when the consumer of the data is slow, the producer will slow down automatically as well, and this would work across any kind of underlying transport, be it HTTP, WebSockets, raw TCP connections or even in-process messaging.
For example, consider a simple WebSocket connection, where the client sends a continuous stream of information (e.g. data from some sensor), and the server writes this data to some database. Now suppose that the database on the server side becomes slow for some reason (networking problems, disk overload, whatever). The server now can't keep up with the data the client sends, that is, it cannot save it to the database in time before the new piece of data arrives. If you're using a reactive streams implementation throughout this pipeline, the server will signal to the client automatically that it cannot process more data, and the client will automatically tweak its rate of producing in order not to overload the server.
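To make that concrete, here is a hedged in-process sketch using the Akka Streams Java DSL, with slowDbWrite standing in for the slow database: because demand propagates upstream, the fast producer is automatically slowed to the consumer's pace.

```java
import akka.actor.ActorSystem;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class BackpressureDemo {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");

        Source.range(1, 1_000_000)                          // fast producer
              .mapAsync(1, BackpressureDemo::slowDbWrite)   // slow consumer: one write in flight
              .runWith(Sink.foreach(n -> System.out.println("stored " + n)), system);
        // Demand flows upstream: the source only emits as fast as slowDbWrite
        // completes, so no unbounded buffer builds up in between.
    }

    // Stand-in for a database write that takes ~100 ms.
    private static CompletionStage<Integer> slowDbWrite(Integer n) {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            return n;
        });
    }
}
```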
Naturally, this can be done without any Reactive Streams implementation, e.g. by manually controlling acknowledgements. However, as with many other libraries, Reactive Streams implementations solve this problem for you. They also provide an easy way to define such pipelines, and usually they have interfaces for various external systems like databases. In particular, such libraries may implement backpressure at the lowest level, down to the TCP connection, which may be hard to do manually.
As for Reactive Streams itself, it is just a description of an API which can be implemented by a library; it defines common terms and behavior and allows such libraries to be interchangeable or to interact easily. For example, you can connect an akka-streams pipeline to a Monix pipeline using the interfaces from the specification, and the combined pipeline will work seamlessly and support all of the backpressure features of Reactive Streams.

Server-side Websocket implementations in non-event driven HTTP Server Environments

I am trying to understand implementations/options for server-side WebSocket endpoints - particularly in Perl using PSGI/Plack - and I have a question: why are all server-side WebSocket implementations based around event-driven PSGI servers (Twiggy, Tatsumaki, etc.)?
I get that WebSocket communication is asynchronous, but a non-event-driven PSGI server (say Starman) could spawn an asynchronous listener to handle the WebSocket side of things. I have seen (but not understood) PHP implementations of WebSocket servers, so why can't the same be done with PSGI without having to change the server to an event-driven one?
The underlying network logic for dealing with sockets depends on the platform, the OS, and the particular software implementation.
The three most common methods are:
polling - the code blocks while constantly "asking" whether a socket has some data. This method is quite bad, as it blocks execution of the main thread for as long as it waits for data.
thread per socket - each new connection gets its own thread, and the blocking reads on each socket happen within that thread, so the main logic thread is not blocked. This method is bad because creating a thread per connection is expensive in memory; each thread can cost around 1 MB of RAM, depending on the OS and other criteria.
async - uses OS facilities to "notify" your process when there is something to do. So you can react once your app is ready (in the case of a single-threaded app) or even react straight away in a separate thread. This method is very efficient, as it saves RAM and lets your app work without waiting or polling for data. It uses functionality that most OSes and platforms provide.
Taking this into account, you can indeed create a working single-process way to deal with socket traffic, but it is not efficient at all, for the reasons described above. That is why fully async models dominate today, and most languages and platforms support this paradigm.
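For comparison with the third model, here is a minimal sketch of the async approach in Java NIO terms (the port is arbitrary and the read handling is elided): a single thread is only woken up when the OS reports that a socket has something to do.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class AsyncEchoSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                        // block until the OS reports activity
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {             // new connection: register it, no new thread
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // read the available bytes and react; the thread never waits on a quiet socket
                }
            }
        }
    }
}
```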