Reactive vs Non-Reactive Response Time - reactive-programming

For a system powerful enough to serve a number of requests (not running out of threads), would there be a difference, from the user's perspective, in terms of response time / speed?
Also, is the database the only thing that usually blocks the thread, and hence the reason we need a reactive DB driver?
I mean, if a REST endpoint does not make calls to a DB, would there be no difference whether the endpoint is reactive or not?

First of all, you need to know what happens when you use Project Reactor's WebFlux client.
Let's assume your endpoint (call it /demo) is responsible for making 5 async calls to other systems in order to build its own response.
Example response times:
Service A: 5 ms
Service B: 50 ms
Service C: 100 ms
Service D: 250 ms
Service E: 400 ms
Typical non-WebFlux client way:
5 threads are consumed; the last one is blocked for 400 ms.
WebFlux client way:
Each call to services A, B, C, D and E briefly consumes one thread to make the call and then returns it; when the response arrives, another thread is consumed to process the response.
The final conclusion:
If your system is overloaded by a large number of requests (call it n) at the same time, you will lock n threads for 400 ms.
Try to imagine the scale of the problem.
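To make this concrete, here is a minimal sketch of such a /demo endpoint using Spring WebFlux's WebClient. The service URLs, the String response type and the controller name are illustrative assumptions, not details from the question:

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@RestController
public class DemoController {

    private final WebClient client = WebClient.create();

    @GetMapping("/demo")
    public Mono<List<String>> demo() {
        // Each call returns a Mono immediately; no thread is blocked
        // while waiting for the remote response.
        Mono<String> a = call("http://service-a/data"); // ~5 ms
        Mono<String> b = call("http://service-b/data"); // ~50 ms
        Mono<String> c = call("http://service-c/data"); // ~100 ms
        Mono<String> d = call("http://service-d/data"); // ~250 ms
        Mono<String> e = call("http://service-e/data"); // ~400 ms

        // zip subscribes to all five publishers concurrently, so the total
        // latency is bounded by the slowest service (~400 ms), not the sum.
        return Mono.zip(a, b, c, d, e)
                   .map(t -> List.of(t.getT1(), t.getT2(), t.getT3(),
                                     t.getT4(), t.getT5()));
    }

    private Mono<String> call(String url) {
        return client.get().uri(url).retrieve().bodyToMono(String.class);
    }
}

Note that either way the user waits roughly 400 ms for /demo; the difference is how many threads are held hostage while waiting, which is what determines behavior under load.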

Related

Vert.x httpClient - any idea how to limit the number of concurrent requests?

I need to send x HTTP client requests. I want to send the requests in parallel, but no more than y at once.
I will explain:
The client can handle only y requests simultaneously. I need to send x requests to the client, where x > y.
I don't want to wait until all of the first y requests end and then send another batch of y requests. That approach isn't efficient, because at each moment my client can handle y requests; if I wait for the first y to end before sending another y, the client won't be fully utilized.
Any idea how I can implement this with Vert.x?
I'm considering sending y requests at once and then sending another request each time a handler gets its callback. Does that make sense?
What is the meaning of maxPoolSize in HttpClientOptions? Does it have any connection to concurrent requests?
Many thanks!
I'm answering my own question... After some tests, the described approach does not scale well with any reactor pattern. The solution here is to use a thread pool of y threads for sending the x tasks.
I would suggest going with your solution based on callbacks, and not relying on maxPoolSize.
From the documentation:
* If an HttpClient receives a request but is already handling maxPoolSize requests it will attempt to put the new
* request on it's wait queue. If the maxWaitQueueSize is set and the new request would cause the wait queue to exceed
* that size then the request will receive this exception.
https://github.com/eclipse-vertx/vert.x/blob/master/src/main/java/io/vertx/core/http/ConnectionPoolTooBusyException.java
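A minimal sketch of that callback-based approach, assuming Vert.x 4 (the host, port and URIs are placeholders): prime the client with y requests, and let each completion callback launch the next one from the queue, so at most y requests are in flight until the queue drains.

import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpMethod;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ThrottledRequests {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        HttpClient client = vertx.createHttpClient();

        int x = 100; // total number of requests to send
        int y = 10;  // maximum number in flight at any moment

        // Concurrent queue: the priming loop and the completion
        // callbacks may run on different threads.
        Queue<String> pending = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < x; i++) {
            pending.add("/item/" + i);
        }

        // Prime the pump with y requests; each completion triggers the next.
        for (int i = 0; i < y; i++) {
            sendNext(client, pending);
        }
    }

    private static void sendNext(HttpClient client, Queue<String> pending) {
        String uri = pending.poll();
        if (uri == null) {
            return; // nothing left to send
        }
        client.request(HttpMethod.GET, 8080, "localhost", uri)
              .compose(req -> req.send())
              .onComplete(ar -> {
                  // Success or failure, the slot is now free:
                  // launch the next queued request.
                  sendNext(client, pending);
              });
    }
}

With this pattern the callbacks, not maxPoolSize, are what bound the concurrency, which matches the advice above.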

How application server handle multiple requests to save data into table

I have created a web application in JSF and it has a button.
If the button is clicked, it will go to the server side and execute the function below to save the data in a table; I am using MyBatis for this.
public void save(A a) {
    SqlSession session = null;
    try {
        session = SqlConnection.getInstance().openSession();
        TestMapper testmap = session.getMapper(TestMapper.class);
        testmap.insert(a);
        session.commit();
    } catch (Exception e) {
        // Swallowing the exception silently hides failures; at least log it.
        e.printStackTrace();
    } finally {
        if (session != null) { // guard against NPE if openSession() failed
            session.close();
        }
    }
}
Now I have deployed this application on an application server, JBoss (WildFly).
As per my understanding, when multiple users try to access the application by hitting the URL, the application server creates a thread for each user request.
For example, if 4 clients make requests then 4 threads will be created, say t1, t2, t3 and t4.
If all 4 users hit the save button at the same time, how will the save method be executed? Will t1 enter the method and execute the insert statement to insert data into the table while t2, t3 and t4 wait, or will all 4 threads execute the insert method and insert data simultaneously?
To give some context I will first describe two possible approaches to handling requests. In this case the protocol is HTTP, but these approaches do not depend on the protocol used; the important thing is that requests come from the network and their execution requires some IO (access to the filesystem, a database, or network calls to other systems). Note that the following description contains some simplifications.
These two approaches are:
synchronous
asynchronous
In general to process the typical HTTP request that involves DB access at least four IO operations are needed:
request handler needs to read the request data from the client socket
request handler needs to write request to the socket connected to the DB
request handler needs to read response from the DB socket
request handler needs to write the response to the client socket
Let's see how this is done for both cases.
Synchronous
In this approach the server has a pool (think a collection) of threads that are ready to serve a request.
When a request comes in, the server borrows a thread from the pool and executes a request handler in that thread.
When the request handler needs to do an IO operation, it initiates the IO operation and then waits for its completion. By "wait" I mean that the thread's execution is blocked until the IO operation completes and the data (for example, the response with the results of the SQL query) is available.
In this case concurrency, that is, processing requests for multiple clients simultaneously, is achieved by having some number of threads in the pool. IO operations are much slower than the CPU, so most of the time a thread processing some request is blocked on an IO operation, and the CPU cores can execute stages of request processing for other clients.
Note that because of the slowness of IO operations, the thread pool used for handling HTTP requests is usually fairly large. The documentation for the synchronous request-processing subsystem used in WildFly suggests about 10 threads per CPU core as a reasonable value.
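A minimal sketch of this model, assuming a bare ServerSocket and a fixed pool (the handler is deliberately simplified and does no real HTTP parsing):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SyncServerSketch {

    public static void main(String[] args) throws Exception {
        // Pool sized along the "about 10 threads per core" rule of thumb.
        ExecutorService pool = Executors.newFixedThreadPool(
                10 * Runtime.getRuntime().availableProcessors());

        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                // Borrow a thread from the pool for the whole request.
                pool.submit(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             InputStream in = c.getInputStream();
             OutputStream out = c.getOutputStream()) {
            in.read(new byte[4096]);   // IO 1: blocks until request data arrives
            // IO 2 and 3 would go here: write the query to the DB socket
            // and block again until the DB response has been read.
            out.write("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
                    .getBytes());      // IO 4: write the response to the client
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}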
Asynchronous
In this case the IO is handled differently. There is a small number of threads handling IO. They all work the same way, so I'll describe one of them.
Such a thread runs a loop which basically waits for events, and every time an event happens it calls a handler for that event.
The first such event is a new request. When request processing starts, the request handler is invoked from the loop run by one of the IO threads. The first thing the request handler does is try to read the request from the client socket. So the handler initiates the IO operation on the client socket and returns control to the caller. That means the thread is released and can process another event.
Another event happens when the IO operation reading from the client socket has data available. In this case the loop invokes the handler at the point where it previously returned control to the loop after initiating the IO; that is, the handler is resumed at the next step, which processes the input data (for example, parses HTTP parameters) and initiates a new IO operation (in this case a request to the DB socket). And again the handler releases the thread so it can handle other events (like the completion of IO operations that are part of other clients' request processing).
Given that IO operations are slow compared to the speed of the CPU itself, one thread handling IO can process a lot of requests concurrently.
Note: it is important that the request handler code never uses any blocking operation (like blocking IO), because that would steal the IO thread and prevent other requests from proceeding.
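A bare-bones sketch of such an event loop, using java.nio directly (real servers layer buffering, write interest and protocol parsing on top of this; the class and handler names are mine, not from any framework):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EventLoopSketch {

    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // wait until at least one event is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // New client connection: register it for read events.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // Data available: the handler must not block, or it
                    // would stall every other connection on this loop.
                    handleRead((SocketChannel) key.channel());
                }
            }
        }
    }

    private static void handleRead(SocketChannel channel) {
        // Parse the request bytes and initiate the next non-blocking
        // step here (e.g. the request to the DB socket).
    }
}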
JSF and MyBatis
In the case of JSF and MyBatis, the synchronous approach is used. JSF uses a servlet to handle requests from the UI, and servlets are handled by the synchronous processors in WildFly. JDBC, which MyBatis uses to communicate with the DB, also uses synchronous IO, so threads are used to execute requests concurrently.
Congestions
All of the above is written with the assumption that there are no other sources of congestion. By congestion here I mean a limitation on the ability of a certain component of the system to execute things in parallel.
For example, imagine that the database is configured to allow only one client connection at a time (this is not a reasonable configuration; I'm using it only to demonstrate the idea). In this case, even if multiple threads can execute the code of the save method in parallel, all but one will be blocked the moment they try to open the connection to the database.
Another similar example is if you are using an SQLite database. It only allows one client to write to the DB at a time. So at the point when thread A tries to execute an insert, it will be blocked if there is another thread B that is already executing an insert. Only after the commit executed by thread B will thread A be able to proceed with its insert. How long A waits depends on how long B takes to execute its request and on the number of other threads waiting to write to the same DB.
In practice, if you are using an RDBMS that scales better (like PostgreSQL, MySQL or Oracle), you will not hit this problem with a small number of connections. But it may become a problem when there is a large number of concurrent requests and the DB limits the number of client connections, or a connection pool is used to limit the number of connections on the application side. In this case, if there are already many connections to the database, new clients will wait until existing requests finish and connections are freed.
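For illustration, a minimal sketch assuming the HikariCP connection pool (the JDBC URL is a placeholder): with maximumPoolSize set to 10, an 11th concurrent caller of getConnection() simply blocks until another thread returns a connection, which is exactly the congestion point described above.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class PoolCongestionSketch {

    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/demo"); // placeholder
        config.setMaximumPoolSize(10); // at most 10 concurrent DB connections

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // Each save() would do this; threads beyond the pool size
            // block inside getConnection() until a connection is freed.
            try (Connection conn = ds.getConnection()) {
                // execute the insert and commit here
            }
        }
    }
}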

Intermittent slowness in responses from vert.x based web server

I have a Vert.x web server running on a 1x8g machine. It has about 15 routes mapped, 5 of which are blocking and 10 non-blocking. These are all part of the one standard verticle that my app consists of. The non-blocking handlers just open an HTTP connection to another downstream system (all of which are very fast - Elasticsearch / cached data APIs). Some of the blocking handlers do take a while - anywhere between 3 and 9 seconds depending on the time of day - and these also call an external system.
The API response times for my non-blocking handlers are usually in the 400-600 ms range. Occasionally I see the response times spiking to over 2 seconds, and sometimes all the way up to 12 seconds. I'm not sure what is causing this. Is it the combination of blocking and non-blocking handlers in the same verticle?
What is the best way to diagnose the root cause here ?

Always use neg[.z.w] to ensure that all messages are asynchronous?

Consider the following definition on server:
f:{show "Received ",string x; neg[.z.w] (`mycallback; x+1)}
on client side:
q)mycallback:{show "Returned ",string x;}
q)neg[h] (`f; 42)
q)"Returned 43"
In Q for Mortals, the tip says:
When performing asynchronous messaging, always use neg[.z.w] to ensure
that all messages are asynchronous. Otherwise you will get a deadlock
as each process waits for the other.
Therefore I changed the definition on the server to:
f:{show "Received ",string x; .z.w (`mycallback; x+1)}
Everything works fine, and I haven't seen any deadlocks.
Can anyone give me an example to show why I should always use neg[.z.w]?
If I understand your question correctly, I think you're asking how sync and async messages work. The issue with the example you have provided is that x+1 is a very simple query that can be evaluated almost instantaneously. For a more illustrative example, consider changing this to a sleep (or a more strenuous calculation, e.g. a large database query).
On your server side define:
f:{show "Received ",string x;system "sleep 10"; neg[.z.w] (`mycallback; x+1)}
Then on your client side you can send the synchronous query:
h(`f; 42)
multiple times. Doing this you will see there is no longer a q prompt on the client side as it must wait for a response. These requests can be queued and thus block both the client and server for a significant amount of time.
Alternatively, if you were to call:
(neg h)(`f; 42)
on the client side, you will see the q prompt remain, as the client is not waiting for a response. This is an asynchronous call.
Now, in your server-side function you are choosing between .z.w and neg .z.w. This follows exactly the same principle, but from the server's perspective. If the response to a query is large enough, the messaging can take a significant amount of time. Consequently, by using neg, that response can be sent asynchronously so the server is not blocked while it is being sent.
NOTE: If you are working on a Windows machine you will need to swap out sleep for timeout, or perhaps a while loop, if you are following my examples.
Update: I suppose one way to cause such a deadlock would be to have two dependent processes attempting to synchronously call each other. For example:
q)\p 10002
q)h:hopen 10003
q)g:{h (`f1;`)}
q)h (`f;`)
on one side and
q)\p 10003
q)h:hopen 10002
q)f:{h (`g;`)}
q)f1:{show "test"}
on the other. This would result in both processes being stuck and thus test never being shown.
Joe's answer covers pretty much everything, but regarding your specific example, a deadlock happens if the client calls
h (`f; 42)
The client is waiting for a response from the server before processing the next request, but the server is also waiting for a response from the client before it completes the client's request.

Fast single thread comet server, possible?

I recently encountered a few cases where a server would distribute an event stream that contains the exact same data for all listeners, such as a 'recent activity' box.
It occurred to me that it is quite strange and inefficient to have a server like Apache run a thread that processes and queries the database for every single comet stream containing the same data.
What I would do for those global (not per-user) streams is run a single thread that continuously emits data, and a new (green) thread for every new request that outputs the headers and then 'merges' into the main thread.
Is it possible for one thread to serve multiple sockets, or for multiple clients to listen to the same socket?
An example
o = event

           # threads   received
|  a b     # 3
o / /      # 3         -
|/_/
|          # 1
o  c       # 2         a, b
| /
o/         # 2         a, b
o          # 1         a, b, c
|          # connection b closed
o          # 1         a, c
Does something like this exist? Would it work? Is it possible to do?
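What the question describes (one thread writing the same event stream to many sockets) could look roughly like the following sketch; it assumes java.nio and ignores HTTP handshaking, partial writes and backpressure:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CometBroadcastSketch {

    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false); // accept() returns null when idle

        List<SocketChannel> listeners = new ArrayList<>();

        while (true) {
            // Register any newly connected listeners.
            SocketChannel client;
            while ((client = server.accept()) != null) {
                client.configureBlocking(false);
                listeners.add(client);
            }

            // Produce the next global event (placeholder payload).
            ByteBuffer event = ByteBuffer.wrap("data: recent activity\n\n".getBytes());

            // One thread writes the same event to every open connection.
            Iterator<SocketChannel> it = listeners.iterator();
            while (it.hasNext()) {
                SocketChannel ch = it.next();
                try {
                    ch.write(event.duplicate()); // fresh position per client
                } catch (Exception closed) {
                    it.remove(); // drop listeners whose connection closed
                }
            }

            Thread.sleep(1000); // wait for the next event tick
        }
    }
}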
Disclaimer: I'm not a server expert.
Check out Node.js - a single-threaded, event-driven server. Uses JavaScript as a bonus.
If you are using ASP.NET, the following post should be useful
http://beta.codeproject.com/KB/aspnet/CometAsync.aspx
By the way, it is possible to implement Comet to serve more than one client per thread, but is only one thread for all clients really enough?
You are talking about "asynchronous web requests" applied to Comet, something like "asynchronous Comet".
In my opinion this approach, so popular these days, is deeply flawed.