Configure Wildfly 10 to block simultaneous repeating servlet calls

Is it possible to configure Undertow in Wildfly 10 to block simultaneous repeating servlet calls from one user?
I want only the first HTTP call from the application to reach the servlet; subsequent repeating calls from the same user should be blocked until the first servlet request has completed. I know this is possible from the Java application itself by marking the method as synchronized, but that would block requests from all other users as well, which I think is bad programming practice. Please advise.
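No answer is recorded for this question, but as an application-level sketch of the alternative the asker alludes to (locking per user rather than per method, so other users are not affected), a servlet Filter can serialize requests per HTTP session. The class and attribute names below are made up:

```java
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class PerUserSerializationFilter implements Filter {

    private static final String LOCK_ATTR = "perUserRequestLock"; // hypothetical attribute name

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession();
        ReentrantLock lock;
        synchronized (session) {                 // guard lazy creation of the per-session lock
            lock = (ReentrantLock) session.getAttribute(LOCK_ATTR);
            if (lock == null) {
                lock = new ReentrantLock();
                session.setAttribute(LOCK_ATTR, lock);
            }
        }
        lock.lock();                             // blocks only other requests of the same session
        try {
            chain.doFilter(req, res);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}
```

Requests sharing the same HttpSession queue behind each other, while requests from other users proceed normally.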

Related

Starting with reactive DB access in a blocking monolith

In a DB-heavy monolith based on Wildfly, does it make sense to transform the DB access to a reactive one for starters? Should I see performance benefits?
Also, the DB is Sybase, and the only 'generic' JDBC driver I know of is from Vert.x, but this implies that I will have to put Vert.x inside my Wildfly. I understand that they are sort of alternatives, but I can't find any other options.
I would love to hear your thoughts about the two points I am raising. In general, I can't commit to a full transition from Wildfly to Quarkus/Vert.x from the get-go, as it would take a lot of resources, so I thought I could start smaller...
Vert.x is a toolkit, which means, for example, you do not need to use the web server it provides, nor any other module. It's also very lightweight, so you will only add a few more dependencies to your application. So, yes it can make sense to integrate Vert.x.
vertx-jdbc-client, however, cannot magically transform blocking calls into non-blocking calls. Instead, it will off-load the blocking calls onto Vert.x's worker thread pool. That leads to another effect: the DB call you used to wait for will return immediately, leaving you with nothing but a Future. That Future will eventually hold the expected result.
Going further upstream in your code (the direction your user's request came from), this means that you will have to either:
(1) defer processing of the result via Future.map() or Future.compose(), or
(2) block the thread to get the result immediately.
You will win nothing by (2), so rule that out.
When you go for (1), you must defer all further processing, up to the point where the incoming request is originally handled. If that is, for example, a Servlet, you have to use Asynchronous Processing to make sure that Wildfly does not commit the response after the doGet, doPost etc. method exits.
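As a rough sketch of that combination, assuming a vertx-jdbc-client JDBCClient has already been created elsewhere and using an illustrative query, an async servlet can hand the response off to the client's callback:

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import io.vertx.ext.jdbc.JDBCClient;

@WebServlet(urlPatterns = "/items", asyncSupported = true)
public class ItemsServlet extends HttpServlet {

    private JDBCClient jdbcClient;   // assumed to be created at startup and injected here

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();                  // keep the response open after doGet returns
        jdbcClient.query("SELECT name FROM items", ar -> {    // blocking query runs on a Vert.x worker thread
            try {
                if (ar.succeeded()) {
                    ctx.getResponse().getWriter().print(ar.result().getRows());
                } else {
                    ((HttpServletResponse) ctx.getResponse()).setStatus(500);
                }
            } catch (IOException e) {
                // logging omitted in this sketch
            } finally {
                ctx.complete();                               // commit the response
            }
        });
    }
}
```

The container thread returns from doGet immediately; the response is only committed once ctx.complete() runs inside the Vert.x callback.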
The result of all this will be that Wildfly now handles your request asynchronously, with Vert.x managing the DB interaction. You can do that. But it would be more idiomatic to your current setup to just use Asynchronous Processing (or Spring's @Async feature) and wrap all of your code in a Runnable. Neither approach will speed up request processing itself, because the processing depends on the slower DB. However, Wildfly will be able to process more requests, because the threads it assigns to requests will no longer be blocked.
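The plain-servlet variant of that idea, without Vert.x, looks roughly like this (same javax.servlet imports as the previous sketch; runSlowJdbcQuery is a hypothetical stand-in for the blocking JDBC work):

```java
@WebServlet(urlPatterns = "/report", asyncSupported = true)
public class ReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // request thread is released after doGet returns
        ctx.start(() -> {                      // Runnable executed on a container-managed thread
            try {
                ctx.getResponse().getWriter().print(runSlowJdbcQuery());
            } catch (Exception e) {
                // error handling omitted in this sketch
            } finally {
                ctx.complete();
            }
        });
    }

    private String runSlowJdbcQuery() {
        // stand-in for the real blocking JDBC work
        return "result";
    }
}
```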
Having all that said, if you want to migrate to Quarkus in small steps, you should do that service by service. Identify the Servlets (or Controllers) which do the work, and port them one by one to Quarkus. If sessions are your problem, then you could possibly share them between Wildfly and Quarkus, using Infinispan.

Deploying a new Verticle for every HTTP Request?

Currently on application startup I'm deploying a single verticle and calling createHttpServer(serverOptions).
I've set up a request().connection().closeHandler for handling a closed connection event, primarily so when clients decide to cancel their request, we halt our execution of that request.
However, when I set up that handler in that same verticle, the closeHandler code only seems to execute once any synchronous code has finished executing and we're waiting on databases to respond via Futures and asynchronous handlers.
If instead of that, I deploy a worker verticle for each new HTTP request, it properly interrupts execution to execute the closeHandler code.
As I understand it, the HttpServer is already supposed to handle scalability of requests on its own since it can handle many at once without deploying new verticles. Essentially, this sounds like a hacky workaround that may affect our thread loads or things of that nature once our application is in full swing. So my questions are:
Is this the right way of doing this?
If not, what is the correct method or paradigm to follow?
How do you cancel the execution of a verticle from within the verticle itself, inside that closeHandler? And by cancel execution, I mean including any Futures waiting to be completed.
Why does closeHandler only execute asynchronously when using this multiple-verticle approach? Using the normal way, simply executing requests on the allotted thread pool, postpones closeHandler's execution until the event loop finishes its queue; we need this to happen asynchronously.
I think you need to understand Vert.x better. Vert.x does not start and stop thread per request. Verticles are long living and each handle multiple events during their lifetime but never concurrently. Also you should not deploy worker (or non-worker) Verticles per request.
What you do is that you deploy a pool of Verticles (worker and non) and Vert.x divides the load between them. An HTTP server is placed in front and will receive requests and forward them to verticle(s) to be handled.
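For illustration, such a pool is declared once at deployment time rather than per request; the verticle class name below is hypothetical:

```java
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        DeploymentOptions options = new DeploymentOptions()
                .setInstances(4);        // .setWorker(true) would make these worker verticles
        vertx.deployVerticle("com.example.MyHandlerVerticle", options); // hypothetical verticle class
    }
}
```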
For stopping the processing of a request, you need to keep a flag somewhere which is set when the connection is closed. Then you can check it in your processing and stop. Just don't forget to clear the flag at the beginning of each request.
Deploying or undeploying verticles doesn't affect threads count. Vert.x uses thread pools of a limited size.
Undeploying verticles is a means to downscale your service. Ideally, you shouldn't undeploy verticles at all. Deploying or undeploying does have a performance impact.
closeHandler, as I mentioned previously, is a callback method to release resources.
Vert.x Future doesn't provide cancellation means. The reason is that even Java's Future.cancel() is a cooperative operation.
As a means to fix this, passing a reference to an AtomicBoolean, as was suggested above, and checking it before every synchronous step is probably the best way. You will still be blocked by synchronous operations, though.
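A minimal sketch of that flag idea, with hypothetical dbStepOne/dbStepTwo standing in for the real asynchronous DB calls:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;

public class CancellableServer {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer().requestHandler(request -> {
            AtomicBoolean closed = new AtomicBoolean(false);
            // flag is set as soon as the client drops the connection
            request.connection().closeHandler(v -> closed.set(true));

            dbStepOne(vertx, done1 -> {
                if (closed.get()) return;          // stop: client is gone
                dbStepTwo(vertx, done2 -> {
                    if (closed.get()) return;
                    request.response().end("done");
                });
            });
        }).listen(8080);
    }

    // stand-ins for real asynchronous DB calls
    static void dbStepOne(Vertx vertx, Handler<Void> next) { vertx.setTimer(100, id -> next.handle(null)); }
    static void dbStepTwo(Vertx vertx, Handler<Void> next) { vertx.setTimer(100, id -> next.handle(null)); }
}
```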

Semaphore error logged in mobicents sip servlet

We have an application written against Mobicents SIP Servlets; currently this is using v2.1.547, but I have also tested against v3.1.633 with the same behavior noted.
Our application is working as a B2BUA: we have an incoming SIP call, and we also have an outbound SIP call being placed to an MRF which is executing VXML. These two SIP calls are associated with a single SipApplicationSession, which is the concurrency model we have configured.
The scenario which recreates this 100% of the time is as follows:
inbound call placed to our application (call is not answered)
outbound call placed to MRF
inbound call hangs up
application attempts to terminate the SipSession associated with the outbound call
I am seeing this being logged:
2015-12-17 09:53:56,771 WARN [SipApplicationSessionImpl] (MSS-Executor-Thread-14) Failed to acquire session semaphore java.util.concurrent.Semaphore@55fcc0cb[Permits = 0] for 30 secs. We will unlock the semaphore no matter what because the transaction is about to timeout. THIS MIGHT ALSO BE CONCURRENCY CONTROL RISK. app Session is 5faf5a3a-6a83-4f23-a30a-57d3eff3281c;SipController
I am willing to believe our application might somehow be triggering this behavior, but I can't see how at the moment. I would have thought acquiring/releasing the Semaphore was all internal to the implementation, so it should ensure something doesn't acquire the Semaphore and never release it?
Any pointers on how to get to the bottom of this would be appreciated, as I said it is 100% repeatable so getting logs etc is all possible.
It's hard to tell without seeing any logs or application code showing how you access the session and schedule messages to be sent. But if you use the same SipApplicationSession in an asynchronous manner, you may want to use our vendor-specific asynchronous API https://mobicents.ci.cloudbees.com/job/MobicentsSipServlets-Release/lastSuccessfulBuild/artifact/documentation/jsr289-extensions-apidocs/org/mobicents/javax/servlet/sip/SipSessionsUtilExt.html#scheduleAsynchronousWork(java.lang.String,%20org.mobicents.javax.servlet.sip.SipApplicationSessionAsynchronousWork) which will guarantee that access to the SipApplicationSession is serialized and avoid any concurrency issues.
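A hedged sketch of what using that extension might look like, assuming the work callback exposes a single doAsynchronousWork(SipApplicationSession) method as in the linked Javadoc; the surrounding class and method names are made up:

```java
import javax.servlet.sip.SipApplicationSession;
import org.mobicents.javax.servlet.sip.SipApplicationSessionAsynchronousWork;
import org.mobicents.javax.servlet.sip.SipSessionsUtilExt;

public class OutboundCallTerminator {

    private final SipSessionsUtilExt sipSessionsUtil;   // typically obtained via a @Resource injection

    public OutboundCallTerminator(SipSessionsUtilExt sipSessionsUtil) {
        this.sipSessionsUtil = sipSessionsUtil;
    }

    public void terminateLater(String appSessionId) {
        // the container serializes access to the SipApplicationSession inside this work unit
        sipSessionsUtil.scheduleAsynchronousWork(appSessionId,
            new SipApplicationSessionAsynchronousWork() {
                @Override
                public void doAsynchronousWork(SipApplicationSession appSession) {
                    // e.g. look up the outbound SipSession here and send the BYE
                }
            });
    }
}
```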

How to handle timeouts in a REST Client when calling methods with side-effect

Let's say we have a REST client with some UI that lists items it GETs from the server. The server also exposes some REST methods to manipulate the items (POST / PUT).
Now the user triggers one of those calls that are supposed to change the data on the server side. The UI will reflect the server state change, if the call was successful.
But what are good strategies to handle the situation when the server is not available?
What are reasonable timeout lengths (especially in a 3G / cloud setup)?
How do you handle the timeout in the client, considering the fact that the client can't tell whether the operation succeeded or not?
Are there any common patterns to solve that, other than a complete client termination (and subsequent restart)?
This will be application specific. You need to decide what makes the most sense in your usage case.
Perhaps start with a timeout similar to that of the default PHP session, 24 minutes. Adjust as necessary based on testing.
Do you have server and client mixed up here? If so, the server cannot tell whether the client has timed out, other than by reaching the end of a session. The client can always query the server for a progress update.
This one is a little general to provide an answer for.
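As a concrete illustration of the "query the server for a progress update" point above, a client sketch with an explicit request timeout followed by a status check; the /orders endpoints and the order id are made up:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class OrderClient {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpRequest create = HttpRequest.newBuilder(URI.create("https://example.org/orders"))
                .timeout(Duration.ofSeconds(10))                       // tune for 3G / cloud latency
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"42\"}"))
                .build();
        try {
            HttpResponse<String> resp = CLIENT.send(create, HttpResponse.BodyHandlers.ofString());
            System.out.println("created: " + resp.statusCode());
        } catch (HttpTimeoutException e) {
            // Outcome unknown: the server may or may not have processed the POST,
            // so ask it instead of assuming failure.
            HttpRequest status = HttpRequest.newBuilder(URI.create("https://example.org/orders/42/status"))
                    .timeout(Duration.ofSeconds(5))
                    .GET()
                    .build();
            HttpResponse<String> resp = CLIENT.send(status, HttpResponse.BodyHandlers.ofString());
            System.out.println("status check: " + resp.body());
        }
    }
}
```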

Deliberately delayed response from JBoss

What is the best way to deliberately delay a response from the JBoss server?
I would like to implement a delayed login functionality, where if a username has been used in a failed login attempt recently, the AS will wait a few seconds before returning to the user. The stack consists of a SQL DB, JBoss running the application, and EJBs exposed via SOAP webservice adapters which in turn will be used by the clients.
Obviously Thread.sleep() won't do...
No, it does not seem possible, nor reasonable to delay the responses from the application server.
This is a feature that should be implemented on the client side.