BlockHound Detects WebClient ExchangeFunction's exchange() Call As Blocking

I am profiling my reactor application using BlockHound. I have a filter on my ExchangeFunction:
@Override
public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next) {
    final ClientRequest.Builder builder = ClientRequest.from(request);
    return Mono.defer(() -> next.exchange(builder.build())) // detects blocking call
        .transform(reactiveUtil::contextualize)
        .publishOn(Schedulers.parallel());
}
BlockHound detects a blocking call on next.exchange(). Since I am using WebClient with Netty, shouldn't this call be non-blocking? Subscribing on an elastic thread does not help either.

According to your gist, BlockHound is detecting java.io.FileInputStream.readBytes(..) as blocking deep within the SSL handshake.
This problem has been reported in https://github.com/reactor/reactor-netty/issues/939 and appears to be resolved in the latest releases.
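Until you can upgrade to a release with the fix, one workaround is to whitelist the flagged call so the false positive does not fail your tests. A minimal sketch; the class/method names passed to the builder are taken from the stack trace in your gist, not from any documented allow-list:
import reactor.blockhound.BlockHound;

public class BlockHoundConfig {
    public static void install() {
        // Whitelist the one-off keystore read performed during the TLS
        // handshake; BlockHound flags it, but it is initialization work,
        // not per-request blocking I/O.
        BlockHound.install(builder -> builder
                .allowBlockingCallsInside("java.io.FileInputStream", "readBytes"));
    }
}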

Related

How to write an HTTP REST service asynchronously

What is the recommended way in vert.x to write an Asynchronous request handler?
In this service, request processing typically involves calling a DB, calling external services, etc. I do not want to block the request-handling thread, however. What is the recommended way to achieve this using vert.x? In a typical asynchronous processing chain, I would use the request-handling thread to emit a message with the request object onto the message bus. Another handler would pick up this message and do some processing, such as checking request params, then emit a new message to the bus; the next handler would pick that up and make a remote call, then emit a new message with the result of the call; the next handler would do error checking, and so on. The final handler would be responsible for creating the response and sending it to the client.
How can one create a similar pipeline using vert.x?
Everything, including the request handlers for HttpServer, is asynchronous, isn't it?
var server = vertx.createHttpServer(HttpServerOptions())
server.requestHandler { req ->
    req.setExpectMultipart(true) // for handling forms
    var totalBuffer = Buffer.buffer()
    req.handler { buff -> totalBuffer.appendBuffer(buff) }
        .endHandler { // the body has now been fully read
            var formAttributes = req.formAttributes()
            req.response().putHeader("Content-Type", "text/html")
            req.response().end("Hello HTTP!")
        }
    // the above is so common that Vert.x provides: bodyHandler { totalBuff -> .. }
}.listen(8080, "127.0.0.1") { res -> if (res.succeeded()) ... }
You just need to write (and end) the response via req.response() in the final handler of your pipeline. If you do want the explicit message-passing pipeline from your question, the event bus supports it directly; see the sketch below.
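A rough sketch of such a pipeline in Java (the event-bus addresses and the simulated remote call are made up for illustration; assumes Vert.x 3.8+ for eventBus().request()):
import io.vertx.core.AbstractVerticle;

public class EventBusPipelineVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Stage 1: validate request params, then forward the work.
        vertx.eventBus().<String>consumer("pipeline.validate", msg ->
            vertx.eventBus().request("pipeline.remoteCall", msg.body(), reply -> {
                if (reply.succeeded()) {
                    msg.reply(reply.result().body()); // bubble the result back
                } else {
                    msg.fail(500, reply.cause().getMessage());
                }
            }));

        // Stage 2: the remote call (simulated here).
        vertx.eventBus().<String>consumer("pipeline.remoteCall", msg ->
            msg.reply("processed: " + msg.body()));

        // Entry point: the HTTP handler emits into the pipeline and writes the
        // response when the final reply arrives; nothing blocks in between.
        vertx.createHttpServer()
            .requestHandler(req ->
                vertx.eventBus().request("pipeline.validate", req.path(), reply -> {
                    if (reply.succeeded()) {
                        req.response().end(reply.result().body().toString());
                    } else {
                        req.response().setStatusCode(500).end();
                    }
                }))
            .listen(8080);
    }
}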
For a more stream-like implementation (i.e., not callback-based), you may use Vert.x Rx/ReactiveStreams API. E.g., you may use Vert.x Web Client for making requests, possibly using its Rx-fied API.
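For instance, with the RxJava 2 bindings the downstream call composes straight into the HTTP response with no nested callbacks. A rough sketch (the downstream host, port, and /check path are invented for illustration; assumes the vertx-rx-java2 and vertx-web-client modules are on the classpath):
import io.vertx.reactivex.core.AbstractVerticle;
import io.vertx.reactivex.ext.web.client.WebClient;

public class RxPipelineVerticle extends AbstractVerticle {
    @Override
    public void start() {
        WebClient client = WebClient.create(vertx);

        vertx.createHttpServer()
            .requestHandler(req ->
                // Call the downstream service reactively; the event loop never blocks.
                client.get(8081, "localhost", "/check")
                    .rxSend()
                    .map(resp -> "downstream said: " + resp.bodyAsString())
                    .subscribe(
                        body -> req.response()
                                   .putHeader("Content-Type", "text/plain")
                                   .end(body),
                        err -> req.response().setStatusCode(502).end(err.getMessage())))
            .listen(8080);
    }
}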

How to get notified when unfiltered Netty server actually gets shutdown?

I have an Unfiltered Netty server that I need to shut down and restart after every test.
val mockService = unfiltered.netty.Server.http(mockServicePort).handler(mockServicePlan)

before {
  mockService.start()
}

after {
  mockService.stop()
}
Currently this is not working, and I am fairly certain that is because the stop() function is non-blocking, so the following start() gets called too early.
I looked for a way to block on, or be notified of, server closure, but it does not appear to be surfaced through the current API.
Is there a better way to achieve this?
Easy answer: replace your Unfiltered Netty server with an http4s Blaze server.
var server: org.http4s.server.Server = null

val go: Task[Server] = org.http4s.server.blaze.BlazeBuilder
  .bindHttp(mockServicePort)
  .mountService(mockService)
  .start

before {
  server = go.run
}

after {
  server.shutdown.run
}
There's also an awaitShutdown that blocks until the server shuts down. But since shutdown is a Task, it's easy to get notified when it has finished: just flatMap it.
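For completeness: if you stay on plain Netty, the same notification exists at that level too. shutdownGracefully() on the event loop groups returns a Future you can block on (or attach a listener to) before restarting. A rough sketch, assuming Netty 4.x and that you hold references to the groups:
import io.netty.channel.EventLoopGroup;

public final class GracefulStop {
    // Blocks until both event loop groups have fully terminated, so a
    // subsequent start() cannot race against the old server still
    // holding the port.
    public static void stopAndWait(EventLoopGroup boss, EventLoopGroup workers) {
        boss.shutdownGracefully().syncUninterruptibly();
        workers.shutdownGracefully().syncUninterruptibly();
    }
}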

HttpListener prevent Timeout

I implemented an HttpListener to process SOAP requests. This works fine, but I can't find a solution to the problem that some SOAP requests take too long, resulting in timeouts on the client side.
How do I let the requesting client know that its request has not timed out?
I thought about sending "dummy" information while the request is being processed, but HttpListener only seems to send data when you close the response object, and that can be done only once, so I suppose that is not the right approach.
Solution:
Thread aliveWorker = new Thread(() =>
{
    try
    {
        // Send a dummy byte every five seconds while the real work runs,
        // so the client's connection does not time out.
        while (context.Response.OutputStream.CanWrite)
        {
            context.Response.OutputStream.WriteByte((byte) ' ');
            context.Response.OutputStream.Flush();
            Thread.Sleep(5000);
        }
    }
    catch (ThreadInterruptedException)
    {
        // Expected: the main thread interrupts us once doWork() completes.
    }
});
aliveWorker.Start();
doWork();
aliveWorker.Interrupt();
createTheRealResponse();
Sending dummy information is not a bad idea.
I think you need to call Flush() on the HttpListenerResponse's OutputStream property after writing the dummy data. You must also enable the SendChunked property.
Try sending a dummy space at a regular interval:
response.SendChunked = true;
response.OutputStream.WriteByte((byte)' ');
response.OutputStream.Flush();
I see two options: increase the timeouts on the client side, or extend the protocol with status requests from the client for long-running operations.
If you are using .NET 4.5, take a look at the HttpListenerTimeoutManager class; you can use it as a base to implement custom timeout behaviour.

ASP.NET Web Api: Delegate after Request

I have a problem with streams and Web API.
I return a stream that is consumed by Web API. Currently I put the socket back into a pool right after getting the stream, but this causes some errors.
Instead, I must put the socket into the pool AFTER the request has ended (i.e., the stream has been consumed and is now closed).
Is there a delegate for this, or some other best practice?
Example code:
public HttpResponseMessage Get(int fileId)
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    Stream s = GetFile(fileId);
    response.Content = new StreamContent(s);
    return response;
}

Stream GetFile(int id)
{
    FSClient fs = GetFSClient();
    Stream s = fs.GetFileStream(id);
    AddFSToPool(fs);
    return s;
}
GetFile uses a self-written FileServer client.
The client has an option to reuse FileServer connections, which are stored in a pool (the pool holds only unused FileServer connections). When the next request calls GetFSClient(), it gets a connected client from the pool (and removes it from the pool).
But if another request comes in and takes a FileServer connection from the pool (because it is marked unused), there is still the problem that its stream may be in use.
So I want to put the FSClient into the pool only after the request has ended and the stream has been fully consumed.
Is there an entry point for that?
A Stream is seen as a volatile/temporary resource; no wonder it implements IDisposable.
A Stream is also not thread-safe, since it has a Position: once it has been read to the end it must be reset back to the start, and if two threads read the same stream they will most likely read different chunks.
As such, I would not even attempt to solve this problem. Re-using streams on a web site (inherently multi-user/multi-threaded) is not recommended.
UPDATE
As I said, I still think the best option is to re-think the solution, but if you need to register something that runs after the request finishes, use RegisterForDispose on the request:
public HttpResponseMessage Get(HttpRequestMessage req, int fileId)
{
    ....
    req.RegisterForDispose(myStream);
}

ASP.NET MVC2 AsyncController: Does performing multiple async operations in series cause a possible race condition?

The preamble
We're implementing a MVC2 site that needs to consume an external API via https (We cannot use WCF or even old-style SOAP WebServices, I'm afraid). We're using AsyncController wherever we need to communicate with the API, and everything is running fine so far.
Some scenarios have come up where we need to make multiple API calls in series, using results from one step to perform the next.
The general pattern (simplified for demonstration purposes) so far is as follows:
public class WhateverController : AsyncController
{
    public void DoStuffAsync(DoStuffModel data)
    {
        AsyncManager.OutstandingOperations.Increment();
        var apiUri = API.getCorrectServiceUri();
        var req = new WebClient();
        req.DownloadStringCompleted += (sender, e) =>
        {
            AsyncManager.Parameters["result"] = e.Result;
            AsyncManager.OutstandingOperations.Decrement();
        };
        req.DownloadStringAsync(apiUri);
    }

    public ActionResult DoStuffCompleted(string result)
    {
        return View(result);
    }
}
We already have several Actions that perform API calls in parallel working just fine; we simply issue multiple requests and ensure that we increment AsyncManager.OutstandingOperations correctly.
The scenario
To perform multiple API service requests in series, we presently are calling the next step within the event handler for the first request's DownloadStringCompleted. eg,
req.DownloadStringCompleted += (sender, e) =>
{
AsyncManager.Parameters["step1"] = e.Result;
OtherActionAsync(e.Result);
AsyncManager.OutstandingOperations.Decrement();
}
where OtherActionAsync is another action defined in the same controller, following the pattern shown above.
The question
Can calling other async actions from within the event handler cause a possible race when accessing values within AsyncManager?
I tried looking around MSDN, but all of the commentary about AsyncManager.Sync() concerned the BeginMethod/EndMethod pattern with IAsyncResult. In that scenario, the documentation warns about potential race conditions.
We don't need to actually call another action within the controller, if that is off-putting. The code to build another WebClient and call .DownloadStringAsync() on it could just as easily be placed within the event handler of the first request; I have only shown it like that here to make it slightly easier to read.
Hopefully that makes sense! If not, please leave a comment and I'll attempt to clarify anything you like.
Thanks!
It turns out the answer is "No".
(For future reference, in case anyone comes across this question via a search.)