Close RESTEasy client after a certain delay - redhat

I'm trying to close a RESTEasy client after a certain delay (e.g. 5 seconds), and it seems the configuration I'm currently using is not working at all.
HttpClient httpClient = HttpClientBuilder.create()
.setConnectionTimeToLive(5, TimeUnit.SECONDS)
.setDefaultRequestConfig(RequestConfig.custom()
.setConnectionRequestTimeout(5 * 1000)
.setConnectTimeout(5 * 1000)
.setSocketTimeout(5 * 1000).build())
.build();
ApacheHttpClient43Engine engine = new ApacheHttpClient43Engine(httpClient, localContext);
ResteasyClient client = new ResteasyClientBuilder().httpEngine(engine).build();
According to the documentation, ConnectionTimeToLive should close the connection regardless of whether a payload is still being transferred.
Here is the link:
https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/developing_web_services_applications/index#jax_rs_client
In my specific case there is sometimes some latency and the payload is sent in chunks (each arriving within the socketTimeout interval), so the connection is kept alive and the client can end up staying active for hours.
My main goal is to kill the client and release the connection, but I feel there is something I'm missing in the configuration.
I'm using WireMock to replicate this specific scenario by sending the payload in chunks:
.withChunkedDribbleDelay
Any clue about the configuration?

You may try using .withFixedDelay(60000) instead of .withChunkedDribbleDelay().
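The difference matters because of how the two delays interact with socketTimeout: withFixedDelay holds back the entire response, so nothing arrives within the 5-second socketTimeout and the client aborts the request, while withChunkedDribbleDelay keeps dribbling bytes inside the socketTimeout window, which is exactly what keeps the connection alive. A sketch of the two stubs (the URL paths and body are placeholders):
import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Holds back the whole response for 60 s: no bytes arrive, so the client's
// 5 s socketTimeout fires and the connection is released.
stubFor(get(urlEqualTo("/slow"))
    .willReturn(aResponse()
        .withStatus(200)
        .withBody("some payload")
        .withFixedDelay(60000)));

// Dribbles the same body in 10 chunks over 60 s: each chunk arrives well
// within socketTimeout, so the connection stays open.
stubFor(get(urlEqualTo("/slow-chunked"))
    .willReturn(aResponse()
        .withStatus(200)
        .withBody("some payload")
        .withChunkedDribbleDelay(10, 60000)));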

Related

Vert.x request does not end when sendFile throws

I'm new to Vert.x and I'm trying to create a simple download service.
I used Response#sendFile(fileName) and it works well, but if I pass a directory path to it, it throws an exception, which is totally fine.
The problem is that, even if I catch that exception with a handler, I can't send any data or end the response, and that leaves the HTTP client (the browser) stuck with an endless spinning progress indicator.
Here is an example that reproduces the problem:
VertxOptions options = new VertxOptions();
options.setBlockedThreadCheckInterval(1000 * 60 * 60);
Vertx vertx = Vertx.vertx(options);
HttpServer server = vertx.createHttpServer();
Router router = Router.router(vertx);
router
    .route(HttpMethod.GET, "/foo")
    .handler(ctx -> {
        // this path exists, but it is a directory, not a file
        ctx.response().sendFile("docs/pdf", asr -> {
            if (asr.failed()) {
                ctx.response()
                    .setStatusCode(404)
                    // I can't end the connection; the only thing I can do is close it.
                    // I've commented out this lambda because it is not what I want to happen,
                    // it's just a hack to end the request all the same.
                    .end("File not found: " + "docs/pdf" /*, (x) -> { ctx.response().close(); } */);
            }
        });
    });
server
    .requestHandler(router)
    .listen(3000);
I can avoid this problem by first checking that the path references a file which exists and is not a directory (which is in fact what I did in the real code), but that leaves me in doubt about what would happen if the IOException were something different (like reading a broken file, or an unauthorized file ...).
When this error happens no data is sent over the wire; I've checked both from the browser and by sniffing TCP packets (0 bytes sent from the server to the browser).
The only thing that works is closing the connection with Response#close(), which at least closes the keep-alive HTTP connection and ends the browser request.
What I want to achieve is to send some information back to the client to signal that something went wrong, ideally setting the status code to an appropriate 4xx error and possibly adding some details (either in the status text or in the response body).
You should add a failureHandler to your route:
route.failureHandler(frc -> {
    frc.response().setStatusCode(400).end("Sorry! Not today");
});
see https://vertx.io/docs/vertx-web/java/#_error_handling
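For example, reusing the route from the question, you can hand the failure to the router with ctx.fail(...) and let the failureHandler build the error response. This is a minimal sketch; the 404 status and the message text are just illustrative choices:
router
    .route(HttpMethod.GET, "/foo")
    .handler(ctx -> {
        ctx.response().sendFile("docs/pdf", asr -> {
            if (asr.failed()) {
                // don't try to end the response here; signal the failure to the router
                ctx.fail(asr.cause());
            }
        });
    })
    .failureHandler(frc -> {
        // invoked for any failure signalled on this route
        frc.response()
            .setStatusCode(404)
            .end("File not found: " + frc.failure().getMessage());
    });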

Why does Akka HTTP close the user connection when multiple messages are produced?

I have a simple WebSocket application based on Akka HTTP/Reactive Streams, like this one: https://github.com/calvinlfer/akka-http-streaming-response-examples/blob/master/src/main/scala/com/experiments/calvin/ws/WebSocketRoutes.scala#L82.
In other words, I have Sink, Source (which is produced from Publisher), and the Flow:
Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
When I produce more than 30 messages per second to the client, Akka closes the connection.
I cannot understand where the setting is that configures this behaviour. I know about OverflowStrategy, but I don't explicitly configure it.
It seems that I have OverflowStrategy.fail(), or at least my problem looks like that.
You can tune the internal buffers.
There are two ways to do it:
1) application.conf:
akka.stream.materializer.max-input-buffer-size = 1024
2) You can configure it explicitly for your Flow:
Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
.addAttributes(Attributes.inputBuffer(initial = 1, max = 1024))
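If the stream really is failing because of an overflow, another option is to attach an explicit buffer with a chosen OverflowStrategy to the outgoing Source, so that a burst of messages is dropped (or backpressured) instead of failing the stage and closing the connection. Below is a minimal sketch, shown with the Java DSL purely for illustration; the WsFlows class, the publisher parameter, the plain-string messages and the buffer size are assumptions, not code from the question:
import akka.NotUsed;
import akka.http.javadsl.model.ws.Message;
import akka.http.javadsl.model.ws.TextMessage;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import org.reactivestreams.Publisher;

public class WsFlows {
    // Builds a WebSocket flow whose outgoing side buffers up to 1024 messages
    // and drops the oldest ones instead of failing the stream.
    public static Flow<Message, Message, NotUsed> wsFlow(Publisher<String> publisher) {
        Source<Message, NotUsed> outgoing =
            Source.fromPublisher(publisher)
                .map(text -> (Message) TextMessage.create(text))
                .buffer(1024, OverflowStrategy.dropHead());

        Sink<Message, ?> incoming = Sink.<Message>foreach(msg -> { /* handle client messages */ });

        return Flow.fromSinkAndSource(incoming, outgoing);
    }
}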

Play Framework WS library stream method stops after 2 minutes on a connection

I am using Play 2.6 and the Twitter streaming API.
Below is how I connect to Twitter using the WS library's stream() method.
The problem is that the stream always stops after exactly 2 minutes. I tried different topics and the behavior is pretty consistent.
It seems there is a setting responsible for this, but I could not find where.
I am not sure whether it's on the Play side or the Twitter side.
Any help is greatly appreciated.
ws.url("https://stream.twitter.com/1.1/statuses/filter.json")
.sign(OAuthCalculator(ConsumerKey(credentials._1, credentials._2), RequestToken(credentials._3, credentials._4)))
.withQueryStringParameters("track" -> topic)
.withMethod("POST")
.stream()
.map {
response => response.bodyAsSource.map(t=> {t.utf8String})
}
Play WS has a default request timeout, which is exactly 2 minutes.
Here is a link to the docs:
https://www.playframework.com/documentation/2.6.x/ScalaWS#Configuring-Timeouts
So you can put a line like this in your application.conf:
play.ws.timeout.request = 10 minutes
to specify the default timeout for all your requests.
You can also specify the timeout for a single request using the withRequestTimeout method of the WSRequest builder:
/**
* Sets the maximum time you expect the request to take.
* Use Duration.Inf to set an infinite request timeout.
* Warning: a stream consumption will be interrupted when this time is reached unless Duration.Inf is set.
*/
def withRequestTimeout(timeout: Duration): WSRequest
So to disable the request timeout for a single request you can use the following code:
ws.url(someurl)
.withMethod("GET")
.withRequestTimeout(Duration.Inf)

Node.js WebSocket Close Event Called... Eventually

I've been having some problems with the code below, which I've pieced together. All the events work as advertised; however, when a client drops offline without first disconnecting, the close event doesn't get called right away. If you give it a minute or so, it will eventually get called. Also, I find that if I continue to send data to the client, it picks up the close event faster, but never right away. Lastly, if the client gracefully disconnects, the end event is called just fine.
I understand this is related to the other listener events like upgrade and ondata.
I should also state that the client is an embedded device.
Client HTTP request:
GET /demo HTTP/1.1\r\n
Host: example.com\r\n
Upgrade: Websocket\r\n
Connection: Upgrade\r\n\r\n
// nodejs server (I'm using version 6.6)
var http = require('http');
var net = require('net');
var sys = require("util");

var srv = http.createServer(function (req, res) {
});

srv.on('upgrade', function (req, socket, upgradeHead) {
    socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
        'Upgrade: WebSocket\r\n' +
        'Connection: Upgrade\r\n' +
        '\r\n\r\n');
    sys.puts('upgraded');

    socket.ondata = function (data, start, end) {
        socket.write(data.toString('utf8', start, end), 'utf8'); // echo back
    };

    socket.addListener('end', function () {
        sys.puts('end'); // works fine
    });

    socket.addListener('close', function () {
        sys.puts('close'); // eventually gets here
    });
});

srv.listen(3400);
Can anyone suggest a solution to pick up an immediate close event? I am trying to keep this simple without the use of modules. Thanks in advance.
The close event will be called once the TCP socket connection is closed by one end or the other, with a few complications in the rare cases where the system does not realise the socket has already been closed. As WebSockets start from an HTTP request, the server might simply keep the connection alive until it times out the socket, and that is what introduces the delay.
In your case you are trying to perform the handshake and then send data back and forth, but WebSockets are a more complex process than that.
The handshake requires a security procedure to validate both ends (server and client), carried out via HTTP-compatible headers. But the different draft versions supported by different platforms and browsers implement it in different ways, so your implementation should take this into account as well and follow the official WebSockets specification for the versions you need to support.
Then, the data sent and received via WebSockets is not plain strings. Actual data sent over the WebSocket protocol goes through a data-framing layer, which involves adding a header to each message you send. This header carries details about the message you are sending: masking (from client to server), length and many other things. Data framing again depends on the WebSockets version, so implementations will vary slightly.
I would encourage you to use existing libraries, as they already implement everything you need in a nice and clean manner and have been used extensively across commercial projects.
As your client is an embedded platform and your server, I assume, is Node.js, it is easy to use the same library on both ends.
The best fit here would be ws - an actual, pure WebSockets implementation.
Socket.IO is not good for your case, as it is a much more complex and heavy library that supports multiple protocols with fallbacks and adds abstraction that might not be what you are looking for.

CXF JAXRS client not reusing TCP connections

I'm using the JAX-RS support in CXF 2.2.5 to invoke REST web services. I'm creating a single org.apache.cxf.jaxrs.client.WebClient instance for each endpoint I need to communicate with (typically one or two endpoints for any given deployment) and re-using this client for each web service invocation.
The problem I face is that the client is creating new TCP connections to the server for each request, despite using the keep-alive setting. At high traffic levels, this is causing problems. An excerpt from my client code is below.
I'm trying to dig through the CXF source to identify the problem but getting hopelessly lost at present. Any thoughts greatly appreciated.
Thanks,
FB
ConcurrentMap<String, WebClient> webclients = new ConcurrentHashMap<String, WebClient>();

public void dispatchRequest(MyRequestClass request, String hostAddress) {
    // Fetch or create the web client if we don't already have one for this hostAddress
    // NOTE: WebClient is only thread-safe if not changing the URI or headers between calls!
    // http://cxf.apache.org/docs/jax-rs-client-api.html#JAX-RSClientAPI-ThreadSafety
    WebClient client = webclients.get(hostAddress);
    if (client == null) {
        String serviceUrl = APP_HTTP_PROTOCOL + "://" + hostAddress + ":" + APP_PORT + "/" + APP_REQUEST_PATH;
        WebClient newClient = WebClient.create(serviceUrl).accept(MediaType.TEXT_PLAIN);
        client = webclients.putIfAbsent(hostAddress, newClient);
        if (client == null) {
            client = newClient;
        } // Else, another thread must have added the client in the meantime - that's fine if so.
    }
    XStream marshaller = MyCollection.getMarshaller();
    String requestXML = marshaller.toXML(request);
    Response response = null;
    try {
        // Send it!
        response = client.post(requestXML);
    }
    catch (Exception e) {
    }
    ...
}
In your sample code you get a JAX-RS Response, whose getEntity() method will return an InputStream by default. Since CXF is not responsible for consuming that stream, it is left open.
If you don't explicitly close it, it will only be closed during a garbage collection phase.
But even so, under high traffic rates, that little latency prevents the underlying HTTP connection from being reinserted into the internal pool of persistent connections used by HttpURLConnection (which CXF uses under the bonnet), so it cannot be reused in time.
If you take care of closing the InputStream, you should not see a large number of TIME_WAIT sockets anymore.
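For example, a minimal sketch based on the try block in the question (the buffer size and the discard loop are just illustrative; read the body however your code needs to):
try {
    // Send it!
    response = client.post(requestXML);

    // Consume and close the entity stream so the underlying connection can be
    // returned to HttpURLConnection's keep-alive pool and reused.
    InputStream entityStream = (InputStream) response.getEntity();
    try {
        byte[] buffer = new byte[4096];
        while (entityStream.read(buffer) != -1) {
            // process (or simply discard) the response body
        }
    } finally {
        entityStream.close();
    }
}
catch (Exception e) {
}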
I would definitely try updating to a newer and supported version of CXF. There have been a LOT of updates to the JAX-RS stuff in the newer versions of CXF and this issue may already be fixed.