Jersey as REST client, maintaining a connection (or validating connection)

I am getting started with Jersey 2.1, which I want to use as a client to make REST calls to someone else's web service.
I have been working through the tutorials, and I think I understand how to open a connection, and make calls to the web service.
The question I have is, since my service will persist, and have to process events when they happen, how do I manage and maintain session connectivity?
I have been trying to understand if I need to:
Close connections? This does not seem to be discussed, so are they implicitly auto-closed after making a call?
If not auto-closed, can I check the state to see if a Connection is still valid?

The underlying connections are opened for each request and closed after the response is received and the entity is processed (i.e. the entity is read).
final WebTarget target = ... some web target
Response response = target.path("resource").request().get();
System.out.println("Connection is still open.");
System.out.println("string response: " + response.readEntity(String.class));
System.out.println("Now the connection is closed.");
If you don't read the entity, you need to close the response manually with response.close(). Also, if the entity is read into an input stream (via response.readEntity(InputStream.class)), the connection stays open until you finish reading from the InputStream. In that case, the InputStream or the Response should be closed manually once you are done reading.
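For illustration, here is a minimal sketch of that manual-close case using the JAX-RS 2.x client API (the endpoint URL and resource path are made up):

import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class StreamingGet {
    public static void main(String[] args) throws IOException {
        Client client = ClientBuilder.newClient();
        Response response = client.target("http://example.com/api") // hypothetical endpoint
                                  .path("resource")
                                  .request()
                                  .get();
        try {
            // Reading the entity as an InputStream keeps the underlying connection open...
            InputStream in = response.readEntity(InputStream.class);
            byte[] buffer = new byte[8192];
            while (in.read(buffer) != -1) {
                // process the bytes
            }
        } finally {
            // ...so close the response (which also closes the stream) once you are done.
            response.close();
            client.close();
        }
    }
}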

Based on "How To Use Jersey Client Efficiently", closing the response is one of the best practices for maintaining the performance of your application from the client-side point of view. On a related note, it also helps to close the client itself once you are done with it.
Key points:
Close the response after collecting/reading all the information needed in your client code.
Close the client after completing all the CRUD operations in your client service class. This is especially important when you get some kind of connection or service-endpoint related exception during the service call.
It's also worth mentioning connection and read timeouts.
Example:
client.setConnectTimeout(10000); // in milliseconds
client.setReadTimeout(60000);    // in milliseconds
Hope this helps.
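One note: the setConnectTimeout/setReadTimeout setters shown above come from the Jersey 1.x Client. Since the question is about Jersey 2.x, the equivalent (to the best of my knowledge) is to set ClientProperties on the client, and the close-everything advice above then looks roughly like this sketch (endpoint made up):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;
import org.glassfish.jersey.client.ClientProperties;

public class Jersey2TimeoutExample {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient()
                .property(ClientProperties.CONNECT_TIMEOUT, 10000)  // milliseconds
                .property(ClientProperties.READ_TIMEOUT, 60000);    // milliseconds
        try {
            Response response = client.target("http://example.com/api").request().get();
            try {
                System.out.println(response.readEntity(String.class));
            } finally {
                response.close();   // close the response after reading what you need
            }
        } finally {
            client.close();         // close the client once all calls are finished
        }
    }
}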

Related

When to close HTTP Client in Flutter app?

My Flutter mobile app communicates with my back-end server. The docs say it's better to use the Client class (IOClient) than the plain get, put, etc. methods, in order to maintain persistent connections across multiple requests to the same server.
Docs also say that:
It's important to close each client when it's done being used; failing
to do so can cause the Dart process to hang.
I don't understand when I need to close the client, because almost all app screens require an HTTP connection to the same server. What's the best practice here?
Update:
Is it OK to close the Client only before the app is terminated, or should I close it every time the app is hidden (goes to the paused state)?
I personally think that closing the client after each user action is the best practice.
What I call a "user action" can consist of multiple API requests.
So I think the best approach is something like this:
// Assumes: import 'dart:convert'; and import 'package:http/http.dart' as http;
var client = http.Client();
try {
  var response = await client.post(
      Uri.https('my-api-site.com', 'users/add'),
      body: {'firstname': 'Alain', 'Lastname': 'Deseine'});
  var decoded = jsonDecode(utf8.decode(response.bodyBytes)) as Map;
  ...
  // Add here every API request that you need to complete the user's action.
} finally {
  // Then finally destroy the client.
  client.close();
}
Don't close the HTTP Client
For some of you it may sound odd, but the solution is as simple as not doing that.
Why
In most cases, the HTTP Client should be available for the whole app runtime. App resources are also disposed automatically when the app is closed by the user, so in most cases we don't need to handle the disposal of the HTTP Client ourselves.
When to dispose an HTTP Client?
Only if we want to run a limited, one-time, predictable session of HTTP requests. In that case, you can dispose of the Client in many different ways (depending on your state management, or on the lifecycle event you want to use to trigger the disposal).
The dispose() function is common to all packages that handle cache and local resources. The documentation mentions that option, but it does not suggest you use it in every scenario. It should be handled in very specific scenarios only.
So for most of you, just don't dispose of the HTTP Client.
Keep connections atomic per server interaction.
almost all app screens require HTTP connection to the same server
It's one thing for all screens to make HTTP calls; it's another to have constant, high-frequency interactions with the server.
Let's say we have a multiplayer app that needs to communicate with the server every second. If that were the case, leaving the client open would be critical, even though you would have the architectural consequence that the Dart process could hang. That might mean Dart is not the best solution for a high-traffic, always-connected app.
To my understanding, your app is not that case. You don't need to worry about leaving the connection open constantly, so you can simply open and close it each time you need it without paying a high performance price.
It should be seamless to the user if you are opening a connection each time you consume your API.
Having said this, here are some other insights on this topic:
A large number of clients connected to the server, even when not active, may consume memory or other resources (for example, if there is one thread per connection). On the other hand, keeping the connection open allows the client to detect a connection problem with the server much faster (if that even matters). Extracted from this other thread.
Hopefully this helps you make a better decision for your use case.
In terms of network traffic, it's better to use the same client throughout the app lifecycle; establishing a new connection for each API call is very expensive. However, as per the documentation,
It's important to close each client when it's done being used; failing to do so can cause the Dart process to hang.
Also, if client.close() isn't called, it doesn't mean that the server will keep the connection open forever. The server will close the connection if it is idle for longer than the HTTP keep-alive timeout. In that case, if the client sends a new request over a connection the server has already closed, it will get a 408 Request Timeout.
So, if you decide to use the same client throughout the app lifecycle, keep in mind the two possible issues that you may run into.

How does Play framework track the blocked client and returns the response

The question is about the Play framework specifically, although the concept is generic. I guess that the blocked client is listening on a socket which is tracked on the server side and passed around with the Future[Result], so that when the Future finishes, the response is written to the socket and the socket is closed.
Can someone share more concrete explanation with references?
Quoting from:
https://www.playframework.com/documentation/2.6.18/ScalaAsync
The web client will be blocked while waiting for the response, but
nothing will be blocked on the server, and server resources can be
used to serve other clients.
Note that Play does not manage how to address the client. This is managed by TCP. Basically (as a simple analogy) you can think of a client, like a web browser, as making a telephone call to the server. When the client makes a request, one of its sockets gets connected to a particular socket on the server; this is a persistent connection between the sockets for the duration of the request/response. Play's underlying server (Netty for older versions, or Akka HTTP for v2.6+) will accept the incoming request from the socket and assign it a thread. Play will do the work, and the resulting Response will get mapped back to the correct socket by the server. The TCP server manages the mapping between the response and the socket, not Play.
As others have noted, the reference to blocking is essentially to do with the way Play Actions are intended to work (non-blocking). They take the request, wrap whatever work you have coded in a Future, and hand this off to be completed at some point in the near future (it might be a different thread that completes the Future, or it could even end up being the same thread). The point is that the creation of the Future is quick, so the thread that handled the request is returned quickly to the pool and can pick up another request to work on. If you have heard about Reactive Programming, then this is essentially the idea behind keeping an application responsive.
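As a rough illustration of that pattern (using Play's Java API here with made-up names; the Scala version in the linked docs is analogous), an asynchronous action simply returns a CompletionStage and frees the request thread immediately:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import play.mvc.Controller;
import play.mvc.Result;

public class GameController extends Controller {

    // The thread that accepted the request only builds this CompletionStage and returns;
    // whichever thread completes it later produces the Result, and the underlying server
    // maps that Result back onto the client's still-open socket.
    public CompletionStage<Result> latestMove() {
        return CompletableFuture
                .supplyAsync(() -> expensiveLookup())   // completed later, possibly on another thread
                .thenApply(move -> ok("move: " + move));
    }

    private String expensiveLookup() {
        return "e2-e4"; // placeholder for real work
    }
}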
The web client will be blocked while waiting for the response, but
nothing will be blocked on the server, and server resources can be
used to serve other clients.
So the client might be blocked on its end whilst waiting for the response to come back through its socket (unless it too is making async calls), but the idea is that the thread pool handling the requests in Play will not be blocked, because of the way they create a Future and hand its completion back to Play, so they can go back to handling other requests.
There is a bit more to it but hopefully this gives a bit more context to that particular statement from Play's docs.

Akka Http Connection Refused Error

I have a web service written in Akka where for each user I am making a POST request (to encrypt some data) as follows:
http.singleRequest(HttpRequest(method = HttpMethods.POST, uri = encryptUrl, entity = request))
However, after a few hits I am getting a ConnectionRefusedError from the encryption server. It seems to work intermittently. I looked up some potential issues but still found nothing.
Finally I found one issue suggesting that not closing a connection could cause this scenario. Does that sound right? Also, how can I close a connection in the above case in Akka HTTP?
This may be caused by not fully consuming the request/response entity. From the documentation:
Warning
Consuming (or discarding) the Entity of a request is mandatory! If
accidentally left neither consumed or discarded Akka HTTP will assume
the incoming data should remain back-pressured, and will stall the
incoming data via TCP back-pressure mechanisms. A client should
consume the Entity regardless of the status of the HttpResponse.
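In practice that means every HttpResponse coming back from singleRequest should either have its entity read or be explicitly discarded. A minimal sketch using the Akka HTTP Java DSL (the Scala API is analogous; the URL is made up and exact names may differ slightly between Akka versions):

import akka.actor.ActorSystem;
import akka.http.javadsl.Http;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.stream.Materializer;
import akka.stream.SystemMaterializer;
import java.util.concurrent.CompletionStage;

public class EncryptClient {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("client");
        Materializer mat = SystemMaterializer.get(system).materializer();

        CompletionStage<HttpResponse> futureResponse =
                Http.get(system).singleRequest(HttpRequest.POST("https://example.com/encrypt"));

        futureResponse.thenAccept(response -> {
            if (response.status().isSuccess()) {
                // consume the entity here, e.g. via response.entity().toStrict(...)
            } else {
                // not interested in the body: discard it so the connection is
                // released back to the pool instead of staying back-pressured
                response.discardEntityBytes(mat);
            }
        });
    }
}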

Better understand Read Timeout During SOAP Request/Response

I would like someone to clarify something for me:
There are two kinds of timeouts that exist during SOAP requests/responses:
1- Connection Timeout
2- Read Timeout
This applies at least to Axis1/Axis2, which I'm currently using.
The connection timeout happens when the client can't connect to the web service in question within the configured connection timeout value, which eventually results in an exception such as:
Could not connect to host within a timeout of "value".
As for the read timeout, I'm really not sure about it, and I don't know which assumption is true. Let's take a scenario, for example, in which a client sends data to a web service, which will in turn process the data, check it for sanity, insert it into the database when it is valid, and then send some data back to the client. Bottom line: we have a significant amount of processing time on the server, and significant data being sent back and forth between the client and the web service.
What I'm unable to understand is when is a read timeout exception thrown by the client?
1- Could it happen when the client is still in the process of marshaling the objects that are being sent to the web service?
2- Could it happen during the process when the web service has already started writing its response to the open socket?
I could really appreciate clear answers on this. Thanks a lot in advance.
It's much clearer to me now, thanks to the research I did. A read timeout basically occurs when the client hasn't received a single byte of data within the configured timeout. So take a scenario where a server needs to reply to a client with 4 MB of data: the read timeout is reset with every byte of data the client receives from the server.
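For what it's worth, in Axis2 both values can usually be configured on the generated stub's Options; the sketch below uses the property names from org.apache.axis2.transport.http.HTTPConstants, but treat the exact setup as an assumption and check it against your Axis version:

import org.apache.axis2.client.Options;
import org.apache.axis2.client.Stub;
import org.apache.axis2.transport.http.HTTPConstants;

public class AxisTimeoutConfig {
    // CONNECTION_TIMEOUT bounds how long we wait to establish the TCP connection;
    // SO_TIMEOUT (the read timeout) bounds how long we wait for data once connected.
    public static void configure(Stub stub) {
        Options options = stub._getServiceClient().getOptions();
        options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, Integer.valueOf(10000)); // ms
        options.setProperty(HTTPConstants.SO_TIMEOUT, Integer.valueOf(60000));         // ms
    }
}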

What is a RESTful way of monitoring a REST resource for changes?

If there is a REST resource that I want to monitor for changes or modifications from other clients, what is the best (and most RESTful) way of doing so?
One idea I've had for doing so is by providing specific resources that will keep the connection open rather than returning immediately if the resource does not (yet) exist. For example, given the resource:
/game/17/playerToMove
a "GET" on this resource might tell me that it's my opponent's turn to move. Rather than continually polling this resource to find out when it's my turn to move, I might note the move number (say 5) and attempt to retrieve the next move:
/game/17/move/5
In a "normal" REST model, it seems a GET request for this URL would return a 404 (not found) error. However, if instead, the server kept the connection open until my opponent played his move, i.e.:
PUT /game/17/move/5
then the server could return the contents that my opponent PUT into that resource. This would both provide me with the data I need, as well as a sort of notification for when my opponent has moved without requiring polling.
Is this sort of scheme RESTful? Or does it violate some sort of REST principle?
Your proposed solution sounds like long polling, which could work really well.
You would request /game/17/move/5 and the server will not send any data, until move 5 has been completed. If the connection drops, or you get a time-out, you simply reconnect until you get a valid response.
The benefit of this is that it's very quick: as soon as the server has new data, the client will get it. It's also resilient to dropped connections, and it works if the client is disconnected for a while (you could request /game/17/move/5 an hour after it's been moved and get the data instantly, then move on to /move/6 and so on).
The issue with long polling is that each "poll" ties up a server thread, which quickly breaks servers like Apache (as it runs out of worker threads, so it can't accept other requests). You need a specialised web server to serve the long-polling requests. The Python module Twisted ("an event-driven networking engine") is great for this, but it's more work than regular polling.
In answer to your comment about Jetty/Tomcat, I don't have any experience with Java, but it seems they both use a similar pool-of-worker-threads system to Apache, so they will have the same problem. I did find this post, which seems to address exactly this problem (for Tomcat).
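On the client side, a long-polling loop is essentially just "GET, and if it times out or nothing is there yet, GET again". A rough sketch in Java (the URL, timeout, and status handling are assumptions):

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class MovePoller {
    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/game/17/move/5"))
                .timeout(Duration.ofSeconds(60)) // give the server time to hold the connection open
                .GET()
                .build();

        while (true) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    System.out.println("Opponent moved: " + response.body());
                    break;
                }
                // e.g. a 404 if the server chose not to hold the connection: back off and retry
                Thread.sleep(1000);
            } catch (IOException timedOutOrDropped) {
                // the request timed out or the connection dropped: simply reconnect
            }
        }
    }
}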
I'd suggest a 404, if your intended client is a web browser, as keeping the connection open can actively block browser requests in the client to the same domain. It's up to the client how often to poll.
2021 Edit: The answer above was in 2009, for context.
Today, I would suggest using a WebSocket interface with push notifications.
Alternatively, building on the suggestion above, I might hold the connection for 500-1000 ms and check twice on the server before returning the 404, to reduce the overhead of the client creating multiple connections.
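A sketch of that "hold briefly and re-check before answering 404" idea, using the Servlet 3.0 async API so the held request doesn't pin a worker thread (the lookup method is a placeholder):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MoveServlet extends HttpServlet {   // must be registered with asyncSupported=true
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        String move = lookupMove(req.getPathInfo());
        if (move != null) {
            write(resp, move);
            return;
        }
        // Not there yet: release the request thread and re-check once after ~750 ms.
        AsyncContext ctx = req.startAsync();
        scheduler.schedule(() -> {
            HttpServletResponse asyncResp = (HttpServletResponse) ctx.getResponse();
            String later = lookupMove(req.getPathInfo());
            if (later != null) {
                write(asyncResp, later);
            } else {
                asyncResp.setStatus(HttpServletResponse.SC_NOT_FOUND);
            }
            ctx.complete();
        }, 750, TimeUnit.MILLISECONDS);
    }

    private String lookupMove(String path) {
        return null; // placeholder: look the move up in your game state store
    }

    private void write(HttpServletResponse resp, String body) {
        try {
            resp.getWriter().write(body);
        } catch (java.io.IOException ignored) {
            // the client went away; nothing more to do
        }
    }
}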
I found this article proposing a new HTTP header, "When-Modified-After", that essentially does the same thing: the server waits and keeps the connection open until the resource is modified.
I prefer a version-based approach rather than a timestamp-based approach, since it's less prone to race conditions and gives you a little more information about what it is you're retrieving. Any thoughts on this approach?