How does the Play framework track the blocked client and return the response? - scala

The question is about the Play framework specifically, although the concept is generic. My guess is that the blocked client is listening on a socket which is tracked on the server side and passed around with the Future[Result], so that when the Future finishes, the response is written to the socket and the socket is then closed.
Can someone share a more concrete explanation, with references?
Quoting from:
https://www.playframework.com/documentation/2.6.18/ScalaAsync
The web client will be blocked while waiting for the response, but
nothing will be blocked on the server, and server resources can be
used to serve other clients.

Note that Play does not manage how to address the client; this is managed by TCP. Basically (as a simple analogy) you can think of a client, like a web browser, as making a telephone call to the server. When the client makes a request, one of its sockets gets connected to a particular socket on the server - this is a persistent connection between the sockets for the duration of the request/response. Play's underlying server (Netty for older versions, or Akka HTTP for v2.6+) will accept the incoming request from the socket and assign it a thread. Play will do the work, and the resulting Response will get mapped back to the correct socket by the server. The TCP server manages the mapping between the response and the socket, not Play.
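As a toy illustration of that mapping (plain java.net sockets here, not Play or Netty, and an arbitrary port), the response is written back on the very connection the request arrived on, so no separate client-tracking bookkeeping is needed:

import java.net.ServerSocket

object TinyHttpServer extends App {
  val server = new ServerSocket(9000)
  while (true) {
    val conn = server.accept() // one persistent connection per request/response
    // ... read and parse the request from conn.getInputStream ...
    // The "mapping" from response to client is just this same socket:
    val body = "ok"
    conn.getOutputStream.write(
      s"HTTP/1.1 200 OK\r\nContent-Length: ${body.length}\r\n\r\n$body".getBytes)
    conn.close() // closed once the response has been written
  }
}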
As others have noted, the reference to blocking is essentially to do with the way Play Actions are intended to work (non-blocking). They take the request, wrap whatever work you have coded in a Future, and hand this off to be completed at some point in the near future (it might be a different thread that completes the Future, or it could even end up being the same thread). The point is that the creation of the Future is quick, so the thread that handled the request is returned quickly to the pool and can pick up another request to work on. If you have heard about Reactive Programming, this is essentially the idea behind keeping an application responsive.
The web client will be blocked while waiting for the response, but
nothing will be blocked on the server, and server resources can be
used to serve other clients.
So the client might be blocked on its end whilst waiting for the response to come back through its socket (unless it too is making async calls), but the idea is that the thread pool handling the requests in Play will not be blocked, because of the way they create a Future and hand its completion back to Play so they can go back to handling other requests.
There is a bit more to it but hopefully this gives a bit more context to that particular statement from Play's docs.
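As a minimal sketch of such a non-blocking Action (Play 2.6 style; the controller and action names here are hypothetical):

import javax.inject.Inject
import scala.concurrent.{ExecutionContext, Future}
import play.api.mvc.{AbstractController, ControllerComponents}

class AsyncController @Inject()(cc: ControllerComponents)
                               (implicit ec: ExecutionContext)
    extends AbstractController(cc) {

  // The request-handling thread only creates the Future and returns to the
  // pool; whichever thread later completes the Future produces the Result,
  // which the server then writes back to the client's socket.
  def compute = Action.async {
    Future {
      val answer = (1 to 1000000).sum // stand-in for the real work
      Ok(s"answer: $answer")
    }
  }
}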

Related

When to close HTTP Client in Flutter app?

My Flutter mobile app communicates with my back-end server. The docs say it's better to use the Client class (IOClient) than the plain get, put, etc. methods, to maintain persistent connections across multiple requests to the same server.
Docs also say that:
It's important to close each client when it's done being used; failing
to do so can cause the Dart process to hang.
I don't understand when I need to close the client, because almost all app screens require an HTTP connection to the same server. What's the best practice here?
Update:
Is it OK to close the Client only before the app is terminated, or should I close it every time the app is hidden (goes to the paused state)?
I personally think that closing the client after each user action is the best practice.
What I call a "user action" can consist of multiple API requests.
So I think the best is something like this:
import 'dart:convert';
import 'package:http/http.dart' as http;

var client = http.Client();
try {
  var response = await client.post(
      Uri.https('my-api-site.com', 'users/add'),
      body: {'firstname': 'Alain', 'Lastname': 'Deseine'});
  var result = jsonDecode(utf8.decode(response.bodyBytes)) as Map;
  // ... use `result` ...
  // Add here every API request that you need to complete the user's action
} finally {
  // Then finally destroy the client.
  client.close();
}
Don't close the HTTP Client
For some of you it may sound odd, but the solution is as simple as not doing that.
Why
In most cases, the HTTP Client should be available for the whole app runtime. Also, app resources are disposed of automatically when the app is closed by the user. For that reason, in most cases, we don't need to handle the disposal of the HTTP Client.
When to dispose of an HTTP Client?
Only if we want to run a limited, one-time, predictable session of HTTP requests. In that case, you can dispose of the Client in many different ways (depending on your state management, or the lifecycle event that you want to trigger the disposal).
A dispose() function is common to packages that handle caches and local resources. The documentation mentions that option, but it does not suggest you use it in every scenario; it should be used in very specific scenarios only.
So, for most of you: just don't dispose of the HTTP Client.
Keep connections atomic per server interaction.
almost all app screens require HTTP connection to the same server
It's one thing for all screens to make HTTP calls; it's another to have constant, high-frequency interactions with the server.
Let's say we have a multiplayer app that needs to communicate with the server every second. In that case, leaving the client open would be critical, even though it carries the architectural consequence that the Dart process may hang. That would suggest Dart may not be the best fit for an app with very high server traffic.
To my understanding, your app is not that case. You don't need to worry about keeping the connection open constantly, so you can simply open and close it each time you need it, without paying a high performance price.
It should be seamless to the user if you open a connection each time you consume your API.
Having said this, here are some other insights on this topic:
A large number of clients connected to the server, even when not active, may consume resources such as memory or objects (for example, if there is one thread per connection). On the other hand, keeping the connection open will allow the client to detect a connection problem to the server much faster (if that even matters). Extracted from this other thread.
Hopefully this will help you, given your use case, make a better decision.
In terms of network traffic, it's better to use the same client throughout the app lifecycle. Establishing a new connection for each API call is very expensive. However, as per the documentation:
It's important to close each client when it's done being used; failing to do so can cause the Dart process to hang.
Also, even if client.close() isn't called, that doesn't mean the server will keep the connection open forever. The server will close the connection if it is idle for longer than the HTTP Keep-Alive timeout. In that case, if the client sends a new request over a connection the server has closed, it will get a 408 Request Timeout.
So, if you decide to use the same client throughout the app lifecycle, keep in mind these two possible issues that you may run into.

Websocket vs REST when sending data to server

Background
We are writing a Messenger-like app. We have set up WebSockets for Inbox and Chat.
Question
My question is simple: what are the advantages and disadvantages of sending data from client to server using REST instead of WebSockets? (I am not interested in updates now.)
We know that REST has higher overhead in terms of message sizes and that WS is duplex (thus open all the time). What other things have we not kept in mind?
Here's a summary of the tradeoffs I'm aware of.
Reasons to use webSocket:
You need/want server-push of data.
You are sending lots of small pieces of data from client to server and doing it very regularly. Using webSocket has significantly less overhead per transmission.
Reasons to use REST:
You want to use server-side frameworks or modules that are built for REST, not for webSocket (such as auth, rate limiting, security, streaming, etc...).
You aren't sending data very often from client to server and thus the server-side burden of keeping a webSocket connection open all the time may lessen your server scalability.
You want your client to run in places where a long-connected webSocket during inactive periods of time may not be practical (perhaps mobile).
You want your client to run in old browsers that don't support webSocket.
You want the browser to enforce same-origin restrictions (those are enforced for REST Ajax calls, but not for webSocket connections).
You don't want to have to write code that detects when the webSocket connection has died and then auto-reconnects, handles back-offs, handles mobile battery usage issues, etc...
You need to run in situations where there are proxies or other network infrastructure that may not support long running webSocket connections.
If you want request/response built in: REST is request/response; webSocket is not - it's message based. Responses from a webSocket are done by sending a message back, and that message is not, by itself, a response to any specific request - it's just data being sent back. If you want request/response with webSocket, then you have to build some infrastructure yourself where you tag an id into a request and the response for that particular request carries that specific id (see the sketch after this list). Otherwise, if there are ever multiple requests in flight at the same time, you don't know which response belongs with which request, because all the data is being sent over the same connection and you would have no way of matching a response with its request.
If you want other clients to be able to carry out this operation via an Ajax call.
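For the request/response point above, here is a minimal sketch of that id-tagging infrastructure in Scala; the actual socket calls are abstracted away (`sendFrame` stands in for your webSocket library's send, and you would wire `onReply` into its message callback after parsing the frame):

import java.util.UUID
import scala.collection.concurrent.TrieMap
import scala.concurrent.{Future, Promise}

object WsRequestResponse {
  // Requests in flight, keyed by the id tagged into each outgoing message.
  private val pending = TrieMap.empty[String, Promise[String]]

  def request(payload: String, sendFrame: String => Unit): Future[String] = {
    val id = UUID.randomUUID().toString
    val promise = Promise[String]()
    pending.put(id, promise)
    sendFrame(s"""{"id":"$id","payload":"$payload"}""") // tag the id into the request
    promise.future
  }

  // Complete the matching promise when a reply carrying that id comes back.
  def onReply(id: String, body: String): Unit =
    pending.remove(id).foreach(_.success(body))
}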
So, if you already have a webSocket implementation, don't have any problems with it that would be lessened by REST, and aren't interested in any of the reasons that REST might be better, then stick with your webSocket implementation.
Related references:
websocket vs rest API for real time data?
Ajax vs Socket.io
Adding comments per your request:
It sounds like you're expecting someone to tell you the "right" way to do it. There are reasons to pick one way over the other. If none of those reasons compels you one way or the other, then it's just an architectural choice: you must take in the whole context of what you are doing and decide which choice makes more sense to you. If you already have a reliably established webSocket connection, and none of the advantages of REST apply to your situation, then you can optimize for "efficiency" and send your data to the server over the webSocket connection.
On the other hand, if you wanted a simple API on your server that could be reached with an Ajax call from other clients, then you'd want your server to support this operation via REST, so it would be simplest for those other clients to carry out this one operation. It all depends upon which direction your requirements drive you; if there is no particular driving reason to go one way or the other, you just make an architectural choice yourself.

Server-to-server websockets for a long running background service

I'm working on a tool that involves clients/users installing a middleware layer in their application web server stack. This middleware (think Rack for Ruby, or Express for Node) would need to communicate back to my central server for status updates.
Now, I can have it just do a GET every now and then in order to get the latest status, but it occurs to me that it might be cool to use a websocket in order to open a persistent connection. That way, it doesn't have to do any periodic polling, instead just keeping alive a websocket. When a status change occurs, I send an update down the websocket and the client receives it instantly.
Assuming I've got a stack that can handle tons of idle websocket connections, is this a horrible use of websockets? I know they're traditionally used for server-browser communication, but it seems like they could also be useful for behind-the-scenes server-to-server calls as well.
Would this be better implemented as more generic TCP socket communication instead of a websocket, using something like ZeroMQ? I guess I don't have much experience at the socket layer, and REST/websockets are a lot more familiar to me.

ZeroMQ pattern for multiple asynchronous requests to single endpoint

I'm using zmq to develop a distributed application with the following network topology: a client node that initiates requests and a server node that replies to them. Since the client is a node.js application, I can't block after a send call to wait for the response, so the scenario is that the client could emit multiple send calls to the same endpoint. On the other side, the server is a mobile application that processes one request at a time in one thread, blocking if there are no requests.
If this configuration sounds odd, I'm trying to build a sort of RPC initiated by the server to mobile.
I thought of using a DEALER socket client side and a REP socket server side. From the zmq guide, about the DEALER/REP combination:
This gives us an asynchronous client that can talk to multiple REP servers. If we rewrote the "Hello World" client using DEALER, we'd be able to send off any number of "Hello" requests without waiting for replies.
Can it be applied to asynchronous client that can talk to one single server? And could it be a good choice? If not which pattern should I use?
Can it be applied to asynchronous client that can talk to one single server? And could it be a good choice?
REQ/REP is not recommended for traffic going over the Internet. The socket can potentially get stuck in a bad state.
DEALER/REP is for a DEALER client talking to multiple REP servers, so it does not apply to your use case.
If not which pattern should I use?
In your case, it seems to me that the traditional DEALER/ROUTER combination is the way to go. What I usually do is prepend my messages with a "tag frame", i.e. a frame that contains a UUID of some sort, which allows me to identify each request (and its reply) at the application level.
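As a minimal DEALER-side sketch of that tag-frame idea (in Scala with the JeroMQ bindings rather than node.js, and a hypothetical endpoint; the framing is the same):

import java.util.UUID
import org.zeromq.{SocketType, ZContext}

object DealerClient extends App {
  val ctx    = new ZContext()
  val dealer = ctx.createSocket(SocketType.DEALER)
  dealer.connect("tcp://server.example.com:5555")

  // Prepend a "tag frame": a UUID that lets the application match each
  // reply to its request, since replies may arrive in any order.
  val requestId = UUID.randomUUID().toString
  dealer.sendMore(requestId)
  dealer.send("do-some-work")

  // The server (ROUTER side) echoes the tag frame back with its reply.
  val replyId = dealer.recvStr()
  val reply   = dealer.recvStr()
  println(s"reply for request $replyId: $reply")
  ctx.close()
}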

Why are Lift 3 round trips doing 2 kinds of HTTP request?

I am using Lift 3 round trips and I am trying to understand what happens behind the scenes.
Why are there 2 kinds of request:
GET on comet_request
POST on ajax_request
Lift uses HTTP long polling for asynchronous responses to the browser. I won't go into great detail on why the Lift developers have chosen long polling over other implementations, like WebSockets, but there are well-thought-out reasons, and if you're interested just do a quick search through the Lift mailing list, where it's been discussed many times.
The gist of how it works is that the browser makes a request to the server, and the server holds the request open until there is information to send. When information becomes available, it gets pushed down the pipe, the browser processes it, and the browser initiates a new long poll request. Lift uses the servlet container's asynchronous support to hold the connection open with very little resource consumption, and because Javascript is asynchronous by nature, waiting on new information is not resource intensive for the browser either. Since there is a limit on the number of requests a browser can make to the same domain at once, Lift only opens one of these long poll connections at a time and multiplexes responses from what could be many different "responders" through it.
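This is not Lift's actual implementation, but the general shape of such a long-poll loop, sketched client-side in Scala with java.net.http against the comet_request endpoint mentioned in the question (host and port are made up):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object LongPollLoop extends App {
  val client = HttpClient.newHttpClient()
  val poll = HttpRequest.newBuilder(URI.create("http://localhost:8080/comet_request"))
    .GET()
    .build()
  while (true) {
    // The server holds this request open until it has something to push.
    val resp = client.send(poll, HttpResponse.BodyHandlers.ofString())
    println(s"pushed data: ${resp.body}")
    // Re-issue the poll right away so the channel is almost always open.
  }
}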
Initially Lift's asynchronous support was added so that data generated by server side events could be pushed to the client as they occurred. With the growth in popularity of client side frameworks, the ability to push asynchronous data initiated by client events became useful, hence the addition of round trips. The idea is that the client makes a request to the server, and rather than respond immediately, the server does some stuff in another thread then sends a response (potentially much) later. To users of the client side API, this is modeled as a promise, but behind the scenes what happens is that Lift receives the request and responds immediately (remember, we can't have too many requests open to the same domain) but will stream the actual data that satisfies the promise through the long polling connection when it becomes available.
So, that's what you're seeing. Your initial request is the ajax POST, which triggers the beginning of a round trip. If you were to look at the data returned by that request, you'd see that it's not the data that satisfies the promise. The actual response data is delivered via Lift's long polling mechanism, and that is what you see with the GET request.