Take the server-streaming gRPC API as an example. I'd like to understand whether the StreamObserver that the client creates and sends as part of the initial request to the server actually gets sent over the wire for the server to then invoke directly, or whether what actually happens is that the client registers the observer locally and then gets notified to call onNext (or other methods) itself. I'm looking to understand this process a bit more deeply than the docs explain.
StreamObserver is not sent over the wire; it is only used locally. The gRPC wire protocol sends metadata and messages between the client and server, and the StreamObserver API locally converts those events into callbacks.
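For concreteness, here is a minimal client-side sketch of a server-streaming call, assuming hypothetical generated classes (GreeterGrpc, GreetRequest, GreetReply) for a service with rpc GreetMany(GreetRequest) returns (stream GreetReply). The observer below never leaves the client process; grpc-java invokes its callbacks locally as frames arrive:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class GreeterClient {
  public static void main(String[] args) {
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051)
        .usePlaintext()
        .build();

    // Async stub generated from the hypothetical service definition:
    //   rpc GreetMany(GreetRequest) returns (stream GreetReply);
    GreeterGrpc.GreeterStub stub = GreeterGrpc.newStub(channel);

    stub.greetMany(
        GreetRequest.newBuilder().setName("world").build(),
        new StreamObserver<GreetReply>() {
          @Override
          public void onNext(GreetReply reply) {
            // Invoked locally by grpc-java each time a message frame for
            // this call arrives; the observer itself was never serialized.
            System.out.println(reply.getMessage());
          }

          @Override
          public void onError(Throwable t) {
            t.printStackTrace(); // the RPC failed or was cancelled
          }

          @Override
          public void onCompleted() {
            // The server half-closed the stream: its own onCompleted call
            // became a trailers frame on the wire, which grpc-java turned
            // back into this local callback.
            channel.shutdown();
          }
        });
  }
}
```

On the server side, the generated base class hands your method its own, server-local StreamObserver: each onNext call there becomes a message frame on the wire, which grpc-java on the client turns back into the local callbacks above.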
You may also be interested in the gRPC wire protocol.
Let's assume there are several clients that must receive updated data from a server. They connect to the server and communicate using Server-Sent Events (SSE) push.
How does SSE know that a particular message should be addressed to a particular client, the way it works with sockets?
Does it support broadcast or private messages?
How does SSE know that a particular message should be addressed to a particular client, the way it works with sockets?
The client connects to a URL on the server. You can optionally add query parameters to the URL that can be used for logic.
Since the client has initiated the connection, the server must hold a "handle" to this connection for its entire lifetime, so it can use it to send data. In this way, it is similar to sockets.
Does it support broadcast or private messages?
The server must iterate over all connection handles to send data to all clients. It can send data to only some clients, similar to private messages. How the connections are handled is up to the server.
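To make that concrete, here is a minimal sketch of one way a server can hold per-client handles, using Spring MVC's SseEmitter as the handle type. The map, the clientId query parameter, and the endpoint path are assumptions for illustration, not requirements of SSE itself:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class EventsController {

  // One handle per connected client; the key identifies the client.
  private final Map<String, SseEmitter> clients = new ConcurrentHashMap<>();

  @GetMapping("/events")
  public SseEmitter subscribe(@RequestParam String clientId) {
    SseEmitter emitter = new SseEmitter(0L);  // 0 = no server-side timeout
    clients.put(clientId, emitter);           // keep the handle for later pushes
    emitter.onCompletion(() -> clients.remove(clientId));
    return emitter;
  }

  // Broadcast: iterate over all handles.
  public void broadcast(String data) {
    clients.forEach((id, emitter) -> {
      try {
        emitter.send(data);
      } catch (IOException e) {
        clients.remove(id); // client went away
      }
    });
  }

  // "Private message": look up one specific handle.
  public void sendTo(String clientId, String data) throws IOException {
    SseEmitter emitter = clients.get(clientId);
    if (emitter != null) {
      emitter.send(data);
    }
  }
}
```

Broadcast and "private messages" are then just two ways of walking the same map of handles.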
The question is about the Play framework specifically, although the concept is generic. I guess that the blocked client is listening on a socket which is tracked on the server side and passed around with the Future[Result], so that when the Future completes, the response is written to the socket and the socket is closed.
Can someone share more concrete explanation with references?
Quoting from:
https://www.playframework.com/documentation/2.6.18/ScalaAsync
The web client will be blocked while waiting for the response, but
nothing will be blocked on the server, and server resources can be
used to serve other clients.
Note that Play does not manage how to address the client; this is managed by TCP. Basically (as a simple analogy) you can think of a client, like a web browser, as making a telephone call to the server. When the client makes a request, one of its sockets gets connected to a particular socket on the server - this is a persistent connection between the sockets for the duration of the request/response. Play's underlying server (Netty for older versions, or Akka HTTP for v2.6+) will accept the incoming request from the socket and assign it a thread. Play will do the work, and the resulting Response will get mapped back to the correct socket by the server. The TCP server will manage the mapping between the response and the socket, not Play.
As others have noted, the reference to blocking is essentially to do with the way Play Actions are intended to work (non-blocking). They take the request, wrap whatever work you have coded in a Future, and hand this off to be completed at some point in the near future (it might be a different thread that completes the Future, or it could even end up being the same thread). The point is that the creation of the Future is quick, so the thread that handled the request is returned quickly to the pool and can pick up another request to work on. If you have heard about Reactive Programming, this is essentially the idea behind keeping an application responsive.
The web client will be blocked while waiting for the response, but
nothing will be blocked on the server, and server resources can be
used to serve other clients.
So the client might be blocked on its end whilst waiting for the response to come back through its socket (unless it too is making async calls), but the idea is that the thread pool handling the requests in Play will not be blocked, because of the way Actions create a Future and hand its completion back to Play, so those threads can go back to handling other requests.
There is a bit more to it but hopefully this gives a bit more context to that particular statement from Play's docs.
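For a rough sketch of that pattern, this is what a non-blocking action looks like with Play's Java API (the Scala Action from the quoted docs is analogous); the controller and computation below are placeholders:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import play.mvc.Controller;
import play.mvc.Result;

public class AsyncController extends Controller {

  public CompletionStage<Result> index() {
    // Creating the CompletionStage is quick, so the request thread goes
    // straight back to the pool to serve other clients.
    return CompletableFuture
        .supplyAsync(this::expensiveComputation)        // runs on another thread
        .thenApply(answer -> ok("Result: " + answer));  // completes the response later
  }

  private int expensiveComputation() {
    // Stand-in for real work (database call, web service, heavy computation).
    // In real code, pass a custom Executor to supplyAsync so heavy work
    // doesn't starve the default pool.
    return 42;
  }
}
```

The request thread returns to the pool as soon as the CompletionStage is created; whichever thread completes the computation later produces the Result that gets written back to the client's socket.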
I have an existing akka application built on socko websockets. Communication with the sockets takes place inside a single actor, and messages both leaving and entering the actor (incoming and outgoing messages, respectively) are labelled with the socket id, which is a first-class property of a socko websocket (in socko, a connection request arrives labelled with the id, and all the lifecycle transitions such as handshaking, disconnection, incoming frames, etc. are similarly labelled).
I'd like to reimplement this single actor using akka-http (socko is more-or-less abandonware these days, for obvious reasons), but it's not straightforward because the two libraries are conceptually very different: akka-http hides the lower-level details of the handshaking, disconnection, etc., simply sending whichever actor was bound to the http server an UpgradeToWebsocket request header. The header object contains a method that takes a materialized Flow as a handler for all messages exchanged with the client.
So far, so good; I am able to receive messages on the websocket and reply to them directly. The official examples all assume some kind of stateless request-reply model, so I'm struggling to understand how to take the next step: assigning a label to the materialized flow and managing its lifecycle and connection state (I need to inform other actors in the application when a connection is dropped by a client, as well as label the messages).
The alternative (remodelling the whole application using akka-streams) is far too big a job, so any advice about how to keep track of the sockets would be much appreciated.
To interface with an existing actor-based system, you should look at Source.actorRef and Sink.actorRef. Source.actorRef creates an ActorRef that you can send messages to, and Sink.actorRef allows you to process the incoming messages using an actor and also to detect closing of the websocket.
To connect the actor created by Source.actorRef to the existing long-lived actor, use Flow#mapMaterializedValue. This would also be a good place to assign a unique id to a socket connection.
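As a hedged sketch of that wiring (using the javadsl of the same era; the registry actor and the Connected/Disconnected/Inbound message types are assumptions about your application's protocol, not akka-http API):

```java
import akka.NotUsed;
import akka.actor.ActorRef;
import akka.http.javadsl.model.ws.Message;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import java.util.UUID;

public class WsFlowFactory {

  // Builds the Flow handed to the websocket upgrade for one connection.
  public static Flow<Message, Message, NotUsed> create(ActorRef registry) {
    String socketId = UUID.randomUUID().toString(); // our label for this socket

    // Incoming: tag each message with the socket id and forward it to the
    // long-lived actor. Sink.actorRef also delivers a terminal message when
    // the client's side of the stream completes.
    Sink<Message, NotUsed> in = Flow.of(Message.class)
        .map(msg -> new Inbound(socketId, msg))
        .to(Sink.actorRef(registry, new Disconnected(socketId)));

    // Outgoing: Source.actorRef materializes an ActorRef; anything sent to
    // it is emitted to the client.
    Source<Message, ActorRef> out =
        Source.actorRef(16, OverflowStrategy.dropHead());

    return Flow.fromSinkAndSourceMat(in, out, Keep.right())
        // mapMaterializedValue is the one place we can grab the materialized
        // ActorRef, so register it (with our id) before discarding it.
        .mapMaterializedValue(outRef -> {
          registry.tell(new Connected(socketId, outRef), ActorRef.noSender());
          return NotUsed.getInstance();
        });
  }

  // Hypothetical protocol messages for the existing actor system.
  public static final class Connected {
    public final String id; public final ActorRef out;
    public Connected(String id, ActorRef out) { this.id = id; this.out = out; }
  }
  public static final class Disconnected {
    public final String id;
    public Disconnected(String id) { this.id = id; }
  }
  public static final class Inbound {
    public final String id; public final Message msg;
    public Inbound(String id, Message msg) { this.id = id; this.msg = msg; }
  }
}
```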
This answer to a related question might get you started.
One thing to be aware of: the current websocket implementation does not close the server-to-client flow when the client-to-server flow is closed with a websocket close message. There is an issue open to implement this, but until it is implemented you have to do it yourself, for example by having something like this in your protocol stack.
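For example (a hypothetical continuation of the sketch above, written against the Akka 2.4-era actor API): a registry actor can complete the server-to-client side itself, by sending Status.Success to the ActorRef materialized by Source.actorRef, when the incoming side reports the close:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Status;
import akka.japi.pf.ReceiveBuilder;
import java.util.HashMap;
import java.util.Map;

public class ConnectionRegistry extends AbstractActor {

  // socket id -> the ActorRef materialized by Source.actorRef for that socket
  private final Map<String, ActorRef> connections = new HashMap<>();

  public ConnectionRegistry() {
    receive(ReceiveBuilder
        .match(WsFlowFactory.Connected.class, c -> connections.put(c.id, c.out))
        .match(WsFlowFactory.Disconnected.class, d -> {
          // The client-to-server side has completed; close the other
          // direction ourselves by completing the Source.actorRef actor.
          ActorRef out = connections.remove(d.id);
          if (out != null) {
            out.tell(new Status.Success("client closed"), self());
          }
        })
        .build());
  }
}
```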
The answer from Rüdiger Klaehn was a useful starting point, thanks!
In the end I went with ActorPublisher after reading another question here (Pushing messages via web sockets with akka http).
The key thing is that the Flow is 'materialized' somewhere under the hood of akka-http, so you need to pass into UpgradeToWebSocket.handleMessagesWithSinkSource a Source/Sink pair that already knows about an existing actor. So I create an actor (which implements ActorPublisher[TextMessage.Strict]) and then wrap it in Source.fromPublisher(ActorPublisher(myActor)).
When you want to inject a message into the stream from the actor's receive method, you first check whether totalDemand > 0 (i.e. the stream is willing to accept input) and, if so, call onNext with the contents of the message.
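A sketch of that arrangement using Akka's Java API of the same era (ActorPublisher has since been deprecated in favor of plain stream operators); the Outbound message type is a hypothetical part of your own protocol:

```java
import akka.actor.Props;
import akka.http.javadsl.model.ws.TextMessage;
import akka.japi.pf.ReceiveBuilder;
import akka.stream.actor.AbstractActorPublisher;
import akka.stream.actor.ActorPublisherMessage;

public class SocketPublisher extends AbstractActorPublisher<TextMessage> {

  public static Props props() {
    return Props.create(SocketPublisher.class);
  }

  public SocketPublisher() {
    receive(ReceiveBuilder
        .match(Outbound.class, out -> {
          // Only emit when the stream has signalled demand; otherwise this
          // actor must buffer (or drop) the message itself.
          if (totalDemand() > 0) {
            onNext(TextMessage.create(out.text));
          }
        })
        .match(ActorPublisherMessage.Request.class, request -> {
          // Downstream demand increased; a real implementation would drain
          // any internal buffer here.
        })
        .match(ActorPublisherMessage.Cancel.class, cancel -> context().stop(self()))
        .build());
  }

  // Hypothetical message other actors send to push text out on this socket.
  public static final class Outbound {
    public final String text;
    public Outbound(String text) { this.text = text; }
  }
}

// Elsewhere, when building the Source/Sink pair for
// UpgradeToWebSocket.handleMessagesWithSinkSource:
//   ActorRef socketActor = system.actorOf(SocketPublisher.props());
//   Source<TextMessage, NotUsed> out =
//       Source.fromPublisher(AbstractActorPublisher.<TextMessage>create(socketActor));
```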
I'm writing a client using AsyncHttpClient (AHC) v2.0beta (with Netty 4 as the provider) that streams audio in real time and needs to receive server data in real time too (while streaming). Imagine an HTTP client streaming the microphone's output as the user speaks and receiving the audio transcription as it happens, in real time. In short, it's bidirectional real-time communication over HTTP (chunked multipart request/response).
In order to do that, I had to hack AHC a bit. For instance, there is a blocking call to wait for input data in org.asynchttpclient.multipart.MultipartBody#read(ByteBuffer buffer), which is implemented on top of Netty's io.netty.handler.stream.ChunkedInput.
This somewhat works. The problem is that my custom AsyncHandler does not get onBodyPartReceived() callbacks until the request has finished streaming. The receive events pile up, probably because Netty isn't reading while there is still content to write. Playing with the network stack, I noticed I was only able to receive server responses while streaming if the client was experiencing network contention while writing.
Can someone tell me if this behavior is the result of my particular implementation (blocking in MultipartBody#read()) or an architectural constraint imposed by Netty's internal implementation?
As a side note, reading and writing both happen on a single I/O thread (nioEventLoopGroup-X).
We're using GWT Atmosphere to send strings from the server to the client and it works quite well.
However, we would like to send whole entities from the server to the client, serialized by the GWT RequestFactory. Without the need for a request by the client!
So I tried working with SimpleRequestProcessor#createOobMessage(domainObject) and sending that payload to the client. Computing the payload works.
I would then decode that message using AutoBeanCodex#decode and read the domainObject as the correct EntityProxy from the invocation list of the ResponseMessage. However, when I do so, it requires some sort of serverId to be set in order to proceed in AbstractRequestFactory#getId (around line 260: assert serverId != null : "serverId").
Any advice on how I can decode a Proxy payload without a request being sent by the client?
Update
The use case for this question is chat-like communication. The client doesn't request the messages from the server but is instead notified of new messages. And we'd like to include the messages, and info on who sent them, in the notification payload. Since we're using RequestFactory in our project anyway, we want to take advantage of having set up all the Proxy wiring and now simply push the relevant object graph to the client.
Why are you trying to serialize RF messages and send them just as entities? RequestFactory is much more than just a way to send data over the wire - it has at least three different kinds of messages that can be sent from the client to the server: create instances, call setters, and invoke service methods. Based on what happens on the server, not only can data be returned to the client, but so can messages about what changes were made and whether the setters produced values that are invalid under the JSR 303 rules.
Are you trying for a simpler, interface-based way of describing, sending, and receiving entities? Or do you actually want the RF wiring on both client and server so you can batch requests, refer to EntityProxyId instances, and have the client send only diffs?
If you just want simpler object declarations, try using AutoBeans and the AutoBeanCodex you have already looked at - you'll be able to create and marshal instances on both client and server easily, and you can pass them as strings over Atmosphere's transports.
If you actually want RequestFactory, but running over something other than AJAX, there are other options. Rather than sending/receiving strings through Atmosphere (which I believe is intended to provide push support for RPC calls), consider using that underlying push layer to implement a new request transport in RequestFactory.
com.google.web.bindery.requestfactory.shared.RequestTransport can be implemented (see com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport for the default AJAX version) to use any communication mechanism you would like - and to build the server, take a look at com.google.web.bindery.requestfactory.server.RequestFactoryServlet for what actually must be done to push messages through the Locator, ServiceLocators, etc.
If you really want to use Atmosphere and RF, then consider building a RequestTransport that wraps a simple Atmosphere interface to call the server with the string - the cometd/websocket calls will already be taken care of for you, and you'll just have to translate the string message into invocations (again, see how RequestFactoryServlet does it).
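As a rough illustration of that last option: RequestTransport has a single method to implement. Everything named PushChannel below is a hypothetical wrapper around your Atmosphere client, not a real Atmosphere API:

```java
import com.google.web.bindery.requestfactory.shared.RequestTransport;
import com.google.web.bindery.requestfactory.shared.ServerFailure;

public class AtmosphereRequestTransport implements RequestTransport {

  // Hypothetical thin wrapper around your Atmosphere client; not part of
  // GWT or Atmosphere's published API.
  public interface PushChannel {
    void send(String payload, Callback callback);
    interface Callback {
      void onReply(String reply);
      void onError(Throwable error);
    }
  }

  private final PushChannel channel;

  public AtmosphereRequestTransport(PushChannel channel) {
    this.channel = channel;
  }

  @Override
  public void send(String payload, TransportReceiver receiver) {
    // Same contract as DefaultRequestTransport, minus the XHR: ship the
    // RequestFactory payload over the push channel and hand the server's
    // reply back to RequestFactory for decoding.
    channel.send(payload, new PushChannel.Callback() {
      @Override public void onReply(String reply) {
        receiver.onTransportSuccess(reply);
      }
      @Override public void onError(Throwable error) {
        receiver.onTransportFailure(new ServerFailure(error.getMessage()));
      }
    });
  }
}
```

You would then hand an instance of this to your RequestFactory's initialize method in place of DefaultRequestTransport.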