Decoding GWT RequestFactory payload from an out-of-band message without a Request - gwt

We're using GWT Atmosphere to send strings from the server to the client and it works quite well.
However, we would like to send whole entities from the server to the client, serialized by GWT RequestFactory, without the need for a request from the client.
So I tried working with SimpleRequestProcessor#createOobMessage(domainObject) and sending that payload to the client. Computing the payload works.
I would then decode that message using AutoBeanCodex#decode and read the domainObject as the correct EntityProxy from the invocation list of the ResponseMessage. However, when I do so, it requires some sort of serverId to be set in order to proceed in AbstractRequestFactory#getId (around line 260: assert serverId != null : "serverId").
Any advice on how I can decode a Proxy payload without a request being sent by the client?
Update
The use case for this question is chat-like communication. The client doesn't request the messages from the server but instead will be notified of new messages. And we'd like to include the messages and info on who's sent the message in the notification payload. Since we're using RequestFactory in our project anyway, we want to take advantage of having set up all the Proxy wiring and now simply push the relevant object graph to the client.

Why are you trying to serialize RF messages and send them just as entities? RequestFactory is much more than just a way to send data over the wire - it has at least three different kinds of messages that can be sent from the client to the server: create instances, call setters, and invoke service methods. Based on what happens on the server, not only can data be returned to the client, but also messages about what changes were made and whether those setters produced values that are not valid under the JSR-303 rules.
Are you trying for a simpler, interface-based way of describing, sending, and receiving entities? Or do you actually want the RF wiring on both client and server so you can batch requests, refer to EntityProxyId instances, and have the client send only diffs?
If you just want simpler object declarations, try just using AutoBeans and the AutoBeanCodex you have already looked at - you'll be able to create and marshal instances on both client and server easily, and you can pass them as strings over Atmosphere's transports.
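For example, a minimal sketch of that AutoBeans route, assuming made-up Person/Beans types; the same interfaces are shared by client and server, with GWT.create used instead of AutoBeanFactorySource on the client:
import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;
import com.google.web.bindery.autobean.shared.AutoBeanFactory;
import com.google.web.bindery.autobean.vm.AutoBeanFactorySource;
public class AutoBeanSketch {
    // Shared between client and server: a bean-like interface plus a factory.
    public interface Person {
        String getName();
        void setName(String name);
    }
    public interface Beans extends AutoBeanFactory {
        AutoBean<Person> person();
    }
    public static void main(String[] args) {
        // Server side (plain JVM): create, fill and serialize a bean to JSON.
        Beans beans = AutoBeanFactorySource.create(Beans.class);
        AutoBean<Person> bean = beans.person();
        bean.as().setName("Alice");
        String json = AutoBeanCodex.encode(bean).getPayload();  // push this string over Atmosphere
        System.out.println(json);
        // The GWT client would do the reverse with a factory from GWT.create(Beans.class):
        //   Person p = AutoBeanCodex.decode(clientBeans, Person.class, json).as();
    }
}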
If you actually want RequestFactory, but running over something other than AJAX, there are other options. Rather than sending/receiving strings through Atmosphere (which I believe is intended to provide push support for RPC calls), consider using that underlying push layer to implement a new request transport for RequestFactory.
com.google.web.bindery.requestfactory.shared.RequestTransport can be implemented (see com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport for the default AJAX version) to use any communication mechanism you would like - and for the server side, take a look at com.google.web.bindery.requestfactory.server.RequestFactoryServlet for what actually must be done to push messages through the Locators, ServiceLocators, etc.
If you really want to use Atmosphere and RF, then consider building a RequestTransport that wraps a simple Atmosphere interface to call the server with the string - the cometd/websocket calls will already be taken care of for you, and you'll just have to translate the string message into invocations (again, see how RequestFactoryServlet does it).
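As a rough illustration of that idea, a custom RequestTransport only has to implement one send method; the AtmosphereClient wrapper below is an assumption standing in for whatever push bridge you actually use:
import com.google.web.bindery.requestfactory.shared.RequestTransport;
import com.google.web.bindery.requestfactory.shared.ServerFailure;
// Stand-in for your actual push wrapper - an assumption, not a real Atmosphere class.
interface AtmosphereClient {
    interface Callback {
        void onMessage(String response);
        void onError(Throwable t);
    }
    void push(String payload, Callback callback);
}
// Sketch of a RequestTransport that ships the RequestFactory payload over a push
// channel instead of the default XHR transport.
public class AtmosphereRequestTransport implements RequestTransport {
    private final AtmosphereClient channel;
    public AtmosphereRequestTransport(AtmosphereClient channel) {
        this.channel = channel;
    }
    @Override
    public void send(String payload, final TransportReceiver receiver) {
        channel.push(payload, new AtmosphereClient.Callback() {
            @Override
            public void onMessage(String response) {
                receiver.onTransportSuccess(response);
            }
            @Override
            public void onError(Throwable t) {
                receiver.onTransportFailure(new ServerFailure(t.getMessage()));
            }
        });
    }
}
// Client bootstrap: requestFactory.initialize(eventBus, new AtmosphereRequestTransport(channel));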

Related

Can sockets replace HTTP requests? (sockets vs. HTTP)

Creating a user, adding a record to a collection in the DB, updating some data, etc.
All of these are things we regularly do with HTTP requests against a REST API.
Think about using an event bus as the server interface instead of a REST API.
In that approach, creating a user would be an event named "CreateUser" instead of a REST API endpoint POST /users.
In response to any action handled on the event bus, it would re-emit a follow-up event telling anybody who needs to know that the action was performed.
If, for example, someone is viewing the vehicles collection and another user edits one of the columns or adds a new vehicle, the change is reflected immediately for whoever is viewing it online.
My question is whether approaches like the one I describe above exist, whether there is a formal name for them, whether it is good practice, and whether you know of anyone who uses it regularly, a framework, or similar. Can a socket.io server handle high workloads the way an HTTP server does?
You can use websockets for this; they provide a bidirectional channel between client and server to send messages across. You will have to catch and parse the messages on each end yourself, as there is no additional protocol on top of them.
They don't hold state, so there is no knowledge of who is looking at what, or who got what. You could send the same update message to all connected clients and leave it to the client to use it or not.
You would have to reprogram your client code and the API endpoints, because it's a different way of doing things, and it can also do server push.
I have no idea about frameworks though, as I always use websockets without one. Websockets are fast, but server behaviour under high workloads depends on the implementation, and I only have experience with the websocket server I wrote myself. I suppose the performance of socket.io can easily be googled.
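For what it's worth, a minimal sketch of the broadcast idea from the answer above, using the standard Java websocket API (JSR 356); the /events path and the idea of a JSON event payload are illustrative assumptions:
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
@ServerEndpoint("/events")
public class EventEndpoint {
    // All currently connected clients; the server keeps no other per-client state.
    private static final Set<Session> sessions = new CopyOnWriteArraySet<>();
    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }
    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }
    // Called by your domain code after e.g. a vehicle was edited; every client decides
    // for itself whether the event is relevant to what it is currently showing.
    public static void broadcast(String eventJson) {
        for (Session s : sessions) {
            if (s.isOpen()) {
                try {
                    s.getBasicRemote().sendText(eventJson);
                } catch (IOException e) {
                    sessions.remove(s);
                }
            }
        }
    }
}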

Moving from socko to akka-http websockets

I have an existing akka application built on socko websockets. Communication with the sockets takes place inside a single actor, and messages both entering and leaving the actor (incoming and outgoing messages, respectively) are labelled with the socket id, which is a first-class property of a socko websocket (in socko a connection request arrives labelled with the id, and all the lifecycle transitions such as handshaking, disconnection, incoming frames etc. are similarly labelled).
I'd like to reimplement this single actor using akka-http (socko is more-or-less abandonware these days, for obvious reasons), but it's not straightforward because the two libraries are conceptually very different; akka-http hides the lower-level details of handshaking, disconnection etc., simply sending whichever actor is bound to the HTTP server an UpgradeToWebSocket request header. The header object contains a method that takes a materialized Flow as a handler for all messages exchanged with the client.
So far, so good; I am able to receive messages on the websocket and reply to them directly. The official examples all assume some kind of stateless request-reply model, so I'm struggling to understand how to take the next step: assigning a label to the materialized flow and managing its lifecycle and connection state (I need to inform other actors in the application when a connection is dropped by a client, as well as label the messages).
The alternative (remodelling the whole application using akka-streams) is far too big a job, so any advice about how to keep track of the sockets would be much appreciated.
To interface with an existing actor-based system, you should look at Source.actorRef and Sink.actorRef. Source.actorRef creates an ActorRef that you can send messages to, and Sink.actorRef allows you to process the incoming messages using an actor and also to detect closing of the websocket.
To connect the actor created by Source.actorRef to the existing long-lived actor, use Flow#mapMaterializedValue. This would also be a good place to assign a unique id to a socket connection.
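A minimal sketch of that wiring with the Akka Streams Java DSL; the connectionManager actor and the ConnectionOpened/ConnectionClosed/FromClient protocol classes are assumptions made up for illustration:
import akka.NotUsed;
import akka.actor.ActorRef;
import akka.http.javadsl.model.ws.Message;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
public class SocketFlows {
    // Hypothetical protocol messages between the flow and the connection-manager actor.
    public static final class ConnectionOpened {
        public final String socketId; public final ActorRef out;
        public ConnectionOpened(String socketId, ActorRef out) { this.socketId = socketId; this.out = out; }
    }
    public static final class ConnectionClosed {
        public final String socketId;
        public ConnectionClosed(String socketId) { this.socketId = socketId; }
    }
    public static final class FromClient {
        public final String socketId; public final Message msg;
        public FromClient(String socketId, Message msg) { this.socketId = socketId; this.msg = msg; }
    }
    // Builds the Flow handed to the websocket upgrade for one connection.
    public static Flow<Message, Message, NotUsed> clientFlow(ActorRef connectionManager, String socketId) {
        // Incoming frames are labelled with the socket id and forwarded to the long-lived actor;
        // the onCompleteMessage lets it detect that the client dropped the connection.
        Sink<Message, NotUsed> in = Flow.of(Message.class)
                .map(msg -> new FromClient(socketId, msg))
                .to(Sink.actorRef(connectionManager, new ConnectionClosed(socketId)));
        // Outgoing side: an actor-backed source; messages sent to the materialized
        // ActorRef are pushed out on the websocket.
        Source<Message, ActorRef> out = Source.actorRef(64, OverflowStrategy.dropHead());
        return Flow.fromSinkAndSourceMat(in, out, Keep.right())
                .mapMaterializedValue(outActor -> {
                    // Register the connection (and its outgoing ActorRef) with the manager here.
                    connectionManager.tell(new ConnectionOpened(socketId, outActor), ActorRef.noSender());
                    return NotUsed.getInstance();
                });
    }
}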
This answer to a related question might get you started.
One thing to be aware of: the current websocket implementation does not close the server-to-client flow when the client-to-server flow is closed with a websocket close message. There is an issue open to implement this, but until it is implemented you have to do it yourself, for example with an element in your protocol stack that completes the outgoing side when the incoming side completes.
The answer from Rüdiger Klaehn was a useful starting point, thanks!
In the end I went with ActorPublisher after reading another question here (Pushing messages via web sockets with akka http).
The key thing is that the Flow is 'materialized' somewhere under the hood of akka-http, so you need to pass into UpgradeToWebSocket.handleMessagesWithSinkSource a Source/Sink pair that already know about an existing actor. So I create an actor (which implements ActorPublisher[TextMessage.Strict]) and then wrap it in Source.fromPublisher(ActorPublisher(myActor)).
When you want to inject a message into the stream from the actor's receive method you first check if totalDemand > 0 (i.e. the stream is willing to accept input) and if so, call onNext with the contents of the message.
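For reference, a bare-bones version of such a publisher using the Akka 2.5-era Java API (ActorPublisher has since been deprecated in favour of Source.actorRef/Source.queue); the plain-String chat protocol is an assumption:
import akka.http.javadsl.model.ws.TextMessage;
import akka.stream.actor.AbstractActorPublisher;
import akka.stream.actor.ActorPublisherMessage;
// Outgoing publisher for one websocket connection: other actors send it plain String
// payloads, and it emits TextMessages into the stream when demand is available.
public class SocketPublisher extends AbstractActorPublisher<TextMessage> {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, text -> {
                    // Only emit when the stream has signalled demand; a real
                    // implementation would buffer the message otherwise.
                    if (isActive() && totalDemand() > 0) {
                        onNext(TextMessage.create(text));
                    }
                })
                .match(ActorPublisherMessage.Cancel.class, cancel -> getContext().stop(getSelf()))
                .build();
    }
}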

GWT - Difference between (client --> server --> client) and (server --> client) communication

In my MVP app I have lots of (client --> server --> client) communication using GWT-RPC. The structure is straightforward; here's an example of a client call.
testObject.callToServer(example, new AsyncCallback<Void>() {
    @Override
    public void onFailure(Throwable caught) {
        // handle failure
    }
    @Override
    public void onSuccess(Void result) {
        // handle success
    }
});
My question is, how would I implement server-to-client communication only, without the initial call from the client? Can I still use RPC, and if so, can I reuse the code example above somehow?
Some more information: I'm using WebSockets to maintain an open communication link between client and server. I'm trying to figure out how to send more than just strings over the wire. I recognize that RPC and WebSockets are two different kinds of communication, and they may be mutually exclusive within a single communication instance.
As for what kind of data I want to be sending, right now just simple POJOs.
Thanks.
All methods of communicating with the server are 'just strings over the wire' - the difference with RPC is that it is a specially formatted string that describes the type, structure, and content of the objects being sent. The structures are predefined before the conversation even starts - in .class files for the server, and compiled into JS for the GWT code. There is a shared .gwt.rpc file in your app that describes which types can be used in each pair of RPC interfaces, so that both sides can be sure they know what the other side knows (specifically, the client names that file in each request, and the server agrees to use that file if it can be found, or else throws an error that the two are out of sync).
Putting objects into some other form of transport like websockets requires serializing the objects into such a string, and on the other side reading them back out. To properly use RPC within the limitations it is designed for, you have to start with the expected 'handshake', but since websocket connections start from the client, this should be easy to do.
In your self-answer, you mentioned that you switched to AutoBeans instead, letting you define very simple bean-like structures as interfaces and easily map them to JSON strings and back again. I've also done two simple server-side implementations of WebSockets plus RPC, with a single shared client impl: https://github.com/niloc132/webbit-gwt. This project supports either JavaEE websockets or Webbit (a websocket library that uses Netty). It isn't complete or bug-free, but it lets you behave as if either side (server or client) can call the other freely, invoking methods with RPC-able objects, and it provides some simple hooks for starting/stopping the socket.
I achieved this with GWT's AutoBean framework.
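Roughly, the receiving end of that approach on the GWT client could look like the sketch below; the Person and Beans types mirror the AutoBeans example earlier on this page, and onSocketMessage is a hypothetical callback from whatever websocket wrapper is in use:
import com.google.gwt.core.client.GWT;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;
public class ChatSocketHandler {
    // Same factory interface as on the server, but created via deferred binding on the client.
    private final Beans beans = GWT.create(Beans.class);
    // Hypothetical callback invoked by your websocket wrapper for each incoming text frame.
    public void onSocketMessage(String json) {
        Person person = AutoBeanCodex.decode(beans, Person.class, json).as();
        // ... update the UI, fire an event on the EventBus, etc.
    }
}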

How to communicate BOTH error and informational messages in a REST API?

We have a legacy application that allows our developers to "add" messages via a ThreadLocal in Java.
The current SOAP endpoints will scoop these messages off the thread and then package them up in the response.
The endpoints also catch all exceptions and then marshal those exceptions via this same mechanism to normalize the passing of messages (be they informational, warning, or error).
These messages are rich objects (they have a code, severity, classification, and then the actual message text.)
This is nice in many ways because now we have a standard way to communicate meaningful messages to the user (or calling service) but it also makes using the API more challenging because now the client must pick out the messages from the response AND also pick out the real payload.
Any web service can communicate messages this way...but only a handful do.
I would like to start moving our application towards a REST API but I am struggling on how best to handle the messaging. I am not super keen on adding an envelope to each of our REST responses because this really pollutes the API.
The alternative appears to be adding these messages to custom HTTP headers. Is this the "preferred" approach? Remember I will have a list of one or more of these messages and I will likely have to serialize them as json as well.
Thanks.

GWT RequestFactory and propagating server-side changes to the client

I need some advice on how propagating server-side changes of entities to the client is best handled with GWT's RequestFactory.
Let us assume we have two EntityProxies, a PersonProxy and a PersonListProxy (which has a getter for a List). Assume that the client has fetched a PersonList and a Person from the server.
In case the client is editing one of these proxies and firing a request, the machinery of RequestFactory (if I have understood the principles correctly) will fire an EntityProxyChange event if it detects changes done by server code (so that the client can update its display of the entities, for example).
Now assume that the server is changing its entities outside of a request by this client (e.g. due to another client calling the server) so that this client would see another version if it fetched the Person or the PersonList again.
My question is what is the best way inside the RequestFactory framework to tell the client of the changes (and to reuse as much of the machinery as possible)? We can assume that I have a way to send simple messages from the server to the client (e.g. Google App Engine's channel API or server-sent events).
One idea could be that the server sends over this channel a message telling that a Person or a PersonList with a specific id has changed. The client code handling the receipt of these messages could then use RequestFactory to re-fetch (e.g. find) the entity. This change should then be propagated to other parts of the client by an EntityProxyChange event.
Is this the way to go? (And in case that the client already has the current version of the entity, e.g. because the server was dumb and notified the client of changes the client itself made, would the triggered re-fetch just transport a few bits of metadata and not the whole entity again?)
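For concreteness, the client-side handling sketched above could look roughly like this; MyRequestFactory, personRequest()/findPerson() and the push callback are made-up stand-ins for the application's own service stubs and channel wiring:
import com.google.web.bindery.event.shared.EventBus;
import com.google.web.bindery.requestfactory.shared.EntityProxyChange;
import com.google.web.bindery.requestfactory.shared.Receiver;
public class PersonChangeListener {
    private final MyRequestFactory requestFactory;
    public PersonChangeListener(EventBus eventBus, MyRequestFactory requestFactory) {
        this.requestFactory = requestFactory;
        // Widgets interested in Person updates listen for EntityProxyChange events
        // rather than talking to the push channel directly.
        EntityProxyChange.registerForProxyType(eventBus, PersonProxy.class,
                new EntityProxyChange.Handler<PersonProxy>() {
                    @Override
                    public void onProxyChange(EntityProxyChange<PersonProxy> event) {
                        // event.getProxyId() identifies the changed entity; re-display as needed.
                    }
                });
    }
    // Called when the push channel says "Person <id> changed on the server".
    public void onPersonChanged(String serverId) {
        // findPerson(...) is a hypothetical finder on your own request context.
        requestFactory.personRequest().findPerson(serverId).fire(new Receiver<PersonProxy>() {
            @Override
            public void onSuccess(PersonProxy person) {
                // Re-fetching through RequestFactory lets the usual machinery compare
                // versions and fire EntityProxyChange for the listeners registered above.
            }
        });
    }
}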
ADDED:
Thinking a bit more about it, I wonder how EntityProxyIds can be generated for the server-sent event channel. When an entity on the server changes, the server only has the server id. It can of course send that to the client, but the client only knows about EntityProxyIds. Of course, I could add a getId() (in addition to getStableId()) to each EntityProxy, but it looks as if this would add redundant data to every server response.
Well, I realize that my post isn't a precise answer to your question, but it's just my experience.
In essence, the question is simply how to deliver data from the server to the client.
I faced a similar task a couple of years ago and found an approach that made my life easier. To explain it, I want to spell out my reasoning:
You have to have full data delivery by requesting it from the client - it's the straightforward, natural way of requesting data;
You don't want to create and support two different models of full data delivery: one by requesting from the client and a second by pushing from the server;
But you do need to inform the client about some changes on the server side.
So now I build my architecture using the following approach:
Build a full, classical client-server API for data delivery - so you can load and refresh your application in the natural way even if your push functionality is blocked or broken.
Define the key information that may change on the server side and should be delivered to the client via the push mechanism.
Create small push message construct(s) that deliver to the client just a notification about changes - no valuable data should be delivered this way, just the keys of the data that changed (see the sketch after this list).
All the client needs to do when it receives such a notification is get/refresh the data from the server in the natural client-server way that is already supported.
Server logic shouldn't bother the client side with a huge number of notifications - sometimes it is more effective not to deliver individual changes but simply to refresh everything.
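For illustration, the push notification itself can be as small as the following class; the names are made up:
// A push notification carries only keys, never the data itself; on receipt the client
// simply re-runs its normal "fetch from server" path for the named entity or view.
public class ChangeNotification {
    public final String entityType;  // e.g. "Person" or "PersonList"
    public final String entityId;    // server-side id of the changed entity
    public ChangeNotification(String entityType, String entityId) {
        this.entityType = entityType;
        this.entityId = entityId;
    }
}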
Hope this helps.