Streaming data to/from Play framework on an open connection - scala

I need to send a stream of data to a Play server. The length of the stream is unknown, and I need to get a response on every line break (\n), or for every few lines, rather than waiting for all of the data to be sent.
Think of the following use case:
Let's say I intend to write a console application that, when launched, connects to my web server; every line of user input is sent to Play on each line break and responded to asynchronously. All of the above should happen over a single connection, i.e. I don't want to open a new connection for every request I send to Play (a good analogy would be two processes communicating through two pipes).
What is the best way to achieve this?
And is it possible to achieve this with a client that communicates with the server only via HTTP (over a single HTTP connection)?
EDIT:
My current thoughts on how to approach this are as follows:
I can define a new BodyParser[Future[String]], which is basically an Iteratee[Array[Byte], Future[String]]. While the parsing takes place, I can compute the result asynchronously, and the action can return the result as a ChunkedResult in the future's onComplete callback.
Does this sound like the right approach?
Any suggestions on how to achieve this?
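A rough sketch of that approach, assuming Play 2.1's Iteratee-based API; the line-counting body parser and the chunked reply are illustrative stand-ins for the real parsing and response logic:

import play.api.mvc._
import play.api.libs.iteratee._
import play.api.libs.concurrent.Execution.Implicits.defaultContext

object StreamingController extends Controller {

  // Body parser that consumes the request body chunk by chunk (here it just
  // counts newline characters) instead of buffering the whole body.
  val lineCounter: BodyParser[Int] = BodyParser("lineCounter") { _ =>
    Iteratee.fold[Array[Byte], Int](0) { (count, bytes) =>
      count + bytes.count(_ == '\n'.toByte)
    }.map(count => Right(count))
  }

  def stream = Action(lineCounter) { request =>
    // request.body holds the parsed result; reply with a chunked body.
    Ok.chunked(Enumerator("received " + request.body + " lines\n"))
  }
}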

Maybe you should look at websockets.
Java: http://www.playframework.com/documentation/2.1-RC3/JavaWebSockets
Scala: http://www.playframework.com/documentation/2.0/ScalaWebSockets
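A minimal sketch along the lines of the Play 2.1 Scala WebSocket API linked above: one persistent connection, each incoming line handled as it arrives and answered asynchronously. The echo-style processing is illustrative only.

import play.api.mvc._
import play.api.libs.iteratee.{Concurrent, Iteratee}

object ChatSocketController extends Controller {

  def socket = WebSocket.using[String] { request =>
    // Enumerator we can push responses into at any time, on the same connection.
    val (out, channel) = Concurrent.broadcast[String]

    // Handle each incoming message (e.g. one line of user input) as it arrives.
    val in = Iteratee.foreach[String] { line =>
      channel.push("processed: " + line)
    }

    (in, out)
  }
}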

Related

What should be returned from the API for CQRS commands?

As far as I understand, in a CQRS-oriented API exposed through RESTful HTTP, commands and queries are expressed through the HTTP verbs: commands are asynchronous and usually return 202 Accepted, while queries return the information you need. Someone asked me the following: suppose they want to change some information; they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you could simply return what they want in the HTTP response of the command, in a single request?
We had a long conversation on the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was about "one way commands", and I think this is what you are assuming. You will find that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure, meaning it can fail and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted; really give some meaningful result back. Some people return 200 success along with an object that contains a URL to retrieve the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous, since it can give a consumer the wrong idea about the system state.
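For concreteness, here is what that "meaningful result" pattern might look like sketched as a Play/Scala controller action; the command handler, route, and URL scheme are hypothetical, and the same shape applies in any stack:

import play.api.mvc._

object WidgetCommandController extends Controller {

  case class CommandFailed(reason: String) extends Exception(reason)

  // Stand-in for the real command handler: applies the change and returns the id.
  def renameWidget(id: Long, newName: String): Long = id

  def rename(id: Long, newName: String) = Action {
    try {
      val updatedId = renameWidget(id, newName)
      // 200 plus a URL the client can use to fetch the updated resource.
      Ok("/widgets/" + updatedId)
    } catch {
      // 500 plus an error message when the command handler fails.
      case CommandFailed(reason) => InternalServerError(reason)
    }
  }
}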
My team also recently had a very heated discussion about this very thing. Thanks for posting the question. I have usually been the defender of the "fire and forget" style of commands. My position has always been that if you ever want to be able to move to an async command dispatcher, then you cannot allow commands to return anything. Doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to start asking whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So we can build our command handlers the way we want to, returning whatever makes sense, and then let the framework around the handlers deal with the "when".
Example: my team is currently building an HTTP API in node.js. Instead of requiring a POST command to only return a blank 202, we are returning details of the newly created resource. This helps the front-end move on. The front-end POSTs a widget and opens a channel to the server's web socket, using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front-end is happy. The middleware can be disabled easily, making the API synchronous again.
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it is easy to run a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean you cannot do a read in the same request as the command. Sure, you can send a command to the server and then, with SignalR (or something similar), wait for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.

iOS. Best way to pull data from a server (dynamic intervals) for HTTP chat client?

I am working on a chat client. To get new messages (or post new ones) I have to perform a GET (or POST) request. All new messages are stored via Core Data. At the moment I'm not sure how to implement this in the most optimal way.
My thoughts:
At the view controller's init stage, create a background thread that periodically checks for new messages (with a short period if the conversation is active, and a period of about 60 seconds if not). If there are new messages, we store them in the DB and signal the delegate that there are new messages to display.
A friend suggested using performSelector:afterDelay:, but I don't understand how to use it in my app.
Something else?
Thanks in advance.
Don't use performSelector:afterDelay:. Using NSTimer is much better (as the trigger for starting the next download). Also, use NSOperationQueue to manage your background tasks. Create a custom NSOperation that you can instantiate, and have it complete your request process. When you create a new operation to check for new messages, check whether one is already in progress (there is no point having multiple requests in flight at the same time).
Other notes:
Make sure you consider the threading with regards to the Core Data store (having the operation call back to the main thread with the results will probably be easiest as the result data will always be relatively small).
If you have lots of messages being sent and you want to show constant status (like Skype does, showing you when someone is typing) you would need to use sockets to keep the connection alive the whole time (the cost of new connections each time would be prohibitive).

Backbone sync request sequence

I've got a Backbone web application that talks to a RESTful PHP server. For PUT and POST it matters in which order the requests arrive at the server and for GET it matters in which order the responses arrive at the client.
The web application does not need to be used concurrently by multiple users, but what might happen is that a user changes their name twice really fast. Then the order in which the server processes PUT /name/Ann and PUT /name/Bea determines whether the name ends up as Ann or Bea.
Backbone.Safesync and Backbone.Sync.AjaxQueue are two libraries that try to solve this problem. Doesn't Safesync only solve the problem for GET? Sync.AjaxQueue is outdated, but might serve as inspiration for implementing a custom queued sync function. Making sync synchronous would solve the problem: if a request is only sent after the previous response has been received, then only one request is processed at a time.
Any advice on how to proceed?
BTW: I don't think using PATCH requests would solve anything, because in my example the same attribute is changed twice.
There are a few ways to solve this; here are two:
Add a timestamp to all requests, store it in the DB as "modified", and have the server check whether the timestamp of the new request is later than the one in the DB in order for the request to be valid.
Use promises to delay the second request until the first one has been responded to (sketched below); there's a promise/deferred mechanism built into jQuery, but you can also use a third-party one, for instance Q or when.
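A minimal sketch of the queueing idea, expressed with Scala Futures rather than jQuery Deferreds (the chaining is the same): each request is started only once the previous one has completed. The sendPut function is a hypothetical HTTP call.

import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

class SyncQueue(sendPut: String => Future[Unit]) {
  private var last: Future[Unit] = Future.successful(())

  def enqueue(name: String): Future[Unit] = synchronized {
    // Chain onto the previous request; swallow its failure so the queue keeps going.
    last = last.recover { case _ => () }.flatMap(_ => sendPut(name))
    last
  }
}

// Usage: queue.enqueue("Ann"); queue.enqueue("Bea")
// The PUT for "Bea" is only sent after the response for "Ann" has arrived.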
If you can afford the delay, an easy approach is to set the async option to false when you call whatever method results in the Backbone.sync call. For example, in the appropriate model(s) simply override the default sync method to include the additional option.

HTTP Request process line by line

I have an iOS app that I'm migrating from the very slow and clunky SOAP to a custom data format (basically CSV with some extra bits).
My priority is getting initial data to the client as quickly as possible while letting it still load more in the background. The server side is written to continuously flush data instead of caching the response.
So I'd like to parse out each line as it arrives at the client, instead of waiting for the full response.
If I view it in a browser I get progressive loading. However, using MKNetworkKit or ASIHTTPRequest or similar, I'm only able to get the full response, which takes several seconds longer.
Does anyone know what the best options could be?
NSURLConnection can do what you want. Set a delegate and use the -connection:didReceiveData: callback to read each chunk of data as it's downloading.
It will be up to you to properly handle splitting up the lines and handling chunks containing partial lines.
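The reassembly itself is just buffering: keep the trailing partial line and only emit lines terminated by '\n'. A sketch of the idea, in Scala for brevity (the same logic applies per didReceiveData: callback in Objective-C):

class LineAssembler(onLine: String => Unit) {
  private val buffer = new StringBuilder

  def feed(chunk: String): Unit = {
    buffer.append(chunk)
    var idx = buffer.indexOf("\n")
    while (idx >= 0) {
      onLine(buffer.substring(0, idx)) // complete line, hand it to the parser
      buffer.delete(0, idx + 1)        // drop the line plus its '\n'
      idx = buffer.indexOf("\n")
    }
    // whatever remains is a partial line; keep it for the next chunk
  }
}

// Usage: val assembler = new LineAssembler(println); assembler.feed("a,b\nc,")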

GWT: Is Timer the only way to keep my app up-to-date with the server?

I just got asked to reduce the traffic made by my GWT app. There is one method that checks for status.
This method is an asynchronous call wrapped in a Timer. I know web apps are stateless and all that, but I do wonder if there is some other way to do this, or if everyone has a Timer wrapped around a call when they need this kind of behaviour.
You can check out gwteventservice. It claims to have a way to push server events and notify the client.
I have a feeling they might be implemented as long-running (hanging) client-to-server RPC calls which time out after an interval (say 20 seconds) and are then re-made. The server returns the callback if an event happens in the meantime.
I haven't used it personally but know of people using it to push events to the client. Have a look at the docs. If my assumption is correct, the idea is to send an RPC call to the server which does not return (hangs). If an event happens on the server, the server responds and that RPC call returns with the event. If there is no event, the call times out after 20 seconds. Then a new call is made to the server, which hangs in the same way until there is an event.
What this achieves is that it reduces the number of calls to the server to either one per event (if there is one), or one every 20 seconds (if there isn't). It looks like the 20-second interval can be configured.
I imagine that if there is no event, the amount of data sent back will be minimal; it might be possible to cancel the callback entirely, or have it fail without data transfer, but I don't really know.
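A language-agnostic sketch of that long-polling loop, written here in Scala. pollForEvents stands in for the hanging call: it completes with events as soon as the server has any, or with an empty list once the server-side timeout (e.g. 20 seconds) expires.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object LongPollSketch {
  def pollForEvents(): Future[List[String]] =
    Future.successful(Nil) // placeholder for the real hanging RPC call

  def pollLoop(handle: String => Unit): Future[Unit] =
    pollForEvents().flatMap { events =>
      events.foreach(handle) // deliver whatever arrived (possibly nothing)
      pollLoop(handle)       // immediately issue the next hanging call
    }
}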
Here is another resource on ServerPush - which is likely what's implemented by gwteventservice.
Running on Google App Engine, you could use their Channel technology:
http://code.google.com/intl/en-US/appengine/docs/java/channel/overview.html
If you need the client to get the status from the server, then you pretty much have to make a call to the server to get its status.
You could look at reducing the size of some of your messages.
You could wind back the timer so the status call goes out less often.
You could "optimise" so that the timer gets reset when some other client/server IO happens (i.e. if the server replies, you know it is OK, so you don't need to send the next status request).