How to push data to a variety of different client types in near real time?

We need to push sports data to a number of different client types such as AJAX/JavaScript, Flash, .NET, and Mac/iPhone. Data updates only need to be near real time, with delays of several seconds being acceptable.
What is the best way to accomplish this?

The best solution (if we're talking .NET) seems to be to use WCF and streaming HTTP. The client makes the first HTTP connection to the server on port 80, and the connection is then kept open with a streaming response that never ends (and if it does end, the client reconnects).
Here's a sample that demonstrates this: Streaming XML.
The solution to pushing through firewalls: Keeping connections open in IIS
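For comparison, here is a minimal sketch of the same keep-the-response-open idea outside WCF, written as a plain Java servlet; the class name, payload, and update interval are invented for illustration and are not taken from the linked sample.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal "streaming HTTP" sketch: the response never completes, and each
// update is flushed to the client as soon as it is available.
public class ScoreStreamServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/xml");
        resp.setHeader("Cache-Control", "no-cache");
        PrintWriter out = resp.getWriter();
        try {
            while (!out.checkError()) {                        // stops once the client disconnects
                out.println("<score home=\"1\" away=\"0\"/>"); // placeholder update
                out.flush();                                   // push the chunk immediately
                Thread.sleep(2000);                            // near real time: every few seconds
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The client never sees the response complete; it simply keeps reading chunks as they arrive and reconnects if the connection drops.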

I would go with XML. XML is widely supported on all platforms and has lots of libraries and tools available for it. And since it's text, there are no issues when you pass it between platforms.
I know JSON is another alternative, but I'm not familiar enough with it to know whether or not to recommend it in this case.
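Purely as an illustration of how little is needed on the server side, here is a made-up score update built with the standard JAXP APIs; the element and attribute names are invented.

import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Builds a tiny, invented score-update document with the standard JAXP APIs,
// just to show that plain XML needs no extra libraries on the Java side.
public class ScoreUpdateXml {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element update = doc.createElement("scoreUpdate");
        update.setAttribute("home", "2");
        update.setAttribute("away", "1");
        doc.appendChild(update);

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out); // <scoreUpdate home="2" away="1"/>
    }
}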

Related

Streaming data from web server, trying to use VB.NET and CGI

I need to stream data from a web server to clients. The data is location data that is collected and stored on the server. The clients will click a button on an HTML page to 'opt in' to start receiving the data. This data is never-ending, and at least one of the clients needs to receive it 24/7, with as few breaks as possible. The data being streamed will be client-specific, as each client won't receive exactly the same data.
I've written several multi-threaded TCP servers over sockets, and WebSockets are the way I would like to attack this, but the requirement is that this has to work in IE9.
The initial requirement was that this be a VB.NET CGI executable - but during testing, I haven't been able to 'use' the stream from the VB.NET executable until the app finishes - as if it wasn't able to flush stdout even though I was specifically calling Console.Out.Flush(). So if this isn't a viable option, and I can support that with facts, then I can get this requirement changed.
I've also read quite a bit about using a third-party server to stream the data (Orbit and APE, I think, were a couple of them), but the requirement is for one server - the web server. No other hardware can be required.
I'm pretty sure the VB.NET CGI isn't the ideal solution based on what I've found, but is it doable, or do I need to abandon that solution and move on to a newer technology such as ISAPI? Any ideas or suggestions, even if they just point me in the right direction, are greatly appreciated.
You might go a few ways.
If you go with C#/.NET, you might look into a Silverlight solution. It requires a plugin to be installed in the browser (like Flash), but the good thing is that you can send data through normal sockets, in pure real time, from the server. At the same time, Silverlight uses .NET, so some code can be shared, which helps the development process. It will also behave the same way across different browsers.
You might have a look at a similar solution using a Java applet with a Java backend (it can even be .NET, but again, it's easier to develop when both are in the same language).
Another option is a front end using WebSockets, but as you know it's not supported in IE9 and below (IE10 promises to support it), and Opera does not support it either.
The backend can be done in whatever you prefer. But bear in mind that WebSockets uses framing, and for a constant stream of small packets it's not efficient: if you send 10 bytes, the frame adds roughly 2-12 bytes on top, plus a TCP/IP header of around 40 bytes on average.
To support older browsers you might have a look at long polling, but it is not as reliable as WebSockets.
It is also important to estimate the amount of data and the approximate number of users of your system. Based on those calculations you will have a rough idea of how realistic it is and what kind of server will be required to handle it; a back-of-the-envelope sketch follows below.
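As a purely illustrative estimate (every figure below is an assumption, not a measurement), the per-message overhead and aggregate bandwidth for small pushed updates can be worked out like this:

// Rough, illustrative capacity estimate; all figures are assumptions.
public class PushCapacityEstimate {
    public static void main(String[] args) {
        int payloadBytes = 10;        // size of one update
        int wsFrameOverhead = 6;      // small WebSocket frame header (assumed)
        int tcpIpOverhead = 40;       // typical TCP + IP headers
        int updatesPerSecond = 1;     // one update per client per second
        int clients = 5000;           // assumed concurrent clients

        int bytesPerMessage = payloadBytes + wsFrameOverhead + tcpIpOverhead;
        long bytesPerSecond = (long) bytesPerMessage * updatesPerSecond * clients;

        System.out.printf("~%d bytes per message, ~%.2f MB/s for %d clients%n",
                bytesPerMessage, bytesPerSecond / 1e6, clients);
        // Most of each message is overhead, which is the point made above about
        // tiny payloads; batching several updates per frame reduces it.
    }
}

With these assumed numbers, roughly 56 bytes go out per 10-byte update, so the protocol overhead dominates; the same arithmetic with your real message sizes and user counts gives a first cut at the server and bandwidth requirements.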

RTP/RTSP start-up latency: would this method help to reduce it, and if yes, why don't we have it?

This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using a proprietary communications protocol (DCOM-based) to send the video across the network. A while ago we recognized the need to standardize, and we are currently almost at the point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is not our implementation; it's RTSP itself.
BEFORE (DCOM): we would send one DCOM command and before that command even returned back to the client, the server would already be sending video. -- total latency 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- total of 4 RTTs.
It works as designed - unfortunately it feels like a step backwards, because the prior user experience was actually better.
Can this be improved? If you stay within the standard, the short answer is no. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we remain fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command, passing in the types of streams we're interested in (typically there's only one video stream and zero or one audio stream). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? Is there any downside that I may not be seeing? I'm curious why this wasn't considered for (or was dropped from) the official spec, since the latency is definitely noticeable even on a local intranet.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode
MAY "pipeline" its requests (i.e., send multiple requests without
waiting for each response). A server MUST send its responses to those
requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However, none of the clients or servers I've used implement it, AFAIK.
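For illustration only, here is a rough sketch of what pipelining could look like from a client, assuming DESCRIBE has already been issued and its SDP parsed: the two SETUPs and the PLAY are written back-to-back on one TCP connection without waiting for each response. The host, track URLs, and transport parameters are invented, and whether a server accepts the later requests before a Session header is known is server-dependent (the RTSP 2.0 draft adds explicit support for this).

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of RTSP request pipelining (RFC 2326, section 9.1).
// Server address, track URLs and transport parameters are made-up examples.
public class RtspPipeliningSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("camera.example.com", 554)) {
            OutputStream out = socket.getOutputStream();

            // Send SETUP (video), SETUP (audio) and PLAY back-to-back, without
            // waiting for the individual responses. The server must answer them
            // in the same order in which they were received.
            String pipelined =
                "SETUP rtsp://camera.example.com/stream/trackID=1 RTSP/1.0\r\n" +
                "CSeq: 2\r\n" +
                "Transport: RTP/AVP;unicast;client_port=5000-5001\r\n\r\n" +
                "SETUP rtsp://camera.example.com/stream/trackID=2 RTSP/1.0\r\n" +
                "CSeq: 3\r\n" +
                "Transport: RTP/AVP;unicast;client_port=5002-5003\r\n\r\n" +
                "PLAY rtsp://camera.example.com/stream/ RTSP/1.0\r\n" +
                "CSeq: 4\r\n\r\n";
            out.write(pipelined.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Read the responses; they arrive in request order. A real client
            // would parse each response and pick up the Session header here.
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[8192];
            int read = in.read(buffer);
            if (read > 0) {
                System.out.println(new String(buffer, 0, read, StandardCharsets.US_ASCII));
            }
        }
    }
}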

Implement server-push with GWTP

I have a project using GWTP (which involves MVP separation, Gin, and Dispatch). Now I'm in a situation where changes on the server need to be pushed to specific clients.
I've been reading the gwt-comet and gwteventservice documentation. It seems the first doesn't work with RPC and the second encapsulates RPC, and I don't know how to fit either into my current command pattern from GWTP. Ideas?
I have been using gwt-comet (http://code.google.com/p/gwt-comet/). It's a native Comet implementation that works pretty well, much like RPC: you can send Strings or your GWT-serialized objects as well. And the best thing is that you don't need to do much to make it work.
I used the "Server Push in GWT" approach described here: http://code.google.com/p/google-web-toolkit-incubator/wiki/ServerPushFAQ - it seemed to work fairly well for a small project.
This is really a servlet problem, not a GWT or GWTP problem.
So there are a few approaches to doing this; the most stable, in my opinion, is to have a long or blocking poll servlet. This is basically a servlet that is polled by the client and holds the connection open for some period of time if there is no message to 'push' to the client; if too much time passes (this is to get around HTTP timeouts), a heartbeat of some kind is returned. Either way, when the servlet request returns, the client just makes another request. This is the most portable and stable way to my mind, since it uses only the core servlet API and doesn't suffer from network issues, and the blocking portion allows the poll to 'park' at the server for some period of time, which reduces total request load while still returning new information to the client very quickly when it becomes available.
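As a rough sketch of that blocking-poll idea (the servlet class, the shared queue, and the timeout value are assumptions, not anything GWTP-specific):

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal blocking ("long") poll servlet. Each client request parks here until a
// message arrives or the timeout elapses, in which case a heartbeat is returned
// and the client immediately re-polls. A single shared queue keeps the sketch
// short; a real application would hold a queue per client or session.
public class LongPollServlet extends HttpServlet {

    private static final BlockingQueue<String> MESSAGES = new LinkedBlockingQueue<>();
    private static final long POLL_TIMEOUT_SECONDS = 25; // below typical HTTP/proxy timeouts

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String message;
        try {
            // Block until something is pushed or the timeout is reached.
            message = MESSAGES.poll(POLL_TIMEOUT_SECONDS, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            message = null;
        }
        resp.setContentType("text/plain");
        resp.getWriter().write(message != null ? message : "heartbeat");
    }

    // Called from server-side code whenever there is something to push.
    public static void push(String message) {
        MESSAGES.offer(message);
    }
}

Note that each waiting client ties up a container thread in this form; the asynchronous support added in Servlet 3.0 can relieve that, at the cost of going beyond the bare-bones API used here.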
The next way to achieve this is via WebSockets, this is great once you get it working and in my opinion is the way of the future without question. I think this is a good one to work with since this will be, in my opinion, a paradigm shift in web applications once it catches a head of steam, so we all need to be up to speed. Basically, you have a javascript 'socket' open via port 80 (this is one of the best features, since you don't have to open any firewall holes) and can communicate in two directions across that socket.
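If the container supports the standard Java WebSocket API (javax.websocket, a later addition than the libraries discussed in this thread), a server endpoint can be as small as the sketch below; the /push path and the echo behaviour are illustrative only.

import java.io.IOException;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Illustrative WebSocket endpoint using the standard javax.websocket API.
// The "/push" path and the echo behaviour are examples, not part of GWTP.
@ServerEndpoint("/push")
public class PushEndpoint {

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // The server can push to the client at any time over this session.
        session.getBasicRemote().sendText("connected");
    }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Two-way: the client can also send messages over the same socket.
        session.getBasicRemote().sendText("echo: " + message);
    }
}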
Comet can also work, but it will generally lock you into one server type, which may be all right for your application. A caveat here: I have only done very small tests with Comet; it was flaky when I set it up and was not as steady as the blocking-poll solution as I had it configured.
Now the neatest one in my opinion, but very limited by network constraints, probably to single-domain intranet applications, is applet-based push. This setup (which could be done with UDP or a straight socket; I did it all over the web just to keep it conceptually simpler) takes an applet, uses it to spin up a Jetty server instance on the client, and then has the page publish the client's Jetty 'endpoint' to the server. At that point, the client can contact the server using its servlets, and the server can contact the client at the servlet(s) exposed on the Jetty server. This is true push, and it's neat, but there are network nightmares.
So of all the above, I use long polling, keep my eye on WebSockets since they are the future in my mind, and really like the applet-based version, although it's quite restricted in use due to the network resolution limitations.
Once you have this decided, from GWTP you would just have actions or JSNI bridge methods as needed to connect to your server and receive responses. I won't go into this, since this is really a core servlet/http/javascript question more than a GWT or GWTP centric question.
I hope that helps!

How should I implement bi-directional networking between an iPhone application and an Objective-C server-side application?

I'm looking for advice on the best way to implement some kind of bi-directional communication between a "server-side" application, written in Objective-C and running on a mac, and a client application running on an iPhone.
To cut a long story short, I'm adapting an existing library for use in a client-server environment. The library (which runs on the server) is basically a search engine which provides periodic results, and additionally can provide updates for any of those results at a later date. In an ideal world therefore I would be able to achieve the following with my hypothetical networking solution:
Start queries on the server.
Have the server "push" results to the client as they arrive.
Have the server "push" updates to individual results to the client as they arrive.
If I was writing this client to run on another Mac, I might well look at using Distributed Objects to mask the fact that the server was actually running remotely, but DO is not available on an iPhone.
If I was writing a more generic client-server application I would probably look at using HTTP to provide some kind of RESTful interface to searches, but this solution does not lend itself well to asynchronous updates, and additionally what I am proposing does not fit well with the "stateless" tenet of REST: I would have to model my protocol so that I "created" a search resource whose state I could subsequently query, and I would have to poll for updates to it.
One suggestion someone made was to make use of something like BLIP to provide me with a two-way pipe between the client and the server and implement my own "proxy" type objects for the server-side resources that knew how to fetch data from the server and additionally were addressable so that the server could push updates to them. Whilst BLIP provides the low-level messaging framework needed to communicate bi-directionally it still leaves me with a few questions:
How will I manage the lifetime of the objects on the server? I can have a message type that "creates" a search object, but when should that object be destroyed?
How well will this perform on an iPhone? If I have a persistent connection to the server, will this drain the battery too fast? This question is also pertinent in the HTTP world: most async updates are done using a Comet-type hack, which again requires a persistent connection.
So right now I'm still completely unsure what the best way to go is: I've done a lot of searching and reading but have not settled on any solution. I'm asking here on SO because I'm sure that there are many of you out there who have already solved this problem.
How have you gone about achieving real-time bidirectional networking between the iPhone and an Objective-C server-side app?

Long polling vs streaming for about 1 update/second

Is streaming a viable option?
Will there be a performance difference on the server end depending on which I choose?
Is one better than the other for this case?
I am working on a GWT application with Tomcat running on the server end. To understand my needs, imagine updating the stock prices of several stocks concurrently.
Do you want the process to be client- or server-driven? In other words, do you want to push new data to the clients as soon as it's available, or would you rather the clients request new data whenever they see fit, even though that might not be once per second? What is the likelihood that the client will be able to stick around to wait for an answer? Even though you expect the events to occur once per second, how long does it take between a request from a client and the return from the server? If it's longer than a second, I'd expect you to lean towards pushing the events to the clients; the other way around, I'd expect polling to be okay. If the response takes longer than the interval, then you're essentially streaming anyway, since there's a new event ready by the time the client receives the last one, so the client could essentially poll continually and always receive events - in this case, streaming the data would actually be more lightweight, since you're removing the connection/negotiation overhead from the process.
I would suspect server load to be higher for a client-driven (pull) subscription than for a streaming configuration, since the client would have to re-negotiate the connection each time instead of leaving a connection open; but each open connection in a streaming model requires server resources as well. It depends on the trade-off between how aggressive your negotiation process is and how much memory/processing is required for each open connection. I'm no expert, though, so there may be other factors.
UPDATE: This guy talks about the trade-offs between long-polling and streaming, and he seems to say that with HTTP/1.1, the connection re-negotiation process is trivial, so that's not as much of an issue.
It doesn't really matter. The connection re-negotiation overhead is so slim with HTTP1.1, you won't notice any significant performance differences one way or another.
The benefits of long-polling are compatibility and reliability - no issues with proxies, ports, detecting disconnects, etc.
The benefits of "true" streaming would potentially be reduced overhead, but as mentioned already, this benefit is much, much less than it's made out to be.
Personally, I find a well-designed comet server to be the best solution for large numbers of updates and/or server-push.
Certainly, if you're looking to push data, streaming would seem to provide better performance, if your server can handle the expected number of continuous connections. But there's another issue that you don't address: Are you internet or intranet? Streaming has been reported to have some problems across proxies, much as you'd expect. So for a general purpose solution, you would probably be better served by long poll - for an intranet, where you understand the network infrastructure, streaming is quite likely a simpler, better performance solution for you.
The StreamHub GWT Comet Adapter was designed exactly for this scenario of streaming stock quotes. Example here: GWT Streaming Stock Quotes. It updates the stock prices of several stocks concurrently. I think the implementation underneath is Comet which is essentially streaming over HTTP.
Edit: It uses a different technique for each browser. To quote the website:
There are several different underlying techniques used to implement Comet including Hidden iFrame, XMLHttpRequest/Script Long Polling, and embedded plugins such as Flash. The introduction of HTML 5 WebSockets in to future browsers will provide an alternative mechanism for HTTP Streaming. StreamHub uses a "best-fit" approach utilizing the most performant and reliable technique for each browser.
Streaming will be faster because data only crosses the wire one way. With polling, the latency is at least doubled.
Polling is more resilient to network outages since it doesn't rely on a connection being kept open.
I'd go for polling just for the robustness.
For live stock prices I would absolutely keep the connection open, and ensure user alert/reconnection on disconnect.