Streaming data from a web server, trying to use VB.NET and CGI

I need to stream data from a web server to clients. The data is location data that is collected and stored on the server. The clients will click a button on an HTML page to 'opt in' to start receiving the data. This data is never-ending, and at least one of the clients needs to receive it 24/7, with as few breaks as possible. The data being streamed is client-specific, as each client won't receive exactly the same data.
I've done several multi-threaded TCP servers over sockets, and WebSockets are the way I would like to attack this, but the requirement is that this has to work in IE9.
The initial requirement was that this be a VB.NET CGI executable - but during testing, I haven't been able to 'use' the stream from the VB.NET executable until the app finishes - as if it wasn't able to flush stdout even though I was specifically calling Console.Out.Flush(). So if this isn't a viable option, and I can support this with facts, then I can get this requirement changed.
I've also read quite a bit about using a third-party server to stream the data - Orbit and APE, I think, were a couple of them - but the requirement is for one server: the web server. No other hardware can be required.
I'm pretty sure the VB.NET CGI isn't the ideal solution based on what I've found, but is it doable, or do I need to abandon that solution and move on to a newer technology, such as ISAPI? Any ideas or suggestions, even if they just point me in the right direction, are greatly appreciated.

There are a few ways you could go.
If you go with C#/.NET, you might look into a Silverlight solution. It requires a plugin to be installed in the browser (like Flash), but the good thing is that you can send data through normal sockets, in pure real time from the server. At the same time, Silverlight uses .NET, so some code can be shared, which helps the development process. It will also behave the same way across different browsers.
You might also have a look at a similar solution using a Java applet with a Java backend (the backend can even be .NET, but again, it's easier to develop when both ends are in the same language).
Another option is a front-end using WebSockets, but as you know they are not supported in IE9 and below (IE10 promises to support them), and Opera does not support them either.
The backend can be done in whatever you prefer. But bear in mind that WebSockets use framing, and for a constant stream of small packets this is not efficient: if you send 10 bytes of payload, the WebSocket frame adds a 2-14 byte header and the TCP/IP headers add roughly 40 bytes on average, so the overhead is several times the size of the data itself.
To support older browsers you might have a look at long-polling, but it is not as reliable as WebSockets.
It is also important to estimate the amount of data and the approximate number of users of your system. Based on those calculations you will have a rough idea of how realistic the plan is and what kind of server will be required to handle it.

Related

RESTful interface for ECG/EEG sensor data in haskell

I'm working on a project in which I want to display biosensor EEG/ECG data measured by a portable device (e.g., a microcontroller with wireless data transmission via WiFi or Bluetooth). For this purpose, I need to interface with the portable device/microcontroller; many of these devices seem to use RESTful interfaces, but they probably also offer sockets.
One example of a microcontroller with WiFi is the "spark.io", which is based on a Cortex-M3 and a CC3000 wireless controller for on-board WiFi access. The data to be transferred are around 500 to 1000 float values per second, which should arrive at the REST client with as little delay as possible. A non-REST approach like sockets would probably fit better, but I would still like to test an approach based on a RESTful interface (a small argument for this is that transferring data via a RESTful interface seems very common and has good library support).
Q: The question is, what is the best approach for a performant (in the sense of near real-time) implementation that interfaces with this device via a REST interface?
I am sure this problem has been solved before, but I could not quickly find a paper via Google Scholar or a technical/scientific blog post that explains this. The only link I found is on "REST hooks", but I am not sure if this is a good approach. Searching on SE didn't reveal a past question on this.
Side note: My approach would be to implement the interface in Haskell first to test the design and performance of the RESTful interface. Later, the working approach should be ported to or implemented with Java/Android/spark.io/some other microcontroller.
(Please note this question is entirely about the architecture and not at all about Haskell libraries or anything. If using REST is the stupidest thing to do here, I will accept that as an answer if it is well argued. The question then also becomes whether microcontroller web interfaces in general, and their APIs specifically, like that of "spark.io", are a bad idea if they are implemented via REST. Is this the case? If not, what definition of "near real time" justifies that a REST interface is a bad idea and other means of communication are better? One sensor read per minute? One per second, per 1/10 second, per 1/100 second, per 1/1000 second?)
Okay, let's go through this.
REST is not necessarily a bad idea but it has a lot of features which you may not need. For example, there are REST verbs not just for retrieval, but also updating, deleting, and creating resources. If those functions are important (e.g. you need to send certain control data to the EEG controller) then REST will be nice. If you just want fast access to the stream of data, consider raw TCP instead.
Similarly, REST will package messages into "requests" and their "responses" which come with a bunch of "headers" indicating things like whether the request could be fulfilled, whether it's compressed, etc. These can be great features but may be bloat. You'll probably want to emit enough data on each request so that the ~1kB of headers are a small fraction of it. But given 8-byte floats (doubles), that requires transmitting 500-1000 data points, which you've said will take about one second. Is that our fate -- to always have 1s of latency?
REST will allow you to avoid some of that bloat by declaring a Transfer-Encoding: chunked so that the client can operate on individual chunks as they become available. So that's an architectural decision that I think will need to be made.
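To make the chunked option concrete, here is a rough sketch of a handler that flushes a small batch of samples as soon as they are ready. It uses the JDK's built-in HTTP server and is written in Java purely to illustrate the HTTP mechanics (the question is architecture-level anyway); the endpoint path and the fake samples are placeholders, and wai/yesod expose the same idea through streaming response bodies:

```java
import com.sun.net.httpserver.HttpServer;  // ships with the JDK
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class ChunkedSensorServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ecg", exchange -> {
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            // A response length of 0 means "unknown", so the server uses chunked encoding.
            exchange.sendResponseHeaders(200, 0);
            try (OutputStream out = exchange.getResponseBody()) {
                for (int i = 0; i < 100; i++) {           // pretend these are sensor reads
                    String chunk = "sample " + i + "\n";  // placeholder payload
                    out.write(chunk.getBytes("UTF-8"));
                    out.flush();                          // push this chunk to the client now
                    Thread.sleep(100);                    // ~10 batches per second
                }
            } catch (InterruptedException ignored) {
                // stop streaming if interrupted
            }
        });
        server.start();
    }
}
```

The point of the sketch is only that each flush becomes a chunk the client can consume immediately, instead of waiting for a complete response.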
I would definitely get Keep-Alive working as soon as possible, and it would be my chief feature when looking for what library to use on the server. Keep-Alive is a standard extension to HTTP which avoids tearing down and rebuilding the TCP stack for each HTTP request. If you don't do this then you have some heavy protocol negotiations each time you send a request.
A crucial decision you'll have to make involves whether you want to do HTTP pipelining or not. You can combine HTTP pipelining with longer-lived requests (ones where you don't expect an immediate response) to essentially "send the data when it becomes available" (i.e. send the headers first and let the server push out the data when it's good and ready). This is an alternative to chunked transfers.
If you can work those out, then HTTP is regularly used to send megabytes per second, so your use case fits well within what REST is capable of. In terms of REST/HTTP libraries for Haskell, if you have to somehow program the controller yourself, the big options are wai, yesod, snap, and rest. If you just need an HTTP client there are a few of those too.

RTP/RTSP start-up latency: Would this method help to reduce it, and if yes, why don't we have it?

This is probably not the best forum for such a specialized question, but at the moment I don't know of a better one (open to suggestions/recommendations).
I work on a video product which for the last 10+ years has been using a proprietary communications protocol (DCOM-based) to send video across the network. A while ago we recognized the need to standardize, and we are currently almost at the point of ripping out all that DCOM baggage and replacing it with a fully compliant RTP/RTSP client/server framework.
One thing we noticed during testing over the last few months is that when we switch the client to use RTP/RTSP, there's a noticeable increase in start-up latency. The problem is that it's not our code - it's RTSP itself.
BEFORE (DCOM): we would send one DCOM command, and before that command even returned to the client, the server would already be sending video. -- total latency: 1 RTT
NOW (RTSP): This is the sequence of commands, each one being a separate network request: DESCRIBE, SETUP, SETUP, PLAY (assuming the session has audio and video) -- a total of 4 RTTs.
It works as designed - but unfortunately it feels like a step backwards, because the prior user experience was actually better.
Can this be improved? If you stay within the standard, the short answer is NO. However, my team fully controls our entire RTP/RTSP stack, and I've been thinking we could introduce a new RTSP command (without touching any of the existing commands, so we remain fully interoperable) as a solution: DESCRIBE_SETUP_PLAY.
We could send this one command and pass in the types of streams we're interested in (typically there's only one video stream and zero or one audio stream). The response would include the full SDP text as well as all the port information, and just like before, the server would start streaming instantly without waiting for anything else from the client.
Would this work? Any downsides that I may not be seeing? I'm curious why this wasn't considered for (or was dropped from) the official spec, since the latency is definitely noticeable even on a local intranet.
FYI, it is possible according to the RTSP 1.0 specification:
9.1 Pipelining
A client that supports persistent connections or connectionless mode
MAY "pipeline" its requests (i.e., send multiple requests without
waiting for each response). A server MUST send its responses to those
requests in the same order that the requests were received.
The RTSP 2.0 draft also contains support for pipelining.
However none of the clients/servers I've used implement it AFAIK.
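For what it's worth, pipelining needs no new machinery on the wire - the client just writes requests back to back on one TCP connection and reads the responses in order. A rough sketch in Java (the host, track URLs, and transport parameters are made up; note that in RTSP 1.0 the second SETUP and the PLAY still need the Session ID returned by the first SETUP, so in practice you pipeline DESCRIBE with the first SETUP and send the rest once that response is parsed - still far fewer round trips than four):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;

public class RtspPipelineSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical server and stream names; a real client would parse the SDP
        // and the Session header instead of just printing the raw responses.
        try (Socket s = new Socket("camera.example.com", 554)) {
            OutputStream out = s.getOutputStream();
            String base = "rtsp://camera.example.com/stream";

            // Pipelining: write both requests back to back, then read the
            // responses in order (the server must answer CSeq 1 before CSeq 2).
            String pipelined =
                "DESCRIBE " + base + " RTSP/1.0\r\nCSeq: 1\r\nAccept: application/sdp\r\n\r\n" +
                "SETUP " + base + "/video RTSP/1.0\r\nCSeq: 2\r\n" +
                "Transport: RTP/AVP;unicast;client_port=5000-5001\r\n\r\n";
            out.write(pipelined.getBytes("US-ASCII"));
            out.flush();

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), "US-ASCII"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
                // A real client would pull the Session ID out of the SETUP response
                // here, then send the audio SETUP and PLAY (which need that ID).
            }
        }
    }
}
```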

Implement server-push with GWTP

I've got a project using GWTP (which involves MVP separation, Gin and Dispatch), and now I'm in a situation where changes on the server need to be pushed to specific clients.
I've been reading the gwt-comet and gwteventservice documentation. It seems the first doesn't work with RPC and the second encapsulates RPC, and I don't know how to fit that into my current command pattern from GWTP. Ideas?
I have been using gwt-comet (http://code.google.com/p/gwt-comet/). It's a native comet implementation that works pretty well, much like RPC; you can send Strings or your GWT-serialized objects as well. And the best thing is that you don't need to do much to make it work.
I used the "Server Push in GWT" approach described here: http://code.google.com/p/google-web-toolkit-incubator/wiki/ServerPushFAQ - it seemed to work fairly well for a small project.
This is really a servlet problem, not a GWT or GWTP problem.
So there are a few approaches to doing this; the most stable (in my opinion) is a long or blocking poll servlet. This is basically a servlet that is polled by the client and that holds the connection open for some period of time if there is no message to 'push' to the client; if too much time passes (this is to get around HTTP timeouts), a heartbeat of some kind is returned. Either way, when the servlet request returns, the client just makes another request. This is the most portable and stable way to my mind, since it uses only the core servlet API and doesn't suffer from network issues, and the blocking portion allows the poll to 'park' at the server for some period of time, which reduces total request load while still allowing new information to be returned to the client very quickly when it becomes available.
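A minimal sketch of that blocking-poll servlet, using only the core servlet API as described above (the 30-second park time and the URL mapping are arbitrary, and a real application would keep one queue per client/session rather than a single shared queue):

```java
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/poll")  // hypothetical mapping
public class BlockingPollServlet extends HttpServlet {

    // Whatever produces updates on the server drops them into this queue.
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String message = null;
        try {
            // 'Park' the request here for up to 30 seconds waiting for something to push.
            message = updates.poll(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resp.setContentType("text/plain");
        // If nothing arrived in time, return a heartbeat so the client simply re-polls.
        resp.getWriter().write(message != null ? message : "heartbeat");
    }
}
```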
The next way to achieve this is via WebSockets. This is great once you get it working, and in my opinion it is the way of the future without question. I think it's a good one to get familiar with, since it will, in my opinion, be a paradigm shift in web applications once it catches a head of steam, so we all need to be up to speed. Basically, you have a JavaScript 'socket' opened via port 80 (this is one of the best features, since you don't have to open any firewall holes) and you can communicate in both directions across that socket.
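If you go the WebSocket route, the server side can be quite small. Here is a sketch using the standard Java WebSocket API (JSR 356); the endpoint path is made up, and the browser side would just open `new WebSocket(...)` in JavaScript:

```java
import java.io.IOException;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/push")  // hypothetical path
public class PushEndpoint {

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // The container keeps the connection open; the server can push at any time.
        session.getBasicRemote().sendText("connected");
    }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Traffic flows both ways over the same connection; echo back as a demo.
        session.getBasicRemote().sendText("server got: " + message);
    }
}
```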
Comet can also work, but it will generally lock you down to one server type, which may be alright for your application. A caveat here: I have only done very small tests with Comet; it was flaky for me when I set it up, and not as steady as the blocking poll solution as I had it configured.
Now the neatest one in my opinion, but one that is very limited due to network constraints - probably to single-domain intranet applications - is to use an applet-based push. This setup (which could be done with UDP or a straight socket; I did it all over HTTP just to keep it conceptually simpler) uses an applet to spin up a Jetty server instance on the client, and then has the page publish the client's Jetty 'endpoint' to the server. At this point, the client can contact the server using its servlets, and the server can contact the client at the servlet(s) exposed on the client's Jetty server. This is true push. It's neat, but there are network nightmares.
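To make the applet-based idea concrete, here is a rough sketch of just the "server inside the client" part using embedded Jetty; the port and the echoed response are placeholders, and the applet packaging, signing, and publishing of the endpoint to the real server are not shown:

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class ClientSideEndpoint {
    public static void main(String[] args) throws Exception {
        Server jetty = new Server(8081);          // embedded server running inside the client
        jetty.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws java.io.IOException {
                // The real server calls this URL to "push" to the client.
                response.setContentType("text/plain");
                response.getWriter().write("push received: " + target);
                baseRequest.setHandled(true);
            }
        });
        jetty.start();
        // Next step (not shown): tell the real server "you can reach me at http://<my-ip>:8081/".
        jetty.join();
    }
}
```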
So of all the above, I use long polling, keep my eye on WebSockets since they are the future in my mind, and really like the applet-based version, although it's quite restricted in use due to the network reachability limitations.
Once you have this decided, from GWTP you would just have actions or JSNI bridge methods as needed to connect to your server and receive responses. I won't go into this, since this is really a core servlet/http/javascript question more than a GWT or GWTP centric question.
I hope that helps!

Deciphering MMORPG Protocol Encoding

I plan on writing an automated bot for a game.
The tricky part is figuring out how they encoded their protocol... Making the bot run around is easy: simply make the character run and record what it does in Wireshark. However, interpreting the environment is more difficult... It receives about 5 packets each second if you are idle, hence lots of garbage.
My plan: Because the game runs over TCP, I will use FreeCap (http://www.freecap.ru/eng) to force the game to connect to a proxy running on my machine. I will need this proxy to be capable of packet injection, or perhaps a server that is capable of resending captured packets. This way I can recreate and tinker with what the server sends, and understand their protocol encoding.
Does anyone know where I can get a proxy that allows packet injection, or where I can perform packet injection (not via hardware, as is the case with wireless)?
Or where, if anywhere, I can find a server/proxy that resends captured packets (i.e. replays a connection).
Any better tools or methodologies for pattern matching? Something which can highlight patterns across multiple messages would be GREAT.
Or is there a better way to decipher this? Possibly a disassembly strategy (via hooking a Winsock function and starting the disassembly from there)? I have not done this before so I am not sure. Or any other ideas?
Network traffic interception and protocol analysis is generally a less favored method of accomplishing your goal here. For most modern games, encryption is a serious factor, and there are serious headaches associated with protocol analysis for anything but the most trivial and common gameplay scenarios.
Most modern implementations* of what you are trying to do rely on reading and manipulating the memory space and process of a running client. The client will have already done all the hard parts for you, including decrypting the traffic and sorting it into far more easy to read data structures. For interacting with the server you can call functions built into the client instead of crafting entire series of packets from scratch. The plus to this approach is that you have to do far less work to interpret the data and produce activity. The minus is that there is often some data in the network traffic that would be useful to a bot but is discarded by the client, or that you may want to send traffic to the server that the client cannot produce (which, in my own well-developed hierarchy for such, is a few steps farther down the "cheating" slope).
*...I say this having seen the evolution of the majority of MMORPG botting/hacking communities from network protocol analyzers like ShowEQ and Odin's Eye / Excalibur to memory-based applications like MacroQuest and InnerSpace. On that note, InnerSpace provides an excellent extensible framework for the memory/process-based variant of what you are attempting, and you should look into it as a basis for your project if you abandon the network analysis approach.
As I've done a few game bots in the past (for fun, not profit or griefing of course - writing game bots is a lot of fun), I recommend the following:
If you can code and there isn't cheat protection preventing you from doing it, I highly recommend writing an injected DLL for the following reasons:
Your DLL will be able to access the game's memory space directly, and once you reverse-engineer the data structures (either by poking around memory or by code disassembly), you'll have access to lots of data. This will also allow you to bypass any network encryption the game may have. The downside of accessing process memory directly is that offsets and data structures change between versions - however, data structures don't change very often in a stable game, and you can compensate for offset changes by searching for code patterns instead of using fixed offsets.
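The pattern-search idea is language-agnostic. Here is a tiny sketch of scanning a byte dump for a signature with wildcard positions - shown in Java only to keep the examples in this thread in one language (an injected DLL would do the same thing in C/C++ against the live process memory), and the signature bytes below are made up:

```java
public class PatternScan {
    // Returns the index of the first match of the signature in data, or -1 if not found.
    // A null entry in the signature means "wildcard" (matches any byte).
    static int find(byte[] data, Byte[] signature) {
        outer:
        for (int i = 0; i + signature.length <= data.length; i++) {
            for (int j = 0; j < signature.length; j++) {
                if (signature[j] != null && data[i + j] != signature[j]) {
                    continue outer;
                }
            }
            return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] dump = { 0x55, (byte) 0x8B, (byte) 0xEC, 0x33, (byte) 0xC0, 0x5D, (byte) 0xC3 };
        // "55 8B EC ?? C0" - the wildcard position can change between game versions.
        Byte[] sig = { 0x55, (byte) 0x8B, (byte) 0xEC, null, (byte) 0xC0 };
        System.out.println("pattern found at offset " + find(dump, sig));
    }
}
```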
Either way, you'll still be able to hook WinSock functions using API hooks (check out Microsoft Detours and the excellent but now-commercial madCodeHook).
Otherwise, I can only advise that you give live/interactive packet editors like WPE Pro a try.
In most scenarios, the coolest methods (code reverse-engineering and direct memory access) tend to be the least productive. They require a lot of skill (to understand the code) and time, both initially (to go through all the code and develop code to interact with the data structures) and for maintenance (in case the game is updated). (Of course, they sometimes do allow doing cool stuff which is impossible with the official client, but most of the time this is obvious, blatant cheating, and likely to attract the GMs quickly.) Most of the time bots are made by replacing game graphics/textures with solid colours and creating simple "pixel" bots which search for certain colours on the screen and react accordingly (e.g. click them).
Hope this helps, and remember - cheating is only fun when it doesn't make the game less fun for everyone else ;)
There are probably a few reasonable assumptions you can make that should simplify your task enormously. However, to make the best use of them you will probably need greater comfort with sleeves-rolled-up programming than it sounds like you have.
First, it's a safe bet that the encryption they are using falls into one of three categories:
None
Cheesy
Far better than you are likely to crack
With the odds of the middle case being very low.
Next, it's a safe bet that the packets are encrypted / decrypted close to the edge of the program (right as they come in, right before they go out) and that the body of the game deals with them in decrypted form.
Finally, the protocol they are using most likely consists of either
ascii with data blocks
binary goo
So do a little packet sniffing with a card set in promiscuous mode and look for unencrypted ascii. If you see some, great, you're ahead of the game. But if you don't, give up the whole tapping-the-line idea and instead start following the code as it returns from sending data out, by breakpointing and stepping with a debugger. Figure the outermost layer or three will be standard network stuff, then will come the encryption layer, and beyond that the huge mass of code that deals with the protocol unencrypted.
You should be able to get this far in an hour if you're hot, a weekend if you're reasonably skilled, motivated, and diligent, and never if you are hopeless. But it is possible in principle (and doubtlessly far easier in practice) to do it this way.
Once you get to where something that looks like unencrypted goo comes in, gets mungled, and the mungled form goes out, then start worrying about what it means.
-- MarkusQ
A) I play an MMO and do not support bots, voting down...
B) Download BackTrack v3 and run an ARP spoof on your default gateway and your host. There is an application that will spoof the remote host's SSL cert - sslmitm, I believe it's called - which will then allow you to create a full connection through your host. Then fire up tcpdump/ethereal/wireshark (choose your pcap poison) and move around and do random stuff to find out what packet is doing what. That will be your biggest challenge; but proxying with a man-in-the-middle attack on yourself is the way to go.
C) I do not condone this activity; this information is being provided as free information only.
Sounds like there is no encryption going on, so you could take a network approach.
A great place to start would be to find the packet IDs - most of the time, something near the front of the packet is going to be an ID for the type of the packet. For example, move could be 1, shot fired could be 2, chat could be 4.
You can write your own proxy that listens on one port for your game to connect to and then connects on to the real server. You can make keypresses to your proxy fire off commands, or you can make your proxy write out debugging info to help you go further.
(I've written a bot for an online game in PHP - of all things.)
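A bare-bones version of such a proxy, sketched in Java; the game host, the ports, and the assumption that the first byte of each read is the packet ID are all placeholders to adapt:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class LoggingProxy {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(4000)) {     // point the game client here
            Socket client = listener.accept();                     // single connection, for simplicity
            Socket server = new Socket("game.example.com", 5000);  // the real game server
            pump(client.getInputStream(), server.getOutputStream(), "C->S");
            pump(server.getInputStream(), client.getOutputStream(), "S->C");
        }
    }

    // Copy bytes from one side to the other on a background thread, logging as we go.
    static void pump(InputStream from, OutputStream to, String tag) {
        new Thread(() -> {
            byte[] buf = new byte[4096];
            try {
                int n;
                while ((n = from.read(buf)) != -1) {
                    // Crude guess: treat the first byte of each read as the packet ID.
                    System.out.printf("%s %d bytes, first byte 0x%02X%n", tag, n, buf[0]);
                    to.write(buf, 0, n);
                    to.flush();
                }
            } catch (Exception e) {
                // connection closed or reset - stop pumping
            }
        }).start();
    }
}
```

Injecting or replaying packets is then just a matter of writing extra bytes to `server` from your own code instead of (or in addition to) what the client sent.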

How to push data to a variety of different client types in near real time?

What we need is to push sports data to a number of different client types such as AJAX/JavaScript, Flash, .NET and Mac/iPhone. Data updates only need to be near real-time, with delays of several seconds being acceptable.
How to best accomplish this?
The best solution (if we're talking .NET) seems to be to use WCF and streaming HTTP. The client makes the first HTTP connection to the server on port 80, and the connection is then kept open with a streaming response that never ends. (And if it does end, the client reconnects.)
Here's a sample that demonstrates this: Streaming XML.
The solution to pushing through firewalls: Keeping connections open in IIS
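The client side of this pattern is the same regardless of platform: open the connection, read the never-ending response, and reconnect if it drops. A rough sketch in Java for illustration (the endpoint URL is made up; a WCF or JavaScript client would look different but follow the same loop):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamingClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/scores/stream");  // placeholder endpoint
        while (true) {                                            // reconnect forever
            try {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("score update: " + line);  // handle one update
                    }
                }
            } catch (Exception e) {
                // network hiccup or server restart - fall through and reconnect
            }
            Thread.sleep(1000);  // small back-off before reconnecting
        }
    }
}
```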
I would go with XML. XML is widely supported on all platforms and has lots of libraries and tools available for it. And since it's text, there are no issues when you pass it between platforms.
I know JSON is another alternative, but I'm not familiar enough with it to know whether or not to recommend it in this case.