RESTful interface for ECG/EEG sensor data in Haskell

I'm working on a project in which I want to display biosensor EEG/ECG data measured by a portable device (e.g., a microcontroller with wireless data transmission via WiFi or Bluetooth). For this purpose, I need to interface with the portable device/microcontroller; many (or at least some) of these devices seem to offer RESTful interfaces, but probably also plain sockets.
One example of a WiFi-capable microcontroller is the "spark.io", which is based on a Cortex-M3 with a CC3000 wireless controller for on-board WiFi access. The data to be transferred are around 500 to 1000 float values per second, which should arrive at the REST client with as little delay as possible. A non-REST approach like sockets would probably fit better, but I would still like to test an approach based on a RESTful interface (a small argument for this is that transferring data via a RESTful interface seems very common and has good library support).
Q: What is the best approach for a performant (in the sense of near-realtime) implementation that interfaces with such a device via a REST interface?
I am sure this problem has been solved before, but I could not quickly find a paper via Google Scholar or a technical/scientific blog post that explains it. The only link I found is on "REST hooks", but I am not sure whether that is a good approach. Searching on SE didn't reveal a past question on this.
Side note: My approach would be to implement the interface in Haskell first, to test the design and performance of the RESTful interface. Later, the working approach should be ported to or implemented with Java/Android/spark.io/some other microcontroller.
(Please note this question is entirely about the architecture and not at all about Haskell libraries or anything like that. If using REST is the stupidest possible choice, I will accept that as an answer if it is well argued. The question is then also whether microcontroller web interfaces in general, and their APIs specifically, like that of "spark.io", are a bad idea if they are implemented via REST. Is this the case? If not, what definition of "near real time" justifies the claim that a REST interface is a bad idea and other means of communication are better? One sensor read per minute? One per second, per 1/10 second, per 1/100 second, per 1/1000 second?)

Okay, let's go through this.
REST is not necessarily a bad idea but it has a lot of features which you may not need. For example, there are REST verbs not just for retrieval, but also updating, deleting, and creating resources. If those functions are important (e.g. you need to send certain control data to the EEG controller) then REST will be nice. If you just want fast access to the stream of data, consider raw TCP instead.
Similarly, REST will package messages into "requests" and their "responses" which come with a bunch of "headers" indicating things like whether the request could be fulfilled, whether it's compressed, etc. These can be great features but may be bloat. You'll probably want to emit enough data on each request so that the ~1kB of headers are a small fraction of it. But given 8-byte floats (doubles), that requires transmitting 500-1000 data points, which you've said will take about one second. Is that our fate -- to always have 1s of latency?
REST will allow you to avoid some of that bloat by declaring a Transfer-Encoding: chunked so that the client can operate on individual chunks as they become available. So that's an architectural decision that I think will need to be made.
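To make that concrete, here is a rough sketch of what consuming such a chunked stream could look like on the client side with Haskell's http-client package. The device URL is made up for illustration, and decoding the samples is left as a placeholder:

    -- Sketch only: read a streaming (chunked) response piece by piece with
    -- http-client, instead of waiting for the whole body to arrive.
    import Network.HTTP.Client
    import qualified Data.ByteString as BS
    import Control.Monad (unless)

    main :: IO ()
    main = do
      manager <- newManager defaultManagerSettings
      initial <- parseRequest "http://device.local/eeg/stream"      -- hypothetical endpoint
      let req = initial { responseTimeout = responseTimeoutNone }   -- the stream never "finishes"
      withResponse req manager $ \resp -> do
        let body = responseBody resp
            loop = do
              chunk <- brRead body          -- returns as soon as a chunk is available
              unless (BS.null chunk) $ do
                handleSamples chunk         -- decode floats, update the display, ...
                loop
        loop

    -- Placeholder for whatever the client actually does with a batch of samples.
    handleSamples :: BS.ByteString -> IO ()
    handleSamples chunk = putStrLn ("got " ++ show (BS.length chunk) ++ " bytes")

Each brRead returns as soon as the server flushes a chunk, so the delay is governed by how often the device flushes rather than by the length of the response.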
I would definitely get Keep-Alive working as soon as possible, and it would be my chief feature when looking for what library to use on the server. Keep-Alive is a standard extension to HTTP which avoids tearing down and rebuilding the TCP stack for each HTTP request. If you don't do this then you have some heavy protocol negotiations each time you send a request.
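For what it's worth, with http-client you get keep-alive essentially for free by routing all requests through one shared Manager; a sketch, with a made-up endpoint:

    -- Sketch: the same Manager reuses the underlying TCP connection across
    -- requests (HTTP/1.1 keep-alive), so only the first request pays for the
    -- connection setup.
    import Network.HTTP.Client
    import Control.Monad (forM_)

    main :: IO ()
    main = do
      manager <- newManager defaultManagerSettings
      request <- parseRequest "http://device.local/eeg/latest"   -- hypothetical endpoint
      forM_ [1 :: Int .. 10] $ \_ -> do
        resp <- httpLbs request manager   -- no reconnect between iterations
        print (responseStatus resp)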
A crucial decision you'll have to make involves whether you want to do HTTP pipelining or not. You can combine HTTP pipelining with longer-lived requests (ones where you don't expect an immediate response) to essentially "send the data when it becomes available" (i.e. send the headers first and let the server push out the data when it's good and ready). This is an alternative to chunked transfers.
If you can work those out, then HTTP is regularly used to send megabytes per second, so your use case fits well within what REST is capable of. In terms of REST/HTTP libraries for Haskell, if you have to somehow program the controller yourself, the big options are wai, yesod, snap, and rest. If you just need an HTTP client there are a few of those too.
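As a very rough sketch of the server side with wai/warp, a streaming response body gives you exactly the chunked behaviour discussed above. The sample source here is faked with a sine wave; a real device would read from its ADC instead:

    {-# LANGUAGE OverloadedStrings #-}
    -- Sketch: push (fake) samples to the client as they are produced, using
    -- warp's streaming responses (delivered as a chunked transfer).
    import Network.Wai
    import Network.Wai.Handler.Warp (run)
    import Network.HTTP.Types (status200)
    import Data.ByteString.Builder (floatLE)
    import Control.Concurrent (threadDelay)
    import Control.Monad (forM_)

    main :: IO ()
    main = run 8080 app

    app :: Application
    app _req respond =
      respond $ responseStream status200
        [("Content-Type", "application/octet-stream")]
        $ \write flush ->
            forM_ [0 :: Int ..] $ \i -> do
              write (floatLE (sin (fromIntegral i / 10)))  -- one fake sample
              flush                    -- hand the chunk to the client right away
              threadDelay 2000         -- roughly 500 samples per second

In practice you would write a few dozen samples per flush so the per-chunk overhead stays small relative to the payload.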

What can go wrong if we do NOT follow RESTful best practices?

TL;DR : scroll down to the last paragraph.
There is a lot of talk about best practices when defining RESTful APIs: what HTTP methods to support, which HTTP method to use in each case, which HTTP status code to return, when to pass parameters in the query string vs. in the path vs. in the content body vs. in the headers, how to do versioning, result set limiting, pagination, etc.
If you are already determined to make use of best practices, there are lots of questions and answers out there about what is the best practice for doing any given thing. Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
Most of the best practice guidelines direct developers to follow the principle of least surprise, which, under normal circumstances, would be a good enough reason to follow them. Unfortunately, REST-over-HTTP is a capricious standard, the best practices of which are impossible to implement without becoming intimately involved with it, and the drawback of intimate involvement is that you tend to end up with your application being very tightly bound to a particular transport mechanism. So, some people (like me) are debating whether the benefit of "least surprise" justifies the drawback of littering the application with REST-over-HTTP concerns.
A different approach examined as an alternative to best practices suggests that our involvement with HTTP should be limited to the bare minimum necessary in order to get an application-defined payload from point A to point B. According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request. The request will either fail to be delivered, in which case it is the responsibility of the web server to return an "HTTP 404 Not Found" to the client, or it will be successfully delivered, in which case the delivery of the request was "HTTP 200 OK" as far as the transport protocol is concerned, and anything else that might go wrong from that point on is exclusively an application concern, and none of the transport protocol's business. Obviously, this approach is kind of like saying "let me show you where to stick your best practices".
Now, there are other voices that say that things are not that simple, and that if you do not follow the RESTful best practices, things will break.
The story goes that, for example, in the event of unauthorized access, you should return an actual "HTTP 401 Unauthorized" (instead of a successful response containing a json-serialized UnauthorizedException) because upon receiving the 401, the browser will prompt the user for credentials. Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Another, more sophisticated way the story goes is that usually, between the client and the server exist proxies, and these proxies inspect HTTP requests and responses, and try to make sense out of them, so as to handle different requests differently. For example, they say, somewhere between the client and the server there may be a caching proxy, which may treat all requests to the exact same URL as identical and therefore cacheable. So, path parameters are necessary to differentiate between different resources, otherwise the caching proxy might only ever forward a request to the server once, and return cached responses to all clients thereafter. Furthermore, this caching proxy may need to know that a certain request-response exchange resulted in a failure due to a particular error such as "Permission Denied", so as to again not cache the response, otherwise a request resulting in a temporary error may be answered with a cached error response forever.
So, my questions are:
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices? Are these concerns about proxies real? Are caching proxies really so dumb as to cache REST responses? Is it hard to configure the proxies to behave in less dumb ways? Are there drawbacks in configuring the proxies to behave in less dumb ways?
It's worth considering that what you're suggesting is the way that HTTP APIs used to be designed for a good 15 years or so. API designers are tending to move away from that approach these days. They really do have their reasons.
Some points to consider if you want to avoid using ReST over HTTP:
ReST over HTTP is an efficient use of the HTTP/S transport mechanism. Avoiding the ReST paradigm runs the risk of every request / response being wrapped in verbose envelopes. SOAP is an example of this.
ReST encourages client and server decoupling by putting application semantics into standard mechanisms - HTTP and XML/JSON (or other data formats). These protocols and standards are well supported by standard libraries and have been built up over years of experience. Sure, you can create your own 'unauthorized' response body with a 200 status code, but ReST frameworks just make it unnecessary, so why bother?
ReST is a design approach which encourages a view of your distributed system that focuses on data rather than functionality, and this has proven a useful mechanism for building distributed systems. Avoiding ReST runs the risk of focusing on very RPC-like mechanisms, which have some risks of their own:
they can become very fine-grained and 'chatty'
which can be an inefficient use of network bandwidth
which can tightly couple client and server, through introducing statefulness and temporal coupling between requests.
and can be difficult to scale horizontally
Note: there are times when an RPC approach is actually a better way of breaking down a distributed system than a resource-oriented approach, but they tend to be the exceptions rather than the rule.
existing tools for developers make debugging / investigation of ReSTful APIs easier. It's easy to use a browser to do a simple GET, for example. And tools such as Postman or RestClient already exist for more complex ReST-style queries. In extreme situations tcpdump is very useful, as are browser debugging tools such as Firebug. If every API call has application-layer semantics built on top of HTTP (e.g. special response types for particular error situations) then you immediately lose some of the value of this tooling. Building SOAP envelopes in Postman is a pain. As is reading SOAP response envelopes.
network infrastructure around caching really can be as dumb as you're asking. It's possible to get around this, but you really do have to think about it, and it will inevitably involve increased network traffic in some situations where it's unnecessary. And caching responses to repeated queries is one way in which APIs scale out, so you'll likely need to 'solve' the problem of caching repeated queries yourself (i.e. reinvent the wheel).
Having said all that, if you want a pure message-passing design for your distributed system rather than a ReSTful one, why consider HTTP at all? Why not simply use some message-oriented middleware (e.g. RabbitMQ) to build your application, possibly with some sort of HTTP bridge somewhere for Internet-based clients? Using HTTP as a pure transport mechanism with simple 'message accepted / not accepted' semantics seems like overkill.
REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. -- Roy T Fielding
Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
When in doubt, go back to the source
Fielding's dissertation really does quite a good job at explaining how the REST architectural constraints ensure that you don't destroy the properties those constraints are designed to protect.
Keep in mind - before the web (which is the reference application for REST), "web scale" wasn't a thing; the notion of a generic client (the browser) that could discover and consume thousands of customized applications (provided by web servers) had not previously been realized.
According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request.
Yup - that's a thing, it's called RPC; you are effectively taking the web, and stripping it down to a bare message transport application that just happens to tunnel through port 80.
In doing so, you have stripped away the uniform interface -- you've lost the ability to use commodity parts in your deployment, because nobody can participate in the conversation unless they share the same interpretation of the message data.
Note: that doesn't at all imply that RPC is "broken"; architecture is about tradeoffs. The RPC approach gives up some of the value derived from the properties guarded by REST, but that doesn't mean it doesn't pick up value somewhere else. Horses for courses.
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices?
Cheap scaling of reads - as your offering becomes more popular, you can service more clients by installing a farm of commodity reverse-proxies that will serve cached representations where available, and only put load on the server when no fresh representation is available.
Prefetching - if you are adhering to the safety provisions of the interface, agents (and intermediaries) know that they can download representations at their own discretion without concern that the operators will be liable for loss of capital. AKA - your resources can be crawled (and cached)
Similarly, use of idempotent methods (where appropriate) communicates to agents (and intermediaries) that retrying the send of an unacknowledged message causes no harm (for instance, in the event of a network outage).
Independent innovation of clients and servers, especially cross organizations. Mosaic is a museum piece, Netscape vanished long ago, but the web is still going strong.
Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Of course they are -- where do you think you are reading this answer?
So far, REST works really well at exposing capabilities to human agents; which is to say that the server side is so ubiquitous at this point that we hardly think about it any more. The notion that you -- the human operator -- can use the same application to order pizza, run diagnostics on your house, and remote start your car is as normal as air.
But you are absolutely right that replacing the human still seems a long ways off; there are various standards and media types for communicating semantic content of data -- the automated client can look at markup, identify a phone number element, and provide a customized array of menu options from it -- but building into agents the sorts of fuzzy intelligence needed to align offered capabilities with goals, or to recover from error conditions, seems to be a ways off.

WebSocket/REST: Client connections?

I understand the main principles behind both. However, I have a question I can't quite answer.
Benchmarks show that WebSockets can serve more messages, as this post shows: http://blog.arungupta.me/rest-vs-websocket-comparison-benchmarks/
This makes sense, since the connections do not have to be closed and reopened, the HTTP headers do not have to be resent, etc.
My question is: what if the connections are always from different clients (and perhaps some from the same client)? From what I understand, the benchmark assumes the same clients keep connecting, which would make sense for keeping a constant connection.
If a user only makes a request every minute or so, would it not be beneficial for the communication to run over REST instead of WebSockets, since the server frees up sockets and can handle a larger crowd, so to speak?
To fix the issue with REST you would scale vertically, and with WebSockets you would scale horizontally?
Does this make sense, or am I off base?
This is my experience so far; I am happy to discuss my conclusions about using WebSockets in large applications built with CQRS:
Real Time Apps
Are you creating a financial application, game, chat or whatever kind of application that needs low latency, frequent, bidirectional communication? Go with WebSockets:
Well supported.
Standard.
You can use either a publisher/subscriber model or a request/response model (by creating a correlationId for each request and subscribing once for the matching reply; see the sketch just below this list).
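To make the correlationId idea concrete, here is a minimal, transport-agnostic sketch. The names and the fake echo "server" are made up for illustration, and encoding the id into the actual WebSocket frames is left abstract:

    -- Sketch of request/response over a push channel: every request registers a
    -- fresh correlation id plus an MVar, and the receive loop fills the MVar
    -- whose id matches an incoming reply.
    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar
    import Data.IORef
    import qualified Data.Map.Strict as Map

    type CorrelationId = Int

    data Pending reply = Pending
      { nextId  :: IORef CorrelationId
      , waiting :: IORef (Map.Map CorrelationId (MVar reply))
      }

    newPending :: IO (Pending reply)
    newPending = Pending <$> newIORef 0 <*> newIORef Map.empty

    -- Sender side: pick an id, remember it, transmit, block until the reply lands.
    request :: Pending reply -> (CorrelationId -> IO ()) -> IO reply
    request p send = do
      cid <- atomicModifyIORef' (nextId p) (\n -> (n + 1, n))
      box <- newEmptyMVar
      atomicModifyIORef' (waiting p) (\m -> (Map.insert cid box m, ()))
      send cid          -- e.g. sendTextData conn (encodeRequest cid payload)
      takeMVar box

    -- Receive-loop side: called for every incoming message that carries an id.
    dispatch :: Pending reply -> CorrelationId -> reply -> IO ()
    dispatch p cid reply = do
      mbox <- atomicModifyIORef' (waiting p) (\m -> (Map.delete cid m, Map.lookup cid m))
      case mbox of
        Just box -> putMVar box reply
        Nothing  -> return ()   -- not a reply to us: hand it to the pub/sub side

    main :: IO ()
    main = do
      p <- newPending
      -- Fake "server": echo the id back as the reply after a short delay.
      let fakeSend cid = () <$ forkIO (threadDelay 10000 >> dispatch p cid ("reply " ++ show cid))
      answer <- request p fakeSend
      putStrLn answer

The receive loop calls dispatch for every incoming message; anything without a matching id can be handed to the pub/sub handling instead.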
Small size apps
Do you need push communication and/or pub/sub in your client and your application is not too big? Go with WebSockets. Probably there is no point in complicating things further.
Regular Apps with some degree of high load expected
If you do not need to send commands very fast, and you expect to do far more reads than writes, you should expose a REST API to perform CRUD (create, read, update, delete), especially C_UD.
Not all devices prefer WebSockets. For example, mobile devices may prefer to use REST, since maintaining a WebSocket connection may prevent the device from saving battery.
You expect an outcome, even if it is a timeout. Even though you can do request/response in WebSockets using a correlationId, the response is still not guaranteed. When you send a command to the system, you need to know whether the system has accepted it. Yes, you can implement your own logic and achieve the same effect, but what I mean is that an HTTP request already has the semantics you need to send a command.
Does your application send commands very often? You should strive for chunky communication rather than chatty, so you should probably batch those change requests.
You should then expose a WebSocket endpoint to subscribe to specific topics, and to perform low-latency query-response, like filling autocomplete boxes, checking for unique items (e.g. usernames) or any kind of search in your read model. Also use it to get notified when a change request (write) was actually processed and completed.
What I am doing in a pet project is to place the WebSocket endpoint in the read model; on connection, the server gives a connectionID to the client via the WebSocket. When the client performs an operation via REST, it includes an optional parameter that says "when done, notify me through this connectionID". The REST server responds saying whether the command was placed correctly on a service bus. A queue consumer processes the command, and when it is done (successfully or not), if the command carried a notification request, another message is placed in a "web notification queue" indicating the outcome of the command and the connectionID to be notified. The read model is subscribed to this queue, gets the messages and forwards them to the appropriate WebSocket connection.
However, if your REST API is going to be consumed by non-browser clients, you may want to offer a way to check on the completion of a command using the async REST approach: https://www.adayinthelifeof.nl/2011/06/02/asynchronous-operations-in-rest/
I know it is quite appealing to have a low-latency upstream channel available for sending commands, but if you use it that way, your overall architecture gets messed up. For example, if you are using a CQRS architecture, where does your WebSocket endpoint go? In the read model or in the write model?
If you place it on the read model, then you have easy access to your read DB to answer fast search queries, but you have to somehow couple in the logic to process commands, making the read model responsible for sending the commands to the write model and for notifying the client if it is unable to do so.
If you place it on the write model, then it is easy to place commands, but you need access to your read model and read DB if you want to answer search queries through the WebSocket.
By considering WebSockets part of your read model and leaving command processing to the REST interface, you keep your loose coupling between your read model and your write model.

Streaming data from web-server, trying to use vb.net and cgi

I need to stream data from a web server to clients. The data is location data that is collected and stored on the server. The clients will click a button on an HTML page to 'opt in' to start receiving the data. This data is never-ending, and at least one of the clients needs to receive it 24/7, with as few breaks as possible. The data being streamed is client-specific, as each client won't receive exactly the same data.
I've done several multi-threaded TCP servers over sockets, and WebSockets are the way I would like to attack this, but the requirement is that this has to work in IE9.
The initial requirement was that this be a VB.NET CGI executable - but during testing, I haven't been able to 'use' the stream from the VB.NET executable until the app finishes - it seems unable to flush stdout even though I was specifically calling Console.Out.Flush(). So if this isn't a viable option, and I can support that with facts, then I can get this requirement changed.
I've also read quite a bit about using a third-party server to stream the data, like Orbit and APE (I think those were a couple of them), but the requirements call for one server - the web server. No other hardware can be required.
I'm pretty sure the VB.NET CGI isn't the ideal solution based on what I've found, but is it doable, or do I need to abandon that solution and move on to a newer technology, such as ISAPI? Any ideas or suggestions, even if they just point me in the right direction, are greatly appreciated.
You might go a few ways.
If you were to go with C#/.NET, you might look into a Silverlight solution. It requires a plugin to be installed in the browser (like Flash). The good thing here is that you are able to send data through normal sockets, in pure real time from the server. At the same time, Silverlight uses .NET, so some code can be shared, which helps the development process. It will also behave the same way across different browsers.
You might also look at a similar solution using a Java applet with a Java backend (the backend can even be .NET, but again, it is easier to develop when both sides are in the same language).
Another option is a front-end using WebSockets, but as you know they are not supported in IE9 and below (IE10 promises to support them), and Opera does not support them either.
The backend can be done in whatever you prefer. But bear in mind that WebSockets use framing, and for a constant stream of small packets this is not efficient: if you send 10 bytes, a frame header of 2-12 bytes is added, plus TCP packet headers of around 40 bytes on average.
To support older browsers you might look at long-polling, but it is not as reliable as WebSockets.
It is also important to estimate the amount of data and the approximate number of users of your system. Based on those calculations you will have a rough idea of how feasible it is and what kind of server will be required to handle it.

What's the best IPC mechanism for medium-sized data in Perl? [closed]

I'm working on designing a multi-tiered app in Perl and I'm wondering about the pros and cons of the various IPC mechanisms available to me. I'm looking at handling moderately-sized data, typically a few dozen kilobytes but up to a couple of megabytes, and the load is pretty light, at most a couple of hundred requests per minute.
My primary concerns are maintainability and performance (in that order). I don't think I'll need to scale up to more than one server, or port off of our main platform (RHEL), but I suppose it's something to consider.
I can think of the following options:
Temporary files - Simplistic, probably the worst option in terms of speed and storage requirements
UNIX domain sockets - Not portable, not scalable
Internet Sockets - Portable, scalable
Pipes - Portable, not scalable (?)
Considering that scalability and portability are not my primary concerns, I need to learn more. What's the best choice, and why? Please comment if you need additional information.
EDIT: I'll try to give more detail in response to ysth's questions (warning, wall of text follows):
Are readers/writers in a one-to-one relationship, or something more complicated?
What do you want to happen to the writer if the reader is no longer there or busy?
And vice versa?
What other information do you have about your desired usage?
At this point, I'm contemplating a three-tiered approach, but I'm not sure how many processes I'll have in each tier. I think I need to have more processes towards the left side and fewer toward the right, but maybe I should have the same number across the board:
.---------.        .----------.        .-------.
| Request | -----> | Business | -----> | Data  |
| Manager | <----- | Logic    | <----- | Layer |
`---------'        `----------'        `-------'
These names are still generic and probably won't make it into the implementation in these forms.
The request manager is responsible for listening for requests from different interfaces, for example web requests and CLI (where response time is important) and e-mail (where response time is less important). It performs logging and manages the responses to the requests (which are rendered in a format appropriate to the type of request).
It sends data about the request to the business logic which performs logging, authorization depending on business rules, etc.
The business logic (if it needs to) then requests data from the data layer, which can either talk to (most often) the internal MySQL database or some other data source outside our team's control (e.g., our organization's primary LDAP servers, or our DB2 employee information database, etc.). This is mostly just a wrapper which formats the data in a uniform way so that it can be handled more easily in the business logic.
The information then flows back to the request manager for presentation.
If, when data is flowing to the right, the reader is busy, for the interactive requests I'd like to simply wait a suitable period of time, and return a timeout error if I don't get access in that amount of time (e.g. "Try again later"). For the non-interactive requests (e.g. e-mail), the polling system can simply exit and try again on the next invocation (which will probably be once per 1-3 minutes).
When data is flowing in the other direction, there shouldn't be any waiting situations. If one of the processes has died while data is traveling back to the left, all I can really do is log the failure and exit.
Anyway, that was pretty verbose, and since I'm still in early design I probably still have some confused ideas in there. Some of what I've mentioned is probably tangential to the issue of which IPC system to use. I'm open to other suggestions on the design, but I was trying to keep the question limited in scope (for example, maybe I should consider collapsing down to two tiers, which is much simpler for IPC). What are your thoughts?
If you're unsure about your exact requirements at the moment, try to think of a simple interface that you can code to, that any IPC implementation (be it temporary files, TCP/IP or whatever) needs to support. You can then choose a particular IPC flavour (I would start with whatever's easiest and/or easiest to debug -- probably temporary files) and implement the interface using that. If that turns out to be too slow, implement the interface using e.g. TCP/IP. Actually implementing the interface does not involve much work as you will essentially just be forwarding calls to some existing library.
The point is that you have a high-level task to perform ("transmit data from program A to program B") which is more or less independent of the details of how it is performed. By establishing an interface and coding to it, you isolate the main program from changes in the event that you need to change the implementation.
Note that you don't need to use any heavyweight Perl language mechanisms to capitalise on the idea of having an interface. You could simply have e.g. 3 different packages (for temp files, TCP/IP, Unix domain sockets), each of which exports the same set of methods. Choosing which implementation you want to use in your main program amounts to choosing which module to use.
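The shape of such an interface is small. Here is a sketch of the idea, written in Haskell only because it is compact; the Perl version would be exactly the three packages exporting the same subs described above. The file paths, port usage and buffer size are placeholders:

    -- Sketch: one tiny interface the rest of the program codes against, with
    -- two interchangeable IPC implementations behind it.
    import qualified Data.ByteString as BS
    import Network.Socket (Socket)
    import Network.Socket.ByteString (recv, sendAll)

    data Transport = Transport
      { sendMsg :: BS.ByteString -> IO ()
      , recvMsg :: IO BS.ByteString
      }

    -- Implementation 1: temporary files (simplest to set up and debug).
    fileTransport :: FilePath -> FilePath -> Transport
    fileTransport outPath inPath = Transport
      { sendMsg = BS.writeFile outPath
      , recvMsg = BS.readFile inPath
      }

    -- Implementation 2: an already-connected TCP (or Unix domain) socket.
    socketTransport :: Socket -> Transport
    socketTransport sock = Transport
      { sendMsg = sendAll sock
      , recvMsg = recv sock 65536
      }

    main :: IO ()
    main = do
      -- Choosing the implementation happens in exactly one place.
      let t = fileTransport "msg.bin" "msg.bin"   -- loopback demo via one file
      sendMsg t (BS.pack [1, 2, 3])
      reply <- recvMsg t
      print (BS.length reply)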
Temporary files (and related things, like a shared memory region) are probably a bad bet. If you ever want to run your server on one machine and your clients on another, you will need to rewrite your application. If you pick any of the other options, at least the semantics are essentially the same if you need to switch between them at a later date.
My only real advice, though, is to not write this yourself. On the server side, you should use POE (or Coro, etc.), rather than doing select on the socket yourself. Also, if your interface is going to be RPC-ish, use something like JSON-RPC-Common from CPAN.
Finally, there is IPC::PubSub, which might work for you.
Temporary files have other problems besides that. I think Internet sockets are really the best choice. They are well documented, and as you say, scalable and portable. Even if that is not a core requirement, you get it nearly for free. Sockets are pretty easy to deal with, and again there are copious amounts of documentation. You can build your data-sharing mechanism and protocol out in a library and never have to look at it again!
UNIX domain sockets are portable across Unices. They're no less portable than pipes, and they're more efficient than IP sockets.
Anyway, you missed a few options, shared memory for example. Some would add databases to that list but I'd say that's a rather heavyweight solution.
Message queues would also be a possibility, though you'd have to change a kernel option for it to handle such large messages. Otherwise, they have an ideal interface for a lot of things, and IMHO they are greatly underused.
I generally agree, though, that using an existing solution is better than building something of your own. I don't know the specifics of your problem, but I'd suggest you check out the IPC section of CPAN.
There are so many different options because most of them are better for some particular case, but you haven't really given any information that would identify your case.
Are readers/writers in a one-to-one relationship, or something more complicated?
What do you want to happen to the writer if the reader is no longer there or busy? And vice versa?
What other information do you have about your desired usage?
For "interactive" requests (holding the connection open while waiting for a response (asynchronously or not): HTTP + JSON. JSON::XS is insanely fast. Everyone and everything can speak HTTP and it's easy to load balance, debug, ...
For queued requests ("please do this, thanks!"): Beanstalkd and Beanstalk::Client. Serialize the requests in the beanstalk queue with JSON.
Thrift might also be worth looking into depending on your application.

Deciphering MMORPG Protocol Encoding

I plan on writing an automated bot for a game.
The tricky part is figuring out how they encoded their protocol... Making the bot run around is easy: simply make the character run and record what it does in Wireshark. However, interpreting the environment is more difficult... It receives about 5 packets each second if you are idle, hence lots of garbage.
My plan: Because the game runs under TCP, I will use freecap (http://www.freecap.ru/eng) to force the game to connect to a proxy running on my machine. I will need this proxy to be capable of packet injection, or perhaps a server that is capable of resending captured packets. This way I can recreate and tinker around with what the server sends, and understand their protocol encoding.
Does anyone know where I can get a proxy that allows packet injection or where I can perform packet injection (not via hardware, as is the case with wireless or anything!)
Or where I can find a server/proxy that resends captured packets (i.e. replays a connection).
Any better tools or methodologies for pattern matching? Something which can highlight patterns across multiple messages would be GREAT.
OR, is there a better way to decipher this? Possibly a disassembly strategy (via hooking a Winsock function and starting the disassembly from there)? I have not done this before so I am not sure. Or any other ideas?
Network traffic interception and protocol analysis is generally a less favored method to accomplish your goal here. For most modern games, encryption is a serious factor, and there are serious headaches associated with protocol analysis for anything but the most trivial aspects of the most common gameplay scenarios.
Most modern implementations* of what you are trying to do rely on reading and manipulating the memory space and process of a running client. The client will have already done all the hard parts for you, including decrypting the traffic and sorting it into far more easy to read data structures. For interacting with the server you can call functions built into the client instead of crafting entire series of packets from scratch. The plus to this approach is that you have to do far less work to interpret the data and produce activity. The minus is that there is often some data in the network traffic that would be useful to a bot but is discarded by the client, or that you may want to send traffic to the server that the client cannot produce (which, in my own well-developed hierarchy for such, is a few steps farther down the "cheating" slope).
*...I say this having seen the evolution of the majority of MMORPG botting/hacking communities from network protocol analyzers like ShowEQ and Odin's Eye / Excalibur to memory-based applications like MacroQuest and InnerSpace. On that note, InnerSpace provides an excellent extensible framework for the memory/process-based variant of what you are attempting, and you should look into it as a basis for your project if you abandon the network analysis approach.
As I've done a few game bots in the past (for fun, not profit or griefing of course - writing game bots is a lot of fun), I recommend the following:
If you can code and there isn't cheat protection preventing you from doing it, I highly recommend writing an injected DLL for the following reasons:
Your DLL will be able to access the game's memory space directly, and once you reverse-engineer the data structures (either by poking around memory or by code disassembly), you'll have access to lots of data. This will also allow you to bypass any network encryption the game may have. The downside of accessing process memory directly is that offsets and data structures change between versions - however, data structures don't change very often with a stable game, and you can compensate for offset changes by searching for code patterns instead of using fixed offsets.
Either way, you'll still be able to hook WinSock functions using API hooks (check out Microsoft Detours and the excellent but now-commercial madCodeHook).
Otherwise, I can only advise that you give live/interactive packet editors like WPE Pro a try.
In most scenarios, the coolest methods (code reverse-engineering and direct memory access) tend to be the least productive. They require a lot of skill (to understand the code) and time, both initially (to go through all the code and develop code to interact with the data structures) and for maintenance (in case the game is updated). (Of course, they sometimes do allow doing cool stuff which is impossible with the official client, but most of the time this is obvious, blatant cheating, and likely to attract the GMs quickly.) Most of the time bots are made by replacing game graphics/textures with solid colours, and creating simple "pixel" bots which search for certain colours on the screen and react accordingly (e.g. click them).
Hope this helps, and remember - cheating is only fun when it doesn't make the game less fun for everyone else ;)
There are probably a few reasonable assumptions you can make that should simplify your task enormously. However, to make the best use of them you will probably need greater comfort with sleeves-rolled-up programming than it sounds like you have.
First, it's a safe bet that the encryption they are using falls into one of three categories:
None
Cheesy
Far better than you are likely to crack
With the odds of the middle case being very low.
Next, it's a safe bet that the packets are encrypted / decrypted close to the edge of the program (right as they come in, right before they go out) and that the body of the game deals with them in decrypted form.
Finally, the protocol they are using most likely consists of either
ascii with data blocks
binary goo
So do a little packet sniffing with a card set in promiscuous mode and look for unencrypted ASCII. If you see some, great, you're ahead of the game. But if you don't, give up the whole tapping-the-line idea and instead start following the code as it returns from sending data out, by breakpointing and stepping with a debugger. Figure the outermost layer or three will be standard network stuff, then will come the encryption layer, and beyond that the huge mass of stuff that deals with the protocol unencrypted.
You should be able to get this far in an hour if you're hot, a weekend if you're reasonably skilled, motivated, and diligent, and never if you are hopeless. But it is possible in principle (and doubtlessly far easier in practice) to do it this way.
Once you get to where something that looks like unencrypted goo comes in, gets mungled, and the mungled form goes out, then start worrying about what it means.
-- MarkusQ
A) I play an MMO and do not support bots, voting down...
B) Download BackTrack v3 and run an arpspoof on your default gateway and your host. There is an application that will spoof the remote host's SSL cert, sslmitm (I believe that is the name), which will then allow you to create a full connection through your host. Then fire up tcpdump/ethereal/wireshark (choose your pcap poison) and move around and do random stuff to find out which packet is doing what. That will be your biggest challenge; but proxying with a man-in-the-middle attack on yourself is the way to go.
C) I do not condone this activity, this information is only being provided as free information.
Sounds like there is no encryption going on, so you could take a network approach.
A great place to start would be to find the packet IDs - most of the time, something near the front of the packet is going to be an ID for the type of the packet. For example, move could be 1, shot fired could be 2, chat could be 4.
You can write your own proxy that listens on one port for your game to connect to, and then connects on to the real server. You can make keypresses sent to your proxy fire off commands, or you can have your proxy write out debugging info to help you go further.
(I've written a bot for an online game in PHP - of all things.)
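For the do-it-yourself proxy route mentioned above, here is a rough sketch of a logging TCP forwarder (Haskell with the network package; the listening port and the real server's address are placeholders). The game is pointed at localhost, everything is relayed to the real server, and both directions are dumped to stdout for study:

    -- Sketch of a man-in-the-middle logging proxy: accept the game client on a
    -- local port, connect onward to the real server, and relay bytes in both
    -- directions while printing them.
    import Control.Concurrent (forkIO)
    import Control.Monad (forever, unless)
    import qualified Data.ByteString as BS
    import Network.Socket
    import Network.Socket.ByteString (recv, sendAll)

    main :: IO ()
    main = do
      listener <- socket AF_INET Stream defaultProtocol
      setSocketOption listener ReuseAddr 1
      bind listener (SockAddrInet 4000 (tupleToHostAddress (127, 0, 0, 1)))
      listen listener 5
      forever $ do
        (client, _) <- accept listener
        -- Connect to the real game server (placeholder address and port).
        addr : _ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
                                (Just "game.example.com") (Just "5555")
        server <- socket (addrFamily addr) Stream defaultProtocol
        connect server (addrAddress addr)
        _ <- forkIO (relay "client->server" client server)
        _ <- forkIO (relay "server->client" server client)
        return ()

    -- Copy bytes from one socket to the other, logging every read.
    relay :: String -> Socket -> Socket -> IO ()
    relay label from to = do
      bytes <- recv from 4096
      unless (BS.null bytes) $ do
        putStrLn (label ++ " " ++ show (BS.unpack bytes))
        sendAll to bytes
        relay label from to

Injecting or replaying a packet then amounts to an extra sendAll on the appropriate socket, triggered however you like (a keypress handler, a replay file, etc.).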