Where would I learn more about interpreting network packets?

I'm working on a personal project: recreating the server software for the game "Chu Chu Rocket" for the Sega Dreamcast. Its servers went down in 2004, I believe. My approach is to use dnsmasq to redirect the hostname that the game originally connected to over to my own system. With a DC-PC server set up, I have done just that: now, instead of looking up a non-existent DNS record, the game connects to my computer, which will eventually run the server software. I've used tshark (CLI Wireshark) to capture what's going on between the client (the Dreamcast) and the server (my computer). The problem is, I'm getting data, but I'm not sure how to interpret it; I don't know what it's saying. I'm sure it can be done, though, because private PSO servers were created, and those are far more complex.
Very simply: where would I learn how to interpret data packets, and possibly how to craft packets that respond to such queries from the client?
Thanks,
Dragos240

If you can get the source code for the server software on your PC, then that is the best place to look.
Otherwise, all you can do is look at the protocol, compare runs, and make notes of similarities and differences. With any luck, the protocol won't be encrypted.
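One concrete way to start comparing runs is to get the raw bytes into a form you can diff; `tshark -r capture.pcap -x` will print a hex/ASCII dump of each packet, or you can dump saved payloads yourself. Here is a minimal sketch in C (the 16-bytes-per-row format is just a convention, and the input is assumed to be a file of extracted payload bytes) that produces such a dump:

```c
#include <stdio.h>
#include <ctype.h>

/* Print a classic 16-bytes-per-row hex/ASCII dump of a file.
 * Dumping two captures this way and diffing the output makes
 * fixed headers and changing fields stand out. */
int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <payload-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char buf[16];
    size_t n, offset = 0;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        printf("%08zx  ", offset);
        for (size_t i = 0; i < 16; i++) {
            if (i < n) printf("%02x ", buf[i]);
            else       printf("   ");
        }
        printf(" |");
        for (size_t i = 0; i < n; i++)
            putchar(isprint(buf[i]) ? buf[i] : '.');
        printf("|\n");
        offset += n;
    }
    fclose(f);
    return 0;
}
```

Capture the same exchange twice and diff the two dumps: bytes that stay fixed are likely magic numbers and message headers, while bytes that change are likely session IDs, counters, or timestamps.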

Related

Server for iPhone; continuous connection

Ok, let's say I want to create a connection between my iPhone app and my server (I'd like to try to use GoDaddy servers for this) to serve real-time location data to users.
I've seen plenty of good material online about using sockets, streams, ASIHttpmessage, CFHTTPMessageRef, etc., but what I'm unclear about is how to set up a server that continuously serves real-time data to users (I believe you'd need a stream of data going to the user for this, not just a single HTTP request and response). How does one take a host like GoDaddy and run server code on it? I know you can set up a server like this using the terminal, but as far as I know I don't have command-line access or the ability to run this "server program" from my web host. Is there software I can download through my cPanel for this? Do I need a virtual private server and different hosting from GoDaddy, maybe?
Does anyone know how I can do this, or whether my understanding of this whole thing is wrong? Please keep in mind I need this to be real-time (or close to it). Please, educate me. I really just need a better understanding of how this works.
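For what it's worth, the "stream of data" idea usually means a long-lived TCP connection that the server writes to whenever new data is available, instead of a one-shot request/response. A minimal sketch in C (port 5000 and the fake lat/lon payload are placeholders; a real server would multiplex many clients with poll() or similar):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal push server: accept one client, then keep the
 * connection open and write an update every second instead
 * of closing after a single response. */
int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);          /* placeholder port */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 8);

    int cli = accept(srv, NULL, NULL);
    for (int i = 0; cli >= 0; i++) {
        char msg[64];
        int n = snprintf(msg, sizeof msg, "lat=%d lon=%d\n", i, -i);
        if (write(cli, msg, n) < 0)       /* client went away */
            break;
        sleep(1);
    }
    close(cli);
    close(srv);
    return 0;
}
```

This is the kind of long-running process that shared hosting generally won't let you run, which is why a VPS tends to come up in these discussions.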

simulate server load with BSD sockets

I'm using blocking TCP sockets in C. I want to simulate a high load on the server with many simultaneous connections, and then measure the time necessary to access the server via a browser during this high-load period (the server understands HTTP headers).
Also, each client request ends quickly (it sends an HTTP header and gets text back).
How do I do this without crashing my local machine? I tried using fork to make many clients; I also have a virtual machine available.
If anyone has an idea or some general directions about how to do this, it would mean a lot.
Edit: I need to run this with my own client, which uses a modified version of the OpenSSL library to connect to my SSL/TLS server, so I can't use external test tools.
I want to know how to build the client and the server. I don't know much about sockets other than blocking ones; I'm just skimming through Richard Stevens' UNIX Network Programming book, but I was wondering if anyone could point out the exact solution.
Thank you!
The easiest solution would be to download an existing stress-testing framework such as fwptt (http://fwptt.sourceforge.net/).
If you want to implement your own stress-testing framework, I'd suggest you lose the blocking nature of your code and go with a parallel design that will scale beautifully. The limiting factor is then pretty much your CPU.
Having two physical servers would be ideal, so that the stress test isn't affecting the CPU (and therefore the response times) of the server. Also, that VM of yours eats up precious CPU time.
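As a rough sketch of the fork-based approach mentioned in the question (the host, port, and number of clients are placeholders), each child opens one connection, sends a tiny HTTP request, and drains the reply:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define CLIENTS 100                     /* placeholder load level */

static void one_client(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char *req = "GET / HTTP/1.0\r\n\r\n";
        write(fd, req, strlen(req));
        char buf[4096];
        while (read(fd, buf, sizeof buf) > 0)
            ;                           /* drain the response */
    }
    close(fd);
}

int main(void)
{
    for (int i = 0; i < CLIENTS; i++) {
        if (fork() == 0) {              /* child: one connection */
            one_client("127.0.0.1", 8080);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)              /* parent: reap children */
        ;
    return 0;
}
```

Running this from the VM against the server on the physical machine keeps the load generator's CPU usage from skewing the response times you measure in the browser.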

Faulty-connection Proof File Transfer Protocol?

I frequently do website development live over an FTP connection. That is to say, I use a code editor with a built-in FTP window and push/pull files to work on them, upload the changes, etc. This is mostly because it's unreasonable to try to set up a local development server, and I use too many computers for that to be practical without a lot of work anyway.
My trouble is, the internet connection at our home is not exactly... stable. It's fast and mostly reliable, but it has a tendency to glitch far more frequently than any other connection I've worked on (it's wireless DSL), and as a result dropped connections are far too frequent. (It's about as reliable as AT&T is with phone calls, in that regard.) When working with FTP, I find that if it drops the connection mid-transfer, it can be difficult to recover. First of all, when the connection is dropped, a blank file gets saved to the server (how is this helpful?), breaking the page I was working on completely. The icing on the cake is that, depending on the timing, vsftpd will get itself stuck in a timeout and I have to SSH in and restart it before I can access that file again.
This process has been beneficial only in that it's taught me some client-side data protection techniques, to prevent the server from eating my recent changes if the dropped connection happens to hang or crash my client. Overall, though, it's a pretty failed situation, and I'm surprised I get any work done at all.
Long, long context, I know, but my question is this: is there a file transfer protocol that is designed to handle "flaky" connections like mine? I'd imagine that trying to transfer files over a 3G tethered connection, for example, would yield the same results, especially while traveling. It seems like FTP and SFTP both rely on a persistent connection, and can deal with dropped packets but not with the loss of the entire socket through a reconnect. It seems to me that a file transfer daemon should be able to store the state of the user interacting with it, and thus detect failed transfers and be ready to "resume" if the user reconnects within a reasonable amount of time.
Thanks if anyone knows anything. I'm seriously considering trying to write such a protocol myself (I've had a lot of success coding the ajax on my page to handle faulty connections, for example) but I don't want to dive in if there's already a solution available.
You want rsync. If the connection drops, you just repeat the command and it picks up right where it left off. Built-in error checking and everything. It works over SSH, and a Windows client exists. Somebody's probably written a GUI front end.
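For example (the paths and host here are made up), a resumable upload over SSH could look like this; --partial keeps partially transferred files so a rerun continues instead of starting over:

```
rsync --partial --progress -e ssh local/site/ user@host:/var/www/site/
```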
BitTorrent works well with flaky connections. I hear that it is fast, too!

How do BitTorrent clients connect with each other?

I was just downloading a new Linux distro using uTorrent, and started to wonder how uTorrent (and other BitTorrent clients) send files to each other through NAT routers. They obviously use the trackers to get introduced, but how do they pass info to each other?
Is there a whitepaper on this? I couldn't find one :/
Thanks
Most of the time, they don't. I have a restricted network, and every time I run my torrent program it warns me that some of the required ports/functionality are not available to me.
If one party has a restricted network and another has an open network, the restricted client will always connect to the open client. If you have two restricted clients, they will not be able to connect to each other. The reason it works at all is that most (enough) of the people on the torrent network do have some kind of port forwarding or UPnP (Universal Plug and Play) to facilitate this.
Torrent clients also use what are known as Distributed Hash Tables (DHTs) to find peers. They start off with a set of known bootstrap nodes and branch out looking for other connected nodes (i.e., neighbours), establishing connections to them up to a set limit. Since the client is the one initiating the outgoing connection, the NAT creates a mapping for it, and all the remote peer has to do is feed data back over that same connection; it gets through the NAT just fine. That's how ordinary outbound network traffic works.
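To make the direction-of-connection point concrete, here is a minimal sketch in C (the peer address 203.0.113.7 and port 6881 are made up for illustration): because the NATed host initiates the TCP connection, the NAT sets up the mapping, and the reply arrives on the same socket without any port forwarding:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* The peer behind the NAT dials out; the NAT records the
 * outbound mapping, so replies on this same socket get back
 * in without any port forwarding. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(6881);                       /* common BT port */
    inet_pton(AF_INET, "203.0.113.7", &peer.sin_addr); /* made-up peer  */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        return 1;
    }
    write(fd, "hello", 5);                    /* outbound request */
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);    /* inbound reply OK */
    if (n > 0)
        printf("got %zd bytes back through the NAT\n", n);
    close(fd);
    return 0;
}
```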

gather file(s) from users

I'm looking for ways to gather files from clients. These clients have our software, and we are currently using FTP to collect files from them. The files are gathered from the client's database, encrypted, and uploaded via FTP to our FTP server. The process is fraught with frustration and obstacles: the software is frequently blocked by common firewalls, and it often runs into difficulties with VPNs and NAT (switching to passive mode instead of active usually helps).
My question is: what other ideas do people have for getting files from clients programmatically and reliably? Most of the files being submitted are < 1 MB in size; however, one of them ranges up to 25 MB.
I'd considered HTTP POST; however, I'm concerned that a 25 MB file would often fail over a POST (with the web server timing out before the file could be completely uploaded).
Thoughts?
AndrewG
EDIT: We can use any common web technology. We're on a shared host, which may make central configuration changes difficult. I'm familiar with PHP from a common-usage perspective... but not from a setup perspective (I've written lots of code, but not gotten into anything too heavy-duty). Ruby on Rails is also possible... but I would be starting from scratch. Ideally, I'm looking for a "web" way of doing it, as I'd like to eventually be ready to transition away from installed code.
Research scp and rsync.
One option is to have something running in the browser that breaks the upload into chunks, which would hopefully make it more reliable. A control that does this would also give the user some feedback as the upload progresses, which you wouldn't get with a simple HTTP POST.
A quick Google found a free Java applet which does just that. There will be lots of other free and paid options that do the same thing.
You probably mean an HTTP PUT. That should work like a charm, if you have a decent web server. But as far as I know, it is not restartable.
FTP is the right choice (passive mode to get through the firewalls). Use an FTP server that supports restartable transfers if you often face VPN connection breakdowns (hotel networks are soooo crappy :-) ).
The FTP command that must be supported is REST.
From http://www.nsftools.com/tips/RawFTP.htm:
Syntax: REST position
Sets the point at which a file transfer should start; useful for resuming interrupted transfers. For nonstructured files, this is simply a decimal number. This command must immediately precede a data transfer command (RETR or STOR only); i.e. it must come after any PORT or PASV command.
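Put together, a resumed transfer on the control connection looks roughly like this (the offsets, addresses, and file name are made up; C> is the client, S> is the server):

```
C> TYPE I
S> 200 Switching to Binary mode.
C> PASV
S> 227 Entering Passive Mode (192,168,0,10,195,80).
C> REST 1048576
S> 350 Restart position accepted (1048576).
C> RETR backup.dat
S> 150 Opening BINARY mode data connection for backup.dat.
```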