I'm looking for ways to gather files from clients. These clients have our software, and we are currently using FTP to collect files from them. The files are pulled from the client's database, encrypted, and uploaded via FTP to our FTP server. The process is fraught with frustration and obstacles: the software is frequently blocked by common firewalls and often runs into difficulties with VPNs and NAT (switching from active to passive mode usually helps).
My question is: what other ideas do people have for getting files from clients programmatically and reliably? Most of the files they are submitting are < 1 MB in size. However, one of them ranges up to 25 MB.
I'd considered HTTP POST, but I'm concerned that a 25 MB file would often fail over a POST (the web server timing out before the file finishes uploading).
Thoughts?
AndrewG
EDIT: We can use any common web technology. We're using a shared host, which may make central configuration changes difficult. I'm familiar with PHP from a common usage perspective... but not from a setup perspective (written lots of code, but not gotten into anything too heavy duty). Ruby on Rails is also possible... but I would be starting from scratch. Ideally, I'm looking for a "web" way of doing it, as I'd like to eventually be ready to transition away from installed code.
Research scp and rsync.
One option is to run something in the browser that breaks the upload into chunks, which should make it more reliable. A control that does this can also give the user feedback as the upload progresses, which you don't get with a simple HTTP POST.
A quick Google found this free Java applet which does just that. There will be lots of other free and pay-for options that do the same thing.
You probably mean an HTTP PUT. That should work like a charm, provided you have a decent web server. But as far as I know it is not restartable.
FTP is the right choice (passive mode to get through the firewalls). Use an FTP server that supports restartable transfers if you often face VPN connection breakdowns (hotel networks are soooo crappy :-) ).
The FTP command that must be supported is REST.
From http://www.nsftools.com/tips/RawFTP.htm:
Syntax: REST position
Sets the point at which a file transfer should start; useful for resuming interrupted transfers. For nonstructured files, this is simply a decimal number. This command must immediately precede a data transfer command (RETR or STOR only); i.e. it must come after any PORT or PASV command.
I have an inbox that receives lots of emails from my various systems, such as Nagios and Azure alerts about disk usage, exceptions, and job failures.
Since I get so many of these alerts, I was wondering if there are any tools I could use to filter them and only receive the most important ones. A lot of these are noise that only affects my development environments, and I want to be alerted only when something goes wrong with my production environment.
Does anyone know of any such tools, or a better way of dealing with this sort of problem? There must be a better solution than manually reading through all my emails and checking the contents.
I've heard of a tool called LogRhythm, but I'm under the impression that it is purely a data security tool and am unsure whether it would be able to parse an inbox.
Thanks all in advance.
A good solution is IMAPfilter. It's a utility for Linux systems (but I run it on Windows under the Windows Subsystem for Linux) that uses the IMAP protocol to manage one or more mailboxes.
You define all your filters and actions in the Lua language (it's pretty simple, and there are many examples in the GitHub repository) and keep the program running all the time (if it's not running, all mail arrives unfiltered and gets reorganized once the program is started again).
It's not pretty software (no GUI, an unusual configuration language, and it needs an always-on machine for real-time filtering), but since you program the filters yourself you can do pretty much anything, and it's also very lightweight.
I need to stream data from a web server to clients. The data is location data that is collected and stored on the server. The clients will click a button on an HTML page to 'opt in' to start receiving the data. This data is never-ending, and at least one of the clients needs to receive it 24/7, with as few breaks as possible. The data being streamed is client-specific, as each client won't receive exactly the same data.
I've done several multi-threaded TCP servers over sockets, and WebSockets are the way I would like to attack this, but the requirement is that this has to work in IE9.
The initial requirement was that this be a VB.NET CGI executable, but during testing I haven't been able to 'use' the stream from the VB.NET executable until the app finishes; it's as if it can't flush stdout even though I was specifically calling Console.Out.Flush(). So if this isn't a viable option, and I can support this with facts, then I can get this requirement changed.
I've also read quite a bit about using a third-party server to stream the data (Orbit and APE were a couple of them, I think), but the requirement is for one server: the web server. No other hardware can be required.
I'm pretty sure the VB.NET CGI isn't the ideal solution based on what I've found, but is it doable, or do I need to abandon that solution and move on to a newer technology such as ISAPI? Any ideas or suggestions, even if they just point me in the right direction, are greatly appreciated.
You might go a few ways.
If you go with C# .NET, you might look into a Silverlight solution. It requires a browser plugin to be installed (like Flash), but the good thing is that you can send data through normal sockets, in pure real time from the server. At the same time, Silverlight uses .NET, so some code can be shared, which helps the development process. The behavior will also be the same across different browsers.
You might have a look at a similar solution using a Java applet with a Java backend (it can even be .NET, but again, it's easier to develop when both are in the same language).
Another option is a front end using WebSockets, but as you know they're not supported in IE9 and below (IE10 promises support), and Opera doesn't support them either.
The backend can be done in whatever you prefer, but bear in mind that WebSockets use framing, and for a constant stream of small packets this is not efficient: send 10 bytes of payload and the frame header adds roughly 2-14 bytes on top, plus TCP/IP headers averaging around 40 bytes.
To support older browsers you might have a look at long-polling, but it is not as reliable as WebSockets.
It is also important to estimate the amount of data and the approximate number of users your system will have. Based on those calculations you will know roughly how feasible it is, and what kind of server will be required to handle the load.
I'm working on a personal project: recreating server software for the game "Chu Chu Rocket" for the Sega Dreamcast. Its servers went down in 2004, I believe. My approach is to use dnsmasq to redirect the original hostname the game connected to over to my own system. With a DC-PC setup, I have done just that: instead of looking up a non-existent DNS record, the game now connects to my computer, which will eventually run the server software. I've used tshark (CLI Wireshark) to capture what's going on between the client (Dreamcast) and the server (my computer). The problem is, I'm getting data but I'm not sure how to interpret it. I don't know what it's saying, but I'm sure it can be done, because private PSO servers were created, and those are far more complex.
Very simply: where would I go to learn how to interpret data packets, and possibly create packets that respond to such queries from the client?
Thanks,
Dragos240
If you can get the source code for the server software on your PC, then that is the best place to look.
Otherwise, all you can do is look at the protocol, compare runs, and make notes of similarities and differences. With any luck, the protocol won't be encrypted.
I am trying to build an application that can show a user website-specific information whenever they visit a website that is present in my database. This must be done in a browser-independent way, so the user always sees the information no matter what browser or other tool he or she uses to visit the website.
My first (partially successful) approach was to look at the data packets using the System.Net.Sockets.Socket class etc. Unfortunately I discovered that this approach only works when the user has administrator rights. And of course, that is not what I want. My goal is that the user can install one relatively simple program that can be used right away.
After this I went looking for alternatives and found a lot about WinPcap and some of its .NET wrappers (did I tell you I am programming in C# .NET already?). But I found out that WinPcap must be installed on the user's PC; there is no way to just reference some DLL files and code away. I already looked at including WinPcap as a prerequisite in my installer, but that is also too cumbersome.
Well, long story short: I want my application to know what website my user is visiting at the moment it happens. I think it must be done by looking at the network's data packets, but I can't find a good solution for this. My application is built in C# .NET (4.0).
You could use Fiddler to monitor Internet traffic.
It is
a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
It's scriptable and can be readily used from .NET.
One simple idea: instead of monitoring the traffic directly, what about installing a browser extension that sends you the current URL of the page? Then you can check whether that URL is in your database and optionally show the user a message via the browser extension.
This is how extensions like Invisible Hand work... It scans the current page and sends relevant data back to the server for processing. If it finds anything, it uses the browser extension framework to communicate those results back to the user. (Using an alert, or a bar across the top of the window, etc.)
For a good start, Wireshark will do what you want.
You can specify a filter to isolate and view HTTP streams.
The best part is that Wireshark is open source, and built upon another open-source API: WinPcap.
I'm guessing this is what you want:
1. Capture network data off the wire.
2. View the TCP traffic of a computer; isolate and save (in part or in whole) the HTTP data.
3. Store information about the HTTP connections.
Number 1 is easy: you can Google for a WinPcap tutorial, or just use some of their sample programs to capture the data.
I recommend you study up on the pcap file format; everything with WinPcap uses this basic format and its structures.
Next you have to learn how to take a TCP stream and turn it into a solid data stream without corruption or disorganized parts.
Again, a very good example can be found in the Wireshark source code.
Then, with your data stream, you can simply read the HTTP format and HTML data, or whatever you're dealing with, as sketched below.
Hope that helps
If the user is cooperating, you could have them set their browser(s) to use a proxy service you provide. This would intercept all web traffic, do whatever you want with it (look it up in your database, notify the user, etc.), and then pass it on to the original location. Run the proxy on the local system, or on a remote system if that fits your case better.
If the user is not cooperating, or you don't want to make them change their browser settings, you could use one of the packet-sniffing solutions, such as Fiddler.
A simple, straightforward way is to change the computer's DNS settings to point at your application.
This will cause all DNS traffic to pass through your app, where it can be sniffed and then redirected to the real DNS server.
It will also save you the hassle of filtering out eMule/torrent traffic, as those normally work with raw IP addresses (which also might be a problem, since the check can be circumvented by browsing via IP address).
- How to change Windows DNS servers
- DNS resolver
Another simple way is to configure (programmatically) the browser's proxy to pass through your server. This will make your life easier, but will be more obvious to users.
How to create a simple proxy in C#?
I frequently do website development live over an FTP connection. That is to say, I use a code editor with a built in FTP window and push/pull files to work on them, upload the changes, etc. This is mostly because it's unreasonable to try to create a local development server, and I use too many computers for that to be practical anyway without a lot of work.
My trouble is, the internet connection at our home is not exactly... stable. It's fast and mostly reliable, but it has a tendency to glitch far more frequently than any other connection I've worked on (it's wireless DSL), and as a result dropped connections are far too frequent. (It's about as reliable as AT&T is with phone calls in that regard.) When working with FTP, I find that if the connection drops mid-transfer, it can be difficult to recover. First of all, when the connection is dropped, a blank file gets saved to the server (how is this helpful?), breaking the page I was working on completely; the icing on the cake is that, depending on the timing, vsftpd will get itself stuck in a timeout and I have to SSH in and restart it before I can access that file again.
This process has been beneficial only in that it's taught me to build up some client-side data-protection techniques, to prevent the server from eating my recent changes if the dropped connection happens to hang or crash my client. Overall, though, it's a pretty failed situation, and I'm surprised I get any work done at all.
Long, long context, I know, but my question is this: is there a file transfer protocol that is designed to handle "flaky" connections like mine? I'd imagine that, for example, trying to transfer files over a 3G tethered connection would yield the same results, especially while traveling. It seems like FTP and SFTP both rely on a persistent connection, and can deal with dropped packets but not with the loss of the entire socket across a reconnect. It seems to me a file transfer daemon should be able to store the state of the user interacting with it, and thus detect failed transfers and be ready to "resume" if the user reconnects within a reasonable amount of time.
Thanks if anyone knows anything. I'm seriously considering trying to write such a protocol myself (I've had a lot of success coding the ajax on my page to handle faulty connections, for example) but I don't want to dive in if there's already a solution available.
You want rsync. If the connection drops, you just repeat the command and it picks up right where it left off. Built-in error checking and everything. It works over SSH, and a Windows client exists. Somebody's probably written a GUI front end.
BitTorrent works well with flaky connections. I hear that it is fast, too!