I have a remote server with some files. I want to use AsyncSocket to download a file, chunk by chunk. I would like to send HTTP requests with ranges through the socket and get the appropriate chunks of data. I understand how to do this on localhost, but not from a remote server. I really don't know how to use the connectToHost and acceptOnInterface (previously acceptOnAddress) methods.
Please help
Thanks
AsyncSocket is a general purpose data connection. If you want it to talk HTTP, you'll need to code the HTTP portion yourself. You probably don't actually want this; NSURLConnection should do what you want, provided the server supports it.
What you're asking for is the Range: header in HTTP. See 14.35.2 in RFC2616. You just need to add this header to your NSURLRequest. Again, this presumes that the server you're talking to supports this (you need to check the Accept-Ranges: header in the response).
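For example, a ranged request might look like this (a minimal sketch; the URL and byte range are placeholders, and a server that honors the range answers 206 Partial Content rather than 200):

    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"http://example.com/bigfile.bin"]];
    // Ask for the first 64 KB only (bytes 0-65535), per RFC 2616 14.35.
    [request setValue:@"bytes=0-65535" forHTTPHeaderField:@"Range"];
    [NSURLConnection connectionWithRequest:request delegate:self];

Your delegate then receives just that chunk; repeat with the next range to download the file piece by piece.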
There's a short article with example code about this at Surgeworks.
You should also look at ASIHTTPRequest, which includes resumable downloads and download progress delegates, and can likely be adapted to doing partial downloads. It may already have the solution to the specific issue you're trying to solve.
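To sketch the idea (the URL and paths here are placeholders; check the ASIHTTPRequest documentation for details), a resumable download looks roughly like this:

    NSURL *url = [NSURL URLWithString:@"http://example.com/bigfile.bin"];
    ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
    [request setDownloadDestinationPath:@"/path/to/bigfile.bin"];
    [request setTemporaryFileDownloadPath:@"/path/to/bigfile.bin.partial"];
    // Resuming sends the appropriate Range: header for you.
    [request setAllowResumeForFileDownloads:YES];
    [request setDelegate:self];
    [request startAsynchronous];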
I'm new to network programming and have recently been playing around with using sockets in C++.
I have a pretty decent handle on it at this point, and I understand HTTP/TCP/IP packets pretty well.
However, upon doing some research online, it seems like the bulk of network programmers suggest using external libraries such as libcurl (or curl++ for C++) for sending HTTP requests.
Considering that HTTP is a text-based protocol, why is this more beneficial/easier than simply sending HTTP requests as text messages using socket programming?
I found a few sites that show you can do this without too much difficulty: "HTTP Requests in C++ without external libraries?" and "Simple C example of doing an HTTP POST and consuming the response".
It seems like sending HTTP requests is simply a matter of getting the formatting correct and then sending it via a TCP socket. How is this made easier with external libraries?
Please bear with me as I'm new to network programming and eager to learn.
The links you've provided in your question are, in a way, a pretty good explanation of why you should not code HTTP yourself: the first one only points to the socket API and says nothing about HTTP, while the second provides examples and code that are far too simplified for real-world use and will not even work with the typical setup of multiple domains on the same host, since the requests are missing the Host field. In other words: these are resources which might look useful to the inexperienced developer, but they will actually lead you into trouble.
HTTP is not as simple as it might look. HTTP/0.9 was simple, but it is no longer supported by many clients. HTTP/1.0 is still fairly simple if you restrict yourself to the basics, but there are already enough pitfalls, like using the wrong delimiter between lines or between the request header and body, or omitting the Host field when accessing multi-domain hosts. A correct minimal request is sketched below.
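To make those pitfalls concrete, here is a minimal sketch in C of a correct HTTP/1.1 GET over a raw TCP socket (the host and path are placeholders, and error handling is reduced to the bare minimum):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

        /* Lines end in "\r\n", not "\n", and a blank line ends the header. */
        const char *req =
            "GET /index.html HTTP/1.1\r\n"
            "Host: example.com\r\n"     /* required for virtual hosting */
            "Connection: close\r\n"
            "\r\n";
        write(fd, req, strlen(req));

        /* Dump the raw response, headers and all; a real client would
         * parse the status line, headers, and possibly chunked encoding. */
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        freeaddrinfo(res);
        return 0;
    }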
And once you want to be efficient, you want multiple requests per TCP connection and compression, and it gets more complex. With HTTP/1.1 it gets more complex still due to chunked transfer encoding, and HTTP/2 is more efficient but far more complex again, with a binary framing layer and interleaved requests and responses.
And this was only HTTP. With HTTPS you have the additional, non-trivial TLS layer, which has its own pitfalls.
Thus, if you just want to use HTTP and HTTPS it is much better to use established libraries. Of course if you want to learn the innards of HTTP it might be useful to read all the relevant standards, look at packet traces and try to implement your own.
I discovered HTTP as a nice way to handle my files on my server. I write C programs based on the sockets interface.
When I issue an HTTP GET, I can easily download files, but only files with known extensions. A (backup) file with the extension XXX is "not found": the response status code is actually 200 ("OK"), but the response body is an HTML page containing the error message (404 = not found).
How can I make sure that the web server sends any file I ask for? I have experimented with the Accept keyword in the HTTP GET request, but that does not help (or I am making a mistake).
I do not own the server, so I cannot alter the server settings. On the client side, I do not use a browser, only the sockets interface (see above).
I think it is important to understand that HTTP does not really have a concept of "files" and "directories." Instead, the protocol operates on locations and resources. While they can represent files and directories, they are absolutely not guaranteed to be the same.
The server in question seems to be configured to serve 404 error pages when it encounters unknown extensions. This is a bit odd and certainly not required by the standard, though it can happen when a web application firewall is deployed. Again, HTTP does not trust file extensions in any way, but relies on metadata in the form of MIME media types instead. That is also what goes (more or less) into the Accept header of a request.
How can I make sure that the web server sends any file I ask for?
Well, you can't. While the client may express preferences, the server is the ultimate authority on what gets sent in which way.
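For illustration only (the file name is made up), the client side of such a request might look like this:

    GET /backup.XXX HTTP/1.1
    Host: example.com
    Accept: */*

Even with Accept: */*, the server may still answer 200 OK with an HTML error page, exactly as observed above; the Accept header only states a preference among the representations the server is willing to provide.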
I have '.pcap' files that were generated by tcpdump, and I have been looking for a way to read the data in them with PHP. I have tried several available methods, but the only thing I was able to see was that there were a number of packets with a timestamp against each one. When I tried to read further, it was all binary.
Just wanted to ask if anyone out there has experience with packet capture. It would be a great help.
I have tried these methods so far:
https://github.com/zobo/php-pcap
https://code.google.com/a/eclipselabs.org/p/php-pcap-analyzer/
and
http://systemsarchitect.net/parsing-binary-data-in-php-on-an-example-with-the-pcap-format/
Thanks in advance :)
I was able to see HTTP requests from my client machine to the internet by using PHP's unpack() function and fread() combined. The libraries mentioned above are also useful for retrieving other information, for example the IP addresses of the client and server machines along with timestamps.
But I wasn't able to read the responses. That is because the data returned from the internet servers to the client is encrypted (HTTPS), and PHP is not a good fit for recovering it.
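For reference, the layouts being decoded are fixed and language-agnostic. In C notation (field names follow the usual libpcap convention), a classic pcap file starts with one global header, followed by a record header plus raw packet bytes for each captured packet:

    #include <stdint.h>

    struct pcap_file_header {
        uint32_t magic;          /* 0xa1b2c3d4, or 0xd4c3b2a1 if byte-swapped */
        uint16_t version_major;  /* usually 2 */
        uint16_t version_minor;  /* usually 4 */
        int32_t  thiszone;       /* GMT-to-local correction, usually 0 */
        uint32_t sigfigs;        /* timestamp accuracy, usually 0 */
        uint32_t snaplen;        /* max bytes captured per packet */
        uint32_t network;        /* link-layer type, 1 = Ethernet */
    };

    struct pcap_record_header {
        uint32_t ts_sec;    /* timestamp, seconds */
        uint32_t ts_usec;   /* timestamp, microseconds */
        uint32_t incl_len;  /* bytes actually stored in this record */
        uint32_t orig_len;  /* original packet length on the wire */
    };

In PHP, that maps onto a 24-byte fread() for the global header and a 16-byte fread() per record, each decoded with unpack() using the little-endian 'V'/'v' format codes when the magic number reads 0xa1b2c3d4.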
I have to pull a set of images from FTP.
I have tried the same thing with a Tomcat server by just using the image's URL on the server, and it was fast and worked well. To study pulling files from an FTP server, I got Apple's SimpleFTPSample.
In the sample, there is code to pull an image from FTP, but it is too slow.
Why is it taking this much time for one image? If I have to get a whole set of images, I can't imagine the delay.
Thanks,
Easwar
As Daniel states here:
What makes FTP faster:
No added meta-data in the sent files, just the raw binary
Never chunked encoding "overhead"
What makes HTTP faster:
reusing existing persistent connections make better TCP performance
pipelining makes asking for multiple files from the same server faster
(automatic) compression makes less data get sent
no command/response flow minimizes extra round-trips
Ultimately the net outcome of course differ depending on specific details, but I would say that for single-shot static files, you won't be able to measure a difference. For a single shot small file, you might get it faster with FTP (unless the server is at a long round-trip distance). When getting multiple files, HTTP should be the faster one.
Use the following delegate method to track upload progress:
- (void)connection:(NSURLConnection *)connection didSendBodyData:(NSInteger)bytesWritten totalBytesWritten:(NSInteger)totalBytesWritten totalBytesExpectedToWrite:(NSInteger)totalBytesExpectedToWrite
totalBytesWritten / totalBytesExpectedToWrite gives me the upload percentage.
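A minimal sketch of that delegate method (note the cast: both values are NSIntegers, so plain integer division would round the percentage down to 0):

    - (void)connection:(NSURLConnection *)connection
            didSendBodyData:(NSInteger)bytesWritten
          totalBytesWritten:(NSInteger)totalBytesWritten
    totalBytesExpectedToWrite:(NSInteger)totalBytesExpectedToWrite
    {
        float percent = 100.0f * (float)totalBytesWritten
                               / (float)totalBytesExpectedToWrite;
        NSLog(@"Upload progress: %.1f%%", percent);
    }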
What makes FTP slower:
you have to build up a new connection for each file.
The handshaking (the command exchange) happens on the control connection, port 21, and the actual data transfer runs over a separate data connection (port 20 in active mode, or a server-chosen port in passive mode).
What makes HTTP slower:
The HTTP headers add some overhead to every request.
For one large file I would use FTP; for a bunch of small files, HTTP; for one or a few small files: whichever code I can copy-paste in 10 seconds :)
If you care about the server-side requirements: FTP needs an FTP server installed and access rights set up, while an HTTP server usually already exists.
Firewalls: HTTP is usually allowed, FTP often denied.
FTP is far more complicated than HTTP:
1. Several commands have to be executed just to request the file (see the sample session below).
2. A second TCP connection has to be created to transfer the file data.
So HTTP is the best choice if your application is latency-sensitive.
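For comparison, a simplified session for fetching a single file looks like this (server replies abridged; in passive mode the client must also open that second connection to the address returned by PASV), where each command costs a round-trip:

    USER anonymous     ->  331 Please specify the password.
    PASS guest         ->  230 Login successful.
    TYPE I             ->  200 Switching to Binary mode.
    PASV               ->  227 Entering Passive Mode (h1,h2,h3,h4,p1,p2).
    RETR image.jpg     ->  150 Opening BINARY mode data connection.
                           226 Transfer complete.

HTTP does the same fetch with a single request and response on one connection.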
I'm working on an iPhone application which will use long-polling to send event notifications from the server to the client over HTTP. After opening a connection on the server I'm sending small bits of JSON that represent events, as they occur. I am finding that -[NSURLConnectionDelegate connection:didReceiveData:] is not being called until after I close the connection, regardless of the cache settings I use when creating the NSURLRequest. I've verified that the server end is working as expected: the first JSON event is sent immediately, and subsequent events go over the wire as they occur. Is there a way to use NSURLConnection to receive these events as they occur, or will I need to drop down to the CFSocket API instead?
I'm starting to work on integrating CocoaAsyncSocket, but would prefer to continue using NSURLConnection if possible as it fits much better with the rest of my REST/JSON-based web service structure.
NSURLConnection will buffer the data while it is downloading and give it all back to you in one chunk with the didReceiveData: method. The NSURLConnection class can't tell the difference between network lag and an intentional split in the data.
You would either need to use a lower-level network API like CFSocket, as you mention (you would have access to each byte as it comes in from the network interface, and could distinguish the two parts of your payload), or you could take a look at a library like libcurl and see what output buffering/non-buffering options it offers.
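If the callbacks do start arriving incrementally (see the buffering caveats below), you still have to reassemble events yourself, since one callback may carry a partial event or several events at once. A minimal sketch, assuming (hypothetically) that the server terminates each JSON event with a newline and that the class owns an NSMutableData property named buffer:

    - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
    {
        [self.buffer appendData:data];
        NSData *newline = [NSData dataWithBytes:"\n" length:1];
        NSUInteger start = 0;
        NSRange found;
        while ((found = [self.buffer rangeOfData:newline
                                         options:0
                                           range:NSMakeRange(start, self.buffer.length - start)]).location != NSNotFound) {
            NSData *event = [self.buffer subdataWithRange:
                                 NSMakeRange(start, found.location - start)];
            [self handleEventJSON:event]; // hypothetical handler: parse one JSON event
            start = NSMaxRange(found);
        }
        // Keep only the trailing partial event, if any.
        [self.buffer replaceBytesInRange:NSMakeRange(0, start) withBytes:NULL length:0];
    }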
I ran into this today. I wrote my own class to handle this, which mimics the basic functionality of NSURLConnection.
http://github.com/nall/SZUtilities/blob/master/SZURLConnection.h
It sounds as if you need to flush the socket on the server-side, although it's really difficult to say for sure. If you can't easily change the server to do that, then it may help to sniff the network connection to see when stuff is actually getting sent from the server.
You can use a tool like Wireshark to sniff your network.
Another option for seeing what's getting sent/received to/from the phone is described in the following article:
http://blog.jerodsanto.net/2009/06/sniff-your-iphones-network-traffic/
Good luck!
We're currently doing some R&D to port our StreamLink comet libraries to the iPhone.
I have found that in the simulator you start to get didReceiveData: callbacks once 1KB of data has been received, so you can send a junk 1KB block to start the callbacks flowing. On the device, however, this doesn't seem to happen. In Safari (on the device) you need to send 2KB, but using NSURLConnection I too am getting no callbacks. It looks like I may have to take the same approach.
I might also play with multipart/x-mixed-replace and some other more novel headers and MIME types to see if they help stimulate NSURLConnection.
There is another HTTP API implementation named ASIHTTPRequest. It doesn't have the problem stated above, and it provides a complete toolkit for almost every HTTP feature, including file uploads, cookies, and authentication.
http://allseeing-i.com/ASIHTTPRequest/