iOS: Keep HTTP connection open - iPhone

I have been using CFHTTP and NSURLConnection. These classes create a new connection every time they perform an HTTP send and receive.
Basically I need a single connection to perform all my sends and receives:
Open connection -> send HTTP request -> receive HTTP response -> send HTTP request -> receive HTTP response -> close connection.
Is there any way to do this?

NSURLConnection will automagically keep open and re-use a connection for you, via HTTP/1.1 persistent connections. (See this accepted answer.) It should do this out of the box, unless you're doing something to modify its default behaviour.
I recommend using a network sniffer to verify that connection re-use is happening (or not), and to verify after what amount of time the connection might be dropped (and hence re-opened on next request). Wireshark is a superb network analyzer with good protocol support.
You could also use a third-party library; AFNetworking is nice, well designed, and gets good press. (I used to use ASIHTTPRequest, but it has recently been retired from active development, and its code structure is more monolithic.)
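If you are on a current SDK, the same default applies to URLSession (NSURLConnection's successor). Here is a minimal Swift sketch, with a hypothetical https://example.com/api endpoint, that issues two sequential requests which should be served over one persistent connection:

```swift
import Foundation

// Minimal sketch (hypothetical endpoint): two sequential requests through one
// URLSession are expected to reuse the same TCP connection via HTTP/1.1
// keep-alive. Verify with a sniffer such as Wireshark.
let config = URLSessionConfiguration.default
config.httpMaximumConnectionsPerHost = 1   // keep everything on a single connection
let session = URLSession(configuration: config)

let url = URL(string: "https://example.com/api")!   // placeholder URL

func send(_ request: URLRequest, completion: @escaping () -> Void) {
    session.dataTask(with: request) { data, response, _ in
        if let http = response as? HTTPURLResponse {
            print("status:", http.statusCode, "bytes:", data?.count ?? 0)
        }
        completion()
    }.resume()
}

// The second request starts after the first finishes; it should not open a new connection.
send(URLRequest(url: url)) {
    send(URLRequest(url: url)) {
        print("done")
    }
}
RunLoop.main.run()   // keep a command-line sketch alive; not needed inside an app
```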

Related

How does the Play framework track the blocked client and return the response?

The question is about the Play framework specifically, although the concept is generic. My guess is that the blocked client is listening on a socket which is tracked on the server side and passed around with the Future[Result], so that when the Future finishes, the response is written to the socket and the socket is then closed.
Can someone share a more concrete explanation, with references?
Quoting from:
https://www.playframework.com/documentation/2.6.18/ScalaAsync
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
Note that Play does not manage how to address the client; this is managed by TCP. Basically (as a simple analogy) you can think of a client, like a web browser, as making a telephone call to the server. When the client makes a request, one of its sockets gets connected to a particular socket on the server - this is a persistent connection between the sockets for the duration of the request/response. Play's underlying server (Netty for older versions, or Akka HTTP for v2.6+) will accept the incoming request from the socket and assign it a thread. Play will do the work, and the resulting Response will get mapped back to the correct socket by the server. The TCP server manages the mapping between response and socket, not Play.
As others have noted, the reference to blocking is essentially about the way Play Actions are intended to work (non-blocking). They take the request, wrap whatever work you have coded in a Future, and hand this off to be completed at some point in the near future (it might be a different thread that completes the Future, or it could even end up being the same thread). The point is that creating the Future is quick, so the thread that handled the request is returned quickly to the pool and can pick up another request to work on. If you have heard about Reactive Programming, this is essentially the idea behind keeping an application responsive.
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
So the client might be blocked on its end whilst waiting for the response to come back through its socket (unless it too is making async calls), but the idea is that the thread pool handling requests in Play will not be blocked, because the Action creates a Future and hands its completion back to Play, so the threads can go back to handling other requests.
There is a bit more to it but hopefully this gives a bit more context to that particular statement from Play's docs.

Send data from server to an OSX app

I am building an OSX app that needs to get data from a server. The easy way is to make a GET request at some fixed time interval and process the results. That's not what I want. I want it the other way around: the server sends data to my app when something happens on the server side, so that I don't need to make constant requests from the client side. The data doesn't need to be displayed visually, just processed.
Can this be implemented in OSX with Swift?
You have two ways to achieve this:
WebSocket:
WebSocket is a full-duplex communication channel over a single TCP connection. It's established via an HTTP upgrade handshake.
Long polling:
Same as the polling you described, but without the server responding immediately. Your client makes an HTTP request and sets a very long timeout. The server responds once something has happened. (More)
I would recommend WebSocket, since it was built exactly for this use case. But if you have to implement something quickly, you should probably go with long polling for now, since the barrier to implementing it is much lower, and switch to WebSocket later.
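For the WebSocket route, a minimal Swift sketch using URLSessionWebSocketTask (available on macOS 10.15+); the endpoint URL is a placeholder:

```swift
import Foundation

// Minimal WebSocket client sketch (macOS 10.15+); the URL is a placeholder.
let url = URL(string: "wss://example.com/events")!   // hypothetical server endpoint
let task = URLSession.shared.webSocketTask(with: url)

// Receive recursively so the app keeps listening for pushes from the server.
func listen() {
    task.receive { result in
        switch result {
        case .success(.string(let text)):
            print("server pushed:", text)             // process the event here
        case .success(.data(let data)):
            print("server pushed \(data.count) bytes")
        case .success:
            break                                     // any future message types
        case .failure(let error):
            print("socket error:", error)             // reconnect logic would go here
            return
        }
        listen()
    }
}

task.resume()
listen()
RunLoop.main.run()   // keep a command-line sketch alive; not needed inside an app
```

The server has to speak WebSocket too, of course; long polling only needs plain HTTP, which is part of why it is quicker to bolt on.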

Long polling with NSURLConnection

I'm working on an iPhone application which will use long polling to send event notifications from the server to the client over HTTP. After opening a connection on the server, I'm sending small bits of JSON that represent events, as they occur. I am finding that -[NSURLConnectionDelegate connection:didReceiveData:] is not being called until after I close the connection, regardless of the cache settings I use when creating the NSURLRequest. I've verified that the server end is working as expected - the first JSON event is sent immediately, and subsequent events are sent over the wire as they occur. Is there a way to use NSURLConnection to receive these events as they occur, or will I need to drop down to the CFSocket API instead?
I'm starting to work on integrating CocoaAsyncSocket, but would prefer to continue using NSURLConnection if possible as it fits much better with the rest of my REST/JSON-based web service structure.
NSURLConnection will buffer the data while it is downloading and give it all back to you in one chunk with the didReceiveData method. The NSURLConnection class can't tell the difference between network lag and an intentional split in the data.
You would either need to use a lower-level network API like CFSocket, as you mention (you would have access to each byte as it comes in from the network interface, and could distinguish the two parts of your payload), or you could take a look at a library like cURL and see what kinds of output buffering/non-buffering it offers.
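For what it's worth, the delegate-based URLSession API that replaced NSURLConnection does hand you data incrementally as chunks arrive; a minimal Swift sketch, with a placeholder streaming URL:

```swift
import Foundation

// Sketch: receive streamed chunks as they arrive using URLSession's delegate API.
// The URL is a placeholder for a long-lived streaming endpoint.
final class StreamReceiver: NSObject, URLSessionDataDelegate {
    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask,
                    didReceive data: Data) {
        // Called repeatedly as chunks arrive, without waiting for the connection to close.
        if let text = String(data: data, encoding: .utf8) {
            print("chunk:", text)
        }
    }

    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didCompleteWithError error: Error?) {
        print("stream ended:", error?.localizedDescription ?? "closed by server")
    }
}

let receiver = StreamReceiver()
let session = URLSession(configuration: .default, delegate: receiver, delegateQueue: nil)
session.dataTask(with: URL(string: "https://example.com/events")!).resume()
RunLoop.main.run()   // keep a command-line sketch alive
```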
I ran into this today. I wrote my own class to handle this, which mimics the basic functionality of NSURLConnection.
http://github.com/nall/SZUtilities/blob/master/SZURLConnection.h
It sounds as if you need to flush the socket on the server-side, although it's really difficult to say for sure. If you can't easily change the server to do that, then it may help to sniff the network connection to see when stuff is actually getting sent from the server.
You can use a tool like Wireshark to sniff your network.
Another option for seeing what's getting sent/received to/from the phone is described in the following article:
http://blog.jerodsanto.net/2009/06/sniff-your-iphones-network-traffic/
Good luck!
We're currently doing some R&D to port our StreamLink comet libraries to the iPhone.
I have found that in the emulator you will start to get didReceiveData callbacks once 1KB of data is received, so you can send a junk 1KB block to start getting callbacks. It seems that on the device, however, this doesn't happen. In Safari (on the device) you need to send 2KB, but using NSURLConnection I too am getting no callbacks. It looks like I may have to take the same approach.
I might also play with multipart-replace and some other more novel headers and MIME types to see if it helps stimulate NSURLConnection.
There is another HTTP API implementation named ASIHTTPRequest. It doesn't have the problem stated above and provides a complete toolkit for almost every HTTP feature, including file uploads, cookies, authentication, and more.
http://allseeing-i.com/ASIHTTPRequest/

Is HTTP 1.1 pipelining discouraged in native mobile apps?

For several years, I've been facing problems with HTTP/1.1 pipelining and have kept asking the server to send the HTTP header:
Connection: close
I want to revisit this decision. Do your native mobile apps use HTTP pipelining?
Some problems with HTTP pipelining I've faced:
Server not releasing TCP connections
My client is receiving multiple replies from one HTTP connection
That's exactly what persistent connections and pipelining are for: keeping the TCP connection open until the timeout expires (or the browser closes it), and sending multiple requests down the same pipe.
You might want to consider disabling persistent connections if your server serves a high number of clients (you might run out of workers, RAM, or even free ports, raising response times for new requests).
If you want to read further, here is a pointer about persistent-connection behaviour.
One of the requirements for clients/servers to be compatible with HTTP/1.1 is support for pipelining, so I don't see how using it would be a problem... I would rather think it would be encouraged. Using pipelining you cut down on creating new connections, network bandwidth, etc.
All modern web servers support pipelining and any reasonably complete client library should, so I'm not sure what the problem could be... perhaps if you ask about specific errors we could help you with them.
HTTP "pipelining" does not only mean to keep the TCP connection open between consecutive requests/responses. It describes a user agent behaviour where it sends the next HTTP request even without waiting for the pending response to the last request.
In my experience almost any HTTP server supports persistent connections. Using pipelining additionally is less stable. Firefox implements this feature but diables it by default.
You're confusing HTTP pipelining and HTTP persistent connections.
A persistent connection is where you keep the TCP connection around for future requests, but still send them serially: http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
Pipelining is a rarely used feature of HTTP 1.1 where you fire multiple requests on the same connection without waiting for the responses. Support for it is actually required by the HTTP specification, but it is rarely used by clients (Android's HTTP library doesn't, for example). Most servers seem to support it, though. It's described in section 8.1.2.2 of the same RFC.
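On iOS, persistent connections are the default, while pipelining is only an opt-in hint that the system may ignore. A minimal Swift sketch of the relevant knobs (the URL is a placeholder):

```swift
import Foundation

// Sketch: persistent connections are on by default; pipelining is only a hint.
let config = URLSessionConfiguration.default
config.httpShouldUsePipelining = true          // session-wide hint; may be ignored
config.httpMaximumConnectionsPerHost = 1       // favour reusing a single connection
let session = URLSession(configuration: config)

var request = URLRequest(url: URL(string: "https://example.com/resource")!)  // placeholder
request.httpShouldUsePipelining = true         // per-request hint

session.dataTask(with: request) { _, response, _ in
    print("status:", (response as? HTTPURLResponse)?.statusCode ?? -1)
}.resume()
RunLoop.main.run()   // keep a command-line sketch alive
```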

What is a RESTful way of monitoring a REST resource for changes?

If there is a REST resource that I want to monitor for changes or modifications from other clients, what is the best (and most RESTful) way of doing so?
One idea I've had for doing so is to provide specific resources that keep the connection open, rather than returning immediately, when the resource does not (yet) exist. For example, given the resource:
/game/17/playerToMove
a "GET" on this resource might tell me that it's my opponent's turn to move. Rather than continually polling this resource to find out when it's my turn to move, I might note the move number (say 5) and attempt to retrieve the next move:
/game/17/move/5
In a "normal" REST model, it seems a GET request for this URL would return a 404 (not found) error. However, if instead, the server kept the connection open until my opponent played his move, i.e.:
PUT /game/17/move/5
then the server could return the contents that my opponent PUT into that resource. This would both provide me with the data I need and serve as a sort of notification that my opponent has moved, without requiring polling.
Is this sort of scheme RESTful? Or does it violate some sort of REST principle?
Your proposed solution sounds like long polling, which could work really well.
You would request /game/17/move/5 and the server would not send any data until move 5 has been completed. If the connection drops, or you get a timeout, you simply reconnect until you get a valid response.
The benefit of this is that it's very quick - as soon as the server has new data, the client will get it. It's also resilient to dropped connections, and it works if the client is disconnected for a while (you could request /game/17/move/5 an hour after it's been played and get the data instantly, then move on to move/6, and so on).
The issue with long polling is that each "poll" ties up a server thread, which quickly breaks servers like Apache (it runs out of worker threads, so it can't accept other requests). You need a specialised web server to serve the long-polling requests. The Python framework Twisted (an "event-driven networking engine") is great for this, but it's more work than regular polling.
In answer to your comment about Jetty/Tomcat, I don't have any experience with Java, but it seems they both use a pool-of-worker-threads system similar to Apache's, so they will have the same problem. I did find this post, which seems to address exactly this problem (for Tomcat).
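On the client side, the long-poll loop is straightforward; a minimal Swift sketch against the hypothetical /game/17/move/5 resource, assuming the server either holds the request open or returns 404 until the move exists:

```swift
import Foundation

// Sketch of a long-polling client: keep asking for the next move until it exists.
// The host is a placeholder; the server is assumed to hold the request open
// (or return 404) until the move has been played.
let session: URLSession = {
    let config = URLSessionConfiguration.default
    config.timeoutIntervalForRequest = 60     // allow the server to hold the connection
    return URLSession(configuration: config)
}()

func waitForMove(_ move: Int, game: Int, completion: @escaping (Data) -> Void) {
    let url = URL(string: "https://example.com/game/\(game)/move/\(move)")!  // placeholder host
    session.dataTask(with: url) { data, response, _ in
        if (response as? HTTPURLResponse)?.statusCode == 200, let data = data {
            completion(data)                   // the opponent has moved
        } else {
            // Timed out, 404, or dropped connection: reconnect and keep waiting.
            waitForMove(move, game: game, completion: completion)
        }
    }.resume()
}

waitForMove(5, game: 17) { data in
    print("move 5 arrived:", String(data: data, encoding: .utf8) ?? "<binary>")
}
RunLoop.main.run()   // keep a command-line sketch alive
```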
I'd suggest a 404 if your intended client is a web browser, as keeping the connection open can actively block other requests from the browser to the same domain. It's up to the client how often to poll.
2021 Edit: The answer above was in 2009, for context.
Today, I would suggest using a WebSocket interface with push notifications.
Alternatively, in the above suggestion, I might suggest holding the connection for 500-1000 ms and checking twice at the server before returning the 404, to reduce the overhead of creating multiple connections at the client.
I found this article proposing a new HTTP header, "When-Modified-After", that essentially does the same thing: the server waits and keeps the connection open until the resource is modified.
I prefer a version-based approach rather than a timestamp-based one, since it's less prone to race conditions and gives you a little more information about what you're retrieving. Any thoughts on this approach?
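A version-based check can also ride on standard HTTP validators rather than a new header; a minimal Swift sketch using ETag/If-None-Match (the URL and version value are placeholders, and the server would need to emit matching ETags):

```swift
import Foundation

// Sketch: version-based conditional GET using standard ETag/If-None-Match,
// as an alternative to the non-standard "When-Modified-After" proposal.
// The URL and the ETag value are placeholders.
var request = URLRequest(url: URL(string: "https://example.com/game/17/playerToMove")!)
request.setValue("\"v5\"", forHTTPHeaderField: "If-None-Match")   // last version seen

URLSession.shared.dataTask(with: request) { data, response, _ in
    let status = (response as? HTTPURLResponse)?.statusCode ?? 0
    switch status {
    case 304:
        print("nothing new since version v5")
    case 200:
        print("changed:", data.flatMap { String(data: $0, encoding: .utf8) } ?? "")
    default:
        print("unexpected response:", status)
    }
}.resume()
RunLoop.main.run()   // keep a command-line sketch alive
```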