Is HTTP 1.1 pipelining discouraged in native mobile apps?

For several years I've had problems with HTTP 1.1 pipelining, so I've kept asking the server to send the HTTP header:
Connection: close
I want to revisit this decision. Do your native mobile apps use HTTP pipelining?
Some problems I've faced with HTTP pipelining:
The server not releasing TCP connections
My client receiving multiple replies over one HTTP connection

That's exactly what persistent connections and pipelining are for: keeping the TCP connection open until the timeout expires (or the client closes it), and sending multiple requests down the same pipe.
You might want to consider disabling persistent connections if your server serves a high number of clients (you might run out of workers, RAM, or even free ports, raising response time for new requests).
If you want to read further, the HTTP/1.1 spec's discussion of persistent connection behaviour is a good starting point.
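For concreteness, here is a minimal sketch of the question's "Connection: close" approach in Java, using the JDK's HttpURLConnection (the URL is a placeholder). By default the JDK keeps connections alive and reuses them; either a JVM-wide system property or an explicit header opts out.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // Option 1: disable persistent connections for the whole JVM.
        // System.setProperty("http.keepAlive", "false");

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/").openConnection();
        // Option 2: ask this one server to close after the response,
        // mirroring the "Connection: close" approach from the question.
        conn.setRequestProperty("Connection", "close");
        try (InputStream in = conn.getInputStream()) {
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }
}
```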

One of the requirements for clients/servers to be compatible with HTTP/1.1 is support for pipelining, so I don't see how using it would be a problem... I would rather think it would be encouraged. With pipelining you cut down on connection setup, network bandwidth, and so on.
All modern web servers support pipelining, and any reasonably complete client library should too, so I'm not sure what the problem could be... perhaps if you ask about specific errors we could help you with them.

HTTP "pipelining" does not only mean to keep the TCP connection open between consecutive requests/responses. It describes a user agent behaviour where it sends the next HTTP request even without waiting for the pending response to the last request.
In my experience almost any HTTP server supports persistent connections. Using pipelining additionally is less stable. Firefox implements this feature but diables it by default.

You're confusing HTTP pipelining and HTTP persistent connections.
A persistent connection is one where you keep the TCP connection open for future requests, but still send them serially: http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
Pipelining is a rarely used feature of HTTP 1.1 where you fire multiple requests on the same connection without waiting for the responses. It's actually required by the HTTP specification, but rarely used by clients (Android's HTTP library doesn't use it, for example). Most servers seem to support it, though. It's described in section 8.1.2.2 of the same RFC.
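To make the distinction concrete, here is a rough sketch of pipelining over a raw socket in Java (the host and paths are placeholders). Both requests are written before any response is read, which is exactly what plain connection reuse does not do; the server is obliged to answer them in order.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PipeliningSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            // Two requests in one write; "Connection: close" on the last
            // one makes the server end the connection after responding.
            String requests =
                    "GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n" +
                    "GET /b HTTP/1.1\r\nHost: example.com\r\n" +
                    "Connection: close\r\n\r\n";
            out.write(requests.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Both responses arrive back-to-back on the same connection.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
        }
    }
}
```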

Related

Do HTTP clients always wait for a response on a single TCP connection?

This is a purely curiosity-driven question about some subtle issue on the border between HTTP and TCP. I have no concrete problem to solve.
An HTTP request is done over a TCP connection, and a single TCP connection can be used for multiple HTTP requests in a row.
In principle, this means that the client can send a request on a connection before the response to the previous one has arrived.
The interesting part is that such multiple requests can really end up in the same IP packet, and in theory even the multiple responses could, effectively batching the requests.
I came across this topic while looking at the TechEmpower benchmarks, which include a "plaintext" benchmark where 10 such requests are batched together in one send operation (they use the wrk tool to do this).
I'm wondering if this is a purely artificial hack or whether this actually happens, for instance when a browser requests multiple resources from the same server.
Also, can one do this with the HTTP clients of common programming languages, or would one have to go to TCP sockets to get that behavior?
Sending multiple HTTP/1.1 requests without waiting for the responses is known as HTTP pipelining (see the Wikipedia article).
As you can read on Wikipedia, the technique is promising, but it is not enabled by default in browsers "due to several issues including buggy proxy servers and HOL blocking." Nevertheless, there is support for it in major HTTP clients and servers.
The technique is not applicable to later versions of the protocol: HTTP/2 uses the TCP connection in a fundamentally different way, and HTTP/3 does not even use TCP.
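For comparison, here is a hedged sketch of the HTTP/2 alternative mentioned above, using the JDK's built-in client (Java 11+; the URLs are placeholders). The two requests are multiplexed as separate streams on one connection rather than pipelined, so one slow response cannot head-of-line block the other at the HTTP layer.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class MultiplexSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2) // falls back to 1.1 if needed
                .build();
        CompletableFuture<HttpResponse<String>> a = client.sendAsync(
                HttpRequest.newBuilder(URI.create("https://example.com/a")).build(),
                HttpResponse.BodyHandlers.ofString());
        CompletableFuture<HttpResponse<String>> b = client.sendAsync(
                HttpRequest.newBuilder(URI.create("https://example.com/b")).build(),
                HttpResponse.BodyHandlers.ofString());
        // Both requests are in flight concurrently over the same connection.
        System.out.println(a.join().statusCode() + " " + b.join().statusCode());
    }
}
```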

HTTP 2.0 vs HTTP 1.1 with connection pooling in server-to-server communication

I've been reading about HTTP 2.0 recently and I'm trying to understand if there are any benefits for server to server (REST) communication.
The scenario is Server A sending REST JSON messages to Server B (or to a small number of Server B instances).
Usually HTTP clients maintain connection pools and reuse old connections, so if the servers communicate over HTTP 1.1, once a connection is established it will be reused. In that case, what would be the benefit of HTTP 2.0?
Also, if Server B tends to time out a lot, then with HTTP 1.1 the connections will have to be closed and opened again, which is an overhead. But wouldn't the situation be the same with HTTP 2.0?
For a small number of servers, I don't think there will be a big difference between HTTP/1.1 and HTTP/2.
I think the same goes for small request rates.
The HTTP client in ServerA will need to open and pool a small number of connections, possibly just one, in both cases.
The scenario can be really different for a large number of servers (hundreds or more), or for high request rates, which would force HTTP/1.1 to open and maintain a large number of connections.
This is where the multiplexing feature of HTTP/2 can really shine and give an edge to HTTP/2 over HTTP/1.1.
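To make the HTTP/1.1 side of this trade-off concrete, here is a sketch of the kind of pool tuning involved, assuming Apache HttpClient 4.x (the limits are arbitrary examples). The per-route cap is the number that has to grow with server count and request rate, which is exactly what HTTP/2 multiplexing avoids.

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PooledClient {
    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager pool =
                new PoolingHttpClientConnectionManager();
        pool.setMaxTotal(100);          // across all target servers
        pool.setDefaultMaxPerRoute(20); // per target server, e.g. per ServerB instance
        return HttpClients.custom()
                .setConnectionManager(pool)
                .build();
    }
}
```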
Lastly, when using HTTP/2 you also need to take into account request and response content sizes.
Unlike HTTP/1.1, HTTP/2 enforces flow control at the protocol level, and this may stall uploads/downloads if the flow control windows are too small.
Fortunately, this should be configurable in good HTTP/2 client and server implementations.
For small content sizes, you should not see much interference of the flow control mechanism, so HTTP/2 should perform as well as HTTP/1.1.
For larger content sizes, you want to configure the flow control windows to larger values so that the flow control mechanism does not stall uploads/downloads too often.
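As a rough illustration of that configuration, here is a sketch assuming Netty's HTTP/2 codec (the window size is an arbitrary example). initialWindowSize sets the per-stream flow-control window advertised to the peer, so larger transfers spend less time stalled waiting for WINDOW_UPDATE frames.

```java
import io.netty.handler.codec.http2.Http2FrameCodec;
import io.netty.handler.codec.http2.Http2FrameCodecBuilder;
import io.netty.handler.codec.http2.Http2Settings;

public class FlowControlConfig {
    public static Http2FrameCodec clientCodec() {
        // Advertise a 1 MiB per-stream window instead of the 64 KiB default.
        Http2Settings settings = Http2Settings.defaultSettings()
                .initialWindowSize(1024 * 1024);
        return Http2FrameCodecBuilder.forClient()
                .initialSettings(settings)
                .build();
    }
}
```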

What are the pitfalls of using Websockets in place of RESTful HTTP?

I am currently working on a project where the client requests a big job and sends it to the server. The server then divides up the job and responds with an array of URLs for the client to make GET calls on and stream back the data. I am the greenhorn on the project and I am currently using Spring websockets to improve efficiency. Instead of the clients constantly pinging the server to see if it has results ready to stream back, the websocket will now just contact the client directly. Hooray!
Would it be a bad idea to have websockets manage the whole process from end to end? I am using STOMP with Spring websockets, will there still be major issues with ditching REST?
With RESTful HTTP you have a stateless request/response system where the client sends request and server returns the response.
With webSockets you have a stateful (or potentially stateful) message passing system where messages can be sent either way and sending a message has a lower overhead than with a RESTful HTTP request/response.
The two are fairly different structures with different strengths.
The primary advantages of a connected webSocket are:
Two way communication. So, the server can notify the client of anything at any time. So, instead of polling a server on some regular interval to see if there is something new, a client can establish a webSocket and just listen for any messages coming from the server. From the server's point of view, when an event of interest for a client occurs, the server simply sends a message to the client. The server cannot do this with plain HTTP.
Lower overhead per message. If you anticipate a lot of traffic flowing between client and server, then there's a lower overhead per message with a webSocket. This is because the TCP connection is already established and you just have to send a message on an already open socket. With an HTTP REST request, you have to first establish a TCP connection, which takes several back-and-forths between client and server. Then you send the HTTP request, receive the response, and close the TCP connection. The HTTP request will necessarily include some overhead, such as all the cookies associated with that server, even if those are not relevant to the particular request. HTTP/2 (the newest HTTP spec) allows for some additional efficiency in this regard if it is being used by both client and server, because a single TCP connection can be used for more than just a single request/response. If you charted all the requests/responses going on at the TCP level just to make an https REST request/response, you'd be surprised how much is going on compared to just sending a message over an already established webSocket.
Higher Scale in some circumstances. With lower overhead per message and no client polling to find out if something is new, this can lead to added scalability (higher number of clients a given server can serve). There are downsides to the webSocket scalability too (see below).
Stateful connections. Without resorting to cookies and session IDs, you can directly store state in your program for a given connection. While a lot of development has been done with stateless connections to solve most problems, sometimes it's just simpler with stateful connections.
The primary advantages of a RESTful HTTP request/response are:
Universal support. It's hard to get more universally supported than HTTP. While webSockets enjoy relatively good support now, there are still some circumstances where webSocket support isn't regularly available.
Compatible with more server environments. There are server environments that don't allow long running server processes (some shared hosting situations). These environments can support HTTP request, but can't support long running webSocket connections.
Higher Scale in some circumstances. The webSocket requirement for a continuously connected TCP socket adds some new scale requirements to the server infrastructure that HTTP requests don't demand. So, this ends up being a tradeoff space. If the advantages of webSockets aren't really needed or being used in a significant way, then HTTP requests might actually scale better. It definitely depends upon the specific usage profile.
For a one-off request/response, a single HTTP request is more efficient than establishing a webSocket, using it and then closing it. This is because opening a webSocket starts with an HTTP request/response and then after both sides have agreed to upgrade to a webSocket connection, the actual webSocket message can be sent.
Stateless. If your job is not made more complicated by having a stateless infrastructure, then a stateless world can make scaling or fail-over much easier (just add or remove server processes behind a load balancer).
Automatically Cacheable. With the right server settings, http responses can be cached by browser or by proxies. There is no such built-in mechanism for requests sent via webSockets.
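For instance, the caching contract is just a response header, which any HTTP handler can set; here is a tiny sketch assuming Spring MVC (the endpoint is hypothetical). WebSocket messages have no equivalent that browsers and proxies act on.

```java
import java.util.concurrent.TimeUnit;
import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CatalogController {
    // Browsers and intermediary proxies may cache this response for an hour.
    @GetMapping("/catalog")
    public ResponseEntity<String> catalog() {
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(1, TimeUnit.HOURS))
                .body("{\"items\": []}");
    }
}
```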
So, to address the way you asked the question:
What are the pitfalls of using websockets in place of RESTful HTTP?
At large scale (hundreds of thousands of clients), you may have to do some special server work in order to support large numbers of simultaneously connected webSockets.
All possible clients or toolsets don't support webSockets or requests made over them to the same level they support HTTP requests.
Some of the less expensive server environments don't support the long running server processes required to support webSockets.
If it's important to your application to get progress notifications back to the client, you could either use a long running http connection with continuing progress being sent down or you can use a webSocket. The webSocket is likely easier. If you really only need the webSocket for the relatively short duration of this particular activity, then you may find the best overall set of tradeoffs comes by using a webSocket only for the duration of time when you need the ability to push data to the client and then using http requests for the normal request/response activities.
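Here is a minimal sketch of that hybrid approach with Spring (class names, endpoints, and STOMP destinations are all hypothetical): a plain REST endpoint accepts the job, and the server pushes progress to subscribers instead of being polled.

```java
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class JobController {
    private final SimpMessagingTemplate messaging;

    public JobController(SimpMessagingTemplate messaging) {
        this.messaging = messaging;
    }

    // Ordinary REST: the client submits the big job with a normal POST.
    @PostMapping("/jobs")
    public String submitJob(@RequestBody String jobSpec) {
        String jobId = "job-42"; // placeholder; kick off the work asynchronously
        return jobId;
    }

    // Called by the worker as the job progresses: push to subscribers of
    // /topic/jobs/{id} over the webSocket instead of making the client poll.
    public void reportProgress(String jobId, int percent) {
        messaging.convertAndSend("/topic/jobs/" + jobId, percent);
    }
}
```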
It really depends on your requirements. REST services can be much more transparent and easier for developers to pick up than Websockets.
Using Websockets, you lose most of the advantages that RESTful webservices offer, such as the ability to reference a resource via a URI. Really what you should do is figure out what the advantages of REST and hypermedia are, and decide on that basis whether those advantages are important to you.
It's of course entirely possible to create a RESTful webservice and augment it with a websocket-based API for real-time responses.
But if you are creating a service that only you are going to consume in a controlled environment, the only disadvantage might be that not every client supports websockets, while pretty much any type of environment can do a simple http call.

Memcached Client-Server communication

I've been researching memcached, and I'm planning on using it with spymemcached on the client. I'm just curious how client/server communication works between the two. When creating a memcached client object you can pass in a list of servers, but after the client is created, is there any communication between the servers and the client indicating that they are still alive and that the client can send that particular server information? I've tried looking through the memcached and spymemcached documentation sites, but haven't found anything yet.
Spymemcached does not send any special messages to make sure that the connection is still alive, but you can do this in your application code if necessary by sending no-op messages to each server. You should also note that the TCP layer employs mechanisms such as keep-alive and timeout in order to try to detect dead connections. These parameters however may be different depending on the operating system you are using.
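As an illustration, here is a minimal sketch of such an application-level check with spymemcached (host and port are placeholders). getVersions() sends a lightweight request to every configured server, so it serves much the same purpose as a no-op; a server missing from the result, or listed as unavailable, suggests a dead connection.

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.Map;
import net.spy.memcached.MemcachedClient;

public class MemcachedHealthCheck {
    public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(
                new InetSocketAddress("127.0.0.1", 11211));
        // Contacts each server in the cluster and reports its version.
        Map<SocketAddress, String> versions = client.getVersions();
        versions.forEach((addr, v) ->
                System.out.println(addr + " alive, version " + v));
        // Servers spymemcached currently considers unreachable.
        System.out.println("down: " + client.getUnavailableServers());
        client.shutdown();
    }
}
```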

Why do I get HTTP Code 414 on one network but not another?

I have an otherwise working iPhone program. A recent change means that it generates some very long URLs (over 4000 characters sometimes) which I know isn't a great idea and I know how to fix -- that's not what I'm asking here.
The curious thing is that when I make the connection using a 3G network (Vodafone UK) I get this HTTP "414 Request-URI Too Long" error but when I connect using my local WiFi connection it works just fine.
Why would I get different results using different types of network? Could they be routing requests to different servers depending on where the connection originates? Or is there something else at stake here?
The corollary questions relate to how common this is. Is it likely to happen whenever I use a cell network or just some networks?
I would suspect that your 3G requests are being passed through some proxy which doesn't fancy 4000-character URLs, and returns an HTTP 414 error.
I suspect the Vodafone connection is going through a proxy and/or gateway that can't handle the extra-long URL, and that your 414 Request-URI Too Long is coming from it.
Some wireless operators - including Vodafone UK, I believe - deploy inline proxies that transparently intercept your HTTP requests for purposes of optimization. Some of these proxies are based on software like the Squid proxy cache, which can have problems with very long URLs. As a result, your requests might not even be making it to your server.
To work around this issue, you can try sending your HTTP requests to the server on a non-standard TCP port. Generally speaking, these proxies are only configured to perform HTTP processing on port 80. Thus, if you can send your traffic on a different port, it might make it through unscathed.
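As a sketch of that workaround (the host, port 8080, and query are placeholders), the client simply targets a non-standard port so the transparent proxy never sees the request; Java 11's built-in client is used here, but any HTTP client works the same way.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AltPortRequest {
    public static void main(String[] args) throws Exception {
        String longQuery = "q=" + "x".repeat(4000); // roughly 4000-char URL
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com:8080/search?" + longQuery))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // On port 80 a transparent proxy might answer 414; here it should not.
        System.out.println("HTTP " + response.statusCode());
    }
}
```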