Since HTTP is an application-layer protocol that runs over TCP, if I request to download a big file via HTTP, here is what happens:
My HTTP request is going to be split into TCP segments, and TCP is going to do a 3-way handshake and send my request packets to the server. My question is: is the response from the server (the file) going to pass through the old TCP connection, or does the server initiate another transport-layer connection with my browser, with another 3-way handshake, in order to send me the file?
The file transfer will use the existing connection. That will, however, make the connection busy until the file is transferred.
So if the user clicks on a link while the file is downloading, the connection is busy. The web browser will therefore have to open an additional connection to be able to request the clicked URL.
In HTTP/1.1, existing connections will be reused if idle (idle connections are closed after a period of time has passed).
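To make the reuse concrete, here is a minimal sketch using Python's standard http.client module (the host name and paths are placeholders): both requests travel over the same TCP connection, and therefore the same 3-way handshake, as long as the server keeps the connection open.

```python
import http.client

# One TCP connection (one 3-way handshake) is opened here.
conn = http.client.HTTPConnection("example.com", 80)

# First request/response pair over that connection.
conn.request("GET", "/big-file.bin")
resp1 = conn.getresponse()
data1 = resp1.read()          # the connection stays "busy" until the body is read

# The second request reuses the same TCP connection if the server kept it open
# (persistent connections are the default in HTTP/1.1).
conn.request("GET", "/index.html")
resp2 = conn.getresponse()
data2 = resp2.read()

conn.close()                  # explicitly close the TCP connection
```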
I need to explore the traffic from one program.
The program makes something like a WebSocket connection.
Fiddler displays this:
Request Headers: CONNECT 144.***:443 HTTP/1.0
Response: HTTP/1.0 200 Connection Established
And an empty body.
But the HTTP analyzer displays full information after that response, and that information keeps flowing, very much like WebSockets (one connection over which more answers keep arriving).
And Fiddler displays zero traffic.
How can I explore such traffic with Fiddler?
A CONNECT call is always the first command a client sends if it uses a proxy. Translated, CONNECT just means: please start a connection to the following server and port. Through that connection the real HTTP calls are then transmitted. Therefore CONNECT is not a real HTTP request.
Fiddler does not show the content of CONNECT requests/responses to port 443 endpoints because those connections are HTTPS/TLS protected (hence the shown data would be useless). You need to enable HTTPS decryption and install the Fiddler root CA certificate into the client app/OS to see the decrypted content of those connections.
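For reference, this is roughly what a client does when it tunnels TLS through a proxy such as Fiddler; a minimal Python sketch, where the proxy address (Fiddler's default port 8888 is assumed), the target host, and the request are all placeholders:

```python
import socket
import ssl

PROXY_HOST, PROXY_PORT = "127.0.0.1", 8888      # assumed proxy endpoint
TARGET_HOST, TARGET_PORT = "example.com", 443   # placeholder target

# 1. Plain TCP connection to the proxy.
raw = socket.create_connection((PROXY_HOST, PROXY_PORT))

# 2. Ask the proxy to open a tunnel to the target (the CONNECT "pseudo request").
raw.sendall(f"CONNECT {TARGET_HOST}:{TARGET_PORT} HTTP/1.1\r\n"
            f"Host: {TARGET_HOST}:{TARGET_PORT}\r\n\r\n".encode())
status_line = raw.recv(4096).split(b"\r\n", 1)[0]
assert b"200" in status_line                    # expect "200 Connection Established"

# 3. Everything after this point is opaque to the proxy: start TLS end-to-end.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(raw, server_hostname=TARGET_HOST)
tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(tls.recv(4096))
tls.close()
```

This is also why Fiddler needs its root CA installed to show the decrypted content: without it, the bytes flowing through the tunnel are just TLS records.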
How is HTTP Keep Alive implemented? Does it internally use TCP Keep Alive? If not, how does the server detect if the client is dead or alive?
I know this is an old question, but still:
HTTP Keep-Alive is a feature that allows the HTTP client (usually a browser) and the server (web server) to send multiple request/response pairs over the same TCP connection. This decreases latency for the 2nd, 3rd, ... HTTP requests and reduces network traffic, among other benefits.
TCP keepalive is a totally different beast. It keeps a TCP connection open by sending small packets. Additionally, when such a packet is sent it serves as a check, so the sender is notified as soon as the connection drops (note that this is NOT the case otherwise: until we try to communicate through a TCP connection, we have no idea whether it is OK or not).
To answer your questions about HTTP Keep-Alive:
How is HTTP Keep Alive implemented?
To put it simply, the HTTP server doesn't close the TCP connection after each response but waits some time to see whether another HTTP request will come over it too. After some timeout it closes it anyway.
Does it internally use TCP Keep Alive?
No, at least I see no point in it.
If not, how does the server detect if the client is dead or alive?
It doesn't - it doesn't need to. If a client sends a request, it will get the response. If the client doesn't send anything over the TCP connection (maybe because the connection is dead), then a timeout will close the connection; the client will of course notice this and will send the request over another TCP connection if needed.
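For concreteness, here is a minimal sketch of that "wait, then close after a timeout" behaviour; the port, the 5-second timeout, and the hard-coded response are placeholders, and no real HTTP parsing is done:

```python
import socket

KEEP_ALIVE_TIMEOUT = 5  # seconds the server keeps an idle connection around

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)

conn, _ = srv.accept()
conn.settimeout(KEEP_ALIVE_TIMEOUT)
try:
    while True:
        request = conn.recv(65536)        # wait for the next HTTP request
        if not request:                   # client closed its side first
            break
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Length: 5\r\n"
                     b"Connection: keep-alive\r\n\r\n"
                     b"hello")
except socket.timeout:
    pass                                  # no request arrived within the window
finally:
    conn.close()                          # the server closes the idle connection
    srv.close()
```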
HTTP Keep-Alive is a feature of the HTTP protocol. A web server implementing the Keep-Alive feature has to keep checking the connection/socket for an incoming HTTP request after it has sent the last HTTP response. If no HTTP request is received within the configured keep-alive time (in seconds), the web server closes the connection. No further HTTP request will be possible after the web server has closed the connection. TCP Keep-Alive, on the other hand, is managed by the OS in the TCP layer. HTTP Keep-Alive and TCP Keep-Alive are totally unrelated things.
HTTP keep-alive, a.k.a., HTTP persistent connection, is an instruction that allows a single TCP connection to remain open for multiple HTTP requests/responses.
By default, HTTP connections close after each request. When someone visits your site, their browser needs to create new connections to request each of the files that make up your web pages (e.g. images, JavaScript, and CSS stylesheets), a process that can lead to high page load times.
Enabling the keep-alive header allows you to serve all web page resources over a single connection. Keep-alive also reduces both CPU and memory usage on your server.
Source: https://www.imperva.com/learn/performance/http-keep-alive/
HTTP keep-alive just keeps the TCP connection alive longer in order to transfer multiple HTTP requests over it. After the keep-alive timeout, the TCP connection will be closed.
TCP keep-alive is a mechanism for keeping the TCP connection open, or for checking that the TCP connection has not been closed.
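To illustrate the difference: TCP keep-alive is switched on at the socket/OS level rather than in any HTTP message. A minimal Python sketch follows; note that the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs are Linux-specific assumptions and other platforms expose different options:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the OS to send periodic keep-alive probes on this TCP connection.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning knobs; guarded because other platforms lack them.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up

sock.connect(("example.com", 80))   # placeholder host; HTTP itself never sees these probes
```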
I have the following queries:
1) Does TCP guarantee delivery of packets, and is application-level re-transmission thus ever required if the transport protocol used is TCP? Let's say I have established a TCP connection between a client and a server, and the server sends a message to the client. However, the client goes offline and comes back only after, say, 10 hours. So will the TCP stack handle re-transmission and deliver the message to the client, or will the application running on the server need to handle it?
2) Related to the above question: is an application-level ACK needed if the transport protocol is TCP? One reason for an application ACK would be that, without it, the application would not know when the remote end received the message. Is there any reason other than that? Meaning, is the delivery of the message itself guaranteed?
Does TCP guarantee delivery of packets, and is application-level re-transmission thus ever required if the transport protocol used is TCP?
TCP guarantees delivery of message stream bytes to the TCP layer on the other end of the TCP connection. So an application shouldn't have to bother with the nuances of retransmission. However, read the rest of my answer before taking that as an absolute.
However, the client goes offline and comes back only after, say, 10 hours. So will the TCP stack handle re-transmission and deliver the message to the client, or will the application running on the server need to handle it?
No, not really. Even though TCP has some degree of retry logic for individual TCP packets, it cannot perform reconnections if the remote endpoint is disconnected. In other words, it will retry a few times and eventually "time out" waiting to get a TCP ACK from the remote side; it will then give up and notify the application through the socket interface that the remote endpoint connection is in a dead or closed state. The typical pattern is that when a client application detects that it has lost the socket connection to the server, it either reports an error to the user interface or retries the connection. Either way, it's an application-level decision how to handle a failed TCP connection.
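As an illustration of that "report an error or retry" pattern, here is a hedged Python sketch; the host, port, retry count, and back-off are arbitrary choices for the example:

```python
import socket
import time

def send_with_retry(message: bytes, host: str, port: int, retries: int = 3) -> None:
    """Try to deliver a message, reconnecting at the application level if the
    TCP connection turns out to be dead. This is an application decision;
    TCP itself will not re-establish the connection for us."""
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                sock.sendall(message)
                return                          # handed off to the kernel successfully
        except OSError as exc:                  # refused, reset, timed out, unreachable...
            print(f"attempt {attempt + 1} failed: {exc}")
            time.sleep(2 ** attempt)            # back off before retrying
    raise RuntimeError("remote endpoint unreachable, giving up")
```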
Is an application-level ACK needed if the transport protocol is TCP?
Yes, absolutely. Most client-server protocols have some notion of a request/response pair of messages. A TCP socket can only indicate to the application whether data "sent" by the application was successfully queued to the kernel's network stack. It provides no guarantee that the application on top of the socket on the remote end actually "got it" or "processed it". Your protocol on top of TCP should provide some sort of response indication whenever a message is processed. Use HTTP as a good example here. Imagine if an application sent an HTTP POST message to the server, but there was no acknowledgement (e.g. 200 OK) from the server. How would the client know the server processed it?
In a world of Network Address Translators (NATs) and proxy servers, TCP connections that are idle (no data flowing between the two ends) can fail because the NAT or proxy closes the connection on behalf of the actual endpoint when it perceives a lack of data being sent. The solution is to have some sort of periodic "ping" and "pong" protocol by which the applications can keep the TCP connection alive in the absence of real data to send.
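A minimal sketch of such an application-level ping/pong in Python; the PING/PONG messages and the 30-second interval are invented for the example (they are not part of TCP or HTTP), and a real application would have to coordinate these reads with its normal protocol traffic:

```python
import socket
import threading
import time

PING_INTERVAL = 30   # seconds; short enough that NATs/proxies keep seeing traffic

def keepalive_loop(sock: socket.socket) -> None:
    """Periodically send an application-level PING and expect a PONG back."""
    while True:
        time.sleep(PING_INTERVAL)
        try:
            sock.sendall(b"PING\n")
            if sock.recv(16).strip() != b"PONG":
                raise ConnectionError("unexpected keepalive reply")
        except OSError:
            print("connection considered dead; reconnect at the application level")
            return

sock = socket.create_connection(("example.com", 9000))   # placeholder endpoint
threading.Thread(target=keepalive_loop, args=(sock,), daemon=True).start()
```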
I'm writing an application that attempts to do the following:
create a TCP server listening on an available port
create a TCP socket that connects to the server
have the server socket write data to the client
have the server socket close its end of the connection
have the client write a message to the server
Here's where the problem lies. When I attempt to run the application, the TCP exchange goes like this:
The first three packets establish the three-way handshake, and the fourth and fifth packets are the transmission of the data written by the server and its acknowledgement.
As expected, the server socket sends a packet with the FIN flag set to indicate that it is closing its end of the connection. The client acknowledges this and then attempts to write its data to the socket. The server immediately sends an RST packet, terminating the connection prematurely.
Why does this happen?
Note: the above capture was done on Windows 8.1.
A side that has sent a [FIN] will not send any more data. And if it has fully closed its socket, any data it subsequently receives on that connection will be answered with an [RST].
The FIN probably indicates that the server has fully closed the connection in both directions. In this case if it receives any further data on the connection it will issue an RST. This suggests an application protocol error on your part. If the server sends a reply and then closes the socket, the client can't send anything else via that connection.
Possibly you need your server to call shutdown() with SHUT_WR and then read something else from the client before closing the socket. Or possibly you're just doing it wrong.
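If the half-close route is what you want, here is a minimal Python sketch of the server side (port number and payloads are placeholders): shutdown(SHUT_WR) sends the FIN but leaves the read direction open, so the client's final message can still be read instead of triggering an RST.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen(1)

conn, _ = srv.accept()
conn.sendall(b"server data")          # step 3: server writes data to the client

conn.shutdown(socket.SHUT_WR)         # step 4: sends FIN ("I will not write any more"),
                                      # but the read direction stays open

final = conn.recv(1024)               # step 5: the client's message arrives normally
print("client said:", final)

conn.close()                          # now fully close the socket
srv.close()
```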
I'm trying to implement an HTTP proxy for learning and debug purpose.
The support of plain HTTP transactions was pretty straightforward to implement and now I'm looking to implement support for SSL/TLS tunnels.
From RFC 7230:
A "tunnel" acts as a blind relay between two connections without
changing the messages. Once active, a tunnel is not considered a party
to the HTTP communication, though the tunnel might have been initiated
by an HTTP request.
It's not very clear whether I should build the TLS socket on top of the socket on which the HTTP CONNECT transaction took place. I assume that is the case, since HTTP is stateless, but I just want to be sure.
When a client connects to an HTTP proxy, CONNECT is used to have the proxy establish a persistent TCP connection with the target TCP server. Then the proxy blindly passes data as-is back and forth between the two TCP connections until either the client or the server disconnects, at which point the proxy disconnects the other party. This allows the client to send data to the server and vice versa, such as TLS packets. This is important so that the TLS handshake can happen end-to-end between the client and the target server, allowing each side to verify the other's identity during the handshake.
So, to answer your question - yes, the client must establish a TLS session with the target server using the same TCP socket that it used to issue the CONNECT request on. Once the CONNECT request has succeeded, the client can treat the existing TCP connection as if it had connected to the server directly. The proxy is transparent at that point, neither party needs to care that it is present.
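For illustration, here is a minimal sketch of what the proxy does once it has parsed the CONNECT target and replied 200 Connection Established; this is a toy relay under simplifying assumptions (blocking sockets, one thread per direction), not a complete proxy:

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source closes, then half-close the other side."""
    try:
        while True:
            chunk = src.recv(65536)
            if not chunk:
                break
            dst.sendall(chunk)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def tunnel(client_sock: socket.socket, host: str, port: int) -> None:
    """Call this after parsing 'CONNECT host:port' from client_sock."""
    server_sock = socket.create_connection((host, port))
    client_sock.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    # Blindly relay in both directions; the proxy never inspects the TLS bytes.
    t = threading.Thread(target=pump, args=(server_sock, client_sock), daemon=True)
    t.start()
    pump(client_sock, server_sock)
    t.join()
    client_sock.close()
    server_sock.close()
```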