We have a UDP client that communicates with a server.
The server sends a single response to each request.
The client sends a request and waits 5 seconds for the response.
If the server's response is not received within 5 seconds, the client assumes that the packet was lost in the network (this is UDP...), writes a report to a log, and sends the next request.
But sometimes there is a delay in the network, and the server's response arrives after 5 seconds.
Let's describe the scenario:
The client sends a packet named "X".
The 5-second timeout expires, and the client reports "X" as a lost packet.
The client sends another packet named "Y".
The server's response to "X" now arrives at the client.
The client sees that the response does not match the request and reports it to the log.
The client sends another packet named "Z".
The server's response to "Y" now arrives at the client.
The client sees that the response does not match the request and reports it to the log.
And this is an infinite loop!
What can we do?
Many UDP-based protocols include an identifier to indicate which request a given response belongs to. The client chooses the identifier and sends it to the server as part of the request, and then the server echoes it back in the response. This allows the client to match responses to requests, especially in situations like the one you describe. If the client received the response to X after having moved on, it would be able to simply ignore that response.
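For illustration, here is a minimal Python sketch of that idea, assuming a hypothetical server at HOST:PORT that echoes back the first 4 bytes of each request as the response ID (the ID length and the wire format are assumptions for this example, not part of any particular protocol):

    import os
    import socket

    HOST, PORT = "192.0.2.1", 9999  # hypothetical server address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)

    def request(payload):
        req_id = os.urandom(4)                  # unique ID for this request
        sock.sendto(req_id + payload, (HOST, PORT))
        while True:
            try:
                data, _ = sock.recvfrom(65535)
            except socket.timeout:
                return None                     # report the loss, move on
            if data[:4] == req_id:
                return data[4:]                 # response to *this* request
            # Late response to an earlier request: simply ignore it.

A late response to "X" is still read, but it fails the ID check and is dropped, so the client never misinterprets it as the answer to "Y".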
Related
How does one create a REST-like pattern with zeromq? Specifically, I have the following architecture:
A user sends a POST request to Server A, which then sends that request to Server B to do some more processing. After that, Server B sends that request to Server C, doesn't wait on its response, then responds back to Server A with a message indicating that the request has been enqueued. I want Server A to wait on Server B until this response is received.
Initially, I made this so that Server A and Server B are connected through DEALER/ROUTER. But when multiple users hit the POST route at the same time, there is no guarantee that the ROUTER's response will correspond to the correct request.
For example, let's say that John sends a request that takes 60 seconds to process. After that, Jane sends a different request that takes only 30 seconds to process. Even though John sent his request first, Server A's first recv on the DEALER socket will return the response to Jane's request, since hers finished first.
How do I make sure that the responses are sent to the correct "client"? I technically have only one client (Server A), but multiple requests can be made at the same time.
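One common approach, sketched here in Python with pyzmq (the endpoint and frame layout are assumptions for illustration), is to tag every request with an explicit correlation-ID frame that Server B echoes back in its reply:

    import uuid
    import zmq

    ctx = zmq.Context()
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect("tcp://server-b:5555")   # hypothetical endpoint

    pending = {}  # correlation ID -> whoever is waiting on the reply

    def send_request(payload, waiter):
        corr_id = uuid.uuid4().bytes
        pending[corr_id] = waiter
        dealer.send_multipart([corr_id, payload])

    def handle_reply():
        corr_id, reply = dealer.recv_multipart()
        waiter = pending.pop(corr_id)       # route the reply to its request
        return waiter, reply

Server B's ROUTER socket receives [identity, corr_id, payload] and must send back [identity, corr_id, reply], so the DEALER side can match each reply to John's or Jane's request regardless of completion order.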
I am trying to understand the TCP reset problem mentioned in RFC 7230: HTTP/1.1 Message Syntax and Routing, § 6.6:
6.6. Tear-down
The Connection header field (Section 6.1) provides a "close"
connection option that a sender SHOULD send when it wishes to close
the connection after the current request/response pair.
So HTTP/1.1 has persistent connections, meaning that multiple HTTP request/response pairs can be sent on the same connection.
A client that sends a "close" connection option MUST NOT send further
requests on that connection (after the one containing "close") and
MUST close the connection after reading the final response message
corresponding to this request.
A server that receives a "close" connection option MUST initiate a
close of the connection (see below) after it sends the final response
to the request that contained "close". The server SHOULD send a
"close" connection option in its final response on that connection.
The server MUST NOT process any further requests received on that
connection.
So the client signals that it will close the connection by adding the Connection: close header field to its last HTTP request, and it closes the connection only after reading the final response to that request.
A server that sends a "close" connection option MUST initiate a close
of the connection (see below) after it sends the response containing
"close". The server MUST NOT process any further requests received on
that connection.
A client that receives a "close" connection option MUST cease sending
requests on that connection and close the connection after reading the
response message containing the "close"; if additional pipelined
requests had been sent on the connection, the client SHOULD NOT assume
that they will be processed by the server.
So the server signals that it will close the connection by adding the Connection: close header field to its last HTTP response, and then it closes the connection. But which message does the server wait for as acknowledgement that the client received the HTTP response before it closes the connection?
If a server performs an immediate close of a TCP connection, there is
a significant risk that the client will not be able to read the last
HTTP response. If the server receives additional data from the client
on a fully closed connection, such as another request that was sent by
the client before receiving the server's response, the server's TCP
stack will send a reset packet to the client; unfortunately, the reset
packet might erase the client's unacknowledged input buffers before
they can be read and interpreted by the client's HTTP parser.
So in the case where the server initiates the close of the connection: if the server fully closes the connection right after sending the HTTP response with a Connection: close header field to an initial HTTP request, then the client may not receive that HTTP response, because it received a TCP reset packet in response to a subsequent HTTP request that it sent after the initial one. But how can the TCP reset triggered by the subsequent HTTP request reach the client before the HTTP response to the initial HTTP request?
To avoid the TCP reset problem, servers typically close a connection
in stages. First, the server performs a half-close by closing only
the write side of the read/write connection. The server then
continues to read from the connection until it receives a
corresponding close by the client, or until the server is reasonably
certain that its own TCP stack has received the client's
acknowledgement of the packet(s) containing the server's last
response. Finally, the server fully closes the connection.
So in the case where the server initiates the close of the connection, the server only closes the write side of the connection right after sending the HTTP response with a Connection: close header field to an initial HTTP request, and it closes the read side of the connection only after receiving a subsequent corresponding HTTP request with a Connection: close header field or after waiting for a period long enough to assume that it received a TCP message acknowledging that the client received the HTTP response. But why would the client send a subsequent corresponding HTTP request with a Connection: close header field after receiving the HTTP response with a Connection: close header field, whereas paragraph 5 states: ‘A client that receives a "close" connection option MUST cease sending requests on that connection’?
It is unknown whether the reset problem is exclusive to TCP or might
also be found in other transport connection protocols.
But why would the client send a subsequent corresponding HTTP request with a Connection: close header field after receiving the HTTP response with a Connection: close header field, whereas paragraph 5 states: ‘A client that receives a "close" connection option MUST cease sending requests on that connection’?
With HTTP pipelining, the client can send new requests even though the response for a previous request (and thus the Connection: close in that response) has not yet been received. This is a slight optimization over sending the next request only after the response to the previous one has arrived, but it comes with the risk that the new request will not be processed by the server.
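A minimal Python sketch of that scenario, with a hypothetical host and paths: two requests are pipelined back to back, and if the server answers the first with Connection: close, the second is never processed:

    import socket

    sock = socket.create_connection(("example.com", 80))
    # Send two requests back to back, without waiting for the first response.
    sock.sendall(
        b"GET /first HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /second HTTP/1.1\r\nHost: example.com\r\n\r\n"
    )
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:       # server closed after the first response;
            break           # the pipelined second request was discarded
        data += chunk
    print(data.decode(errors="replace"))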
But how can the TCP reset triggered by the subsequent HTTP request reach the client before the HTTP response to the initial HTTP request?
While the TCP RST is sent after the response, it can be propagated to the application early. A TCP RST is sent if new data arrives at a socket that has already been shut down for at least reading (i.e., after close(fd) or shutdown(fd, SHUT_RD)). It is also sent if there are still unprocessed data in the receive buffer of the socket at close, as in the case of HTTP pipelining. Once a TCP RST is received by the peer, its socket is marked as broken. On the next system call on this socket (typically a read or write), this error is delivered to the application, regardless of whether there are still unread data in the receive buffer of the socket. These unread data are thus lost.
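The following self-contained Python sketch illustrates the mechanism (the exact outcome is OS- and timing-dependent, so treat it as a demonstration rather than a guaranteed reproduction): the server closes while a pipelined second request is still unread in its receive buffer, its TCP stack emits a RST, and the client's next read can fail with ECONNRESET even though the response was already queued locally:

    import socket
    import time

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    cli.sendall(b"GET /1 HTTP/1.1\r\n\r\n")
    conn.recv(4096)                          # server reads request 1
    cli.sendall(b"GET /2 HTTP/1.1\r\n\r\n")  # pipelined request 2
    time.sleep(0.2)                          # let request 2 reach the server
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()                             # request 2 unread -> RST

    time.sleep(0.2)                          # let the RST reach the client
    try:
        print(cli.recv(4096))                # may raise ConnectionResetError
    except ConnectionResetError:
        print("response lost to the reset")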
But which message does the server wait for as acknowledgement that the client received the HTTP response before it closes the connection?
It is not waiting for an application-level message from the client. It will first deliver the response with the Connection: close, then read on the socket in order to detect the close of the connection by the client. Then it will close the connection as well. This waiting for the close should of course be done with a short timeout, because disrupted connections might otherwise never be explicitly closed. Alternatively, the server could just wait a few seconds and hope that the client has received and processed the response in the meantime.
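A minimal sketch of such a staged close in Python, assuming conn is the server's connected TCP socket and the final response has already been sent (the timeout value is an arbitrary choice):

    import socket

    def lingering_close(conn, timeout=2.0):
        conn.shutdown(socket.SHUT_WR)   # half-close: send FIN, keep reading
        conn.settimeout(timeout)
        try:
            while conn.recv(4096):      # drain until the client closes,
                pass                    # discarding any pipelined requests
        except (socket.timeout, OSError):
            pass                        # give up after the timeout
        conn.close()                    # now fully close without a RST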
According to the SIP protocol, when the first INVITE is sent, the server returns a Proxy Authentication Required message (if a proxy server is involved), and the client then sends an ACK message. But what happens if the ACK fails to reach the SIP server? The server returns Forbidden after some time and ignores all new INVITEs carrying an authentication header. Also, when the SIP server gets multiple ACK messages, it immediately sends Forbidden.
If your question is what the correct behaviour would be for a SIP server that has issued a 407 and not received an ACK for it, please see RFC 3261 §17.2.1 for the description of the INVITE server transaction.
Sending the 407 moves the state machine into the "Completed" state, at which point the G and H timers have to be set. When timer G fires, the 407 response is retransmitted. If all the ACK messages get lost, timer H will eventually make the server transaction give up. But if the second ACK reaches the server, that's it: you will have seen two 407 responses, one whose ACK was lost and a second one with a successful ACK.
The handling of the subsequent INVITE carrying the credentials should be entirely independent of the process described above. The INVITE with the credentials constitutes a separate dialog-forming transaction.
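For illustration, here is a rough Python sketch of that retransmission logic over UDP, using the RFC 3261 default timer values (the socket handling and the crude ACK match are assumptions for this example, not a real SIP stack):

    import socket
    import time

    T1, T2 = 0.5, 4.0      # RFC 3261 default timer values, in seconds

    def retransmit_until_ack(sock, response, peer):
        timer_g = T1
        deadline = time.monotonic() + 64 * T1    # Timer H
        sock.sendto(response, peer)
        while time.monotonic() < deadline:
            sock.settimeout(timer_g)
            try:
                data, addr = sock.recvfrom(65535)
                if data.startswith(b"ACK"):       # crude; real stacks match on the branch parameter
                    return True                   # ACK received: done
            except socket.timeout:
                sock.sendto(response, peer)       # Timer G fired: retransmit the 407
                timer_g = min(timer_g * 2, T2)    # Timer G doubles, capped at T2
        return False                              # Timer H fired: give up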
I have a client application that repeatedly sends commands over a socket connection to a server and receives corresponding responses. This socket connection has a send/recv timeout set.
If the server is slow to respond for some reason, the client receives EAGAIN. On a timeout, I want the client to ignore that response and proceed with sending the next request.
However, currently when I ignore EAGAIN and send the next request, I receive the response from a previous request.
What is the best way to ignore/discard the response on an EAGAIN?
You can't. You have to read it. There is no mechanism to ignore bytes in a TCP byte stream.
EAGAIN may indicate that a timeout elapsed (you need to handle EWOULDBLOCK as well). If you are using TCP, you must read pending data before you can read any subsequent data. If you get an EAGAIN on a read, you have to perform the same read again, with the same parameters. Just because the server is slow to respond (or the network is slow to deliver the response) does not mean the response will never arrive, unless the connection is closed or lost.
If you really want to be able to receive responses out of order, you need to design your communication protocol to support that in the first place. Give each request a unique ID that is echoed in its response. Send the request but do not wait for the response to arrive. That will allow the client to have multiple requests in flight at a time, and allow the server to send back responses in any order. The client will have to read each response as it arrives (which means you have to do the reading asynchronously, typically using a separate thread or some other parallel signaling mechanism) and match up each response's ID to its original request so you know how to then process it.
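Here is a minimal Python sketch of that design, assuming a hypothetical framing of [4-byte length][4-byte request ID][payload] in both directions and a server that echoes each request's ID in its response:

    import itertools
    import socket
    import struct
    import threading

    class Client:
        def __init__(self, host, port):
            self.sock = socket.create_connection((host, port))
            self.ids = itertools.count()
            self.pending = {}          # request ID -> [event, response]
            self.lock = threading.Lock()
            threading.Thread(target=self._reader, daemon=True).start()

        def send(self, payload):
            req_id = next(self.ids)
            with self.lock:
                self.pending[req_id] = [threading.Event(), None]
            frame = struct.pack("!II", 4 + len(payload), req_id) + payload
            self.sock.sendall(frame)
            return req_id

        def wait(self, req_id, timeout=5.0):
            event = self.pending[req_id][0]
            if not event.wait(timeout):
                with self.lock:
                    self.pending.pop(req_id, None)  # stop tracking; a late
                return None                         # response is just dropped
            with self.lock:
                return self.pending.pop(req_id)[1]

        def _reader(self):
            while True:
                length, req_id = struct.unpack("!II", self._recv_exact(8))
                payload = self._recv_exact(length - 4)
                with self.lock:
                    slot = self.pending.get(req_id)
                if slot is not None:       # match response to its request;
                    slot[1] = payload      # unknown or late IDs are dropped
                    slot[0].set()

        def _recv_exact(self, n):
            buf = b""
            while len(buf) < n:
                chunk = self.sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("connection closed")
                buf += chunk
            return buf

On a timeout, wait() simply returns None and the client moves on; when the late response eventually arrives, the reader thread discards it instead of misreading it as the answer to the next request.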
I'm having a problem with a .NET client connecting to an Apache server for XML requests. Exactly every 50th time XML is transferred, the response seems to be lost.
Looking at a Wireshark trace on the client, I can see that every 50th time the Apache server sends an encrypted alert followed by a FIN, ACK. The client responds with a RST, which closes the socket, but then the client continues to try to use the socket: it sends SYN packets that get no response. When this happens, the response doesn't reach the application layer on the client.
After the client times out and reconnects (renegotiating encryption), it works another 49 times and then fails again.
Just to add: this same .NET client is in use on many other client machines without any problem.
I can't find anyone else having this issue. Any ideas how to resolve this?