Outstanding frames in packet transmission

I came across the term "outstanding frames" while studying packet transmission between a client and a server, but I have doubts about this term.
There is an older question on the same topic, but it doesn't seem to have a definitive answer: what is an 'outstanding' frame?
I have found two possible definitions of what it could be:
Packets that are queued up for transmission but not yet sent, and
packets that have been sent to the client but for which an acknowledgement has not yet been received.
So which is the correct meaning of the term?

The second definition is more accurate: outstanding frames refer to packets that have been sent to the client, but for which an acknowledgement has not yet been received by the server. This means that the server is still waiting for confirmation that the packets were successfully received by the client. In contrast, packets that are queued up for transmission but not yet sent are referred to as "pending frames."
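To make the two cases concrete, here is a minimal, illustrative sketch of a sender-side sliding window (the class and attribute names are made up for this example, not taken from any particular stack). Packets sit in the pending queue until they are transmitted, become outstanding once sent, and stop being outstanding only when their acknowledgement arrives:

```python
from collections import deque

class WindowedSender:
    """Toy sliding-window sender that keeps pending and outstanding packets apart."""

    def __init__(self, window_size=4):
        self.window_size = window_size
        self.pending = deque()   # queued for transmission, not yet sent
        self.outstanding = {}    # sent, but no acknowledgement received yet (seq -> payload)
        self.next_seq = 0

    def enqueue(self, payload):
        self.pending.append((self.next_seq, payload))
        self.next_seq += 1

    def transmit(self, send_fn):
        # Move packets from pending to outstanding while the window has room.
        while self.pending and len(self.outstanding) < self.window_size:
            seq, payload = self.pending.popleft()
            send_fn(seq, payload)
            self.outstanding[seq] = payload

    def on_ack(self, seq):
        # Once the receiver acknowledges a packet it is no longer outstanding.
        self.outstanding.pop(seq, None)
```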

Related

ZeroMQ and TCP Retransmits on Linux

I'm having issues understanding which socket types are negatively impacted in the event that TCP must retransmit messages.
We have a distributed system that uses a combination of in-process (inproc) and TCP connections for internal processes and for external devices and applications. My concern is that, if significant traffic causes latency and dropped packets, TCP retransmits will introduce delay into the system.
What I'd like to avoid is an application whose messages pile up in a queue waiting to be sent (via a single ZeroMQ TCP socket) because TCP keeps retransmitting messages for which no acknowledgement was ever received.
Is this an issue that can happen using ZeroMQ? Currently I am using PUSH/PULL on a Linux OS.
Or is this not a concern, and if not, why?
It is crucial that messages from the external devices/applications do not feed stale data.
First, the only transport where retransmits are possible is TCP over an actual physical network. And then likely not on a LAN, as it's unlikely that Ethernet packets will go missing on a LAN.
TCP internal to a computer, and especially IPC, INPROC, etc., will all have guaranteed delivery of data the first time, every time. There is no retransmit mechanism.
If one of the transports being used by a socket does experience delays due to transmission errors, that will slow things down. ZMQ cannot consider a message "sent" until it has been propagated via all the transports used by the socket. The externally visible sign of "sent" is that the outbound message queue has moved away from the high water mark by 1.
It's possible that any one single message will arrive sooner over IPC than TCP, and possible that message 2 will arrive over IPC before message 1 has arrived via TCP. But if you're relying on message timing / relative order, you shouldn't be using ZMQ in the first place; it's Actor model, not CSP.
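If the concern is messages piling up behind a stalled TCP transport, the knobs that matter here are the socket's high water mark and a non-blocking send. A minimal pyzmq sketch, with a placeholder endpoint and payload, showing how a sender can notice that the queue is full instead of silently letting stale readings accumulate:

```python
import zmq

ctx = zmq.Context.instance()
push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 1000)          # cap the outbound queue at 1000 messages
push.connect("tcp://192.0.2.10:5555")      # placeholder endpoint

reading = b"sensor-reading-42"             # placeholder payload
try:
    # NOBLOCK makes send raise zmq.Again instead of blocking once the queue
    # is full, e.g. because TCP is busy retransmitting on a lossy or slow link.
    push.send(reading, flags=zmq.NOBLOCK)
except zmq.Again:
    # Queue is full: decide here to drop, overwrite, or log the reading,
    # instead of letting stale data pile up behind the stalled transport.
    pass
```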
EDIT For Frank
The difference between Actor and CSP is that the former is asynchronous and the latter is synchronous. Thus, in the Actor model the sender has zero information as to when the receiver actually gets a message. In CSP, sending / receiving is an execution rendezvous - the send completes only when the receive is complete.
This can be remarkably useful. If in your system it makes no sense for A to instruct C to do something before (in time, not just in A's code flow) instructing B, then you can do that with CSP (but not Actor model). That's because when A sends to B, B receives the message before A's send completes, freeing A to then send to C.
Unsurprisingly, it's real-time systems that benefit from CSP.
So consider ZMQ's Actor model with a mix of TCP, IPC and INPROC transports. There's a good chance that messages sent via TCP will arrive a good deal later than messages sent through INPROC, even if they were sent first.
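To illustrate the rendezvous property described above, here is a small sketch of a CSP-style channel built from standard threading primitives (a toy for a single sender/receiver pair, not ZMQ and not a production channel): send() returns only after recv() has taken the item, so A provably hands its message to B before it even offers one to C.

```python
import threading
import queue

class RendezvousChannel:
    """Toy CSP-style channel for one sender and one receiver:
    send() only returns once recv() has taken the item."""

    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
        self._taken = threading.Event()

    def send(self, item):
        self._taken.clear()
        self._slot.put(item)   # hand the item over
        self._taken.wait()     # block until the receiver has actually taken it

    def recv(self):
        item = self._slot.get()
        self._taken.set()      # release the blocked sender
        return item

def actor_a(to_b, to_c):
    # Because send() is a rendezvous, B has received "do X" before
    # "do Y" is even offered to C, regardless of transport latency.
    to_b.send("do X")
    to_c.send("do Y")

if __name__ == "__main__":
    to_b, to_c = RendezvousChannel(), RendezvousChannel()
    b = threading.Thread(target=lambda: print("B got:", to_b.recv()))
    c = threading.Thread(target=lambda: print("C got:", to_c.recv()))
    b.start(); c.start()
    actor_a(to_b, to_c)
    b.join(); c.join()
```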

How to delete duplicate UDP packets from the send queue

I have a UDP implementation with a facility to get acknowledgements back from the server. The client re-sends packets for which an acknowledgement is not received from the server within a specified time. The client sends around 10 packets while waiting for the acknowledgement of the 1st packet, then repeats sending the packets for which no acknowledgement has been received. This works fine in the normal scenario with minor network delay.
The real issue shows up on a low-bandwidth connection where the round-trip delay is significant. The client keeps adding packets to the send queue based on acknowledgement timeouts, which results in many duplicate packets being added to the queue.
I have tried to find an elegant solution to avoid duplicate packets in the send queue, with no luck. Any help will be appreciated.
If I could mark/set a property of a packet such that, if the packet is not sent within NN ms, it is removed from the queue, then I could build an algorithm around it.
Unlike TCP, UDP has no built-in duplicate detection. This means any such detection has to be done by the application itself. Since the only way an application can interact with the send queue is to send datagrams, any duplicate detection on the sender side has to happen before the packet is put into the send queue.
Whether a packet at this stage is a duplicate of a previous one that should not be sent, or a duplicate that should be sent because the original got lost, is entirely up to the application to decide. And any "...not sent within NN ms..." behaviour has to be implemented in the application too, with timers or similar. You might additionally try to get more control over the queue by reducing the size of the send queue with SO_SNDBUF.
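As a sketch of what that application-level bookkeeping could look like (the sequence-number scheme, field names, and intervals are all illustrative), keeping one retransmit entry per sequence number means a timeout re-arms the existing entry rather than queueing a second copy, and the age deadline gives the "remove if not sent within NN ms" behaviour; SO_SNDBUF then limits how much the kernel queues beneath that:

```python
import socket
import time

RETRY_INTERVAL = 0.5   # seconds before an unacked packet is resent (illustrative)
MAX_AGE = 5.0          # the "NN ms"-style deadline after which a packet is dropped

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A smaller kernel send buffer keeps fewer datagrams queued below the application.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)

# One entry per sequence number: a timeout re-arms the existing entry instead of
# appending a second copy, so duplicates never accumulate in the send queue.
unacked = {}   # seq -> {"data": bytes, "first_sent": float, "next_send": float}

def enqueue(seq, data):
    now = time.monotonic()
    unacked[seq] = {"data": data, "first_sent": now, "next_send": now}

def pump(server_addr):
    """Call periodically: resend packets that are due, drop ones that are too old."""
    now = time.monotonic()
    for seq in list(unacked):
        entry = unacked[seq]
        if now - entry["first_sent"] > MAX_AGE:
            del unacked[seq]                         # give up rather than duplicate forever
        elif now >= entry["next_send"]:
            sock.sendto(entry["data"], server_addr)
            entry["next_send"] = now + RETRY_INTERVAL

def on_ack(seq):
    unacked.pop(seq, None)                           # acknowledged: stop retransmitting
```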

How to know when a TCP message was sent

In an online farm-like game I need to validate on the server the client's long-running processes, like building a house. Say the house needs 10 minutes to be built. The client sends a "Start build" message asynchronously over a TCP connection when it starts building the house and a "Finish build" message when it thinks the house is built. On the server I need to validate that the house was built in at least 10 minutes.
The issue is that the server doesn't know when the client sent the "start build" and "finish build" messages. It knows when each message was received, but there is network lag, there are possible network failures, and messages can be long enough to span a few TCP segments. As I understand it, the time the client takes to send a message can be up to a few minutes and depends on the client's TCP configuration.
The question is: is there a way to know when the message was issued on the client side? If not, how can I bound the time period in which the message was sent, possibly with some server TCP configuration? A timeout within which the server either receives the message or fails would be OK. Any other solutions to the main task that I may not have thought about are also welcome.
Thanks in advance!
If I understand you correctly, your main issue is not related to TCP itself (the described scenario could also happen using UDP) but to the chronology of your messages and to ensuring that the timeline has not been faked.
So the only case you want to avoid is the following:
STARTED sent at 09:00:00 and received at 09:00:30 (higher latency)
FINISHED sent at 09:10:00 and received at 09:10:01 (lower latency)
To the server it looks as if only 9.5 minutes were spent constructing the virtual building, but the client didn't cheat; the first message simply had a higher latency than the second.
The other way around would be no problem:
STARTED sent at 09:00:00 and received at 09:00:01 (lower latency)
FINISHED sent at 09:10:00 and received at 09:10:30 (higher latency)
or
STARTED sent at 09:00:00 and received at 09:00:10 (equal latency)
FINISHED sent at 09:10:00 and received at 09:10:10 (equal latency)
In both cases at least 10 minutes elapsed between the receipt of the two messages.
Unfortunately, there is no way to ensure the client does not cheat by using timestamps or the like. It does not matter whether your client writes the timestamps into the messages or the protocol does it for you. There are two reasons for that:
Even if your client does not cheat, the system clocks of client and server might not be in sync.
All data written into a network packet is just bytes and can be manipulated. Someone could use a RAW socket and fake the entire TCP layer.
So the only thing that is known for sure is the time at which the messages were received by the server. A simple solution would be to send the client some sort of RETRY message containing the time left whenever the server decides that not enough time had elapsed when the FINISHED message arrived. The client could then adjust the construction animation and send the FINISHED message again later, depending on how much time was left.
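A sketch of that server-side check, using only the server's own clock and hypothetical send_retry / grant_building callbacks standing in for the game's real messaging layer:

```python
import time

BUILD_TIME = 10 * 60     # seconds the house must take to build
build_started = {}       # hypothetical per-player state: player_id -> server receive time

def on_started(player_id):
    # Only the server clock is trusted: record when STARTED was *received*.
    build_started[player_id] = time.monotonic()

def on_finished(player_id, send_retry, grant_building):
    # send_retry and grant_building are placeholders for the game's own messaging.
    started = build_started.get(player_id)
    if started is None:
        send_retry(player_id, BUILD_TIME)   # never saw STARTED for this build
        return
    remaining = BUILD_TIME - (time.monotonic() - started)
    if remaining > 0:
        send_retry(player_id, remaining)    # client should adjust and retry later
    else:
        del build_started[player_id]
        grant_building(player_id)
```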

TCP Sockets: "Rollback" after timeout occurred

This is a rather general question about TCP sockets. I have a client/server application setup where messages are sent over the wire via TCP. The implementation is done with C++ POCO; however, the question is not tied to a specific technology.
A message can be a request (initiated by the client) or a response (initiated by the server).
A request has the structure:
Message Header
Request Header
Parameters
A response has the structure
Message Header
Response Header
Parameters
I know TCP guarantees that sent data will be delivered in the order it was sent. However, nothing can be assumed about how long a delivery might take.
On both sides I have a read/send timeout configured. Now I wonder how to get back to a clean state for the transmitted data after a timeout. I don't know how to express this in the right terms, so let me describe an example:
Server S sends a response to the client (Message Header, Response Header, Parameters are put into the stream)
Client C receives the message header partially (e.g. the first 4 bytes of 12)
After these 4 bytes have been received, the reception timeout occurs
On the client side, an appropriate exception is thrown and reception is stopped.
The client considers the message invalid.
Now the problem is that when the client tries to receive another message, he might receive the remaining part of the "old" response message header. From the point of view of the currently processed transaction (send request / get response), the client receives garbage.
So it seems that after a timeout has occurred (no matter whether it was on the client or server side), the communication should continue with a "clean setup", meaning that neither of the communication partners will try to send any old message data and that no old message data remains in the stream buffer of the respective socket.
So how are such situations commonly handled? Is there some kind of design pattern / idiomatic way to solve this?
How are such situations handled within other TCP-based protocols, e.g. HTTP?
In all the TCP samples around the net I've never seen an implementation that deals with this kind of problem...
Thank you in advance
when the client tries to receive another message, he might receive the remaining part of the "old" response message header
He will receive the rest of the failed message, if he receives anything at all. He can't receive anything else, and specifically data that was sent later can't be received before or instead of data that was sent earlier. It is a reliable byte-stream. You can code accordingly.
the communication should continue with a "clean setup", meaning that neither of the communication partners will try to send any old message data
You can't control that. If a following message has been written to the TCP socket send buffer, which is all that send() actually does, it will be sent, and there is no way of preventing it short of resetting the connection.
So you either need to code your client to cope with the entire byte stream as it arrives, or possibly close the connection on a timeout and start again.
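As an example of the second option, and assuming a 4-byte length prefix in front of each message (which the protocol above may or may not actually use), a client can read frames strictly and tear the connection down on timeout so no half-read message can pollute the next transaction:

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes, looping over partial recv() results."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def read_message(sock, timeout=5.0):
    """Read one length-prefixed message; on timeout, drop the connection so
    no partially received message is left sitting in the stream."""
    sock.settimeout(timeout)
    try:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))  # 4-byte big-endian prefix
        return recv_exact(sock, length)
    except socket.timeout:
        sock.close()  # start over with a fresh connection instead of resynchronising
        raise
```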

Ensuring send() data delivered

Is there any way of checking whether data sent using Winsock's send() or WSASend() was really delivered to the destination?
I'm writing an application talking to a third-party server, which sometimes goes down after working for some time, and I need to be sure whether messages sent to that server were delivered or not. The problem is that sometimes a call to send() finishes without error even if the server is already down, and only the next send() fails, so I have no idea whether the previous message was delivered or not.
I suppose that at the TCP layer there is information about whether certain (or all) sent packets were ACKed, but it is not available through the socket interface (or I cannot find a way to get it).
Worst of all, I cannot change the code of the server, so I can't get any delivery confirmation messages.
I'm sorry, but given what you're trying to achieve, you should realise that even if the TCP stack COULD give you an indication that a particular set of bytes had been ACKed by the remote TCP stack, it wouldn't actually mean anything different from what you know at the moment.
The problem is that unless you have an application-level ACK from the remote application, one that is only sent once the remote application has actioned the data you sent to it, you will never know for sure whether the data has been received by the remote application.
'but I can assume it's close enough'
is just delusional. You may as well make that assumption when your send() completes, as it's about as valid.
The issue is that even if the TCP stack could tell you that the remote stack had ACKed the data (1), that's not the same thing as the remote application receiving the data (2), and that is not the same thing as the remote application actually USING the data (3).
Given that the remote application COULD crash at any point (1, 2 or 3), the only worthwhile indication that the data has arrived is one that is sent by the remote application after it has used the data for the intended purpose.
Everything else is just wishful thinking.
Not from the return value of send(). All send() tells you is that the data was pushed into the send buffer. A connected stream socket is not guaranteed to send all the data in a single call, only that whatever it does deliver arrives in order. So you can't assume that your send() will go out in a single packet, or that it will ever arrive at all, given possible network delay or interruption.
If you need a real acknowledgement, you'll have to look at a higher, application-level ACK (the server sending back a formatted ACK message, not just TCP-level packet ACKs).
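A sketch of what such an application-level ACK could look like on the client side, assuming a hypothetical protocol in which the peer echoes back a 16-byte message id only after it has processed the data (which, as the answer above points out, will not help with a third-party server you cannot change):

```python
import socket
import struct
import uuid

def send_with_ack(sock, payload, timeout=5.0):
    """Send one length-prefixed message and wait for the peer to echo its id back.
    Only meaningful if the peer's application actually sends such an echo after
    it has processed the data (the hypothetical protocol assumed here)."""
    msg_id = uuid.uuid4().bytes                        # 16-byte message identifier
    body = msg_id + payload
    sock.sendall(struct.pack("!I", len(body)) + body)  # "sent" only means buffered locally
    sock.settimeout(timeout)
    ack = b""
    while len(ack) < 16:                               # read the echoed id in full
        chunk = sock.recv(16 - len(ack))
        if not chunk:
            raise ConnectionError("connection closed before the application-level ACK")
        ack += chunk
    if ack != msg_id:
        raise RuntimeError("peer acknowledged a different message")
```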