How to know when a TCP message was sent - sockets

In an online farm-like game I need to validate on the server the client's long-running processes, like building a house. Say the house needs 10 minutes to be built. The client sends a "Start build" message asynchronously over a TCP connection when it starts building the house and a "Finish build" message when it thinks the house is built. On the server I need to validate that the house was built in at least 10 minutes. The issue is that the server doesn't know when the client sent the "start build" and "finish build" messages. It knows when a message was received, but there is network lag, there can be network failures, and a message can be long enough to span a few TCP segments. As I understand it, the time the client takes to send a message can be up to a few minutes and depends on the client's TCP configuration. The question is: is there a way to know when a message was issued on the client side? If not, how can I guarantee the time period within which the message was sent, possibly via some server TCP configuration? A timeout within which the server either receives the message or fails would be OK. Any other solutions to the main task that I may not have thought of are also welcome.
Thanks in advance!

If I understand you correctly, your main issue is not related to TCP itself (the described scenario could just as well occur over UDP) but to the chronology of your messages and to ensuring that the timeline has not been faked.
So the only case you want to avoid is the following:
STARTED sent at 09:00:00 and received at 09:00:30 (higher latency)
FINISHED sent at 09:10:00 and received at 09:10:01 (lower latency)
To the server it looks as if only 9.5 minutes were spent constructing your virtual building, yet the client didn't cheat; the first message simply had a higher latency than the second.
The other way around would be no problem:
STARTED sent at 09:00:00 and received at 09:00:01 (lower latency)
FINISHED sent at 09:10:00 and received at 09:10:30 (higher latency)
or
STARTED sent at 09:00:00 and received at 09:00:10 (equal latency)
FINISHED sent at 09:10:00 and received at 09:10:10 (equal latency)
because at least 10 minutes elapsed between the receipt of the two messages.
Unfortunately there is no way to ensure the client does not cheat by using timestamps or the like. It does not matter whether your client writes the timestamps into the messages or the protocol does it for you. There are two reasons for that:
- Even if your client does not cheat, the system clocks of client and server might not be in sync.
- All data written into a network packet is just bytes and can be manipulated. Someone could use a raw socket and fake the entire TCP layer.
So the only thing that is certain is the time at which the messages were received by the server. A simple solution would be for the server, when it receives the FINISHED message and decides that not enough time has elapsed, to reply with some sort of RETRY message containing the time left. The client could then extend the construction animation by the remaining time and send the FINISHED message again.
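A minimal sketch of that server-side check, using only the server's own receive times; the struct and function names below are illustrative, not something from the question:

```c
#include <time.h>

#define BUILD_SECONDS (10 * 60)

struct build_job {
    time_t started_at;            /* server time when STARTED was received */
};

/* Call when STARTED is received for this client/job. */
void on_started(struct build_job *job)
{
    job->started_at = time(NULL);
}

/* Call when FINISHED is received; returns the seconds still missing (0 = accept). */
long on_finished(const struct build_job *job)
{
    long elapsed = (long)difftime(time(NULL), job->started_at);
    long remaining = BUILD_SECONDS - elapsed;
    return remaining > 0 ? remaining : 0;
}
```

When on_finished() returns a non-zero value the server would answer with a RETRY message carrying that value; when it returns 0, the build is accepted.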

Related

How to delete duplicate UDP packets from the send queue

I have a UDP implementation with a facility to get an acknowledgement back from the server. The client re-sends packets for which no acknowledgement is received from the server within a specified time. The client sends around 10 packets while waiting for the acknowledgement of the first packet, and then repeats sending the packets for which no acknowledgement was received. This works fine in a normal scenario with only minor network delay.
The real issue shows up on a low-bandwidth connection where the round-trip delay is significant. The client keeps adding packets to the send queue based on acknowledgement timeouts, which results in many duplicate packets being added to the queue.
I have tried to find an elegant solution to avoid duplicate packets in the send queue, with no luck. Any help will be appreciated.
If I could mark/set a property on a packet such that, if it is not sent within NN ms, it is removed from the queue, then I could build an algorithm around that.
UDP has no built-in duplicate detection the way TCP does, which means any such detection has to be done by the application itself. Since the only way an application can interact with the send queue is to send datagrams, any duplicate detection on the sender side has to happen before the packet is put into the send queue.
How you decide at this stage whether a packet is really a duplicate of a previous one and should not be sent, or whether it should be sent because the original got lost, is entirely up to the application. Any "…not sent within NN ms…" logic has to be implemented in the application too, with timers or similar. You might additionally try to get more control over the queue by reducing the size of the send queue with SO_SNDBUF.
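A rough sketch of what such a sender-side check could look like, assuming the application already tags each datagram with a sequence number and marks entries as acked when the acknowledgement arrives; pending[], now_ms() and the constants are illustrative, not an existing API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/time.h>

#define MAX_PENDING 1024
#define RETRY_MS    2000u         /* do not re-queue a packet more often than this */

struct pending_pkt {
    bool     acked;               /* set when the server's acknowledgement arrives */
    uint64_t last_queued_ms;      /* 0 = never queued */
};

static struct pending_pkt pending[MAX_PENDING];

static uint64_t now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000u + (uint64_t)tv.tv_usec / 1000u;
}

/* Decide *before* calling sendto() whether this sequence number should be
 * (re)queued at all; this is where duplicates get filtered out. */
bool should_queue(uint32_t seq)
{
    struct pending_pkt *p = &pending[seq % MAX_PENDING];
    uint64_t now = now_ms();

    if (p->acked)
        return false;                          /* already confirmed, drop */
    if (p->last_queued_ms != 0 && now - p->last_queued_ms < RETRY_MS)
        return false;                          /* a copy is still "in flight" */
    p->last_queued_ms = now;
    return true;
}

/* Optionally shrink the kernel send buffer so fewer datagrams can pile up. */
void shrink_send_queue(int sock)
{
    int bytes = 8 * 1024;
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
}
```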

TCP connection call collision simulation

I am learning socket programming and have a simple simulator where I have both client and server on the same machine, and I am trying to simulate a call collision. To achieve a "collision", the response time between client and server has to be less than 1 microsecond. I used tcpdump to capture data when sending requests and responses between client and server.
I tried adding timing to at least synchronize the disconnection between the two, but the timing results are still more than 1 microsecond.
Any ideas?

recv - When does it start to discard unprocessed packets?

We are using recv(2) to listen for incoming ICMP packets in a network management application which has a status-poll feature to ping all managed hosts.
In a production environment, the number of managed hosts can become pretty large (1,000+), and at the moment, when said status poll is performed, we send out all the ping requests sequentially with no delay in between.
Obviously this leads to many ping replies coming back almost simultaneously, and it appears that our dispatching routine cannot keep up. This seems to cause packets to be dropped, so the ping replies are never actually received by our application. We believe so because many hosts are falsely detected as being "unavailable".
The dispatcher does nothing more than add incoming replies to an "inbox" of sorts, which is processed later by another thread, i.e. it does not take much time and can probably not be optimized any further.
Is it possible that the internal packet buffer used by recv (in the kernel? in the network hardware?) fills up and starts dropping packets? If so, is there a good way to determine a reasonable maximum number of simultaneous pings that can be performed safely (e.g. by getting that buffer size from the OS)?
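For what it's worth, one buffer that does behave this way is the per-socket receive buffer in the kernel: once it is full, further packets are dropped before recv() ever sees them. A small sketch of inspecting and enlarging it via SO_RCVBUF (the 1500-byte-per-reply figure is just an assumption used for a rough estimate):

```c
#include <stdio.h>
#include <sys/socket.h>

void report_rcvbuf(int icmp_sock)
{
    int bufsize = 0;
    socklen_t len = sizeof(bufsize);

    if (getsockopt(icmp_sock, SOL_SOCKET, SO_RCVBUF, &bufsize, &len) == 0)
        printf("receive buffer: %d bytes (~%d replies of 1500 bytes)\n",
               bufsize, bufsize / 1500);

    /* Ask for a larger buffer; the kernel may cap the value (e.g. via
     * net.core.rmem_max on Linux), so read it back to see what was granted. */
    int wanted = 4 * 1024 * 1024;
    setsockopt(icmp_sock, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
    getsockopt(icmp_sock, SOL_SOCKET, SO_RCVBUF, &bufsize, &len);
    printf("receive buffer now: %d bytes\n", bufsize);
}
```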

Should I keep a socket open during a long running process?

I've got some programs that occasionally (anywhere from every few minutes to once an hour) need to send metrics to Graphite. Should I keep the socket to the Graphite server open for the duration of my process or make a new connection every time I need to send some metrics? What are the considerations for doing one or the other?
Sounds like you need a TCP connection.
Whether you should keep the connection active or not depends on points like:
- Would you like to monitor the "connected" clients at the server at any given time?
- Is there a limit on the server side in relation to the previous point?
- How many such clients are "connected" to the server?
- Is it a problem if creating the connection takes some time?
If you keep the connection open, just make sure to send keep-alive messages from time to time (application-level preferred; a sketch follows below).
A large number of clients connected to the server, even when not active, may consume resources such as memory or objects (for example, if there is one thread per connection).
On the other hand, keeping the connection open allows the client to detect a problem with the connection to the server much faster (if that even matters).
It all depends on what is needed.
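For the keep-alive part, here is a minimal sketch of enabling TCP-level keep-alives on the socket; the application-level alternative for Graphite could simply be sending a metric periodically, since the plaintext protocol has no explicit ping. TCP_KEEPIDLE is Linux-specific, hence the #ifdef:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable kernel-level TCP keep-alive probes on an already connected socket. */
void enable_tcp_keepalive(int sock)
{
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

#ifdef TCP_KEEPIDLE
    /* Linux: start probing after 60 seconds of idle time. */
    int idle = 60;
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
#endif
}
```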

Ensuring send() data delivered

Is there any way of checking whether data sent using Winsock's send() or WSASend() was really delivered to the destination?
I'm writing an application talking to a third-party server, which sometimes goes down after working for a while, and I need to know whether the messages sent to that server were delivered or not. The problem is that sometimes a call to send() finishes without error even if the server is already down, and only the next send() finishes with an error, so I have no idea whether the previous message was delivered or not.
I suppose that at the TCP layer there is information about whether certain (or all) packets sent were ACKed, but it is not available through the socket interface (or I cannot find a way to get at it).
Worst of all, I cannot change the code of the server, so I can't get any delivery confirmation messages from it.
I'm sorry, but given what you're trying to achieve, you should realise that even if the TCP stack COULD give you an indication that a particular set of bytes had been ACK'd by the remote TCP stack, it wouldn't actually mean anything different from what you know at the moment.
The problem is that unless you have an application-level ACK from the remote application, one that is only sent once the remote application has actioned the data you sent it, you will never know for sure whether the data was received by the remote application.
Saying "but I can assume it's close enough" is just delusional. You may as well make that assumption when your send() completes, as it's about as valid.
The issue is that even if the TCP stack could tell you that the remote stack had ACK'd the data (1), that's not the same thing as the remote application receiving the data (2), and that is not the same thing as the remote application actually USING the data (3).
Given that the remote application COULD crash at any point, 1, 2 or 3, the only worthwhile indication that the data has arrived is one sent by the remote application after it has used the data for the intended purpose.
Everything else is just wishful thinking.
Not from the return value of send(). All send() tells you is that the data was pushed into the send buffer. A connected stream socket is not guaranteed to send all of the data in one go, only to deliver whatever it does send in order. So you can't assume that your send() goes out in a single packet, or that it will ever arrive at all, given possible network delay or interruption.
If you need a full acknowledgement, you'll have to look at application-level acks (the server sending back a formatted ack message, not just TCP-level ACKs).
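If the protocol did offer such an ack, the client side could look roughly like the sketch below. It is POSIX-style (on Winsock SO_RCVTIMEO takes a DWORD of milliseconds instead of a struct timeval), and send_with_ack() plus the "ACK\n" reply format are invented purely for illustration:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Returns 0 if the peer's ACK arrived within timeout_sec, -1 otherwise. */
int send_with_ack(int sock, const void *msg, size_t len, int timeout_sec)
{
    struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
    char ack[4];

    if (send(sock, msg, len, 0) != (ssize_t)len)
        return -1;                              /* short send or error */

    /* Block in recv() for at most timeout_sec waiting for the reply. */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    ssize_t n = recv(sock, ack, sizeof(ack), 0);
    if (n <= 0)
        return -1;                              /* timeout, error or close */
    return (n == 4 && memcmp(ack, "ACK\n", 4) == 0) ? 0 : -1;
}
```

Since TCP is a byte stream, a real implementation would also loop until the full reply has been read instead of assuming it arrives in a single recv() call.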