How do I determine where one packet ends and another starts - sockets

While sending packets across a network, how can one determine where one packet ends and where another starts?
Is sending/receiving an acknowledgment one way of doing so?

TCP is a stream-based protocol. That is, it provides a stream interface to the application rather than a packet- or message-based one. If using TCP, an application must implement its own method of delimiting packets or messages. For example: (a) all messages are a fixed size, or (b) each message is prefixed with its size, or (c) a special "end-of-record" sequence in the data stream marks a message boundary. Search Google for plenty of information on how to implement message boundaries in TCP.
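To make option (b) concrete, here is a minimal sketch of length-prefixed framing using Python's socket module; the helper names are my own, not part of any standard API:

    import socket
    import struct

    def send_msg(sock, payload):
        # Prefix each message with a 4-byte big-endian length header.
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock, n):
        # TCP is a stream: one recv() may return fewer bytes than requested,
        # so loop until exactly n bytes have arrived.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection mid-message")
            buf += chunk
        return buf

    def recv_msg(sock):
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)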

I assume here that you mean application-level 'packets'.
If you use UDP, you don't need to since it's a message protocol. TCP is a byte streaming protocol, so it cannot send packets, just bytes. If you need to send anything more complex than a byte-stream across TCP, you have to add another protocol on top - HTTP is one such protocol. Text is fairly easy since lines have terminating characters, usually CR/LF/CRLF. Sending non-text messages will require a different protocol.
One approach that is often used with TCP is to connect, stream a protocol-unit, disconnect. This works OK, but slowly because of the huge latency of continually opening and closing TCP connections. HTTP usually works like this in order to serve up web pages to large numbers of users who, if left permanently connected while they viewed pages, would needlessly use up all the server sockets.
Waiting for an application-level ACK from the peer is sometimes necessary if it is absolutely essential that peer receipt is known before the next message is sent, but again, this is slow because of the connection latency. TCP was not designed with this approach in mind.
If the commonly available IP protocols cannot directly provide what you need, you will have to resort to implementing your own.
What sort of 'packet' are you sending?
Rgds,
Martin

With TCP sockets, you just see the data stream, through which you can receive and send bytes. You have no way of knowing where a packet ends and another begins.
This is a feature (and a problem) of TCP. Most people just read data into a buffer until a linefeed (\n) is seen. Then process the data and wait for the next line. If transferring chunks of binary data, one can first inform the receiver of how many bytes of data are coming.
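A rough sketch of the linefeed approach in Python: buffer incoming stream data and split out complete lines. The function name is illustrative only:

    import socket

    def iter_lines(sock):
        # Accumulate stream data and yield one message per '\n' delimiter.
        buf = b""
        while True:
            data = sock.recv(4096)
            if not data:           # peer closed the connection
                if buf:
                    yield buf      # trailing bytes without a final newline
                return
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield line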
If packet boundaries are important, you could use UDP but then the packet order might change or some packets might be lost on the way without you knowing.
The newer SCTP protocol behaves much like TCP (lost packets are resent, packet ordering is retained), but with SCTP sockets you can send packets so that the receiver gets exactly the same packets, with message boundaries preserved.
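For illustration only, a minimal Python sketch of a one-to-one SCTP socket; this assumes a platform where socket.IPPROTO_SCTP is exposed (e.g. Linux with SCTP support in the kernel), and the address is a placeholder:

    import socket

    # SCTP availability varies: socket.IPPROTO_SCTP is only defined where the
    # platform provides it (e.g. Linux with the sctp kernel module loaded).
    sock = socket.socket(socket.AF_INET,
                         socket.SOCK_SEQPACKET,   # preserves message boundaries
                         socket.IPPROTO_SCTP)
    sock.connect(("192.0.2.10", 5000))            # placeholder address
    sock.send(b"one message")    # delivered as one unit, not a byte stream
    sock.close()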

Related

Game server TCP networking sockets - fairness

I'm writing a game server for a turn-based game. One criteria is that the game needs to be as fair for all players as possible.
So far it works like this:
Each client has a TCP connection. (If relevant, the connection is opened via WebSockets)
While running, continually check for incoming socket messages via epoll.
Iterate through clients with sockets ready to read:
Read all messages from the client.
Update the internal game state for each message.
Queue outgoing messages to affected clients.
At the end of each "window" (turn):
Iterate through clients and write all queued outgoing messages to their sockets
My concern for fairness raises the following questions:
Does it matter in which order I send messages to the clients?
Calling write() on all the sockets takes only a fraction of a second for my program, but somewhere in the underlying OS or networking would it make a difference if I sorted the client list?
Perhaps I should be sending to the highest-latency clients first?
Does it matter how I write the outgoing messages to the sockets?
Currently I'm writing them as one large chunk. The size can exceed a single packet.
Would it be faster for the client to begin its processing if I sent messages in chunks smaller than one packet?
Would it be better to write one packet's worth to each client at a time, and iterate over the clients multiple times?
Are there any Linux/networking configurations that would have an impact here?
Thanks in advance for your feedback and tips.
Does it matter in which order I send messages to the clients?
Yes, by fractions of milliseconds. If the network interface is available for sending, the OS will immediately start sending. Why would it wait?
Perhaps I should be sending to the highest-latency clients first?
I think you should be sending in random order. Shuffle the list prior to sending. This makes it fair. I think your question is valid and this should be addressed.
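A minimal sketch of that idea in Python; the shape of the client list (socket plus queued bytes) is my own assumption, not from the question:

    import random
    import socket
    from typing import List, Tuple

    def flush_turn(clients: List[Tuple[socket.socket, bytes]]) -> None:
        # Shuffle each turn so no client is systematically served first or last.
        order = list(clients)
        random.shuffle(order)
        for sock, queued in order:
            sock.sendall(queued)    # this client's queued messages for the turn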
Currently I'm writing them as one large chunk. [...]
First, realize that TCP is stream-based and that there are no packets/messages at the protocol level. On a physical level data is indeed packetized.
It is not necessary to manually split off packets because clients will read data as it arrives anyway. If a client issues a read, that read will complete immediately once the first packet has arrived. There is no artificial waiting in the OS.
Are there any Linux/networking configurations that would have an impact here?
I don't know. Be sure to disable Nagle's algorithm, though.
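For reference, disabling Nagle's algorithm on a Python TCP socket looks like this (TCP_NODELAY is the standard knob):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small writes go out immediately instead of
    # being coalesced while waiting for ACKs of earlier segments.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)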

Packet drop notification in Layer-2

Is there a way I can, in user space, get a notification about a packet being dropped at Layer 2 in 802.11?
According to my understanding, when a packet is sent out on the medium, Layer-2 ACKs are received if it is delivered correctly (if not, the sender retransmits and ultimately drops the packet if it is not delivered after several retries).
I want to be able to access this notification (in user space) and change the behavior of packet transmission.
I want to be able to send the packet to another host from the FIB rather than dropping the packet.
I have read about the libpcap libraries and netfilter hooks, which allow me to capture packets and inject them back into the networking stack.
But I'm not able to find hooks (if there are any for the wireless stack) to help me capture this packet-drop notification at Layer 2.
Please correct me if I'm not understanding something correctly. Also, any heads-up or links to read would be great.
No, you cannot, at least not using the standardised sockets interfaces. 802.11 is a link layer, and the sockets API is strictly link-layer agnostic: unless it's going to work on all link layers, it's not in sockets. There are good reasons for that: the kind of cross-layer interaction that you envision has been tried many times, and it's always turned out more trouble than it's worth.
You didn't give us any details about the application — but the best solution is most probably to change your application-layer protocol to send explicit acknowledgments, and send your data over the fallback route when you fail to receive an ACK.
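To illustrate that suggestion, here is a hedged Python sketch that sends over a primary path and retries over a fallback next-hop when no application-level ACK arrives; the addresses, the b"ACK" reply format, and the timings are all assumptions:

    import socket

    # Placeholder next-hop addresses and a hypothetical b"ACK" reply format.
    PRIMARY = ("192.0.2.1", 9000)
    FALLBACK = ("192.0.2.2", 9000)

    def send_with_fallback(payload, retries=3):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(0.2)                   # how long to wait for each ACK
        for dest in (PRIMARY,) * retries + (FALLBACK,):
            sock.sendto(payload, dest)
            try:
                reply, _ = sock.recvfrom(64)
                if reply == b"ACK":
                    return True
            except socket.timeout:
                continue                       # retry, then fall back
        return False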

use serial communication over TCP / UDP

I have several apps which communicate through serial comm (RS-232 and RS-422), and I would like them to communicate through TCP or UDP without changing them. Another point is that some of the apps must run on Linux.
I would like to know if there are existing tools for that purpose.
Thanks a lot!
Tal
If all you do with your serial port is read and write bytes, and if precise timing is not a concern, then you may be able to replace your serial port object with a TCP socket and send the exact same data over the socket as you would have sent over the port. The biggest complications are that the timing on a TCP socket is much looser than on a serial port, and TCP sockets' mechanisms for sending "out-of-band" data are different from those of a serial port.
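As a rough illustration of that substitution, here is a tiny polling bridge that copies bytes between a serial port and a TCP socket; it assumes the third-party pyserial package, the device path and address are placeholders, and a production bridge would use threads or select rather than alternating timeouts:

    import socket
    import serial   # third-party pyserial package (assumed installed)

    # Placeholder device and address; adjust for your environment.
    ser = serial.Serial("/dev/ttyS0", baudrate=9600, timeout=0.1)
    srv = socket.create_connection(("192.0.2.50", 7000))
    srv.settimeout(0.1)

    while True:
        data = ser.read(4096)        # returns b"" if nothing arrives in time
        if data:
            srv.sendall(data)
        try:
            data = srv.recv(4096)
            if not data:             # TCP peer closed the connection
                break
            ser.write(data)
        except socket.timeout:
            pass                     # nothing from the network this cycle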
I am unaware of any standards for sending serial data via UDP. Conceptually, it would seem like a useful thing to have, since there are many serial-port-based protocols in which it would be more useful to drop data that can't be delivered within a certain time frame than to deliver it late.
For example, suppose the intended recipient of the serial-port data is an embedded controller that will sometimes be so busy that it drops some incoming data, but which will respond within a few milliseconds to everything it does receive. A one-second hiccup on a TCP connection [not unusual] may cause software which expects to be talking directly to the controller to retransmit a command a dozen times. Even if the device were capable of detecting and rejecting the retransmissions it receives, it would be better for the earlier transmission requests to be abandoned than delivered late.
Note that to be useful, a UDP-based scheme would have to include enough wrapper logic to guarantee that packets are never delivered out of order: once the data for a packet has been sent to the serial port, any UDP packets which are received later despite having been sent earlier must be discarded. The recipient should probably also include logic so that if many out-of-sequence packets are received, it waits a little while after receiving any packet whose sequence number does not immediately follow the last one, to see if the missing packets show up before it commits to discarding them.
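A minimal receiver-side sketch of that sequence logic in Python; the 4-byte header, the port, and the serial-output stub are my own assumptions:

    import socket
    import struct

    def write_to_serial(payload):
        ...   # hand the bytes to the serial port here (stub)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 7001))                  # placeholder port

    expected = 0
    while True:
        packet, _ = sock.recvfrom(2048)
        (seq,) = struct.unpack("!I", packet[:4])  # assumed 4-byte sequence header
        if seq < expected:
            continue     # sent earlier but arrived late: discard, never reorder
        # A fuller version would briefly hold packets when seq > expected,
        # giving stragglers a chance to arrive before being discarded.
        expected = seq + 1
        write_to_serial(packet[4:])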
There is no standard tool to do that. I am thinking of developing one. UDP is well suited in this case, since out-of-order packet delivery is practically nonexistent on a short LAN such as yours.
In several projects I have used the free tool Hercules (https://www.hw-group.com/software/hercules-setup-utility) for the prototyping and testing phases. No advertising intended, just a recommendation.

Will a TCP RST cause a host to drop the receive buffer?

Upon receiving a TCP RST packet, will the host drop all the remaining data in the receive buffer that has already been ACKed but not yet read by the application process using the socket?
I'm wondering if it's dangerous to close a socket as soon as I'm no longer interested in what the other host has to say (e.g. to conserve resources); e.g. whether that could cause the other party to lose any data I've already sent but that it has not yet read.
Should RSTs generally be avoided and taken to indicate a complete, bidirectional failure of communication, or are they a relatively safe way to unidirectionally force a connection teardown, as in the example above?
I've found some nice explanations of the topic, they indicate that data loss is quite possible in that case:
http://blog.olivierlanglois.net/index.php/2010/02/06/tcp_rst_flag_subtleties
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable also gives some more information on the topic, and offers a solution that I've used in my code. So far, I've not seen any RSTs sent by my server application.
An application-level close(2) on a socket does not produce an RST but a FIN packet sent to the other side, which results in the normal four-way connection teardown. RSTs are generated by the network stack in response to packets targeting a non-existent TCP connection. (Note, though, that closing a socket while unread data is still pending in its receive buffer can cause the stack to send an RST.)
On the other hand, if you close the socket while the other side still has data to write, its subsequent send(2) calls will eventually fail with EPIPE.
With all of the above in mind, you are much better off designing your own protocol on top of TCP that includes an explicit "logout" or "disconnect" message.
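One common shape for such a teardown, sketched in Python under the assumption of a line-based "logout" message: say goodbye, half-close with shutdown(2), then drain until the peer closes too, so nothing in flight gets hit by an RST:

    import socket

    def graceful_close(sock):
        sock.sendall(b"LOGOUT\n")        # assumed protocol-level goodbye
        sock.shutdown(socket.SHUT_WR)    # send FIN: we promise to write no more
        # Drain whatever the peer still has to say until it closes as well;
        # closing while unread data is pending can make the stack emit an RST.
        while sock.recv(4096):
            pass
        sock.close()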

Sending a huge amount of real time processed data via UDP to iPhone from a server

I'm implementing a remote application. The server will process and render data in real time as an animation (a series of images, to be precise). Each time an image is rendered, it will be transferred to the receiving iPhone client via UDP.
I have studied some UDP and I am aware of the following:
A UDP datagram has a maximum size of about 65 KB.
However, the iPhone seems to be able to receive UDP packets of only about 41 KB at most; it appears unable to receive anything larger.
When sending multiple packets, many packets are dropped. This appears to be because the UDP processing is overwhelmed by the oversized packets.
Reducing the packet size increases the proportion of packets that are not dropped, but it means more packets must be sent.
I have never written a real, practical UDP application before, so I need some guidance on efficient UDP communication. In this case, we are talking about transferring rendered images in real time from the server to be displayed on the iPhone.
Compressing the data seems mandatory, but in this question I would like to focus on the UDP part. Normally, when we implement UDP applications, what are the best practices for efficient UDP programming if we need to send a lot of data non-stop in real time?
Assuming that you have a very specific and good reason for using UDP, and that you need all your data to arrive (i.e. you can't tolerate any lost data), then there are a few things you need to do (this assumes a unicast application); a minimal sketch follows the list:
Add a sequence number to the header of each packet.
ACK each packet.
Set up a retransmit timer which resends the packet if no ACK is received.
Track the round-trip time (RTT) so you know how long to set your timers for.
Potentially deal with out-of-order data arrival, if that's important to your app.
Increase the receive buffer size on the client socket.
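As a minimal illustration of the first four items, here is a stop-and-wait sketch in Python; real implementations pipeline with a sliding window, and the 4-byte header and timing values are assumptions:

    import socket
    import struct
    import time

    def send_reliable(sock, dest, payload, seq, rto=0.2, retries=5):
        """Stop-and-wait: retransmit until the matching ACK arrives; returns the RTT."""
        packet = struct.pack("!I", seq) + payload   # 4-byte sequence header (assumed)
        sock.settimeout(rto)                        # retransmit timer
        for _ in range(retries):
            start = time.monotonic()
            sock.sendto(packet, dest)
            try:
                ack, _ = sock.recvfrom(8)
                if struct.unpack("!I", ack[:4])[0] == seq:
                    return time.monotonic() - start  # measured RTT tunes future rto
            except socket.timeout:
                continue                             # no ACK in time: resend
        raise TimeoutError(f"no ACK for seq {seq}")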
Also, you could be sending so fast that you are dropping packets internally on the sending machine without them even getting out of the NIC onto the wire. On certain systems, calling select() for writability on the sending socket can help with this. Also, calling connect() on the UDP socket can improve performance, leading to fewer dropped packets.
Basically, if you need guaranteed in-order delivery of your data, then you are going to re-implement TCP on top of UDP. If the only reason you use UDP is latency, then you can probably use TCP and disable the Nagle algorithm. If you want packetized data with reliable, low-latency delivery, another possibility is SCTP, also with Nagle disabled. It can also provide out-of-order delivery to speed things up even more.
I would recommend Stevens' "Unix Network Programming", which has a section on advanced UDP and on when it's appropriate to use UDP instead of TCP. As a note, he recommends against using UDP for bulk data transfer, although the reality is that this is becoming much more common these days for streaming multimedia apps.
Small packets are probably better than large packets :-)