Linux: checking of incoming UDP datagrams - sockets

I'm working with special-purpose hardware that is connected on a 10G Ethernet link. I've got some questions on the handling of incoming datagrams, as follows:
What happens if the NIC discovers an incorrect link-level Ethernet CRC? Some searching shows that errors may not be reliably reported (here, for instance). Can I expect to get better stats from more recent kernels (2.6 - 3.10)?
What does the kernel actually check before deciding whether to return a packet to a recv? I'm guessing that for IPv4, the IPv4 header checksum must be correct, but what about the optional UDP header checksum?
Can recv ever return 0 for a UDP/SOCK_DGRAM?
For a non-blocking SOCK_DGRAM socket, does recv always return the entire packet when data is available? I guess it has to, but it's not obvious from the docs.
Thanks.

My knowledge may be out of date here, but historically, packets with FCS errors were not delivered at all and were not counted toward the interface statistics. The Ethernet layer error counts are usually reported by ethtool -S <interface>. The problem has always been that the interface statistics were maintained above the driver level and there was no standard API internally for network drivers to report those statistics. (Also, of course, in the very old days of 10Mb half duplex, collisions happened pretty frequently and Ethernet layer statistics weren't terribly informative as to your own adapter's behavior.)
You should not receive a packet if its IP header checksum is wrong, or if the UDP checksum is wrong when a checksum is provided (i.e. non-zero).
Yes. If you provide a zero length buffer, you will receive the next incoming datagram but then the entire content will be truncated, resulting in a return value of zero. Also, UDP permits zero-length datagrams: so if you receive a datagram with no content, the return value would also be zero. Aside from those two cases, I don't believe you'll get a return value of zero.
Yes, you should get the entire datagram provided there is space in your buffer. Otherwise, no. If you don't provide enough space to hold the entire datagram, the part that doesn't fit is discarded (i.e. your next recv will get a subsequent packet, not the end of the truncated one).
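If it helps, here is a minimal sketch (assuming Linux, where MSG_TRUNC on a datagram socket makes recv() return the real datagram length) of how a receiver can tell a genuinely zero-length datagram apart from one that was truncated to fit the buffer. The function name and error handling are just illustrative:

```c
/* Sketch: distinguishing an empty datagram from a truncated one on Linux. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

ssize_t read_datagram(int fd, char *buf, size_t buflen)
{
    /* With MSG_TRUNC, Linux reports the real datagram length even if it
     * did not fit in buf, so n > buflen means the tail was discarded. */
    ssize_t n = recv(fd, buf, buflen, MSG_TRUNC);
    if (n < 0)
        return -1;                      /* errno describes the failure */
    if ((size_t)n > buflen)
        fprintf(stderr, "datagram truncated: kept %zu of %zd bytes\n",
                buflen, n);
    else if (n == 0)
        fprintf(stderr, "zero-length datagram received\n");
    return n;
}
```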

Related

How bad is ip fragmentation

I understand that when sending IP packets around, each hop in the network path between me and my packet's destination will check whether the packet is bigger than the next hop's MTU. If so, the packet will be fragmented and the fragments will be sent separately to the next hop, only to be reassembled at the destination (or, in some cases, at the first NAT router encountered).
As far as I understand, this thing can be pretty bad, but I don't really understand why.
I understand that if the connection tends to drop a lot of packets, losing a single fragment means I have to resend the whole packet (this is actually the only thing I figured out myself)
Is there a chance that instead of being fragmented my packet will just be dropped?
How are packet fragments identified? Can I be 100% sure that they will be reassembled correctly? For example, if I send two IP packets of the same length nearly simultaneously to the same destination, how likely is it that fragments of the two will be swapped, like AAA, BBB reassembled into ABA, BAB?
In principle, if packets aren't dropped and fragments are reassembled correctly, actually using packet fragmentation seems like a good way to save on local bandwidth, avoiding the need to send more and more headers instead of just one big packet.
Thank you
IP fragmentation can cause several problems:
1) Application layer loss is increased
As you mentioned, if a single fragment is dropped, the entire layer 4 packet will be lost. Thus, for a network with a small random packet loss rate, the application layer loss rate is increased by a factor approximately equal to the number of fragments for each layer 4 packet.
2) Not all networks handle fragmented packets
Some systems, such as Google's Compute Engine, do not reassemble fragmented packets.
3) Fragmentation can cause re-ordering
When routers split traffic down parallel paths, they may try to keep packets from the same flow on a single path. Because only the first fragment has layer 4 information like UDP/TCP port number, subsequent fragments may be routed down a different path, delaying assembly of the layer 4 packet and causing re-ordering.
4) Fragmentation can cause confusing behavior that is hard to debug
For example, if you send two UDP streams, A and B, from one source to a destination running Linux, the destination may discard packets from one of the streams. This is because by default, Linux "times out" fragment queues if more than 64 other fragments have been received from the same source. If stream A has a much higher data rate than stream B, 64 fragments from stream A may arrive in between the fragments from stream B, causing the B fragment to be dropped.
Thus, while IP fragmentation can reduce overhead by minimizing user headers, it may cause more trouble than it is worth.
To my knowledge, the only case where a packet will be dropped rather than fragmented (barring cases where it would be dropped anyway) is a packet marked "don't fragment". Such packets are to be discarded rather than being fragmented.
Fragmented packets have identifier, fragment offset, and more fragments fields in their headers that, when combined, allow the destination host to reliably reassemble the packet upon receipt of all the fragments. The first fragment's offset is zero, and the last fragment has the more fragments flag set to zero. It is still possible (although very unlikely) to reassemble an incorrect packet if two packets' headers are mutated so their fragment offsets are exchanged, but their checksums are still valid. The probability of this happening is essentially zero. Bear in mind that IP does not provide any mechanism for ensuring the integrity of the data payload, only the integrity of the control information in the header.
Packet fragmentation necessarily wastes bandwidth because each fragment has a copy of [most of] the original datagram's header. Packets can be fragmented down to only 8 bytes of payload per fragment, so a maximum-sized packet of 60 + 65536 bytes could be fragmented into 60 * 8192 + 65536 bytes, an increase in bytes on the wire of about 750% in the worst case. The only example I can come up with where you would come out ahead is if you fragmented a packet in order to send its fragments in parallel using some kind of frequency-division multiplexing scheme with the knowledge that the other channels are free. At that point, it still seems like it would require more work than would be saved to detect that circumstance and divide the packet rather than just sending it.
All the basic details about the mechanics of packet fragmentation in IP can be found in IETF RFC 791, if you're hungry for more information.

Does UDP allow repacketization?

I know that for TCP you can have for example Nagle's Algorithm enabled. However, can you have something similar for UDP?
Practical question (assume a UDP socket):
If I call send() two times in a short period of time, with 1 byte of data in each send() call, is it possible that the transport layer decides to send only one UDP packet with the 1 byte + 1 byte = 2 bytes of data?
Thanks in advance!
No. UDP datagrams are delivered intact exactly as sent, or not at all.
Not according to the RFC (RFC 768). On top of the underlying IP facilities, UDP really only adds port-based routing and a little bit of extra detection for corruption or misrouting.
That means there's no facility to combine datagrams. In fact, since UDP is meant to be transaction-oriented, I would say that combining two transactions into one may well be a bad idea in terms of keeping those transactions separate.
Otherwise, you would need a layer above UDP which could figure out how to extract these transactions from a datagram. At the moment, that's not necessary since the datagram is the transaction.
As added support (though not, of course, definitive) for this contention, see the UDP wikipedia page:
Datagrams – Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent.
However, the best support for it comes from one of its clients. UDP was specially engineered for TFTP (among other things) and that protocol breaks down if you cannot distinguish a transaction.
Specifically, one of the TFTP transaction types is the data transaction which consists of an opcode, block number and up to 512 bytes of data. Without a length indication at the start or a sentinel value at the end, there is no way to work out where the next transaction would start unless there is a one-to-one mapping between transaction and datagram.
As an aside, the other four TFTP transaction types have either a fixed length or end-of-string sentinel values but the data transaction is the decider here.
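To make the one-datagram-per-send() behaviour concrete, here is a small self-contained sketch; the loopback address and port 9999 are arbitrary choices for the demo, and error checking is omitted for brevity:

```c
/* Sketch: two 1-byte sends on a UDP socket arrive as two separate
 * datagrams; they are never coalesced into one 2-byte datagram. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(9999),
                                .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    bind(rx, (struct sockaddr *)&addr, sizeof addr);

    sendto(tx, "A", 1, 0, (struct sockaddr *)&addr, sizeof addr);
    sendto(tx, "B", 1, 0, (struct sockaddr *)&addr, sizeof addr);

    char buf[64];
    ssize_t n1 = recv(rx, buf, sizeof buf, 0);   /* returns 1 ("A") */
    ssize_t n2 = recv(rx, buf, sizeof buf, 0);   /* returns 1 ("B"), never 2 */
    printf("first recv: %zd bytes, second recv: %zd bytes\n", n1, n2);

    close(tx);
    close(rx);
    return 0;
}
```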

Is a successful send() "atomic"?

Does a successful call to send() with the number returned equal to the amount specified in the size parameter guarantee that no "partial sends" will occur?
Or is there some way that the OS might be interrupted while servicing the system call, send part of the data, wait for a possibly long time, then send the rest and return without notifying me with a smaller return value?
I'm not talking about a case where there is not enough room in the kernel buffer; I realize that I would then get a smaller return value and have to try again.
Update:
Based on the answers so far, my question could be rephrased as follows:
Is there any way for packets/data to be sent over the wire before the call to send() returns?
Does a successful call to send() with the number returned equal to the amount specified in the size parameter guarantee that no "partial sends" will occur?
No, it's possible that part of your data gets passed over the wire, and another part only goes as far as being copied into the internal buffers of the local TCP stack. send() returns the number of bytes passed to the local TCP stack, not the number of bytes that get passed onto the wire (and even if the data reaches the wire, it might not reach the peer).
Or is there some way that the OS might be interrupted while servicing the system call, send part of the data, wait for a possibly long time, then send the rest and return without notifying me with a smaller return value?
As send() only returns the number of bytes passed into the local TCP stack, not whether send() actually sends anything, you can't really distinguish these two cases anyway. But yes, it's possible that only some of the data makes it over the wire. Even if there's enough space in the local buffer, the peer might not have enough space. If you send 2 bytes, but the peer only has room for 1 more byte, 1 byte might be sent and the other will reside in the local TCP stack until the peer has enough room again.
(That's an extreme example; most TCP stacks protect against sending such small segments of data at a time, but the same applies if you try to send 4k of data and the peer only has room for 3k.)
I'm not talking about a case where there is not enough room in the kernel buffer; I realize that I would then get a smaller return value and have to try again
That will only happen if your socket is non-blocking. If it's blocking and the local buffers are full, send() will wait until there's room in the local buffers again (or it might return a short count if part of the data was delivered but an error occurred in the meantime).
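To make the "short count" case concrete, here is a minimal sketch of the usual retry loop around send(); the function name is my own, and the EAGAIN branch would normally wait with poll()/select() rather than retry immediately:

```c
/* Sketch: retry send() until every byte has been handed to the local
 * TCP stack, handling partial writes and interruptions. */
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

int send_all(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, 0);
        if (n > 0) {
            off += (size_t)n;            /* some bytes accepted, keep going */
        } else if (n < 0 && errno == EINTR) {
            continue;                    /* interrupted by a signal, retry */
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            continue;                    /* non-blocking socket, buffer full:
                                            wait with poll() in real code */
        } else {
            return -1;                   /* genuine error */
        }
    }
    return 0;                            /* everything queued locally */
}
```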
Edit to answer:
Is there any way for packets/data to be sent over the wire before the call to send() returns?
Yes. That might happen for many reasons.
e.g.
The local buffers get filled up by that recent send() call, and you use blocking I/O.
The TCP stack sends your data over the wire but decides to schedule other processes to run before the sending process returns from send().
Though this depends on the protocol you are using, the general answer is no.
For TCP the data gets buffered inside the kernel and then sent out at the discretion of the TCP packetization algorithm, which is pretty hairy: it keeps multiple timers and minds the path MTU to try to avoid IP fragmentation.
For UDP you can only assume this kind of "atomicity" if your datagram does not exceed the link frame size (the usual value is 1472 = 1500-byte Ethernet frame - 20 bytes of IP header - 8 bytes of UDP header). Otherwise your sending host will have to IP-fragment the datagram.
Then intermediate routers can still IP-fragment the passing packet if their outgoing link MTU is less than the packet size.
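On Linux (this is a Linux-specific option, so treat it as an assumption for other systems), a connected UDP socket can ask the kernel for its current path-MTU estimate, which is one way to pick a datagram size that avoids fragmentation; the function name below is my own:

```c
/* Sketch: query the kernel's path-MTU estimate for a connect()ed UDP
 * socket and derive the largest unfragmented payload from it. */
#include <netinet/in.h>   /* IPPROTO_IP, IP_MTU (Linux) */
#include <stdio.h>
#include <sys/socket.h>

void print_path_mtu(int fd)   /* fd must be a connected UDP socket */
{
    int mtu = 0;
    socklen_t len = sizeof mtu;
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        /* subtract the 20-byte IP and 8-byte UDP headers */
        printf("path MTU %d -> max unfragmented UDP payload %d\n",
               mtu, mtu - 28);
}
```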

Benefits of "Don't Fragment" on TCP Packets?

One of our customers is having trouble submitting data from our application (on their PC) to a server (different geographical location). When sending packets under 1100 bytes everything works fine, but above this we see TCP retransmitting the packet every few seconds and getting no response. The packets we are using for testing are about 1400 bytes (but less than 1472). I can send an ICMP ping to www.google.com that is 1472 bytes and get a response (so it's not their router/first few hops).
I found that our application sets the DF flag for these packets, and I believe a router along the way to the server has an MTU less than or equal to 1100 and is dropping the packet.
This affects 1 client in 5000, but since everybody's routes will be different this is expected.
The data is a SOAP envelope and we expect a SOAP response back. I can't justify WHY we do it; the code to do this was written by a previous developer.
So... Are there any benefits OR justification to setting the DF flag on TCP packets for application data?
I can think of reasons it is needed for network diagnostics applications but not in our situation (we want the data to get to the endpoint, fragmented or not). One of our sysadmins said that it might have something to do with us using SSL, but as far as I know SSL is like a stream and regardless of fragmentation, as long as the stream is rebuilt at the end, there's no problem.
If there's no good justification I will be changing the behaviour of our application.
Thanks in advance.
The DF flag is typically set on IP packets carrying TCP segments.
This is because a TCP connection can dynamically change its segment size to match the path MTU, and better overall performance is achieved when the TCP segments are each carried in one IP packet.
So TCP packets have the DF flag set, which should cause an ICMP Fragmentation Needed packet to be returned if an intermediate router has to discard a packet because it's too large. The sending TCP will then reduce its estimate of the connection's Path MTU (Maximum Transmission Unit) and re-send in smaller segments. If DF wasn't set, the sending TCP would never know that it was sending segments that are too large. This process is called PMTU-D ("Path MTU Discovery").
If the ICMP Fragmentation Needed packets aren't getting through, then you're dealing with a broken network. Ideally the first step would be to identify the misconfigured device and have it corrected; however, if that doesn't work out then you add a configuration knob to your application that tells it to set the TCP_MAXSEG socket option with setsockopt(). (A typical example of a misconfigured device is a router or firewall that's been configured by an inexperienced network administrator to drop all ICMP, not realising that Fragmentation Needed packets are required by TCP PMTU-D).
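As a sketch of that configuration knob: the helper name is my own, and the 1060-byte value is only an illustrative guess at 1100 minus TCP/IP header overhead. Note that TCP_MAXSEG must be set before the connection is established for it to affect the MSS advertised in the SYN:

```c
/* Sketch: clamp the TCP MSS as a workaround for broken PMTU discovery. */
#include <netinet/in.h>    /* IPPROTO_TCP */
#include <netinet/tcp.h>   /* TCP_MAXSEG */
#include <sys/socket.h>

int clamp_mss(int fd, int mss)   /* call before connect() */
{
    return setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof mss);
}

/* Usage (illustrative): clamp_mss(sock, 1060); then connect() as usual. */
```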
The operation of Path-MTU discovery is described in RFC 1191, https://www.rfc-editor.org/rfc/rfc1191.
It is better for TCP to discover the Path-MTU than to have every packet over a certain size fragmented into two pieces (typically one large and one small).
Apparently, some protocols like NFS benefit from avoiding fragmentation. However, you're right in that you typically shouldn't be requesting DF unless you really require it.

Raw Socket Receive Buffer

We are currently testing a telecom application over IP. We open a raw socket and receive messages from the remote side (message rate roughly 750+ messages/second, approximate size 180 bytes excluding IP).
On top of the raw socket sits a layer called SCTP (just like TCP) which indicates every now and then that it is missing some packets. Now, we are running Wireshark on the receiving node and we can see those packets in Wireshark.
It looks to me as if the receive buffer of the socket is too small, causing IP(?) to drop messages. However, the IP statistics (netstat -sv) show NO dropped packets. We have tried setting the socket receive queue to 40000 without any success.
I would appreciate any pointers as to what option, if any, of IP layer should we be configuring or is there any specific socket option that we need to set.
Thanks for your inputs. However, we have been able to "solve" this problem.
Earlier, I described how we read messages.
Once select() returns, we run a loop (up to the number of raw messages to read, which was >1 in our case):
1) we call ioctl(FIONREAD) to find the number of bytes to read;
2) read that many bytes by calling recvfrom();
3) pass the bytes up to the user;
4) go into the loop again, call ioctl(FIONREAD), and repeat the steps.
However, at step 4, ioctl(FIONREAD) used to return 0. Our code had a defensive check: it assumed that 0 bytes from ioctl(FIONREAD) means the sender has sent an IP header with a 0-byte payload. Therefore, it used to call recvfrom() with bytes-to-read=0 to flush out the IP header, lest select() fire again on it.
At time t0, ioctl(FIONREAD) returns 0 as the number of bytes to read.
At time t1, recvfrom() with bytes-to-read=0 is called.
Sometimes, between t0 and t1, actual data would get queued in the socket receive queue and would get discarded because we were calling recvfrom() with bytes=0.
Setting the number of rawMsgsToRead to 1 has "solved" this problem. However, my guess is it will impact our performance. Is there any ioctl call which can differentiate between zero octets in the queue and an IP header with a zero-length payload?
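One way to keep rawMsgsToRead > 1 without relying on FIONREAD at all is to keep calling recvfrom() with MSG_DONTWAIT until the kernel reports EAGAIN, so an empty queue is never mistaken for a zero-payload packet. This is only a sketch; the buffer size and the deliver() callback are placeholders for your own code:

```c
/* Sketch: drain a raw socket after select() without FIONREAD. */
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

void drain_socket(int fd, void (*deliver)(const char *, ssize_t))
{
    char buf[2048];                       /* assumed large enough per message */
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, MSG_DONTWAIT, NULL, NULL);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;                    /* queue really is empty */
            if (errno == EINTR)
                continue;                 /* interrupted, retry */
            break;                        /* genuine error; handle as needed */
        }
        deliver(buf, n);                  /* pass the datagram up to the user */
    }
}
```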
I have a few questions and a few things to think about.
1) Which implementation of SCTP are you using, and on which OS? Some SCTP implementations are more robust than others.
2) Is SCTP negatively acknowledging the dropped packets? Search for gap acks in Wireshark.
3) And where you see the dropped packets in Wireshark, are you sure that these are not retransmissions?
4) Where in the system is Wireshark monitoring? If it is not on the same wire as your application then it may be seeing messages which your application doesn't.
5) What exactly is the indication SCTP is giving?
If you believe that the IP socket rx buffer is overflowing then you could consider reducing the size of the SCTP RX window; this is often configurable in SCTP stacks. The RX window limits the amount of data that can be outstanding awaiting acknowledgement and consequently restricts the amount of data which could be in the IP buffer.
You could also try raising the priority of your SCTP task so that it more quickly reads messages out of the IP buffers (This may be the easiest thing to try and in my opinion a good thing to do).
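If you do revisit the receive-buffer size, a sketch like the following (the 1 MiB request is an arbitrary illustration) both asks for a larger buffer and reads back what the kernel actually granted; on Linux the effective size is capped by net.core.rmem_max, and the value reported by getsockopt() is doubled to account for kernel bookkeeping:

```c
/* Sketch: enlarge the socket receive buffer and verify what was applied. */
#include <stdio.h>
#include <sys/socket.h>

void grow_rcvbuf(int fd)
{
    int req = 1 * 1024 * 1024;            /* request 1 MiB (illustrative) */
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &req, sizeof req);

    int got = 0;
    socklen_t len = sizeof got;
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len);
    /* On Linux the reported value includes kernel overhead (doubled). */
    printf("receive buffer now %d bytes as reported by the kernel\n", got);
}
```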
Regards