How does a layer-2 switch handle a datagram bigger than the MTU? - ethernet

If a datagram is bigger than the MTU, will a layer-2 switch drop it? Can a layer-2 switch report this via ICMP? If it does not report via ICMP, how can I determine the data size that will pass through the switch successfully?

If a datagram is bigger than the MTU, will a layer-2 switch drop it?
Yes. A switch does not forward frames larger than the (configured) maximum size and drops them. For standard Ethernet, that's 1500 bytes of payload plus 18 bytes of L2 overhead (source and destination MAC, EtherType, and FCS), for a 1518-byte maximum frame. Note that MTU is an L3 term referring to the maximum packet size that an underlying network can transport.
Does a layer-2 switch report ICMP?
No. A layer-2 switch generally sends no ICMP messages, nor is there an ICMP message type for reporting oversized frames at L2.
A layer-3 switch used as a gateway should return an ICMP "Fragmentation Needed" (or ICMPv6 "Packet Too Big") message when the destination network's MTU does not admit the IP packet without fragmentation and either the DF bit is set or IPv6 is used. For IPv4 without DF, the gateway just fragments the packet.
If it does not report via ICMP, how can I determine the data size that will pass through the switch successfully?
On an unmanaged switch, see above for the maximum standard size. A few support jumbo frames; check their documentation. On some managed switches you can configure the maximum frame size globally or per VLAN. Methods and syntax vary.
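If you want to probe the usable datagram size from an attached host, here is a minimal Go sketch (Linux-only; the target address 192.0.2.1:9999 is hypothetical). It sets IP_MTU_DISCOVER to IP_PMTUDISC_DO, which sets the DF bit and makes the kernel reject sends larger than the known path MTU with EMSGSIZE instead of fragmenting. Note that this exercises the local interface MTU and the kernel's path-MTU cache rather than the switch itself:

package main

import (
	"fmt"
	"net"
	"syscall"
)

func main() {
	conn, err := net.Dial("udp4", "192.0.2.1:9999") // hypothetical target
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	raw, _ := conn.(*net.UDPConn).SyscallConn()
	raw.Control(func(fd uintptr) {
		// Forbid fragmentation: sends larger than the path MTU fail with EMSGSIZE.
		syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP,
			syscall.IP_MTU_DISCOVER, syscall.IP_PMTUDISC_DO)
	})

	// Walk downward until a datagram is accepted; 1472 = 1500 - 20 (IP) - 8 (UDP).
	for size := 1472; size > 0; size -= 4 {
		if _, err := conn.Write(make([]byte, size)); err == nil {
			fmt.Println("datagram with", size, "byte payload accepted")
			break
		}
	}
}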

Related

How to decrease the UDP backlog to one packet at a time?

Most articles are about how to increase a UDP socket's receive buffer size to handle more packets, but I need a solution that decreases the UDP receive buffer to accept only one packet at a time and discard/drop all other packets until that packet is read.
I'm trying to do this on Linux, and did some network stack tuning, like setting the SO_RCVBUF and SO_RCVBUFFORCE socket options, but that didn't work. I cannot reduce SO_RCVBUF lower than 2046 B (maybe one memory page), even when setting udp_rmem_min to 0.
Why can't I set the UDP SO_RCVBUF lower than 2046?
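For reference, a minimal Go sketch of that kind of tuning attempt (Linux; SO_RCVBUFFORCE requires CAP_NET_ADMIN, and the requested value of 256 is arbitrary). It asks for a tiny buffer and reads back what the kernel actually applied, which is where the floor becomes visible:

package main

import (
	"fmt"
	"net"
	"syscall"
)

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 0}) // ephemeral port
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	raw, _ := conn.SyscallConn()
	raw.Control(func(fd uintptr) {
		// SO_RCVBUFFORCE bypasses rmem_max, but the kernel still enforces an
		// internal minimum, which is why very small values do not stick.
		syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUFFORCE, 256)
		got, _ := syscall.GetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUF)
		fmt.Println("kernel applied SO_RCVBUF =", got, "bytes")
	})
}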

How do UDP SetWriteBuffer and SetReadBuffer affect the OS's buffers?

Description
I'm busy writing a high-frequency UDP server in Go. I'd estimate at least 1000 packets/second both ways.
However, as the size of the data I'm sending over the UDP socket grew, I eventually ran into the following error: read udp 127.0.0.1:1541->127.0.0.1:9737: wsarecv: A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
I eventually just grew the size of the buffers I was reading from and writing into as follows:
buffer := make([]byte, 64 * 1024 * 1024) // used to just be 1024
l, err := s.socketSim.Read(buffer)
This worked fine and I stopped getting the error... However, I then came across two functions inside the net package:
s.socketSim.SetWriteBuffer(64 * 1024 * 1024)
s.socketSim.SetReadBuffer(64 * 1024 * 1024)
I learned that these two act on the operating system's transmit and receive buffers.
Question
Do I even care to set the operating system buffer size, and why? How does the size of the application buffer impact the size of the operating system buffer? Should they always be the same, and how big should/can they become?
First, not only do you have an MTU size for each interface on your device and whatever destination you're sending to or receiving from, but there is also an MTU size for each device in between. For this reason, as others have mentioned, you might want to use what is generally accepted for the MTU, since you might not control every device in the data route. In the case of UDP, the MTU really just means how big a datagram can be before fragmenting.
Second, you almost certainly want your SND/RCV buffers to be larger than the MTU. These are kernel buffers which hold on to data when you're not ready to receive it. A larger UDP RCV buffer means that the kernel will buffer more packets for you before dropping them into the abyss. Maybe you have some non-trivial work to do for each packet. Depending on the bitrate, you might want a larger or smaller kernel buffer.
Finally, you're using UDP. There is no guarantee that you'll receive packets in order or at all. Any router in between you and a peer could decide to drop the packet for any reason. Since you're using UDP, you should prepare for dropped and out-of-order packets. You also might need some sort of retransmission mechanism, which further complicates things.
Or you might consider using TCP if dropped packets are unacceptable, knowing that timing is indeterminate.
If you're on Linux, you can see the current buffer sizes in /proc/sys/net. Usually the kernel will double what you ask for.
Also, you can tune your buffer size by watching for packet drops in /proc/net/udp. If you see drops, you might want to make your receive buffer bigger, especially if the data is bursty and the processing intensive. If your data is coming in at a consistent rate and you're still dropping packets, then you aren't processing them fast enough.
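A small Go sketch of that monitoring idea (Linux; it assumes the usual /proc/net/udp layout, where the last column is the per-socket drop counter):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/net/udp lists one socket per line; the last field is "drops".
	f, err := os.Open("/proc/net/udp")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Scan() // skip the header line
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		local, drops := fields[1], fields[len(fields)-1]
		if drops != "0" {
			fmt.Println("socket", local, "has dropped", drops, "datagrams")
		}
	}
}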

link speed vs throughput

I am new to networking. While doing an experiment on the File Transfer Protocol (wired connection), I have to calculate the time taken to transfer one file from source to destination.
For calculating the file transfer time, I require the file size as well as the link speed.
Can anyone please explain what this link speed is and how to calculate it?
Is it the same as the PHY rate?
Do PHY rates exist for wired connections, or only for wireless connections?
And also, please explain the difference between PHY rate, link speed, and throughput.
Thanks in advance.
You will need to consider the whole protocol stack for the exercise:
FTP
TCP
IP
Ethernet
PHY
Each of these layers reduces the raw PHY rate.
On the Ethernet and IP layers, it's quite simple: each frame in these protocols has a maximum size (the MTU), and a fixed amount of space needs to be allocated for the header of each frame.
After subtracting the overhead for the headers, you have the throughput via IP.
For TCP, we can ignore the data overhead for now, as the main factor is the additional round trips it adds. Let's only deal with the handshake and ignore the other details. This means that for the SYN-ACK-ACK sequence, we have to account for twice the delay before the link is established from the client's side.
For FTP, let's also assume the simplest case: anonymous login, active transfer, no encoding. That adds one more round trip before the actual data transfer starts.
Why did we choose to ignore the data size in the FTP and TCP protocols? Because for all modern link speeds, it is completely masked by the delay.
So in total, your effective data rate is PHY rate * Ethernet efficiency * IP efficiency, and your transfer time is file size / (effective data rate) + 4 * delay.
Choosing a different transfer encoding in FTP would add another factor to the rate product. Accounting for TCP window scaling, retransmissions, login via FTP, etc. would add more round trips.
There could also be additional protocols in that stack, introducing further overhead. E.g. network tunnels.
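As a worked example of that formula, here is a short Go sketch; every number in it is an assumption picked for illustration, not taken from the question:

package main

import "fmt"

func main() {
	phyRate := 100e6 / 8.0      // 100 Mbit/s PHY, in bytes/s
	ethEff := 1500.0 / 1538.0   // 1500 B payload per 1538 B on the wire (header, FCS, preamble, IFG)
	ipTCPEff := 1460.0 / 1500.0 // 1460 B MSS per 1500 B MTU (20 B IP + 20 B TCP headers)
	fileSize := 10e6            // 10 MB file
	delay := 0.020              // 20 ms one-way delay

	rate := phyRate * ethEff * ipTCPEff
	t := fileSize/rate + 4*delay // 4 one-way delays: TCP handshake + FTP setup
	fmt.Printf("effective rate: %.1f MB/s, transfer time: %.2f s\n", rate/1e6, t)
}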

Under what conditions might the UDP packet loss rate increase?

Might the UDP packet loss percentage increase with packet size? For example, if I send 100,000 packets, in the first try the byte[] size is 30, but in the second it is 300. Could packet size play a role in the drop probability, or is the packet loss percentage independent of size?
Packet loss does depend on the size of the packet, for several reasons.
IP packets can be up to roughly 64 KB, but they are fragmented down to the Ethernet MTU (around 1500 bytes), and if any one of those fragments gets lost, the whole IP packet is dropped. Larger packets span more fragments, so when traffic is high, the probability is higher that a larger packet will be dropped.
There is more to it than just that. Internally, a protocol stack is implemented using buffers that can be a lot smaller than the MTU; these can vary from around 300 bytes upward. The point is that these buffers are also a limited resource: if the network device runs out of buffers, the packet will be dropped as well.
If you don't know the MTU of the network in question, then according to the link below, a 512-byte UDP payload is considered reasonable, allowing a margin for other header information that you may not have anticipated.
What is the largest Safe UDP Packet Size on the Internet
Because you're sending larger packets, yes, it could increase the chance that packets are dropped.
Now if you compare sending 100,000 packets of 30 bytes with sending 10,000 packets of 300 bytes: even though the user data is the same, the total size on the wire is larger for the small packets, because every packet carries its own headers.
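To make that concrete, here is a small Go sketch comparing the two scenarios; the per-packet overhead figure is an assumption (8 B UDP + 20 B IPv4 + 18 B Ethernet + 20 B preamble/inter-frame gap):

package main

import "fmt"

func main() {
	const overhead = 8 + 20 + 18 + 20 // assumed per-packet bytes of headers and framing
	for _, c := range []struct{ n, payload int }{{100000, 30}, {10000, 300}} {
		total := c.n * (c.payload + overhead)
		fmt.Printf("%6d packets x %3d B payload -> %d bytes on the wire\n",
			c.n, c.payload, total)
	}
}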

Receive buffer size for TCP/IP sockets

What's the maximum data size one should expect in a receive operation? The data that has to be sent is very large, but at some point there will be packet fragmentation, I guess?
You can always limit the size of the buffer that recv() will fill via its length parameter.
Your application design should not be sensitive to the amount of bytes recv() is willing to provide in one call.
It has little to do with the MTU. In some TCP stack designs, one call to recv() will not return more than one datagram of the underlying packet protocol. In others, it may return as much as the socket's receive buffer holds.
There is something like a maximum network packet size:
MTU
This indicates the maximum size of the low-level buffer (ISO/OSI layer 3, IP) during data transfer over a network (not loopback).
It is typically 1500 bytes in Ethernet networks (1492 over PPPoE).
So it's worth optimizing your data transfers around this size.
(There are also so-called jumbo frames which break this rule, but the software/hardware on the path must accept them.)
However, a simple recv() on a TCP socket can return more bytes than the MTU, since TCP delivers a byte stream.
So you need to transfer a first message carrying the size of the data that follows.
size = recv(512)                        // fixed-size header carrying the payload size; read it first
received = 0
while (received < size)                 // the actual data can arrive sliced, so keep receiving until complete
    received += recv(data + received, size - received)
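In Go, the same pattern can look like this (a sketch assuming the peer sends a 4-byte big-endian length prefix; the server address is hypothetical). io.ReadFull performs the "receive until size" loop:

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"net"
)

// readMessage reads one length-prefixed message from conn.
func readMessage(conn net.Conn) ([]byte, error) {
	var size uint32
	// binary.Read pulls exactly 4 bytes, even if they arrive in pieces.
	if err := binary.Read(conn, binary.BigEndian, &size); err != nil {
		return nil, err
	}
	data := make([]byte, size)
	// A single Read may return fewer bytes than asked; ReadFull loops until done.
	if _, err := io.ReadFull(conn, data); err != nil {
		return nil, err
	}
	return data, nil
}

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:9000") // hypothetical server
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	msg, err := readMessage(conn)
	if err != nil {
		panic(err)
	}
	fmt.Println("received", len(msg), "bytes")
}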