(Using Linux)
Creating TCP packets using raw sockets - it turns out that calculating the checksum is the bottleneck for me on high-performance networks. Since the NICs support checksum offloading, and ethtool also says that it is enabled, I hoped that I could use checksum offloading.
But it seems that the checksum is not calculated when I use raw sockets. Is there a way to enable TCP checksum offloading using raw sockets?
Edit:
Actually the behaviour of my machine/NIC (ThinkPad X201) does not seem very logical: when sending packets with normal TCP sockets, all checksums are wrong, on the loopback interface as well as between machines. Funnily enough, the other machine silently delivers the packets anyway?
Edit 2: OK, it turns out I was looking at the packets on the wrong machine - the offloading does work. But when I leave the TCP checksum field at 0 on a raw socket, it does not get filled in; it simply stays 0.
I have the same problem here: I send TCP or UDP packets on a raw socket but can't take advantage of the NIC's checksum offloading even though it is enabled. I wish there were a setsockopt() or ioctl() type of function to enable checksum offloading on a raw socket.
As for the question of why Wireshark shows packets with checksum errors while the destination host accepts all the packets anyway: the reason is that Wireshark (through WinPcap etc. if on Windows) captures packets before they reach the NIC from the OS. The packets don't have their checksum fields filled in correctly by the OS or application - this is exactly what the checksum offloading feature on the NIC is for.
The question remains: how do I get the NIC to do checksum offloading on a raw socket?
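As far as I know, there is no portable setsockopt() for this: the offload path is driven by metadata the kernel stack attaches when it builds the segment itself, and packets injected through a raw socket bypass that. The practical fallback is to compute the checksum in user space. Here is a minimal sketch in Go of the standard RFC 1071 computation over the IPv4 pseudo-header plus the TCP segment (the addresses and the bare all-zero header in main are placeholders):

package main

import (
	"encoding/binary"
	"fmt"
)

// checksum is the RFC 1071 ones'-complement sum over data.
func checksum(data []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(data); i += 2 {
		sum += uint32(binary.BigEndian.Uint16(data[i:]))
	}
	if len(data)%2 == 1 {
		sum += uint32(data[len(data)-1]) << 8 // pad the odd trailing byte
	}
	for sum > 0xffff {
		sum = (sum >> 16) + (sum & 0xffff) // fold carries back in
	}
	return ^uint16(sum)
}

// tcpChecksum prepends the IPv4 pseudo-header (src, dst, protocol,
// TCP length) to the segment and checksums the whole thing. The
// checksum field inside the segment must be zeroed first.
func tcpChecksum(srcIP, dstIP [4]byte, segment []byte) uint16 {
	psh := make([]byte, 12, 12+len(segment))
	copy(psh[0:4], srcIP[:])
	copy(psh[4:8], dstIP[:])
	psh[9] = 6 // IPPROTO_TCP
	binary.BigEndian.PutUint16(psh[10:12], uint16(len(segment)))
	return checksum(append(psh, segment...))
}

func main() {
	// Hypothetical bare 20-byte TCP header, checksum bytes (16-17) zeroed.
	seg := make([]byte, 20)
	fmt.Printf("0x%04x\n", tcpChecksum([4]byte{10, 0, 0, 1}, [4]byte{10, 0, 0, 2}, seg))
}

The same routine covers UDP by substituting protocol number 17 and the UDP length for the TCP length.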
Golang application with a client and server.
Server uses net.ListenUDP -- the client also uses net.ListenUDP, connects to the server, and sends a packet to the server's address with conn.WriteToUDP.
Server receives the packet with ReadFromUDP and grabs the return address. Using this return address, it then sends a large number of packets back to the client.
When running both client and server on the local machine, this works perfectly. Using Wireshark I can inspect the UDP packets and see that they contain the source and destination ports - and in the application I can see that they arrive, and my various checksum tests show the data is accurate.
I then moved the server off-site to a remote machine. The application stops working. I can still successfully send the first message from the client to the server - this is received just fine. The server sends the responses back 'toward' the client - but the client never receives them.
Using Wireshark, I can see that the packets do arrive back on the local machine with the correct IP address. It appears that my network router has performed NAT on the outgoing packets and has correctly re-addressed the response packets to the internal IP - BUT there is no port.
So I have UDP packets arriving on the correct machine, but no port - so the client application does not receive them. Application times out on ReadFromUDP.
I don't know if it is relevant, but on the local machine Wireshark labels the packets as BT-uTP (uTorrent) packets. When they come in from the remote server, this is what I see in Wireshark - note the lack of a port.
Any thoughts on how I can solve this? I didn't think this was a UDP hole-punching problem, because although I am establishing a connection across a NAT, it is with a server, not a peer.
This packet is fragmented; you can see this under Internet Protocol Version 4 > Flags.
If you look at the frame as shown at the bottom of the picture you provided, you should see the ports.
net.ListenUDP doesn't appear to support fragmentation at the socket level.
Do you have a PPPoE connection? You may need to reduce the size of the packets you send by 8 bytes, or change the MTU on the router's external interface on the remote side. You may also need to change the local router's MTU if it is on a PPPoE interface.
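If fragmentation is indeed the root cause, a workaround that avoids touching any router's MTU is to keep every datagram below the smallest MTU on the path. A rough sketch, assuming IPv4 over Ethernet (1500 - 20 IP - 8 UDP = 1472 bytes of payload) with a possible PPPoE hop costing 8 more bytes, hence the 1464 cap; the destination address is a placeholder:

package main

import (
	"log"
	"net"
)

// Assumed cap: 1500-byte Ethernet MTU minus 20 (IPv4) and 8 (UDP)
// header bytes = 1472; a PPPoE link loses 8 more, so 1464 is safe on both.
const maxPayload = 1464

// sendChunked splits data so no single datagram exceeds maxPayload,
// which keeps the IP layer from fragmenting it in the first place.
func sendChunked(conn *net.UDPConn, addr *net.UDPAddr, data []byte) error {
	for len(data) > 0 {
		n := len(data)
		if n > maxPayload {
			n = maxPayload
		}
		if _, err := conn.WriteToUDP(data[:n], addr); err != nil {
			return err
		}
		data = data[n:]
	}
	return nil
}

func main() {
	conn, err := net.ListenUDP("udp4", nil) // ephemeral local port
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Placeholder destination; 16 KB of payload becomes 12 datagrams.
	dst := &net.UDPAddr{IP: net.IPv4(203, 0, 113, 10), Port: 9000}
	if err := sendChunked(conn, dst, make([]byte, 16*1024)); err != nil {
		log.Fatal(err)
	}
}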
I'm working on a simple traffic tunneling solution (Linux).
The client side creates a tun interface, routes all traffic onto it, encapsulates every arriving packet, and sends it to the server side via a UDP or TCP connection.
The server side is expected to work like a NAT: change the source IP address and source port (for TCP/UDP), put the packet on the external network interface via SOCK_RAW, listen for responses via SOCK_RAW, keep a map of original-source-port <-> replaced-source-port, and send responses back to the client.
The question is: how should I choose the replaced source port? The OS chooses them from the ephemeral range, so I can't pick one myself without risking conflicts. The OS kernel only chooses a port after I send a packet via SOCK_RAW, so I get no chance to build the original-source-port <-> replaced-source-port map. And even if I choose a port myself, the OS kernel will reply with a TCP RST to every incoming TCP packet whose destination port is not associated with a local application.
P.S. I'm not sure about the overall solution for tunneling either. Your suggestions would be highly appreciated.
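One common approach (I have not verified it against this exact setup) is to not let the kernel race you at all: reserve a port range for the tunnel, e.g. by excluding it from net.ipv4.ip_local_port_range, allocate replacement ports from that range yourself before sending via SOCK_RAW, and add a firewall rule that drops the kernel's outgoing TCP RSTs for those ports. A sketch of the two-way mapping table in Go; the range start (40000) and the names are made up, and expiry/recycling of idle mappings is omitted:

package main

import (
	"fmt"
	"sync"
)

// portMap hands out replacement source ports from a reserved range and
// remembers the mapping in both directions.
type portMap struct {
	mu     sync.Mutex
	next   uint16
	byOrig map[uint16]uint16 // original source port -> replaced port
	byRepl map[uint16]uint16 // replaced port -> original source port
}

func newPortMap() *portMap {
	return &portMap{next: 40000, byOrig: map[uint16]uint16{}, byRepl: map[uint16]uint16{}}
}

// replace returns the replacement port for orig, allocating one on
// first sight so outgoing rewrites and incoming lookups stay in sync.
func (m *portMap) replace(orig uint16) uint16 {
	m.mu.Lock()
	defer m.mu.Unlock()
	if r, ok := m.byOrig[orig]; ok {
		return r
	}
	r := m.next
	m.next++
	m.byOrig[orig] = r
	m.byRepl[r] = orig
	return r
}

// original resolves an incoming response's destination port back to
// the client's original source port.
func (m *portMap) original(repl uint16) (uint16, bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	o, ok := m.byRepl[repl]
	return o, ok
}

func main() {
	pm := newPortMap()
	r := pm.replace(51123)
	o, _ := pm.original(r)
	fmt.Println(r, o) // 40000 51123
}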
I am trying to programmatically send out ICMPv6 echo requests (ping6) using WinSock2. The ICMPv6 checksum is calculated over an IPv6 pseudo-header (source and destination address, length, next header) plus the ICMPv6 message itself. For that reason - from what I understand - the OS (kernel?) is supposed to calculate it and write it into the ICMPv6 header when sending the packet.
This works very well on SUSE Linux Enterprise Server 11; however, Windows XP does not seem to do this. It leaves the checksum at whatever I set it to by default (zero - I verified this using Wireshark), so the receiving end discards the packet and does not reply.
IPv6 is correctly set up on this WinXP machine. With the help of Wireshark I even found out that it responds correctly to ICMPv6 pings from the SUSE Linux server, sent using the very same code. So it cannot be that Windows XP doesn't support ICMPv6. However, I wonder whether WinSock2 under Windows XP does.
The WinSock2 API does provide the IPPROTO_ICMPV6 protocol, for which I create my raw socket. Is there any special socket option I need to set for the ICMPv6 checksum to be calculated automatically, or are there any other tricks?
The most probable reason for the behavior you describe is checksum offloading. It means checksum calculation might be delegated to the networking hardware, so that a sniffed packet doesn't contain the correct checksum value. Refer to http://www.wireshark.org/docs/wsug_html_chunked/ChAdvChecksums.html or to http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Checksum_offload
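If Windows XP really does leave the checksum of raw ICMPv6 sockets untouched, a workaround is to compute it in user space over the IPv6 pseudo-header before sending. A sketch using Go's golang.org/x/net/icmp package, whose Marshal fills in the ICMPv6 checksum when given the pseudo-header; the addresses are placeholders, and the source must be the address the packet will actually leave from or the checksum will be wrong:

package main

import (
	"log"
	"net"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv6"
)

func main() {
	// Placeholder addresses for the pseudo-header.
	src := net.ParseIP("fe80::1")
	dst := net.ParseIP("fe80::2")

	msg := icmp.Message{
		Type: ipv6.ICMPTypeEchoRequest,
		Code: 0,
		Body: &icmp.Echo{ID: 1, Seq: 1, Data: []byte("ping")},
	}

	// Passing the pseudo-header makes Marshal fill in the ICMPv6
	// checksum instead of leaving it zero.
	wire, err := msg.Marshal(icmp.IPv6PseudoHeader(src, dst))
	if err != nil {
		log.Fatal(err)
	}
	_ = wire // hand this buffer to your raw socket's send call
}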
We have a .NET 2.0 desktop application which sends and receives network packets over UDP.
Several users have reported an occasional socket error 10052, which happens when the code calls socket.BeginReceiveFrom on the UDP socket.
What does this mean?
The official MS documentation for socket error 10052 says, quote:
"WSAENETRESET (10052) Network dropped connection on reset. The connection has been broken due to keep-alive activity detecting a failure while the operation was in progress. It can also be returned by setsockopt if an attempt is made to set SO_KEEPALIVE on a connection that has already failed."
This just doesn't make much sense for a UDP socket, since UDP is a connectionless protocol.
I know that another closely related error code, 10054, in connection with UDP sockets means that an ICMP "Port Unreachable" message was received, and I am wondering if 10052 might map to another ICMP message?
I have googled this for months, read network books, etc., but can't find anything.
Please help - what does socket error 10052 on a UDP socket mean?
Thanks in advance
See http://msdn.microsoft.com/en-us/library/ms740120%28v=vs.85%29.aspx, which describes the recvfrom function. It says of WSAENETRESET (which is Winsock error 10052):
For a datagram socket, this error indicates that the time to live has expired.
Be sure that the TTL value is high enough when sending UDP datagrams.
If you are using the UdpClient class, then set the following before sending the datagram:
myUdpClient.Ttl = 255;
Note: 255 is the maximum value for TTL.
If that value is not enough, there is some deeper network problem.
The name WSAENETRESET (WSA + NET + RESET) suggests that it happens due to a reset of the network interface itself. Your program is sitting there bound to a UDP port, so in a sense it is connected, but to the network interface rather than to a remote peer.
Try starting your program, getting it to the point where this BeginReceiveFrom call is about to be made, then disable your NIC in the Device Manager and re-enable it. Or, with Wi-Fi, drop and reestablish the connection to the WAP. It might even happen by just unplugging the Ethernet cable from your machine, as recent versions of Windows default to killing all sockets connected through that NIC when this happens.
It would explain the rare problem reports from the field. This probably only happens when there is some local networking fault at the hardware level.
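If the error really is tied to a transient condition - an interface reset, or an expired TTL per the earlier answer - one defensive pattern is to treat WSAENETRESET as a per-datagram failure and simply re-issue the receive. Here is a sketch of that idea in Go on Windows, where the same error code can surface from a UDP read; whether the socket survives the condition depends on which explanation above applies, so treat this as an assumption to test, not a guarantee (the port is a placeholder):

//go:build windows

package main

import (
	"errors"
	"log"
	"net"
	"syscall"
)

func main() {
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 9000}) // placeholder port
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 2048)
	for {
		n, addr, err := conn.ReadFromUDP(buf)
		if err != nil {
			// WSAENETRESET (10052): assume a per-datagram failure and retry.
			var errno syscall.Errno
			if errors.As(err, &errno) && errno == syscall.WSAENETRESET {
				log.Println("WSAENETRESET - retrying receive")
				continue
			}
			log.Fatal(err)
		}
		log.Printf("%d bytes from %v", n, addr)
	}
}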
I have an FPGA device with which my code needs to talk. The protocol is as follows:
I send a single non-zero byte (UDP) to turn on a feature. The FPGA board then begins spewing data on the port from which I sent.
Do you see my dilemma? I know which port I sent the message to, but I do not know which port I sent it from (isn't this port typically chosen automatically by the OS?).
My best guess for what I'm supposed to do is create a socket with the destination IP and port number and then reuse the socket for receiving. If I do so, will it already be set up to listen on the port from which I sent the original message?
Also, for your information, variations of this code will be written in Python and C#. I can look up the specific APIs, as both follow the BSD socket model.
This is exactly what connect(2) and getsockname(2) are for. As a bonus, after connecting the UDP socket you will not have to specify the destination address/port on each send, you will be able to discover an unavailable destination port (the ICMP reply from the target will manifest as an error on the next send instead of being silently dropped), and your OS will not have to implicitly connect and disconnect the UDP socket on each send, saving some cycles.
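In Go, the same pair maps onto net.DialUDP plus LocalAddr(). A sketch with a placeholder FPGA address and port:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Placeholder FPGA address/port.
	fpga := &net.UDPAddr{IP: net.IPv4(192, 168, 1, 50), Port: 5000}

	// DialUDP connect(2)s the socket; the OS picks the local port now,
	// and LocalAddr (getsockname(2) underneath) reveals it.
	conn, err := net.DialUDP("udp4", nil, fpga)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("sending from", conn.LocalAddr())

	// Send the single trigger byte; no address needed on a connected socket.
	if _, err := conn.Write([]byte{0x01}); err != nil {
		log.Fatal(err)
	}

	// Replies addressed to our source port arrive on the same socket.
	// Caveat: a connected socket only accepts datagrams from the FPGA's
	// address and port; if the board answers from a different source
	// port, use the bind approach from the answers below instead.
	buf := make([]byte, 2048)
	n, err := conn.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %d bytes\n", n)
}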
You can bind a socket to a specific port, check man bind
You can bind the socket to get the desired port.
The only problem with doing that is that you won't be able to run more than one instance of your program at a time on a computer.
You're using UDP to send/receive data. Simply create a new UDP socket and bind it to your desired interface/port, then instruct your FPGA program to send UDP packets back to the port you bound to. UDP does not require you to listen or set up connections (that is only required with TCP).
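A sketch of that in Go, with placeholder port numbers: bind once to a fixed, pre-agreed port, send the trigger byte, then keep reading from the same socket.

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Bind to a fixed local port both sides know in advance (placeholder).
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 6000})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Trigger the FPGA (placeholder address), then read the data stream
	// on the same socket; there is no listen/accept step for UDP.
	fpga := &net.UDPAddr{IP: net.IPv4(192, 168, 1, 50), Port: 5000}
	if _, err := conn.WriteToUDP([]byte{0x01}, fpga); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 2048)
	for {
		n, from, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d bytes from %v\n", n, from)
	}
}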