How to receive multicast data on a multihomed server's non-default interface - sockets

I have a linux server with two NICs (eth0 and eth1), and have set eth0 as default in "ip route." Now I would like to receive multicast packets on eth1. I have added "224.0.20.0/24 dev eth1 proto static scope link" to the routing table, and I connect as follows:
sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);
// port 12345, address INADDR_ANY
bind(sock, &bind_addr, sizeof(bind_addr));
// multicast address 224.0.20.100, interface address 10.13.0.7 (=eth1)
setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &imreq, sizeof(imreq));
According to ip maddr it connects to that group on the right interface, and tshark -i eth1 shows that I am actually getting multicast packets.
However, I don't get any packets when calling recvfrom(sock). If I set "ip route default" to eth1 (instead of eth0), I do get packets via recvfrom. Is this an issue with my code or with my network setup, and what is the correct way of doing this?
(update) solution: caf hinted that this might be the same problem; indeed: after doing echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter I can now receive multicast packets!
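For reference, here is a minimal sketch of the receiver setup described above, with the parts elided from the snippet filled in using the values given in the comments. This is my assumption of what the surrounding code looks like (the helper name is made up, error handling omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int make_multicast_receiver(void)
{
    int sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);

    struct sockaddr_in bind_addr;
    memset(&bind_addr, 0, sizeof(bind_addr));
    bind_addr.sin_family = AF_INET;
    bind_addr.sin_port = htons(12345);              // port 12345
    bind_addr.sin_addr.s_addr = htonl(INADDR_ANY);  // address INADDR_ANY
    bind(sock, (struct sockaddr *)&bind_addr, sizeof(bind_addr));

    struct ip_mreq imreq;
    memset(&imreq, 0, sizeof(imreq));
    imreq.imr_multiaddr.s_addr = inet_addr("224.0.20.100"); // multicast group
    imreq.imr_interface.s_addr = inet_addr("10.13.0.7");    // eth1's address
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &imreq, sizeof(imreq));

    return sock;
}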

caf's comment that this is a duplicate of "receiving multicast on a server with multiple interfaces (linux)" answered this! (And I post this as an answer for clarity.) Namely, an echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter resolves my issue.

Try adding a netmask and specifying 10.13.0.7 as the gateway in your routing table entry.

Correct, assuming you have two NICs with a default gateway on only one of them.
Multicast uses the unicast routing table to determine the path back to the source. If a multicast packet arrives on an interface that is not the one the unicast route to its source points at, the packet is dropped. This loop-prevention mechanism is called the RPF (reverse path forwarding) check.
In this case the application joined the group (via IGMP) on one NIC, whereas the unicast routes, including the default gateway, were learned via the other NIC, so the check was failing. Thus no data.
You don't need to add any static routes. It should just work once you change the rp_filter value to 0.

Related

Can I bind a client socket to an IP that doesn't belong to any interface?

For a client socket, I can use bind() to bind it to a specific source IP address to select a specific interface. Or I can use connect() directly, and it will pick the source IP based on the routing table.
I wonder: can I bind a client socket to an IP that doesn't belong to any interface? E.g., I have two interfaces:
eth0 : ip0
eth1 : ip1
(1) If I bind the client socket to ip2, is this feasible?
(2) If (1) is feasible, assume the client socket sends packets through eth0. Then I configure iptables on this client host to forward all incoming packets to ip0 (eth0). In this case, if packets are sent back from the server side with destination IP address ip2 (assuming they reach my client host), will my client socket receive them?
Thanks in advance.
I don't really understand your question, but here goes:
For client sockets, you typically want the OS and its routing table to pick the best interface for you, using any available port. In that case, bind to INADDR_ANY (0) and port 0, or don't explicitly call bind at all. Just call connect() and it will do the right thing.
If you need the client connection to occur through a specific interface, then bind the socket to a specific IP address. And then the OS will attempt to use that interface for the subsequent connect call and all traffic after that.
Attempting to bind the socket to an IP that doesn't belong to a local interface is surely going to result in an error.
Not sure what you mean about the iptables stuff. Sounds dicey.
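A rough sketch of the "bind to a specific local IP, then connect" approach described above (the addresses, port, and the helper name connect_via are made up for illustration):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int connect_via(const char *local_ip, const char *remote_ip, int remote_port)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    // Bind to the IP of the interface we want to use; port 0 lets the OS pick one.
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(0);
    inet_pton(AF_INET, local_ip, &local.sin_addr);
    bind(sock, (struct sockaddr *)&local, sizeof(local));

    // The connection (and all traffic after it) will then use that local IP.
    struct sockaddr_in remote;
    memset(&remote, 0, sizeof(remote));
    remote.sin_family = AF_INET;
    remote.sin_port = htons(remote_port);
    inet_pton(AF_INET, remote_ip, &remote.sin_addr);
    connect(sock, (struct sockaddr *)&remote, sizeof(remote));

    return sock;
}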
Please have a look at:
https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html#ipfreebind
MAN:
https://man7.org/linux/man-pages/man7/ip.7.html
IP_FREEBIND (since Linux 2.4)
If enabled, this boolean option allows binding to an IP
address that is nonlocal or does not (yet) exist. This
permits listening on a socket, without requiring the
underlying network interface or the specified dynamic IP
address to be up at the time that the application is
trying to bind to it. This option is the per-socket
equivalent of the ip_nonlocal_bind /proc interface
described below.
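A hedged sketch of how IP_FREEBIND is typically used (Linux-specific; the address and port below are example values, not anything from the question):

#include <arpa/inet.h>
#include <netinet/in.h>   // defines IP_FREEBIND on glibc; older setups may need <linux/in.h>
#include <string.h>
#include <sys/socket.h>

int bind_nonlocal_example(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Allow binding to an address that is not (yet) configured on any interface.
    int on = 1;
    setsockopt(sock, IPPROTO_IP, IP_FREEBIND, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                     // example port
    addr.sin_addr.s_addr = inet_addr("192.0.2.10");  // example non-local address
    bind(sock, (struct sockaddr *)&addr, sizeof(addr)); // succeeds even though the address is not local
    return sock;
}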

What does it mean to bind() a socket to any address other than localhost?

I don't understand what it means to bind a socket to any address other than 127.0.0.1 (or ::1, etc.).
Am I not -- by definition -- binding the socket to a port on my own machine, which is localhost?
What sense does it make to bind or listen to another machine or IP address's port?
Conceptually, it just doesn't make sense to me!
(This has proven surprisingly hard to Google... possibly because I'm not Googling the right terms.)
Binding a socket to an address and port is done in order to receive data on this socket (in most cases) or to use this address/port as the source of the data when sending (for example, for the data connections of an FTP server).
Usually there are several interfaces on a specific machine, i.e. the pseudo-interface loopback where the machine can reach itself, ethernet, WLAN, VPN, ... Each of these interfaces can have multiple IP addresses assigned. For example, loopback usually has 127.0.0.1 and with IPv6 also ::1, but you can assign others too. Ethernet or WLAN have IP addresses on the local network, e.g. 172.16.0.34 or whatever.
If you bind a socket for receiving data to a specific address, you can only receive data sent to this specific IP address. For example, if you bind to 127.0.0.1 you will be able to receive data from your own system but not from some other system on the local network, because they cannot send data to your 127.0.0.1: for one, any data they send to 127.0.0.1 will go to their own 127.0.0.1, and second, your 127.0.0.1 is an address on your internal loopback interface, which is not reachable from outside.
You can also bind a socket to a catch-all address like 0.0.0.0 (IPv4) or :: (IPv6). In this case it is not bound to a specific IP address but will receive data sent to any IP address of the machine.
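A small sketch of that difference for UDP (the port 7000 and the helper name make_udp_receiver are just illustrative):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int make_udp_receiver(const char *bind_ip)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_port = htons(7000);                 // example port
    inet_pton(AF_INET, bind_ip, &a.sin_addr); // "127.0.0.1" vs "0.0.0.0"
    bind(sock, (struct sockaddr *)&a, sizeof(a));
    return sock;
}

// make_udp_receiver("127.0.0.1") only sees datagrams sent to 127.0.0.1:7000,
// i.e. traffic from the machine itself; make_udp_receiver("0.0.0.0") sees
// datagrams sent to any of the machine's addresses on port 7000.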

Why is UDP socket identified by destination IP address and destination port?

According to "Computer networking: a top-down approach", Kurose et al., a UDP socket is fully identified by destination IP and destination port.
Why do we need the destination IP here? I thought UDP only needs the destination port for demultiplexing.
The machine may have multiple IPs, and different sockets may be bound to the same port on different IPs. It needs to use the destination IP to know which of these sockets the incoming datagram should be sent to.
In fact, it's quite common to use a different socket for each IP. When sending the reply, we want to ensure that the source IP matches the request's destination IP, so that the client can tell that the response came from the same server it sent to. By using different sockets for each IP, and sending the reply out the same socket that the request came in on, this consistency is maintained. Some socket implementations have an extension to allow setting the source IP at the time the reply is being sent, so they can use a single socket for all IPs, but this is not part of the standard sockets API.
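To illustrate the first point, a sketch with two made-up local addresses (assumed to already be configured on the machine):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

void two_sockets_same_port(void)
{
    struct sockaddr_in a1, a2;
    memset(&a1, 0, sizeof(a1));
    memset(&a2, 0, sizeof(a2));
    a1.sin_family = a2.sin_family = AF_INET;
    a1.sin_port = a2.sin_port = htons(9000);       // same port on both sockets
    inet_pton(AF_INET, "192.0.2.1", &a1.sin_addr); // first local IP
    inet_pton(AF_INET, "192.0.2.2", &a2.sin_addr); // second local IP

    int s1 = socket(AF_INET, SOCK_DGRAM, 0);
    int s2 = socket(AF_INET, SOCK_DGRAM, 0);
    bind(s1, (struct sockaddr *)&a1, sizeof(a1));  // gets datagrams sent to 192.0.2.1:9000
    bind(s2, (struct sockaddr *)&a2, sizeof(a2));  // gets datagrams sent to 192.0.2.2:9000

    // The kernel can only demultiplex between s1 and s2 by looking at the
    // destination IP of each incoming datagram, not just the port.
}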
I think that you are confusing UDP with multicast.
Multicast is a one-to-many delivery mechanism: receivers join a group, and traffic to the group is delivered to every member listening on the given port, whichever of their addresses it happens to arrive at.
Unicast UDP, by contrast, is only delivered to one IP. This is why it needs that destination IP address.

Freebsd How to forward any classes IP?

I installed a FreeBSD 10.0 server (IP: 10.1.2.3) and want to send packets to remote clients with a fake source IP, such as:
socket_sendto($socket, $data, $length, 0, $ip, $port)
$data contains the IP header, where I specify my "fake IP".
The question is:
If I specify a class C IP, everything goes well (the following succeed):
10.1.2.4
10.1.3.5
If I specify a class B or class A IP, nothing reaches the destination (the following fail):
10.2.1.2
11.1.2.3
So, how can I resolve the issue?
By the way, I already modified sysctl.conf to:
net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
net.inet.ip.fastforwarding=1
Sorry for poor English.
May be related to routing (netmasks). If your server IP is 10.1.2.3/16, all IP addresses like 10.1.X.Y are directly reachable, but if you try to send to IP addresses outside this range, the IP packet goes via routers. A properly configured router should not pass such fake packets. You should check the defaultrouter setting in /etc/rc.conf. This default router may be receiving such fake packets, unless something else is blocking them on your FreeBSD machine.
@Kestas is right; try the commands below:
1) Verify that you have a route to the destination:
# netstat -rn
2) Test the connectivity:
# tracepath 10.2.1.2
3) Or put them on the same network:
# ifconfig re0 10.2.1.1 netmask 255.0.0.0
GL!

What does it mean to bind a multicast (UDP) socket?

I am using multicast UDP between hosts that have multiple network interfaces.
I am using boost::asio, and am confused by the 2 operations receivers have to make: bind, then join-group.
Why do you need to specify the local address of an interface, during bind, when you do that with every multicast group that you join?
The sister question regards the multicast port: since when sending you send to a multicast address & port, why, when subscribing to a multicast group, do you specify only the address and not the port, the port being given in the confusing call to bind?
Note: the "join-group" is a wrapper over setsockopt(IP_ADD_MEMBERSHIP), which as documented, may be called multiple times on the same socket to subscribe to different groups (over different networks?). It would therefore make perfect sense to ditch the bind call and specify the port every time I subscribe to a group.
From what I see, always binding to "0.0.0.0" and specifying the interface address when joining the group, works very well. Confused.
To bind a UDP socket when receiving multicast means to specify an address and port from which to receive data (NOT a local interface, as is the case for TCP acceptor bind). The address specified in this case has a filtering role, i.e. the socket will only receive datagrams sent to that multicast address & port, no matter what groups are subsequently joined by the socket. This explains why when binding to INADDR_ANY (0.0.0.0) I received datagrams sent to my multicast group, whereas when binding to any of the local interfaces I did not receive anything, even though the datagrams were being sent on the network to which that interface corresponded.
Quoting from UNIX® Network Programming, Volume 1, Third Edition: The Sockets Networking API by W. R. Stevens:
21.10. Sending and Receiving
[...] We want the receiving socket to bind the multicast group and
port, say 239.255.1.2 port 8888. (Recall that we could just bind the
wildcard IP address and port 8888, but binding the multicast address
prevents the socket from receiving any other datagrams that might
arrive destined for port 8888.) We then want the receiving socket to
join the multicast group. The sending socket will send datagrams to
this same multicast address and port, say 239.255.1.2 port 8888.
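A sketch of the receiver Stevens describes, using the example group and port from the quote (interface choice left to the kernel via INADDR_ANY; error handling omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int stevens_style_receiver(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Bind to the multicast group address and port, so datagrams sent to
    // other addresses on port 8888 are filtered out.
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8888);
    addr.sin_addr.s_addr = inet_addr("239.255.1.2");
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    // Then join the group so the NIC/IP stack actually accepts the traffic.
    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr.s_addr = inet_addr("239.255.1.2");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);  // let the kernel pick the interface
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    return sock;
}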
The "bind" operation is basically saying, "use this local UDP port for sending and receiving data. In other words, it allocates that UDP port for exclusive use for your application. (Same holds true for TCP sockets).
When you bind to "0.0.0.0" (INADDR_ANY), you are basically telling the TCP/IP layer to use all available adapters for listening and to choose the best adapter for sending. This is standard practice for most socket code. The only time you wouldn't specify 0 for the IP address is when you want to send/receive on a specific network adapter.
Similarly if you specify a port value of 0 during bind, the OS will assign a randomly available port number for that socket. So I would expect for UDP multicast, you bind to INADDR_ANY on a specific port number where multicast traffic is expected to be sent to.
The "join multicast group" operation (IP_ADD_MEMBERSHIP) is needed because it basically tells your network adapter to listen not only for ethernet frames where the destination MAC address is your own, it also tells the ethernet adapter (NIC) to listen for IP multicast traffic as well for the corresponding multicast ethernet address. Each multicast IP maps to a multicast ethernet address. When you use a socket to send to a specific multicast IP, the destination MAC address on the ethernet frame is set to the corresponding multicast MAC address for the multicast IP. When you join a multicast group, you are configuring the NIC to listen for traffic sent to that same MAC address (in addition to its own).
Without the hardware support, multicast wouldn't be any more efficient than plain broadcast IP messages. The join operation also tells your router/gateway to forward multicast traffic from other networks. (Anyone remember MBONE?)
If you join a multicast group, all the multicast traffic for all ports on that IP address will be received by the NIC. Only the traffic destined for the listening port you bound to will get passed up the TCP/IP stack to your app. As for why the port is specified at bind time rather than in the multicast subscription: multicast IP is just that - IP only. "Ports" are a property of the upper protocols (UDP and TCP).
You can read more about how multicast IP addresses map to multicast ethernet addresses at various sites. The Wikipedia article is about as good as it gets:
The IANA owns the OUI MAC address 01:00:5e, therefore multicast
packets are delivered by using the Ethernet MAC address range
01:00:5e:00:00:00 - 01:00:5e:7f:ff:ff. This is 23 bits of available
address space. The first octet (01) includes the broadcast/multicast
bit. The lower 23 bits of the 28-bit multicast IP address are mapped
into the 23 bits of available Ethernet address space.
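A small worked example of that mapping, using the 239.255.1.2 group from the Stevens quote:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    // Keep the low 23 bits of the group address and prepend the 01:00:5e OUI.
    uint32_t group = ntohl(inet_addr("239.255.1.2"));
    uint32_t low23 = group & 0x007FFFFF;
    printf("01:00:5e:%02x:%02x:%02x\n",
           (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF);
    // Prints 01:00:5e:7f:01:02. Note that e.g. 238.127.1.2 maps to the same
    // MAC address, because the discarded upper bits of the group differ only.
    return 0;
}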
A correction for "What does it mean to bind a multicast (UDP) socket?", as the following quote from it is only partially true:
The "bind" operation is basically saying, "use this local UDP port for sending and receiving data." In other words, it allocates that UDP port for exclusive use by your application
There is one exception. Multiple applications can share the same port for listening (usually it has practical value for multicast datagrams), if the SO_REUSEADDR option applied. For example
int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP); // create UDP socket somehow
...
int set_option_on = 1;
// it is important to do "reuse address" before bind, not after
int res = setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (char *) &set_option_on,
                     sizeof(set_option_on));
// src_addr/len are the local address and port set up in the code elided above
res = bind(sock, src_addr, len);
If several processes do such "reuse binding", every UDP datagram received on that shared port will be delivered to each of them (which fits naturally with multicast traffic).
Here are further details regarding what happens in a few cases:
any bind attempt ("exclusive" or "reuse") to a free port will succeed
an attempt at "exclusive" binding will fail if the port is already "reuse"-bound
an attempt at "reuse" binding will fail if some process holds an "exclusive" binding on the port
It is also very important to distinguish a SENDING multicast socket from a RECEIVING multicast socket.
I agree with all the answers above regarding RECEIVING multicast sockets.
The OP noted that binding a RECEIVING socket to an interface did not help.
However, it is necessary to bind a multicast SENDING socket to an interface.
For a SENDING multicast socket on a multi-homed server, it is very important to create a separate socket for each interface you want to send to. A bound SENDING socket should be created for each interface.
// This is a fix for that bug that causes Servers to pop offline/online.
// Servers will intermittently pop offline/online for 10 seconds or so.
// The bug only happens if the machine had a DHCP gateway, and the gateway is no longer accessible.
// After several minutes, the route to the DHCP gateway may timeout, at which
// point the pingponging stops.
// You need 3 machines, Client machine, server A, and server B
// Client has both ethernets connected, and both ethernets receiving CITP pings (machine A pinging to en0, machine B pinging to en1)
// Now turn off the ping from machine B (en1), but leave the network connected.
// You will notice that the machine transmitting on the interface with
// the DHCP gateway will fail sendto() with errno 'No route to host'
if ( theErr == 0 )
{
    // inspired by the 'ping -b' option in the man page:
    //   -b boundif
    //      Bind the socket to interface boundif for sending.
    struct sockaddr_in bindInterfaceAddr;
    bzero(&bindInterfaceAddr, sizeof(bindInterfaceAddr));
    bindInterfaceAddr.sin_len = sizeof(bindInterfaceAddr);
    bindInterfaceAddr.sin_family = AF_INET;
    bindInterfaceAddr.sin_addr.s_addr = htonl(interfaceipaddr);
    bindInterfaceAddr.sin_port = 0; // allow the kernel to choose a random port number by passing in 0
    theErr = bind(mSendSocketID, (struct sockaddr *)&bindInterfaceAddr, sizeof(bindInterfaceAddr));

    struct sockaddr_in serverAddress;
    int namelen = sizeof(serverAddress);
    if (getsockname(mSendSocketID, (struct sockaddr *)&serverAddress, (socklen_t *)&namelen) < 0) {
        DLogErr(@"ERROR Publishing service... getsockname err");
    }
    else
    {
        DLog(@"socket %d bind, %@ port %d", mSendSocketID,
             [NSString stringFromIPAddress:htonl(serverAddress.sin_addr.s_addr)],
             htons(serverAddress.sin_port));
    }
}
Without this fix, multicast sending will intermittently get sendto() errno 'No route to host'.
If anyone can shed light on why unplugging a DHCP gateway causes Mac OS X multicast SENDING sockets to get confused, I would love to hear it.
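As a side note (not from the answer above): the standard knob for choosing the outgoing interface of a multicast SENDING socket is the IP_MULTICAST_IF option, which takes the unicast address of the desired interface. A minimal sketch, with an example address:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

void select_outgoing_interface(int send_sock)
{
    // Multicast datagrams sent on send_sock will now leave via the interface
    // that owns 10.13.0.7 (example address), instead of whichever interface
    // the routing table's default route would pick.
    struct in_addr out_if;
    out_if.s_addr = inet_addr("10.13.0.7");
    setsockopt(send_sock, IPPROTO_IP, IP_MULTICAST_IF, &out_if, sizeof(out_if));
}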