Solaris 11: multicast over loopback interface doesn't work

We have some processes that send data to other processes using multicast. Up to now, we have specified a normal network interface to send / receive on, since the receiver applications are often (but not always) on different hosts from the senders. So far, this has always worked fine, without a hitch.
We are now trying to send some of the traffic (the messages intended for receivers on the same box as the sender) via the loopback interface (by specifying "loopback" or 127.0.0.1 as the interface). This works fine on our development system (Solaris 10), but not on the production systems (Solaris 11).
On the Solaris 10 system, "netstat -ng" shows the group being joined on lo0. On the Solaris 11 system, it doesn't. If I switch the receiver to listen on another interface, it works fine on both systems (the joins show up regardless of whether anybody's multicasting on the group+interface).
I don't know if this is a Solaris 10 vs. Solaris 11 difference, or something to do with how the sysadmins for the prod systems have set things up.
Any idea what is going wrong?
If it makes any difference: we're using IPv4, and the programs are written in C++. I don't think the Solaris 11 systems are zoned. (I can't see why they would be, we're the only users of the machines, but you never know.)

Short answer (from Oracle, via our sysadmins):
Support for multicast over loopback was dropped in Solaris 11. I.e., it's a feature, not a bug.

Related

How does the OS resolve which NIC to send/receive on?

My PC has two gigabit ethernet connections (NICs) - one on the motherboard, and one on a plugin card. I've never used multiple NICs before, and I'm simply not clear on how the OS resolves which NIC to use, and at what stage it occurs. Chances are "you don't have to know" because it happens automatically... but I'd still like to know - does it happen when you call the bind() function, for example, or later during a send or receive? Is it precisely the same process prior to both send and receive? Is it the same for TCP, UDP or any other protocol? Is it different between Windows and UNIX/Linux or Mac systems?
I'm motivated to ask because I have some Winsock2 code that "was working fine", but which stopped working when I reversed the order of the send and receive on a single socket. I discovered that it only received when there was at least one packet sent first.
I'm 99% sure there will be a bug somewhere, but I'd like to be 100% sure in the unlikely case that this is a "feature", or a bug beyond my code... because the symptoms are consistent with the possibility that the receive functionality is working fine, but somehow waiting to receive on the wrong NIC.
It consults the IP routing tables to find the cheapest route, which determines the outbound NIC. This happens when you call connect(). In UDP, if you don't connect (as you usually don't), it happens on send().

UDP broadcast worked for years, now messages are blocked by WinXP before firewall. Clues?

I developed (in VB6) a small app that sends a UDP broadcast message (address 255.255.255.255) and then listens for the answers from the electronic devices we produce (this is to learn the IP addresses of the devices for further messaging).
This was about 6-7 years ago, and all worked well till 1 month ago.
Now the UDP messages do not leave my PC. With Wireshark I can see the UDP messages sent from other PCs, and the answers from the connected devices, but not the messages I send from my PC.
Also, I use Comodo firewall, and even it can't see the message going out (I deleted the related rules so Comodo would ask permission for my program, but the request pops up only when it sends TCP messages). Even disabling Comodo did not solve the problem.
The WinXP firewall is disabled and has been untouched for years.
So my guess is that a recent Windows update changed something... but where should I look?
What's blocking the UDP calls BEFORE they reach the Comodo firewall, and how can I discover it?
I have no antivirus, and just in case I uninstalled Windows Live Protection... so I really don't know where to look. I'm an experienced Windows programmer, but my API knowledge is mostly about graphics, and I'm not a network expert either (we work with microprocessors, and use TCP/UDP sockets for basic communication).
Thanks
Well, I reinstalled VB6 (sigh) and discovered that, as usual, when a problem seems inexplicable the cause is often a trivial mistake.
The UDP socket was using a predefined port, and that port is now already in use. My error trapping was hiding the generated error, so I didn't know it.
Changing the local port to 0 lets the system pick a random free port, which is fine for my purposes.

Where would I learn more about interpreting network packets?

I'm working on a personal project: recreating the server software for the game "Chu Chu Rocket" for the Sega Dreamcast. Its servers went down in 2004, I believe. My approach is to use dnsmasq to redirect the original hostname that the game connected to, to my own system. With a DC-PC server set up, I have done just that: instead of looking up a non-existent DNS record, the game now connects to my computer, which will eventually run the server software. I've used tshark (CLI Wireshark) to capture what's going on between the client (Dreamcast) and the server (my computer). The problem is, I'm getting data but I'm not sure how to interpret it. I don't know what it's saying, but I'm sure it can be done, because private PSO servers were created, and those are far more complex.
Very simply, where can I learn how to interpret data packets, and possibly how to create packets that will respond to such queries from the client?
Thanks,
Dragos240
If you can get the source code for the server software on your PC, then that is the best place to look.
Otherwise, all you can do is look at the protocol, compare runs, and make notes of similarities and differences. With any luck, the protocol won't be encrypted.

Poor UDP broadcast performance to multiple processes on same PC

We have an application that broadcasts data using UDP from a server system to client applications running on multiple Windows XP PC's. This is on a LAN, typically Gigabit. This has been running fine for some years.
We now have a requirement to have two (or more) of the client applications running on each quad core PC, with each instance of the application receiving the broadcast data. The method I have used to implement this is to give each client PC multiple IP addresses. Each client app then connects to the server using the same port number but on a different IP. This works functionally but the performance for some reason is very poor. My data transfer rate is cut by around a factor of 10!
To get multiple IP addresses I have tried both using two NIC adapters and assigning multiple IP addresses to a single NIC in the advanced TCP/IP network properties. Both methods seem to give similarly poor performance. I also tried NICs from several different manufacturers, but that didn't help either.
One thing I did notice is that the data seems to come over more fragmented. With just a single client on a PC if I send 20kBytes of data to the client it almost always receives it all in one chunk. But with two clients running the data seems to mostly come over in blocks the size of a frame (1500 bytes) so my code has to iterate around more times. But I wouldn't expect this on its own to cause such a dramatic performance hit.
So I guess my question is does any one know why the performance is so much slower and if anything can be done to speed it up?
I know I could re-design things so that the server only sends data to one client per PC, and that client could then mirror the data on to the other clients on the same PC. But that is a major redesign and re-coding effort so I'd like to keep that as a last resort.
Instead of creating one IP address for each client, try using setsockopt() to enable the SO_REUSEADDR option for each of your sockets. This will allow all of your clients to bind to the same port on the same host address and receive the broadcast data. Should be easier to manage than the multiple NIC/IP address approach.
SO_REUSEADDR will allow broadcast and multicast sockets to share the same port and address. For more info see:
SO_REUSEADDR and UDP behavior in Windows
and
Uses of SO_REUSEADDR?

UDP multicast from specific network card

I'm looking for some networking gurus to help me with a problem. I have many computers running my software which uses UDP multicasting. This works fine if the computers are connected ONLY to one network (network A). My computer (which is also running said software) will listen on port XXXX for the multicasts. This computer has two network cards and when I connect it to another network, network B, my software goes haywire. The problem is that I do not know what network a given multicast came from. And if I send out a multicast, I cannot tell it to use network A instead of network B or vice versa.
My questions:
Is there a way to distinguish packets coming in from different networks??
Is there a way to send a multicast to network A and NOT network B?
I'm using C++ and Win32 sockets. Thanks to anyone that replies.
You should listen for multicast packets on one interface where you joined the group. You should explicitly set the interface used for sending the multicast packets (otherwise they are routed as everything else, default route, etc.). Both are accomplished via setsockopt calls. Here are some links for you:
Multicast programming - talks about setting "send" interface,
IP Multicast Extensions - talks about both "send" and "receive" interfaces.
Disclaimer: the links are admittedly Unix-centric, so your Windows mileage may vary :)
While working on a project with multicast UDP on redundant NICs over the last year, we saw a similar problem. After battling it for a bit with Winsock, our ultimate solution was to prioritize traffic using the DOS route command:
route add 224.x.x.x ... [desired gateway] METRIC 1
This ensured that the traffic only went out on the Interface we wanted.
I realize this might not be exactly what you want, but it could at least be a stopgap solution while you implement another fix.
On multihomed hosts you need to join the multicast group via all interfaces sequentially, or via all the ones you care about. If you are interested in network of origin you could use multiple M/C sockets, each bound to a different interface, same port, and each of them joined to the group; then the receiving socket itself tells you which network any incoming traffic comes from.