My PC has two gigabit Ethernet connections (NICs) - one on the motherboard, and one on a plug-in card. I've never used multiple NICs before, and I'm simply not clear on how the OS resolves which NIC to use, and at what stage it occurs. Chances are "you don't have to know" because it happens automatically... but I'd still like to know - does it happen when you call the bind() function, for example, or later during a send or receive? Is it precisely the same process prior to both send and receive? Is it the same for TCP, UDP or any other protocol? Is it different between Windows and UNIX/Linux or Mac systems?
I'm motivated to ask because I have some Winsock2 code that "was working fine", but which stopped working when I reversed the order of the send and receive on a single socket. I discovered that it only received when there was at least one packet sent first.
I'm 99% sure there will be a bug somewhere, but I'd like to be 100% sure in the unlikely case that this is a "feature", or a bug beyond my code... because the symptoms are consistent with the possibility that the receive functionality is working fine, but somehow waiting to receive on the wrong NIC.
It consults the IP routing tables to find the cheapest route, which determines the outbound NIC. This happens when you connect(). In UDP, if you don't connect(), as you usually don't, it happens on send().
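A quick way to see which local address (and therefore which NIC) the routing table selected is to connect() a UDP socket to the destination and then ask with getsockname(). This is a minimal POSIX sketch, not production code; the destination address and port are placeholders, and on Winsock the same idea applies with the usual WSAStartup boilerplate:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* connect() on a UDP socket sends nothing; it just makes the kernel
       do the route lookup, which fixes the source address / outbound NIC. */
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(80);                       /* arbitrary placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder destination */
    connect(s, (struct sockaddr *)&dst, sizeof(dst));

    /* Ask which local address the kernel picked for that route. */
    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    getsockname(s, (struct sockaddr *)&local, &len);

    char buf[INET_ADDRSTRLEN];
    printf("kernel chose source address %s\n",
           inet_ntop(AF_INET, &local.sin_addr, buf, sizeof(buf)));

    close(s);
    return 0;
}
```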
Recently we ran into what looked like a connectivity issue when a particular customer of ours installed our product. We ultimately traced it to a low MTU (~1300 bytes) being configured on one of the devices in the network. In this particular deployment, we had two Windows machines running our application communicating with one another, and their link MTUs were set at 1500.
One thing that made this particularly difficult to troubleshoot was that our application would work fine during the handshake phase (where only small requests are sent), but would sometimes fail when sending a specific ~4 KB request across the network. If it makes a difference, the application is written in C# and these are WCF messages.
What could account for this nondeterminism? I would have expected this to always fail, since the message we were sending was always larger than the link MTU perceived by the Windows client, so at least one full 1500-byte packet would be produced and run into the low-MTU device. Is there something in TCP that could make it prefer smaller packets, but only sometimes?
Some other things that we thought might be related:
1) The sockets were constantly being set up and torn down (as the application received what it interpreted as a network failure), so this doesn't appear to be related to TCP slow start.
2) I'm assuming that WCF "quickly" pushes the entire 4KB message to the socket, so there's always something to send that's larger than 1500 bytes.
3) Using Wireshark, I didn't spot any TCP retransmissions, which might have explained why only subsets of the buffer were being sent.
4) Using Wireshark, I saw a single 4 KB IP packet being sent, which perhaps indicates that TCP Segmentation Offload (TSO) is being performed by the NIC? (I'm not sure how TSO would look in Wireshark.) I never saw the 4 KB request being broken down into multiple IP packets in Wireshark, in either successful or unsuccessful instances.
5) The customer claims that there's no route between the two Windows machines that circumvents the "problematic" device with the small MTU.
Any thoughts on this would be appreciated.
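For anyone debugging something similar, one way to see what the OS currently believes the path MTU toward a peer is, is to ask the socket. This is a Linux-only sketch (the peer address and port are placeholders; Windows exposes similar functionality under differently named options):

```c
#include <arpa/inet.h>
#include <netinet/in.h>   /* IP_MTU, IP_MTU_DISCOVER, IP_PMTUDISC_DO on Linux/glibc */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    /* Set DF on our datagrams so oversized sends fail with EMSGSIZE
       instead of being fragmented. */
    int pmtu = IP_PMTUDISC_DO;
    setsockopt(s, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof(pmtu));

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9);                        /* arbitrary placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr); /* placeholder peer address */
    connect(s, (struct sockaddr *)&peer, sizeof(peer));

    /* IP_MTU is only valid on a connected socket; it reports the path MTU
       the kernel has learned so far (initially just the route's link MTU). */
    int mtu = 0;
    socklen_t len = sizeof(mtu);
    if (getsockopt(s, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        printf("current path MTU estimate: %d\n", mtu);

    close(s);
    return 0;
}
```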
I developed (in VB6) a small app that sends a UDP broadcast message (address 255.255.255.255) and then listens for the answers from the electronic devices we produce (this is to learn the IP addresses of the devices for further messaging).
This was about 6-7 years ago, and all worked well till 1 month ago.
Now the UDP messages do not leave my PC. With Wireshark I can see the UDP messages sent from other PCs, and the answers from the connected devices, but not the messages I send from my own PC.
Also, I use Comodo Firewall, and even it can't see the message going out (I deleted the related rules so that Comodo would ask permission for my program again, but the prompt only appears when it sends TCP messages). Even disabling Comodo did not solve the problem.
The Windows XP firewall is disabled and has been untouched for years.
So my guess is that a recent Windows update changed something... but what should I look for?
What could be blocking UDP calls BEFORE they reach Comodo Firewall, and how can I find out?
I have no antivirus, and just in case I uninstalled Windows Live Protection... so really I don't know where to look. I'm an experienced Windows programmer, but my API knowledge is mostly about graphics, and I'm not a network expert either (we work with microprocessors and use TCP/UDP sockets for basic communication).
Thanks
Well, I reinstalled VB6 (sigh) and discovered that, as usual, when the problem seems inexplicable the cause is often a trivial mistake.
The UDP socket was using a predefined port, and that port is now already in use. The error trapping was hiding the generated error, so I didn't know about it.
Changing the local port to 0 allows the system to pick a random free port, which is fine for my purposes.
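In socket-API terms, the fix boils down to binding to port 0 and, just as importantly, not swallowing the bind error. A rough C sketch of the idea (the VB6 Winsock control does the equivalent internally, so this is only an illustration):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(0);      /* 0 = let the OS pick a free ephemeral port */

    if (bind(s, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");             /* e.g. EADDRINUSE when a fixed port is taken */
        return 1;
    }

    /* Find out which port we actually got, so it can be advertised if needed. */
    socklen_t len = sizeof(local);
    getsockname(s, (struct sockaddr *)&local, &len);
    printf("bound to UDP port %d\n", ntohs(local.sin_port));

    close(s);
    return 0;
}
```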
I'm trying to generate ARP (Address Resolution Protocol) request packets on the iPhone and listen for the associated responses that come back.
Google searches have led me to a dead end. In order to send logical-layer packets, I'd need something along the lines of a raw socket, but I'd need super-user permissions to create one. I'm trying to avoid jailbreaking my phone.
There's lots of C code out there that can do this, but I can't find anything that translates to iOS due to the permission requirements.
I was ready to throw in the towel when I decided to Wireshark a couple of network discovery apps I have, namely "Fing" and "Pinggy" (hats off to Fing and Pinggy, btw... awesome apps!)
https://itunes.apple.com/us/app/pinggy/id562201096?mt=8
https://itunes.apple.com/us/app/fing-network-scanner/id430921107?mt=8
Running Wireshark alongside these iPhone apps shows that they do an ARP scan from XXX.XXX.X.0 all the way to XXX.XXX.X.255. I do not see any ICMP packets go out simultaneously with the ARPs. This leads me to believe that sending and receiving ARP packets is indeed possible on iOS.
I've thought about a ping sweep, assuming that it will generate ARP requests on its own. However, I will still need a raw socket to listen to the responses, correct?
Questions: What's available for sending/receiving packets at the logical layer? Specifically, for sending and receiving ARP packets? Am I missing anything fundamental?
Thanks in advance!
ARP requests do go out when I attempted to ping the problematic devices. This was seen with a Wireshark session running alongside the ping scanner. I found that I could not reproduce the "missing devices" I was seeing earlier that led me to ask my original question.
So, to answer my own question: ARP requests are sent per IP address when doing a simple ping scan on my subnet. I would see the ARP request go out (using Wireshark) as well as the ping request. If you need to generate an ARP request, simply send out a ping.
Even if the "problematic" device won't respond to ping requests, the ARP table will be notified of its existence.
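Any IP traffic to an on-link address forces the kernel to resolve the MAC first, so a plain UDP datagram works just as well as a ping for populating the ARP table. A rough sketch (the target address and port are placeholders, and whether Fing/Pinggy do exactly this is my assumption, not something I've verified):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Sending a single UDP datagram to a host on the local subnet forces the
       kernel to resolve its MAC address first, i.e. to send an ARP request.
       Whether the host ever answers the UDP packet doesn't matter; the ARP
       exchange (and the resulting cache entry) happens either way. */
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in target;
    memset(&target, 0, sizeof(target));
    target.sin_family = AF_INET;
    target.sin_port = htons(9);                           /* "discard" port, arbitrary */
    inet_pton(AF_INET, "192.168.1.42", &target.sin_addr); /* placeholder on-link host */

    char byte = 0;
    sendto(s, &byte, 1, 0, (struct sockaddr *)&target, sizeof(target));

    close(s);
    return 0;
}
```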
You can't do what you want to do and get the app into the App Store, since what you are trying to do isn't available through the public API.
One thing you could do, for testing purposes on your own network or for enterprise-distributed apps, is look into the private/undocumented APIs.
One such list is maintained at https://github.com/nst/iOS-Runtime-Headers, but I can't vouch for its accuracy.
Good luck!
I have been working on a local LAN service which uses a multicast port to coordinate several machines. Each machine listens on the multicast port for instructions, and when a certain instruction is received, will send messages directly to other machines.
In other words the multicast port is used to coordinate peer-to-peer UDP messaging.
In practice this works quite well but there is a lingering issue related to correctly setting up these peer-to-peer transmissions. Basically, each machine needs to announce on the multicast port its own IP address, so that other machines know where to send messages when they wish to start a P2P transmission.
I realize that in general the idea of identifying the local IP is not necessarily sensible, but I don't see any other way -- the local receiving IP must be announced one way or another. At least I am not working over the internet, so in general I won't need to worry about NATs; I just need to identify the local LAN IP. (No more than one hop for the multicast packets is allowed.)
I wanted to, if possible, determine the IP passively, i.e., without sending any messages.
I have been using code that calls getifaddrs(), which returns a linked list of NICs on the machine, and I scan this list for non-zero IP addresses and choose the first one.
In general this has worked okay, but we have had issues where, for example, a machine with both a wired and a wifi connection active will identify the wrong one, and the only work-around we found was to turn off the wifi.
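For reference, that selection logic looks roughly like this (a sketch rather than our actual code, with loopback filtering via the interface flags):

```c
#include <arpa/inet.h>
#include <ifaddrs.h>
#include <net/if.h>        /* IFF_LOOPBACK */
#include <netinet/in.h>
#include <stdio.h>

int main(void)
{
    struct ifaddrs *list = NULL, *ifa;
    if (getifaddrs(&list) != 0)
        return 1;

    for (ifa = list; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        if (ifa->ifa_flags & IFF_LOOPBACK)
            continue;                              /* skip 127.0.0.1 */

        struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
        char buf[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
        printf("candidate: %s on %s\n", buf, ifa->ifa_name);

        /* "Take the first one" is exactly where wired vs. wifi goes wrong:
           the order of this list carries no meaning. */
        break;
    }

    freeifaddrs(list);
    return 0;
}
```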
Now, I imagine that a more reliable solution would be to send a message to the multicast group telling other machines to report back with the source address of the message; that might allow me to identify which IP is actually visible to the other machines on the network. Alternatively, maybe even just looking at the multicast loopback message would work.
What do you think, are there any passive solutions to identify which address to use? If not, what's the best active solution?
I'm using POSIX socket API from C. Must work on Linux, OS X, Windows. (For Windows I have been using GetAdapterAddresses().)
Your question about how to get the address so you can advertise it correctly is looking at the problem from the wrong side. It's a losing proposition to try to guess what your own address is. Better to let the other side detect it itself.
When a listening machine receives a message, it is probably doing so using recvfrom(2). The fifth argument is a buffer into which the kernel will store the address of the peer, if the underlying protocol offers it. Since you are using IP/UDP, the buffer should get filled in with a sockaddr_in showing the IP address of the sender.
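A minimal sketch of the receiving side (the buffer size is arbitrary and the socket is assumed to already be bound to the multicast port):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* 'sock' is assumed to be a UDP socket already bound to the multicast port. */
void receive_one(int sock)
{
    char buf[1500];
    struct sockaddr_in peer;
    socklen_t peerlen = sizeof(peer);

    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &peerlen);
    if (n < 0)
        return;

    /* peer.sin_addr is the sender's IP as actually seen on this network,
       so the sender never has to guess and advertise its own address. */
    char addr[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &peer.sin_addr, addr, sizeof(addr));
    printf("got %zd bytes from %s:%d\n", n, addr, ntohs(peer.sin_port));
}
```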
I'd use the address on the interface I use to send the announcement multicast message -- on the wired interface announce the wired address and on the wireless interface announce the wireless address.
When all the receivers live on the wired side, they will never see the message on the wireless network.
When there is a bridge between the wired and the wireless network, add a second step in discovery for round-trip time estimation, and include a unique host ID in the announcement packet, so multiple routes to the same host can be detected and the best one chosen.
Also, it may be a good idea to add a configuration option to limit the service to certain interfaces.
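A sketch of the per-interface announcement idea: one sending socket per interface, each pinned to its interface with IP_MULTICAST_IF and announcing that interface's own address. The group address and port here are placeholders:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one announcement out of the interface whose local address is 'ifaddr',
   advertising that same address as the one to reply to. */
int announce_on(struct in_addr ifaddr)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    /* Pin outgoing multicast to this interface instead of letting the
       routing table choose one. */
    if (setsockopt(s, IPPROTO_IP, IP_MULTICAST_IF, &ifaddr, sizeof(ifaddr)) < 0) {
        close(s);
        return -1;
    }

    struct sockaddr_in group;
    memset(&group, 0, sizeof(group));
    group.sin_family = AF_INET;
    group.sin_port = htons(5007);                        /* placeholder port */
    inet_pton(AF_INET, "239.255.0.1", &group.sin_addr);  /* placeholder group */

    char msg[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &ifaddr, msg, sizeof(msg));       /* announce our own address */
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&group, sizeof(group));

    close(s);
    return 0;
}
```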
I have clients that need to all connect to a single server process. I am using UDP discovery for the clients to find the server. I have the client and server exchange IP address and port number, so that a TCP/IP connection can be established after completion of the discovery. This way the packet size is kept small. I see that this could be done in one of two ways using UDP:
1. Each client sends out its own multicast message in search of the server, which the server then responds to. The client can repeat sending this multicast message at regular intervals (in case the server is down) until the server responds.
2. The server sends out a multicast beacon at regular intervals. The clients subscribe to the multicast group and in this way receive the server's multicast message and complete the discovery.
With #1, if there are many clients then initially there would be many multicast messages transmitted (one from each client). Only the server would subscribe to and receive the multicast messages from the clients. Once the server has responded to a client, that client stops sending its multicast message. Once all clients have completed their discovery of the server, no further multicast messages are transmitted on the network. If, however, the server is down, then each client would keep sending out a multicast beacon at intervals until the server is back up and can respond.
With #2, only the server would send a multicast beacon at regular intervals. This message would end up getting routed to all clients that are subscribed to the multicast group. Once a client receives the packet, its UDP listening socket is closed and it is no longer subscribed to the multicast group. However, the server must continue to send the multicast beacon so that new clients can discover it; it would keep sending the beacon at regular intervals regardless of whether any clients are out there requiring discovery or not.
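For concreteness, my understanding of the client side of #2 is roughly the following (the group address and beacon port are placeholders; real code would also want SO_REUSEADDR and error checking):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Client side of option 2: join the group, block until one server beacon
   arrives, then drop the membership and close the socket. */
int wait_for_beacon(char *out, size_t outlen)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5007);                            /* placeholder beacon port */
    bind(s, (struct sockaddr *)&local, sizeof(local));

    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    inet_pton(AF_INET, "239.255.0.1", &mreq.imr_multiaddr);  /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    ssize_t n = recv(s, out, outlen, 0);    /* blocks until a beacon arrives */

    setsockopt(s, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq));
    close(s);
    return (int)n;
}
```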
So, I see pros and cons either way. It seems to me that #1 would result in heavier load initially, but this load eventually reduces down to zero. In #2 the server would continue sending out a beacon forever.
UDP and multicast is a fairly new topic to me, so I am interested in finding out which would be the preferred approach and which would result in less network load.
I've used option #2 in the past several times. It works well for simple network topologies. We did see some throughput problems when UDP datagrams exceeded the Ethernet MTU resulting in a large amount of fragmentation. The largest problem that we have seen is that multicast discovery breaks down in larger topologies since many routers are configured to block multicast traffic.
The issue that Greg alluded to is rather important to consider when you are designing your protocol suite. As soon as you move beyond simple network topologies, you will have to find solutions for address translation, IP spoofing, and a whole host of other issues related to the handoff from your discovery layer to your communications layer. Most of them have to do specifically with how your server identifies itself and ensuring that the identification is something that a client can make use of.
If I could do it over again (how many times have we uttered this phrase), I would look for standards-based discovery mechanisms that fit the bill and start solving the other protocol-suite problems. The last thing that you really want to do is come up with a really good discovery scheme that breaks the week after you deploy it because of some unforeseen network topology. Google "service discovery" for a starting list. I personally tend towards DNS-SD, but there are a lot of other options available.
I would recommend method #2, as it is likely (depending on the application) that you will have far more clients than you will servers. By having the server send out a beacon, you only send one packet every so often, rather than one packet for each client.
The other benefit of this method, is that it makes it easier for the clients to determine when a new server becomes available, or when an existing server leaves the network, as they don't have to maintain a connection to each server, or keep polling each server, to find out.
Both are equally viable methods.
The argument for method #1 would be that in normal principle, clients initiate requests, and servers listen and respond to them.
The argument for method #2 would be that the point of multicast is so that one host can send a packet and it can be received by many clients (one-to-many), so it's meant to be the reverse of #1.
OK, as I think about this I'm actually drawn to #2, the server-initiated beacon. The problem with #1 is this: say the clients broadcast beacons and hook up with the server, but the server then goes offline or changes its IP address.
With #2, when the server is back up and sends its first beacon, all the clients will be notified at the same time to reconnect, and your entire system is back up immediately. With #1, all of the clients would have to individually realize that the server is gone, and they would all start multicasting at the same time until they reconnected to the server. If you had 1000 clients and 1 server, your network load would literally be 1000x greater than with method #2.
I know these messages are most likely small, and 1000 packets at a time is nothing to a UDP network, but just from a design standpoint #2 feels better.
Edit: I feel like I'm developing a split-personality disorder here, but just thought of a powerful point of why #1 would be an advantage... If you ever wanted to implement some sort of natural load balancing or scaling with multiple servers, design #1 works well for this. That way the first "available" server can respond to the client's beacon and connect to it, as opposed to #2 where all the clients jump to the beaconing server.
Your option #2 has a big limitation in that it assumes the server can communicate more or less directly with every possible client. Depending on the exact network architecture of your operational system, this may not be the case. For example, you may be depending on all the routers, VPN software, WANs, NATs, and whatever other things people connect networks together with actually handling the multicast beacon packets.
With #1, you are assuming that the clients can send a UDP packet to the server. This is an entirely reasonable expectation, especially considering the very next thing the client will do is make a TCP connection to the same server.
If the server goes down and the client wants to find out when it's back up, be sure to use exponential backoff; otherwise you will take the network down with a packet storm someday!
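A sketch of the backoff loop meant here; send_probe() and got_reply() are placeholders for the application's actual multicast send and receive, and the base interval and cap are arbitrary:

```c
#include <stdlib.h>
#include <unistd.h>

/* Placeholders for the application's actual discovery primitives. */
void send_probe(void);                 /* multicast the "where is the server?" query */
int  got_reply(unsigned timeout_sec);  /* wait up to timeout_sec for an answer */

/* Retry discovery with exponential backoff plus jitter, so a fleet of clients
   doesn't hammer the network in lockstep while the server is down. */
void discover_with_backoff(void)
{
    unsigned delay = 1;                /* first retry after roughly 1 second */
    const unsigned max_delay = 64;     /* cap the interval */

    for (;;) {
        send_probe();
        if (got_reply(delay))
            break;
        sleep(delay + (unsigned)(rand() % (delay + 1)));   /* backoff + jitter */
        if (delay < max_delay)
            delay *= 2;
    }
}
```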