After physically pulling the cable and reconnecting it, pcap (I am programming it in C) produces packets which are most likely not really there and misses all the "normal" traffic that is going on. I have two nodes on the network which keep exchanging pure Ethernet frames and are completely undisturbed by me pulling the cable from the sniffer node; after I reconnect, their traffic is no longer seen by pcap. I am using plain vanilla pcap_loop() without any filter or timeout, and pcap_loop() does not terminate when I do this. Does the handle to the interface (the pcap_t descriptor) become invalid when there is no physical connection? Does anyone know how pcap reacts to a disconnected interface?
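For reference, a minimal sketch of the kind of "plain vanilla" capture loop described above; the interface name and snap length below are placeholders, not taken from the question:

```c
#include <pcap.h>
#include <stdio.h>

/* Called by pcap_loop() for every captured packet. */
static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes\n", hdr->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* "eth0" is a placeholder; use the interface you are sniffing on. */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 0, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* No filter, no timeout handling: blocks until an error occurs. */
    if (pcap_loop(handle, -1, on_packet, NULL) == -1)
        fprintf(stderr, "pcap_loop: %s\n", pcap_geterr(handle));

    pcap_close(handle);
    return 0;
}
```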
I am trying to run a simulation to test packet loss in an environment where packet collisions happen. My current setup includes several discrete machines, each with its own network interface to send/receive packets. These machines are connected over Wi-Fi through an AP. I'm currently using UDP for its ability to broadcast packets to a single address. All machines are listening on a shared broadcast address, something like 192.168.1.255.
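For concreteness, the kind of broadcast sender described above might look like this minimal sketch (the address, port, and payload are placeholders):

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Broadcasting must be enabled explicitly. */
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof on) < 0) {
        perror("setsockopt(SO_BROADCAST)");
        return 1;
    }

    /* Subnet broadcast address and port are placeholders. */
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);
    inet_pton(AF_INET, "192.168.1.255", &dst.sin_addr);

    const char msg[] = "hello";
    if (sendto(fd, msg, sizeof msg, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```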
This answer mentions that UDP packets are unreliable, but will they fail because of a collision? Here, I use "collision" to refer to interference caused by multiple simultaneous transmissions. That is, will the simultaneous broadcast of two UDP nodes in the network induce the unreliability I am looking to test? If not, will I have to look into changing my network configuration, or even start tinkering with kernel code?
In case the question seems vague: my end goal is to write a distributed algorithm that may or may not be resistant to collisions.
I am trying to run a simulation to test packet loss in an environment
where packet collision is happening.
You might want to include in your question what you mean by the word collision. I'm going to assume in my answer that you mean it in the traditional sense (i.e. two network endpoints transmitting at approximately the same time and thereby "talking over each other" and garbling each other's transmissions such that neither transmission is successful), and not in any broader sense of "a packet got dropped due to network congestion".
This answer mentions that UDP packets are unreliable, but will they
fail because of a collision?
The answer is going to depend entirely on what sort of network hardware you are running your UDP packets over. The UDP protocol itself is hardware-independent, so it's not going to specify anything about whether collisions can occur or not, since there's no way for it to know.
That said, most low-level networking hardware these days has provisions for avoiding collisions (in the sense I mentioned above). For example, modern Ethernet switches do a limited amount of active queueing/buffering of packets when necessary, which is much more efficient and reliable than the old 10 Mb/sec Ethernet hubs, which basically just electrically connected the Ethernet RX and TX leads of all the endpoints into one big "shared wire" and hoped for the best.
The other commonly used networking-hardware type, Wi-Fi, also has mechanisms to reduce collisions, but that doesn't mean that UDP broadcast over Wi-Fi is a good idea, because it suffers from other issues -- for one thing, the Wi-Fi router has to receive your broadcast packet and rebroadcast it to make sure all other clients can receive it, and worse, it will typically be set to retransmit it at a very slow "legacy" rate, in order to make sure that any ancient Wi-Fi cards out there can still receive the broadcast data. My advice is that if you're going to be using Wi-Fi, keep your broadcast (and multicast) transmissions to an absolute minimum; even sending separate/identical unicast packets to every other client is usually more efficient(!) -- not to avoid collisions, but rather because even a modest amount of broadcast/multicast traffic can bring your Wi-Fi network to a crawl.
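To make that unicast alternative concrete, a minimal sketch that sends the same payload to each known peer in turn (the peer list, port, and payload are placeholders):

```c
#include <arpa/inet.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical list of peer addresses; replace with your clients. */
    const char *peers[] = { "192.168.1.10", "192.168.1.11", "192.168.1.12" };
    const char msg[] = "hello";

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    for (size_t i = 0; i < sizeof peers / sizeof peers[0]; i++) {
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(5000);   /* placeholder port */
        inet_pton(AF_INET, peers[i], &dst.sin_addr);

        /* Same payload, one unicast datagram per peer. */
        if (sendto(fd, msg, sizeof msg, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");
    }

    close(fd);
    return 0;
}
```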
UDP is said to be unreliable because it does not guarantee packet delivery, retransmission, flow control, or congestion control. So the sending/receiving of UDP packets can fail for many reasons: collision, an unreliable physical medium, interference, packets dropped due to router queue overflow, etc.
I have some (very) old software written in C that was used for two devices communicating via serial cable (RS232) - both sending and receiving messages.
Now the old devices are to be replaced by new modern ones that do not have serial ports, but only Ethernet.
Hence, the request now is to convert the old serial communication to UDP communication (C++ is the choice for the moment).
So, I have some questions about this "conversion":
1) Suppose there are two peers A and B. Should I implement a server and a client for each peer, i.e.: serverA+clientA (for device A) and serverB+clientB (for device B)? Or is there some other/different approach?...
2) The old serial communication had a CRC, probably to ensure some reliability. Is a CRC also necessary to implement (in my custom messages) on top of UDP communication, or not?
Thanks in advance for your time and patience.
1) UDP is a connectionless protocol, so there are no rigid client and server roles here. You simply have some code that handles receiving and some code that facilitates sending.
2) You don't need a CRC for UDP. First, there's an FCS (CRC32) in each Ethernet frame. Then, there's a header checksum in IP packets. Finally, a checksum is already included in the UDP datagram itself!
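A minimal sketch of the symmetric arrangement described in 1): each device binds a UDP socket on a known port and both sends and receives on it. The addresses and ports below are placeholders:

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Bind to a local port so the other device knows where to send. */
    struct sockaddr_in me = {0};
    me.sin_family = AF_INET;
    me.sin_port = htons(6000);          /* placeholder port */
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(fd, (struct sockaddr *)&me, sizeof me) < 0) {
        perror("bind");
        return 1;
    }

    /* Send a message to the peer (address/port are placeholders)... */
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(6000);
    inet_pton(AF_INET, "192.168.0.2", &peer.sin_addr);
    const char msg[] = "ping";
    sendto(fd, msg, sizeof msg, 0, (struct sockaddr *)&peer, sizeof peer);

    /* ...and receive whatever arrives, from whoever sent it. */
    char buf[1500];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof from;
    ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                         (struct sockaddr *)&from, &fromlen);
    if (n >= 0)
        printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}
```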
Please also consider the following things:
In everyday life, COM ports are long gone from the physical world, but they're still with us in virtual form (even Android phones have COM ports). There are a lot of solutions for doing COM over USB/TCP/whatever; some of them are PC apps, some are implemented in hardware (see Arduino's COM over USB).
When a UDP datagram fails the checksum test, it is (usually) dropped silently. So with UDP you have no built-in way to distinguish between "nothing was received" and "we received something, but it was not valid". Check UDP-Lite if you want to handle these situations at the application level (it should simplify the porting process, I believe); see the sketch after this list.
The default choice for transferring data is TCP, because it provides reliable delivery. UDP is recommended when you care about real-time behavior and can tolerate some data loss, or when you need to conserve resources.
Choose TCP if you are going to send large amounts of data, or be ready to handle congestion yourself. Choose TCP if you plan to go wireless in the future, or be ready to handle periodic, significant packet loss.
If your devices are really tiny or already busy with other work, it is also possible to operate directly at layer 2 (raw Ethernet).
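Regarding the UDP-Lite suggestion above: on Linux it is selected as a separate socket protocol, and the checksum coverage can be limited to just the header, so corrupted payload is delivered to the application rather than silently dropped. A sketch, assuming a Linux host (the coverage value is a placeholder):

```c
#include <linux/udplite.h>   /* UDPLITE_SEND_CSCOV / UDPLITE_RECV_CSCOV */
#include <netinet/in.h>      /* IPPROTO_UDPLITE */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* IPPROTO_UDPLITE selects UDP-Lite instead of plain UDP (Linux). */
    int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDPLITE);
    if (fd < 0) { perror("socket(IPPROTO_UDPLITE)"); return 1; }

    /* Checksum only the first 8 bytes (the UDP-Lite header) of what we
       send; corrupted payload is then passed up instead of dropped. */
    int cscov = 8;
    if (setsockopt(fd, IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV,
                   &cscov, sizeof cscov) < 0)
        perror("setsockopt(UDPLITE_SEND_CSCOV)");

    /* Accept incoming datagrams whose checksum covers at least 8 bytes. */
    if (setsockopt(fd, IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV,
                   &cscov, sizeof cscov) < 0)
        perror("setsockopt(UDPLITE_RECV_CSCOV)");

    /* ...bind/sendto/recvfrom as with a normal UDP socket... */
    close(fd);
    return 0;
}
```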
We keep hearing about the unreliability of UDP: a packet may arrive, not arrive, or arrive out of order (i.e. be delayed).
Where is it held until delivered?
Since it's connectionless, if you keep sending packets without a network connection, where do they go? The driver buffer?
Similarly, when the receiver is not reachable, is the packet immediately lost, or does it float around for a bit expecting the host to become available soon? If yes, then where?
On a direct connection from one device to another, with no intervening devices, there shouldn't be a problem. Where you can run into problems is where you go through a bunch of switches and routers (like the Internet).
A few reasons:
If a switch drops a frame, there is no mechanism to resend the frame.
Routers will buffer packets when they get congested, and packets can be dropped if the buffers are full, or they may be purposely dropped to prevent congestion.
Load balancing can cause packets to be delivered out of order.
You have no control over anything outside your network.
Where is it held until delivered?
Packet buffering can occur if packets arrive faster than the device can read them. Buffering can happen in the device's NIC, in the device driver's software queue, or in the software queue between the driver and the stack. But if the arrival rate is so high that these buffering mechanisms cannot absorb it, packets will be dropped at whichever layer/location overflows (depending on the design).
Since its connection less if you keep sending packets without a
network connection where will it go? Driver buffer?
If there is no network in between, there are no other intermediate network devices, and hence there should not be significant problems. But it also depends on your architecture / design / configuration. If the configured internal OS receive buffer limit / socket buffer size (SO_RCVBUF, rmem_max, rmem_default) is exceeded, there can be drops here. If the software queue in the device driver overflows, or the software queue between the device driver and the stack overflows, there can be drops there as well. Also, if the CPU is busy with a higher-priority task and thereby suspends reception, there can be drops.
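As an illustration of the socket-buffer limit mentioned above, the per-socket receive buffer can be inspected and raised with SO_RCVBUF. The size below is a placeholder; on Linux the effective value is still capped by rmem_max:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Ask for a larger receive buffer (placeholder size). */
    int want = 1 << 20;   /* 1 MiB */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof want) < 0)
        perror("setsockopt(SO_RCVBUF)");

    /* Read back what the kernel actually granted (Linux caps this at
       rmem_max and reports roughly double the requested value). */
    int got = 0;
    socklen_t len = sizeof got;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
        printf("effective SO_RCVBUF: %d bytes\n", got);

    close(fd);
    return 0;
}
```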
Similarly when the receiver is not reachable is the packet immediately
lost or does it float around a bit expecting host to be available
soon? if yes then where?
If there is no reachable destination, the packet will be dropped by a router.
Also note that a router will drop the packet if its TTL/hop-limit count (in the IP header) has reached zero by the time the packet arrives there.
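For completeness, the TTL that routers decrement is set by the sending host and can be adjusted per socket. A small sketch (the value below is just a typical default used as a placeholder):

```c
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Each router decrements this; at zero the packet is dropped.
       64 is a common default, used here as a placeholder. */
    int ttl = 64;
    if (setsockopt(fd, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl) < 0)
        perror("setsockopt(IP_TTL)");

    close(fd);
    return 0;
}
```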
Recently we ran into what looked like a connectivity issue when a particular customer of ours installed our product. We ultimately traced it to a low MTU (~1300 bytes) being configured on one of the devices in the network. In this particular deployment, we had two Windows machines running our application communicating with one another, and their link MTUs were set at 1500.
One thing that made this particularly difficult to troubleshoot was that our application would work fine during the handshake phase (where only small requests are sent), but would sometimes fail when sending a specific request of size ~4KB across the network. If it makes a difference, the application is written in C# and these are WCF messages.
What could account for this indeterminism? I would have expected this to always fail, as the message size we were sending was always larger than the link MTU perceived by the Windows client, which would lead to at least one full 1500-byte packet, which would lead to problems. Is there something in TCP that could make it prefer smaller packets, but only sometimes?
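As an aside, when diagnosing this kind of issue on a Linux host you can ask the kernel what path MTU it has currently discovered for a connected socket; the machines here are Windows, so this is illustrative only, and IP_MTU is Linux-specific. The helper below is hypothetical:

```c
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Query the path MTU the kernel currently associates with a *connected*
   socket (Linux-specific IP_MTU option; illustrative only). */
int query_path_mtu(int connected_fd)
{
    int mtu = 0;
    socklen_t len = sizeof mtu;
    if (getsockopt(connected_fd, IPPROTO_IP, IP_MTU, &mtu, &len) < 0) {
        perror("getsockopt(IP_MTU)");
        return -1;
    }
    printf("path MTU: %d bytes\n", mtu);
    return mtu;
}
```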
Some other things that we thought might be related:
1) The sockets were constantly being set up and torn down (as the application received what it interpreted as a network failure), so this doesn't appear to be related to TCP slow start.
2) I'm assuming that WCF "quickly" pushes the entire 4KB message to the socket, so there's always something to send that's larger than 1500 bytes.
3) Using WireShark, I didn't spot any TCP retransmissions which might explain why only subsets of the buffer were being sent.
4) Using WireShark, I saw a single 4KB IP packet being sent, which perhaps indicates that TCP Segmentation Offload (TSO) is being done by the NIC? (I'm not sure how TSO would look in WireShark.) I didn't see the 4KB request being broken down into multiple IP packets in WireShark, in either successful or unsuccessful instances.
5) The customer claims that there's no route between the two Windows machines that circumvents the "problematic" device with the small MTU.
Any thoughts on this would be appreciated.
I have an Arduino Uno R3 with an Arduino WiFi shield. The WiFi shield has the most current firmware (V1.1.0). I am trying to send a packet to the router that is about 900 bytes (the packet is for setting up a UPnP port map). This packet is stored in program memory to conserve SRAM. Using strcat_P, I can pull the packet from memory into a buffer and send it using the WiFiClient library (TCP).
The problem is that I can't send the whole packet. For testing, I just send the packet to my computer located on the same LAN where I use a packet sniffer to view the packet. Using WiFiClient.write(), I get differing performance depending on the size of the buffer I use. I seem to get the best performance calling WiFiClient.write() with a buffer size of 80 repeatedly until the whole packet has been "sent". Anything greater than about 80 will cause blank packets on the other end. However, with 80, I usually only see about 500 bytes get transmitted. The packet always gets cut off at an arbitrary point. Does anyone know what could be causing this?
I've done a lot of Googling, and I see others having similar problems. I have never run across a solution, though.
I know this is old, but I recently found this article which addresses the issue you are describing.
tl;dr - You can only write 90 bytes at a time to the WiFi shield's buffer.
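A sketch of that workaround in plain C: split the buffer into chunks no larger than the limit and transmit each chunk in turn. Here send_chunk is a hypothetical stand-in for the real transmit call (on the Arduino that would be WiFiClient.write), and the 90-byte figure is the limit reported by the article above:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the real transmit call (e.g. WiFiClient.write on the
   Arduino); here it just reports how much it was asked to send. */
static size_t send_chunk(const char *data, size_t len)
{
    (void)data;
    printf("sending %zu bytes\n", len);
    return len;
}

/* Send `len` bytes, at most `chunk` bytes at a time. */
static void send_in_chunks(const char *data, size_t len, size_t chunk)
{
    size_t sent = 0;
    while (sent < len) {
        size_t n = len - sent;
        if (n > chunk)
            n = chunk;
        sent += send_chunk(data + sent, n);
    }
}

int main(void)
{
    char packet[900];
    memset(packet, 'x', sizeof packet);   /* stand-in for the UPnP request */
    send_in_chunks(packet, sizeof packet, 90);
    return 0;
}
```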