Arduino WiFi Shield doesn't send whole TCP packet

I have an Arduino Uno R3 with an Arduino WiFi shield. The WiFi shield has the most current firmware (V1.1.0). I am trying to send a packet to the router that is about 900 bytes (the packet is for setting up a UPnP port map). This packet is stored in program memory to conserve SRAM. Using strcat_P, I can pull the packet from memory into a buffer and send it using the WiFiClient library (TCP).
The problem is that I can't send the whole packet. For testing, I just send the packet to my computer located on the same LAN where I use a packet sniffer to view the packet. Using WiFiClient.write(), I get differing performance depending on the size of the buffer I use. I seem to get the best performance calling WiFiClient.write() with a buffer size of 80 repeatedly until the whole packet has been "sent". Anything greater than about 80 will cause blank packets on the other end. However, with 80, I usually only see about 500 bytes get transmitted. The packet always gets cut off at an arbitrary point. Does anyone know what could be causing this?
I've done a lot of Googling, and I see others having similar problems, but I have never run across a solution.

I know this is old, but I recently found this article which addresses the issue you are describing.
tl;dr - You can only write 90 bytes at a time to the wifi shield's buffer
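For anyone hitting this now, a chunked-write workaround looks roughly like the following (a sketch, untested on real hardware: the 80-byte chunk size comes from the question above, and the helper name and the 20 ms drain delay are my own assumptions):
#include <SPI.h>
#include <WiFi.h>
#include <avr/pgmspace.h>

// Send `len` bytes of a PROGMEM buffer over `client`, never handing the
// shield more than CHUNK bytes at a time.
void sendProgmemChunked(WiFiClient &client, PGM_P src, size_t len) {
  const size_t CHUNK = 80;               // stay under the shield's ~90-byte limit
  char buf[CHUNK];
  size_t sent = 0;
  while (sent < len) {
    size_t n = (len - sent < CHUNK) ? (len - sent) : CHUNK;
    memcpy_P(buf, src + sent, n);        // pull this chunk out of flash
    size_t w = client.write((const uint8_t *)buf, n);
    if (w == 0) break;                   // the shield refused the write; bail out
    sent += w;
    delay(20);                           // give the shield time to drain (assumption)
  }
}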

Related

How long does a UDP packet keep floating, and where?

We keep hearing about the unreliability of UDP: a packet may arrive, fail to arrive, or just arrive out of order (implying delay).
Where is it held until delivered?
Since it's connectionless, if you keep sending packets without a network connection, where do they go? The driver buffer?
Similarly, when the receiver is not reachable, is the packet immediately lost or does it float around a bit expecting the host to be available soon? If yes, then where?
On a direct connection from one device to another, with no intervening devices, there shouldn't be a problem. Where you can run into problems is when you go through a bunch of switches and routers (like the Internet).
A few reasons:
If a switch drops a frame, there is no mechanism to resend the frame.
Routers will buffer packets when they get congested, and packets can be dropped if the buffers are full, or they may be purposely dropped to prevent congestion.
Load balancing can cause packets to be delivered out of order.
You have no control over anything outside your network.
Where is it held until delivered?
Packet buffering can occur if packets arrive faster than the device can read them. Buffering can happen at the device's NIC, in the device driver's software queue, or in the software queue between the driver and the network stack. But if the arrival rate is so high that these buffering mechanisms cannot keep up, packets will be dropped at the appropriate layer/location (based on the design).
Since it's connectionless, if you keep sending packets without a network connection, where do they go? The driver buffer?
If there is no network, there might be no other intermediate network devices involved, and hence there should not be significant problems; but it also depends on your architecture, design, and configuration. If the OS's internal receive buffer limit / socket buffer size (SO_RCVBUF, rmem_max, rmem_default) is exceeded, drops can occur there. If the software queue in the device driver overflows, or the software queue between the device driver and the stack overflows, drops can occur there. And if the CPU is busy with a higher-priority task whereby it suspends reception, drops can occur there too.
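For illustration, this is roughly how you would grow a UDP socket's receive buffer with SO_RCVBUF on a POSIX system so that bursts are less likely to be dropped in the kernel (a minimal sketch; the 1 MiB request is arbitrary, and Linux silently caps the result at rmem_max):
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int want = 1 << 20;  // ask for 1 MiB
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want));
    int got = 0;
    socklen_t len = sizeof(got);
    getsockopt(s, SOL_SOCKET, SO_RCVBUF, &got, &len);
    printf("receive buffer is now %d bytes\n", got);  // Linux reports twice the requested value
    close(s);
    return 0;
}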
Similarly, when the receiver is not reachable, is the packet immediately lost or does it float around a bit expecting the host to be available soon? If yes, then where?
If there is no reachable destination, the packet will be dropped by a router.
Note also that a router will drop the packet if its TTL/hop limit (in the IP header) has reached zero by the time the packet arrives at that router.

Pcap producing strange packets after un- and replugging cable

After physically pulling the cable and reconnecting it, pcap (I am programming it in C) produces packets which are most likely not really there, and misses all of the "normal" traffic that is going on. I have two nodes on the network which keep exchanging plain Ethernet frames and are 100% undisturbed by me pulling the cable from the sniffer node; after I reconnect, their traffic is no longer seen by pcap. I am using plain vanilla pcap_loop() without any filter or timeout, and pcap_loop() doesn't terminate when I do this. Does the handle to the interface (the pcap_t descriptor) become invalid when there is no physical connection? Does anyone know how pcap reacts to a disconnected interface?
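For reference, the "plain vanilla" setup described above looks roughly like this (a sketch; the device name "eth0" is an assumption):
#include <pcap.h>
#include <cstdio>

static void handler(u_char *, const struct pcap_pkthdr *h, const u_char *) {
    printf("captured %u bytes\n", h->len);
}

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_loop(p, -1, handler, nullptr);  // loops forever; does not return on link loss
    pcap_close(p);
    return 0;
}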

Fragmentation of IPv6 using BSD sockets

I'm writing a PMTUD app for both IPv4 and v6. I am doing this on Ubuntu 12.04, but I would like to make it as OS-independent as possible, and that's where I stumbled upon a problem.
IPv6 packets get fragmented by the sender by default, and I do not know how to turn this behaviour off. I found some socket options like IPV6_MTU_DISCOVER and IPV6_DONTFRAG, but only under linux/in6.h, which does not help as I'm using the netinet header family, and neither of them is under netinet/in.h - although IPV6_MTU_DISCOVER should be there according to this. Am I missing something?
EDIT: Let me clarify a bit then.
I have a socket(AF_INET6, SOCK_RAW, IPPROTO_ICMPV6) through which I wish to send an ICMPv6 packet of such size that I will receive a reply telling me it's too big, and from that reply I will get the path MTU.
However, to truly get the MTU along the whole path I also have to factor in the outgoing device's MTU.
I am using miredo to tunnel IPv6, which has its MTU set to the minimum, i.e. 1280. Sending a packet bigger than 1280 results in fragmentation of said packet (behaviour I observed in Wireshark), but I need the socket to REFUSE to send the packet and inform me about it, rather than fragment it.
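On platforms that implement RFC 3542, the way to ask for this is the IPV6_DONTFRAG socket option, sketched below (availability under netinet/in.h varies by OS, which is exactly the portability problem described above; raw ICMPv6 sockets also need root):
#include <sys/socket.h>
#include <netinet/in.h>
#include <cstdio>

int main() {
    int s = socket(AF_INET6, SOCK_RAW, IPPROTO_ICMPV6);
    if (s < 0) { perror("socket"); return 1; }
#ifdef IPV6_DONTFRAG
    int on = 1;
    if (setsockopt(s, IPPROTO_IPV6, IPV6_DONTFRAG, &on, sizeof(on)) < 0)
        perror("IPV6_DONTFRAG");
    // With this set, sending a datagram larger than the path MTU should fail
    // with EMSGSIZE instead of being fragmented by the local stack.
#else
    fprintf(stderr, "IPV6_DONTFRAG is not defined on this platform\n");
#endif
    return 0;
}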
You do not need to do this yourself. MTU discovery is supposed to happen automatically. As a side effect of this, all devices along the path MUST allow ICMPv6 packets to pass.
IPv6 packets get fragmented by the sender by default
No. In IPv6, only the sending host fragments packets; intermediate routers never do. TCP avoids IP fragmentation in the first place by segmenting its byte stream to fit within the MSS.
, and I do not know how to turn this behaviour off.
You cannot usefully turn it off. You can certainly try, but the only result will be non-delivery: an IPv6 router that cannot forward an oversized packet will not fragment it, it will drop it and send back an ICMPv6 Packet Too Big message. The sending host still has to fit its data within the path MTU, fragmenting at the source if necessary. If you write the receiver correctly, i.e. in the expectation that it is reading a byte stream rather than discrete messages, it should make no difference to you whether the data was fragmented or not.

How to speed up slow / laggy Windows Phone 7 (WP7) TCP Socket transmit?

Recently, I started using the System.Net.Sockets class introduced in the Mango release of WP7 and have generally been enjoying it, but have noticed a disparity in the latency of transmitting data in debug mode vs. running normally on the phone.
I am writing a "remote control" app which transmits a single byte to a local server on my LAN via Wifi as the user taps a button in the app. Ergo, the perceived responsiveness/timeliness of the app is highly important for a good user experience.
With the phone connected to my PC via USB cable and running the app in debug mode, the TCP connection seems to transmit packets as quickly as the user taps buttons.
With the phone disconnected from the PC, the user can tap up to 7 buttons (and thus cause 7 "send" commands with 1-byte payloads) before all 7 bytes are actually sent. If the user taps a button and waits a little between taps, there seems to be a latency of 1 second.
I've tried setting Socket.NoDelay to both True and False, and it seems to make no difference.
To see what was going on, I used a packet sniffer to see what the traffic looked like.
When the phone was connected via USB to the PC (which was using a Wifi connection), each individual byte was in its own packet being spaced ~200ms apart.
When the phone was operating on its own Wifi connection (disconnected from USB), the bytes still had their own packets, but they were all grouped together in bursts of 4 or 5 packets and each group was ~1000ms apart from the next.
btw, Ping times on my Wifi network to the server are a low 2ms as measured from my laptop.
I realize that buffering "sends" together probably allows the phone to save energy, but is there any way to disable this "delay"? The responsiveness of the app is more important than saving power.
This is an interesting question indeed! I'm going to throw my 2 cents in but please be advised, I'm not an expert on System.Net.Sockets on WP7.
Firstly, performance testing while in the debugger should be ignored. The reason is that the additional overhead of logging stack traces always slows applications down, no matter the OS/language/IDE. Applications should be profiled for performance in release mode, disconnected from the debugger. In your case it's actually slower when disconnected! OK, so let's try to optimise that.
If you suspect that packets are being buffered (and this is a reasonable assumption), have you tried sending a larger packet? Try linearly increasing the packet size and measuring the latency. Could you write a simple micro-profiler in code on the device, i.e. using DateTime.Now or the Stopwatch class, to log latency vs. packet size? Plotting that graph might give you some good insight as to whether your theory is correct. If you find that 10-byte (or even 100-byte) packets get sent instantly, then I'd suggest simply pushing more data per transmission. It's a lame hack, I know, but if it ain't broke...
Finally, you say you are using TCP. Can you try UDP instead? TCP is designed for accurate, not real-time, communication. UDP, by contrast, is not error-checked and you can't guarantee delivery, but you can expect faster (more lightweight, lower-latency) performance from it. Applications such as Skype and online games are built on UDP, not TCP. If you really need acknowledgement of receipt, you could always build your own micro-protocol over UDP, using your own cyclic redundancy check for error checking and a request/response (acknowledgement) protocol.
Such protocols do exist; take a look at Reliable UDP, discussed in this previous question. There is a Java-based implementation of RUDP around, and I'm sure some parts could be ported to C#. Of course the first step is to test whether UDP actually helps!
Found this previous question which discusses the issue. Perhaps a WP7 issue?
Poor UDP performance with Windows Phone 7.1 (Mango)
Still would be interested to see if increasing packet size or switching to UDP works
OK, so neither suggestion worked. I found this description of the Nagle algorithm, which groups packets as you describe. Setting NoDelay is supposed to help but, as you say, doesn't.
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.nodelay.aspx
Also, see this previous question, where Keepalive and NoDelay were toggled on/off to manually flush the queue. The evidence there is anecdotal, but it's worth a try. Can you give it a go and edit your question to post more up-to-date results?
Socket "Flush" by temporarily enabling NoDelay
Andrew Burnett-Thompson has already mentioned it here, but he also wrote that it didn't work for you. I do not understand and I do not see WHY. So, let me explain that issue:
Nagle's algorithm was introduced to avoid a scenario in which many small packets have to be sent through a TCP network. Any current state-of-the-art TCP stack enables Nagle's algorithm by default!
Because: TCP itself adds a substantial amount of overhead to any data transfer passing through an IP connection, and applications usually do not care much about sending their data in an optimized fashion over those TCP connections. So, after all, the Nagle algorithm working inside the TCP stack of the OS does a very, very good job.
A better explanation of Nagle's algorithm and its background can be found on Wikipedia.
So, your first try: disable Nagle's algorithm on your TCP connection, by setting option TCP_NODELAY on the socket. Did that already resolve your issue? Do you see any difference at all?
If not, give me a sign and we will dig further into the details.
But please look twice for those differences: check the details. Maybe, after all, you will get an understanding of how things in your OS's TCP/IP stack actually work.
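For comparison, this is all the NoDelay property does at the BSD-sockets level (a generic sketch, not WP7 code):
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main() {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    // Disable Nagle's algorithm: small writes go out immediately instead of
    // being coalesced while waiting for ACKs of earlier data.
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    close(s);
    return 0;
}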
Most likely it is not a software issue. If the phone is using WiFi, the delay could be upwards of 70ms (depending on where the server is, how much bandwidth it has, how busy it is, interference to the AP, and distance from the AP), but most of the delay is simply the WiFi. GSM, CDMA, LTE or whatever cellular data technology the phone uses is even slower; I wouldn't imagine you'd get much below 110ms on a cellular connection unless you stood underneath a cell tower.
Sounds like your reads/writes are buffered. You may try setting the NoDelay property on the Socket to true, and you may consider trimming the Send and Receive buffer sizes as well. The reduced responsiveness may be a by-product of there not being enough WiFi traffic; I'm not sure whether adjusting the MTU is an option, but reducing it may improve response times.
All of these are only options for a low-bandwidth solution; if you intend to shovel megabytes of data in either direction, you will want larger buffers over WiFi, large enough to compensate for transmit latency, typically in the range of 32K-256K.
var socket = new System.Net.Sockets.Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
{
    NoDelay = true,         // disable Nagle so small writes go out immediately
    SendBufferSize = 3,     // deliberately tiny buffers so nothing sits in a queue
    ReceiveBufferSize = 3,
};
I didn't test this, but you get the idea.
Have you tried setting SendBufferSize = 0? In C, you can disable Winsock send buffering by setting SO_SNDBUF to 0, and I'm guessing SendBufferSize means the same in C#.
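At the C level that experiment looks like this (a sketch; a send buffer of 0 disabling buffering is Winsock-specific behaviour, other stacks simply clamp the value to a minimum):
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int zero = 0;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &zero, sizeof(zero));
    close(s);
    return 0;
}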
Were you using a Lumia 610 and a MikroTik access point by any chance?
I have experienced this problem; it made the Lumia 610 turn off its WiFi radio as soon as the last connection was closed. This added a perceivable delay compared to, for example, the Lumia 800. All connections were affected; simply switching WiFi off made all apps faster. My admin says it was some feature the MikroTiks were not supporting at the time, combined with WMM settings. Strangely, most other phones managed just fine, so at the beginning we blamed the cheapness of the 610.
If you can still replicate the problem, I suggest trying the following:
open another connection in the background and ping it all the time.
use 3g/gprs instead of wifi (requires exposing your server to the internet)
use different (or upgraded) phone
use different (or upgraded) AP

Sending a huge amount of real time processed data via UDP to iPhone from a server

I'm implementing a remote application. The server will process and render data in real time as an animation (a series of images, to be precise). Each time an image is rendered, it will be transferred to the receiving iPhone client via UDP.
I have studied some UDP and I am aware of the following:
A UDP datagram has a maximum size of about 65 KB.
However, it seems that the iPhone can only receive UDP packets of up to about 41 KB; it does not appear to be able to receive anything larger.
When sending multiple packets, many packets are dropped, apparently due to overloading the UDP processing.
Reducing the packet size reduces the number of dropped packets, but it means more packets have to be sent.
I have never written a real, practical UDP application before, so I need some guidance on efficient UDP communication. In this case, we are talking about transferring rendered images from the server in real time, to be displayed on an iPhone.
Compressing the data seems mandatory, but in this question I would like to focus on the UDP part. Normally, when we implement UDP applications, what are the best practices for efficient UDP programming if we need to send a lot of data non-stop in real time?
Assuming that you have a very specific and good reason for using UDP, and that you need all your data to arrive (i.e. you can't tolerate any lost data), then there are a few things you need to do (this assumes a unicast application; a small sketch of the first two points follows the list):
Add a sequence number to the header for each packet
Ack each packet
Set up a retransmit timer which resends the packet if no ack recv'ed
Track latency RTT ( round trip time ) so you know how long to set your timers for
Potentially deal with out of order data arrival if that's important to your app
Increase receive buffer size on client socket.
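As promised, a bare-bones sketch of the first two points (all names are invented; byte-order conversion, the retransmit timers, and the actual socket I/O are omitted):
#include <cstddef>
#include <cstdint>
#include <cstring>

struct RudpHeader {
    uint32_t seq;    // sequence number, one per datagram
    uint8_t  isAck;  // 1 if this datagram merely acknowledges `seq`
};

// Sender side: prepend a header carrying the next sequence number.
size_t buildData(uint32_t seq, const uint8_t *payload, size_t len, uint8_t *out) {
    RudpHeader h{seq, 0};
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, payload, len);
    return sizeof h + len;
}

// Receiver side: answer every data packet with a tiny ack datagram; the sender
// retransmits any sequence number whose ack has not arrived before its timer fires.
size_t buildAck(uint32_t seq, uint8_t *out) {
    RudpHeader h{seq, 1};
    memcpy(out, &h, sizeof h);
    return sizeof h;
}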
Also, you could be sending so fast that packets are dropped internally on the sending machine without ever getting out of the NIC onto the wire. On certain systems, calling select for writability on the sending socket can help with this. Also, calling connect on the UDP socket can improve performance, leading to fewer dropped packets.
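Connecting a UDP socket looks like this, for reference (a sketch; the address and port are placeholders):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                         // placeholder port
    inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr);  // placeholder address
    connect(s, (sockaddr *)&dst, sizeof dst);  // fixes the peer; no handshake happens
    send(s, "ping", 4, 0);                     // can now use send() instead of sendto()
    close(s);
    return 0;
}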
Basically, if you need guaranteed, in-order delivery of your data, then you are going to end up re-implementing TCP on top of UDP. If the only reason you use UDP is latency, then you can probably just use TCP and disable the Nagle algorithm. If you want packetized data with reliable, low-latency delivery, another possibility is SCTP, also with Nagle disabled; it can additionally provide out-of-order delivery to speed things up even more.
I would recommend Stevens' "Unix Network Programming", which has a section on advanced UDP and when it's appropriate to use UDP instead of TCP. As a note, he recommends against using UDP for bulk data transfer, although the reality is that this has become much more common these days with streaming multimedia apps.
Small packets are probably better than large packets :-)