I have written a single server-client program and I want to ask: is there any difference in the behavior of the recv() function between 32-bit and 64-bit operating systems?
I am asking this because I am running both the server and the client on a 64-bit laptop and everything works fine. I call recv() this way: while ((tmp = recv(client_sock, rec_msg, 256, 0)) > 0), and as expected, if for example the client sends 3 strings, the server enters the while loop 3 times and prints the right result.
When I run exactly the same programs on a 32-bit Debian machine, for some unknown reason, if I send 3 strings from the client, the server enters the while loop only once and receives all 3 strings as one.
I have used print statements and found that the server enters the while loop once and receives the whole buffer, even though the client's while loop is entered 3 times as expected and the 3 strings are sent at 3 different times. I can't find a logical reason why it works fine on 64-bit and not on 32-bit, and that's why I am asking this question.
Thanks in advance for your time and your help.
If this is a stream socket, then there are no inherent message boundaries, and there's no correlation between the messages sent and received. recv() may return part of a message, the whole message, or multiple messages; all that's guaranteed is that the bytes are received in the same order that they were sent.
The difference you're seeing is probably just due to speed differences between the two machines. The 32-bit machine is slower, so in the time it takes to check for data being available on the network all 3 packets have arrived. But the faster 64-bit machine processes the received data from the first packet before the second one arrives.
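A minimal sketch of this behavior, using a local stream socket pair in Python so it runs without a network (the variable names are illustrative, not from the question's code):

```python
import socket

# A connected pair of stream sockets; the same "no message boundaries"
# rule applies to TCP stream sockets.
a, b = socket.socketpair()

# Three separate send() calls on the writing side...
for msg in (b"one", b"two", b"three"):
    a.sendall(msg)

# ...can come back from a single recv() on the reading side, because a
# stream socket only preserves byte order, not send() boundaries. Here
# all three sends completed before the recv(), so they are already
# buffered and one call returns b'onetwothree'.
data = b.recv(256)
print(data)

a.close()
b.close()
```

This is exactly the 32-bit machine's behavior from the question: all three sends landed in the receive buffer before the first recv() call.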
Related
I've read many stack overflow questions similar to this, but I don't think any of the answers really satisfied my curiosity. I have an example below which I would like to get some clarification.
Suppose the client is blocking on socket.recv(1024):
socket.recv(1024)
print("Received")
Also, suppose I have a server sending 600 bytes to the client. Let us assume that these 600 bytes are broken into 4 small packets (of 150 bytes each) and sent over the network. Now suppose the packets reach the client at slightly different times, 0.0001 seconds apart (e.g. one packet arrives at 12:00:00.0001 pm, the next at 12:00:00.0002 pm, and so on).
How does socket.recv(1024) decide when to return execution to the program and allow the print() function to execute? Does it return execution immediately after receiving the 1st packet of 150 bytes? Or does it wait for some arbitrary amount of time (eg. 1 second, for which by then all packets would have arrived)? If so, how long is this "arbitrary amount of time"? Who determines it?
Well, that will depend on many things, including the OS and the speed of the network interface. For a 100-gigabit interface, 100 µs is "forever," but for a 10 Mbit interface you can't even transmit the packets that fast. So I won't pay too much attention to the exact timing you specified.
Back in the day when TCP was being designed, networks were slow and CPUs were weak. Among the flags in the TCP header is the "Push" (PSH) flag, which signals that the payload should be immediately delivered to the application. So if we hop into the Wayback machine, the answer would have been something like: it depends on whether or not the PSH flag is set in the packets. However, there is generally no user-space API to control whether the flag is set. Generally, for a single write that gets broken into several packets, the final packet would have the PSH flag set. So the answer for a slow network and a weakling CPU might be that, if it was a single write, the application would likely receive the 600 bytes. You might then think that using four separate writes would result in four separate reads of 150 bytes, but after the introduction of Nagle's algorithm the data from the second through fourth writes might well be sent in a single packet unless Nagle's algorithm was disabled with the TCP_NODELAY socket option, since Nagle's algorithm will wait for the ACK of the first packet before sending anything less than a full frame.
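As a small sketch, Nagle's algorithm can be disabled with the TCP_NODELAY socket option mentioned above (shown here in Python; the option works the same way through the C setsockopt() call):

```python
import socket

# Disable Nagle's algorithm so small writes go out immediately instead
# of being held back while waiting for the ACK of an earlier packet.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Reading the option back confirms it is set (a nonzero value).
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)

s.close()
```

Note that even with TCP_NODELAY set, nothing on the receive side guarantees one read per write; it only stops the sender from coalescing small writes while waiting for ACKs.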
If we return from our trip in the Wayback machine to the modern age, where 100-gigabit interfaces and 24-core machines are common, our problems are very different, and you will have a hard time finding an explicit check for the PSH flag being set in the Linux kernel. What is driving the design of the receive side is that networks are getting way faster while the packet size/MTU has been largely fixed and CPU speed is flatlining but cores are abundant. Reducing per-packet overhead (including hardware interrupts) and distributing the packets efficiently across multiple cores is imperative. At the same time it is imperative to get the data from that 100+ gigabit firehose up to the application ASAP. One hundred microseconds of data on such a NIC is a considerable amount of data to be holding onto for no reason.
I think one of the reasons that there are so many questions of the form "What the heck does receive do?" is that it can be difficult to wrap your head around what is a thoroughly asynchronous process, whereas the send side has a more familiar control flow where it is much easier to trace the flow of packets to the NIC and where we are in full control of when a packet will be sent. On the receive side, packets just arrive when they want to.
Let's assume that a TCP connection has been set up and is idle, there is no missing or unacknowledged data, the reader is blocked on recv, and the reader is running a fresh version of the Linux kernel. And then a writer writes 150 bytes to the socket and the 150 bytes gets transmitted in a single packet. On arrival at the NIC, the packet will be copied by DMA into a ring buffer, and, if interrupts are enabled, it will raise a hardware interrupt to let the driver know there is fresh data in the ring buffer. The driver, which desires to return from the hardware interrupt in as few cycles as possible, disables hardware interrupts, starts a soft IRQ poll loop if necessary, and returns from the interrupt. Incoming data from the NIC will now be processed in the poll loop until there is no more data to be read from the NIC, at which point it will re-enable the hardware interrupt. The general purpose of this design is to reduce the hardware interrupt rate from a high speed NIC.
Now here is where things get a little weird, especially if you have been looking at nice clean diagrams of the OSI model where higher levels of the stack fit cleanly on top of each other. Oh no, my friend, the real world is far more complicated than that. That NIC that you might have been thinking of as a straightforward layer 2 device, for example, knows how to direct packets from the same TCP flow to the same CPU/ring buffer. It also knows how to coalesce adjacent TCP packets into larger packets (although this capability is not used by Linux and is instead done in software). If you have ever looked at a network capture, seen a jumbo frame, and scratched your head because you sure thought the MTU was 1500, it is because this processing happens at such a low level that it occurs before netfilter can get its hands on the packet. This packet coalescing is part of a capability known as receive offloading. In particular, let's assume that your NIC/driver has generic receive offload (GRO) enabled (which is not the only possible flavor of receive offloading), the purpose of which is to reduce the per-packet overhead from your firehose NIC by reducing the number of packets that flow through the system.
So what happens next is that the poll loop keeps pulling packets off of the ring buffer (as long as more data is coming in) and handing it off to GRO to consolidate if it can, and then it gets handed off to the protocol layer. As best I know, the Linux TCP/IP stack is just trying to get the data up to the application as quickly as it can, so I think your question boils down to "Will GRO do any consolidation on my 4 packets, and are there any knobs I can turn that affect this?"
Well, the first thing you can do is disable any form of receive offloading (e.g. via ethtool), which I think should get you 4 reads of 150 bytes for 4 packets arriving like this in order, but I'm prepared to be told I have overlooked another reason why the Linux TCP/IP stack won't send such data straight to the application if the application is blocked on a read as in your example.
The other knob you have if GRO is enabled is GRO_FLUSH_TIMEOUT which is a per NIC timeout in nanoseconds which can be (and I think defaults to) 0. If it is 0, I think your packets may get consolidated (there are many details here including the value of MAX_GRO_SKBS) if they arrive while the soft IRQ poll loop for the NIC is still active, which in turn depends on many things unrelated to your four packets in your TCP flow. If non-zero, they may get consolidated if they arrive within GRO_FLUSH_TIMEOUT nanoseconds, though to be honest I don't know if this interval could span more than one instantiation of a poll loop for the NIC.
There is a nice writeup on the Linux kernel receive side here which can help guide you through the implementation.
A normal blocking receive on a TCP connection returns as soon as there is at least one byte to return to the caller. If the caller would like to receive more bytes, they can simply call the receive function again.
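In other words, the portable pattern is to loop until you have as many bytes as you need. A small Python sketch (recv_exactly is a hypothetical helper name, not a standard library function):

```python
import socket

def recv_exactly(sock, n):
    """Call recv() repeatedly until exactly n bytes have arrived."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:  # peer closed the connection before n bytes arrived
            raise ConnectionError("connection closed early")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Demonstration on a local socket pair.
a, b = socket.socketpair()
a.sendall(b"abcdef")
data = recv_exactly(b, 6)
print(data)

a.close()
b.close()
```

The loop is essential: nothing obliges recv() to return all 6 bytes in one call, even though it usually will on a quiet local socket.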
I'm running a client-server configuration over Ethernet and measuring packet latency at both ends. The client (Windows) is sending packets every 5 ms (confirmed with Wireshark) as it should. Yet the server (embedded Linux) only receives packets at 5 ms intervals for a few seconds, at which point it stops for 300 ms. After this break the latency is only 20 µs. After another period of a few seconds it takes another 300 ms break. This repeats indefinitely (300 ms break, then a burst of packets 20 µs apart). It seems as if the server program is being optimized mid-execution to read IO in shorter bursts. Why is this happening?
Disclaimer: I haven't posted the code, as the client and server are small subsets of more complex applications, however, I am willing to factor it out if an obvious answer doesn't present itself.
This is UDP, so there is no handshake or any flow-control mechanism. Those 300 ms must be due to work the server is doing while processing the UDP messages received. During those 300 ms the server has surely lost around 60 messages from the client that were never read.
You probably want to check that the server does not take more than 5 ms to process each message if it uses one thread. If the server uses multi-threading to process the messages and the processing takes some time, even if it takes 1 ms, you might be in a situation where at some point all threads are competing for resources and don't finish in time to read the next message. For the problem you are describing, I would bet the server is multithreaded and you have that problem. I cannot assure that 100% for lack of info, though. But in any case, you want to check the time it takes to process messages, because you might be dealing with real-time requirements.
I spaced out the measurements to 1 in every 1000 packets and now it is behaving itself. I was calling printf every 5 ms, which must eventually have filled the printf tx queue entirely. This then delayed the execution for 300 ms. Once printf caught its breath, the program had a queue full of incoming packets and thus was seemingly receiving packets every 20 µs.
I have a perhaps noobish question to ask, I've looked around but haven't seen a direct answer addressing it and thought I might get a quick answer here. In a simple TCP/IP client-server select loop using bsd sockets, if a client sends two messages that arrive simultaneously at a server, would one call to recv at the server return both messages bundled together in the buffer, or does recv force each distinct arriving message to be read separately?
I ask because I'm working in an environment where I can't tell how the client is building its messages to send. Normally recv reports that 12 bytes are read, then 915, then 12 bytes, then 915, and so on in such an alternating 12/915 pattern... but then sometimes it reports 927 (which is 915+12). I was thinking that either the client is bundling some of its information together before it sends it out to the server, or that the messages arrive before recv is invoked and then recv pulls all the pending bytes at once. So I wanted to make sure I understood recv's behavior properly. I think perhaps I'm missing something here in my understanding, and I hope someone can point it out, thanks!
TCP/IP is a stream-based transport, not a datagram-based transport. In a stream, there is no 1-to-1 correlation between send() and recv(). That is only true for datagrams. So, you have to be prepared to handle multiple possibilities:
a single call to send() may fit in a single TCP packet and be read in full by a single call to recv().
a single call to send() may span multiple TCP packets and need multiple calls to recv() to read everything.
multiple calls to send() may fit in a single TCP packet and be read in full by a single call to recv().
multiple calls to send() may span multiple TCP packets and require multiple calls to recv() for each packet.
To illustrate this, consider two messages are being sent - send("hello", 5) and send("world", 5). The following are a few possible combinations when calling recv():
"hello" "world"
"hel" "lo" "world"
"helloworld"
"hel" "lo" "worl" "d"
"he" "llow" "or" "ld"
Get the idea? This is simply how TCP/IP works. Every TCP/IP implementation has to account for this fragmentation.
In order to receive data properly, there has to be a clear separation between logical messages, not individual calls to send(), as it may take multiple calls to send() to send a single message, and multiple recv() calls to receive a single message in full. So, taking the earlier example into account, let's add a separator between the messages:
send("hello\n", 6);
send("world", 5);
send("\n", 1);
On the receiving end, you would call recv() as many times as it takes until a \n character is received, then you would process everything you had received leading up to that character. If there is any read data left over when finished, save it for later processing and start calling recv() again until the next \n character, and so on.
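That receive loop can be sketched in Python like this (read_lines is a hypothetical helper; the sends mirror the example above):

```python
import socket

def read_lines(sock):
    """Yield complete newline-terminated messages, buffering any
    partial data between recv() calls."""
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:      # peer closed; any trailing partial message
            break         # in buf is dropped in this simple sketch
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            yield line

# Demonstration on a local socket pair, with the same three sends
# as in the example above.
a, b = socket.socketpair()
a.sendall(b"hello\n")
a.sendall(b"world")
a.sendall(b"\n")
a.close()

lines = list(read_lines(b))
print(lines)

b.close()
```

Note that the number of recv() calls has no relation to the number of send() calls; the delimiter alone defines the message boundaries.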
Sometimes, it is not possible to place a unique character between messages (maybe the message body allows all characters to be used, so there is no distinct character available to use as a separator). In that case, you need to prefix the message with the message's length instead, either as a preceding integer, a structured header, etc. Then you simply call recv() as many times as needed until you have received the full integer/header, then you call recv() as many times as needed to read just as many bytes as the length/header specifies. When finished, save any remaining data if needed, and start calling recv() all over again to read the next message length/header, and so on.
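A sketch of the length-prefix approach in Python, using a 4-byte big-endian header (send_msg, recv_msg, and recv_exactly are hypothetical helper names):

```python
import socket
import struct

def recv_exactly(sock, n):
    """Loop over recv() until exactly n bytes have been read."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def send_msg(sock, payload):
    # 4-byte big-endian length prefix, then the payload itself.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    # First read the fixed-size header, then exactly that many bytes.
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)

# Demonstration on a local socket pair: two framed messages arrive
# intact regardless of how the underlying stream fragments them.
a, b = socket.socketpair()
send_msg(a, b"hello")
send_msg(a, b"world")
m1 = recv_msg(b)
m2 = recv_msg(b)
print(m1, m2)

a.close()
b.close()
```

This is the same discipline as the delimiter approach: the framing layer, not recv() itself, defines where one message ends and the next begins.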
It is definitely valid for both messages to be returned in a single recv call (see Nagle's Algorithm). TCP/IP guarantees order (the bytes from the messages won't be mixed). In addition to them being returned together in a single call, it is also possible for a single message to require multiple calls to recv (although it would be unlikely with packets as small as described).
The only thing you can count on is the order of the bytes. You cannot count on how they are partitioned into recv calls. Sometimes things get merged either at the endpoint or along the way. Things can also get broken up along the way and so arrive independently. It does sound like your sender is sending alternating 12 and 915 but you can't count on it.
I have been having a heck of a time getting Udp sockets working correctly on Windows Phone 7 (Mango). First I had this problem udp async receive and now that I figured it out, I am seeing a weird behavior where the end of the data I send over the socket is all zero.
At first, I thought there was a weird size cap. All my packets were around 1380 bytes. I was seeing that, for some reason, after ~byte 1220 it was all zeros, but according to the socket I was still receiving all ~1380 bytes. I matched up the sizes with my server application, and I was receiving the correct number of bytes. So I printed the bytes out on both sides of the connection and saw this issue, with much of the last 200 bytes or so being zero.
So I reduced the size of my packet data to ~1200 bytes, and I was still seeing the issue. I even reduced it to 1000 bytes and still!
Any ideas?
Update - I have done some testing, and it seems that the last 144 bytes are FUBAR. Sometimes they are zero, sometimes they are garbage. Think this is a bug?
You need to check how many bytes were transferred in the async operation. Check SocketAsyncEventArgs.BytesTransferred to see how many bytes in the buffer are actually valid.
Sorry, I had a bug in my code where I was reusing an array and overwriting my own data.
I'm using Perl sockets in AIX 5.3, Perl version 5.8.2
I have a server written with Perl sockets. There is an option called "Blocking", which can be set to 0 or 1. When I use Blocking => 0 and run the server while the client sends data (5000 bytes), I receive only 2902 bytes in one call. When I use Blocking => 1, I receive all the bytes in one call.
Is this how sockets work or is it a bug?
This is a fundamental part of sockets - or rather, TCP, which is stream-oriented. (UDP is packet-oriented.)
You should never assume that you'll get back as much data as you ask for, nor that there isn't more data available. Basically, more data can come at any time while the connection is open. (The read/recv/whatever call will probably return a specific value to mean "the other end closed the connection.")
This means you have to design your protocol to handle this - if you're effectively trying to pass discrete messages from A to B, two common ways of doing this are:
Prefix each message with a length. The reader first reads the length, then keeps reading the data until it's read as much as it needs.
Have some sort of message terminator/delimiter. This is trickier, as depending on what you're doing you may need to be aware of the possibility of reading the start of the next message while you're reading the first one. It also means "understanding" the data itself in the "reading" code, rather than just reading bytes arbitrarily. However, it does mean that the sender doesn't need to know how long the message is before starting to send.
(The other alternative is to have just one message for the whole connection - i.e. you read until the connection is closed.)
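A sketch of that one-message-per-connection approach in Python: keep calling recv() until it returns an empty result, which signals that the peer closed the connection (read_all is a hypothetical helper name):

```python
import socket

def read_all(sock):
    """Read until the peer closes the connection (recv returns b"")."""
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            return b"".join(chunks)
        chunks.append(data)

# Demonstration on a local socket pair: the sender closes its end to
# mark the end of the single message.
a, b = socket.socketpair()
a.sendall(b"the whole message")
a.close()

data = read_all(b)
print(data)

b.close()
```

HTTP/1.0 worked this way for response bodies: no length, no delimiter, just "read until close."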
Blocking means that the socket waits until there is data before returning from a receive function. It's entirely possible there's a tiny wait on the end as well to try to fill the buffer before returning, or it could just be a timing issue. It's also entirely possible that the non-blocking implementation returns one packet at a time, no matter whether there's more than one. In short, no, it's not a bug, but the specific 'why' of it is the old cop-out: "it's implementation specific."