Mechanism of MSG_WAITALL in Berkeley sockets

In Berkeley sockets, does calling recv() with the MSG_WAITALL flag set replace having to call read() multiple times until all of the requested data has been read?
In other words, does recv() read the whole block, as determined by the size argument, in one call, whereas read() might return only part of the data block, so that I need to call it in a loop until the whole block has been read?

Yes, MSG_WAITALL tells recv() to wait until all of the requested bytes have been read. However, it is only supported in blocking mode, not in non-blocking mode, and it only works on stream-oriented sockets such as TCP. Even then, you still have to loop: on Linux, for example, recv() can be interrupted by a signal and must be called again to continue reading.
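For example, a minimal sketch of how this is typically combined with such a loop (recv_exact is a hypothetical helper name, and a connected, blocking TCP socket fd is assumed):

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly len bytes from a blocking TCP socket. MSG_WAITALL does
 * most of the waiting, but the loop is still needed because recv() can
 * return early, e.g. when interrupted by a signal. */
ssize_t recv_exact(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, MSG_WAITALL);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal: retry */
            return -1;      /* real error */
        }
        if (n == 0)
            break;          /* peer closed the connection */
        got += n;
    }
    return (ssize_t)got;
}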

When, exactly, does the BSD socket send() function return to the caller?
In non-blocking mode, it should return immediately, correct?
As for blocking mode, the man page says:
When the message does not fit into the send buffer of the socket, send() normally blocks, unless the socket has been placed in non-blocking I/O mode.
Questions:
Does this mean that the send() call will always return immediately if there is room in the kernel send buffer?
Is the behavior and performance of the send() call identical for TCP and UDP? If not, why not?
Does this mean that the send() call will always return immediately if there is room in the kernel send buffer?
Yes, as long as "immediately" means after the memory you provided has been copied into the kernel's buffer. In some edge cases that may not be so immediate: for instance, if the pointer you pass in triggers a page fault that has to pull the buffer in from a memory-mapped file or from swap, that will add significant delay before the call returns.
Is the behavior and performance of the send() call identical for TCP and UDP? If not, why not?
Not quite. Any performance differences depend on the OS's implementation of the TCP/IP stack. In theory a UDP socket could be slightly cheaper, since the OS needs to do fewer things with it.
EDIT: On the other hand, since you can send much more data per system call with TCP, the cost per byte is typically a lot lower with TCP. This can be mitigated with sendmmsg() in recent Linux kernels.
As for the behavior, it's nearly identical.
For blocking sockets, both TCP and UDP will block until there is space in the kernel buffer. The distinction, however, is that a UDP socket will wait until your entire buffer can be stored in the kernel buffer, whereas a TCP socket may decide to copy only a single byte into the kernel buffer (though typically it is more than one byte).
If you try to send packets larger than 64 KiB, a UDP socket will likely fail consistently with EMSGSIZE. This is because UDP, being a datagram socket, guarantees to send your entire buffer as a single IP packet (or train of IP packet fragments) or not to send it at all.
Non-blocking sockets behave identically to the blocking versions, with the single exception that, instead of blocking when there is not enough space in the kernel buffer, the calls fail with EAGAIN (or EWOULDBLOCK). When this happens, it is time to put the socket back into epoll/kqueue/select (or whatever you are using) to wait for it to become writable again.
As usual when working with POSIX, keep in mind that your call may fail with EINTR (if the call was interrupted by a signal). In this case you most likely want to call send() again.
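For example, a non-blocking send loop along those lines might look like this (send_nonblocking is a hypothetical name, and poll() stands in for whichever readiness mechanism you use):

#include <errno.h>
#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Write len bytes on a non-blocking socket, waiting with poll()
 * whenever the kernel's send buffer is full. */
int send_nonblocking(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n >= 0) {
            sent += n;                     /* possibly a partial send */
        } else if (errno == EINTR) {
            continue;                      /* signal: just retry */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            struct pollfd p = { .fd = fd, .events = POLLOUT };
            if (poll(&p, 1, -1) < 0 && errno != EINTR)
                return -1;                 /* wait until writable again */
        } else {
            return -1;                     /* real error */
        }
    }
    return 0;
}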
If there is room in the kernel buffer, then send() copies as many bytes as it can into the buffer and exits immediately, returning how many bytes were actually copied (which can be fewer than how many you requested). If there is no room in the kernel buffer, then send() blocks until either room becomes available or a timeout occurs (if one is configured).
send() will return as soon as the data has been accepted by the kernel.
In the case of a blocking socket: send() will block if the kernel buffer does not have enough free space to accept the data passed to the call.
Non-blocking sockets: send() will not block; instead it fails and returns -1, or it may return the number of bytes that were partially copied (depending on the buffer space available), setting errno to EWOULDBLOCK or EAGAIN. This means that at the time of the send() the buffer could not accept all of the bytes, and you should use select() to decide when to try send() again. Alternatively, you could loop with a sleep() and call send() again, but either way you have to keep track of the number of bytes actually sent and the number remaining to be sent.
Does this mean that the send() call will always return immediately if there is room in the kernel send buffer?
Shouldn't it? The moment after which the data "is sent" can be defined in different ways. I think it is the moment the OS accepts your data for delivery on its stack; otherwise it is quite difficult to define. Is it the moment the data is transferred into the network card's buffer? Or the moment it is pushed out of the network card's buffer?
Is there a problem for which you need to know this for sure, or are you just curious?
Your presumption is correct. If there is room in the kernel send buffer, the kernel will copy the data into the send buffer and send() will return.

Why would one need to use the `MSG_WAITALL` flag instead of `0`? Why use it with UDP?

At some point when coding sockets, one will face the receive family of functions (recv, recvfrom, recvmsg).
These functions accept a flags argument, and I see the MSG_WAITALL flag used in many examples on the web, such as this example on UDP.
Here is a definition of the MSG_WAITALL flag
MSG_WAITALL (since Linux 2.2)
This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned. This flag has no effect for datagram sockets.
Hence, my two questions:
Why would one need to use the MSG_WAITALL flag instead of the 0 flag? (Could someone explain a scenario in which using it would solve a problem?)
Why use it with UDP?
As the quoted man page mentions, MSG_WAITALL has no effect on UDP sockets, so there's no reason to use it there. Examples that do use it are probably confused and/or the result of several generations of cargo-cult/copy-and-paste programming. :)
For TCP, OTOH, the default behavior of recv() is to block until at least one byte of data can be copied into the user's buffer from the socket's incoming-data buffer. The TCP stack will try to provide as many bytes of data as it can, of course, but in a case where the socket's incoming-data buffer contains fewer bytes of data than the user has passed in to recv(), the TCP stack will copy as many bytes as it can and return a byte count indicating how many bytes it actually provided.
However, some people would prefer to have their recv() call keep blocking until all of the bytes in their passed-in array have been filled in, regardless of how long that might take. For those people, the MSG_WAITALL flag provides a simple way to obtain that behavior. (The flag is not strictly necessary, since the programmer could always emulate it by writing a while() loop that calls recv() as many times as necessary until all of the bytes in the buffer have been populated, as sketched below, but it is provided as a convenience nonetheless.)
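A sketch of that emulation loop (sock, buf, and want are placeholder names, with buf a char array of at least want bytes):

size_t filled = 0;
while (filled < want) {
    ssize_t n = recv(sock, buf + filled, want - filled, 0);
    if (n <= 0)
        break;      /* 0 = connection closed, <0 = error (check errno) */
    filled += n;    /* short read: keep going until the buffer is full */
}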

What's the read logic when I call recvfrom() function in C/C++

I wrote a C++ program that creates a socket and binds to it to receive ICMP/UDP packets. The code I wrote is as follows:
while (true) {
    recvfrom(sockId, rePack, sizeof(rePack), 0, (struct sockaddr *)&raddr, (socklen_t *)&len);
    processPacket(recv_size);
}
So I used an endless while loop to receive messages continuously, but I am worried about the following two questions:
1. How long will the message be kept in the receive queue (or, say, in the NIC queue)?
I am worried that if it takes too long to process the first message, I might miss the second message. So how quickly should I issue one read after another?
2. How do I prevent reading duplicated messages?
That is, does the receive queue know me? When my thread has finished reading the first message, will the queue automatically give me the second one? Or, put another way, once I read the first message, is it deleted from the queue so that no one can receive it again?
Additionally, I think the while(true) model is not good; could anyone give me a better suggestion? (I have heard of something like a polling model.)
First, you should always check the return value from recvfrom. It's unlikely that recvfrom will fail, but if it does (for example, if you later implement signal handling, it might fail with EINTR) you will be processing undefined data. The return value also, of course, tells you the size of the packet you received.
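For instance, the question's loop with that check added (processPacket and the other names come from the question; the EINTR handling is an assumption about what you would want):

while (true) {
    ssize_t recv_size = recvfrom(sockId, rePack, sizeof(rePack), 0,
                                 (struct sockaddr *)&raddr, (socklen_t *)&len);
    if (recv_size < 0) {
        if (errno == EINTR)
            continue;             /* interrupted by a signal: retry */
        perror("recvfrom");
        break;                    /* real error: stop the loop */
    }
    processPacket(recv_size);     /* process only what was actually received */
}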
For question 1, the actual answer is operating system-dependent. However, most operating systems will buffer some number of packets for you. The OS interrupt handler that handles the incoming packet will never be copying it directly into your application level buffer, so it will always go into an OS buffer first. The OS has previously noted your interest in it (by virtue of creating the socket and binding it you expressed interest), so it will then place a pointer to the buffer onto a queue associated with your socket.
A different part of the OS code will then (after the interrupt handler has completed) copy the data from the OS buffer into your application memory, free the OS buffer, and return to your program from the recvfrom system call. If additional packets come in, either before or after you have started processing the first one, they'll be placed on the queue too.
That queue is not infinite, of course. You can likely configure how many packets (or how much buffer space) are reserved, either at a system-wide level (think sysctl-type settings in Linux) or at the individual-socket level (setsockopt/ioctl).
If, when you call recvfrom, there are already queued packets on the socket, the system call handler will not block your process, instead it will simply copy from the OS buffer of the next queued packet into your buffer, release the OS buffer, and return immediately. As long as you can process incoming packets roughly as fast as they arrive or faster, you should not lose any. (However, note that if another system is generating packets at a very high rate, it's likely that the OS memory reserved will be exhausted at some point, after which the OS will simply discard packets that exceed its resource reservation.)
For question 2, you will receive no duplicate messages (unless something upstream of your machine is actually duplicating them). Once a queued message is copied into your buffer, it's released before returning to you. That message is gone forever.
(Note that it's possible that some other process has also created a socket expressing interest in the same packets. That process would also get a copy of the packet data, which is typically handled internal to the operating system by reference counting rather than by actually duplicating the OS buffers, although that detail is invisible to applications. In any case, once all interested processes have received the packet, it will be discarded.)
There's really nothing at all wrong with a while (true) loop; it's a very common control structure for long-running server-type programs. If your program has nothing else it needs to be doing in the meantime, a while (true) loop that blocks in recvfrom is the simplest, and hence clearest, way to implement it.
(You could use a select(2) or poll(2) call to wait instead. That allows you to wait on any one of multiple file descriptors at the same time, or to periodically "time out" and go do something else, but again, if you have nothing else you might need to be doing in the meantime, it introduces needless complication.)
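For example, a minimal poll(2) sketch along those lines (the one-second timeout is arbitrary, and sockId is the socket from the question):

#include <poll.h>

struct pollfd p = { .fd = sockId, .events = POLLIN };
int ready = poll(&p, 1, 1000);           /* wait up to 1000 ms for input */
if (ready > 0 && (p.revents & POLLIN)) {
    /* recvfrom() will now return without blocking */
} else if (ready == 0) {
    /* timed out: do periodic housekeeping here */
}   /* ready < 0: check errno (e.g. EINTR) */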

TCP buffering on read

I want to reduce the latency of a TCP server, so I read about and used TCP_NODELAY. Great! Overall latency went down a bit. Now I'm thinking that I can probably also reduce latency when reading, but I don't understand the behavior of the TCP stack very well. What happens, for example, in the following code on the receiver side if the sender sends a packet of just 25 bytes?
#define BUFFER_SIZE 4096
char buffer[BUFFER_SIZE];
received = read(common_socket, buffer, BUFFER_SIZE);
My particular question is: if the socket is blocking, when will the call to read return? Are there any cases in which TCP will wait a little for more data to arrive before returning from the read call?
if the socket is blocking, when will the call to read return?
If there is data in the socket receive buffer or a pending end-of-stream or error it will return immediately, otherwise it will block, once, until one of those conditions occurs.
Are there any cases in which TCP will wait a little for more data to arrive before returning from the read call?
No.
read is a blocking call, which means it will block at the read line until something is received.
If you receive less than your buffer size, you'll move on to the next operation, and your received variable will hold the number of bytes that were read.
reference:
http://man7.org/linux/man-pages/man2/read.2.html
On success, the number of bytes read is returned (zero indicates end of file), and the file position is advanced by this number. It is not an error if this number is smaller than the number of bytes requested; this may happen for example because fewer bytes are actually available right now (maybe because we were close to end-of-file, or because we are reading from a pipe, or from a terminal), or because read() was interrupted by a signal. On error, -1 is returned, and errno is set appropriately. In this case, it is left unspecified whether the file position (if any) changes.

MSG_WAITALL is to recv() as ?? is to send()

From the recv(2) man page:
MSG_WAITALL
This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned.
It doesn't look like there's an equivalent flag for send(2), which strikes me as strange. Maybe send()s are guaranteed to always accept the whole buffer, but I don't see that written anywhere (and anyway, that seems unlikely to me).
Is there a way to tell send() to wait until it's sent the whole buffer before returning, equivalently to recv()'s MSG_WAITALL?
Edit: I understand that send() just copies data into a buffer in the operating system and that I can't force send() to put data onto the wire. My question is: Can I force send() to block until all the data I gave it has been copied into the OS's buffer?
You can't. send just offloads the buffer to the kernel, then returns.
To quote from the Unix standard:
The send() function shall initiate transmission of a message from the specified socket to its peer (...)
Successful completion of a call to send() does not guarantee delivery of the message.
Note the word "initiate". It doesn't actually send anything; rather, it tells the OS to send the message when it's ready to (when its buffers are full or after some time has passed).
send(2) for TCP does not actually "send" anything on the wire, but places your bytes into the socket send buffer. It tells you how many bytes it was able to copy there in the return value.
Make the send buffer bigger (see setsockopt(2) and tcp(7)) and pay attention to the syscall's return value. In any case, TCP is a stream; you need to manage your application-level protocol yourself, as in the sketch below.
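A minimal sketch of that bookkeeping (send_all is a hypothetical helper name, not a standard call): in blocking mode send() already blocks while the kernel buffer is full, so the loop only has to handle partial copies and signals.

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() on a blocking socket until every byte has been
 * accepted into the kernel's send buffer. */
ssize_t send_all(int fd, const void *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, (const char *)buf + sent, len - sent, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal: retry */
            return -1;
        }
        sent += n;          /* send() may accept fewer bytes than requested */
    }
    return (ssize_t)sent;
}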