UDP socket gives error WSAETIMEDOUT - sockets

I have a call to sendto() on a UDP socket. Sometimes (not always) it blocks my application for ~2.5 seconds. When I check the return value of the sendto() call, I get SOCKET_ERROR (-1) and WSAGetLastError() returns WSAETIMEDOUT (10060).
Why would a UDP socket timeout? Under what circumstances would sendto() be a blocking call?

Why would a UDP socket timeout?
It can happen if the socket is running in blocking mode (the default) and has a send timeout assigned to it.
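For illustration, a minimal Winsock sketch of where such a timeout usually comes from (assuming `s` is a blocking UDP socket created elsewhere and WSAStartup has already been called; the 2500 ms value is only chosen to mirror the ~2.5 s delay observed in the question):

#include <winsock2.h>   /* link with ws2_32.lib */

/* On Windows, SO_SNDTIMEO takes a DWORD value in milliseconds. */
DWORD send_timeout_ms = 2500;
setsockopt(s, SOL_SOCKET, SO_SNDTIMEO,
           (const char *)&send_timeout_ms, sizeof(send_timeout_ms));

/* If a blocking sendto() cannot complete within that window, it returns
   SOCKET_ERROR and WSAGetLastError() reports WSAETIMEDOUT (10060). */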
Under what circumstances would sendto() be a blocking call?
Sockets are created in blocking mode by default. You have to explicitly request non-blocking behavior if you need it.
In blocking mode, a UDP socket can block if the kernel buffer fills up or if WinSock has to wait for a network event before completing the send. This is documented behavior:
sendto() function
When issuing a blocking Winsock call such as sendto, Winsock may need to wait for a network event before the call can complete. Winsock performs an alertable wait in this situation, which can be interrupted by an asynchronous procedure call (APC) scheduled on the same thread. Issuing another blocking Winsock call inside an APC that interrupted an ongoing blocking Winsock call on the same thread will lead to undefined behavior, and must never be attempted by Winsock clients.
...
If no buffer space is available within the transport system to hold the data to be transmitted, sendto will block unless the socket has been placed in a nonblocking mode. On nonblocking, stream oriented sockets, the number of bytes written can be between 1 and the requested length, depending on buffer availability on both the client and server systems. The select, WSAAsyncSelect or WSAEventSelect function can be used to determine when it is possible to send more data.
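If blocking is not acceptable at all, a minimal sketch of the alternative mentioned above (again assuming an existing Winsock socket `s`): put the socket into non-blocking mode, after which sendto() fails immediately instead of waiting when no buffer space is available.

#include <winsock2.h>   /* link with ws2_32.lib */

/* A non-zero argument to FIONBIO enables non-blocking mode. */
u_long nonblocking = 1;
if (ioctlsocket(s, FIONBIO, &nonblocking) == SOCKET_ERROR) {
    /* handle WSAGetLastError() */
}

/* From now on sendto() never blocks; when the transport has no buffer
   space it returns SOCKET_ERROR with WSAGetLastError() == WSAEWOULDBLOCK. */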

Related

MSG_WAITALL combined with SO_RCVTIMEO?

On a blocking socket, can the MSG_WAITALL flag in a call to recv() be combined with the socket option SO_RCVTIMEO set with a call to setsockopt() on the socket?
My goal here is to either receive a full message, or a timeout/error...
I have tested it now, and combining MSG_WAITALL with SO_RCVTIMEO works fine on blocking sockets!
A call to recv() then returns when the requested length has been received, when the configured receive timeout expires, or if there is an error/interrupt.
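A minimal sketch of that combination (POSIX-style, assuming `fd` is a connected blocking TCP socket and the protocol uses fixed 128-byte messages):

#include <sys/socket.h>
#include <sys/time.h>
#include <errno.h>

struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };       /* 5-second receive timeout */
setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

char msg[128];
ssize_t n = recv(fd, msg, sizeof(msg), MSG_WAITALL);      /* wait for the full message */
if (n == (ssize_t)sizeof(msg)) {
    /* full message received */
} else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    /* timeout expired before the message arrived */
} else {
    /* peer closed (n == 0), short read, or other error */
}

Depending on the platform, recv() may also return a short count rather than -1 if some bytes arrived before the timeout expired, so the returned length should still be checked against the expected message size.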

How is it possible to have send timeout on a non blocking socket?

I have some problems understanding the working of sockets in Linux.
struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };  /* SO_SNDTIMEO takes a struct timeval on Linux */
setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof(timeout));
ssize_t written = write(sockfd, buf, len);
In the above code, since writes are buffered, a send timeout doesn't seem to make any sense: the write system call returns as soon as the user-space buffer has been copied into the kernel buffers. The send buffer size looks like the much more important parameter, while the send timeout appears to do nothing worthwhile. But I am certainly wrong, as I have seen quite a lot of code that uses SO_SNDTIMEO. How can user-space code time out via SO_SNDTIMEO, assuming the receiver is very slow?
How is it possible to have send timeout on a non blocking socket?
It isn't. Timeouts apply to blocking mode. A non-blocking send() won't block, and therefore cannot time out either.
I have seen a lot of code which uses SO_SNDTIMEO.
Not in non-blocking mode unless the code concerned is nonsense.
SO_SNDTIMEO is useful for a blocking socket: if the socket's send buffer is full, send() can block, and the timeout puts an upper bound on how long it waits. On a non-blocking socket, send() fails immediately when the buffer is full, so there is no point in setting SO_SNDTIMEO on it.
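A minimal sketch of that blocking-socket case on Linux (reusing `sockfd`, `buf` and `len` from the question's snippet):

#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <errno.h>

struct timeval timeout = { .tv_sec = 3, .tv_usec = 0 };   /* give up after 3 seconds */
setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof(timeout));

ssize_t written = write(sockfd, buf, len);
if (written < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    /* the send buffer stayed full for the whole timeout: the receiver is too slow */
}

Note that write() can also return a short count if part of the data was accepted before the timeout expired, so the caller still has to track how much was actually written.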

Determine how many bytes can be sent with winsock (FIONWRITE)?

With select I can determine if any bytes can be received or sent without blocking.
With this function I can determine how many bytes can be received:
function BytesAvailable(S: TSocket): Integer;
begin
  if ioctlsocket(S, FIONREAD, Result) = SOCKET_ERROR then
    Result := -1;
end;
Is there also a way to determine how many bytes can be sent?
So that when I call send with N bytes, I can be sure it will report exactly N bytes sent (or SOCKET_ERROR), and not fewer because the send buffer is full.
FIONWRITE is not available for Winsock.
According to MVP Alexander Nickolov, there is no such facility in Windows. He also mentions that "good socket code" doesn't use FIONWRITE-like ioctls, but doesn't explain why.
To circumvent this issue, you could enable non-blocking I/O (using FIONBIO, I guess) on sockets you're interested in. That way, WSASend will succeed on such sockets when it can complete sending without blocking, or fail with WSAGetLastError() == WSAEWOULDBLOCK when the buffer is full (as stated in the documentation for WSASend):
WSAEWOULDBLOCK
Overlapped sockets: There are too many outstanding overlapped I/O requests. Nonoverlapped sockets: The socket is marked as nonblocking and the send operation cannot be completed immediately.
Also read further notes about this error code.
Winsock send() blocks only if the socket is running in blocking mode and the socket's outbound buffer fills up with queued data. If you are managing multiple sockets in the same thread, do not use blocking mode: if one receiver does not read data in a timely manner, it can stall every connection serviced by that thread. Use non-blocking mode instead; send() will then report (via WSAEWOULDBLOCK) when a socket has entered a state where blocking would occur, and you can use select() to detect when the socket can accept new data again.
A better option is to use overlapped I/O or I/O Completion Ports instead. Submit outbound data to the OS and let the OS handle all of the waiting for you, notifying you when the data has eventually been accepted/sent, and do not submit new data for a given socket until you receive that notification. For scalability to a large number of connections, I/O Completion Ports are generally the better choice.
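A minimal sketch of that non-blocking approach in Winsock terms (assuming `s` has already been switched to non-blocking mode with FIONBIO, and `data`/`data_len` are the application's outbound char buffer and its length):

#include <winsock2.h>   /* link with ws2_32.lib */

int n = send(s, data, (int)data_len, 0);
if (n == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK) {
    /* outbound buffer is full: wait until the socket is writable again */
    fd_set writefds;
    FD_ZERO(&writefds);
    FD_SET(s, &writefds);
    if (select(0, NULL, &writefds, NULL, NULL) > 0) {
        /* buffer space is available; retry the send (possibly from an offset
           if an earlier call accepted only part of the data) */
        n = send(s, data, (int)data_len, 0);
    }
}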

close() socket directly after send(): unsafe?

Is it wise/safe to close() a socket directly after the last send()?
I know that TCP is supposed to try to deliver all remaining data in the send buffer even after closing the socket, but can I really count on that?
I'm making sure that there is no remaining data in my receive buffer so that no RST will be sent following my close.
In my case, the close is actually the very last statement of code before calling exit().
Will the TCP stack really continue to try and transmit the data even after the process sending it has terminated? Is that as reliable as waiting for an arbitrary timeout myself before calling close() by setting SO_LINGER?
That is, do the same TCP timeouts apply, or are they shorter? With a big send buffer and a slow connection, the time to actually transfer all the buffered data could be substantial, after all.
I'm not interested at all in being notified of the last byte sent; I just want them to eventually arrive at the remote host as reliably as possible.
Application layer acknowledgements are not an option (the protocol is HTTP, and I'm writing a small server).
I've been reading the blog post "The ultimate SO_LINGER page, or: why is my tcp not reliable" a lot. I recommend you read it too. It discusses edge cases of large data transfers with regard to TCP sockets.
I'm not the expert at SO_LINGER, but on my server code (still in active development) I do the following:
After the last byte is sent via send(), I call shutdown(sock, SHUT_WR) to trigger a FIN to be sent.
Then wait for a subsequent recv() call on that socket to return 0 (or for recv() to return -1 with errno set to anything other than EAGAIN/EWOULDBLOCK).
Then the server does a close() on the socket.
The assumption is that the client will close its end of the connection first, after it has received all the bytes of the response.
But I do have a timeout enforced between the final send() and when recv() indicates EOF: if the client never closes its end of the connection, the server gives up waiting and closes the connection anyway. I'm at 45-90 seconds for this timeout.
All of my sockets are non-blocking and I use poll/epoll to be notified of connection events as a hint to see if it's time to try calling recv() or send() again.
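A simplified sketch of that sequence (POSIX, blocking calls for brevity; the answer's real code is non-blocking, poll/epoll-driven, and wraps the drain loop in the 45-90 second timeout mentioned above):

#include <sys/socket.h>
#include <unistd.h>

/* 1. The last response byte has been handed to send(); announce we are done writing. */
shutdown(sock, SHUT_WR);                     /* sends a FIN, keeps the read side open */

/* 2. Drain until the peer closes its end (recv() == 0) or a real error occurs. */
char scratch[512];
ssize_t r;
while ((r = recv(sock, scratch, sizeof(scratch), 0)) > 0)
    ;                                        /* discard anything still in flight */

/* 3. Both directions are finished; close() will not throw away unsent data. */
close(sock);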
Application layer acknowledgements are not an option (the protocol is HTTP, and I'm writing a small server).
The HTTP protocol doesn't suffer from this problem. An HTTP server is not supposed to close the connection in any normal operation; the client closes it after recv(), and it knows exactly how many bytes it expects.
And just to be clear, the answer is "no".
Yes, it is safe to call send() and then close() immediately: the kernel will send out all the data in the buffer, wait for the ACKs, and then FIN the socket gracefully.

WinSock select() on listen()ing socket, non-blocking I/O?

When I do a select() on a non-blocking listen()ing socket on Windows, do I get a read event or a write event when there is a connection pending?
read.
From MSDN:
The parameter readfds identifies the sockets that are to be checked for readability. If the socket is currently in the listen state, it will be marked as readable if an incoming connection request has been received such that an accept is guaranteed to complete without blocking.
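A minimal Winsock sketch of that (assuming `listener` is a socket on which listen() has already been called): put it in the read set, and once select() marks it readable, accept() will not block.

#include <winsock2.h>   /* link with ws2_32.lib */

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(listener, &readfds);

/* select() marks a listening socket as readable when a connection is pending */
if (select(0, &readfds, NULL, NULL, NULL) > 0 && FD_ISSET(listener, &readfds)) {
    SOCKET client = accept(listener, NULL, NULL);   /* guaranteed not to block here */
    if (client != INVALID_SOCKET) {
        /* hand the new connection off to whatever handles clients */
    }
}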