Time gap between socket calls, i.e. accept() and recv/send calls - sockets

I am implementing a server in which I listen for a client to connect using the accept socket call.
After the accept happens and I receive the socket, I wait for around 10-15 seconds before making the first recv/send call.
The send calls to the client fail with errno = 32, i.e. broken pipe.
Since I don't control the client, I have set the socket option *SO_KEEPALIVE* on the accepted socket.
const int keepAlive = 1;
acceptsock = accept(sock, (struct sockaddr*)&client_addr, &client_addr_length);
if (setsockopt(acceptsock, SOL_SOCKET, SO_KEEPALIVE, &keepAlive, sizeof(keepAlive)) < 0)
{
    perror("SO_KEEPALIVE fails");
}
Could anyone please tell me what may be going wrong here and how we can prevent the client socket from closing?
NOTE
One thing that I want to add here is that if there is no time gap (or less than 5 seconds) between the accept and send/recv calls, the client-server communication occurs as expected.

connect(2) and send(2) are two separate system calls the client makes. The first initiates the TCP three-way handshake; the second actually queues application data for transmission.
On the server side, though, you can start send(2)-ing data to the connected socket immediately after a successful accept(2) (i.e. don't forget to check acceptsock against -1).
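For illustration, a minimal sketch of that sequence (POSIX sockets, error handling trimmed; the identifiers mirror the question's snippet, everything else is assumed):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: send to the client right after accept(); no artificial delay. */
void serve_one(int sock)
{
    struct sockaddr_storage client_addr;
    socklen_t client_addr_length = sizeof(client_addr);

    int acceptsock = accept(sock, (struct sockaddr *)&client_addr, &client_addr_length);
    if (acceptsock == -1) {                 /* always check accept() for failure */
        perror("accept");
        return;
    }

    const char greeting[] = "hello\n";
    if (send(acceptsock, greeting, sizeof(greeting) - 1, 0) == -1)
        perror("send");                     /* EPIPE (errno 32) means the peer already closed */

    close(acceptsock);
}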

After the accept happens and I receive the socket, I wait for around 10-15 seconds before making the first recv/send call.
Why? Do you mean that the client takes that long to send the data? Or that you just futz around in the server for 10-15 seconds between accept() and recv(), and if so, why?
The send calls to the client fail with errno = 32, i.e. broken pipe.
So the client has closed the connection.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
That won't stop the client closing the connection.
Could anyone please tell me what may be going wrong here
The client is closing the connection.
and how we can prevent the client socket from closing?
You can't.

Related

Does close() on a socket at one end close it at the other end as well?

If a client with an ESTABLISHED socket (after connecting via connect()) exits, and the kernel therefore closes all of its open file descriptors, what happens on the other side? If the client sends a FIN and the server ACKs it (which is just the half-closed state), but the server then tries to read() on that socket, what happens? I can imagine 2 situations:
1. the socket the server read()s on is closed as well. But on the server side there is no exit(), so nobody has closed the socket at that end. So here I do not know how the server ends, since its end of the socket should not be closed.
2. the server socket is not closed, but it reads 0 bytes (the return value from read() is simply 0), and it is left to the designer how to handle that return value from read(). But even if the server-side socket is not closed, when does the server send its FIN bit? After it finishes execution (to complete the full connection termination)?
Here is the statement at the server side that reads from the closed socket (closed at the client side):
while ((len = read(sockfd, buf, 256)) > 0){
...
}
Here, will it return because read() is reading on a closed sockfd, or because read() returns 0 and thus makes the condition false (the second situation described above)? As far as I know, if read() were reading on a closed fd, an error (-1) would be returned. But a read of 0 bytes just returns 0. So what is returned?
A close of a connection means that both peers agree that they don't want to communicate with each other any more. If only one peer closes the socket, it just communicates with the FIN that it will no longer send any data. It also tells the local OS that it is no longer willing to receive any data - this is where close(sock) differs from shutdown(sock, SHUT_WR).
A call to read in the server will return 0 if the client closed or shut down the socket, since this means that no more data will be sent from client to server. The server then might decide to also close or shut down the socket. But it might also decide to send more data to the client, since the socket is not closed yet on the server side. If the server sends more data, the client will respond with a RST (connection reset) since it is not expecting any more data. Once the RST is received, the server-side connection is effectively dead too, and subsequent operations on the socket will fail.
while ((len = read(sockfd, buf, 256)) > 0){
In most cases read will return 0 here if the client has closed the connection. It will return -1 in case an error occurred on the socket, notably Connection reset. This can happen if the server has written data while the client had already closed the connection (i.e. a race condition), in which case the client responds with a RST. That error is then delivered on the next syscall on the socket, i.e. the read.
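A hedged sketch of that read loop with both outcomes (0 and -1) handled explicitly; plain POSIX sockets, function name made up:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: the question's read loop, with the end-of-stream and error cases split out. */
void drain_peer(int sockfd)
{
    char buf[256];
    ssize_t len;

    while ((len = read(sockfd, buf, sizeof(buf))) > 0) {
        /* process len bytes */
    }

    if (len == 0) {
        /* Peer sent FIN (close() or shutdown(fd, SHUT_WR) on its side):
           orderly end of stream; our end of the socket is still open. */
        printf("peer finished sending\n");
    } else {
        /* -1: a real error, e.g. ECONNRESET if we wrote after the peer closed. */
        fprintf(stderr, "read: %s\n", strerror(errno));
    }

    close(sockfd);    /* our own FIN is sent only when we close (or shutdown) here */
}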

Using "send" to tcp socket/Windows/c

For the C send function (in blocking mode), it is specified that the function returns the number of sent bytes once they have been received at the destination. I'm not sure that I understand all the nuances, even after writing a "demo" app with WSAIoctl and WSARecv on the server side.
When does send return fewer bytes than asked for in the buffer-length parameter?
What is considered "received at the destination"? My first guess is that it's when the data sits in the server OS's buffer and the server application is notified. My second is that it's when the server application's recv call has read it fully.
Unless you are using a (somewhat exotic) library, a send on a socket will return the number of bytes successfully passed to the TCP buffer, not the number of bytes received by the peer (see Microsoft's docs for example).
When you are streaming data via a socket, you need to check the number of bytes effectively accepted into the TCP send buffer. That's why a send call is usually placed inside a loop that will issue several sends if needed.
Errors in send are local: for example if the socket is closed by the peer during a sending operation (making your socket invalid), or if the operation times out (the TCP buffer not emptying, i.e. the peer not receiving data fast enough or some other trouble).
After all the sends have completed you have no easy way of knowing whether the peer received all the bytes you sent. You'll usually just issue closesocket and make sure that your socket has a proper linger option set (i.e. only close after a timeout or after successfully finishing the send). Alternatively you wait for a confirmation from the peer (for example via a recv that returns zero bytes, indicating that the connection was gracefully closed).
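As a sketch of the send-in-a-loop approach mentioned above (assuming a plain blocking stream socket; on Windows the error check would compare against SOCKET_ERROR instead of -1, and the helper name is made up):

#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling send() until the whole buffer has been accepted into the
   TCP send buffer; a short send is not an error on a stream socket. */
int send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n == -1)
            return -1;      /* local error, e.g. connection reset or timeout */
        sent += (size_t)n;
    }
    return 0;               /* everything queued locally; says nothing about delivery */
}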

How to implement Socket.PollAsync in C#

Is it possible to implement the equivalent of Socket.Poll in the async/await paradigm (or the BeginXXX/EndXXX async pattern)?
A method which would act like NetworkStream.ReadAsync or Socket.BeginReceive but:
leave the data in the socket buffer
complete after the specified interval of time if no data arrived (leaving the socket in a connected state so that the polling operation can be retried)
I need to implement IMAP IDLE so that the client connects to the mail server and then goes into a waiting state where it receives data from the server. If the server does not send anything within 10 minutes, the code sends a ping to the server (without reconnecting; the connection is never closed), and starts waiting for data again.
In my tests, leaving the data in the buffer seems to be possible if I tell the Socket.BeginReceive method to read no more than 0 bytes, e.g.:
sock.BeginReceive(b, 0, 0, SocketFlags.None, null, null)
However, I'm not sure whether it will indeed work in all cases; maybe I'm missing something. For instance, if the remote server closes the connection, it may send a zero-byte packet, and I'm not sure whether Socket.BeginReceive will act identically to Socket.Poll in this case.
And the main problem is how to stop Socket.BeginReceive without closing the socket.
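For comparison only: on a plain BSD-style socket, the behaviour being asked for (wait up to a timeout, leave any pending data unread) is what poll() provides, which is roughly what Socket.Poll wraps underneath. A minimal C sketch, not a C# answer, with a made-up helper name:

#include <poll.h>

/* Wait up to timeout_ms for the socket to become readable.
   Returns 1 if data (or an orderly close) is pending, 0 on timeout, -1 on error.
   Nothing is consumed, so any data stays in the kernel buffer. */
int wait_readable(int sockfd, int timeout_ms)
{
    struct pollfd pfd = { .fd = sockfd, .events = POLLIN };

    int rc = poll(&pfd, 1, timeout_ms);
    if (rc > 0 && !(pfd.revents & POLLIN))
        return -1;          /* POLLERR/POLLHUP/POLLNVAL without readable data */
    return rc;
}

As with Socket.Poll, "readable" can also mean the peer closed the connection; a subsequent read of zero bytes is what distinguishes the two cases.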

lwip - what's the reason a TCP socket blocks in send()?

I am making an application based on lwip; the application just sends data to the server.
When my app has been working for some time (about 5 hours), I find that the send thread hangs in the send() function, and after about 30 minutes send() returns 0 and my thread runs again.
On the server side I have enabled keepalive with a 5-minute timeout. When my app hangs, the server closes the socket 5 minutes later, but my app does not notice this and still hangs in send() until it gets the 0 return 30 minutes later. Why does this happen?
1: If the upload speed is not high enough to send the data, will it hang in send()?
2: Or maybe the server has not read the data in time, which makes the send buffer full and causes the hang?
How can I avoid these problems in my code? I have tried setting TCP_NODELAY and SO_SNDTIMEO and calling select before send, but the problem still occurs.
send() blocks when the receiver is too far behind the sender. recv() returns zero when the peer has closed the connection, which means you must close the socket and stop reading.
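A hedged sketch of one way to notice a stalled receiver instead of blocking indefinitely: checking writability with select() and a timeout before calling send() (standard sockets API, which lwIP's sockets layer mimics; whether this fits the particular lwIP build in question is an assumption, and the helper name is made up):

#include <sys/select.h>
#include <sys/time.h>

/* Returns 1 if the socket can accept more outgoing data, 0 if the send buffer
   stayed full for timeout_s seconds (receiver stalled or gone), -1 on error. */
int wait_writable(int sock, int timeout_s)
{
    fd_set wfds;
    struct timeval tv = { .tv_sec = timeout_s, .tv_usec = 0 };

    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    return select(sock + 1, NULL, &wfds, NULL, &tv);
}

If this keeps returning 0, the connection can be treated as dead and closed, rather than letting a blocking send() sit there until lwIP's own timeouts eventually fire.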

WSASocket programming: how to make the non-blocking connect() wait until some point in time?

I have a problem with WSASocket programming. I want to do a trick at the server side so that it holds the client side waiting in the WSAWaitForMultipleEvents() function for the FD_CONNECT event.
Details are as follows:
On the client side, the socket is in non-blocking mode, and it tries to connect to the server. Its main code is something like:
SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
WSAEVENT hEvent = WSACreateEvent();
WSAEventSelect(s, hEvent, FD_CONNECT); // this also puts the socket into non-blocking mode
connect(s, (struct sockaddr *)&someserveraddr, sizeof(someserveraddr)); // connect to some server in non-blocking mode
WSAWaitForMultipleEvents(1, &hEvent, TRUE, WSA_INFINITE, FALSE); // this blocks until success or failure
On the server side, once the server sees a connection from that client, it will do something special which will also take some time, for example calling doSomethingLengthy(). So I want to hold the client side blocked in WSAWaitForMultipleEvents(...) until the server finishes that task. But I don't know how to achieve this. Usually, the server code would look like:
SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(s, (struct sockaddr *)&someaddress, sizeof(someaddress));
listen(s, 5);
int addrlen = sizeof(someotheraddress);
SOCKET acceptSocket = accept(s, (struct sockaddr *)&someotheraddress, &addrlen);
The problem is that I don't know where/when to call doSomethingLengthy(). I know that once listen() is done, the client will be notified and WSAWaitForMultipleEvents() will return. But I cannot call doSomethingLengthy() before listen(), otherwise the client-side connect() will fail.
You can't. The server's end of the connection is completed via the backlog queue, before the server ever gets to see the accepted socket.
FD_CONNECT tells you when the socket has connected, but FD_WRITE tells you when you are allowed to send data over the connection. So try waiting for FD_WRITE instead (note that you can get FD_WRITE multiple times during a connection's lifetime, but you will always get it after a successful connect(), in addition to FD_CONNECT).
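A sketch of the client side following that advice (Winsock; WSAStartup and most error handling omitted, helper name made up). The point is registering for FD_WRITE in addition to FD_CONNECT and then checking which events actually fired:

#include <winsock2.h>

/* Assumes WSAStartup() has already been called. Registers for FD_CONNECT and
   FD_WRITE, waits, then checks which events fired and whether connect failed.
   Returns 0 when connected and writable, 1 when connected but FD_WRITE has not
   arrived yet (wait again), -1 on connect failure. */
int connect_and_wait(SOCKET s, const struct sockaddr *addr, int addrlen)
{
    WSAEVENT hEvent = WSACreateEvent();
    WSANETWORKEVENTS ev;

    WSAEventSelect(s, hEvent, FD_CONNECT | FD_WRITE); /* socket becomes non-blocking */
    connect(s, addr, addrlen);                        /* expected to "fail" with WSAEWOULDBLOCK */

    WSAWaitForMultipleEvents(1, &hEvent, TRUE, WSA_INFINITE, FALSE);
    WSAEnumNetworkEvents(s, hEvent, &ev);
    WSACloseEvent(hEvent);

    if ((ev.lNetworkEvents & FD_CONNECT) && ev.iErrorCode[FD_CONNECT_BIT] != 0)
        return -1;                                    /* connect failed */
    if (ev.lNetworkEvents & FD_WRITE)
        return 0;                                     /* connected and allowed to send */
    return 1;                                         /* connected; wait again for FD_WRITE */
}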