NetSocket write exception when network becomes unreachable

I am using the NetSocket class to write data to a TCP server. When the TCP server shuts down gracefully, the socket's close handler is invoked and everything works correctly. However, when I cut off the network during execution, the write handler still reports success. With a plain Java socket I would get something like java.net.SocketException: Broken pipe.
Is this a bug, and how should I handle this case? I would like to avoid implementing my own timeout and instead be notified when the socket is closed abruptly.
This is the code I'm using for writing:
socket.write(requestCommandBuffer, writeBuffer -> {
    if (writeBuffer.succeeded()) {
        // I always get success here, even when the other side abruptly closes the socket
        LOG.info("Successful write.");
    } else {
        LOG.error("Failed to write.");
    }
});

Related

If I call shutdown(fd, SHUT_RDWR) but not close(fd), what will happen?

If I call shutdown(fd, SHUT_RDWR) but not close(fd), what will happen?
inline void CSocket::close()
{
    if (_socket_fd != INVALID_SOCKET)
    {
        ::shutdown(_socket_fd, SHUT_RDWR);
        ::close(_socket_fd);
        _socket_fd = INVALID_SOCKET;
    }
}
If I call shutdown(fd, SHUT_RDWR) but not close(fd), what will happen?
On the network, a FIN will be sent, which will cause the peer to encounter end of stream after reading any pending data.
Future recv() and read() calls on your socket will return zero, indicating end of stream.
You will not be able to send from this socket.
What happens if the peer tries to send you more data is system-dependent: on Windows a reset will be issued; on most Unix systems the data will be acknowledged and thrown away; on Linux the data will be acknowledged and uselessly saved in your socket receive buffer, from which you can never retrieve it, and which will eventually close your receive window and prevent the peer from sending.
The socket descriptor itself is not closed or released to the operating system.
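For illustration, a minimal C sketch of that behaviour (using a local socketpair for brevity; a connected TCP socket behaves the same way from the application's point of view): after shutdown(SHUT_RDWR) the peer's recv() returns 0, sending on the shut-down end fails, and the descriptor still has to be passed to close().

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[16];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
        perror("socketpair");
        return 1;
    }

    shutdown(fds[0], SHUT_RDWR);          /* no further sends or receives on fds[0] */

    ssize_t n = recv(fds[1], buf, sizeof(buf), 0);
    printf("peer recv() returned %zd (0 == end of stream)\n", n);

    /* MSG_NOSIGNAL (Linux) suppresses SIGPIPE so the failure shows up as EPIPE */
    if (send(fds[0], "x", 1, MSG_NOSIGNAL) < 0)
        perror("send after shutdown");

    close(fds[0]);                        /* the descriptors still need close() */
    close(fds[1]);
    return 0;
}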

Epoll events for connecting sockets

I create an epoll instance and register some non-blocking sockets that try to connect to closed ports on localhost. Why does epoll tell me that I can write to these sockets (it returns an event for one of the created sockets with an event mask containing EPOLLOUT)? The connection never opens, and if I try to send something on it I get a Connection refused error.
Another question: what does EPOLLHUP mean? I thought it was the event for a refused connection. But how can an event then have both EPOLLHUP and EPOLLOUT set at the same time?
Sample code in Python:
import socket
import select

poll = select.epoll()
fd_to_sock = {}
for i in range(1, 3):
    s = socket.socket()
    s.setblocking(0)
    s.connect_ex(('localhost', i))
    poll.register(s, select.EPOLLOUT)
    fd_to_sock[s.fileno()] = s

print(poll.poll(0.1))
# prints '[(4, 28), (5, 28)]'
All that poll guarantees is that your application won't block when it calls the corresponding function. So you are getting what you paid for: you can rest assured that writing to this socket won't block, and it didn't block, did it?
poll never guarantees that the corresponding operation will succeed.
poll/select/epoll return when the file descriptor is "ready" but that just means that the operation will not block (not that you will necessarily be able to write to it successfully).
Likewise for EPOLLIN: for example, it will return ready when a socket is closed; in that case, you won't actually be able to read data from it.
EPOLLHUP means that there was a "hang up" on the connection. That would really only occur once you actually had a connection. Also, the documentation (http://linux.die.net/man/2/epoll_ctl) says that you don't need to include it anyway:
EPOLLHUP
Hang up happened on the associated file descriptor. epoll_wait(2) will always wait for this event; it is not necessary to set it in events.
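The usual way to find out what actually happened to the connect is to read SO_ERROR from the socket once epoll reports it ready. A minimal C sketch, assuming a closed port on localhost as in the question:

#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1);                              /* assumed to be a closed port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    connect(fd, (struct sockaddr *)&addr, sizeof(addr));   /* usually -1 with EINPROGRESS */

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event out;
    if (epoll_wait(ep, &out, 1, 1000) == 1) {
        /* "Writable" only means connect() has finished, not that it worked. */
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
        printf("events=0x%x, SO_ERROR=%d (%s)\n", out.events, err, strerror(err));
    }

    close(ep);
    close(fd);
    return 0;
}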

infinite loop for listening to socket

I have an implementation where I listen on a port for events and do processing based on the input. I have kept it in an infinite loop. However, it only works once and then I have to restart the program. Does control never come back? Is this infinite loop a good idea?
Integer port = Integer.parseInt(Configuration.getProperty("Environment", "PORT"));
ServerSocket serverSocket = new ServerSocket(port);
LOG.info("Process Server listening on PORT: " + port);
while (true) {
    Socket socket = serverSocket.accept();
    new Thread(new ProcessEvent(socket)).start();
}
Once you have started the thread that handles the client, you also need to loop on a read function, because after you read one message you will need to read the next ones. accept() will return only once per client connection. After the connection is opened, everything happens in the thread until the connection is closed.
Looping on accept() is a good idea, but the spawned thread must not exit as long as your client is connected. If you intentionally close the connection, then it should be fine if you make sure it is handled correctly on both sides, and the client needs to reopen the connection for further communication.
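For illustration, here is a minimal sketch of the same accept-then-read-loop shape, written in C with POSIX threads (the function name handle_client and the port number are made up); the per-connection thread plays the same role as the ProcessEvent runnable and keeps reading until the peer closes the connection.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Per-client thread: keep reading until the peer closes the connection. */
static void *handle_client(void *arg)
{
    int client_fd = (int)(intptr_t)arg;
    char buf[1024];
    ssize_t n;

    while ((n = recv(client_fd, buf, sizeof(buf), 0)) > 0) {
        /* process one message, then loop to read the next one */
        printf("received %zd bytes\n", n);
    }
    close(client_fd);                        /* n == 0: the client closed the connection */
    return NULL;
}

int main(void)
{
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);            /* made-up port, for illustration */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(server_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(server_fd, 16);

    for (;;) {                               /* accept() returns once per client connection */
        int client_fd = accept(server_fd, NULL, NULL);
        if (client_fd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)client_fd);
        pthread_detach(tid);                 /* the thread lives as long as the connection */
    }
}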

OpenSSL Nonblocking Socket Accept And Connect Failed

Here is my question:
Is it bad to set the socket to non-blocking before I call accept or connect, or should I use blocking accept and connect and then switch the socket to non-blocking afterwards?
I'm new to OpenSSL and not very experienced with network programming. My problem is that I'm trying to add security to a non-blocking socket application using OpenSSL. When I call SSL_accept on the server side and SSL_connect on the client side, I check the returned error code with:
SSL_get_error(m_ssl, n);
char error[65535];
ERR_error_string_n(ERR_get_error(), error, 65535);
SSL_get_error returns SSL_ERROR_WANT_READ, while ERR_error_string_n prints "error:00000000:lib(0):func(0):reason(0)", which I think means no error. SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Then I use a loop to retry those functions, but this just leads to an infinite loop :(
I believe I have initialized SSL properly; here is the code:
//CRYPTO_malloc_init();
SSL_library_init();
const SSL_METHOD *method;
// load & register all cryptos, etc.
OpenSSL_add_all_algorithms();
// load all error messages
SSL_load_error_strings();
if (server) {
    // create new server-method instance
    method = SSLv23_server_method();
}
else {
    // create new client-method instance
    method = SSLv23_client_method();
}
// create new context from method
m_ctx = SSL_CTX_new(method);
if (m_ctx == NULL) {
    throwError(-1);
}
If there is any part I haven't mentioned but you think it could be the problem, please let me know.
SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Yes, but this is not the full story.
You should retry the call only after the socket becomes readable, i.e. you need to use select, poll, or similar functions to wait until the socket is readable. The same applies to SSL_ERROR_WANT_WRITE, but there you have to wait for the socket to become writable.
If you just retry without waiting, it will probably succeed eventually, but only after hundreds of failed calls. While select does not guarantee that the very next call will succeed, it usually takes only a few calls to SSL_connect/SSL_accept, and you will not busy-loop and eat CPU in the meantime.
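A minimal sketch of such a retry loop, assuming ssl and fd are an already-created SSL object and its underlying non-blocking socket (error handling kept to a minimum):

#include <openssl/ssl.h>
#include <sys/select.h>

/* Retry SSL_connect()/SSL_accept() only after select() reports the socket ready. */
int ssl_handshake_nonblocking(SSL *ssl, int fd, int is_server)
{
    for (;;) {
        int ret = is_server ? SSL_accept(ssl) : SSL_connect(ssl);
        if (ret == 1)
            return 0;                    /* handshake completed */

        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);

        switch (SSL_get_error(ssl, ret)) {
        case SSL_ERROR_WANT_READ:
            FD_SET(fd, &readfds);        /* wait until the socket is readable */
            break;
        case SSL_ERROR_WANT_WRITE:
            FD_SET(fd, &writefds);       /* wait until the socket is writable */
            break;
        default:
            return -1;                   /* a real error, not just "try again" */
        }

        /* Block here instead of busy-looping on SSL_connect/SSL_accept. */
        if (select(fd + 1, &readfds, &writefds, NULL, NULL) < 0)
            return -1;
    }
}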

GCDAsyncSocket write timeout does not work

I am trying to set a timeout on write operations when using GCDAsyncSocket. The code is pretty simple and is the following:
[iAsyncSocket writeData:bytesToSend withTimeout:3.0 tag:0];
Then I disable the Internet connection on my Mac and wait for the write timeout to occur, but nothing happens. I don't get a disconnection with a GCDAsyncSocketWriteTimeoutError as I should.
I have also verified that, as expected, my server stops receiving the messages after I turn off the Internet connection.
I have looked at the source code and found that the writeTimer, which is responsible for firing the write timeout event, is always cancelled (the function endCurrentWrite is called). Tracing back to where the timer is cancelled, I ended up at the following line of code:
ssize_t result = write(socketFD, buffer, (size_t)bytesToWrite);
The write system call always returns the total number of bytes I am sending, as if the socket managed to send the data even though there is no Internet connection. Is this logical?
Has anyone come up with the same problem or seen similar behaviour? Or has anyone managed to set a write timeout for a GCDAsyncSocket?
Thanks a lot.