I'm having what doesn't seem to be an actual problem, but I don't understand the error. Basically, I've been trying to write code that "gracefully" shuts down a TCP socket on both ends. At the end of my program, both sides shut down their sockets before closing them. On a quick restart, which is the thing I'm trying to solve, the much worse problem of crashing because the sockets are lingering on both sides no longer happens. The shutdown-then-close approach seems to work on that front.
However, I still get "Address already in use", which previously meant I couldn't connect at all. Now I'm able to connect just fine after that error. I've read a lot on the subject of graceful shutdown, address reuse, and the like. And I guess my question is: if the socket errored on bind() ("Address already in use") after a successful open(), how is it still able to connect to the endpoint? In other words, if the address is actually already in use, how is the connection being made? Also of note, the reuse-address option doesn't help in this situation, because I'm using the same socket settings and the same local/remote addresses and ports.
Failing to bind() a socket to an address does not invalidate the underlying socket. As such, the connect() operation will continue with an unbound socket, deferring to the kernel to bind it to a local endpoint. The connection can then succeed because the kernel picks an available ephemeral port, giving the new connection a unique (local address, local port, remote address, remote port) tuple.
Here is a complete example demonstrating this behavior:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <cassert>

// This example is not interested in all handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

int main()
{
  using boost::asio::ip::tcp;

  // Create all I/O objects.
  boost::asio::io_service io_service;
  tcp::acceptor acceptor(io_service, {tcp::v4(), 0});
  tcp::socket server(io_service, tcp::v4());

  // Open socket1, binding to a random port.
  tcp::socket socket1(io_service, {boost::asio::ip::address_v4::any(), 0});

  tcp::socket socket2(io_service); // non-open

  // Explicitly open socket2, which will bind it to the any address.
  boost::system::error_code error;
  socket2.open(tcp::v4(), error);
  assert(!error);
  assert(socket2.local_endpoint().port() == 0);

  // An attempt to bind socket2 to socket1's address will fail with
  // an address-in-use error, leaving socket2 bound to the any endpoint.
  // (i.e. a failed bind does not have side effects on the socket)
  socket2.bind(socket1.local_endpoint(), error);
  assert(error == boost::asio::error::address_in_use);
  assert(socket2.local_endpoint().port() == 0);

  // Connect will defer to the kernel to bind the socket.
  acceptor.async_accept(server, boost::bind(&noop));
  socket2.async_connect(acceptor.local_endpoint(),
      [&error](const boost::system::error_code& ec) { error = ec; });

  io_service.run();
  io_service.reset();

  assert(!error);
  assert(socket2.local_endpoint().port() != 0);
}
(Running on VS2017, Win7 x64)
I am confused about the point of SO_REUSEADDR and SO_EXCLUSIVEADDRUSE. And yes, I've read the MSDN documentation, but I'm obviously not getting it.
I have the following simple code in two separate processes. As expected, because I enable SO_REUSEADDR on both sockets, the second process's bind succeeds. If I leave it off either of the sockets, the second bind fails.
#define PORT 5150

SOCKET sockListen;
if ((sockListen = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED)) == INVALID_SOCKET)
{
    printf("WSASocket() failed with error %d\n", WSAGetLastError());
    return 1;
}

int optval = 1;
if (setsockopt(sockListen, SOL_SOCKET, SO_REUSEADDR, (char*)&optval, sizeof(optval)) == SOCKET_ERROR)
    return -1;

SOCKADDR_IN InternetAddr;
InternetAddr.sin_family = AF_INET;
InternetAddr.sin_addr.s_addr = inet_addr("10.15.20.97");
InternetAddr.sin_port = htons(PORT);

if (::bind(sockListen, (PSOCKADDR)&InternetAddr, sizeof(InternetAddr)) == SOCKET_ERROR)
{
    printf("bind() failed with error %d\n", WSAGetLastError());
    return 1;
}
So doesn't having to enable SO_REUSEADDR on both sockets make SO_EXCLUSIVEADDRUSE unnecessary? If I don't want anyone to forcibly bind to my port, I just don't enable SO_REUSEADDR in that process.
The only difference I can see is that if I enable SO_EXCLUSIVEADDRUSE in the first process, then attempt a bind in the second process, that second bind will fail with
a) WSAEADDRINUSE if I don't enable SO_REUSEADDR in that second process
b) WSAEACCES if I do enable SO_REUSEADDR in that second process
So I tried enabling both SO_EXCLUSIVEADDRUSE and SO_REUSEADDR in the first process, but found that whichever one I attempted to set second failed with WSAEINVAL.
Note also that I have read this past question but what that says isn't what I'm seeing: it states
A socket with SO_REUSEADDR can always bind to exactly the same source
address and port as an already bound socket, even if the other socket
did not have this option set when it was bound
Now if that were the case then I can definitely see the need for SO_EXCLUSIVEADDRUSE.
I'm pretty sure I'm doing something wrong but I cannot see it; can someone clarify please?
As stated in the docs, SO_EXCLUSIVEADDRUSE became available on Windows NT4 SP4; before that there was only SO_REUSEADDR. So both options being present has (also) historical reasons.
I think of SO_REUSEADDR as the intention to share an address (which is only really useful for UDP multicast; for unicast or TCP it really doesn't do much, since the behaviour is non-deterministic for both sockets).
SO_EXCLUSIVEADDRUSE is a security measure to avoid my (server) application's traffic being hijacked or rendered useless by a later binding to the same IP/port.
As I see it, you need SO_REUSEADDR for UDP multicast, and you need SO_EXCLUSIVEADDRUSE as a security measure for server applications.
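To make the difference concrete, here is a minimal, untested sketch of the server side claiming its port exclusively; the address and port are placeholders, and you'd link against ws2_32.lib:
#include <winsock2.h>
#include <stdio.h>

/* Sketch: a server that claims its port exclusively. With
   SO_EXCLUSIVEADDRUSE set before bind(), a second process cannot bind
   to the same address/port, regardless of its own SO_REUSEADDR setting. */
int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (sock == INVALID_SOCKET)
        return 1;

    int optval = 1;
    if (setsockopt(sock, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
                   (char*)&optval, sizeof(optval)) == SOCKET_ERROR) {
        printf("setsockopt() failed with error %d\n", WSAGetLastError());
        return 1;
    }

    SOCKADDR_IN addr;
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5150);

    if (bind(sock, (PSOCKADDR)&addr, sizeof(addr)) == SOCKET_ERROR) {
        printf("bind() failed with error %d\n", WSAGetLastError());
        return 1;
    }

    listen(sock, SOMAXCONN);
    /* ... accept connections ... */
    closesocket(sock);
    WSACleanup();
    return 0;
}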
I am creating a UDP proxy in Go, but while doing some load testing with iperf I started to get this error:
socket: too many open files
After searching and testing, I found that if I create a pool using a map of open connections, keyed by *net.UDPAddr.String() with the value being an instance of the UDP proxy containing a *net.UDPConn, I am able to reuse an existing connection when the client address is the same:
var clients map[string]*UDPProxy.UDPProxy = make(map[string]*UDPProxy.UDPProxy)
This block of code looks something like:
// wait for connections
for {
	n, clientAddr, err := conn.ReadFromUDP(buffer)
	if err != nil {
		log.Println(err)
	}
	counter++
	if *d {
		log.Printf("new connection from %s", clientAddr.String())
	}
	fmt.Printf("Connections: %d, clients: %d\n", counter, len(clients))
	proxy, found = clients[clientAddr.String()]
	if !found {
		// make a new connection to the remote server
		proxy = UDPProxy.New(conn, clientAddr, raddr_udp, *d)
		clients[clientAddr.String()] = proxy
	}
	go proxy.Start(buffer[0:n])
}
This seems to be working, but the problem I have now is that I need to find a way of expiring/cleaning entries in the map when a client exits or stops using the proxy, so that I can avoid keeping multiple unused connections around.
Any idea how I could improve this, or better yet, how I could replace the map entirely? I don't know whether channels could be helpful here.
Thanks in advance.
Since you are creating UDP proxies, you probably know that you have to come up with your own rule for deciding when to "terminate" the proxy session. A session is just an abstraction when it comes to UDP, unless the UDPProxy package you're using has an established mechanism already.
Depending on why you are creating UDP proxies, it might be acceptable to clean up connections fairly arbitrarily; a sketch of one common approach follows below.
So if you know that a client is exiting, call the Close() method on the proxy (assuming there is one) and use delete on the map entry.
How you decide that a client is exiting is up to you. You could use a slice as a FIFO, pick entries at random, or set a timer for each one.
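For illustration, here is a minimal sketch of the timer-based idea: track a last-activity timestamp per client and sweep the map periodically. The Proxy interface and the durations are assumptions; substitute your UDPProxy type and whatever idle window fits your traffic.
package main

import (
	"sync"
	"time"
)

// Proxy is a stand-in for your UDPProxy type; all the sweeper needs
// is a way to close the upstream connection.
type Proxy interface{ Close() error }

// proxyEntry pairs a proxy with the last time it saw traffic.
type proxyEntry struct {
	proxy    Proxy
	lastSeen time.Time
}

// clientPool guards the map with a mutex because the sweeper goroutine
// runs concurrently with the ReadFromUDP loop.
type clientPool struct {
	mu      sync.Mutex
	clients map[string]*proxyEntry
}

// touch records activity for addr so the sweeper keeps it alive,
// inserting the entry if it is new.
func (p *clientPool) touch(addr string, proxy Proxy) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if e, ok := p.clients[addr]; ok {
		e.lastSeen = time.Now()
		return
	}
	p.clients[addr] = &proxyEntry{proxy: proxy, lastSeen: time.Now()}
}

// sweep closes and removes entries idle for longer than maxIdle.
func (p *clientPool) sweep(maxIdle time.Duration) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for addr, e := range p.clients {
		if time.Since(e.lastSeen) > maxIdle {
			e.proxy.Close()
			delete(p.clients, addr)
		}
	}
}

func main() {
	pool := &clientPool{clients: make(map[string]*proxyEntry)}

	// Sweep every 30 seconds, dropping clients idle for over 2 minutes.
	go func() {
		for range time.Tick(30 * time.Second) {
			pool.sweep(2 * time.Minute)
		}
	}()

	// ... call pool.touch(clientAddr.String(), proxy) from the
	// ReadFromUDP loop instead of writing to the map directly ...
	select {}
}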
I am facing a problem binding to a socket.
The 1st instance works properly, i.e.
socket() returns success, and bind(), listen(), accept() and recv() all work fine.
The 2nd instance throws an error on bind(): "Address already in use".
I went through all the earlier posts on this and I don't see any specific solution provided.
My code is as below:
if ((status = getaddrinfo(NULL, "8080", &hints, &servinfo)) != 0) {
    /* getaddrinfo() reports errors via its return value, not errno */
    ALOGE("Socket:: getaddrinfo failed %s\n", gai_strerror(status));
    return NULL;
}

server_sockfd = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol);
if (server_sockfd == -1) {
    ALOGE("Socket:: socket system call failed %s\n", strerror(errno));
    return NULL;
}

int opt = 1;
if (setsockopt(server_sockfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(int)) < 0) {
    ALOGE("Socket:: setsockopt failed %s\n", strerror(errno));
    return NULL;
}

ret = bind(server_sockfd, servinfo->ai_addr, servinfo->ai_addrlen);
if (ret != 0) {
    ALOGE("Socket:: error binding on socket %s\n", strerror(errno));
    return NULL;
}
This code runs on the Android platform.
I have properly closed each session before opening a new one, as shown below:
ret = shutdown(client_sockfd, 0); /* 0 == SHUT_RD */
if (ret != 0)
    ALOGE("Socket:: shutdown failed %s\n", strerror(errno));
I tried close() as well, but it did not work.
Surprisingly, the error does not disappear even when I try to open the socket after a long time (well past the TIME_WAIT window).
Could anyone please guide me to the proper call, API, or logic (in code, not on the command line, and short of killing the process) to handle this situation?
A socket is one half of a channel of communication between two computers over a network on a particular port (the other half is the corresponding socket on the other computer).
The error is quite clear in this case, I suppose. As it says, the address is already in use: the address you are trying to bind in the second attempt is already occupied, presumably by the first socket.
To investigate further, check these other SO questions here and here.
You can't share a TCP listening port between two processes even with SO_REUSEADDR.
NB: shutdown() does not close a TCP session; it half-closes it. You still have to close the socket.
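As a minimal sketch (the descriptor names are assumptions), a full teardown looks like this: half-close both directions with shutdown(), then release the descriptors with close():
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of a full teardown for a connected socket plus its listener.
   client_sockfd and server_sockfd are assumed to be valid descriptors. */
void teardown(int client_sockfd, int server_sockfd)
{
    /* Half-close both directions, then release the descriptor. */
    shutdown(client_sockfd, SHUT_RDWR);
    close(client_sockfd);

    /* The listening socket only needs close(); with SO_REUSEADDR set
       before the next bind(), a restart won't hit "Address already
       in use" while old connections sit in TIME_WAIT. */
    close(server_sockfd);
}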
Here is my question:
Is it bad to set a socket to nonblocking before I call accept or connect, or should I use blocking accept and connect and then change the socket to nonblocking?
I'm new to OpenSSL and not very experienced with network programming. My problem is that I'm trying to use OpenSSL on top of a nonblocking socket setup to add security. When I call SSL_accept on the server side and SSL_connect on the client side, and check the return code using
SSL_get_error(m_ssl, n);
char error[65535];
ERR_error_string_n(ERR_get_error(), error, 65535);
the return code from SSL_get_error indicates SSL_ERROR_WANT_READ, while ERR_error_string_n prints "error:00000000:lib(0):func(0):reason(0)", which I think means no error. SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Then I use a loop to retry those functions, but this just leads to an infinite loop :(
I believe I have initialized SSL properly; here is the code:
//CRYPTO_malloc_init();
SSL_library_init();

const SSL_METHOD *method;

// load & register all cryptos, etc.
OpenSSL_add_all_algorithms();
// load all error messages
SSL_load_error_strings();

if (server) {
    // create new server-method instance
    method = SSLv23_server_method();
}
else {
    // create new client-method instance
    method = SSLv23_client_method();
}

// create new context from method
m_ctx = SSL_CTX_new(method);
if (m_ctx == NULL) {
    throwError(-1);
}
If there is any part I haven't mentioned but you think it could be the problem, please let me know.
SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Yes, but this is not the full story.
You should retry the call only after the socket becomes readable, i.e. you need to use select or poll or a similar function to wait until the socket is readable. The same applies to SSL_ERROR_WANT_WRITE, except that there you have to wait for the socket to become writable.
If you just retry without waiting, it will probably succeed eventually, but only after hundreds of failed calls. Waiting in select does not guarantee that the very next call succeeds, but it will take only a few calls of SSL_connect/SSL_accept until it does, and it will not busy-loop and eat CPU in the meantime.
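For example, here is a minimal sketch of a select()-driven retry loop around SSL_connect; the ssl object is assumed to be set up on a nonblocking fd, and SSL_accept is handled the same way:
#include <openssl/ssl.h>
#include <sys/select.h>

/* Sketch: retry SSL_connect on a nonblocking socket, sleeping in
   select() until the socket is ready in the direction OpenSSL needs.
   Returns 1 on success, 0 on a real error. */
int connect_ssl(SSL *ssl)
{
    int fd = SSL_get_fd(ssl);
    for (;;) {
        int ret = SSL_connect(ssl);
        if (ret == 1)
            return 1; /* handshake complete */

        int err = SSL_get_error(ssl, ret);
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);

        if (err == SSL_ERROR_WANT_READ)
            select(fd + 1, &fds, NULL, NULL, NULL);   /* wait readable */
        else if (err == SSL_ERROR_WANT_WRITE)
            select(fd + 1, NULL, &fds, NULL, NULL);   /* wait writable */
        else
            return 0; /* real failure; inspect the error queue */
    }
}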
I'm trying to use a Boost.Asio socket bound to a local address/port combination. That works great. What doesn't work is re-using the address once the socket and the application have been stopped and restarted.
//
// open the socket - it would also be opened by the async_connect()
// method but we might need an open socket to bind it
_socket.open(boost::asio::ip::tcp::v4());
if ( _bindLocal ) {
    boost::asio::socket_base::reuse_address option(true);
    _socket.set_option(option);
    _socket.bind( _localEndpoint );
}

// Invoke async. connect. Immediate return, no throw.
_socket.async_connect(_remoteEndpoint,
    boost::bind(&MyTransceiver::handleConnect, this,
        boost::asio::placeholders::error));
What am I missing? Is the ordering of the open(), set_option() and bind() calls correct?
The code looks fine. Try using the error_code overload to get the result of your set_option() call:
boost::system::error_code ec;
_socket.set_option(boost::asio::socket_base::reuse_address(true), ec);
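A sketch of the full sequence with each step checked (member names taken from the question; assumes <iostream> is available for the diagnostic):
boost::system::error_code ec;
_socket.open(boost::asio::ip::tcp::v4(), ec);
if (!ec)
    _socket.set_option(boost::asio::socket_base::reuse_address(true), ec);
if (!ec)
    _socket.bind(_localEndpoint, ec);
if (ec)
    std::cerr << "socket setup failed: " << ec.message() << std::endl;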