Is it possible to reconnect an already disconnected socket without having to create a new socket FD?
Example:
int s = socket();
connect(s,...);
....
socket disconnects
....
connect(s,...); <-------
According to the manpage, "Generally, stream sockets may successfully connect() only once; datagram sockets may use connect() multiple times to change their association." So if your socket is a TCP socket, the answer is "probably not"; if it's a UDP socket, the answer is "probably".
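So with TCP the usual pattern is to close the old descriptor and open a fresh socket before reconnecting. A minimal sketch (the reconnect_tcp helper name and the IPv4 address struct are just illustrative, and error handling is trimmed):

#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Re-establish a TCP connection: the old fd cannot be reused,
 * so close it and go through socket()/connect() again. */
int reconnect_tcp(int oldfd, const struct sockaddr_in *addr)
{
    if (oldfd >= 0)
        close(oldfd);                       /* discard the dead connection */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    if (connect(fd, (const struct sockaddr *)addr, sizeof(*addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;                              /* a new, connected descriptor */
}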
Related
We have a situation where the client writes faster than the server can read: the client writes to the server every second or less, the TCP socket buffer fills up, and the connection eventually disconnects.
How should this sort of situation be handled?
Is there a way to check the TCP socket buffer from the client side before writing, and wait until the buffer is freed so we can send again?
Here is a sample pseudo code to easily reproduce the issue
Server
socket = create server Socket at port 7777;
socket->Accept(); //wait for just 1 connection
while(true)
{
// just do nothing and let the client fill the buffer
}
Client
socket = connect to localhost 7777
while(true)
{
socket->write("hello from test");
}
This will loop until the write buffer is full; the write then hangs, and the connection eventually disconnects with Winsock error 10057 (WSAENOTCONN).
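As far as I know there is no portable way to query the remaining space in the send buffer, but the usual approach is to make the socket non-blocking and wait with poll()/select() until it becomes writable instead of looping blindly. A minimal POSIX sketch (the send_all helper name is made up; on Windows the same idea works with select()):

#include <errno.h>
#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Write the whole buffer, waiting with poll() whenever the send
 * buffer is full (socket assumed to be in non-blocking mode). */
int send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n > 0) {
            sent += (size_t)n;
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Send buffer is full: block here until the peer has
             * read enough for more room to become available. */
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            if (poll(&pfd, 1, -1) < 0)
                return -1;
            continue;
        }
        return -1;                  /* real error (peer gone, etc.) */
    }
    return 0;
}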
My experiment showed that I can write to a non-blocking socket right after the connect() call, with no TCP connection established yet, and the written data is correctly received by the peer after the connection is established (asynchronously). Is this guaranteed on Linux / FreeBSD? I mean, will write() return > 0 while the connection is still in progress? Or was I just lucky, and the TCP connection was established between the connect() and write() calls?
The experiment code:
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int fd = socket(PF_INET, SOCK_STREAM, 0);
fcntl(fd, F_SETFL, O_NONBLOCK);
struct sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons(_ip_port.port);       // _ip_port comes from the surrounding code
addr.sin_addr.s_addr = htonl(_ip_port.ipv4);
int res = connect(fd, (struct sockaddr*)&addr, sizeof(addr));
// HERE: res == -1, errno == 115 (EINPROGRESS)
int r = ::write(fd, "TEST", 4);
// HERE: r == 4
P.S.
I process multiple listening and connecting sockets (incoming and outgoing connections) in a single thread and manage them with epoll. Usually, when I want to create a new outgoing connection, I call a non-blocking connect(), wait for EPOLLOUT (the epoll event) and then write() my data. But I noticed that I can begin writing before EPOLLOUT arrives and get the expected result. Can I trust this approach, or should I keep using my old-fashioned approach?
P.P.S.
I repeated my experiment with a remote host with 170 ms latency and got a different result: the write() (right after connect()) returned -1 with errno == EAGAIN. So yes, my first experiment was not fair (connecting to a fast localhost), but I still think the "write() right after connect()" pattern can be used: if write() returns -1 with EAGAIN, I wait for EPOLLOUT and retry the write. But I agree, this is a dirty approach with little real benefit.
Can I write() to a socket just after the connect() call, but before the TCP connection is established?
Sure, you can. It's just likely to fail.
Per the POSIX specification of write():
[ECONNRESET]
A write was attempted on a socket that is not connected.
Per the Linux man page for write():
EDESTADDRREQ
fd refers to a datagram socket for which a peer address has
not been set using connect(2).
If the TCP connect has not completed, your write() call will fail.
At least on Linux, the socket is marked as not writable until the [SYN, ACK] is received from the peer. This means the system will not send any application data over the network until the [SYN, ACK] is received.
If the socket is in non-blocking mode, you must use select/poll/epoll to wait until it becomes writable (otherwise write calls will fail with EAGAIN and no data will be enqueued). When the socket becomes writable, the kernel has usually already sent an empty [ACK] message to the peer before the application has had time to write the first data, which results in some unnecessary overhead due to the API design.
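A minimal sketch of that wait-until-writable pattern with epoll (the wait_connected helper name is made up; the fd is assumed to come from a non-blocking connect() that returned EINPROGRESS):

#include <sys/epoll.h>
#include <sys/socket.h>

/* After a non-blocking connect() returned -1/EINPROGRESS, wait until
 * the socket becomes writable and verify that the connect succeeded. */
int wait_connected(int epfd, int fd)
{
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = fd };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0)
        return -1;

    struct epoll_event out;
    if (epoll_wait(epfd, &out, 1, -1) < 1)   /* block until EPOLLOUT/EPOLLERR */
        return -1;

    int err = 0;
    socklen_t len = sizeof(err);
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0 || err != 0)
        return -1;                           /* connect() actually failed */

    return 0;                                /* connected, safe to write() */
}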
What appears to work is this: after calling connect() on a non-blocking socket and getting EINPROGRESS, set the socket back to blocking and then start writing data. The kernel will then internally wait until the [SYN, ACK] is received from the peer and send the application data together with the initial ACK in a single packet, which avoids that empty [ACK]. Note that the write call will block until the [SYN, ACK] is received, and will return -1 with errno set to e.g. ECONNREFUSED or ETIMEDOUT if the connection fails. This approach does not work in WSL 1 (Windows Subsystem for Linux), however, which just fails with EPIPE immediately (no SIGPIPE, though).
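A sketch of that blocking-after-EINPROGRESS trick, reusing the fd from the experiment code above (treat it as an illustration of the behaviour described here, not a guaranteed-portable recipe):

#include <fcntl.h>
#include <unistd.h>

/* connect() has just returned -1 with errno == EINPROGRESS on the
 * non-blocking fd: clear O_NONBLOCK so write() itself waits for the
 * handshake to finish before sending the data. */
int flags = fcntl(fd, F_GETFL, 0);
fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);

int r = write(fd, "TEST", 4);
// r == 4 once the [SYN, ACK] arrived, or -1 with errno set to
// ECONNREFUSED / ETIMEDOUT etc. if the connection failed.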
In any case, not much can be done to eliminate this initial round-trip time; it follows from the design of TCP. If the TCP Fast Open (TFO) feature is supported by both endpoints, however, and you can accept its security trade-offs, this round trip can be eliminated. See https://lwn.net/Articles/508865/ for more info.
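For what it's worth, a client-side TFO sketch on Linux might look like the following; it assumes kernel support for the MSG_FASTOPEN flag and a server with Fast Open enabled, and the address/port values are placeholders:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* TCP Fast Open on the client: sendto() with MSG_FASTOPEN both
 * initiates the connection and carries data in the SYN (when a
 * valid TFO cookie for this server is already cached). */
int fd = socket(AF_INET, SOCK_STREAM, 0);

struct sockaddr_in addr = { 0 };
addr.sin_family = AF_INET;
addr.sin_port = htons(7777);              /* example port */
addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

ssize_t n = sendto(fd, "TEST", 4, MSG_FASTOPEN,
                   (struct sockaddr *)&addr, sizeof(addr));
/* Without a cached cookie the kernel falls back to a normal
 * three-way handshake and sends the data afterwards. */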
For a simple Python TCP server like the one below: when a TCP packet arrives and the transport layer extracts the port number, how does the OS/transport layer know which thread/process to wake up (assuming the thread/process is blocked in a recv() system call)? In the code below, both the parent thread and the child thread have the connectionsocket file descriptor, so how does the OS know which one to wake up? Thanks
from socket import *
import thread

host = 'localhost'
port = 55567
buf = 1024
addr = (host, port)
welcomesocket = socket(AF_INET, SOCK_STREAM)
welcomesocket.bind(addr)
welcomesocket.listen(2)
while 1:
    connectionsocket, clientaddr = welcomesocket.accept()
    thread.start_new_thread(handler, (connectionsocket, clientaddr))
welcomesocket.close()
There is a hash map in kernel space tracking all ports in use.
When a packet arrives, the kernel looks up this table using the port information in the packet, finds the associated socket, and notifies it.
Here is how Linux does it: http://lxr.free-electrons.com/source/net/ipv4/udp.c#L489
http://erlangcentral.org/wiki/index.php/Building_a_Non-blocking_TCP_server_using_OTP_principles describes how to build a non-blocking TCP server, and I have a question about the inet_async message.
handle_info({inet_async, ListSock, Ref, Error}, #state{listener=ListSock, acceptor=Ref} = State) ->
error_logger:error_msg("Error in socket acceptor: ~p.\n", [Error]),
{stop, Error, State};
If Error = {error, close}, who closed the socket, the client or the server?
It depends. If you get that error, the socket may not have been opened in the first place, so if you try gen_tcp:send(Socket, "Message") you will get a reply saying the connection is closed.
Other reasons the connection could be closed are that the listening socket timed out waiting for a connection, or that gen_tcp:close(Socket) was called before the attempt to send a message.
You also need to make sure you are connecting to the same port on which the server originally opened the listening socket. So, to answer your question: it could be either end that closed the connection.
I would like to fetch the IP address and port number of an incoming TCP/IP connection. Unfortunately gen_tcp's accept and recv functions only give back a socket, while gen_udp's recv function also gives back the address information. Is there a straightforward way to collect address information belonging to a socket in Erlang?
You need inet:peername/1. From the Erlang inet docs:
peername(Socket) -> {ok, {Address, Port}} | {error, posix()}
Types:
Socket = socket()
Address = ip_address()
Port = int()
Returns the address and port for the other end of a connection.
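inet:peername/1 is essentially the Erlang counterpart of the POSIX getpeername() call; for comparison, a C sketch of the same lookup (IPv4 assumed, print_peer is just an illustrative name):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Print the remote IP address and port of a connected socket. */
void print_peer(int fd)
{
    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);

    if (getpeername(fd, (struct sockaddr *)&peer, &len) == 0) {
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
        printf("peer %s:%u\n", ip, (unsigned)ntohs(peer.sin_port));
    }
}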