Passive socket of IOLib throws EADDRINUSE - lisp

IOLib lets you create a passive socket that listens for client connections. Before listen is called, we need to call bind-address to bind the socket to a specified address/port.
The problem is that the first time I bind the socket to a port, it works fine. Then I use C-c C-c in SLIME to terminate the thread, and when I run the program again it signals an EADDRINUSE condition:
<SOCKET-ADDRESS-IN-USE-ERROR 98 :EADDRINUSE "address already in use", FD: 10>
I already set the :reuse-addr option in the call to bind-address, like this:
(bind-address socket
              +ipv4-unspecified+
              :port 1080
              :reuse-addr t)
But I don't think this is the problem: when I did the same thing in C and used Ctrl+C to terminate the process, I could rebind the port. In SLIME, the only solution is to restart Emacs, which is really inconvenient. How can I solve this problem? Thanks.

When you exit a process, any open file descriptors (including network sockets) are closed, which is why it seems to work in C but not in CL. When a thread terminates, however, this doesn't happen: the descriptors belong to the process, and the Lisp process is still running. You'll get the desired behavior by using the restart-inferior-lisp command in SLIME.
Not all is lost, however. If you wrap the function running in the thread in an UNWIND-PROTECT form, you can arrange for the socket to be closed when the function exits, no matter how it exits.
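Since the question already draws the C comparison, here is the same guard sketched at the C/pthreads level; pthread_cleanup_push is the closest pthreads analog of UNWIND-PROTECT. A minimal sketch, assuming the listening descriptor is created elsewhere; error handling is omitted.

#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* Cleanup handler: the pthreads analog of an UNWIND-PROTECT cleanup form. */
static void close_listener(void *arg)
{
    close(*(int *)arg);   /* releases the port immediately */
}

static void *server_loop(void *arg)
{
    int listen_fd = *(int *)arg;

    pthread_cleanup_push(close_listener, &listen_fd);
    for (;;) {
        /* accept() is a cancellation point: if another thread calls
           pthread_cancel() on this one, close_listener runs on the
           way out and the listening socket does not leak. */
        int client = accept(listen_fd, NULL, NULL);
        if (client >= 0)
            close(client);   /* a real server would handle the client here */
    }
    pthread_cleanup_pop(1);  /* never reached; pairs the macro above */
    return NULL;
}

In Lisp the equivalent shape is: open the socket, run the accept loop in the protected form, and put the close call in the cleanup clause of the UNWIND-PROTECT.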

Related

Octave socket close

I'm trying to use Octave to open a simple socket server. While debugging, my script crashed after it had bound to a port, and of course subsequent binds to the same port now fail. How can I close the socket so that I can reuse the port? Right now all I can do is close Octave entirely, which kills the process that is running the listener.
To prevent this from happening in the future, you can use onCleanup or unwind_protect to ensure that the socket-closing code always runs, even if your script errors out unexpectedly.

How can two Unicorn servers bind to the same Unix socket?

This (rather old) article seems to suggest that two Unicorn master processes can bind to the same Unix socket path:
When the old master receives the QUIT, it starts gracefully shutting down its workers. Once all the workers have finished serving requests, it dies. We now have a fresh version of our app, fully loaded and ready to receive requests, without any downtime: the old and new workers all share the Unix Domain Socket so nginx doesn’t have to even care about the transition.
Reading around, I don't understand how this is possible. From what I understand, to truly have zero downtime you have to use SO_REUSEPORT to let the old and new servers temporarily be bound to the same socket. But SO_REUSEPORT is not supported on Unix sockets.
(I tested this by binding to a Unix socket path that is already in use by another server, and I got an EADDRINUSE.)
So how can the configuration that the article describes be achieved?
Nginx forwards HTTP requests to a Unix socket.
Normally a single Unicorn server accepts requests on this socket and handles them (fair enough).
During redeployment, a new Unicorn server begins to accept requests on this socket and handles them, while the old server is still running (how?)
My best guess is that the second server calls unlink on the socket file immediately before calling bind with the same socket file, so in fact there is a small window where no process is bound to the socket and a connection would be refused.
Interestingly, if I bind to a socket file and then immediately delete the file, the next connection to the socket actually gets accepted. The second and subsequent connections are refused with ENOENT as expected. So maybe the kernel covers for you somewhat while one process is taking control of a socket that was bound by another process. (This is on Linux, BTW.)
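For illustration, a minimal C sketch of that unlink-then-bind sequence on a Unix socket; the path is a hypothetical example and error handling is trimmed to the essentials:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define SOCK_PATH "/tmp/app.sock"   /* hypothetical socket path */

int main(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);

    /* Remove any stale socket file first; without this, bind() fails
       with EADDRINUSE if a previous server left the file behind.
       Between unlink() and bind() there is a brief window in which
       no process owns the path. */
    unlink(SOCK_PATH);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 128);
    /* accept loop would go here */
    close(fd);
    return 0;
}

This also suggests how the old server keeps serving during the handoff: unlinking the file does not close the old listening socket, it only detaches the name, so connections already established with the old server continue while the new server takes over the path.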

How does the Ctrl+C key behave in a TCP connection

I'm studying networking, specifically TCP connections, and I'm wondering: in a situation where you connect remotely to a server over TCP and send command lines to execute some actions, how is a Ctrl+C keystroke handled?
Does it send a normal TCP packet whose data section describes the Ctrl+C keystroke?
Or does it send a packet with the RST flag or the FIN flag set, to cut or close the connection?
There's no such thing as sending a signal over TCP.
Ctrl+C is a terminal-generated signal. Assuming you (or the running process) didn't change the terminal's settings, this means that the terminal driver transforms the Ctrl+C key combination into a kill(x, SIGINT), where x is the process group ID of the foreground process group (and as such, SIGINT is delivered to every process in the foreground process group, which, in your case, is probably just one process).
What the process does when the signal is delivered is not the terminal driver's business. The process may have ignored the signal, in which case nothing happens. Or it may have installed a signal handler that does some work (like writing something to the socket that, when read by the receiver, causes it to send SIGINT to itself - this emulates a "remote signal delivery"). Or it may have blocked the signal - in that case, the signal is delivered when the process unblocks it, or it is discarded if the process ignores it in the meantime.
If, on the other hand, you (or the running process) changed the terminal settings such that Ctrl+C is not interpreted as a signal-generating key combination, then the process will read Ctrl+C from input. Of course, what happens depends on what the process does with the input that it reads.
In short, if you didn't change the default behavior for SIGINT and you didn't change your terminal's settings, Ctrl+C raises SIGINT; the default action is to terminate the process, and so the socket will be closed and the connection terminated.
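To make the "remote signal delivery" emulation described above concrete, here is a small C sketch: a client installs a SIGINT handler that, instead of dying, forwards a byte over the socket for the server to interpret. The byte value 0x03 and the sock_fd variable are illustrative choices, not part of any standard protocol.

#include <signal.h>
#include <string.h>
#include <unistd.h>

static int sock_fd = -1;   /* assumed to be connect()ed elsewhere */

static void on_sigint(int sig)
{
    (void)sig;
    char byte = 0x03;              /* ETX, the traditional Ctrl+C code */
    write(sock_fd, &byte, 1);      /* write() is async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sa.sa_flags = SA_RESTART;      /* restart interrupted system calls */
    sigaction(SIGINT, &sa, NULL);

    for (;;)
        pause();                   /* stand-in for the client's main loop */
}

(ssh takes the other route described above: it puts the local terminal into raw mode so Ctrl+C arrives as ordinary input, sends the 0x03 byte in-band, and the remote pty's terminal driver regenerates SIGINT on the server side.)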

Racket not closing TCP port

I've written a simple HTTP echo server in Racket. When I run the server from within DrRacket and then click the Stop button, my program terminates, but the port that was being used takes an annoyingly long time to close. If I run lsof -i :<port> in my terminal after terminating the program, I don't see anything bound to that port, but DrRacket disagrees and refuses to let me restart my program, telling me that something is already bound to that port.
Is this a bug in Racket, or is there something that I'm missing?
If you are using tcp-listen directly (meaning that you handle all the low-level socket stuff yourself, and manually handle HTTP too), you need to call it with the reuse? parameter set to #t.
If you are using the web-server module, it already sets reuse? to #t so it should already work.

observing on-off socket problem

Recently I encountered a problem. I am using two programs, A and B, developed by someone else, which use TCP sockets to communicate with each other; A is the server, B is the client. Here is what I observed: when I start both A and B, they run and communicate with each other. If I kill A and then restart it, checking the processes shows that A launched successfully, but B cannot connect to it, no matter how many times I restart B. However, if I keep killing this unreachable A and starting it again, at some point it can be reached by B again.
On the other hand, if I close B's socket before killing A, then when I restart A and B they work fine.
What might the problem be, and is there some way to see the open sockets after I kill A?
It depends on the OS you are using.
lsof -p <pid> is quite common on UNIX and lets you list all file descriptors used by a process.
netstat is probably available and will also list open ports.
This is probably due to the TIME_WAIT state. When you kill A, the server port is still allocated by the OS and can be reused only if A sets a specific flag (SO_REUSEADDR) when opening the server port. Otherwise, A won't be able to reuse the port until the OS releases it; this can take a few minutes, which is why, when you keep killing and restarting A, the port eventually becomes available again. I don't know what A does if it cannot open the server port because of that.
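The flag is set with setsockopt before bind. A minimal C sketch (error handling omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in addr;

    /* Must be set before bind(); setting it afterwards does not help
       with an EADDRINUSE that has already happened. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);

    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 128);
    return fd;
}

This is the same option the IOLib question at the top requests with :reuse-addr t, and the Racket question requests with the reuse? parameter.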