I'm studying networking, specifically TCP connections, and I'm wondering: in a situation where you connect remotely to a server over a TCP connection and send command lines to execute some actions, how is a Ctrl+C keystroke handled?
Does the client send a normal TCP segment whose data section describes the Ctrl+C keystroke?
Or does it send a segment with the RST flag set, or the FIN flag, to abort or close the connection?
There's no such thing as sending a signal over TCP.
Ctrl+C is a terminal-generated signal. Assuming you (or the running process) didn't change the terminal's settings, the terminal driver transforms the Ctrl+C key combination into a kill(-x, SIGINT), where x is the process-group ID of the foreground process group (so SIGINT is delivered to every process in the foreground process group, which, in your case, is probably just one process).
What the process does when the signal is delivered is not the terminal driver's business. The process may have ignored the signal, so nothing happens. Or it may have installed a signal handler, and do some work inside the handler (like writing something to the socket that, when read by the receiver, causes it to send SIGINT to itself - this emulates "remote signal delivery"). Or it may have blocked the signal - in that case, the signal is delivered when the process unblocks it, or discarded if the process ignores it in the meantime.
If, on the other hand, you (or the running process) changed the terminal settings so that Ctrl+C is no longer interpreted as a signal-generating key combination, then the process simply reads the Ctrl+C character (byte 0x03) from its input. What happens then, of course, depends on what the process does with the input that it reads.
In short, if you didn't change the default behavior for SIGINT and you didn't change your terminal's settings, Ctrl+C raises SIGINT; the default action is to terminate the process, and so the socket will be closed and the connection terminated.
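A minimal sketch of the handler case described above (in Python; the flag name is illustrative):

```python
import os
import signal

interrupted = False

def on_sigint(signum, frame):
    # Instead of the default "terminate" action, just record the interrupt.
    # A real remote-shell client might instead write a byte to its socket
    # so the server side can deliver SIGINT to the remote process itself.
    global interrupted
    interrupted = True

# Install the handler: from now on, SIGINT runs on_sigint instead of killing us
signal.signal(signal.SIGINT, on_sigint)

# Simulate the terminal driver's kill() by signalling our own process
os.kill(os.getpid(), signal.SIGINT)
```

After the kill(), the handler has run and the process is still alive, which is exactly why a program like ssh survives your Ctrl+C and can forward it to the remote side instead.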
In our Flask-SocketIO app, we have a socket.on("disconnect") handler that is called whenever a socket client disconnects, to apply the corresponding db state updates. However, when our server is killed due to a restart or a crash, this disconnect handler cannot run (since the server is transiently nonexistent), and the event is discarded. When the server is back up, those socket disconnects from each frontend can never be processed properly, so the state is inconsistent.
Is there a way to "cache" these disconnect events and run them when the server is back up? The end goal is ideally to have all the sockets reconnect automatically as well, but currently we do this disconnect-then-reconnect manually. Our setup is Gunicorn Flask threads load-balanced by Nginx, with a Redis message queue for Flask-SocketIO.
You should register a signal handler for the process, so cleanup runs before it exits:

import signal

def handler(signum, frame):
    # loop over the connected clients and handle each socket disconnect
    ...

# Run the handler when the process is asked to terminate (e.g. on a restart)
signal.signal(signal.SIGTERM, handler)
signal.signal(signal.SIGINT, handler)
The best way to handle this is to not force-kill your server. Instead, handle a signal such as SIGINT and disconnect all clients in a corresponding signal handler.
Another alternative is to keep track of your connected clients in a database (redis, for example). If the application is killed and restarted, it can go through those users and perform any necessary cleanup before starting a new server instance.
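A minimal sketch of that second approach, using a plain dict as a stand-in for Redis (the names connected_sids and cleanup_session are hypothetical; in production you would use redis-py so the records survive a process restart):

```python
# Stand-in for a Redis hash; in production use redis.Redis().hset/hdel/hgetall
connected_sids = {}

def on_connect(sid, user_id):
    # Record the session as soon as it connects
    connected_sids[sid] = user_id

def on_disconnect(sid):
    # Normal path: the disconnect handler runs and removes the record
    connected_sids.pop(sid, None)

def cleanup_stale_sessions(cleanup_session):
    """Run at startup, before accepting new connections: any sid still
    recorded was never cleanly disconnected (the server died first)."""
    for sid, user_id in list(connected_sids.items()):
        cleanup_session(sid, user_id)  # your db state updates
        connected_sids.pop(sid, None)
```

On a clean shutdown the dict (or Redis hash) ends up empty; after a crash, whatever is left over is exactly the set of disconnect events that were lost.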
It's possible to interrupt a frozen q process with Ctrl+C:
http://www.timestored.com/kdb-guides/debugging-kdb#interrupt-q
But is it possible to send SIGINT to the process via IPC, so that we could interrupt a remote q server from an IDE (or another client)?
You can do that exact thing. From https://code.kx.com/q/kb/faq-listbox/ :
How to kill long/invalid query on a server?
You can achieve that by sending SIGINT to the server process. In *nix shell, try
$ kill -INT <pid>
Worth noting that this only works if the process is in a state to respond to the signal, i.e. if it is waiting on swap or is blocked on a large number of disk reads, it may take a long while to stop itself.
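To send the signal from another process programmatically rather than from the shell, os.kill does the same job (sketched in Python; here a sleep process stands in for the stuck q server, and you would use the server's actual pid):

```python
import os
import signal
import subprocess

def interrupt_process(pid):
    """Equivalent of `kill -INT <pid>`: deliver SIGINT to the target."""
    os.kill(pid, signal.SIGINT)

# Demo: a long-running process standing in for a busy q server
proc = subprocess.Popen(["sleep", "60"])
interrupt_process(proc.pid)
proc.wait()  # sleep takes SIGINT's default action and terminates immediately
```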
I'm trying to use Octave to open a simple socket server. While debugging, my script crashed after it had bound to a port. Of course, subsequent binds to the same port now fail. How can I close the socket so that I can reuse the port? Right now all I can do is exit Octave entirely, which kills the process that is running the listener.
To prevent this from happening in the future, you can use onCleanup or unwind_protect to ensure that the socket-closing code always runs, even if your script errors out unexpectedly.
IOLib lets you create a passive socket to listen for client connections; before listen is called, we need to call bind-address to bind the socket to a specified address/port.
Well, the problem is that the first time I bind the socket to a port it runs fine; then I use C-c C-c in SLIME to terminate the thread and run the program again, and this time it throws an EADDRINUSE exception:
<SOCKET-ADDRESS-IN-USE-ERROR 98 :EADDRINUSE "address already in use", FD: 10>
I already pass the reuse-addr option to bind-address, like this:
(bind-address socket
+ipv4-unspecified+
:port 1080
:reuse-addr t)
But I don't think this option is the problem, because when I do the same thing in C and use Ctrl+C to terminate the process, I can rebind the port. In SLIME, the only solution is to restart Emacs, which is really not convenient. How can I solve this problem? Thanks.
When you exit a process, any open file descriptors (including network sockets) are closed, which is why it seems to work in C but not in CL. When a thread terminates, however, this doesn't happen. You'll find that you'll get the desired behavior by using the restart-inferior-lisp command in SLIME.
Not all is lost, however. If you wrap the function running in the thread in an UNWIND-PROTECT form, you can arrange for the socket to be closed whenever the function is exited, normally or not.
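UNWIND-PROTECT is the same idea as try/finally in other languages. The shape of the fix, sketched in Python for illustration (the port is picked dynamically and the error is simulated):

```python
import socket

def serve_once(port):
    """Bind a listening socket and guarantee it is closed even if the body
    raises -- the finally clause plays the role of UNWIND-PROTECT."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        sock.bind(("127.0.0.1", port))
        sock.listen(1)
        raise RuntimeError("simulated crash while serving")  # the body "dies"
    finally:
        sock.close()  # runs on normal return AND on error/interrupt
```

Because the socket is closed in the cleanup clause, a second call can rebind the same port immediately instead of failing with EADDRINUSE.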
I have a server and client program on the same machine. The server is part of an application- it can start and stop arbitrarily. When the server is up, I want the client to connect to the server's listening socket. There are win32 functions to wait on file system changes (ReadDirectoryChangesW) and registry changes (RegNotifyChangeKeyValue)- is there anything similar for network changes? I'd rather not have the client constantly polling.
There is no such Win32 API; however, this can easily be accomplished with an event. The client waits on that event to be signaled, and the server signals the event when it starts up.
The related APIs that you will need are CreateEvent, OpenEvent, SetEvent, ResetEvent and WaitForSingleObject.
If your server will run as a service, then for Vista and up it will run in session 0 isolation. That means you will need to use an event with a name prefixed with "Global\".
You probably do have a good reason for needing this, but before you implement this please consider:
Is there some reason you need to connect right away? I see this as a non-issue: when the user performs an action in the client, you can make a new server connection at that point.
Is the server starting and stopping more frequently than the client? You could switch the roles of who listens and who connects.
Consider using some form of Windows synchronization, such as a semaphore. The client can wait on the synchronization primitive and the server can signal it when it starts up.
Personally, I'd use a UDP broadcast from the server and have the "client" listen for it. The server could broadcast a UDP packet every X seconds while running, and when the client receives one, it could connect if it isn't connected already.
This has the advantage that you can move the client onto a different machine without any issues (and since the main connection from client to server already uses sockets, it would be a pity to tie the client and server to the same machine simply because you selected a local IPC mechanism for the initial bootstrap).
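A sketch of that beacon scheme in Python (the port number and magic payload are made up; a real server would call send_beacon in a loop every X seconds, and the demo below sends to 127.0.0.1 rather than the broadcast address):

```python
import socket

BEACON_PORT = 50007        # hypothetical; pick any free port
BEACON_MAGIC = b"MYSRV1"   # hypothetical payload identifying our server

def make_beacon_sender():
    """Socket the server uses to broadcast 'I am up' beacons."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return s

def send_beacon(sock, addr="255.255.255.255", port=BEACON_PORT):
    sock.sendto(BEACON_MAGIC, (addr, port))

def wait_for_beacon(port=BEACON_PORT, timeout=5.0):
    """Client side: block until a beacon arrives, return the server's address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    s.settimeout(timeout)
    try:
        data, sender = s.recvfrom(64)
    finally:
        s.close()
    return sender if data == BEACON_MAGIC else None
```

The magic payload matters: anything else could be broadcasting on the same port, so the client should ignore datagrams it doesn't recognize.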