Interrupt a frozen q process via ipc - kdb

It's possible to interrupt a frozen q process with Ctrl+C:
http://www.timestored.com/kdb-guides/debugging-kdb#interrupt-q
But is it possible to send SIGINT to the process via IPC, so that we could interrupt a remote q server from an IDE (or other client)?

You can do that exact thing. From https://code.kx.com/q/kb/faq-listbox/ :
How to kill long/invalid query on a server?
You can achieve that by sending SIGINT to the server process. In *nix shell, try
$ kill -INT <pid>
Worth noting that this only works if the process is in a state to respond to the signal, i.e. if it is waiting on swap or is blocked on a large number of disk reads, it may take a long while to stop.
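If you want to drive this from a client script rather than a shell, here is a minimal Python sketch, assuming you already know the server's pid (e.g. fetched earlier from the q session's .z.i) and, for a remote box, have ssh access; the pid and hostname below are made up:

import os
import signal
import subprocess

Q_PID = 12345                      # hypothetical pid of the stuck q process
Q_HOST = "qserver.example.com"     # hypothetical remote host

# q process on the same machine: deliver SIGINT directly
os.kill(Q_PID, signal.SIGINT)

# q process on a remote machine: the signal has to be raised there, e.g. over ssh
subprocess.run(["ssh", Q_HOST, f"kill -INT {Q_PID}"], check=True)

Note that you cannot ask the frozen process for its pid over IPC once it is already stuck, so grab it up front.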

Related

How to always gracefully disconnect sockets on server kill?

In our Flask-SocketIO app, we have a socket.on("disconnect") handler that is called whenever a socket client disconnects, to handle the db state updates. However, when our server is killed due to a restart or a crash, this disconnect handler cannot be called (since the server is transiently nonexistent), so the event is discarded. When the server is back up, those missed disconnects for each frontend can never be processed properly, so the state is inconsistent.
Is there a way to "cache" these disconnect events to run when the server is back up? The end goal is ideally to have all the sockets reconnect automatically as well, but currently we do this disconnect-then-reconnect manually. Our setup is Gunicorn Flask workers load-balanced by Nginx, with a Redis message queue for Flask-SocketIO.
You should register a process signal handler:
import signal

def handler(signum, frame):
    # loop over the connected sockets and handle each disconnect here
    ...

# Run the handler when the process is asked to shut down
signal.signal(signal.SIGTERM, handler)
The best way to handle this is to not force-kill your server. Instead, handle a signal such as SIGINT and disconnect all clients in a corresponding signal handler.
Another alternative is to keep track of your connected clients in a database (redis, for example). If the application is killed and restarted, it can go through those users and perform any necessary cleanup before starting a new server instance.
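A rough sketch of that second approach, assuming Flask-SocketIO with redis-py (the key name and the startup cleanup hook are invented for illustration):

import redis
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, message_queue="redis://")
r = redis.Redis()

CONNECTED_KEY = "connected_sids"   # hypothetical Redis set of live session ids

@socketio.on("connect")
def on_connect():
    r.sadd(CONNECTED_KEY, request.sid)

@socketio.on("disconnect")
def on_disconnect():
    r.srem(CONNECTED_KEY, request.sid)
    # ... the usual per-client db state update goes here ...

def cleanup_stale_sessions():
    # Run once at startup: anything still in the set never got its disconnect handled
    for sid in r.smembers(CONNECTED_KEY):
        # ... perform the same db cleanup the disconnect handler would have done ...
        r.srem(CONNECTED_KEY, sid)

cleanup_stale_sessions()

The clients then only need their normal reconnect logic; the server-side state is repaired before the server starts accepting connections again.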

How does the Ctrl+C key behave in a TCP connection?

I'm studying networking, specifically TCP connections, and I'm wondering: in a situation where you connect remotely to a server over TCP and send command lines to execute actions, how is a Ctrl+C keypress handled?
Does it send a normal TCP packet whose data section describes the Ctrl+C hit?
Or does it send a packet with the RST or FIN flag set, to cut or close the connection?
There's no such thing as sending a signal over TCP.
Ctrl+C is a terminal generated signal. Assuming you (or the running process) didn't change the terminal's settings, this means that the terminal driver transforms the Ctrl+C key combination into a kill(x, SIGINT), where x is the process group ID of the foreground process group (and as such, SIGINT is delivered to every process in the foreground process group, which, in your case, is probably just one process).
What the process does when the signal is delivered is not the terminal driver's business. The process may have ignored the signal, so nothing happens. Or it may have installed a signal handler, and do some work inside the signal handler (like writing something to the socket that when read by the receiver will cause it to send SIGINT to itself - this emulates a "remote signal delivery"). Or it may have blocked the signal - in that case, the signal is delivered when the process unblocks it, or it is canceled if the process ignores it in the meantime.
If, on the other hand, you (or the running process) changed the terminal settings such that Ctrl+C is not interpreted as a signal-generating key combination, then the process will read Ctrl+C from input. Of course, what happens depends on what the process does with the input that it reads.
In short, if you didn't change the default behavior for SIGINT and you didn't change your terminal's settings, Ctrl+C raises SIGINT; the default action is to terminate the process, and so the socket will be closed and the connection terminated.
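To make the "emulated remote signal delivery" mentioned above concrete, here is a rough sketch over a plain Python socket; the sentinel byte and function names are invented (0x03 happens to be the ETX character that Ctrl+C produces in a terminal):

import os
import signal
import socket

INTERRUPT_BYTE = b"\x03"   # hypothetical in-band marker for "interrupt the remote side"

# Client side: instead of dying on Ctrl+C, forward the interrupt over the connection
def install_forwarder(sock):
    def forward_sigint(signum, frame):
        sock.sendall(INTERRUPT_BYTE)
    signal.signal(signal.SIGINT, forward_sigint)

# Server side, inside the per-connection read loop: deliver SIGINT to ourselves
def handle_chunk(chunk):
    if INTERRUPT_BYTE in chunk:
        os.kill(os.getpid(), signal.SIGINT)
    else:
        pass  # normal command processing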

Looking for "hung socket simulator" for testing socket timeouts

I am testing handling for socket timeout conditions - for example, connection timeout, connect but no accept, accept but won't read, etc.
I'm looking for a program/script that will act as a server socket producing these effects.
This "hung socket simulator" needs to run on Mac OS (or Linux).
I found one called Bane: https://github.com/danielwellman/bane.
I think the powerful tool socat might be helpful here; it can redirect the request to the real endpoint, and you then have full control over the socat process itself to simulate what you want (like suspending the process at a certain phase with kill -STOP or similar).
One of my use cases is that I just want my client app to finish the handshake with the remote service but read no more data:
socat -d -d -d TCP-LISTEN:22181,fork SYSTEM:'socat - "TCP:the-remote-host:2181" | dd bs=1 count=50' &
The above example only sends the first 50 bytes of the response back.
Why don't you just start your client program on your dev machine and, when you want the timeout to appear, unplug your network cable?
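If you would rather script these conditions yourself, a small Python sketch along these lines (all names and ports made up) covers two of the cases from the question, "connect but no accept" and "accept but won't read":

import socket
import time

def connect_but_no_accept(port):
    # Listen with a tiny backlog and never call accept(); once the backlog is
    # full, further connection attempts hang or are dropped by the kernel.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    time.sleep(3600)

def accept_but_never_read(port):
    # Accept connections but never recv(); the client's writes eventually block
    # once the TCP send and receive buffers fill up.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    conns = []
    while True:
        conn, _ = srv.accept()
        conns.append(conn)   # keep the socket open, just never read from it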

observing on-off socket problem

Recently I encountered a problem. I am using two programs A and B, developed by someone else, which use TCP sockets to communicate with each other; A is the server, B is the client. This is what I observed: when I start both A and B, they run and communicate with each other. If I kill A and then restart it, checking the processes shows that A launched successfully, but B cannot connect to it no matter how many times I restart B. However, if I keep killing this unreachable A and starting it again, at some point B can connect to it again.
At the same time, if I close B's socket before killing A, then when I start A and B, they work very well.
What might the problem be, and is there some way to see the opened sockets when I kill A?
It depends on the OS you are using.
lsof -p <pid> is quite common on UNIX and lets you list all file descriptors used by a process.
netstat is probably available and will also list opened ports.
This is probably due to the TIME_WAIT state. When you kill A, the server port is still allocated by the OS and can be reused only if A sets a specific flag (SO_REUSEADDR) when opening the server port. Otherwise, A won't be able to reuse the server port until it is released by the OS (which can take a few minutes, and is why, after you kill and restart A enough times, the port becomes available again). I don't know what A does if it cannot open the server port because of that.
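For reference, setting that flag in a plain Python server socket looks like this (the port number is arbitrary):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow re-binding the port even while old connections linger in TIME_WAIT
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5000))
srv.listen(5)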

Is there a way to wait for a listening socket on win32?

I have a server and client program on the same machine. The server is part of an application- it can start and stop arbitrarily. When the server is up, I want the client to connect to the server's listening socket. There are win32 functions to wait on file system changes (ReadDirectoryChangesW) and registry changes (RegNotifyChangeKeyValue)- is there anything similar for network changes? I'd rather not have the client constantly polling.
There is no such Win32 API; however, this can easily be accomplished by using an event. The client would wait on that event to be signaled. The server would signal the event when it starts up.
The related APIs that you will need are CreateEvent, OpenEvent, SetEvent, ResetEvent and WaitForSingleObject.
If your server will run as a service, then for Vista and up it will run in session 0 isolation. That means you will need to use an event with a name prefixed with "Global\".
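A rough sketch of that event approach, shown here from Python via ctypes (the event name is made up; a C/C++ program would call the same APIs directly). Having both sides call CreateEventW means it does not matter which process starts first:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.CreateEventW.restype = wintypes.HANDLE
kernel32.SetEvent.argtypes = [wintypes.HANDLE]
kernel32.WaitForSingleObject.argtypes = [wintypes.HANDLE, wintypes.DWORD]

EVENT_NAME = "MyAppServerListening"   # hypothetical name; prefix with "Global\" for a service, as noted above
INFINITE = 0xFFFFFFFF

# Server: create (or open) the named manual-reset event and signal it once the socket is listening
def server_signal_ready():
    handle = kernel32.CreateEventW(None, True, False, EVENT_NAME)
    kernel32.SetEvent(handle)
    return handle

# Client: create (or open) the same event and block until the server signals it, then connect
def client_wait_for_server():
    handle = kernel32.CreateEventW(None, True, False, EVENT_NAME)
    kernel32.WaitForSingleObject(handle, INFINITE)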
You probably do have a good reason for needing this, but before you implement this please consider:
Is there some reason you need a connection right away? I see this as a non-issue, because if you perform an action in the client, you can make a new server connection at that point.
Is the server starting and stopping more frequently than the client? You could switch the roles of who listens and who connects.
Consider using some form of Windows synchronization, such as a semaphore. The client can wait on the synchronization primitive and the server can signal it when it starts up.
Personally I'd use a UDP broadcast from the server and have the "client" listening for it. The server could broadcast a UDP packet every X period whilst running and when the client gets one, if it's not already connected, it could connect.
This has the advantage that you can move the client onto a different machine without any issues (and since the main connection from client to server is sockets already it would be a pity to tie the client and server to the same machine simply because you selected a local IPC method for the initial bootstrap).
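A compact sketch of that broadcast scheme in Python (the port, payload and interval are arbitrary):

import socket
import time

DISCOVERY_PORT = 50000           # hypothetical discovery port
ANNOUNCE = b"server-ready"       # hypothetical payload

# Server: broadcast a packet every few seconds while running
def announce_loop():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(ANNOUNCE, ("255.255.255.255", DISCOVERY_PORT))
        time.sleep(5)

# Client: block until an announcement arrives, then connect (or reconnect) to that address
def wait_for_server():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", DISCOVERY_PORT))
    while True:
        data, addr = s.recvfrom(1024)
        if data == ANNOUNCE:
            return addr[0]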