How to get jupyter notebook kernel state? - ipython

I want to be able to detect from outside a notebook server if the kernel is busy or actively running some cell.
Is there some way for me to print this state as a command-line call, or have it returned as the response to an HTTP request?

There is not; this state is not stored anywhere, in part because it changes rapidly, and in part because there should be few, if any, actions that ought to be taken differently based on its value. It is only published via messages on the IOPub channel, which you can connect to via zeromq or websocket. If you want to know the busy/idle state of a kernel:
connect to kernel (zmq or websocket)
initial state is busy
send a kernel_info request
monitor status IOPub messages for busy/idle changes
If the kernel is idle, it will handle the kernel_info request promptly and you will get a status:idle message.
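For example, here is a minimal sketch of that sequence using the jupyter_client package (an assumption on my part; any zmq or websocket client that speaks the kernel messaging protocol would do). The connection file path is a hypothetical placeholder.

```python
import queue
from jupyter_client import BlockingKernelClient

client = BlockingKernelClient()
client.load_connection_file("/path/to/kernel-1234.json")  # hypothetical path
client.start_channels()

client.kernel_info()  # send a kernel_info request to provoke a status reply

state = "busy"  # initial state is assumed to be busy
try:
    while True:
        msg = client.get_iopub_msg(timeout=5)  # raises queue.Empty on timeout
        if msg["header"]["msg_type"] == "status":
            state = msg["content"]["execution_state"]  # 'busy' or 'idle'
            if state == "idle":
                break
except queue.Empty:
    pass  # no status message within the timeout: the kernel is likely busy
finally:
    client.stop_channels()

print(state)
```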

Related

How would you grab the latest message from multiple connections to a single ZMQ socket?

I am new to ZMQ and am not sure if what I want is even possible or if I should use another technology.
I would like to have a socket that multiple servers can stream to.
It appears that a ZMQ socket can do this based on this documentation: http://api.zeromq.org/4-0:zmq-setsockopt
How would I implement a ZMQ socket on the receiving end that only grabs the latest message sent from each server?
You can do this with ZMQ's PUB/SUB.
The first key thing is that a SUB socket can be connected to multiple PUBlishers. This is covered in Chapter 1 of the guide:
Some points about the publish-subscribe (pub-sub) pattern:
A subscriber can connect to more than one publisher, using one connect call each time. Data will then arrive and be interleaved “fair-queued” so that no single publisher drowns out the others.
If a publisher has no connected subscribers, then it will simply drop all messages.
If you’re using TCP and a subscriber is slow, messages will queue up on the publisher. We’ll look at how to protect publishers against this using the “high-water mark” later.
So, that means that you can have a single SUB socket on your client. This can be connected to several PUB sockets, one for each server from which the client needs to stream messages.
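As a concrete illustration, here is a minimal pyzmq sketch of one SUB socket connected to several publishers; the endpoints are hypothetical placeholders, and the RCVHWM setting is discussed under "Latest Message" below.

```python
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything
sub.setsockopt(zmq.RCVHWM, 1)        # cap the receive queue; see "Latest Message" below

# one connect call per publishing server
for endpoint in ("tcp://server-a:5556", "tcp://server-b:5556", "tcp://server-c:5556"):
    sub.connect(endpoint)

while True:
    msg = sub.recv()                 # messages from all publishers, fair-queued
    print(msg)
```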
Latest Message
The "latest message" can be partially dealt with (as I suspect you'd started to find) using high water marks. The ZMQ_RCVHWM option allows the number to be received to be set to 1, though this is an imprecise control.
You also have to consider what it is that is meant by "latest" message; the PUB servers and SUB client will have different views of what this is. For example, when the zmq_send() function on a PUB server returns, the sent message is the one that the PUBlisher would regard as the "latest".
However, over in the client there is no knowledge of this as nothing has yet got down through the PUBlishing server's operating system network stack, nothing has yet touched the Ethernet, etc. So the SUBscribing client's view of the "latest" message at that point in time is whichever message is in ZMQ's internal buffers / queues waiting for the application to read it. This message could be quite old in comparison to the one the PUBlisher has just started sending.
In reality, the "latest" message seen by the client SUBscriber will be dependent on how fast the SUBscriber application runs.
Provided it's fast enough to keep up with all the PUBlishers, then every single message the SUBscriber gets will be as close to the "latest" message as it can get (the message will be only as old as the network propagation delays and the time taken to transit through ZMQ's internal protocols, buffers and queues).
If the SUBscriber isn't fast enough to keep up, then the "latest" messages it sees will be at least as old as the processing time per message multiplied by the number of PUBlishers. If you've set the receive HWM to 1 and the subscriber is not keeping up, the publishers will keep trying to publish messages, but the subscriber's socket will keep rejecting them until the subscriber application clears out the old message that caused the queue congestion by calling zmq_recv().
If the subscriber can't keep up, the best thing to do in the subscriber is:
have a receiving thread dedicated to receiving messages, disposing of them until the processing thread asks for one
have a separate processing thread that does the processing.
Have the two threads communicate via ZMQ, using a REQ/REP pattern over an inproc connection.
The receiving thread can zmq_poll both the SUB socket connection to the PUBlishing servers and the REP socket connection to the processing thread.
If the receiving thread receives a message on the REP socket, it can reply with the next message read from the SUB socket.
If it receives a message from the SUB socket with no REPly due, it disposes of the message.
The processing thread sends 1-byte messages (the content doesn't matter) on its REQ socket to request the latest message, and receives the latest message from the PUBlishers in reply.
Or, something like that (see the sketch below). That'll keep the messages flowing from PUBlishers to the SUBscriber, so the SUBscriber always has a message as close as possible to being "the latest", processing it as and when it can and disposing of messages it can't deal with.
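Here is a hedged pyzmq sketch of that two-thread arrangement. The endpoints, the inproc name and the handle() function are hypothetical placeholders, and it assumes a libzmq recent enough (4.2+) that an inproc connect may precede the bind.

```python
import threading
import zmq

ctx = zmq.Context()

def receiver():
    # SUB socket connected to all publishing servers
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    for endpoint in ("tcp://server-a:5556", "tcp://server-b:5556"):
        sub.connect(endpoint)

    # REP socket for talking to the processing thread
    rep = ctx.socket(zmq.REP)
    rep.bind("inproc://latest")

    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)
    poller.register(rep, zmq.POLLIN)

    reply_due = False
    while True:
        events = dict(poller.poll())
        if rep in events:
            rep.recv()            # 1-byte request from the processing thread
            reply_due = True
        if sub in events:
            msg = sub.recv()      # a message from one of the publishers
            if reply_due:
                rep.send(msg)     # hand over the next (i.e. freshest) message
                reply_due = False
            # otherwise dispose of the message and keep draining the queue

def handle(msg):
    print("processing", msg)      # hypothetical processing

def processor():
    req = ctx.socket(zmq.REQ)
    req.connect("inproc://latest")
    while True:
        req.send(b"x")            # "give me the latest message"
        handle(req.recv())

threading.Thread(target=receiver, daemon=True).start()
processor()
```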

socket: how to detect peer's shutdown(SD_RECEIVE) by select?

When the server calls shutdown(SD_SEND), the client can detect it with select() and recv() (which returns 0). But how can the client detect shutdown(SD_RECEIVE)?
If the server shuts down its receive channel, it can no longer receive any data. There is no way to detect this condition with select(), except that the socket simply won't be reported as writable anymore. No signal is sent to the client in this situation (unlike shutting down the send channel, which sends a FIN packet indicating no more data will be sent). So any attempt by the client to send data afterwards will either be buffered indefinitely while waiting for the server to acknowledge there is room to receive, or, more likely, will simply fail with an error.
Short answer: there's no magic API the client can use to detect if the (Microsoft!) server happened to call shutdown(SD_RECEIVE). However, all subsequent sends from the client to the server on that socket will fail.
Longer answer:
shutdown(SD_RECEIVE) is a Windows API call, and its effect is purely local. It doesn't necessarily pertain to sockets in general, and certainly not to TCP/IP itself: there is no TCP-level FIN or RST, and no "message" to the client.
From the documentation:
https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-shutdown
The shutdown function does not close the socket. Any resources attached to the socket will not be freed until closesocket is invoked.
Another method to wait for notification that the remote end has sent all its data and initiated a graceful disconnect uses overlapped receive calls, as follows:
Call shutdown with how=SD_SEND.
Call recv or WSARecv until the function completes with success and indicates zero bytes were received. If SOCKET_ERROR is returned, then the graceful disconnect is not possible.
Call closesocket.
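For illustration, here is a rough cross-platform sketch of that sequence in Python (socket.SHUT_WR corresponds to Winsock's SD_SEND; the host and port are hypothetical placeholders):

```python
import socket

sock = socket.create_connection(("example.com", 9000))
try:
    sock.sendall(b"final request")
    sock.shutdown(socket.SHUT_WR)     # step 1: shutdown with how=SD_SEND
    while True:                       # step 2: recv until zero bytes
        chunk = sock.recv(4096)
        if not chunk:
            break                     # remote end has finished sending
        # consume chunk as needed
finally:
    sock.close()                      # step 3: closesocket
```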

How to detect when socket connection is lost?

I have a script (I don't have the code example here at the moment, but I used IO::Async) which connects to a socket on a remote server and listens. The client usually just listens for new data.
The problem is that the client is not able to detect if network problems occur and the socket connection is gone.
I used IO::Async and I also tried it with IO::Socket. The handle is always "connected" after the initial connection is established.
If the network connection is established again, the socket connection is naturally still lost, because the script has no idea that it should reconnect.
I was thinking of creating some kind of "keepAlive" which "pings" (syswrite) the socket every X seconds (if nothing new has come through the socket) to check whether the connection is still there.
Is this the correct way to do it, or is there maybe a more creative or cleaner solution?
You can set the SO_KEEPALIVE socket option which, for TCP, sends periodic keepalive messages, and may help detect this condition. If this is detected, you will be delivered an EOF condition (most likely causing the containing IO::Async::Stream to fire on_read_eof).
For a better solution you might consider some sort of application-level keepalive message, such as IRC's PING command.
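For illustration (shown in Python rather than Perl, to match the other sketches here), enabling SO_KEEPALIVE looks like the following; the host, port and tuning values are hypothetical, and the TCP_KEEP* options are Linux-specific. The same SO_KEEPALIVE option can be set on a Perl socket via setsockopt.

```python
import socket

sock = socket.create_connection(("example.com", 9000))         # hypothetical server
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # enable TCP keepalive probes

# Linux-only tuning; without it the kernel defaults apply (often two hours idle)
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # seconds idle before probing
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before the connection is dropped
```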
The short answer is that there is no default way to automatically detect a dropped socket in Perl.
Your approach of pinging would probably work pretty well; you could run a continuous thread in the background that sends ping requests, and if it doesn't receive a response it can notify the main thread so that a reconnect is issued.
If you want to get messy, you can work with select() to detect keepalive messages; however, this may require some OS configuration depending upon your platform.
See this thread for more details: http://www.perlmonks.org/?node_id=566568

Do I stay in the MUC when pause() and attach()?

I have a client written using Strophe that is loaded on every page on my website. To minimize latency I save the rid, the jid and the sid at each page change so that I can use Strophe's attach() method.
However, I am unsure whether pausing and attaching keeps me in the MUC. If it does, is there a patch to the Strophe MUC plugin that lets me set handlers without rejoining the MUC?
Yes, you do. BOSH pause and attach leave your stream open; the XMPP server does not even know it happened (since it happens at the BOSH layer).
Pausing is just a graceful way of telling the BOSH connection manager not to expect requests from you for a short period of time. In BOSH it is not necessary to keep an HTTP request open at all times to keep the XMPP stream alive, only to make requests often enough for the connection manager to be satisfied that you have not gone offline without warning.

Ensuring send() data delivered

Is there any way of checking whether data sent using Winsock's send() or WSASend() is really delivered to the destination?
I'm writing an application talking to a third-party server, which sometimes goes down after working for some time, and I need to be sure whether messages sent to that server were delivered or not. The problem is that sometimes send() finishes without error even if the server is already down, and only the next send() fails with an error, so I have no idea whether the previous message was delivered.
I suppose at the TCP layer there is information about whether certain (or all) sent packets were ACKed, but it is not available through the socket interface (or I cannot find a way to get it).
Worst of all, I cannot change the code of the server, so I can't get any delivery confirmation messages.
I'm sorry, but given what you're trying to achieve, you should realise that even if the TCP stack COULD give you an indication that a particular set of bytes has been ACK'd by the remote TCP stack, it wouldn't actually tell you anything more than you know at the moment.
The problem is that unless you have an application-level ACK from the remote application, sent only once it has actioned the data you sent, you will never know for sure whether the data has been received by the remote application.
'but I can assume it's close enough'
is just delusional. You may as well make that assumption when your send() completes, as it's about as valid.
The issue is that even if the TCP stack could tell you that the remote stack had ACK'd the data (1), that's not the same thing as the remote application receiving the data (2), and that is not the same thing as the remote application actually USING the data (3).
Given that the remote application COULD crash at any point (1, 2 or 3), the only worthwhile indication that the data has arrived is one sent by the remote application after it has used the data for the intended purpose.
Everything else is just wishful thinking.
Not from the return value of send(). All send() tells you is that the data was pushed into the send buffer. A connected stream socket doesn't guarantee that all the data will be sent, only that whatever arrives does so in order. So you can't assume that your send() will go out in a single packet, or that it will ever arrive at all, given network delays or interruptions.
If you need a full acknowledgement, you'll have to look at application-level ACKs (the server sending back a formatted ack message, not just TCP-level packet ACKs).
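As a rough illustration of what an application-level acknowledgement could look like, here is a hedged Python sketch; the newline-delimited JSON framing and the echoed "id" field are hypothetical, and this only helps where the server's protocol (which the asker cannot change) actually defines such an ack.

```python
import json
import socket

def send_with_ack(sock, payload, msg_id, timeout=5.0):
    """Send one message and block until the peer echoes back its id."""
    sock.sendall(json.dumps({"id": msg_id, "data": payload}).encode() + b"\n")
    sock.settimeout(timeout)
    buf = b""
    while b"\n" not in buf:                      # read until one full ack line arrives
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed before acknowledging")
        buf += chunk
    ack = json.loads(buf.split(b"\n", 1)[0])
    if ack.get("ack") != msg_id:                 # hypothetical ack format: {"ack": <id>}
        raise ValueError("unexpected acknowledgement: %r" % ack)
```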