How to query the IPython (JupyterLab?) kernel from JavaScript in the notebook?

You can write some HTML and JavaScript and display things in the notebook, but what is the standard way of querying the "backend", i.e. sending requests to the Python process driving the notebook?

Use the Jupyter messaging protocol:
https://jupyter-client.readthedocs.io/en/stable/messaging.html
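A common way to use that protocol from notebook JavaScript is the comm layer it defines. Here is a minimal kernel-side sketch (the target name 'query_backend' is made up for this example; run this in the kernel first):

from IPython import get_ipython

def handle_comm(comm, open_msg):
    # Called once when a frontend opens a comm to this target
    def recv(msg):
        data = msg['content']['data']
        # Reply to the frontend over the same comm
        comm.send({'echo': data})
    comm.on_msg(recv)

# Registering only works inside a kernel, not plain terminal IPython
get_ipython().kernel.comm_manager.register_target('query_backend', handle_comm)

In a classic notebook, the JavaScript side would then open the comm with Jupyter.notebook.kernel.comm_manager.new_comm('query_backend', {}) and attach its own on_msg handler; JupyterLab speaks the same protocol through its @jupyterlab/services package.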

Related

How to connect to InfluxDB in version 2.0

After running the influxd daemon, when I try to start influx as in the older version, I'm not able to connect to any CLI interface for the DB.
Instead, it just shows help commands.
Can anyone please help here?
Connect to port 8086 of the machine you installed InfluxDB on to reach the web-based interface, and play with queries in the Explore section.
Alternatively, you can interact via the CLI using the influx command, but only if you're on the same machine as the installation; those are the help commands you're seeing, and there's no REPL-style interaction in v2.
(Well, unless you build and install the Flux REPL available here; everything you can do there you can also do via the Influx UI mentioned above, in a web session on port 8086.)
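If you'd rather talk to that same port 8086 API programmatically, here is a minimal sketch using the official influxdb-client Python package (pip install influxdb-client); the URL, token, org, and bucket are placeholders:

from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086",
                        token="my-token", org="my-org")
# Flux query against a placeholder bucket; the Explore UI builds the same thing
tables = client.query_api().query('from(bucket: "my-bucket") |> range(start: -1h)')
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())
client.close()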

How to create a jupyter-lab extension that interacts with a running kernel

I am trying to create an extension for JupyterLab that interacts with a running IPython kernel, running either in a console or a notebook.
As a minimal example I am trying to get two "hello world" type programs. The first should print a message to stdout of the running kernel that is input in the extension. The second should log a message to the console of the extension that is input in the running ipython kernel.
To understand JupyterLab extensions I started with the astronomy picture of the day tutorial. Although this covers the basics of extensions, it does not address interacting with a running kernel.
From "really confused with jupyter notebook lab extensions and ipywidgets" it seems I should be looking at the Comms module; however, that is documentation for Jupyter Notebook, not for JupyterLab. As far as I understand, JupyterLab is sufficiently different under the hood that this should not work.
Could someone explain to me how to create an extension that interacts with a running kernel and what the appropriate documentation/tutorial is to look at if it exists?

Run Databricks notebook jobs via API in a shared context

In the REST documentation for Databricks, you can submit a notebook task as a job to a cluster using the 2.0 API, or you can submit a command or Python script using the 1.2 API.
The 1.2 API allows you to create a context; all subsequent commands or scripts can then be submitted against this context. This lets you maintain state (dataframes, variables, etc.), which is much more akin to running notebooks interactively in the browser.
What I want is to be able to submit my notebooks into the same context and get the same behaviour as with the 1.2 API, but this does not seem possible. Is there a reason for that? Or am I missing something if it can be done?
My use case is that I want to be able to re-run a notebook from the API and have it remember its last state (in the most basic example, just knowing it has already loaded a dataframe), but more generally having the ability for subsequent jobs to only run what changed since the last run.
As far as I can tell, without the ability to do this via the 2.0 API, I have two options:
Convert my notebook to a Python script and have a bootstrap script on the client side that invokes an entry point using the 1.2 API within the same context (see the sketch below)
Create temp tables at checkpoints in my notebook and possibly maintain a special variables dataframe of state variables
Both of these seem unnecessarily complex; any other ideas?
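For what it's worth, a rough sketch of the first option against the 1.2 execution-context endpoints, using requests (the host, token, and cluster id are placeholders, and the command is only illustrative; the 1.2 API historically takes form-encoded parameters, which is assumed here):

import requests

HOST = "https://<databricks-instance>"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}
CLUSTER = "<cluster-id>"

# Create an execution context once and keep its id between runs;
# every command run against it shares the same interpreter state
ctx = requests.post(HOST + "/api/1.2/contexts/create", headers=HEADERS,
                    data={"language": "python", "clusterId": CLUSTER}).json()

# Re-running this only loads the dataframe if the context hasn't seen it yet
cmd = requests.post(HOST + "/api/1.2/commands/execute", headers=HEADERS,
                    data={"language": "python", "clusterId": CLUSTER,
                          "contextId": ctx["id"],
                          "command": "df = df if 'df' in dir() else spark.range(10)"}).json()
print(cmd["id"])  # poll /api/1.2/commands/status with this id for the result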

Determine if we're in an IPython notebook session

I want my function to do one thing if it's called from the IPython notebook and another thing if it's called from a console or library code. In particular, I'm making a progress bar with the following desired behavior:
Notebook: Immediately return a widget
Console: Block and dump information to sys.stdout
Is there a flag somewhere I can check to determine if the user called my function from a notebook or otherwise?
You can't detect that the frontend is a notebook with perfect precision, because an IPython Kernel can have one or more different Jupyter frontends with different capabilities (terminal console, qtconsole, notebook, etc.). You can, however, identify that it is a Kernel and not plain terminal IPython:
import sys

def is_kernel():
    if 'IPython' not in sys.modules:
        # IPython hasn't been imported, definitely not
        return False
    from IPython import get_ipython
    # check for `kernel` attribute on the IPython instance
    return getattr(get_ipython(), 'kernel', None) is not None
Since the notebook is so much more popular than other frontends, this indicates that the frontend is probably a notebook, but not conclusively. So you want to be sure there's a way out for the more primitive frontends.
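As a hypothetical sketch of the dispatch the question asks for (progress_bar is an invented helper, and the widget branch assumes ipywidgets is installed):

import sys

def progress_bar(total):
    if is_kernel():
        # Probably a notebook frontend: return a widget immediately
        from ipywidgets import IntProgress
        from IPython.display import display
        bar = IntProgress(min=0, max=total)
        display(bar)
        return bar
    # Terminal IPython or plain Python: block and write to sys.stdout
    for i in range(total):
        sys.stdout.write('\rprogress: %d/%d' % (i + 1, total))
        sys.stdout.flush()
    sys.stdout.write('\n')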

When an IPython console connects to an existing kernel linked to a notebook, the %load magic cannot load code into the console

I learned this trick from Using IPython console along side IPython notebook. But when I connect to this kernel using the console, the %load magic loads the file into a pager (like when you do 'man thecodefile.py' in a console), not into the input line.
Does anyone know how to change this behavior back to the default?
You ask two questions here; please ask them separately.
As for !su ausername, there is no way to forward stdin through subprocess.