Start notebook from other notebook - jupyter

Using jupyter-lab,
%run otherNotebook.ipynb
gives the following error message:
Error: file not found otherNotebook.ipynb.py
How can I use the %run magic and prevent it from appending .py to the file name?

As described here, %run is for running a named file inside IPython as a program, and Jupyter notebooks are not Python programs.
Notebooks can be converted to Python programs/scripts using Jupytext; following that conversion you could then use %run, as sketched below.
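For example, a conversion along these lines should work (a sketch; assumes Jupytext is installed and the notebook sits in the current directory):
jupytext --to py otherNotebook.ipynb
%run otherNotebook.py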
Alternatively, you can use nbconvert or Papermill to execute a notebook; Papermill also lets you easily pass in parameters at run time (see the sketch below). I have an example of both commented out in code under 'Step #5' here and 'Step #2' here.
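For instance, a minimal Papermill sketch (assumes pip install papermill; the output file name and the parameter are placeholders):
import papermill as pm
pm.execute_notebook(
    'otherNotebook.ipynb',       # notebook to execute
    'otherNotebook_out.ipynb',   # executed copy, with outputs saved
    parameters={'alpha': 0.5},   # injected into a cell tagged 'parameters'
)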
If you are actually trying to bring the code into your present notebook, then you may want to explore importing Jupyter notebooks as modules. importnb is recommended here for making importing notebooks more convenient. Or, I just came across the subnotebook project, which lets you run a notebook as you would call a Python function, passing parameters in and getting results back, including output contents.

Related

Can I execute a file (or some lines of python) within a running jupyterlab kernel for a notebook?

I've got a notebook that has become a bit unwieldy, and I'm doing some refactoring, which isn't fun.
I was wondering whether it would be possible to execute code in this notebook from the command line for debugging.
Ideally, I would run something like:
run-in-jupyter $notebook file.py
and see the output from the command line. There is an interpreter in JupyterLab that can do this, which makes me think it is possible.
I had a brief search but couldn't find much:
How to run an .ipynb Jupyter Notebook from terminal? I explicitly don't want to do this (I want to run commands in an existing instance).
There is this library, but it seems quite involved, and some of the results I found on the internet were from people not being able to use it.
jupyter console (pip install jupyter-console) connects to a running jupyter kernel from the terminal. Details on running kernels can be found amongst jupyter's runtime files; on my box these live in ~/.local/share/jupyter/runtime. You can find the path to the kernel data file corresponding to an open notebook with %config IPKernelApp.connection_file, which will look something like ~/.local/share/jupyter/runtime/kernel-55da8a07-b67d-4584-9ec6-f24e4a26cbbd.json.
You can then connect from the command line with
jupyter console --existing ~/.local/share/jupyter/runtime/kernel-55da8a07-b67d-4584-9ec6-f24e4a26cbbd.json
You can pipe commands into it, as shown:
echo 'h=87' | jupyter console --existing 55da8a07-b67d-4584-9ec6-f24e4a26cbbd --simple-prompt -y
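If you want the connection file programmatically rather than via %config, a minimal sketch (uses ipykernel's helper, which is available inside a running kernel):
from ipykernel import get_connection_file
print(get_connection_file())  # path to this kernel's kernel-....json file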

How can I programmatically check that I am running code in a notebook in julia?

I need to programmatically check, from Julia, that I am running code in a jupyter notebook. One way would be using
isdefined(Main, :IJulia)
However, this does not work for notebooks within VSCode, since they are run outside IJulia. Is there a check that would work in this case as well?
What about @__FILE__? This yields REPL[_] in the Julia REPL, In[_] in Jupyter, and "/path/to/file.jl#==#hashcode" in Pluto, so the test could be:
match(r"^In\[[0-9]*\]$", @__FILE__) != nothing
In VSCode, @__FILE__ yields the path of the notebook file, so you can check whether the file ends with ".ipynb" if you want to detect VSCode. Moreover, isdefined(Main, :VSCodeServer) yields true if you run from VSCode.
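Putting those together, a minimal sketch (the function name is made up; note that @__FILE__ is resolved where the function is defined, so define it in the notebook itself):
function in_notebook()
    isdefined(Main, :IJulia) && return true                                 # classic Jupyter
    isdefined(Main, :VSCodeServer) && return endswith(@__FILE__, ".ipynb")  # VSCode notebook
    return match(r"^In\[[0-9]*\]$", @__FILE__) !== nothing                  # Jupyter cell-id fallback
end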

jupyter nbconvert doesn't save actual output

I have a notebook script that I run on different datasets. I want to save the script, INCLUDING the output cells, in the data folder each time I run it.
I have the following command placed at the end of my script, which I run in jupyter. I intend to save pretty much what I can see on the screen to an HTML file:
"here is my notebook script with inputs and output, including graphs"
import os
cmd = 'jupyter nbconvert --to html odnp_postprocessing.ipynb --output-dir ' + dataFolder
os.system(cmd)
However, nbconvert does not export the actual cells. The first time I run it, it exports only the input cells without output; if I re-run, it finally exports both input and output. But if I then change something in the script, it keeps exporting the first version. The only way around this I have found is to restart the kernel and re-run the modified script (twice).
Basically, it looks like nbconvert exports some kind of buffer that is not necessarily the actual input and output cells that the user sees.
What I want is to programmatically save my notebook (inputs and outputs) to HTML each time I run it.
Is there a command to save the current version of the notebook? I tried adding %notebook before the nbconvert command, but a whole bunch of old inputs get saved as well.
I know I can execute the notebook with nbconvert itself, but I'd like to avoid that, as I already run it manually in jupyter.
Any idea?
I'm using jupyter through Enthought Canopy in the Chrome browser.
Thanks
You're not telling us exactly how you are running nbconvert (from the current notebook?), so it is hard to figure things out.
Nbconvert converts the file as it is on disk. As a wild guess: you haven't saved your file. If you do not save your file, nbconvert will likely not have access to the outputs of cells; and, wild guess again, when you run it a second time autosave has kicked in.
Remember:
- Nbconvert does not execute the file
- Nbconvert is a separate process; it can't magically access what is in your browser, which is potentially on a different machine.
Think of it this way:
1. Run the notebook.
2. Save the notebook.
3. Close the notebook.
4. Run nbconvert.
5. Reopen the notebook.
If you are using this command at the end of a notebook to save it in another format, then what you are looking for are save hooks, which trigger some code – server-side – every time you save a notebook.
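For example, a post-save hook along these lines in ~/.jupyter/jupyter_notebook_config.py (a sketch following the pattern in the Jupyter docs; adjust the output format and paths to taste):
import os
from subprocess import check_call

def post_save(model, os_path, contents_manager, **kwargs):
    # Convert the notebook to HTML after every save; skip non-notebook files.
    if model['type'] != 'notebook':
        return
    d, fname = os.path.split(os_path)
    check_call(['jupyter', 'nbconvert', '--to', 'html', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save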
Side note: learn how to run shell commands in IPython; ! can be used to execute a shell command in the CWD, and it does variable interpolation.
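For instance, the os.system call above could be replaced inside the notebook with something like (assuming dataFolder is defined there):
!jupyter nbconvert --to html odnp_postprocessing.ipynb --output-dir {dataFolder}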

Can't launch PySpark in browser (windows 10)

I'm trying to launch a PySpark notebook in my browser by typing pyspark at the console, but I get the following error:
c:\Spark\bin>pyspark
python: can't open file 'notebook': [Errno 2] No such file or directory
What am I doing wrong here? Please help.
Sounds like the jupyter notebook is either not installed or not on your path.
I prefer to use Anaconda for my Python distribution; Jupyter comes standard and will install all the necessary path information as well.
After that, as long as you have set PYSPARK_DRIVER_PYTHON=jupyter and PYSPARK_DRIVER_PYTHON_OPTS='notebook' correctly, you are good to go.
You want to launch the jupyter notebook when you invoke the command pyspark. To do that, add the following to your ~/.bash_profile or ~/.zshrc:
export PYSPARK_SUBMIT_ARGS="pyspark-shell"
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
Then running pyspark will launch the notebook.
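Since the question is about Windows 10, the cmd.exe equivalent would be something along these lines (a sketch; assumes Jupyter is installed and on PATH):
set PYSPARK_DRIVER_PYTHON=jupyter
set PYSPARK_DRIVER_PYTHON_OPTS=notebook
pyspark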

Can I save ipython command line history to a notebook file?

I was using the IPython command-line interface, and after some operations I want to save my operation history to a notebook file. But I was not using the IPython notebook from the beginning. Can I still do it?
From @Thomas K (I don't know why he didn't post an answer):
%notebook -e myhistory.ipynb
The short answer is: in a couple of ways. The slightly longer answer is yes, but you might not get what you expect!
Really long answer: the explanation is that when you are working in a notebook (now called a jupyter notebook, of course) your work is stored in a series of cells, each of which has one or more lines of code or markdown, while when you are working in a console all of your work is a series of lines of python code.
From within a console session you can save, using %save, some or all of your work to one or more python files that you can then paste, import, etc. into notebook cells. You can also use %save -r to save your work to .ipy files, keeping the magics as magics rather than the results of magics, and again use those from within your notebook later.
You can also use the %notebook magic with the -e export flag to save all of your current history, either to an .ipynb JSON file or to a python .py text file. However, it is not clear from the documentation whether the history will end up in a single cell, one cell per command, or some other division. A little testing suggests one cell per numbered line of your console, i.e. a single command or definition per cell.
Personally, I will stick with outputting anything useful into python files using the %save command, or better yet starting a notebook when I think I might be doing something that I would need later.
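For example, a console session along these lines (the file name is arbitrary; output abbreviated):
In [1]: x = 1
In [2]: y = x + 1
In [3]: %save session.py 1-2
The following commands were written to file `session.py`: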
If you are using jupyter console (i.e., the command-line version of the jupyter notebook) you can use the following command at the prompt to save your session as a python file (.py):
save yourfilename
This will save the contents of the session as "yourfilename.py" in your current working directory (the one from which you started jupyter console).
Alternatively, you can save the whole session with this command:
%save -r program-name 1-999999
which reports: The following commands were written to file `program-name.ipy`