While working on an IPython notebook, I increasingly find myself wishing the notebook had a console attached to it for interactive programming. I find myself adding lines to test snippets of code and then deleting them, and that's the good case. In the worse case I'm changing the commands on the same line, evaluating the line over and over, entirely changing the purpose of the line until I get it right, and then Ctrl-Z-ing all the way back to the original cell content.
If I could have an interactive interpreter at the bottom of the notebook, that would definitely increase my productivity. I know that a notebook has a kernel, but I wasn't able to attach a new IPython console to it. So my questions are:
Is there a more efficient way to work with the notebook?
Assuming there isn't, how can I attach an IPython console to a notebook kernel?
Thanks!
Just do %qtconsole in one cell, and it will start a qtconsole attached to the same kernel.
Of course your kernel needs to be local.
You can of course use the long method:
In [1]: %connect_info
{
"stdin_port": 50845,
"ip": "127.0.0.1",
"control_port": 50846,
"hb_port": 50847,
"signature_scheme": "hmac-sha256",
"key": "c68e7f64-f764-4417-ba3c-613a5bf99095",
"shell_port": 50843,
"transport": "tcp",
"iopub_port": 50844
}
Paste the above JSON into a file, and connect with:
$> ipython <app> --existing <file>
or, if you are local, you can connect with just:
$> ipython <app> --existing kernel-45781.json
or even just:
$> ipython <app> --existing
if this is the most recent IPython session you have started.
Then:
ipython qtconsole --existing kernel-45781.json
When you start the ipython notebook in the terminal, it will output something like this:
2015-03-26 13:05:52.772 [NotebookApp] Kernel started: 4604c4c3-523b-4373-bfdd-222eb1260156
Then start the ipython console like this:
ipython console --existing 4604c4c3
I find this easier than the other solution.
Any guesses on how to specify a specific ipython config from qtconsole?
Without qtconsole:
ipython --profile=my_profile_name
Where my_profile_name is a profile name under your global IPython directory, used for ipython kernel options. This lets you specify IPython-specific things, like modules to import on load.
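For instance, a minimal sketch of that approach (the profile name and file name below are only illustrative; the profile would be created beforehand with ipython profile create my_profile_name): anything placed in the profile's startup/ directory is executed automatically when IPython starts with that profile.

# ~/.ipython/profile_my_profile_name/startup/00-imports.py
# Executed automatically whenever this profile is loaded.
import numpy as np
import pandas as pd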
With qtconsole:
jupyter qtconsole --config=./jupyter_qtconsole_config.py
Where you can specify a specific config file to setup general non-ipython-specific qtconsole settings, like font size.
How can you set the ipython profile (ideally by pointing it to a file, but it may be limited to specifying a global profile name) from qtconsole, i.e. add the --profile flag to jupyter qtconsole? In this link: https://groups.google.com/forum/#!topic/jupyter/kzEws9ZeCFE Matthias mentions specifying a kernel, but that seems overkill.
You can specify a profile in a file called 'ipython_kernel_config.py'; perhaps the solution lies in launching qtconsole with --config=jupyter_qtconsole_config.py and pointing from this file to a custom ipython_kernel_config.py that names a profile, but I'm not sure how to point to the kernel config file, and there is no obvious way in the Jupyter config docs.
You need to create a custom kernelspec and launch the qtconsole for this specific kernel.
Usually a "kernel" is seen as a language; this is an extremely restrictive view of what a kernel is. In your case what you want to do is have multipel IPython kernels, each launching IPython with a different profile. Here is the more formal definition of what a kernelspec is; but roughly it describe how to start a process.
By using jupyter kernelspec list, I can see I have a Python kernelspec in /usr/local/share/jupyter/kernels/python3; let's have a look at it, and in particular the kernel.json file:
{
"argv": [
"$HOME/anaconda/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3",
"language": "python"
}
Now you just need to duplicate all of that, and add "--profile=my_profile_name" to the "argv" list. Don't forget to give a different name to the folder and change "display_name": "Python 3" to "display_name": "Python 3 (my_profile)". Once this is available, just launch a qtconsole, a notebook or anything else with this kernel, and you should get your new profile.
You can of course use utilities like a2km to do that programmatically from the command line.
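If you'd rather script the duplication than do it by hand (or with a2km), here is a rough sketch using jupyter_client's KernelSpecManager; the profile name, kernel name and display name are placeholders to adapt:

import json, os, shutil, tempfile
from jupyter_client.kernelspec import KernelSpecManager

ksm = KernelSpecManager()
src = ksm.get_kernel_spec('python3').resource_dir  # the existing Python 3 kernelspec

with tempfile.TemporaryDirectory() as tmp:
    dst = os.path.join(tmp, 'python3_myprofile')
    shutil.copytree(src, dst)
    spec_path = os.path.join(dst, 'kernel.json')
    with open(spec_path) as f:
        spec = json.load(f)
    spec['argv'].append('--profile=my_profile_name')   # placeholder profile name
    spec['display_name'] = 'Python 3 (my_profile)'
    with open(spec_path, 'w') as f:
        json.dump(spec, f, indent=1)
    # Install it for the current user under a new kernel name.
    ksm.install_kernel_spec(dst, kernel_name='python3_myprofile', user=True)

After that, jupyter qtconsole --kernel python3_myprofile (or picking the kernel in the notebook UI) starts IPython with that profile.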
I know about nbconvert and use it to generate static html or ipynb files with the results output. However, I want to be able to generate a notebook that stays attached to a kernel that I already have running, so that I can do further data exploration after all of the template cells have been run. Is there a way to do that?
Apparently, you can do this through the Python API. I didn't try it myself, but for someone who is looking for a solution, this PR has an example in the comments:
from nbconvert.preprocessors.execute import executenb, ExecutePreprocessor
from nbformat import read as nbread
from jupyter_client.manager import start_new_kernel
nb = nbread('parsee.ipynb', as_version=4)
kernel_name = nb.metadata.get('kernelspec', {}).get('name', 'python')
km, kc = start_new_kernel(kernel_name=kernel_name)
executenb(nb, kernel=(km, kc))
kc.execute_interactive('a') # a is a variable defined in parsee.ipynb with 'a = 1'
I'm not quite sure about your purpose, but my general solutions are:
To execute the notebook on the command line and see the execution at the same time:
jupyter nbconvert --debug --allow-errors --stdout --execute test.ipynb
This will show the execution of all cells in debug mode, even if an exception happens, but I can't see the results until the end of the execution.
To output the results to an HTML file and then open the HTML file to see the results; I find this more convenient:
jupyter nbconvert --execute --allow-errors --stdout test.ipynb >> result.html 2>&1
If you open result.html, all the errors and results will be shown on the page.
I would like to learn other answers/solutions from you all. Thank you.
If I understood correctly, you wish to open a Python console and connect a Jupyter notebook to that kernel instance?
Perhaps the solution would be to edit the Jupyter scripts themselves and run the server in a separate thread/background task, implementing some sort of connection between threads, and work in the Jupyter console? Currently it's impossible because the main thread is running the server.
This would require some work and I don't have any solution as-is, but I will look into it and maybe edit this answer if I can make it work.
But it seems that the easiest solution is to simply add another cell in the notebook and do whatever you wish to do there. Is there a reason for not doing that?
The IPython profile or Jupyter profile path: ~/.ipython/profile_default/startup/startup.ipy
I update this quite often.
Is there a way to source this within a notebook, like when you're in the terminal and you source ~/.bash_profile after making an update? My current method is to close the kernel and the Jupyter session, then restart.
You can use %run to do this:
%run ~/.ipython/profile_default/startup/startup.ipy
If you do %run -i [script], then your current interactive namespace will be available to the script.
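For example, a tiny sketch (the script name and variable are purely illustrative):

# tweak.py (hypothetical helper script)
# With %run -i, names already defined in the notebook are visible here.
threshold = threshold * 2

# In a notebook cell:
threshold = 0.5
%run -i tweak.py
threshold   # now 1.0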
I am writing my script interactively with IPython. This is what I currently do:
write a chunk of code,
run in ipython with "run -i file_name.py".
make changes and repeat step 2 until I think it is OK.
comment out the entire previous chunk.
write new chunk of code that is based on the previous one.
go back to step 2.
......
Is there more efficient way? Can I start a script from a specific line while using all the variables in current namespace?
Use ipdb ("pip install ipdb" on the command line to install it).
Suppose you want to run script "foo.py" from line 18 to 23.
You'll want to start like this:
ipdb foo.py
Now, let's jump to line 18 (i.e., ignore all the lines before the 18th):
ipdb> j 18
Next, we set a breakpoint at line 23 (we don't want to go further):
ipdb> b 23
Finally, let's execute:
ipdb> c
Job done :)
I'd personally also use the IPython notebook, but you can also use your favorite text editor, always copy out the chunk of code you want to run, and use the magic command %paste to run that chunk in the IPython shell. It will take care of indentation for you.
Use the magic of %edit stuff.py (first use) and %ed -p (after the first use), and it will invoke your $EDITOR from inside of IPython. Upon exiting from the editor, IPython will run the script (unless you called %ed -x). That is by far the fastest way I have found to work in CLI IPython. The notebooks are nice, but I like having a real editor for code.
(Based on lev's answer)
From the interactive shell:
%run -i -d foo.py
should then enter the debugger, and proceed with:
j <line_number>
c
etc.
EDIT: unfortunately, this seems to sort of break ipython's magic %debug command.
An IPython Notebook allows you to interactively run scripts line by line. It comes with IPython; just run:
ipython notebook
from the terminal to launch it. It's a web interface to IPython, where you can save notebooks to *.py files by clicking "Save as" in the settings.
Here's some more info from this video.
For something fast as well as flexible, use qtconsole: http://qtconsole.readthedocs.io/en/stable/
It is similar to the browser-based Jupyter notebook (as pointed out by @agonti and @magellan88), but presumably much faster. It also has emacs-style keybindings.
I use ipdb and ipython, coupled with tmux and vim, and get almost IDE-like features, much faster.
Does anyone know if it is possible to run an IPython/Jupyter notebook non-interactively from the command line and have the resulting .ipynb file saved with the results of the run? If it isn't already possible, how hard would it be to implement with PhantomJS, something to turn the kernel on and off, and something to turn the web server on and off?
To be more specific, let's assume I already have a notebook original.ipynb and I want to rerun all cells in that notebook and save the results in a new notebook new.ipynb, but do this with one single command on the command line without requiring interaction either in the browser or to close the kernel or web server, and assuming no kernel or web server is already running.
example command:
$ ipython notebook run original.ipynb --output=new.ipynb
Yes it is possible, and easy. It will (mostly) be in IPython core for 2.0; I would suggest looking at those examples for now.
[edit]
$ jupyter nbconvert --to notebook --execute original.ipynb --output=new.ipynb
It is now in Jupyter NbConvert. NbConvert comes with a bunch of preprocessors that are disabled by default; two of them (ClearOutputPreprocessor and ExecutePreprocessor) are of interest. You can either enable them in your (local|global) config file(s) via c.<PreprocessorName>.enabled=True (uppercase, as that's Python), or on the command line with --ExecutePreprocessor.enabled=True, keeping the rest of the command as usual.
The --ExecutePreprocessor.enabled=True flag has a convenient --execute alias that can be used on recent versions of NbConvert. It can be combined with --inplace if desired.
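For instance, a minimal sketch of the config-file route (the path is the usual per-user location and the settings are only examples):

# ~/.jupyter/jupyter_nbconvert_config.py
c = get_config()                           # provided by the Jupyter config loader
c.ExecutePreprocessor.enabled = True       # run the notebook during conversion
c.ExecutePreprocessor.timeout = 600        # optional: seconds allowed per cell
c.ClearOutputPreprocessor.enabled = False  # leave existing outputs untouched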
For example, to convert to HTML after running the notebook headlessly:
$ jupyter nbconvert --to=html --execute RunMe.ipynb
And to convert to PDF after stripping outputs:
$ ipython nbconvert --to=pdf --ClearOutputPreprocessor.enabled=True RunMe.ipynb
This (of course) works with non-Python kernels, by spawning a <insert-your-language-here> kernel, if you set --profile=<your fav profile>. The conversion can take a long time, as it needs to rerun the notebook. You can do notebook-to-notebook conversion with the --to=notebook option.
There are various other options (timeout, allow errors, ...) that might need to be set or unset depending on the use case. See the documentation and of course jupyter nbconvert --help, --help-all, or the nbconvert online documentation for more information.
Until this functionality becomes part of the core, I put together a little command-line app that does just what you want. It's called runipy and you can install it with pip install runipy. The source and readme are on github.
Run and replace original .ipynb file:
jupyter nbconvert --ExecutePreprocessor.timeout=-1 --to notebook --inplace --execute original.ipynb
To cover features such as parallel workers, input parameters, e-mail sending or S3 input/output, you can install jupyter-runner:
pip install jupyter-runner
Readme on github: https://github.com/omar-masmoudi/jupyter-runner
One more way is to use papermill, which has a command-line interface.
Usage example (you need to specify an output path where the execution results will be stored):
papermill your_notebook.ipynb logs/yourlog.out.ipynb
You can also specify parameters if you wish, with a -p flag for each parameter:
papermill your_notebook.ipynb logs/yourlog.out.ipynb -p env "prod" -p tests "e2e"
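If you prefer calling papermill from Python rather than the shell, the same thing is available through its execute_notebook function (the paths and parameter names below are just the placeholders from the commands above):

import papermill as pm

pm.execute_notebook(
    'your_notebook.ipynb',                          # input notebook
    'logs/yourlog.out.ipynb',                       # executed copy with outputs
    parameters={'env': 'prod', 'tests': 'e2e'},     # injected into the "parameters" cell
)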
One more papermill-related reply: https://stackoverflow.com/a/55458141/2957102
You can just run the IPython notebook server via the command line:
ipython notebook --pylab inline
This will start the server in non-interactive mode, and all output is printed below the code. You can then save the .ipynb file, which includes both code and output.