Is there a MATLAB-workspace equivalent in Jupyter Notebook where I can directly check the variables I'm using, and easily copy them out for other use?
In MATLAB, the workspace looks like this:
Or, if there is none, is there any related resources that I can read, so I can make one on my own?
This question is not a duplicate; here is the progress I have found so far.
A variable explorer:
https://github.com/jupyter/notebook/issues/1516
For JupyterLab I can recommend the JupyterLab VariableInspector extension. It is very similar to MATLAB's workspace and allows you to inspect values inside arrays.
Note that you need to have both numpy and pandas installed for the array value inspection to work.
You should be very careful when using this extension, though: Jupyter will become unresponsive if you have too much data in your variables. See for example https://github.com/lckr/jupyterlab-variableInspector/issues/122.
You can use the Jupyter contrib extension Variable Inspector:
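If you would rather build something yourself, as the question asks, note that IPython already ships a %whos magic that prints a plain-text listing of user variables. A minimal sketch of a custom listing built on the shell's user namespace (the function name and formatting are mine, not part of any extension) could look like this:

import sys
from IPython import get_ipython

def workspace():
    """List name, type, and approximate size of user-defined variables,
    roughly like MATLAB's workspace browser."""
    shell = get_ipython()
    hidden = set(shell.user_ns_hidden)
    for name, value in sorted(shell.user_ns.items()):
        if name.startswith("_") or name in hidden:
            continue
        print(f"{name:20s} {type(value).__name__:15s} {sys.getsizeof(value):>10d} bytes")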
Related
I really like Jupyter Notebooks.
However, working with them in conjunction with a source control system like git is cumbersome, because an .ipynb file contains both the source code (what you actually write in the notebook) and the generated output: text / HTML / images / metadata / ...
For example, merge conflicts are difficult to resolve, because everything is stored in one huge file with lots of generated data.
I wonder if I can configure Jupyter to store notebooks as
A source file: For example, I imagine this to be a Markdown file where everything surrounded by three backticks (```) is interpreted as a code cell. Diffs of that file would be meaningful and merge conflicts would be simple to resolve manually.
A generated file: This contains everything else. If there is a merge conflict within this file, it can be resolved by regenerating it.
Is this possible?
For reference: There is a slightly more general version of this question which lists various efforts at adapting IPython and Jupyter to this effect, and this answer proposes to solve the problem via Git. There is a GitHub project with a Git filter based on that answer, and (in its edit at the end) the answer links a few similar tools like nbstripout.
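To illustrate what those filter-based tools do, here is a minimal sketch, assuming the nbformat 4 JSON layout, that strips the generated parts from a notebook so that only the "source" side is left for version control (the file names are placeholders):

import json

def strip_outputs(in_path, out_path):
    """Remove outputs and execution counts from a notebook (nbformat 4)."""
    with open(in_path) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(out_path, "w") as f:
        json.dump(nb, f, indent=1)

strip_outputs("analysis.ipynb", "analysis.stripped.ipynb")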
Question
How can I set up a MathJax "preamble" for use in IPython (or Jupyter) notebooks, for repeated use, in a way that is convenient for others to read my documents (on http://nbviewer.org) and that works for LaTeX/PDF generation?
Background
I would like to use IPython (now Jupyter) notebooks for documents that I later convert to PDF via LaTeX (using ipython nbconvert). The problem is how to include a bunch of macro definitions that I use in almost every document. Something like:
\newcommand{\vect}[1]{\vec{#1}}
\newcommand{\abs}[1]{\lvert#1\rvert}
\DeclareMathOperator{\erf}{erf}
etc. As far as the notebook is concerned, one unsatisfactory solution is to simply include these in a markdown cell at the top of the notebook, embedded between two dollar signs ($$) so they are interpreted as math. If this is done after some introductory text, then it does not even affect the output.
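Concretely, that workaround amounts to a markdown cell along these lines, reusing the macros above (\DeclareMathOperator cannot go here, as noted below):

$$
\newcommand{\vect}[1]{\vec{#1}}
\newcommand{\abs}[1]{\lvert#1\rvert}
$$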
The problem is that, when converting to LaTeX (for PDF export), these commands are embedded in a math environment in the LaTeX file. This has several problems:
Commands like \DeclareMathOperator must come in the LaTeX document preamble.
Command definitions are local to the equation and not available later in the document. (This can be overcome by using \gdef or \global\def but then one must trick MathJax into recognising these commands with something like \let\gdef{\def} which is somehow hidden from LaTeX. Any way I have found of making this work amounts to an ugly hack.)
Sometimes commands are already defined in LaTeX and need \renewcommand (not supported by MathJax, but again this can be provided by \let\renewcommand\newcommand etc., which seems reasonable to me since MathJax cannot know what preamble might be used for the final LaTeX file).
Probably the solution is to provide a set of macros to MathJax by adding code like the following (I am not sure what the equivalent of \DeclareMathOperator is here...)
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    TeX: {
      Macros: {
        vect: ["{\\vec #1}", 1],
        abs: ["{\\lvert #1 \\rvert}", 1]
      }
    }
  });
</script>
to a custom.js file and then providing a LaTeX package for inclusion when converting to PDF. The problem I have with this approach is: How to distribute the custom.js file and LaTeX style file for others (collaborators and viewers) to use?
I want collaborators to be able to edit and read my documents without having to install custom extensions in their global configuration. To be specific, I am fine with requiring them to run a command like python setup.py configure once they download/check out my code, which makes local modifications to the project such as populating ipython_notebook_config.py files in all directories containing notebooks, but I am not happy requiring them to install extensions or modify their personal global custom.js file.
My stumbling block here is that I don't know how to add contributions from a local custom.js file to the notebook chain, and suspect that this might violate a security policy.
The best solution would not require any action on my collaborator's part.
I want my notebooks to work on http://nbviewer.org, and for people to be able to download the notebook and produce a PDF. (I think this rules out the possibility of using custom.js hacks and a distributed *.sty file, but am not certain.)
I would prefer to be able to simply start a new notebook and then start writing, without having to insert a bunch of boilerplate code at the start of each notebook, though I would be amenable to a simple way of automating this process using a notebook extension or some hooks in ipython_notebook_config.py.
References
The following posts address some of these issues, but fall short on most fronts:
usepackage and making macros in ipython notebook
Physics bra-ket symbols in IPython (specifically this answer notes related difficulties)
How do I get MathJax to enable the mhchem extension in ipython notebook
Discussions about (potential) problems with the pandoc production of LaTeX files from IPython notebooks:
Getting some problems with pandoc and mathjax
\newcommand environment when convert from markdown to pandoc
Pandoc IPython notebook loses some Mathjax
General discussion of math in notebooks:
How to write LaTeX in IPython Notebook?
I think you can solve some of your problems, but not all.
First, the stumbling block. I believe (though I might be wrong) that nbviewer doesn't look at anything but the notebook itself. For example, I don't see how it could run an ipython_notebook_config.py stored alongside your notebook. So that rules out that line of thought, meaning that I think you'll have to bite the bullet and add boilerplate to every notebook. But you might at least be able to minimize the boilerplate. In that vein:
You could maintain your custom.js (probably under a more descriptive name) on github or whatever, and then add one line of boilerplate to all your notebooks to load that script from the URL. You would still need boilerplate, but it would be a lot shorter.
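As a sketch, that one line of boilerplate could be something like the following (the URL is a placeholder for wherever you host the script):

from IPython.display import Javascript, display

# Placeholder URL -- point it at the raw MathJax-config script you host on GitHub or similar.
display(Javascript(url="https://example.com/mathjax_macros.js"))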
Once you have executed the code cell containing the javascript, it is saved in the notebook, which means that it will automatically happen the next time the browser loads it, even before the code cell is executed. So unless nbviewer prevents the javascript's execution, it should work just fine. This would also make things work nicely for collaborators, since they wouldn't have to download additional files.
As for your own style file, I suspect that anyone sophisticated enough to install ipython and latex, download your notebook, and run nbconvert on it would also be sophisticated enough to download the .sty file. Anyway, I don't see any way around the need to do that...
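For completeness, such a style file could be as small as the following sketch, reusing the macros from the question (the package name is illustrative):

% mymacros.sty (hypothetical name); include it with \usepackage{mymacros}
\ProvidesPackage{mymacros}
\RequirePackage{amsmath}
\newcommand{\vect}[1]{\vec{#1}}
\newcommand{\abs}[1]{\lvert#1\rvert}
\DeclareMathOperator{\erf}{erf}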
Is there a way to verify that an IPython notebook's code is PEP 8 compliant, after it has been exported as an .ipynb file?
.ipynb files are pure JSON; you can read one, concatenate all the code cells, and run pep8 on the result. On the other hand, getting the correct cell number/line number in order to "fix" the violations would be slightly more difficult.
I'm not aware of any project that does it right now.
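A minimal sketch of that approach, assuming the nbformat 4 JSON layout and the pycodestyle package (the renamed successor of the pep8 module); IPython magics are skipped because they are not valid Python:

import json
import pycodestyle  # successor of the old pep8 package

def check_notebook(path):
    """Concatenate a notebook's code cells and run a PEP 8 check over them."""
    with open(path) as f:
        nb = json.load(f)
    lines = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "code":
            continue
        for line in "".join(cell["source"]).splitlines():
            # Skip IPython magics and shell escapes, which are not valid Python.
            if line.lstrip().startswith(("%", "!")):
                continue
            lines.append(line + "\n")
        lines.append("\n")  # blank line between cells
    checker = pycodestyle.Checker(lines=lines)
    return checker.check_all()  # prints violations and returns their count

check_notebook("notebook.ipynb")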
I just modified the pep8.py file to extract the Python code from the JSON and check it for PEP 8 compliance. The modified pep8.py file.
Use it without installing (since it has not yet been reviewed):
python pep8.py notebook.ipynb --format="ipynb"
--format="ipynb" is used to get the line-number offsets on a per-cell basis, instead of a cumulative numbering.
I've sent a Pull Request for this on GitHub.
Though I am not sure whether it will be merged, you might find it useful. Try it out!
EDIT: Looks like the PR won't get merged.
In IPython, I know I can save aliases.
Is there a similar way to save my own functions, so that they are available the next time I run IPython?
Let's say I have a function:
def x(a, b):
    print a+b
I tried using the %store command:
%store x
stored x (function)
But next time I start ipython it says:
name 'x' is not defined
I don't know if the exact feature you are asking about exists in IPython.
What I do to handle this common case is to copy the functions in a .py file saved in the same folder as the notebook. Then, you can import the functions at the beginning of the notebook.
In this way, several notebooks can use the functions that are defined only once. It is also easy to distribute the notebook and the .py file together (for collaborating or moving to different machine).
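A tiny sketch of that layout, reusing the function from the question (the module name is made up):

# my_helpers.py, saved next to the notebook
def x(a, b):
    """Same function as in the question, now importable from any notebook."""
    print(a + b)

and at the top of the notebook:

from my_helpers import x
x(1, 2)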
Alternatively, you can put the functions in an IPython notebook. From a notebook you can "import" another notebook as if it were a .py file; just use a notebook file name without spaces or '-'.
This approach is very similar to the .py file. The advantage is that you can add a "richer" description of the functions (links, images, math formulas, etc.). The drawback is that you cannot seamlessly import the functions into another Python script.
The only drawback of these approaches is that if you have a complex structure of folders and subfolders containing notebooks, you need to copy (or link) the .py file into each subfolder. To avoid that, you can put your .py file in a single location and use absolute imports in the notebooks.
When you start IPython, you have to restore saved variables, aliases, and functions using:
%store -r
There is, however, a chance this will not work, as there is apparently a bug.
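For reference, the round trip with a plain variable looks like this; given the bug mentioned above, functions may not survive the restore, which is another argument for the .py-module approach:

# In one IPython session:
data = [1, 2, 3]
%store data        # persist the variable in IPython's database

# In a later session:
%store -r data     # restore just this variable (plain %store -r restores everything)
print(data)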
I have a set of helper functions I've written and would like to make them available to my IPython notebooks. My ideal set up would be:
Maintain the functions as a series of IPython notebooks
Be able to import the functions into other notebooks and invoke them
Does anybody know of a way to accomplish this?
Have you tried putting the IPython commands/functions into a script and loading it via %run script? The script can be a plain-text file with no need for a #! line, and IPython can load the functions when you invoke %run.
If you keep changing the file after you have run it, you may find that newer changes are not loaded even if you reissue the %run magic from a notebook where you invoked %run earlier. You will need to execute this snippet in order to allow reloading upon each new invocation of %run:
%load_ext autoreload
%autoreload 2
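Putting it together, a sketch of the workflow (file and function names are made up):

# helpers.py -- plain-text script kept next to the notebooks
def greet(name):
    """Example helper function."""
    return "Hello, " + name

and then in a notebook cell:

%run helpers.py
greet("world")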