I currently have 3 environments: root (base), aind-dl, and py2env.
For convenience, I've installed the conda extensions nb_conda and nb_conda_kernels.
When I run jupyter notebook, nb_conda_kernels detects 5 kernels, several of which are redundant in the listing, and some kernels give an error saying they do not exist.
How do I remove the redundant kernels?
Envs I have: (listing omitted)
Redundant kernels: (listing omitted)
Normally, you can use jupyter kernelspec remove <env_name> to remove a specific kernel.
In your case, the conda root and default kernels seem to be leftovers from an Anaconda Python 2 installation. The related discussion for this issue may be helpful; the idea there is to disable/remove nb_conda. I have yet to run into this issue with Anaconda3, since I manage multiple kernels manually using the instructions here.
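For concreteness, a minimal sketch of that cleanup (the kernel name below is a placeholder; use the names that jupyter kernelspec list actually prints):

jupyter kernelspec list
# stale Python 2-era kernels often show up here with paths that no longer exist
jupyter kernelspec remove <stale_kernel_name>

Kernelspecs removed this way disappear from the notebook's kernel menu the next time the server starts.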
Related
I have multiple people working on the same AWS EMR cluster to run Spark jobs. This is done through Jupyter notebooks, which are created/modified using the Jupyter extension on an SSH target through VS Code. The modules are installed in the base conda environment included with /emr/notebook-env/. Some people can see the correct kernel, associated with the base conda environment, in their VS Code window when working on notebooks; however, some don't see this kernel as an option. How do I make sure that everyone's VS Code lists the appropriate kernel when they create new notebooks or modify existing ones?
Another potential reason this could happen is that the Jupyter extension for VS Code is not installed.
To add the Jupyter extension in VS Code, click the Extensions icon in the left-hand toolbar, search for Jupyter, and install it.
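If you prefer the command line, the same install can be done with VS Code's CLI (a sketch; ms-toolsai.jupyter is the extension's published identifier):

code --install-extension ms-toolsai.jupyter
code --list-extensions   # verify it now appears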
The user having the issue had to update their VS Code, which fixed the problem.
I started the Udacity deep learning course and was setting up environments. I think the kernel the notebook uses is not the Python from my conda environment. Below are the results of some things I have tried.
I started the conda environment:
source activate tensorflow
From a Python shell inside the conda environment (started from the Linux terminal):
>>> import sys
>>> sys.executable
'/home/username/anaconda2/envs/tensorflow/bin/python'
tensorflow also imports successfully in this Python shell.
An IPython shell inside the conda environment shows the same executable path, and tensorflow imports there as well.
However, when I execute a cell in a Jupyter notebook, the tensorflow module cannot be found. A terminal spawned from the notebook also shows the executable path of the global Python installation in the anaconda/bin directory, not that of the environment from which I started the notebook:
'/home/username/anaconda2/bin/python'
However, the shell's conda environment is still tensorflow:
conda info --envs
# conda environments:
#
tensorflow * /home/username/anaconda2/envs/tensorflow
root /home/username/anaconda2
Does that mean the kernel is linked to the Python installation in that location and not to the conda env? How do I link it to the environment?
There is some more nuance to this question that is worth clarifying. Each notebook is bound to a particular kernel. With the 4.0 release of Anaconda, we (Continuum) have bundled a conda-environment-aware extension that tries to associate a notebook with a particular conda environment; if none can be found, the "default" (or "root") environment is used. In your case, I am guessing, the notebook is asking for the default ("root") environment, so Jupyter starts a kernel in that environment rather than in the environment from which the Jupyter server was started. You can change the associated kernel by going to the Kernel -> Change kernel menu and picking your tensorflow environment's kernel.
Alternatively, when you create a new notebook, you can pick at that time which conda environment's kernel should back it (note that one conda environment can have multiple kernels available, e.g. Python and R).
We appreciate that this can be a common cause of confusion, especially when sharing notebooks, since the person who shared the notebook either used the "default" kernel (probably called just "Python") or was using a conda environment with a different name. We are working on ways to make this smoother and less confusing, but if you have suggestions for expected/desired behavior, please let us know (a GitHub issue at https://github.com/ContinuumIO/anaconda-issues/issues/new is the best way to do this).
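If the tensorflow environment's kernel never appears in those menus, one standard workaround (my addition, not part of the answer above) is to register the environment's kernel explicitly via ipykernel; the display name is just an arbitrary label:

source activate tensorflow
conda install ipykernel
python -m ipykernel install --user --name tensorflow --display-name "Python (tensorflow)"

After restarting the notebook server, "Python (tensorflow)" should be selectable under Kernel -> Change kernel.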
I'm using the fantastic SageMath Cloud service to remotely collaborate with a partner. In particular, I'm using IPython notebooks. Unfortunately, the language seems to default to Python 2; I would prefer Python 3.
SSH'ing into my project, I can see that IPython 3 is actually installed. Is there a way to coerce SMC into using Python 3 for notebooks?
I have tried the instructions mentioned in the FAQ, i.e.,
ln -s /usr/bin/python3 ~/bin/python
ln -s /usr/bin/ipython3 ~/bin/ipython
While this works for invoking Python from the SSH commandline, it doesn't seem to affect the kernel used by IPython notebooks created from the web GUI.
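Some background on why the symlinks don't help (my addition): the notebook server chooses an interpreter from kernelspec files, not from whichever python is first on your PATH. You can see exactly what a kernel launches; the paths below are illustrative, and on older IPython-3-era setups the command is ipython kernelspec list rather than jupyter kernelspec list:

jupyter kernelspec list
# e.g.  python3    /usr/local/share/jupyter/kernels/python3
cat /usr/local/share/jupyter/kernels/python3/kernel.json
# the "argv" field holds the exact interpreter the kernel starts with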
Once you open an IPython Notebook on SageMathCloud you can switch the kernel to a variety of choices, including Python 3. To do that, use the 'Kernel' menu, then 'Change kernel', then 'Python 3'.
Switching to the Python 3 kernel in the IPython notebook on SageMathCloud is discussed in this thread on the sage-cloud mailing list.
Is this what your question is about, or are you asking how to make that choice the default when you open a new IPython Notebook on SageMathCloud?
To get the fastest answers to SageMathCloud questions, use the sage-cloud mailing list.
I am running IPython Notebook on Enthought's Canopy 64-bit distribution on Ubuntu 14.04.
I've tried installing libtiff, but when I import it in IPython Notebook, the kernel always dies at the import statement. What could be causing this? Canopy is my default Python distribution and my paths all seem to be set up appropriately, but I'm convinced that something in my Python setup is borked.
Any advice is appreciated.
EDIT: I'll be more specific. Output of sys.path:
['',
'/home/joe/Enthought/Canopy_64bit/User/src/svn',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python27.zip',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/plat-linux2',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/lib-tk',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/lib-old',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/lib-dynload',
'/home/joe/Enthought/Canopy_64bit/User/lib/python2.7/site-packages',
'/home/joe/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/PIL',
'/home/joe/opencv-2.4.9',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/site-packages',
'/home/joe/Canopy/appdata/canopy-1.4.1.1975.rh5-x86_64/lib/python2.7/site-packages/IPython/extensions']
As for how to install Python packages, I assume I go to ~/Enthought/Canopy_64bit/User/lib/python2.7/site-packages and run pip, setup.py, or a shell script, per the specific package's instructions. Is that correct? The article I linked has the following line: "To install a package which is not available in the Canopy / EPD repository, follow standard Python installation procedures from the OS command line." That seems to imply that I install per package instructions.
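As a concrete sketch of that standard procedure (the package name is a placeholder): you don't run pip from inside site-packages; you activate the Canopy User environment and let pip put the files there:

source /home/joe/Enthought/Canopy_64bit/User/bin/activate
pip install some-package
python -c "import some_package"   # quick sanity check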
In .bashrc, I have the following:
VIRTUAL_ENV_DISABLE_PROMPT=1 source /home/joe/Enthought/Canopy_64bit/User/bin/activate
export PYTHONHOME=/home/joe/Enthought/Canopy_64bit/User/bin
export PATH=/home/joe/Enthought/Canopy_64bit/User/bin
export PYTHONPATH=/home/joe/Enthought/Canopy_64bit/User/bin
From what I understand of the linked articles, this means I'm setting Canopy User as my default Python distribution. I'm sure I'm doing something a bit over my head here, but I can't work out what else I need to do to fix this issue.
Worse yet, with these .bashrc settings I now get "ImportError: No module named site" when trying to start IPython Notebook or Python from the command line; I can run them only from the Canopy GUI.
Closing this. I made it harder than necessary.
It turns out the PYTHONHOME and PYTHONPATH variables in .bashrc were causing conflicts; commenting them out resolved the issue.
Installing outside packages does, indeed, happen from the home (~) directory.
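For anyone landing here, a sketch of the corrected .bashrc per that fix (appending to PATH instead of replacing it is my addition; the original replacement hides system commands):

VIRTUAL_ENV_DISABLE_PROMPT=1 source /home/joe/Enthought/Canopy_64bit/User/bin/activate
# export PYTHONHOME=...   # commented out: conflicted with interpreter startup
# export PYTHONPATH=...   # commented out: same reason
export PATH=/home/joe/Enthought/Canopy_64bit/User/bin:$PATH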
Background
I use Anaconda's IPython on my Mac, and it's a great tool for data exploration and debugging. However, when I wish to use IPython for programs that require a virtualenv (e.g. a Django web app), I don't want to have to reinstall IPython in every environment.
Question
Is there a way to use my local IPython while also using the rest of my virtualenv packages? (i.e., make IPython the one exception to the virtualenv's packages, so that the local IPython setup is available no matter what.) If so, how would you do this on a Mac? My guess is that it involves some nifty .bash_profile changes, but my limited knowledge there hasn't been fruitful. Thanks.
Example Usage
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This drops me to a debugger prompt at the command line (which I wish were IPython).
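For reference (my addition, not part of the question), the IPython counterpart of this pattern, assuming IPython is importable at that point, is:

from IPython import embed
embed()   # opens a full IPython shell at this line instead of plain pdb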
If you have a module in your local Python that is not in the virtualenv, it will still be available in the virtualenv, unless you shadow it with another version installed in the virtualenv. Did you try launching your local IPython from a running virtualenv that didn't have its own IPython? It should work.
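A quick way to test that (a sketch; the environment name is arbitrary):

virtualenv myenv && source myenv/bin/activate   # env without its own IPython
which ipython        # should resolve to the IPython outside the env
ipython -c "import sys; print(sys.executable)"  # confirm which Python it runs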
Will, I assume you are using Anaconda's conda package manager (which combines the features of pip and virtualenv)? If so, you should be aware that many parts of it do not work exactly like the tools it replaces. E.g., if you use conda create -n myenv to create your virtual environment, this differs from a "normal" virtualenv in a number of ways. In particular, there are no "global/default" packages: even the default installation is essentially an environment ("root"), just like all other environments.
To obtain the usual virtualenv behavior, you can create your environments by cloning the root environment: conda create -n myenv --clone root. However, unlike with a regular virtualenv, if you later make changes to the default installation (the "root" environment in conda), those changes are not reflected in environments that were created by cloning root.
An alternative to cloning the root is to keep an updated list of "default packages" that you want to be available in new environments. This is managed by the create_default_packages option in the condarc file.
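A minimal sketch of that option (the package names are just examples):

# in ~/.condarc
create_default_packages:
  - pip
  - ipython

# or equivalently from the shell:
conda config --add create_default_packages pip
conda config --add create_default_packages ipython

Every subsequent conda create -n newenv python will then include these packages, unless --no-default-packages is passed.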
In summary: Don't treat your conda environments like regular python virtualenvs - even though they appear deceptively similar in many regards. Hopefully at some point the two implementations will converge.