I work with Jupyter notebooks in JetBrains' DataSpell. In two places I can configure a Python interpreter. What is the difference between
Python Interpreter (lower right; in my case the Conda env "mpd")
Jupyter Server / Managed Server (upper right; in my case WSL)
I would think these should be the same, no?
In the lower right corner you see the default Python interpreter configured in your system to run your Python scripts, e.g. when you run a *.py file in your DataSpell workspace.
The one shown under the Jupyter configuration is the one that will be used to run your notebooks.
If you run Jupyter locally and have just one interpreter, it will be the same for both. However, you can have multiple Python environments in your system and use one for Jupyter and another for your local scripts.
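One way to verify which interpreter a given notebook (or script) is actually running under is to print sys.executable from inside it:
import sys
print(sys.executable)  # path of the interpreter backing this kernel or script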
I have multiple people working on the same AWS EMR cluster to run some Spark jobs. This is done through Jupyter notebooks that are created/modified with the Jupyter extension on an SSH target in VS Code. The modules are installed in the base conda environment included with /emr/notebook-env/. Some people can see the correct kernel associated with the base conda environment in their VS Code window when working on notebooks; however, some don't see this kernel as an option. How do I make sure that everyone's VS Code lists the appropriate kernel when they create new notebooks or modify existing ones?
Another potential reason this could happen is that the Jupyter extension for VS Code is not installed.
To add the Jupyter extension, click the Extensions icon in the left-hand toolbar, search for "Jupyter", and install it.
The user having the issue had to update their VS Code, and that fixed the issue.
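Independent of the extension, it can also help to confirm on the SSH target itself that the expected kernel is registered at all; a quick diagnostic (not a fix) is:
# run on the remote (SSH target) machine
$ jupyter kernelspec list   # lists registered kernels and their directories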
I am trying to run some notebooks in my virtual environment in VS Code (remotely connected). I create the venv as usual via python3 -m venv <venv-name>, activate it, and install all the needed modules. When I run which ipython I get the one from the venv, so I install the kernel via ipython kernel install --name "<name>" --user; it is successfully created in the ~/.local/share/jupyter/kernels/ directory and its kernel.json points to the venv's Python. Then I open VS Code and set both Python: Select Interpreter and Jupyter: Select Interpreter to start Jupyter server to point to the virtual environment's Python, something like .../<venv-name>/bin/python3.
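For reference, the full setup sequence on the remote machine looks roughly like this (names in angle brackets are placeholders):
$ python3 -m venv <venv-name>
$ source <venv-name>/bin/activate
$ pip install ipython ipykernel   # ipykernel provides the kernel machinery
$ ipython kernel install --name "<name>" --user
# creates ~/.local/share/jupyter/kernels/<name>/kernel.json pointing at the venv's python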
However, when I try to run a cell it asks me to select a kernel (I can also do this myself in the upper right corner of VS Code), but my newly created kernel is not there. There are only two (identical) ones from /usr/bin/python.
It is really strange, since twice in two days my kernel magically appeared for one notebook and worked as desired, but when I opened a new notebook, my kernel was gone again. I tried removing/reinstalling the kernels, the venvs, and VS Code's Python and Jupyter extensions, but nothing helped. Any suggestions?
For now, I start the server on the remote machine via jupyter notebook --no-browser --ip=<ip> and then paste the connection link into the Jupyter Server entry in the bottom right corner of VS Code's status bar, but I am wondering whether there is an easier way, since everything (except VS Code) is on a remote machine.
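Concretely, the current workaround looks like this (default port 8888 assumed):
# on the remote machine
$ jupyter notebook --no-browser --ip=<ip>
# then paste the printed http://<ip>:8888/?token=... URL into the
# Jupyter Server entry in VS Code's status bar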
This way is not easy; you can set up the Jupyter kernel more simply.
First, use SSH to connect to the remote server.
Second, open the Command Palette (⇧⌘P) and run Python: Select Interpreter; you can then connect directly to the remote kernel.
Resource: https://code.visualstudio.com/docs/datascience/jupyter-notebooks
I started the Udacity deep learning course and was setting up environments. I think the kernel the notebook uses is not the Python from my conda environment. Following are the results of some things I have tried.
Started the conda environment:
source activate tensorflow
From a Python shell inside the conda environment (launched from the Linux terminal):
>>> import sys
>>> sys.executable
'/home/username/anaconda2/envs/tensorflow/bin/python'
tensorflow also imports fine in that Python shell.
An IPython shell inside the conda environment shows the same executable path, and tensorflow imports there as well.
However, when I execute a cell in a Jupyter notebook, the tensorflow module cannot be found. Also, a terminal spawned from the notebook shows the executable path of the global Python installation in the anaconda2/bin directory, not of the environment from which I started the notebook:
'/home/username/anaconda2/bin/python'
However, the shell's conda environment is still tensorflow:
conda info --envs
# conda environments:
#
tensorflow * /home/username/anaconda2/envs/tensorflow
root /home/username/anaconda2
Does that mean the kernel is linked to the Python installation in that location and not to the conda env? How do I link it to the environment?
There is some more nuance to this question that is good to clarify. Each notebook is bound to a particular kernel. With the 4.0 release of Anaconda, we (Continuum) have bundled a Conda-environment-aware extension that tries to associate a notebook with a particular Conda environment. If that environment cannot be found, the "default" (or "root") environment is used. In your case, I am guessing, the notebook is asking for the default ("root") environment, so Jupyter starts a kernel in that environment and not in the environment from which the Jupyter server was started. You can change the associated kernel via the Kernel -> Change kernel menu, picking your tensorflow environment's kernel.
Alternatively, when you create a new notebook, you can pick at that time which Conda environment's kernel should back it (note that one Conda environment can have multiple kernels available, e.g. Python and R).
We appreciate that this can be a common cause of confusion, especially when sharing notebooks, since the person who shared the notebook either used the "default" kernel (probably called just "Python") or a Conda environment with a different name. We are working on ways to make this smoother and less confusing, but if you have suggestions for expected/desired behavior, please let us know (a GitHub issue at https://github.com/ContinuumIO/anaconda-issues/issues/new is the best way to do this).
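If the environment's kernel does not show up in that menu at all, a common general fix (separate from the Anaconda extension described above, and assuming you are willing to install ipykernel into the environment) is to register the kernel explicitly:
# inside the activated tensorflow environment
$ conda install ipykernel
$ python -m ipykernel install --user --name tensorflow --display-name "Python (tensorflow)"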
I have installed Anaconda for my machine learning course. I'm using it as an IPython (Jupyter) notebook, in which we have lessons. The OS is Ubuntu 14.04 LTS. Basically, I always run it from the terminal with:
jupyter notebook
I have created a new environment called su_env from the root environment (an exact copy) with one package added. Now I'm wondering: how can I set su_env as the default environment? I have dozens of notebooks, so it's annoying to set the environment every time for every notebook in Jupyter's "web" GUI.
EDIT: I'm interested in a solution where you don't have to set the environment before running the notebook. My reasoning is that Jupyter somehow, automagically, selects the root environment on its own while starting up. Because of that, I'm wondering whether it's possible to set some config file or similar so that Jupyter selects su_env instead of root. Also, if you know that this is not possible (and why), I would like to know that.
First activate the conda environment from the command line, then launch the notebook server.
For example:
$ source activate env_name
$ jupyter notebook
Note: This might only work with environments that were created from within Jupyter Notebook, not environments that were created using conda create on the command line.
In your ~/.bashrc, include the line:
alias jupyter="source activate su_env; jupyter"
This condenses the two commands into one, and su_env will be activated whenever you call jupyter notebook, jupyter lab, or any other jupyter subcommand.
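Reload your shell configuration for the alias to take effect; after that, any jupyter invocation activates the environment first:
$ source ~/.bashrc
$ jupyter notebook   # expands to: source activate su_env; jupyter notebook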
Edit your .bashrc and add source activate su_env; then that env will always be active. To switch back to root (or any other env), run source activate env_name.
You can use this at the conda prompt:
conda activate env_name
jupyter notebook
source activate env_name gives me an error: 'source' is not recognized as an internal or external command, operable program, or batch file.
Background
I use Anaconda's IPython on my Mac and it's a great tool for data exploration and debugging. However, when I wish to use IPython for programs that require a virtualenv (e.g. a Django web app), I don't want to have to reinstall IPython in every virtualenv.
Question
Is there a way to use my local IPython while also using the rest of my virtualenv packages? (I.e. make IPython the exception to the virtualenv packages, so that the local IPython setup is available no matter what.) If so, how would you do this on a Mac? My guess is that it would take some nifty .bash_profile changes, but my limited knowledge of it hasn't been fruitful. Thanks.
Example Usage
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This drops me into the standard command-line debugger (which I wish were IPython).
If you have a module in your local Python but not in the virtualenv, it will still be available in the virtualenv, unless you shadow it with a different version installed in the virtualenv. Did you try to launch your local IPython from a running virtualenv that doesn't have its own IPython? It should work.
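A quick way to test that fallback, assuming the Anaconda install is on your PATH and the virtualenv ("myvenv" here is a placeholder) has no IPython of its own:
$ source myvenv/bin/activate
(myvenv) $ which python    # resolves to myvenv/bin/python
(myvenv) $ which ipython   # falls back to the Anaconda ipython, since the venv has none
(myvenv) $ ipython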
Will, I assume you are using Anaconda's "conda" package manager (which combines the features of pip and virtualenv)? If so, you should be aware that many parts of it do not work exactly like the tools it replaces. E.g., if you use conda create -n myenv to create your virtual environment, this differs from a "normal" virtualenv in a number of ways. In particular, there are no "global/default" packages: even the default installation is essentially an environment ("root"), just like all the other environments.
To obtain the usual virtualenv behavior, you can create your environments by cloning the root environment: conda create -n myenv --clone root. However, unlike with a regular virtualenv, if you later make changes to the default installation (the "root" environment in conda), those changes are not reflected in environments that were created by cloning it.
An alternative to cloning the root is to keep an updated list of "default packages" that you want to be available in new environments. This is managed by the create_default_packages option in the condarc file.
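For example, a ~/.condarc along these lines (the package names are only illustrative) makes every newly created environment start out with those packages:
# ~/.condarc
create_default_packages:
  - ipython
  - pip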
In summary: don't treat your conda environments like regular Python virtualenvs, even though they appear deceptively similar in many regards. Hopefully at some point the two implementations will converge.