Background
I use Anaconda's IPython on my Mac and it's a great tool for data exploration and debugging. However, when I want to use IPython with programs that require a virtualenv (e.g. a Django web app), I don't want to have to reinstall IPython in every environment.
Question
Is there a way to use my local IPython while also using the rest of my virtualenv packages? (i.e. make IPython the one exception to the virtualenv's isolation, so that the local IPython setup is available no matter what.) If so, how would you do this on a Mac? My guess is that it would involve some nifty .bash_profile changes, but my limited knowledge of it hasn't been fruitful. Thanks.
Example Usage
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This drops me into the standard pdb prompt at the command line (which I wish were IPython).
If you have a module in your local Python and not in the virtualenv, it will still be available in the virtualenv, unless you shadow it with a version installed inside the virtualenv. Did you try to launch your local IPython from a running virtualenv that didn't have an IPython? It should work.
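As a quick check (a minimal sketch; the env path is illustrative, and this assumes the env has no IPython of its own):
source /path/to/myenv/bin/activate
which ipython      # with no env-local IPython, this falls through to the global install
ipython -c "import sys; print(sys.executable)"   # shows which Python the global IPython actually runs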
Will, I assume you are using Anaconda's "conda" package manager (which combines the features of pip and virtualenv)? If so, you should be aware that many parts of it do not work exactly like the tools it replaces. E.g. if you are using conda create -n myenv to create your virtual environment, the result differs from a "normal" virtualenv in a number of ways. In particular, there are no "global/default" packages: even the default installation is essentially an environment ("root") like all other environments.
To obtain the usual virtualenv behavior, you can create your environments by cloning the root environment: conda create -n myenv --clone root. However, unlike with a regular virtualenv, if you later make changes to the default installation (the "root" environment in conda), those changes are not reflected in the environments that were cloned from it.
An alternative to cloning the root is to keep an updated list of "default packages" that you want to be available in new environments. This is managed by the create_default_packages option in the .condarc file.
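The option can also be managed from the command line; a hedged sketch (ipython here is just an example package):
conda config --add create_default_packages ipython
# every subsequent `conda create -n myenv ...` now includes ipython;
# pass --no-default-packages to skip the defaults for a particular env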
In summary: don't treat your conda environments like regular Python virtualenvs, even though they appear deceptively similar in many regards. Hopefully at some point the two implementations will converge.
Related
Assuming that VSCode is installed and an Anaconda environment is set up. The default conda environment contains language analysis and compilation tools like stack (for Haskell) and yamllint (for YAML). However, since conda 3.4 they live under <conda_install_dir>/bin, which is not included in the system PATH environment variable; the only executable on PATH is conda itself.
After installing VSCode and its plugins, I frequently encounter error messages indicating that these tools' executables cannot be found. E.g. for stack, the message is:
Project requires Cabal but it isn't installed
It appears that the only way to make it work is to override the PATH variable in VSCode so that it differs from the one used by the OS. Is there an option that allows this overriding?
Thanks a lot for your help
Have you tried updating your PATH to simply include it?
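For instance (a sketch only; it assumes conda lives under ~/miniconda3 and that VSCode inherits the shell's environment when launched from a terminal):
export PATH="$HOME/miniconda3/bin:$PATH"   # e.g. in ~/.bash_profile, before VSCode starts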
I'm using a Python library which uses click autocompletion. Since I've installed the library in a conda env, I'd like the autocomplete to be associated with it. (Also, since it isn't installed in my primary Python env, adding eval "$(_FOO_BAR_COMPLETE=source_zsh foo-bar)" to my .zshrc doesn't work.) The documentation for the library I'm using says "if gradient was installed in a virtual environment, the following has to be added to the activate script":
eval "$(_GRADIENT_COMPLETE=source gradient)"
I originally added this to ~/miniconda3/envs/my_env/lib/python3.6/venv/scripts/common/activate, but the autocompletion didn't work. Running
source ~/miniconda3/envs/my_env/lib/python3.6/venv/scripts/common/activate
does work, but my shell prepends via __VENV_DIR__ to the prompt, and the fact that this doesn't happen automatically when I run conda activate my_env makes me think this is the wrong way to do it (for one, it isn't undone when I run conda deactivate).
What I'm looking for is the canonical way to add a script to run upon conda activate x, then end upon conda deactivate x. This seems very close, but it's for adding shell variables with export and unset. Is there a way to do it with click's autocomplete?
A small modification of the instructions in the docs worked for me: I placed the eval statement in env_vars.sh, and put nothing in deactivate.d.
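For the record, a sketch of what that looks like (the env name and miniconda path are assumptions):
mkdir -p ~/miniconda3/envs/my_env/etc/conda/activate.d
cat > ~/miniconda3/envs/my_env/etc/conda/activate.d/env_vars.sh <<'EOF'
eval "$(_GRADIENT_COMPLETE=source gradient)"
EOF
# scripts in activate.d are sourced on every `conda activate my_env`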
My understanding is that an export persists in the shell until it is undone with a corresponding unset, whereas the eval'd statement only takes effect in the session where it runs, so as soon as the conda env is deactivated it no longer has an effect.
Would be happy to hear more from someone with a deeper understanding of bash/conda under the hood!
I have pip-installed powerline-shell in my base conda env. Switching envs yields the following error:
conda activate <env_name>
-bash: powerline-shell: command not found
I also tried running conda init powershell, but it took no action.
I have miniconda3, with conda 4.7, installed on macOS Mojave.
I don't know a simple solution to this. I'm thinking you either need to install it in every env (which I don't recommend, because it's best to avoid using pip in Conda) or create a link to the powerline-shell binary in a location you can keep on PATH, to avoid adding the entire miniconda3/bin/ directory to PATH. I've done something like this in the past, but never with a Python entry point before.
I'd try something like
mkdir -p ~/.local/bin
ln -s /your/path/to/miniconda3/bin/powerline-shell ~/.local/bin/powerline-shell
Then add ~/.local/bin to PATH in your .bashrc, probably toward the beginning (e.g., before the Conda section). The path here (~/.local/bin) is totally arbitrary, so adjust it to your preferences. The main point is to minimize what you expose globally in a shell session.
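Something like this (a sketch; the placement relative to the conda section is a suggestion, not a requirement):
export PATH="$HOME/.local/bin:$PATH"   # in ~/.bashrc, before the conda section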
Note: conda init powershell is for Windows PowerShell users.
I had started the Udacity deep learning course and was setting up environments. I think the kernel the notebook uses is not the Python from my conda environment. Following are the results of some things I have tried.
Started the conda environment:
source activate tensorflow
With a Python shell inside the conda environment (started from the Linux terminal):
>>> import sys
>>> sys.executable
'/home/username/anaconda2/envs/tensorflow/bin/python'
tensorflow also imports successfully in this Python shell.
An IPython shell inside the conda environment shows the same executable path, and tensorflow imports there as well.
However, when I execute a cell in a Jupyter notebook, the tensorflow module cannot be found. A terminal spawned from the notebook also shows the executable path of the global Python installation in the anaconda/bin directory, not that of the environment from which I started the notebook:
'/home/username/anaconda2/bin/python'
However, the shell's conda environment is still tensorflow:
conda info --envs
# conda environments:
#
tensorflow * /home/username/anaconda2/envs/tensorflow
root /home/username/anaconda2
Does that mean the kernel is linked to the Python installation in this location and not to the one in my conda env? How can I link it to the env?
There is some more nuance to this question that is good to clarify. Each notebook is bound to a particular kernel. With the 4.0 release of Anaconda, we (Continuum) have bundled a Conda-environment-aware extension that tries to associate a notebook with a particular Conda environment. If one cannot be found, the "default" (or "root") environment is used. In your case, I am guessing the notebook is asking for the default ("root") environment, so Jupyter starts a kernel in that environment rather than in the environment from which the Jupyter server was started. You can change the associated kernel by going to the Kernel -> Change kernel menu and picking your tensorflow environment's kernel.
Alternatively, when you create a new notebook you can pick at that time which Conda environment's kernel should back it (note that one Conda environment can have multiple kernels available, e.g. Python and R).
We appreciate that this can be a common cause of confusion, especially when sharing notebooks, since the person who shared one either used the "default" kernel (probably called just "Python") or was using a Conda environment with a different name. We are working on ways to make this smoother and less confusing, but if you have suggestions for expected/desired behavior, please let us know (filing a GitHub issue at https://github.com/ContinuumIO/anaconda-issues/issues/new is the best way to do this).
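As an aside, another common way to expose an environment's kernel to Jupyter is to register it explicitly with ipykernel; this is not the extension described above, just a sketch of the generic approach (names are illustrative):
source activate tensorflow
conda install ipykernel        # or: pip install ipykernel
python -m ipykernel install --user --name tensorflow --display-name "Python (tensorflow)"
# the kernel then appears in the Kernel -> Change kernel menu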
I use virtualenv to maintain environments for projects locally. I also use virtualenvwrapper, so I can switch between environments using
workon project1
However, when using virtualenv, you need your virtual environment to be active. I've just installed virtualenv on an EC2 instance, but how can I make sure the environment stays active? My best attempt at doing this right now is just putting the proper virtualenv commands in .bashrc. However, I'm not exactly sure how this all works... if the server restarts, will .bashrc be run?
Essentially, what is the best way to keep a virtualenv always active on a production server?
"Activating a virtualenv" basically means you are changing your $PATH environment variable.
If you want to always have a virtualenv activated, prepend your virtualenv's bin path to the $PATH environment variable somewhere that gets executed before your commands run (~/.bashrc is one option).
Example (using ~/.bashrc):
export PATH=/path/to/myenv/bin:$PATH
(Assuming /path/to/myenv is where my virtualenv is placed)
~/.bashrc is only executed when you start a new bash shell (even after restarts). If you never start a bash shell, ~/.bashrc never gets executed.
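A related aside: since activation mostly just adjusts PATH, you can often skip it entirely and call the env's interpreter by its full path (the path is illustrative):
/path/to/myenv/bin/python myapp.py   # the env's python sees the env's packages without activation
# this also works in cron jobs and init scripts, where ~/.bashrc never runs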
If you want your virtualenv to be permanently tied to your project, you can put the following lines directly into your code:
activate_this = 'this_is_my_project/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))  # Python 2
# On Python 3, execfile() no longer exists; use exec() instead:
# exec(open(activate_this).read(), dict(__file__=activate_this))