How to keep virtualenv always on in production? - virtualenv

I use virtualenv to maintain environments for projects locally. I also use virtualenvwrapper, so I can switch between environments using
workon project1
However, when using virtualenv, you need your virtual environment to be active. I've just installed virtualenv on an EC2 instance, but how can I make sure the environment stays active? My best attempt at doing this right now is just putting the proper virtualenv commands in .bashrc. However, I'm not exactly sure how this part all works... if the server restarts, will the .bashrc be run?
Essentially, what is the best way to keep a virtualenv always on on a production server?

"Activating a virtualenv" basically means you are changing your $PATH environment variable.
If you want to always have a virtualenv activated, prepend your virtualenv bin path to $PATH environment variable somewhere that gets executed before your commands run (~/.bashrc is an option).
Example (using ~/.bashrc):
export PATH=/path/to/myenv/bin:$PATH
(Assuming /path/to/myenv is where my virtualenv is placed)
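After opening a new shell, you can confirm that the virtualenv's interpreter now wins the PATH lookup (path illustrative):
which python
# should print /path/to/myenv/bin/python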
~/.bashrc is only executed when you start a new bash shell (even after restarts). If you never start a bash shell, ~/.bashrc never gets executed.
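That also means anything that never goes through an interactive bash shell (cron jobs, systemd services, etc.) will not see the change. A simple workaround, sketched with the same illustrative paths, is to invoke the virtualenv's interpreter directly, which for most purposes is equivalent to running with the env activated:
# e.g. in a cron entry or service unit (paths are illustrative)
/path/to/myenv/bin/python /path/to/myapp/app.py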

If you want your virtualenv to be really permanent to your project, you could put the following lines directly into your code (the original two-line recipe used Python 2's execfile; the version below also works on Python 3):
activate_this = 'this_is_my_project/bin/activate_this.py'
# read and exec the activation script, passing __file__ so it can locate the env
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))

Related

Adding click autocompletion to conda env activate script

I'm using a Python library which uses click autocompletion. Since I've installed the library in a conda env, I'd like the autocomplete to be associated with it. (Also, since it isn't installed in my primary Python env, adding eval "$(_FOO_BAR_COMPLETE=source_zsh foo-bar)" to my .zshrc doesn't work.) The documentation for the library I'm using says "if gradient was installed in a virtual environment, the following has to be added to the activate script":
eval "$(_GRADIENT_COMPLETE=source gradient)"
I originally added this to ~/miniconda3/envs/my_env/lib/python3.6/venv/scripts/common/activate, but the autocompletion didn't work. Running
source ~/miniconda3/envs/my_env/lib/python3.6/venv/scripts/common/activate
does work, but my shell prepends via __VENV_DIR__ to the prompt, and the fact that this doesn't happen automatically when I run conda activate myenv makes me think this is the wrong way to do it (for one, it isn't disabled when I do conda deactivate my_env).
What I'm looking for is the canonical way to add a script to run upon conda activate x, then end upon conda deactivate x. This seems very close, but it's for adding shell variables with export and unset. Is there a way to do it with click's autocomplete?
A small modification of the instructions in the docs worked for me: I placed the eval statement in env_vars.sh under activate.d, and put nothing in deactivate.d.
My understanding is that an export persists in the shell and so must be undone with a corresponding unset, whereas the eval only works for that activation, so as soon as the conda env is deactivated it no longer has an effect.
Would be happy to hear more from someone with a deeper understanding of bash/conda under the hood!
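For reference, a minimal sketch of that setup, assuming conda's standard activate.d/deactivate.d hook layout and the miniconda3 paths from the question:
# create the per-env activation hook directory
mkdir -p ~/miniconda3/envs/my_env/etc/conda/activate.d
# this script is sourced every time you run `conda activate my_env`
echo 'eval "$(_GRADIENT_COMPLETE=source gradient)"' > ~/miniconda3/envs/my_env/etc/conda/activate.d/env_vars.sh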

powershell and conda: conda activate env returns command not found

I have pip installed powerline-shell in my base conda env. Switching envs yields the following error:
conda activate <env_name>
-bash: powerline-shell: command not found
I also tried running conda init powershell but it took no action.
I have miniconda3, with conda 4.7, installed on macOS Mojave.
I don't know a simple solution to this. I'm thinking you either need to install it in every env (which I don't recommend because it's best to avoid using pip in Conda) or you create a link to the powerline-shell binary in another location that you can keep on PATH to avoid adding the entire miniconda3/bin/ directory to PATH. I've done something like this in the past, but never with a Python entry point before.
I'd try something like
mkdir -p ~/.local/bin
ln -s /your/path/to/miniconda3/bin/powerline-shell ~/.local/bin/powerline-shell
Then add .local/bin to PATH in your .bashrc, probably toward the beginning (e.g., before the Conda section). The path here (~/.local/bin) is totally arbitrary, so adjust to your preferences. Main point is to minimize what you are exposing globally in a shell session.
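For example, a line like this near the top of ~/.bashrc (assuming the ~/.local/bin location used above):
export PATH="$HOME/.local/bin:$PATH"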
Note: conda init powershell is for Windows PowerShell users.

Is there a venv-specific equivalent of virtualenvwrapper's 'postactivate'

After starting my virtualenv, I would like to run a script that depends on some specifics of the environment (e.g. bash completions for some tools installed there). Is there a way to do this on a per-venv basis — effectively a local version of postactivate?
AFAIU postactivate is sourced at workon time. With plain venv there is no such hook: you activate by sourcing bin/activate yourself, so you can simply add your code at the end of the bin/activate script.
To emulate predeactivate, edit the same bin/activate script and put your teardown code inside the deactivate function it defines.
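As a sketch, assuming a venv named myenv and a hypothetical per-env completion file, the postactivate equivalent is just extra lines at the end of the activate script:
# appended to the end of myenv/bin/activate, so it runs on every activation
# ($VIRTUAL_ENV is set earlier in the same script)
source "$VIRTUAL_ENV/etc/my_completions.sh"   # hypothetical per-env completions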

How to use ipython without installing in every virtualenv?

Background
I use Anaconda's IPython on my mac and it's a great tool for data exploration and debugging. However, when I wish to use IPython for my programs that require virtualenv (e.g. a Django web app), I don't want to have to reinstall IPython every time.
Question
Is there a way to use my local IPython while also using the rest of my virtualenv packages? (i.e. just make IPython the exception to virtualenv packages so that the local IPython setup is available no matter what) If so, how would you do this on a mac? My guess is that it would be some nifty .bash_profile changes, but my limited knowledge with it hasn't been fruitful. Thanks.
Example Usage
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This would bring it to the command line (that I wish was IPython)
If you have a module in your local Python and not in the virtualenv, it will still be available in the virtualenv, unless you shadow it with a different version installed inside the virtualenv. Did you try to launch your local IPython from a running virtualenv that didn't have an IPython? It should work.
Will, I assume you are using Anaconda's "conda" package manager? (Which combines the features of pip and virtualenv.) If so, you should be aware that many parts of it do not work exactly like the tools it is replacing. E.g. if you are using conda create -n myenv to create your virtual environment, this is different from a "normal" virtualenv in a number of ways. In particular, there are no "global/default" packages: even the default installation is essentially an environment ("root") like all other environments.
To obtain the usual virtualenv behavior, you can create your environments by cloning the root environment: conda create -n myenv --clone root. However, unlike for regular virtualenv, if you make changes to the default installation (the "root" environment in conda) these changes are not reflected in the environments that were created by cloning the root environment.
An alternative to cloning the root is to keep an updated list of "default packages" that you want to be available in new environments. This is managed by the create_default_packages option in the condarc file.
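For example, to have ipython pulled into every newly created environment, something like the following should add that option to your .condarc (ipython here being the package from the question):
conda config --add create_default_packages ipython
# later `conda create -n myenv ...` calls will then include ipython by default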
In summary: Don't treat your conda environments like regular python virtualenvs - even though they appear deceptively similar in many regards. Hopefully at some point the two implementations will converge.

Any equivalent to virtualenvwrapper's hook scripts in conda?

With virtualenvwrapper, one can install "hooks" that can be run, say, after a virtualenv is activated.
I suspect not, but wanted to confirm.
No, there currently isn't such a thing.
You can patch conda's activate script directly. But your changes will be overwritten whenever you update conda. Here's an example that changes to a directory in your $HOME based on the environment name:
echo 'cd "$HOME/code/$1"' >> "$(which activate)"
Unfortunately this will only work if you use the old-school activate command:
source activate project_name rather than conda activate project_name
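For illustration, after patching the script as above, activating an env also drops you into the matching project directory (assuming ~/code/project_name exists):
source activate project_name   # activates the env and cd's into ~/code/project_name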