Adding click autocompletion to a conda env activate script

I'm using a Python library which uses click autocompletion. Since I've installed the library in a conda env, I'd like the autocomplete to be associated with it. (Also, since it isn't installed in my primary Python env, adding eval "$(_FOO_BAR_COMPLETE=source_zsh foo-bar)" to my .zshrc doesn't work.) The documentation for the library I'm using says "if gradient was installed in a virtual environment, the following has to be added to the activate script":
eval "$(_GRADIENT_COMPLETE=source gradient)"
I originally added this to ~/miniconda3/envs/my_env/lib/python3.6/venv/scripts/common/activate, but the autocompletion didn't work. Running
source ~/miniconda3/envs/my_env/lib/python3.6/venv/scripts/common/activate
does work, but my shell prepends __VENV_DIR__ to the prompt, and the fact that this doesn't happen automatically when I run conda activate my_env makes me think this is the wrong way to do it (for one, it isn't undone when I run conda deactivate).
What I'm looking for is the canonical way to add a script that runs upon conda activate x and is undone upon conda deactivate. The approach I've found seems very close, but it's for adding shell variables with export and unset. Is there a way to do the same with click's autocompletion?

Following the instructions in the conda docs with a small modification seemed to work for me - I placed the eval statement in an env_vars.sh under activate.d, and put nothing in deactivate.d (see the sketch below).
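For concreteness, a minimal sketch of that setup, assuming the env path from the question and the layout described in conda's "saving environment variables" docs:
# create the activation hook directory and drop the eval line into it;
# conda sources every *.sh file in activate.d each time the env is activated
mkdir -p ~/miniconda3/envs/my_env/etc/conda/activate.d
echo 'eval "$(_GRADIENT_COMPLETE=source gradient)"' > ~/miniconda3/envs/my_env/etc/conda/activate.d/env_vars.sh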
My understanding is that an export persists for the rest of the shell session unless it is explicitly undone, and so needs a corresponding unset in deactivate.d, whereas the eval just sets up completion in the current shell, so once the conda env is deactivated (and gradient disappears from the PATH) it has no practical effect.
Would be happy to hear more from someone with a deeper understanding of bash/conda under the hood!

Related

Code-Runner Extension Refuses to Use the Correct Environment

I'm having a bit of trouble getting code-runner to play nice with my conda environments.
Checklist:
The correct python interpreter is selected.
I've explicitly changed the pythonPath and executorMap objects in the settings.json file to the correct environment.
I've tried reinstalling VSCode, Conda and the Code-Runner extension.
I've run a quick script to check which environment is being used, and it confirmed that code-runner insists on using the base environment rather than the one selected.
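A sketch of that kind of check (interpreter paths here are only illustrative):
# prints the interpreter actually running the code; under code-runner this shows the
# base env's python instead of the env selected in VS Code
python -c "import sys; print(sys.executable)"
# a common workaround is pointing the "python" entry of code-runner.executorMap at the
# env's absolute interpreter path, e.g. ~/miniconda3/envs/my_env/bin/python -u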
Just to clarify, the code runs perfectly fine, and shows the correct environment selected when I use Ctrl+F5 instead of code-runner (or when I uninstall code-runner and use the normal run feature), but I'd like for it to work with the extension too.
Please help, thanks in advance!

Debugging in VSC is not changing the environment and fails with "no module named"

Initially, I pulled a repo and was able to run the debugger. I set up an environment using conda (to run Python 2.7) and used pip to install the dependencies.
I then wanted to test something else and elected to put the first project into a workspace. Now, when I try to run the project:
1) I see that the environment switches back to (base), even though I have chosen the conda env's Python
2) the modules within the project are no longer found
To illustrate what happens from that starting point:
If I press F5 to run in debug mode, it does not change the environment, even though I'm pointing at the conda env27.
If I activate env27 first, it still doesn't work.
And if I try this in an Ubuntu terminal window, where I've activated the conda env27, it works.
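For reference, the terminal sequence that works looks roughly like this (the script name is just a placeholder):
conda activate env27
python my_script.py   # runs with env27's interpreter and finds the project's modules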
What am I doing wrong?

zsh autocompletion appears to only work with built in commands

I'm new to zsh, just switched over from fish. I'm trying to get autocomplete working so it displays argument/flag options for commands upon pressing tab.
Currently this works, but it only appears to work for built in commands. For example, it works for ls, grep, git, etc. but does not work for programs I have added myself. For example, fd-find, exa, and nvm all do not work.
For nvm, I have enabled the nvm plugin using Oh My Zsh. I know the plugin is working in general, because nvm itself is working (and it wasn't before enabling the plugin).
For fd-find, I see the auto-completion file in /usr/share/zsh/vendor-completions/_fd
For exa, I manually downloaded and placed the autocompletion file in /usr/local/share/zsh/site-functions/_exa as instructed by the site.
None of these three programs shows me the typical arguments/flags completion menu the way built-in commands do. I'm not sure what is wrong.
I echoed the fpath environment variable to make sure /usr/local/share/zsh/site-functions was in there, and it is.
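A quick way to double-check both the search path and the files themselves (a zsh sketch; adjust paths as needed):
print -rl -- $fpath                      # every directory zsh searches for completion functions
ls /usr/local/share/zsh/site-functions   # _exa should be listed here
ls /usr/share/zsh/vendor-completions     # _fd should be listed here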
When I run which _nvm, I get:
_nvm () {
    # undefined
    builtin autoload -XUz
}
Which is actually what I get for all of _nvm, _exa, _fd.
Not sure what else to try.
Any suggestions for how to get autocomplete working properly?
Other info: I'm on a System76 Darter Pro laptop running Pop!_OS.
I found a fix that worked for me. After searching through zsh autocomplete issues on GitHub, I came across this solution (credit was given there to the original source on Stack Exchange).
The solution was simply to remove all zcompdump files:
rm ~/.zcompdump*
After running the above command, autocomplete works and expands out the possible flags/arguments for non-builtin programs!
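A sketch of the full cycle, assuming compinit is run from your .zshrc (as Oh My Zsh does):
rm -f ~/.zcompdump*   # delete the stale completion cache
exec zsh              # restart the shell; compinit rebuilds the dump and picks up _fd, _exa, _nvm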

How to use ipython without installing in every virtualenv?

Background
I use Anaconda's IPython on my mac and it's a great tool for data exploration and debugging. However, when I wish to use IPython for my programs that require virtualenv (e.g. a Django web app), I don't want to have to reinstall IPython every time.
Question
Is there a way to use my local IPython while also using the rest of my virtualenv packages? (i.e. make IPython the one exception to the virtualenv's packages, so that the local IPython setup is available no matter which env is active.) If so, how would you do this on a Mac? My guess is that it would involve some nifty .bash_profile changes, but my limited knowledge there hasn't been fruitful. Thanks.
Example Usage
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This would bring it to the command line (that I wish was IPython)
If you have a module in your local Python and not in the virtualenv, it will still be available inside the virtualenv, provided the virtualenv was created with access to the system site-packages (and unless you shadow it with a different version installed in the virtualenv). Did you try launching your local IPython from inside a running virtualenv that doesn't have its own IPython? It should work.
Will, I assume you are using Anaconda's "conda" package manager (which combines the features of pip and virtualenv)? If so, you should be aware that many parts of it do not work exactly like the tools it replaces. E.g. if you use conda create -n myenv to create your virtual environment, this differs from a "normal" virtualenv in a number of ways. In particular, there are no "global/default" packages: even the default installation is essentially an environment ("root") like all other environments.
To obtain the usual virtualenv behavior, you can create your environments by cloning the root environment: conda create -n myenv --clone root. However, unlike for regular virtualenv, if you make changes to the default installation (the "root" environment in conda) these changes are not reflected in the environments that were created by cloning the root environment.
An alternative to cloning the root is to keep an updated list of "default packages" that you want to be available in new environments. This is managed by the create_default_packages option in the condarc file.
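A rough sketch of both approaches (env names are only examples):
# clone the root/base environment so the new env starts with everything root has
conda create -n myenv --clone root
# or register packages that every newly created env should get by default
conda config --add create_default_packages ipython
conda create -n myenv python=2.7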
In summary: Don't treat your conda environments like regular python virtualenvs - even though they appear deceptively similar in many regards. Hopefully at some point the two implementations will converge.

How to keep virtualenv always on in production?

I use virtualenv to maintain environments for projects locally. I also use virtualenvwrapper, so I can switch between environments using
workon project1
However, when using virtualenv, you need your virtual environment to be active. I've just installed virtualenv on an EC2 instance, but how can I make sure the environment stays active? My best attempt at doing this right now is just putting the proper virtualenv commands in .bashrc. However, I'm not exactly sure how this all works... if the server restarts, will .bashrc be run?
Essentially, what is the best way to keep a virtualenv always on on a production server?
"Activating a virtualenv" basically means you are changing your $PATH environment variable.
If you want to always have a virtualenv activated, prepend your virtualenv's bin path to the $PATH environment variable somewhere that gets executed before your commands run (~/.bashrc is one option).
Example (using ~/.bashrc):
export PATH=/path/to/myenv/bin:$PATH
(Assuming /path/to/myenv is where my virtualenv is placed)
~/.bashrc is only executed when you start a new bash shell (even after restarts). If you never start a bash shell, ~/.bashrc never gets executed.
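And for processes that are not started from an interactive bash shell at all (cron jobs, process supervisors, init scripts), a common pattern is to skip activation and call the environment's interpreter by absolute path, reusing the placeholder path from above:
/path/to/myenv/bin/python /path/to/your_app.py   # the env's interpreter already sees its own site-packages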
If you want your virtualenv to be truly tied to your project, you can put the following two lines directly into your code (activate_this.py is shipped by virtualenv, not by the stdlib venv):
activate_this = 'this_is_my_project/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))  # Python 2; on Python 3: exec(open(activate_this).read(), dict(__file__=activate_this))