How to configure VSCode to find a third-party tool (e.g. stack, yamllint) if these tools are installed through anaconda? - visual-studio-code

Assuming that VSCode is installed and an anaconda environment is set up. The default conda environment contains language analysis and compilation tools like stack (for Haskell) and yamllint (for YAML). However, since conda 3.4 they are under <conda_install_dir>/bin, which is not included in the system PATH environment variable. The only executable on the PATH is conda itself.
After installing VSCode and its plugins, I frequently encounter error messages indicating that the executables of these tools cannot be found. E.g. for stack, the message is:
Project requires Cabal but it isn't installed
It appears that the only way to make it work is to override the PATH variable in VSCode so that it differs from the one used by the OS. Is there an option that allows this override?
Thanks a lot for your help

Have you tried updating your PATH to simply include it?
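If you go that route, a minimal sketch of what it could look like, assuming conda lives under ~/anaconda3 (adjust the path to your actual <conda_install_dir>):
# Assumption: conda is installed under ~/anaconda3; change this to your install directory.
# Add its bin directory to PATH in your shell profile (~/.bash_profile or ~/.zshrc):
export PATH="$HOME/anaconda3/bin:$PATH"

# Then verify the tools resolve, and launch VSCode from that same shell
# so the process inherits the updated PATH:
which stack yamllint
code .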

Related

Why does my vscode not have the same libraries installed in wsl?

I'm using WSL and it runs code in vscode pretty fine. I have various libraries which I installed through pip and conda in WSL, but when I run that code using vscode itself, it doesn't recognize the libraries or even pip itself.
I don't have any other environment.
I should add that I installed the packages globally using conda install ... or pip install ... in the base environment (the only environment I have), I run my code through code ., and I also have the Python and Remote WSL extensions installed in my vscode.
what can be the problem?
I don't have much personal experience with this, but I found some useful information in this Stack Overflow question (even though it doesn't utilize conda), along with https://code.visualstudio.com/docs/remote/wsl-tutorial#_python-development.
I also found this blog post useful, even if it doesn't cover WSL.
In short, make sure you:
Have installed the Python extension (by Microsoft) in VSCode. This is critical for being able to detect and select the Python interpreter. You don't mention having this in place, so I believe this is your likely problem.
You have done this already, but including it for others who might read this later -- Install the Remote - WSL extension (or the Remote Development extension pack) in VSCode.
You are also doing this already -- Start VSCode from inside your WSL distribution. Alternatively, you can start VSCode from Windows and then select the Remote WSL - Reopen Folder in WSL from the Command Palette (also accessible from the "Remote" Status Bar).
In VSCode, open the Command Palette with Shift+Ctrl+P, search for the Python: Select Interpreter command, and you should find your Conda environment in the list.
After selecting this, you should find that your project is using the interpreter and modules that you have installed via conda.
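If the interpreter picker does not persist, another option is to pin the interpreter in the workspace settings. A minimal sketch, assuming a default Anaconda install inside WSL (the path and the <user> placeholder are assumptions; older versions of the Python extension use python.pythonPath instead of python.defaultInterpreterPath):
// .vscode/settings.json of the folder opened in WSL
{
    // Assumed location of conda's base interpreter inside the WSL distro:
    "python.defaultInterpreterPath": "/home/<user>/anaconda3/bin/python"
}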
One thing I did to overcome this issue: go to Extensions -> Local (you should have two tabs there, Local and WSL:DISTRO, where DISTRO is whatever distribution you're using). You will see that some of the local extensions are disabled in the current (WSL) workspace. In the WSL:DISTRO tab there is a little cloud icon that says "Install Local Extensions in WSL:DISTRO"; once you click that it will let you choose which extensions to install, and you should be good to go!
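As an alternative to clicking through the UI, the code command line can also install extensions. A hedged sketch, run from a terminal inside the WSL workspace (ms-python.python is just an example extension id):
# Run inside the WSL terminal of the connected workspace.
# Installs the extension by its id on the WSL side:
code --install-extension ms-python.python
# Confirm what is installed on this side:
code --list-extensions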

How is the PATH variable defined for the vscode process itself (not the integrated terminal)?

I'm currently using vscode over a remote ssh connection and cannot figure out how to set the search PATH for the vscode process itself. I have set the PATH for processes run in the terminal in my .bashrc file, which is also sourced from .bash_profile.
In spite of this, vscode complains that pipenv is not on the path although it is visible to my integrated terminal session. In my .bashrc I am loading environment modules to load versions of needed libraries, which get put on the PATH. Since I created my virtualenv using pipenv in the terminal, it knows which python version to use and makes a link to it in the environment definition. Because of the way python virtual environments work, the actual python binary is copied to the virtual environment. And because vscode has hardcoded paths where to look for virtual environments it is able to find the correct version of python that is in use (despite it not seeing it in the PATH).
In addition, hardcoding the path to pipenv using the extension setting python.pipenvPath still produces a "not found" error.
Solutions that I have seen elsewhere suggest launching vscode from the command line so that the process inherits the PATH settings. However, this will not work over a remote connection.
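For context on the .bashrc/.bash_profile distinction mentioned above: many stock ~/.bashrc files return early for non-interactive shells, so anything placed below that guard is only seen by interactive terminal sessions. A sketch of the pattern to look for (the module and path names are hypothetical):
# Typical guard near the top of a default ~/.bashrc:
case $- in
    *i*) ;;       # interactive shell: keep going
      *) return;; # non-interactive shell: stop here
esac

# Anything below the guard (module loads, PATH exports, etc.) is invisible to
# non-interactive shells. Moving such lines above the guard, or into
# ~/.bash_profile, is one thing to try; whether the remote vscode server picks
# them up still depends on how it sources your startup files.
module load my-python-modules          # hypothetical module name
export PATH="$HOME/.local/bin:$PATH"   # example path, an assumption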

Is it possible to specify the path to the libstdc++ in VS Code clangd extension?

I use VS Code as my main code editor for my C++ development. I am using the remote SSH extension by Microsoft to access my office workstation from home. For the C++ autocompletion and linting I use the clangd extension by LLVM. Company policy prevents users from having sudo access to workstations and libraries are often not at the latest version.
When I try to launch clangd I get the following error message:
/lib64/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /my/path/to/clangd)
This obviously means that the version of libstdc++ is too old for the version of clangd that I am using. It is easily fixable by adding the location of the most recent gcc libraries (part of our compiler toolchain) to LD_LIBRARY_PATH and then launching VS Code.
However, now that I am working remotely I cannot do that, because VS Code is installed on my laptop and I am using the SSH extension to access the code on my office workstation. Looking at the man page for clangd I cannot see a way to specify the path to the libstdc++ that I want to use. Is there a way, other than adding the libraries to LD_LIBRARY_PATH upon startup/login, to bypass this issue?
I found a way, albeit a little bit hacky.
Export the new LD_LIBRARY_PATH in .zprofile (or the equivalent for your shell; I am using zsh). Make sure that there are no VSCode servers running on the host; if there are, remove them.
In the settings.json file add the following line, to tell VSCode that you want the shell to be a login, interactive shell:
"terminal.integrated.shellArgs.linux": ["-l", "-i"]
Job done. Clangd now finds the correct libraries.
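For reference, the first step could look roughly like this; the toolchain path is an assumption, and removing ~/.vscode-server deletes the remote server installation (it is re-downloaded on the next connection):
# In ~/.zprofile (or your shell's login profile) on the remote workstation.
# Assumption: a newer libstdc++ ships with the compiler toolchain under /opt/toolchain/gcc/lib64.
export LD_LIBRARY_PATH="/opt/toolchain/gcc/lib64:$LD_LIBRARY_PATH"

# Make sure no stale VSCode server keeps the old environment alive:
rm -rf ~/.vscode-server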
Another way to do this is to use env in your settings.json as follows:
"clangd.path": "env",
"clangd.arguments": [
"LD_LIBRARY_PATH=<your_custom_path>",
"/path/to/my/clangd"
],

Unable to run .robot files in Eclipse Oxygen on Mac

I have a Mac laptop and have installed RED in Eclipse Oxygen. When I try to run a .robot file in Eclipse I see a RunTime Environment Error popup with the message "Unable to provide valid RED environment. Check python/robot installation and set it in Preferences". I do not see this issue on a Windows machine.
This error tells you that RED could not find a Python+Robot installation using the which/whereis command. You can verify this by going to Window->Preferences->RobotFramework->Installed Frameworks. That page should show you a list of Python interpreters with Robot - in your case it is either empty (no Python found) or the entry is shown in red (Python found, but without Robot).
On Windows, Python is usually added to PATH so it is visible and discovered by RED; macOS does not do this by default (you are not the first person with this issue).
Go to Window->Preferences->RobotFramework->Installed Frameworks, click the Add... button, and provide the path where the python executable is located. I do not have macOS to give you the usual path; under Linux (CentOS, Ubuntu) it is /usr/bin. Note that RED looks for the python filename - if your installation names it python3, python2, py2, py3 or similar, just make a copy named python.
I will add a Help topic about finding Python installations when python is not on PATH, as it seems to be more common than I thought.
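As a rough sketch of that last point on macOS (the paths are assumptions; adjust them for your installation):
# Find where the interpreter that has Robot Framework actually lives:
which python3
# e.g. /usr/local/bin/python3

# RED looks for an executable named exactly "python", so expose one next to it,
# either as a copy (as suggested above) or as a symlink:
ln -s /usr/local/bin/python3 /usr/local/bin/python

# Then add that directory via Window->Preferences->RobotFramework->Installed Frameworks.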
If you have any other questions, it is better to create a GitHub issue: https://github.com/nokia/RED/issues

How to use ipython without installing it in every virtualenv?

Background
I use Anaconda's IPython on my mac and it's a great tool for data exploration and debugging. However, when I wish to use IPython for my programs that require virtualenv (e.g. a Django web app), I don't want to have to reinstall IPython every time.
Question
Is there a way to use my local IPython while also using the rest of my virtualenv packages (i.e. make IPython the one exception so that the local IPython setup is available no matter what)? If so, how would you do this on a Mac? My guess is that it would involve some nifty .bash_profile changes, but my limited knowledge there hasn't been fruitful. Thanks.
Example Usage
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This brings me to the command line (which I wish was IPython).
If you have a module in your local Python and not in the virtualenv, it will still be available in the virtualenv, unless you shadow it with another version inside the virtualenv. Did you try to launch your local IPython from a running virtualenv that doesn't have an IPython? It should work.
Will, I assume you are using Anaconda's "conda" package manager (which combines the features of pip and virtualenv)? If so, you should be aware that many parts of it do not work exactly like the tools it replaces. E.g. if you are using conda create -n myenv to create your virtual environment, this differs from a "normal" virtualenv in a number of ways. In particular, there are no "global/default" packages: even the default installation is essentially an environment ("root"), like all other environments.
To obtain the usual virtualenv behavior, you can create your environments by cloning the root environment: conda create -n myenv --clone root. However, unlike for regular virtualenv, if you make changes to the default installation (the "root" environment in conda) these changes are not reflected in the environments that were created by cloning the root environment.
An alternative to cloning the root is to keep an updated list of "default packages" that you want to be available in new environments. This is managed by the create_default_packages option in the condarc file.
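A sketch of what that could look like; the package name here is only an example:
# Add a package to the create_default_packages list in ~/.condarc:
conda config --add create_default_packages ipython

# Equivalent ~/.condarc entry:
# create_default_packages:
#   - ipython

# Every newly created environment now includes ipython by default;
# pass --no-default-packages to opt out for a particular environment:
conda create -n myenv python
conda create -n bare-env --no-default-packages python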
In summary: Don't treat your conda environments like regular python virtualenvs - even though they appear deceptively similar in many regards. Hopefully at some point the two implementations will converge.