Jupyter directories when in a virtual environment - jupyter

Where does jupyter store kernelspecs and other data, when running inside a virtual environment?
(I'm interested in conda environments, but knowing about other kinds of virtual envs would be interesting too).

I think I found it.
When inside a virtual environment, one can run
jupyter --paths
and will see the Jupyter locations (for the jupyter installed inside the currently active environment).
Something like:
config:
/home/<user>/.jupyter
/home/<user>/anaconda3/envs/<this-env>/etc/jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/home/<user>/.local/share/jupyter
/home/<user>/anaconda3/envs/<this-env>/share/jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/home/<user>/.local/share/jupyter/runtime
The directory where kernelspecs are stored would then be /home/<user>/anaconda3/envs/<this-env>/share/jupyter/kernels.
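If you just want to confirm which kernels an environment sees and where their kernelspecs live, jupyter kernelspec list prints that directly:
jupyter kernelspec list
Run inside the activated environment, it should list kernels found under the environment's share/jupyter/kernels directory alongside any user-level ones from ~/.local/share/jupyter/kernels.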

Related

Running Jupyter notebook cells as an interactive SLURM job in VS Code

I am doing some analyses using VS Code on a remote server that has SLURM installed to manage jobs and provide parallel computing. I would like to run each cell in the Jupyter notebook as an interactive job on SLURM, the same way my command-line code would be run as an interactive SLURM job after I have used srun to request compute nodes. The jobs I need to run in the Jupyter notebook require a lot of memory, so I need to run them using SLURM.
My current workaround is to run srun in the terminal and start a Python shell, then copy and paste the code from each cell of my notebook into that shell. I'd really appreciate your help.
It is an old question, but I'm answering as I also came across this problem recently.
After you do srun in a terminal, you should be able to ssh directly into your compute node in VS Code and use all the capabilities of the compute node in the interactive mode/notebook.
The steps I take, for example, are:
in a terminal (e.g. PowerShell), srun into a node
add that node to your SSH config file, so that you can ssh into it (a sketch follows this list)
open VS Code and ssh into that node
run code in the interactive window/notebook, with access to the CPU/GPU of the node
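As a rough sketch for step 2 (all host and user names here are made up, and the ProxyJump line is only needed if compute nodes aren't reachable directly from your machine), the ~/.ssh/config entry might look like:
Host my-compute-node
    HostName node0123
    User myusername
    ProxyJump login.cluster.example
In VS Code you would then pick my-compute-node from the Remote-SSH host list and run your interactive window/notebook there.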

How to run initialization commands after SSH in VS Code Remote?

Problem
I am trying to connect to my school's computing cluster (i.e. a Linux server with a "login node" and a "computing node") using VS Code's Remote SSH, but I cannot figure out how to run a command after SSH-ing.
Goal
I simply want to view Python code and test a few small snippets in a .ipynb Jupyter notebook in the computing platform's environment.
Description
Basically, normally in the command line (or MobaXterm on a Windows machine) on my local machine, I first log onto the computing platform's login node with ssh -Y -L PORT:127.0.0.1:PORT username@computing.cluster.ip, and then run srun -t 0-12:00 --pty -p gpu --gres=gpu:1 --x11 --tunnel PORT:PORT /bin/bash to log onto the computing node interactively (the shown command allows for port forwarding). The problem is, in VS Code I can only connect to the login node, and after that there's no way for me to run another command and log onto the computing node. The reason I need to get to the computing node is that I want to test something with a .ipynb file interactively in VS Code while reading the code, and the login node does not allow me to perform computation.
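For reference, the two-step login described above, written out as the two commands from the question (PORT and the cluster address are the question's placeholders): the first runs on the local machine and forwards the port to the login node, the second runs on the login node and requests an interactive shell on a GPU computing node.
ssh -Y -L PORT:127.0.0.1:PORT username@computing.cluster.ip
srun -t 0-12:00 --pty -p gpu --gres=gpu:1 --x11 --tunnel PORT:PORT /bin/bash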
Failed trials
I've been trying code-server, but it does not support .ipynb well (it keeps asking me to install Jupyter notebook even though I have installed it in my conda env), possibly because it by default picks up the HPC cluster's Python interpreter, which I cannot modify (I can't even select a Jupyter kernel in code-server). I also tried to use Jupyter Notebook directly (opening Jupyter with port forwarding after getting onto the computing node), but reading code there is much more inconvenient.
Would greatly appreciate your suggestions!

vagrant up fails with: cannot translate name # rb_sysopen when trying to run homestead

When I run vagrant up I get the following error:
Vagrant/embedded/gems/2.2.14/gems/vagrant-2.2.14/plugins/hosts/suse/host.rb:20:in `initialize': Cannot translate name. # rb_sysopen - /etc/os-release (Errno::ELOOP)
I have installed Vagrant for Windows and I'm trying to launch Laravel's Homestead, which I cloned inside WSL2, by cd'ing from PowerShell into the Z: drive that WSL2 provides (so that I have access to the Vagrant that's installed on Windows).
cd Z:\home\coder\projects\homestead
If I'm understanding correctly, it seems that Vagrant tries to detect the host OS from the filesystem. So if you try to run the Windows Vagrant across a network share that is Unix/WSL/Linux, it will try to run as if it were on Unix and fail.
Solution
I was able to copy the homestead directory from the network share into my Windows environment, then navigate to that directory and run vagrant up successfully using PowerShell.
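A minimal PowerShell sketch of that workaround (the destination path is arbitrary; adjust it to wherever you want the copy to live):
Copy-Item -Recurse Z:\home\coder\projects\homestead C:\Users\<user>\homestead
cd C:\Users\<user>\homestead
vagrant up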
Another Option
It sounds like you should also be able to install Vagrant within WSL2 and use it from within WSL2 instead of PowerShell.
Another possibility to note is that you can invoke .exe files from within WSL2, but it sounds like that will not work properly if you try to run the Windows Vagrant from within WSL2.
Research
https://github.com/roots/trellis/issues/1083
https://www.vagrantup.com/docs/other/wsl.html
https://discourse.roots.io/t/command-vagrant-up-in-wsl-is-failed/16528

Using a conda virtual environment in Jupyter notebook

I have read and implemented instructions from earlier posts like:
How to start an ipython shell (not notebook) within a conda or virtualenv
My goal is to use a kernel in IPython which has all the conda packages from my virtual environment.
I have a Google Ubuntu 16.04 machine where I have installed Anaconda and a virtual environment in which I installed all my packages.
When I run
python -m ipykernel.kernelspec
I get the following error:
/home/admin/anaconda3/envs/py36ve/lib/python3.6/site-packages/IPython/paths.py:61: UserWarning: IPython dir
'/home/admin/.ipython' is not a writable location, using a temp
directory.
" using a temp directory.".format(ipdir))
[Errno 13] Permission denied: '/usr/local/share/jupyter/kernels/python3'
I tried running with sudo too. I created a kernel, but when I use it, it has none of the packages I installed in the virtual environment.
I have a similar issue when I submit my program to a cluster that doesn't have access to my local directory, and it shows the same message. In my case, though, I don't get the Permission denied message and everything works fine. I wanted to address this issue and looked into it, and I found that paths.py at line 62 in the IPython package, in the case where the directory is not writable, creates a temp directory like the following:
ipdir = tempfile.mkdtemp()
As the tempfile documentation says:
Creates a temporary directory in the most secure manner possible. There are no race conditions in the directory’s creation. The directory is readable, writable, and searchable only by the creating user ID.
It is strange that you get this, but if you want to make it work, find paths.py, change it to your liking, make sure it works, and swap it in for the original.
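As an alternative to patching paths.py: the Permission denied error comes from trying to write the kernelspec into /usr/local/share/jupyter, and ipykernel can install into a per-user location instead. A minimal sketch, assuming the py36ve environment is active (the kernel name and display name are just examples):
python -m ipykernel install --user --name py36ve --display-name "Python (py36ve)"
Because the command runs with the environment's Python, the resulting kernelspec (placed under ~/.local/share/jupyter/kernels) points at that interpreter and so sees the environment's packages.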

ipython notebook on remote server peculiarity

I am taking my first steps with IPython notebook. I installed it successfully on a remote server of mine (over SSH) and started it using the following command:
ipython notebook --ip='*' --pylab=inline --port=7777
I then checked http://myserver.sth:7777/ and the notebook was running just fine. I then wanted to close the SSH connection to the server and keep IPython running in the background. When I did this, I couldn't connect to myserver.sth:7777 anymore. Once I connected to the remote server by SSH again, I could connect to the notebook again. I then tried to use screen to start IPython: I created a new screen with screen -S ipy, started ipython notebook as above, and used Ctrl+A,D to detach the screen and exit to the TTY. I could still connect remotely to the notebook. I then closed the SSH connection and got a 404 NOT FOUND error when I tried to access my previously stored notebook, and I couldn't see it in the list of notebooks at http://myserver.sth:7777/. I tried to create a new notebook, but got a 500 Internal Server Error.
I also tried running ipython notebook with and without using sudo.
Any ideas?
Rather than use screen, perhaps you could switch to an init script or supervisord to keep IPython notebook up and running.
Let's assume you go the supervisord route:
Install supervisord
Install supervisord using your package manager. For Ubuntu it's named supervisor.
apt-get install supervisor
If you decide to install supervisor through pip, you'll have to set up its init.d script yourself.
Write a supervisor configuration file for IPython
The configuration file tells supervisor what to run and how.
After you install supervisor, it should have created /etc/supervisor/supervisord.conf. These lines should exist in the file:
[include]
files = /etc/supervisor/conf.d/*.conf
If the file contains these lines, you're in good shape. I only show them to demonstrate where it expects new configuration files. Your configuration file can go there, named something like /etc/supervisor/conf.d/ipynb.conf.
Here's a sample configuration that was generated by Chef from an ipython-notebook cookbook that runs the notebook in a virtualenv:
[program:ipynb]
command=/home/ipynb/.ipyvirt/bin/ipython notebook --profile=cooked
process_name=%(program_name)s
numprocs=1
numprocs_start=0
autostart=true
autorestart=true
startsecs=1
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=ipynb
redirect_stderr=false
stdout_logfile=AUTO
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stdout_capture_maxbytes=0
stdout_events_enabled=false
stderr_logfile=AUTO
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
stderr_capture_maxbytes=0
stderr_events_enabled=false
environment=HOME="/home/ipynb",SHELL="/bin/bash",USER="ipynb",PATH="/home/ipynb/.ipyvirt/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games",VIRTUAL_ENV="/home/ipynb/.ipyvirt"
directory=/home/ipynb
serverurl=AUTO
The above supervisor config also relies on an IPython notebook configuration (located at /home/ipynb/.ipython/profile_cooked/ipython_notebook_config.py). This makes configuration much easier, as you can also set up your password hash and many other configurables there:
c = get_config()
# Kernel config
# Make matplotlib plots inline
c.IPKernelApp.pylab = 'inline'
# The IP address the notebook server will listen on.
# If set to '*', will listen on all interfaces.
# c.NotebookApp.ip= '127.0.0.1'
c.NotebookApp.ip='*'
# Port to host on (e.g. 8888, the default)
c.NotebookApp.port = 8888 # If you want it on 80, I recommend iptables rules
# Open browser (probably want False)
c.NotebookApp.open_browser = False
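The password hash mentioned above goes into c.NotebookApp.password; with the IPython of that era you could generate one from the command line, roughly like this (assuming IPython.lib.passwd is available, as it was in those versions):
python -c "from IPython.lib import passwd; print(passwd())"
It prompts for a password and prints a hash (e.g. sha1:...) that you paste into the config.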
Re-read and update, now that you have the configuration file
supervisorctl reread
supervisorctl update
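To confirm that supervisor actually started the notebook process (ipynb being the program name from the config above), you can ask supervisorctl for its status:
supervisorctl status ipynb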
Reality
In reality, I used to use a Chef cookbook to do the entire installation and configuration. However, using configuration management for tiny stuff like this is a bit of overkill (unless you're orchestrating it as part of broader automation).
Nowadays I use Docker images for IPython notebook, orchestrating via JupyterHub or tmpnb.