Install just one package globally on a Julia server

I have a fresh Julia installation on a machine that I want to use as a number-crunching server for various people in a lab. There seems to be this nice package called JupyterHub, which makes the Jupyter Notebook interface available to multiple clients simultaneously. A web page which I am unable to find again suggested something like "first install IJulia globally, then install JupyterHub..."
I cannot seem to find a nice way to install ONE package globally.

Update
In Julia v0.7+, we need to use JULIA_DEPOT_PATH instead of JULIA_PKGDIR, and the LOAD_PATH looks something like this:
julia> LOAD_PATH
3-element Array{Any,1}:
Base.CurrentEnv()
Any[Base.NamedEnv("v0.7.0"), Base.NamedEnv("v0.7"), Base.NamedEnv("v0"), Base.NamedEnv("default"), Base.NamedEnv("v0.7", create=true)]
"/Users/gnimuc/Codes/julia/usr/share/julia/stdlib/v0.7"
Old Post
"first install IJulia globally, then install JupyterHub..."
I don't know whether this is true or not, but by following the steps below you can install IJulia after you have installed JupyterHub.
Install packages system-wide/globally for every user
This question has already been answered here by Stefan Karpinski, so all we need to do is use that method to install the IJulia.jl package.
There's a Julia variable called LOAD_PATH that is arranged to point at two system directories under your julia installation. E.g.:
julia> LOAD_PATH
2-element Array{Union(ASCIIString,UTF8String),1}:
"/opt/julia-0.3.3/usr/local/share/julia/site/v0.3"
"/opt/julia-0.3.3/usr/share/julia/site/v0.3"
If you install packages under either of those directories, then everyone using that Julia will see them. One way to do this is to run julia as a user who can write to those directories after doing export JULIA_PKGDIR=/opt/julia-0.3.3/usr/share/julia/site in the shell. That way Julia will use that as its package directory and normal package commands will allow you to install packages for everyone...
Make IJulia work with JupyterHub
In order to make IJulia and JupyterHub work with each other for all users, you should copy the folder your/user/.local/share/jupyter/kernels/julia/ to /usr/local/share/jupyter/kernels/. I wrote down some of the steps that I used in my test Dockerfile. The code is ugly, but it works.
Steps (after you have successfully installed JupyterHub):
Note that you should do the following steps as root; I assume that your Julia was globally installed at /opt/julia_0.4.0/.
Make a global package directory and set up JULIA_PKGDIR:
mkdir /opt/global-packages
echo 'push!(LOAD_PATH, "/opt/global-packages/.julia/v0.4/")' >> /opt/julia_0.4.0/etc/julia/juliarc.jl
export JULIA_PKGDIR=/opt/global-packages/.julia/
install "IJulia" using package manager:
julia -e 'Pkg.init()'
julia -e 'Pkg.add("IJulia")'
Copy the kernelspecs to /usr/local/share/jupyter/kernels/, where they can be used by any new user added by JupyterHub:
jupyter kernelspec list
cd /usr/local/share/ && mkdir -p jupyter/kernels/
cp -r /home/your-user-name/.local/share/jupyter/kernels/julia-0.4-your-julia-version /usr/local/share/jupyter/kernels/
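As a quick sanity check, the copied kernelspec should now be visible to any account (the user name here is hypothetical):
sudo -u some-new-user jupyter kernelspec list
# should list the julia kernel under /usr/local/share/jupyter/kernels/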

Related

Install or import in Python?

I am a beginner in Python, still trying to learn the basics. I am mostly interested in using it for data analysis and visualization, with packages such as matplotlib.
Most of the examples I see use the code
"import matplotlib"
or something similar.
But there are also cases where people suggest using pip install and then using the package.
So, as a rule of thumb, when should one use import and when should one install through the terminal?
Let's say you want to use some library. Let its name be ABC. ABC has some function, let's say function1.
If you write
import ABC
ABC.function1()
you will get an error, because Python can't find a library called ABC in your virtual environment. You must first install it using pip install ABC in your terminal. After that, the same code will work.
You must install a library first in order to use it.
There is no rule of thumb for choosing an installation method; you can use any of them. The aim is to install the library so that it is available when you run the code; otherwise you will get an error.
On Windows, if you want to install a package/library, use the following command at the Command Prompt:
python3 -m pip install matplotlib
To upgrade it, use:
python3 -m pip install --upgrade matplotlib
You can also install and upgrade packages/libraries through Jupyter.
Once installed, place import <library_name> at the top of the code in which you want to use that library.
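Putting the two together, a minimal sketch of the full cycle from the terminal, with matplotlib as the example package from the question:
python3 -m pip install matplotlib                              # one-time install, done in the terminal
python3 -c "import matplotlib; print(matplotlib.__version__)"  # import happens at run time, in code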

Conda: How to install latest version of `pandoc-crossref` from Github in `conda` environment?

pandoc-crossref must match the pandoc version, and only the 0.3.10.0 release works on macOS Big Sur. Thus, it is not possible to get pandoc and pandoc-crossref running together in a conda environment from the official channel or from conda-forge.
I could easily download the matching binaries from https://github.com/lierdakil/pandoc-crossref/releases/tag/v0.3.10.0 and copy them e.g. to the binpath:
$ which pandoc-crossref
/usr/local/bin/pandoc-crossref
$ curl -OL https://github.com/lierdakil/pandoc-crossref/releases/download/v0.3.10.0/pandoc-crossref-macOS.tar.xz
$ tar -xJvf pandoc-crossref-macOS.tar.xz
$ mv pandoc-crossref /usr/local/bin/pandoc-crossref
But I think that is not a clean approach, because conda will not know that I updated the version for pandoc-crossref.
What is a clean approach for updating a package managed by conda from a binary available on Github?
Update Feedstock
I updated it on the Conda Forge feedstock, which is what I regard as the "cleanest" solution.
How does one do that? First, OP had posted a comment on the feedstock in the PR that they wanted merged. This was the appropriate first step and hopefully in future cases that should be sufficient to prompt maintainers to act. In this case, it was not sufficient. So, as a follow up, I chatted on the Conda Forge Gitter to point out that the feedstock had gone stale and had non-responding maintainer(s). One of the core Conda Forge members suggested I make a PR bumping the version and adding myself as maintainer, and they merged it for me. In all, this took about 10 mins of work and ~2 hours from start to having an updated package on Anaconda Cloud.
Custom Conda Build
Otherwise, there isn't really a clean solution for non-Python packages outside of building a Conda package. That is, clone the feedstock or write a new recipe, modify it to build from the GitHub reference, then install that build into your environment. It may also be worth uploading to an Anaconda Cloud user account, so there is some non-local reference for it.
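For illustration, a rough sketch of that route; the feedstock URL follows the usual conda-forge naming and the exact recipe edits are assumptions, so check the real feedstock for the current layout:
git clone https://github.com/conda-forge/pandoc-crossref-feedstock
cd pandoc-crossref-feedstock
# edit recipe/meta.yaml: bump the version and point the source at the GitHub release
conda build recipe/                        # requires conda-build to be installed
conda install --use-local pandoc-crossref  # install the locally built package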
Pip Install (Python Packages Only)
In the special case that it is a Python package, one could dump the environment to YAML, edit it to install the package through pip, then recreate the environment.
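A rough sketch of that workflow (the environment name myenv is a placeholder):
conda env export -n myenv > environment.yml
# edit environment.yml: move the package from the conda dependencies into a pip: subsection
conda env remove -n myenv
conda env create -f environment.yml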

How to manually install a package in racket?

How can I manually install a package in Racket (that is, without relying on raco)? Is that possible?
I installed the minimal Racket distribution and want to manually add the packages in question (such as xrepl, which doesn't seem to come by default).
I'm on CentOS and I have no root privileges (the installation is in a private directory).
Although I'm not sure I understand the permissions issue you're having, you could try raco pkg install --scope user.
Anyway, you can use raco pkg install --link <dir> to install locally. (Just like what people do when they're developing a package locally.)
So for example:
cd ~/src
git clone path/to/foo
(Or get the package source into ~/src/foo some other way. By "package source" I mean there should be an info.rkt in ~/src/foo.)
raco pkg install --link foo
If the foo package has any dependencies, then raco pkg install will offer to get and install them, too. Normally this is handy, but since you're having connection or permission problems, I imagine you'll want to answer no. Instead, do this manual install for each of the dependencies, then retry this one, as sketched below. (Obviously, if there are many dependencies, this is inconvenient, which is one of the benefits of using a package manager when you are able to.)
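For example, if foo depended on a package bar (a hypothetical name), the manual sequence would look like this:
cd ~/src
git clone path/to/bar            # get the dependency's source, with its own info.rkt
raco pkg install --link bar      # install the dependency first
raco pkg install --link foo      # now this no longer needs to fetch anything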

Does use of 'virtualenv' leave my "real" Python installation alone?

I've had many questions about Python for which a suggested answer is often "use virtualenv", but I have a (lovingly maintained and perfectly functioning) Python installation that I'm loath to disturb.
I want to be absolutely sure, so I'll ask twice: Does use of virtualenv in any way disturb my "real" Python installation? Using virtualenv does not in any way modify the files or paths in my "real" installation, right?
Virtualenv creates a separate Python environment. The Python interpreter is linked from one of the system-installed interpreters that you choose when creating the virtualenv (the --python command-line switch), and you can optionally decide whether or not to use the system site-packages (the --system-site-packages switch).
All packages that you install in the virtualenv stay in the virtualenv's own site-packages folder and do not touch the system packages.
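A minimal sketch illustrating the isolation (the interpreter path, environment name, and package are examples):
virtualenv -p /usr/bin/python3 myenv    # the system interpreter is only linked, not modified
source myenv/bin/activate
which python                            # now points inside myenv/bin
pip install requests                    # lands in myenv/lib/pythonX.Y/site-packages only
deactivate                              # the system Python was never touched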

How to migrate virtualenv

I have a relatively big project that has many dependencies, and I would like to distribute this project around, but installing these dependencies is a bit of a pain and takes a very long time (pip install takes quite some time). So I was wondering if it is possible to migrate a whole virtualenv to another machine and have it running.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory, and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
From within the virtual environment, run:
pip freeze > requirements.txt
Once in the new machine and environment, copy requirements.txt into the new project folder and run the terminal command:
sudo pip install -r requirements.txt
Then you should have all the packages that were previously available in the old virtual environment.
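Put together, a sketch of the whole migration, reusing the backend environment name from the question:
# on the old machine
source backend/bin/activate
pip freeze > requirements.txt
# on the new machine
virtualenv backend
source backend/bin/activate
pip install -r requirements.txt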
When you create a new virtualenv, it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you do so, you must also edit the shebang line (#!) of bin/pserve, which points at the interpreter.
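A minimal sketch of those manual edits, assuming GNU sed and reusing the old (sshum) and new (dev1) paths from the question:
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' backend/bin/activate   # fixes VIRTUAL_ENV
sed -i '1s|^#!.*|#!/home/dev1/backend/bin/python|' backend/bin/pserve      # fixes the shebang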