How to install ruamel.yaml on a buildroot environment

ruamel.yaml seems to require pip to install, which is not the default way buildroot builds and installs a Python package.
Is it possible to install at least a pure-Python version of ruamel.yaml into a buildroot image, and how can the pip limitation be circumvented?
Is it possible to cross-build ruamel.yaml?
Setting the RUAMEL_NO_PIP_INSTALL_CHECK environment variable does not help:
test compiling test_ruamel_yaml
running install
Checking .pth file support in ...
Failed to import the site module
ModuleNotFoundError: No module named '_sysconfigdata_m_linux_arm-linux-gnueabihf'
error: command '.../output/host/bin/python' failed with exit status 1
package/pkg-generic.mk:310: recipe for target '.../output/build/python-ruamel-yaml-0.15.45/.stamp_target_installed' failed

ruamel.yaml indeed requires pip to install from PyPI (using the .tar.gz or a .whl appropriate for your platform); this is documented.
The reason for this is that the fixes necessary to enable nested package installs were only implemented for pip (and not for easy_install or python setup.py installs).
That however does not preclude you from using ruamel.yaml, especially if you don't need the C extension (which is checked for at load time).
You can either check out a tagged version from bitbucket or untar a .tar.gz from PyPI and move the result to your site-packages directory:
$ virtualenv /tmp/ruamel_yaml_no_pip
Using base prefix '/opt/python/3.7'
New python executable in /tmp/ruamel_yaml_no_pip/bin/python
Installing setuptools, pip, wheel...done.
$ cd /tmp/ruamel_yaml_no_pip/
$ source bin/activate
(ruamel_yaml_no_pip) $ mkdir lib/python3.7/site-packages/ruamel/
(ruamel_yaml_no_pip) $ touch lib/python3.7/site-packages/ruamel/__init__.py
(ruamel_yaml_no_pip) $ wget -q https://files.pythonhosted.org/packages/63/a5/dba37230d6cf51f4cc19a486faf0f06871d9e87d25df0171b3225d20fc68/ruamel.yaml-0.15.45.tar.gz
(ruamel_yaml_no_pip) $ python -m ruamel.yaml
/tmp/ruamel_yaml_no_pip/bin/python: Error while finding module specification for 'ruamel.yaml' (ModuleNotFoundError: No module named 'ruamel')
(ruamel_yaml_no_pip) $ tar xf ruamel.yaml-0.15.45.tar.gz
(ruamel_yaml_no_pip) $ mv ruamel.yaml-0.15.45 lib/python3.7/site-packages/ruamel/yaml
(ruamel_yaml_no_pip) $ python -c 'from ruamel.yaml import YAML; print(YAML().load("{hello: world}")["hello"])'
world
(ruamel_yaml_no_pip)
(ruamel_yaml_no_pip) $ python -c 'from ruamel.yaml import __with_libyaml__ as X; print(X)'
False
(The URL is copied from the 0.15.45 project download page)
For development I normally just make a soft link from a virtualenv's site-packages to my ruamel directory.
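For illustration, such a soft link could look like this (the checkout location ~/src/ruamel is an assumption, not something from the question; the virtualenv is the one created above):
$ ln -s ~/src/ruamel /tmp/ruamel_yaml_no_pip/lib/python3.7/site-packages/ruamel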
I don't know how and if that translates to a buildroot environment (if so please publish your result).

I overlooked the buildroot documentation.
There is a critical parameter to define: SETUP_TYPE = setuptools rather than SETUP_TYPE = distutils.
With the following snippet:
PYTHON_RUAMEL_YAML_VERSION = 0.15.45
PYTHON_RUAMEL_YAML_SOURCE = ruamel.yaml-$(PYTHON_RUAMEL_YAML_VERSION).tar.gz
PYTHON_RUAMEL_YAML_SITE = https://pypi.python.org/packages/63/a5/dba37230d6cf51f4cc19a486faf0f06871d9e87d25df0171b3225d20fc68
PYTHON_RUAMEL_YAML_SETUP_TYPE = setuptools
PYTHON_RUAMEL_YAML_LICENSE = MIT
PYTHON_RUAMEL_YAML_LICENSE_FILES = LICENSE
PYTHON_RUAMEL_YAML_ENV += RUAMEL_NO_PIP_INSTALL_CHECK=1
$(eval $(python-package))
ruamel.yaml installs perfectly on the target image.

Related

Use Python library in Slurm job

I want to run a job on Slurm and my Python script needs the evaluate package which I have on my local machine. I don't know if I could change the Python path on the server to match the one on my local machine, and if I could I'm afraid I might break the system.
So I followed this answer, and included a requirements.txt file with just evaluate==0.1.2 in it, and I get even more errors:
load GCC/10.2.0 (PATH, MANPATH, INFOPATH, LIBRARY_PATH, LD_LIBRARY_PATH, STD COMP VARS)
load ROCM/5.1.1 (PATH, MANPATH, LD_LIBRARY_PATH, LIBRARY_PATH, C_INCLUDE_PATH)
Set INTEL compilers as MPI wrappers backend
load mkl/2018.4 (LD_LIBRARY_PATH)
load PYTHON/3.7.4 (PATH, MANPATH, LD_LIBRARY_PATH, LIBRARY_PATH, PKG_CONFIG_PATH, C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, PYTHONHOME, PYTHONPATH)
/var/spool/slurmd/job216863/slurm_script: line 12: virtualenv: command not found
/var/spool/slurmd/job216863/slurm_script: line 16: /env/bin/activate: No such file or directory
ERROR: Could not find a version that satisfies the requirement evaluate==0.1.2 (from versions: none)
ERROR: No matching distribution found for evaluate==0.1.2
Traceback (most recent call last):
File "eval_comet.py", line 1, in <module>
from evaluate import load
ModuleNotFoundError: No module named 'evaluate'
Most of the time, the Python version on HPC clusters is old. My Uni's HPC cluster has Python 3.7. If you wish to create a Python virtual environment (not conda) with a newer version, then there is a trick.
Load the Anaconda module; some systems use module load and some just use load, depending on your organisation.
[s.1915438@sl2 ~]$ module load anaconda/2021.05
[s.1915438@sl2 ~]$ conda create -n surrogate python=3.8
Here I created a Conda environment named surrogate with Python 3.8; you can choose any version you like. Now you can activate the Conda environment and check the Python version.
[s.1915438@sl2 ~]$ source activate surrogate
(surrogate) [s.1915438@sl2 ~]$ which python
~/.conda/envs/surrogate/bin/python
(surrogate) [s.1915438@sl2 ~]$ python --version
Python 3.8.13
Now navigate to the directory where you want to install your Python virtual environment and install the virtual environment using the following command.
(surrogate) [s.1915438@sl2 s.1915438]$ mkdir modulus_pysdf
(surrogate) [s.1915438@sl2 s.1915438]$ cd modulus_pysdf/
(surrogate) [s.1915438@sl2 modulus_pysdf]$ python3 -m venv modulus_pysdf
Log out (Ctrl+D) from the server to exit the Conda environment and then log in again. Remember, in my case the path to the Python virtual environment was /scratch/s.1915438/modulus_pysdf.
This is how I will activate the Python virtual environment.
[s.1915438@sl2 ~]$ cd /scratch/s.1915438
[s.1915438@sl2 s.1915438]$ cd modulus_pysdf/
[s.1915438@sl2 modulus_pysdf]$ source modulus_pysdf/bin/activate
Now I can check the Python version and the path.
(modulus_pysdf) [s.1915438@sl2 modulus_pysdf]$ python --version
Python 3.8.13
(modulus_pysdf) [s.1915438@sl2 modulus_pysdf]$ which python
/scratch/s.1915438/modulus_pysdf/modulus_pysdf/bin/python
As usual, I can install any package using pip. For example, to install evaluate from PyPI:
pip install evaluate
Or if you have a requirements.txt file then you can do this. See this for more details.
cat requirements.txt | grep -Eo '(^[^#]+)' | xargs -n 1 pip install
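To tie this back to the Slurm job itself: once the environment exists, the batch script only needs to activate it rather than create it. A minimal sketch, with hypothetical resource values and the paths from the example above:
#!/bin/bash
#SBATCH --job-name=eval_comet
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
# Activate the pre-built virtual environment (path from the example above)
source /scratch/s.1915438/modulus_pysdf/modulus_pysdf/bin/activate
# evaluate==0.1.2 was installed into it beforehand with pip
python eval_comet.py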

Installing matplotlib / basemap on Azure Databricks

I'm working on a POC with netCDF (.nc) files. I would like to do some visualisation, and while trying to install Basemap I'm having some issues.
As per the pre-requisites, I have numpy and matplotlib installed.
geos is already installed.
When installing basemap from git with %sh pip install --user git+https://github.com/matplotlib/basemap.git I get the error below.
Collecting git+https://github.com/matplotlib/basemap.git
Cloning https://github.com/matplotlib/basemap.git to /tmp/pip-req-build-w20pcpms
Running command git clone -q https://github.com/matplotlib/basemap.git /tmp/pip-req-build-w20pcpms
ERROR: Command errored out with exit status 1:
command: /databricks/python3/bin/python3.7 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-w20pcpms/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-w20pcpms/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-req-build-w20pcpms/pip-egg-info
cwd: /tmp/pip-req-build-w20pcpms/
Complete output (18 lines):
checking for GEOS lib in /root ....
checking for GEOS lib in /root/local ....
checking for GEOS lib in /usr ....
checking for GEOS lib in /usr/local ....
checking for GEOS lib in /sw ....
checking for GEOS lib in /opt ....
checking for GEOS lib in /opt/local ....
Can't find geos library in standard locations ('/root', '/root/local', '/usr', '/usr/local', '/sw', '/opt', '/opt/local').
Please install the corresponding packages using your
systems software management system (e.g. for Debian Linux do:
'apt-get install libgeos-3.3.3 libgeos-c1 libgeos-dev' and/or
set the environment variable GEOS_DIR to point to the location
where geos is installed (for example, if geos_c.h
is in /usr/local/include, and libgeos_c is in /usr/local/lib,
set GEOS_DIR to /usr/local), or edit the setup.py script
manually and set the variable GEOS_dir (right after the line
that says "set GEOS_dir manually here".
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Depending on your runtime version, you may be pointing at the wrong version of Python (I'm assuming Python 3) with your pip commands. Also, if you're installing Python packages through pip inside of a notebook environment, you're going to have a bad time. The best way to install through pip on a cluster is to use an init script.
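As a rough sketch of that approach (not an official Databricks recipe), a cluster-scoped init script could install the GEOS headers and then build basemap against them, assuming a Debian-based runtime and the /databricks/python3 interpreter that appears in the log above:
#!/bin/bash
# Hypothetical init script: install the GEOS C library and headers system-wide,
# then install basemap into the cluster's Python 3 environment.
apt-get update
apt-get install -y libgeos-dev
/databricks/python3/bin/pip install git+https://github.com/matplotlib/basemap.git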

cannot activate virtualenv: No such file or directory

I have a problem activating a virtualenv.
I'm working on a server over an SSH secure shell.
My final goal is to activate the virtualenv and run the latest version of tensorflow.
These are the commands I ran:
jeonguyoang@vision6:~$ python3 -m venv tfenv
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.
apt-get install python3-venv
You may need to use sudo with that command. After installing the python3-venv
package, recreate your virtual environment.
jeonguyoang@vision6:~$ source tfenv/bin/activate
-bash: tfenv/bin/activate: No such file or directory
jeonguyoang@vision6:~$ cd tfenv
jeonguyoang@vision6:~/tfenv$ ls
bin include lib lib64 pyvenv.cfg
jeonguyoang@vision6:~/tfenv$ cd bin
jeonguyoang@vision6:~/tfenv/bin$ ls
python python3
I think that there is no activate file.
Maybe re-installing virtualenv is the answer, but I cannot interfere with the server settings.
Check if you have python 2 versions of pip and python (python-all & python-pip packages). Venv installs both v2 and v3 versions of python & pip (regardless of python version of venv).
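For completeness, the error message quoted in the question already points at a fix; a minimal sketch, assuming a Debian/Ubuntu server on which you (or an admin) can install packages:
sudo apt-get install python3-venv   # as the venv error message suggests
rm -rf ~/tfenv                      # discard the half-created environment
python3 -m venv ~/tfenv             # recreate it; bin/activate should now exist
source ~/tfenv/bin/activate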

Python virtualenv ImportError: No module named 'zlib'

I am on an Ubuntu machine, which has Python 2.7.6 as its default python. It also has Python 3.4.3, with both versions located in /usr/bin/.
I have downloaded virtualenv and virtualenvwrapper. I then downloaded the current version of Python, 3.5.1. In its directory I ran the following commands:
./configure
make
make test
sudo make altinstall
Python 3.5.1 is now installed in /usr/local/bin/.
I now run the following commands:
mkvirtualenv test1
mkvirtualenv test2 -p /usr/bin/python3
mkvirtualenv test3 -p /usr/local/bin/python3.5
Environment test1 successfully created with Python version 2.7.6, and environment test2 successfully created with Python version 3.4.3. However, test3 fails with the following error:
ImportError: No module named 'zlib'
I have seen it mentioned that I need to have "zlib" installed when compiling Python to begin with, though make test didn't seem to give any problems. Do I just need to download and compile zlib from www.zlib.net and recompile Python 3.5?
zlib is a built-in module for Python 3.5.
I think you just need to re-compile Python 3.5...
Look at that link for Python virtualenv:
https://www.reddit.com/r/linux4noobs/comments/3uwk76/help_using_python_in_linux/
Get the Python source and extract it:
wget https://www.python.org/ftp/python/3.5.0/Python-3.5.0.tgz
tar xvf Python-3.5.0.tgz
Configure for a local install:
cd Python-3.5.0/
./configure --prefix=$HOME/python35
make
If it complains about missing dependencies, install them, make clean and repeat.
make install
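On Debian/Ubuntu the missing dependency behind this particular error is usually the zlib development headers; a minimal sketch of the rebuild, assuming apt access:
sudo apt-get install zlib1g-dev     # headers needed so Python's zlib module gets built
cd Python-3.5.0/
make clean
./configure --prefix=$HOME/python35
make
make install
mkvirtualenv test3 -p $HOME/python35/bin/python3.5   # retry with the rebuilt interpreter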

package is installed via pip in wrong (src) directory instead of site packages

I'm installing this package into a virtualenv using virtualenvwrapper and pip with this command:
pip install -e git+git://github.com/mr-stateradio/django-exchange.git#egg=django_exchange-master
Interestingly the package is then placed into a src folder, and not into the site-packages folder which I would have expected. The package is placed into this folder:
<path-to-my-virtual-env>/testenv/src/django-exchange-master/exchange
Instead of this:
<path-to-my-virtual-env>/testenv/lib/python2.7/site-packages
I assume something is wrong with the pip install command I'm using or with the setup.py of the package.
The -e option tells pip to install packages in “editable” mode. If you remove the -e option, pip will install the package into <venv path>/lib/Python_version/site-packages. Don't forget to remove the packages inside <venv path>/src, because python looks for the packages inside <venv path>/src first.
pip supports installing from Git, Mercurial, Subversion and Bazaar, and detects the type of VCS using url prefixes: “git+”, “hg+”, “bzr+”, “svn+”.
e.g
$ pip install -e git+https://git.repo/some_pkg.git#egg=SomePackage # from git
$ pip install -e hg+https://hg.repo/some_pkg#egg=SomePackage # from mercurial
$ pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomePackage # from svn
$ pip install -e git+https://git.repo/some_pkg.git@feature#egg=SomePackage # from 'feature' branch
VCS projects can be installed in editable mode (using the --editable option) or not.
For editable installs, the clone location by default is <venv path>/src/SomeProject in virtual environments, and <cwd>/src/SomeProject for global installs. The --src option can be used to modify this location.
For non-editable installs, the project is built locally in a temp dir and then installed normally.
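Applied to the package from the question, a non-editable install that ends up in site-packages might look like this (same repository URL as in the question; the egg name is an assumption):
$ pip install git+git://github.com/mr-stateradio/django-exchange.git#egg=django_exchange
$ ls <path-to-my-virtual-env>/testenv/lib/python2.7/site-packages   # the package should now appear here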