Is there a way to build ZFS for the linux-virt kernel on Alpine? - alpine-linux

I want to run ZFS inside a virtual machine using Alpine Linux. The linux-virt kernel is much smaller and does not pull in the 200+ MB of firmware files listed as dependencies, so that is the kernel I selected for the VM. However, I now find that there is no zfs-virt package, only zfs-vanilla, which installs the vanilla kernel and all the firmware as dependencies.
Is there a zfs-virt package available, perhaps in a third-party repository? If not, I am not against building the package myself, but I'm relatively new to Alpine and have not yet figured out how its build system works, nor whether it is possible to build against an already-compiled kernel (in my own past experience, the only way I've successfully built kernel modules is with the source tree of the target kernel).

Just copy the latest zfs-lts package to zfs-virt, then run abuild checksum and abuild -r. Alternatively, there is a prebuilt package in my repository.
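If you want to build it yourself, a rough sketch of that copy-and-abuild flow, assuming a checked-out aports tree and a working abuild setup (the directory names and the blanket sed are illustrative, not exact):
git clone --depth 1 https://gitlab.alpinelinux.org/alpine/aports.git
cp -r aports/main/zfs-lts aports/main/zfs-virt
cd aports/main/zfs-virt
# retarget the APKBUILD from the lts kernel to the virt kernel
sed -i 's/zfs-lts/zfs-virt/; s/linux-lts/linux-virt/' APKBUILD
abuild checksum    # regenerate the source checksums
abuild -r          # build, resolving build dependencies automatically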
To install the prebuilt package:
(cd /etc/apk/keys; sudo curl -LO https://repo.wener.me/alpine/wenermail@gmail.com-5dc8c7cd.rsa.pub )
echo https://repo.wener.me/alpine/v3.11/community | sudo tee -a /etc/apk/repositories
sudo apk add zfs-virt
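Once installed, a quick sanity check (this assumes the VM is actually running the linux-virt kernel the module was built against):
sudo modprobe zfs
zpool status    # prints "no pools available" on a fresh install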

Related

nvidia-docker - can cuda_runtime be available while building a container?

While attempting to compile darknet during the build of a Docker container, I constantly run into the error include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory.
I am building the container from the instructions here: https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2. I have a simple Dockerfile I am testing with; the relevant parts:
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
...
WORKDIR /
RUN apt-get install -y git
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /darknet
# Set OpenCV makefile flag
RUN sed -i '/OPENCV=0/c\OPENCV=1' Makefile
RUN sed -i '/GPU=0/c\GPU=1' Makefile
#RUN ln -s /usr/local/cuda-9.2 /usr/local/cuda
# HERE I have been playing with commands to show me the state of the docker image to try to troubleshoot the problem
RUN find / -name "cuda_runtime.h"
RUN ls /usr/local/cuda/lib64/
RUN less /usr/local/cuda/README
RUN make
Most of the documentation I see references using the NVIDIA libraries when running a container, but darknet compiles differently when built with GPU support, so I need cuda_runtime.h available at build time.
Perhaps I misunderstand what nvidia-docker is doing. I'm assuming nvidia-docker exists because the NVIDIA code must be installed on the actual host machine, not inside the container, and that it uses some mechanism to share that "native" code with the containers so the GPU can be managed. Is that correct?
Should I even be trying to build darknet when building my container, or should I install it on the host machine and then make it available to the container somehow? This seems to go against the portability of containers, but I can live with some constraints to get access to the GPU.
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
Your image only has the bits and pieces of CUDA 9.2 needed to run a CUDA app, but it does not have the bits needed to build one.
You need to use the -devel variant of the base image, e.g. nvidia/cuda:9.2-devel-ubuntu16.04, which also contains the headers (including cuda_runtime.h) and the build tools.

How to download, compile & install ONLY the libpq source on a server that DOES NOT have PostgreSQL installed

How can I download, compile, make & install ONLY the libpq source on a server (Ubuntu) that DOES NOT have PostgreSQL installed?
I have found the libpq source here. However, it does NOT seem to be separable from the rest of PostgreSQL.
I DO NOT want to install the entire PostgreSQL. I want to use libpq as a C interface to PostgreSQL on a DIFFERENT server (also Ubuntu) that DOES have it installed.
I also found this old link which indicates that the above is POSSIBLE, but not HOW to do it. Thanks in advance.
I have found the libpq source here. However it does NOT seem to be separable from the entire PostgreSQL.
It has to be configured with the entire source tree because that's what generates the necessary Makefile parts, but once configured, make && make install can be run inside the src/interfaces/libpq directory alone, leaving the rest of the tree untouched.
In steps:
download the source code archive, for example https://ftp.postgresql.org/pub/source/v9.4.1/postgresql-9.4.1.tar.bz2
unpack into a build directory: tar xjf ~/Downloads/postgresql-9.4.1.tar.bz2
apt-get install libssl-dev if it's not installed already
cd into it and configure: cd postgresql-9.4.1; ./configure --with-openssl --without-readline
Assuming configure succeeds, cd into src/interfaces/libpq and run make
still in the libpq directory, run make install as root: sudo make install.
That will install into /usr/local/pgsql and its subdirectories, as a library independent of and insulated from the one packaged in Ubuntu, if that happens to be installed. To install somewhere else, pass the location to configure with the --prefix option.
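Collected into a single runnable sequence (same version and flags as the steps above):
sudo apt-get install libssl-dev
wget https://ftp.postgresql.org/pub/source/v9.4.1/postgresql-9.4.1.tar.bz2
tar xjf postgresql-9.4.1.tar.bz2
cd postgresql-9.4.1
./configure --with-openssl --without-readline
cd src/interfaces/libpq
make
sudo make install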
Besides downloading and configuration, the steps are:
cd src/interfaces/libpq; make; make install; cd -
cd src/bin/pg_config; make install; cd -
cd src/backend; make generated-headers; cd -
cd src/include; make install; cd -
These steps will give you the libpq library and headers, a pg_config binary, and all the PostgreSQL backend headers, so that you can compile things like libpqxx correctly.
(I've just tested with postgresql-9.6.5.)
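As a quick check that the installed library is usable, you can compile a trivial program against it (test.c is a made-up example; the paths assume the default /usr/local/pgsql prefix):
cat > test.c <<'EOF'
#include <libpq-fe.h>
#include <stdio.h>

int main(void)
{
    /* PQlibVersion() needs no server; it just proves we linked libpq */
    printf("libpq version: %d\n", PQlibVersion());
    return 0;
}
EOF
cc test.c -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq -o test
LD_LIBRARY_PATH=/usr/local/pgsql/lib ./test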

Virtualenv - Automate project requirements deployment

I'm using Fabric to automate the deployment routines for my projects.
One of those routines concerns replicating the virtualenv.
Automating the installation of new packages is pretty straightforward with
local $ pip freeze > requirements.txt
remote $ pip install -r requirements.txt
Now if I don't need a package anymore, I can simply
local $ pip uninstall unused_package
But since pip install won't remove packages that are no longer present in the requirements file,
how can I automate removing those packages from the virtualenv?
I'd like to have a command like:
remote $ pip flush -r requirements.txt
Another approach could be - and I know this is not answering your question perfectly - to use the power of the virtualenv you already have:
It is convenient to have known stable package and application environments, let's say identified by revision control tags, to be able to roll back to a known working combination (this is no replacement for testing or a staging environment, though).
So you could simply set up a new virtual environment ("workon your-tag"), populate it again with "pip install -r", leave the old one behind (for some time, e.g. until the new your-tag release is considered stable), and finally remove the old virtualenv(s).
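A sketch of that rotation with virtualenvwrapper (mkvirtualenv/workon/rmvirtualenv assumed available; the tag names are made up):
mkvirtualenv myproject-1.1          # fresh environment for the new tag
workon myproject-1.1
pip install -r requirements.txt     # populate it from scratch
# ...later, once myproject-1.1 is considered stable:
rmvirtualenv myproject-1.0          # finally remove the old environment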
In your fabfile do something like
with cd(stage_dir):
    run("./verify_virtual_env.sh %s" % your_tag)
and have the verify_virtual_env.sh script update the given environment via pip.
Why not just diff the two package lists with sets? It might require a get operation though, if you're operating on a remote box.
On remote
from fabric.api import get, run
run("pip freeze > existing_pkgs.txt")
get("/path/to/existing_pkgs.txt")
So now existing_pkgs.txt is on your local machine. Assuming you have a new requirements file...
with open("requirements.txt", "r") as req_file:
req_pkgs = set(req_file.readlines())
with open("existing_pkgs.txt", "r") as existing_pkgs:
existing = set(existing_pkgs.readlines())
Take the set difference (difference rather than difference_update, which mutates the set in place and returns None):
uninstall_these = existing.difference(req_pkgs)
Then uninstall the packages from your remote host (-y skips the interactive confirmation, which would otherwise hang a Fabric run):
for pkg in uninstall_these:
    run("pip uninstall -y {}".format(pkg))
I ended up keeping the install and uninstall jobs separate.
Install:
pip install -r requirements.txt
Uninstall (each line of requirements.txt is used as a grep pattern against the pip freeze output, so anything installed but no longer listed gets uninstalled):
pip freeze | grep -v -f requirements.txt - | xargs pip uninstall -y

How to migrate virtualenv

I have a relatively big project with many dependencies, and I would like to distribute this project around, but installing the dependencies was a bit of a pain and takes a very long time (pip install takes quite some time). So I was wondering if it is possible to migrate a whole virtualenv to another machine and have it running there.
I tried copying the whole virtualenv, but whenever I try running something, this virtualenv still uses the path of my old machine. For instance when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl and I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
from within the virtual environment:
pip freeze > requirements.txt
Once on the new machine and inside the new environment, copy requirements.txt into the new project folder and run:
sudo pip install -r requirements.txt
Then you should have all the packages that were previously available in the old virtual environment.
When you create a new virtualenv it is configured for the computer it is running on. I even think that it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your old virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you choose to do so, you must also edit the first line (the #! shebang) of bin/pserve, which holds the interpreter path.
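A sketch of that manual fix, using the old path from the question (/home/sshum/backend) and a made-up new location (/home/dev1/backend):
cd /home/dev1/backend
# update the VIRTUAL_ENV path baked into the activate script
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' bin/activate
# rewrite the shebang of installed entry points such as bin/pserve
sed -i '1s|^#!.*/bin/python|#!/home/dev1/backend/bin/python|' bin/pserve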

How to build gstreamer ugly plugins from source

I would like to change some code in one element X of the gstreamer ugly plugins, then rebuild it and use it.
How can I do that?
I have gstreamer-0.10 with the gstreamer-ugly plugins installed.
I would like to download only the gstreamer0.10 ugly plugins code, change it, and use the newly built lib file. How can I do that?
Unfortunately gstreamer-ugly depends on a lot of stuff from at least libgstreamer and plugins-base (if you're using Linux and your distro provides *-dev packages, as Debian/Ubuntu does).
If you're on Debian you could use dpkg-buildpackage after checking out the source using apt-get source. The big advantage here is that all the build dependencies can be installed easily.
The manual way will probably require you to first build all the other gstreamer packages; have a close look at what ./configure tells you.
I'm working on Debian and have already built gstreamer+plugins to backport the recent ones to Ubuntu (although I'm not sure I did it in a best-practice way ;) )
Edit: I'll try to cover the basic steps for Ubuntu here:
add the source repositories to apt (check the "source code" checkbox in the Ubuntu Software Center's "Software Sources" tool)
sudo apt-get install dpkg-dev devscripts
sudo apt-get build-dep gst-plugins-ugly0.10
apt-get source gst-plugins-ugly0.10
change to the newly created gst-plugins-ugly* folder
dpkg-buildpackage (and make sure it works)
change the source to your needs
you can rebuild it at any time using dpkg-buildpackage (though to simply check whether it compiles, plain make may be faster). This creates a .deb file in the parent folder that you can install using dpkg -i.
If it's a useful change you might want to get in touch with the gstreamer-devs ;)
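Collected into one sequence (a sketch: -us -uc merely skips package signing, and the exact name of the resulting .deb depends on the version you build):
sudo apt-get install dpkg-dev devscripts
sudo apt-get build-dep gst-plugins-ugly0.10
apt-get source gst-plugins-ugly0.10
cd gst-plugins-ugly0.10-*/
# edit the element you want to change, then:
dpkg-buildpackage -us -uc
sudo dpkg -i ../gstreamer0.10-plugins-ugly_*.deb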
On a Debian system, run apt-get build-dep gstreamer0.10-plugins-ugly to get all the build dependencies for that package. After that you can build the package from git, a source tarball, or even rebuild the Debian package (using dpkg-buildpackage).