Availability of snapcraft on Alpine Linux

I was looking for compatibility between snap package management system and alpine linux but could not find any relevant resources. Is there any plan to make it available on alpine linux? Any progress being made in that regard?

To be clear: there are two components here: snapd, which is responsible for running snaps, and Snapcraft, which is responsible for building/creating snaps. You specifically asked about Snapcraft, which, unlike snapd, is currently Ubuntu-specific. This is because it assumes that build-packages and stage-packages are debs, and it tries to use apt (and the apt Python bindings) to fetch them.
This is currently being made more extensible, with RPM support probably being added first. Alpine will likely need apk support there.
Another feature coming soon is building in LXD containers by default. This may be the easier path: Snapcraft could run natively on Alpine but build packages inside an Ubuntu container.
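Until that lands, a rough manual approximation is to drive an Ubuntu container yourself with plain LXD. This is only a sketch; it assumes LXD is installed and that the snapcraft project lives in ~/my-snap:
# Launch an Ubuntu container and install snapcraft inside it
lxc launch ubuntu:16.04 snapcraft-build
lxc exec snapcraft-build -- apt-get update
lxc exec snapcraft-build -- apt-get install -y snapcraft
# Copy the project in, build it, and pull the result (including the .snap) back out
lxc file push -r ~/my-snap snapcraft-build/root
lxc exec snapcraft-build -- sh -c "cd /root/my-snap && snapcraft"
lxc file pull -r snapcraft-build/root/my-snap .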
If you're curious about snapd, you can see from this table that Alpine does not currently seem to be a target. However, please do log a bug requesting that it be put on the roadmap.

Related

Making VS Code Remote extension work with GLIBC 2.17 installed in non-standard locations

I'm trying to use the VS Code Remote extension to connect to a remote host running RHEL/CentOS 6, but it fails to connect since CentOS 6 ships with GLIBC 2.12 and GLIBCXX 3.4.1. As mentioned in this post, the workaround to get the extension to work is to install GLIBC >= 2.17 and GLIBCXX >= 3.4.18.
Unfortunately, I don't have sudo access on the server, so I won't be able to update these libraries using the bash script provided in the link. Also, in this SO post, the author says not to update the system GLIBC since it can break system applications. So I tried something different: I extracted those rpm packages, as described in this blog, inside my home folder. I then updated the PATH and LD_LIBRARY_PATH environment variables in ~/.bash_profile to point to these new locations, but the node binary (in VS Code Remote) still can't find these libraries.
Is there a way to let the node binary know where to look for these libraries? More precisely, can someone explain how I can make this extension work without sudo access?
I got it to work by installing gcc and glibc using Linuxbrew. See this post for more details: https://github.com/microsoft/vscode-remote-release/issues/103#issuecomment-546551293.
A couple of things to take note of:
The node binary version in VS Code Server may vary between commits. In the GitHub comment above, the author uses node#10; you can replace it with node#12 and everything will still work.
Make sure glibc and gcc are properly installed using Linuxbrew. This step is key.
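For reference, the gist of that workaround looks roughly like this. This is only a sketch, assuming the untar-anywhere style Linuxbrew install into ~/.linuxbrew; formula names and paths may differ on your system:
# Install Linuxbrew into the home directory, no sudo required
git clone https://github.com/Homebrew/brew ~/.linuxbrew/Homebrew
mkdir -p ~/.linuxbrew/bin
ln -s ~/.linuxbrew/Homebrew/bin/brew ~/.linuxbrew/bin/brew
eval "$(~/.linuxbrew/bin/brew shellenv)"
# Install a newer glibc and gcc without touching the system copies
brew install glibc gcc
# Depending on your setup you may also need to expose the brewed libraries, e.g. in ~/.bash_profile
export PATH="$HOME/.linuxbrew/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/.linuxbrew/lib:$LD_LIBRARY_PATH"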

Trouble installing SUMO 0.30.0 in Ubuntu 16.04 from source code

I need to install SUMO 0.30.0 to be used with the VEINS_INET subproject in veins 4.6. I have tried following the instructions here and suggestions from forums, but I haven't had any luck installing SUMO. I run ./configure (trying various tool/library options) and then run sudo make, but all I get is "target marouter failed" or "nothing to be done for 'install-exec-am' 'install-data-am'".
Does anyone know how to install sumo-0.30.0 from source and/or make the veins_inet subproject work with the latest version of sumo-0.32.0?
Don't run sudo make.
Don't run sudo make.
Your problem is probably related to a dependency/packaging change in 16.04, which is explicitly pointed out in the veins tutorial:
Note that Ubuntu 16.04 no longer includes libproj0; this can be worked around by temporarily adding the package repository of, e.g., Ubuntu Vivid when installing this package.
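A sketch of that workaround (Vivid has since moved to the old-releases archive, so the mirror URL may need adjusting):
# Temporarily add the Ubuntu Vivid repository, install libproj0, then remove the repository again
echo "deb http://old-releases.ubuntu.com/ubuntu vivid main universe" | sudo tee /etc/apt/sources.list.d/vivid-temp.list
sudo apt-get update
sudo apt-get install libproj0
sudo rm /etc/apt/sources.list.d/vivid-temp.list
sudo apt-get update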
Short answer: unfortunately, this means that in the long term you're going to have to either package SUMO yourself, use a version someone else compiled (see this launchpad for example), or rely on an old version.
Long answer:
In general, I would recommend building SUMO from source by also building its dependencies from source, since I've encountered this problem on various distributions. In particular, the fox, proj, and gdal libraries tend to be packaged in different versions, which, together with changes in the SUMO source code, frequently breaks the build. I currently use this script (with the package versions downloaded) to compile SUMO -- but this is for 0.30.0, and it breaks if any of the referenced source packages are moved (which happens quite often). My general recommendation would be to either use a completely isolated version of SUMO (i.e., compiling as much as possible by hand) or rely on a pre-packaged version (see above), as long as that version is recent enough to work with VEINS.
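As a rough outline only (version numbers, paths, and flags below are assumptions you will have to adapt), building into a private prefix without sudo looks something like this:
# Build the troublesome dependencies into a private prefix first (proj shown; fox and gdal are analogous)
PREFIX=$HOME/sumo-deps
tar xf proj-4.9.3.tar.gz && cd proj-4.9.3
./configure --prefix=$PREFIX && make && make install
cd ..
# Then point SUMO's build at that prefix; note that sudo is never needed
tar xf sumo-src-0.30.0.tar.gz && cd sumo-0.30.0
./configure --prefix=$HOME/sumo-0.30.0 CPPFLAGS=-I$PREFIX/include LDFLAGS=-L$PREFIX/lib
make -j4 && make install
export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH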

How should I handle Perl module updates when maintaining docker images?

I'm working on building a docker image to be able to run all of our Perl applications. The applications require hundreds of CPAN modules to be installed. The full build of the docker image takes about an hour to complete.
After doing the initial image, I'm not sure how best to handle ongoing updates.
We could keep a single Dockerfile in git, modify it as required, and push new builds up to Docker Hub. However, if the person doing the build doesn't have all of the intermediate images, adding a single CPAN module could be an extremely tedious process, and it might take an hour before they even know whether the new module installs correctly. It would also re-download every CPAN module, which seems a bit risky, as there might be a breaking change in a newer version of some module.
Alternatively, the person doing the build could pull the latest Docker Hub image, install the CPAN module interactively, commit the result, and push the new image to Docker Hub. However, then we only have our Docker Hub images and no master Dockerfile.
Another option would be to create a Dockerfile for each new build that references the previous Docker Hub image. This seems overly complicated, though.
Option 1) seems wrong: I'm fairly sure we don't want to rebuild the entire image from the base OS just to install one additional module. However, being dependent on images without Dockerfiles seems risky as well.
You could use the standard package manager of your underlying OS inside your Docker image.
For example, if it's Red Hat-based, use yum, and only fall back to CPAN for modules that aren't packaged:
FROM centos:centos7
# Install Perl, a compiler, and cpanminus from the distribution packages
RUN yum -y install gcc perl perl-App-cpanminus perl-Config-Tiny && yum clean all
# Install anything the distribution doesn't package from CPAN, then clean the cpanm cache
RUN cpanm Some::Module; rm -fr /root/.cpanm; exit 0
taken from here and modified
I would try to have a base image which the actual applications then use (see the sketch below).
I would also avoid doing things interactively (i.e., script everything in a Dockerfile), because you want to be able to repeat the build when upstream dependencies change, which Docker Hub can do for you.
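A sketch of the base-image split (image and module names are made up for illustration):
# Dockerfile.base: rebuilt rarely, carries the slow CPAN layer
FROM centos:centos7
RUN yum -y install gcc perl perl-App-cpanminus && yum clean all
RUN cpanm --notest Some::Module Another::Module && rm -fr /root/.cpanm

# Dockerfile.app: a separate Dockerfile, rebuilt often, on top of the published base
FROM myorg/perl-base:1.0
COPY . /app
CMD ["perl", "/app/bin/app.pl"]
Adding one more CPAN module then only means extending the base image (ideally as a new RUN line below the existing ones, so earlier layers stay cached) and re-tagging it; the application builds stay fast.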
EDIT
You can convert Perl modules into your own packages using dh-make-perl.
You can load these into your own Ubuntu repository using reprepro, or a paid solution such as Artifactory.
These can then be installed with apt-get when you use your repository as a source from within a Dockerfile; a sketch of the whole pipeline follows.
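A sketch of that pipeline, assuming a reprepro-managed repository under /srv/apt with a distribution called "stable" (all names and URLs here are placeholders):
# Convert a CPAN module into a .deb
dh-make-perl --build --cpan Some::Module
# Add the resulting package to your own repository
reprepro -b /srv/apt includedeb stable libsome-module-perl_*.deb
# On the consuming side, e.g. inside a Dockerfile, use that repository as an apt source
echo "deb http://apt.example.com/ stable main" > /etc/apt/sources.list.d/internal.list
apt-get update && apt-get install -y libsome-module-perl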
When I have tried something similar before, there were a few problems:
Your apps don't work with the latest version of modules
There are far more dependencies than you expected
Some modules won't package
The benefits are:
You keep the build tools (gcc, etc.) off the app servers
You know much more about your dependencies

Use yocto to extend a read-only filesystem with extra packages

I have an embedded Linux "proof of concept" project that wants to add some packages to an existing piece of hardware with a read-only filesystem. I am very new (one week) to Yocto, but it seems like this is possible. I am looking for a general road map of how to achieve it, but any detailed strategy ideas would be helpful to keep in mind as I RTFYM.
It is a networked device, running on ARMv5t hardware.
64GB SD/MMC card is available (empty) and mounted.
telnet, nfs, busybox utils available.
no resident dev tools
The packages I need to add are openssl, python, zeromq, pyzmq, and perhaps other python modules in the future. I cannot place these into the rootfs because it is read-only, but they can reside on the sd card. I am trying to understand how to use Yocto to create this set of packages and collect them together as a build output. What I have so far:
The EXTERNAL_TOOLCHAIN and meta-sourcery setup is working
I can build python and pyzmq independently with bitbake -b
I don't know how to add pyzmq or other modules to the python tree
How do I build and collect just these items without building an entire image?
The python part is running on the hardware, but I just hand-copied it to the NFS folder. I am asking whether this is a valid approach and, if so, for some directional detail. I hope I have provided enough information.
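To make the question concrete, the flow I imagine is roughly the following (recipe names and paths are guesses on my part and would need checking against the actual layers):
# Build just the recipes needed, not a whole image
bitbake openssl python python-pyzmq
# The individual packages then end up under the build's deploy directory,
# e.g. tmp/deploy/ipk/ (or rpm/deb, depending on PACKAGE_CLASSES)
ls tmp/deploy/ipk/
# Generate a package index so the directory can be used as a feed
bitbake package-index
# Then copy the feed onto the SD card and point the target's opkg configuration at it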

How can I deploy my Catalyst application as a debian package (or suitable alternative)?

After testing my Catalyst application and deciding to deploy it, I would like to package it up so I can easily pull it in on the staging and live servers, manage dependencies, and easily roll back via the flexibility of package versioning. As my production OS is Ubuntu, I figured packaging it as a deb package would make the most sense.
I am predicting I will have to create a second package containing all of my Perl module dependencies, as many are not provided by my distribution, or package each of them independently, though that may be a lot of work.
Does anyone have any experience of doing this - or a sane, similar alternative?
To build your own Debian packages out of CPAN packages:
Install Debian helper scripts
sudo apt-get install dh-make-perl
Download MODULE from CPAN and build Debian package
cpan2deb MODULE
dh-make-perl is actually the right tool to turn CPAN modules into Debian packages. Together with apt-file, it can even prepare proper dependencies for you.
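For example, the package generated by cpan2deb can then be installed like any other deb (the file name below is purely illustrative):
sudo dpkg -i libsome-module-perl_1.00-1_all.deb
sudo apt-get -f install   # pull in any missing dependencies from the archive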
Being able to "easily roll back", though, requires special attention to versioning and workflow. There are several approaches that might get the job done here:
If you can force-downgrade packages, you have mostly won already, unless you have very specific maintainer scripts that do work on package upgrades; in that case you will have to make them able to handle the downgrade, too.
If you have to go the regular upgrade path, a versioning scheme like "<newversion>+rollback<oldversion>" or something similar might be worth considering.
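A sketch of the force-downgrade route (package name and versions are made up):
# Check which versions of the package the repositories offer
apt-cache policy myapp
# Pin the older version back in; depending on the apt version you may need --allow-downgrades
sudo apt-get install myapp=1.2.0-1
# Or, with the raw .deb at hand
sudo dpkg -i myapp_1.2.0-1_all.deb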
Dependency packages are always a good idea for deployments, to make sure no required package is actually missing. Also, you might want to invest some time in management frameworks like Puppet; they might come in handy here, too.