"Version lockfiles" for Alpine apk - alpine-linux

I'm using Alpine Linux 3.7 in a Docker container.
I'm performing some apk commands inside a Dockerfile.
I have successfully pinned my dependencies to a particular major version:
apk add --no-cache \
'openssl<2' \
'freeradius<4' \
'freeradius-mysql<4'
But the exact behaviour of this command is sensitive to when the apk add command is run (i.e. when the Docker image is built).
At the time of writing, these constraints resolve to:
apk add --no-cache \
'openssl=1.0.2m-r0' \
'freeradius=3.0.15-r3' \
'freeradius-mysql=3.0.15-r3'
If I run apk add 'dep<majver' next year, I will probably get different results.
I don't want my dependencies to change version without my knowing.
Yes, not even the minor version. I do not trust semver 100% to keep me safe from breaking changes/regressions, so even minor-version changes should happen only under supervision.
I don't want to hand-write the exact version resolutions (dep=ver). 'dep<majver' is preferred.
Package managers such as npm and yarn have a nice solution to this: version lockfiles.
You'd have two documents under version control:
One manually-created document to express 'dep<majver'
One version lockfile, expressing (for each dependency) a specific dep=ver resolution which fulfils all your constraints.
This would be machine-generated. You would regenerate it deliberately, only when you want to update every dependency to the latest version that satisfies your 'dep<majver' constraints.
Is there some kind of Alpine apk concept that fulfils the role of a "version lockfile"?
There is an obvious Docker solution: do the apk add 'dep<majver' step in a separate Dockerfile, use the resulting image as a base image, and only rebuild that base image when you're ready to bump your dependencies.
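For concreteness, a minimal sketch of that workaround might look like this (the registry name and tag are placeholders I've made up, not anything from the original setup):
# base.Dockerfile - rebuilt only when you deliberately want to bump dependencies
FROM alpine:3.7
RUN apk add --no-cache \
    'openssl<2' \
    'freeradius<4' \
    'freeradius-mysql<4'

# Dockerfile - everyday builds start from the frozen base image
FROM my-registry/freeradius-base:2018-01
# ... rest of the application Dockerfile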
But I'm curious as to whether there's an Alpine apk solution, since this can be viewed as a package-management concern, not a provisioning concern.

Related

Hyperledger FabCar Network issue via Visual Studio Code

While reproducing the steps from an interesting tutorial found online - Hyperledger Fabric 1.4 Tutorial - FabCar Sample Application - I installed all the Hyperledger Fabric binaries via this curl command:
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s
From the Command Prompt window one can see that the script runs correctly, leading to the pull of the hyperledger Docker images.
However, when launching the network from Visual Studio Code by executing ./startFabric.sh javascript from the fabcar sub-directory, I end up with Fabric image issues, which seems paradoxical.
A similar issue was encountered while attempting to bring up the network via the command
./network.sh up
from the test-network sub-directory.
Any relevant feedback would be much appreciated, since I cannot see a rationale for this behaviour.
[EDIT]
As a matter of fact, a version-synchronization issue might have caused the problem reported above: fabric-samples v1.4.9 is not found by the curl command documented online, so the script automatically installs the latest 2.x Fabric binaries instead.
To rule this out, I re-executed the curl command specifying version 1.4.4 instead.
I can therefore confirm that the test-network sub-directory is not part of the Fabric samples pertaining to version 1.4.x.
Furthermore, before installing these binaries, I had removed all containers and related images.
Back in the fabcar sub-directory, running ./startFabric.sh javascript still led to the following network issue:
What strikes me the most is:
firstly, that configtxlator can be found in C:\Users\...\Documents\test4\fabric-samples\bin, as opposed to the path highlighted in the command window;
secondly, I do not see any version incompatibility when scrutinizing byfn.sh.
Finally, manually amending IMAGETAG did not improve the situation.
Best
You seem to have a different version.
In hyperledger/fabric-samples, the test-network does not exist in version 1.4. In other words, you are running the code from the master version of fabric-samples (currently the 2.x version). Since you're following the 1.4 tutorial, run the commands based on the 1.4 documentation and code.
fabric/docs/release-1.4
fabric/samples/release-1.4
[NOTE] It is also very likely that binaries of a different version have been installed. In the Fabric documentation the version to download can be specified, so to download the binaries of the corresponding version you need to execute the command below, including the version parameters.
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s - 1.4.9 1.4.9 0.4.21
[EDIT - 1]
Before the explanation: in 1.4.x, first-network/byfn.sh exists, not test-network/network.sh. Change the branch of your fabric-samples checkout to release-1.4.
Do not run 2.x code while doing the 1.4 tutorial.
./byfn.sh uses the image tag latest by default. That is, if you have pulled the 2.x images, it will run the 2.x images even though you are following the 1.4.x tutorial.
cd fabric-samples
git checkout release-1.4
There are two ways.
Tag the 1.4.x images as latest
Untag all hyperledger latest image tags, and tag the 1.4.x images as latest.
(In fact, the easiest way is to delete both the 2.x and 1.4.x images and execute the command again.)
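For illustration, the retagging might look like this for the peer image (repeat for the other hyperledger images; the 1.4.9 tag follows the example version above and is an assumption on my part):
# remove the 2.x image currently tagged as latest (this only untags it if other tags still reference it)
docker rmi hyperledger/fabric-peer:latest
# pull the 1.4.x image and tag it as latest so byfn.sh picks it up
docker pull hyperledger/fabric-peer:1.4.9
docker tag hyperledger/fabric-peer:1.4.9 hyperledger/fabric-peer:latest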
Or edit in byfn.sh
There is a part that sets IMAGETAG in byfn.sh.
byfn.sh
# default image tag
# IMAGETAG="latest"
IMAGETAG="1.4.9"
Make sure the value there matches your 1.4.x version.
[EDIT - 2]
As for the binaries, I don't know where or how you built them. However, you can easily build them from the fabric GitHub repository.
make binaries
cd $GOPATH/src/github.com/hyperledger
git clone https://github.com/hyperledger/fabric
cd fabric
git checkout release-1.4
make native
Makefile
native - ensures all native binaries are available
configtxgen - builds a native configtxgen binary
configtxlator - builds a native configtxlator binary
cryptogen - builds a native cryptogen binary
idemixgen - builds a native idemixgen binary
peer - builds a native fabric peer binary
orderer - builds a native fabric orderer binary
Path setting
mv $GOPATH/src/github.com/hyperledger/fabric/.build/bin/* <your_bin_path>
# in my case
# mv $GOPATH/src/github.com/hyperledger/fabric/.build/bin/* /usr/local/bin
or
export PATH=$PATH:$GOPATH/src/github.com/hyperledger/fabric/.build/bin
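As a quick sanity check (a sketch; the resolved paths will differ per setup), you can confirm the shell now picks up the rebuilt 1.4.x binaries:
# show which binaries the shell resolves, and print the peer's version
which peer configtxgen configtxlator
peer version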

nvidia-docker - can cuda_runtime be available while building a container?

While attempting to compile darknet during the build of a Docker container, I constantly run into the error include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory.
I am building the container following the instructions here: https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2. I have a simple Dockerfile I am testing with - the relevant parts are:
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
...
WORKDIR /
RUN apt-get install -y git
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /darknet
# Set OpenCV makefile flag
RUN sed -i '/OPENCV=0/c\OPENCV=1' Makefile
RUN sed -i '/GPU=0/c\GPU=1' Makefile
#RUN ln -s /usr/local/cuda-9.2 /usr/local/cuda
# HERE I have been playing with commands to show me the state of the docker image to try to troubleshoot the problem
RUN find / -name "cuda_runtime.h"
RUN ls /usr/local/cuda/lib64/
RUN less /usr/local/cuda/README
RUN make
Most of the documentation I see references using the NVIDIA libraries when running a container, but darknet compiles differently when built with GPU support, so I need cuda_runtime.h to be available at build time.
Perhaps I misunderstand what nvidia-docker is doing - I'm assuming that nvidia-docker exists because the NVIDIA code must be installed on the actual host machine and not inside the container, and they use some mechanism to share the "native" code with the containers so the GPU can be managed - is that correct?
Should I even be trying to build darknet when building my container, or should I be installing it on the host machine and then making it available somehow to the container? This seems to go against the portability of containers, but I can live with some constraints to get access to the GPU.
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
Your image only has bits and pieces of CUDA-9.2 needed to run a CUDA app, but does not have the bits needed to build one.
You need to use the -devel variant.
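For example (assuming the matching 9.2 devel tag is available on Docker Hub), changing the base image is enough to make the CUDA headers available at build time:
# devel images ship the CUDA headers and compiler needed to build, not just run, CUDA apps
FROM nvidia/cuda:9.2-devel-ubuntu16.04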

Availability of snapcraft on AlpineLinux

I was looking for compatibility between the snap package management system and Alpine Linux but could not find any relevant resources. Is there any plan to make it available on Alpine Linux? Is any progress being made in that regard?
To be clear, there are two components here: snapd, which is responsible for running snaps, and Snapcraft, which is responsible for building/creating snaps. You specifically asked about Snapcraft, which, unlike snapd, is currently Ubuntu-specific. This is because it assumes build- and stage-packages are debs, and tries to use apt (and the apt Python bindings) to get them.
This is currently changing to be more extensible, with RPM support probably being added first. Alpine will likely need apk support there.
Another feature coming soon will be to build in lxd containers by default. This may be the easier path, where Snapcraft can run natively on Alpine but then build packages using an Ubuntu container.
If you're curious about snapd, you can see from this table that Alpine does not currently seem to be a target. However, please do log a bug requesting that it be put on the roadmap.

How should I handle Perl module updates when maintaining docker images?

I'm working on building a docker image to be able to run all of our Perl applications. The applications require hundreds of CPAN modules to be installed. The full build of the docker image takes about an hour to complete.
After building the initial image, I'm not sure how best to handle ongoing updates.
We could keep a single Dockerfile in git, modify it as required, and push new builds up to Docker Hub. However, if the person doing the build doesn't have all of the intermediate images, then adding a single CPAN module could be an extremely tedious process, and it might take an hour before they even know whether the new module installs correctly. It would also download every CPAN module again, which seems a bit risky, as there might be breaking changes in newer versions of those modules.
Alternatively, the person doing the build could pull the latest Docker Hub image, install the CPAN module interactively, commit the build and push the new image to Docker Hub. However, then we only have our Docker Hub images, but no master Dockerfile.
Or another option would be to create a Dockerfile for each new build, which references the previous dockerhub image. This seems overly complicated though.
Option 1) seems wrong. I'm fairly sure we don't want to be rebuilding the entire image from the base OS just to install one additional module. However, being dependent on images without Dockerfiles seems risky as well.
You could use the standard package manager of your underlying OS in your Docker image.
For example, if it's RedHat, then use yum, and only use CPAN for modules that are not available as OS packages:
FROM centos:centos7
RUN yum -y install cpanm gcc perl perl-App-cpanminus perl-Config-Tiny && yum clean all
# install the remaining modules from CPAN (the trailing exit 0 keeps a module failure from aborting the image build)
RUN cpanm Some::Module; rm -fr /root/.cpanm; exit 0
taken from here and modified
I would try to have a base image which the actual applications then use.
I would also avoid doing things interactively (i.e. script everything in a Dockerfile), as you want to be able to repeat the build when upstream dependencies change, which Docker Hub can do for you.
EDIT
You can convert Perl modules into your own Debian packages using dh-make-perl.
You can load these into your own Ubuntu repository using reprepro, or a paid solution such as Artifactory.
These can then be installed using apt-get once you add your repository as a source within the Dockerfile.
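A rough sketch of that flow (the module name, repository path and distribution codename here are made-up placeholders):
# build a .deb from a CPAN module
dh-make-perl --build --cpan Some::Module
# publish it into your own apt repository managed with reprepro
reprepro -b /srv/apt includedeb xenial libsome-module-perl_*.deb
# inside the Dockerfile, add that repository as an apt source and install normally:
# RUN apt-get update && apt-get install -y libsome-module-perl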
When I have tried a similar thing before, there were a few problems:
Your apps don't work with the latest version of modules
There are far more dependencies than you expected
Some modules won't package
The benefits are:
You keep the build tools (gcc, etc) off the app servers
You know much more about your dependencies

Re-compile QtEmbedded in OpenEmbedded without Examples

I have a touch-panel computer running ARM9. I have successfully built a QtEmbedded SDK image with the OpenEmbedded toolchain (I am a newbie in this area) for ARM9. I'd like to re-build the QtEmbedded image with only a few of the examples it comes with (not all of them), due to space limitations on NAND. How can I re-compile this? I have commented out examples in examples.pro, but it seems it is still building the image from existing packages. I am using the command: bitbake -b qt4-embedded-image
Please help.
Nimesh
You need to re-run the configure step of the bitbake build so the Makefiles are regenerated from the .pro files. You can do this by removing the configure stamp for this package: just rm the do_configure stamp and re-run the bitbake command you used above.
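A rough sketch, assuming a typical OpenEmbedded tmp/stamps layout (the exact stamp path and recipe name vary between setups, so adjust the glob to your build directory):
# delete the do_configure stamp so bitbake re-runs configure against the edited .pro files
rm -f tmp/stamps/*qt4-embedded*do_configure*
# then re-run the same bitbake command as before
bitbake -b qt4-embedded-image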