Specify a path in bblayers.conf using an environment variable - Yocto

I am trying to configure a Yocto build and would like to be able to specify the path to a layer in bblayers.conf via an environment variable. This will allow engineers to check out a configuration and build it without needing to manually modify bblayers.conf to specify the absolute path to their checkout of the kernel module.
I have tried exporting a variable in my .zshrc file, e.g.
export TRIALS=~/TRIALS
Unfortunately, when I attempt to access it in bblayers.conf the value is not set:
# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
TRIALS_DIR = "${#os.environ['TRIALS']}"
BBLAYERS ?= " \
/home/ahk/poky/meta \
/home/ahk/poky/meta-poky \
/home/ahk/poky/meta-yocto-bsp \
/home/ahk/poky/meta-amd/meta-amd-bsp \
/home/ahk/poky/meta-amd/meta-amd-distro \
/home/ahk/poky/meta-congatec-amd \
/home/ahk/poky/meta-dpdk \
/home/ahk/poky/meta-openembedded/meta-oe \
/home/ahk/poky/meta-openembedded/meta-networking \
/home/ahk/poky/meta-openembedded/meta-python \
${TRIALS_DIR}/hello-layer \
"
The error I get is
bb.BBHandledException
ERROR: Failure expanding variable TRIALS_DIR, expression was
${@os.environ['TRIALS']} which triggered exception KeyError: 'TRIALS'
How do I set an environment variable in my shell so that it can be accessed from bblayers.conf?

BitBake tightly controls the build environment to prevent unwanted contamination. This can be overridden by adding any variables that you need to access to BB_ENV_WHITELIST. (In BitBake 2.0 / Yocto 4.0 "kirkstone" and later, BB_ENV_WHITELIST was renamed to BB_ENV_PASSTHROUGH and BB_ENV_EXTRAWHITE to BB_ENV_PASSTHROUGH_ADDITIONS.)
For my example above I added the $TRIALS variable to BB_ENV_WHITELIST via
export BB_ENV_WHITELIST="$BB_ENV_WHITELIST TRIALS"
After setting BB_ENV_WHITELIST it was necessary to re-initialise the build environment via:
cd ~/poky
source oe-init-build-env
The $TRIALS environment variable could then be accessed directly in bblayers.conf:
# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
/home/ahk/poky/meta \
/home/ahk/poky/meta-poky \
/home/ahk/poky/meta-yocto-bsp \
/home/ahk/poky/meta-amd/meta-amd-bsp \
/home/ahk/poky/meta-amd/meta-amd-distro \
/home/ahk/poky/meta-congatec-amd \
/home/ahk/poky/meta-dpdk \
/home/ahk/poky/meta-openembedded/meta-oe \
/home/ahk/poky/meta-openembedded/meta-networking \
/home/ahk/poky/meta-openembedded/meta-python \
${TRIALS}/hello-layer \
"

Related

How to push a single image with multiple tags to a container registry?

I'm currently using the following command to build an image with a tag:
buildctl build \
--frontend dockerfile.v0 --opt filename=${DOCKER_FILE} --local dockerfile=${DOCKER_ROOT} \
${BUILD_ARGS} --local context=${DOCKER_ROOT} \
--import-cache type=registry,ref=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME} \
--output type=image,name="${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG}",push=true
I want to add a new tag to the same image. I've tried what is suggested in "Buildctl command to tag multiple images", but it fails.
buildctl build \
--frontend dockerfile.v0 --opt filename=${DOCKER_FILE} --local dockerfile=${DOCKER_ROOT} \
${BUILD_ARGS} --local context=${DOCKER_ROOT} \
--import-cache type=registry,ref=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME} \
--output type=image,name="${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG},${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:latest",push=true
error: invalid value acme.com/namespace/image-name:latest
I also tried the following, but it only creates the image with the latest tag, not both:
buildctl build \
--frontend dockerfile.v0 --opt filename=${DOCKER_FILE} --local dockerfile=${DOCKER_ROOT} \
${BUILD_ARGS} --local context=${DOCKER_ROOT} \
--import-cache type=registry,ref=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME} \
--output type=image,name="${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG},name=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:latest",push=true
What I want:
1 image with 2 tags: value of IMAGE_TAG and latest
Does anyone know how to do it?
There seems to be a difference between name="foo" and "name=foo"; the correct one in this case is the latter. The reason is that buildctl parses the --output argument as comma-separated key=value pairs, so the quotes must be part of the value buildctl sees (wrapping name= and both tags) for the comma between the two tags to be read as part of one name list rather than as a pair separator.
I tracked down the issue that enabled this functionality: https://github.com/moby/buildkit/issues/797. The comments show how to do it (depending on your environment, escaping might be necessary, although not always).
Following the scenario from the question, the proper way to build and push a single image with multiple tags is as follows (note the escaped quotes around the name=... value):
buildctl build \
--frontend dockerfile.v0 --opt filename=${DOCKER_FILE} --local dockerfile=${DOCKER_ROOT} \
${BUILD_ARGS} --local context=${DOCKER_ROOT} \
--import-cache type=registry,ref=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME} \
--output type=image,\"name=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG},${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:latest\",push=true

Bitbake build for yocto failed on adding libgdiplus

We are building a yocto image with bitbake.
In the build we need to add libgdiplus, but this causes an error during the build.
This is the error:
ERROR: Nothing RPROVIDES 'libgdiplus' (but //***/sources/meta-variscite-fslc/recipes-fsl/images/fsl-image-gui.bb RDEPENDS on or otherwise requires it)
NOTE: Runtime target 'libgdiplus' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['libgdiplus']
NOTE: Target 'fsl-image-gui' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['fsl-image-gui', 'libgdiplus']
ERROR: Required build target 'fsl-image-gui' has no buildable providers.
Missing or unbuildable dependency chain was: ['fsl-image-gui', 'libgdiplus']
In our bblayers.conf file we've added the meta-mono layer:
LCONF_VERSION = "6"
BBPATH = "${TOPDIR}"
BSPDIR := "${#os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../..')}"
BBFILES ?= ""
BBLAYERS = " \
${BSPDIR}/sources/poky/meta \
${BSPDIR}/sources/poky/meta-poky \
\
${BSPDIR}/sources/meta-openembedded/meta-oe \
${BSPDIR}/sources/meta-openembedded/meta-multimedia \
${BSPDIR}/sources/meta-openembedded/meta-python \
${BSPDIR}/sources/meta-openembedded/meta-perl \
${BSPDIR}/sources/meta-openembedded/meta-filesystems \
${BSPDIR}/sources/meta-openembedded/meta-gnome \
${BSPDIR}/sources/meta-openembedded/meta-networking \
\
${BSPDIR}/sources/meta-freescale \
${BSPDIR}/sources/meta-freescale-3rdparty \
${BSPDIR}/sources/meta-freescale-distro \
\
${BSPDIR}/sources/meta-qt5 \
${BSPDIR}/sources/meta-swupdate \
${BSPDIR}/sources/meta-virtualization \
${BSPDIR}/sources/meta-variscite-fslc \
${BSPDIR}/sources/meta-security \
${BSPDIR}/sources/meta-dotnet-core \
${BSPDIR}/sources/meta-mono \
${BSPDIR}/sources/meta-tempest-driverboard \
"
In our layer.conf file we add libgdiplus via CORE_IMAGE_EXTRA_INSTALL (this is the step that fails):
...
CORE_IMAGE_EXTRA_INSTALL += "giflib"
CORE_IMAGE_EXTRA_INSTALL += "libgdiplus"
Are there other dependencies that need to be installed, or is there an easier way to get libgdiplus into the build?
I'm quite new to Yocto, so any help is welcome.
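One quick check (a diagnostic sketch, assuming the standard bitbake-layers tool from your build environment): ask which configured layers, if any, provide a libgdiplus recipe.
$ bitbake-layers show-recipes libgdiplus
If no layer lists it, the "Nothing RPROVIDES" error is expected, because nothing in BBLAYERS actually ships the recipe.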

Yocto: custom Image /var/lib/dpkg missing

I am building a custom Yocto image based on rocko (2.5.2) for a custom board equipped with a Xilinx Zynq-7000. To generate a wic file I am using sdimage-sota.wks.
I added Debian package management in my local.conf with
PACKAGE_CLASSES ?= "package_deb"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks package-management"
I also ran the command bitbake package-index.
There is no dpkg-package included in my recipes.
After building and flashing the image, I get this error message when I try to install a deb package: dpkg: error: unable to access dpkg status area: No such file or directory
When I extract the rootfs.tar.gz file after building, there is a /var/lib/dpkg directory.
If I flash the wic file to my board inside u-boot using tftpboot and mmc write there is no /var/lib/dpkg directory.
Why is the directory missing after flashing the wic file?
Is it possible that sdimage-sota.wks is excluding it?
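One way to narrow this down (a sketch, assuming the wic tool from the same build can read the rootfs partition; the partition number and paths will differ per image) is to list the directory straight out of the wic file:
$ wic ls tmp/deploy/images/<machine>/<image>.wic:2 /var/lib/
If /var/lib/dpkg shows up here but not on the board, the flashing step is the suspect; if it is already missing from the wic file, the .wks layout or rootfs selection is.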
This is my bblayers.conf:
# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "7"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
# These layers hold recipe metadata not found in OE-core, but lack any machine or distro content
BASELAYERS ?= " \
${TOPDIR}/../external/poky/meta \
${TOPDIR}/../external/poky/meta-poky \
${TOPDIR}/../external/poky/meta-yocto-bsp \
${TOPDIR}/../external/meta-openembedded/meta-oe \
${TOPDIR}/../external/meta-openembedded/meta-networking \
${TOPDIR}/../external/meta-openembedded/meta-webserver \
${TOPDIR}/../external/meta-openembedded/meta-python \
${TOPDIR}/../external/meta-openembedded/meta-filesystems \
${TOPDIR}/../external/meta-ublox-modules \
"
# These layers hold machine specific content, aka Board Support Packages
BSPLAYERS ?= " \
${TOPDIR}/../meta-minicate \
${TOPDIR}/../external/meta-updater \
${TOPDIR}/../external/meta-xilinx/meta-xilinx-bsp \
${TOPDIR}/../external/meta-rust \
${TOPDIR}/../external/meta-sze \
${TOPDIR}/../external/meta-qt5 \
"
BBLAYERS ?= " \
${BSPLAYERS} \
${BASELAYERS} \
"

Build all packages for an image

Is it possible to build all of the packages for a specific image? I know I can build packages individually, but ideally I would like to build all of them at once, through a single command.
Alternatively, is there a way to prevent the do_rootfs task from being executed for a particular image?
Cheers, Donal
First, make an image that contains a packagegroup (or just list your dependencies there):
$ cat sources/meta-custom/recipes-custom/images/only-packages-image.bb
SUMMARY = "All dependencies no image"
LICENSE = "CLOSED"
version = "##DISTRO_VERSION##"
BB_SCHEDULER = "speed"
# option 1 - packagegroup, package list can be reused in real image
CORE_IMAGE_BASE_INSTALL += "\
packagegroup-alldeps \
"
# option 2 - list deps here, package list can not be reused in real image
CORE_IMAGE_BASE_INSTALL += "\
lshw \
systemd \
cronie \
glibc \
sqlite \
bash \
python3-dev \
python3-2to3 \
python3-misc \
python3-pyvenv \
python3-modules \
python3-pip \
wget \
apt \
pciutils \
file \
tree \
\
wpa-supplicant \
dhcpcd \
networkmanager \
curl-dev \
curl \
hostapd \
iw \
"
# remove the rootfs step
do_rootfs() {
}
Second, make your packagegroup if you opted to reuse the list of packages:
$ cat sources/meta-custom/recipes-custom/packagegroups/packagegroup-alldeps.bb
PACKAGE_ARCH = "${MACHINE_ARCH}"
inherit packagegroup
RDEPENDS_${PN} = " \
lshw \
systemd \
cronie \
glibc \
sqlite \
bash \
python3-dev \
python3-2to3 \
python3-misc \
python3-pyvenv \
python3-modules \
python3-pip \
wget \
apt \
pciutils \
file \
tree \
\
wpa-supplicant \
dhcpcd \
networkmanager \
curl-dev \
curl \
hostapd \
iw \
"
Finally, build your new image placeholder:
$ bitbake only-packages-image
In Yocto >=4.0 this is actually pretty easy to achieve. The packagegroup method did not work for me at all.
I don't know if this works in older versions though.
Create a new file in your custom layer, e.g. meta-custom/classes/norootfs.bbclass and put the following lines in there (as far as I noticed the order does not matter):
deltask do_deploy
deltask do_image
deltask do_rootfs
deltask do_image_complete
deltask do_image_setscene
Then, in your meta-custom/recipes-core/images/myimage.bb, add norootfs to your other inherit commands, e.g. the most basic one:
inherit core-image norootfs
You will notice the number of tasks decreasing by a fair amount (mine went from ~4700 to ~3000), and there is no longer a complete rootfs image in build/tmp/deploy/images; apart from bzImage and modules, there are just the plain ipk files in build/tmp/deploy/ipk.
I got this information by looking at https://docs.yoctoproject.org/ref-manual/tasks.html?highlight=do_image and .bbclass files in meta/classes where deltask is frequently used.
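Depending on your BitBake version, there may also be a one-liner that skips the placeholder image entirely: run only the packaging task for everything in the image's task graph (a sketch; check bitbake --help on your release for the exact option):
$ bitbake my-image --runall=do_package_write_ipk
Substitute do_package_write_rpm or do_package_write_deb to match your PACKAGE_CLASSES.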

During SDK build: environment-setup.d/ conflicts between attempted installs

I am trying to build an image for the BeagleBone that contains Qt5, and to generate the SDK for this image.
Problem
My problem is that the build fails in the do_populate_sdk task that creates the SDK, with the following error:
Error: Transaction check error:
file /opt/poky/2.3.1/sysroots/x86_64-pokysdk-linux/environment-setup.d conflicts between attempted installs of nativesdk-cmake-3.7.2-r0.x86_64_nativesdk and nativesdk-qtbase-tools-5.8.0+git0+49dc9aa409-r0.x86_64_nativesdk
A little further up in the log I encountered the following error message:
ERROR: Could not invoke dnf. Command '/home/ubuntu/workspace/bbb/build-toaster-2/tmp/work/my_machine-poky-linux-gnueabi/my-image-dev/1.0-r0/recipe-sysroot-native/usr/bin/dnf [...] ' returned 1:
Added oe-repo repo from file:///home/ubuntu/workspace/bbb/build-toaster-2/tmp/work/my-machine-poky-linux-gnueabi/my-image-dev/1.0-r0/oe-rootfs-repo.
Last metadata expiration check: 0:00:00 ago on Wed Aug 16 11:47:27 2017 UTC.
Dependencies resolved.
What I have
To configure my image I followed the advice here as well as similar posts stating the same elsewhere on the web. This is my (shortened and slightly redacted) image bb-file:
SUMMARY = "..."
LICENSE = "MIT"
IMAGE_LINGUAS = "en-us"
inherit core-image
# for populate_sdk to create a valid toolchain
inherit populate_sdk populate_sdk_qt5
CORE_OS = "..."
KERNEL_EXTRA_INSTALL = "..."
WIFI_SUPPORT = "..."
DEV_SDK_INSTALL = " \
binutils \
binutils-symlinks \
coreutils \
cpp \
cpp-symlinks \
diffutils \
file \
g++ \
g++-symlinks \
gdb \
gdbserver \
gcc \
gcc-symlinks \
gettext \
git \
ldd \
libstdc++ \
libstdc++-dev \
libtool \
make \
perl-modules \
pkgconfig \
python-modules \
python3-modules \
"
DEV_EXTRAS = "..."
EXTRA_TOOLS_INSTALL = " \
acpid \
bc \
bzip2 \
cursor-blink \
devmem2 \
dosfstools \
emmc-installer \
ethtool \
findutils \
i2c-tools \
iperf3 \
htop \
less \
memtester \
nano \
netcat \
procps \
rsync \
sysfsutils \
tcpdump \
unzip \
util-linux \
util-linux-blkid \
wget \
zip \
"
MQTT = "..."
ROOTFS_POSTPROCESS_COMMAND += "..."
QT_TOOLS = " \
qtbase \
qtbase-dev \
qtbase-mkspecs \
qtbase-plugins \
qtbase-tools \
qtserialport-dev \
qtserialport-mkspecs \
qt5-env \
"
QT5_PKGS = " \
qt3d \
qt3d-dev \
...
qtxmlpatterns \
qtxmlpatterns-dev \
qtxmlpatterns-mkspecs \
"
FONTS = "..."
TSLIB = "... "
ADDITIONAL_PKGS = "..."
QT_TEST_APPS = "..."
IMAGE_INSTALL += " \
${CORE_OS} \
${DEV_SDK_INSTALL} \
${DEV_EXTRAS} \
${EXTRA_TOOLS_INSTALL} \
${KERNEL_EXTRA_INSTALL} \
${FONTS} \
${QT_TOOLS} \
${QT5_PKGS} \
${QT_TEST_APPS} \
${MQTT} \
${WIFI_SUPPORT} \
${TSLIB} \
${ADDITIONAL_PKGS} \
"
IMAGE_FEATURES_append = " dev-pkgs"
export IMAGE_BASENAME = "my-image-dev"
I also set DISTRO_FEATURES_remove = "busybox x11 wayland" as well as DISTRO_FEATURES_append = " systemd opengl aufs" in my local.conf.
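For reference, those local.conf lines (exactly as described above) are:
DISTRO_FEATURES_remove = "busybox x11 wayland"
DISTRO_FEATURES_append = " systemd opengl aufs"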
The build runs fine if I remove the inherit populate_sdk_qt5 line, but of course the SDK then lacks Qt support in this case.
What I found out
I found different people having the same problem (example here) but no-one ever got an answer.
I checked the nativesdk-cmake as well as the nativesdk-qtbase_git recipes (both unmodified upstream recipes) to see where the files get created, and neither looks problematic to me:
nativesdk-qtbase_git.bb:
fakeroot do_generate_qt_environment_file() {
mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d/
script=${D}${SDKPATHNATIVE}/environment-setup.d/qt5.sh
echo 'export PATH=${OE_QMAKE_PATH_HOST_BINS}:$PATH' > $script
echo 'export OE_QMAKE_CFLAGS="$CFLAGS"' >> $script
echo 'export OE_QMAKE_CXXFLAGS="$CXXFLAGS"' >> $script
echo 'export OE_QMAKE_LDFLAGS="$LDFLAGS"' >> $script
echo 'export OE_QMAKE_CC=$CC' >> $script
echo 'export OE_QMAKE_CXX=$CXX' >> $script
echo 'export OE_QMAKE_LINK=$CXX' >> $script
echo 'export OE_QMAKE_AR=$AR' >> $script
echo 'export QT_CONF_PATH=${OE_QMAKE_PATH_HOST_BINS}/qt.conf' >> $script
echo 'export OE_QMAKE_LIBDIR_QT=`qmake -query QT_INSTALL_LIBS`' >> $script
echo 'export OE_QMAKE_INCDIR_QT=`qmake -query QT_INSTALL_HEADERS`' >> $script
echo 'export OE_QMAKE_MOC=${OE_QMAKE_PATH_HOST_BINS}/moc' >> $script
echo 'export OE_QMAKE_UIC=${OE_QMAKE_PATH_HOST_BINS}/uic' >> $script
echo 'export OE_QMAKE_RCC=${OE_QMAKE_PATH_HOST_BINS}/rcc' >> $script
echo 'export OE_QMAKE_QDBUSCPP2XML=${OE_QMAKE_PATH_HOST_BINS}/qdbuscpp2xml' >> $script
echo 'export OE_QMAKE_QDBUSXML2CPP=${OE_QMAKE_PATH_HOST_BINS}/qdbusxml2cpp' >> $script
echo 'export OE_QMAKE_QT_CONFIG=`qmake -query QT_INSTALL_LIBS`${QT_DIR_NAME}/mkspecs/qconfig.pri' >> $script
echo 'export OE_QMAKE_PATH_HOST_BINS=${OE_QMAKE_PATH_HOST_BINS}' >> $script
echo 'export QMAKESPEC=`qmake -query QT_INSTALL_LIBS`${QT_DIR_NAME}/mkspecs/linux-oe-g++' >> $script
# Use relocable sysroot
sed -i -e 's:${SDKPATHNATIVE}:$OECORE_NATIVE_SYSROOT:g' $script
}
cmake-3.7.2.bb:
do_install_append_class-nativesdk() {
mkdir -p ${D}${datadir}/cmake
install -m 644 ${WORKDIR}/OEToolchainConfig.cmake ${D}${datadir}/cmake/
mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d
install -m 644 ${WORKDIR}/environment.d-cmake.sh ${D}${SDKPATHNATIVE}/environment-setup.d/cmake.sh
}
environment.d-cmake.sh:
alias cmake="cmake -DCMAKE_TOOLCHAIN_FILE=$OECORE_NATIVE_SYSROOT/usr/share/cmake/OEToolchainConfig.cmake"
For the sake of trying I went ahead and executed the
/home/ubuntu/workspace/bbb/build-toaster-2/tmp/work/my_machine-poky-linux-gnueabi/my-image-dev/1.0-r0/recipe-sysroot-native/usr/bin/dnf
script from
/home/ubuntu/workspace/bbb/build-toaster-2/tmp/work/my_machine-poky-linux-gnueabi/my-image-dev/1.0-r0/recipe-sysroot-native
which got me the following error message:
Traceback (most recent call last):
File "/home/ubuntu/workspace/bbb/build-toaster-2/tmp/work/my-machine-poky-linux-gnueabi/my-image-dev/1.0-r0/recipe-sysroot-native/usr/bin/dnf.real", line 57, in <module>
from dnf.cli import main
ImportError: No module named 'dnf'
The dnf module seems to exist though:
<path as above>/recipe-sysroot-native$ find -name dnf
./usr/lib/python3.5/site-packages/dnf
./usr/bin/dnf
./etc/dnf
./etc/bash_completion.d/dnf
./etc/logrotate.d/dnf
Can you see anything that I am doing wrong? I am absolutely clueless...
I'm building an SDK with CMake and Qt5 without any problem.
Your issue seems to stem from dnf, and as I'm building with ipk without any issue, there might very well be a bug in the rpm handling in OpenEmbedded.
Could you try rebuilding with:
PACKAGE_CLASSES = "package_ipk"
in your local.conf and see if that helps?
Edit:
Anders' answer provides a more elegant solution by switching the packaging class. If you can, check out his approach before trying this workaround.
I found a workaround that worked for me but is not an ideal solution. I am posting it anyway, in case it helps someone:
I figured out that the nativesdk-cmake package somehow collided with the Qt one. Therefore I created a nativesdk-packagegroup-sdk-host.bbappend file in my custom layer, with the following content:
RDEPENDS_${PN}_remove = "\
nativesdk-cmake \
"
This removes the cmake dependency from the SDK build, which works for my purposes, but it merely treats the symptom, not the cause. So I am glad for any other solution.
Just try adding DIRFILES = "" in nativesdk-qtbase.bb (or better, set up a clean and tidy nativesdk-qtbase.bbappend in your custom layer with DIRFILES = "").
This works around clashes due to RPM directory ownership for this package, which is a default policy in standard RPM packaging. See package_rpm.bbclass for details, in particular the write_specfile() function and its walk_files method.
Note: DIRFILES must be defined but left empty for this to work on the current package.
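A minimal sketch of such a bbappend (the layer path and subdirectory are hypothetical; the % wildcard matches any recipe version):
$ cat meta-custom/recipes-qt/qt5/nativesdk-qtbase_%.bbappend
# Leave DIRFILES empty so the generated RPM claims no directory
# ownership, avoiding the environment-setup.d clash with nativesdk-cmake.
DIRFILES = ""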
Voila.
Cheers.
As so61pi mentioned, RPM has strict checking for files/directories. In my case, after installing nativesdk-qtbase the environment.d folder had permissions 775, whereas nativesdk-cmake created that same folder with 755.
I don't know whether this was caused by the generate_qt_environment_file function being executed in a fakeroot environment, but I fixed it by moving its function body into do_install and removing generate_qt_environment_file.
Not sure if this is the correct fix though. I noticed some other recipes use the fakeroot keyword and others don't; I wonder why...
RPM has strict checking for files/directories. The environment-setup.d directory in your question may have a different mode or owner between the two packages.
You could check the rpmfilesCompare function for the exact checks that RPM performs.
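To see the recorded metadata yourself, one option (a sketch; the package file names below are illustrative and will differ per build) is to dump the file information from each RPM and compare the environment-setup.d entries:
$ rpm -qlp --dump nativesdk-cmake-3.7.2-r0.x86_64_nativesdk.rpm | grep environment-setup.d
$ rpm -qlp --dump nativesdk-qtbase-tools-5.8.0-r0.x86_64_nativesdk.rpm | grep environment-setup.d
Each --dump line includes the file's mode, owner and group, so a mismatch on the shared directory is visible directly.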