How to bind kernel patches to different recipes - Yocto

I have two similar boards. I want to write a recipe for each of them, but they will have different kernel patches. How do I do this best? Or should I add new machines to the build?
I added my-machine to mylayer/local.conf
MACHINEOVERRIDES = "imx8qmmek:my-machine"
I created mylayer/recipes-kernel/linux/linux-imx_%.bbappend with my patches:
SRC_URI_imx8qmmek += " file://0001-add-modified-dts.patch "
SRC_URI_imx8qmmek += " file://0002-EP4668-wifi-bt-modified-dts.patch "
SRC_URI_imx8qmmek += " file://0003-EP4822-enable-USB3-hub.patch "
SRC_URI_my-machine += " file://0004-EP4827-comment-usdhc3.tcu.patch "
SRC_URI_imx8qmmek += " file://EP4133_added_BRCM-PCIE.cfg"
do_configure_append_imx8qmmek() {
    bbnote "adding BRCM-PCIE configuration ${PN}"
    cat ../*.cfg >> ${B}/.config
}
Then I ran the command:
MACHINE="my-machine" bitbake -c clean linux-imx
But the terminal output the following error:
WARNING: Layer meta-mylayer should set LAYERSERIES_COMPAT_mylayer in its conf/layer.conf file to list the core layer names it is compatible with.
WARNING: Layer meta-mylayer should set LAYERSERIES_COMPAT_meta-mylayer in its conf/layer.conf file to list the core layer names it is compatible with.
WARNING: You have included the meta-gnome layer, but 'x11' has not been enabled in your DISTRO_FEATURES. Some bbappend files may not take effect. See the meta-gnome README for details on enabling meta-gnome support.
WARNING: Host distribution "ubuntu-18.04" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
MACHINE=my-machine is invalid. Please set a valid MACHINE in your local.conf, environment or other configuration file.

Similar != identical. If they are indeed slightly different, then two machines is the way to go. If they are sufficiently similar (to be determined by yourself :) ), different distros are also an option. It all depends on how different the machines are and how different the final images should be (you might need two machines, two distros, or both).
If you have two similar machines but need two machine configuration files, put most of the common code into an .inc file required by both machine configs. Don't forget to add a MACHINEOVERRIDES entry somewhere in that .inc with a name that makes sense for both machines (e.g., if you have rpi3-lcd and rpi3-iot, have an rpi3-common.inc that adds rpi3-common to MACHINEOVERRIDES). This makes it possible to use VAR_rpi3-common in recipes where patches or machine-specific settings should apply to both machines, without needing both VAR_rpi3-lcd and VAR_rpi3-iot.
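For illustration, here is a minimal sketch of that layout (the rpi3 names are just the example above, not a real BSP):
conf/machine/include/rpi3-common.inc:
# settings shared by both machines
MACHINEOVERRIDES =. "rpi3-common:"
conf/machine/rpi3-lcd.conf:
require conf/machine/include/rpi3-common.inc
# LCD-specific settings go here
conf/machine/rpi3-iot.conf:
require conf/machine/include/rpi3-common.inc
# IoT-specific settings go here
A kernel bbappend can then use SRC_URI_append_rpi3-common = " file://common.patch" for patches both boards need, and SRC_URI_append_rpi3-lcd for board-specific ones. Note that, unlike the local.conf attempt in the question, the machines are defined by actual conf/machine/*.conf files in the layer, which is what makes MACHINE="rpi3-lcd" valid.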

Related

Is there a generic way of creating multiple (slightly different) variants of a Bitbake recipe providing the same software?

I have multiple variants of the same (custom) software. A few variants support different hardware platforms and other variants support the same hardware platform but with different feature sets. For example I have:
mysoftware.bb (basic version)
mysoftware-qt.bb (basic + qt support)
mysoftware-lic.bb (basic + license support)
mysoftware-qt-lic.bb (basic + license + qt support)
My plan was to add PROVIDES = "mysoftware" to all of these recipes and PREFERRED_PROVIDER_mysoftware = "mysoftware-qt" to my machine conf file.
In my image recipe I wanted to add:
IMAGE_INSTALL += "mysoftware"
Many errors came up... for example, I had to set RPROVIDES_${PN} = "mysoftware" in every recipe, and I had to set SSTATE_DUPWHITELIST = "/" in those recipes (and it is still not working...).
My question: is there a standard way to achieve this? Or is this bad practice? It seems like a decent approach to me.
What I wanted to do in the end is:
IMAGE_INSTALL = "... mysoftware" and this will install the appropriate variant, so I don't have to write things like:
IMAGE_INSTALL_xhw = "... mysoftware-x"
IMAGE_INSTALL_yhw = "... mysoftware-y"
and for the very few times when I have to change from mysoftware to mysoftware-qt, I can install it with a package manager (apt in my case):
apt install mysoftware-x-qt
and as a result the conflicting mysoftware-x would be removed and mysoftware-x-qt would be installed easily.
Inside your recipe mysoftware-qt.bb add:
PROVIDES = "virtual/mysoftware"
RPROVIDES_${PN} = "virtual/mysoftware"
Inside your machine.conf add :
PREFERRED_PROVIDER_virtual/mysoftware = "mysoftware-qt"
Finally, add IMAGE_INSTALL += "mysoftware" inside your image recipe.
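To keep the four variants from duplicating each other, a common pattern (only a sketch using the question's hypothetical names; DEPENDS and configure flags are placeholders) is to move the shared logic into an .inc file and keep each variant recipe thin:
mysoftware.inc:
SRC_URI = "..."
PROVIDES = "virtual/mysoftware"
RPROVIDES_${PN} = "virtual/mysoftware"
mysoftware-qt.bb:
require mysoftware.inc
DEPENDS += "qtbase"
EXTRA_OECONF += "--enable-qt"
Each machine configuration then picks its variant with PREFERRED_PROVIDER_virtual/mysoftware, as described above.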

Yocto bitbake error: Nothing Provides for 'recipe-name'

I have a recipe in one of the meta layer. Its structure is given below:
meta-custom/swupdate/recipes-extended/images/recipe-name.bb
The meta-custom layer is also included in bblayers.conf, but when I run bitbake recipe-name I get the error below:
Bitbake error: Nothing PROVIDES 'recipe-name'. Close matches:
Can anyone please let me know what is the reason for this?
Thanks in advance!
Short answer: in your local.conf, add this:
IMAGE_INSTALL_append = " recipe-name "
Be sure to include the spaces in " recipe-name ", otherwise you may run into errors where your recipe name isn't separated from its neighbours, e.g. ERROR: nothing provides "someOtherRecipeyourrecipe-name"
Long answer: disregarding local.conf, within your own layer (if applicable) you may have a distro configuration file at conf/distro/distro.conf (or whatever you named it). This can act as your local.conf, and is the preferred place for such settings in maintained Yocto layers. Within that, you would add:
IMAGE_INSTALL_append = " recipe-name "
just as you would in your local.conf
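If you are creating such a distro configuration from scratch, a minimal sketch (the file name and values here are hypothetical) could look like:
conf/distro/mydistro.conf:
DISTRO = "mydistro"
DISTRO_NAME = "My Distro"
DISTRO_VERSION = "1.0"
IMAGE_INSTALL_append = " recipe-name "
and you would then select it with DISTRO = "mydistro" in your local.conf.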

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command, which will get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, ex:
BBLAYERS ?= " \
  ${TOPDIR}/meta-hello \
  /tmp/poky/meta \
  /tmp/poky/meta-poky \
  /tmp/poky/meta-yocto-bsp \
  "
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
It is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
Can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python used in the recipe: the bb.data module's getVar was deprecated and has now been removed. Replace bb.data.getVar('PV',d,1) with d.getVar('PV', True), ex:
PV_MAJOR = "${@d.getVar('PV', True).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV', True).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
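If you want to reproduce that checksum yourself, something like this should print the same hash (assuming ${SITE} expands to the nano download directory used by the book's recipe, i.e. http://www.nano-editor.org/dist):
$ wget http://www.nano-editor.org/dist/v2.2/COPYING
$ md5sum COPYING
f27defe1e96c2e1ecd4e0c9be8967949  COPYING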
Done!
Now bitbake nano ought to work, and when it is complete you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I have recently worked on that hands-on hello-world project. As far as I can tell, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, when you build with the bitbake that you got from poky, it builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are building version 2.2.6 of the nano editor; its license is GPLv3, so it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from within a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can apply the same concept to the next tasks (do_configure, do_compile), as sketched after this example:
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
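Following the same concept, do_configure and do_compile could look roughly like this (only a sketch, assuming the book's plain autotools-style build with the source unpacked into ${WORKDIR}/${P}):
python do_configure() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src = os.path.join(workdir, p)
    bb.plain("Configuring source")
    os.system("cd " + src + " && ./configure")
}
python do_compile() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src = os.path.join(workdir, p)
    bb.plain("Compiling source")
    os.system("cd " + src + " && make")
}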
Launching the nano editor
After successfully building your nano editor package, you can find your nano executable in the following directory in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!

Auditd in Yocto

I'm trying to add auditd to Yocto Linux.
I added the selinux layer and its dependency layers: openembedded-core and meta-virtualization.
I added the layers to bblayers.conf.
I added DISTRO_FEATURES_append = " acl xattr pam selinux"
and PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls" to the local.conf file.
After building (using bitbake core-image-base) and running qemu, the kauditd process is running, but none of the user-space tools are.
The /etc/audit folder does not exist, none of audit's config files (e.g. audit.rules) exist, and no user-space audit process is running.
In the layer's description it is declared: "User space tools for kernel auditing".
What am I missing?
Thanks.
I think I found something that will answer your question: if you know of an example binary or library you expect to be in the target image, you can find out which recipe provides it, and then add that package to the image.
Start with the name of a binary or library you expect to be in the image and run the following. For me, that is a CAN bus executable called candump. Which recipe is it in? To find out, I issue:
devtool search candump
Which returns:
can-utils
If nothing is returned, I'd double-check your conf/bblayers.conf so that the layer you think it may be in is actually seen by your build system. If you are unsure, the OpenEmbedded layer index has a handy search utility for packages.
After you find the recipe, you can then include that recipe in your build, for example:
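In my candump case, that means adding the providing package to the image, e.g. in conf/local.conf or your image recipe:
IMAGE_INSTALL_append = " can-utils"
The same approach applies to the auditd question once you have confirmed which recipe provides the missing binaries.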
Here is a good reference for doing what I think you're asking, on the Yocto Project wiki:
https://wiki.yoctoproject.org/wiki/Cookbook:Example:Adding_packages_to_your_OS_image
I just added auditd to my system. This is what I did.
First I got the repository checked out.
cd /path/to/yocto
git clone git://git.yoctoproject.org/meta-selinux
cd meta-selinux
# checkout the branch matching the Yocto release you are on
git checkout thud
Then I added auditd to my build.
cd /path/to/build
bitbake-layers add-layer /path/to/yocto/meta-selinux
cat >> conf/local.conf <<'END'
IMAGE_INSTALL_append = " auditd"
END
bitbake my_normal_image_target
Even though the Yocto recipe is called audit, the package it produces is named auditd.
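If you want to verify that kind of recipe/package mapping yourself, oe-pkgdata-util can do it once the recipe has been built; for example:
oe-pkgdata-util lookup-recipe auditd
should print audit, the recipe providing the package.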
Of course, auditd without selinux is useless but it did attempt to run (journalctl -u auditd) and /etc/audit exists.
FWIW: To get auditd to a point where it reports, say, login success/failure, I had to do a few more things. I'm not just adding it to a standard Yocto image, but to a custom image and custom machine. I'm already using systemd, so I didn't have to change that (the layer seems to indicate it's required?). My local.conf looked like this:
# enable selinux
DISTRO_FEATURES_append = " acl xattr pam selinux"
# set the policy
PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls"
# install selinux packages and auditd
IMAGE_INSTALL_append = " packagegroup-core-selinux auditd"
# tell the kernel to enable selinux (non-enforcing) and auditing
APPEND_append = " selinux=1 enforcing=0 audit=1"
I also had to change linux-yocto_selinux.inc to load selinux.cfg later. Probably layer/recipe ordering could have solved this too?
-SRC_URI += "${@bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
+SRC_URI_append = "${@bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
With all that in place, I see audit logs in my journal.
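If you would rather not modify meta-selinux itself, the same change could presumably be carried in a kernel bbappend in your own layer instead (a sketch, not something I have tested):
linux-yocto_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI_append = " ${@bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
with selinux.cfg copied into the files/ directory next to the bbappend.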

Automake, generated source files and VPATH builds

I'm doing VPATH builds with automake. I'm now also using generated source files, with SWIG. I've got rules in Makefile.am like:
dist_noinst_DATA = whatever.swig
whatever.cpp: whatever.swig
	swig -c++ -php $^
Then the file gets used later:
myprogram_SOURCES = ... whatever.cpp
It works fine when $builddir == $srcdir. But when doing VPATH builds (e.g. mkdir build; cd build; ../configure; make), I get error messages about missing whatever.cpp.
Should generated source files go to $builddir or $srcdir? (I reckon probably $builddir.)
How should dependencies and rules be specified to put generated files in the right place?
Simple answer
You should assume that $srcdir is read-only, so you must not write anything there.
Your generated source code will therefore end up in $(builddir).
By default, autotools-generated Makefiles only look for source files in $srcdir, so you have to tell them to check $builddir as well. Adding the following to your Makefile.am should help:
VPATH = $(srcdir) $(builddir)
After that you might end up with a no rule to make target ... error, which you should be able to fix by updating your source-generating rule as in:
$(builddir)/whatever.cpp: whatever.swig
	# ...
A better solution
You might notice that in your current setup, the release tarball (as created by make dist) will contain the whatever.cpp file as part of your sources, since you added it to myprogram_SOURCES.
If you don't want this (e.g. because the build process might then use the pregenerated file rather than generating it again), you can use something like the following.
It uses a wrapper source-file (whatever_includer.cpp) that simply includes the generated file, and it uses -I$(builddir) to then find the generated file.
Makefile.am:
dist_noinst_DATA = whatever.swig

whatever.cpp: whatever.swig
	swig -c++ -php $^

whatever_includer.cpp: whatever.cpp

myprogram_SOURCES = ... whatever_includer.cpp
myprogram_CPPFLAGS = ... -I$(builddir)

clean-local::
	rm -f $(builddir)/whatever.cpp

whatever_includer.cpp:
#include "whatever.cpp"
Usually, you want to keep $srcdir readonly, so that if for instance the source is distributed unpacked on a CDROM, you can still run /.../configure from some other part of the file-system.
However, if you are using SWIG to generate source code for a wrapper library, you probably want to distribute that SWIG-generated code as well, so that your users do not need to install SWIG to compile your code. You then have a choice: you can decide that the SWIG-generated code should end up in $builddir (that is fine: make dist will collect it there and include it in the tarball), or you can output the SWIG-generated code in $srcdir, since it really is a source from the point of view of the distributed package.
An advantage of keeping it in $srcdir is that when make distcheck attempts to build your package from a read-only source directory, it will fail on any attempt to call SWIG to regenerate the wrapper source. If you kept the wrapper source in $builddir, you might not notice that some broken rule causes SWIG to be run on the user's host; by generating into $srcdir you ensure that SWIG is not needed by your users.
So my preference is to output SWIG wrapper sources in $srcdir. My setup for Python wrappers looks as follows:
EXTRA_DIST = spot.i
python_PYTHON = $(srcdir)/spot.py # _PYTHON is distributed by default
pyexec_LTLIBRARIES = _spot.la
MAINTAINERCLEANFILES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot.py
_spot_la_SOURCES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot_wrap.h
_spot_la_LDFLAGS = -avoid-version -module
_spot_la_LIBADD = $(top_builddir)/src/libspot.la

$(srcdir)/spot_wrap.cxx: $(srcdir)/spot.i
	$(SWIG) -c++ -python -I$(srcdir) -I$(top_srcdir)/src $(srcdir)/spot.i
# Handle the multi-file output of SWIG.
$(srcdir)/spot.py: $(srcdir)/spot.i
	$(MAKE) $(AM_MAKEFLAGS) spot_wrap.cxx
Note that I use $(srcdir) for all targets, because of limitations of the VPATH feature on various flavors of make. My setup to deal with the multiple files output by SWIG could be improved, but as these rules are not run by users and it has never caused me any problem, I do not bother.