Request for clarification on Yocto inheritance

I've recently made a foray into building Linux-based embedded systems, a far cry from my usual embedded stuff where I have total control over everything.
As part of that, I'm looking into the Yocto/bitbake/OpenEmbedded build system.
There's one thing I'm grappling with, and that's the layering concept, so I'm trying to figure out the way in which layers use and affect other layers.
From my understanding to date, a .bb recipe file uses require to simply include another file, similar to C's #include "myheader.h" which generally looks locally.
A .bbappend file in an "upper" layer will auto-magically include the base file then make changes to it, sort of an inherent require.
In contrast, the inherit keyword looks for a .bbclass class file in much the same way as it locates the .bb files, and inherits all the details from it (sort of like #include <stdio.h> which, again generally, looks in the system area (a)).
So the first part of my question is: is my understanding correct? Or am I being too simplistic?
The second part of my question then involves the use of BBEXTENDS in the light of my current understanding. If we already have the ability to extend a recipe by using require, what is the purpose of listing said recipes in a BBEXTENDS variable?
(a) Yes, I'm aware they're both totally implementation dependent in terms of where the headers come from; I'm simply talking about their common use.

The learning curve for Yocto is different from that of other build systems, which is why I understand your confusion. But trust me, it is worth it. Your questions are related to BitBake, so I recommend the BitBake User Manual. Just make sure you're reading the same version as your Poky revision.
require and include.
require is similar to include and can be compared to #include from C and C++ just like you have written.
Generally, both of them are used to add content to a recipe (*.bb) that is common to a number of recipes (simply put: can be reused).
For instance: definitions of paths, or custom tasks used by a couple of recipes. The common purpose is to keep the recipe cleaner and to separate some constants for reuse.
A very important thing is the difference between include and require (from the BitBake manual):
The include directive does not produce an error when the file cannot be found. Consequently, it is recommended that if the file you are including is expected to exist, you should use require instead of include. Doing so makes sure that an error is produced if the file cannot be found.
As a result: when you include a file in a *.bb and it cannot be found, BitBake will not raise an error while parsing the recipe.
If you use require instead, an error will be raised. You should use require when the referenced file must exist, because it contains important variables/tasks that are mandatory for processing the recipe.
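As an illustration, a minimal sketch inside a recipe (the .inc file names here are made up for the example):
# silently skipped if the file cannot be found
include optional-tweaks.inc
# aborts parsing with an error if the file cannot be found
require common-paths.inc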
*.bbappend mechanism.
In the case of *.bbappend - it's very powerful. The typical usage is when you are adding some custom modifications to a recipe from another layer (your append lives in a layer above the one where the original recipe is) via a *.bbappend, because (e.g.) you are not the maintainer of the original recipe, or the modifications are only used in your project (in which case they should live in your meta-layer). But you can also bbappend a recipe in the same layer. BitBake parses all layers and then 'creates' the combined output and executes it. More in the Execution chapter of the BitBake manual.
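For instance, a minimal sketch of a *.bbappend in your own layer (the recipe name, path and patch file are hypothetical); it extends the original recipe without touching it:
# meta-mylayer/recipes-example/hello/hello_1.0.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://my-fix.patch"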
inherit.
The inherit mechanism can be used to inherit a *.bbclass, where common tasks for some specific purpose are defined so you don't need to write them on your own. E.g. you use inherit cmake or inherit autotools in your recipe when it needs to build sources with CMake (and you have a CMakeLists.txt defined) or autotools (Makefile.am etc.).
The class definitions provided by OpenEmbedded are located under meta/classes/ if you are using a Yocto release with Poky.
You can check them and you will see that, for example, autotools.bbclass defines (among others) the task autotools_do_configure(), so you don't need to write it from scratch.
However, you can redefine it in your recipe (by just providing your own definition of this function). If the recipe can't be changed, then you can simply create a *.bbappend file and write your own do_configure(), which will override the function from the *.bbclass, just like in OO languages such as C++ or Java.
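A minimal sketch of such an override in a *.bbappend (the recipe name is hypothetical; oe_runconf is still provided by autotools.bbclass, which the original recipe inherits):
# mypackage_%.bbappend
do_configure() {
    # this definition replaces autotools_do_configure from autotools.bbclass
    oe_runconf
}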

Related

Inheritance clarification in bitbake

In bitbake, are inheritances transferable between include files when they are added with the require keyword? Or do we need to repeat the inherit keyword over and over again in each of the include files? I'm using bitbake to build images with Yocto.
exampleA.inc file
inherit exampleC
exampleB.inc file
require exampleA.inc
In that case, if I want exampleB.inc to be inherited from exampleC as well, do I need to specify it in this file?
Assume that the exampleC is a bbclass file.
TLDR: one inherit statement is enough.
require and include just insert the content of the specified file at the current position in the recipe. This results in the same outcome as if you had written the whole content of your .inc file into the recipe. Multiple layers of include/require do not change that. This means it is not the .inc file that inherits from exampleC, but rather the recipe that requires said .inc file.
I also ran some quick tests to confirm the theory, and it all seems to work.
Do not be deterred by the BitBake documentation stating:
[...] you can use the inherit directive to inherit the functionality of a class (.bbclass). BitBake only supports this directive when used within recipe and class files (i.e. .bb and .bbclass).
This does not mean that it does not work in .inc files, but rather that it will not work in configuration files.
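Putting it together, a minimal sketch of the setup from the question (the variable and recipe name are made up for illustration):
# exampleC.bbclass
EXAMPLE_C_VAR = "set by exampleC"
# exampleA.inc
inherit exampleC
# exampleB.inc
require exampleA.inc
# somerecipe.bb -- after parsing, the recipe itself has inherited exampleC
require exampleB.inc
You can verify it with bitbake -e somerecipe and look for EXAMPLE_C_VAR in the output.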

Absolute and relative path conflict in Modelica

I want to build up a tests library and keep it separated from the libraries under development. My first thought is to go for a structure like the following:
PensLib
--Variants
----BallPoint
----FountainPen
----Tests
------TB_BallPoint
HammocksLib
--Variants
----SingleHammock
----DoubleHammock
----Tests
------TB_DoubleHammock
--Systems
----IndoorWalls
----OutdoorWallAndTree
----CoconutPalms
----Tests
------TB_IndoorWalls
Tests
--PensLib
----Variants
------Test_BallPoint // extends PensLib.Variants.Tests.TB_BallPoint
--HammocksLib
----Variants
------Test_DoubleHammock // extends HammocksLib.Variants.Tests.TB_DoubleHammock
----Systems
------Test_IndoorWalls // extends HammocksLib.Systems.Tests.TB_IndoorWalls
For now let's assume that the way I structure my libraries makes sense (which it most likely doesn't). I will soon ask more questions on good practices in setting up the testing environment in Dymola and with the Testing Library.
My question is about the correct way to handle relative and absolute paths within models, if possible at all.
The model PensLib.Variants.Tests.TB_BallPoint is used for developing the variant BallPoint
The model Tests.PensLib.Variants.Test_BallPoint is used for automated testing
I want the model Test_BallPoint to extend the model TB_BallPoint, but I cannot link them. I guess the absolute path PensLib.Variants.Tests.TB_BallPoint is treated as a relative one, since PensLib is found "on the way out" of the Tests library, and from there it goes looking for the rest of the path. Is there perhaps a way to control the path, kind of ..\..\..\PensLib\Variants\Tests\TB_BallPoint?
As you already noted, such a setup causes trouble. There are ways around it, namely global name lookup and imports, which I explain briefly further below.
Both solutions are nice when you have such a case in a few situations. But if you have to use it all the time, you make your setup unnecessarily complicated.
Hence, I suggest making your life easier and changing your package structure:
Either create a dedicated test library for every library, maybe PensLib_Tests and HammocksLib_Tests
Or rename the packages in the Tests library and don't use the exact library names
Global name lookup
You can use absolute class paths. They are denoted with a leading ., so this should work:
extends .PensLib.Variants.Tests.TB_BallPoint;
See Modelica Specification chapter 5: Scoping, Name Lookup, and Flattening for details, especially 5.3.3 Global Name Lookup
Importing
You can simply import the library. Lookup of imports is always performed globally.
import PensLib;
extends PensLib.Variants.Tests.TB_BallPoint;

Why does Erlang offer both `import` for modules and `include` for headers?

Erlang's -import() directive lets you import functions from other modules. Its -include() directive lets you include code from headers. What reasons are there to prefer one over the other?
My hunch is that headers are good for short, easy-on-the-compiler kinds of code, such as record definitions, when you don't want to have to qualify the…
Learn You Some Erlang states[1] that "Erlang header files are pretty similar to their C counter-part: they're nothing but a snippet of code that gets added to the module as if it were written there in the first place." Thus inclusion seems to cause the compiler to duplicate effort across different modules. And header files are what appear to be an optional complication on top of the mandatory module system. So why would I ever use a header file?
[1] https://learnyousomeerlang.com/a-short-visit-to-common-data-structures
Erlang's -import just allows you to call imported functions without the Module: prefix. It hurts legibility and should not be used: you need to check the import directive to know whether a function is local or external to the module.
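A minimal sketch of what -import does (the module name demo is made up; lists:reverse/1 is a standard library function):
-module(demo).
-export([run/0]).
-import(lists, [reverse/1]).

run() ->
    %% reverse/1 here is really lists:reverse/1; the import only drops the module prefix
    reverse([1, 2, 3]).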
With header files you get the same functionality as in C: you can use them to share -record definitions instead of having a DTO-like module (1), and you can use them to include -defines so you use the same macros (2).
1:
-record(position, {x, y}).
Imagine that you have #position{} throughout the code; instead of defining the record everywhere and updating all of the copies when the record definition changes, you use a header (or a DTO module with opaque types, but that's for another question).
And let's just hope that you remember to update all the copies, otherwise chaos ensues.
2:
-define(ENUM01, enum01).
-define(DEFAULT_TIMEOUT, 1000).
Instead of using enum01 and 1000 everywhere, which is error-prone and requires multiple updates if you need to change them, you define them in a header and use them as ?ENUM01 and ?DEFAULT_TIMEOUT.
Or you can be more thorough when testing:
-ifdef(TEST).
-define(assert(A), true = A).
-else.
-define(assert(A), A).
-endif.
Or you can include some useful information:
-define(LOG(Level, X), logger:log(Level, X, #{line => ?LINE})).
The Erlang standard library uses header files to provide the ability to add metadata to your code.
For instance, EUnit functionality:
-include_lib("eunit/include/eunit.hrl").
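To make the header mechanics concrete, a minimal sketch (the file, record and module names are made up for illustration):
%% position.hrl
-record(position, {x = 0, y = 0}).
-define(DEFAULT_TIMEOUT, 1000).

%% mover.erl
-module(mover).
-export([move_right/1]).
-include("position.hrl").

move_right(P = #position{x = X}) ->
    %% the record and the macro were textually pasted in by -include
    timer:sleep(?DEFAULT_TIMEOUT),
    P#position{x = X + 1}.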
import is helpful in building encapsulations, whereas include is a kind of pre-processing (which means the code becomes part of the unit before it goes through the compiler).
An import establishes a dependency between two modules: a module A importing from module B has B as a dependency. An include, on the other hand, is extensional: the module has included some code, and that code is part of the module itself; that is what header files do.
Modules and headers are two semantically different things and serve different purposes. With modules, we can build abstractions by using exports, keep things confined by not exporting them, import from other modules (imported functions are not re-exported by default), re-export imported things, etc. So when we import stuff, we can call functions from the other module, but only those which the other module exports. That is not the case with header files: everything inside a header file becomes part of the module that includes it. There is no notion of export/import inside header files. Header files are quite useful for writing and distributing definitions which would otherwise lead to redundancy in large programs.
So essentially they are two different things, hence two different keywords at our disposal. Don't prefer one over the other; learn both, as we need both of them.

How does VARIABLE_*_something work in Yocto?

I'm trying to build my own Yocto meta-layer based on the imx6ulevk, and in ./meta-fsl-bsp-release/imx/meta-sdk/conf/distro/include/fsl-imx-preferred-env.inc I found something weird:
PREFERRED_PROVIDER_virtual/kernel_mx6ul = "linux-imx"
PREFERRED_PROVIDER_virtual/kernel_mx6sll = "linux-imx"
PREFERRED_PROVIDER_virtual/kernel_mx7 = "linux-imx"
So I was wondering, what does the last _word (i.e. the suffix in PREFERRED_PROVIDER_virtual/kernel_*) mean?
a) Is it a way to set the virtual/kernel provider depending on the machine?
b) If [a] is yes, how do I know which name to put, or what part of the machine_name.conf do I need to choose?
PREFERRED_PROVIDER_<recipe-name>_<machine-name> means this variable applies to mentioned recipe AND the respective MACHINE only. This is a common sighting in distro layers. In this particular case, the freescale layer is telling bitbake which Linux kernel recipe to choose depending on the MACHINE you either set in local.conf or pass via command line. More info here.
So the answer to a) is yes.
The answer to b) is that you should not bother changing the PREFERRED_PROVIDER for the Linux kernel unless you really know what you're doing (i.e. writing a kernel recipe from scratch). Even if you have a custom board, you're unlikely to change the virtual/kernel provider; you'd likely want to follow the BSP maintainer's recommendation. What you need to do is set a proper MACHINE, and BitBake will take care of the rest.
For example if your MACHINE is mx6ul, invoking bitbake virtual/kernel is the same as bitbake linux-imx. The former is best practice, as you call that in Yocto regardless of the machine.
I'm afraid reading the docs is the best way to fully grasp Yocto. The good thing is that it's documented really well. You'd probably want to start from the development manual and the bitbake link above, before diving into the mega manual.
The suffix, an underscore '_' followed by a string, means that the variable (PREFERRED_PROVIDER_virtual/kernel in this case) is "overridden": bitbake will use this assignment when the OVERRIDES variable contains that particular string, such as "mx6ul".
Many times, if not always, the SoC architecture is set in the MACHINEOVERRIDES variable in the machine's .conf file, to define what SoC is on the board. That consequently gets added to OVERRIDES in some yocto/bitbake configuration elsewhere.
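As a simplified, hypothetical sketch of how the pieces fit together (not the literal contents of any particular file):
# machine .conf: the SoC family is prepended to MACHINEOVERRIDES
MACHINEOVERRIDES =. "mx6:mx6ul:"
# because "mx6ul" then ends up in OVERRIDES, this assignment from the
# distro include takes effect for that machine:
PREFERRED_PROVIDER_virtual/kernel_mx6ul = "linux-imx"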
The Conditional Syntax (Overrides) section in the bitbake manual [1] specifically talks about how this affects the variable expansion.
a.) If we were being strict with the terminology used by Yocto, it would be no. The "machine" per se correlates to a board, such as "imx6ulevk". The overrides you have there more generally pertain to an SoC architecture (a chip). You may have many boards running the imx6ul, for example; in that case the override pertains to all "machines" running that particular SoC (as defined by your machine in MACHINEOVERRIDES).
b.) Anything appearing in the colon-delimited OVERRIDES variable is fair game. You can use the machine name, because Yocto does in fact append the MACHINE name to it as well. But it doesn't really make sense to do that, because you have a dedicated machine .conf file in which to make a hard definition such as PREFERRED_PROVIDER_virtual/kernel = "something" if you really want a machine/board-specific kernel selection. NXP did this in their distro layer to apply to many machines (aka boards) all at once.
Hint: to see what these variables expand out to, run bitbake -e virtual/kernel
These overrides are one of the most powerful features of bitbake. For example, if you want to override the source revision of your linux-imx kernel build, you can put something like SRCREV_pn-linux-imx = "something" in your local.conf. See if you can grep the recipe sources to find out how this works!
References:
[1] https://www.yoctoproject.org/docs/1.6/bitbake-user-manual/bitbake-user-manual.html#conditional-syntax-overrides

Yocto: how to remove/blacklist some dependency from RDEPENDS of a package?

I have a custom machine layer based on https://github.com/jumpnow/meta-wandboard.
I've upgraded the kernel to 4.8.6 and want to add X11 to the image.
I'm modifying the image recipe (console-image.bb).
Since wandboard is based on i.MX6, I want to include the xf86-video-imxfb-vivante package from meta-fsl-arm.
However, it fails complaining about inability to build kernel-module-imx-gpu-viv. I believe that happens because xf86-video-imxfb-vivante DEPENDS on imx-gpu-viv which in turn RDEPENDS on kernel-module-imx-gpu-viv.
I realize that those dependencies have been created with meta-fsl-arm BSP and vanilla Poky distribution. But those things are way outdated for wandboard, hence I am using the custom machine layer with modern kernel.
The kernel is configured to include the Vivante DRM module and I really don't want the kernel-module-imx-gpu-viv package to be built.
Is there a way to exclude it from RDEPENDS? Can I somehow swear my health to the build system that I will take care of this specific run-time dependency myself?
I have tried blacklisting kernel-module-imx-gpu-viv by setting PNBLACKLIST[kernel-module-imx-gpu-viv] in my local.conf, but that's only part of a solution. It helps avoid build failures, but the packaging process still fails.
IIUC your problem comes from these lines in the imx-gpu-viv recipe:
FILES_libgal-mx6 = "${libdir}/libGAL${SOLIBS} ${libdir}/libGAL_egl${SOLIBS}"
FILES_libgal-mx6-dev = "${libdir}/libGAL${SOLIBSDEV} ${includedir}/HAL"
RDEPENDS_libgal-mx6 += "kernel-module-imx-gpu-viv"
INSANE_SKIP_libgal-mx6 += "build-deps"
I would actually call this RDEPENDS a bug: usually kernel module dependencies are specified as RRECOMMENDS, because most modules can be compiled into the kernel, producing no separate package at all while still providing the functionality. But that's another issue.
There are several ways to fix this problem. The first general route is to tweak RDEPENDS for the package. It's just a bitbake variable, so you can either assign it some other value or remove a portion of it. In the first case it's going to look somewhat like this:
RDEPENDS_libgal-mx6 = ""
In the second one:
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"
Obviously, these two options have different implications for your present and future work. In general I would opt for the softer one, which is the second, because it has less potential for breakage when you update the meta-fsl-arm layer, which can change the imx-gpu-viv recipe in any way. But when you're overriding some more complex recipe with big lists in variables and you're modifying it heavily (not just removing a thing or two), it might be easier to maintain it with a full hard assignment of the variables.
Now there is also the question of where to do this variable mangling. The main option is a .bbappend in your layer; that's what appends are made for. But you can also do it from your distro configuration (if you're maintaining your own distro it might be easier to have all these tweaks in one place, rather than sprayed across numerous appends) or from your local.conf (which is a nice place to quickly try it out, but probably not something to use in the longer term). I usually use a .bbappend.
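For instance, a minimal sketch of such a .bbappend in your own layer (the path is hypothetical; the % wildcard keeps it independent of the recipe version):
# meta-mylayer/recipes-graphics/imx-gpu-viv/imx-gpu-viv_%.bbappend
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"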
But there is also a completely different approach to this problem: rather than fixing the package's dependencies, you can fix what some other package provides. If, for example, you have a kernel configured to have the imx-gpu-viv module built right into the main zImage, you can do
RPROVIDES_kernel-image += "kernel-module-imx-gpu-viv"
(also in .bbappend, distro configuration or local.conf) and that's it.
In any case, your approach to fixing this problem should reflect the difference between your setup and the recipe's assumptions. If you do have the module, but in a different package, then go for RPROVIDES; if you have some other module providing the same functionality to the libgal-mx6 package, then fix libgal-mx6's dependencies (and it's better to fix them properly, meaning not only drop something you don't need, but also add things that are relevant for your setup).