Which recipe will be used first? - Yocto

If I have many recipes, e.g.:
dmidecode_1.0.bb
dmidecode_1.1.bb
dmidecode_git.bb
when I do:
CORE_IMAGE_EXTRA_INSTALL += "dmidecode"
which recipe will be used?

bitbake -e dmidecode | grep -e "^PV=" will tell you.
Layer priority (BBFILE_PRIORITY) takes precedence, then PV, which is set dynamically if you're fetching the sources from git, IIRC. There's also DEFAULT_PREFERENCE, which can be set to modify the version-precedence logic.
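BitBake normally prefers the highest PV, and git recipes are the usual source of surprises here. A common convention (illustrative; not shown in the question's recipes) is to mark the git recipe as non-preferred via DEFAULT_PREFERENCE so that the numbered releases win unless the git version is explicitly requested:

```
# dmidecode_git.bb (illustrative sketch)
# A negative DEFAULT_PREFERENCE makes this recipe lose version selection
# unless it is explicitly preferred, e.g. via PREFERRED_VERSION in a conf file.
DEFAULT_PREFERENCE = "-1"
```

With that in place, dmidecode_1.1.bb would be chosen, since it carries the highest PV among the remaining recipes.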


Is there a way to configure pytest_plugins from a pytest.ini file?

I may have missed this detail but I'm trying to see if I can control the set of plugins made available through the ini configuration itself.
I did not find that item enumerated in any of the configurable command-line options nor in any of the documentation around the pytest_plugins global.
The goal is to reuse a given test module with different fixture implementations.
@hoefling is absolutely right: there is a pytest command-line argument that can specify plugins to be used, and together with the addopts ini setting it can select a set of plugin files, one per -p option.
As an example, the following ini file selects three separate plugins; plugins specified later in the list take precedence over those earlier.
projX.ini
[pytest]
addopts =
    -p projX.plugins.plugin_1
    -p projX.plugins.plugin_2
    -p projY.plugins.plugin_1
We can then invoke this combination on a test module with a command like
python -m pytest projX -c projX.ini
A full experiment is detailed in this repository:
https://github.com/jxramos/pytest_behavior/tree/main/ini_plugin_selection

Get description of packages in yocto image

I know I can get a list of packages included in an image using this command:
bitbake -g <image> && cat pn-buildlist | grep -ve "native" | sort | uniq
Is there a bitbake command to get the description of a specific package? Or perhaps there is a command to get all info on a package, I could simply grep the output of this.
Cheers!
It isn't a bitbake command, but there is the oe-pkgdata-util utility (part of openembedded-core). It works in the OE build environment (like bitbake). You can query a value from a built package (not a recipe) using the read-value subcommand. The basic syntax is:
oe-pkgdata-util read-value <value> <pkg1> [<pkg2>...]
You can query multiple packages, match package names by prefix, etc. Just check:
oe-pkgdata-util read-value --help
Here is an example for your question:
tom@pc:~/oe/build> oe-pkgdata-util read-value DESCRIPTION libc6
The GNU C Library is used as the system C library in most systems with the Linux kernel.
BTW, you can query other variables like RDEPENDS, SUMMARY etc.
Please note that if the DESCRIPTION variable is not set in the recipe, it is filled with the content of the SUMMARY variable (see the documentation).
Your question also mentions getting the list of packages in the image. I would say that there are more straightforward ways. For example:
manifest file in the deploy dir (the file is next to the image file): ${DEPLOY_DIR}/images/${MACHINE}/${IMAGE_BASENAME}-${MACHINE}.manifest
file installed-package-names.txt in buildhistory (if you've enabled it). It is inside the folder ${BUILDHISTORY_DIR}/images/${MACHINE_ARCH}/${TCLIBC}/${IMAGE_BASENAME}/.
FYI, not every package has a description.
I usually read the recipe, as that is faster than waiting for the bitbake output. Nevertheless, if you wish to read it from bitbake:
bitbake <recipe> -e | grep ^DESCRIPTION=
The description may be written directly in the recipe.
As a side note, you can access every variable with -e; it is very useful for debugging.
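The manifest route lends itself to scripting. Here is a rough sketch; the three-column "name arch version" manifest layout is an assumption based on typical OE images, and the oe-pkgdata-util call will only work inside a build environment:

```python
import shutil
import subprocess

def parse_manifest(text):
    """Parse an image .manifest: one 'name arch version' line per package."""
    packages = []
    for line in text.splitlines():
        fields = line.split()
        if fields:                      # skip blank lines
            packages.append(fields[0])  # first column is the package name
    return packages

def describe(packages):
    """Query DESCRIPTION for each package via oe-pkgdata-util (OE env only)."""
    if shutil.which("oe-pkgdata-util") is None:
        raise RuntimeError("run this inside an OE build environment")
    result = subprocess.run(
        ["oe-pkgdata-util", "read-value", "DESCRIPTION"] + packages,
        capture_output=True, text=True, check=True)
    return result.stdout

sample = "libc6 cortexa9hf_neon 2.31\nbusybox cortexa9hf_neon 1.31.1\n"
print(parse_manifest(sample))  # → ['libc6', 'busybox']
```

Feeding the parsed list into describe() then prints one description per package.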

How to conditionally install documentation to pkghtmldir using Automake

I have a recipe in Automake that optionally builds documentation if the user issues make doc or make htmldoc:
if DOXYGEN_AVAILABLE
docs html htmldoc html-doc:
$(DOXYGEN) Doxyfile -d DOXYGEN_PROCESSING
fi
DOXYGEN_AVAILABLE is set in configure based on the result of AC_CHECK_PROGS. If the docs are built there will be a directory html-doc. The documentation is optional and html-doc may be missing.
Even if html-doc is present, I won't have a list of the files in it. I don't believe this will work in Makefile.am:
if DOXYGEN_AVAILABLE
docs html htmldoc html-doc:
$(DOXYGEN) Doxyfile -d DOXYGEN_PROCESSING
pkghtmldir_FILES += html-doc/
fi
How do I optionally install documentation to pkghtmldir when using Automake?
I suggest in the first place that you change your logic a bit. If it's OK to not install the Doxygen docs when Doxygen isn't available to build them, then it should also be OK not to install them even if Doxygen is available. Thus, it makes sense to use an --enable-docs or --with-docs or similar option to configure to let the package builder express whether the docs should be built, with whichever default suits you. You could also consider including pre-built docs with your package, and then selecting whether to enable rebuilding them.
You can furthermore, then, altogether omit the check for Doxygen when docs are not requested anyway, and emit a warning or error when they are requested but Doxygen is not available (or is too old). That should be less surprising to package builders. At least, it would be less surprising to me.
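In configure.ac, that logic might look roughly like the following sketch; the option name and error handling are illustrative, and the make_docs conditional matches the Makefile.am excerpt that follows:

```
# configure.ac (sketch): let the package builder opt in to building the docs
AC_ARG_ENABLE([docs],
  [AS_HELP_STRING([--enable-docs], [build the Doxygen documentation])],
  [enable_docs=$enableval], [enable_docs=no])

# Only look for Doxygen when the docs were actually requested, and fail
# loudly if it is requested but missing.
AS_IF([test "x$enable_docs" = xyes],
  [AC_CHECK_PROGS([DOXYGEN], [doxygen])
   AS_IF([test -z "$DOXYGEN"],
     [AC_MSG_ERROR([--enable-docs requested but doxygen was not found])])])

# The Automake conditional the Makefile.am rules key off of
AM_CONDITIONAL([make_docs], [test "x$enable_docs" = xyes])
```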
Still, it ultimately comes down to Automake conditionals either way. Here's a slightly trimmed version of how I handle pretty much the same task in one of my projects:
$(top_srcdir)/Makefile.am:
# An Automake conditional:
if make_docs
# The name of the target(s) that encompasses actually building the docs
doxygen_html_targets = dox-html
# My Doxyfile is built by the build system, and the docs also depend on some example
# sources, stylesheets, and other files provided by the package. All these are
# listed here, so that the docs are rebuilt if any of them change:
dox_deps = Doxyfile ...
# How to actually install the docs (a one-line make recipe). Covers also installing
# pre-built docs that I include in the package for convenience, though this example
# has most of the other plumbing for that removed.
#
# The chmod in the below command must ensure the target files are writable
# in order to work around weirdness in the behavior of the "distcheck" target
# when not configured to rebuild docs.
html_install = test -d dox-html && html_dir=dox-html || html_dir=$(srcdir)/dox-html; \
$(MKDIR_P) $(DESTDIR)$(pkgdocdir); \
cp -pR $${html_dir} $(DESTDIR)$(pkgdocdir)/html; \
chmod -R u+w,go+rX $(DESTDIR)$(pkgdocdir)/html
else
doxygen_html_targets =
dox_deps =
html_install = :
endif
# The variable prerequisites of this rule are how the selection is made between
# building the docs and not building them:
html-local: $(doxygen_html_targets)
:
## This rule should not be necessary, but Automake seems otherwise to ignore the
## install-html-local rule, perhaps because there are no targets with an HTML
## primary.
install-data-local: install-html-local
# The variable recipe for this rule is how the selection is made between installing
# the docs and not installing them:
install-html-local:
$(html_install)
maintainer-clean-local:
$(RM) -rf dox-html
# This rule, when exercised, is what actually builds the docs:
dox-html: $(dox_deps)
$(RM) -rf dox-html
$(DOXYGEN) Doxyfile
The key thing to take away here is that it's not only file lists that you can store in make variables and control via Automake conditionals. You can also store the names of arbitrary targets, to be used in prerequisite lists, and you can even store the text of rule recipes, so as to vary behavior of the rules that are selected.

Bitbake: How to list all recipe and append files used in an image?

I'm using OpenEmbedded-Core and have created a custom layer with priority 6. Months of development have gone by, and now I want to increase my layer's priority to 8 because an append file from another layer with priority 7 is interfering with an append file I'm adding in my layer.
My question is, how can I generate a list of recipes and .bbappend files used in an image?
I want to generate the list both before and after I make the priority change so that I can compare them (with a difftool hopefully) to see if any unexpected side-effects occurred, like an important append file from the other layer getting ignored potentially.
I'm using the angstrom-v2014.12-yocto1.7 branch of the Angstrom distribution.
[EDIT]
I'm now primarily just interested in determining how to list which .bbappend files are actually used by my image at this point.
A list of packages can be viewed using "bitbake -g your-image-name" as suggested by @pnxs, or from the .manifest file (which is what I like to use), which in my case is located under deploy/glibc/images/imagename/. I originally asked how a list of "recipe files" could be generated, but I think a list of packages is sufficient.
Regarding the .bbappends though, I had a case where my own .bbappend was ignored due to layer priorities. I made a change to my layer priorities and now want to see if that caused any .bbappend files anywhere else in my image to get ignored. As I understand it, using "bitbake-layers show-appends" as suggested lists all .bbappends present rather than just those which are actually used in the creation of an image, so this doesn't do what I'm looking for.
Try using "bitbake-layers show-appends" to see what bbappends are used. But that will only work on a per-recipe basis. But that might give you the information you need to understand the priorities.
You can do a "bitbake -g your-image-name" which creates some dot-files in the current directory.
The file "pn-depends.dot" contains a list of package-names (pn) and the dependencies between them.
When you look at the first part of the file, where all packages are listed, you see for example:
"busybox" [label="busybox :1.23.1-r0.2\n/home/user/yocto/sources/poky/meta/recipes-core/busybox/busybox_1.23.1.bb"]
"base-files" [label="base-files :3.0.14-r89\n/home/user/yocto/sources/poky/meta/recipes-core/base-files/base-files_3.0.14.bb"]
So you get a list of all packages used by your image and the corresponding recipe file.
To see which of the recipes are extended by a bbappend, you have to get the list of bbappends with "bitbake-layers show-appends" and look up the appends for every recipe. You can write a little Python program to do that for you.
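A sketch of such a script is below. It assumes you have saved the two outputs to strings or files; the pn-depends.dot label format is taken from the example above, while the show-appends format (a recipe file name ending in ':' followed by indented .bbappend paths) is an assumption about your bitbake version:

```python
import re

def recipes_in_image(dot_text):
    """Extract recipe .bb file names from bitbake -g's pn-depends.dot."""
    # Each node label embeds the recipe path after a literal '\n' sequence
    return set(re.findall(r'\\n\S*?([^/\s]+\.bb)"', dot_text))

def appends_in_image(show_appends_text, image_recipes):
    """Keep only show-appends entries whose base recipe the image uses.

    Assumes 'bitbake-layers show-appends' lists a recipe file name ending
    in ':' followed by indented .bbappend paths (format is an assumption).
    """
    used, current = {}, None
    for line in show_appends_text.splitlines():
        if line.endswith(":") and not line.startswith(" "):
            name = line.rstrip(":")
            current = name if name in image_recipes else None
        elif current and line.strip().endswith(".bbappend"):
            used.setdefault(current, []).append(line.strip())
    return used

dot = ('"busybox" [label="busybox :1.23.1-r0.2\\n'
       '/home/user/poky/meta/recipes-core/busybox/busybox_1.23.1.bb"]')
appends = "busybox_1.23.1.bb:\n  /home/user/meta-mine/busybox_%.bbappend"
print(appends_in_image(appends, recipes_in_image(dot)))
```

Running this before and after the priority change and diffing the two outputs shows whether any bbappend stopped being applied.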
Try the following:
Show all recipes
bitbake-layers show-recipes
Show .bb file of a recipe
RECIPE_NAME="linux-yocto"
bitbake -e $RECIPE_NAME | grep ^FILE=
Try the following command:
bitbake -g image-name && grep -v -e '-native' -e digraph -e '-image' pn-depends.dot | awk '{print $1}' | sort | uniq

How to read a block in a storage pool (zpool) using dd?

I want to read a block in a ZFS storage pool (zpool) using the dd command. Since zpool doesn't create a device file like other volume managers such as vxvm, I don't know which block device to use for reading. Is there any way to read data block by block in a zpool?
You can probably use the zdb command. Here is a pdf about it, and the help output.
http://www.bruningsystems.com/osdevcon_draft3.pdf
# zdb --help
zdb: illegal option -- -
Usage: zdb [-CumdibcsDvhL] poolname [object...]
zdb [-div] dataset [object...]
zdb -m [-L] poolname [vdev [metaslab...]]
zdb -R poolname vdev:offset:size[:flags]
zdb -S poolname
zdb -l [-u] device
zdb -C
Dataset name must include at least one separator character '/' or '#'
If dataset name is specified, only that dataset is dumped
If object numbers are specified, only those objects are dumped
Options to control amount of output:
-u uberblock
-d dataset(s)
-i intent logs
-C config (or cachefile if alone)
-h pool history
-b block statistics
-m metaslabs
-c checksum all metadata (twice for all data) blocks
-s report stats on zdb's I/O
-D dedup statistics
-S simulate dedup to measure effect
-v verbose (applies to all others)
-l dump label contents
-L disable leak tracking (do not load spacemaps)
-R read and display block from a device
Below options are intended for use with other options (except -l):
-A ignore assertions (-A), enable panic recovery (-AA) or both (-AAA)
-F attempt automatic rewind within safe range of transaction groups
-U <cachefile_path> -- use alternate cachefile
-X attempt extreme rewind (does not work with dataset)
-e pool is exported/destroyed/has altroot/not in a cachefile
-p <path> -- use one or more with -e to specify path to vdev dir
-P print numbers parsable
-t <txg> -- highest txg to use when searching for uberblocks
Specify an option more than once (e.g. -bb) to make only that option verbose
Default is to dump everything non-verbosely
Unfortunately, I don't know how to use it.
# zdb
tank:
version: 28
name: 'tank'
...
vdev_tree:
...
children[0]:
...
children[0]:
...
path: '/dev/label/bank1d1'
phys_path: '/dev/label/bank1d1'
...
So I took the array indexes 0 0 to get my first disk (bank1d1) and ran this command. It did something, but I don't know how to read the output.
zdb -R tank 0:0:4e00:200 | strings
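Per the usage output above, -R takes a vdev:offset:size[:flags] spec; the offset and size are given in hexadecimal (per the zdb man page). A tiny helper to sanity-check a spec before poking at the pool (the example spec here is hypothetical, not the one from the command above):

```python
def decode_blkspec(spec):
    """Split a zdb -R 'vdev:offset:size[:flags]' spec; offset/size are hex."""
    parts = spec.split(":")
    vdev, offset, size = parts[0], int(parts[1], 16), int(parts[2], 16)
    flags = parts[3] if len(parts) > 3 else ""
    return vdev, offset, size, flags

# Hypothetical spec: vdev 0, offset 0x1a000, size 0x200 (one 512-byte block)
print(decode_blkspec("0:1a000:200"))  # → ('0', 106496, 512, '')
```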
Have fun... try not to destroy anything. Here is your warning from the man page:
The zdb command is used by support engineers to diagnose failures and
gather statistics. Since the ZFS file system is always consistent on
disk and is self-repairing, zdb should only be run under the direction
of a support engineer.
And please tell us what you actually were looking for. Was Alan right that you wanted to do backups?
You can read from the underlying raw devices in the pool, but as far as I can tell there's no concept of a single contiguous block device representing the whole pool.
A ZFS pool is not the single contiguous array of sectors that 'classic' volume managers present. ZFS's internal structure is closer to a tree, which would be somewhat challenging to represent as a flat array of blocks.
Ben Rockwood's blog post "zdb: Examining ZFS At Point-Blank Range" may help you get a better idea of what's under the hood.
I have no idea what would be useful about doing so, but you certainly can read blocks from the underlying devices used by the pool; they are shown by the zpool status command. If you are really asking about zvols instead of zpools, those are accessible under /dev/zvol/rdsk/pool-name/zvol-name. If you want to look at internal zpool data, you probably want to use zdb.
If you want to backup ZFS filesystems you should be using the following tools:
'zfs snapshot' to create a stable snapshot of the filesystem
'zfs send' to send a copy of the snapshot to somewhere else
'zfs receive' to go back from a snapshot to a filesystem.
'dd' is almost certainly not the tool you should be using. In your case you could 'zfs send' and redirect the output into a file on your other filesystem.
See chapter 7 of the ZFS administration guide for more details.