I know I can get a list of packages included in an image using this command:
bitbake -g <image> && cat pn-buildlist | grep -ve "native" | sort | uniq
Is there a bitbake command to get the description of a specific package? Or perhaps there is a command to get all the info on a package; then I could simply grep its output.
Cheers!
It isn't a bitbake command, but there is the oe-pkgdata-util utility (part of openembedded-core). It works in the OE build environment (just like bitbake). You can query a value from a built package (not a recipe) using the read-value subcommand. The basic syntax is:
oe-pkgdata-util read-value <value> <pkg1> [<pkg2>...]
You can query multiple packages at once, prefix the output with the package name, etc. Just check:
oe-pkgdata-util read-value --help
Here is an example for your question:
tom@pc:~/oe/build> oe-pkgdata-util read-value DESCRIPTION libc6
The GNU C Library is used as the system C library in most systems with the Linux kernel.
BTW, you can query other variables like RDEPENDS, SUMMARY etc.
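For example, the same subcommand works for any of those variables (output will vary with your build):
oe-pkgdata-util read-value SUMMARY libc6
oe-pkgdata-util read-value RDEPENDS libc6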
Please note that if the DESCRIPTION variable is not set in the recipe, it is filled with the content of the SUMMARY variable (see the documentation).
Your question also mentions getting the list of packages in the image. I would say that there are more straightforward ways, for example:
manifest file in the deploy dir (the file is next to the image file): ${DEPLOY_DIR}/images/${MACHINE}/${IMAGE_BASENAME}-${MACHINE}.manifest
file installed-package-names.txt in buildhistory (if you've enabled it). It is inside the folder ${BUILDHISTORY_DIR}/images/${MACHINE_ARCH}/${TCLIBC}/${IMAGE_BASENAME}/.
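For instance, with the default DEPLOY_DIR and a hypothetical qemux86 machine building core-image-minimal, listing the installed packages is just:
cat tmp/deploy/images/qemux86/core-image-minimal-qemux86.manifest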
FYI, not every package has a description.
I usually read the recipe, as that is faster than waiting for the bitbake output. Nevertheless, if you wish to read it from bitbake:
bitbake <recipe> -e | grep ^DESCRIPTION=
The description may be written in the recipe itself.
As a side note, you can access every variable with -e, which is very useful for debugging.
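The same approach reads any other variable from the recipe environment; for example (SRC_URI chosen arbitrarily):
bitbake <recipe> -e | grep ^SRC_URI=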
I may have missed this detail, but I'm trying to see if I can control the set of plugins made available through the ini configuration itself.
I did not find that item enumerated in any of the configurable command-line options, nor in any of the documentation around the pytest_plugins global.
The goal is to reuse a given test module with different fixture implementations.
@hoefling is absolutely right: there is actually a pytest command-line argument that can specify plugins to be used, which, along with the addopts ini configuration, can be used to select a set of plugin files, one per -p option.
As an example, the following ini file selects three separate plugins; the plugins specified later in the list take precedence over those that came earlier.
projX.ini
[pytest]
addopts =
    -p projX.plugins.plugin_1
    -p projX.plugins.plugin_2
    -p projY.plugins.plugin_1
We can then invoke this combination on a test module with a command like
python -m pytest projX -c projX.ini
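Since -p is an ordinary command-line option, the ini file is just a convenience; the same selection can be made directly on the command line (using the hypothetical module paths from the example above):
python -m pytest projX -p projX.plugins.plugin_1 -p projX.plugins.plugin_2 -p projY.plugins.plugin_1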
A full experiment is detailed in this repository:
https://github.com/jxramos/pytest_behavior/tree/main/ini_plugin_selection
I have a recipe in Automake that optionally builds documentation if the user issues make docs or make htmldoc:
if DOXYGEN_AVAILABLE
docs html htmldoc html-doc:
	$(DOXYGEN) Doxyfile -d DOXYGEN_PROCESSING
endif
DOXYGEN_AVAILABLE is set in configure based on the result of AC_CHECK_PROGS. If the docs are built, there will be a directory html-doc. The documentation is optional, and html-doc may be missing.
If html-doc is present, I won't have a file list. I don't believe this will work in Makefile.am:
if DOXYGEN_AVAILABLE
docs html htmldoc html-doc:
	$(DOXYGEN) Doxyfile -d DOXYGEN_PROCESSING
pkghtmldir_FILES += html-doc/
endif
How do I optionally install documentation to pkghtmldir when using Automake?
I suggest in the first place that you change your logic a bit. If it's OK to not install the Doxygen docs when Doxygen isn't available to build them, then it should also be OK not to install them even if Doxygen is available. Thus, it makes sense to use an --enable-docs or --with-docs or similar option to configure to let the package builder express whether the docs should be built, with whichever default suits you. You could also consider including pre-built docs with your package, and then selecting whether to enable rebuilding them.
Furthermore, you can then altogether omit the check for Doxygen when docs are not requested anyway, and emit a warning or error when they are requested but Doxygen is not available (or is too old). That should be less surprising to package builders. At least, it would be less surprising to me.
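For illustration, assuming a hypothetical --enable-docs option (wired up in configure.ac with AC_ARG_ENABLE, AC_CHECK_PROGS, and AM_CONDITIONAL), a package builder would see:
./configure                  # default: docs disabled, no Doxygen check at all
./configure --enable-docs    # checks for doxygen; errors out if it is missing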
Still, it ultimately comes down to Automake conditionals either way. Here's a slightly trimmed version of how I handle pretty much the same task in one of my projects:
$(top_srcdir)/Makefile.am:
# An Automake conditional:
if make_docs
# The name of the target(s) that encompasses actually building the docs:
doxygen_html_targets = dox-html

# My Doxyfile is built by the build system, and the docs also depend on some example
# sources, stylesheets, and other files provided by the package. All these are
# listed here, so that the docs are rebuilt if any of them change:
dox_deps = Doxyfile ...

# How to actually install the docs (a one-line make recipe). Covers also installing
# pre-built docs that I include in the package for convenience, though this example
# has most of the other plumbing for that removed.
#
# The chmod in the command below must ensure the target files are writable
# in order to work around weirdness in the behavior of the "distcheck" target
# when not configured to rebuild docs.
html_install = test -d dox-html && html_dir=dox-html || html_dir=$(srcdir)/dox-html; \
	$(MKDIR_P) $(DESTDIR)$(pkgdocdir); \
	cp -pR $${html_dir} $(DESTDIR)$(pkgdocdir)/html; \
	chmod -R u+w,go+rX $(DESTDIR)$(pkgdocdir)/html
else
doxygen_html_targets =
dox_deps =
html_install = :
endif

# The variable prerequisites of this rule are how the selection is made between
# building the docs and not building them:
html-local: $(doxygen_html_targets)
	:

## This rule should not be necessary, but Automake seems otherwise to ignore the
## install-html-local rule, perhaps because there are no targets with an HTML
## primary.
install-data-local: install-html-local

# The variable recipe for this rule is how the selection is made between installing
# the docs and not installing them:
install-html-local:
	$(html_install)

maintainer-clean-local:
	$(RM) -rf dox-html

# This rule, when exercised, is what actually builds the docs:
dox-html: $(dox_deps)
	$(RM) -rf dox-html
	$(DOXYGEN) Doxyfile
The key thing to take away here is that it's not only file lists that you can store in make variables and control via Automake conditionals. You can also store the names of arbitrary targets, to be used in prerequisite lists, and you can even store the text of rule recipes, so as to vary the behavior of the rules that are selected.
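With those pieces in place, the standard Automake entry points select the right behavior on their own. A sketch of the expected flow, assuming the hypothetical --enable-docs switch above feeds the make_docs conditional:
./configure --enable-docs
make html      # html-local pulls in dox-html, so Doxygen runs
make install   # install-data-local triggers install-html-local, which copies the HTML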
I have this custom build step which invokes MATLAB to compile a .slx file into a .dll file.
function(BUILD_SIMULINK model)
    set(EXECUTE_COMMAND matlab -r "rtwbuild( ${model} )")
    add_custom_target(
        ${model} ALL
        COMMAND ${EXECUTE_COMMAND}
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/${model}.slx
        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${model}.dll
        WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
        COMMENT "Building: ${model}"
    )
endfunction(BUILD_SIMULINK)
However, my problem is that whenever I use cmake --build ., this command is always executed.
How can I prevent this command from executing when the DEPENDS haven't changed and the OUTPUT exists? What I'm looking for is similar to how CMake avoids recompiling C/C++ files when the source hasn't changed and the appropriate object file exists.
See the add_custom_target() command documentation:
The target has no output file and is always considered out of date even if the commands try to create a file with the name of the target. Use the add_custom_command() command to generate a file with dependencies.
There is no OUTPUT keyword for add_custom_target(). I think it is only accepted because CMake sees OUTPUT as a dependency. Actually, I get a CMake warning when I run your code:
...
This project specifies custom command DEPENDS on files in the build tree
that are not specified as the OUTPUT or BYPRODUCTS of any
add_custom_command or add_custom_target:
test_model.dll
You need to use add_custom_command():
cmake_minimum_required(VERSION 2.6)
project(TestCustomTargetWithDependency NONE)

function(BUILD_SIMULINK model)
    #set(EXECUTE_COMMAND matlab -r "rtwbuild( ${model} )")
    set(EXECUTE_COMMAND "${CMAKE_COMMAND}" -E touch "${model}.dll")
    add_custom_command(
        OUTPUT "${model}.dll"
        COMMAND ${EXECUTE_COMMAND}
        DEPENDS "${model}.slx"
        COMMENT "Building: ${model}"
    )
    add_custom_target(
        ${model} ALL
        DEPENDS "${model}.dll"
    )
endfunction(BUILD_SIMULINK)

file(WRITE "test_model.slx" "")
BUILD_SIMULINK(test_model)
Note: relative DEPENDS/source paths default to CMAKE_CURRENT_SOURCE_DIR and relative OUTPUT paths to CMAKE_CURRENT_BINARY_DIR, so there is no need to prefix those explicitly.
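A quick way to confirm the behavior (illustrative paths):
mkdir build && cd build
cmake /path/to/source
cmake --build .   # the custom command runs and creates test_model.dll
cmake --build .   # up to date: the command is skipped until test_model.slx changes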
I am stuck with this incredibly silly error. I am trying to run pytest on a Raspberry Pi using bluepy.
pi@pi:~/bluepy/bluepy $ pytest test_asdf.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /home/pi/bluepy, inifile:
collected 0 items / 1 errors
==================================== ERRORS ====================================
______________ ERROR collecting bluepy/test_bluetoothutility.py _______________
ImportError while importing test module '/home/pi/bluepy/bluepy/test_asdf.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_asdf:4: in <module>
from asdf import AsDf
asdf.py:2: in <module>
from bluepy.btle import *
E ImportError: No module named btle
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.65 seconds ============================
I realised that my problem could be that rootdir is showing an incorrect path. It should be:
/home/pi/bluepy/bluepy
I've been reading the pytest docs, but I just do not get how to change the rootdir.
Your problem has nothing to do with Pytest's rootdir.
The rootdir in Pytest has no connection to how test package names are constructed and rootdir is not added to sys.path, as you can see from the problem you were experiencing. (Beware: the directory that is considered rootdir may be added to the path for other reasons, such as it also being the current working directory when you run python -m pytest.)
The problem here, as others have described, is that the top-level bluepy/ is not in sys.path. The easiest way to handle this if you just want to get something running interactively for yourself is as per Cecil Curry's answer: cd to the top-level bluepy and run Pytest as python -m pytest bluepy/test_asdf.py (or just python -m pytest if you want it to discover all test_* files in or under the current directory and run them). But I think you will need to use python -m pytest, not just pytest, in order to make sure that the current working directory is in the path.
If you're looking to set up a test framework that others can easily run without mysterious failures like this, you'll want to set up a test script that sets the current working directory or PYTHONPATH or whatever appropriately. Or use tox. Or just make this a Python package using standard tools that can run the tests for you. (All that goes way beyond the scope of this question.)
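For instance, a minimal way to make the import work regardless of the current working directory is to put the project root on the module search path explicitly (the path shown matches this question's layout):
PYTHONPATH=/home/pi/bluepy python -m pytest /home/pi/bluepy/bluepy/test_asdf.py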
By the way, I concur with Cecil's opinion of Mackie Messer's answer; messing around with conftest.py like that is overly difficult and fragile; there are better solutions for almost any circumstance.
Appendix: Use of rootdir
There are only two things, as far as I'm aware, for which rootdir is used:
The .pytest_cache/ directory is stored in the rootdir unless otherwise specified (with the cache_dir configuration option).
If rootdir contains a conftest.py, it will always be loaded, even if no test files are loaded from in or under the rootdir.
The documentation claims that the rootdir is also used to generate nodeids, but adding a conftest.py containing
def pytest_runtest_logstart(nodeid, location):
print("logstart nodeid={} location={}".format(nodeid, location))
and running pytest --rootdir=/somewhere/way/outside/the/tree shows that to be incorrect (though node locations are relative to the rootdir).
My first guess would be that you don't have that directory in the Python path. You can add it to the Python path dynamically. One simple way to do this is in a test configuration file, conftest.py, which I believe is always executed before test discovery and test running.
For example, you might have a project setup like:
root
+-- tests
|   +-- conftest.py
|   +-- test_asdf.py
+-- bluepy (or main project dir)
|   +-- miscellaneous modules
In that case, you could add the root dir to your Python path in the conftest.py file like so:
# conftest.py
import sys
from os.path import abspath
from os.path import dirname as d

# Add the project root (the parent of tests/) to the module search path:
root_dir = d(d(abspath(__file__)))
sys.path.append(root_dir)
Let me know if that's helpful.
Actually, py.test is correctly discovering the rootdir for your project to be /home/pi/bluepy. That's good.
Tragically, you are erroneously attempting to run py.test within your project's package subdirectory (i.e., /home/pi/bluepy/bluepy) rather than within your project's rootdir (i.e., /home/pi/bluepy). That's bad.
Let's break this down a little. From within the:
/home/pi/bluepy directory, there is a bluepy.btle submodule. (Good.)
/home/pi/bluepy/bluepy subdirectory, there is no bluepy.btle submodule. (Bad.) Unless you awkwardly attempt to manually inject the parent directory of this subdirectory (i.e., /home/pi/bluepy) onto sys.path as Mackie Messer perhaps inadvisably advises, Python has no means of inferring that the package bluepy actually refers to the current directory coincidentally also named bluepy. To avoid ambiguity issues of this sort, Python is typically only run outside rather than inside of a project's package subdirectory.
Since you ran py.test from the latter rather than the former directory, Python is unable to find the bluepy.btle submodule on the current sys.path. For this and similar reasons, py.test should typically only ever be run from your project's top-level rootdir (i.e., /home/pi/bluepy):
pi@pi:~ $ cd ~/bluepy
pi@pi:~/bluepy $ py.test bluepy/test_asdf.py
Lastly, note that it's typically preferable to defer test discovery to py.test. Rather than explicitly listing all test script filenames on the command line, consider instead letting py.test implicitly find and run all tests containing some substring via the -k option. For example, to run all tests whose function names are prefixed by test_asdf (regardless of the test script they reside in):
pi@pi:~ $ cd ~/bluepy
pi@pi:~/bluepy $ py.test -k test_asdf .
The trailing . is optional, but often useful. It instructs py.test to set its rootdir property to the current directory (i.e., /home/pi/bluepy). py.test is usually capable of finding your project's rootdir and setting this property on its own, but it can't hurt to specify it manually. (Especially as you're having... issues.)
For further details on rootdir discovery, see Initialization: determining rootdir and inifile in the official py.test documentation.
I'm using OpenEmbedded-Core and have created a custom layer with priority 6. Months of development have gone by, and now I want to increase my layer's priority to 8 because an append file from another layer with priority 7 is interfering with an append file I'm adding in my layer.
My question is, how can I generate a list of recipes and .bbappend files used in an image?
I want to generate the list both before and after I make the priority change so that I can compare them (with a difftool hopefully) to see if any unexpected side-effects occurred, like an important append file from the other layer getting ignored potentially.
I'm using the angstrom-v2014.12-yocto1.7 branch of the Angstrom distribution.
[EDIT]
I'm now primarily just interested in determining how to list which .bbappend files are actually used by my image at this point.
A list of packages can be viewed using "bitbake -g your-image-name" as suggested by @pnxs, or from the .manifest file (which is what I like to use), which in my case is located under deploy/glibc/images/imagename/. I originally asked how a list of "recipe files" could be generated, but I think a list of packages is sufficient.
Regarding the .bbappends, though: I had a case where my own .bbappend was ignored due to layer priorities. I made a change to my layer priorities and now want to see if that caused any .bbappend files anywhere else in my image to get ignored. As I understand it, using "bitbake-layers show-appends" as suggested lists all .bbappends present rather than just those actually used in the creation of an image, so this doesn't do what I'm looking for.
Try using "bitbake-layers show-appends" to see which bbappends are used. That only works on a per-recipe basis, but it might give you the information you need to understand the priorities.
You can do a "bitbake -g your-image-name", which creates some dot files in the current directory.
The file "pn-depends.dot" contains a list of package names (pn) and the dependencies between them.
In the first part of the file, where all packages are listed, you see for example:
"busybox" [label="busybox :1.23.1-r0.2\n/home/user/yocto/sources/poky/meta/recipes-core/busybox/busybox_1.23.1.bb"]
"base-files" [label="base-files :3.0.14-r89\n/home/user/yocto/sources/poky/meta/recipes-core/base-files/base-files_3.0.14.bb"]
So you get a list of all packages used by your image and the corresponding recipe file for each.
To see which of the recipes are extended by a bbappend, you have to get the list of bbappends with "bitbake-layers show-appends" and look up the appends for every recipe. You could write a little Python program to do that for you; a rough shell equivalent is sketched below.
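An untested sketch of that cross-referencing idea in plain shell (it assumes the pn-depends.dot label format shown above and GNU sed/grep; -A 2 is an arbitrary amount of context to show the append paths listed beneath each matching recipe):
bitbake -g your-image-name
# Extract the .bb paths from the dot labels and keep only the file names:
sed -n 's/.*\\n\([^"]*\.bb\)".*/\1/p' pn-depends.dot | sed 's!.*/!!' | sort -u > used-recipes.txt
# Show appends only for recipes actually used by the image:
bitbake-layers show-appends | grep -A 2 -F -f used-recipes.txt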
Try the following:
Show all recipes
bitbake-layers show-recipes
Show the .bb file of a recipe
RECIPE_NAME="linux-yocto"
bitbake -e $RECIPE_NAME | grep ^FILE=
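The grep prints the full path of the recipe file that bitbake actually selected, e.g. (illustrative output):
FILE="/home/user/yocto/sources/poky/meta/recipes-kernel/linux/linux-yocto_4.1.bb"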
Try the command below:
bitbake -g image-name && cat pn-depends.dot | grep -v -e '-native' | grep -v digraph | grep -v -e '-image' | awk '{print $1}' | sort | uniq