How can I determine what is causing an unwanted package to be built in Yocto?

I am trying to build a console image for an RPi using the core-image-base recipe, but somewhere in my configuration I seem to have switched something on that increases the number of recipes built by around 1000, including many things that don't feel like they belong in a console image (libx11, gnome-desktop-testing, etc.).
I am trying to track down why these recipes are being included in my build. My method so far has been to run the following commands:
# Generate a massive dot file with all the dependencies in it
bitbake -g core-image-base
# grep through that file to find out what is bringing in
# gnome-desktop-testing.
cat task-depends.dot | grep -i gnome-desktop-testing | grep -vi do_package_write_rpm
I removed do_package_write_rpm from the matching since everything seems to match against it. This leaves the following:
"core-image-base.do_build" -> "gnome-desktop-testing.do_build"
"core-image-base.do_rootfs" -> "gnome-desktop-testing.do_package_qa"
"core-image-base.do_rootfs" -> "gnome-desktop-testing.do_packagedata"
"core-image-base.do_rootfs" -> "gnome-desktop-testing.do_populate_lic"
"glib-2.0.do_package_qa" -> "gnome-desktop-testing.do_packagedata"
(followed by many dependencies between the tasks of the gnome-desktop-testing recipe)
So, if my interpretation is correct, it seems that core-image-base depends directly on gnome-desktop-testing. This seems unusual since core-image-base is supposed to be a console-only image.
I tried adding PACKAGE_EXCLUDE = "gnome-desktop-testing" to my local.conf hoping that it would give back some more information, but the build just seems to proceed regardless of this variable's setting :/
How can I figure out why gnome-desktop-testing is being built by Yocto? Ideally I would like to have a solution not involving toaster.

I ran into this issue and so I thought I would post the answer.
First, I removed the recipe, rebuilt and then looked at the first dependency chain.
NOTE: Runtime target 'shared-mime-info' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['shared-mime-info', 'glib-2.0', 'gnome-desktop-testing']
Then we look in the glib-2.0 recipe to see that its -ptest package has an RDEPENDS on gnome-desktop-testing:
RDEPENDS_${PN}-ptest += "\
    coreutils \
    libgcc \
    dbus \
    gnome-desktop-testing \
    tzdata \
So then to fix that you will need to disable "ptest". This can be done from your distro configuration (meta-layer/conf/distro/*.conf).
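For example, a minimal sketch, assuming ptest was enabled via DISTRO_FEATURES (the layer and file names below are placeholders):

# meta-mylayer/conf/distro/mydistro.conf
DISTRO_FEATURES_remove = "ptest"

On newer releases the override syntax is DISTRO_FEATURES:remove = "ptest"; if ptest packages were instead pulled in from local.conf (e.g. via EXTRA_IMAGE_FEATURES = "ptest-pkgs"), drop them there instead.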

One brutal solution to this problem is to just delete the recipe that you don't want and to rerun bitbake. This gives you a useful message such as:
ERROR: Required build target 'core-image-base' has no buildable providers.
Missing or unbuildable dependency chain was:
['core-image-base', 'packagegroup-base-extended', 'ofono', 'glib-2.0', 'gnome-desktop-testing']
If you have brought the layer in using git, the deletion can be quickly reverted with git checkout path/to/deleted/recipe (see the sketch below).
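As a rough illustration (the recipe path below is a placeholder for wherever the unwanted recipe lives in your layers):

# delete the recipe, rebuild to expose the dependency chain, then restore the file
rm meta-somelayer/recipes-gnome/gnome-desktop-testing/gnome-desktop-testing_*.bb
bitbake core-image-base
git checkout -- meta-somelayer/recipes-gnome/gnome-desktop-testing/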

Related

What is the best practice for installing external dependencies in a Coq project?

I understand what I believe is the essence of the official utilities doc https://coq.inria.fr/refman/practical-tools/utilities.html#building-a-coq-project:
one creates a _CoqProject with arguments to coqc and the file names to compile (hopefully in an order that takes into account dependencies)
then one makes an automatic CoqMakefile with coq_makefile -f _CoqProject -o CoqMakefile
Then you use their recommended Makefile to run the automatically generated makefile.
But then if we need other dependencies, it doesn't say how to install (or uninstall) them. What is the standard way to do that?
My guess is that one adds a target to the end of the Makefile and does some sort of opam install?
e.g.
# KNOWNTARGETS will not be passed along to CoqMakefile
KNOWNTARGETS := CoqMakefile

# KNOWNFILES will not get implicit targets from the final rule, and so
# depending on them won't invoke the submake. TODO: Not sure what this means
# Warning: These files get declared as PHONY, so any target depending
# on them always gets rebuilt -- or, better said, any rule that depends on
# these names will have its commands re-run (which usually means rebuilding
# things, since that is what makefiles are for)
KNOWNFILES := Makefile _CoqProject

# Runs the invoke-coqmakefile rule if you run make by itself; sets the
# default goal to be used if no targets were specified on the command line.
.DEFAULT_GOAL := invoke-coqmakefile

# Depends on two files: this Makefile itself and our _CoqProject
CoqMakefile: Makefile _CoqProject
	$(COQBIN)coq_makefile -f _CoqProject -o CoqMakefile

# Note: make knows which makefile to use here thanks to -f CoqMakefile
invoke-coqmakefile: CoqMakefile install_external_dependencies
	$(MAKE) --no-print-directory -f CoqMakefile $(filter-out $(KNOWNTARGETS),$(MAKECMDGOALS))

.PHONY: invoke-coqmakefile $(KNOWNFILES)

####################################################################
##                      Your targets here                         ##
####################################################################

# This should be the last rule, to handle any targets not declared above
%: invoke-coqmakefile
	#true

# I know this is not a good Coq dependency example, but to make it concrete
# I wrote some opam command
install_external_dependencies:
	opam install coq-serapi
I think I wrote install_external_dependencies in the right place, but I'm not sure. Is that correct? Does anyone have a real example?
For all the code see: https://github.com/brando90/ultimate-utils/tree/master/tutorials_for_myself/my_makefile_playground/beginner_coq_project_with_makefiles/debug_proj
related: question on official tutorial for building coq projects https://coq.discourse.group/t/official-place-to-learn-how-to-setup-coq-make-files-for-beginner/1682
Btw,
I don't understand the last rule in the makefile yet.
# This should be the last rule, to handle any targets not declared above
%: invoke-coqmakefile
#true
i.e.
the #true in the makefile template Coq gave us, and
the % in the name of the rule.
What does that rule do?
Update
I'm seeking a small end-to-end demo of how to install all dependencies, with whatever the recommended approach is, when using _CoqProject and coq_makefile as shown in the utilities doc https://coq.inria.fr/refman/practical-tools/utilities.html. The ideal answer would contain a single script that installs and compiles everything in one go -- say an install_project_name.sh -- including opam switches etc.
Related:
How does one install a new version of Coq when it cannot find the repositories in a new Mac M1 machine?
Installing packages for Coq using OPAM
https://coq.discourse.group/t/official-place-to-learn-how-to-setup-coq-make-files-for-beginner/1682
The simplest setup is to install external dependencies manually with opam.
opam install packages-needed-by-my-project
Then they will be immediately available to build your own project.
The next level of organization is to package up your project. Refer to the following Coq community resources:
Coq community templates
Recommended Project Structure
The main thing immediately relevant to your question is to have a *.opam file at the root of your project which lists its dependencies (possibly with version requirements). Then you can install them using opam install . --deps-only.
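For instance, a minimal sketch of such a file (the project name, version bound and dependency list are made up for illustration; opam may warn about other missing metadata fields but can still resolve the dependencies):

# my_project.opam (hypothetical)
opam-version: "2.0"
synopsis: "Example Coq project"
version: "0.1.0"
depends: [
  "coq" {>= "8.13"}
  "coq-serapi"
]

Then, from the project root:
opam install . --deps-only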
The Makefile part of your question is a bit of over-engineering for passing options seamlessly to CoqMakefile. I'm not sure how it works off-hand, but it's not important anyway, especially because Dune is superseding make as the recommended build system for Coq projects.

how to make bitbake print options of do_configure

I'm having trouble cross-compiling Qt5 for a BeagleBone using OpenEmbedded with BitBake. I think that in the do_configure step not everything is passed from my *.bbappend, and no platform plugins are installed (I need 'linuxfb').
My question is: how can I make bitbake print the list of arguments it passes to ./configure?
There are a few ways to get that info; I would suggest looking in the recipe work directory:
temp/log.do_configure contains the configure task log, which should list the exact ./configure command
build/ contains the project's own build system artefacts
bitbake -e <recipe> | grep <VARIABLE> is very useful if you want to know what a variable's value ends up as (check e.g. the PACKAGECONFIG and PACKAGECONFIG_CONFARGS values if you're modifying a packageconfig), as in the sketch below
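For example, assuming the Qt recipe in question is qtbase and a stock build directory layout (both assumptions; adjust the recipe name and paths to your setup):

# what did the configure task actually run?
less tmp/work/*/qtbase/*/temp/log.do_configure

# what do the relevant variables expand to once all .bbappend files are applied?
bitbake -e qtbase | grep -E '^PACKAGECONFIG=|^PACKAGECONFIG_CONFARGS='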

cpanspec option --filter-requires fails to remove :MODULE_COMPAT

I would like to make an rpm which can be installed on RHEL 5, 6 and 7.
[p4474668@rhel7dev source]$ cpanspec webmin-ajax-0.00.tar.gz -d '' --force --filter-requires 'perl(:MODULE_COMPAT_5.16.3)' -b
(... lots of infos here ...)
[p4474668@rhel7dev source]$ rpm -qpR noarch/perl-webmin-ajax-0.00-1.noarch.rpm
perl(:MODULE_COMPAT_5.16.3)
perl(DST::System)
perl(WebminCore)
perl(lib)
perl(strict)
perl(warnings)
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1
Since the dependency is not removed, the package cannot be installed on RHEL 5.
How can I remove the unneeded perl(:MODULE_COMPAT_5.16.3) dependency?
This feature is under-documented, and doesn't (IMHO) do all of what its name implies.
What it does is:
along with the .spec file it creates a filter script (some sed in a .sh script); this is referenced in the .spec file and is required during rpmbuild. The dependencies written to the .spec file itself are not affected by the filtering.
that sed filter script is rewritten by sed during rpmbuild to invoke the correct perl.req.
the filter script is applied only to the output of perl.req (which is normally only invoked by find-requires)
(As far as I can tell, perl.req predates explicitly listed dependencies in Makefile.PL and/or the META files. What it does is look for likely use and require directives in perl code with some logic and regex's, the implementation of which is exactly as attractive as you'd expect).
To be honest I'm not entirely sure in what circumstances perl.req and find-requires are or are not invoked (cursory testing by setting AutoReq didn't have an effect for me), but cpanspec reads and processes META.yml directly if found, in any case. The only other dependency-related functionality is (rudimentary) processing of Makefile.PL to extract module references in PREREQ_PM and add those as BuildRequires.
Normally you can patch up your .spec file to add something like
%filter_from_requires /XSLoader/d
%filter_setup
However, :MODULE_COMPAT_xxx is a special-case dependency in cpanspec; you can turn it off with the -o option to create "compatible" RPMs that work on older systems. This has other effects too, so it might be less problematic to build in discrete steps so that you can tweak the build:
cpanspec [...] yourmodule
sed -i'.bak' '/^Requires.*:MODULE_COMPAT.*/d' yourmodule.spec
rpmbuild -ba yourmodule.spec
(especially since cpanspec warns that -b is not always guaranteed to work).
Of course the final package may not work as expected; cpanspec is being cautious, and there might be a real API requirement behind this constraint (in your case the "noarch" build may limit the scope for problems, I think). The rule of thumb is that APIs/ABIs try to be backward compatible, not forward compatible (a related problem occurs if you try to build binaries to work with a glibc older than the one you compile against).
You may have better luck building on a CentOS 5 system (or investigate Mock for a more complete solution).
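If you go the CentOS 5 route with Mock, a rough sketch (the epel-5-x86_64 config name and the SRPM path are assumptions that depend on how your mock and rpmbuild are set up):

# build a source RPM from the generated spec, then rebuild it inside a RHEL/CentOS 5 chroot
rpmbuild -bs yourmodule.spec
mock -r epel-5-x86_64 --rebuild ~/rpmbuild/SRPMS/perl-yourmodule-*.src.rpm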

Solaris package upgrade

I'm having a lot of trouble wrapping my head around how Solaris 11 does packaging. I understand that there is a yum-like packaging approach, but I would expect there to be an rpm -i / rpm -U style approach that allows a package to be delivered as a file and then installed or upgraded.
For now I have tracked down how to make a package, i.e. with pkgmk and pkgtrans. Given this I can create a "foo_1.0.pkg" file that can be installed like this:
pkgadd -d foo_1.0.pkg
However I can not figure out how to upgrade this package with "foo_2.0.pkg":
root@hostname # pkgadd -d foo_2.0.pkg
The following packages are available:
1 foo foo
(x86) private_build
Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]: all
Processing package instance <foo> from </root/foo_2.0.pkg>
foo(x86) private_build
Current administration requires that a unique instance of the <foo>
package be created. However, the maximum number of instances of the
package which may be supported at one time on the same system has
already been met.
No changes were made to the system.
What am I doing wrong? It would appear that I should use pkg update, but this seems to imply that I need to publish my package in a repository.
First, you aren't using Solaris 11 packaging (IPS) but the legacy SVR4 packaging.
With the latter, you cannot upgrade a custom package. The only way is then simply to remove the old package and install the newer one, which is what rpm -U is doing under the hood anyway.
pkgrm foo
pkgadd -d foo_2.0.pkg foo
I had the same problem, but I was able to work around it by passing a config file to the command. This is especially useful in a script when used with "echo |", as it also bypasses the confirmation prompt. The config file overrides the default install properties, which live in /var/sadm/install/admin/default. The key is the instance=overwrite line. I changed some of the others as well, to avoid any other prompts that may come up. As an alternative solution, you can change the default file directly and not have to reference an additional config file.
With myprog 1.0 (or 2.0) already installed, run the following command:
echo | pkgadd -a /opt/myprog/install.conf -d myprog2.0
contents of /opt/myprog/install.conf file:
mail=
instance=overwrite
partial=nocheck
runlevel=nocheck
idepend=nocheck
rdepend=nocheck
space=ask
setuid=ask
conflict=nocheck
action=nocheck
networktimeout=60
networkretries=3
authentication=quit
keystore=/var/sadm/security
proxy=
$UPDATE
This variable does not exist under most installation environments. If it does exist (with the value yes), it means that a PKG with the same name, version and architecture is already installed on the system or that the installing PKG will overwrite an installed PKG. The original BASEDIR is then used.
So you can check this variable in a preinstall or postinstall script to handle updates.
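For example, a minimal sketch of a preinstall script that branches on it (the package name, config path and backup step are just placeholders):

#!/bin/sh
# preinstall: $UPDATE is set to "yes" by pkgadd when an installed instance is being overwritten
if [ "$UPDATE" = "yes" ]; then
    echo "Updating an existing installation of foo"
    # e.g. back up the old configuration before the new files are laid down
    cp -p "$BASEDIR/etc/foo.conf" "$BASEDIR/etc/foo.conf.bak" 2>/dev/null
fi
exit 0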

Using Makefile to add new source code files to Postgresql

I'm working on adding some functionality to the storage manager module in PostgreSQL.
I have already added a few source files to the smgr folder, and I was able to have the make system include them by adding their names to the OBJS list in the Makefile inside the smgr folder (i.e. when I add A.c, I add A.o to the OBJS list).
That was working fine. Now I'm trying to add a new file hdfs_test.c to the project. The problem with this file is that it requires some extra directives in its compilation command (-I and -L directives).
The gcc command is:
gcc hdfs_test.c -I/HDFS_HOME/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include -L/HDFS_HOME/hdfs/src/c++/libhdfs -L/HDFS_HOME/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs -o hdfs_test
Therefore, simply adding hdfs_test.o to the OBJS list doesn't work.
I tried editing the Makefile to look like this:
OBJS = md.o smgr.o smgrtype.o A.o B.o hdfs_test.o

MyRule1 : hdfs_test.c
	gcc tati.c -c -I/diskless/taljab1/Workspace/HDFS_Append/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include -L/diskless/taljab1/Workspace/HDFS_Append/hdfs/src/c++/libhdfs -L/diskless/taljab1/Workspace/HDFS_Append/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs
but it didn't work out, and I kept getting error messages from make trying to compile hdfs_test.c without including the directives.
How do I force make to include my compilation directives for hdfs_test.c?
Thanks
You don't need to pass -l and -L at compile time, only at link time. At compile time only -I (include path) directives are required to help the compiler find any extra headers.
You should compile your source file to a .o file, same as all the others. Then add the -L and -l directives to the link command line when the linker is invoked to create the postgres executable. That means all you need to edit in src/backend/storage/smgr/Makefile is the OBJS line to add your output object, as you've already done above. Remove your custom rule; it's unnecessary as well as incorrect.
Just add your extra libraries to the $(LIBS) make variable and add your -L paths to $(LDFLAGS) via src/Makefile.global. src/Makefile.global is generated by configure from src/Makefile.global.in so you actually need to modify configure's behavior to add your includes, library paths and libraries. Don't edit configure directly either; edit configure.in and re-generate it with autoconf.
Yes, GNU Autotools is sometimes referred to as autohell for a reason. It's a bit ... interesting ... to work with sometimes, and there can be a lot of indirection involved in doing simple things.
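For concreteness, a rough sketch of the two pieces described above, reusing the paths from the question (the exact configure.in / Makefile.global plumbing differs between PostgreSQL versions, so treat this as an outline rather than a drop-in patch):

# src/backend/storage/smgr/Makefile: only the object list changes
OBJS = md.o smgr.o smgrtype.o A.o B.o hdfs_test.o

# configure.in (then re-run autoconf and ./configure): make the headers and libraries visible
CPPFLAGS="$CPPFLAGS -I/HDFS_HOME/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include"
LDFLAGS="$LDFLAGS -L/HDFS_HOME/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server"
LIBS="$LIBS -lhdfs -ljvm"

With that in place hdfs_test.o is compiled like every other object, and the -L/-l options are only applied at the final link of the postgres executable.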