autotools: propagating package version into subpackage

I have an autotools-based package called Foo. For historical reasons (are there ever any other reasons?), it contains a subpackage subBar.
Foo -|
     |- configure.ac
     |- Makefile.am
     |- subBar -|
                |- configure.ac
                |- Makefile.am
subBar's configure.ac is sufficiently complex that I would rather not touch it if avoidable. On the other hand, subBar is so tightly integrated with Foo that it makes no sense to maintain a separate version number for it.
Is there a way to propagate Foo's version (from AC_INIT) into subBar? The obvious approach, in subBar/configure.ac:
AC_INIT([subBar], [$(VERSION)], ...)
does not work, due to circular references.
Any ideas?

The manual provides a possible solution:
The arguments of AC_INIT must be static, i.e., there should not be any
shell computation, quotes, or newlines, but they can be computed by
M4... It is permissible to use m4_esyscmd or m4_esyscmd_s for
computing a version string that changes with every commit to a version
control system (in fact, Autoconf does just that, for all builds of
the development tree made between releases).
Since autoconf itself uses this in its own configure.ac:
AC_INIT([GNU Autoconf],
        m4_esyscmd([build-aux/git-version-gen .tarball-version]),
        [bug-autoconf@gnu.org])
you need to replace this with a system command or shell script of your own to get a version string. For example, with a top-level file called .foo-version containing "MAJOR.MINOR", and borrowing from git-version-gen, something like:
m4_esyscmd([tr -d '\n' < ../.foo-version])
I suppose you could use the same command in the top-level package, replacing ../ with ./ of course. Since m4 is invoking a system command, you're free to get the version string any way you prefer.
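Putting it together, a minimal sketch (assuming a top-level version file named .foo-version; the bug-report addresses are placeholders):
# Foo/.foo-version contains, e.g.: 1.2
# Foo/configure.ac:
AC_INIT([Foo], m4_esyscmd([tr -d '\n' < ./.foo-version]), [bugs@example.org])
# Foo/subBar/configure.ac -- note the ../, since m4 runs in subBar/:
AC_INIT([subBar], m4_esyscmd([tr -d '\n' < ../.foo-version]), [bugs@example.org])
Both configure scripts then report the same version, taken from the single file.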

https://github.com/ClusterLabs/libqb/blob/master/build-aux/git-version-gen
This is a shell script you could adopt. For those not familiar with it: it computes a version string from git describe, falling back to a tarball-version file in release tarballs. To use it for both packages, tailor the names into two separate sets of version files, e.g. .tarball-version and .version for Foo, and .subBar-tarball-version and .subBar-version for subBar.
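For instance, something along these lines (only a sketch; it assumes each configure.ac can reach a copy of the script, and that the fallback files are generated at release time):
dnl Foo/configure.ac
AC_INIT([Foo], m4_esyscmd([build-aux/git-version-gen .tarball-version]), ...)
dnl Foo/subBar/configure.ac
AC_INIT([subBar], m4_esyscmd([../build-aux/git-version-gen .subBar-tarball-version]), ...)
Since both invocations run git describe in the same repository, the two packages report the same version.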

How to conditionally install documentation to pkghtmldir using Automake

I have a recipe in Automake that optionally builds documentation if the user issues make docs or make htmldoc:
if DOXYGEN_AVAILABLE
docs html htmldoc html-doc:
    $(DOXYGEN) Doxyfile -d DOXYGEN_PROCESSING
endif
DOXYGEN_AVAILABLE is set in configure based on the result of AC_CHECK_PROGS. If the docs are built there will be a directory html-doc. The documentation is optional and html-doc may be missing.
If html-doc is present I won't have a file list. I don't believe this will work in Makefile.am:
if DOXYGEN_AVAILABLE
docs html htmldoc html-doc:
    $(DOXYGEN) Doxyfile -d DOXYGEN_PROCESSING
pkghtmldir_FILES += html-doc/
endif
How do I optionally install documentation to pkghtmldir when using Automake?
I suggest in the first place that you change your logic a bit. If it's OK to not install the Doxygen docs when Doxygen isn't available to build them, then it should also be OK not to install them even if Doxygen is available. Thus, it makes sense to use an --enable-docs or --with-docs or similar option to configure to let the package builder express whether the docs should be built, with whichever default suits you. You could also consider including pre-built docs with your package, and then selecting whether to enable rebuilding them.
Furthermore, you can then omit the check for Doxygen altogether when docs are not requested, and emit a warning or error when they are requested but Doxygen is not available (or is too old). That should be less surprising to package builders. At least, it would be less surprising to me.
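For illustration, a minimal configure.ac sketch of that logic (the conditional name make_docs matches the Makefile.am below; the option name and the auto default are assumptions):
AC_ARG_ENABLE([docs],
  [AS_HELP_STRING([--enable-docs], [build and install the Doxygen documentation])],
  [], [enable_docs=auto])
AC_CHECK_PROGS([DOXYGEN], [doxygen])
AS_IF([test "$enable_docs" = yes && test -z "$DOXYGEN"],
  [AC_MSG_ERROR([--enable-docs was given, but doxygen was not found])])
AM_CONDITIONAL([make_docs], [test -n "$DOXYGEN" && test "$enable_docs" != no])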
Still, it ultimately comes down to Automake conditionals either way. Here's a slightly trimmed version of how I handle pretty much the same task in one of my projects:
$(top_srcdir)/Makefile.am:
# An Automake conditional:
if make_docs
# The name of the target(s) that encompasses actually building the docs
doxygen_html_targets = dox-html
# My Doxyfile is built by the build system, and the docs also depend on some example
# sources, stylesheets, and other files provided by the package. All these are
# listed here, so that the docs are rebuilt if any of them change:
dox_deps = Doxyfile ...
# How to actually install the docs (a one-line make recipe). Covers also installing
# pre-built docs that I include in the package for convenience, though this example
# has most of the other plumbing for that removed.
#
# The chmod in the command below must make the target files writable
# in order to work around weirdness in the behavior of the "distcheck" target
# when not configured to rebuild docs.
html_install = test -d dox-html && html_dir=dox-html || html_dir=$(srcdir)/dox-html; \
    $(MKDIR_P) $(DESTDIR)$(pkgdocdir); \
    cp -pR $${html_dir} $(DESTDIR)$(pkgdocdir)/html; \
    chmod -R u+w,go+rX $(DESTDIR)$(pkgdocdir)/html
else
doxygen_html_targets =
dox_deps =
html_install = :
endif
# The variable prerequisites of this rule are how the selection is made between
# building the docs and not building them:
html-local: $(doxygen_html_targets)
    :
## This rule should not be necessary, but Automake seems otherwise to ignore the
## install-html-local rule, perhaps because there are no targets with an HTML
## primary.
install-data-local: install-html-local
# The variable recipe for this rule is how the selection is made between installing
# the docs and not installing them:
install-html-local:
    $(html_install)
maintainer-clean-local:
    $(RM) -rf dox-html
# This rule, when exercised, is what actually builds the docs:
dox-html: $(dox_deps)
    $(RM) -rf dox-html
    $(DOXYGEN) Doxyfile
The key thing to take away here is that it's not only file lists that you can store in make variables and control via Automake conditionals. You can also store the names of arbitrary targets, to be used in prerequisite lists, and you can even store the text of rule recipes, so as to vary behavior of the rules that are selected.

Is it possible to use config fragments with Buildroot's .config?

I'm using Buildroot as a submodule, and I want to reuse existing in-tree defconfigs with a few modifications of my own.
I'd like to store just the modified options in a config fragment, just like I can do with BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES for the Linux kernel config.
Right now I'm doing something like:
cd buildroot
make BR2_EXTERNAL="$(pwd)/../mypackage" qemu_x86_64_defconfig
echo '
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
' >> .config
make
Is there a nicer way to avoid that echo with a config fragment, just like I'm using for the Linux kernel config fragment? I'd expect something like:
make BR2_CONFIG_FRAG=br_config_frag
where br_config_frag contains the lines:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then I'd be able to write just:
make -C buildroot BR2_CONFIG_FRAG=br_config_frag qemu_x86_64_defconfig all
Here's the full example repo.
Edit
One slight improvement is to put the "config fragment" in a separate file buildroot_config_fragment:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then cat that:
cat ../buildroot_config_fragment >> .config
First side note: your script should run make olddefconfig before make, so that any new options are set to their default value instead of being asked for interactively.
You could simplify the script a little by doing:
cat configs/qemu_x86_64_defconfig br_config_frag > .config
make olddefconfig
You can also use the script support/kconfig/merge_config.sh from the kconfig infrastructure. However, that script internally uses make alldefconfig which currently doesn't work - you need a patch for that.
If you would like to add support for BR2_CONFIG_FRAG to the Buildroot infrastructure, feel free to send a patch to the Buildroot mailing list!
I asked on IRC, and a user who appears to be Yann E. Morin, an active Buildroot developer, said it is not currently possible.
Arnout's make alldefconfig patch is now merged in buildroot as of 26 Jul 2017
(https://github.com/buildroot/buildroot/commit/dab80981d15979eab3aea28a33694396635a52a1).
This means you can now do:
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig fragment1.config fragment2.config
This will use qemu_x86_64_defconfig as the base and add modifications given in the listed fragment config files. The tool will also show nice warnings if you override items.
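So the question's original script could be reduced to something like this (a sketch; the fragment path, relative to the buildroot tree, is an assumption based on the layout above):
cd buildroot
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig ../buildroot_config_fragment
make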

Change pytest rootdir

I am stuck with this incredibly silly error. I am trying to run pytest on a Raspberry Pi using bluepy.
pi@pi:~/bluepy/bluepy $ pytest test_asdf.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
rootdir: /home/pi/bluepy, inifile:
collected 0 items / 1 errors
==================================== ERRORS ====================================
______________ ERROR collecting bluepy/test_asdf.py _______________
ImportError while importing test module '/home/pi/bluepy/bluepy/test_asdf.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_asdf.py:4: in <module>
    from asdf import AsDf
asdf.py:2: in <module>
    from bluepy.btle import *
E   ImportError: No module named btle
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.65 seconds ============================
I realised that my problem could be that rootdir is showing an incorrect path. It should be
/home/pi/bluepy/bluepy
I've been reading pytest docs but I just do not get it how to change the rootdir.
Your problem has nothing to do with Pytest's rootdir.
The rootdir in Pytest has no connection to how test package names are constructed and rootdir is not added to sys.path, as you can see from the problem you were experiencing. (Beware: the directory that is considered rootdir may be added to the path for other reasons, such as it also being the current working directory when you run python -m pytest.)
The problem here, as others have described, is that the top-level bluepy/ is not in sys.path. The easiest way to handle this if you just want to get something running interactively for yourself is as per Cecil Curry's answer: cd to the top-level bluepy and run Pytest as python -m pytest bluepy/test_asdf.py (or just python -m pytest if you want it to discover all test_* files in or under the current directory and run them). But I think you will need to use python -m pytest, not just pytest, in order to make sure that the current working directory is in the path.
If you're looking to set up a test framework that others can easily run without mysterious failures like this, you'll want to set up a test script that sets the current working directory or PYTHONPATH or whatever appropriately. Or use tox. Or just make this a Python package using standard tools that can run the tests for you. (All that goes way beyond the scope of this question.)
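For example, a minimal wrapper script (a sketch; it assumes the script lives in the project's top-level directory):
#!/bin/sh
# run-tests.sh -- pin the working directory so imports resolve, then run pytest
cd "$(dirname "$0")"
exec python -m pytest "$@"
Contributors can then run ./run-tests.sh from anywhere without worrying about sys.path.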
By the way, I concur with Cecil's opinion of Mackie Messer's answer; messing around with conftest.py like that is overly difficult and fragile; there are better solutions for almost any circumstance.
Appendix: Use of rootdir
There are only two things, as far as I'm aware, for which rootdir is used:
The .pytest_cache/ directory is stored in the rootdir unless otherwise specified (with the cache_dir configuration option).
If rootdir contains a conftest.py, it will always be loaded, even if no test files are loaded from in or under the rootdir.
The documentation claims that the rootdir is also used to generate nodeids, but adding a conftest.py containing
def pytest_runtest_logstart(nodeid, location):
    print("logstart nodeid={} location={}".format(nodeid, location))
and running pytest --rootdir=/somewhere/way/outside/the/tree shows that to be incorrect (though node locations are relative to the rootdir).
My first guess would be that you don't have that directory in the python path. You can add it to the python path dynamically. One simple way to do this is in a test configuration file conftest.py, which I believe is always executed before test discovery and test running.
For example, you might have a project setup like:
root
+-- tests
|   +-- conftest.py
|   +-- test_asdf.py
+-- bluepy (or main project dir)
|   +-- miscellaneous modules
In which case, you could add the root dir to your python path in the conftest.py file like so:
# conftest.py
import sys
from os.path import abspath, dirname

# Add the project root (the parent of tests/) to sys.path so that the
# main package can be imported from the tests.
root_dir = dirname(dirname(abspath(__file__)))
sys.path.append(root_dir)
Let me know if that's helpful.
Actually, py.test is correctly discovering the rootdir for your project to be /home/pi/bluepy. That's good.
Tragically, you are erroneously attempting to run py.test within your project's package subdirectory (i.e., /home/pi/bluepy/bluepy) rather than within your project's rootdir (i.e., /home/pi/bluepy). That's bad.
Let's break this down a little. From within the:
/home/pi/bluepy directory, there is a bluepy.btle submodule. (Good.)
/home/pi/bluepy/bluepy subdirectory, there is no bluepy.btle submodule. (Bad.) Unless you awkwardly attempt to manually inject the parent directory of this subdirectory (i.e., /home/pi/bluepy) onto sys.path as Mackie Messer perhaps inadvisably advises, Python has no means of inferring that the package bluepy actually refers to the current directory coincidentally also named bluepy. To avoid ambiguity issues of this sort, Python is typically only run outside rather than inside of a project's package subdirectory.
Since you ran py.test from the latter rather than the former directory, Python is unable to find the bluepy.btle submodule on the current sys.path. For this and similar reasons, py.test should typically only ever be run from your project's top-level rootdir (i.e., /home/pi/bluepy):
pi@pi:~/ $ cd ~/bluepy
pi@pi:~/bluepy $ py.test bluepy/test_asdf.py
Lastly, note that it's typically preferable to defer test discovery to py.test. Rather than explicitly listing all test script filenames on the command line, consider instead letting py.test implicitly find and run all tests containing some substring via the -k option. For example, to run all tests whose function names are prefixed by test_asdf (regardless of the test script they reside in):
pi@pi:~/ $ cd ~/bluepy
pi@pi:~/bluepy $ py.test -k test_asdf .
The trailing . is optional, but often useful. It instructs py.test to set its rootdir property to the current directory (i.e., /home/pi/bluepy). py.test is usually capable of finding your project's rootdir and setting this property on its own, but it can't hurt to specify it manually. (Especially as you're having... issues.)
For further details on rootdir discovery, see Initialization: determining rootdir and inifile in the official py.test documentation.

How to split frontend and backend translations?

I have a web project (php+js) translated with gettext. Originally it was translated only on the server side, with translations pushed to JS in various weird ways. Now I have converted it all to gettext: I convert my .po files with po2json and load them into the Jed library. But this way I load all translations on the client, even those never used there!
What I want to do now:
xgettext -js-options *.js > js-empty.po
xgettext -php-options *.php > php-empty.po
magic both-translated.po js-empty.po > js-translated.po
magic both-translated.po php-empty.po > php-translated.po
What should I use as 'magic'?
P.S. I will be doing the actual translation in one file, and then splitting it just for optimization, on every build.
I have found the solution:
msgcomm both-translated.po js-empty.po -o js-translated.po
Visual inspection confirms which lines are translated, and the following command (where $1 is the file to check) confirms that the number of 'msgid' entries is equal in the empty and translated files:
grep msgid $1 | wc -l
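Put together, the split on every build might look like this (a sketch; explicit xgettext language options stand in for the question's placeholder -js-options and -php-options):
xgettext --language=JavaScript -o js-empty.po *.js
xgettext --language=PHP -o php-empty.po *.php
msgcomm both-translated.po js-empty.po -o js-translated.po
msgcomm both-translated.po php-empty.po -o php-translated.po
msgcomm keeps the messages common to both inputs and takes the translations from the first file, so each output contains only the strings actually used on that side.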
To add more value to the answer: there is a well-regarded Python library for this, polib (https://pypi.python.org/pypi/polib), available in the main Linux distributions as the packages 'python3-polib' or 'python-polib'. I considered using it to perform this task, and may use it in the future for other gettext-related tasks.

dpkg: How to use trigger?

I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages, which are built with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create file debian/<serverPackageName>.triggers (replace <serverPackageName> with name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
Alternatively, you can define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The trigger name cdn-pool-changed is arbitrary; you can choose whatever you want.
Step 2)
Add handler for trigger to file debian/<serverPackageName>.postinst (replace <serverPackageName> with name of your server package).
Example:
#!/bin/sh
set -e

case "$1" in
    configure)
    ;;
    triggered)
        # here is the handler
        /etc/init.d/<serverPackageName> restart
    ;;
    abort-upgrade|abort-remove|abort-deconfigure)
    ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
Replace <serverPackageName> with name of your server package.
Step 3) (only for named triggers, step 1b)
Add to every content package a file debian/<contentPackageName>.triggers (replace <contentPackageName> with the name of each content package).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1b.
More detailed information
The best description of dpkg triggers I could find is "How to use dpkg triggers". The corresponding git repository with examples is available here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for this and read and re-read the docs many times. I think the process, or rather what goes where, is not clearly explained. Here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
if [ "$2" = "/usr/share/my-service/config" ]
then
# this may or may not be what you need to do, but this is often
# how you handle a change in your service config files
#
systemctl restart my-service
fi
exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package), and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, after all the packages were installed, which is the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once; multiple restarts could cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets an "address already in use" error).
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory, and dpkg picks it up from there (i.e. in my example, under /usr/share/my-service/config).
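For example, an extension package often needs nothing more than a debhelper install file dropping its configuration into the watched directory (a sketch; the file names are hypothetical):
# other-package.install
extension.conf usr/share/my-service/config/
Installing or removing other-package then marks the trigger as pending, and my-service's postinst runs once at the end of the dpkg run.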
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (far fewer errors that way! and it includes support for getting letsencrypt certificates and settings for dmarc/dkim). This package uses this exact scheme: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a trigger in the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII characters, excluding control characters and spaces. I would suggest you stick to a-z, 0-9 and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg; it cannot know on its own that your other package needs to fire it (although the rest of the mechanism is fairly automated as it is).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Notice the name: it is the same as in the interest we stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
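A content package can also fire the named trigger explicitly from a maintainer script, instead of (or in addition to) a triggers file, using the dpkg-trigger(1) tool (a sketch):
# other-package.postinst (fragment)
if [ "$1" = "configure" ]
then
    dpkg-trigger my-service-settings
fi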
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait imply, other than that the trigger may happen at any time when noawait is used.
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where it wants to use the noawait. If I understand correctly, it has to run the ldconfig trigger ASAP so your commands will work as expected after the unpack. Otherwise ldd will not know anything about your newly installed library.