Is it possible to use config fragments with Buildroot's .config?

I'm using Buildroot as a submodule, and I want to reuse existing in-tree defconfigs with a few modifications of my own.
I'd like to store just the modified options in a config fragment, just like I can do with BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES for the Linux kernel config.
Right now I'm doing something like:
cd buildroot
make BR2_EXTERNAL="$(pwd)/../mypackage" qemu_x86_64_defconfig
echo '
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
' >> .config
make
Is there a nicer way to avoid that echo with a config fragment, just like I'm using for the Linux kernel config fragment? I'd expect something like:
make BR2_CONFIG_FRAG=br_config_frag
where br_config_frag contains the lines:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then I'd be able to write just:
make -C buildroot BR2_CONFIG_FRAG=br_config_frag qemu_x86_64_defconfig all
Here's the full example repo.
Edit
One slight improvement is to put the "config fragment" in a separate file buildroot_config_fragment:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then cat that:
cat ../buildroot_config_fragment >> .config

First side note: your script should run make olddefconfig before make, so that any new options are set to their default value instead of being asked for interactively.
You could simplify the script a little by doing:
cat configs/qemu_x86_64_defconfig br_config_frag > .config
make olddefconfig
You can also use the script support/kconfig/merge_config.sh from the kconfig infrastructure. However, that script internally uses make alldefconfig which currently doesn't work - you need a patch for that.
If you would like to add support for BR2_CONFIG_FRAG to the Buildroot infrastructure, feel free to send a patch to the Buildroot mailing list!

I asked on IRC, and a user who appears to be Yann E. Morin, an active Buildroot developer, said it is not currently possible.

Arnout's make alldefconfig patch is now merged into Buildroot as of 26 Jul 2017
(https://github.com/buildroot/buildroot/commit/dab80981d15979eab3aea28a33694396635a52a1).
This means you can now do:
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig fragment1.config fragment2.config
This will use qemu_x86_64_defconfig as the base and add modifications given in the listed fragment config files. The tool will also show nice warnings if you override items.
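Putting that together for the layout in the original question (a sketch, assuming the hypothetical br_config_frag fragment file from above sits next to the buildroot checkout):
cd buildroot
# Merge the in-tree defconfig with the local fragment into .config.
# merge_config.sh runs 'make alldefconfig' internally, which fills in
# default values for all options the fragment does not mention.
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig ../br_config_frag
make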

Related

Is there a way to configure pytest_plugins from a pytest.ini file?

I may have missed this detail but I'm trying to see if I can control the set of plugins made available through the ini configuration itself.
I did not find that item enumerated in any of the configurable command-line options nor in any of the documentation around the pytest_plugins global.
The goal is to reuse a given test module with different fixture implementations.
@hoefling is absolutely right: there is indeed a pytest command-line argument that can specify plugins to be used, and together with the addopts ini configuration it can be used to select a set of plugin files, one per -p option.
As an example, the following ini file selects three separate plugins; the plugins specified later in the list take precedence over those that came earlier.
projX.ini
[pytest]
addopts =
    -p projX.plugins.plugin_1
    -p projX.plugins.plugin_2
    -p projY.plugins.plugin_1
We can then invoke this combination on a test module with a command like
python -m pytest projX -c projX.ini
A full experiment is detailed in this repository:
https://github.com/jxramos/pytest_behavior/tree/main/ini_plugin_selection
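To illustrate the "different fixture implementations" goal: each module passed to -p is an ordinary Python file whose fixtures become visible to the collected tests. A minimal sketch (the module path and fixture name are assumptions, not taken from the repository above):
# projX/plugins/plugin_1.py
import pytest

@pytest.fixture
def backend():
    # plugin_1's concrete implementation of the fixture that the shared
    # test module depends on; plugin_2 etc. would return other backends
    return "in-memory backend"
Swapping which -p lines appear in the ini then swaps which implementation of backend the same test module runs against.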

Snakemake: how to realize a mechanism to copy input/output files to/from tmp folder and apply rule there

We use Slurm workload manager to submit jobs to our high performance cluster. During runtime of a job, we need to copy the input files from a network filesystem to the node's local filesystem, run our analysis there and then copy the output files back to the project directory on the network filesystem.
While the workflow management system Snakemake integrates with Slurm (by defining profiles) and allows running each rule/step in the workflow as a Slurm job, I haven't found a simple way to specify per rule whether a tmp folder should be used (with all the implications stated above) or not.
I would be very happy about simple solutions for how to realise this behaviour.
I am not entirely sure I understand correctly. I am guessing you do not want to copy the input of each rule to a certain directory, run the rule, then copy the output back to another filesystem, since that would be a lot of unnecessary file movement. So for the first half of the answer I assume that before execution you move your files to /scratch/mydir.
I believe you could use the --directory option (https://snakemake.readthedocs.io/en/stable/executing/cli.html). However, I find this works poorly, since Snakemake then has difficulty finding config.yaml and samples.tsv.
The way I solve this is just by adding a working dir in front of my paths in each rule...
rule example:
    input:
        config["cwd"] + "{sample}.txt"
    output:
        config["cwd"] + "processed/{sample}.txt"
    shell:
        """
        touch {output}
        """
So all you then have to do is change cwd in your config.yaml.
local:
  cwd: ./
slurm:
  cwd: /scratch/mydir
You would then have to manually copy them back to your long-term filesystem or make a rule that would do that for you.
Now if, however, you do want to copy your files from filesystem A -> B, run your rule, and then move the result from B -> A, then I think you want to make use of shadow rules. I think the docs explain properly how to use them, so I will just refer you to the shadow rules section of the Snakemake documentation :).
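As a rough sketch of what a shadow rule might look like (rule and file names are invented; shadow: "full" executes the rule in an isolated scratch directory and moves the declared outputs back into the working directory when the rule finishes):
rule example_shadow:
    input:
        "{sample}.txt"
    output:
        "processed/{sample}.txt"
    shadow: "full"
    shell:
        """
        touch {output}
        """
If I read the docs correctly, combining this with the --shadow-prefix command-line option should let you place the scratch directory on the node-local filesystem (e.g. /scratch/mydir).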

Why when I change BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE and run make linux-reconfigure the kernel .config does not change?

Buildroot 7d43534625ac06ae01987113e912ffaf1aec2302 post 2018.02, Ubuntu 17.10 host.
I run:
make qemu_x86_64_defconfig
printf 'BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE=\"kdb\"\n' >>.config
make olddefconfig
time make BR2_JLEVEL="$(nproc)"
where kdb is a Linux kernel configuration that has CONFIG_KGDB=y.
Then as expected:
grep '^CONFIG_KGDB=y' ./output/build/linux-4.15/.config
has a match.
But then I want to try out a new kernel config, so I try:
sed -i 's/BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="kdb"/BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="nokdb"/' .config
where nokdb is a kernel config that has CONFIG_KGDB=n and then:
time make BR2_JLEVEL="$(nproc)" linux-reconfigure
However, to my surprise, the kernel .config did not change: CONFIG_KGDB=y is still there.
Only if I do:
rm -f ./output/build/linux-4.15/.config
time make BR2_JLEVEL="$(nproc)" linux-reconfigure
does the kernel .config get regenerated.
Is there a better way to force the kernel .config to be regenerated, e.g. some other linux-* target?
I don't like this rm solution because it forces me to deal with "internal" paths inside output.
I'd expect linux-reconfigure to do that regeneration for me.
The behavior is analogous if you turn BR2_TARGET_ROOTFS_INITRAMFS on and off, which affects the CONFIG_INITRAMFS_SOURCE option of the Linux kernel.
http://lists.busybox.net/pipermail/buildroot/2018-March/215817.html
The config files are compared by timestamp, so after you do:
touch kdb
touch nokdb
printf 'BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE=\"kdb\"\n' >>.config
make olddefconfig
time make BR2_JLEVEL="$(nproc)"
kdb and nokdb have the same modified date, and the kernel does not get reconfigured on the next:
sed -i 's/BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="kdb"/BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="nokdb"/' .config
time make BR2_JLEVEL="$(nproc)" linux-reconfigure
But if you touch the new config file, it works, even without using the linux-reconfigure target explicitly:
touch nokdb
time make BR2_JLEVEL="$(nproc)"
Alternatively, if you just edit the used file instead of pointing to a new one, the configuration also gets updated as expected.
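A small wrapper sketch along those lines (this is not a built-in Buildroot feature; it just automates the touch workaround): read the current custom kernel config path out of .config and bump its timestamp before every build, so the timestamp comparison never silently skips the reconfiguration.
#!/bin/sh
set -e
# Extract the value of BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE from .config.
kconfig=$(sed -n 's/^BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="\(.*\)"$/\1/p' .config)
if [ -n "$kconfig" ]; then
    # Make the kernel config file newer than the copy under output/build/.
    touch "$kconfig"
fi
make BR2_JLEVEL="$(nproc)"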

dpkg: How to use trigger?

I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create file debian/<serverPackageName>.triggers (replace <serverPackageName> with name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
But you can also define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The name of the trigger cdn-pool-changed is arbitrary; you can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e

case "$1" in
    configure)
        ;;
    triggered)
        # here is the handler
        /etc/init.d/<serverPackageName> restart
        ;;
    abort-upgrade|abort-remove|abort-deconfigure)
        ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
        ;;
esac

#DEBHELPER#

exit 0
Replace <serverPackageName> with name of your server package.
Step 3) (only for named triggers, step 1b) )
Add to every content package the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the names of your content packages).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1.
More Detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for this and read and re-read the docs many times. I think the process, or rather what goes where, is not clearly explained. Here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "/usr/share/my-service/config" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        systemctl restart my-service
    fi
    exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package) and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once and after all the packages were installed, hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once, which could cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets an "address already in use" error).
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks up from there (i.e. in my example under /usr/share/my-service/config).
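For instance (a hedged sketch; the package and file names are invented), a content package built with debhelper only needs an entry in its debian/<contentPackageName>.install file that drops its configuration into the watched directory:
# other-package.install
conf/extension.conf usr/share/my-service/config/
dh_install then ships the file there, and installing or removing the package fires the directory trigger automatically.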
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (far fewer errors that way! and it includes support for getting Let's Encrypt certificates and settings for DMARC/DKIM). This package uses this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a trigger in the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII, excluding control characters and spaces. I would suggest you stick to a-z, 0-9, and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg; it has no way of knowing that your other package wants to fire such a trigger (even though the rest of the mechanism is fairly automated).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Note that the name here is the same as the interest stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
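As a hedged aside (not part of the steps above), a content package can also fire the named trigger from one of its maintainer scripts with the dpkg-trigger(1) tool instead of shipping a triggers file:
# e.g. near the end of other-package's postinst
dpkg-trigger my-service-settings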
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait implies, other than that the trigger may happen at any time when noawait is used.
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where it wants to use the noawait. If I understand correctly, it has to run the ldconfig trigger ASAP so your commands will work as expected after the unpack. Otherwise ldd will not know anything about your newly installed library.

How do I copy from numerous release directories to a single folder

Okay this is and isn't programming related I guess...
I've got a whole bunch of little useful console utilities scattered across a suite of projects that I wrote and I want to dump them all to a single directory to make using them simpler. The only issue is that I have them all compiled in both Debug and Release mode.
Given that I only want the release mode versions in my utilities directory, what switch would allow me to specify that I want all executables from my tree structure but only from within Release folders:
Example:
Projects\
    Project1\
        Bin\
            Debug\
                Project1.exe
            Release\
                Project1.exe
    Project2\
    etc etc...
To
Utilities\
    Project1.exe
    Project2.exe
    Project3.exe
    Project4.exe
    ...
    etc etc...
I figured this would be a cinch with XCopy - but it doesn't seem to allow me to exclude the Debug directories - or rather - only include items in my Release directories.
Any ideas?
You can restrict it to only release executables with the following. However, I do not believe the other requirement of flattening is possible using xcopy alone. To do the restriction:
First create a file such as exclude.txt and put this inside:
\Debug\
Then use the following command:
xcopy /e /EXCLUDE:exclude.txt *.exe C:\target
You can, however, accomplish what you want using xxcopy (free for non-commercial use). Read technical bulletin #16 for an explanation of the flattening features.
If the claim in that technical bulletin is correct, then it confirms that flattening cannot be accomplished with xcopy alone.
The following command will do exactly what you want using xxcopy:
xxcopy /sgfo /X:*\Debug\* .\Projects\*.exe .\Utilities
I recommend reading the technical bulletin, however, as it gives more sophisticated options for the flattening. I chose one of the most basic above.
Sorry, I haven't tried it yet, but shouldn't you be using:
xcopy release*.exe d:\destination /s
I am currently on my Mac, so I can't really check to be sure.
This might not help you with assembling them all in one place now, but going forward, have you considered adding a post-build event to the projects in Visual Studio (I'm assuming you are using it, based on the directory names)?
xcopy /Y /I /E "$(TargetDir)\$(TargetFileName)" "c:\somedirectory\$(TargetFileName)"
Ok, this is probably not going to work for you since you seem to be on a windows machine.
Here goes anyway, for the logic.
# From the base directory
mkdir Utilities
find . -type f -name '*.exe' | grep -w Release > utils.txt
while IFS= read -r f; do cp "$f" Utilities/; done < utils.txt
You can combine the find and cp lines into one, I split them for readability.
To do this on a windows machine you'll need Cygwin or some such Unix Utilities handy.
Maybe there are tools in the Windows shell to do this...
This may help get you started:
C:\>for %i in (*) do dir "%~dpi\*.exe"
Used in the dir command as a modifier to i, ~dp uses the drive and path of everything found in (*). If I run the above in a folder that has several subfolders containing executables, I get a dir list of all of the executables in each folder.
You should be able to modify that to add '\bin\release\' following the ~dpi portion and change dir to xcopy. A little experimentation should make it pretty easy.
To use the for statement above in a batch file, change '%' to '%%' in both places.
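For example, a one-liner along those lines (an untested sketch that assumes the exact Projects\<name>\Bin\Release layout from the question):
C:\>for /d %i in (Projects\*) do xcopy /Y "%i\Bin\Release\*.exe" Utilities\
As noted above, double the percent signs (%%i) to use it in a batch file.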