I'm getting started with Packer, using chef-solo to provision. I have a very simple recipe (default.rb) that contains the lines below:
package "git"
package "ruby"
package "rubygems"
I was able to provision an image with this successfully using Vagrant. I'm now trying to move this provisioning step to Packer, but when I execute packer build it doesn't seem to run the recipe:
virtualbox-iso: Running handlers complete
virtualbox-iso: Chef Client finished, 0/0 resources updated in 1.818437359 seconds
My Packer template's provisioners section is:
{
  "type": "chef-solo",
  "cookbook_paths": ["/cookbooks"]
}
The second part of my question (which I assume is related): what is the run_list configuration option?
The Packer documentation says it goes in the same provisioner block, is called run_list, and is empty by default. So you should give the name of your cookbook recipe as a single-element string array in a run_list parameter.
For beginners like myself who are looking for an answer, I addressed this by adding a run_list like the following:
"run_list": ["git::default"]
'git' is the cookbook name from the metadata.rb file and 'default' is the recipe filename (if my terminology is correct). My cookbook directory structure is as follows:
~/Projects/Packer-Templates/Cookbooks $ find .
.
./ruby-environment
./ruby-environment/metadata.rb
./ruby-environment/recipes
./ruby-environment/recipes/default.rb
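Putting the two options together, a provisioner block for the layout above might look like the following. This is only a sketch: it assumes the cookbook name declared in metadata.rb is ruby-environment and that the Cookbooks directory is what you pass to cookbook_paths.
{
  "type": "chef-solo",
  "cookbook_paths": ["Cookbooks"],
  "run_list": ["ruby-environment::default"]
}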
I am using an embedded Linux system based on Yocto/OpenEmbedded, and the systemd-journal-remote program is missing.
When I look at the systemd recipe, the program is mentioned. It seems it is not compiled or added to the image by default. I understand how to add normal recipes, but unfortunately I don't understand how to add such a "subpackage".
The BitBake documentation is unfortunately overwhelming for a beginner like me. Can someone help me?
Create a bbappend for systemd in your meta-layer at the path recipes-core/systemd/systemd_%.bbappend, containing:
PACKAGECONFIG_append = " \
microhttpd \
"
You can then add the resulting package to your image .bb or .bbappend file with the following line:
IMAGE_INSTALL += "systemd-journal-remote"
This will add systemd-journal-remote into your image. Install the image on your target board, log in to your target and configure the file /etc/systemd/journal-remote.conf.
Then, enable the service with systemctl enable systemd-journal-remote, and then restart it with systemctl restart systemd-journal-remote.
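For reference, a minimal /etc/systemd/journal-remote.conf for passively receiving logs over plain HTTP could look roughly like this. This is a sketch; Seal and SplitMode are standard options, but check the journal-remote.conf man page for what your setup actually needs:
[Remote]
# accept incoming entries without sealing; split stored journals per sending host
Seal=false
SplitMode=host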
I am building a Linux system for the Raspberry Pi 4, and for some reason I need to remove the getty@tty1 service in Yocto.
I have created a systemd_%.bbappend file for that.
My host PC is Ubuntu 18.04.
This works with the warrior branch.
Now I am trying to compile with the dunfell branch in Yocto,
but while compiling systemd it gives an error like:
"cannot remove /etc/systemd/system/getty.target.wants/getty@tty1: No such file or directory"
But in the final image I can still see getty@tty1.service.
Also, I can't find any other recipe that creates this link.
My systemd_%.bbappend looks like this:
DESCRIPTION = "Customization of systemD services."
do_install_append() {
rm ${D}${sysconfdir}/systemd/system/getty.target.wants/getty#tty1.service
}
FILES_${PN} += "${sysconfdir}/systemd/system"
REQUIRED_DISTRO_FEATURES= "systemd"
Thanks
Margish
On more recent versions of systemd (like the one in Yocto dunfell), the links to services are not created by the build system (ninja), but instead by running systemctl preset-all on the running system after installation (see here). This command reads the systemd preset files to determine which units to enable or disable by default.
In Yocto, what this means is that instead of the links being created as part of the systemd recipe, systemctl preset-all is run as part of the IMAGE_PREPROCESS_COMMAND during image creation in image.bbclass (see here). This is why the old method of deleting the symbolic links in /etc/systemd/system from the systemd recipe no longer works.
Instead, what you need to do is modify the 90-systemd.preset file to disable the getty@tty1 preset (or any other default system service) by changing the line below:
enable getty@.service
to this:
disable getty@.service
You can accomplish this using a bbappend file as follows*:
# systemd_%.bbappend
do_install_append() {
    # Disable getty@tty1 from starting at boot time.
    sed -i -e "s/enable getty@.service/disable getty@.service/g" ${D}${systemd_unitdir}/system-preset/90-systemd.preset
}
*https://stackoverflow.com/a/67505478/286701
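To confirm the preset change took effect, you can check the unit state on the running target; this is just a quick sanity check, not part of the original fix:
systemctl is-enabled getty@tty1.service
# expected to print: disabled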
I am a yocto noob, trying to decipher how the device tree is built from a Xilinx hardware definition (.hdf) file. But my question is more general.
Is there a Yocto way to find the source of a task?
Given a task name, is it possible to find where the task's source code lives (presumably in a recipe or class)?
As an example, where is the source for the Python task do_create_yaml which is called by recipes in the meta-xilinx-bsp layer that compile the device tree blob?
bitbake -e device-tree
will dump the Python source for do_create_yaml (amongst the rest of its prodigious output), but how can I find where that is coming from?
The device tree is part of the Linux kernel. In Yocto it is built from the KERNEL_DEVICETREE variable, which is defined either in the Linux kernel recipe or in the machine configuration.
For example, for cubieboard7 as defined here,
KERNEL_DEVICETREE = "s700_cb7_linux.dtb"
instructs the build to compile that dts file. Yocto does this through various classes.
In our example we inherit kernel.bbclass, which in turn inherits kernel-devicetree.bbclass. In that class (copied from kernel-devicetree.bbclass):
do_compile_append() {
    for dtbf in ${KERNEL_DEVICETREE}; do
        dtb=`normalize_dtb "$dtbf"`
        oe_runmake $dtb
    done
}

do_install_append() {
    for dtbf in ${KERNEL_DEVICETREE}; do
        dtb=`normalize_dtb "$dtbf"`
        dtb_ext=${dtb##*.}
        dtb_base_name=`basename $dtb .$dtb_ext`
        dtb_path=`get_real_dtb_path_in_kernel "$dtb"`
        install -m 0644 $dtb_path ${D}/${KERNEL_IMAGEDEST}/$dtb_base_name.$dtb_ext
    done
}
do_deploy_append() {
    for dtbf in ${KERNEL_DEVICETREE}; do
        dtb=`normalize_dtb "$dtbf"`
        # ... (remainder of the deploy logic omitted here)
    done
}
This appends to the compile, install and deploy tasks, so defining KERNEL_DEVICETREE enables the automatic build of the dtb.
I found that the datastore stores the filename for each task as a varflag.
From a devpyshell:
pydevshell> d.getVarFlags("do_create_yaml")
gives
{'filename': '.....yocto/sources/core/../meta-xilinx-tools/classes/xsctyaml.bbclass', 'lineno': '61', 'func': 1, 'task': 1, 'python': '1', 'deps': ['do_prepare_recipe_sysroot']}
So for the example in my question the active definition for the do_create_yaml task is in xsctyaml.bbclass.
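If you only need the filename, the same devpyshell session can query that single varflag directly; a small sketch of the same idea:
pydevshell> d.getVarFlag("do_create_yaml", "filename")
This returns just the 'filename' entry from the dictionary above.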
In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way to get them is to clone the repository:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but the next two lines use _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python used in the recipe, because bb.data.getVar was deprecated and has now been removed. Instead, call getVar on d directly, e.g.:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
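If you want to compute that checksum yourself rather than trusting the value above, a quick sketch (the URL here is a placeholder; substitute whatever ${SITE}/v${PV_MAJOR}.${PV_MINOR} expands to for your recipe):
wget -q -O COPYING "<site-url>/v2.2/COPYING"
md5sum COPYING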
Done!
Now bitbake nano ought to work, and when it completes you should see the nano packages it built:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello world project. As far as I can tell, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, the bitbake you get from Poky builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case we are building version 2.2.6 of the nano editor, whose license is GPLv3, so it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means you must access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can apply the same idea to the next tasks (do_configure, do_compile):
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
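Following the same pattern, a do_compile task might look roughly like the sketch below. This is my own illustration rather than the book's code, and it assumes the tarball unpacks into ${WORKDIR}/${P}:
python do_compile() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and compiling nano")
    # run configure and make inside the unpacked source tree
    os.system("cd " + src_dir + " && ./configure && make")
    bb.plain("nano compiled successfully")
}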
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory, assuming you are building on Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!
I'm trying to add auditd to Yocto linux.
I added the selinux layer and its dependent layers: openembedded-core and meta-virtualization.
I added the layers to bblayers.conf.
I added DISTRO_FEATURES_append = " acl xattr pam selinux"
and PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls" to the local.conf file.
After building (by using bitbake core-image-base) and running the qemu, the kauditd process is running, but all user-space tools are not.
The /etc/audit folder does not exist, none of audit's config files (audit.rules) exist, and no user-space audit process is running.
In the layer's info it is declared as "User space tools for kernel auditing".
What am I missing?
Thanks.
I think I found something that will answer your question: if you know of a binary or library you expect to be in the target image, you can find which recipe provides it, and then add that package to the image.
Start with the name of a binary or library you expect to be in the image. For me, that is a CAN bus executable called candump. Which recipe is it in? To find out, I issue:
devtool search candump
Which returns:
can-utils
If nothing is returned, double-check your conf/bblayers.conf to make sure the layer you think it may be in is actually seen by your build system. If you are unsure, take a look at the link below, which points to OpenEmbedded and its handy search utility for packages.
After you find the recipe, you can then include that recipe into your build.
Here is a good reference for doing what I think you're asking, from the Yocto Project wiki:
https://wiki.yoctoproject.org/wiki/Cookbook:Example:Adding_packages_to_your_OS_image
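Applied to the auditd case in the question, the same approach might look like this (a sketch, assuming meta-selinux is already listed in your bblayers.conf):
devtool search auditd
That should point you at the audit recipe (the recipe may need to have been built once for its package data to be searchable); you can then add the package to your image, for example in conf/local.conf:
IMAGE_INSTALL_append = " auditd"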
I just added auditd to my system. This is what I did.
First I got the repository checked out.
cd /path/to/yocto
git clone git://git.yoctoproject.org/meta-selinux
cd meta-selinux
# checkout the branch matching the Yocto release you are on
git checkout thud
Then I added auditd to my build.
cd /path/to/build
bitbake-layers add-layer /path/to/yocto/meta-selinux
cat >> conf/local.conf <<'END'
IMAGE_INSTALL_append = " auditd"
END
bitbake my_normal_image_target
Even though the Yocto recipe is called audit, the package name is auditd.
Of course, auditd without selinux is useless but it did attempt to run (journalctl -u auditd) and /etc/audit exists.
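A quick sanity check of both of those points on the target might be:
journalctl -u auditd
ls /etc/audit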
FWIW: To get auditd to a point where it reports say, login success/failure, I had to do a few more things. I'm not just adding it to a standard Yocto image, but to a custom image and custom machine. I'm already using systemd so I didn't have to change that (the layer seems to indicate it's required?). My local.conf looked like this.
# enable selinux
DISTRO_FEATURES_append = " acl xattr pam selinux"
# set the policy
PREFERRED_PROVIDER_virtual/refpolicy ?= "refpolicy-mls"
# install selinux packages and auditd
IMAGE_INSTALL_append = " packagegroup-core-selinux auditd"
# tell the kernel to enable selinux (non-enforcing) and auditing
APPEND_append = " selinux=1 enforcing=0 audit=1"
I also had to change linux-yocto_selinux.inc to load selinux.cfg later. Probably layer/recipe ordering could have solved this too?
-SRC_URI += "${#bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
+SRC_URI_append = "${#bb.utils.contains('DISTRO_FEATURES', 'selinux', 'file://selinux.cfg', '', d)}"
With all that in place, I see audit logs in my journal.