Yocto: ROOTFS_POSTPROCESS_COMMAND and do_rootfs

I have a task which has a dependency set as below:
bb.build.addtask('do_function2', 'do_build', 'do_rootfs', d)
There is another task do_function1 that modifies some files in the rootfs:
ROOTFS_POSTPROCESS_COMMAND_append = " do_function1;"
I need do_function2 to be triggered only after the rootfs for the image has been populated.
My question is whether functions added via ROOTFS_POSTPROCESS_COMMAND_append satisfy the do_rootfs dependency that I have set up in the addtask() call.
I want do_function2 to run only after do_function1. Is that still the case when do_function1 is specified under ROOTFS_POSTPROCESS_COMMAND_append?
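For reference, a minimal sketch of the setup being described, using the recipe-level addtask directive that is equivalent to the bb.build.addtask() call above (function bodies are placeholders):

# Sketch of the arrangement (image recipe or bbappend); do_function1 is appended
# to ROOTFS_POSTPROCESS_COMMAND, so it runs inside do_rootfs, and a task ordered
# after do_rootfs therefore also runs after it.
ROOTFS_POSTPROCESS_COMMAND_append = " do_function1;"

do_function1() {
    # modify files under ${IMAGE_ROOTFS} here
    :
}

do_function2() {
    # runs only once do_rootfs (including its post-process commands) has finished
    :
}
addtask do_function2 after do_rootfs before do_build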

Related

Why doesn't my recipe re-build successfully when I'm offline?

I have a recipe that looks basically like this:
SUMMARY = "SomeLibrary"
LICENSE = "Apache-2.0"
LIC_FILES_CHKSUM = "file://LICENSE;md5=3b83ef96387f14655fc854ddc3c6bd57"
SRC_URI += "git://gitlab.com/some_library/some-library.git;protocol=https;nobranch=1"
SRCREV = "${PV}"
S = "${WORKDIR}/git"
inherit autotools pkgconfig
It builds successfully with bitbake some-library, and I can see there is a git2/gitlab.com.some_library.some-library.git/ directory and a git2/gitlab.com.some_library.some-library.git.done file in my downloads folder (the one DL_DIR points to).
My understanding is that if I then immediately run bitbake -c cleansstate some-library && bitbake some-library, given that there is no change in the recipe, bitbake shouldn't need to download anything (it already has everything it needs). In practice, if I turn off my network connection or add BB_NO_NETWORK="1" to my local.conf, I get the following error:
Initialising tasks: 100% |################################################################| Time: 0:00:01
Sstate summary: Wanted 12 Found 4 Missed 8 Current 251 (33% match, 96% complete)
NOTE: Executing Tasks
ERROR: some-library-v2.3.0-r0 do_fetch: Bitbake Fetcher Error: NetworkAccess('https://gitlab.com/some_library/some-library.git', 'git -c core.fsyncobjectfiles=0 ls-remote "https://gitlab.com/some_library/some-library.git" ')
ERROR: Logfile of failure stored in: /home/myusername/work/builddir/tmp/work/aarch64-poky-linux/some-library/v2.3.0-r0/temp/log.do_fetch.116252
ERROR: Task (/home/myusername/work/builddir/../../layers/meta-mymeta/recipes-core/some-library/some-library_v2.3.0.bb:do_fetch) failed with exit code '1'
NOTE: Tasks Summary: Attempted 806 tasks of which 804 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/home/myusername/work/builddir/../../layers/meta-mymeta/recipes-core/some-library/some-library_v2.3.0.bb:do_fetch
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
Why is that? How do other recipes avoid this pitfall? (When I build my image, this recipe seems to be the only one trying to fetch things from the network, which suggests to me that I'm doing something wrong here.)
EDIT:
What really puzzles me is that bitbake seems to behave differently with recipes other than my own. For example, the recipe for can-utils located at meta-openembedded/meta-oe/recipes-extended/socketcan/can-utils_git.bb looks like this:
SUMMARY = "Linux CAN network development utilities"
LICENSE = "GPLv2 & BSD-3-Clause"
LIC_FILES_CHKSUM = "file://include/linux/can.h;endline=44;md5=a9e1169c6c9a114a61329e99f86fdd31"
DEPENDS = "libsocketcan"
SRC_URI = "git://github.com/linux-can/${BPN}.git;protocol=https;branch=master"
SRCREV = "da65fdfe0d1986625ee00af0b56ae17ec132e700"
PV = "2020.02.04"
S = "${WORKDIR}/git"
inherit autotools pkgconfig
which is very similar, but when I set BB_NO_NETWORK="1" in my local.conf and run bitbake -c cleansstate can-utils && bitbake can-utils I get Tasks Summary: Attempted 842 tasks of which 822 didn't need to be rerun and all succeeded.
This works for me:
After configuring the project, add the following lines to the build/conf/site.conf file:
# Build offline
SOURCE_MIRROR_URL ?= "file:///path/to/oe-downloads"
INHERIT += " own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"
BB_NO_NETWORK = "1"
After that, it might be necessary to build the project once while online.
After every re-configuration (different build options) the site.conf is overwritten, so I created a script to add these lines after re-configuration.
I believe I found the issue.
If I replace ${PV} (which was equal to v2.3.0 here) with the hash associated with that tag, the issue stops happening.
If I interpret this correctly, it means that bitbake is able to tell if SRCREV is a hash or a tag, and that if it is a tag then do_fetch will always run git ls-remote to make sure that the tag has not been moved.
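To illustrate the fix, a hedged sketch of the changed lines (the commit hash is a placeholder for whatever commit the v2.3.0 tag points to):

# PV is still v2.3.0 (taken from the recipe file name); only SRCREV changes.
# With SRCREV pinned to a full commit hash (placeholder shown), bitbake no longer
# needs to run `git ls-remote` in do_fetch to resolve a tag, so the sources
# cached in DL_DIR are enough for offline rebuilds.
SRC_URI += "git://gitlab.com/some_library/some-library.git;protocol=https;nobranch=1"
SRCREV = "0123456789abcdef0123456789abcdef01234567"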

BitBake - what defines the order of tasks for a recipe?

Is there a way to list the order of tasks that will be executed when a recipe is built with BitBake? I know I can build the recipe and then inspect log.task_order, but that's not what I'm after - I want to know where the order of tasks is defined for a given recipe, without actually building it. I also know that there's bitbake <recipe_name> -c listtasks, but AFAIK, that lists all the available tasks, regardless of whether they are actually executed during the build.
Update:
The recipe I'm interested in is the kernel recipe; this is what its log.task_order looks like after a full clean build:
do_fetch
do_unpack
do_prepare_recipe_sysroot
do_kernel_checkout
do_symlink_kernsrc
do_validate_branches
do_kernel_metadata
do_patch
do_kernel_version_sanity_check
do_populate_lic
do_kernel_configme
do_configure
do_kernel_configcheck
do_compile
do_shared_workdir
do_kernel_link_images
do_compile_kernelmodules
do_strip
do_sizecheck
do_install
do_populate_sysroot
do_package
do_packagedata
do_package_qa
do_package_write_ipk
do_bundle_initramfs
do_deploy
I would expect that this sequence is defined somewhere in the recipe metadata, but I haven't found it.
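For what it's worth, the order is not written down as a single list anywhere; it is built up from addtask declarations spread across the recipe and the classes it inherits (base.bbclass, kernel.bbclass, kernel-yocto.bbclass, and so on). A simplified, hedged sketch of what those declarations look like:

# Illustrative only: each class adds its own tasks and orders them relative to
# tasks that already exist; the kernel classes insert their do_kernel_* tasks
# the same way.
addtask fetch
addtask unpack after do_fetch
addtask configure after do_patch
addtask compile after do_configure
addtask install after do_compile
addtask build after do_install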

Azure Datafactory Pipeline execution status

It is kind of annoying that we cannot change the logical order (AND/OR) of the activity dependencies. However, I have another issue: I have activities on the failure path that log the error messages to a DB, and since the logging activity succeeds, the entire pipeline succeeds too! Is there any workaround so that if any activity fails, the entire pipeline, and the parent pipeline if it is called from another pipeline, is marked as failed as well?
In my screenshot, I have selected the on-completion dependencies to log either the success or the error.
I see that you defined "On Success" of the copy activity to run "usp_postexecution". Please define an "On Failure" of the copy activity, add any activity (maybe a Set Variable, for testing), and execute the pipeline. The pipeline will fail.
Just to give you more context on what I tried:
I have a variable named "test" of type Boolean, and I am failing it deliberately (by assigning it a non-Boolean value of true1).
The pipeline will fail when I define both success and failure scenarios.
The pipeline will succeed when only "Failure" is defined.

Yocto - creating a dependency for WIC to cpio.gz image

I'm creating a small Yocto distro that should work in RAM on tmpfs. I use the WIC configuration in the following way:
part /boot --source bootimg-efi --sourceparams="loader=grub-efi,initrd=${PN}-${MACHINE}.cpio.gz,file=${PN}-${MACHINE}.cpio.gz" --ondisk sda --label msdos --active --align 1024
bootloader --ptable gpt --timeout=0 --append="rootfstype=tmpfs rootflags=size=2G console=ttyS0,115200 console=tty0"
I also add IMAGE_FSTYPES_append = " cpio.gz " to my local.conf, so it builds the cpio.gz archive from my rootfs.
My problem is very straightforward - when WIC runs, it tries to create the wic file before the rootfs cpio.gz has been created, and therefore the build fails. What I need is a dependency, something that will hold off the WIC step until the cpio.gz is ready. Does anyone know how to achieve this? Can, for instance, WKS_FILE_DEPENDS be used?
Here is the failure:
| ERROR: _exec_cmd: cp .../poky/build/tmp/deploy/images/genericx86-64/core-image-minimal-genericx86-64.cpio.gz .../poky/build/tmp/work/genericx86_64-poky-linux/core-image-minimal/1.0-r0/deploy-core-image-minimal-image-complete/core-image-minimal-genericx86-64-20191121151711/tmp.wic.k00ckxmk/hdd/boot returned '1' instead of 0
| output: cp: cannot stat '.../poky/build/tmp/deploy/images/genericx86-64/core-image-minimal-genericx86-64.cpio.gz': No such file or directory
Currently I bypass the problem by running the wic tool manually after the build. I had to use IMAGE_FSTYPES_remove = " wic wic.bmap hddimg " in my local.conf for that. The command for running wic then is:
wic create ../meta-mylayer/wic/myimage.wks -e core-image-minimal
Thanks!
EDIT:
Maybe the problem is not in creating the required dependency, but in the way I create the image? I just want a UEFI boot, a kernel, and a cpio.gz file with a complete rootfs which gets mounted on boot. This is not an initramfs, but a complete rootfs that I need there. Except for the problematic dependency, the resulting image does exactly what I need.
You can specify the dependency with WIC in two ways.
Using do_image_wic: The final task that creates the WIC image is do_image_wic, so you can add a dependency on your initrd/initramfs image to this task as below,
do_image_wic[depends] += "image-base-initramfs:do_image_complete"
You need to specify this in your WIC image creation recipe. For this example,
DESCRIPTION = "My image"
inherit core-image
export IMAGE_BASENAME = "image-base"
IMAGE_FSTYPES = "wic.xz"
DEPENDS += "image-base-initramfs"
do_image_wic[depends] += "image-base-initramfs:do_image_complete"
WKS_FILES = "my.wks"
Here image-base is the image built into a WIC using my.wks; it waits for the initramfs image to finish building. In image-base-initramfs you create the initramfs image itself.
In addition, you can also do this with INITRAMFS_IMAGE when using a kernel fitImage.
Using WKS_FILE_DEPENDS: You can add any BitBake recipe as a dependency that must be built before the WIC image is created. Adding image-base-initramfs to this variable makes WIC creation wait until that initramfs image is complete. There is also WKS_FILE_DEPENDS_BOOTLOADERS for depending on the bootloader being built before WIC creation.
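A hedged sketch of this second approach in the image recipe, reusing the recipe names from the example above:

# Alternative to the explicit do_image_wic[depends] line: list the recipe in
# WKS_FILE_DEPENDS so it is built before wic runs (names reused from the example).
WKS_FILE = "my.wks"
WKS_FILE_DEPENDS += "image-base-initramfs"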

bitbake do_image dependency not cached

I have a task do_image_custom that has a dependency on task do_image_ext4.
That task (do_image_ext4) generates an image file containing DATETIME.
The first time I build my image, no errors.
dependency_DATETIME.rootfs.ext4 is generated and used by its dependents.
If I make a change to the task that consumes the ext4 file (where I need to reference the DATETIME.rootfs.ext4 name) and then build a second time (without cleaning), I get the error that do_image_custom cannot find newer_datetime.rootfs.ext4.
I check the IMGDEPLOYDIR and sure enough, that file doesn't exist and the do_image_ext4 task still has the first timestamp.
My question is, what am I doing wrong here in do_image_custom such that it re-evaluates DATETIME every time it is run without checking with (perhaps) the sstate cache?
The problem was that my custom task (do_image_custom) depended on the output of a prior task, and that prior task generates an ext4 image with a timestamp in its name.
do_image_custom re-evaluated DATETIME, even though the dependency (the ext4 file with an earlier DATETIME) did not change and therefore was not rebuilt. Hence, when do_image_custom executed, it referenced a file that did not exist (the error) because it had never been generated (correctly so, because the basehash for the dependency task was unchanged).
The solution (in front of me all along) was to modify my custom task (do_image_custom) to refer to a symlink, also generated in the same step as the ext4, which does not have a DATETIME in its name, making do_image_custom invariant to any changes (or none) in the task it depends on.
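A hedged sketch of what that can look like in the image type definition, with illustrative names (the symlink follows the usual IMAGE_LINK_NAME convention, while the timestamped file uses IMAGE_NAME; the helper name is hypothetical):

# Hypothetical sketch: consume the DATETIME-free symlink that the ext4 step also
# creates, so the custom image command never embeds a timestamp.
IMAGE_TYPEDEP_custom = "ext4"
IMAGE_CMD_custom () {
    # ${IMAGE_LINK_NAME}.ext4 is the symlink without DATETIME in its name,
    # pointing at the latest ${IMAGE_NAME}.rootfs.ext4 in ${IMGDEPLOYDIR}.
    # build_custom_image is a stand-in for whatever transformation is needed.
    build_custom_image ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.ext4 \
        ${IMGDEPLOYDIR}/${IMAGE_NAME}.rootfs.custom
}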