Can anyone explain to me more comprehensively how the sstate cache works in Yocto?
This explanation is far from clear.
I don't understand what is happening when a situation like this occurs:
NOTE: Preparing runqueue
NOTE: Executing SetScene Tasks
NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)
NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)
Is this when it has found the artifacts (or candidate objects) and then checks the signatures? I want to know when setscene tasks are actually run.
Additional question: when does it look in the local sstate-cache folder, and when in the mirror?
The Yocto Project manual has a section devoted to Shared State Cache.
To answer your question, the sstate-cache folder is checked first, then the mirrors are checked if nothing is found locally.
This cache is built from a set of inputs which are hashed into "signatures"; these can be found in $BUILD_DIR/tmp/stamps, but keep in mind you'll need bitbake-dumpsigs to view the files. Having a look at bitbake-dumpsigs and bitbake-diffsigs can help you understand how the cache works. There is also a great "Tips & Tricks" article on Understanding What Changed in your build environment.
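For example, one way to poke at those signatures from the build directory looks something like this (the recipe name and stamp paths are only placeholders):

# show the inputs that were hashed into one task's signature
bitbake-dumpsigs tmp/stamps/*/zlib/*.do_compile.sigdata.*

# compare the last two signatures recorded for a task to see what changed
bitbake-diffsigs -t zlib do_compile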
While it can take some time to understand, shared state cache is extremely valuable and rigorously tested.
In terms of tracing dependencies, for example why your image might contain passwd, bitbake -g will give you a dependency tree, and oe-pkgdata-util find-path can help you understand which recipe put a given binary into the resulting image.
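For example (the image name and file path here are just placeholders):

# write dependency graphs (task-depends.dot and friends) into the current directory
bitbake -g core-image-minimal

# ask the pkgdata which package, and hence which recipe, provides a file in the image
oe-pkgdata-util find-path /usr/bin/passwd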
Useful tip: "When we need to rebuild from scratch, we either remove the build/tmp so that we can use sstate-cache to speed up the build or we remove both build/tmp and sstate-cache so that no cache is reused during the build."1
1Salvador, Otavio, and Daiane Angolini. "6.2 Understanding Shared State Cache." Embedded Linux Development with Yocto Project
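In shell terms, that tip boils down to something like the following, run from the build directory (assuming the default tmp and sstate-cache locations):

# rebuild quickly: unchanged tasks are restored from sstate-cache
rm -rf tmp

# rebuild completely from scratch: nothing is reused
rm -rf tmp sstate-cache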
In a distributed yocto build environment, is it a bad idea to host a global sstate cache (via SSTATE_MIRRORS) on a busy build node over NFS?
I have recently introduced SSTATE_MIRRORS in our Yocto build configuration to try to further reduce build times on our "build nodes" (Jenkins agents in vSphere and developer workstations). Per the manual, Yocto will search SSTATE_MIRRORS for already-built artifacts if they are not found in the local sstate cache (SSTATE_DIR).
All build nodes have a local SSTATE_DIR, in which they cache build results. One of the build nodes (the first Jenkins agent) is designated as the "keeper of the global cache" and exports its local SSTATE_DIR as a read-only NFS share. The other build nodes mount this and reference it via SSTATE_MIRRORS in their build configurations. I thought I had a really good idea here and patted myself on the back.
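For reference, the mirror entry in the other nodes' local.conf looks roughly like this (the mount point is site-specific, and the literal PATH token is expanded by bitbake):

SSTATE_MIRRORS ?= "file://.* file:///mnt/global-sstate/PATH"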
Alas, I'm seeing a significant increase in build times after making the change.
Certainly I have a lot of troubleshooting and measuring to do before drawing any conclusions. We're using NFS v4, and for sure there's tuning to be done there. Also, I suspect the build node hosting the NFS share is intermittently very busy performing Yocto builds itself (populating its hybrid local/global cache), leaving few CPU cycles for the kernel to service NFS requests.
I'm wondering if others can offer advice based on their experiences implementing shared yocto sstate caches.
It's hard to say exactly what problems you are seeing without some profiling data, but I have a few observations and suggestions.
You are on the right track using NFS as the sstate cache between CI nodes, but I would recommend taking it one step further. Instead of having one node be the "keeper" of the sstate cache and having all the other nodes use it as a mirror, I would recommend having each node directly mount a common NFS share as SSTATE_DIR. This allows all the nodes to read and write to the cache during their builds, and does a much better job of keeping it up to date with the required sstate objects. If you only have one node populating the cache, it is unlikely that it is going to contain all of the objects needed by the other builds.
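As a sketch, each node's local.conf would then point straight at the shared mount rather than at a mirror (the mount point below is just an example):

# every CI node reads from and writes to the same cache
SSTATE_DIR = "/mnt/shared-sstate"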
Additionally, I would recommend that the NFS server be a persistent server rather than one tied to a Jenkins agent. This gains you a few things:
It means that you can dedicate hardware resources to the cache without having them compete with an ongoing Jenkins build.
You can put a simple HTTP server front end that serves up the cache files. Doing this allows your developer workstations to set that HTTP server as one of their SSTATE_MIRRORS, and thus directly benefit from the cache produced by your Jenkins nodes. If you've done this correctly, a developer should be able to replicate a build that was previously built by Jenkins entirely from sstate, which can save a ton of local build time. Even if you aren't exactly replicating a build Jenkins has done before, you can still usually pull a significant amount from sstate.
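A developer workstation's local.conf entry for that could look roughly like this (the server name is a placeholder):

SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"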
The final thing to check is whether you have hash equivalence enabled. Hash equivalence is a build acceleration technology that allows bitbake to detect when a metadata change that would normally cause a recipe to rebuild would still produce the same output as a previously built sstate object, and to restore that object from sstate instead of rebuilding it. This feature is enabled by default starting with Yocto 3.0 (codename zeus).

If you do not have a hash equivalence server running in your infrastructure, bitbake will start a local server for the duration of your build. However, this can cause some issues when working in a distributed environment like your Jenkins nodes, because hash equivalence is highly dependent on the contents of the sstate cache. If each node is running its own hash equivalence server locally, they can end up with diverging sstate hashes (particularly when the CI nodes are transient and the local hash equivalence database is lost), which can result in more sstate misses than necessary. The solution is either to run a hash equivalence server (bitbake comes with one) and point all your CI nodes at it, or to disable hash equivalence completely by setting BB_SIGNATURE_HANDLER = "OEBasicHash".
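As a rough sketch of those two options in local.conf (the host name and port are placeholders; check your release's bitbake-hashserv for how to run the server itself):

# option 1: point every build at one shared hash equivalence server
BB_HASHSERVE = "hashserv.example.com:8686"

# option 2: turn hash equivalence off entirely
BB_SIGNATURE_HANDLER = "OEBasicHash"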
I have a C/C++ application and I am trying to run cov-build, but I am getting the warning "NO FILES EMITTED". Can you please help? We are doing a POC on Coverity for static code analysis.
C:\Users\Master\bamboo-agent-home\xml-data\build-dir\DEC-L11PROJ-JOB1>cov-build --dir cov-int IarBuild.exe MainApplication\EWARM\L11_P4_uC1.ewp -build *
Coverity Build Capture (64-bit) version 2019.03 on Windows 10 Enterprise, 64-bit (build 18362)
Internal version numbers: 2c0f9c8cf4 p-pacific1-push-35439.872
IAR Command Line Build Utility V8.4.8.6680
Copyright 2002-2020 IAR Systems AB.
Total number of errors: 0
Total number of warnings: 0
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
For more details, please look at:
C:/Users/Master/bamboo-agent-home/xml-data/build-dir/DEC-L11PROJ-JOB1/cov-int/build-log.txt
First, if you are involved in a pre-sales Proof of Concept (POC), then there should be a Coverity Sales Engineer assigned to help with the POC. That person's role includes providing instructions and information similar to what I'll offer below, as well as answering technical questions such as yours. There may have been a miscommunication somewhere. Get in contact with the Sales Engineer, as they will be able to help more reliably and completely than I can.
Now, what's going on? The primary purpose of cov-build is to watch the build process for invocations of compilers, and when one is found, compile the same code using the Coverity compiler (called cov-emit). But in order to recognize a compiler, cov-build needs to know its command line name, what kind of compiler it is, where its include files are stored, etc. This is accomplished by a helper tool called cov-configure that must be run before cov-build. If cov-configure has not been run, then no compiler invocations will be recognized, which appears to be the case for you, as indicated by "No files were emitted".
Synopsys has a page called CLI Integration Cheat sheet that gives these commands for use with IAR:
cov-configure --comptype iar:arm --compiler iccarm --template
cov-build --dir <intermediate directory> "c:\Program Files (x86)\IAR Systems\Embedded Workbench 6.5\common\bin\IarBuild.exe" sample_project.ewp -build Debug -log all
I can't personally vouch for these commands (I don't have IAR, nor access to the Coverity tools anymore; I'm a former employee), but something like that will be needed. Again, your assigned Sales Engineer should be able to help.
Finally, for new Coverity users, I recommend using the cov-wizard tool. cov-wizard is a graphical front-end to the command line tools, and has help text explaining the concepts and procedures, along with a convenient interface for performing them. There are several steps even after cov-build, and cov-wizard will walk you through all of them. Its final screen shows exactly what command lines it used in case you want to script them.
I tried a clean build and also disabled the sstate cache, but I am still facing the issue.
Please help.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: The recipe autoconf-archive-native is trying to install files into a shared area when those files already exist. Those files and their manifest location are:
/builds/mharriss/poky-tpm/build/tmp/sysroots/x86_64-linux/usr/share/aclocal/ax_check_enable_debug.m4
Matched in b'manifest-x86_64-gnome-common-native.populate_sysroot'
/builds/mharriss/poky-tpm/build/tmp/sysroots/x86_64-linux/usr/share/aclocal/ax_code_coverage.m4
Matched in b'manifest-x86_64-gnome-common-native.populate_sysroot'
Please verify which recipe should provide the above files.
The build has stopped as continuing in this scenario WILL break things, if not now, possibly in the future (we've seen builds fail several months later). If the system knew how to recover from this automatically it would however there are several different scenarios which can result in this and we don't know which one this is. It may be you have switched providers of something like virtual/kernel (e.g. from linux-yocto to linux-yocto-dev), in that case you need to execute the clean task for both recipes and it will resolve this error. It may be you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning those recipes should again resolve this error however switching DISTRO_FEATURES on an existing build directory is not supported, you should really clean out tmp and rebuild (reusing sstate should be safe). It could be the overlapping files detected are harmless in which case adding them to SSTATE_DUPWHITELIST may be the correct solution. It could also be your build is including two different conflicting versions of things (e.g. bluez 4 and bluez 5 and the correct solution for that would be to resolve the conflict. If in doubt, please ask on the mailing list, sharing the error and filelist above.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: If the above message is too much, the simpler version is you're advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: Function failed: sstate_task_postfunc
ERROR: Logfile of failure stored in: /builds/mharriss/poky-tpm/build/tmp/work/x86_64-linux/autoconf-archive-native/2019.01.06-r0/temp/log.do_populate_sysroot.5051
ERROR: Task (virtual:native:/builds/mharriss/poky-tpm/layers/meta-junos/recipes-devtools/autoconf-archive/autoconf-archive_2019.01.06.bb:do_populate_sysroot) failed with exit code '1'
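For what it's worth, the clean task the error message suggests would look something like this for the two recipes it names (alternatively, genuinely harmless overlaps can be added to SSTATE_DUPWHITELIST as the message describes):

# clean both recipes that claim the same aclocal files, then retry the build
bitbake -c cleansstate gnome-common-native autoconf-archive-native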
I am attempting to create a recipe that includes a custom python package written by us and sourced from a git server on our LAN. I'm running into an issue with defining the license. There is no license. From what I've read, in this situation these license fields should be used in the recipe:
LICENSE="CLOSED"
LIC_FILES_CHKSUM=""
and this should be all that is required.
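For context, a minimal recipe along those lines might look like this (the package name, git URL, and build class are purely illustrative):

SUMMARY = "Internal Python package (example)"
LICENSE = "CLOSED"
# no LIC_FILES_CHKSUM is needed when LICENSE is CLOSED

SRC_URI = "git://git.example.lan/our-package.git;protocol=https;branch=master"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

inherit setuptools3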
Trying to build the recipe gives the following error when the recipe is parsed:
ERROR: Nothing PROVIDES '<recipe>' <recipe> was skipped: because it has a restricted license not whitelisted in LICENSE_FLAGS_WHITELIST
My understanding is that the CLOSED license should not require whitelisting as it is coded as a specific case, but I have tried adding the recipe to the whitelist without success.
Should I be using some other license in this situation? Should I be using LICENSE_FLAGS? I've tried to find a solution in the documentation without success, perhaps due to my noob status and the steepness of the learning curve.
Can anyone help take the edge off the curve for me?
After deleting tmp, sstate-cache and downloads, I tried removing LIC_FILES_CHKSUM and then creating a custom license; neither approach worked. I then ran the build from scratch without the custom recipe, then added it back, and now both techniques work as expected.
It appears there was still some state information related to earlier incorrect values of the license fields hanging about, perhaps in cache.
This is the first time that deleting tmp, downloads and sstate-cache has not brought the system back to a truly clean state after I have futzed around with things exploring how they work. bitbake -c clean / -c cleanall has never done a reasonable job for me.
Thanks for the helpful comments.
So, I was given the task of upgrading our Yocto-based system from fido to morty. I have very little experience with Yocto; I have been struggling with it and trying to understand it for almost a week now. I have managed to fix some issues, but now I'm facing a problem when trying to build the image:
ERROR: basic-image-1.0-r0 do_rootfs: The following packages could not be configured offline and rootfs is read-only: ['component']
ERROR: basic-image-1.0-r0 do_rootfs: Function failed: do_rootfs
If I disable the components, basic-image builds just fine, and both of them build just fine on their own, i.e. bitbake component.
I don't even know where to start looking for a solution. So if you have any idea what might be causing this or where to start looking for a solution, it would be greatly appreciated.
Of course I have been browsing the Yocto manuals, but there is so much stuff that I'm just overwhelmed by all of it.
Well, the "problem" stems from the fact that you have the following in your image:
IMAGE_FEATURES += "read-only-rootfs"
That implies that nothing can modify the rootfs during runtime, everything has to be done off-line, i.e. when constructing the rootfs in question.
Your packages, component here (adcl and cfgmgr in your original question), all have a post-installation script that includes the following snippet:
pkg_postinst_${PN} () {
    if test "x$D" != "x"; then
        # Need to run on first boot
        exit 1
    fi
}
(Something similar at least, which exits with 1.)
The conditional in my example checks whether the pkg_postinst script is being run during rootfs creation; if so, it exits with status 1. That means the pkg_postinst has to be run live on the target system. However, as the target system is read-only, this will be impossible, and the build fails.
You'll have to check for pkg_postinst scripts and rewrite them so that they can be run during rootfs creation.
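A sketch of such a rewrite (the directory and file names are invented for illustration): when $D is set, the work is done against the image root while the rootfs is being constructed, so nothing needs to be deferred to first boot.

pkg_postinst_${PN} () {
    if test "x$D" != "x"; then
        # $D points at the image root during rootfs construction; do the work there
        mkdir -p $D${sysconfdir}/myapp
        echo "configured offline" > $D${sysconfdir}/myapp/state
    else
        # fallback for installing the package on a live (writable) target
        mkdir -p ${sysconfdir}/myapp
        echo "configured online" > ${sysconfdir}/myapp/state
    fi
}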