Bitbake: "The metadata is not deterministic and this needs to be fixed"

I'm building a Bitbake recipe and getting the following error message:
ERROR: When reparsing virtual:native:/path/to/poky/meta/recipes-devtools/cve-check-tool/cve-check-tool_5.6.4.bb.do_populate_cve_db, the basehash value changed from 0b637979bcb5db4263f9ed97497a6330 to bcd28a5efe646ed4d327fefa349f889c. The metadata is not deterministic and this needs to be fixed.
This reproduces in a clean build (after bitbake -c cleanall -c cleansstate <recipe>).
What is the reason for this error? The recipe has not been modified from the upstream version.

The fix I used was to open that recipe and add an empty line at the end; that prompts BitBake to re-parse the recipe.

The following is the Yocto patch that adds this diagnostic message:
https://patchwork.openembedded.org/patch/133517/
Here is the commit message, which explains the reasoning and a possible way to get further details on the problem:
Bitbake can parse metadata in the cooker and in the worker during
builds. If the metadata isn't deterministic, it can change between
these two parses and this confuses things a lot. It turns out to be
hard to debug these issues currently.
This patch ensures the basehashes from the original parsing are passed
into the workers and that these are checked when reparsing for
consistency. The user is shown an error message if inconsistencies are
found.
There is debug code in siggen.py (see the "Slow but can be useful for
debugging mismatched basehashes" commented code), we don't enable this
by default due to performance issues. If you run into this message,
enable this code and you will find "sigbasedata" files in tmp/stamps
which should correspond to the hashes shown in this error message.
bitbake-diffsigs on the files should show which variables are
changing.
Signed-off-by: Richard Purdie

I stumbled onto a similar error. For me it happened in the do_install routine. My aim was to store, in a specific place, a copy of a file with ${DATETIME} (referring to the build time) attached to its name.
Apparently recipes get parsed multiple times during a build, and since the time had changed by the second parse, a different ${DATETIME} value was inserted and the metadata was flagged as changed.
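A minimal sketch of this pattern and the usual fix (recipe task and file names here are made up for illustration): excluding DATETIME from the signature with vardepsexclude keeps the task hash deterministic while still expanding the timestamp at run time.

```
# Problematic: ${DATETIME} differs between the cooker's parse and the
# worker's re-parse, so the task signature is not deterministic.
do_install_append() {
    install -m 0644 ${B}/report.txt ${D}${datadir}/report-${DATETIME}.txt
}

# Fix: leave DATETIME out of the signature calculation for this task.
do_install[vardepsexclude] += "DATETIME"
```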

Simplest solution: touch the recipe file (touch <recipename>), then run cleansstate on your recipe, then continue as usual.

It helped for me to remove the shared state cache for the recipe:
bitbake recipename -c cleansstate
After that, everything works like a charm.

This happens because tasks are evaluated twice: the first time by the cooker, and the second time by the BitBake worker. The task hash is calculated twice, and if the two do not match, the metadata is considered unstable. The base hash is calculated from the variables that are used in the task script, so if you use time-related variables like DATETIME, you will get this hash mismatch error. To avoid the error you need to exclude such variables from the hash calculation, using varflags:
do_something[vardepsexclude] = "DATETIME"
A more elaborate explanation can be found here:
https://www.kc8apf.net/2017/01/tired-of-taskhash-mismatch/
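To make the mechanism concrete, here is a toy model of the idea in Python. This is not BitBake's actual siggen code (the function and variable names are invented for illustration); it just shows why a per-parse value like DATETIME produces two different base hashes, and why excluding it restores a stable hash:

```python
# Toy model of a task "basehash": hash the (name, value) pairs of every
# variable the task references, skipping excluded names. Analogous in
# spirit to do_task[vardepsexclude], but NOT BitBake's real implementation.
import hashlib

def basehash(task_vars, vardepsexclude=()):
    data = "".join(f"{k}={v}" for k, v in sorted(task_vars.items())
                   if k not in vardepsexclude)
    return hashlib.md5(data.encode()).hexdigest()

# First parse (cooker) and second parse (worker) see different DATETIME values.
parse1 = {"D": "/work/image", "DATETIME": "20240101120000"}
parse2 = {"D": "/work/image", "DATETIME": "20240101120007"}

assert basehash(parse1) != basehash(parse2)  # mismatch -> the reported error
assert basehash(parse1, vardepsexclude=("DATETIME",)) == \
       basehash(parse2, vardepsexclude=("DATETIME",))  # excluded -> stable
```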

For what it's worth, doing this cleared it up for me:
devtool modify <recipe>
devtool reset <recipe>

This also happens if you modify the recipe while you're building: the error fires if you edit a recipe file while a build is in progress. If that is your case, just finish your edit and re-launch the build; it should work. That happened to me and was solved this way.

This happens when a recipe contains changing data and you try to rebuild it. One way to work around it is to delete the tmp/ directory; this is not a good solution, but it works as a last resort.

Related

Can we get the Coverity report specific to only one issue like URL Manipulation Error?

I am using cov-capture and cov-analyze to get the reports in my VM. Can anyone help with the command to run cov-analyze for only a specific kind of error? For example: various XML files are created and the analysis takes time to run, so to save time I would like a single report for a single issue such as URL Manipulation or an encryption error.
Note: the tool used is Synopsys Coverity, with REST API code in Python and Flask.
To run the analysis with only a single checker enabled, use the --disable-default and --enable options like this:
$ cov-analyze --disable-default --enable CHECKER_NAME ...
CHECKER_NAME is the all-caps, identifier-like name of the checker that reports issues of a certain type. For URL Manipulation, the checker is called PATH_MANIPULATION. The Checker Reference lists all of the checker names.
However, be aware that doing this repeatedly for each checker will take significantly longer than simply running all desired checkers at once because there is substantial overhead involved in simply reading the program into memory for analysis.
If your goal is faster analysis turnaround for changes you are making during development before check-in or push, you may want to look into using the cov-run-desktop command, which is meant for that use case.

Task is not re-triggered even if variables in vardeps change

I have an issue related to vardeps where I am making a task dependent on some variables.
I have created some new variables, e.g., NEW_VARIABLE, and added them to BB_ENV_EXTRAWHITE. In some recipes I wrote my own implementations of some tasks that depend on these new variables, and for this dependency to work I added, e.g., do_install[vardeps] = "NEW_VARIABLE". I now expect that every time I change NEW_VARIABLE and run bitbake recipename, the do_install task should run. I checked the task signature and I do see NEW_VARIABLE there.
Let's assume I have two possible values for this variable. When I set the variable for the first time to "value1", i.e., the first build, everything works and there is no problem. When I change it to the other value, "value2", not used before, and build the recipe again, do_install also runs and no problem occurs. The problem, however, is that if I set the variable back to the old value "value1" and execute bitbake recipename again, do_install is not re-triggered, which leads to wrong/old data in the work directory and in the produced image.
I tried setting BB_DONT_CACHE, as I understood in an old question that the problem might be that the recipe needs to be parsed again, however this did not work at all.
I do not want to always run the task on every build, i.e., do_install[nostamp] = "1", so that solution cannot be considered. I just want it to run again every time I change NEW_VARIABLE.
Is what I am expecting normal behavior, or does Yocto not work this way?
I faced the same issue all day and also tried vardeps, BB_DONT_CACHE, etc., then switched to using SRC_URI, but still got the same behavior. Like you, I build a separate recipe (bitbake name) and look in the recipe's image/ work directory for the output. It turns out I just assumed it would be updated there as with any other new change, but once I change the input back to an old value the shared state cache kicks in and the work directory is left alone, fooling me into thinking it does not work properly. When I build the final base image, however, it is properly populated. I guess one needs to force the build, or bypass the shared state cache, to get the output this way.
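If the goal is just to see fresh task output in the work directory while iterating, one option (illustrative commands, not part of the answer above) is to force the task, or to invalidate the recipe's shared state first:

```
# Force do_install to run even though its signature is unchanged
bitbake -f -c install <recipename>

# Or drop the recipe's sstate objects so its tasks genuinely re-run
bitbake -c cleansstate <recipename>
bitbake <recipename>
```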

How are checksums (hash) generated in yocto for the sstate cache?

I'm trying to get more insight into how the sstate cache works for yocto/bitbake. I understand that many dependent hashes, and things like timestamp are used to make a checksum (hash? I see both in the documentation...). I would like to know what are the specific steps taken to create the hash that is used in the sstate. I haven't had much luck finding any details in the docs, so if anyone knows, or cares to link relevant docs, I would be grateful.
To know what exactly makes it to the sstate-cache, you can run bitbake-dumpsig on your recipe (+ task) you want to have a look at or pass directly the sigdata file of your recipe task to it. This will print everything that is used for the sstate-cache of this task.
It's a great tool when you want to understand why a recipe is not being rebuilt. For when a recipe is being rebuilt when you don't want it to be, have a look at bitbake-diffsigs, which will output the differences between two sigdata files, highlighting what triggered the rebuild.
For more info on sstate-cache, I can recommend reading "Sstate-cache magic" slides from Yocto Project Summit 2019: https://wiki.yoctoproject.org/wiki/images/1/18/Yocto_Summit_Lyon_Day2_2019.pdf
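Typical invocations look like this (the stamp paths and file names here are illustrative; actual sigdata file names embed the machine, recipe, task, and hash):

```
# Show every variable and dependency that went into one task's signature
bitbake-dumpsig tmp/stamps/<machine>/<recipe>/*.do_install.sigdata.*

# Compare two signatures to see which variables caused a rebuild
bitbake-diffsigs old.do_install.sigdata.<hash1> new.do_install.sigdata.<hash2>
```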

Make yocto skip a recipe instead of stopping

I have a recipe that does a check during parsing. What I would like to do is instead of issuing a warning or stopping with an error, I would like to make yocto completely ignore the recipe as if it was never there. It could still error out if some other recipe RDEPENDS on it, but otherwise parsing would be successful.
Is this possible to do?
EDIT: I don't see a way to do it.
But you can "hide" specific recipe(s) using the BBMASK variable. Its value is a regular expression for masking specific files or paths; you can also mask a whole directory.
We use that mechanism; the variable is set in a configuration file (the distro configuration in our case, but it can be a different configuration file).
You can find more information in the documentation for that variable: https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#var-BBMASK
Some examples copied from the linked documentation:
BBMASK += "/meta-ti/recipes-misc/ meta-ti/recipes-ti/packagegroup/"
BBMASK += "/meta-oe/recipes-support/"
BBMASK += "/meta-foo/.*/openldap"
BBMASK += "opencv.*\.bbappend"
BBMASK += "lzma"
When BitBake launches, it first parses everything it can in order to figure out what it has and whether there are any obvious errors. Only after this stage does it analyze what you have asked it to do. So if you have syntax errors, the only way to avoid them is not to add the layer containing the invalid recipe to bblayers.conf.
Yes, you can, by raising the bb.parse.SkipRecipe exception:
python() {
    ...
    if ...:
        raise bb.parse.SkipRecipe("Message")
    ...
}
I can't find it well documented, but Google does return some reassuring results for this.

Class _NSZombie_GraphicPath is implemented in both ?? and ??. One of the two will be used. Which one is undefined

I'm getting the following run time output:
"Class _NSZombie_GraphicPath is implemented in both ?? and ??. One of the two will be used. Which one is undefined."
Have no clue how to fix this. There are a couple of other questions that cover this, but it seems in those unit testing was involved. Has anyone ever come across this problem before and if so how was it fixed?
It implies that two images and/or static libraries export the class GraphicPath. For example, one may be your app and the other a unit test bundle; a library you link against could also export that class. In any event, you should review your project's compile phases, including all dependencies, and ensure that GraphicPath.m is compiled exactly once, then remove the others. Also note that it is possible to compile the same file twice for a single target. I expect that you would also see a log warning when running with zombies disabled. You can also use nm to dump an image's symbol names.
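As a sketch (the binary and library names below are placeholders), nm can confirm which images define the class; with the modern Objective-C runtime the class shows up as a symbol named _OBJC_CLASS_$_GraphicPath:

```
# Look for the class symbol in the app binary and in each static library
nm MyApp.app/MyApp | grep GraphicPath
nm libVendor.a | grep GraphicPath
```

If the symbol is defined in more than one of the listed images, that is the duplicate the runtime is warning about.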