I'm trying to get more insight into how the sstate cache works for Yocto/BitBake. I understand that many dependent hashes, and things like timestamps, go into making a checksum (hash? I see both terms in the documentation...). I would like to know the specific steps taken to create the hash that is used for the sstate cache. I haven't had much luck finding details in the docs, so if anyone knows, or cares to link relevant docs, I would be grateful.
To know exactly what goes into the sstate cache signature, you can run bitbake-dumpsig on the recipe (and task) you want to look at, or pass it the sigdata file of that recipe's task directly. It will print everything that is used to compute the signature for that task.
It's a great tool when you want to understand why a recipe is not being rebuilt. For the opposite case, when a recipe is being rebuilt without you wanting it to, have a look at bitbake-diffsigs, which outputs the differences between two sigdata files, highlighting what triggered the rebuild.
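A minimal sketch of both invocations (the recipe name, and the machine, version and hash parts of the stamp paths, are placeholders rather than values from the question; the -t form needs a reasonably recent BitBake):

# dump the signature inputs for a given recipe and task
bitbake-dumpsig -t busybox do_compile
# or point it directly at a sigdata/sigbasedata file
bitbake-dumpsig tmp/stamps/<machine>/busybox/<version>.do_compile.sigdata.<hash>
# compare two signature files to see which variables changed between builds
bitbake-diffsigs tmp/stamps/<machine>/busybox/<version>.do_compile.sigdata.<hash1> tmp/stamps/<machine>/busybox/<version>.do_compile.sigdata.<hash2>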
For more info on sstate-cache, I can recommend reading "Sstate-cache magic" slides from Yocto Project Summit 2019: https://wiki.yoctoproject.org/wiki/images/1/18/Yocto_Summit_Lyon_Day2_2019.pdf
In the Yocto image build scripts there is a config variable named "IMAGE_FEATURES", and I want to create a custom image feature.
I searched my Yocto installation (which runs poky) for the existing image features, but I wasn't able to find where they are defined.
IMAGE_FEATURES is a bit special, as it is basically hardcoded into image.bbclass.
Generally you are way better off creating custom DISTRO_FEATURES and triggering on them wherever needed. See packagegroup-core-boot as an example of a recipe that changes behaviour based on DISTRO_FEATURES in various places.
Usage-wise there is little difference; the only thing you can't do is set DISTRO_FEATURES in the image recipe. If that is your actual need, then you should probably pour the new functionality into a custom image class that includes and extends image.bbclass, and call it myimage.bbclass (or similar).
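A minimal sketch of the DISTRO_FEATURES route (the feature name "my-feature" and the package name "some-extra-package" are made up for illustration; the class part is only needed if the behaviour has to live at image level):

# in your distro .conf
DISTRO_FEATURES_append = " my-feature"

# in any recipe that should react to the feature
RDEPENDS_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'my-feature', 'some-extra-package', '', d)}"

# myimage.bbclass - extends the stock image class; image recipes then inherit myimage
inherit image
IMAGE_INSTALL_append = " ${@bb.utils.contains('DISTRO_FEATURES', 'my-feature', 'some-extra-package', '', d)}"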
EDIT:
Initially, I referred to the dropbear recipe as an example that triggers behaviour based on systemd being set in DISTRO_FEATURES. This is technically correct (and it was the first recipe that came to my mind), but probably confusing as there is a dropbear-specific IMAGE_FEATURE too.
I'm building a Bitbake recipe and getting the following error message:
ERROR: When reparsing virtual:native:/path/to/poky/meta/recipes-devtools/cve-check-tool/cve-check-tool_5.6.4.bb.do_populate_cve_db, the basehash value changed from 0b637979bcb5db4263f9ed97497a6330 to bcd28a5efe646ed4d327fefa349f889c. The metadata is not deterministic and this needs to be fixed.
This reproduces in a clean build (after bitbake -c cleanall -c cleansstate <recipe>).
What is the reason for this error? The recipe has not been modified from the upstream version.
The fix I used was to go to that recipe and add an empty line at the end; that makes BitBake re-recognize and re-parse the recipe.
The following is the Yocto patch that adds this diagnostic message:
https://patchwork.openembedded.org/patch/133517/
Here is the commit message, which explains the reasons behind it and a possible way to get further details on the problem:
Bitbake can parse metadata in the cooker and in the worker during
builds. If the metadata isn't deterministic, it can change between
these two parses and this confuses things a lot. It turns out to be
hard to debug these issues currently.
This patch ensures the basehashes from the original parsing are passed
into the workers and that these are checked when reparsing for
consistency. The user is shown an error message if inconsistencies are
found.
There is debug code in siggen.py (see the "Slow but can be useful for
debugging mismatched basehashes" commented code), we don't enable this
by default due to performance issues. If you run into this message,
enable this code and you will find "sigbasedata" files in tmp/stamps
which should correspond to the hashes shown in this error message.
bitbake-diffsigs on the files should show which variables are
changing.
Signed-off-by: Richard Purdie
I stumbled onto a similar error. For me it happened in the do_install task. My aim was to store, in a specific place, a copy of a file with ${DATETIME} (the build time) attached to its name.
Apparently the recipe gets parsed multiple times during a build, and since the time had changed by the second parse, a different ${DATETIME} value was inserted and the metadata was therefore recognized as changed.
Simplest solution: touch <recipename>, then run cleansstate on your recipe, then carry on as usual from there.
For me it helped to remove the shared state cache for the recipe:
bitbake recipename -c cleansstate
And then, everything works like a charm.
This happens because tasks are evaluated twice: the first time by the cooker, and the second time by the BitBake worker. The task hash is calculated both times, and if the two values do not match, the metadata is considered non-deterministic. The base hash is calculated from the variables that are used in the task script, so if you use, for example, time-related variables like DATETIME, you will get this hash mismatch error. To avoid it, you need to exclude such variables from the hash calculation using the vardepsexclude varflag:
do_something[vardepsexclude]="DATETIME"
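For instance, a hypothetical do_install that embeds the build time in a file name (the paths and file names below are made up for illustration) would need the flag to keep its signature stable:

do_install_append() {
    # ${DATETIME} changes between the cooker's parse and the worker's re-parse
    install -d ${D}${datadir}/buildinfo
    echo "built ${DATETIME}" > ${D}${datadir}/buildinfo/build-${DATETIME}.txt
}
do_install[vardepsexclude] += "DATETIME"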
You can find a more elaborate explanation here:
https://www.kc8apf.net/2017/01/tired-of-taskhash-mismatch/
For what it's worth, doing this cleared it up for me:
devtool modify <recipe>
devtool reset <recipe>
This also happens if you modify a recipe while you're building: the error you stated goes off if you edit a recipe file while a build is in progress. If this is your case, just finish your edit and re-launch the build; it should work. It happened to me and I solved it this way.
This happens when a recipe contains changing data and you try to rebuild it. One way to avoid it is to delete the tmp/ directory; that is not a good solution, but it works as a last resort.
I have a custom machine layer based on https://github.com/jumpnow/meta-wandboard.
I've upgraded the kernel to 4.8.6 and want to add X11 to the image.
I'm modifying the image recipe (console-image.bb).
Since wandboard is based on i.MX6, I want to include the xf86-video-imxfb-vivante package from meta-fsl-arm.
However, it fails complaining about inability to build kernel-module-imx-gpu-viv. I believe that happens because xf86-video-imxfb-vivante DEPENDS on imx-gpu-viv which in turn RDEPENDS on kernel-module-imx-gpu-viv.
I realize that those dependencies have been created with meta-fsl-arm BSP and vanilla Poky distribution. But those things are way outdated for wandboard, hence I am using the custom machine layer with modern kernel.
The kernel is configured to include the Vivante DRM module and I really don't want the kernel-module-imx-gpu-viv package to be built.
Is there a way to exclude it from RDEPENDS? Can I somehow swear to the build system that I will take care of this specific run-time dependency myself?
I have tried blacklisting kernel-module-imx-gpu-viv by setting PNBLACKLIST[kernel-module-imx-gpu-viv] in my local.conf, but that's only part of a solution: it helps avoid build failures, but the packaging process still fails.
IIUC your problem comes from these lines in the imx-gpu-viv recipe:
FILES_libgal-mx6 = "${libdir}/libGAL${SOLIBS} ${libdir}/libGAL_egl${SOLIBS}"
FILES_libgal-mx6-dev = "${libdir}/libGAL${SOLIBSDEV} ${includedir}/HAL"
RDEPENDS_libgal-mx6 += "kernel-module-imx-gpu-viv"
INSANE_SKIP_libgal-mx6 += "build-deps"
I would actually qualify this RDEPENDS as a bug: usually kernel module dependencies are specified as RRECOMMENDS, because most modules can be compiled into the kernel, producing no separate package at all while still providing the functionality. But that's another issue.
There are several ways to fix this problem. The first general route is to tweak RDEPENDS for the package. It's just a BitBake variable, so you can either assign it some other value or remove some portion of it. In the first case it would look somewhat like this:
RDEPENDS_libgal-mx6 = ""
In the second one:
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"
Obviously, these two options have different implications for your present and future work. In general I would opt for the softer one, which is the second, because it has less potential for breakage when you update the meta-fsl-arm layer, which can change the imx-gpu-viv recipe in any kind of way. But when you're overriding some more complex recipe with big lists in variables, and you're modifying it heavily (not just removing a thing or two), it might be easier to maintain with a full hard assignment of the variables.
Now there is also the question of where to do this variable mangling. The main option is a .bbappend in your layer; that's what appends are made for. But you can also do it from your distro configuration (if you're maintaining your own distro, it might be easier to have all these tweaks in one place rather than sprayed across numerous appends) or from your local.conf (which is a nice place to quickly try it out, but probably not something to use in the longer term). I usually use a .bbappend, for example:
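A minimal sketch of such an append (the layer path below is made up; the file name must match the recipe name and version in meta-fsl-arm, and '%' can be used as a version wildcard):

# meta-yourlayer/recipes-graphics/imx-gpu-viv/imx-gpu-viv_%.bbappend
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"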
But there is also a completely different approach to this problem: rather than fixing the package's dependencies, you can fix what some other package provides. If, for example, you have a kernel configured with the imx-gpu-viv module built right into the main zImage, you can do
RPROVIDES_kernel-image += "kernel-module-imx-gpu-viv"
(also in .bbappend, distro configuration or local.conf) and that's it.
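For example (the kernel recipe name below is hypothetical; use whichever kernel recipe your machine actually builds):

# meta-yourlayer/recipes-kernel/linux/linux-wandboard_%.bbappend
RPROVIDES_kernel-image += "kernel-module-imx-gpu-viv"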
In any case, your approach to fixing this problem should reflect the difference between your setup and the recipe's assumptions. If you do have the module, but in a different package, then go for RPROVIDES; if you have some other module providing the same functionality to the libgal-mx6 package, then fix the libgal-mx6 dependencies (and it's better to really fix them, meaning not only drop something you don't need, but also add the things that are relevant for your setup).
I'm making a package for Atom, and Travis CI keeps telling me my build failed.
Update: I created a blank spec file and now my builds are passing.
You can see my package here: https://travis-ci.org/frayment/language-jazz
The console is telling me:
sh: line 105: ./spec: No such file or directory
Missing spec folder! Please consider adding a test suite in
I went looking around at Atom packages on GitHub for 'spec' files and they seem to be CoffeeScript based, but I can't understand what on earth they contain. There isn't much documentation on the subject, so:
What is a 'spec' file, and what do I put in it?
Help is very appreciated.
The ./spec directory should contain one or more Jasmine Specifications for the Atom Package you are developing, for example, this spec is taken from the Atom documentation:
describe "when a test is written", ->
it "has some expectations that should pass", ->
expect("apples").toEqual("apples")
expect("oranges").not.toEqual("apples")
One of the biggest challenges with open-source software is maintaining quality when a large number of individual contributors are providing code; one solution to this is a high level of test coverage:
Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing.
In Atom's case, all of the specifications are added to the ./spec folder and must end with -spec.coffee, so for example if you were creating a package named awesome and your code sat within ./lib/awesome.coffee, then your spec would be ./spec/awesome-spec.coffee. Your spec should exercise the key areas of your code to give you confidence when merging pull requests into your master branch.
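A minimal sketch of what such a spec might contain for that hypothetical awesome package (the require path assumes the package's main module lives at lib/awesome.coffee):

# ./spec/awesome-spec.coffee
awesome = require '../lib/awesome'

describe "awesome", ->
  it "loads the main module", ->
    expect(awesome).toBeDefined()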
I have a couple of packages on Atom.io, and both of them have tests included; you are welcome to use these as concrete examples of how Jasmine 1.3 tests can be written to support the functionality of your packages. Equally, the majority of packages on Atom.io have a set of tests that you can draw upon to build your own test suite.
Can anyone share their experience of a workflow for R project development under ESS? I have tried several times to learn Emacs, but I haven't got the hang of it yet. I can understand ESS as an editor, but is there a project view in ESS? What are efficient ways to set up and view an R project directory, code, and test, and how does ESS have an edge in facilitating the whole process?
Do you use ESS only as a good R editor, or do you tend to emulate an R IDE environment within ESS?
Thanks for any advice.
It sounds like you're asking two separate questions.
One question concerns workflow and the other concerns using ESS.
As I use StatET and Eclipse, I'll just share my experience regarding the workflow aspect of your question.
As with Vincent I also follow something like the workflow set out by Josh Reich here (also see Hadley's useful comments):
Workflow for statistical analysis and report writing
Although it can vary between projects, I tend to have a couple of main R files
import.R: this imports data files and does any necessary cleaning and manipulation
analyse.R: This generates the output that I need for any final report
main.R: This calls import.R and analyse.R
The aim is for import.R and analyse.R to represent the complete and final workflow for producing the final results of any analyses.
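A minimal sketch of what that top-level file can look like (the file names are the ones mentioned above; everything else is illustrative):

# main.R
source("import.R")   # import the raw data files and do any cleaning/manipulation
source("analyse.R")  # generate the output needed for the final report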
In terms of a directory structure for an analysis project, I'll often also have the following folders
data: for storing any raw data files
meta: for storing meta data, such as variable labels, scoring systems for tests, recoding information, etc.
output: for storing any graphics, tables, or text generated by my analyses that I might want to incorporate into an external program
temp: When exploring the data and brainstorming analyses, I like to type code into files instead of using the console. I tend to label these temp1.R, temp2.R, temp3.R, and I store them in a temp folder. That way I have a permanent record that's easily accessible. If the analyses become final, they get incorporated into one of the main R files (i.e., import.R or analyse.R).
functions: If I think that a function will be needed across a couple of projects, I often place it (one function per file, or a set of related functions in a file) in a folder called functions. This makes it relatively easy to reuse functions across projects when the formal requirements of package development are more than needed (see the sketch after this list).
library: If I want to create some general functions that I think will be project specific, I'll place them in this folder
save: A folder to store any saved R objects
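For the functions folder, a minimal sketch of how one of the main scripts can pull everything in (the folder name is the one described above; the pattern simply matches .R files):

# e.g. near the top of main.R
for (f in list.files("functions", pattern = "\\.R$", full.names = TRUE)) source(f)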
StatET and Eclipse make it easy to interact with such a file system.
Of course, given all the R gurus that use ESS and Emacs, I'm sure it also handles interactions with the file system well.
I'm not exactly sure what you expect as an answer to this one. I, for one, have stolen (and adapted) a system that was suggested here a little while ago (by Josh Reich):
Create a folder for every project, and split up your work in a bunch of different .R files:
Load.R for getting your raw data into R;
Prep.R for cleaning the data, recoding variables, etc.;
Func.R for coding any custom functions you will need for evaluation; and
Eval.R for running your final stuff.
If that doesn't fit your style, just change it.
Then, you can either have a master file that calls each of the parts one after the other (good for reproducibility), or save at different stages and have the individual scripts load the appropriate data (good if some of the prep work is very computationally/time intensive).
On a different note, the trick posted at the following link really helped me get into ESS. It turns Shift-Enter into a one-stop ESS shop: http://www.kieranhealy.org/blog/archives/2009/10/12/make-shift-enter-do-a-lot-in-ess/
Others have given you some good ideas about how to setup your directory/file structure for a project.
You also asked about "project views," in which case you might want to look into the Emacs Code Browser (ECB).
You can find some screenshots of it in action on its site:
http://ecb.sourceforge.net/screenshots/index.html