Yocto upgrade from fido to morty: "rootfs is read-only" error

So, I was given the task to upgrade our Yocto-based system from fido to morty. I have very little experience with Yocto, and I have been struggling with it and trying to understand it for almost a week now. I have managed to fix some issues, but now I'm facing a problem when trying to build the image:
ERROR: basic-image-1.0-r0 do_rootfs: The following packages could not be configured offline and rootfs is read-only: ['component']
ERROR: basic-image-1.0-r0 do_rootfs: Function failed: do_rootfs
If I disable the components, basic-image builds just fine, and both of them build just fine on their own, i.e. bitbake component.
I don't even know where to start looking. If you have any idea what might be causing this, or where to begin looking for a solution, it would be greatly appreciated.
Of course I have been browsing the Yocto manuals, but there is so much stuff that I'm just overwhelmed by all of it.

Well, the "problem" stems from the fact that you have the following in your image:
IMAGE_FEATURES += "read-only-rootfs"
That implies that nothing can modify the rootfs at runtime; everything has to be done offline, i.e. when the rootfs in question is constructed.
Your package component (adcl and cfgmgr in your original question) all have a post-installation script that includes the following snippet:
pkg_postinst_${PN} () {
    if test "x$D" != "x"; then
        # Need to run on first boot
        exit 1
    fi
}
(Something similar at least, which exits with 1.)
The conditional in my example checks whether the pkg_postinst script is being run during rootfs creation; if so, it exits with 1 as its exit status. That tells the build system that the pkg_postinst has to be run live on the target system instead. However, as the target system is read-only, this is impossible, and the build fails.
You'll have to check the pkg_postinst scripts and rewrite them so that they can be run during rootfs creation.
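As a rough, hypothetical sketch (not the actual scripts from adcl or cfgmgr), a pkg_postinst that does its work against $D when it is set, instead of deferring to first boot, could look like this; the mkdir/echo lines are placeholders for whatever the real post-install work is:

pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # Running during rootfs creation: operate on the image root via $D
        # instead of exiting 1 and deferring to first boot.
        mkdir -p $D${sysconfdir}/myapp
        echo "configured offline" > $D${sysconfdir}/myapp/state
    else
        # Running live on the target (not reachable on a read-only rootfs,
        # but kept for completeness).
        mkdir -p ${sysconfdir}/myapp
        echo "configured on target" > ${sysconfdir}/myapp/state
    fi
}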

Related

command not found: flow

I followed the Flow installation guide for npm & Babel, and when I get to the second stage, where you run flow init, I keep getting the error message zsh: command not found: flow. I installed Flow into my project (a branch of my Gatsby blog) for testing/debugging purposes. It is not installed globally, which is what the Flow docs state is the best practice:
Flow works best when installed per-project with explicit versioning rather than globally.
I have been having a similar issue with Lume, which returns zsh: command not found: lume.
If I enter echo $PATH, the colon-delimited list should include /Users/yourUserName/.deno/bin, but it is not there. If I add it by running:
export PATH="/Users/yourUserName/.deno/bin:$PATH"
Then I am able to run lume commands. However, when I try to run lume commands the next day, I have to go through the whole process once more as the error crops up again...
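For what it's worth, an export typed at the prompt only lasts for that shell session, which would explain why it is gone again the next day. A minimal sketch of making it persistent, assuming a default zsh setup that reads ~/.zshrc, would be:

# appended once to ~/.zshrc so every new shell session picks it up
export PATH="$HOME/.deno/bin:$PATH"

Here $HOME/.deno/bin is just the same /Users/yourUserName/.deno/bin path written portably.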
My question here today is regarding the Flow error and getting it sorted. I only mention the Lume error because it makes me fairly certain something is messed up in $PATH or my zsh config. I am just not sure what. The only caveat to that hunch, though, is that Deno is a global install, whereas Flow is installed directly into my project...
So maybe the two errors, while they have the same syntax, are totally separate?
Thank you in advance for any guidance/suggestions. Cheers!
I came across a video, from 2017 no less, in which the host had issues with Flow not working within the project, so he installed it globally. I gave it a shot, and the Flow error zsh: command not found: flow has been resolved...
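As an aside, since Flow is installed per-project, the local binary can also be run without a global install; a sketch, assuming a standard npm layout (the commands are generic, not taken from the Flow docs verbatim):

npx flow init              # npx resolves flow from ./node_modules/.bin
npx flow                   # run the locally installed flow
./node_modules/.bin/flow   # equivalent: invoke the project-local binary directly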

A recipe is trying to install files into a shared area when those files already exist

I tried a clean build and also disabled the shared state cache, but I am still facing the issue.
Please help.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: The recipe autoconf-archive-native is trying to install files into a shared area when those files already exist. Those files and their manifest location are:
/builds/mharriss/poky-tpm/build/tmp/sysroots/x86_64-linux/usr/share/aclocal/ax_check_enable_debug.m4
Matched in b'manifest-x86_64-gnome-common-native.populate_sysroot'
/builds/mharriss/poky-tpm/build/tmp/sysroots/x86_64-linux/usr/share/aclocal/ax_code_coverage.m4
Matched in b'manifest-x86_64-gnome-common-native.populate_sysroot'
Please verify which recipe should provide the above files.
The build has stopped as continuing in this scenario WILL break things, if not now, possibly in the future (we've seen builds fail several months later). If the system knew how to recover from this automatically it would however there are several different scenarios which can result in this and we don't know which one this is. It may be you have switched providers of something like virtual/kernel (e.g. from linux-yocto to linux-yocto-dev), in that case you need to execute the clean task for both recipes and it will resolve this error. It may be you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning those recipes should again resolve this error however switching DISTRO_FEATURES on an existing build directory is not supported, you should really clean out tmp and rebuild (reusing sstate should be safe). It could be the overlapping files detected are harmless in which case adding them to SSTATE_DUPWHITELIST may be the correct solution. It could also be your build is including two different conflicting versions of things (e.g. bluez 4 and bluez 5 and the correct solution for that would be to resolve the conflict. If in doubt, please ask on the mailing list, sharing the error and filelist above.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: If the above message is too much, the simpler version is you're advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: Function failed: sstate_task_postfunc
ERROR: Logfile of failure stored in: /builds/mharriss/poky-tpm/build/tmp/work/x86_64-linux/autoconf-archive-native/2019.01.06-r0/temp/log.do_populate_sysroot.5051
ERROR: Task (virtual:native:/builds/mharriss/poky-tpm/layers/meta-junos/recipes-devtools/autoconf-archive/autoconf-archive_2019.01.06.bb:do_populate_sysroot) failed with exit code '1'
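The recovery paths suggested by the error message itself would look roughly like the following; the two recipe names come from the manifest lines above, while the image name is a placeholder:

# Clean both recipes that claim the overlapping aclocal .m4 files, then rebuild
bitbake -c cleansstate autoconf-archive-native gnome-common-native
bitbake your-image-name

# Blunter option from the message: from the build directory, wipe tmp
# (reusing sstate is fine) and rebuild
rm -rf tmp
bitbake your-image-name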

An issue with postinst-intercept scripts

I am trying to integrate Yocto 2.6.2 because the software team is requesting higher package versions. Besides the regular OE bitbake, our company has another build system, also based on Python. I am not too familiar with it, but it invokes bitbake as well; it just fetches the layers from company servers (since it cannot fetch them from the net) and adds some additional data. The issue is the following: with regular bitbake everything is fine and the build goes smoothly, but when I run our proprietary build system it fails in do_rootfs with the following reason, copied from the log:
NOTE: Running intercept scripts:
NOTE: > Executing update_icon_cache intercept ...
NOTE: Exit code 127. Output:
/home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/intercept_scripts-12c5ef04e386052464955905a231f7ec3a3eb8c0452bbf7c1cd0f436ca99cbf7/update_icon_cache: 6: /home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/intercept_scripts-12c5ef04e386052464955905a231f7ec3a3eb8c0452bbf7c1cd0f436ca99cbf7/update_icon_cache: /home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/recipe-sysroot-native//gdk-pixbuf-2.0/gdk-pixbuf-query-loaders: not found
ERROR: The postinstall intercept hook 'update_icon_cache' failed, details in /home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/temp/log.do_rootfs
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs
With the normal bitbake build, the invoked intercept script (update_gio_module_cache) is quite different, and it passes.
Where is the decision made about which intercept script to call?
I noticed that all the intercept scripts in the proprietary build system are chmoded to be executable. I made one very simple attempt to match the access flags to those in the original Yocto poky layer. It didn't help.

Yocto: LICENSE="CLOSED"

I am attempting to create a recipe that includes a custom Python package written by us and sourced from a git server on our LAN. I'm running into an issue with defining the license: there is no license. From what I've read, in this situation these license fields should be used in the recipe:
LICENSE="CLOSED"
LIC_FILES_CHKSUM=""
and this should be all that is required.
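For illustration, a minimal recipe along those lines might look like the sketch below; the file name, URL, branch and setuptools3 class are generic assumptions, not the actual recipe:

# mypackage_1.0.bb -- hypothetical skeleton for a LAN-hosted Python package
SUMMARY = "In-house Python package with no published license"
LICENSE = "CLOSED"
# With LICENSE = "CLOSED" there is no license text to checksum,
# so LIC_FILES_CHKSUM can simply be left out (or left empty).

SRC_URI = "git://git.example.lan/mypackage.git;protocol=ssh;branch=master"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

inherit setuptools3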
Trying to build the recipe gives the following error when the recipe is parsed:
ERROR: Nothing PROVIDES '<recipe>' <recipe> was skipped: because it has a restricted license not whitelisted in LICENSE_FLAGS_WHITELIST
My understanding is that the CLOSED license should not require whitelisting as it is coded as a specific case, but I have tried adding the recipe to the whitelist without success.
Should I be using some other license in this situation? Should I be using LICENSE_FLAGS? I've tried to find a solution in the documentation without success, perhaps due to my noob status and the steepness of the learning curve.
Can anyone help take the edge off the curve for me?
After deleting tmp, sstate-cache and downloads, I tried removing LIC_FILES_CHKSUM and then creating a custom license; neither approach worked. I then ran the build from scratch without the custom recipe, added it back, and now both techniques work as expected.
It appears there was still some state information related to earlier incorrect values of the license fields hanging about, perhaps in cache.
This is the first time that deleting tmp, downloads and sstate-cache has not brought the system back to a truly clean state after I have futzed around with things to explore how they work. bitbake -c clean / -c cleanall has never done a reasonable job.
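For reference, the per-recipe clean tasks mentioned above take the recipe name as the target (myrecipe is a placeholder):

bitbake -c clean myrecipe        # removes the recipe's work output
bitbake -c cleansstate myrecipe  # also removes its shared-state (sstate) objects
bitbake -c cleanall myrecipe     # additionally removes its downloaded sources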
Thanks for the helpful comments.

Building a hello world project for a Verifone Terminal using Sourcery CodeBench for Verifone DTK

I am attempting to flash a basic hello world program to a Verifone terminal as an exercise in the development flow of the hardware. I'm currently running into an issue that is occurring somewhere during the post-build steps. After I build my project, I get the message:
***
*** The package '\Debug\dl.lab2.tar' is available for download.
***
This implies that the project built successfully. However, further up in the build messages, I can see:
"C:\Program Files (x86)\Verifone\PackageManagerProduction\Cygwin\tar.exe" -czf "usr1.bundle.lab2.tgz" "pkg.lab2.tar" "pkg.lab2.tar.p7s" "crt" -C "..\bundle" "./"
tar (child): gzip: Cannot exec: No such file or directory
tar (child): Error is not recoverable: exiting now
And indeed, when I try to load the resulting archive, I get an "Invalid bundle file" error on the PinPad. Inspecting the dl.lab2.tgz file shows that one of the internal archives is actually 0 KB, so I'm quite positive it's because this archive generation step is failing. I'm not sure why it's failing, though, because checking the directory contents, it seems like everything it's looking for is there, though I can't explain why it's searching for "./". Does anyone have an idea why this is failing, and can someone tell me whether it is possible to edit this archive generation step through CodeBench?
I figured out my answer to this, so I'll post it here to hopefully help someone else in the future. I was correct in assuming that the error returned by tar.exe was suspect. The post-build steps were being executed by running the external script simple_pkg.bat. Apparently the path in the simple_pkg.bat script was completely wrong; it was pointing to an executable that didn't exist. Modifying simple_pkg.bat to point to the correct tar.exe fixed my issue.