I am trying to integrate Yocto 2.6.2 because the software team has requested newer package versions. Besides regular OE bitbake, our company has another build system, also based on Python. I am not very familiar with it, but it invokes bitbake too; it just fetches the layers from company servers (since it cannot reach the internet) and adds some extra data. The issue is the following: with regular bitbake everything is fine and the build goes smoothly, but when I run our proprietary build system it fails in do_rootfs with the following reason, copied from the log:
**NOTE: Running intercept scripts:**
**NOTE: > Executing update_icon_cache intercept ...**
**NOTE: Exit code 127. Output:**
**/home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/intercept_scripts-12c5ef04e386052464955905a231f7ec3a3eb8c0452bbf7c1cd0f436ca99cbf7/update_icon_cache: 6: /home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/intercept_scripts-12c5ef04e386052464955905a231f7ec3a3eb8c0452bbf7c1cd0f436ca99cbf7/update_icon_cache: /home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/recipe-sysroot-native//gdk-pixbuf-2.0/gdk-pixbuf-query-loaders: not found**
**ERROR: The postinstall intercept hook 'update_icon_cache' failed, details in /home/w23698/projects/btstmp/builds/home/w23698/projects/proj_csc5_clean/proj_csc5/srcMirror/build/tmp/work/csc5-poky-linux-gnueabi/ww-image/1.0-r0/temp/log.do_rootfs**
**DEBUG: Python function do_rootfs finished**
**ERROR: Function failed: do_rootfs**
With regular bitbake, the invoked intercept script (update_gio_module_cache) is very different, and it passes.
Where is the decision made about which intercept script to call?
I noticed that all the intercept scripts in the proprietary build system are chmoded to be executable. As a simple first try I matched the permission flags to those in the original Yocto poky layer, but it didn't help.
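For what it's worth, the selection is not made centrally by do_rootfs: in oe-core, classes such as gtk-icon-cache.bbclass append a call to a named intercept into each package's own postinst, so which intercepts run depends on which packages request them. The sketch below is paraphrased from memory of the oe-core class, not copied from any particular release, so verify against your layers:

```
# Paraphrased sketch of how a class wires a postinst to an intercept.
# $INTERCEPT_DIR points at the intercept_scripts-* directory from the log;
# the first argument names the script to run (here update_icon_cache).
pkg_postinst_${PN} () {
    if [ "x$D" != "x" ]; then
        # Image build: defer the work to the intercept, which runs later
        # using tools from the native sysroot (gdk-pixbuf-query-loaders etc.)
        $INTERCEPT_DIR/postinst_intercept update_icon_cache ${PKG} \
            mlprefix=${MLPREFIX} libdir_native=${libdir_native}
    else
        # Live on target: just run the tool directly
        gtk-update-icon-cache -q ${datadir}/icons/hicolor
    fi
}
```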
I'm designing a device which needs to perform a number of setup activities at first boot, and I'm trying to figure out the best way to do it. One of the tools at my disposal seems to be the fantastically incompletely documented pkg_postinst_ontarget.
One of the activities I need to perform depends on an SD card being successfully mounted. Would pkg_postinst_ontarget get executed after all fstab mounting activities have completed?
The yocto build places the post-installation scripts in /etc/ipk-postinsts if you are using ipk packages. Then, those scripts are typically run by systemd on target: the run-postinsts.service unit runs /usr/sbin/run-postinsts which runs and deletes all the scripts stored in /etc/ipk-postinsts. Hence, the scripts are run once at the first startup but disappear after they have been executed.
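To make that concrete, here is a minimal hypothetical recipe fragment (the mount point and marker file are invented for illustration); the function body is what ends up as a script under /etc/ipk-postinsts and gets run once by run-postinsts.service:

```
# Hypothetical recipe fragment; /media/sdcard and the marker file are
# placeholders, not paths from the original question.
pkg_postinst_ontarget_${PN} () {
    # Runs once on target via run-postinsts.service. Guard against the
    # SD card not being mounted yet; a non-zero exit marks the
    # postinst as failed rather than silently doing the wrong thing.
    mountpoint -q /media/sdcard || exit 1
    touch /media/sdcard/.first-boot-done
}
```

As for the ordering question: I would not rely on run-postinsts.service running after all fstab mounts; as far as I can tell, in Poky it runs early in boot. If the mount matters, one option is a systemd drop-in for run-postinsts.service adding e.g. After=media-sdcard.mount (the unit name systemd-fstab-generator derives from a /media/sdcard fstab entry), or simply guarding inside the script as above.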
I have a C/C++ application and I am trying to run cov-build, but I am getting the warning "NO FILES EMITTED". Can you please help? We are doing a POC of Coverity for static code analysis.
C:\Users\Master\bamboo-agent-home\xml-data\build-dir\DEC-L11PROJ-JOB1>cov-build --dir cov-int IarBuild.exe MainApplication\EWARM\L11_P4_uC1.ewp -build *
Coverity Build Capture (64-bit) version 2019.03 on Windows 10 Enterprise, 64-bit (build 18362)
Internal version numbers: 2c0f9c8cf4 p-pacific1-push-35439.872
IAR Command Line Build Utility V8.4.8.6680
Copyright 2002-2020 IAR Systems AB.
Total number of errors: 0
Total number of warnings: 0
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
For more details, please look at:
C:/Users/Master/bamboo-agent-home/xml-data/build-dir/DEC-L11PROJ-JOB1/cov-int/build-log.txt
First, if you are involved in a pre-sales Proof of Concept (POC), then there should be a Coverity Sales Engineer assigned to help with the POC. That person's role includes providing instructions and information similar to what I'll offer below, as well as answering technical questions such as yours. There may have been a miscommunication somewhere. Get in contact with the Sales Engineer, as they will be able to help more reliably and completely than I can.
Now, what's going on? The primary purpose of cov-build is to watch the build process for invocations of compilers, and when one is found, compile the same code using the Coverity compiler (called cov-emit). But in order to recognize a compiler, cov-build needs to know its command line name, what kind of compiler it is, where its include files are stored, etc. This is accomplished by a helper tool called cov-configure that must be run before cov-build. If cov-configure has not been run, then no compiler invocations will be recognized, which appears to be the case for you, as indicated by "No files were emitted".
Synopsys has a page called CLI Integration Cheat sheet that gives these commands for use with IAR:
cov-configure --comptype iar:arm --compiler iccarm --template
cov-build --dir <intermediate directory> "c:\Program Files (x86)\IAR Systems\Embedded Workbench 6.5\common\bin\IarBuild.exe" sample_project.ewp -build Debug -log all
I can't personally vouch for these commands (I don't have IAR, nor access to the Coverity tools anymore; I'm a former employee), but something like that will be needed. Again, your assigned Sales Engineer should be able to help.
Finally, for new Coverity users, I recommend using the cov-wizard tool. cov-wizard is a graphical front-end to the command line tools, and has help text explaining the concepts and procedures, along with a convenient interface for performing them. There are several steps even after cov-build, and cov-wizard will walk you through all of them. Its final screen shows exactly what command lines it used in case you want to script them.
Our code build process is driven by an HTTP server which starts the build after receiving a project UUID from the build command. Once the server starts the compilation, GCC-compatible output can be fetched from it.
Note: only my extension knows the project UUID, and it differs per workspace.
AFAIU I can implement it by:
programmatically adding a task which will call a script with the correct workspace uuid. Is this possible?
Having my extension manage the build process. This seems to be far from supported.
Bottom line, I'm trying to avoid asking the user to add anything to the configuration files and I want to completely manage the build process.
Thanks!
As I didn't find a suitable VS Code-only solution, I did the following:
I defined a helper script which I executed as the task. The helper script was responsible for the communication with the HTTP server.
I registered the task using vscode.workspace.registerTaskProvider API, and made sure to register it only after figuring out the UUID.
Then in the task itself I executed the helper script.
(A nice task register example can be found here: https://github.com/Microsoft/vscode-extension-samples/tree/master/task-provider-sample)
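Since the helper script carries most of the logic, here is a rough sketch of what it can look like (the server address, endpoint, and query format are invented placeholders; the real protocol was internal):

```shell
# Hypothetical sketch of the helper script executed by the task.
# The extension passes the per-workspace UUID as an argument.
BUILD_SERVER="${BUILD_SERVER:-http://buildserver.local:8080}"

build_url () {
    # Compose the URL that triggers a build for the given project UUID
    echo "${BUILD_SERVER}/build?uuid=$1"
}

run_build () {
    # Trigger the build and stream the GCC-compatible output, so a
    # standard problem matcher can pick errors out of the task terminal
    curl -sf "$(build_url "$1")"
}
```

The registered task then just runs the script with the UUID, and because the output is GCC-compatible, attaching the built-in $gcc problem matcher to the task makes the diagnostics clickable.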
So, I was given the task of upgrading our Yocto-based system from fido to morty. I have very little experience with Yocto; I have been struggling with it and trying to understand it for almost a week now. I have managed to fix some issues, but now I'm facing a problem when trying to build the image:
ERROR: basic-image-1.0-r0 do_rootfs: The following packages could not be configured offline and rootfs is read-only: ['component']
ERROR: basic-image-1.0-r0 do_rootfs: Function failed: do_rootfs
If I disable the components, the basic-image builds just fine, and the components build just fine on their own, i.e. bb component
I don't even know where to start looking for a solution. So if you have any idea what might be causing this or where to start looking for a solution, it would be greatly appreciated.
Of course I have been browsing the Yocto manuals, but there is so much material that I'm just overwhelmed by it.
Well, the "problem" stems from the fact that you have the following in your image:
IMAGE_FEATURES += "read-only-rootfs"
That implies that nothing can modify the rootfs during runtime, everything has to be done off-line, i.e. when constructing the rootfs in question.
Your packages (component here; adcl and cfgmgr in your original question) all have a post-installation script including the following snippet:
pkg_postinst_${PN} () {
    if test "x$D" != "x"; then
        # Need to run on first boot
        exit 1
    fi
}
(Something similar at least, which exits with status 1.)
The conditional in my example checks if the pkg_postinst script is being run during rootfs creation, if so, it exits with 1 as exit status. That means that the pkg_postinst has to be run live on the target system. However, as the target system is read-only, this will be impossible, and the build fails.
You'll have to check for pkg_postinst scripts, and rewrite them, such that they're able to be run during rootfs creation.
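A minimal sketch of such a rewrite (using a made-up package name, component, and made-up paths; in a real recipe the function would be pkg_postinst_${PN}): instead of bailing out with exit 1 when $D is set, do the work against $D at rootfs-creation time:

```shell
# Sketch of a pkg_postinst rewritten to work offline. "component" and
# the /etc/component paths are hypothetical placeholders.
pkg_postinst_component () {
    if [ -n "$D" ]; then
        # $D is set: we are populating the rootfs offline, so operate
        # on files under $D instead of refusing with exit 1
        mkdir -p "$D/etc/component"
        echo "configured-offline" > "$D/etc/component/state"
    else
        # $D is empty: running live on the target system
        mkdir -p /etc/component
        echo "configured-online" > /etc/component/state
    fi
}
```

(Deferring to first boot via pkg_postinst_ontarget would not help here: with a read-only rootfs that combination is rejected for the same reason, so handling the offline case is the usual fix.)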
We are trying to use the Coverity OpenSource service but have problems submitting our project files for analysis.
Whenever we submit the project.tgz to Coverity (no matter whether via the automation instructions or directly via the website),
we see that the build is being queued for a short time:
Last Build Status: Running. Your build is currently being analyzed
But after a few seconds the build fails, as it cannot find the archive:
Last Build Status: Failed. Your build has failed due to the following reason. Please fix the error and upload the build again.
Error details: :Failed to retrieve tar file ...more
The build log seems fine:
2015-12-18T12:30:44.458433Z|cov-build|5752|info|> Build time (cov-build overall): 00:34:26.499117
2015-12-18T12:30:44.458433Z|cov-build|5752|info|>
2015-12-18T12:30:44.462750Z|cov-build|5752|info|> Build time (C/C++/Java emits total): 00:49:03.604351
2015-12-18T12:30:44.462750Z|cov-build|5752|info|>
2015-12-18T12:30:44.462750Z|cov-build|5752|info|>
2015-12-18T12:30:44.462794Z|cov-build|5752|info|> 397 C/C++ compilation units (100%) are ready for analysis
2015-12-18T12:30:44.462794Z|cov-build|5752|info|> 19 Java compilation units (100%) have been captured and are ready for analysis
The issue seems to be consistent with Error details: :Failed to download tar file from . Unfortunately, no solution is given there.
Is there any naming convention/and or size restriction for the archive?
Thanks for your help!
After contacting Coverity support we received the following answer, and we could then successfully submit a build. It seems there was some hiccup on the Coverity side.
"This was due to some behind the scenes issues on our end – nothing interesting, but it is back up and running now. Thanks for your patience."