A recipe is trying to install files into a shared area when those files already exist - yocto

I tried a clean build and also disabled the sstate cache, but I'm still facing the issue. Please help.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: The recipe autoconf-archive-native is trying to install files into a shared area when those files already exist. Those files and their manifest location are:
/builds/mharriss/poky-tpm/build/tmp/sysroots/x86_64-linux/usr/share/aclocal/ax_check_enable_debug.m4
Matched in b'manifest-x86_64-gnome-common-native.populate_sysroot'
/builds/mharriss/poky-tpm/build/tmp/sysroots/x86_64-linux/usr/share/aclocal/ax_code_coverage.m4
Matched in b'manifest-x86_64-gnome-common-native.populate_sysroot'
Please verify which recipe should provide the above files.
The build has stopped as continuing in this scenario WILL break things, if not now, possibly in the future (we've seen builds fail several months later). If the system knew how to recover from this automatically it would however there are several different scenarios which can result in this and we don't know which one this is. It may be you have switched providers of something like virtual/kernel (e.g. from linux-yocto to linux-yocto-dev), in that case you need to execute the clean task for both recipes and it will resolve this error. It may be you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning those recipes should again resolve this error however switching DISTRO_FEATURES on an existing build directory is not supported, you should really clean out tmp and rebuild (reusing sstate should be safe). It could be the overlapping files detected are harmless in which case adding them to SSTATE_DUPWHITELIST may be the correct solution. It could also be your build is including two different conflicting versions of things (e.g. bluez 4 and bluez 5 and the correct solution for that would be to resolve the conflict. If in doubt, please ask on the mailing list, sharing the error and filelist above.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: If the above message is too much, the simpler version is you're advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.
ERROR: autoconf-archive-native-2019.01.06-r0 do_populate_sysroot: Function failed: sstate_task_postfunc
ERROR: Logfile of failure stored in: /builds/mharriss/poky-tpm/build/tmp/work/x86_64-linux/autoconf-archive-native/2019.01.06-r0/temp/log.do_populate_sysroot.5051
ERROR: Task (virtual:native:/builds/mharriss/poky-tpm/layers/meta-junos/recipes-devtools/autoconf-archive/autoconf-archive_2019.01.06.bb:do_populate_sysroot) failed with exit code '1'
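In this case the two aclocal macro files are staged by both gnome-common-native and autoconf-archive-native. If the overlap is judged harmless, the SSTATE_DUPWHITELIST option that the error message itself mentions can be sketched in local.conf as below. This is a workaround rather than a root-cause fix, the exact sysroot path is an assumption based on the error output above, and the variable has been removed in newer Yocto releases; cleaning both recipes first (bitbake -c cleansstate autoconf-archive-native gnome-common-native) is worth trying before whitelisting anything:

```conf
# local.conf -- hedged workaround: allow these specific overlapping files.
# ${TMPDIR} resolves to .../build/tmp here; adjust the sysroot path to
# match the paths printed in your own error output.
SSTATE_DUPWHITELIST += "${TMPDIR}/sysroots/x86_64-linux/usr/share/aclocal/ax_check_enable_debug.m4"
SSTATE_DUPWHITELIST += "${TMPDIR}/sysroots/x86_64-linux/usr/share/aclocal/ax_code_coverage.m4"
```

Whitelisting only the two specific files (rather than the whole aclocal directory) keeps the overlap detection active for any future conflicts.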

Related

Can't get rid of 'no rule to process file' warnings and 'duplicate output file' errors in framework upon pod linting

Problem
pod lib lint passes, but with warnings about "no rule to process file" for files in the .xcodeproj/ folder of my framework.
pod spec lint fails with the same warnings as above, plus the same warnings for MyLib/build, a folder that doesn't exist in my repository. It also gives me two errors to do with derived data:
error: Multiple commands produce '/Users/alex/Library/Developer/Xcode/DerivedData/App-efnsaqjhuzmwkkftkwknbhzigxxe/Build/Products/Release-iphonesimulator/MaterialFields/MaterialFields.framework/Headers/MaterialFields.h':
error: Multiple commands produce '/Users/alex/Library/Developer/Xcode/DerivedData/App-efnsaqjhuzmwkkftkwknbhzigxxe/Build/Products/Release-iphonesimulator/MaterialFields/MaterialFields.framework/Headers/MaterialFields-Swift.h':
What I've Tried
I've tried everything from deleting my derived data and erasing all my simulator content to cloning my repo into a different location and running the commands again. I've checked my build phases for duplicate files. Nothing has worked.
I've opened an issue on the CocoaPods GitHub (https://github.com/CocoaPods/CocoaPods/issues/8772). One of the contributors was kind enough to clone the repo and run the lint, and it passed just fine (with the same six "no rule to process file" warnings for files inside the .xcodeproj package).
My Thoughts
I'm expecting the linter to pass, as it does locally. I have a feeling it has something to do with the MyLib/build folder that gets created during pod spec lint, which produces a lot of the warnings. Likewise, I don't know how to fix the DerivedData errors; I can't find any information online to help me out.
What files should I be excluding in my podspec? So far I'm only excluding the plist.
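For comparison, exclusion patterns in a podspec look like the sketch below. The MyLib name and paths are placeholders based on the question's description, not a verified fix; excluding the project package and any build output is a common way to silence "no rule to process file" warnings:

```ruby
# MyLib.podspec -- hypothetical names; adjust paths to your project layout
Pod::Spec.new do |s|
  s.name          = 'MyLib'
  s.version       = '1.0.0'
  s.summary       = 'Example framework spec'
  s.source        = { :git => 'https://example.com/MyLib.git', :tag => s.version.to_s }
  s.source_files  = 'MyLib/**/*.{swift,h}'
  # Keep project metadata and build output out of the pod so the linter
  # has no unprocessable files to warn about:
  s.exclude_files = 'MyLib/*.xcodeproj/**/*', 'MyLib/build/**/*', 'MyLib/Info.plist'
end
```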

Yocto: LICENSE="CLOSED"

I am attempting to create a recipe that includes a custom python package written by us and sourced from a git server on our LAN. I'm running into an issue with defining the license. There is no license. From what I've read, in this situation these license fields should be used in the recipe:
LICENSE="CLOSED"
LIC_FILES_CHKSUM=""
and this should be all that is required.
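For reference, a minimal recipe sketch of this situation (the names, URL, and inherit line are illustrative, not the real recipe). Note that with LICENSE = "CLOSED" the LIC_FILES_CHKSUM line can simply be omitted rather than set to an empty string:

```conf
# mypackage_1.0.bb -- illustrative recipe for a proprietary in-house package
SUMMARY = "Custom in-house Python package"
LICENSE = "CLOSED"
# No LIC_FILES_CHKSUM is needed when LICENSE is CLOSED

SRC_URI = "git://git.example.lan/mypackage.git;protocol=https;branch=master"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

inherit setuptools3
```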
Trying to build the recipe gives the following error when the recipe is parsed:
ERROR: Nothing PROVIDES '<recipe>' <recipe> was skipped: because it has a restricted license not whitelisted in LICENSE_FLAGS_WHITELIST
My understanding is that the CLOSED license should not require whitelisting as it is coded as a specific case, but I have tried adding the recipe to the whitelist without success.
Should I be using some other license in this situation? Should I be using LICENSE_FLAGS? I've tried to find a solution in the documentation without success, perhaps due to my noob status and the steepness of the learning curve.
Can anyone help take the edge off the curve for me?
After deleting tmp, sstate-cache and downloads, I tried removing LIC_FILES_CHKSUM and then creating a custom license; neither approach worked. I then ran the build from scratch without the custom recipe, added it back, and now both techniques work as expected.
It appears there was still some state information related to the earlier, incorrect values of the license fields hanging about, perhaps in a cache.
This is the first time that deleting tmp, downloads and sstate-cache has not brought the system back to a truly clean state after I have futzed around exploring how things work. Neither bitbake -c clean nor bitbake -c cleanall has ever done a reasonable job.
Thanks for the helpful comments.

Asset Catalog Compile Error with On-Demand Resources: has no output specification

I've been trying to get On-Demand Resource to work but I keep getting this compile error:
/* com.apple.actool.errors */
: error: The tag combination "tagName" for "xxx.imageset/xxx#3x.png" has no output specification.
I had a look at actool man page and there's an option:
--asset-pack-output-specifications filename
Which says:
Tells actool where to write the information about ODR resources found
in the asset catalog. The emitted file will be a plist.
But I'm not really sure what to put as an argument, where this plist is used, or even whether this option is on the right track to fixing the error.
My coworkers and I struggled with this error for over a day and were only able to fix it by wiping our existing local repos and installing a fresh clone from our remote repos with the code that contains the on-demand resources.
In our case, I was the one that created the on-demand resources functionality and did the tagging for the assets. I built and ran all of that code, and everything worked fine locally on my machine. I pushed those commits to our remote, and when my coworkers pulled they received the asset catalog compile error that you reported when they tried to build.
I compared my build logs with those of my coworkers and found that I had the --asset-pack-output-specifications flag along with a filename, whereas they did not, even though all of our production code was the same. I never set that flag manually during development; it was automatically generated at some point in the process, but I have no idea where. I didn't even know it existed until this build failure occurred.
After struggling for many hours, we noticed that if my coworkers deleted their local projects entirely and started fresh by cloning the project again from the remote, they were suddenly able to build. They had already tried cleaning and nuking their derived data, but that didn't work; only deleting the repos and projects entirely did the trick. I'm not sure why, but something about wiping the project and all associated directories and rebuilding completely fresh triggered whatever enables the --asset-pack-output-specifications flag.
I just faced this issue and was totally against deleting my repo and cloning again.
I noticed that alongside this error, I also got a warning stating that I had assets under the same name (thus being duplicated).
Deleting the duplicated asset to get rid of the warning also fixed the compilation error.
Hope this helps someone, as deleting the repo and cloning again shouldn't be an option.
I just faced the issue as well. I was able to solve it simply by deleting the on-demand resource tags and tagging the assets again.
I solved it by restarting Xcode, and then the build ran successfully. The error only seems to appear once; I don't know why.

Yocto upgrade from fido to morty: rootfs is read-only error

So, I was given the task of upgrading our Yocto-based system from fido to morty. I have very little experience with Yocto; I have been struggling with it and trying to understand it for almost a week now. I have managed to fix some issues, but now I'm facing a problem when trying to build the image:
ERROR: basic-image-1.0-r0 do_rootfs: The following packages could not be configured offline and rootfs is read-only: ['component']
ERROR: basic-image-1.0-r0 do_rootfs: Function failed: do_rootfs
If I disable the components, the basic-image builds just fine, and both of them build just fine on their own (i.e. bitbake component).
I don't even know where to start looking for a solution. So if you have any idea what might be causing this or where to start looking for a solution, it would be greatly appreciated.
Of course I have been browsing the Yocto manuals, but there is so much stuff that I'm just overwhelmed by it all.
Well, the "problem" stems from the fact that you have the following in your image:
IMAGE_FEATURES += "read-only-rootfs"
That implies that nothing can modify the rootfs during runtime, everything has to be done off-line, i.e. when constructing the rootfs in question.
Your packages (component here; adcl and cfgmgr in your original question) all have a post-installation script that includes something like the following snippet:
pkg_postinst_${PN} () {
    if test "x$D" != "x"; then
        # Need to run on first boot
        exit 1
    fi
}
(Something similar at least, ending in exit 1.)
The conditional in my example checks whether the pkg_postinst script is being run during rootfs creation ($D is non-empty in that case); if so, it exits with status 1, which tells the build that the script has to be run live on the target system instead. However, as the target's rootfs is read-only, that is impossible, and the build fails.
You'll have to find those pkg_postinst scripts and rewrite them so that they can run during rootfs creation.
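A rewritten postinst can usually do its work against $D when run offline and keep the first-boot path only as a fallback. The function body below is illustrative (the myapp paths are placeholders), showing only the $D pattern:

```shell
pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # Offline, during rootfs creation: operate on the image root $D
        mkdir -p $D${sysconfdir}/myapp
        echo "configured at rootfs time" > $D${sysconfdir}/myapp/state
    else
        # Live on the target; unreachable with read-only-rootfs, kept
        # only for images without that feature
        mkdir -p ${sysconfdir}/myapp
        echo "configured at first boot" > ${sysconfdir}/myapp/state
    fi
}
```

The key point is that nothing in the offline branch may depend on the target actually running (no started services, no target-only tools).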

NuGet error in TeamCity: The process cannot access the file because it is being used by another process

We're using TeamCity (9.0) as our CI server to build, test and deploy several applications. Recently we have been seeing occasional (one in every 30-40 builds or so) NuGet (2.8.3) errors as follows:
[restore] The process cannot access the file 'C:\BuildAgent\work\e32cbd0940f38bf.....\packages\Newtonsoft.Json.5.0.6\Newtonsoft.Json.5.0.6.nupkg' because it is being used by another process.
where the actual package seems to differ from time to time.
We suspected it had something to do with the same package being referenced in multiple projects within the same solution, but I would expect NuGet to handle this correctly by filtering out duplicates instead of attempting to retrieve the same package multiple times and ending up with write-locks when restoring the packages to the work folder.
As a first step of each Build Configuration we have a 'NuGet Installer' step set to 'restore'. I've tried fiddling with its settings (different 'Update modes', '-NoCache', older NuGet version (2.8.0)), but to no avail.
Has anyone else experienced similar issues, and if so, do you have any suggestions on how to ensure this error does not occur?
Any help would be greatly appreciated!
I had the same issue with Jenkins and fixed that by adding "-DisableParallelProcessing" to the nuget restore command, the final command would look like:
nuget restore "%WORKSPACE%\Solutions\App\App.sln" -DisableParallelProcessing
Excluding NuGet package files from our anti-malware products resolved this issue for us.
I used the SysInternals Process Explorer utility on the build agents to search for file handles for any *.nupkg files while the builds were running. After several builds I observed the anti-malware products briefly locking these files during the NuGet restore operations. Adding an exclusion to the anti-malware scanning rules prevented these locks as the files were no longer being scanned.
In our environment we use two different anti-malware products on different build agent servers. We encountered this issue with both products.
As far as the error message is concerned, I also came across it.
I debugged the “nuget restore” process, breaking at the point where the .nupkg is copied to the local repository, and then freezing the thread while the file was opened for writing. Sure enough, I got the exception in another task, due to the fact that the two packages had Ids where one was a prefix of the other. I filed an issue for this: https://nuget.codeplex.com/workitem/4465.
However, this is probably not exactly your problem, since the error in my case was on reading the .nupkg of the package with the “long” name, and I don’t think there is a package with an Id that is a prefix of Newtonsoft.Json (whereas the other way around is very possible: there are, for instance, NewtonSoft.JsonResult or NewtonSoft.Json.Glimpse).
I installed a newer Newtonsoft.Json and the problem disappeared.
You can turn on the Swabra build feature with the "Locking processes" option (requires handle.exe) and check whether any files are still locked after the build finishes.
If there are no locked files, then try running NuGet via a command-line build step instead of the NuGet Installer step. If the issue still reproduces, it most probably means the problem is related to NuGet itself.