I have an application A that depends on library B. But library B can only be built for the target, i.e. non-native. I also need to build A for both the host and the target. Unfortunately, I get an error like this:
Nothing PROVIDES B-native (but A-native DEPENDS or otherwise requires it)
In the recipe for A, I have tried numerous variations of

DEPENDS_append = "B"
DEPENDS_remove = "B-native"
DEPENDS-${PN}-native_remove = "B-native"
to no avail.
I'm sure there is a simple way to do this but I haven't found the correct combination of commands yet.
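For reference, BitBake's class overrides can scope a dependency to one build variant; a minimal sketch, assuming A's recipe uses BBCLASSEXTEND = "native" and the same pre-Honister override syntax as above:

```
# In A's recipe: depend on B only when building for the target,
# so the native variant of A no longer requires B-native
DEPENDS_append_class-target = " B"
```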
Setuptools supports dynamic metadata for project properties in pyproject.toml, and as a PEP 517 backend it also has the option to specify build requirements by implementing get_requires_for_build_wheel. But I cannot figure out whether it takes advantage of this and implements a way to specify build requirements based on configuration options, and if so, how to specify that in pyproject.toml.
I naively tried
[build-system]
requires = {file = "requirements-build.txt"}
but that understandably leads to pip complaining “This package has an invalid build-system.requires key in pyproject.toml. It is not a list of strings.” And adding
[project]
dynamic = ["build-system.requires"]
also doesn't work, because the possible options of dynamic are explicitly enumerated. I would be somewhat surprised if there wasn't an option for this, given that all the infrastructure elements are available, but how do I specify it?
As far as I know, it is not possible.
If it is really necessary for your use case, and you think it is worth the cost, maybe it is possible to add some dynamic behavior here by (mis-)using the "in-tree build backends" feature.
Context:
I have a few projects in the same solution which I push through a pipeline that packs them into NuGet packages and stores them in my Azure Artifacts storage.
The steps are:
Install NuGet
NuGet restore
Build solution
Run tests
NuGet pack (dotnet pack, to be specific, as they are .NET Standard targeting)
NuGet push (to Artifacts storage)
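The steps above can be sketched as Azure Pipelines YAML; the task versions, inputs, and feed name below are assumptions for illustration, not taken from the question:

```yaml
steps:
  - task: NuGetToolInstaller@1        # Install NuGet
  - task: NuGetCommand@2              # NuGet restore
    inputs:
      command: restore
      restoreSolution: '**/*.sln'
  - task: VSBuild@1                   # Build solution
    inputs:
      solution: '**/*.sln'
  - task: VSTest@2                    # Run tests
  - task: DotNetCoreCLI@2             # dotnet pack
    inputs:
      command: pack
      packagesToPack: '**/*Proj1.csproj;**/*Proj2.csproj;**/*Proj3.csproj'
  - task: NuGetCommand@2              # NuGet push (to Artifacts storage)
    inputs:
      command: push
      publishVstsFeed: 'MyFeed'       # placeholder feed name
```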
However, the solution contains a few as-yet-unfinished package projects that I don't want to pack yet, as well as my test project, which should also be excluded.
Simple enough, in that case my file matching pattern would just include the names of the projects I want, like:
'**/*Proj1.csproj;**/*Proj2.csproj;**/*Proj3.csproj;'
But now I want a few new projects to be added to this shipping 'set'. Therefore my pattern will have to include them as well.
'**/*Proj1.csproj;**/*Proj2.csproj;**/*Proj3.csproj;**/*Proj4.csproj;**/*Proj5.csproj;'
As you can see, that's hardly generic. I have to modify the pattern every time something changes or gets included, or, if I reverse it, every time I want to exclude a project.
I'm looking to apply the same pipeline, or at least the structure (as much as I can), to a few solutions of the same type, which I'd like to make possible with a few naming conventions I have in place.
Question:
Is there a way to turn:
'**/*Proj1.csproj;**/*Proj2.csproj;**/*Proj3.csproj;**/*Proj4.csproj;**/*Proj5.csproj;'
into
'**/Packages/**.csproj;' //or something very similar
Where 'Packages' is a VS solution folder (actual folders don't work at the root of a solution), with the end goal that every project inside the 'Packages' solution folder is discovered (and packed) while everything outside it is ignored.
The problem is that solution folders are not an actual part of the path structure...
PS -
Workarounds I have considered -
Have a keyword in the names of all projects I want to ignore like "Foo.Ignore.csproj" and then exclude all that contain "Ignore" in the name.
Unloading/removing the unfinished projects from the solution, but (a) I want to make sure they are kept in a buildable and testable state, and (b) since they remain in the repository path, they are still discoverable by the matching pattern.
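For what it's worth, the first workaround can be expressed with the tasks' file-matching syntax, which supports exclusions via a leading `!`; a sketch (the exact naming convention is an assumption):

```
packagesToPack: '**/*.csproj;!**/*.Ignore.csproj;!**/*Tests.csproj'
```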
However, I don't feel this is such a far-fetched use case that it wouldn't have a "supported" solution (I could be wrong, of course). Or is there a different best practice established?
I have added dependencies through DEPENDS +=. During do_prepare_recipe_sysroot, in what order are they copied into the recipe-sysroot?
How can I enforce this order?
For example, for recipeA:

DEPENDS += "recipeB recipeC"
DEPENDS += "recipeD"

where recipeB itself depends on recipeD. Here recipeC and recipeD both provide header.h; which one will end up in the recipe-sysroot?
You can't do this. Dependency ordering is handled automatically by Yocto, and the same file cannot be provided by different recipes. You will get an error like this:
Exception: FileExistsError: [Errno 17] File exists:
So you need to fix up the paths. For example, if recipe C is for application X, install its header as usr/include/X/header.h and recipe D's as usr/include/Y/header.h, or name the files differently.
As far as the dependencies are concerned, you don't need to worry about ordering. Yocto automatically parses the recipes and determines which one to build first in its task queue.
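A sketch of that fix on recipe C's side, using the same pre-Honister override syntax as the question (the paths and file names are placeholders):

```
# In recipeC: install header.h under a recipe-specific subdirectory
# so it no longer clashes with recipeD's copy in the shared sysroot
do_install_append() {
    install -d ${D}${includedir}/X
    install -m 0644 ${S}/header.h ${D}${includedir}/X/header.h
}
```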
I have a custom machine layer based on https://github.com/jumpnow/meta-wandboard.
I've upgraded the kernel to 4.8.6 and want to add X11 to the image.
I'm modifying the image recipe (console-image.bb).
Since wandboard is based on i.MX6, I want to include the xf86-video-imxfb-vivante package from meta-fsl-arm.
However, it fails complaining about inability to build kernel-module-imx-gpu-viv. I believe that happens because xf86-video-imxfb-vivante DEPENDS on imx-gpu-viv which in turn RDEPENDS on kernel-module-imx-gpu-viv.
I realize that those dependencies have been created with meta-fsl-arm BSP and vanilla Poky distribution. But those things are way outdated for wandboard, hence I am using the custom machine layer with modern kernel.
The kernel is configured to include the Vivante DRM module and I really don't want the kernel-module-imx-gpu-viv package to be built.
Is there a way to exclude it from RDEPENDS? Can I somehow swear my health to the build system that I will take care of this specific run-time dependency myself?
I have tried blacklisting kernel-module-imx-gpu-viv by setting PNBLACKLIST[kernel-module-imx-gpu-viv] in my local.conf, but that's only part of a solution: it helps avoid the build failures, but the packaging process still fails.
IIUC, your problem comes from these lines in the imx-gpu-viv recipe:
FILES_libgal-mx6 = "${libdir}/libGAL${SOLIBS} ${libdir}/libGAL_egl${SOLIBS}"
FILES_libgal-mx6-dev = "${libdir}/libGAL${SOLIBSDEV} ${includedir}/HAL"
RDEPENDS_libgal-mx6 += "kernel-module-imx-gpu-viv"
INSANE_SKIP_libgal-mx6 += "build-deps"
I would actually qualify this RDEPENDS as a bug: kernel module dependencies are usually specified as RRECOMMENDS, because most modules can be compiled into the kernel, producing no separate package at all while still providing the functionality. But that's another issue.
There are several ways to fix this problem. The first general route is to tweak RDEPENDS for the package. It's just a BitBake variable, so you can either assign it some other value or remove a portion of it. In the first case it's going to look somewhat like this:
RDEPENDS_libgal-mx6 = ""
In the second one:
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"
Obviously, these two options have different implications for your present and future work. In general I would opt for the softer option, the second one, because it has less potential for breakage when you update the meta-fsl-arm layer, which may change the imx-gpu-viv recipe in any way. But when you're overriding a more complex recipe with big lists in variables and modifying it heavily (not just removing a thing or two), it may be easier to maintain with full hard assignment of the variables.
Now there is also a question of where to do this variable mangling. The main option is .bbappend in your layer, that's what appends are made for, but you can also do this from your distro configuration (if you're maintaining your own distro it might be easier to have all these tweaks in one place, rather than sprayed into numerous appends) or from your local.conf (which is a nice place to quickly try it out, but probably not something to use in longer term). I usually use .bbappend.
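For example, a minimal bbappend in your layer could contain just the one line; the file path and the `%` version wildcard below are assumptions based on the recipe name:

```
# recipes-graphics/imx-gpu-viv/imx-gpu-viv_%.bbappend
RDEPENDS_libgal-mx6_remove = "kernel-module-imx-gpu-viv"
```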
But there is also a completely different approach to this problem: rather than fixing the package's dependencies, you can fix what some other package provides. If, for example, you have a kernel configured with the imx-gpu-viv module built right into the main zImage, you can do
RPROVIDES_kernel-image += "kernel-module-imx-gpu-viv"
(also in .bbappend, distro configuration or local.conf) and that's it.
In any case, your approach to fixing this problem should reflect the difference between your setup and the recipe's assumptions. If you do have the module, but in a different package, then go for RPROVIDES; if some other module provides the same functionality to the libgal-mx6 package, then fix libgal-mx6's dependencies (and it's better to fix them properly, meaning not only drop what you don't need but also add whatever is relevant for your setup).
I'm writing a small library which I'd like to be backwards compatible with older versions of an API, yet use features of the latest API when possible.
So for example, I have a project which uses an external API, which I'll call FooFoo_v1.
Initially, my code looked like this:
// in Widget.scala
val f = new Foo
f.bar
Foo has since released a new version of their API, FooFoo_v2, which adds the bat method. So long as I'm compiling against the new version, this works fine:
// in Widget.scala
val f = new Foo
f.bar
f.bat
But if you try to build against FooFoo_v1, the build obviously fails. Since the bat feature is truly optional, I'd like to allow folks to build my code against either FooFoo_v1 or FooFoo_v2.
Ignoring the details of the dependency management, what's the right high level approach for something like this? My aim is to keep it as simple as possible.
I think you should split your library into two pieces: one with the features used from FooFoo_v1, and another depending on the first one and on FooFoo_v2, using the features from FooFoo_v2. How to accomplish this depends on your code... If it's too difficult, it's better to follow @rex-kerr's advice and maintain two branches.
I would simply keep separate branches of the project in a repository (one which is sufficiently robust to allow you to edit one and merge effortlessly into the others--git would be my first choice).
If you must do the selection at runtime, then you're limited to using reflection for any new methods.
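A minimal sketch of that reflection approach in Scala (the class and method names are taken from the question; error handling is kept deliberately simple):

```scala
// Call the optional bat() method only when the linked FooFoo version has it.
val f = new Foo
f.bar
try {
  f.getClass.getMethod("bat").invoke(f)   // present in FooFoo_v2
} catch {
  case _: NoSuchMethodException => ()     // FooFoo_v1: feature unavailable
}
```

Note that reflection trades a compile-time error for a runtime lookup, so the compile-time approaches (separate modules or branches) are preferable when the choice can be made at build time.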