How to link a shared library produced by recipe A with an executable from recipe B? - yocto

I am trying to link a shared library (libxyz.so) created by recipe A with an executable created by recipe B, by adding -lxyz in the Makefile that generates the executable.
But I see that recipe A depends on recipe B indirectly (through some recipe C), so I can't add A to DEPENDS in the .bb file of recipe B without creating a circular dependency.
Is there a way to link my executable with the library libxyz.so?
Also, from my understanding of dynamic libraries, the library must be physically available when the executable is built, right?

You can try to separate B into two recipes: one for the library and one for the executable.
Recipe A can then depend (indirectly) on the B library recipe, while the B executable recipe depends on both A and the B library.
The Makefile of B has to be modified accordingly to separate the compilation of the B library from the B executable.
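As a sketch, the split could look like this in the .bb files; all recipe and file names here are hypothetical:

```
# libb.bb -- hypothetical recipe building only B's library
SUMMARY = "Library part of B"
# ... SRC_URI, do_compile, do_install for the library only

# a.bb -- recipe A keeps its existing (indirect) dependency,
# which now only reaches the libb recipe, not the executable

# b-bin.bb -- hypothetical recipe building only B's executable
SUMMARY = "Executable part of B"
# A's libxyz.so is staged into this recipe's sysroot via DEPENDS,
# so the Makefile's -lxyz resolves at build time
DEPENDS = "a libb"
```

This works because DEPENDS makes BitBake populate the recipe sysroot with A's staged library before b-bin is compiled, which also answers the last question: yes, the library has to be available at link time.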


How to force right resource file to be used when calling from another module

Here is the scenario (newbie to Spark/Scala, so kindly bear with me):
1) I have module A with a config file under its resources folder. Class C in module A reads this config to get information about file paths.
2) I am trying to call class C (module A) from module B (after importing the dependencies of module A into module B).
3) The issue I am facing is that the class C (module A) code, when invoked from module B, uses the config from module B instead of its own config in module A.
Note: the code works perfectly when I call it within module A, but once I move this code to module B it uses the resources file in module B instead of the one in module A.
Both configs have the same name.
From the discussion regarding my original answer, which assumed Lightbend Config (commonly used in the Scala world), it's been discovered that some sort of config.xml is in src/main/resources of the respective modules. These files both end up on the classpath, and each module attempts (by an at this point unspecified means) to load the config.xml resource.
The JVM, when asked to load a resource, always loads the first one on the classpath which matches.
The easiest way in a small set of projects to address this collision is to not collide by giving the configs in each project different names.
An alternative which is viable in a larger set of projects is to use Lightbend Config which allows config file inclusion out of the box, as well as the ability to use environment variables to easily override configurations at runtime.
An elaborate strategy for a larger set of projects, depending on how compatible the XML schemas for the various modules' config.xml files are (if they're being read using a schema), is to define a custom Maven build step which embeds the config.xml files inside one another, so that code in module A and module B can share a single config.xml: A only cares about the portion of the config which came from A, and B only cares about the portion from B. I'm not particularly familiar with how one would do this in Maven, but I can't think of a reason why it wouldn't be possible.

How to add a missing library (or executable or other file) to Yocto/bitbake

For an application I am running, there is a runtime error because it cannot find the libwayland-client.so.0 shared object. How do I know which package provides it, and where do I add it? I tried as shown below, but it gave me a "Nothing PROVIDES" error.
CORE_IMAGE_EXTRA_INSTALL += "libwayland-client"
You don't typically work with single files when building Yocto images.
In reverse order:
You install packages into the image.
You build packages by using a recipe.
You find (or, as a last resort, write) recipes as part of layers.
Generally when something is missing you take the following steps:
Check the layer index: https://layers.openembedded.org/layerindex/branch/master/recipes/?q=wayland It tells you that there is a recipe called wayland in the layer openembedded-core.
Add the layer in question. openembedded-core is already contained in Yocto's poky (directly under the name meta, just to confuse the newcomer...), so there is nothing to add in this example.
Create the environment listing of the recipe in question: bitbake -e wayland >wayland.env
Check what packages the recipe in question creates: grep ^PACKAGES= wayland.env. In this case it is easy, because there is really only one package, wayland (-debug, -dev etc. are special-purpose packages that would not contain the library).
Add the package to the image by its package name. How to do that exactly depends on the image type you create. The variable name given in the question works for some images, but not all. Search for IMAGE_INSTALL in the manual https://www.yoctoproject.org/docs/2.6.1/mega-manual/mega-manual.html for other options.
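For this wayland case, the most common form would be the following (shown in the older underscore syntax matching the 2.6.1 manual linked above; recent releases use `IMAGE_INSTALL:append` instead):

```
# In your image recipe (or in conf/local.conf for a quick test);
# note the leading space inside the quotes
IMAGE_INSTALL_append = " wayland"
```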
Once you have built the recipe in question, you can also check what files are contained in a package. (In this case the recipe name and package name are identical, but that is not always true: some recipes build more than one installable package, so those obviously need different names.)
$ oe-pkgdata-util list-pkg-files wayland
wayland:
/usr/lib/libwayland-client.so.0
/usr/lib/libwayland-client.so.0.3.0
/usr/lib/libwayland-cursor.so.0
/usr/lib/libwayland-cursor.so.0.0.0
/usr/lib/libwayland-server.so.0
/usr/lib/libwayland-server.so.0.1.0

How to add my new library package to Yocto Extensible SDK (eSDK)?

I get that the Yocto eSDK is a snapshot of the pre-configured OpenEmbedded build system. But I want to verify that the custom library I add as a new meta layer (say, meta-foo) becomes part of the eSDK, so that user applications can include the header files of this custom library, link against its *.a archives, and link at runtime against its shared objects.
So, is it enough to define just the following in the recipe of this custom library:
RPROVIDES = "custom_lib1.so custom_lib2.so ..."
... to tell BitBake to copy those *.so libraries to the root filesystem?
And how do I ensure that the header files of this custom library are copied to the appropriate place, say, /usr/include?
Not exactly: RPROVIDES is used to declare runtime package dependencies, so you need to provide a package (recipe) name there, not file names.
First you need to create a recipe whose do_install function delivers the needed files, e.g. the headers into ${D}${includedir}/. Then add the created package (recipe) to RDEPENDS in the nativesdk-packagegroup-sdk-host.bb recipe.
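A minimal sketch of such a library recipe; the file names and metadata are made up for illustration, and a real recipe would also need LIC_FILES_CHKSUM, SRC_URI, and a do_compile step:

```
# custom-lib.bb -- illustrative recipe for the custom library
SUMMARY = "Custom library to be exposed through the eSDK"
LICENSE = "MIT"

do_install() {
    # stage the shared objects into the standard library directory
    install -d ${D}${libdir}
    install -m 0755 custom_lib1.so ${D}${libdir}/
    # stage the headers so applications can #include them
    install -d ${D}${includedir}
    install -m 0644 custom_lib.h ${D}${includedir}/
}
```

With the files installed under ${D}, the normal packaging rules put the .so files into the runtime package and the headers into the -dev package, which is what the SDK sysroot picks up.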

How can I get to the work directory of another recipe?

I want to create my own recipe in which I need both a binary from the U-Boot sources and a binary from the kernel sources.
Can I get the paths to those sources (the S variable) in my own recipe in a safe way?
Short answer, no.
You can take the binaries from ${DEPLOY_DIR_IMAGE} though, if your recipe depends on the deploy task of the respective recipe. That dependency is created by:
do_configure[depends] = "u-boot:do_deploy"
If your recipe includes the line above, the u-boot artifacts will be put into ${DEPLOY_DIR_IMAGE} before the do_configure task of your recipe is run.
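Putting it together for both binaries, a recipe fragment could look like this; the copied file names depend on your machine and U-Boot configuration and are only illustrative:

```
# depend on the deploy tasks of both providers
do_configure[depends] += "u-boot:do_deploy virtual/kernel:do_deploy"

do_configure() {
    # the deployed artifacts are now guaranteed to exist, e.g.:
    cp ${DEPLOY_DIR_IMAGE}/u-boot.bin ${B}/
    cp ${DEPLOY_DIR_IMAGE}/${KERNEL_IMAGETYPE} ${B}/
}
```

Using virtual/kernel rather than a concrete kernel recipe name keeps the fragment working across machines that select different kernel providers.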

How to add packages to populate SDK as a host tool?

I have created my own recipe for building my SW, which requires native perl during the build (e.g. invoking a perl script to generate code). There is no problem if I add my recipe to an image and use bitbake to build my recipe with the image.
Now I also want to build my SW with a populate SDK, but I found that when I generate the populate SDK, the native perl contains only a few modules, without what is necessary to build my SW. I have found two ways to generate the populate SDK with the additional perl modules:
Add TOOLCHAIN_HOST_TASK += "nativesdk-perl-modules" to my image .bb file before I generate the populate SDK
Add a bbappend file for nativesdk-packagegroup-sdk-host which includes "nativesdk-perl-modules" in RDEPENDS
For 1, it is an image-specific solution.
For 2, it is a global solution.
Now I am looking for a recipe-specific solution. Is there a way to add some configuration to my recipe's .bb file, so that the populate SDK built for any image which includes my recipe will contain these additional native perl modules?
I'm afraid there isn't really a way for a specific recipe to hint at adding specific dependencies to an SDK. The closest thing I can think of would be to code something into anonymous Python in an extra global class, which checks the included target packages and then adds dependencies to TOOLCHAIN_HOST_TASK if the right target packages are being installed. Even this wouldn't detect indirect dependencies of your specific recipe.
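A rough sketch of that idea, assuming a hypothetical global class and package name (both made up here):

```
# classes/sdk-extras.bbclass -- hypothetical class,
# enabled globally with INHERIT += "sdk-extras" in local.conf
python () {
    # look at the packages going into the current image
    pkgs = (d.getVar('IMAGE_INSTALL') or '').split()
    # if our SW is among them, pull the perl modules into the SDK host tools
    if 'my-sw' in pkgs:
        d.appendVar('TOOLCHAIN_HOST_TASK', ' nativesdk-perl-modules')
}
```

As noted above, this only catches packages listed directly in IMAGE_INSTALL; packages pulled in transitively would need a more involved check against the resolved package list.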