Yocto Build Maximum PARALLEL_MAKE, BB_NUMBER_THREADS

I recently tried to start a Yocto build with PARALLEL_MAKE and BB_NUMBER_THREADS left blank (i.e. commented out) on a host with 48 or 64 vCPUs. The build failed. With 32 vCPUs it still works. The Yocto version is still the old Krogoth release and cannot be updated at the moment. Does anybody know whether there is a limitation beyond 32 vCPUs? Is this specific to older Yocto versions or a general issue?
Update: The error is related to a custom recipe that compiles an rnnoise plugin:
../rnnoise-plugin/1.0-r0/temp/log.do_compile.1449627
rnnoise-plugin/1.0-r0/asound-pcm-plugin-rnnoise.c:16:28: fatal error: alsa/asoundlib.h: No such file or directory
#include <alsa/asoundlib.h>
^
As mentioned above, I only get this error when compiling on a 48 or 64 vCPU host with PARALLEL_MAKE and BB_NUMBER_THREADS left blank. If I set PARALLEL_MAKE = "-j 32" and BB_NUMBER_THREADS = "32" it compiles without any error.
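In local.conf terms, the working combination looks roughly like this (a minimal sketch; adjust the numbers to your host):
# conf/local.conf -- cap build parallelism explicitly instead of using the auto-detected core count
BB_NUMBER_THREADS = "32"
PARALLEL_MAKE = "-j 32"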

It seems you are running into race conditions when compiling with that many threads.
The Yocto reference recommends setting PARALLEL_MAKE so that make uses a maximum of 20 threads when more than one physical CPU is in use. If your system only has a single CPU, this might be a virtualization problem.
For single socket systems (i.e. one CPU), you should not have to override this variable to gain optimal parallelism during builds. However, if you have very large systems that employ multiple physical CPUs, you might want to make sure the PARALLEL_MAKE variable is not set higher than "-j 20".
Yocto Project - PARALLEL_MAKE
For full build speed you can also try to fix the underlying dependency issue by patching the affected Makefile (see "Debugging Parallel Make Races" in the Yocto documentation).
Edit:
It is possible to override PARALLEL_MAKE for a particular package in its recipe, which allows compiling that one package with e.g. make -j 2 while all other packages use maximum parallelism.
So it might be a suitable workaround to set PARALLEL_MAKE to a lower value in the recipe of the package where the dependency issue occurs, as sketched below.
But keep in mind that the Yocto reference also recommends limiting BB_NUMBER_THREADS (for which there is no package-specific override) to 20 on large systems with multiple CPUs, which may well apply when hardware virtualization is in use.
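A minimal sketch of such a per-recipe workaround, assuming the affected recipe is the rnnoise-plugin one mentioned above (a .bbappend works just as well as editing the recipe itself):
# rnnoise-plugin_1.0.bbappend (sketch) -- limit make parallelism for this package only
PARALLEL_MAKE = "-j 2"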

Related

Android kernel build flow with GKI introduced in Android 11

I'm trying to port Android 11 to my board (ODROID-N2) and I'm confused about how to build a board-specific kernel module and ramdisk. Could I get some help with this?
Recently, to solve kernel fragmentation, AOSP seems to be splitting the kernel into two different blocks:
(1) GKI (Generic Kernel Image)
(2) Vendor-specific kernel
For the GKI, I think I can use an image from ci.android.com.
For the vendor-specific portion (related to the vendor_boot partition),
is there a specific flow for this, or something to refer to?
I'm looking at {android kernel}/common/build.config.db845c as a case study, but I don't understand why 'gki_defconfig + db845c_gki.fragment' should be combined into one to generate the configuration for the kernel build. I thought we only build kernel modules for the vendor-specific portion.
*) For Android docs, I'm referring to the following:
https://source.android.com/setup/build/building-kernels
https://source.android.com/devices/architecture/kernel/generic-kernel-image
Indeed, with GKI (Generic Kernel Image), generic parts and vendor parts are separated. As of today, that distinction is quite clear: vmlinux is GKI and any module (*.ko) is vendor. That might change in the future if GKI modules prove to be useful; then there could be GKI (kernel + modules) + vendor modules.
The whole build process is quite new as well and still evolving along with this fundamental change to how Android kernels are developed. Historically, device kernels and modules were built in one logical step and compatibility was ensured by the combined build. Now there is a shift towards a world where the kernel and the modules can be built entirely separately without overlap. It is likely to get much easier in the future to build vendor modules without having to build much of the GKI kernel at the same time, but the way the build currently works is simply easier to set up for now.
Android 11 introduced the concept of "compliance" for GKI-based kernels. That means a shipped kernel is ABI-compatible with the GKI kernel. In theory you could literally swap out the kernel that you have and replace it with a build from ci.android.com. Note that a compatible kernel can carry significant (ABI-compatible) patches that the GKI does not have, so while compatible, it might not lead to the same experience.
Android 12 enables devices to be launched with signed boot images containing the GKI kernel. Since the Kernel<>Module ABI of those kernels is kept stable, this also allows independent updates of GKI kernel and vendor modules.
When you refer to the db845c build config, yes, this looks a bit confusing. This is a full blown config and the build indeed produces an (ABI compatible!) kernel and the vendor specific modules. The fragment can be considered a patch to the gki_defconfig in the sense that it does not change the core kernel, but enables the required modules.
For the final release, the kernel image from this build will be replaced by the GKI kernel image. But for development, the kernel that comes out of this build is perfectly fine to use.
In practice it helps downstream projects to develop core kernel features and modules at the same time, although changes for modules and the kernel need to go into different repositories (db845c, being a reference board, is an exception here).
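As a rough illustration of that fragment idea (the option names below are made up for illustration, not taken from the real db845c_gki.fragment), the fragment only enables extra modules on top of gki_defconfig:
# hypothetical defconfig fragment -- flips module options only, leaves the core kernel untouched
CONFIG_EXAMPLE_VENDOR_CLK_DRIVER=m
CONFIG_EXAMPLE_VENDOR_PINCTRL=m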
To somewhat answer your question on how to build the db845c kernel: ci.android.com also provides the build log along with the downloadable artifacts. For the android12-5.10 branch and the kernel_db845c target, a recent build can be found there. The build.log states at the beginning the instructions to reproduce it:
$ BUILD_CONFIG=common/build.config.db845c build/build.sh
This is the relevant step based on the general instructions on source.android.com - building kernels.
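A rough sketch of that flow, based on the general building-kernels instructions (the manifest branch and build-config names follow the android12-5.10 / db845c example above and may need adjusting):
# fetch the kernel manifest branch (branch name may differ for your target)
repo init -u https://android.googlesource.com/kernel/manifest -b common-android12-5.10
repo sync
# build the (ABI-compatible) kernel plus the db845c vendor modules
BUILD_CONFIG=common/build.config.db845c build/build.sh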

Include a precompiled zImage in a Yocto project

I have a custom board with an i.MX6DL chip and peripherals. I have compiled U-Boot, a zImage and a rootfs from examples provided by the manufacturer. But when I try to build Yocto from the Git repo with the latest releases, the result fails to run properly (some drivers are not working; the board boots and brings up the display, but the touchscreen does not work, for example).
Is there any way to include the precompiled binaries (zImage, U-Boot and device tree) in BitBake recipes? I'm very new to the Yocto Project, and I only need to get a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. U-Boot, kernel and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advisable to get the kernel/bootloader build sorted out properly soon.
Set PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" so that no kernel gets built, and something like MACHINE = "qemuarm" to set up an armv7 build on Poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and you can proceed from there.
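A minimal local.conf sketch of those two settings (assuming a Poky release newer than 3.0, where qemuarm is armv7):
# conf/local.conf -- rootfs-only starting point, no kernel built by Yocto
MACHINE = "qemuarm"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"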
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the Freenode server) whether they know about a proper BSP layer. FSL chips are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there is a high chance that meta-freescale just does the trick nicely.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the armv7 architecture that the i.MX6DL is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.

Where do the "virtual/..." terms come from?

In Bitbake I can build e.g. the Linux Kernel with bitbake virtual/kernel or U-Boot with bitbake virtual/bootloader.
Where do those "virtual/..." terms come from?
I used find for patterns such as "virtual/kernel" in the poky directory, but there are countless results and I don't know where to look.
Can I, for example, point virtual/bootloader to a custom recipe if I have written my own bootloader?
From the BitBake user manual:
As an example of adding an extra provider, suppose a recipe named
foo_1.0.bb contained the following:
PROVIDES += "virtual/bar_1.0"
The recipe now provides both "foo_1.0" and "virtual/bar_1.0". The "virtual/" namespace is often used to denote
cases where multiple providers are expected with the user choosing
between them. Kernels and toolchain components are common cases of
this in OpenEmbedded.
Sometimes a target might have multiple providers. A common example is
"virtual/kernel", which is provided by each kernel recipe. Each
machine often selects the best kernel provider by using a line similar
to the following in the machine configuration file:
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
Look in your-meta-layer/conf/machine/: this is where the machine configuration files live, i.e. where such PREFERRED_PROVIDER settings are made.
In your-meta-layer/recipes-bsp/barebox (or u-boot) you can find the bootloader recipes (.bb files).
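To answer the last part of the question: yes, you can point virtual/bootloader at your own recipe. A minimal sketch, assuming a hypothetical recipe named my-bootloader_1.0.bb:
# my-bootloader_1.0.bb -- advertise this recipe as a bootloader provider
PROVIDES += "virtual/bootloader"
# conf/machine/your-machine.conf -- select it instead of u-boot/barebox
PREFERRED_PROVIDER_virtual/bootloader = "my-bootloader"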

How to increase memory for the embedded JVM of a deployed JavaFX application?

I am using an .fxbuild script to build a JavaFX application. I used the packaging format "all" to include its own runtime. Now I am wondering how I can define any runtime parameters.
Since we noticed far more OutOfMemory issues in the deployed version than in the local development version, we monitored it with VisualVM and saw that the embedded JVM is (by default?) configured to use only 256 MB of RAM. How can I increase the maximum available RAM for the included JVM?
The application is launched by an .exe file after being installed on the system.
Update:
Roland's answer is correct. I just made the mistake of adding the <fx:platform> tag at the bottom of the Ant script rather than inside the appropriate <fx:deploy> tag, which causes the <fx:platform> tag to be ignored and the JVM to fall back to its defaults: 256 MB max RAM on 32-bit and a quarter of the available RAM on 64-bit.
Please read the Packaging Basics, especially chapter "5.8.2 Customizing JVM Setup".
Excerpt of what you need:
<fx:platform javafx="2.1+">
<fx:jvmarg value="-Xmx400m"/>
...
</fx:platform>

How can I update a buildroot setup to a later version?

A project I've inherited uses a very old version of buildroot, but I'd like to change it to use a feature that was added only in a later buildroot release.
Is there a straightforward way of updating a buildroot setup to use a later release?
e.g. if I save out a defconfig file and import that in a later buildroot release, would that just work, or are there practical reasons why not? Are there additional configuration files I'd need to carry across (e.g. kernel, busybox, etc)? Thanks!
No.
In fact, it's worse than that.
You can start by using a newer Buildroot version with your old default configuration file, but you will need to check the resulting configuration carefully for deprecated packages and packages whose versions are not compatible with whatever application software you might be adding to the Buildroot filesystem. The names of some packages (e.g. opencv) change over time, so you need to eyeball the resulting .config file to make sure that all of the packages that you need are there.
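A sketch of the mechanical part of that migration (paths and the board name are placeholders; the real work is the review afterwards):
# in the old Buildroot tree: export a minimal defconfig
make savedefconfig BR2_DEFCONFIG=$PWD/../myboard_defconfig
# in the new Buildroot tree: import it and review the result
cp ../myboard_defconfig configs/myboard_defconfig
make myboard_defconfig
make menuconfig        # check packages, versions, toolchain, kernel/busybox settings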
If you build a toolchain or Linux kernel in Buildroot (commonly done but not generally good practice), then you need to make sure that the new configuration is set to build the old version of the kernel and compiler. These might be too old to build some of the packages in the newer version of Buildroot.
If you upgrade your kernel at the same time that you upgrade Buildroot, then you need to port your old kernel config file to the new kernel version. Since the kernel configuration options change frequently, you will probably need to start from defconfig for your board and then use make menuconfig to manually add the configs that you need.
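For the kernel side, that porting sequence looks roughly like this (ARCH and the defconfig name are placeholders for your board):
# in the new kernel source tree (sketch)
make ARCH=arm yourboard_defconfig
make ARCH=arm menuconfig       # re-add the options your old config provided
make ARCH=arm savedefconfig    # keep a minimal defconfig to carry forward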
Busybox is a bit less volatile, so there is a chance that your old config will work.
If your old Buildroot configuration uses postbuild or postimage scripts, you will need to review them, but my guess is that they will not need any changes.
You should allocate at least a week for this work, maybe more, depending on the complexity of the configuration. Remember that if you are forced to use an older vendor kernel due to patches for a specific SoC, for example, the Freescale 2.6.33.9 kernel for the BSC9131, then the upgrade that you want to do might not be possible without doing six to twelve months of work to port the vendor's kernel patches to a newer kernel version.
Cheers.