Determine machine architecture reliably in a BitBake recipe - yocto

I am writing a recipe for a package which needs to be aware of the underlying machine's architecture. In other words, I would like a string such as aarch64 (or arm64) for a 64-bit Arm system, and x86_64 for a 64-bit Intel system.
So far, I have identified:
MACHINE - This seems to be whatever the meta-* layer author decides to name their machine, and it may or may not contain the architecture. For example, beaglebone is of no use.
MACHINE_ARCH - This seems to be close to what I'm looking for. However, taking this BSP layer as an example and doing a quick search, it doesn't seem as though this variable is set anywhere; it is only read in a few packages.
TUNE_PKGARCH - Maybe the best bet so far. But what format is this variable in, and what architecture naming conventions are used? Also, the aforementioned BSP layer doesn't seem to set this anywhere either.
I would have thought that knowing the machine architecture in a well-defined format is important, but it doesn't seem to be so simple. Any advice?

I'm accustomed to doing this with uname -m (Windows fans can use the output of SET processor), so for me in Yocto it ends up being a toss-up:
According to the Mega-Manual entry for TARGET_ARCH:
TARGET_ARCH
The target machine's architecture. The OpenEmbedded build system supports many
architectures. Here is an example list of architectures supported. This list is by
no means complete as the architecture is configurable:
arm
i586
x86_64
powerpc
powerpc64
mips
mipsel
uname -m is a bit better since you get subarchitectural information as well. From the machines I have access to at this moment:
Intel-based Nuc build system: x86_64
ARM embedded system: armv7l
Raspberry Pi 4B: aarch64
I have found that the GNU automake (native) and libtool (available for target) packages compute a useful variable named UNAME_MACHINE_ARCH. If you are using libtool already, or are willing to take it on just for the purpose of having this done for you :-), you can solve it this way. Look in the built tree for files named config.guess.
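For a quick check, you can locate one of those scripts under the build tree and run it directly; run this way it reports the machine it executes on (the build host), printing a GNU triplet (the path below is illustrative):
$ find tmp/work -name config.guess | head -n 1
$ sh tmp/work/.../config.guess    # prints something like x86_64-pc-linux-gnu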
You may be able to get by more generically than libtool by using Yocto BUILD_ARCH:
BUILD_ARCH
Specifies the architecture of the build host (e.g. i686). The OpenEmbedded build
system sets the value of BUILD_ARCH from the machine name reported by the uname
command.
So play with these and make your own choice depending on your project's circumstances.
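For instance, you can dump what these variables expand to for your configuration with bitbake -e (core-image-minimal is just an example target), and a recipe can branch on TARGET_ARCH directly. The following is only a sketch; the mytool-* file names are made up for illustration:
$ bitbake -e core-image-minimal | grep -E '^(TARGET_ARCH|TUNE_PKGARCH|MACHINE_ARCH|BUILD_ARCH)='
# hypothetical recipe fragment: install a prebuilt, architecture-specific binary
do_install() {
    case "${TARGET_ARCH}" in
        aarch64) install -D -m 0755 ${WORKDIR}/mytool-arm64  ${D}${bindir}/mytool ;;
        x86_64)  install -D -m 0755 ${WORKDIR}/mytool-x86_64 ${D}${bindir}/mytool ;;
        *)       bbfatal "unsupported TARGET_ARCH: ${TARGET_ARCH}" ;;
    esac
}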

Related

Raspberry Pi "tools" (raspistill, vcgencmd, ...) not included with buildroot

I've created a basic image with buildroot (buildroot-2021.02.1), containing some software, and I also selected the RPI firmware in order to use the camera and some Raspberry Pi tools: Target packages --> Hardware handling --> Firmware --> ([x] rpi-firmware) --> Firmware to boot, as mentioned here.
But the tools (raspistill, vcgencmd, ...) are not included. The question is how to include them, and why they are not included.
At some point in time it must have been working, see: RaspberryPi camera with buildroot
More details:
In the logs of buildroot the following lines show up:
>>> rpi-firmware d016a6eb01c8c7326a89cb42809fed2a21525de5 Installing to target
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-staging.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-host.before: No such file or directory
and the binaries do exist inside this package. They are downloaded from http://sources.buildroot.net/rpi-firmware/ where the tars contain the actual tools, but buildroot only downloads them and does not copy them into the final image. Maybe this is because some files-list.txt file(s) are missing, as pointed out by the error message; maybe those files whitelist the files to copy. But I could not find documentation about this.
For 64-bit builds, the binaries in the (then manually downloaded) tar file could not be executed, because they are 32-bit executables: firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/opt/vc/bin/vcgencmd: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.1.9, not stripped. On a 32-bit build with buildroot it also does not work, because the shared libraries are missing, even though the full structure from the archive has been placed under /opt/vc/{bin|lib|...} as on a standard RPI image.
I'm unsure how to proceed with the problem, diagnose it and fix it.
EDIT: maybe these are two different problems. I read the linked SO question once again and compared the files fixup.dat and start.elf (which contain the RPI hardware stuff to make the tools work) in the boot.vfat of the built image with the images in buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/boot, and the files fixup_x.dat and start_x.elf are taken from there, which is in accordance with the mentioned SO question. And nowhere is it indicated that the tools for the Raspberry Pi are compiled; they are only inside this tar archive. Maybe one needs to compile them separately, and this package is not designed to integrate those tools.
I figured out the solution and this might be useful for future reference, so I put the solution here.
One has to differentiate between:
the "firmware" (that is the mentioned tar from the question), which is turned on with BR2_PACKAGE_RPI_FIRMWARE=y in buildroot. This causes then that the start.elf and fixup.dat contain the correct data from this tar. The fact that the tars also contain the desired binaries is only "coincidence"
the desired applications, which are packaged as "userland" (see here). If one finds the line # BR2_PACKAGE_RPI_USERLAND is not set in .config in the buildroot project's root directory and replaces it with BR2_PACKAGE_RPI_USERLAND=y, the applications (vcgencmd, raspistill, ...) are built and included in the final image (I could not find the location in make menuconfig, but that is no problem if you modify the variable directly; see the sketch below)
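A minimal sketch of that direct edit plus rebuild, run from the buildroot top-level directory (make oldconfig then resolves any dependent options):
$ sed -i 's/^# BR2_PACKAGE_RPI_USERLAND is not set$/BR2_PACKAGE_RPI_USERLAND=y/' .config
$ make oldconfig
$ make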
Therefore, the question is answered. But you might run into some issues ;-):
Question 1: I get a "Segmentation fault" when running raspistill
For raspistill -o i.jpg you might get the following output:
mmal: mmal_vc_shm_init: could not initialize vc shared memory service
mmal: mmal_vc_component_create: failed to initialise shm for 'vc.camera_info' (7:EIO)
mmal: mmal_component_create_core: could not create component 'vc.camera_info' (7)
mmal: Failed to create camera_info component
Segmentation fault
(with an empty image file), see here for details.
Answer: this is related to /dev/vcsm (or /dev/vcsm-cma) missing, which is used for camera control / video decoding "stuff". Symlinking it to /dev/vc-mem, as suggested in various places around the net, does not help.
Solution: I was using the latest buildroot with kernel 5.10.x (buildroot-2021.02.1) and simply "downgraded" to buildroot-2020.02.1 and rebuilt it; /dev/vcsm appears and everything works fine.
Question 2: I want to do it in docker containers
Answer: No problem. I used balenalib/rpi-raspbian:latest (as suggested here) and it worked flawlessly by running docker run --privileged --device=/dev/vchiq --rm -it balenalib/rpi-raspbian:latest. For this, only the proper devices and support are needed, so the package BR2_PACKAGE_RPI_USERLAND=y could be completely omitted.
Question 3: Does it work with 64-bit?
Answer: No. I tried out the recent version (raspberrypi3_64_defconfig) of buildroot and the version from Feb 2020 as mentioned, and for both, /dev/vcsm (or /dev/vcsm-cma) is missing: Linux cpi64 4.19.97-v8 #1 SMP PREEMPT Sat Apr 17 14:13:11 CEST 2021 aarch64 GNU/Linux

Android kernel build flow with GKI introduced from Android 11

I'm trying to port Android 11 to my board (odroid-n2) and I'm confused about how to build a board-specific kernel module and ramdisk. Could I get some help with this?
Recently, to solve kernel fragmentation, it seems AOSP is splitting the kernel into two different blocks:
(1) GKI(Generic Kernel Image)
(2) Vendor specific kernel
For GKI, I think I can use an image from ci.android.com.
For the vendor-specific portion (related to the vendor_boot partition), is there a specific flow for this, or something to refer to?
I'm referring to {android kernel}/common/build.config.db845c as a case study, but I don't understand why 'gki_defconfig + db845c_gki.fragment' should be combined into one to generate the configuration for the kernel build. I thought we only build kernel modules for the vendor-specific portion.
*) For the Android docs, I'm referring to the following:
https://source.android.com/setup/build/building-kernels
https://source.android.com/devices/architecture/kernel/generic-kernel-image
Indeed, with GKI (Generic Kernel Image), generic parts and vendor parts are separated. As of today, that distinction is quite clear: vmlinux is GKI and any module (*.ko) is vendor. That might change in the future if GKI modules prove to be useful; then there could be GKI (kernel + modules) + vendor modules.
The whole build process is quite new as well and still evolving with this quite fundamental change to how Android kernels are developed. Historically, device kernels and modules were built in one logical step and compatibility was ensured by the combined build. Now there is a shift towards a world where the kernel and the modules can be built cleanly and entirely separately, without overlap. It is likely to get much easier in the future to build vendor modules without having to build too much of the GKI kernel at the same time; yet, the way the build currently works, it is easier to set it up as it is.
Android 11 introduced the concept of "compliance" for GKI-based kernels. That means a shipped kernel is ABI-compatible with the GKI kernel. In theory, that means you could literally swap out the kernel that you have and replace it with a build from ci.android.com. Note that a compatible kernel can have significant (ABI-compatible) patches that the GKI does not have, so while compatible, it might not lead to the same experience.
Android 12 enables devices to be launched with signed boot images containing the GKI kernel. Since the Kernel<>Module ABI of those kernels is kept stable, this also allows independent updates of GKI kernel and vendor modules.
When you refer to the db845c build config, yes, this looks a bit confusing. This is a full blown config and the build indeed produces an (ABI compatible!) kernel and the vendor specific modules. The fragment can be considered a patch to the gki_defconfig in the sense that it does not change the core kernel, but enables the required modules.
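Conceptually, such a fragment is applied on top of gki_defconfig with the kernel's scripts/kconfig/merge_config.sh, roughly like this (paths abbreviated; check the build config itself for the exact invocation it uses):
$ cd common
$ scripts/kconfig/merge_config.sh -m -r arch/arm64/configs/gki_defconfig <path-to>/db845c_gki.fragment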
For the final release, the kernel image from this build will be replaced by the GKI kernel image. But for development, the kernel that comes out of this build is perfectly fine to use.
In practice, it helps downstream projects to develop core kernel features and modules at the same time, though changes for modules and the kernel need to go into different repositories (db845c, being a reference board, is an exception here).
To somewhat answer your question on how to build the db845c kernel, ci.android.com also provides the build log along with the artifacts to download. For the android12-5.10 branch and the target kernel_db845c, a recent build can be found here. The build.log states at the beginning the instructions to reproduce this:
$ BUILD_CONFIG=common/build.config.db845c build/build.sh
This is the relevant step based on the general instructions on source.android.com - building kernels.
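For comparison, the GKI kernel image itself (without any vendor modules) is produced from its own build config in the same tree, e.g.:
$ BUILD_CONFIG=common/build.config.gki.aarch64 build/build.sh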

Extend risc-v instructions on QEMU

I want to extend the QEMU TCG (tiny code generator) to accept new instructions for the RISC-V guest on my x86 machine. However, I have no experience with how the TCG works, so I was wondering if someone could give me some useful pointers on where to start understanding how the TCG works in the QEMU source code.
I know there is a frontend and backend, but I don't really understand where the translation actually happens, and how are the instruction translated.
I also saw the insn32.decode file in target/riscv defining the opcodes for various operators like lui, but I am not sure how that file is used and whether it's for the TCG target (i.e. a RISC-V host) or the QEMU guest.
I am looking for something like
QEMU - Code Flow [ Instruction cache and TCG]
but up to date with the current QEMU version.
Any help is appreciated.

Include precompiled zImage in yocto project

I have a custom board with an imx6dl chip and peripherals. I have compiled u-boot, zImage and rootfs from examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, it fails to run properly (some drivers not working; the board boots and the display interface comes up, but the touchscreen is not working, for example).
Is there any way to include the precompiled binaries (zImage, u-boot and device tree) in BitBake recipes? I'm very new to the Yocto project, and only need to get a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings, to be made in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
Set PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an armv7 build on poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and then you can proceed from there.
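A minimal local.conf sketch for that starting point (assuming poky 3.0 or later):
# local.conf additions: build a root filesystem only, no kernel
MACHINE = "qemuarm"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
After that, bitbake core-image-minimal should produce a root filesystem image without attempting a kernel build.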
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the freenode server) if they know about a proper BSP layer. FSL things are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there's a high chance that meta-freescale just does the trick nicely.
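If such a layer turns out to fit, pulling it in is typically just a clone plus bitbake-layers add-layer (URL and relative path shown as an example, run from your build directory):
$ git clone https://github.com/Freescale/meta-freescale.git ../meta-freescale
$ bitbake-layers add-layer ../meta-freescale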
Addition:
@Martin pointed out that the mention of QEMU is misleading. This is just the easiest way to make Yocto build a userland for the armv7 architecture that the imx6dl is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.

Why are there different packages for the same architecture, but different OSes?

My question is rather conceptual. I noticed that there are different packages for the same architecture, like x86-64, but for different OSes. For example, RPM offers different packages for Fedora and OpenSUSE for the same x86-64 architecture: http://www.rpmfind.net/linux/rpm2html/search.php?query=wget - not to mention different packages served up by YUM and APT (for Ubuntu), all for x86-64.
My understanding is that a package contains binary instructions suitable for a given CPU architecture, so that as long as CPU is of that architecture, it should be able to execute those instructions natively. So why do packages built for the same architecture differ for different OSes?
Considering just different Linux distros:
Besides being compiled against different library versions (as Hadi described), the packaging itself and default config files can be different. Maybe one distro wants /etc/wget.conf, while maybe another wants /etc/default/wget.conf, or for those files to have different contents. (I forget if wget specifically has a global config file; some packages definitely do, and not just servers like Exim or Apache.)
Or different distros could enable / disable different sets of compile-time options. (Traditionally set with ./configure --enable-foo --disable-bar before make -j4 && make install).
For wget, choices may include which TLS library to compile against (OpenSSL vs. gnutls), not just which version.
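As an illustration, two distros could configure the very same wget sources differently (hypothetical choices; --with-ssl is a real wget configure switch, the sysconfdir paths are made up):
# distro A
$ ./configure --with-ssl=openssl --sysconfdir=/etc
# distro B
$ ./configure --with-ssl=gnutls --sysconfdir=/etc/wget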
So ABIs (library versions) are important, but there are other reasons why every distro has its own packages of things.
Completely different OSes, like Linux vs. Windows vs. OS X, have different executable file formats: ELF vs. PE vs. Mach-O. All three of those formats contain x86-64 machine code, but the metadata is different. (And OS differences mean you'd want the machine code to do different things.)
For example, opening a file on Linux or OS X (or any POSIX OS) can be done with an int open(const char *pathname, int flags, mode_t mode); system call. So the same source code works for both those platforms, although it can still compile to different machine code, or actually in this case very similar machine code to call a libc wrapper around the system call (OS X and Linux use the same function calling convention), but with a different symbol name. OS X would compile to a call to _open, but Linux doesn't prepend underscores to symbol names, so the dynamic linker symbol name would be open.
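You can see the symbol-name side of this on a Linux host with standard tools (illustrative; any dynamically linked program that calls open() will do):
$ file "$(command -v wget)"                # ELF on Linux; the same program on OS X would be Mach-O
$ nm -D "$(command -v wget)" | grep open   # the undefined symbol has no leading underscore; OS X's nm would show _open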
The mode constants for open might be different. e.g. maybe OS X defines O_RDWR as 4, but maybe Linux defines it as 2. This would be an ABI difference: same source compiles to different machine code, where the program and the library agree on what means what.
But Windows isn't a POSIX system. The WinAPI function for opening a file is HFILE WINAPI OpenFile(LPCSTR lpFileName, LPOFSTRUCT lpReOpenBuff, UINT uStyle);
If you want to do anything invented more recently than opening / closing files, especially drawing a GUI, things are even less similar between platforms and you will use different libraries. (Or a cross-platform GUI library will use a different back-end on different platforms.)
OS X and Linux both have Unix heritage (real or as a clone implementation), so the low-level file stuff is similar.
These packages contain native binaries that require a particular Application Binary Interface (ABI) to run. The CPU architecture is only one part of the ABI. Different Linux distros have different ABIs and therefore the same binary may not be compatible across them. That's why there are different packages for the same architecture, but different OSes. The Linux Standard Base project aims at standardizing the ABIs of Linux distros so that it's easier to build portable packages.