I'm trying to port Android 11 to my board (ODROID-N2) and I'm confused about how to build a board-specific kernel module and ramdisk. Could I get some help with this?
Recently, to address kernel fragmentation, it seems AOSP is splitting the kernel into two different blocks:
(1) GKI(Generic Kernel Image)
(2) Vendor specific kernel
For GKI, I think I can use an image from ci.android.com.
For the vendor-specific portion (related to the vendor_boot partition),
is there a specific flow for this, or something to refer to?
I'm referring to {android kernel}/common/build.config.db845c as a case study, but I don't understand why 'gki_defconfig + db845c_gki.fragment' should be combined into one to generate the configuration for the kernel build. I thought we only build kernel modules for the vendor-specific portion.
*) For Android docs, I'm referring to the following:
https://source.android.com/setup/build/building-kernels
https://source.android.com/devices/architecture/kernel/generic-kernel-image
Indeed, with GKI (Generic Kernel Image), generic parts and vendor parts are separated. As of today, that distinction is quite clear: vmlinux is GKI and any module (*.ko) is vendor. That might change in the future if GKI modules prove to be useful; then there could be GKI (kernel + modules) + vendor modules.
The whole build process is quite new as well, and still evolving along with this quite fundamental change to how Android kernels are developed. Historically, device kernels and modules were built in one logical step, and compatibility was ensured by the combined build. Now there is a shift towards a world where the kernel and the modules can be built entirely separately, without overlap. It is likely to get much easier in the future to build vendor modules without having to build much of the GKI kernel at the same time; but given the way the build currently works, it is easier to set it up as it is.
Android 11 introduced the concept of "compliance" for GKI-based kernels: a shipped kernel is ABI compatible with the GKI kernel. In theory, that means you could literally swap out the kernel you have and replace it with a build from ci.android.com. Note that a compatible kernel can carry significant (ABI-compatible) patches that the GKI does not have, so while compatible, it might not lead to the same experience.
Android 12 enables devices to be launched with signed boot images containing the GKI kernel. Since the Kernel<>Module ABI of those kernels is kept stable, this also allows independent updates of GKI kernel and vendor modules.
When you refer to the db845c build config: yes, this looks a bit confusing. It is a full-blown config, and the build indeed produces an (ABI-compatible!) kernel plus the vendor-specific modules. The fragment can be considered a patch to gki_defconfig, in the sense that it does not change the core kernel but enables the required modules.
For the final release, the kernel image from this build will be replaced by the GKI kernel image. But for development, the kernel that comes out of this build is perfectly fine to use.
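For illustration, such a fragment usually only turns modules on. A hypothetical fragment (the option names below are real kernel config symbols chosen as examples, not copied from the actual db845c_gki.fragment) could look like:

```
# Enable board drivers as loadable modules (=m); core options
# already set by gki_defconfig are left untouched.
CONFIG_PINCTRL_SDM845=m
CONFIG_QCOM_GPI_DMA=m
CONFIG_PHY_QCOM_QUSB2=m
```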
In practice, it helps downstream projects to develop core kernel features and modules at the same time, though changes for modules and for the kernel need to go into different repositories (db845c, being a reference board, is an exception here).
To somewhat answer your question on how to build the db845c kernel: ci.android.com also provides the build log along with the artifacts to download. For the android12-5.10 branch and the kernel_db845c target, a recent build can be found here. The build.log states at the beginning the command to reproduce it:
$ BUILD_CONFIG=common/build.config.db845c build/build.sh
This is the relevant step based on the general instructions on source.android.com - building kernels.
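Putting the pieces together, a sketch of the full flow might look like this (the manifest URL and branch are the standard ones from source.android.com; adjust them to your needs):

```
$ mkdir android-kernel && cd android-kernel
$ repo init -u https://android.googlesource.com/kernel/manifest -b common-android12-5.10
$ repo sync
$ BUILD_CONFIG=common/build.config.db845c build/build.sh
```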
Related
I am writing a recipe for a package which needs to be aware of the underlying machine's microarchitecture. In other words, I would like a string such as aarch64 or arm64 for a 64-bit Arm system, and x86_64 for a 64-bit Intel system.
So far, I have identified:
MACHINE - This seems to be whatever the meta-* layer author decides to name their machine; it may or may not contain the architecture. For example, beaglebone is no use.
MACHINE_ARCH - This seems to be close to what I'm looking for. However, taking this BSP layer as an example, and doing a quick search, it doesn't seem as though this variable is set anywhere. Only read from in a few packages.
TUNE_PKGARCH - May be the best bet so far. But, what format is this variable in? What architecture naming conventions are used? Also, the aforementioned BSP layer, again, doesn't seem to set this anywhere.
I would have thought that knowing the machine architecture in a well-defined format is important, but it doesn't seem to be so simple. Any advice?
I'm accustomed to doing this with uname -m (Windows fans can use the output of SET processor), so for me in Yocto it ends up being a toss-up:
According to the Mega-Manual entry for TARGET_ARCH:
TARGET_ARCH
The target machine's architecture. The OpenEmbedded build system supports many
architectures. Here is an example list of architectures supported. This list is by
no means complete as the architecture is configurable:
arm
i586
x86_64
powerpc
powerpc64
mips
mipsel
uname -m is a bit better since you get subarchitectural information as well. From the machines I have access to at this moment:
Intel-based Nuc build system: x86_64
ARM embedded system: armv7l
Raspberry Pi 4B: aarch64
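As a quick sanity check on any host, the same information is one command away:

```shell
# Print the machine hardware name; the value varies by host
# (e.g. x86_64 on an Intel NUC, armv7l or aarch64 on ARM boards).
arch=$(uname -m)
echo "machine: $arch"
```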
I have found that the GNU automake (native) and libtool (available for target) packages compute a useful variable named UNAME_MACHINE_ARCH. If you are using libtool already, or are willing to take it on just for the purpose of having this done for you, you can solve it this way. Look in the build tree for files named config.guess.
You may be able to get by more generically than with libtool by using Yocto's BUILD_ARCH:
BUILD_ARCH
Specifies the architecture of the build host (e.g. i686). The OpenEmbedded build
system sets the value of BUILD_ARCH from the machine name reported by the uname
command.
So play with these and make your own choice depending on your project's circumstances.
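To see what a concrete build would actually use, you can dump BitBake's environment for your image and grep for the candidate variables (core-image-minimal is just an example target):

```
$ bitbake -e core-image-minimal | grep -E '^(TARGET_ARCH|MACHINE_ARCH|TUNE_PKGARCH|BUILD_ARCH)='
```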
I have a custom board with an i.MX6DL chip and peripherals. I have compiled u-boot, zImage, and a rootfs from examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, the result fails to run properly (some drivers are not working; the board boots and brings up the display interface, but the touchscreen does not work, for example).
Is there any way to include the precompiled binaries (zImage, u-boot, and device tree) in BitBake recipes? I'm very new to the Yocto Project and only need a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel, and device tree) that you have built out-of-Yocto, then you might try building a rootfs only. This requires two main settings, to be made in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an armv7 build on Poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and then you can proceed from there.
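As a sketch, the corresponding local.conf lines could look like this (the MACHINE value is only an example for an armv7 userland):

```
MACHINE = "qemuarm"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
```

With that in place, bitbake core-image-minimal should produce a root filesystem without attempting a kernel build.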
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the freenode server) whether they know of a proper BSP layer. FSL things are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there is a good chance that meta-freescale just does the trick nicely.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the armv7 architecture that the i.MX6DL is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.
My question is rather conceptual. I noticed that there are different packages for the same architecture, like x86-64, but for different OSes. For example, RPM offers different packages for Fedora and OpenSUSE for the same x86-64 architecture: http://www.rpmfind.net/linux/rpm2html/search.php?query=wget - not to mention different packages served up by YUM and APT (for Ubuntu), all for x86-64.
My understanding is that a package contains binary instructions suitable for a given CPU architecture, so that as long as CPU is of that architecture, it should be able to execute those instructions natively. So why do packages built for the same architecture differ for different OSes?
Considering just different Linux distros:
Besides being compiled against different library versions (as Hadi described), the packaging itself and default config files can be different. Maybe one distro wants /etc/wget.conf, while maybe another wants /etc/default/wget.conf, or for those files to have different contents. (I forget if wget specifically has a global config file; some packages definitely do, and not just servers like Exim or Apache.)
Or different distros could enable / disable different sets of compile-time options. (Traditionally set with ./configure --enable-foo --disable-bar before make -j4 && make install).
For wget, choices may include which TLS library to compile against (OpenSSL vs. gnutls), not just which version.
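As an illustration, wget exposes exactly this choice as a configure switch:

```
$ ./configure --with-ssl=openssl    # or: --with-ssl=gnutls
```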
So ABIs (library versions) are important, but there are other reasons why every distro has their own package of things.
Completely different OSes, like Linux vs. Windows vs. OS X, have different executable file formats: ELF vs. PE vs. Mach-O. All three of those formats contain x86-64 machine code, but the metadata is different. (And OS differences mean you'd want the machine code to do different things.)
For example, opening a file on Linux or OS X (or any POSIX OS) can be done with the int open(const char *pathname, int flags, mode_t mode); system call. The same source code therefore works on both platforms, although it can still compile to different machine code; in this case it actually compiles to very similar machine code that calls a libc wrapper around the system call (OS X and Linux use the same function calling convention), just with a different symbol name. OS X would compile it to a call to _open, while Linux doesn't prepend underscores to symbol names, so the dynamic linker symbol name is simply open.
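You can see this naming difference with standard binutils. A hypothetical session on a Linux box (output abbreviated; the file name t.c is just an example) might look like:

```
$ printf 'extern int open(const char *, int, ...);\nint main(void) { return open("/etc/hosts", 0); }\n' > t.c
$ cc -c t.c && nm t.o
                 U open      <-- on OS X the undefined symbol would be _open
```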
The mode constants for open might be different. e.g. maybe OS X defines O_RDWR as 4, but maybe Linux defines it as 2. This would be an ABI difference: same source compiles to different machine code, where the program and the library agree on what means what.
But Windows isn't a POSIX system. The WinAPI function for opening a file is HFILE WINAPI OpenFile(LPCSTR lpFileName, LPOFSTRUCT lpReOpenBuff, UINT uStyle);
If you want to do anything invented more recently than opening / closing files, especially drawing a GUI, things are even less similar between platforms and you will use different libraries. (Or a cross platform GUI library will use a different back-end on different platforms).
OS X and Linux both have Unix heritage (real or as a clone implementation), so the low-level file stuff is similar.
These packages contain native binaries that require a particular Application Binary Interface (ABI) to run. The CPU architecture is only one part of the ABI. Different Linux distros have different ABIs and therefore the same binary may not be compatible across them. That's why there are different packages for the same architecture, but different OSes. The Linux Standard Base project aims at standardizing the ABIs of Linux distros so that it's easier to build portable packages.
I was configuring rpi-3.8.y (the Raspbian kernel on the 3.8.y branch) using menuconfig and came across the kernel preemption model choices.
Will selecting a PREEMPT-ible kernel (choice #3) cross-compile a real-time one ?
No.
The closest thing you'll get to real-time with Linux is PREEMPT_RT, which is an out-of-tree patch set, and clearly not present in whatever kernel sources you're building.
A project I've inherited uses a very old version of buildroot, but I'd like to change it to use a feature that was added only in a later buildroot release.
Is there a straightforward way of updating a buildroot setup to use a later release?
e.g. if I save out a defconfig file and import that in a later buildroot release, would that just work, or are there practical reasons why not? Are there additional configuration files I'd need to carry across (e.g. kernel, busybox, etc)? Thanks!
No.
In fact, it's worse than that.
You can start by using a newer Buildroot version with your old default configuration file, but you will need to check the resulting configuration carefully for deprecated packages and packages whose versions are not compatible with whatever application software you might be adding to the Buildroot filesystem. The names of some packages (e.g. opencv) change over time, so you need to eyeball the resulting .config file to make sure that all of the packages that you need are there.
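A reasonable starting sequence for the migration (paths are examples) is:

```
# In the old Buildroot tree: save a minimal config
$ make savedefconfig BR2_DEFCONFIG=../my_defconfig
# In the new Buildroot tree: load it, then review what changed
$ make defconfig BR2_DEFCONFIG=../my_defconfig
$ make menuconfig
```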
If you build a toolchain or Linux kernel in Buildroot (commonly done but not generally good practice), then you need to make sure that the new configuration is set to build the old version of the kernel and compiler. These might be too old to build some of the packages in the newer version of Buildroot.
If you upgrade your kernel at the same time that you upgrade Buildroot, then you need to port your old kernel config file to the new kernel version. Since the kernel configuration options change frequently, you will probably need to start from defconfig for your board and then use make menuconfig to manually add the configs that you need.
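On the kernel side, that typically boils down to something like the following (yourboard_defconfig is a placeholder for your board's defconfig name):

```
$ make ARCH=arm yourboard_defconfig   # start from the new kernel's board defconfig
$ make ARCH=arm menuconfig            # re-enable the options your old .config had
$ make ARCH=arm savedefconfig         # keep a minimal defconfig for next time
```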
Busybox is a bit less volatile, so there is a chance that your old config will work.
If your old Buildroot configuration uses postbuild or postimage scripts, you will need to review them, but my guess is that they will not need any changes.
You should allocate at least a week for this work, maybe more, depending on the complexity of the configuration. Remember that if you are forced to use an older vendor kernel due to patches for a specific SoC, for example, the Freescale 2.6.33.9 kernel for the BSC9131, then the upgrade that you want to do might not be possible without doing six to twelve months of work to port the vendor's kernel patches to a newer kernel version.
Cheers.