Where can I find which drivers are built into my Yocto project Linux kernel image? - linux-device-driver

I'm using the Yocto Project to build a Linux kernel image following these steps:
https://www.at91.com/linux4sam/bin/view/Linux4SAM/Sama5d27Som1EKMainPage
For several reasons I want to reduce my image size so I can flash it on an 8 MB QSPI memory. I have tried to reduce the size of my rootfs: I removed some packages that I found in the .manifest file and some distro features. But I could not find out how to shrink the kernel, whose size stays fixed at 4.2 MB.
I think that if I remove some drivers that I don't need, the kernel size will be reduced.
I just want to know how I can find out which drivers are built into my image, where I can find them, and later how I can delete the ones that I don't need.
Thank you.

If you check the .config file that was generated for your BSP, it will show which drivers (and other things) were built into your kernel (look for the options set to 'y').
That file should be somewhere in:
tmp/work/<machine>/linux-yocto/<version>/linux-*-build/.config
Sorry that I can't give you the exact location, but it literally depends on what BSP/MACHINE you are building for.
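If you are not sure of the exact directory names, a quick way to locate the generated kernel .config from the top of the build directory (assuming the default tmp/ layout) is:
$ find tmp/work -name .config -path '*linux*'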
Also, if you want to modify that configuration, you can call:
$ bitbake -c menuconfig virtual/kernel
That will bring up the menuconfig ncurses interface, in which you can not only see what is enabled but also modify what you need.
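Once menuconfig shows you which drivers you can drop, one way to make that change persistent is a kernel configuration fragment in your own layer. This is only a sketch, assuming a linux-yocto based kernel recipe and the older (pre-honister) override syntax; the layer name, file name and CONFIG options below are made-up examples:
# meta-mylayer/recipes-kernel/linux/linux-yocto_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://disable-unused.cfg"
# meta-mylayer/recipes-kernel/linux/files/disable-unused.cfg
# CONFIG_SOUND is not set
# CONFIG_USB_PRINTER is not set
After that, rebuild with bitbake virtual/kernel and compare the size of the resulting kernel image.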

Related

Raspberry Pi "tools" (raspistill, vcgencmd, ...) not included with buildroot

I've created a basic image with buildroot (buildroot-2021.02.1) containing some software, and I also selected the RPi firmware in order to use the camera and some Raspberry Pi tools: Target packages --> Hardware handling --> Firmware --> ([x] rpi-firmware) --> Firmware to boot, as mentioned here.
But the tools raspistill, vcgencmd, ... are not included. The question is how to include them, and why are they not included?
At some point in time it must have been working, see: RaspberryPi camera with buildroot
More details:
In the logs of buildroot the following lines show up:
>>> rpi-firmware d016a6eb01c8c7326a89cb42809fed2a21525de5 Installing to target
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-staging.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-host.before: No such file or directory
and inside this package the binaries do exist. They are downloaded from http://sources.buildroot.net/rpi-firmware/, where the tars contain the actual tools. But buildroot only downloads them; they are not copied into the final image. Maybe that is because some files-list file(s) are missing, as pointed out by the error messages, and maybe those files whitelist the files to copy. But I could not find documentation about this.
For 64-bit builds, the binaries in the (then manually downloaded) tar file could not be executed, because they are 32-bit executables: firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/opt/vc/bin/vcgencmd: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.1.9, not stripped. On a 32-bit build with buildroot it also does not work, because the shared libraries are missing, even though the full structure from the archive has been placed under /opt/vc/{bin|lib|...} like on a standard RPi image.
I'm unsure how to proceed with the problem, diagnose it and fix it.
EDIT: maybe these are two different problems. I read the linked SO question once again and compared the files fixup.dat and start.elf (which contain the RPi hardware stuff that makes the tools work) in the boot.vfat of the built image with the files in buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/boot, and the files fixup_x.dat and start_x.elf are taken from there. This is in accordance with the mentioned SO question. Nowhere is it indicated that the tools for the Raspberry Pi are compiled; they are only inside this tar archive. Maybe one needs to compile them separately, and this package is not designed to integrate those tools.
I figured out the solution, and it might be useful for future reference, so I put it here.
One has to differentiate between:
the "firmware" (that is the mentioned tar from the question), which is turned on with BR2_PACKAGE_RPI_FIRMWARE=y in buildroot. This causes then that the start.elf and fixup.dat contain the correct data from this tar. The fact that the tars also contain the desired binaries is only "coincidence"
the desired applications, which are packaged as "userland" (see here). If one finds the line # BR2_PACKAGE_RPI_USERLAND is not set in the .config in the buildroot project's root directory and replaces it with BR2_PACKAGE_RPI_USERLAND=y, the applications (vcgencmd, raspistill, ...) are built and included in the final image (I could not find the option in make menuconfig, but that is no problem if you modify the variables directly; see the sketch below).
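A minimal sketch of the two relevant lines in the buildroot .config (edited directly in the top-level buildroot directory, as described above), followed by a rebuild:
BR2_PACKAGE_RPI_FIRMWARE=y
BR2_PACKAGE_RPI_USERLAND=y
$ make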
Therefore, the question is answered. But you might run into some issues ;-):
Question 1: I get a "Segmentation fault" when running raspistill
For raspistill -o i.jpg you might get out:
mmal: mmal_vc_shm_init: could not initialize vc shared memory service
mmal: mmal_vc_component_create: failed to initialise shm for 'vc.camera_info' (7:EIO)
mmal: mmal_component_create_core: could not create component 'vc.camera_info' (7)
mmal: Failed to create camera_info component
Segmentation fault
(with an empty image file), see here for details.
Answer: this is related to /dev/vcsm (or /dev/vcsm-cma) missing, which is used for camera control / video decoding "stuff". Symlinking it to /dev/vc-mem, as stated somewhere around the net, does not help.
Solution: I was using the latest BR with kernel 5.10.x (buildroot-2021.02.1) and simply "downgraded" to buildroot-2020.02.1, rebuilt it, and /dev/vcsm appears and everything works fine.
Question 2: I want to do it in docker containers
Answer: No problem. I used balenalib/rpi-raspbian:latest (as suggested here) and it worked flawlessly by running docker run --privileged --device=/dev/vchiq --rm -it balenalib/rpi-raspbian:latest. Only the proper devices and support are needed for this, so the package BR2_PACKAGE_RPI_USERLAND=y could be completely omitted.
Question 3: Does it work with 64-bit?
Answer: No. I tried out the recent version (raspberrypi3_64_defconfig) of buildroot and the version from Feb 2020 as mentioned, and in both cases /dev/vcsm (or /dev/vcsm-cma) is missing. Linux cpi64 4.19.97-v8 #1 SMP PREEMPT Sat Apr 17 14:13:11 CEST 2021 aarch64 GNU/Linux

Create just smallest possible rootfs using yocto

I want to create a minimal Linux system. I have compiled the kernel myself, but I want to use Yocto to build my rootfs. How can I build the smallest possible rootfs that starts up the system and opens a shell, without building the kernel? Also, how can I choose the type of rootfs? I'd like it to be an initramfs so I can then embed it in my kernel image.
Without more details both questions would require very long answers. Luckily the Development Manual covers the issues: See Building a tiny system and Building an initramfs image. I would suggest starting with those and asking more specific questions if needed.
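As a concrete starting point, a local.conf sketch that skips the kernel build and produces only a compressed cpio archive to embed in an externally built kernel might look like this (the values are assumptions to check against the manual sections above):
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
IMAGE_FSTYPES = "cpio.gz"
$ bitbake core-image-minimal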

Extracting particular executable from Yocto build files

I have a project that can be built with the Yocto build system to generate a full disk image. According to the existing procedure, I can only get a full disk image that is meant to be flashed onto an SD card.
This does not suit my needs, because I can't flash the image on the board. In my case, I need to build a certain project (that currently has a recipe) into an executable. (This project is currently part of the full disk image that is built with Yocto.)
So I am wondering: is it possible to extract this executable (and the libraries it depends on) from the Yocto build files, so that I could copy and install it on the board? What options do I have? Is there a quick and dirty way to do this?
P.S.: I heard that Yocto can provide a package for a certain project that can be installed by the corresponding package manager on the board. The dpkg package manager is installed on the board.
One solution is to add tar.gz to your IMAGE_FSTYPES.
After building the image, you can extract the executable you are looking for from the created archive.
Or you add the output format you need for your target and install the full image.
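A local.conf sketch covering both options above; since the board uses dpkg, the deb packaging class is an assumption worth verifying for your setup:
IMAGE_FSTYPES += "tar.gz"
PACKAGE_CLASSES = "package_deb"
After the build, the image tarball ends up under tmp/deploy/images/<machine>/ and the per-recipe .deb packages under tmp/deploy/deb/, from where you can copy the package for your project to the board and install it with dpkg -i.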

Include precompiled zImage in yocto project

I have a custom board with an imx6dl chip and peripherals. I have compiled u-boot, zImage and a rootfs from the examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, it fails to run properly (some drivers are not working; the board boots and shows the display interface, but the touchscreen does not work, for example).
Is there any way to include the precompiled binaries (zImage, u-boot and the device tree) in bitbake recipes? I'm very new to the Yocto Project and only need to get a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an armv7 build on poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and then you can proceed from there.
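Put together, the local.conf sketch for this starting point is roughly (remember that qemuarm is only a placeholder machine, not a BSP for the imx6dl board):
MACHINE = "qemuarm"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
$ bitbake core-image-minimal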
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the freenode server) if they know about a proper BSP layer. FSL things are quite nicely supported these days, and if your board is closely related to one of the well-known ones, you've got a high chance that meta-freescale just does the trick nicely.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the armv7 architecture that the imx6dl is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.

Modifying core-image-minimal to only make rootfs

I am working on an embedded project on a Zedboard. I would like (at least for now) to use Bitbake only to produce a proper rootfs. I use the core-image-minimal recipe, as I need only a limited amount of stuff there. How can I "tell" it not to compile the kernel, not to build u-boot, etc., and to focus on the rootfs only?
Here is what I've done so far:
Created my build environment
Downloaded needed layers
Modified local.conf to add needed packages to rootfs
Then after typing
bitbake core-image-minimal
I get my rootfs, and all this unnecessary stuff. How can I avoid it?
I recently had the same need to build only the rootfs with Yocto, skipping other things such as the kernel, u-boot, image creation etc. There are many legitimate reasons to do so. Anyway, this is what you have to do:
bitbake core-image-minimal -c image_cpio
In krogoth, this will populate the rootfs directory in build/tmp/work/$MACHINE/core-image-minimal/1.0-r0/ and create a rootfs.cpio file in build/tmp/deploy/images/$MACHINE/.
In morty, the rootfs.cpio archives seem to be in build/tmp/work/$MACHINE/core-image-minimal/1.0-r0/deploy-core-image-minimal-image-complete/.
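To inspect or reuse the result, the cpio archive can simply be unpacked into an empty directory (the exact archive name and path vary by release; the path below follows the krogoth layout above, with /path/to standing in for your build location):
$ mkdir /tmp/rootfs && cd /tmp/rootfs
$ cpio -idmv < /path/to/build/tmp/deploy/images/$MACHINE/rootfs.cpio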
Interesting concept. However, from what I observed, Yocto needs the defconfig of the kernel and u-boot to configure the image itself, so removing that step will make the rootfs not bootable.
This happened to me a lot, since I used different kernels to compile for different machines. I thought the ARM image would be the same and would work for all machines, but I was wrong.
For Debian, the compiled image needs to use the kernel's corresponding configuration to build the rootfs for it to work, and Yocto is the same.
bitbake -e |grep IMAGE_FSTYPE
will give you something like:
IMAGE_FSTYPES="tar.gz cpio cpio.gz.u-boot ...."
This is a list of all the images that will be generated. To remove the undesired ones, use the following in the local.conf file:
IMAGE_FSTYPES_remove = " cpio cpio.gz.u-boot"
The space before the first element is not optional.
If you don't want to build a kernel, set the preferred provider of virtual/kernel to 'linux-dummy'.
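That is, in your local.conf (the same setting mentioned in the earlier answer about out-of-Yocto boot chains):
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"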