Raspberry Pi "tools" (raspistill, vcgencmd, ...) not included with buildroot - raspberry-pi

I've created a basic image with buildroot (buildroot-2021.02.1), containing some software and also selected the RPI firmware in order to use the camera and some Raspberry Pi tools: Target packages --> Hardware handling --> Firmware --> ([x] rpi-firmware) --> Firmware to boot as mentioned here.
But the tools raspistill, vcgencmd, ... are not included. The question is: how do I include them, and why are they not included?
At some point in time it must have been working, see: RaspberryPi camera with buildroot
More details:
In the logs of buildroot the following lines show up:
>>> rpi-firmware d016a6eb01c8c7326a89cb42809fed2a21525de5 Installing to target
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-staging.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-host.before: No such file or directory
and the binaries do exist inside this package. They are downloaded from http://sources.buildroot.net/rpi-firmware/, and the tars contain the actual tools. But buildroot only downloads them; they are not copied into the final image. Maybe this is because some .files-list files are missing, as pointed out by the error message; maybe those files whitelist the files to copy. But I could not find documentation about this.
For 64-bit builds, the binaries in the (then manually downloaded) tar file could not be executed, because they are 32-bit executables: firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/opt/vc/bin/vcgencmd: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.1.9, not stripped. On a 32-bit buildroot build it also does not work, because the shared libraries are missing, even though the full structure from the archive has been placed under /opt/vc/{bin|lib|...} like on a standard RPi image.
I'm unsure how to proceed with the problem, diagnose it and fix it.
EDIT: maybe these are two different problems. I read the linked SO question once again and compared the files fixup.dat and start.elf (which contain the RPi hardware support that makes the tools work) in the boot.vfat of the built image with the images in buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/boot; the files fixup_x.dat and start_x.elf are taken from there. So this is in accordance with the mentioned SO question. And nowhere is it indicated that the Raspberry Pi tools are compiled; they are only inside this tar archive. Maybe one needs to compile them separately, and this package is not designed to integrate those tools.

I figured out the solution and this might be useful for future reference, so I put the solution here.
One has to differentiate between:
the "firmware" (that is the mentioned tar from the question), which is turned on with BR2_PACKAGE_RPI_FIRMWARE=y in buildroot. This causes then that the start.elf and fixup.dat contain the correct data from this tar. The fact that the tars also contain the desired binaries is only "coincidence"
the desired applications, which are packaged as "userland" (see here). If one finds the line # BR2_PACKAGE_RPI_USERLAND is not set in .config in the buildroot project's root directory and replaces it with BR2_PACKAGE_RPI_USERLAND=y, the applications (vcgencmd, raspistill, ...) are built and included in the final image. (I could not find the option in make menuconfig, but that is no problem if you modify the variable directly, as sketched below.)
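A minimal sketch of that .config edit, assuming it is run from the buildroot project's root directory (hand-editing the file works just as well; the sed one-liner is only a convenience):
# Flip the userland option directly in .config:
sed -i 's|^# BR2_PACKAGE_RPI_USERLAND is not set$|BR2_PACKAGE_RPI_USERLAND=y|' .config
# Rebuild so vcgencmd, raspistill, ... end up in the final image:
make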
Therefore, the question is answered. But you might run into some further issues ;-) :
Question 1: I get a "Segmentation fault" when running raspistill
For raspistill -o i.jpg you might get this output:
mmal: mmal_vc_shm_init: could not initialize vc shared memory service
mmal: mmal_vc_component_create: failed to initialise shm for 'vc.camera_info' (7:EIO)
mmal: mmal_component_create_core: could not create component 'vc.camera_info' (7)
mmal: Failed to create camera_info component
Segmentation fault
(with an empty image file), see here for details.
Answer: this is related to /dev/vcsm (or /dev/vcsm-csa) being missing; it is used for camera control / video decoding "stuff". Symlinking it to /dev/vc-mem, as suggested in various places around the net, does not help.
Solution: I was using the latest buildroot with kernel 5.10.x (buildroot-2021.02.1) and simply "downgraded" to buildroot-2020.02.1 and rebuilt; /dev/vcsm appears and everything works fine.
Question 2: I want to do it in docker containers
Answer: No problem. I used balenalib/rpi-raspbian:latest (as suggested here) and it worked flawlessly by running docker run --privileged --device=/dev/vchiq --rm -it balenalib/rpi-raspbian:latest. Only the proper devices and kernel support are needed for this, so the package BR2_PACKAGE_RPI_USERLAND=y can be completely omitted.
Question 3: Does it work with 64-bit?
Answer: No. I tried the recent version (raspberrypi3_64_defconfig) of buildroot and the version from Feb 2020 as mentioned, and in both cases /dev/vcsm (or /dev/vcsm-csa) is missing. Kernel: Linux cpi64 4.19.97-v8 #1 SMP PREEMPT Sat Apr 17 14:13:11 CEST 2021 aarch64 GNU/Linux

Related

Yocto: postinstall intercept hook "update_desktop_database" error

I am trying to build a custom Linux image using Yocto. The setup is:
Ubuntu 20.04 on Oracle Virtual Machine
Yocto release dunfell
It gives this error:
NOTE: Exit code 127. Output:
/home/user234/yocto-project/image/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/intercept_scripts-1bf6a9721f164ba99b8a07325a9c3fe0f21a700fea55c8351519b59cf02d0aca/update_desktop_database:
7: update-desktop-database: not found
ERROR: The postinstall intercept hook 'update_desktop_database' failed, details in /home/user234/yocto-project/image/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_rootfs
The problem only occurs in the virtual machine environment; it works fine in a native Linux environment on my other machine.
I have installed desktop-file-utils and I can run it from my shell manually. Somehow, bitbake is unable to detect it. Does anyone know the solution?
You can resolve this issue immediately by patching the poky sources to skip execution of "update_desktop_database" at this path:
poky/scripts/postinst-intercepts/update_desktop_database
Just comment out the line.
It may happen again for other scripts; just do the same for those. Afterwards, the do_rootfs task will complete successfully.
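For illustration, the patched script could end up looking roughly like this (the exact body varies between poky releases, so treat this as a sketch; the only point is that the update-desktop-database call is commented out, turning the hook into a no-op):
#!/bin/sh
# poky/scripts/postinst-intercepts/update_desktop_database
#update-desktop-database $D${datadir}/applications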
I had a similar problem (without a virtual machine; you will see that this part is unrelated). The issue is with the file permissions of the Yocto sources, or part of them. In my particular case the DevOps had set the POSIX file permissions on the entire Yocto source directory to 777, i.e. -rwxrwxrwx. Don't ask me why.
In the OP's case it seems like the Yocto sources may have been copied to the virtual machine with the help of some permission-less file system, like FAT32, which results in the same outcome. A good example is copying the sources with a USB flash drive formatted as FAT32. YMMV; this may also be relevant for ACLs, but I haven't tried to fiddle with that.
In my opinion, this should be considered a bug in bitbake. If the source files' permissions can cause such unpredictable behavior, they have to be verified before or during the build, and the user must be informed via an error.
This is clearly irrelevant to the OP by now, hope somebody finds it useful.

Yocto SDK missing kernel header files [duplicate]

This question already has an answer here:
How to build linux kernel module using Yocto SDK? (1 answer)
Closed 1 year ago.
I have a kernel module that I have successfully compiled against my toolchain and installed in the image. The driver loads just fine and functions as expected. The user program that uses the driver is a cmake project in CLion 2020.1. I set up the cmake project to point to the OEToolchainConfig.cmake so all the
#include <foo.h>
are resolved. All except a number of kernel headers; for example: <linux/devices.h>
Edit
To be clear, there are Linux header files that are resolved, for example:
#include <linux/kernel.h>
#include <linux/module.h>
I only seem to be missing a subset of kernel header files...
end Edit
After navigating to the toolchain sysroot's /usr/include/linux directory, I verified that the missing/unresolved kernel headers are in fact not present.
So, there are 2 questions here: 1) How did I successfully compile the driver against the toolchain if the required kernel headers are missing & 2) how do I include the missing kernel headers in the SDK?
I suspect the answer to the first question is that bitbake grabbed the host's kernel headers, in which case question 1 becomes: how do I prevent that from happening? (Just a guess, though.) For question 2 (my main question), after probing the google machine, I found references suggesting to add:
IMAGE_INSTALL += "\
kernel-devsrc \
linux-libc-headers-dev \
python"
to my core-image-myimage_1.0.bb file, but this does not seem to add the headers I require.
Update
It would appear the header files I require are in fact installed into the toolchain, but they are installed under source: /usr/src/kernel/include/linux
While this gives me a workaround for setting up include paths in CLion, is there some reason I cannot get these to install into the regular /usr/include/linux directory?
The generated SDK by default usually does not include the kernel headers. This is very similar to other systems like Ubuntu, where there is a separate package you must install first to compile against the kernel.
By putting kernel-devsrc in IMAGE_INSTALL, you added the kernel headers to the image but not the SDK. This is handy though, if you want to be able to compile and test your module on a running target (which is great for speeding up debugging).
Now, with some context out of the way, take a look at this answer to see how to get the kernel headers into your SDK, and how to actually get compilation working. The latter is not so intuitive if you're a Linux newbie such as myself.
https://stackoverflow.com/a/67335209/1385815
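For reference, the gist of the linked answer is to pull kernel-devsrc into the SDK's target sysroot instead of (or in addition to) the image, roughly via local.conf (a sketch using the old-style append syntax of that era; see the linked answer for the exact steps):
TOOLCHAIN_TARGET_TASK_append = " kernel-devsrc"
followed by regenerating the SDK, e.g. bitbake -c populate_sdk core-image-myimage (image name taken from the question above).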

Include precompiled zImage in yocto project

I have a custom board with an imx6dl chip and peripherals. I have compiled u-boot, zImage and rootfs from examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, it fails to run properly (some drivers are not working; the board boots and brings up the display interface, but the touchscreen is not working, for example).
Is there any way to include the precompiled binaries zImage, u-boot and device tree in bitbake recipes? I'm very new to the Yocto project, and only need to get a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings, to be made in your local.conf, to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
Set PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an armv7 build on poky later than version 3.0 (see the sketch below). The core-image-minimal target should at least be enough to drop you into a shell for starters, and then you can proceed from there.
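Concretely, the starting point described above boils down to two lines in local.conf (values taken straight from this answer):
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
MACHINE = "qemuarm"
and then bitbake core-image-minimal; you deploy only the resulting root filesystem and keep booting with your vendor-built u-boot, zImage and device tree.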
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the freenode server) whether they know of a proper BSP layer. FSL things are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there is a high chance that meta-freescale just does the trick nicely.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the armv7 architecture that the imx6dl is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.

Where can I find what drivers built in my yocto project Linux kernel image?

I'm using Yocto project to build a linux kernel image following these steps:
https://www.at91.com/linux4sam/bin/view/Linux4SAM/Sama5d27Som1EKMainPage
For some reasons I just want to reduce my image size so I can flash it to an 8 MB QSPI memory. I have tried to reduce the size of my rootfs: I removed some packages that I found in the .manifest file and some distro features. But I did not find how to reduce the kernel size, which is fixed (4.2 MB).
I think that if I remove some drivers that I don't need, the kernel size will be reduced.
I just want to know: how can I find which drivers are built into my image, where can I find them, and later how can I delete the ones that I don't need?
Thank you.
If you check the .config file that was generated for your BSP, it will show which drivers (and other things) were built into your kernel (check for the 'y' on all the options).
Such a file should be somewhere in:
tmp/work/<machine>/linux-yocto/<version>/linux-*-build/.config
Sorry that I can't give you the exact location, but it literally depends on what BSP/MACHINE you are building for.
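As a starting point, something along these lines finds the config and lists what is compiled in (a sketch; replace the placeholders to match your BSP):
# Locate the generated kernel config inside the work directory:
find tmp/work -path '*linux*' -name '.config'
# List everything built into the kernel image (=y):
grep '=y$' tmp/work/<machine>/linux-yocto/<version>/linux-*-build/.config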
Also, if you want to modify such configuration, you can call:
$ bitbake -c menuconfig virtual/kernel
that will bring up the menuconfig ncurses interface, in which you can not only see what is enabled but also modify what you need.
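If you change options there and want to keep them, linux-yocto style kernel recipes can also export your changes as a configuration fragment (a hedged aside; this task exists for kernel-yocto based recipes):
$ bitbake -c diffconfig virtual/kernel
which writes a fragment.cfg containing only your deltas, ready to be added to a kernel .bbappend via SRC_URI.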

Swift on Ubuntu - No such file or directory

Trying to install Swift on my Ubuntu 14.04.3 server. I followed the guide on Swift.org:
downloaded the install package
fetched the gpg keys
verified the .sig file
extracted the file and added the usr/bin subdir to my PATH
However, when I try running swift I get "No such file or directory".
Swift seems to be found:
icanzilb@underplot:~/public$ which swift
/home/icanzilb/swift-DEVELOPMENT-SNAPSHOT-2016-01-25-a-ubuntu14.04/usr/bin/swift
Path is correct:
icanzilb@underplot:~/swift-DEVELOPMENT-SNAPSHOT-2016-01-25-a-ubuntu14.04/usr/bin$ echo $PATH
/home/icanzilb/swift-DEVELOPMENT-SNAPSHOT-2016-01-25-a-ubuntu14.04/usr/bin:...
But can't run it:
icanzilb@underplot:~/swift-DEVELOPMENT-SNAPSHOT-2016-01-25-a-ubuntu14.04/usr/bin$ swift
-bash: /home/icanzilb/swift-DEVELOPMENT-SNAPSHOT-2016-01-25-a-ubuntu14.04/usr/bin/swift: No such file or directory
=====
Update 1: I tried the development and stable builds of Swift from swift.org and the .sig checks out, but I'm still getting the same error.
Both the swift executable and my Ubuntu are 64 bit.
It really looks like this is a problem of an architecture mismatch, as indicated by one of the commenters, or a corrupted download. The swift executable is in the path and is picked up by which. Judging from the name of the directory where it is located, it is indeed for Ubuntu 14.04. I would try the following:
file `which swift`
This should tell you it is an ELF 64-bit executable, dynamically linked, etc. If that is not the case, you have a corrupted binary somehow.
Then do
uname -a
which should identify your system as Ubuntu 14, x86_64, among other things. If that is the case, you probably have a corrupted binary; unpack the downloaded package again and retry. If your system is not x86_64, then you are on the wrong architecture.
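One additional check worth mentioning (an aside beyond the original steps): when bash reports "No such file or directory" for a binary that clearly exists, the file that is actually missing is often the ELF interpreter the binary was linked against. You can print it with readelf, which ships with binutils:
readelf -l `which swift` | grep interpreter
If the interpreter path printed there does not exist on your system, you get exactly this error, which again points to an architecture or platform mismatch.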
Update 2/4/2016:
Based on the additional info you provided about the binary and the architecture, I tend to believe this is a platform mismatch. The output of uname doesn't show it is an Ubuntu box. Besides, the Linux kernel is 4.4.0, and the binary is for 2.6.24. I can successfully run the same binary on an Ubuntu 14.04 installation with a 3.19.0 kernel. The kernel version difference is not necessarily a problem in itself, but an indicator that the platform may not be a standard Ubuntu 14.04 install.
As another check, could you please do cat /etc/os-release and cat /etc/lsb-release? Those files should indicate if the platform is indeed Ubuntu. If it is, then I'm wondering how you ended up with this late kernel version. It is possible some other components of the OS are also too recent and incompatible with the Swift binary. This may be some strange custom Linode image... If you have a machine you can use, virtual or physical, try downloading and installing a standard Ubuntu 14.04.