Yocto SDK missing kernel header files [duplicate]

This question already has an answer here:
How to build linux kernel module using Yocto SDK?
I have a kernel module that I have successfully compiled against my toolchain and installed in the image. The driver loads just fine and functions as expected. The user program that uses the driver is a CMake project in CLion 2020.1. I set up the CMake project to point to OEToolchainConfig.cmake, so all the
#include <foo.h>
directives are resolved. All except a number of kernel headers; for example: <linux/devices.h>
Edit
To be clear, there are Linux header files that do resolve, for example:
#include <linux/kernel.h>
#include <linux/module.h>
I only seem to be missing a subset of kernel header files...
end Edit
After navigating to the toolchain sysroot's /usr/include/linux directory, I verified that the missing/unresolved kernel headers are in fact not present.
So, there are two questions here: 1) how did I successfully compile the driver against the toolchain if the required kernel headers are missing, and 2) how do I include the missing kernel headers in the SDK?
I suspect the answer to the first question is that bitbake grabbed the host's kernel header files, in which case question 1 becomes: how do I prevent that from happening? Just a guess, though. For question 2 (my main question), after probing the google machine, I found references saying to add:
IMAGE_INSTALL += "\
    kernel-devsrc \
    linux-libc-headers-dev \
    python"
to my core-image-myimage_1.0.bb file, but this does not seem to add the headers I require.
Update
It would appear the header files I require are in fact installed into the toolchain, but they are installed under the kernel source tree: /usr/src/kernel/include/linux
While this gives me a workaround for setting up include paths in CLion, is there some reason I cannot get these to install into the regular /usr/include/linux directory?

The generated SDK does not include the kernel headers by default. This is very similar to other systems such as Ubuntu, where there is a separate package you must install before you can compile against the kernel.
By putting kernel-devsrc in IMAGE_INSTALL, you added the kernel headers to the image but not to the SDK. This is handy, though, if you want to be able to compile and test your module on a running target (which is great for speeding up debugging).
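If you go the on-target route, the standard out-of-tree kbuild flow applies. A rough sketch, run on the target itself; the module name hello and its directory are illustrative assumptions (the module's Makefile just needs obj-m := hello.o):
cd /usr/src/kernel           # where kernel-devsrc installs the sources
make scripts prepare         # one-time preparation of the source tree
cd ~/hello                   # your module: hello.c plus a one-line Makefile
make -C /usr/src/kernel M=$PWD modules
insmod hello.ko              # load and test it in place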
Now, with some context out of the way, take a look at this answer to see how to get the kernel headers into your SDK and how to actually get compilation working. The latter is not so intuitive if you're a Linux newbie such as myself.
https://stackoverflow.com/a/67335209/1385815
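In short, a minimal sketch of what the linked answer describes; the local.conf route and the install path are assumptions, and on Yocto releases older than Honister the variable is written TOOLCHAIN_TARGET_TASK_append instead of :append:
# ship the kernel sources in the SDK's target sysroot (in your build dir's conf/local.conf)
echo 'TOOLCHAIN_TARGET_TASK:append = " kernel-devsrc"' >> conf/local.conf
# regenerate and install the SDK
bitbake -c populate_sdk core-image-myimage
# inside the installed SDK, prepare the kernel tree once before building modules
. /opt/poky/*/environment-setup-*       # adjust to your SDK install path
cd "$SDKTARGETSYSROOT/usr/src/kernel"
make scripts prepare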

Related

Raspberry Pi "tools" (raspistill, vcgencmd, ...) not included with buildroot

I've created a basic image with Buildroot (buildroot-2021.02.1) containing some software, and also selected the RPi firmware in order to use the camera and some Raspberry Pi tools: Target packages --> Hardware handling --> Firmware --> ([x] rpi-firmware) --> Firmware to boot, as mentioned here.
But the tools raspistill, vcgencmd, ... are not included. The question is how to include them, and why they are not included.
At some point in time it must have been working, see: RaspberryPi camera with buildroot
More details:
In the Buildroot logs, the following lines show up:
>>> rpi-firmware d016a6eb01c8c7326a89cb42809fed2a21525de5 Installing to target
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-staging.before: No such file or directory
comm: /home/ich/br/buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/.files-list-host.before: No such file or directory
and the binaries do exist inside this package. They are downloaded from http://sources.buildroot.net/rpi-firmware/, where the tars contain the actual tools. But Buildroot only downloads them; it does not copy them into the final image. Maybe that is because some .files-list files are missing, as pointed out by the error messages; maybe those files whitelist the files to copy. But I could not find documentation about this.
For 64-bit builds, the binaries in the (then manually downloaded) tar file could not be executed, because they are 32-bit executables: firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/opt/vc/bin/vcgencmd: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.1.9, not stripped. On a 32-bit Buildroot build it also does not work, because the shared libraries are missing, even though the full structure from the archive has been placed under /opt/vc/{bin|lib|...} like on a standard RPi image.
I'm unsure how to proceed with the problem, diagnose it and fix it.
EDIT: maybe these are two different problems. I read the linked SO question once again and compared the files fixup.dat and start.elf (which contain the RPi hardware support that makes the tools work) in the boot.vfat of the built image with the files in buildroot/output/build/rpi-firmware-d016a6eb01c8c7326a89cb42809fed2a21525de5/boot; the files fixup_x.dat and start_x.elf are taken from there, which is in accordance with the mentioned SO question. And nowhere is it indicated that the Raspberry Pi tools are compiled; they only exist inside this tar archive. Maybe they need to be compiled separately, and this package is not designed to integrate them.
I figured out the solution, and it might be useful for future reference, so I put it here.
One has to differentiate between:
the "firmware" (the tar mentioned in the question), which is enabled with BR2_PACKAGE_RPI_FIRMWARE=y in Buildroot. This makes start.elf and fixup.dat contain the correct data from this tar. The fact that the tars also contain the desired binaries is only a "coincidence".
the desired applications, which are packaged as "userland" (see here). If one finds the line # BR2_PACKAGE_RPI_USERLAND is not set in the .config in the Buildroot project's root directory and replaces it with BR2_PACKAGE_RPI_USERLAND=y, the applications (vcgencmd, raspistill, ...) are built and included in the final image (I could not find the option in make menuconfig, but that is no problem if you modify the variable directly, as sketched below).
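A minimal sketch of that manual edit from the shell; the olddefconfig step is an assumption about the usual kconfig workflow (it re-resolves dependent options), not part of the original recipe:
# run from the Buildroot top-level directory
sed -i 's/^# BR2_PACKAGE_RPI_USERLAND is not set$/BR2_PACKAGE_RPI_USERLAND=y/' .config
make olddefconfig    # let kconfig settle any dependent options
make                 # rebuild; vcgencmd, raspistill, ... end up in the image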
Therefore, the question is answered. But you might run into some issues ;-) :
Question 1: I get a "Segmentation fault" when running raspistill
For raspistill -o i.jpg you might get this output:
mmal: mmal_vc_shm_init: could not initialize vc shared memory service
mmal: mmal_vc_component_create: failed to initialise shm for 'vc.camera_info' (7:EIO)
mmal: mmal_component_create_core: could not create component 'vc.camera_info' (7)
mmal: Failed to create camera_info component
Segmentation fault
(with an empty image file), see here for details.
Answer: this is related to /dev/vcsm (or /dev/vcsm-cma) missing, which is used for camera control / video decoding "stuff". Symlinking it to /dev/vc-mem, as suggested in various places around the net, does not help.
Solution: I was using the latest Buildroot with kernel 5.10.x (buildroot-2021.02.1) and simply "downgraded" to buildroot-2020.02.1, rebuilt it, and /dev/vcsm appeared and everything works fine.
Question 2: I want to do it in docker containers
Answer: No problem. I used balenalib/rpi-raspbian:latest (as suggested here) and it worked flawlessly when run with docker run --privileged --device=/dev/vchiq --rm -it balenalib/rpi-raspbian:latest. Only the proper devices and their support are needed for this, so the package BR2_PACKAGE_RPI_USERLAND=y could be omitted completely.
Question 3: Does it work with 64-bit?
Answer: No. I tried out the recent version (raspberrypi3_64_defconfig) of Buildroot as well as the version from Feb 2020 mentioned above, and in both cases /dev/vcsm (or /dev/vcsm-cma) is missing. uname -a: Linux cpi64 4.19.97-v8 #1 SMP PREEMPT Sat Apr 17 14:13:11 CEST 2021 aarch64 GNU/Linux

Making VS Code Remote extension work with GLIBC 2.17 installed in non standard locations

I'm trying to use the VS Code Remote extension to connect to a remote host that runs RHEL/CentOS 6, but it fails to connect since CentOS 6 ships with GLIBC 2.12 and GLIBCXX 3.4.13. As mentioned in this post, in order to get the extension to work, the workaround is to install GLIBC >= 2.17 and GLIBCXX >= 3.4.18.
Unfortunately, I don't have sudo access on the server, so I won't be able to update these libraries using the bash script provided in the link. Also, in this SO post, the author says not to update the system GLIBC, since it can break system applications. That being said, I've tried something different: I extracted those rpm packages, as described in this blog, inside my home folder. I then updated the environment variables PATH and LD_LIBRARY_PATH in ~/.bash_profile to point to these new locations. But the node binary (in VS Code Remote) still can't find these libraries.
Is there a way to let the node binary know where to look for these libraries? More precisely, can someone explain how I can make this extension work without sudo access?
I've got it to work by installing gcc and glibc using Linuxbrew. See this post for more details: https://github.com/microsoft/vscode-remote-release/issues/103#issuecomment-546551293.
A couple of things to take note of:
Node binary versions in VS Code Server may vary between commits. In the GitHub comment above, the author uses node@10 -- you may replace it with node@12; everything will still work.
Make sure glibc and gcc are properly installed using Linuxbrew. This step is key.
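For flavor, a hedged sketch of that route; the formula names, server path, and library locations are assumptions rather than a quote from the linked comment, and the patchelf step is one common way to point a prebuilt node binary at a newer loader:
# install newer runtimes under $HOME via Linuxbrew (no sudo needed)
brew install glibc gcc patchelf
# re-point the VS Code Server node binary; <commit> and the brew paths are illustrative
NODE=$HOME/.vscode-server/bin/<commit>/node
patchelf --set-interpreter "$HOME/.linuxbrew/opt/glibc/lib/ld-linux-x86-64.so.2" \
         --set-rpath "$HOME/.linuxbrew/opt/glibc/lib:$HOME/.linuxbrew/opt/gcc/lib" \
         "$NODE"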

Does Swift executable binary need the .swiftmodule, .swiftdoc and .build files to run?

I'm writing my Swift app for Ubuntu using Vapor, and my mission is to have the smallest Docker image for production. I've trimmed down my image significantly, but I wanted to know, just out of curiosity: does my final executable need all the compiled .swiftmodule, .swiftdoc and .build files in the same directory?
tl;dr: No.
The folders/files you listed are byproducts of the build process and can be safely discarded.
When it comes to distribution, your application is just like any other Linux executable. You must have all dynamically linked libraries available on the target system.
These include the runtime libraries of the Swift toolchain plus any compiled C modules your application (or the framework beneath it) links with (*).
You can check the dependencies of the executable using the ldd command.
Some of them are available as packages, some of them will need to be copied to the target system manually.
(*) In case of a Vapor 2 application, such C modules are libCHTTP.so and libCSQLite.so, which are placed in your build folder.
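As a concrete example of that check (the path .build/release/Run is an assumption based on Swift Package Manager's default layout and Vapor's template; yours may differ):
ldd .build/release/Run    # lists the shared libraries the executable loads
# anything reported as "not found" (e.g. libswiftCore.so) must be installed
# or copied onto the target before the binary will start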

cross-compile postgresql for ARM Sitara AM335x

I'm having trouble cross-compiling PostgreSQL for my TI Sitara AM335x EVM SK. My host system is an i386 machine running Ubuntu 12.04.
My application is written in C++ using Qt. When I try to compile, I get the error that libpq.so is incompatible. I believe this is because the cross-compiler is trying to use the host's libpq.so instead of one built for the target system (which, as I have found out, doesn't exist).
I've downloaded the source for PostgreSQL with the intention of cross compiling that in order to give me the libpq.so library that will be compatible with my target system, however there is virtually no information on how to do this.
I have tried using the CC argument to point the configure script at my compiler: CC=/home/tim/ti-sdk-am335x-evm-06.00.00.00/linux-devkit/sysroots/i686-arago-linux/usr/bin/arm-linux-gnueabihf-gcc, but the configure script gives me this error: configure: error: cannot run C compiled programs. If you meant to cross compile, use --host.
The configure file makes a small reference to the --host option, but the only information in it that I could find refers to MinGW and Windows, which isn't what I want.
Some quick searching through the configure file turns up the --host option, but with no explanation of what a valid host is. I'm assuming that along with the --host option there will be an associated --target.
What arguments can I give the configure script so that it will cross-compile with the correct compiler and generate a library that my target device can use? Are there any resources out there that I haven't found regarding how --host/--target work or how to use them?
OK, so after fiddling around for a little while, I was actually able to cross-compile PostgreSQL and answer my own question.
Before going any further, I realized I had forgotten to add the path to my cross-compiler to the PATH environment variable. I used export PATH=/path/to/cross/compiler:$PATH to insert the compiler path into PATH.
Next, I experimented with the --host option. To start with, I tried ./configure --host=arm-linux-gnueabihf, and the configure script accepted this as the host argument. I then went on to run the makefile, which produced errors: selected processor does not support Thumb mode. A quick search for this error led me to this webpage: http://www.postgresql.org/message-id/E1Ra1sk-0000Pq-EL#wrigleys.postgresql.org.
This webpage gave me a bit more information, since the person there was trying to do something very similar to me. One of the responders mentioned that --disable-spinlocks is intended for processors that aren't supported by PostgreSQL out of the box. Emulating the arguments used on that website, I ran
./configure --host=arm-linux CC=arm-linux-gnueabihf-gcc AR=arm-linux-gnueabihf-ar CPP=arm-linux-gnueabihf-cpp --without-readline --without-zlib --disable-spinlocks
to generate my makefile. This makefile actually built everything, including the libpq.so library I was needing.
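A possible refinement (an assumption about PostgreSQL's makefiles, not something done in the original run): the build tree supports building single subdirectories, so once configure has succeeded you can build just the client library instead of the whole server:
# after the configure invocation above has succeeded
make -C src/interfaces/libpq            # builds only libpq
ls src/interfaces/libpq/libpq.so*       # the library to deploy next to your Qt app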
Hope this helps somebody else in the future!

Using OpenCV with Matlab: mex does not find header files

I am trying to connect Matlab and OpenCV following this tutorial: http://xanthippi.ceid.upatras.gr/people/evangelidis/matlab_opencv/
Since I work on a Linux system, I can't follow the instructions for the mexopts.bat file: the Linux equivalent (mexopts.sh) seems to be rather different, and I find none of the options mentioned in the tutorial in the mexopts.sh file.
So I try to set the options in Matlab.
I downloaded the most recent OpenCV version (2.4.8) and compiled it according to the instructions on their site (http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html, without the make install).
The structure of the OpenCV directory seems to be a problem, since there are multiple include directories and I was unsure which to specify.
[edit]:
there was a lot of pointless code here, which has all been made superfluous as @Peter made me aware that I simply misunderstood how make/make install work.
"Without the make install" is the problem. The include files and built libraries are scattered all over the source tree, as determined by the build system. make install collects all the headers that are appropriate for use by users of the library and puts them in one directory for including. It does the same with the libraries themselves. make install may also "strip" the libraries, which drastically reduces the size and improves the load time.
If you don't want the installation in a system directory, you can set the install path to be somewhere in your home directory.
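For example, a minimal sketch of a home-directory install and a matching mex invocation; the prefix and the exact library list are illustrative assumptions:
# from the OpenCV build directory: install into your home instead of /usr/local
cmake -DCMAKE_INSTALL_PREFIX=$HOME/opencv-2.4.8 ..
make -j4 && make install
# then, inside Matlab, compile against that prefix (paths illustrative):
# mex my_wrapper.cpp -I/home/you/opencv-2.4.8/include \
#     -L/home/you/opencv-2.4.8/lib -lopencv_core -lopencv_imgproc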