Buildroot - extract a custom board/buildroot config/kernel config out of tree

I have customized Buildroot for a new board (derived from the Raspberry Pi Zero). My changes so far are all in-tree:
.config
board/passkeeper/genimage-passkeeper.cfg
board/passkeeper/post-build.sh
board/passkeeper/post-image.sh
board/passkeeper/rootfs_overlay/etc/init.d/S41passkeeper
board/passkeeper/rootfs_overlay/etc/mdev.conf
board/passkeeper/rootfs_overlay/etc/udhcpd.conf
configs/passkeeper_defconfig
output/build/linux-custom/.config
Now, reading the documentation, I am a bit confused about how to move all of this into a separate folder via BR2_EXTERNAL. I am also not sure how to move the Linux configuration out of output/build/linux-custom/.config. Running
make linux-update-defconfig BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE=/tmp/passkeeper/linux/linux-config
results in
Unable to perform linux-update-defconfig when using a defconfig rule
Can somebody please provide step-by-step guide on that?

[You are asking two questions. I will answer only the question about saving the linux .config file; the other question is too generic.]
You need to set the appropriate options in the Buildroot configuration (menuconfig), not just override them on the command line, otherwise the two end up inconsistent.
The complete process for creating a Linux defconfig based on a pre-existing in-tree defconfig is the following; you have already done steps 1, 2 and 3.
1. In the Buildroot configuration, select BR2_LINUX_KERNEL_USE_DEFCONFIG or BR2_LINUX_KERNEL_USE_ARCH_DEFAULT_CONFIG.
2. Run make linux-menuconfig and adapt the Linux configuration to your needs.
3. Build and test; iterate over step 2 until you have the configuration you want.
4. In the Buildroot configuration, switch to BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG and set BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE to the place where you want to save it (typically board/passkeeper/linux.config, or $(BR2_EXTERNAL_PASSKEEPER)/board/passkeeper/linux.config if you are using an external tree).
5. Run make linux-update-defconfig. It is essential that you do this before building anything else, otherwise Buildroot will complain that the file doesn't exist.
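After step 5, the kernel-related part of configs/passkeeper_defconfig should end up looking roughly like this (a minimal sketch; the BR2_EXTERNAL_PASSKEEPER variable assumes you registered your external tree under the name PASSKEEPER):
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="$(BR2_EXTERNAL_PASSKEEPER)/board/passkeeper/linux.config"
Remember to re-run make savedefconfig afterwards so configs/passkeeper_defconfig picks up the new kernel-config options.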

Related

Yocto: Build custom bootloaders outside aarch64-poky-linux work directory

I have a project, mostly complete, for a custom third-party SoM which requires custom bootstrap/bootloaders. Currently the recipe is in a meta-bsp layer, which works all right, but the bootloaders are built in the aarch64-poky-linux work directory. We have multiple SoM models from this manufacturer that each require different bootloader configs, and I'd like them to be built in a machine-specific, non-linux directory if possible, something like aarch64-<machine>-none. Is this possible?
We are using the dunfell release.
Recipes are in meta-mfg/recipes-bsp/mfg-/mfg-_version.bb
Fairly standard recipe setup including:
SECTION = "bootloaders"
SRC_URI..
DEPENDS (native packages only)
do_configure()
do_compile()
do_install()
Current build directory: tmp/work/aarch64-poky-linux/mfg-bootloader/1.x.x-rx/
Preferred build directory: tmp/work/aarch64-<machine>-none/mfg-bootloader/1.x.x-rx/
When building for different machines/SoMs it uses the same directory and reuses the previous build, which may be incorrect. We do have a workaround where we don't stamp the configure/build/install tasks so they always rebuild, but we'd like a cleaner, more appropriate implementation.
Is there any way to accomplish this?
I haven't been able to find any information so far.
I have since been pointed to the solution on the IRC channel; for anyone with the same question:
Add PACKAGE_ARCH = "${MACHINE_ARCH}" to the recipe to build in machine specific folders.
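For reference, a minimal sketch of where the line goes (the recipe name and the remaining content are just the placeholders from the question):
# mfg-bootloader_1.x.x.bb
SECTION = "bootloaders"
PACKAGE_ARCH = "${MACHINE_ARCH}"
With that set, each MACHINE gets its own work directory (named after the machine rather than the generic aarch64 tuple), so builds for different SoMs no longer reuse each other's output.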

How to set up your own device tree for a Raspberry Pi in yocto?

I'd like to disable and enable some pins in my RPi project. These are GPIO 6, GPIO 5 and GPIO 26; I want to use these pins in my own kernel driver.
For this project I connect a simple electronic board via the GPIOs. The minimal system is built via yocto, and I'd like to change the device tree file to disable/enable the GPIOs.
I need to change or make my own dts file. For that I think I will need to:
find the original RPi dts
patch it or create my own dts
add it to the layer.conf
add file to the kernel recipe via append
How can I do this? Or where can I find the sources?
Actually I am struggling to find the dts files for the RPi2 I am using. I was checking the "raspberrypi2-poky-linux-gnueabi" recipe results (and did not find any files).
I can't find any tutorial on how to set up yocto + meta-raspberrypi + my own dts; it would be great if we could figure out the necessary steps.
I'm not convinced this question has been well answered, so let me take a few minutes and document what I've done to add device tree overlays to my yocto builds.
This is a multi-problem process.
I'm going to make a few assumptions:
* You source your oe-init-build-env in a shell, and do your bitbake builds manually in a terminal (or you know how to do it with equivalent tooling)
* You know (or are already learning) the basics of device trees...
Start with your own meta layer. Mine is out on github.
You'll need to create an *-overlay.dts source file. You can start with a simple placeholder, and stuff it (quite literally) anywhere on your system. We'll import it into your meta layer in the next step, using bitbake to do some of the staging and what-not for us.
recipetool appendsrcfile -wm rpi /path/to/your-layer-meta virtual/kernel /path/to/your-overlay.dts 'arch/${ARCH}/boot/dts/overlays/your-overlay.dts'
At this point, you should end up with a recipes-kernel/linux directory containing an appropriate bbappend targeting the $MACHINE type given to -wm (rpi, as above), ready to copy the device tree source file into the proper spot for bitbake to find it when building the kernel. But it still won't be included in your kernel build.
We need to add the overlay reference to the KERNEL_DEVICETREE variable, in places that will cover the scopes of: linux, bootfiles, and the sdcard_image-rpi.bbclass from meta-raspberrypi.
In the linux bbappend created by the recipetool command above, add KERNEL_DEVICETREE += "overlays/your-overlay.dtbo" to make the linux kernel build include your dts as something to compile into a dtbo.
To make the sdcard_image-rpi.bbclass copy the file, you'll need to add KERNEL_DEVICETREE += "overlays/your-overlay.dtbo" to your image recipe.
To make the overlay active, you'll need to create a recipes-bsp/bootfiles/rpi-config_git.bbappend where you can append a do_deploy step that adds the dtoverlay=your-overlay line to config.txt; a minimal sketch follows.
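Something like this, assuming the pre-honister _append override syntax; the config.txt destination path differs between meta-raspberrypi releases (newer ones use ${BOOTFILES_DIR_NAME}, older ones a bcm2835-bootfiles directory), so check your rpi-config recipe:
# recipes-bsp/bootfiles/rpi-config_git.bbappend
do_deploy_append() {
    # enable the overlay at boot; adjust the deployed config.txt path to your release
    echo "dtoverlay=your-overlay" >> ${DEPLOYDIR}/${BOOTFILES_DIR_NAME}/config.txt
}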
I use my layer for more than one project, so I felt OK with having the dts compile with every kernel but only copying it into images whose image recipe added it to KERNEL_DEVICETREE. For further insurance that these things don't interfere with images I don't want them in, my rpi-config append has a test to see whether it should add the dtoverlay line to config.txt.
Of course, this all assumed you were going to use your own home-grown DTS rather than starting from a kernel-sourced one. The process would be largely the same, but you'd be able to patch the existing one, or copy it, or whatever you want to do in your linux recipe.
I hope this helps! I know it's an old question.
First you need to find the kernel used in your yocto project; the recipe is linux-raspberry.bb or something like linux-*.bb. The preferred kernel is probably set in your local.conf or machine.conf: PREFERRED_PROVIDER_virtual/kernel ?= "linux-raspberry"
This is indirectly set via "meta-raspberrypi/conf/machine/include/rpi-default-providers.inc" which is included via "rpi-base.inc"
Once found, take a look at the recipe, clone the git repository of the kernel on the right branch, and reset to the right SRCREV.
Once downloaded, the dts files are in /path/of/my/kernel/linux-raspberry/arch/arm/boot/dts/. You can find the name of the devicetree file used in the kernel recipe, local.conf or machine.conf, with the variable KERNEL_DEVICETREE = "..."
With meta-raspberrypi and the rpi2 machine selected, the dts files can be found in <path to build dir>/linux-raspberrypi2-standard-build/source/arch/arm/boot/dts/. The source dir is a symlink to the git sources.
You can add a new dtb by creating dtsi/dts files (don't forget to add them to the Makefile).
Create a patch, add it to the kernel recipe:
SRC_URI += "file://0001-mypatch.patch"
and put the patch file in your meta layer like this:
├── files
│   └── 0001-mypatch.patch
└── linux-raspberry.bb
Modify the KERNEL_DEVICETREE variable to add your new dtb.
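For example (the dtb name is a placeholder for whatever your new dts compiles to):
KERNEL_DEVICETREE += "my-board.dtb"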
Now you can bitbake your kernel/image, your new dtb will be created.

Yocto/bitbake/OpenEmbedded: Best place for build/conf/local.conf's content?

I'm trying out yocto (2.0, jethro) and I want to build an image starting from core-image-minimal. This works fine.
Every website out there mentions modifying the file build/conf/local.conf with (some of) my customizations, for example the target machine (through MACHINE) or some global settings (through EXTRA_IMAGE_FEATURES).
I also need to modify some specific packages and the way to do it is to create a custom layer. So far so good.
What I don't understand is how to "save" all my configuration to version control. I want everything I change to be located in files that I can commit, so that anybody else can reproduce the exact same build (or even contribute to the project). Putting almost everything in build/conf/local.conf goes against that goal; the file is under a "build" directory, so I can't just clone a git repo and start building...
Is it really the way the yocto project works? Or am I missing a different configuration file where I need to put these settings? I thought I could place all of this in a custom layer, but it does not seem to work...
Any idea or suggestion?
Thanks!
Thanks Ross, that clarified it!
Here are some notes about my file organization, which I couldn't format into a comment on your answer.
Almost all of my customization went into a layer, meta-mylayer, with the project-wide settings in meta-mylayer/conf/distro/mylayer.conf, except:
DISTRO which is set in build/conf/local.conf. This is how you tell yocto what you want to build.
MACHINE, which is also set in build/conf/local.conf. The reason is that the same image/distro combination could be built for different machines, so this can't be hard-coded for every image.
Layers are manually added to build/conf/bblayers.conf. That's the last bit I wish I could move to my DISTRO or something. For now the layer folders are git submodules and they are added using bitbake-layers add-layer.
In general everything in your local.conf that is "your project" should be moved to your own distro configuration (MACHINE, image features, package lists). Stuff like where DL_DIR is can be moved to a common site.conf if you wish. Eventually you should end up with a local.conf which just sets DISTRO and some other personal variables.
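A rough sketch of how that split can look, using placeholder names and jethro-era syntax (none of these values come from the question):
# meta-mylayer/conf/distro/mydistro.conf - the project-wide settings, versioned in the layer
require conf/distro/poky.conf
DISTRO = "mydistro"
DISTRO_NAME = "My Distro"
EXTRA_IMAGE_FEATURES = "debug-tweaks"
# build/conf/local.conf - only the per-checkout choices remain here
DISTRO = "mydistro"
MACHINE = "qemux86"
DL_DIR = "/srv/shared-downloads"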

is there a way to run storage statistics on a yocto-produced filesystem?

I used Yocto to build a filesystem, using a .bbappend of core-image-minimal. Two questions:
How can I figure out which packages are taking up the most storage space in the rootfs?
I can't think of a way other than looking into the ${D} of every package and seeing how big its components are. There's got to be a more systematic, and intelligent, way to do that.
From what I can decipher from the manifest, there is nothing related to the size of the packages being included.
Also, removing some of the packages I added via IMAGE_INSTALL seems to remove the package, but the size of the resulting image doesn't change!
I compared the size of a particular .so on the build machine and on the installation device (a vm) and found that the size on the installation device was 20-30% of the original size seen on the build machine. Any explanation?
Thanks!
1) One way is to enable buildhistory, by adding the following to local.conf:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
This will create a directory (git repo) buildhistory in your $BUILDDIR. There you'll be able to find e.g.
images/$MACHINE/eglibc/$IMAGE/installed-package-sizes.txt
That file will give you the sizes of all installed packages.
There are a lot more things you can learn from buildhistory, see buildhistory introduction
2) Where did you compare the particular .so file? If it was from the package's ${B} (i.e. where the library is built), it's not surprising, as the installed .so file will be stripped. The debug information is installed into the -dbg package (as the debug info is usually useless on the target and the smaller size is of much higher importance).
After some poking around in the scripts/ subdir and some googling about the existing scripts, it turns out that the good people of Yocto ship scripts that do exactly this, working out of the box:
scripts/tiny/dirsize.py and ksize.py.
dirsize.py will give you a breakdown of pkg sizes for your rootfs; while ksize.py will give you the equivalent info for the kernel.
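A hedged usage sketch (I'm assuming dirsize.py is simply run from inside an extracted rootfs; check the script's header for the exact invocation and the path to your image's rootfs):
cd tmp/work/<machine>/<image>/1.0-r0/rootfs
<path-to-poky>/scripts/tiny/dirsize.py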

Developing with Qooxdoo and multiple developers

I'm interested in Qooxdoo as a possible web development framework. I have downloaded the SDK and installed it in a central location on my PC as I expect to use it on multiple projects. I used the create-application.py script to make a new test application and added all the generated files to my version control system.
I would like to be able to collaborate on this with other developers on other PCs. They are likely to have the SDK installed in a different location. The auto-generated files in Qooxdoo seem to include the SDK path in both config.json and generator.py: if the SDK path moves, the generator.py script stops working. generator.py doesn't seem to be too much of a problem as it looks in config.json for an updated path, but I'm not sure how best to handle config.json.
The only options I've thought of so far are:
1. Exclude it from the VCS, but there doesn't seem to be a script to regenerate it automatically, so this could be dangerous.
2. Add it to the VCS but have each developer modify the path line and accept that it might need to be adjusted whenever changes are merged.
3. Change config.json to be a path and a single 'include' line that points to a second file that contains all the non-SDK-path related information.
4. Use a relative path to the SDK and keep a separate, closely located copy of the SDK for every project that uses it.
Approach 1 would be ideal if the generation script existed; approach 2 is really nasty; I couldn't get approach 3 to work and approach 4 is a bit messy as it means multiple copies of the SDK littered about the place.
The Android SDK seems to deal with this very well (using approach 1), with the SDK path in its own file with a script that automatically generates that file. As far as I can tell, Qooxdoo puts lots of other important information in config.json and the only way to automatically generate that file is to create a new project.
Is there a better/recommended way to deal with this?
As an alternative to using symlinks, you can override the QOOXDOO_PATH macro on the command line:
./generate.py source -m QOOXDOO_PATH:<local_path_to_qooxdoo>
(Depending on the shell you are using you might have to apply some proper quoting of the -m argument). This way, every programmer can use his locally installed qooxdoo SDK. You can even drop the QOOXDOO_PATH entry from config.json to enforce this.
We work with a symbolic link pointing to the sdk ... config.json contains just the path of the link.
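A minimal sketch of that setup (the paths here are assumptions; point the link at wherever each developer keeps the SDK):
# each developer creates the link once, pointing at their local SDK install
ln -s /opt/qooxdoo-sdk ./qooxdoo-sdk
# config.json's "let" section then only needs the relative link path, e.g.
#   "QOOXDOO_PATH" : "./qooxdoo-sdk"
The link itself is kept out of version control, so config.json can be committed unchanged.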