Use Yocto to extend a read-only filesystem with extra packages

I have an embedded Linux proof-of-concept project that needs to add some packages to an existing piece of hardware with a read-only filesystem. I am very new (one week) to Yocto, but it seems like this is possible. I am looking for a general road map of how to achieve it, but any detailed strategy ideas would be helpful to keep in mind as I RTFYM.
It is a networked device, running on ARMv5t hardware.
64GB SD/MMC card is available (empty) and mounted.
telnet, nfs, busybox utils available.
no resident dev tools
The packages I need to add are openssl, python, zeromq, pyzmq, and perhaps other python modules in the future. I cannot place these into the rootfs because it is read-only, but they can reside on the sd card. I am trying to understand how to use Yocto to create this set of packages and collect them together as a build output. What I have so far:
The EXTERNAL_TOOLCHAIN setup with the meta-sourcery layer is working
I can build python and pyzmq independently with bitbake -b
Don't know how to add pyzmq or other modules to python tree
How to build & collect just these items without building entire image?
The python part is running on the hardware, but I just hand-copied it to the nfs folder. I am asking if this is a valid approach and, if so, for some directional detail. I hope I have provided enough information.
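One plausible flow, assuming a standard Poky-style build directory (the recipe names below are assumptions; check what your layers actually provide with `bitbake-layers show-recipes`): switch the build to ipk packaging in `conf/local.conf`, build only the recipes you need, and install the resulting packages into a directory on the SD card.

```
# conf/local.conf -- produce ipk package feeds rather than only images
PACKAGE_CLASSES = "package_ipk"
```

Then `bitbake openssl python zeromq python-pyzmq` builds just those recipes and their dependencies, without building an entire image; the packages land under `tmp/deploy/ipk/<arch>/`. On the target they could be installed into the SD card with something like `opkg -o /media/sdcard install python_*.ipk`, provided `PATH`/`PYTHONPATH` on the device are pointed into that tree.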

Booting a clone of another board's Mendel system with already installed libraries

I posted this question on the Unix Stack Exchange (I did quite some research, I must say), evaluating options like Remastersys and Respin, Clonezilla, dd, and so on. Now I am on SO because the Google Coral tag might be helpful. I have found a few posts here related to backing up using dd (mostly expected, given the questions I have seen), but also some errors that other users are running into.
After getting everything working, I installed several libraries (via apt, and via git clone and make install). So now I would like to have that same system, with the same libraries, on 3 different boards.
My main idea was: the optimal path is to clone the whole system. Instead of installing Mendel on a new board following the tutorial and then running an install.sh script (which can take some time due to downloading, installing, etc.), wouldn't it be easier to just boot an image of a Mendel system with my needed libraries? (Making a PRIME SYSTEM and cloning it to distribute to several boards.)
Problems arose because the already mentioned possible paths (Respin, Clonezilla, etc.) seem like they won't work on this system in this way. So booting a new Mendel from scratch and backing up the drive with dd seems like a doable option, probably needing some path and name changes afterwards, such as usernames. If all boards were named the same, it wouldn't be a problem; rather the opposite in my specific case. But names are assigned randomly to avoid collisions when several boards run at the same time.
Is there a simple way to install a Mendel system on a Google Coral that is a clone of a system from another Coral? (Let's call this the PRIME SYSTEM.) This would mean the exact same system Google provides, but with specific libraries already installed. Or is the only path to do a normal installation and then back up the PRIME SYSTEM via dd (or some other tool I do not know of?), with the proper name and path changes after it is done?
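The dd route described above can be exercised safely on plain files first; on real hardware, `if=` would be the board's block device (e.g. `/dev/mmcblk0`, which is an assumption for the Coral and worth verifying with `lsblk` before flashing anything):

```shell
# Clone a small dummy "disk" image and verify the copy is bit-identical.
# On real hardware the first dd would read from the board's block device.
dd if=/dev/zero of=prime.img bs=1M count=4 2>/dev/null
dd if=prime.img of=clone.img bs=1M 2>/dev/null
cmp prime.img clone.img && echo "images identical"
```

After flashing a clone, the per-board adjustment would be limited to renaming (e.g. `hostnamectl set-hostname board-02`) so the randomly assigned names don't collide.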

Yocto: deploy Debug or Release prebuilt?

I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types I should deploy?
You can have two different virtual recipes each with their own .so file. This then warrants a selection in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), so either in a machine or distro configuration file. This is probably preferred if you consider having release and debug distros.
A second option is to install the libraries in two different paths, in two different packages (use FILES_my-package for that), and make them RCONFLICTS_my-package each other to be sure they can't both be in the rootfs. After that, you could write a pkg_postinst_my-package() function specific to each package that moves the library from the "different" path to the intended one. This runs both at build time when creating the rootfs and at runtime on first boot, so you need to make sure to exclude one or the other (this is usually done by checking whether ${D} is set, which it is at build time but not at runtime).
c.f.: http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
If you can manage to have both libraries installed in your rootfs and select the one you want with the LIBRARY_PATH environment variable, a simple recipe, with two packages with each library in a different location, will be sufficient.
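The second option could look roughly like this (a sketch using the old-style underscore overrides, matching the variables named above; `libmytool.so` and the paths are placeholders):

```
# mytool.bb (fragment) -- package the Release and Debug builds separately
PACKAGES = "${PN}-release ${PN}-debug"

FILES_${PN}-release = "${libdir}/release/libmytool.so"
FILES_${PN}-debug   = "${libdir}/debug/libmytool.so"

# Make sure both variants can never end up in the same rootfs
RCONFLICTS_${PN}-release = "${PN}-debug"
RCONFLICTS_${PN}-debug   = "${PN}-release"

pkg_postinst_${PN}-release() {
    # $D is set during rootfs construction and empty on the target,
    # so this move works in both contexts
    mv $D${libdir}/release/libmytool.so $D${libdir}/libmytool.so
}
```

A matching pkg_postinst_${PN}-debug() would do the same for the debug path.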

Include precompiled zImage in yocto project

I have a custom board with an imx6dl chip and peripherals. I have compiled u-boot, zImage, and a rootfs from examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, it fails to run properly (some drivers are not working; the board boots and brings up the display interface, but the touchscreen does not work, for example).
Is there any way to include the precompiled binaries (zImage, u-boot, and device tree) in bitbake recipes? I'm very new to the Yocto project and only need to get a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel, and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
Set PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an ARMv7 build on Poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and you can proceed from there.
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the freenode server) if they know of a proper BSP layer. FSL chips are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there is a high chance that meta-freescale just does the trick nicely.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the ARMv7 architecture that the imx6dl is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.
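Put together, the starting point described above amounts to two lines in `conf/local.conf` (a sketch; exact syntax may vary between Yocto releases):

```
# conf/local.conf -- rootfs-only build with a dummy kernel provider
MACHINE = "qemuarm"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
```

With that in place, `bitbake core-image-minimal` should produce a root filesystem without attempting a kernel build.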

Where can I find what drivers built in my yocto project Linux kernel image?

I'm using the Yocto project to build a Linux kernel image, following these steps:
https://www.at91.com/linux4sam/bin/view/Linux4SAM/Sama5d27Som1EKMainPage
For some reasons I want to reduce my image size so I can flash it to an 8 MB QSPI memory. I have tried to reduce the size of my rootfs: I removed some packages that I found in the .manifest file and some distro features. But I have not found how to reduce the kernel size, which stays fixed at 4.2 MB.
I think that removing drivers I don't need will reduce the kernel size.
I just want to know how I can find which drivers are built into my image and where I can find them, and later how I can delete the ones that I don't need.
Thank you.
If you check the .config file that was generated for your BSP, it will show which drivers (and other things) were built into your kernel (look for the options set to 'y').
Such file should be somewhere in:
tmp/work/<machine>/linux-yocto/<version>/linux-*-build/.config
Sorry that I can't give you the exact location, but it literally depends on what BSP/MACHINE you are building for.
Also, if you want to modify such configuration, you can call:
$ bitbake -c menuconfig virtual/kernel
That will bring up the menuconfig ncurses interface, in which you can not only see what is enabled but also modify what you need.
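As a quick illustration of reading such a file, here is a sketch over a made-up `.config` excerpt (the option names are examples only): options built into the kernel image end in `=y`, loadable modules in `=m`.

```shell
# Create a tiny sample .config (illustrative; a real one has thousands of lines)
cat > sample.config <<'EOF'
CONFIG_USB_SERIAL=y
CONFIG_TOUCHSCREEN_ADS7846=m
# CONFIG_SOUND is not set
EOF

# List only the drivers compiled into the kernel image (=y)
grep '=y$' sample.config
# prints: CONFIG_USB_SERIAL=y
```

Turning a built-in (`=y`) driver into a module (`=m`), or disabling it entirely, is what shrinks the zImage itself.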

Vagrant Berkshelf - Shelf Path?

Is it possible to set the path where the berkshelf plugin puts the cookbooks it installs? (As in the .berkshelf folder)
I am running Windows 7.
I am currently trying to install a MySQL server on a VM using an Opscode cookbook, and here at work the %HOMEDRIVE% system variable is set to a network drive. So when Berkshelf starts at the beginning of the Vagrantfile, it pushes the cookbooks to the network drive, which makes it slow, and well, it's not where it should be. Is there a fix for this?
VirtualBox did this as well, but I fixed it by altering the settings. I tried looking for some equivalent setting for Berkshelf, but the closest I got was for standalone Berkshelf (that is, not the Vagrant plugin); it appears you can set this environment variable:
ENV['BERKSHELF_PATH']
Found here:
http://www.rubydoc.info/github/RiotGames/berkshelf/Berkshelf#berkshelf_path-class_method
I need the cookbooks it reads from the Berksfile to be stored on my laptop's local drive instead, since in my scenario I cannot have the mobility of the VM limited to the building because of files stored on the network.
Any insight would be much appreciated.
Perhaps it's better to use standalone Berkshelf over the Vagrant plugin?
If you want portability (a full chef-repo ready for chef-solo runs), you are better off using standalone Berkshelf instead of the vagrant-berkshelf plugin, which is not that flexible.
For complex cookbooks, I prefer standalone Berkshelf, as it allows me to run berks install --path chef/cookbooks to copy all required cookbooks from ~/.berkshelf/cookbooks; then I can just tar the whole thing and transfer it to other machines for the same chef-solo run. Some people use Capistrano to automate the tar and scp/rsync over the network. I just use rsync/scp ;-)
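The tar-and-transfer step could be sketched like this (the directory layout is a stand-in for what `berks install --path chef/cookbooks` would actually produce):

```shell
# Stand-in for a berks-vendored cookbook tree
mkdir -p chef/cookbooks/mysql/recipes
echo '# default recipe' > chef/cookbooks/mysql/recipes/default.rb

# Bundle the whole repo for scp/rsync to another machine
tar czf chef-repo.tar.gz chef/
tar tzf chef-repo.tar.gz | grep default.rb
# prints: chef/cookbooks/mysql/recipes/default.rb
```

On the receiving machine, untarring the archive gives a self-contained cookbook path to point chef-solo at.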
HTH