How to run a Yocto-built Raspberry Pi image in QEMU? - yocto

I'm building an image for the Raspberry Pi in Yocto. How can I run the same image in QEMU?
I included meta-raspberrypi in Poky (sumo branch) along with its dependencies (meta-openembedded). I don't want to flash the image to an SD card and run it on the hardware every time for simple tweaks.
MACHINE ??= "raspberrypi2"
This is what I have included in local.conf.
So how do I run my image in QEMU to check that the changes are applied? What should I include in local.conf to do this?

The answer above was on the right track but chose the wrong machine.
To run an image built with the meta-raspberrypi layer, you need to comment out the raspberrypi2 machine and set the machine to qemuarm. The reason is that the processor on the Raspberry Pi 2 is a 32-bit ARM chip, either a Broadcom BCM2836 or BCM2837 depending on the revision you have. If you have a Raspberry Pi 1 Model B, it is likely a Broadcom BCM2835. You can look up the hardware here (raspi-projects).
In your local.conf file, change the lines to match those below.
#MACHINE ??= "raspberrypi2"
MACHINE ??= "qemuarm"
Build the image with
$ bitbake core-image-base
# or
$ bitbake rpi-basic-image # deprecated
Then you will have a qemu image that can be run with
$ runqemu qemuarm
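Since local.conf uses the weak assignment ??=, you should also be able to switch machines per invocation without editing the file at all (MACHINE is passed through from the environment by default in Poky's build setup):
$ MACHINE=qemuarm bitbake core-image-base
$ MACHINE=raspberrypi2 bitbake core-image-base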
I have followed these steps myself and created the image I wanted, and I am now developing the system I need for a project. I hope this helps others move forward with similar goals.

Try MACHINE = "qemux86-64", then bitbake your image, then use the runqemu script.
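For example, with core-image-minimal as the target (any image recipe should work the same way):
MACHINE = "qemux86-64"
$ bitbake core-image-minimal
$ runqemu qemux86-64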

Related

Include a precompiled zImage in a Yocto project

I have a custom board with an imx6dl chip and peripherals. I have compiled u-boot, zImage, and a rootfs from the examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, the result fails to run (some drivers are not working; the board boots and brings up the display interface, but the touchscreen does not work, for example).
Is there any way to include the precompiled zImage, u-boot, and device tree binaries in BitBake recipes? I'm very new to the Yocto Project and only need a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel, and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
Set PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an ARMv7 build on Poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and then you can proceed from there.
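Put together, a minimal local.conf sketch along these lines should be enough for a first rootfs-only build (core-image-minimal is just an example target):
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
MACHINE = "qemuarm"
$ bitbake core-image-minimal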
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the Freenode server) whether they know of a proper BSP layer. FSL parts are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there is a good chance that meta-freescale just does the trick.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the ARMv7 architecture that the imx6dl is based on. The resulting root filesystem should be sufficiently compatible to get started, before moving on to a more tuned MACHINE configuration.

buildroot - how to build the v4l2-ctl tool?

I want to build the v4l2-ctl tool in a Buildroot (2019.02.4) Linux system.
But the v4l2-ctl option is legacy (it has been deprecated and replaced by a single option that builds all the libv4l utilities).
I tried setting BR2_PACKAGE_LIBV4L_UTILS to get v4l2-ctl, but after make, there is no v4l2-ctl tool in the Linux system.
I don't understand: where is v4l2-ctl? How do I build it?
I got it:
You need to rebuild the Buildroot package:
make libv4l-dirclean
then
make libv4l-rebuild
then
make
and v4l2-ctl will appear in the target's usr/bin directory.
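To double-check the result, you can verify that the option is set and that the binary landed in the target tree (paths assume Buildroot's default output directory):
$ grep BR2_PACKAGE_LIBV4L_UTILS .config # should print BR2_PACKAGE_LIBV4L_UTILS=y
$ ls output/target/usr/bin/v4l2-ctl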

Where can I find what drivers are built into my Yocto project Linux kernel image?

I'm using the Yocto Project to build a Linux kernel image following these steps:
https://www.at91.com/linux4sam/bin/view/Linux4SAM/Sama5d27Som1EKMainPage
For various reasons I want to reduce my image size so I can flash it to an 8 MB QSPI memory. I have tried to reduce the size of my rootfs: I removed some packages that I found in the .manifest file and some distro features. But I did not find how to shrink the kernel, whose size is fixed (4.2 MB).
I think that if I remove some drivers that I don't need, the kernel size will be reduced.
I just want to know how I can find which drivers are built into my image, where I can find them, and later how I can remove the ones I don't need.
Thank you.
If you check the .config file that was generated for your BSP, it will show which drivers (and other things) were built into your kernel (look for the options set to 'y').
The file should be somewhere under:
tmp/work/<machine>/linux-yocto/<version>/linux-*-build/.config
Sorry that I can't give you the exact location; it literally depends on what BSP/MACHINE you are building for.
Also, if you want to modify such configuration, you can call:
$ bitbake -c menuconfig virtual/kernel
That will bring up the menuconfig ncurses interface, in which you can not only see what is enabled but also change what you need.
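If you'd rather not hunt for the build directory by hand, a small sketch like this should work, using the standard B (build directory) variable that bitbake -e prints:
$ KBUILD=$(bitbake -e virtual/kernel | sed -n 's/^B="\(.*\)"$/\1/p')
$ grep -c "=y" "$KBUILD/.config" # count of built-in options
$ grep "^CONFIG_.*=y" "$KBUILD/.config" | less # browse them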

Modifying core-image-minimal to only make rootfs

I am working on an embedded project on a Zedboard. I would like (at least for now) to use BitBake only to produce a proper rootfs. I use the core-image-minimal recipe, as I need only a limited amount of stuff there. How can I "tell" it not to compile the kernel, not to build u-boot, etc., and to focus on the rootfs only?
Here is what I've done so far:
Created my build environment
Downloaded needed layers
Modified local.conf to add needed packages to rootfs
Then after typing
bitbake core-image-minimal
I get my rootfs, and all this unnecessary stuff. How can I avoid it?
I recently had the same need to build only the rootfs with Yocto, skipping other things such as the kernel, u-boot, image creation, etc. There are many legitimate reasons to do so. Anyway, this is what you have to do:
bitbake core-image-minimal -c image_cpio
In krogoth, this will populate the rootfs directory in build/tmp/work/$MACHINE/core-image-minimal/1.0-r0/ and create a rootfs.cpio file in build/tmp/deploy/images/$MACHINE/.
In morty, the rootfs.cpio archives seem to be in build/tmp/work/$MACHINE/core-image-minimal/1.0-r0/deploy-core-image-minimal-image-complete/.
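Task names vary between releases, so if image_cpio does not exist in yours, it may be worth listing the tasks the image recipe actually provides:
$ bitbake core-image-minimal -c listtasks | grep image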
Interesting concept. However, from what I have observed, Yocto needs the defconfigs of the kernel and u-boot to configure the image itself, so skipping that step can leave the rootfs unbootable.
This happened to me a lot, since I used different kernels to compile for different machines. I thought an ARM image would be the same and work on every machine, but I was wrong.
On Debian, the rootfs must be built against the corresponding kernel configuration for it to work, and Yocto is the same.
bitbake -e | grep IMAGE_FSTYPE
will give you something like:
IMAGE_FSTYPES="tar.gz cpio cpio.gz.u-boot ...."
It is a list of all the image types that will be generated. To remove the undesired ones, use this in your local.conf file:
IMAGE_FSTYPES_remove = " cpio cpio.gz.u-boot"
The space before the first element is not optional.
Regards
If you don't want to build a kernel, set the preferred provider of virtual/kernel to 'linux-dummy'.
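In local.conf that is a single line (the same trick mentioned in the imx6dl answer above):
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"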

building the kernel for a blackfin target

I'm trying to build a rootfs for a Blackfin target. However, I can't figure out how to configure the kernel that Buildroot produces. The first run through came up with menuconfig, but it has cached the .config since then and I can't see where to change it.
regards
santhosh babu
You need to run make linux-menuconfig to ask Buildroot to start the menuconfig interface of the Linux kernel.
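Note that changes made in menuconfig live only in the build directory and are lost on make linux-dirclean. Assuming a reasonably recent Buildroot, one way to keep them is to write the configuration back to a file of your own and point Buildroot at it:
$ make linux-menuconfig # tweak kernel options
$ make linux-update-defconfig # write the changes back to the custom config file
For this to work, BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG and BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE should point at a config file kept in your own tree.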