Extend RISC-V instructions on QEMU - virtualization

I want to extend the QEMU TCG (Tiny Code Generator) to accept new instructions for the RISC-V guest on my x86 machine. However, I have no experience with how the TCG works, so I was wondering if someone could give me some useful pointers on where to start understanding how the TCG works in the QEMU source code?
I know there is a frontend and a backend, but I don't really understand where the translation actually happens, or how the instructions are translated.
I also saw the insn32.decode file in target/riscv defining the opcodes for various operators like lui, but I am not sure how that file is used, and whether it's for the TCG target (i.e. a RISC-V host) or for the QEMU guest.
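For reference, here is roughly what I'm looking at (the decode line is quoted from memory, so treat it as approximate):
$ grep '^lui' target/riscv/insn32.decode
lui      ....................       ..... 0110111 @u
$ grep -rl trans_lui target/riscv/
target/riscv/insn_trans/trans_rvi.c.inc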
I am looking for something like
QEMU - Code Flow [Instruction cache and TCG]
but up-to-date with the current QEMU version.
Any help is appreciated.


Determine machine architecture reliably in a BitBake recipe

I am writing a recipe for a package which needs to be aware of the underlying machine's microarchitecture. In other words, I would like a string such as aarch64 or arm64 for a 64-bit Arm system, and x86_64 for a 64-bit Intel system.
So far, I have identified:
MACHINE - This seems to be whatever the meta-* layer author decides to name their machine; it may contain the architecture, or it may not. For example, beaglebone is no use.
MACHINE_ARCH - This seems to be close to what I'm looking for. However, taking this BSP layer as an example and doing a quick search, it doesn't seem as though this variable is set anywhere; it is only read in a few packages.
TUNE_PKGARCH - May be the best bet so far. But what format is this variable in? What architecture naming conventions are used? Also, the aforementioned BSP layer, again, doesn't seem to set this anywhere.
I would have thought that knowing the machine architecture in a well-defined format is important, but it doesn't seem to be so simple. Any advice?
I'm accustomed to doing this with uname -m (Windows fans can use the output of SET processor), so for me in Yocto it ends up being a toss-up:
According to the Mega-Manual entry for TARGET_ARCH:
TARGET_ARCH
The target machine's architecture. The OpenEmbedded build system supports many
architectures. Here is an example list of architectures supported. This list is by
no means complete as the architecture is configurable:
arm
i586
x86_64
powerpc
powerpc64
mips
mipsel
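As an aside, these values can drive conditional logic in recipes via overrides; a hypothetical fragment (the patch name is invented, and the override string comes from TRANSLATED_TARGET_ARCH, i.e. TARGET_ARCH with underscores turned into dashes):
# apply a patch only when building for 64-bit Arm targets
SRC_URI:append:aarch64 = " file://arm64-tweaks.patch"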
uname -m is a bit better since you get subarchitectural information as well. From the machines I have access to at this moment:
Intel-based Nuc build system: x86_64
ARM embedded system: armv7l
Raspberry Pi 4B: aarch64
I have found that the GNU automake (native) and libtool (available for target) packages compute a useful variable named UNAME_MACHINE_ARCH. If you are using libtool already, or are willing to take it on just for the purpose of having this done for you :-#), you can solve it this way. Look in the built tree for files named config.guess.
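For instance (the work paths are illustrative and depend on your build):
$ find tmp/work -name config.guess
$ sh ./tmp/work/<somepath>/config.guess   # prints a GNU triplet such as x86_64-pc-linux-gnu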
You may be able to get by more generically than with libtool by using Yocto's BUILD_ARCH:
BUILD_ARCH
Specifies the architecture of the build host (e.g. i686). The OpenEmbedded build
system sets the value of BUILD_ARCH from the machine name reported by the uname
command.
So play with these and make your own choice depending on your project's circumstances.
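One quick way to compare all of the candidates side by side is to dump the environment for an image and grep (the image name is just an example):
$ bitbake -e core-image-minimal | grep -E '^(MACHINE|MACHINE_ARCH|TUNE_PKGARCH|TARGET_ARCH|BUILD_ARCH)='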

Android kernel build flow with GKI introduced from Android 11

I'm trying to port Android 11 to my board (ODROID-N2) and I'm confused about how to build a board-specific kernel module and ramdisk. Could I get some help with this?
Recently, to solve kernel fragmentation, it seems AOSP is splitting the kernel into two different blocks:
(1) GKI (Generic Kernel Image)
(2) Vendor-specific kernel
For GKI, I think I can use an image from ci.android.com.
For the vendor-specific portion (related to the vendor_boot partition), is there a specific flow for this, or something to refer to?
I'm referring to {android kernel}/common/build.config.db845c as a case study, but I don't understand why gki_defconfig and db845c_gki.fragment should be combined into one to generate the configuration for the kernel build. I thought we only build kernel modules for the vendor-specific portion.
*) For the Android docs, I'm referring to the following:
https://source.android.com/setup/build/building-kernels
https://source.android.com/devices/architecture/kernel/generic-kernel-image
Indeed, with GKI (Generic Kernel Image), generic parts and vendor parts are separated. As of today, that distinction is quite clear: vmlinux is GKI and any module (*.ko) is vendor. That might change in the future if GKI modules prove to be useful. Then there could be GKI (kernel + modules) + vendor modules.
The whole build process is quite new as well, and still evolving with this quite fundamental change to how Android kernels are developed. Historically, device kernels and modules were built in one logical step and compatibility was ensured by the combined build. Now there is a shift towards a world where the kernel and the modules can be built cleanly and entirely separately, without overlap. It is likely to get much easier in the future to build vendor modules without having to build too much of the GKI kernel at the same time. Yet, the way the build currently works, it is easier to set it up as it is.
Android 11 introduced the concept of "compliance" for GKI-based kernels. That means a shipped kernel is ABI-compatible with the GKI kernel. In theory, that means you could literally swap out the kernel that you have and replace it with a build from ci.android.com. Note that a compatible kernel can have significant (ABI-compatible) patches that the GKI does not have. So, while compatible, it might not lead to the same experience.
Android 12 enables devices to be launched with signed boot images containing the GKI kernel. Since the Kernel<>Module ABI of those kernels is kept stable, this also allows independent updates of GKI kernel and vendor modules.
When you refer to the db845c build config: yes, this looks a bit confusing. This is a full-blown config, and the build indeed produces an (ABI-compatible!) kernel and the vendor-specific modules. The fragment can be considered a patch to gki_defconfig, in the sense that it does not change the core kernel, but enables the required modules.
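To illustrate the idea (the option names below are invented for the example, not taken from the real db845c_gki.fragment):
# the fragment only switches board support on, as modules;
# the core kernel options from gki_defconfig stay untouched
CONFIG_EXAMPLE_BOARD_PINCTRL=m
CONFIG_EXAMPLE_BOARD_USB_PHY=m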
For the final release, the kernel image from this build will be replaced by the GKI kernel image. But for development, the kernel that comes out of this build is perfectly fine to use.
In practice it helps downstream projects to develop core kernel features and modules at the same time, though changes for modules and the kernel need to go into different repositories (db845c, being a reference board, is an exception here).
To somewhat answer your question on how to build the db845c kernel: ci.android.com also provides the build log along with the artifacts to download. For the android12-5.10 branch and the kernel_db845c target, a recent build can be found here. The build.log states at the beginning the instructions to reproduce it:
$ BUILD_CONFIG=common/build.config.db845c build/build.sh
This is the relevant step based on the general instructions on source.android.com - building kernels.
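End to end, that corresponds to roughly the following (a sketch based on those generic instructions; the manifest URL and branch match the android12-5.10 example above):
$ repo init -u https://android.googlesource.com/kernel/manifest -b common-android12-5.10
$ repo sync
$ BUILD_CONFIG=common/build.config.db845c build/build.sh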

Include precompiled zImage in Yocto project

I have a custom board with an imx6dl chip and peripherals. I have compiled u-boot, zImage and the rootfs from examples provided by the manufacturer. But when I try to build Yocto from the git repo with the latest releases, it fails to run properly (some drivers are not working; the board boots and brings up the display interface, but the touchscreen is not working, for example).
Is there any way to include the precompiled binaries (zImage, u-boot and the device tree) in BitBake recipes? I'm very new to the Yocto Project, and only need to get a bootable image with working drivers and Qt5.
If you have a working boot chain (e.g. u-boot, kernel and device tree) that you have built outside of Yocto, then you might try building a rootfs only. This requires two main settings in your local.conf to get started. Please don't forget that this is just a starting point, and it is highly advised to get the kernel/bootloader build sorted out really soon.
Set PREFERRED_PROVIDER_virtual/kernel = "linux-dummy" to have no kernel built, and something like MACHINE = "qemuarm" to set up an armv7 build on Poky later than version 3.0. The core-image-minimal target should at least be enough to drop you into a shell for starters, and then you can proceed from there.
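Put together, the local.conf additions would look roughly like this:
# rootfs-only starting point: no kernel build, generic armv7 machine
MACHINE = "qemuarm"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"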
Additionally, it might be worth asking the board vendor or the Yocto community (#yocto on the freenode server) if they know about a proper BSP layer. FSL things are quite nicely supported these days, and if your board is closely related to one of the well-known ones, there's a high chance that meta-freescale just does the trick nicely.
Addition:
@Martin pointed out that the mention of QEMU is misleading. It is just the easiest way to make Yocto build a userland for the armv7 architecture that the imx6dl is based on. The resulting root filesystem should be sufficiently compatible to get started with, before moving on to a more tuned MACHINE configuration.

What is the recommended workflow and environment for working on the FreeBSD code base?

I want to develop a new feature or change an existing program of the FreeBSD distribution, specifically in user space¹. To do so, I need to make changes to the FreeBSD code base and then compile and test them.²
Doing so on the tree in /usr/src and installing the result on the system seems like a bad idea, given that it requires you to run your development machine on CURRENT, forces you to develop with root privileges, and hoses your system if you make a mistake. I suppose there must be a better way, and possibly a standard setup FreeBSD developers use.³
What is the recommended workflow to develop the FreeBSD code base?
¹ so considerations specific to kernel development aren't terribly important
² I'm familiar with the process to submit changes after I have developed them
³ I have previously read both the development handbook and the FreeBSD handbook chapter on building the source but neither seem to recommend a specific process.
I am a src committer.
I often start with the lowest release that I intend to back-port to (e.g., RELENG_11_3).
I would then do (before or after making changes):
make buildworld
then deploy to a jail directory:
make DESTDIR=/usr/jails/test installworld
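If the jail directory starts out empty, you will usually also want the stock configuration files (/etc and friends) in there; with the standard src make targets that is one extra step:
make DESTDIR=/usr/jails/test distribution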
This jail directory, as the first responder hinted, can be used with bhyve, but I find it easier to configure a jail or even just use chroot.
I like to configure my jails in /etc/rc.conf instead of /etc/jail.conf:
Example /etc/rc.conf contents:
jail_enable="YES"
jail_list="test"
jail_test_rootdir="/usr/jails/test"
jail_test_hostname="test"
jail_test_devfs_enable="YES"
I can provide more in-depth examples, ones where the jail has a private networking stack so you can SSH into it, for example, but I don't get the sense that a networking stack is important to your testing from the posted question.
You can see the running jail with "jls", and you can enter the running jail with "jexec test bash".
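Putting that together, a typical round trip looks like this (using the rc.conf entries above; bash must of course be installed inside the jail):
service jail start test
jls
jexec test bash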
Inside the jail you can test your changes.
When doing this kind of sandboxing, jails work so long as the /usr/src that you built/installed to the jail is from a release that is:
1. Older than the guest OS, or
2. In the same STABLE branch as the guest OS, or
3. At the very least binary-compatible with the guest OS
Situations 1 and 2 are pretty safe, while situation 3 (e.g., running a newer /usr/src than the guest OS) can get dodgy: for example, trying to run /usr/src head (13.0-CURRENT) on a 12.0-RELEASE-pX guest OS, where the KBI, KPI, and API can all differ between kernel and userland (with jails, each jail runs under the guest OS's kernel).
If you find that you have to run the newest sources against an older guest OS, then bhyve is definitely the solution. You would take that jail directory and, instead of running a jail with that root directory, run a bhyve instance with the jail directory as its root. I don't use bhyve that often, so I can't recall whether you first have to deposit the contents inside a disk image and point bhyve at that -- others and/or Google would know the answer.
I'm a ports committer, not a src one, but AFAIK running CURRENT is a common practice amongst developers.
Another way to work is to set up a CURRENT VM, share it over NFS, mount it from the host, and install into it by running make install DESTDIR=/mnt/current. You can use bhyve for virtualizing, by the way.
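A sketch of that flow (hostname and the exported path are illustrative):
# on the host, with the CURRENT VM exporting its filesystem over NFS
mount -t nfs current-vm:/ /mnt/current
cd /usr/src/bin/ls && make install DESTDIR=/mnt/current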

QEMU with KVM, issue with Windows recognizing the virtual environment

I'm running Gentoo right now, and I'm using QEMU with KVM support to run a Windows VM. I need it because we're being forced to use proprietary CAD software at university (sadly enough). They gave us a license for a year; however, when I activate it, it clearly says the license can't be used in a virtual environment. This leads me to the conclusion that the system somehow recognizes it is being emulated, and I know there's a way to avoid this, but I actually have no idea what to do. I've read that someone had the same problem and apparently solved it, but his solution doesn't seem to work for me. I'll leave you the URL of the question on Server Fault: https://serverfault.com/questions/727347/solidworks-activation-license-mode-is-not-supported-in-this-virtual-environment
This command:
qemu-system-x86_64 -enable-kvm -hda windows.qcow2 -cpu host,kvm=off -smbios type=0,vendor=LENOVO,version=FBKTB4AUS,date=07/01/2015,release=1.180 -smbios type=1,manufacturer=LENOVO,product=30AH001GPB,version=ThinkStation P300,serial=S4M88119,uuid=cecf333d-6603-e511-97d5-6c0b843f98ba,sku=LENOVO_MT_30AH,family=P300 -m 8G
gives the output:
qemu-system-x86_64: -smbios type=1,manufacturer=LENOVO,product=30AH001GPB,version=ThinkStation: drive with bus=0, unit=0 (index=0) exists
I have no idea what to do. I also checked with Ruby that the command I pasted from the post I linked is actually ASCII, and apparently it is correct.
I really need this to work; it doesn't even work with a cracked license.
Thank you.
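For what it's worth, the error itself looks like plain shell word splitting rather than anything QEMU-specific: the unquoted space in version=ThinkStation P300 ends the -smbios argument early, and the leftover P300,serial=... token is then parsed as a positional disk-image argument, which collides with the disk already given via -hda. A sketch of the same command with the type=1 value quoted (this addresses the parse error only, not the license detection itself):
qemu-system-x86_64 -enable-kvm -hda windows.qcow2 -cpu host,kvm=off \
  -smbios type=0,vendor=LENOVO,version=FBKTB4AUS,date=07/01/2015,release=1.180 \
  -smbios "type=1,manufacturer=LENOVO,product=30AH001GPB,version=ThinkStation P300,serial=S4M88119,uuid=cecf333d-6603-e511-97d5-6c0b843f98ba,sku=LENOVO_MT_30AH,family=P300" \
  -m 8G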