Emulating Solaris 10 SPARC on QEMU

I have an old Solaris SPARC application that I'm trying to get running.
I learned from this question that x86 Solaris won't cut it. I also recently learned that VirtualBox can't emulate the SPARC architecture. Therefore, I am currently trying to emulate Solaris 10 SPARC using QEMU.
I have acquired a Solaris 10 SPARC ISO (sol-10-u11-ga-sparc-dvd.iso) from here.
I have QEMU 3.1.50 installed.
However, when I try to run it, I get:
C:\Users\xxxx\Documents\CMARPS>"C:\Program Files\qemu\qemu-system-sparc64" -m 512 -cdrom "sol-10-u11-ga-sparc-dvd.iso" -boot d -nographic
OpenBIOS for Sparc64
Configuration device id QEMU version 1 machine id 0
kernel cmdline
CPUs: 1 x SUNW,UltraSPARC-IIi
UUID: 00000000-0000-0000-0000-000000000000
Welcome to OpenBIOS v1.1 built on Feb 15 2019 10:05
Type 'help' for detailed information
Trying cdrom:f...
Not a bootable ELF image
Not a bootable a.out image
Loading FCode image...
Loaded 7420 bytes
entry point is 0x4000
Evaluating FCode...
Evaluating FCode...
Ignoring failed claim for va 1000000 memsz af6d6!
Ignoring failed claim for va 1402000 memsz 4dcc8!
Ignoring failed claim for va 1800000 memsz 510c8!
SunOS Release 5.10 Version Generic_147147-26 64-bit
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
could not find debugger-vocabulary-hook>threads:interpret: exception -13 caught
interpret \ Copyright (c) 1995-1999 by Sun Microsystems, Inc.
\ All rights reserved.
\
\ ident "#(#)data64.fth 1.3 00/07/17 SMI"
hex
only forth also definitions
vocabulary kdbg-words
also kdbg-words definitions
defer p#
defer p!
['] x# is p#
['] x! is p!
8 constant ptrsize
d# 32 constant nbitsminor
h# ffffffff constant maxmin
\
\ Copyright 2008 Sun Microsystems, Inc. All rights reserved.
\ Use is subject to license terms.
\
\ #pragma ident "#(#)kdbg.fth 1.20 08/06/06 SMI"
h# 7ff constant v9bias
h# panic - kernel: no nucleus hblk8 to allocate
EXIT
Trying to boot gives me:
0 > boot
boot Not a Linux kernel image
Not a bootable ELF image
Not a bootable a.out image
Loading FCode image...
Unhandled Exception 0x00000000ffeb6080
PC = 0x00000000ffd27954 NPC = 0x00000000ffd27958
Stopping execution
Either something is causing a kernel panic, or my ISO isn't actually booting correctly.
I thought the ISO might actually be a 32-bit SPARC image, so I tried that:
C:\Users\xxxx\Documents\CMARPS>"C:\Program Files\qemu\qemu-system-sparc" -m 256 -cdrom "sol-10-u11-ga-sparc-dvd.iso" -boot d -nographic
Configuration device id QEMU version 1 machine id 32
Probing SBus slot 0 offset 0
Probing SBus slot 1 offset 0
Probing SBus slot 2 offset 0
Probing SBus slot 3 offset 0
Probing SBus slot 4 offset 0
Probing SBus slot 5 offset 0
Invalid FCode start byte
CPUs: 1 x FMI,MB86904
UUID: 00000000-0000-0000-0000-000000000000
Welcome to OpenBIOS v1.1 built on Feb 15 2019 10:04
Type 'help' for detailed information
Trying cdrom:d...
Not a bootable ELF image
Not a bootable a.out image
No valid state has been set by load or init-program
0 > boot
boot Trying cdrom:d...
Not a bootable ELF image
Not a bootable a.out image
No valid state has been set by load or init-program
ok
0 >
What am I doing wrong here?

You're trying to run with only 512 MB of RAM:
...qemu-system-sparc64" -m 512 ...
Per the Oracle Solaris 10 1/13 Installation Guide: Planning for Installation and Upgrade page on System Requirements and Recommendations:
For UFS or ZFS root file systems, 1.5 GB is the minimum memory required for installation. However, note that some optional installation features are enabled only when sufficient memory is present. For example, if your system has insufficient memory and you install from a DVD, you install through the Oracle Solaris installation program's text installer, not through the GUI.
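For example, bumping the guest memory above that minimum should get past this; a sketch, with only the -m value changed from the original command:
"C:\Program Files\qemu\qemu-system-sparc64" -m 2048 -cdrom "sol-10-u11-ga-sparc-dvd.iso" -boot d -nographic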

The current version of QEMU (I last tested with 4.1) does not support SPARC64 systems well yet. While the instruction set is supported, running Solaris requires the rest of the hardware to be emulated too. The system emulation code is not finished yet, but I have seen some slow progress in the upstream Git repository.

Related

CARLA Quick-start Crashing with Segmentation fault (core dumped)

Using the CARLA quick-start build, and just trying to launch CARLA.
I installed the CARLA quick start according to the docs for 0.9.12 and am facing a core dump issue; it's unclear why.
CARLA version 0.9.12
About my Linux Machine
Operating System: Kubuntu 20.04 KDE Plasma Version: 5.18.8 KDE Frameworks Version: 5.68.0 Qt Version:
5.12.8 Kernel Version: 5.15.0-56-generic OS Type: 64-bit Processors: 16 × 11th Gen Intel® Core™ i7-11800H @ 2.30GHz Memory: 31.1 GiB of RAM
My Graphics/GPU
user@my-machine:~$ lspci -k | grep -A 2 -i "VGA"
0000:00:02.0 VGA compatible controller: Intel Corporation TigerLake-H GT1 [UHD Graphics] (rev 01)
Subsystem: CLEVO/KAPOK Computer Device 65f1
Kernel driver in use: i915
--
0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA104M [GeForce RTX 3080 Mobile / Max-Q 8GB/16GB] (rev a1)
Subsystem: CLEVO/KAPOK Computer Device 65f1
Kernel driver in use: nvidia
ERROR & CRASH
user@my-machine:/opt/carla-simulator$ ./CarlaUE4.sh
4.26.2-0+++UE4+Release-4.26 522 0
Disabling core dumps.
../src/intel/isl/isl.c:2105: FINISHME: ../src/intel/isl/isl.c:isl_surf_supports_ccs: CCS for 3D textures is disabled, but a workaround is available.
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 149 ()
Minor opcode of failed request: 4
Serial number of failed request: 285
Current serial number in output stream: 295
terminating with uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument
Signal 6 caught.
Segmentation fault (core dumped)

Fail to get exact CentOS kernel version

uname -a says the "kernel version" is 3.10.0:
[root@iZbp16uggk8lf3x949ewxiZ ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@iZbp16uggk8lf3x949ewxiZ ~]# uname -a
Linux iZbp16uggk8lf3x949ewxiZ 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
LINUX_VERSION_CODE also says the current kernel version is 3.10.0:
[root@iZbp16uggk8lf3x949ewxiZ ~]# grep -rni "LINUX_VERSION_CODE" /usr/include/
/usr/include/linux/version.h:1:#define LINUX_VERSION_CODE 199168
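Decoding that constant confirms it: LINUX_VERSION_CODE packs the version as (major << 16) | (minor << 8) | patch, and 199168 = 0x30A00 = 3.10.0. A quick shell check:
printf '%d.%d.%d\n' $((199168 >> 16)) $(( (199168 >> 8) & 0xff )) $((199168 & 0xff))   # prints 3.10.0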
But the kernel function tcp_rtx_synack exists:
[root@iZbp16uggk8lf3x949ewxiZ ~]# cat /proc/kallsyms | grep tcp_rtx_synack
ffffffff952a0700 t tcp_rtx_synack.part.26
ffffffff952a0730 T tcp_rtx_synack
ffffffff95721f90 r __ksymtab_tcp_rtx_synack
ffffffff95739e78 r __kcrctab_tcp_rtx_synack
ffffffff95765c36 r __kstrtab_tcp_rtx_synack
In 3.17 (https://elixir.bootlin.com/linux/v3.17/A/ident/tcp_rtx_synack), the function tcp_rtx_synack exists.
In 3.16 (https://elixir.bootlin.com/linux/v3.16/A/ident/tcp_rtx_synack), it does not.
This suggests my CentOS kernel version is at least 3.17, not 3.10.
I'm writing eBPF programs, which need more exact kernel version information, because functions and data structures differ between kernel versions.
I have bought two VMs from different cloud providers, and both show the same behavior as above.
CentOS, like RHEL, contains backports of various patches (features, fixes, etc.) to older kernel versions. So you can't rely on the kernel version to know which features or functions are available.
Instead, you can either probe for the available features from userspace (e.g., with bpftool feature probe) or use CO-RE with BTF so your program adapts to the running kernel.
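For example (a sketch; bpftool must be installed, and the BTF path assumes a kernel built with CONFIG_DEBUG_INFO_BTF):
# Enumerate the eBPF features (program types, map types, helpers) the running kernel actually supports
bpftool feature probe kernel
# Check whether the kernel exposes BTF type information, which CO-RE needs for relocations
ls /sys/kernel/btf/vmlinux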

Build AOSP on macOS M1

I wanted to get my feet wet with AOSP on my M1 Mac (ARM64) running Big Sur, but it looks like there is no build configuration for this host.
When I look under build/soong/cc/config, I only see one Darwin-related file, namely x86_darwin_host.go.
With the latest AOSP release, android-11.0.0_r35, I'm able to build a generic arm64 target, but the resulting emulator does not boot. The configuration shows that the host is detected as x86_64; in fact, the generated binaries are in x86_64 format.
total 0
drwxr-xr-x 3 salvatorebenedetto staff 102 Apr 10 15:21 common
drwxr-xr-x 9 salvatorebenedetto staff 374 Apr 10 15:21 darwin-x86
➜ aosp file out/host/darwin-x86/lib64/libc++.dylib
out/host/darwin-x86/lib64/libc++.dylib: Mach-O 64-bit dynamically linked shared library x86_64
➜ aosp
This is what I get from the kernel when booting the emulator:
RAMDISK: lz4 image found at block 0
RAMDISK: lz4 decompressor not configured!
Invalid ramdisk decompression routine. Select appropriate config option.
Kernel panic - not syncing: Could not decompress initial ramdisk image.
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.18.94+ #1
Hardware name: ranchu (DT)
Call trace:
[<ffffffc00008a590>] dump_backtrace+0x0/0x128
[<ffffffc00008a6cc>] show_stack+0x14/0x1c
[<ffffffc0005ca064>] dump_stack+0x80/0xa4
[<ffffffc0005c95a0>] panic+0xe8/0x228
[<ffffffc00074ba34>] rd_load_image+0x2fc/0x5e0
[<ffffffc00074be30>] initrd_load+0x50/0x2cc
[<ffffffc00074b47c>] prepare_namespace+0xd8/0x1ac
[<ffffffc00074ad04>] kernel_init_freeable+0x1bc/0x1dc
[<ffffffc0005c8150>] kernel_init+0x10/0xf4
Rebooting in 5 seconds..Reboot failed -- System halted
Any idea how I can debug what is wrong with the initrd image?
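(A first sanity check might be to look at how the ramdisk is actually compressed; the path below is a guess for this target:)
file out/target/product/generic_arm64/ramdisk.img   # expect "LZ4 compressed data" or similar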
I think this should help; it has a piece specific to setting up the missing SDK 10.15.
Good luck, let us know if it works for you!

Hugepagesize is not increasing to 1G in VM

I am using a CentOS VM on an ESXi server. I want to increase the hugepage size to 1 GB.
I followed this guide:
http://dpdk-guide.gitlab.io/dpdk-guide/setup/hugepages.html
I executed the small script to check whether a size of 1 GB is supported:
[root@localhost ~]# if grep pdpe1gb /proc/cpuinfo >/dev/null 2>&1; then echo "1GB supported."; fi
1GB supported.
[root@localhost ~]#
I added default_hugepagesz=1GB hugepagesz=1G hugepages=4 to /etc/default/grub and ran:
grub2-mkconfig -o /boot/grub2/grub.cfg
Rebooted the VM.
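(After the reboot, the kernel command line can be checked to confirm the parameters actually took effect:)
cat /proc/cmdline   # should include default_hugepagesz=... hugepagesz=1G hugepages=4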
But I still see 2048 kB (2 MB) for the hugepage size:
[root@localhost ~]# cat /proc/meminfo | grep -i huge
AnonHugePages: 8192 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
[root@localhost ~]#
The following are details of the VM:
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#
[root@localhost ~]# cat /proc/cpuinfo | grep -i flags
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid
[root@localhost ~]#
8 GB of memory and 2 CPUs are allocated to the VM.
A CPU flag indicating 1 GB hugepage support, plus guest OS support/enabling, is not enough to get 1 GB hugepages working in a virtualized environment.
The idea of huge pages, both at the PMD level (2 MB, or 4 MB before PAE and x86_64) and at the PUD level (1 GB), is to map an aligned virtual region of huge size onto a huge region of physical memory (which, as I understand it, must be aligned too). With the additional virtualization level of a hypervisor, there are now three memory levels: the virtual memory of an app in the guest OS, the memory the guest OS considers physical (managed by the virtualization solution: ESXi, Xen, KVM, ...), and the real physical memory. It's reasonable to assume that huge pages should have the same size at all three levels to be useful: they generate fewer TLB misses and need fewer page-table structures to describe a lot of memory (grep for "Need bigger than 4KB pages" in Dick Sites's "Datacenter Computers: modern challenges in CPU design", Google, Feb 2015).
So, to use huge pages of some level inside the guest OS, you should already have same-sized huge pages in physical memory (in your host OS) and in your virtualization solution. You can't effectively use huge pages inside the guest when they are not available to your host OS and virtualization software. (Some emulators like QEMU or Bochs may emulate them, but this will be slow to very slow.) And when you want both 2 MB and 1 GB huge pages, your CPU, host OS, virtualization system, and guest OS all need to support them (and the host system needs enough aligned contiguous physical memory to allocate a 1 GB page; you probably can't split such a page across sockets in NUMA).
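As a concrete check on a Linux host (a sketch; a size directory appears only if the host kernel and CPU support it):
# Which hugepage sizes does the host kernel expose?
ls /sys/kernel/mm/hugepages/
# e.g. hugepages-2048kB and, with pdpe1gb, hugepages-1048576kB
# Try reserving four 1 GB pages at runtime (may fail if physical memory is fragmented)
echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
grep Huge /proc/meminfo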
I don't know about ESXi, but here are some links for Red Hat and some(?) Linux virtualization solutions (with libvirtd). The "Virtualization Tuning and Optimization Guide" manual covers hugepages for the host OS in section 8.3.3.3, "Enabling 1 GB huge pages for guests at boot or runtime":
Procedure 8.2. Allocating 1 GB huge pages at boot time
To allocate different sizes of huge pages at boot, use the following command, specifying the number of huge pages. This example allocates 4 x 1 GB huge pages and 1024 x 2 MB huge pages:
default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024
Change this command line to specify a different number of huge pages to be allocated at boot.
Note: the next two steps must also be completed the first time you allocate 1 GB huge pages at boot time.
Mount the 2 MB and 1 GB huge pages on the host:
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
# mkdir /dev/hugepages2M
# mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
Restart libvirtd to enable the use of 1 GB huge pages on guests:
# systemctl restart libvirtd
1 GB huge pages are now available for guests.
For Ubuntu and KVM: "KVM - Using Hugepages"
By increasing the page size, you reduce the page table size and the pressure on the TLB cache. ... vm.nr_hugepages = 256 ... Reboot the system (note: this means a physical reboot of the host machine and host OS) ... Set up Libvirt to use Huge Pages: KVM_HUGEPAGES=1 ... Setting up a guest to use Huge Pages
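(As a concrete sketch of that recipe for 2 MB pages; the count of 256 is illustrative:)
echo 'vm.nr_hugepages = 256' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p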
For Fedora and KVM (old manual about 2MB pages): https://fedoraproject.org/wiki/Features/KVM_Huge_Page_Backed_Memory
ESXi 5 had support for 2 MB pages, which had to be manually enabled: "How to Modify Large Memory Page Settings on ESXi"
For "VMware’s ESX server" of unknown version, from March 2015 paper: BQ Pham, "Using TLB Speculation to Overcome Page Splintering in Virtual Machines", Rutgers University Technical Report DCS-TR-713, March 2015:
Lack of hypervisor support for large pages: Finally, hypervisor vendors can take a few production cycles before fully adopting large pages. For example, VMware's ESX server currently has no support for 1GB large pages in the hypervisor, even though guests on x86-64 systems can use them.
A newer paper draws no direct conclusion about 1 GB pages: https://rucore.libraries.rutgers.edu/rutgers-lib/49279/PDF/1/
We find that large pages are conflicted with lightweight memory management across a range of hypervisors (e.g., ESX, KVM) across architectures (e.g., ARM, x86-64) and container-based technologies.
An old PDF from VMware, "Large Page Performance: ESX Server 3.5 and ESX Server 3i v3.5" (https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/large_pg_performance.pdf), lists only 2 MB huge pages as supported:
VMware ESX Server 3.5 and VMware ESX Server 3i v3.5 introduce 2MB large page support to the virtualized environment. In earlier versions of ESX Server, guest operating system large pages were emulated using small pages. This meant that, even if the guest operating system was using large pages, it did not get the performance benefit of reducing TLB misses. The enhanced large page support in ESX Server 3.5 and ESX Server 3i v3.5 enables 32-bit virtual machines in PAE mode and 64-bit virtual machines to make use of large pages.
Passing the host CPU through to the VM worked for me; it gives the VM the pdpe1gb CPU flag. I use QEMU + libvirt and enable 1G hugepagesz on the host.
Maybe this is helpful: set the CPU feature in the XML describing the VM as follows:
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>Broadwell</model>
<feature policy='force' name='pdpe1gb'/>
</cpu>
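Alternatively (assuming a reasonably recent libvirt; the domain name "myvm" is hypothetical), host-passthrough exposes all host CPU flags, including pdpe1gb, without forcing individual features:
virsh edit myvm
# then replace the <cpu> element with:
#   <cpu mode='host-passthrough'/>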

Raspberry Pi U-Boot reports "no partition table"

I have built the very latest U-Boot release and placed the image on a Raspberry Pi. After booting, I get this error message:
U-Boot 2013.10 (Oct 24 2013 - 09:35:44)
DRAM: 448 MiB
WARNING: Caches not enabled
MMC: bcm2835_sdhci: 0
Using default environment
In: serial
Out: lcd
Err: lcd
Hit any key to stop autoboot: 0
mmc0 is current device
** No partition table - mmc 0 **
U-Boot>
Apparently, U-Boot failed to read the partition table on the SD card.
Could this be related to some MMC setting, such as access speed?
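For reference, the card can be inspected from the U-Boot prompt (a diagnostic sketch; availability of these subcommands depends on the build configuration):
U-Boot> mmc dev 0    # select and initialize MMC device 0
U-Boot> mmc info     # report card details (bus speed, capacity)
U-Boot> mmc part     # list the partition table U-Boot sees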
Thanks in advance