How to use GDB to debug a kernel with custom GDT? (Breakpoints Failed)

I'm using QEMU to emulate a mini OS.
The kernel was linked with its virtual entry point at 0xC0100000 (all symbol addresses in the kernel file are above 0xC0100000).
After the bootloader loaded the kernel at physical address 0x00100000, it set up a global descriptor table whose segment bases map the virtual address range starting at 0xC0100000 onto physical address 0x00100000, and then long-jumped (ljmp) to the kernel entry point.
When I use GDB to debug the kernel, it ignores all breakpoints set in the kernel.
GDB still assumes my kernel code is located at 0xC0100000, but it is actually running at 0x00100000. When I set breakpoints using physical addresses they are hit, but this causes problems later in execution.
How can I make GDB take the custom GDT into account?
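GDB works purely with the addresses in the symbol table and knows nothing about segment bases, so until the GDT remapping is in effect, breakpoints planted at 0xC01xxxxx sit in memory the CPU is not fetching from. A common workaround, sketched below, is to stop at the physical entry with a hardware breakpoint and, for that early phase, load a second copy of the symbols shifted down to the load address (this assumes a GDB recent enough that add-symbol-file accepts -o, that the symbol file is called kernel.elf, and the function name is hypothetical):

(gdb) target remote :1234
(gdb) hbreak *0x00100000                          # hardware breakpoint at the physical entry
(gdb) continue
(gdb) add-symbol-file kernel.elf -o -0xC0000000   # same symbols, shifted to the load address
(gdb) break some_early_init_function              # now resolves to 0x001xxxxx

Once the kernel is actually running at the high 0xC0xxxxxx addresses, the original unshifted symbols match again, so the shifted copy is only useful for the early phase.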

Related

How to get U-Boot to Load and Run a Bare-Metal Binary on the Raspberry Pi 3 Model B+? [closed]

I have a Raspberry Pi Model 3B+. Currently, I can successfully boot this exact kernel8.img file (which is just a raw binary) from this tutorial bare metal by following the instructions in the tutorial's README, quoted below:
... you can download a raspbian image, dd it to the SD card, mount it and delete the unnecessary .img files. Whichever you prefer. What's important, you'll create kernel8.img with these tutorials which must be copied to the root directory on the SD card, and no other .img files should exist there.
The serial output after the above kernel8.img is running successfully looks something like this:
EMMC: GPIO set up
EMMC: reset OK
sd_clk divisor 00000068, shift 00000006
EMMC: Sending command 00000000 arg 00000000
EMMC: Sending command 08020000 arg 000001AA
EMMC: Sending command 37000000 arg 00000000
EMMC: Sending command 29020000 arg 51FF8000
EMMC: CMD_SEND_OP_COND returned VOLTAGE CCS 0000000040F98000
...
However, I would like to load that kernel8.img file via U-Boot and TFTP so that I don't have to keep plugging/unplugging microSD cards.
I have a functioning TFTP server and I have loaded U-Boot onto the Raspberry Pi successfully.
The physical address the kernel image gets loaded to when booting bare metal is 0x80000, as explained in the tutorial:
Important note, for AArch64 the load address is 0x80000, and not 0x8000 as with AArch32.
The kernel8.img's file-type is also just a raw binary:
$ file kernel8.img
kernel8.img: data
As such, I've run the following two U-Boot commands:
tftp 0x80000 rpi3bp/kernel8.img
go 0x80000
However, as shown below, I'm getting some garbled mess once the binary is running: ��ogK�S��rK.
...
U-Boot> tftp 0x80000 rpi3bp/kernel8.img
lan78xx_eth Waiting for PHY auto negotiation to complete........ done
Using lan78xx_eth device
TFTP from server 192.168.0.198; our IP address is 192.168.0.111
Filename 'rpi3bp/kernel8.img'.
Load address: 0x80000
Loading: ################################################## 6.6 KiB
603.5 KiB/s
done
Bytes transferred = 6808 (1a98 hex)
U-Boot> go 0x80000
## Starting application at 0x00080000 ��ogK�S��rK
According to this SO post (as shown below), the go command is all that is required to get a bare-metal binary running under U-Boot.
If you have issues executing your binary when using the go command,
then the problem lies with your program, e.g. taking control of the
processor and initializing its C environment.
However, I know for a fact that the kernel image runs fine bare-metal using the Raspberry Pi's default bootloader. So what could be the issue here and why can't I seem to get that kernel image running via U-Boot?
Edit 1:
Here's some context on how I set up U-Boot on the Raspberry Pi. Currently, the Raspberry Pi's bootloader is booting U-Boot in 64-bit mode. The Raspberry Pi's bootloader is configured via the config.txt file and below is my config.txt file:
enable_uart=1
arm_64bit=1
kernel=u-boot.bin
Documentation on the arm_64bit option is here:
arm_64bit
If set to non-zero, forces the kernel loading system to assume a
64-bit kernel, starts the processors up in 64-bit mode, and sets
kernel8.img to be the kernel image loaded, unless there is an explicit
kernel option defined in which case that is used instead. Defaults to
0 on all platforms.
The issue is related to the fact that U-Boot uses UART1 on the Raspberry Pi, while the binary I'm trying to run uses UART0, as explained by @sawdust below:
Per U-Boot's DT, it appears that U-Boot uses UART1 on gpios 14&15. The
standalone uses UART0 on the same gpios. So that explains both (a)
getting output from two programs on same pin, and (b) why removing
uart_init() failed. Since U-Boot sets up UART0 for Bluetooth, perhaps
the standalone does not expect that; hence a goofy baudrate. An
oscilloscope could easily verify that.
I know this to be true because I modified the code in uart.c to use UART1 instead, and I now see sensible serial output.
However, it is still unclear to me why, despite the seemingly valid initialisation of UART0 in this part of the code, UART0 does not output legible characters at the right baud rate.
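For reference, this is roughly what switching uart.c over to the mini UART amounts to (a sketch using the usual BCM2837 register layout with MMIO base 0x3F000000; check it against the tutorial's own code before relying on it, and note that U-Boot has normally already set the GPIO alternate functions and baud rate for its own console):

/* Minimal mini-UART (UART1) transmit path for the RPi3; register
 * addresses are the standard BCM2837 AUX/GPIO ones. */
#include <stdint.h>

#define MMIO_BASE    0x3F000000u
#define GPFSEL1      ((volatile uint32_t*)(MMIO_BASE + 0x00200004))
#define AUX_ENABLES  ((volatile uint32_t*)(MMIO_BASE + 0x00215004))
#define AUX_MU_IO    ((volatile uint32_t*)(MMIO_BASE + 0x00215040))
#define AUX_MU_IER   ((volatile uint32_t*)(MMIO_BASE + 0x00215044))
#define AUX_MU_LCR   ((volatile uint32_t*)(MMIO_BASE + 0x0021504C))
#define AUX_MU_LSR   ((volatile uint32_t*)(MMIO_BASE + 0x00215054))
#define AUX_MU_CNTL  ((volatile uint32_t*)(MMIO_BASE + 0x00215060))
#define AUX_MU_BAUD  ((volatile uint32_t*)(MMIO_BASE + 0x00215068))

void uart1_init(void)
{
    *AUX_ENABLES |= 1;      /* enable the mini UART (UART1)              */
    *AUX_MU_CNTL  = 0;      /* disable TX/RX while configuring           */
    *AUX_MU_IER   = 0;      /* no interrupts                             */
    *AUX_MU_LCR   = 3;      /* 8-bit mode                                */
    *AUX_MU_BAUD  = 270;    /* 115200 baud with the 250 MHz core clock   */

    /* GPIO14/15 to ALT5 (TXD1/RXD1); U-Boot has usually done this already */
    uint32_t r = *GPFSEL1;
    r &= ~((7u << 12) | (7u << 15));
    r |=   (2u << 12) | (2u << 15);
    *GPFSEL1 = r;

    *AUX_MU_CNTL = 3;       /* enable TX and RX                          */
}

void uart1_putc(char c)
{
    while (!(*AUX_MU_LSR & 0x20)) { }   /* wait until the FIFO can accept */
    *AUX_MU_IO = (uint32_t)c;
}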

Fetching additional resources from UEFI program when loaded over PXE

I have a UEFI program that requires additional files from the same medium it was started from. This works fine when booting from disk or USB: I can get the device path for the program itself by requesting the EFI_DEVICE_PATH_PROTOCOL (as EFI_LOADED_IMAGE_DEVICE_PATH_PROTOCOL) from the handle passed to efi_main, and then I modify the path element at the end to find the other files.
When loading via PXE however, the device path I get only contains the path to the Ethernet adapter and the IP protocol:
PciRoot(0x0)/Pci(0x14,0x0)/Pci(0x0,0x0)/MAC(...,0x1)/IPv4(0.0.0.0)
The handle only has EFI_LOADED_IMAGE_PROTOCOL and EFI_LOADED_IMAGE_DEVICE_PATH_PROTOCOL attached, and the FilePath member of the former is an empty path.
Do I still have an IP configuration at this point, or has this been discarded?
Can I find out where the current executable was loaded from?
Can I otherwise express "relative to the current executable"?
In principle I could also repeat the PXE boot, but the PXE menu might contain multiple TFTP servers for different OS installations, so
Can I recover the "active" PXE menu entry?
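For the second and third bullets, one approach that may work is to ask the device handle the image was loaded from for EFI_PXE_BASE_CODE_PROTOCOL; its Mode structure caches the DHCP ack, which carries the TFTP server address and the boot file name. A sketch in EDK II style (untested; it assumes the NIC handle still carries the PXE base code protocol at this point):

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/LoadedImage.h>
#include <Protocol/PxeBaseCode.h>

EFI_STATUS
FindPxeOrigin (
  IN EFI_HANDLE ImageHandle
  )
{
  EFI_STATUS                 Status;
  EFI_LOADED_IMAGE_PROTOCOL  *LoadedImage;
  EFI_PXE_BASE_CODE_PROTOCOL *Pxe;

  Status = gBS->HandleProtocol (ImageHandle,
                                &gEfiLoadedImageProtocolGuid,
                                (VOID **)&LoadedImage);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  /* The NIC handle we were loaded from usually carries the PXE base code. */
  Status = gBS->HandleProtocol (LoadedImage->DeviceHandle,
                                &gEfiPxeBaseCodeProtocolGuid,
                                (VOID **)&Pxe);
  if (EFI_ERROR (Status)) {
    return Status;   /* not a PXE boot, or the PXE session was torn down */
  }

  if (Pxe->Mode != NULL && Pxe->Mode->DhcpAckReceived) {
    /* BootpSiAddr = TFTP server, BootpBootFile = path of this executable. */
    EFI_PXE_BASE_CODE_DHCPV4_PACKET *Ack = &Pxe->Mode->DhcpAck.Dhcpv4;
    /* ... derive sibling paths from Ack->BootpBootFile and fetch them with
       Pxe->Mtftp() using EFI_PXE_BASE_CODE_TFTP_READ_FILE ... */
  }
  return EFI_SUCCESS;
}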

iMX6: MSI-X not working in Linux PCIe device driver

I'm trying to get MSI-X working on an iMX6 (Freescale/NXP) CPU under Linux v4.1 for a PCIe character-device driver. Whenever I call pci_enable_msix(), pci_enable_msix_range(), or pci_enable_msix_exact(), -EINVAL is returned. I do have the CONFIG_PCI_MSI option selected in the kernel configuration, and I am able to get a single MSI working with pci_enable_msi(), but I cannot get multiple MSI working either.
I have tested my driver code on an Intel i7 running a v3 kernel with the same PCIe hardware attached, and MSI-X worked without any problems, so I know my code is correctly written and the hardware is functioning correctly.
When running on the iMX6 I can use lspci -v to see that the hardware has MSI-X capability and the number of IRQs it allows. I can even get the same, correct number in my driver by calling pci_msix_vec_count().
Questions:
Are there any other kernel configuration flags I need to set?
Is there anything specific to the iMX6 CPU I need to consider?
Does anyone have any experience with the iMX6 and either MSI-X or multiple MSI?
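For context, this is the enable pattern in question on a v4.1-era kernel (a sketch; names such as foo_enable_msix and NUM_VECTORS are illustrative, and this is the sequence that succeeds on the Intel box but returns -EINVAL on the iMX6):

#include <linux/pci.h>
#include <linux/interrupt.h>

#define NUM_VECTORS 4   /* illustrative: what the device advertises */

static struct msix_entry msix_entries[NUM_VECTORS];

static int foo_enable_msix(struct pci_dev *pdev)
{
    int i, nvec;

    for (i = 0; i < NUM_VECTORS; i++)
        msix_entries[i].entry = i;      /* which MSI-X table entries we want */

    /* Ask for up to NUM_VECTORS vectors, accept as few as 1. Returns the
     * number actually allocated, or a negative errno such as -EINVAL. */
    nvec = pci_enable_msix_range(pdev, msix_entries, 1, NUM_VECTORS);
    if (nvec < 0)
        return nvec;

    for (i = 0; i < nvec; i++) {
        /* msix_entries[i].vector now holds the Linux IRQ number, e.g.:
         * request_irq(msix_entries[i].vector, foo_isr, 0, "foo", pdev); */
    }
    return 0;
}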

Operating system loader

My question is how an operating system loads a user-space application into RAM. I know how the bootloader works: when the computer is first turned on, the BIOS simply reads the 512-byte boot sector (ending with the 0xAA55 boot signature) and loads the bootloader into RAM. Are regular user-space programs handled in this way? If so, how? The bootloader is activated by the BIOS, but how is a user-space program handled by the operating system? More specifically, how does execv() load a program into RAM and set the execution entry point for user space?
Thanks in advance
User-space programs are not handled the way the BIOS handles the bootloader; the kernel is involved in running a user-space program.
In general:
When a program is executed in a shell, the shell invokes system calls to create a new task in a new address space, read in the executable binary, and begin executing it.
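As a concrete illustration of that flow, here is a minimal sketch of what a shell does (the program path /bin/ls is only an example):

/* Create a new task with fork(), then replace its address space with the
 * new program via execv(), which asks the kernel to load the binary and
 * jump to its entry point. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* new task */
    if (pid == 0) {
        char *argv[] = { "/bin/ls", "-l", NULL };
        execv(argv[0], argv);           /* kernel loads the ELF, sets up the
                                           stack, and starts at its entry */
        perror("execv");                /* only reached if execv failed */
        _exit(1);
    }
    waitpid(pid, NULL, 0);              /* shell waits for the child */
    return 0;
}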
To understand the details, you need to understand:
The ELF format. There are other executable formats Linux can use, but ELF is the most common one and a good starting point. Understanding ELF will help you understand precisely how the kernel loads the executable binary into memory.
Linux process management; this will help you understand how a program starts to run.
Reading the related code in the kernel; fs/exec.c will be of great help.
The procedure varies among operating systems. Some systems have a background command interpreter that exists through the life of a process, within the process itself. When a program is run, the command interpreter stays in the background (in memory protected from user-mode access). When the program completes, the command interpreter comes to the foreground and can run another program in the same process.
In the Unix world, the command interpreter is just a user-mode program. Whenever it runs a program, it kicks off another process.
Both of these types of systems use a loader to configure the process address space for running a program. The executable file is a set of instructions that defines how to lay out the address space.
This is significantly different from a bootloader. A bootloader blindly loads a block of stored data into memory. A program loader follows complex instructions for laying out a process address space, including handling shared libraries and performing address fixups.
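To make those "instructions" concrete, here is a small sketch that dumps the PT_LOAD program headers of a 64-bit ELF file, i.e. exactly the segments a loader must map:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    Elf64_Ehdr eh;
    if (pread(fd, &eh, sizeof eh, 0) != (ssize_t)sizeof eh) { perror("pread"); return 1; }

    printf("entry point: 0x%lx\n", (unsigned long)eh.e_entry);
    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        if (pread(fd, &ph, sizeof ph, eh.e_phoff + (off_t)i * eh.e_phentsize) != (ssize_t)sizeof ph)
            break;
        if (ph.p_type != PT_LOAD)
            continue;                       /* only the segments the loader maps */
        printf("load vaddr=0x%lx filesz=0x%lx memsz=0x%lx flags=%c%c%c\n",
               (unsigned long)ph.p_vaddr,
               (unsigned long)ph.p_filesz,
               (unsigned long)ph.p_memsz,
               (ph.p_flags & PF_R) ? 'r' : '-',
               (ph.p_flags & PF_W) ? 'w' : '-',
               (ph.p_flags & PF_X) ? 'x' : '-');
    }
    close(fd);
    return 0;
}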

Type-1 VMM and Ring 1

I am currently doing homework on virtualization. My question is: how does the VMM transfer control to the guest kernel and run that code in Ring 1?
Type-1 VMM: this is the classical trap-and-emulate VMM. The VMM runs directly on the hardware and acts as the "host operating system" in Ring 0. The guest kernel and guest applications run on top of the VMM, in Ring 1 and Ring 3 respectively.
1. When a guest application makes a syscall, it traps to the VMM in Ring 0 (the CPU is designed to do this).
2. The VMM detects that this is a syscall and transfers control to the guest kernel's syscall handler, executing it in Ring 1.
3. When it is done, the guest kernel performs a syscall return; this is a privileged operation, which traps into the VMM again.
4. The VMM then does the real return to guest user space in Ring 3 (the CPU is also designed to do this).
My question is about step 2. How does the VMM transfer control to the guest kernel and force the CPU into Ring 1? It can't be a simple "call", since then the guest kernel code would run in Ring 0. It must be some kind of "syscall return" or a special context-switch instruction.
Do you have any ideas? Thank you!
Simply run the guest OS with a CS selector whose RPL is 1 (on x86, anyway). Returning from a more privileged ring to a less privileged one is generally done using iret.
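For step 2 specifically, a minimal sketch of that iret-based transfer (32-bit x86; the GDT is assumed to already hold code/stack descriptors with DPL=1, and the selector values are hypothetical):

/* Build an inter-privilege iret frame and "return" into the guest kernel
 * at Ring 1. Illustrative only, not a working VMM. */
#include <stdint.h>

#define GUEST_CS  (0x20 | 1)   /* hypothetical GDT slot, RPL = 1 */
#define GUEST_SS  (0x28 | 1)

static void enter_guest_ring1(uint32_t guest_eip, uint32_t guest_esp)
{
    __asm__ volatile(
        "pushl %[ss]  \n\t"    /* SS for Ring 1 */
        "pushl %[sp]  \n\t"    /* ESP for Ring 1 */
        "pushfl       \n\t"    /* EFLAGS */
        "pushl %[cs]  \n\t"    /* CS with RPL = 1: iret drops CPL to 1 */
        "pushl %[ip]  \n\t"    /* where the guest kernel resumes */
        "iretl        \n\t"
        :
        : [ss] "r" ((uint32_t)GUEST_SS), [sp] "r" (guest_esp),
          [cs] "r" ((uint32_t)GUEST_CS), [ip] "r" (guest_eip)
        : "memory");
    /* never returns */
}

The hardware fast-return instructions (sysexit/sysret) can only return to Ring 3, which is why iret is the usual way to enter Ring 1.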
Xen is one of the VMMs that run guest OSes in Ring 1. In Xen, privileged instructions such as HLT in the guest kernel (which runs in Ring 1) are replaced by hypercalls. In this case, instead of executing the HLT instruction, as the Linux kernel eventually does, the xen_idle() method is called. It performs a hypercall instead, namely HYPERVISOR_sched_op(SCHEDOP_block, 0), which handles the privilege-ring switching. For more info see:
http://www.linuxjournal.com/article/8909