Knowing info about the CPU while developing a kernel - x86-64

Are there any data structures from the BIOS, or anything in CMOS, that tell me the CPU vendor and its version? I can find out how many processors are in the system from the BIOS MultiProcessor Specification structure that the BIOS creates in the BIOS RAM area, but what about the CPU vendor name and its version?
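For reference, on x86-64 this information is normally obtained with the CPUID instruction rather than from any BIOS or CMOS table: leaf 0 returns the vendor string in EBX/EDX/ECX, and leaf 1 returns the family/model/stepping in EAX. A minimal sketch, assuming a GCC-style compiler with inline assembly:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Execute CPUID with the given leaf and capture all four registers. */
static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                  uint32_t *c, uint32_t *d)
{
    __asm__ volatile("cpuid"
                     : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                     : "a"(leaf));
}

int main(void)
{
    uint32_t a, b, c, d;
    char vendor[13];

    /* Leaf 0: vendor string, stored in EBX, EDX, ECX (in that order). */
    cpuid(0, &a, &b, &c, &d);
    memcpy(vendor + 0, &b, 4);
    memcpy(vendor + 4, &d, 4);
    memcpy(vendor + 8, &c, 4);
    vendor[12] = '\0';

    /* Leaf 1: EAX holds the version info (base family/model/stepping). */
    cpuid(1, &a, &b, &c, &d);
    printf("vendor=%s family=%u model=%u stepping=%u\n",
           vendor, (a >> 8) & 0xF, (a >> 4) & 0xF, a & 0xF);
    return 0;
}
```

The same code works inside a kernel once printf is swapped for your own console routine, since CPUID is available at any privilege level.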

Related

How can an operating system detect a live resize of disk capacity?

I saw the following discussion and had some questions:
live resize of a NVMe drive
If the physical capacity of the NVMe device changes (e.g., from 10 GB to 20 GB), how does the operating system detect it without rebooting?
In the above link, re-scanning the PCI bus is the solution.
When the re-scan is executed, does the operating system ask the NVMe device to update its meta-info (e.g., capacity)?
How does the OS interact with the disk, specifically? (How does it read the changed device parameters from the disk, rather than the old device parameters cached in memory?)
This is probably an AWS virtual machine, so the disk is actually a virtual disk. You can't resize a physical disk in place; to upgrade its capacity you'd have to swap the disk itself.
With that said, this machine probably runs on top of a type 1 hypervisor. What I understand about these is that the virtual machines (VMs) run as processes in a different ring on top of a minimal operating system (the hypervisor). When a VM executes a privileged instruction, it triggers a protection fault, and the hypervisor can then inspect who actually triggered the fault (was it the guest kernel, or a user-mode process within the guest?). If it was the guest kernel, the hypervisor can execute that instruction on its behalf. Otherwise, it will probably do what a real kernel would do (raise an exception). It can tell the difference because the guest kernel runs in a different ring than ring 3 (user mode).
With that said, the NVMe device itself isn't PCI; it's the host controller of the NVMe drive that is a PCI device. To rescan the NVMe drives, you read/write some memory-mapped registers and ask the NVMe PCI host controller for the size of each disk it has found. PCI is hot-pluggable (similarly to USB) in some cases, but mostly not on consumer motherboards. I don't think you get an interrupt when a PCI device is hot-plugged, so you are left doing a rescan of the devices.
For NVMe, whether you get an interrupt when a disk is swapped or changes size depends on the host controller. As for virtual disks, it probably depends on a lot of different things. You could definitely be left doing a PCI rescan here. I guess it depends on the hypervisor, on the OS, and on the host controller configuration.
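To make the rescan part concrete, here is a minimal sketch of a brute-force scan of PCI configuration space using the legacy 0xCF8/0xCFC port mechanism (kernel context assumed; the helper names are mine):

```c
#include <stdint.h>

/* Port I/O helpers (x86, kernel context). */
static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Read a 32-bit PCI configuration register via the legacy
 * 0xCF8 (address) / 0xCFC (data) mechanism. */
static uint32_t pci_config_read(uint8_t bus, uint8_t dev,
                                uint8_t fn, uint8_t offset)
{
    uint32_t addr = (1u << 31) | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11) | ((uint32_t)fn << 8)
                  | (offset & 0xFC);
    outl(0xCF8, addr);
    return inl(0xCFC);
}

/* Brute-force rescan: a device is present if the vendor ID at
 * config offset 0 reads back as something other than 0xFFFF. */
void pci_rescan(void)
{
    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++) {
            uint32_t id = pci_config_read(bus, dev, 0, 0);
            if ((id & 0xFFFF) != 0xFFFF) {
                /* found: id & 0xFFFF = vendor ID, id >> 16 = device ID */
            }
        }
}
```

A real rescan would then diff the result against the previously known device list and re-query the NVMe controller for the new namespace capacities.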

How can ARM processors use more than 4 GB of RAM?

I recently started working on my own operating system. I am following jsandler18's awesome tutorial and making changes as I go to allow it to run on the Raspberry Pi 4.
Sadly, jsandler18 stopped updating the tutorial before he had finished the page on virtual memory. I read through some other sources and found a little problem: the ARM L1 address translation table divides the computer's RAM into 1 MB blocks, and it only allows up to 4096 entries, i.e. 4 GB of virtual RAM.
Is there some way I can use the ARM MMU to translate more than 4 GB of virtual memory?
The tutorial being referenced appears to be executing in ARMv7, which can be thought of as 32-bit ARM. This is roughly equivalent to running in 32-bit PAE mode on x86, so with this scheme it is not possible to use more than 4 GB of virtual memory.
ARMv8 (AArch64) supports 64-bit virtual addresses and allows mapping more than 4 GB of virtual memory.
Switching between execution states is done by switching Exception levels, usually denoted EL0, EL1, EL2, and EL3. The one challenge you could run into is that once you enter AArch32 mode, you cannot go to a lower exception level and switch to AArch64: going from EL1 64-bit to EL0 32-bit is supported, but going from EL1 32-bit to EL0 64-bit is not. This could pose a challenge if the firmware handing execution off to your OS is in AArch32 mode.
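To make the 4 GB arithmetic concrete: in the ARMv7 short-descriptor format, the L1 table has 4096 word-sized entries and each entry can map a 1 MB section, so 4096 × 1 MB = 4 GB is the hard ceiling. A minimal sketch of filling such a table with a flat identity mapping (flag values per the short-descriptor section layout; cache and domain attributes omitted for brevity):

```c
#include <stdint.h>

#define SECTION_SIZE  (1u << 20)   /* each L1 entry maps 1 MB         */
#define L1_ENTRIES    4096         /* 4096 * 1 MB = 4 GB, the ceiling */

/* The L1 table must be 16 KB aligned for the short-descriptor format. */
static uint32_t l1_table[L1_ENTRIES] __attribute__((aligned(16384)));

void identity_map(void)
{
    for (uint32_t i = 0; i < L1_ENTRIES; i++) {
        uint32_t phys = i * SECTION_SIZE;
        /* Section entry: section base address | AP[1:0]=0b11
         * (full access, bits 11:10) | descriptor type 0b10 (section). */
        l1_table[i] = (phys & 0xFFF00000) | (3u << 10) | 0x2;
    }
}
```

Since every field has to fit in a 32-bit word, there is simply no room to name more virtual space; AArch64's multi-level 64-bit descriptors are what lift the limit.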

How does the OS know which HDD it booted from?

In a computer having two or more drives (hard disks, CD-ROMs, USB disks), the bootloader uses BIOS INT 13h/42h to load sectors from the drive it booted from. When the OS is loaded, I guess it will discard all the BIOS code and scan all the available drives by itself. Before that, I think the OS must know which drive it booted from, and the only way it can know that is to ask the BIOS. For example, with 3 USB disks and 3 hard disks (on the PCI bus) attached to the computer, the OS must know which one it booted from. So I want to ask how the OS gets that information?
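For what it's worth, on legacy BIOS systems the firmware jumps to the boot sector with the boot drive number in the DL register (0x00 for the first floppy, 0x80 for the first hard disk), and it is the bootloader's job to stash that value and hand it to the kernel. A hypothetical sketch of a boot stub doing so (assuming GCC's 16-bit code generation; the names are mine):

```c
#include <stdint.h>

/* Hypothetical 16-bit boot stub (e.g. built with gcc -m16): the BIOS
 * jumps to the boot sector with the boot drive number in DL, so save
 * it before anything clobbers DX. */
uint8_t boot_drive;

void boot_entry(void)
{
    __asm__ volatile("mov %%dl, %0" : "=m"(boot_drive));

    /* ...load the kernel via INT 13h/42h using boot_drive, then pass
     * boot_drive on to the kernel, e.g. in a boot-info structure... */
}
```

Once the kernel scans the buses itself, it matches that BIOS drive number against what it found (or, on modern systems, reads the equivalent information from UEFI variables or the bootloader's boot-info block).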

OS boot in a multiprocessor system

In a single-processor system, the processor starts executing the boot ROM code and the multiple stages of the boot when powered on. How does this work in a multiprocessor system? Does one processor act as the master? Who decides which processor is the master and which are the helpers?
How and where is this configured?
Are the page tables shared between the processors? The processor caches are obviously separate, at least the L1 caches are.
Multiprocessor Booting
1. One processor is designated the 'Bootstrap Processor' (BSP)
   - The designation is done either by hardware or by the BIOS
   - All other processors are designated APs (Application Processors)
2. The BIOS boots the BSP
3. The BSP learns the system configuration
4. The BSP triggers the boot of the other APs, as sketched below
   - Done by sending a Startup IPI (inter-processor interrupt) to each AP
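A minimal sketch of that INIT/Startup-IPI sequence on x86, assuming the local APIC sits at its usual default base and that calibrated delay helpers (delay_ms/delay_us, names mine) already exist in the kernel:

```c
#include <stdint.h>

#define LAPIC_BASE    0xFEE00000u   /* usual default local APIC base    */
#define LAPIC_ICR_LO  0x300         /* Interrupt Command Register, low  */
#define LAPIC_ICR_HI  0x310         /* Interrupt Command Register, high */

/* Assumed to exist elsewhere in the kernel: calibrated delays. */
extern void delay_ms(int ms);
extern void delay_us(int us);

static inline void lapic_write(uint32_t reg, uint32_t val)
{
    *(volatile uint32_t *)(LAPIC_BASE + reg) = val;
}

/* BSP side: wake one AP with the classic INIT + double-SIPI sequence.
 * start_page is the physical page number (address >> 12) of the AP's
 * real-mode trampoline code. */
void boot_ap(uint8_t apic_id, uint8_t start_page)
{
    lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
    lapic_write(LAPIC_ICR_LO, 0x00004500);                /* INIT IPI */
    delay_ms(10);

    for (int i = 0; i < 2; i++) {
        lapic_write(LAPIC_ICR_HI, (uint32_t)apic_id << 24);
        lapic_write(LAPIC_ICR_LO, 0x00004600u | start_page);  /* SIPI */
        delay_us(200);
    }
}
```

The AP starts executing in real mode at physical address start_page << 12, which is why the trampoline code must sit below 1 MB; from there it walks itself up into protected or long mode and can then share the BSP's page tables.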

Non-Hypervisor Virtualization vs Type 2 Hypervisor

According to a marked answer on stackoverflow.com here and another reference here, I understand that:
Hypervisor virtualization = below the OS, a hardware virtualization where the hardware is designed to support virtualization
Non-hypervisor virtualization = on top of the OS (like application software), i.e. purely software virtualization
But we also have Type 1 and Type 2 classifications for hypervisors, and it seems to me that Type 2 is purely software virtualization... so does this mean that non-hypervisor virtualization is equivalent to a Type 2 hypervisor, or are there some subtle differences?
Or is it the case that these terms are all loosely defined?
Thanks in advance.
it seems to me that Type 2 is purely software virtualization
Don't conflate "Type 1 vs Type 2" with "Hardware vs Software" virtualization. In fact, there is a middle ground between hardware and software: there is Full Hardware (HVM), "partial" hardware (PVM), and Pure Software (SW).
I'll try to clarify by expanding all 6 combinations:
Type 1 + Full Hardware (HVM) - This allows a hypervisor like Xen HVM to boot an unmodified guest OS. This is actually slow because the hypervisor must decode "telegraph messages" that the guest OS is trying to send to the hardware. (i.e. writing to the disk drive involves repeatedly storing bytes in location 0xblahblah.)
Type 1 + Paravirtualization (PVM) - This is when you modify the guest OS a little to call the Hypervisor directly for some tasks, like disk I/O. This is faster because the guest just says "here, write this page of bytes" and doesn't have to do (virtualized) I/O on each byte. You know you're doing PVM when you install special drivers. Of course, sometimes the OS has virtual drivers built in already. For example, any modern Linux kernel will switch to PVM mode at boot automatically when it detects it's running under Xen, KVM, UML, etc.
Type 1 + Pure Software (SW) - I'm not sure if this exists, but it wouldn't be that hard to build. Since software emulation is slow, the overhead of booting a real OS and running Type 2 isn't a big deal.
Type 2 + Full Hardware (HVM) - This allows you to boot an un-modified Windows under VirtualBox or KVM. You know it's type 2 when you can reboot all your Guests and still play MP3s in the background :)
Type 2 + Paravirtualization (PVM) - This happens any time you install guest drivers, or boot a modern Linux kernel under VirtualBox/KVM.
Type 2 + Pure Software (SW) - early versions of Bochs and QEMU. (Later versions actually have hardware-assisted modes too.) You can tell they are "pure software" because they allow you to run software that you normally can't run otherwise. (e.g., I've run Windows 95 under Bochs on an ARM processor, and I've booted an ARM distro on x86 under QEMU.)
There is also another subject that is unlike the above:
Container technology. Containers like Docker/Rkt/LXD don't fit in the above table. Applications in Containers are ordinary programs calling the kernel in ordinary ways, no Hypervisor involved.
It's just that containers use the kernel features of cgroups and namespaces to make an app "feel" like it's in a VM. Each container gets a 'partitioned' view of the system: its own filesystem, its own user IDs, its own process IDs, its own hostname + IP address, etc. But from the outside, you can see all processes in all containers with 'ps'.
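As a concrete illustration of that point, here is a minimal sketch (Linux-specific, needs root; the hostname and stack size are arbitrary) of using clone() to put a process in its own PID and UTS namespaces:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];   /* stack for the cloned child */

static int child(void *arg)
{
    (void)arg;
    /* Own UTS namespace: changing the hostname only affects this
     * "container", not the real host. */
    sethostname("container", strlen("container"));
    /* Own PID namespace: this process sees itself as PID 1. */
    printf("inside: pid=%d\n", (int)getpid());
    return 0;
}

int main(void)
{
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```

Real container runtimes add mount, network, and user namespaces plus cgroup limits on top of exactly this mechanism; no hypervisor is involved at any point.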
In my mind, non-hypervisor virtualization means a virtualization layer that runs something OTHER than an OS on top of it, most commonly virtualizing the user-level environment of some other operating system. For example, the WINE project is non-hypervisor virtualization: it allows running Win32 programs on a Linux (or other) host. There's no attempt to run an actual Windows OS or emulate 'bare' hardware for a virtualized OS. Instead the virtual layer provides the user-level abstractions and system calls of Windows directly.
Contrast this with a hypervisor which may be either type 1 (running on bare metal) or type2 (running on an OS) and which provides hardware-level abstractions and which you run an entire OS on top of.
A hypervisor, by definition, emulates hardware (which may or may not physically exist); it may virtualize some as well.
Virtualization intercepts a call and redirects it elsewhere.
They are two different but interrelated topics.
Type 1 hypervisors run on "bare metal" and sit between the hardware and your virtual operating systems (the hypervisor itself is the operating system): for example, VMware ESX, Citrix XenServer, or Microsoft Hyper-V.
Type 2 hypervisors run on top of your existing operating system and may use either hardware or software virtualization. For example, both QEMU and Bochs emulate an entire CPU, optionally even a different CPU architecture. Both are Type 2 hypervisors, but they carry a significant performance overhead due to the emulation required.
VMware Workstation/Server/Player/Fusion, Parallels, and VirtualBox are all examples of Type 2 hypervisors that support hardware-assisted virtualization: rather than being emulated, the guest's CPU instructions pass through directly with no emulation or translation, effectively running with no loss of performance if the processor supports it.
Next up is non-hypervisor virtualization, which is (effectively) application virtualization. The hardware itself is not being emulated in any way at all; the virtualization layer just intercepts certain system calls and virtualizes those. Examples in this category include VMware ThinApp, Microsoft App-V, and many more. Windows Vista itself virtualizes certain registry and disk writes to areas where the user doesn't have permission to write; this virtualization in Vista is critical for backwards compatibility with many legacy applications.
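As a toy illustration of that interception idea, here is a minimal LD_PRELOAD shim (Linux; the paths are made up) that redirects an open() of a protected file to a per-user location, loosely in the spirit of Vista's file virtualization:

```c
/* Build: gcc -shared -fPIC -o shim.so shim.c -ldl
 * Run:   LD_PRELOAD=./shim.so some_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    /* Intercept and redirect a write aimed at a protected location. */
    if (strcmp(path, "/etc/app.conf") == 0)
        path = "/tmp/app.conf";

    va_list ap;
    va_start(ap, flags);
    mode_t mode = (mode_t)va_arg(ap, int);  /* only meaningful with O_CREAT */
    va_end(ap);
    return real_open(path, flags, mode);
}
```

The application keeps making ordinary calls; only the layer in between rewrites what they mean, which is the defining trait of this category.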
Finally we have pure emulators; no virtualization is happening here. In this category we have WINE and, to some extent, Cygwin. Bochs fits in this category as well as being a Type 2 hypervisor, since there is no virtualization, just hardware emulation. DOSEMU is another one that fits in here.
I'm sure I've missed plenty of examples.
(I'll post my comment to #answer-16868851 here, since I'm a few reputation points short of the "You must have 50 reputation to comment" requirement.)
BraveNewCurrency writes:
Type 1 + Pure Software (SW) - I'm not sure if this exists, but it wouldn't be that hard to build. Since software emulation is slow, the overhead of booting a real OS and running Type 2 isn't a big deal.
So far I've found only one Type 1 hypervisor capable of doing this: VMware ESXi.
The vSphere 5 Documentation Center | ESXi Hardware Requirements says:
■ To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.
Hence, 32-bit guests work without VT-x on it.
As I've seen zero competition for it emerge (either proprietary or open source), I guess that trapping sensitive CPU instructions without VT-x support (that is, in pure software) is a serious challenge in practice.
While the following no longer relates to the original question, note that v5.0 (and v4.x) does require 64-bit support from the CPU:
■ ESXi 5.0 will install and run only on servers with 64-bit x86 CPUs.
■ ESXi 5.0 requires a host machine with at least two cores.
Those interested in running a Type 1 + SW hypervisor on 32-bit machines (like me) may use its earlier versions. Minimum system requirements for installing ESXi/ESX (1003661) says:
ESX 3.5.x
The hardware requirements for ESX 3.5.x are the same as those listed in the ESX 3.0.x section, with the following additions.
[...]
ESX 3.0.x
You need these hardware and system resources to install and use ESX 3:
At least two processors:
1500 MHz Intel Xeon and above, or AMD Opteron (32bit mode) for ESX
1500 MHz Intel Xeon and above, or AMD Opteron (32bit mode) for Virtual SMP
1500 MHz Intel Viiv or AMD A64 x2 dual-core processors
The ESX 3.5 Installation Guide repeats this in the following section/subsection:
ESX Server 3 Requirements
This section discusses the minimum and maximum hardware configurations supported by ESX Server 3 version 3.5.
Minimum Server Hardware Requirements
...
Hence, Pure (and 32-bit only) Software :)