How does a program control hardware? - operating-system

In order to be executed by the CPU, a program must be loaded into RAM. A program is just a sequence of machine instructions (from the x86 instruction set, for example) that a processor can understand (because it physically implements their semantics through logic gates).
I can more or less understand how a local instruction (an instruction executed inside the CPU) such as 'ADD R1, R2, R3' works. Even how the CPU interfaces with the RAM through the northbridge chipset using the data bus and the address bus is clear enough to me.
What I am struggling with is the big picture.
For example, how can a file be saved to a hard disk?
Let's say that the motherboard uses a SATA interface to communicate with the HDD.
Does this mean that this SATA interface has an instruction set which can be used by the CPU by preparing SATA instructions written in the correct format?
Does the same apply to the PCI interface, the AGP interface and so on?
Is all hardware communication basically accomplished by defining a standard interface for some task, which the companies that create hardware chipsets then implement with an instruction set that any other hardware component can query?
Is my high-level understanding of hardware and software interaction correct?

Nearly. It's actually more general than an instruction set.
A lot of these details are architecture-specific, so I will stick to a high-level overview of how this can be done.
The CPU can read and write to RAM without a problem, correct? You can issue instructions that read and write to any memory address. So rather than try to extend the CPU to understand every possible hardware interface out there, hardware manufacturers simply map sections of the address space (where RAM normally would be) to hardware.
Say, for example, you want to save a file to a hard drive. This is a possible sequence of commands that would occur:
The command register of the hard drive controller is at address 0xF00, an address that is outside of RAM but accessible to the CPU.
Write the instruction to the command register that indicates we want to write to the hard drive.
There could conceivably be an address register at 0xF01 that tells the hard drive controller where to save the data.
Tell the hard drive controller that the data I want to write is at some address in RAM, and initiate the write sequence.
There are a myriad of other ways this can conceivably be done, but the important thing to note is that it is simply using the instructions that the CPU already has for using RAM.
All of this can be done by the CPU without any special instructions on the CPU side; it is just reading and writing to an address. You can imagine this being extended: there is a special place in the address space for the USB controller that contains a list of USB devices, there is a special place for the PCI device list, and each PCI device has several registers that can be read and written to instruct it to do things.
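To make that concrete, here is a hedged sketch in C: the command register at 0xF00 and the address register at 0xF01 are the hypothetical ones from the steps above, and the command value is invented. The point is that "talking to the controller" is nothing more than ordinary stores through a pointer (which a real kernel would map appropriately and mark uncached):

```c
#include <stdint.h>

/* Hypothetical memory-mapped registers from the example above. Real
 * controllers define their own register layouts; the addresses and the
 * command value here are purely illustrative. */
#define DISK_CMD_REG   ((volatile uint8_t *)0xF00)  /* command register      */
#define DISK_ADDR_REG  ((volatile uint8_t *)0xF01)  /* where the data lives  */
#define CMD_WRITE      0x30                         /* made-up write command */

static void start_disk_write(uint8_t ram_page)
{
    *DISK_ADDR_REG = ram_page;  /* plain CPU store: tell the controller where the data is    */
    *DISK_CMD_REG  = CMD_WRITE; /* another plain store: the controller, not RAM, receives it */
}
```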
Essentially the role of a device driver is to know how these special registers are to be read and written, what kind of commands devices can accept, etc. Often, as is the case with many graphics cards, what these registers do is not documented to the public and so we rely on their drivers to run the cards correctly.

Related

How does the kernel initialize and access the rest of the hardware on x86/64?

I'm interested in the details of how operating systems work and perhaps writing my own.
From what I've gathered, the BIOS/UEFI is supposed to handle setting up the hardware, and do things like memory-mapping (or assigning I/O ports to) the graphics card and other I/O devices like audio and Ethernet.
My question is, how does the kernel know how to access and (re)configure these devices when it's passed control from the bootloader? Are there just conventions like 'the graphics card is always memory mapped from X to Y address space'? Are you at the mercy of a hardware manufacturer writing a driver for an operating system which knows how the hardware will be initialized?
That seems wrong, so maybe the kernel code includes instructions which somehow iterate through all the bus-connected devices. But what instructions can accomplish that? Is the PCI(e) controller also a memory-mapped device? How do you begin querying and setting up the system?
My primary reference has been the Intel 64 and IA-32 Architectures Software Developer's Manual, which has excellent documentation on how the CPU works, but doesn't describe how the system is set up.
I have never written firmware, so I don't really know how it works in general. There is probably some memory detection (an actual iteration over memory) and some interrogation of the PCI devices, whose configuration registers are memory-mapped. The developer's manuals probably also contain information on how to discover memory and similar resources.
In the end, the kernel doesn't need to worry about that, because the firmware takes care of all of it and provides temporary drivers until the kernel is set up completely.
The firmware passes information to the kernel using the ACPI tables, so that is the convention you are looking for. The UEFI firmware automatically launches the /efi/boot/bootx64.efi EFI app from the hard disk and calls that app's main function; this app is often called the bootloader. When you write that application, often with frameworks such as EDK2 or GNU-EFI, you can use the firmware's temporary drivers to get information like the location of the RSDP, which points to all the other ACPI tables.
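As a rough illustration (assuming a GNU-EFI build; the structure names follow the UEFI specification, but treat this as a sketch rather than a complete bootloader), finding the RSDP is just a walk over the system table's configuration entries:

```c
/* Sketch: locate the ACPI 2.0 RSDP via the UEFI configuration table. */
#include <efi.h>
#include <efilib.h>

static const EFI_GUID Acpi20Guid = ACPI_20_TABLE_GUID;

EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    InitializeLib(ImageHandle, SystemTable);

    VOID *rsdp = NULL;
    for (UINTN i = 0; i < SystemTable->NumberOfTableEntries; i++) {
        EFI_CONFIGURATION_TABLE *entry = &SystemTable->ConfigurationTable[i];
        if (CompareMem(&entry->VendorGuid, &Acpi20Guid, sizeof(EFI_GUID)) == 0) {
            rsdp = entry->VendorTable;  /* points at the RSDP structure */
            break;
        }
    }

    Print(L"RSDP at 0x%lx\n", (UINT64)(UINTN)rsdp);
    /* A real bootloader would now read the XSDT through the RSDP, load the
     * kernel, call ExitBootServices() and jump to it. */
    return EFI_SUCCESS;
}
```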
The ACPI specification defines a language, AML, which your kernel interprets to learn all about the hardware. You thus have all the required information there to load drivers and such.
PCI (which covers nearly everything nowadays) is another matter. It works with memory-mapped I/O, and the MCFG ACPI table is helpful for finding the beginning of the configuration space of PCI devices, which takes the form of memory-mapped registers.
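For instance (a hedged sketch: the ECAM base address comes from the MCFG table and must already be mapped by your kernel; pci_cfg and enumerate_bus0 are illustrative names), reading a device's vendor/device ID is just a load from a computed address:

```c
#include <stdint.h>

/* ECAM gives each PCI function 4 KiB of configuration space at
 * base + (bus << 20) + (device << 15) + (function << 12). */
static inline volatile uint32_t *
pci_cfg(uintptr_t ecam_base, uint8_t bus, uint8_t dev, uint8_t fn, uint16_t off)
{
    return (volatile uint32_t *)(ecam_base
                                 + ((uintptr_t)bus << 20)
                                 + ((uintptr_t)dev << 15)
                                 + ((uintptr_t)fn  << 12)
                                 + off);
}

void enumerate_bus0(uintptr_t ecam_base)
{
    for (uint8_t dev = 0; dev < 32; dev++) {
        uint32_t id = *pci_cfg(ecam_base, 0, dev, 0, 0x00); /* vendor | (device << 16) */
        if ((id & 0xFFFF) != 0xFFFF) {
            /* Device present: vendor = id & 0xFFFF, device = id >> 16. */
        }
    }
}
```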
As for graphics cards, you probably don't want to start with those. They are complex; at first, you should probably stick to the framebuffer returned by UEFI and at least write a driver for xHCI, the USB host controller interface (the controller itself is a PCI device) responsible for interacting with USB devices, including keyboards and mice.

CPU, Disk, RAM, Ethernet Data Flow [closed]

Trying to get an understanding of data flow on modern consumer desktop computers.
Looking first at a SATA port: if you want to load some bytes into RAM, does the CPU send that request to the memory controller, which hands it off to the SATA device, and is that data then moved into RAM by the memory controller, or do the CPU cache or registers get involved with the data at all?
I assume the OS typically blocks the thread until an I/O request is completed. Does the memory controller send an interrupt to let the OS know it can schedule that thread into the queue again?
Ethernet: assuming the above steps are complete and some bytes of a file have been loaded into RAM, does the memory get moved to the Ethernet controller by the memory controller, or does the CPU get involved in holding any of this data?
What if you use a socket with localhost? Do we just do a round trip through the memory controller, or do we involve the Ethernet controller at all?
Is a SATA-to-SATA storage transfer buffered anywhere?
I know that is a lot of questions; if you can comment on any of them I would appreciate it! I am really trying to understand the fundamentals here. I have a hard time moving on to higher levels of abstraction without these details...
The memory controller doesn't create requests itself (it has no DMA or bus mastering capabilities).
The memory controller is mostly about routing requests to the right places. For example, if the CPU (or a device) asks to read 4 bytes from physical address 0x12345678, then the memory controller uses the physical address to figure out whether to route that request to a PCI bus, or to a different NUMA node/different memory controller (e.g. using QuickPath, HyperTransport or Omni-Path links to other chips/memory controllers), or to its locally attached RAM chips. If the memory controller forwards a request to its locally attached RAM chips, then the memory controller also handles the "which memory channel" and timing part; it may also handle encryption and ECC (both checking/correcting errors, and reporting them to the OS).
Most devices support bus mastering themselves.
Because most operating systems use paging "contiguous in virtual memory" often doesn't imply "contiguous in physical memory". Because devices only deal with physical addresses and most transfers are not contiguous in physical memory; most devices support the use of "lists of extents". For example, if a disk controller driver wants to read 8 KiB from a disk, then the driver may tell the disk controller "get the first 2 KiB from physical address 0x11111800, then the next 4 KiB from physical address 0x22222000, then the last 2 KiB from physical address 0x33333000"; and the disk controller will follow this list to transfer the pieces of an 8 KiB transfer to the desired addresses.
Because devices use physical addresses and almost all software (including the kernel and drivers) primarily uses virtual addresses, something (the kernel) has to convert the virtual addresses (e.g. from a "read()" function call) into the "list of extents" that the device driver(s) probably need. When an IOMMU is being used (for security or virtualization), this conversion may include configuring the IOMMU to suit the transfer; in that case it's better to think of them as "device addresses" that the IOMMU converts/translates into physical addresses (where the device uses "device addresses" and not actual physical addresses).
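A hedged sketch of what such a "list of extents" might look like on the driver side (the struct layout and the virt_to_phys stand-in are illustrative; real controllers such as AHCI or NVMe each define their own descriptor formats):

```c
#include <stddef.h>
#include <stdint.h>

/* One physically contiguous piece of a larger transfer. */
struct extent {
    uint64_t phys_addr;  /* physical (or device/IOMMU) address of this piece */
    uint32_t length;     /* length in bytes */
};

/* Split a virtually contiguous buffer into per-page physical extents.
 * virt_to_phys is a stand-in for the kernel's address translation. */
size_t build_extent_list(const void *virt, size_t len,
                         struct extent *out, size_t max,
                         uint64_t (*virt_to_phys)(const void *))
{
    const size_t PAGE = 4096;
    size_t n = 0;
    uintptr_t p = (uintptr_t)virt;

    while (len > 0 && n < max) {
        size_t chunk = PAGE - (p % PAGE);  /* bytes left in this page */
        if (chunk > len)
            chunk = len;
        out[n].phys_addr = virt_to_phys((const void *)p);
        out[n].length    = (uint32_t)chunk;
        n++;
        p   += chunk;
        len -= chunk;
    }
    return n;  /* number of extents written */
}
```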
For some (relatively rare, mostly high-end server) cases; the motherboard/chipset may also include some kind of DMA engine (e.g. "Intel QuickData technology"). Depending on the DMA engine; this may be able to inject data directly into CPU caches (rather than only being able to transfer to/from RAM like most devices), and may be able to handle direct "from one device to another device" transfers (rather than having to use RAM as a buffer). However; in general (because device drivers need to work when there's no "relatively rare" DMA engine) it's likely that any DMA engine provided by the motherboard/chipset won't be supported by the OS or drivers well (and likely that the DMA engine won't be supported or used at all).
Looking first at a SATA port: if you want to load some bytes into RAM, does the CPU send that request to the memory controller, which hands it off to the SATA device, and is that data then moved into RAM by the memory controller, or do the CPU cache or registers get involved with the data at all?
The CPU is not involved during the operation. It is the AHCI controller, a PCI DMA device, that makes the transfer.
The AHCI specification from Intel (https://www.intel.ca/content/www/ca/en/io/serial-ata/serial-ata-ahci-spec-rev1-3-1.html) is used for SATA disks. Meanwhile, for more modern NVMe disks, an NVMe PCI controller is used (https://wiki.osdev.org/NVMe).
The OS basically writes to RAM (and to the memory-mapped registers of the AHCI controller) to tell it what to do and where in RAM to place the data (probably a buffer provided/allocated by the user-mode process that asks for the data on disk). The operation is DMA, so the CPU is not really involved in moving the data.
From C++, you ask for data probably using fstream or direct API calls to the OS through the library it provides. For example, on Windows, you can use the WriteFile() function (https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile) to write to a file.
Underneath, the library is a thin wrapper which makes system calls (it is the same for the standard C++ library). You can browse my answer at Who sets the RIP register when you call the clone syscall? for more information on system calls and how they work on modern systems.
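As a user-mode illustration (a minimal sketch; the file name is arbitrary), everything below the WriteFile() call, from the system call down to the AHCI driver programming the DMA transfer, is invisible to the program:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("example.txt", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }

    const char msg[] = "hello, disk\n";
    DWORD written = 0;
    if (!WriteFile(h, msg, (DWORD)(sizeof msg - 1), &written, NULL))
        fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}
```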
The memory controller is not really involved. It is involved in writing to the registers of the AHCI controller, but Brendan's answer is probably more useful for that matter, because I'm not sure of the details.
I assume the OS typically blocks the thread until an I/O request is completed. Does the memory controller send an interrupt to let the OS know it can schedule that thread into the queue again?
Yes, the OS blocks the thread, and yes, the AHCI controller triggers an MSI interrupt on command completion.
The OS will put the process on the queue of processes that are waiting for I/O. The AHCI controller triggers interrupts using the MSI capability of PCI devices. MSI is a special capability that allows the device to bypass the I/O APIC of modern x86 processors and send an interrupt message directly to the local APIC. The processor then looks in the IDT, as you would expect, for the vector to trigger and jumps to the associated handler.
It is the vector number that differentiates between the devices that triggered the interrupt, which makes it easy for the OS to install a proper handler for the device (a driver) on that interrupt number.
The OS has a driver model that accounts for the different types of devices used on modern computers. The driver model will often take the form of a virtual filesystem. The virtual filesystem basically presents every device to the upper layers of the OS as a file. The upper layers make open, read, write and ioctl calls on the file. Underneath, the driver does complex things like triggering read/write cycles by writing the registers of the AHCI controller and putting the process on other queues to wait for I/O.
From user mode (like when calling fstream's open method), it is really a syscall that is made. The syscall handler checks all permissions and makes sure that everything is okay with the request before returning a handle to the file.
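As a small Linux illustration of that "everything is a file" model (assuming a /dev/sda block device and sufficient privileges; the particular ioctl is just an example of asking a driver a question):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKGETSIZE64 */

int main(void)
{
    int fd = open("/dev/sda", O_RDONLY);    /* device node provided by the VFS */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned long long size_bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &size_bytes) == 0)  /* answered by the disk driver */
        printf("disk size: %llu bytes\n", size_bytes);
    else
        perror("ioctl");

    close(fd);
    return 0;
}
```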
Ethernet: assuming the above steps are complete and some bytes of a file have been loaded into RAM, does the memory get moved to the Ethernet controller by the memory controller, or does the CPU get involved in holding any of this data?
The Ethernet controller is also a PCI DMA device. It reads and writes in RAM directly.
The Ethernet controller is a PCI DMA device. I have never written an Ethernet driver, but I can tell you that it just reads and writes RAM directly. For network communication you have sockets, which act similarly to files but are not part of the virtual filesystem: network cards (including Ethernet controllers) are not presented to the upper layers as files. Instead, you use sockets to communicate with the controller. Sockets are not part of the C++ standard library, but they are present on all widespread platforms as OS-provided libraries that can be used from C++ or C. Socket operations are system calls of their own.
What if you use a socket with localhost? Do we just do a round trip through the memory controller, or do we involve the Ethernet controller at all?
The Ethernet controller is probably not involved.
If you use sockets with localhost, the OS will simply send the data to the loopback interface. The Wikipedia article is quite direct here (https://en.wikipedia.org/wiki/Localhost):
In computer networking, localhost is a hostname that refers to the current computer used to access it. It is used to access the network services that are running on the host via the loopback network interface. Using the loopback interface bypasses any local network interface hardware.
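A hedged sketch of that in C (POSIX sockets; the port number is arbitrary): two UDP sockets exchange a datagram over 127.0.0.1 within a single process, and the packet never leaves the kernel's loopback interface, so the Ethernet controller is never touched.

```c
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    if (rx < 0 || tx < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(40000);            /* arbitrary port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1 */

    if (bind(rx, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    const char msg[] = "over the loopback";
    sendto(tx, msg, sizeof msg, 0, (struct sockaddr *)&addr, sizeof addr);

    char buf[64] = {0};
    recvfrom(rx, buf, sizeof buf - 1, 0, NULL, NULL);
    printf("received: %s\n", buf);

    close(rx);
    close(tx);
    return 0;
}
```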
Is a SATA-to-SATA storage transfer buffered anywhere?
It is transferred to RAM from the first device then transferred to the second device afterwards.
On the link I provided earlier for the AHCI it is stated that:
This specification defines the functional behavior and software interface of the Advanced Host Controller Interface (AHCI), which is a hardware mechanism that allows software to communicate with Serial ATA devices. AHCI is a PCI class device that acts as a data movement engine between system memory and Serial ATA devices.
The AHCI controller isn't for moving data from SATA to SATA; it is for moving data from SATA to RAM or from RAM to SATA. A SATA-to-SATA operation thus involves bringing the data into RAM and then moving it from RAM to the other SATA device.

Do CPU and main memory need drivers to work?

Peripheral devices require drivers to work in a computer system (operating system).
Does a CPU need a driver to work?
Same question for main memory?
The answer is no.
The reason is that the motherboard comes with an (upgradable) BIOS, which takes care of making sure the CPU features function correctly (obviously, an AMD processor won't work on an Intel motherboard). You can upgrade the BIOS, but that is best avoided unless there is a good reason to.
Same goes for memory, it does not require a driver either.
Just so that you know, if you have ever tried overclocking you will have noticed that you can alter the way the RAM functions: ganged/unganged modes and so on. My point is that there is already an established interface, implemented in code, that lets you make changes in real time. Isn't that the very purpose of drivers, to be able to use a peripheral with an expected outcome?
On the other hand, peripheral devices are just extensions, which the motherboard does not know how to handle, hence the need for a set of instructions, i.e. drivers.
In a modern system both memory and the CPU require kernel mode code — as do devices — to function.
Memory requires management of virtual memory tables. The CPU requires maintenance of process control structures.
In the business, such code is not called a "driver".
Generally, one thinks of a device driver as being kernel mode code that responds to devices through the interrupt vector.
That said, on some systems there are "printer drivers" that do not fit that definition of driver.
In short, do memory and CPU have something called a "driver"? No.
Do they have something analogous to a driver? Yes.
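To make "kernel mode code for memory" slightly more concrete, here is a hedged sketch (the bit positions follow the Intel SDM; the helper function and names are illustrative) of building an x86-64 page-table entry:

```c
#include <stdint.h>

#define PTE_PRESENT   (1ULL << 0)   /* page is mapped                  */
#define PTE_WRITABLE  (1ULL << 1)   /* writes allowed                  */
#define PTE_USER      (1ULL << 2)   /* accessible from user mode       */
#define PTE_NX        (1ULL << 63)  /* no-execute (if EFER.NXE is set) */

/* Combine a 4 KiB-aligned physical frame address (bits 12..51) with flags. */
static inline uint64_t make_pte(uint64_t phys_addr, uint64_t flags)
{
    return (phys_addr & 0x000FFFFFFFFFF000ULL) | flags;
}
```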

Address of Video memory

The address of video memory is 0xB8000; who maps video memory to this address?
And the routine that copies the data from that address and puts it on the screen: is it a built-in processor function (does this driver come with the processor)?
What happens when you write to the address:
That area of the address space is not mapped to RAM; instead, writes to it are sent across the system bus to your VGA card. The BIOS set this up with your VGA card at boot time (lots of address ranges are memory-mapped to various devices). No code is executed on the CPU to plot the pixels when you write to this area of the address space; the VGA card receives the data instead of your RAM and does the work itself.
If you wanted, you could look up BIOS function calls and have the BIOS reconfigure the hardware so you could plot pixels instead of placing characters at the video address. You could even probe it to see if it supports VESA and switch to a nice 1280x768 32bpp resolution. The BIOS would then map an area of the address space of your choosing to the VGA card for you.
More about the BIOS:
The BIOS is a program that comes with your motherboard and that your CPU executes when it first powers up. It sets up all the hardware, maps all the memory-mapped devices, creates various useful tables, assigns I/O ports, and hooks interrupts up to a bunch of routines it leaves in memory. It then loads your bootsector from a device and jumps to your OS code.
The routines and data structures left behind enable you to get your OS off the ground. You can load sectors off a disk, write text to the screen, and get information about the system (memory maps, ACPI tables, MP tables, etc.). Without these routines and data structures, it would be a lot harder, if not impossible, to write an acceptable bootsector and gather all the information about the system needed to build a functional kernel.
However, the routines are dated, slow and have very restrictive limitations. For one, the routines left in memory are 16-bit real mode code, so as soon as you switch to 32-bit protected mode you have to switch back constantly or use VM86 mode to access them (they are completely inaccessible in 64-bit mode, although emulating the instructions with a modified x86emu library is apparently an option). The routines are also generally very slow, so you will need to write your own drivers from scratch if you move away from real mode programming.
In most cases, the PC monitor is driven by a VGA-compatible device, which by standard includes a text mode whose buffer (32 KB in size) is exposed through MMIO starting at address 0xB8000.
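A hedged bare-metal sketch of writing to that buffer (this only works where physical address 0xB8000 is accessible, e.g. in 80x25 text mode from a kernel that has identity-mapped it; it is not something a normal user-space program can do):

```c
#include <stdint.h>

#define VGA_TEXT_BASE 0xB8000
#define VGA_COLS      80

static void vga_print(const char *s, int row, uint8_t color)
{
    volatile uint16_t *vga = (volatile uint16_t *)VGA_TEXT_BASE;
    for (int col = 0; col < VGA_COLS && s[col]; col++) {
        /* Each cell is two bytes: ASCII code in the low byte, attribute in the high byte. */
        vga[row * VGA_COLS + col] = (uint16_t)s[col] | ((uint16_t)color << 8);
    }
}

/* e.g. vga_print("Hello from bare metal", 0, 0x0F);  white on black, top row */
```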

OS memory isolation

I am trying to write a very thin hypervisor that would have the following restrictions:
runs only one operating system at a time (ie. no OS concurrency, no hardware sharing, no way to switch to another OS)
it should be able only to isolate some portions of RAM (doing some memory translation behind the OS's back; let's say I have 6GB of RAM, I want Linux / Windows not to use the first 100MB, to see just 5.9GB and to use that without knowing what's behind)
I searched the Internet, but found close to nothing on this specific matter, as I want to keep as little overhead as possible (the current hypervisor implementations don't fit my needs).
What you are looking for already exists, in hardware!
It's called IOMMU[1]. Basically, like page tables, adding a translation layer between the executed instructions and the actual physical hardware.
AMD calls it IOMMU[2], Intel calls it VT-d.
[1] http://en.wikipedia.org/wiki/IOMMU
[2] http://developer.amd.com/documentation/articles/pages/892006101.aspx
Here are a few suggestions / hints, which are necessarily somewhat incomplete, as developing a from-scratch hypervisor is an involved task.
Make your hypervisor "multiboot-compliant" at first. This will enable it to reside as a typical entry in a bootloader configuration file, e.g., /boot/grub/menu.lst or /boot/grub/grub.cfg.
You want to set aside your 100MB at the top of memory, e.g., from 5.9GB up to 6GB. Since you mentioned Windows, I'm assuming you're interested in the x86 architecture. The long history of x86 means that the first few megabytes are filled with all kinds of legacy device complexities. There is plenty of material on the web detailing the "hole" between 640K and 1MB. Older ISA devices (many of which still survive in modern systems in "Super I/O chips") are restricted to performing DMA to the first 16 MB of physical memory. If you try to get between Windows or Linux and its relationship with these first few MB of RAM, you will have a lot more complexity to wrestle with. Save that for later, once you've got something that boots.
As physical addresses approach 4GB (2^32, hence the physical memory limit on a basic 32-bit architecture), things get complex again, as many devices are memory-mapped into this region. For example (referencing the other answer), the IOMMU that Intel provides with its VT-d technology tends to have its configuration registers mapped to physical addresses beginning with 0xfedNNNNN.
This is doubly true for a system with multiple processors. I would suggest you start on a uniprocessor system, disable the other processors from within the BIOS, or at least manually configure your guest OS not to enable the other processors (e.g., for Linux, include 'nosmp' on the kernel command line -- e.g., in your /boot/grub/menu.lst).
Next, learn about the "e820" map. Again, there is plenty of material on the web, but perhaps the best place to start is to boot a Linux system and look near the top of the output of 'dmesg'. This is how the BIOS communicates to the OS which portions of the physical memory space are "reserved" for devices or other platform-specific BIOS/firmware uses (e.g., to emulate a PS/2 keyboard on a system with only USB I/O ports).
One way for your hypervisor to "hide" its 100MB from the guest OS is to add an entry to the system's e820 map. A quick and dirty way to get things started is to use the Linux kernel command line option "mem=" or the Windows boot.ini / bcdedit flag "/maxmem".
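A hedged sketch of that idea (the entry layout matches what INT 15h, AX=E820h reports and what Linux prints near the top of dmesg; the hide_top_of_ram helper is purely illustrative of how a thin hypervisor might shrink the map before handing it to the guest):

```c
#include <stdint.h>

struct e820_entry {
    uint64_t base;    /* start physical address                      */
    uint64_t length;  /* length in bytes                             */
    uint32_t type;    /* 1 = usable RAM, 2 = reserved, 3 = ACPI, ... */
} __attribute__((packed));

/* Illustrative only: steal `bytes` from the top of the highest usable entry,
 * so the guest OS never sees (or touches) that region. */
static void hide_top_of_ram(struct e820_entry *map, int count, uint64_t bytes)
{
    for (int i = count - 1; i >= 0; i--) {
        if (map[i].type == 1 && map[i].length > bytes) {
            map[i].length -= bytes;
            return;
        }
    }
}
```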
There are a lot more details and things you are likely to encounter (e.g., x86 processors begin in 16-bit mode when first powered-up), but if you do a little homework on the ones listed here, then hopefully you will be in a better position to ask follow-up questions.