I imagine this would be accomplished by assigning some RAM and L3 cache to one OS, some to another, and having two hard drives and two monitors. I don't know if it's possible to do that at all, and if it is, how? A wrapper OS? Are there any functional examples?
I know that most of the advantages of such a system can be had through virtualization, but that is different from what I mean.
Theoretically, it is possible to have multiple operating systems running on a single machine, each on different cores: one core running Windows and another running a Linux distro. It is very hard to achieve, though, because each OS assumes it is the only king of the island and tries to rule everything, including memory and devices. Eventually, without any exclusive locking, both OSes will confuse the hardware or each other and crash.
So let's get to the point: how is this even possible theoretically? Through asymmetric multiprocessing (AMP). Before starting operating system A, you hide the second core, so the OS assumes there is only one core present on the machine and sets up its environment for that core alone.
Once things are ready on that side, you ask operating system B to load on the second core, this time hiding the first core. And yes, you need a separate program, besides the boot loader, to do all of this work.
Now you have two OSes running, but what about memory and devices? Yes, that is a major concern. One workaround I can see is to modify the kernels of OS A and OS B so that system resources are divided cleanly. For example, tell OS A to use the lower 2 GB of memory and treat the upper 2 GB as unavailable, and modify OS B to use only the upper 2 GB.
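As a rough illustration of that split (not tied to any real kernel; the entry layout follows the BIOS e820 convention, but everything else here is invented for the example):

```c
/* Hypothetical sketch: a pre-boot loader builds a different memory map
 * for each OS so that neither one touches the other's RAM.  The entry
 * layout mirrors the BIOS e820 format (base, length, type); the names
 * and the 4 GB machine are made up for illustration. */
#include <stdint.h>
#include <stdio.h>

#define E820_USABLE   1
#define E820_RESERVED 2
#define GiB (1024ULL * 1024 * 1024)

struct e820_entry {
    uint64_t base;
    uint64_t length;
    uint32_t type;
};

/* Map handed to OS A: lower 2 GiB usable, upper 2 GiB marked reserved. */
static const struct e820_entry map_for_os_a[] = {
    { 0,       2 * GiB, E820_USABLE   },
    { 2 * GiB, 2 * GiB, E820_RESERVED },
};

/* Map handed to OS B: the mirror image. */
static const struct e820_entry map_for_os_b[] = {
    { 0,       2 * GiB, E820_RESERVED },
    { 2 * GiB, 2 * GiB, E820_USABLE   },
};

int main(void)
{
    /* In a real AMP setup these maps would be passed to each kernel at
     * boot time; here we just print them to show the idea. */
    printf("OS A usable RAM: %llu MiB starting at 0x%llx\n",
           (unsigned long long)(map_for_os_a[0].length >> 20),
           (unsigned long long)map_for_os_a[0].base);
    printf("OS B usable RAM: %llu MiB starting at 0x%llx\n",
           (unsigned long long)(map_for_os_b[1].length >> 20),
           (unsigned long long)map_for_os_b[1].base);
    return 0;
}
```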
The memory concern is resolved, but then it would be a little tricky to modify every device driver to do the same.
I guess this is the main reason nobody does this kind of experiment: it simply isn't worth it.
Outside of virtualization, it would really not be possible to do this on any current processors.
When the processor receives an interrupt, what operating system handles it?
I would like to have a computer running only one program, so whenever the computer boots, it executes that program.
For example:
The computer board from a Tesla car, or common supermarket systems.
One example of how I use that:
Develop a system to make a house automatic: there would be a screen showing lights that can be turned on or off, and if the house loses power, the computer would reboot when the power comes back and show the light options again.
Do I have to build an OS for that?
A program that boots + runs on bare metal is called a "freestanding" program. It doesn't run under an OS, and includes everything it needs to manage the hardware, and includes all libraries it needs (statically linked).
It needs to do some of the same things an OS does (talk to hardware, install interrupt handlers, etc.) so in some respects you could call it an OS, but it's also just one program and doesn't necessarily provide any mechanism for running other programs.
The more bare-bones and light-weight the microcontroller is (and/or the program), the more obvious it is that it's just a program, not an OS. (e.g. if you don't do any dynamic memory allocation. Or you don't load any code from anywhere into RAM, just execute it from ROM).
BTW, an OS kernel is a freestanding program. Not all freestanding programs are kernels, but a kernel has to be freestanding by the normal definitions of what a kernel is.
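To make that concrete, here is a minimal sketch of a freestanding program for legacy x86 text mode, assuming a Multiboot-capable loader such as GRUB has already put the CPU in 32-bit protected mode and jumped to kmain() (linking details omitted):

```c
/* Minimal sketch of a freestanding program: no OS, no C library.
 * It assumes a boot loader has already set up 32-bit protected mode
 * and jumped to kmain().
 * Compile with something like: gcc -m32 -ffreestanding -nostdlib -c main.c */

/* The VGA text buffer lives at physical address 0xB8000:
 * each cell is one character byte followed by one attribute byte. */
static volatile unsigned short *const vga = (unsigned short *)0xB8000;

void kmain(void)
{
    const char *msg = "Hello from bare metal";

    for (int i = 0; msg[i] != '\0'; i++)
        vga[i] = (unsigned short)msg[i] | (0x0F << 8); /* white on black */

    for (;;)               /* nothing to return to, so spin forever */
        ;
}
```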
Also BTW, it's totally normal for an embedded system to run an OS, and have that OS start some specific programs. In fact the examples you cited do use OSes. So instead of writing all your own drivers, scheduling code, etc., you use an existing OS and write a program that runs under that OS.
Sometimes that OS is Linux, sometimes it's a light-weight real-time OS.
For a kiosk, sometimes that OS is even Windows. (Or in older systems, DOS which is barely an OS.) See comments under the question.
You should take a look at IncludeOS, which is made exactly for your purpose: it only includes what is needed.
Peripheral devices require drivers to work in a computer system (operating system).
Does a CPU need a driver to work?
Same question for a main memory?
The answer is no.
The reason is that the motherboard comes with an (upgradable) BIOS, which takes care of making sure the CPU features function correctly (obviously, an AMD processor won't work on an Intel motherboard). You can upgrade the BIOS, but that should be avoided unless necessary, for ... reasons, of course.
Same goes for memory, it does not require a driver either.
Just so you know, if you have ever tried overclocking, you will notice that you can alter the way the RAM functions: ganged/unganged modes and so on. My point is that there is already an interface, established in code, that lets you make changes in real time. Isn't that the very purpose of having drivers, to be able to use a peripheral with the expected outcome?
On the other hand, peripheral devices are just extensions which the motherboard does not know how to handle, hence the need for a set of instructions, i.e. drivers.
In a modern system both memory and the CPU require kernel mode code — as do devices — to function.
Memory requires management of virtual memory tables. The CPU requires maintenance of process control structures.
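To make "kernel mode code" a little more tangible, here is a highly simplified sketch of the kind of state the kernel keeps for memory and for the CPU. These structures are illustrative only, not the layout used by any real operating system:

```c
/* Highly simplified, illustrative structures only. */
#include <stdint.h>

/* One entry in a (32-bit style) page table: maps a virtual page to a
 * physical frame plus permission bits. */
struct page_table_entry {
    uint32_t present  : 1;   /* is the page in RAM right now?      */
    uint32_t writable : 1;   /* may the process write to it?       */
    uint32_t user     : 1;   /* accessible from user mode?         */
    uint32_t unused   : 9;
    uint32_t frame    : 20;  /* physical frame number              */
};

/* A bare-bones "process control block": what the scheduler needs to
 * suspend a process on one CPU and resume it later. */
struct process_control_block {
    int      pid;
    uint32_t saved_registers[8];        /* general register snapshot */
    uint32_t saved_instruction_ptr;
    uint32_t saved_stack_ptr;
    struct page_table_entry *page_table; /* this process's address space */
    int      state;                     /* running, ready, blocked, ... */
};
```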
In the business, such code is not called a "driver".
Generally, one thinks of a device driver as being kernel mode code that responds to devices through the interrupt vector.
That said, on some systems there are "printer drivers" that do not fit that definition of driver.
In short, do memory and CPU have something called a "driver"? No.
Do they have something analogous to a driver? Yes.
I'm doing a question list on operating systems and this question came up: "How do virtual machines make it possible to use multiple OSes on the same hardware? Consider the fact that the OSes have absolute control over the hardware." Can someone help me answer this one?
Multiple virtual machines run simultaneously on the same hardware in much the same way that multiple processes run on the same hardware. This simultaneous execution is possible because each guest OS is given the illusion that it is the only entity in control of the hardware. Two concepts, abstraction and indirection, are used to provide that illusion. The virtualization software makes each VM think it is running on its own hardware by abstracting the hardware resources, and in some cases instructions are transparently intercepted and handled by the virtualization software (indirection). Modern hardware also provides facilities to run virtual machines efficiently, e.g. Intel VT-x/EPT, which gives the virtualized OS efficient control over memory and the CPU.
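To make the hardware-assistance part a bit more concrete, here is a small sketch (assuming an x86/x86-64 CPU and a GCC or Clang toolchain) that simply asks the processor whether it advertises Intel VT-x support at all:

```c
/* CPUID leaf 1 reports VMX (Intel VT-x) support in bit 5 of ECX. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not available");
        return 1;
    }

    if (ecx & (1u << 5))
        puts("VMX (Intel VT-x) is supported by this CPU");
    else
        puts("No VMX support reported");

    return 0;
}
```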
I know that an operating system is nothing without the kernel. But I was asked a question in an interview:
What is (OS minus kernel)? In other words, what exactly is left if we remove the kernel from an operating system?
(Please do not give it a negative rating if it is silly; please answer in the comments and then I will delete this question.)
In addition to Sam Dunk's statement (see the other post), there is one other part that belongs to the "operating system", for a given value of operating system: the boot loader.
When a PC (and presumably other architectures) boots up, the BIOS loads the boot sector. The BIOS is not part of the operating system. The boot sector (arguably) is. The boot sector (limited to 512 bytes!) loads the bootloader.
The bootloader may offer a choice between different operating systems (where multiple operating systems are installed on the same computer), and/or options for loading the operating system (e.g. "Safe mode", or different run levels for Unix, q.v., etc.). The bootloader then loads the appropriate kernel and runs it. As soon as control is passed to the kernel, the bootloader is discarded (until the next boot).
The above is somewhat simplified.
For further reading on how the parts fit together (in the case of Linux), see "Inside the Linux boot process" (http://www.ibm.com/developerworks/library/l-linuxboot/), for example. The master boot record is referred to there as the "Stage 1 boot loader", and what I called "the boot loader" they call the "Stage 2 boot loader".
Details will vary from O/S to O/S.
To add to Sam Dunk's answer, we have to think about the purpose of having an operating system. An OS does memory management, process scheduling, device management, etc., but that is not why we need an OS; that is how the OS does its job. The reason we need an OS is that it abstracts the underlying hardware for applications. Period. Nothing else. The other stuff, like the user interface and system utilities, is just sugar on top (hey, a command-line OS is still an OS). This core of the OS is the kernel. It provides a simplified and consistent platform on which applications can execute across many hardware configurations.
For an analogy, think about the pipes and cables behind the walls of your house. Without them your wall sockets and water taps are practically useless. The sinks, cabinets, and walls separating the rooms are the system applications. (They usually come with the house, but they aren't absolutely necessary.)
I am trying to write a very thin hypervisor that would have the following restrictions:
runs only one operating system at a time (i.e. no OS concurrency, no hardware sharing, no way to switch to another OS)
it should only be able to isolate some portions of RAM (do some memory translation behind the OS's back; say I have 6 GB of RAM and I want Linux / Windows not to use the first 100 MB, to see just 5.9 GB and use that without knowing what lies behind it)
I searched the Internet, but found close to nothing on this specific matter, as I want to keep as little overhead as possible (the current hypervisor implementations don't fit my needs).
What you are looking for already exists, in hardware!
It's called an IOMMU [1]. Basically, like page tables, it adds a translation layer between the executed instructions and the actual physical hardware.
AMD calls it IOMMU [2]; Intel calls it VT-d (search for "intel vt-d").
[1] http://en.wikipedia.org/wiki/IOMMU
[2] http://developer.amd.com/documentation/articles/pages/892006101.aspx
Here are a few suggestions / hints, which are necessarily somewhat incomplete, as developing a from-scratch hypervisor is an involved task.
Make your hypervisor "multiboot-compliant" at first. This will enable it to reside as a typical entry in a bootloader configuration file, e.g., /boot/grub/menu.lst or /boot/grub/grub.cfg.
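For reference, a Multiboot (version 1) header can be expressed directly in C. This is just a minimal sketch; the section name and the omitted linker script are assumptions on my part:

```c
/* Minimal Multiboot (version 1) header.  GRUB scans the first 8 KiB of
 * the kernel image for this structure, so it is placed in its own
 * section and the linker script (not shown) must keep that section near
 * the start of the binary.  The ".multiboot" section name is just a
 * convention used here. */
#include <stdint.h>

#define MULTIBOOT_MAGIC 0x1BADB002u
#define MULTIBOOT_FLAGS 0x0u                 /* no special requests */

struct multiboot_header {
    uint32_t magic;
    uint32_t flags;
    uint32_t checksum;   /* magic + flags + checksum must equal 0 */
};

__attribute__((section(".multiboot"), used, aligned(4)))
static const struct multiboot_header mb_header = {
    .magic    = MULTIBOOT_MAGIC,
    .flags    = MULTIBOOT_FLAGS,
    .checksum = -(MULTIBOOT_MAGIC + MULTIBOOT_FLAGS),
};
```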
You want to set aside your 100MB at the top of memory, e.g., from 5.9GB up to 6GB. Since you mentioned Windows, I'm assuming you're interested in the x86 architecture. The long history of x86 means the first few megabytes are filled with all kinds of legacy device complexities; there is plenty of material on the web about the "hole" between 640K and 1MB. Older ISA devices (many of which still survive in modern systems in "Super I/O chips") are restricted to performing DMA to the first 16 MB of physical memory. If you try to get in between Windows or Linux and its relationship with these first few MB of RAM, you will have a lot more complexity to wrestle with. Save that for later, once you've got something that boots.
As physical addresses approach 4GB (2^32, hence the physical memory limit on a basic 32-bit architecture), things get complex again, as many devices are memory-mapped into this region. For example (referencing the other answer), the IOMMU that Intel provides with its VT-d technology tends to have its configuration registers mapped to physical addresses beginning with 0xfedNNNNN.
This is doubly true for a system with multiple processors. I would suggest you start on a uniprocessor system: disable the other processors from within the BIOS, or at least manually configure your guest OS not to enable them (e.g., for Linux, include 'nosmp' on the kernel command line, for instance in your /boot/grub/menu.lst).
Next, learn about the "e820" map. Again there is plenty of material on the web, but perhaps the best place to start is to boot a Linux system and look near the top of the output of 'dmesg'. This is how the BIOS communicates to the OS which portions of the physical address space are "reserved" for devices or other platform-specific BIOS/firmware uses (e.g., to emulate a PS/2 keyboard on a system with only USB I/O ports).
One way for your hypervisor to "hide" its 100MB from the guest OS is to add an entry to the system's e820 map. A quick and dirty way to get things started is to use the Linux kernel command line option "mem=" or the Windows boot.ini / bcdedit flag "/maxmem".
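Here is a rough sketch of what the e820 trick amounts to. The entry layout (base, length, type) matches what the BIOS reports through INT 15h, EAX=0xE820; the example map and the helper function are made up for illustration, and a real hypervisor would edit the map it passes to the guest before handing over control:

```c
#include <stdint.h>
#include <stdio.h>

#define E820_USABLE   1
#define E820_RESERVED 2
#define MiB (1024ULL * 1024)

struct e820_entry {
    uint64_t base;
    uint64_t length;
    uint32_t type;
};

/* Toy map for a machine with 6 GiB of usable RAM in one big chunk. */
static struct e820_entry map[8] = {
    { 0, 6144 * MiB, E820_USABLE },
};
static int map_len = 1;

/* Shrink the last usable region and append a reserved entry covering the
 * top `hide_bytes` of RAM, so the guest never sees that range. */
static void hide_top_of_ram(uint64_t hide_bytes)
{
    struct e820_entry *last = &map[map_len - 1];

    last->length -= hide_bytes;
    map[map_len].base   = last->base + last->length;
    map[map_len].length = hide_bytes;
    map[map_len].type   = E820_RESERVED;
    map_len++;
}

int main(void)
{
    hide_top_of_ram(100 * MiB);

    for (int i = 0; i < map_len; i++)
        printf("%#014llx - %#014llx  %s\n",
               (unsigned long long)map[i].base,
               (unsigned long long)(map[i].base + map[i].length - 1),
               map[i].type == E820_USABLE ? "usable" : "reserved");
    return 0;
}
```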
There are a lot more details and things you are likely to encounter (e.g., x86 processors begin in 16-bit mode when first powered-up), but if you do a little homework on the ones listed here, then hopefully you will be in a better position to ask follow-up questions.