What is left in an operating system if we remove the kernel? [duplicate] - operating-system

This question already has answers here:
What is an OS kernel? How does it differ from an operating system? [closed]
(11 answers)
Closed last month.
I know that an operating system is nothing without a kernel. But I was asked a question in an interview:
What is (OS - Kernel)? So what exactly is left if we remove the kernel from an operating system?
(Please do not give this a negative rating if it is silly; please answer in the comments and I will then delete this question.)

In addition to Sam Dunk's statement (see the other post), there is one other component that belongs to the "operating system" - for a given value of operating system: the boot loader.
When a PC (and presumably other architectures) boots up, the BIOS loads the boot sector. The BIOS is not part of the operating system. The boot sector (arguably) is. The boot sector (limited to 512 bytes!) loads the bootloader.
The bootloader may offer a choice between different operating systems (where multiple operating systems are installed on the same computer), and/or options for loading the operating system (e.g. "Safe Mode", or different run levels for Unix, etc.). The bootloader then loads the (appropriate) kernel and runs it. As soon as control is passed to the kernel, the bootloader is discarded (until the next boot).
The above is somewhat simplified.
For further reading on how the parts fit together (in the case of Linux), see "Inside the Linux boot process" http://www.ibm.com/developerworks/library/l-linuxboot/ for example. The master boot record is referred to as "Stage 1 boot loader", and what I referred to as "the boot loader" they refer to as "Stage 2 boot loader".
Details will vary from O/S to O/S.

To add to Sam Dunk's answer, we have to think about what the purpose of an operating system is. An OS does memory management, process scheduling, device management, etc. - but that is not why we need an OS; it is how the OS does its job. The reason we need an OS is that it abstracts the underlying hardware for applications. Period. Nothing else. The other stuff, like the user interface and system utilities, is just sugar added on top (hey, a command-line OS is still an OS). This is the kernel, or the core of the OS. It provides a simplified and consistent platform on which applications can execute across multiple hardware configurations.
For an analogy, think about the pipes and cables behind the walls in your house. Without them, your wall sockets and water taps are practically useless. The sinks, cabinets, and walls that separate rooms are the system applications. (They usually come with the house, but they aren't absolutely necessary.)
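Back to the abstraction point: here is a minimal sketch in plain POSIX C (nothing specific to any particular OS) of what "abstracting the hardware" looks like from an application's side. The program asks the kernel to write bytes to standard output and never needs to know whether that output is a terminal, a pipe, or a file.

    #include <unistd.h>   /* write(): thin wrapper around the write system call */

    int main(void) {
        const char msg[] = "hello via the kernel\n";
        /* The kernel picks the right driver for fd 1 (terminal, pipe,
         * file, ...); the program neither knows nor cares. */
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }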

Related

How many processors does Origin's Big O use?

We had a question in the exam about whether a desktop computer is a multiprocessor or not. We are now discussing whether the Big O PC from Origin uses a single microprocessor or more than one.
This question is really too broad for SO (for future reference), but I'll provide some insight that will hopefully improve your understanding. Since the term "microprocessor" is a bit general, and not all the technical information about every part of a modern PC is publicly available, it's hard to give an exact number - mostly because a lot of the components and subsystems of a modern desktop PC have some kind of processor. Generally these are microcontrollers, but they are still processors running firmware/software to do whatever that subsystem requires.
Certainly, no modern PC (like the one you mention, assuming it's this one: https://www.originpc.com/gaming/desktops/big-o/) would be considered a single-processor system. Everything from desktops to laptops to smartphones these days has at least 2-4 physical processors (i.e., cores) as part of the application SoC. So when you read that this system has an Intel i7-9700K, that "processor" is really made up of 8 identical x86-64 processors in one package. It is these cores that run all your applications and operating systems, but there are also many little processors running their own code to do various other functions. For example, on Intel CPUs there is a small processor that starts up when the computer first powers on and enables various management and security features (https://en.wikipedia.org/wiki/Intel_Management_Engine). Likewise, there are processors in many of the subsystems: the audio subsystem has a small microcontroller/DSP for low-level audio features, and the graphics system can have tens of small processors or more, depending on how you count the cores in the integrated GPU. And all of these are contained inside the package of the i7; there are even more on the motherboard and in external components. Depending on what you count, there can be hundreds of small processors in a modern computer system.
In the past, the main processor was a single-core unit that really did contain only one microprocessor; the terms "processor" and "CPU" have carried over, so you might say a desktop has "an Intel i7" for a processor even though the chip itself contains many main processors/cores and numerous subprocessors/microcontrollers. There are also systems that can install more than one Intel/AMD SoC - usually high-end workstations or servers, called multi-socket systems - so note the difference between multiple processors and multiple sockets on the motherboard.
So, to answer your question directly: it depends on what is meant by "processor". If the question is whether you can fit multiple i7s (i.e., whether the system is multi-socket), then no. If the question is whether a modern PC has multiple processors in the sense of CPU cores, then yes. If the question is to count all the processing units in the system, including all the little microprocessors doing their particular jobs, then it's really hard to say, but there are a lot of them.
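If you just want the count in the "CPU cores" sense, i.e. how many logical processors the OS exposes, here is a small sketch using sysconf(); _SC_NPROCESSORS_ONLN is not strict POSIX but is available on Linux and most Unix-likes:

    #include <stdio.h>
    #include <unistd.h>   /* sysconf() */

    int main(void) {
        /* Logical processors currently online, as the OS reports them;
         * this counts hardware threads, not the many microcontrollers
         * discussed above. */
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        printf("online logical CPUs: %ld\n", n);
        return 0;
    }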

What does kernel of an operating system do?

What exactly is the kernel of an operating system? What does it do? I tweaked my Linux kernel a couple of times, but didn't know why I did it or what it changed.
The kernel is the central part of the operating system. It handles the 'behind the scenes' tasks so that your user-space applications (end-user applications) may run without knowing the inner details and complexities of the hardware.
Most importantly, a kernel performs:
Memory management
Process management (see the sketch below)
Virtual filesystem support
Interrupt handling
and more.
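To give the process management item a concrete face, a hedged user-space sketch: fork() and waitpid() are just requests to the kernel, which does the actual process creation, scheduling, and memory bookkeeping behind the scenes.

    #include <stdio.h>
    #include <sys/wait.h> /* waitpid() */
    #include <unistd.h>   /* fork() */

    int main(void) {
        pid_t pid = fork();          /* the kernel clones the process */
        if (pid == 0) {
            printf("child: scheduled and given memory by the kernel\n");
            return 0;
        }
        waitpid(pid, NULL, 0);       /* the kernel reports the child's exit */
        printf("parent: child %d reaped\n", (int)pid);
        return 0;
    }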
You can read more about the inner workings of the Linux kernel in the wonderful book "Linux Kernel Development" by Robert Love.
The kernel is the software that handles exceptions and interrupts in the system.
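A user-space echo of that point: when the CPU raises an exception in your process (say, a bad memory access), the kernel's exception handler runs first and may then deliver it to the process as a signal. A minimal sketch, purely illustrative:

    #include <signal.h>
    #include <unistd.h>   /* write(), _exit(): async-signal-safe */

    /* The CPU faults on the bad store below; the kernel's exception
     * handler runs first, then forwards it to us as SIGSEGV. */
    static void on_segv(int sig) {
        (void)sig;
        static const char msg[] = "kernel delivered SIGSEGV\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void) {
        signal(SIGSEGV, on_segv);
        volatile int *p = 0;
        *p = 42;   /* invalid access: fault -> kernel -> our handler */
        return 0;
    }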

Is Virtual Memory in some way related to Virtualization Technology?

I think this is a bit of a vague question. But I was trying to get a clear understanding of how a hypervisor interacts with operating systems under the hood, and what makes the two so different. Let me walk you through my thought process.
Why do we need a virtualization manager a.k.a. a hypervisor, if we already have an operating system to manage resources which are shared?
One answer that I got was: suppose the system crashes; if we have no virtualization manager, then it's a total loss. So virtualization keeps other systems unaffected by providing isolation.
Okay, then why do we need an operating system? Well, operating systems and hypervisors have different tasks to handle: the hypervisor handles how to allocate resources (compute, networking, etc.), while the OS handles process management, the file system, memory (hmm... we also have virtual memory, right?).
I may not have asked the question very clearly, but I am confused, so maybe I could get a little help to clear up my understanding.
"Virtual" roughly means "something that is not what it seems". It is a common task in computing to substitute one thing with another.
A "virtual resource" is a common approach for that. It also means that there is an entity in a system that transparently substitutes one portion of resource with another. Memory is one of the most important resources in computing systems, therefore "Virtual Memory" is one of the first terms that historically was introduced.
However, there are other resources that are worth virtualizing. One can virtualize registers or, more specifically, their values. Input/output devices, time, number of processors, network connections — all these resources can be and are virtualized these days (see Intel VT-d, virtual-time papers, multicore simulators, and virtual switches and network adapters as respective examples). A combination of such things is roughly what constitutes a "virtualization technology". It is not a well-defined term, unless you are talking about Intel® Virtualization Technology, which is a single vendor's trade name.
In this sense, a hypervisor is such an entity that substitutes/manages chosen resources transparently to other controlled entities, which are then said to reside inside "containers", "jails", "virtual machines" — different names exist.
"Both operating systems and hypervisors have different tasks to handle"
In fact, they don't.
An operating system is just a hypervisor for regular user applications, as it manages resources behind their backs, transparently to them. The resources are: virtual memory, because an OS makes it seem that every application has a huge flat memory space of its own; virtual time, because no application manages its own context-switching points; virtual I/O, because each application uses system calls to access devices instead of writing directly into their registers.
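A small illustration of the "virtual memory" point, as a hedged sketch: after fork(), parent and child read and write through the same virtual address yet see different values, because the kernel transparently backs that address with different physical pages (copy-on-write).

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int x = 1;
        pid_t pid = fork();
        if (pid == 0) {
            x = 2;   /* copy-on-write: the kernel gives the child its own page */
            printf("child:  &x=%p x=%d\n", (void *)&x, x);
            return 0;
        }
        waitpid(pid, NULL, 0);
        /* Same virtual address as the child printed, but still 1 here. */
        printf("parent: &x=%p x=%d\n", (void *)&x, x);
        return 0;
    }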
A hypervisor is a fancy way of saying a "second-level operating system", as it virtualizes the resources visible to operating systems. The resources are essentially the same: memory, time, I/O; a new addition is system registers.
It can go on and on, i.e. you can have hypervisors of higher levels that virtualize certain resources for entities at lower levels. For Intel systems, this roughly corresponds to the stack SMM -> VMM -> OS -> user application, where SMM (System Management Mode) is the outermost hypervisor and the user application is the innermost entity (which actually does the useful job of running the web browser and web server you are using right now).
"Why do we need a virtualization manager, a.k.a. hypervisor, if we already have an operating system to manage how the resources are shared?"
We don't need it if the chosen computer architecture supports more than one level of indirection for resource management (e.g. nested virtualization); thus, it depends on the architecture. On certain IBM systems (System/360, in the 1960s-1970s), hypervisors were invented and used much earlier than operating systems in the modern sense were introduced. The more common IBM PC architecture based on Intel x86 CPUs (introduced in 1981) had deficiencies that did not allow achieving the required level of isolation between multiple OSes without introducing a second layer of abstraction (hypervisors) into the architecture, which happened around 2005.

Is it possible to run two separate operating systems on a computer?

I imagine this would be accomplished by assigning some RAM and L3 cache to one OS and some to another, and having two hard drives and two monitors. I don't know if it's possible to do that at all, and if it is, how? A wrapper OS? Are there any functional examples?
I know that most advantages of such a system can be acquired by virtualization, but that is different than what I mean.
Theoretically, it is possible to have multiple operating systems running on a single machine, each on different cores - say, one core running Windows and another running a Linux distro. It is very hard to achieve, though, because each OS assumes it is the only king of the island and tries to rule everything, such as memory and devices. Eventually, without any exclusive locking, the two OSes would confuse the hardware or each other and crash.
So how is it even possible in theory? Through asymmetric multiprocessing (AMP): before executing operating system A, you hide the second core, so that the OS assumes there is only one core present on the machine and sets up its environment for that core.
Once things are ready on that side, you ask operating system B to load on the second core, this time hiding the first core. And yes, you need a separate program, besides the bootloader, to do all of this work.
Now you have two OSes running, but what about memory? Devices? Yes, that's a major concern. One workaround I can see is to modify the kernels of OS A and OS B so that system resources are properly divided - e.g., tell OS A to use the lower 2GB of memory and treat the upper 2GB as unavailable, and modify OS B to use the upper 2GB (on Linux, the mem= kernel parameter does something similar).
That resolves the memory concern, but it would be trickier to modify every device driver to do the same.
I guess that is the main reason nobody does this kind of experiment: it isn't worth it.
Outside of virtualization, it would really not be possible to do this on any current processors.
When the processor receives an interrupt, what operating system handles it?

OS memory isolation

I am trying to write a very thin hypervisor that would have the following restrictions:
runs only one operating system at a time (i.e. no OS concurrency, no hardware sharing, no way to switch to another OS)
it should only be able to isolate some portions of RAM (doing some memory translation behind the OS's back - let's say I have 6GB of RAM; I want Linux / Windows not to use the first 100MB, i.e. to see just 5.9GB and use that without knowing what's behind it)
I searched the Internet, but found close to nothing on this specific matter, as I want to keep as little overhead as possible (the current hypervisor implementations don't fit my needs).
What you are looking for already exists - in hardware!
It's called an IOMMU[1]. Basically, like page tables, it adds a translation layer between devices' memory accesses (DMA) and the actual physical memory.
AMD calls it IOMMU[2]; Intel calls it VT-d (search for "intel vt-d").
[1] http://en.wikipedia.org/wiki/IOMMU
[2] http://developer.amd.com/documentation/articles/pages/892006101.aspx
Here are a few suggestions / hints, which are necessarily somewhat incomplete, as developing a from-scratch hypervisor is an involved task.
Make your hypervisor "multiboot-compliant" at first. This will enable it to reside as a typical entry in a bootloader configuration file, e.g., /boot/grub/menu.lst or /boot/grub/grub.cfg.
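For instance, here is a minimal sketch of a Multiboot (v1) header expressed in C, assuming a GNU toolchain. The section name ".multiboot" is my own choice, and your linker script must keep it within the first 8 KiB of the image; real kernels often write this header in assembly instead.

    #include <stdint.h>

    #define MB1_MAGIC 0x1BADB002u          /* Multiboot v1 magic number */
    #define MB1_FLAGS 0x00000000u          /* no optional features requested */

    struct mb1_header {
        uint32_t magic;
        uint32_t flags;
        uint32_t checksum;                 /* magic + flags + checksum == 0 */
    };

    /* Placed in its own section so the linker script can position it
     * near the start of the loaded image, 4-byte aligned. */
    __attribute__((section(".multiboot"), aligned(4), used))
    static const struct mb1_header header = {
        MB1_MAGIC, MB1_FLAGS, 0u - (MB1_MAGIC + MB1_FLAGS)
    };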
You want to set aside your 100MB at the top of memory, e.g. from 5.9GB up to 6GB. Since you mentioned Windows, I'm assuming you're interested in the x86 architecture. The long history of x86 means that the first few megabytes are filled with all kinds of legacy device complexities: there is plenty of material on the web about the "hole" between 640K and 1MB, and older ISA devices (many of which still survive in modern systems in "Super I/O" chips) are restricted to performing DMA to the first 16MB of physical memory. If you try to get between Windows or Linux and its relationship with these first few MB of RAM, you will have a lot more complexity to wrestle with. Save that for later, once you've got something that boots.
As physical addresses approach 4GB (2^32, hence the physical memory limit on a basic 32-bit architecture), things get complex again, as many devices are memory-mapped into this region. For example (referencing the other answer), the IOMMU that Intel provides with its VT-d technology tends to have its configuration registers mapped to physical addresses beginning with 0xfedNNNNN.
This is doubly true for a system with multiple processors. I would suggest you start on a uniprocessor system, disable the other processors from within the BIOS, or at least manually configure your guest OS not to enable the other processors (e.g., for Linux, include 'nosmp' on the kernel command line - e.g., in your /boot/grub/menu.lst).
Next, learn about the "e820" map. Again, there is plenty of material on the web, but perhaps the best place to start is to boot a Linux system and look near the top of the output of 'dmesg'. This is how the BIOS communicates to the OS which portions of the physical memory space are "reserved" for devices or other platform-specific BIOS/firmware uses (e.g., emulating a PS/2 keyboard on a system with only USB ports).
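On a running Linux system you can also see the kernel's resulting view of the physical address map by reading /proc/iomem (run as root to see full addresses); a throwaway sketch:

    #include <stdio.h>

    /* Dump the kernel's physical-address map; "System RAM" and
     * "reserved" entries reflect what the firmware reported (e820). */
    int main(void) {
        FILE *f = fopen("/proc/iomem", "r");
        if (!f) { perror("/proc/iomem"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }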
One way for your hypervisor to "hide" its 100MB from the guest OS is to add an entry to the system's e820 map. A quick and dirty way to get started is the Linux kernel command-line option "mem=" (e.g., mem=6044M to keep the top 100MB of a 6GB machine out of the guest's reach) or the Windows boot.ini / bcdedit flag "/maxmem".
There are a lot more details and things you are likely to encounter (e.g., x86 processors begin in 16-bit mode when first powered-up), but if you do a little homework on the ones listed here, then hopefully you will be in a better position to ask follow-up questions.