Load device drivers into kernel

In the Linux operating system, device drivers are loaded into the kernel along with the operating system. Sometimes these drivers crash under certain circumstances, and the operating system stops responding or restarts. To avoid these situations, the first approach is to build the drivers into the core kernel. Alternatively, the drivers can be loaded into the kernel as a separate process.
Which method should be used to avoid such problems, and why?
The 1st or the 2nd?

In the Linux kernel there is no such thing as a "separate process" for a driver. Drivers operate in the same address space as the kernel core and share threads with it.
A situation where a crash in a device driver takes down the entire OS is unavoidable in the vanilla Linux kernel.
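To make that concrete: Linux drivers are typically packaged as loadable kernel modules. Below is a minimal sketch of one (the module name and messages are invented for illustration). Once loaded with insmod, its code runs in the kernel's own address space, so any bug in it is a kernel bug:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init demo_init(void)
    {
            pr_info("demo: running in the kernel's address space\n");
            /* A bug here -- say, a NULL pointer dereference -- would
             * oops or panic the whole kernel; there is no isolation. */
            return 0;
    }

    static void __exit demo_exit(void)
    {
            pr_info("demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);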

Are emulation and hardware-assisted virtualization synonyms?

What is the distinction between emulation and Full Virtualization, also called Hardware-assisted Virtualization (HVM)?
From this source, it is not clear what the relationship is.
Full Virtualization or Hardware-assisted Virtualization (HVM) uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU to emulate PC hardware, including BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc. Virtualization hardware extensions are used to boost performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as a Xen Project HVM guest. Fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.
Source: Xen Project Wiki
In the following book these terms are considered synonymous.
At one extreme you have full virtualization, or emulation, in which the virtual machine is a software simulation of hardware, real or fictional — as long as there’s a driver, it doesn’t matter much. Products in this category include VMware and QEMU.
Source: The book of Xen
Following are excerpts from an article describing the actual difference between emulation and HVM. However, the only distinction I can see is that virtualization makes it possible to create more than one computing environment.
If emulation takes such a toll, why bother? Because we might want to do one of the following:
Run an OS on a hardware platform for which it was not designed.
Run an application on a device other than the one it was developed for (e.g., run a Windows program on a Mac).
Read data that was written onto storage media by a device we no longer have or that no longer works.
Source: Russell Kay
Virtual machines offer the following advantages:
They're compatible with all Intel x86 computers.
They're isolated from one another, just as if they were physically separate.
Each is a complete, encapsulated computing environment.
They're essentially independent of the underlying hardware.
They're created using existing hardware.
Source: Russell Kay
There is another article, which only supports my hypothesis.
Emulation, in short, involves making one system imitate another. For example, if a piece of software runs on system A and not on system B, we make system B “emulate” the working of system A. The software then runs on an emulation of system A.
In this same example, virtualization would involve taking system A and splitting it into two servers, B and C.
So let's consider B = C, and we have emulation, don't we?
Please note that virtualization is achieved by emulating the hardware components (network adapters, USB, hard disk, CD drives, etc.) in software. Thus emulation actually helps achieve virtualization.
Full virtualization is the technique of virtualization in which the guest OS runs unmodified; that is, the guest is not aware of whether it is running in a virtual machine environment or on a physical machine. Initially, binary translation of the guest code was used to achieve full virtualization, but it wasn't good from a performance perspective.
Paravirtualization is a technique which requires modifications in the guest operating system in order to gain better performance.
Hardware-assisted virtualization is a full virtualization technique, since the guest operating system runs unmodified. It is called hardware-assisted because it utilizes virtualization-specific extensions in the host hardware, like Intel VT-x, AMD-V, etc. This technique not only offers full virtualization (the guest OS does not require modification) but also has performance benefits, and major vendors like Intel and AMD provide extensions in hardware to support virtualization.
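As a hands-on aside, on x86 these extensions are advertised through the CPUID instruction, so an ordinary userspace program can check for them. A minimal sketch in C, using GCC/Clang's <cpuid.h> helper (bit positions per the Intel and AMD manuals: VMX is CPUID.1:ECX bit 5, SVM is CPUID.80000001h:ECX bit 2):

    #include <stdio.h>
    #include <cpuid.h>  /* GCC/Clang wrapper around the CPUID instruction */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("Intel VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

        /* AMD-V: CPUID leaf 0x80000001, ECX bit 2 (SVM). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            printf("AMD-V (SVM): %s\n", (ecx & (1u << 2)) ? "yes" : "no");

        return 0;
    }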

PowerPC 970 Based Macs, Why Is Hypervisor Mode Unavailable?

I recently acquired an Apple G5 computer (PPC 970) and am interested in learning more about the PowerPC architecture (most of my systems programming knowledge comes from x86 and my own hobby kernel).
After using the computer a while and getting used to PowerPC assembly (RISC), I noticed that low-level CPU virtualization is not possible on PowerPC 970 based Macs. The CPU documentation (PowerPC 64) suggests it supports hypervisor mode, but it has been noted that this is not possible due to Open Firmware.
Do all operating systems which are loaded from Open Firmware on PowerPC 970 series Macs load in hypervisor mode, making "nested" virtualization impossible? If this is true, why does Open Firmware load all operating systems in hypervisor mode? Is this in order to provide a secure layer for communication between the operating system and Open Firmware? (On x86, using firmware for everything except ACPI and memory discovery during boot, which requires a transition into "real mode", is considered unsafe.)
Also, if the operating system were using hypercalls to facilitate a secure transition to firmware-based routines, wouldn't this impose a large penalty, just as syscalls do?
I'm not privy to Apple's hardware designs, but I've heard that HV mode (i.e., HV=1 in the Machine State Register) was disabled, in hardware, on the CPUs used in the G5 machines.
If this is the case, then it's not up to the system firmware to enable/disable HV mode; it's simply not available.
At the time these machines were available, other Power hardware designs had a small amount of firmware running in HV=1 mode, and only exposed HV=0 to the kernel. However, the G5 wasn't one of these.
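For readers who want to poke at this themselves: HV is bit 3 of the 64-bit MSR in IBM's MSB-first numbering, i.e. bit 60 counting from the least significant bit. Below is a hedged, untested sketch of reading it on a 64-bit PowerPC; note that mfmsr is a privileged instruction, so this only works in kernel context (in user space it raises a privileged-instruction exception):

    #include <stdio.h>

    int main(void)
    {
        unsigned long msr;

        /* mfmsr copies the Machine State Register into a GPR. */
        __asm__ volatile("mfmsr %0" : "=r"(msr));

        /* HV is MSR bit 3 (MSB-first numbering), i.e. mask 1UL << 60. */
        printf("HV = %lu\n", (msr >> 60) & 1UL);
        return 0;
    }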

What is the difference between Full, Para and Hardware assisted virtualization?

I am going through the topic of virtualization and am totally stuck on the basic concept. Wikipedia provides some relevant information, but it is not good enough for me to understand the basic idea. The concept can probably be stated in two or three lines, but I can't find such a summary on the net or in a book.
I would be pleased if someone gave me a basic understanding of these three types. I am well aware of virtualization in general and understand it well, but these three types...
Paravirtualization is virtualization in which the guest operating system (the one being virtualized) is aware that it is a guest, and accordingly has drivers that, instead of issuing hardware commands, simply issue commands directly to the host operating system. This includes memory and thread management as well, which usually require privileged processor instructions that are unavailable to the guest.
Full virtualization is virtualization in which the guest operating system is unaware that it is in a virtualized environment, and therefore hardware is virtualized by the host operating system so that the guest can issue commands to what it thinks is actual hardware, but which is really just simulated hardware created by the host.
Hardware-assisted virtualization is a type of full virtualization where the microprocessor architecture has special instructions to aid the virtualization of hardware. These instructions might allow a virtual context to be set up so that the guest can execute privileged instructions directly on the processor without affecting the host; the software that manages these virtual contexts is called a hypervisor. If such instructions do not exist, full virtualization is still possible, but it must be done via software techniques such as dynamic recompilation, where the host recompiles privileged guest instructions on the fly so that they run in a non-privileged way on the host.
There is also a combination of paravirtualization and full virtualization called hybrid virtualization, where parts of the guest operating system use paravirtualization for certain hardware drivers and the host uses full virtualization for other features. This often produces superior performance on the guest without the need for the guest to be completely paravirtualized. An example: the guest uses full virtualization for privileged instructions in the kernel but paravirtualization for I/O requests, using a special driver in the guest. This way the guest operating system does not need to be fully paravirtualized (which is sometimes not even available), but can still enjoy some paravirtualized features by implementing special drivers for the guest.
In the case of hardware-assisted virtualization, the virtualization support is designed in: the instruction set provides instructions for partitioning the host (see Intel's VT-x technology as an example), so the hypervisor works directly with the hardware, without needing any operating system to access it, and provides full virtualization.
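Incidentally, even a fully virtualized guest that "runs unmodified" can still ask whether it is virtualized: x86 hypervisors set the "hypervisor present" bit (CPUID leaf 1, ECX bit 31, always 0 on bare metal) and publish a vendor signature at leaf 0x40000000. A minimal sketch in C (the signature strings in the comment, such as "KVMKVMKVM", are the well-known vendor IDs):

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char sig[13] = {0};

        /* CPUID leaf 1, ECX bit 31: "hypervisor present". */
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        if (!(ecx & (1u << 31))) {
            puts("no hypervisor detected");
            return 0;
        }

        /* Leaf 0x40000000 returns a 12-byte vendor signature in
         * EBX:ECX:EDX, e.g. "KVMKVMKVM" or "VMwareVMware". */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(sig, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        printf("hypervisor signature: %s\n", sig);
        return 0;
    }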

Hypervisors and the Java Virtual Machine

The questions I would like to ask are:
1) What exactly does a hypervisor do? Why is it needed?
2) What is the difference between a hypervisor and the Java Virtual Machine?
3) Does the JVM use a hypervisor?
4) When a host operating system like Linux can handle multiple guest operating systems, why use a hypervisor?
It would be a great help if someone shed light on this.
A hypervisor, also known as hardware virtualization, is a virtualization layer that allows running one or more native operating systems on top of it, as if they ran on a physical machine. It is similar to emulation, but it only runs operating systems that would also be able to run without the hypervisor, which is much faster.
Both are virtualization layers. However, Java is optimized for performance and portability. While Java is technically an emulator, it is much faster than a hypervisor. This is possible because the emulated platform is designed for fast emulation. Java does not run x86 or x86_64/amd64 code; it runs something called bytecode, whose technical term is Intermediate Language (IL). The bytecode is compiled to code native to your processor when you run it, by the Just-In-Time compiler (JIT). Because the JIT performs a compilation step, it can ensure that the program follows Java's security constraints, simply by not generating code that violates them. A hypervisor enforces security constraints by intercepting so-called privileged instructions and by emulating devices such as disk drives. It does this because native x86 or x86_64/amd64 code is very hard for a program to understand, and rewriting it so that it self-enforces security constraints is next to impossible. Java, on the other hand, runs bytecode, which is easy for a program to understand and rewrite so that it self-enforces security rules.
The short answer: a hypervisor is slower than Java but allows you to run a multitude of complete operating systems, and all the software available for them, while Java is faster but can only run Java software. If you want to run Windows and Office in your virtual machine, you can't do that in Java.
I think I answered this above, but no: the JVM uses code inspection and modifies the program so that it self-enforces security rules. This is possible because runnable Java applications are in an intermediate form called bytecode, which is easy for the JVM to understand, inspect, search for rule-violating code, and modify to obey the rules. This is a rather complex process that has several advantages over a hypervisor. The first advantage is "compile once, run everywhere", since Java is compiled and distributed as bytecode. The second is speed: JIT-compiled code runs at the same speed as non-virtualized code even while strict security is enforced. The disadvantage is that only bytecode programs can run, so you cannot, for example, run Windows or Linux inside the virtual machine.
If you are running another operating system, like Windows or another Linux distribution, you are running a hypervisor. KVM, Xen and VirtualBox are examples of hypervisors. You can also run multiple instances of Linux with one shared kernel, known as OS-based virtualization or "containers". But a container shares the kernel, so you can only create virtual machines with the OS you are already running. The advantage of containers is that they are more lightweight, since you do not need to run multiple kernels on top of each other.
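To illustrate the container point: on Linux, OS-level virtualization is built from namespaces on the one shared kernel. A minimal sketch (Linux-only; needs root or CAP_SYS_ADMIN; the hostnames are invented for illustration) in which a child process gets its own UTS namespace, so it can change its hostname without the host seeing it:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];

    static int child(void *arg)
    {
        (void)arg;
        /* Only affects the child's UTS namespace. */
        sethostname("container", strlen("container"));
        char name[64];
        gethostname(name, sizeof(name));
        printf("child sees hostname: %s\n", name);
        return 0;
    }

    int main(void)
    {
        /* CLONE_NEWUTS: new hostname namespace, same shared kernel. */
        pid_t pid = clone(child, child_stack + sizeof(child_stack),
                          CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid < 0) { perror("clone"); return 1; }
        waitpid(pid, NULL, 0);

        char name[64];
        gethostname(name, sizeof(name));
        printf("host still sees: %s\n", name);  /* unchanged */
        return 0;
    }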
A hypervisor, or virtual machine manager, is a program that allows multiple operating systems to share a single hardware host.
The JVM, or Java Virtual Machine, interprets bytecode for a computer's processor so that it can perform Java program instructions.
No, the JVM does not use a hypervisor, as it is not a virtual machine that runs an OS; it is just an interpreter.
A host operating system manages different VMs using a hypervisor or virtual machine manager.
Before answering your questions, I would recommend you search for the related entries on Wikipedia. A hypervisor is used to run multiple guest OSes, while the JVM is used to interpret Java bytecode. The JVM runs on top of an OS and doesn't care whether that OS runs on bare metal or on a hypervisor. Actually, Linux can handle multiple guest operating systems with KVM, which is part of the Linux kernel. So the premise of the last question is wrong.
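Since KVM came up: it is exposed to userspace as the /dev/kvm device, and a virtual machine monitor such as QEMU drives it through ioctl() calls. A minimal, hedged sketch of the first steps (requires read/write access to /dev/kvm):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        /* The KVM API version has been 12 since Linux 2.6.22. */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        /* A real VMM would now create a VM, set up guest memory,
         * and create vCPUs before entering the run loop. */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        printf("created VM fd: %d\n", vm);

        close(vm);
        close(kvm);
        return 0;
    }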

How can a program compiled to machine language run on different machines?

In school we've been taught that compilers compile a computer program to machine language. We've also been taught that the machine language consists of direct instructions to the hardware. Then how can the same compiled program run on several computer configurations with different hardware?
It depends what you mean by "different hardware". If it is the same processor (or the same family, e.g. Intel x86), then the machine-code instructions are the same.
If the extra hardware consists of different peripherals (screens, disks, printers, etc.), then the operating system hides those details by giving you a consistent set of instructions to drive them.
If you mean, how can you run a program for an ARM CPU on an Intel x86, then you can't, except through some sort of virtual machine emulator that reads each ARM instruction and either translates it into x86 or runs the same functionality as a set of x86 functions, then returns the same answer the ARM instructions would have.
Edit: I assume you mean PCs with different hardware, i.e. different peripherals but the same processor family?
Talking to hardware doesn't involve specific instructions as such; it's mostly a matter of moving data to specific memory locations that the operating system and/or device driver have reserved for data going to that device. In the old days of DOS and the BIOS, you would then trigger an interrupt to call a specific bit of code in the BIOS to act on that data and send it to the hardware.
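As a concrete example of "moving data to a reserved memory location": in DOS-era text mode, the VGA adapter watches physical address 0xB8000, so printing a character is literally a memory write. A hedged sketch (bare metal or DOS only; a modern protected-mode OS would fault on this access):

    #include <stdint.h>

    /* Write character c at (row, col) on an 80-column text screen. */
    void vga_putc(int row, int col, char c)
    {
        volatile uint16_t *vga = (volatile uint16_t *)0xB8000;
        /* Low byte: ASCII code. High byte: color attribute
         * (0x07 = light grey on black). */
        vga[row * 80 + col] = (uint16_t)((0x07 << 8) | (uint8_t)c);
    }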
With an emulator or a virtual machine, either of which effectively translates the machine language on the fly.
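Here is a minimal sketch of what that looks like in its simplest form, pure interpretation: an invented toy instruction set with a fetch-decode-execute loop that performs each guest instruction using host code (real emulators such as QEMU go further and recompile hot code on the fly):

    #include <stdint.h>
    #include <stdio.h>

    /* Invented opcodes for a one-register toy machine. */
    enum { OP_LOAD, OP_ADD, OP_PRINT, OP_HALT };

    /* A toy "guest program": load 2, add 3, print, halt. */
    static const uint8_t program[] = { OP_LOAD, 2, OP_ADD, 3, OP_PRINT, OP_HALT };

    int main(void)
    {
        int acc = 0;                 /* emulated accumulator     */
        const uint8_t *pc = program; /* emulated program counter */

        for (;;) {
            switch (*pc++) {                    /* fetch + decode */
            case OP_LOAD:  acc  = *pc++;  break;
            case OP_ADD:   acc += *pc++;  break;
            case OP_PRINT: printf("%d\n", acc); break; /* prints 5 */
            case OP_HALT:  return 0;
            }
        }
    }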
I think it is more accurate to say that native compilers compile to a specific instruction set of a processor. Since there are families of processors that keep backwards compatibility (8086, 80386, 80486, 80586, Dual Core, Quad Core, ...), each processor runs the instructions of its ancestors. If you want to port your code across processor architectures, then you definitely need a virtual machine or emulator, as was mentioned previously.