Xen split driver model - virtualization

I am confused about these two concepts: the Xen split driver model and paravirtualization. Are these two the same? Do you get the split driver model when Xen is running in fully virtualized mode?

Paravirtualization is the general concept of modifying the kernel of a guest Operating System so that it is aware it is running on virtual, rather than physical, hardware, and can exploit this for greater efficiency, performance or security. A paravirtualized kernel may not function on physical hardware at all, much as an Operating System may fail to run on incompatible hardware.
The Split Driver model is one technique for creating efficient virtual hardware. One device driver runs inside the guest Virtual Machine (aka domU) and communicates with a corresponding device driver inside the control domain Virtual Machine (aka dom0). This pair of co-designed device drivers functions together, and so can be considered a single "split" driver.
Examples of split device drivers are Xen's traditional block and network device drivers when running paravirtualized guests.
The situation is blurrier when running HVM guests. When you first install a guest Operating System within a HVM guest, it uses the OS's native device drivers that were designed for use with real physical hardware, and Xen and dom0 emulate those devices for the new guest. However, when you then install paravirtual drivers within the guest (these are the "tools" that you install in the guest on XenServer, or XenClient, and likely also on VMware, etc.) - well, then you're in a different configuration again. What you have there is a HVM guest, running a non-paravirtualized OS, but with paravirtual split device drivers.
So, to answer your question, when you're running in fully virtualized mode, you may or may not be using split device drivers -- it depends on whether or not they are actually installed to be used by the guest OS. Recent Linux kernels already include paravirtual drivers that can be active within a HVM domain.
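If you want to see which case applies from inside a Linux guest, the following is a rough sketch (it assumes a reasonably recent kernel with sysfs; exact module names can vary by distribution):
cat /sys/hypervisor/type                     # prints "xen" when the kernel knows it is running under Xen
lsmod | grep -E 'xen_(blkfront|netfront)'    # frontend halves of the split block/network drivers, if loaded
If the frontend modules are loaded in an HVM guest, the guest is using the paravirtual split drivers rather than the emulated devices.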

As I understand it, they're closely related, though not exactly the same. Split drivers means that a driver in domU works by communicating with a corresponding driver in dom0; the communication is done via hypercalls that ask the Xen hypervisor to move data between domains. Paravirtualization means that a guest domain knows it's running under a hypervisor and talks to the hypervisor instead of trying to talk to real hardware. So a split driver is a paravirtualized driver, but paravirtualization is a broader concept.
Split drivers aren't used in an HVM domain because the guest OS uses its own normal drivers, which think they're talking to real hardware.
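To make the split more concrete, here is a hedged sketch of where the two halves show up on a typical Xen system (the device names are only examples, and the xl/xenstore tools must be installed in dom0):
ls /sys/bus/xen/devices                  # in domU: frontend devices such as vbd-51712 or vif-0
xl list                                  # in dom0: list domains and their IDs
xenstore-ls /local/domain/0/backend      # in dom0: the backend halves exposed to the guests
Each frontend entry in the guest corresponds to a backend entry under dom0's XenStore tree, which is exactly the pairing the "split driver" name refers to.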

Related

Query about Virtio PMD in DPDK

I have a few queries regarding DPDK's virtio driver when used on an ARM machine. We unbind the virtio-pci driver (of the guest kernel) from the virtio-net device emulated by QEMU, and then bind it to the VFIO-PCI driver (of the guest kernel), but I don't understand why we bind the virtio-net device to the VFIO-PCI driver.
Normally, VFIO is used to assign a device to a guest, but how does it work in the case of the virtio-pmd driver for DPDK?
Also, is it necessary to have vhost-net in the case of virtio-pmd, and how does it fit in there?

How is an ARM trusted OS different from a normal OS?

I am going through the ARM Trusted Firmware architecture from this link https://github.com/ARM-software/arm-trusted-firmware/blob/master/docs/firmware-design.rst and I am confused. I have the queries below:
What is the need for a trusted OS?
How is it different from a normal world OS?
If the trusted OS is secure, then why not use only the trusted OS and remove the normal OS?
From what threats does the trusted OS give security, and how?
When is the switch between the trusted OS and the normal world required?
I would suggest you first read the introductory article on Wikipedia, then the notice about OP-TEE, which is an implementation running on Cortex-A processors, now maintained by Linaro, then the various documentation available here, and finally look at the OP-TEE code itself.
Now, rather concise answers to your questions could be:
1) What is the need for a trusted OS?
The need is to prevent resources from being directly accessed from the generalist OS it runs concurrently with, for example by a user of the generalist OS having root privileges.
2) How is it different from a normal world OS?
It is small by design, and it runs with more privileged access to the hardware than the generalist operating system. On an ARMv8-A system, portions of the Trusted OS will run at EL3, while a hypervisor will run at EL2 and Linux at EL1.
3) If the trusted OS is secure, then why not use only the trusted OS and remove the normal OS?
Because of its limited scope: its purpose is not to replace, say, Linux, which has millions of lines of working, well-tested code, but to secure resources from Linux users at a small cost.
4) From what threats does the trusted OS give security, and how?
From attempts by generalist OS users, either legitimate or illegitimate (for example, hackers having compromised the generalist OS), to access resources or services protected by the Trusted OS.
5) When is the switch between the trusted OS and the normal world required?
It is required whenever some code running in the context of the generalist operating system needs to access a resource managed by the Trusted OS, for example having some encrypted content decrypted using a key which is only accessible to the Trusted OS. This does involve (I think) the use of the SMC instruction.
Another occurrence of such a switch is when a hardware interrupt needs to be handled: EL3, EL2 and EL1 have their own interrupt vector tables, and interrupts happening at EL2 or EL1 can be routed to EL3, so that interrupts may be safely handled in the context of the Trusted Execution Environment - thanks to artless noise for reminding me about this.
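To relate this to something observable, here is a minimal sketch of what the normal-world (Linux) side of an OP-TEE based Trusted OS looks like; it assumes the kernel TEE subsystem driver and the OP-TEE client packages (tee-supplicant, xtest) are installed:
ls /dev/tee*           # /dev/tee0 for client applications, /dev/teepriv0 for tee-supplicant
lsmod | grep optee     # the OP-TEE driver that issues the SMC calls into the secure world (may be built in)
tee-supplicant &       # userspace helper the Trusted OS calls back into for RPC services
xtest                  # OP-TEE's test suite; each test crosses into the secure world and back
Every call made through /dev/tee0 ultimately triggers the world switch described above via the SMC instruction.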

Does QEMU emulate enough features for vfio to work in the guest?

I'm considering using vfio instead of uio to access a PCI device from userspace code within a QEMU guest.
Can Linux running as a x86_64 QEMU guest use the vfio driver to make an emulated PCI device accessible to a userspace program running in the guest?
It's not clear to me because vfio appears to make heavy use of hardware virtualisation features (such as the IOMMU) and I'm not sure whether QEMU emulates these to the degree required to make this work.
Note that I'm not trying to pass through real PCI devices to the QEMU guest, which is what vfio is traditionally used for (by QEMU itself). Instead I am investigating whether vfio is a suitable alternative to uio within the context of the guest.
The question doesn't mention anything you may already have come across yourself regarding vfio support within the guest. That said, it seems useful to address it in an answer.
QEMU does provide VT-d emulation (a guest vIOMMU). However, enabling this requires that the Q35 platform type be selected. For example, one may enable the vIOMMU device in QEMU with the following options passed to the x86_64-softmmu/qemu-system-x86_64 application on start:
-machine q35,accel=kvm,kernel-irqchip=split -device intel-iommu,intremap=on
This will provide a means to bind a device within the guest to vfio-pci.
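As a sketch of the guest-side steps (the PCI address 0000:00:04.0 and the virtio-net vendor:device pair 1af4 1000 are only examples, and the guest kernel needs intel_iommu=on on its command line):
modprobe vfio-pci
echo 0000:00:04.0 > /sys/bus/pci/devices/0000:00:04.0/driver/unbind
echo 1af4 1000 > /sys/bus/pci/drivers/vfio-pci/new_id
ls /dev/vfio/          # an IOMMU group node should now be visible for userspace to open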
More info can be found on the QEMU wiki: Features/VT-d.
If you did try following this approach and ran into a malfunction, it would be nice if you could shed some light on your precise observations.

Do all server guest accounts have emulated operating systems?

If a physical server is running a particular operating system and a hypervisor on top, do all guest accounts have the same operating system as that server?
The hypervisor guests run their own operating systems.
Depending on how the hypervisor is implemented (you mention in your question that it runs "on top of" a particular host operating system), parts of what that guest sees as "hardware" may actually be emulated/wrapped by the host OS, but software running on the guest will not be aware of that (it only sees its own guest OS).

How can I tell if I'm running in a VMware virtual machine (from Linux)?

I have a VMware ESX server. I have Red Hat VMs running on that server. I need a way of programmatically testing whether I'm running in a VM. Ideally, I'd like to know how to do this from Perl.
See the answer to "Detect virtualized OS from an application?".
You shouldn't depend 100% on any method, as they rely on undocumented features/bugs - they work on some host OSes and some virtualization solutions, but there is no guarantee that they will keep working; indeed, the whole point of virtualization is to be as indistinguishable from real metal as possible. With this in mind, the blue pill/red pill technique (which is mentioned in the accepted answer to this similar question) seems to work... for now.
VMware has a couple of SDKs, including an SDK for Perl.
I think (depending on the version of ESX) you can inspect the MAC address of the NIC. VMs running in VMware will have a NIC with a manufacturer prefix assigned to VMware, not the physical NIC's MAC. (We were trying to spoof the MAC in a VM for a license server, and newer versions won't let you do it.) Also, this won't guarantee you aren't running on a physical box with a NIC spoofed to look like VMware, but that would be an odd thing to do in most circumstances anyway.
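As a rough illustration of that check (eth0 is only an example interface name, and the list of VMware prefixes is not exhaustive):
cat /sys/class/net/eth0/address | cut -d: -f1-3    # VMware commonly uses 00:50:56, 00:0c:29 or 00:05:69
Keep in mind this is purely heuristic, since MAC addresses can be changed.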
Run the following command:
lspci | grep VMware
It should show something like this:
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
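If you prefer not to parse lspci output, a couple of other common heuristics are sketched below (dmidecode needs root, and systemd-detect-virt assumes a systemd-based distribution):
dmidecode -s system-product-name     # typically reports "VMware Virtual Platform" on VMware guests
systemd-detect-virt                  # prints e.g. "vmware" or "kvm", and "none" on bare metal
Either output is easy to read from Perl with backticks or open().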