I have a few queries regarding DPDK's virtio driver when used on an ARM machine. We unbind the virtio-pci driver (of the guest kernel) from the virtio-net device emulated by QEMU, and then bind it to the VFIO-PCI driver (of the guest kernel), but I don't understand why we bind the virtio-net device to the VFIO-PCI driver.
In the normal case, VFIO is used to assign a device to a guest, but how does it work in the case of DPDK's virtio-pmd driver?
Also, is vhost-net necessary in the case of virtio-pmd, and how does it fit in there?
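For reference, the rebinding step described above is roughly the following (the PCI address is just an example for the guest's emulated virtio-net device, and this assumes DPDK's dpdk-devbind.py tool is available in the guest):

modprobe vfio-pci
# 0000:00:04.0 is a placeholder for whatever address lspci shows for the virtio-net device
dpdk-devbind.py --bind=vfio-pci 0000:00:04.0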
I want to demonstrate kernel exploitation on a Raspberry Pi, using QEMU for emulation. When I use vexpress-v2p-ca9.dtb it works and the kernel executes the userspace code, but when I try to use another DTB for the raspi machine, bcm2709-rpi-2-b.dtb, it does not work and there is no error message from the kernel; it just hangs before jumping to the userspace address.
I have disabled PAN in the kernel config.
I want the kernel with the raspi DTB to be able to execute userspace code.
You cannot simply pass a different DTB file to QEMU to cause it to emulate a different machine type. What controls the kind of machine that QEMU emulates is the '-machine' option. The DTB is just a file passed to the guest kernel to tell the guest kernel what it is running on. If this doesn't match what it's actually running on, then the kernel will crash in early bootup, usually without being able to print a message. All of these things need to match up:
the QEMU -machine command line option
the guest kernel (i.e. it needs to be built with support for the machine and its devices)
the guest DTB
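For illustration, a matching set might look like the following (the kernel image name and console argument are placeholders, not taken from your setup; the machine type is raspi2b on recent QEMU releases and was called raspi2 on older ones):

qemu-system-arm -M raspi2b \
    -kernel kernel7.img \
    -dtb bcm2709-rpi-2-b.dtb \
    -append "console=ttyAMA0" \
    -nographic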
I have an Intel 82599ES 10G NIC which supports Intel SR-IOV. I have successfully created 8 virtual functions (VFs) from it and assigned them to 2 qemu/kvm VMs (2 VFs per VM). Both of the VMs run DPDK applications (warp17 on one and my custom application on the other) using the assigned VFs. What I need to do is test my custom DPDK application by sending traffic through it using warp17. My test setup looks like this:
The red arrow represents the traffic path.
My physical NIC (PF) uses the DPDK poll mode driver (igb_uio). What I need to do is route traffic between the VFs as shown by the red arrows. I think https://doc.dpdk.org/guides/prog_guide/switch_representation.html explains the switching behavior, but I cannot understand it. warp17 and my custom DPDK application both work perfectly on physical hardware. What I am trying to do is virtualize my test setup to conserve resources. Has anyone tried such a configuration?
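For context, the VFs were created roughly along these lines while the PF was still bound to the kernel ixgbe driver (the interface name and PCI address below are placeholders on my side), before rebinding the PF to igb_uio:

echo 8 > /sys/class/net/enp3s0f0/device/sriov_numvfs
# each VF then appears as its own PCI function and is passed through to a VM, e.g.:
# qemu-system-x86_64 ... -device vfio-pci,host=0000:03:10.0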
Neither the X710 (Fortville) nor the 82599ES (Niantic) ASIC has an internal bridging or forwarding (VEB) feature out of the box. The best option is to use a software virtual switch such as SPP, OVS-DPDK, or a custom application to forward packets via virtio or tap.
If you still want to use the physical NIC (X710 or 82599ES), you will need a connection at the other end running logic to direct packets to the relevant VF (by modifying the destination MAC).
Edit-1: (as of DPDK 20.11) VEB (Virtual Ethernet Bridging) is an option, but specific NIC firmware and drivers are required to create the VEB on the PF and then propagate it to the VFs. Once that is done, the NIC can no longer receive packets from the outside world.
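A rough sketch of the software-switch route with OVS-DPDK (the bridge and port names, the PCI address, and the vhost-user ports are placeholders; this assumes OVS built with DPDK support):

# userspace (netdev) datapath so OVS forwards with DPDK
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# optional physical uplink (PCI address is a placeholder)
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:03:00.0
# one vhost-user port per VM; QEMU connects to the sockets these create
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser

The two VMs would then attach to the vhost-user sockets instead of the VFs, and OVS-DPDK takes care of forwarding between them.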
I'm considering using vfio instead of uio to access a PCI device from userspace code within a QEMU guest.
Can Linux running as a x86_64 QEMU guest use the vfio driver to make an emulated PCI device accessible to a userspace program running in the guest?
It's not clear to me because vfio appears to make heavy use of hardware virtualisation features (such as the IOMMU) and I'm not sure whether QEMU emulates these to the degree required to make this work.
Note that I'm not trying to pass through real PCI devices to the QEMU guest, which is what vfio is traditionally used for (by QEMU itself). Instead I am investigating whether vfio is a suitable alternative to uio within the context of the guest.
The question doesn't mention anything you may already have come across yourself regarding vfio support within the guest. That said, it would be useful to address this in the answer.
QEMU does provide VT-d emulation (a guest vIOMMU). However, enabling it requires that the Q35 platform type be selected. For example, one may enable the vIOMMU device in QEMU with the following options passed to the x86_64-softmmu/qemu-system-x86_64 application on start:
-machine q35,accel=kvm,kernel-irqchip=split -device intel-iommu,intremap=on
This will provide a means to bind a device within the guest to vfio-pci.
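For example, inside the guest the device could then be handed to vfio-pci roughly like this (the PCI address and vendor/device IDs are placeholders for whatever lspci -nn reports for your emulated device):

modprobe vfio-pci
# detach the device from its current kernel driver (address is a placeholder)
echo 0000:00:04.0 > /sys/bus/pci/devices/0000:00:04.0/driver/unbind
# tell vfio-pci to claim devices with this vendor:device ID (values are placeholders)
echo 8086 100e > /sys/bus/pci/drivers/vfio-pci/new_id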
More info can be found on the QEMU wiki: Features/VT-d.
If you did try following this approach and got stuck with a malfunction, it would be nice if you could shed some light on your precise observations.
I am confused about these two concepts: the Xen split driver model and paravirtualization. Are these two the same? Do you get the split driver model when Xen is running in fully virtualized mode?
Paravirtualization is the general concept of making modifications to the kernel of a guest Operating System to make it aware that it is running on virtual, rather than physical, hardware, and so exploit this for greater efficiency or performance or security or whatever. A paravirtualized kernel may not function on physical hardware at all, in a similar fashion to attempting to run an Operating System on incompatible hardware.
The Split Driver model is one technique for creating efficient virtual hardware. One device driver runs inside the guest Virtual Machine (aka domU) and communicates with another corresponding device driver inside the control domain Virtual Machine (aka dom0). This pair of co-designed device drivers functions together, and so can be considered to be a single "split" driver.
Examples of split device drivers are Xen's traditional block and network device drivers when running paravirtualized guests.
The situation is blurrier when running HVM guests. When you first install a guest Operating System within a HVM guest, it uses the OS's native device drivers that were designed for use with real physical hardware, and Xen and dom0 emulate those devices for the new guest. However, when you then install paravirtual drivers within the guest (these are the "tools" that you install in the guest on XenServer, or XenClient, and likely also on VMware, etc.) - well, then you're in a different configuration again. What you have there is a HVM guest, running a non-paravirtualized OS, but with paravirtual split device drivers.
So, to answer your question, when you're running in fully virtualized mode, you may or may not be using split device drivers -- it depends on whether or not they are actually installed to be used by the guest OS. Recent Linux kernels already include paravirtual drivers that can be active within a HVM domain.
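A quick way to see whether paravirtual frontend drivers are actually active inside a Linux HVM guest (the module names assume the usual Xen frontends; they may be built into the kernel rather than loaded as modules):

lsmod | grep -E 'xen_blkfront|xen_netfront'
# if the frontends are built in, the boot log still mentions them:
dmesg | grep -i xen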
As I understand it, they're closely related, though not exactly the same. Split drivers means that a driver in domU works by communicating with a corresponding driver in dom0. The communication is done via hypercalls that ask the Xen hypervisor to move data between domains. Paravirtualization means that a guest domain knows it's running under a hypervisor and talks to the hypervisor instead of trying to talk to real hardware, so a split driver is a paravirtualized driver, but paravirtualization is a broader concept.
Split drivers aren't used in an HVM domain because the guest OS uses its own normal drivers, which think they're talking to real hardware.
I have a VMware ESX server. I have Red Hat VMs running on that server. I need a way of programmatically testing whether I'm running in a VM. Ideally, I'd like to know how to do this from Perl.
See the answer to "Detect virtualized OS from an application?".
You shouldn't depend 100% on any method, as these are undocumented features/bugs - they work on some host OSes and some virtualization solutions, but there is no guarantee that they will continue working; indeed, the whole point of virtualization is to be as indistinguishable from real metal as possible. With this in mind, the blue pill / red pill technique (which is mentioned in the accepted answer to this similar question) seems to work ... for now.
VMware has a couple of SDKs, including an SDK for Perl.
I think (depending on the version of ESX) you can inspect the MAC address of the NIC. The NIC of a VM running in VMware will have a manufacturer string assigned to VMware, not the physical NIC's MAC. (We were trying to spoof the MAC in order to move a license server into a VM, and newer versions won't let you do it.) Also, this won't guarantee you aren't running on a physical box with a NIC spoofed to look like VMware, but that would be an odd thing to do in most circumstances anyway.
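A rough sketch of that check from a shell (the interface name is a placeholder, and the prefixes listed are the commonly cited VMware OUIs, so verify them against the IEEE registry before relying on this):

cat /sys/class/net/eth0/address
# VMware-assigned MACs typically start with one of these prefixes
ip link | grep -Ei 'ether (00:05:69|00:0c:29|00:1c:14|00:50:56)'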
Run the following command:
lspci | grep VMware
It should show something like this:
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)