Is PCI 3.0 compatible with PCI 2.1?

I upgraded my cPCI CPU board to an ADLINK-3970 (PCI 3.0). The machine boots into Windows and all the drivers are installed correctly, including my PCI 2.1 device's driver (meaning the CPU was able to read the ROM over the PCI bus). However, when I try reading data from my PCI 2.1 device, all the registers read 0. Are these two boards not compatible?
More info:
I've read that they should be compatible, and the electrical/mechanical specs indicate that they are. I've also tried swapping in another CPU board, with the same results. The only difference on the upgraded board is that the CPU uses a PCIe-to-PCI bridge to communicate on the PCI bus. I'm wondering if that's the issue.
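For reference, if the same chassis can boot a Linux live image, one quick sanity check is whether the device's BARs were actually assigned behind the bridge; the device address below is hypothetical:

# show the bus topology, including the PCIe-to-PCI bridge
lspci -tv
# check whether BARs are assigned and memory/I/O decoding is
# enabled for the device (hypothetical address 05:00.0)
lspci -vv -s 05:00.0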

PCIe is backward compatible: a PCIe 3.0 card will work in a PCIe 2.0 slot.

Related

Trying to run a Raspberry Pi image under QEMU, but VM memory is limited to 256MB

I want to build a time-consuming package (mediapipe) on my Raspberry Pi Buster image under QEMU. So far, I've gotten the image to load and run (including with network connectivity); however, I'm limited to 256MB of memory, which just isn't enough to do much, especially build mediapipe. Can someone explain why Raspbian images running under QEMU seem to be limited to 256MB?
I've seen some posts about people running with 512MB and even one with 1GB, but they don't seem to be very successful. Can anyone explain the reason for the restriction, and a potential fix?
The problem here is that a lot of people claim to be running "Raspberry Pi emulation in QEMU" when they're actually just running the Raspbian userspace on top of a kernel for a different emulated machine. So it's easy to be confused if you look at several different tutorials that are really describing entirely different emulation setups. Look at what machine type they pass to QEMU.
The "versatilepb" machine type gets used in a lot of tutorials, especially older ones, because it has been in QEMU a long time and it is possible to get it to work with the 1176 CPU that the classic Raspberry Pi boards used. This specific machine has a 256MB maximum memory size, because the real hardware it's emulating has that restriction (it's imposed by the way the physical memory address space is designed). This machine type will never be able to support more RAM, so if you need more then you should ignore any tutorial or setup that uses it.
More recent versions of QEMU really do emulate the actual Raspberry Pi hardware; these are the raspi0, raspi1ap, raspi2b, raspi3ap and raspi3b machine types. These have the same amount of RAM as the real Raspberry Pi hardware they emulate (either 512MB or 1GB). The downside of these board models is that some of the device emulation is lacking features, so an older QEMU will often not correctly boot a newer kernel, and sometimes devices you would like to use are not present. Also, because the Raspberry Pi boards hang their ethernet device off the USB controller, the only way to get ethernet on these QEMU models is to use a USB ethernet device, e.g. with:
-device usb-net,netdev=eth0 -netdev user,id=eth0
This probably needs a recent QEMU version to get a working USB controller.
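Putting it together, a minimal sketch of a raspi3b invocation; the kernel, device-tree and image file names below are assumptions, taken from what a Raspbian image's boot partition typically contains:

# boot a Raspbian image on the raspi3b board model
# (RAM is fixed at 1GB by the hardware model)
qemu-system-aarch64 -M raspi3b \
  -kernel kernel8.img -dtb bcm2710-rpi-3-b.dtb \
  -drive file=raspbian.img,format=raw,if=sd \
  -append "rw root=/dev/mmcblk0p2 console=ttyAMA0" \
  -device usb-net,netdev=eth0 -netdev user,id=eth0 \
  -nographic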
I don't know if there are any tutorials/recipes for running Raspbian on top of the QEMU "virt" board. If there are, this would probably be the best experience, because the virt board permits lots of memory, PCI devices, virtio devices, and is well maintained.
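If you do go down that route, a rough sketch of what a virt invocation might look like; the kernel and initrd names here are assumptions, and they must be a generic arm64 kernel with virtio drivers, not the kernel from the Raspbian boot partition:

# virt board: flexible RAM size, virtio disk and network
qemu-system-aarch64 -M virt -cpu cortex-a53 -m 4G -smp 4 \
  -kernel vmlinuz -initrd initrd.img \
  -append "root=/dev/vda2 console=ttyAMA0" \
  -drive file=raspbian.img,format=raw,if=virtio \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
  -nographic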

Enabling Intel SGX in BIOS

I want to test the Intel SGX technology on my Lenovo Tower S510 10L3-000JFM. I checked via https://github.com/ayeks/SGX-hardware that my Intel Core i7-6700 CPU supports SGX, but the BIOS does not support it, or it may simply not be enabled in the BIOS. A BIOS update could fix this. However, the recent BIOS update from Lenovo at https://pcsupport.lenovo.com/us/en/products/desktops-and-all-in-ones/lenovo-s-series-all-in-ones/s510-desktop/10kw/downloads/ds112505 does not state this explicitly, and I do not want to proceed with this risky operation without being sure.
My question is: is this BIOS update supporting Intel SGX? Or not?
Any help or resources are welcomed.
Last BIOS update is on 01/09/2016 and last CPU microcode update is on 07/01/2016.
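For reference, the hardware check from the repository linked above boils down to compiling and running a small CPUID-based test program; this assumes gcc is available:

# build and run the CPUID-based SGX support check
git clone https://github.com/ayeks/SGX-hardware
cd SGX-hardware
gcc test-sgx.c -o test-sgx
./test-sgx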
According to a Lenovo BIOS engineer, the BIOS for this computer model does not support Intel SGX, and there is no plan to add it in the future.
The Linux kernel does not handle Intel SGX transparently; an application has to be written specifically for Intel SGX to use it.
If you just want to write code for Intel SGX, you can use the simulation mode provided in the SGX SDK to write code and test it out. You won't be able to use remote attestation (or local attestation), as those require access to the hardware. Apart from that, everything should work fine.
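As a sketch, building one of the SDK's sample enclaves in simulation mode looks roughly like this, assuming the Linux SGX SDK is installed under /opt/intel/sgxsdk:

# set up the SDK environment, then build and run a sample
# enclave in simulation mode (no SGX hardware required)
source /opt/intel/sgxsdk/environment
cd /opt/intel/sgxsdk/SampleCode/SampleEnclave
make SGX_MODE=SIM
./app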

PowerPC 970 Based Macs, Why Is Hypervisor Mode Unavailable?

I recently acquired an Apple G5 computer (PowerPC 970) and am interested in learning more about the PowerPC architecture (most of my systems programming knowledge comes from x86 and my own hobby kernel).
After using the computer a while and getting used to PowerPC assembly (a RISC ISA), I noticed that low-level CPU virtualization is not possible on PowerPC 970 based Macs. The CPU's documentation (PowerPC 64) suggests it supports hypervisor mode, but it has been noted that using it is not possible, supposedly due to Open Firmware.
Do all operating systems loaded from Open Firmware on PowerPC 970 series Macs run in hypervisor mode, making "nested" virtualization impossible? If this is true, why does Open Firmware load all operating systems in hypervisor mode? Is this in order to provide a secure layer for communication between the operating system and Open Firmware (on x86, using the firmware for everything except ACPI and memory discovery during boot, which requires a transition into "real mode", is considered unsafe)?
Also, if the operating system were using hypercalls to facilitate a secure transition to firmware-based routines, wouldn't this impose a large penalty, just as syscalls do?
I'm not privy to Apple's hardware designs, but I've heard that HV mode (i.e., HV=1 in the Machine State Register) was disabled, in hardware, on the CPUs used in the G5 machines.
If this is the case, then it's not up to the system firmware to enable/disable HV mode - it's simply not available.
At the time that these machines were available, other Power hardware designs had a small amount of firmware running in HV=1 mode, and only exposed HV=0 to the kernel. However, the G5 wasn't one of these.

Understanding a webcam's Linux device drivers

As far as I know, a device driver is a piece of software that is able to communicate with a particular type of device attached to a computer.
In the case of a USB webcam, the responsible driver is uvcvideo, which supports any UVC-compliant device. This means it enables the OS or other programs to access the hardware's functions without needing to know the precise details of the hardware being used.
For this reason, I installed the UVC Linux device driver by running:
opkg install kernel-module-uvcvideo
The webcam was recognised by the Linux kernel as /dev/video0. However, I still wasn't able to stream video with FFmpeg, as the V4L2 API was missing. I installed V4L2 by configuring the kernel.
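As a quick sanity check that the driver is exposing the camera through V4L2, something like the following should work, assuming the v4l-utils package is available:

# query the driver name and capabilities behind /dev/video0
v4l2-ctl -d /dev/video0 --info
# list the pixel formats and resolutions the camera offers
v4l2-ctl -d /dev/video0 --list-formats-ext
# capture a few seconds through FFmpeg's V4L2 input
ffmpeg -f v4l2 -i /dev/video0 -t 5 test.mkv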
My queries are:
How are the UVC driver and V4L2 linked together?
What is the purpose of the V4L2 API?
If I hadn't installed UVC first, would it have been installed along with V4L2?
LinuxTV states: "The uvcvideo driver implementation is adherent only to the V4L2 API." Does this mean that UVC is part of the V4L2 API?

Allow guest OS to access graphics adapter directly

Modern hardware-assisted desktop virtualization products (like VMware Workstation or VirtualBox) normally provide the guest OS with a virtual graphics adapter that has limited functionality.
Is it possible to switch the adapters, i.e. provide the guest OS with direct access to the real graphics adapter, and assign a virtual graphics adapter to the host OS? Is there any software with this functionality? If not, would it be possible to develop such a system? Let's assume we only have a single guest OS.
It should be possible soon using VGA passthrough as implemented by Xen 4 (unstable branch for now):
Quoting the Xen FAQ:
"Xen 4.0.0 is the first version to support VGA graphics adapter passthrough to Xen HVM (fully virtualized) guests. This means you can give HVM guest full and direct control of the graphics adapter, making it possible to have high performance full 3D and video acceleration in a virtual machine"
"Xen VGA passthrough requires IOMMU (Intel VT-d) support from the motherboard chipset, from the motherboard BIOS and from Xen."
Note that only a few motherboards support an IOMMU for now. See the FAQ for more info.
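As a rough sketch of how this gets wired up (the PCI address 01:00.0 below is hypothetical, and the exact commands depend on the Xen version):

# check that the kernel found an IOMMU (VT-d shows up as DMAR)
dmesg | grep -i -e dmar -e iommu
# make the GPU assignable to guests
xl pci-assignable-add 01:00.0
# then, in the HVM guest's config file:
#   gfx_passthru = 1
#   pci = [ "01:00.0" ]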
I/O hardware virtualization, especially for graphics cards, is done using a technology called an IOMMU.
AMD has published a specification for IOMMU technology in the HyperTransport architecture. Intel has published a specification for IOMMU technology as Virtualization Technology for Directed I/O, abbreviated VT-d.
With virtualization, guest operating systems can use hardware that is not specifically made for virtualization. An example for IOMMU is Graphics Address Remapping Table (GART) used by AGP and PCI Express graphics cards. Higher performance hardware such as graphics cards use DMA to access memory directly; in a virtual environment all the memory addresses are remapped by the virtual machine software, which causes DMA devices to fail. The IOMMU handles this remapping, allowing for the native device drivers to be used in a guest operating system.
Most virtualization software supports hardware acceleration for OpenGL, and some, such as VMware, provide experimental Direct3D acceleration. Products from VMware, Citrix, VirtualBox, etc. provide hardware acceleration.
What processor?
This is the idea behind I/O virtualization (Intel's implementation is called VT-d). You need CPU support to allow the guest direct access to the video hardware while blocking it from stomping on other resources, such as the disk system.