Security feature in raspi2 kernel - raspberry-pi

I want to demonstrate kernel exploitation on a Raspberry Pi, using QEMU for emulation. When I use vexpress-v2p-ca9.dtb it works: the kernel wants to execute the userspace code. But when I try to use another DTB for the raspi machine, bcm2709-rpi-2-b.dtb, it won't work and there is no error message from the kernel; it just hangs before it jumps to the userspace address.
I have unable PAN in kernel config.
I want the kernel with the raspi DTB to be able to execute userspace code.

You cannot simply pass a different DTB file to QEMU to cause it to emulate a different machine type. What controls the kind of machine that QEMU emulates is the '-machine' option. The DTB is just a file passed to the guest kernel to tell the guest kernel what it is running on. If this doesn't match what it's actually running on, then the kernel will crash in early bootup, usually without being able to print a message. All of these things need to match up:
the QEMU -machine command line option
the guest kernel (i.e. it needs to be built with support for the machine and its devices)
the guest DTB
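As an added illustration of the answer's point (this is not code from the question or the answer), the script below reads the root "compatible" strings out of a flattened device tree and maps them to the QEMU -machine they belong to, so a DTB/machine mismatch can be caught before the guest silently hangs. The compatible-to-machine table and the build_dtb helper that fabricates a minimal DTB for offline testing are my own assumptions.

```python
import struct
FDT_MAGIC = 0xD00DFEED
FDT_BEGIN_NODE, FDT_END_NODE, FDT_PROP, FDT_END = 1, 2, 3, 9
# Root "compatible" string -> QEMU -machine (assumed; confirm with `qemu-system-arm -M help`)
QEMU_MACHINE_FOR = {
    "brcm,bcm2836": "raspi2b",              # bcm2709-rpi-2-b.dtb
    "arm,vexpress,v2p-ca9": "vexpress-a9",  # vexpress-v2p-ca9.dtb
}

def root_compatible(dtb):
    """Walk the FDT structure block and return the root node's 'compatible' strings."""
    magic, _total, off_struct, off_strings = struct.unpack_from(">4I", dtb, 0)
    assert magic == FDT_MAGIC, "not a flattened device tree (bad magic)"
    pos, depth = off_struct, 0
    while True:
        (tok,) = struct.unpack_from(">I", dtb, pos)
        pos += 4
        if tok == FDT_BEGIN_NODE:                 # NUL-terminated name, 4-byte aligned
            pos = (dtb.index(b"\0", pos) + 1 + 3) & ~3
            depth += 1
        elif tok == FDT_PROP:                     # u32 len, u32 nameoff, value, aligned
            length, nameoff = struct.unpack_from(">2I", dtb, pos)
            pos += 8
            name = dtb[off_strings + nameoff:dtb.index(b"\0", off_strings + nameoff)]
            if depth == 1 and name == b"compatible":
                return [s.decode() for s in dtb[pos:pos + length].split(b"\0") if s]
            pos = (pos + length + 3) & ~3
        elif tok == FDT_END_NODE:
            depth -= 1
        elif tok == FDT_END:                      # FDT_NOP (4) carries no payload
            return []

def expected_machine(dtb):
    """QEMU -machine matching this DTB, or None when its compatible is not in the table."""
    compat = root_compatible(dtb)
    return next((QEMU_MACHINE_FOR[c] for c in compat if c in QEMU_MACHINE_FOR), None)

def build_dtb(compatible):
    """Fabricate a minimal DTB with only a root 'compatible' property (constructed test input)."""
    pad = lambda b: b + b"\0" * (-len(b) % 4)
    value = b"".join(s.encode() + b"\0" for s in compatible)
    strings = b"compatible\0"
    structure = (struct.pack(">I", FDT_BEGIN_NODE) + pad(b"\0")
                 + struct.pack(">3I", FDT_PROP, len(value), 0) + pad(value)
                 + struct.pack(">2I", FDT_END_NODE, FDT_END))
    off_struct = 40 + 16                          # header + empty memreserve block
    off_strings = off_struct + len(structure)
    header = struct.pack(">10I", FDT_MAGIC, off_strings + len(strings), off_struct,
                         off_strings, 40, 17, 16, 0, len(strings), len(structure))
    return header + b"\0" * 16 + structure + strings
```

To check a real file, pass open("bcm2709-rpi-2-b.dtb", "rb").read() to expected_machine and compare the result with the -machine you give QEMU.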

Related

Will an I2C device get detected when it is connected to Raspberry Pi when there is no Driver and dts related to it?

I'm confused about whether an I2C device will get detected on a Raspberry Pi even when there are no device drivers and DTS files related to it.
Will it show up when we use this command
ls /dev/i2c-*
and are we able to detect its address when I try to probe using
i2cdetect -y bus_number
In short:
... when there are no device drivers and DTS files related to it.
Will it show up when we use this command
ls /dev/i2c-*
No. This command will list available I2C buses, not devices.
and are we able to detect its address when I try to probe using
i2cdetect -y bus_number
Maybe. In most cases yes.
A bit more elaborated:
Depending on what kind of I2C device it is, and what you want to do with it, you might still be able to communicate with it.
driver - best case
If you have the relevant device tree change to describe this I2C device (which bus it is on, its address, extra signals if needed - like an interrupt pin, etc.) and the associated driver is present (built-in or as a module; check the *_defconfig options in the Linux kernel source), the driver should probe the device during either boot or manual module loading.
Why best case? If you have a driver you don't have to think about protocols and programming, and, as an example, reading a value from ADC device might be as simple as:
root#pi:~# cat /sys/bus/iio/devices/iio:device0/in_voltage0_raw
291
i2ctools
Another approach would be to use i2cget/i2cset tools from i2ctools package. No device tree changes needed. With these tools you can talk with any unclaimed I2C device on any enabled I2C bus in device tree.
You'll need to implement communication with the I2C device on your own. From a security and stability perspective, IMO this is the worst way to go, but it is good for hardware debugging and, in some cases, initial bring-up.
Example is here.
Note regarding i2cdetect: this command tries to detect devices on a particular bus, but gives no guarantee. As per man i2cdetect:
As there is no standard I2C detection command, i2cdetect uses arbitrary SMBus commands (namely SMBus quick write and SMBus receive byte) to probe for devices.
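As an added example (not from the question or the answer above), this script parses the grid that i2cdetect prints into the addresses that ACKed the SMBus probe, and separates those already claimed by a kernel driver (printed as UU) from the unclaimed ones that i2cget/i2cset could talk to. The grid is constructed sample output, not a real capture.

```python
# Constructed sample of `i2cdetect -y 1`: one unclaimed device at 0x48 and one
# device at 0x68 that a kernel driver has already bound (shown as "UU").
SAMPLE = """\
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- 48 -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- UU -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
"""

def parse_i2cdetect(text):
    """Map each 7-bit address that answered the probe to 'free' (no driver bound)
    or 'claimed' (UU: a kernel driver owns it, so i2cget/i2cset will be refused)."""
    found = {}
    for line in text.splitlines():
        if len(line) < 3 or line[2] != ":":
            continue                          # column header or blank line
        base = int(line[:2], 16)
        for i in range(16):
            cell = line[4 + 3 * i:6 + 3 * i].strip()   # each cell is " xx"
            if not cell or cell == "--":
                continue                      # outside 0x03-0x77, or no ACK
            found[base + i] = "claimed" if cell == "UU" else "free"
    return found

def unclaimed(text):
    """Addresses you could drive from userspace with i2cget/i2cset (sorted)."""
    return sorted(a for a, s in parse_i2cdetect(text).items() if s == "free")
```

A device that does not ACK the quick-write or receive-byte probe never appears in this grid at all, which is why the answer says "maybe".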

Does QEMU emulate enough features for vfio to work in the guest?

I'm considering using vfio instead of uio to access a PCI device from userspace code within a QEMU guest.
Can Linux running as a x86_64 QEMU guest use the vfio driver to make an emulated PCI device accessible to a userspace program running in the guest?
It's not clear to me because vfio appears to make heavy use of hardware virtualisation features (such as the IOMMU) and I'm not sure whether QEMU emulates these to the degree required to make this work.
Note that I'm not trying to pass through real PCI devices to the QEMU guest, which is what vfio is traditionally used for (by QEMU itself). Instead I am investigating whether vfio is a suitable alternative to uio within the context of the guest.
The question doesn't mention any elaborations regarding vfio support within the guest which you may have already come across yourself. That said, it would be useful to address this in the answer.
QEMU does provide VT-d emulation (guest vIOMMU). However, enabling this demands that the Q35 platform type be selected. For example, one may enable the vIOMMU device in QEMU with the following options that need to be passed to the x86_64-softmmu/qemu-system-x86_64 application on start:
-machine q35,accel=kvm,kernel-irqchip=split -device intel-iommu,intremap=on
This will provide a means to bind a device within the guest to vfio-pci.
More info can be found on QEMU wiki: Features/VT-d .
If you did try following this approach and got stuck with a malfunction, it would be nice if you shed some light on your precise observations.

How to setup Xbee without X-CTU(official tool)

I am trying to make two Raspberry Pis communicate (text) with each other via XBee S2 modules. Instead of using an XBee shield, I connected the XBee and the Pi with dupont lines (pins: 3.3V, Tx, Rx, Ground).
On the Pi, install minicom and run
minicom -b 9600 -D /dev/ttyAMA0
I could enter XBee command mode, where I got the reply 'OK' when I typed some commands. My test architecture is shown below.
(C)PI-XBee (R)XBee-PI
I set the same PAN ID on both, and set each one's destination address to the other's source address. However, I cannot get the messages from each other in minicom.
Did I miss something? Or do I need to set it up with X-CTU?
Did you exit command mode before sending data (I think the command is ATCN, or just let command mode time out)? Are the modules joined to the same network? Check AI (association indicator, should be zero), SC (scan channels, identical on both modules), CH (channel) and OI (operating PAN ID). The read-only CH and OI should be the same on both modules if they're on the same network. Use ATNR to reset the network on the coordinator, and then on the router to force it to rejoin the network. Be sure to use ATWR to write your settings if you want them to stick after power cycling.
Edit: Turns out both modules had Router firmware installed, so they were both trying to join a network. The S2B has different firmware files for Coordinator and Router/End Device node types. The S2C has a single firmware and uses the setting of ATCE to select coordinator (1) or router/end device (0) operation.

How can I debug Windows in one VM from another VM using VirtualBox?

I am working on some start-up (pre-logon) code for Windows 7, and would like to be able to debug it (if only to see how it really works, as Microsoft's documentation is terrible).
My environment is VirtualBox on a Linux host, with three Windows VMs (a Windows 2008 domain controller, a Windows 7 dev machine, and a Windows 7 test machine), and I'd like to be able to debug the startup process of the test machine remotely from the dev machine using a virtual serial connection between the two virtual machines.
[I have, in another life, debugged Linux kernel drivers in one linux VM from another using VMware workstation on a Windows host so I know that this sort of thing is potentially doable.]
I've seen people using windbg to debug Windows in a VirtualBox VM from the host, but I need to do it from a second guest (because my host is non-Windows). Has anyone figured out how to do that?
Edit:
I had tried the obvious approach before I posted. I created a virtual serial port in each VM configuration and attached them both to the same host pipe, to be created by the dev VM (debugger) and used by the test VM (debugee). I then ran
bcdedit /dbgsettings serial debugport:1 baudrate:115200
bcdedit /debug {current} on
in the test VM and shut it down. I ran windbg in the dev VM, selected kernel debugging (on the correct serial port) and restarted the test VM. Some messages appeared about not having any symbols available, and the test VM hung.
I have since found this article: http://www.benjaminhumphrey.co.uk/remote-kernel-debugging-windbg-virtualbox/ which (although that guy is using a Windows host) seems to describe exactly the method I'd tried, but his test VM doesn't hang. The output I get in the windbg window is the same as his, but stops before the line starting "Windows XP Kernel ..."
I'm now less sure that this problem is related to VirtualBox and more unsure as to whether I'm using windbg correctly. Any help would be appreciated.
Another edit: I have tried attaching the virtual serial port of the Test VM to a host file, and I get some debugging output in the file. I have tried setting the virtual serial ports of the two VMs to point to a host pipe and running a terminal (rather than WinDbg) in the Dev VM, and I get debugging information in the terminal.
I think I've now determined that this is definitely a problem with WinDbg rather than VirtualBox (I'll remove the virtualbox tag and replace it with windbg) but I'm not sure why WinDbg isn't talking.
More information:
I've just upgraded to VirtualBox 4.2.4 (not sure whether the version matters) and have looked at this again.
I rebuilt the test VM and was more patient!
It now seems that the test VM is running - and I do eventually get some output in the windbg window - but it takes about 15 minutes for the debuggee OS to boot! This is clearly not useful for day-to-day kernel debugging. I have no idea why this should be so slow ... there is no perceptible slowdown if I run a simple terminal in the dev VM instead of windbg (though, of course, the debug information is then mostly garbage).
Any ideas?
I realize this is one helluva necro, but...
Have you tried setting up the debugee for kernel-mode network debugging? I'm thinking that the slowdown is in a large part because serial is so g.d. slow.
http://msdn.microsoft.com/en-us/library/windows/hardware/hh439346%28v=vs.85%29.aspx
If/when M$ decides to rot away that link, these parts of above article are what you need to do to get this set up:
Setting Up the Target Computer
To set up the target computer, follow these steps:
Verify that the target computer has a supported network adapter.
Connect the supported adapter to a network hub or switch using standard CAT5 or better network cable. Do not use a crossover cable, and do not use a crossover port in your hub or switch.
In an elevated Command Prompt window, enter the following commands, where w.x.y.z is the IP address of the host computer, and n is a port number of your choice:
bcdedit /debug on
bcdedit /dbgsettings net hostip:w.x.y.z port:n
bcdedit will display an automatically generated key. Copy the key and store it on a removable storage device like a USB flash drive. You will need the key when you start a debugging session on the host computer.
Note: We strongly recommend that you use an automatically generated key. However, you can create your own key as described later in the Creating Your Own Key section.
If there is more than one network adapter in the target computer, use Device Manager to determine the PCI bus, device, and function numbers for the adapter you want to use for debugging. Then in an elevated Command Prompt window, enter the following command, where b, d, and f are the bus number, device number, and function number of the adapter:
bcdedit /set "{dbgsettings}" busparams b.d.f
Reboot the target computer.
And to connect to it, use the following steps:
Using WinDbg
On the host computer, open WinDbg. On the File menu, choose Kernel Debug. In the Kernel Debugging dialog box, open the Net tab. Enter your port number and key. Click OK.
You can also start a session with WinDbg by opening a Command Prompt window and entering the following command, where n is your port number and Key is the key that was automatically generated by bcdedit when you set up the target computer:
windbg -k net:port=n,key=Key
If you are prompted about allowing WinDbg to access the port through the firewall, allow WinDbg to access the port for all the different network types.
Using KD
On the host computer, open a Command Prompt window. Enter the following command, where n is your port number and Key is the key that was automatically generated by bcdedit when you set up the target computer:
kd -k net:port=n,key=Key
If you are prompted about allowing KD to access the port through the firewall, allow KD to access the port for all the different network types.

xen split driver model

I am confused over these two concepts: the Xen split driver model and paravirtualization. Are these two the same? Do you get the split driver model when Xen is running in fully virtualized mode?
Paravirtualization is the general concept of making modifications to the kernel of a guest Operating System to make it aware that it is running on virtual, rather than physical, hardware, and so exploit this for greater efficiency or performance or security or whatever. A paravirtualized kernel may not function on physical hardware at all, in a similar fashion to attempting to run an Operating System on incompatible hardware.
The Split Driver model is one technique for creating efficient virtual hardware. One device driver runs inside the guest Virtual Machine (aka domU) and communicates with another corresponding device driver inside the control domain Virtual Machine (aka dom0). This pair of codesigned device drivers function together, and so can be considered to be a single "split" driver.
Examples of split device drivers are Xen's traditional block and network device drivers when running paravirtualized guests.
The situation is blurrier when running HVM guests. When you first install a guest Operating System within a HVM guest, it uses the OS's native device drivers that were designed for use with real physical hardware, and Xen and dom0 emulate those devices for the new guest. However, when you then install paravirtual drivers within the guest (these are the "tools" that you install in the guest on XenServer, or XenClient, and likely also on VMware, etc.) - well, then you're in a different configuration again. What you have there is a HVM guest, running a non-paravirtualized OS, but with paravirtual split device drivers.
So, to answer your question, when you're running in fully virtualized mode, you may or may not be using split device drivers -- it depends on whether or not they are actually installed to be used by the guest OS. Recent Linux kernels already include paravirtual drivers that can be active within a HVM domain.
As I understand it, they're closely related, though not exactly the same. Split drivers means that a driver in domU works by communicating with a corresponding driver in dom0. The communication is done via hypercalls that ask the Xen hypervisor to move data between domains. Paravirtualization means that a guest domain knows it's running under a hypervisor and talks to the hypervisor instead of trying to talk to real hardware, so a split driver is a paravirtualized driver, but paravirtualization is a broader concept.
Split drivers aren't used in an HVM domain because the guest OS uses its own normal drivers, which think they're talking to real hardware.