How is an ARM trusted OS different from a normal OS?

I am going through the ARM Trusted Firmware architecture from this link https://github.com/ARM-software/arm-trusted-firmware/blob/master/docs/firmware-design.rst and I am confused. I have the following queries:
What is the need for a trusted OS?
How is it different from a normal-world OS?
If the trusted OS is secure, then why not use only the trusted OS and remove the normal OS?
From what threats does the trusted OS provide security, and how?
When is a switch between the trusted OS and the normal world required?

I would suggest you first read the introductory article on Wikipedia, then the notes on OP-TEE, an implementation running on Cortex-A processors and now maintained by Linaro, then the various documentation available here, and finally look at the OP-TEE code itself.
Now, rather concise answers to your questions could be:
1) What is the need for a trusted OS?
The need is to prevent resources from being directly accessed from the general-purpose OS it runs concurrently with, for example by a user of that OS having root privileges.
2) How is it different from the normal-world OS?
It is small by design, and it runs with more privileged access to the hardware than the general-purpose operating system. On an ARMv8-A system, the most privileged piece of the trusted firmware, the secure monitor, runs at EL3, while a hypervisor runs at EL2 and Linux at EL1; the Trusted OS itself (OP-TEE, for example) typically runs at Secure EL1.
3) If the trusted OS is secure, then why not use only the trusted OS and remove the normal OS?
Because of its limited scope: its purpose is not to replace, say, Linux, which has millions of lines of working, well-tested code, but to secure resources from Linux users at a small cost.
4) From what threats does the trusted OS provide security, and how?
From attempts by users of the general-purpose OS, whether legitimate or illegitimate (for example, attackers who have compromised the general-purpose OS), to access resources or services protected by the Trusted OS.
5) When is a switch between the trusted OS and the normal world required?
It is required whenever code running in the context of the general-purpose operating system needs to access a resource managed by the Trusted OS, for example to have encrypted content decrypted using a key that is accessible only to the Trusted OS. This involves (I think) the use of the SMC instruction, as sketched below.
Another occurrence of such a switch is when a hardware interrupt needs to be handled: EL3, EL2, and EL1 have their own interrupt vector tables, and interrupts taken at EL2 or EL1 can be routed to EL3, so that interrupt handling may be performed safely in the context of the Trusted Execution Environment - thanks to artless noise for reminding me about this.
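To make the SMC path concrete, here is a minimal sketch of how normal-world code might issue such a call, assuming an AArch64 kernel (EL1) context and the SMC Calling Convention (function ID in x0, arguments in x1..x3, result back in x0); the function ID used in the example caller is a made-up placeholder, not a real Trusted OS service:

    #include <stdint.h>

    /*
     * Minimal sketch of issuing an SMC from the normal world, assuming an
     * AArch64 kernel (EL1) context and the SMC Calling Convention (SMCCC).
     */
    static uint64_t smc_call(uint64_t func_id, uint64_t a1, uint64_t a2, uint64_t a3)
    {
        register uint64_t x0 __asm__("x0") = func_id;
        register uint64_t x1 __asm__("x1") = a1;
        register uint64_t x2 __asm__("x2") = a2;
        register uint64_t x3 __asm__("x3") = a3;

        /* Traps to the secure monitor at EL3, which saves the normal-world
         * context and dispatches the request to the Trusted OS. */
        __asm__ volatile("smc #0"
                         : "+r"(x0)
                         : "r"(x1), "r"(x2), "r"(x3)
                         : "memory");
        return x0; /* SMCCC return value */
    }

    /* Example: ask a (hypothetical) Trusted OS service to decrypt a buffer;
     * the ID 0xC2000001 is a placeholder, not a real service. */
    uint64_t decrypt_blob(uint64_t buf_pa, uint64_t len)
    {
        return smc_call(0xC2000001, buf_pa, len, 0);
    }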

Related

Does QEMU emulate enough features for vfio to work in the guest?

I'm considering using vfio instead of uio to access a PCI device from userspace code within a QEMU guest.
Can Linux running as a x86_64 QEMU guest use the vfio driver to make an emulated PCI device accessible to a userspace program running in the guest?
It's not clear to me because vfio appears to make heavy use of hardware virtualisation features (such as the IOMMU) and I'm not sure whether QEMU emulates these to the degree required to make this work.
Note that I'm not trying to pass through real PCI devices to the QEMU guest, which is what vfio is traditionally used for (by QEMU itself). Instead I am investigating whether vfio is a suitable alternative to uio within the context of the guest.
The question doesn't mention any findings regarding vfio support within the guest that you may have already come across yourself. That said, it would be useful to address this in the answer.
QEMU does provide VT-d emulation (a guest vIOMMU). However, enabling it requires selecting the Q35 platform type. For example, one may enable the vIOMMU device in QEMU with the following options passed to the x86_64-softmmu/qemu-system-x86_64 application on start:
-machine q35,accel=kvm,kernel-irqchip=split -device intel-iommu,intremap=on
This will provide a means to bind a device within the guest to vfio-pci.
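Once the guest sees the vIOMMU, binding a device to vfio-pci inside the guest follows the usual sysfs procedure. Below is a minimal sketch in C of those sysfs writes; the PCI address 0000:00:04.0 and the ID pair "10ee 7021" are placeholders for illustration, and the vfio-pci module is assumed to be already loaded:

    /*
     * Sketch: rebind a PCI device to vfio-pci from inside the guest by
     * writing to sysfs - the same steps usually done from a shell.
     */
    #include <stdio.h>

    static int write_str(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", val);
        return fclose(f);
    }

    int main(void)
    {
        /* Detach the device from its current driver (may fail harmlessly
         * if it is not bound to anything). */
        write_str("/sys/bus/pci/devices/0000:00:04.0/driver/unbind",
                  "0000:00:04.0");

        /* Tell vfio-pci to accept this vendor:device pair; the driver core
         * will usually probe and bind the device at this point. */
        if (write_str("/sys/bus/pci/drivers/vfio-pci/new_id", "10ee 7021"))
            return 1;

        /* Explicit bind, in case new_id did not trigger an automatic probe
         * (fails harmlessly if the device is already bound). */
        write_str("/sys/bus/pci/drivers/vfio-pci/bind", "0000:00:04.0");
        return 0;
    }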
More info can be found on the QEMU wiki: Features/VT-d.
If you did try this approach and ran into a malfunction, it would be nice if you could shed some light on your precise observations.

Do all server guest accounts have emulated operating systems?

If a physical server is running a particular operating system and a hypervisor on top, do all guest accounts have the same operating system as that server?
The hypervisor guests run their own operating systems.
Depending on how the hypervisor is implemented (you mention in your question that it runs "on top of" a particular host operating system), parts of what the guest sees as "hardware" may actually be emulated or wrapped by the host OS, but software running in the guest will not be aware of that (it sees only its own guest OS).

RFC or something for choosing private ethernet MAC addresses?

For virtualization purposes I want to assign each of my VMs' Ethernet cards an individual static MAC address. This way it is easier for me to manage DHCP.
Normally ESXi Server chooses an address with this prefix: 00:50:56
But I don't want to use this prefix, to avoid problems with other auto-generated MACs or VMs joining later.
The VMs are only connected to the Internet via NAT, so the MACs will stay private.
Is there a guideline which MAC to choose?
Something like a private MAC address space?
Or should I just choose a random one?
There is lots of information on the Wikipedia page linked below. Creating your own MAC addresses can lead to conflicts, but if they stay private it should not cause you much trouble. The conventional "private" space is the locally administered range: set bit 1 of the first octet (making its second hex digit 2, 6, A, or E), which marks the address as locally assigned rather than taken from a vendor's allocated block. How to modify the MAC for your particular virtualization technology is of course not addressed here.
Wikipedia MAC Address
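As a quick illustration, here is a small sketch that generates a random MAC in that locally administered, unicast space (the bit manipulation is the point; the random source is just for demonstration):

    /*
     * Sketch: generate a random locally administered, unicast MAC address.
     * Setting bit 1 of the first octet marks it "locally administered";
     * clearing bit 0 keeps it unicast (not multicast).
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        unsigned char mac[6];

        srand((unsigned)time(NULL)); /* fine for a sketch, not for production */
        for (int i = 0; i < 6; i++)
            mac[i] = (unsigned char)(rand() & 0xFF);

        mac[0] = (unsigned char)((mac[0] | 0x02) & 0xFE); /* local + unicast */

        printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        return 0;
    }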
Okay, I found out that in newer versions of ESXi Server / VMware, dynamically assigned MAC addresses will start with 00:0c:29.
In older versions, and when MACs are statically assigned, they will start with 00:50:56. So it is safe to choose the 00:50:56 prefix when I can be sure not to use older and newer VMs at the same time.
http://communities.vmware.com/message/55378

xen split driver model

I am confused about these two concepts: the Xen split driver model and paravirtualization. Are these two the same? Do you get the split driver model when Xen is running in fully virtualized mode?
Paravirtualization is the general concept of making modifications to the kernel of a guest Operating System to make it aware that it is running on virtual, rather than physical, hardware, and so exploit this for greater efficiency or performance or security or whatever. A paravirtualized kernel may not function on physical hardware at all, in a similar fashion to attempting to run an Operating System on incompatible hardware.
The Split Driver model is one technique for creating efficient virtual hardware. One device driver runs inside the guest Virtual Machine (aka domU) and communicates with another corresponding device driver inside the control domain Virtual Machine (aka dom0). This pair of codesigned device drivers function together, and so can be considered to be a single "split" driver.
Examples of split device drivers are Xen's traditional block and network device drivers when running paravirtualized guests.
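To give a feel for the mechanism, here is a standalone toy sketch of the shared-ring idea behind split drivers. Real Xen drivers use the ring macros from xen/interface/io/ring.h together with grant tables and event channels; this sketch only models the producer/consumer flow between the two halves:

    /*
     * Toy sketch of the shared-ring idea behind Xen split drivers: the
     * frontend (in domU) places requests in a ring living in shared memory,
     * and the backend (in dom0) consumes them and posts responses.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 8 /* power of two in real rings, so indices wrap cheaply */

    struct request  { uint64_t id; uint64_t sector; };
    struct response { uint64_t id; int status; };

    struct shared_ring {
        uint32_t req_prod, req_cons;   /* request producer/consumer indices  */
        uint32_t rsp_prod, rsp_cons;   /* response producer/consumer indices */
        struct request  req[RING_SIZE];
        struct response rsp[RING_SIZE];
    };

    /* Frontend half: queue a request (a real driver would then notify the
     * backend through an event channel). */
    static void frontend_submit(struct shared_ring *r, uint64_t id, uint64_t sector)
    {
        r->req[r->req_prod % RING_SIZE] = (struct request){ id, sector };
        r->req_prod++; /* real code issues a write barrier before this */
    }

    /* Backend half: consume pending requests and post responses. */
    static void backend_poll(struct shared_ring *r)
    {
        while (r->req_cons != r->req_prod) {
            struct request q = r->req[r->req_cons % RING_SIZE];
            r->req_cons++;
            r->rsp[r->rsp_prod % RING_SIZE] = (struct response){ q.id, 0 };
            r->rsp_prod++;
        }
    }

    int main(void)
    {
        struct shared_ring ring = {0};
        frontend_submit(&ring, 1, 42);
        backend_poll(&ring);
        printf("response for request %llu: status %d\n",
               (unsigned long long)ring.rsp[0].id, ring.rsp[0].status);
        return 0;
    }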
The situation is blurrier when running HVM guests. When you first install a guest Operating System within a HVM guest, it uses the OS's native device drivers that were designed for use with real physical hardware, and Xen and dom0 emulate those devices for the new guest. However, when you then install paravirtual drivers within the guest (these are the "tools" that you install in the guest on XenServer, or XenClient, and likely also on VMware, etc.) - well, then you're in a different configuration again. What you have there is a HVM guest, running a non-paravirtualized OS, but with paravirtual split device drivers.
So, to answer your question, when you're running in fully virtualized mode, you may or may not be using split device drivers -- it depends on whether or not they are actually installed to be used by the guest OS. Recent Linux kernels already include paravirtual drivers that can be active within a HVM domain.
As I understand it, they're closely related, though not exactly the same. Split drivers means that a driver in domU works by communicating with a corresponding driver in dom0. The communication is done via hypercalls that ask the Xen hypervisor to move data between domains. Paravirtualization means that a guest domain knows it's running under a hypervisor and talks to the hypervisor instead of trying to talk to real hardware, so a split driver is a paravirtualized driver, but paravirtualization is a broader concept.
Split drivers aren't used in an HVM domain because the guest OS uses its own normal drivers, which think they're talking to real hardware.

iOS communicating with OS X

I'm looking for a pointer in the right direction to get started with writing an iPhone app that sends commands to OS X, for example telling OS X to sleep. I can't seem to find the relevant part of the documentation.
AFAIK, most apps have been performing this kind of communication with a client/server design, where a "server" app runs on the host OS X machine, and a "client" app on the iOS device connecting using some sort of protocol (HTTP? Bonjour?).
You won't find this in the documentation because this is a niche design pattern that few apps need (especially since documents can now be shared more easily with the new version of iOS and iTunes).
iOS doesn't support Cocoa distributed objects (remote messaging between Objective-C objects), which would be the easiest way to communicate between two OS X machines.
An alternative to the HTTP client/server approach could be making your iPhone app connect to the OS X machine via a remote Unix shell (over ssh) and then issue Unix or AppleScript commands to perform your system actions.
You could also set up a socket connection. I have written an app for Android that does exactly the same for Windows computers. The app is in use in a computer store :)
For the Mac, you have to use the sudo command, so you need the user to type in a password on first use.
Then the server application on the Mac can run "sudo shutdown -h now" when it receives a predefined byte sequence on the socket input.
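For reference, here is a minimal sketch of what the server side on the Mac could look like, using plain POSIX sockets; the port number (5000) and the command byte ('S') are arbitrary illustration choices, and a real app would add authentication before acting on commands:

    /*
     * Minimal sketch of the Mac-side server: listen on a TCP port and run a
     * system action when a predefined command byte arrives.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};

        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);

        if (srv < 0 || bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(srv, 1) < 0) {
            perror("listen");
            return 1;
        }

        for (;;) {
            int cli = accept(srv, NULL, NULL);
            if (cli < 0)
                continue;
            char cmd = 0;
            if (read(cli, &cmd, 1) == 1 && cmd == 'S') {
                /* Needs root (or a sudoers entry) to actually shut down. */
                system("shutdown -h now");
            }
            close(cli);
        }
    }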