Can applications running in ring0 be secure without formal verification?

How can one ensure the security of a program that runs in ring0 without formal verification? Could a VM be used without separating userspace and kernelspace?

The question is slightly confusing, but I'll do my best to answer.
Running any untrusted code in a privileged mode is unlikely to be "secure" in the sense that most people understand it. As you correctly surmise, however, it is possible to use something akin to a virtual machine in order to moderate the actions which an untrusted process can take within that environment. This is the principle upon which modern "hypervisors" operate - access to the hardware (or memory) is moderated by some piece of "monitor" software or hardware.
That said, if you are taking that approach, it's likely to be the case that formal verification of the virtual machine is highly desirable. Otherwise it seems possible that a maliciously constructed program could find a way to escape from the virtual machine, or make the virtual machine behave in undesirable ways.
A reasonable modern approach to this problem is to use proof-carrying code, in which a piece of untrusted code carries with it a machine-checkable proof that it behaves according to some security policy. All the host operating system needs to do at that point is to check the proof against the code (a reasonably computationally cheap operation), and then it is safe to run that code without needing to virtualise it or do any runtime checking.
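To make the shape of that workflow concrete, here is a minimal sketch in C. Every type and function in it (code_t, proof_t, check_proof, load_untrusted) is a hypothetical stand-in, not the API of any real PCC system:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical containers for the untrusted code, its attached
 * proof, and the host's security policy. */
typedef struct { const unsigned char *bytes; size_t len; } code_t;
typedef struct { const unsigned char *bytes; size_t len; } proof_t;
typedef struct { int policy_id; } policy_t;

/* Stub proof checker. A real one would replay the machine-checkable
 * proof against the code; that check is computationally cheap -
 * constructing the proof (done by the code producer) is the hard part. */
static bool check_proof(const code_t *c, const proof_t *p,
                        const policy_t *pol) {
    (void)c; (void)p; (void)pol;
    return false;               /* stub: reject everything */
}

/* The host's trusted loader: check first, then run natively with
 * no runtime monitoring or virtualization. */
static void load_untrusted(const code_t *c, const proof_t *p,
                           const policy_t *pol) {
    if (check_proof(c, p, pol)) {
        puts("proof valid: running code directly");
        /* ...jump to the verified code here... */
    } else {
        puts("rejected: no valid proof for this code");
    }
}

int main(void) {
    code_t c = { NULL, 0 };
    proof_t p = { NULL, 0 };
    policy_t pol = { 0 };
    load_untrusted(&c, &p, &pol);
    return 0;
}
```

The trusted computing base shrinks to the proof checker itself, which is why this approach pairs well with formal verification of that one small component.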

Related

Why do we need binary translation in full virtualization?

In hardware-assisted virtualization, the guest operating system runs on Ring 0, therefore it can run privileged instructions directly, am I right?
So why, in full virtualization, won't the VMM just run guest privileged instructions on Ring 0? Why do we need to translate them?
One reason that comes to mind is different architectures (different guest and host). Is there anything more?
therefore it can run privileged instructions directly, am I right?
No, it is not completely true. Privileged instructions would still attempt to access privileged resources, and they cannot be allowed to see or change those behind the VMM's back; therefore they trap. That is why a classic VMM executes guests with the "trap-and-emulate" approach. The majority of guest instructions, which are non-privileged, are executed directly, while privileged ones trap and are emulated one by one. No translation, that is, transformation of large (>1 guest instruction) blocks of code, is required in any case.
Alternatively, a system resource can be made non-privileged, so that instructions accessing it become innocuous inside the virtualized environment.
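A minimal sketch of that trap-and-emulate loop in C; run_guest_until_trap and emulate_privileged_insn are hypothetical stand-ins, not a real hypervisor API:

```c
#include <stdio.h>

/* Hypothetical guest state and exit descriptor. */
typedef struct { unsigned long pc; /* ...registers, etc... */ } guest_t;
typedef enum { EXIT_PRIV_INSN, EXIT_HALT } exit_reason_t;

/* Hypothetical primitive: run the guest directly on the CPU in a
 * de-privileged mode until something traps, then report why.
 * Non-privileged guest instructions execute at native speed inside
 * this call and never reach the VMM. */
static exit_reason_t run_guest_until_trap(guest_t *g) {
    (void)g;
    return EXIT_HALT;           /* stub so the sketch compiles */
}

/* Emulate one trapped privileged instruction: decode it, apply its
 * effect to the *virtual* system resources (never the real ones),
 * and advance the guest program counter. */
static void emulate_privileged_insn(guest_t *g) {
    printf("emulating privileged instruction at %#lx\n", g->pc);
    g->pc += 4;                 /* stub: assume fixed-size instructions */
}

/* The classic trap-and-emulate loop: privileged instructions trap
 * and are emulated one by one; no multi-instruction translation. */
static void vmm_loop(guest_t *g) {
    for (;;) {
        switch (run_guest_until_trap(g)) {
        case EXIT_PRIV_INSN: emulate_privileged_insn(g); break;
        case EXIT_HALT:      return;
        }
    }
}

int main(void) {
    guest_t g = { 0x1000 };
    vmm_loop(&g);
    return 0;
}
```

The key property is that only the rare privileged instruction pays the trap cost; everything else runs directly.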
So why in full virtualization, VMM just won't run guest privileged instructions on Ring 0?
"Ring 0" is just a number, it does not mean much except that certain instructions receive new semantics: instead of faulting as they would do on the higher rings they are allowed to access system resources. But inside a VMM, they are not allowed to do that.
why we need to translate them?
We don't; individual privileged instructions may be trapped and then emulated, or interpreted. "Translation" as a term only has meaning for blocks of instructions.
One reason that comes to mind is different architectures
That is a sort of degenerate case in which 100% of guest instructions are "privileged", i.e. they will not behave as expected on the chosen host. It does not make sense to attempt executing them directly, and interpreting each and every one of them is too slow for many applications. This is where translation, i.e. compilation of bigger blocks, starts making sense.
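To contrast this with one-by-one interpretation, here is a hedged sketch of a translation cache; translate_block and the function-pointer block representation are hypothetical stand-ins for a real code generator:

```c
#include <stddef.h>
#include <stdio.h>

#define CACHE_SLOTS 1024

/* A "translated block": host-executable code for one guest basic
 * block. Modeled here as a plain function pointer; a real binary
 * translator would emit host machine code into a buffer. It returns
 * the guest PC of the next block (0 = halt in this sketch). */
typedef unsigned long (*tblock_fn)(void);

static struct { unsigned long guest_pc; tblock_fn fn; } cache[CACHE_SLOTS];

/* Stub "compiled block" so the sketch runs. */
static unsigned long demo_block(void) {
    puts("executing translated block");
    return 0;
}

/* Hypothetical stand-in for the expensive step: decode a whole guest
 * basic block starting at guest_pc and compile it to host code.
 * This runs once per block, not once per executed instruction. */
static tblock_fn translate_block(unsigned long guest_pc) {
    printf("translating block at %#lx\n", guest_pc);
    return demo_block;
}

/* Look up a block, translating on a miss and reusing on a hit.
 * Amortizing translation over many executions is what beats
 * interpreting every instruction every time. */
static tblock_fn lookup_block(unsigned long guest_pc) {
    size_t slot = guest_pc % CACHE_SLOTS;
    if (cache[slot].fn == NULL || cache[slot].guest_pc != guest_pc) {
        cache[slot].guest_pc = guest_pc;
        cache[slot].fn = translate_block(guest_pc);
    }
    return cache[slot].fn;
}

int main(void) {
    unsigned long pc = 0x1000;
    while (pc != 0)
        pc = lookup_block(pc)();    /* dispatch loop */
    return 0;
}
```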
is there anything more?
For Intel architecture, there are certain architectural idiosyncrasies that sometimes make the idea of (temporarily) disabling hardware-assisted virtualization and falling back to binary translation beneficial in terms of speed and correctness. However, I assume this topic to be part of another, more specific question, as the answer is quite involved and requires deep understanding of Intel VT-x.

How to imitate servers (without loss of computing power)?

I have a production environment, which is running on one server. But I need to run 2 instances of one piece of software, each on "another" server.
Is it possible to imitate more servers on one real server for free, without losing computing power or network flow in/out of the real server?
EDIT:
In other words: I want to run two instances of the same software on one machine.
I then need to use a function that transports a subinstance from instance1 to instance2. But this function can only be used when instance1 is on a different server from instance2. So I need to make the two instances, both running locally, appear to be on different servers.
I'm making an assumption that you are using Windows, in which case you could use a hypervisor like Hyper-V; however, if you have only purchased one license of Windows, you may be fairly limited in what you can run in a production capacity.
If you mean that the software you need to run only has one license, you typically are not allowed to virtualize it either. So it seems like the answer is that, legally, you are not going to be able to do much with just one license. However, my assumptions may be all wrong; your question wasn't clear enough.

Programming considerations for virtualized applications

There are lots of questions on SO asking about the pros and cons of virtualization for both development and testing.
My question is subtly different - in a world in which virtualization is commonplace, what are the things a programmer should consider when it comes to writing software that may be deployed into a virtualized environment? Some of my initial thoughts are:
Detecting if another instance of your application is running
Communicating with hardware (physical/virtual)
Resource throttling (app written for multi-core CPU running on single-CPU VM)
Anything else?
You have most of the basics covered with the three broad points. Watch out for:
Hardware communication related issues. Disk access speeds vary vastly (and may have unusually high extremes - imagine a VM that is shut down for 3 days in the middle of a disk write...). Network access may be interrupted or return unusual responses
Fancy pointer arithmetic. Try to avoid it
Heavy reliance on uncommon low-level/assembly instructions
Reliance on machine clocks. Remember that any calls you make to the clock, and any time intervals, may regularly return unusual values when running on a VM (see the sketch after this list)
Single CPU apps may find themselves running on multiple CPU machines that do funky things like work stealing
Corner cases and unusual failure modes are much more common. You might not have to worry as much that the network card will disappear in the middle of your communication on a real machine, as you would on a virtual one
Manual management of resources (memory, disk, etc...). The more automated the work, the better the virtual environment is likely to be at handling it. For example, you might be better off using a memory-managed type of language/environment, instead of writing an application in C.
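On the clock point above, a small defensive sketch using standard POSIX calls: measure intervals with CLOCK_MONOTONIC, which never runs backwards, rather than wall-clock time, which can jump when a VM is paused, migrated, or resynchronized. Even the monotonic clock can stretch on a contended host, so keep timeout margins generous:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Wall-clock time (time(), gettimeofday()) can jump when a VM is
 * paused, migrated, or has its clock corrected by the host.
 * CLOCK_MONOTONIC never goes backwards, so it is the safer base
 * for measuring elapsed time and enforcing timeouts. */
static double elapsed_seconds(const struct timespec *start) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec)
         + (now.tv_nsec - start->tv_nsec) / 1e9;
}

int main(void) {
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);

    sleep(1);  /* stand-in for real work; a busy host may stretch this */

    /* On a contended host, "1 second of work" can take far longer
     * than on bare metal - size timeouts accordingly. */
    printf("elapsed: %.3f s\n", elapsed_seconds(&start));
    return 0;
}
```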
In my experience there are really only a couple of things you have to care about:
Your application should not fail because of CPU time shortage (e.g. timeouts that are too tight)
Don't use low-priority always-running processes to perform tasks in the background
The clock may run unevenly
Don't trust what the OS says about system load
Almost any other issue should not be handled by the application but by the virtualizer, the host OS or your preferred sys-admin :-)

Benefits of JVM atop an OS VM?

I see many deployments where IT groups effectively run nothing but a JVM application stack inside a VM instance (VMware, etc.).
I guess I consider the JVM to be a formal VM: what real benefit is it to run your Java application stack inside another VM?
Two JVM instances within the same (real or virtualized) machine wouldn't be completely isolated from each other: they couldn't both have sockets listening on the same well-known numbered port, they might interfere with each other if they both wrote in the same filesystem, and so on, and so forth.
Using OS-level VMs (vmware or whatever) does guarantee you as much isolation as you would have on physically separate systems, which is quite a different proposition.
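The port conflict is easy to demonstrate at the OS level. A small POSIX C sketch: the second bind to the same port fails with EADDRINUSE, which is exactly what bites two JVMs on one machine, while two OS-level VMs would each succeed because each has its own network stack:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to bind a TCP socket to the given port on all interfaces. */
static int bind_port(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        /* One kernel means one port namespace, so the second bind
         * fails with EADDRINUSE. Two VMs each have their own
         * (virtual) network stack, so both would succeed. */
        fprintf(stderr, "bind(%u): %s\n", (unsigned)port, strerror(errno));
        close(fd);
        return -1;
    }
    return fd;
}

int main(void) {
    int a = bind_port(8080);    /* succeeds */
    int b = bind_port(8080);    /* fails: address already in use */
    if (a >= 0) close(a);
    if (b >= 0) close(b);
    return 0;
}
```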
It's an unfortunate terminology collision
Those are really two different terms that unfortunately use the same English words, but have only a rather abstract connection.
IBM used the term "virtual machine" first, so I guess we can't rename that one to "virtual server" or something.
Too bad "software framework" doesn't have VM in its initials. If you think of the JVM that way it will be obvious that you are really just running a framework in a VM, not a thing inside the same kind of thing...
So a real VM can casually give away super user shell accounts, ssh access, software installation privs, ....
what real benefit is it to run your Java application stack inside another VM?
By doing this, your JVM will run on virtualized hardware that you can modify and run in parallel with other virtual machines. This is a nice way to slice a big server into "shares" that you can allocate on demand.
(EDIT: I'm answering a comment from the OP directly in this answer)
I get what you're saying, but why would one not be able to do the very same thing as separate processes on the host OS?
I could mention that a guest can possibly run another OS, but this is not the most important part. As pointed out in another answer, the biggest difference is that a virtual machine is isolated from other VMs; it's a real dedicated environment. The port stuff was a good example, but I prefer to illustrate it this way: another process won't eat "your" CPU cycles. This is a very important difference, especially for IT teams that usually don't like to share resources. Instead, you can size a virtual machine exactly as needed, possibly dynamically, and bill IT teams for what they really use. This is IMO what makes mutualisation of resources actually possible (and thus cost cutting).

Virtualization and why it is good for programmers

Why does it help to know about virtualization from a programmer's perspective? Apart from testing and developing on several different platforms without needing to switch between operating systems, is there a particular reason why virtualization is important for a programmer? Are there any details that must be kept in mind before developing on virtual instances?
I use it for testing our installer, because it is important to check whether the application will work on a clean installation of the operating system.
I used to do these tests by keeping a hard drive with a fresh operating system installation and making a copy of that disk for (almost) every new test run. This was very time consuming, and the virtual machine solution has saved me a lot of time. Note that this even allows you to do remote debugging as easily as when using two non-virtual machines.
Note: If you're interested, I'm using VirtualBox, which is a very good and free virtualization tool.
If you develop a driver or something very close to the hardware, with a high risk of crashing the machine, you will be glad to be working on a virtual machine.
Reverting to an old state is easier than repairing a damaged OS.
One of the main advantages is having your entire development environment as a single image file. I have a perfectly configured version of Windows Server, Visual Studio, ReSharper, etc. I can easily try a new version of something on a copy of this virtual machine without worrying about it causing problems.
I can also back up my entire dev environment to transfer it to another physical machine very easily. I've been through 3 machines in this office alone so that was a lifesaver in itself.
The only real trade-off I see is performance. You generally have to use fewer physical CPU cores than you actually have, and less memory. With a sufficiently powerful machine this is not much of a problem though.
Edit: As nader said, I/O is obviously important for most projects as well. Although developing on a virtual machine does mean a fairly large I/O penalty compared with a native OS install, in practice I rarely find it to be a problem. The superior random access capabilities of SSDs are helping to mitigate this drawback as well.
Being able to completely reset the state of the system is very useful to debug applications which modify their environment - If the actions are repeated after a reset, and they're constrained to the sandbox environment of the VM, you are pretty much guaranteed to get the same result.
We have a large number of different versions / customer customisations of our software, and it's not possible for 2 installs of our software to coexist on the same machine. Virtualisation allows us to replace the 50-60 physical machines that we need to maintain for testing and problem reproduction with 2-3 virtual servers - it takes around 10 minutes to make a copy of a VHD template we have and create a new virtual machine, and as long as you allocate 1-2 GB of RAM the performance is comparable to that of a (slow) physical machine.
Virtual machines are also great for build machines.
Personally I do all of my development on my desktop machine for best performance, and remote debug into VMs. I don't run virtual machines on my desktop as they use up too much RAM; we have dedicated virtual servers for that.
Good for development, because the virtual machine can have the same server configuration as the production server.
https://stackoverflow.com/questions/905926/developer-software-setup
From a user-space application's point of view, there should be no difference between developing for a virtualised OS and a normal OS. There may be some gotchas if your code makes explicit assumptions about the machine's memory size and number of processors and believes what the hypervisor tells it.
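One way to probe for that, sketched below for x86 with GCC/Clang: CPUID leaf 1 reports a "hypervisor present" bit (ECX bit 31) that most hypervisors set. A clear bit is no guarantee of bare metal, since a hypervisor may hide itself:

```c
#include <cpuid.h>   /* GCC/Clang builtin header, x86 only */
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: feature bits. Bit 31 of ECX is reserved as 0
     * on real hardware and set by most hypervisors. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 unsupported");
        return 1;
    }
    if (ecx & (1u << 31))
        puts("hypervisor bit set: likely running in a VM");
    else
        puts("hypervisor bit clear: likely bare metal (or a hidden VM)");
    return 0;
}
```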
I'm surprised no one has mentioned the ease of deployment. All you need to do is get the build onto the virtual O/S, and then you can copy the image to as many new servers (running some kind of virtualization solution [like VMWare]) as you want, easily scaling your application.
Record the state of a bug in a program, and send it to the developer (along with the entire "machine").
Testing your code on various operating systems, some of which you don't have.
Working in a more protected environment, making sure that the code doesn't harm your system - useful for understanding dangerous programs such as viruses and developing security against them, for writing potentially destructive hard-drive programs, and for anything that can have catastrophic effects on your system.
Easily write your own O.S without needing to write to 'real' boot sectors, a potentially harmful act (hope this is nothing new...).
Quickly use tools and programs not found on your own O.S.
Demonstrate a program at various points in time by restoring a virtual machine; this is quicker and less prone to failure than trying to recreate the state minutes before the demo.
Less directly connected to programming, but surfing via a virtual machine (for example to read documentation) has the added value that your own important system (and code) is less likely to be harmed by malicious programs.
From my experience, in most cases the answer is typically "no", once testing and targeting multiple platforms are set aside - both of which are huge reasons to be familiar with "desktop" VM solutions. Others have done an excellent job of listing rare exceptions, like debugging kernel code.
There are some quirks one must be aware of when running on a virtual machine. This is hardly an exhaustive list:
Loss of precision, or even time reversal, in high-resolution timers due to emulation of hardware resources (depends somewhat on the VM platform and operating system)
Virtual network interfaces are usually bridged. In one of the leading VM solutions we've seen some extremely odd behavior in the host system with an application that sets up its own bridge between virtual interfaces - behavior which logically should not affect the host.
Usage models - if your product has Orwellian licensing checks or records state-dependent behavior when interacting with remote systems, you should account for what would happen if a system were "paused" and "restarted", or restarted from an earlier "state". Normally this kind of thing would be taken into account anyway in a robust implementation.
If you are developing in a virtual environment, you will want to make sure you know what specifications were used to create the environment. If you have, say, a 4 GB machine and create a virtual environment with 1 GB, you will want to make sure things in your development do not grow to the point of overrunning the memory, as this will cause subtle performance problems. I personally ran into this and it was a pretty tricky thing to track down. The scenario was that I was fixing a bug and testing it in a virtual environment. I did not set up the virtual environment, by the way... The application took a performance hit because of all of the memory swapping that was taking place.
A very good use for a virtual environment is when you are developing applications that mess with the Windows GINA. It's much easier to reinstall a virtual environment than an entire PC... (been there, done that too).
I do all of my development on a virtual XP instance under VMWare Fusion so that I can use a Mac for everything and still write .NET code ;-)
Sometimes they are necessary, because the platform you are programming for doesn't support the standard developer environment. One such example is SharePoint. As of SharePoint 2007, you still need a server OS to install SharePoint 2007, WSS, and the Visual Studio SharePoint Extensions (VSeWSS).
Thus for SharePoint I have to use a Windows Server VM to do my development work. SharePoint 2010 supports installation on Vista and 7 x64, but I will still use a VM, because I don't want SharePoint on my main machine slowing everything down. Rather, I want it in a VM where the services are on when needed and off when they aren't, without having to manually turn each service off and on. This is in addition to the many great answers posted above.