Can KVM be used inside a GCE instance?

Is it possible to run a KVM virtual machine inside of a Google Compute Engine instance? Nested virtualization, in short?

As of right now, the virtualized environment that GCE instances run on doesn't expose the virtualization extensions KVM requires to function. The installer indicates as much, and running:
sudo /etc/init.d/qemu-kvm start
[FAIL] Your system does not have the CPU extensions required to use
KVM. Not doing anything. ... failed!
PS - Even so, at least in theory, there's nothing preventing the execution of virtualized environments that do not depend on these extensions: Docker, QEMU (stand-alone), etc.
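For example, QEMU can fall back to its built-in software emulator (TCG) when the KVM extensions are missing. A minimal sketch, assuming a reasonably recent QEMU and a placeholder guest image disk.img:

# Pure software emulation (TCG): no VT-x/AMD-V required, just slower.
qemu-system-x86_64 -accel tcg -m 1024 -hda disk.img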

Yes, you can use nested virtualization in the GCE environment.
When you first asked this question, and when #sammy-villoldo first answered, you could not.
But September 28, 2017 Google announced:
Google Compute Engine now supports nested virtualization in beta
You used to need to be careful, because nested virtualization was restricted to CPU platforms based on Haswell or newer, and those were not available everywhere. Scanning the list now, it appears every GCE zone has Haswell or newer as the default, so that's no longer a problem.
Their documentation contains all the details.
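At the time of the beta, the documented flow was to create a custom image carrying a special VMX license and boot instances from it. A sketch with placeholder disk, zone, and image names:

# Build an image from an existing boot disk, adding the license that
# enables nested virtualization (disk/zone/image names are placeholders).
gcloud compute images create nested-virt-image \
    --source-disk my-boot-disk --source-disk-zone us-central1-a \
    --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

# Inside an instance booted from that image, verify VMX is exposed:
grep -cw vmx /proc/cpuinfo    # non-zero output means KVM can work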
Even in CI environments layered on GCE it is now possible to do nested virtualization; Travis CI, for instance, implements it with their ubuntu bionic / language: generic (or bash) images. You can create a free GitHub or GitLab account and connect a repo to Travis to play with it at zero cost if you like.
Here is an example config https://travis-ci.org/ankidroid/Anki-Android/builds/607187626/config

What is meant by a "lightweight vm" as discussed in the technology stack for WSL2?

My understanding is that Docker on Windows currently uses a "regular VM" under the hood. WSL2 (and Docker) will switch to using a lightweight VM. But what does this actually mean; is it just a smaller initial memory footprint with some memory-passthrough technique, or is there more to it?
TL;DR
The big change is the move from WSL's virtualized Linux system-call interpreter sitting on top of the Windows kernel to a full Linux kernel shipped with WSL2. This move dramatically cuts down on virtualization overhead.
Juicy Details
Directly from the DevBlogs Post on the announcement of WSL2:
Microsoft will be shipping a Linux kernel with Windows ... This kernel has been specially tuned for WSL 2. It has been optimized for size and performance to give an amazing Linux experience on Windows.
This is a departure from the current (as of writing) WSL, which doesn't use a proper Linux kernel, as described in the original WSL overview from 2016.
WSL executes unmodified Linux ELF64 binaries by virtualizing a Linux kernel interface on top of the Windows NT kernel.
The WSL LXCore service runs an interpreter of sorts for native Linux system calls, and its own VolFs and DriveFs components provide file access between WSL and Windows 10. Together, these essentially perform the role that a translation layer performs in a traditional VM product, the likes of VirtualBox.
Citation: MSDN Blog
Little is known as of yet about the exact system employed for WSL2; what we do know comes from the Build 2019 WSL2 talk. To help answer the question regarding file-system changes and the lightweight VM:
In the talk's architecture diagram, we see that the Linux kernel runs alongside the NT kernel instead of as a virtualized environment on top of it (as a Windows service). The lightweight VM likely comes into play to facilitate the necessary interactions between the two kernels.
This gives a peek into the inner workings of that interoperability layer. As discussed verbally in the Build 2019 talk, the two kernels serve each other files via natively hosted file servers (inaccessible to the Windows userspace by means other than WSL2).
Again, much is still up in the air from our perspective as users, given the limited details available at the time of writing.
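From the user's side, though, switching between the two architectures is already scriptable; a small sketch (the distro name "Ubuntu" is a placeholder):

# Show installed distros and which WSL version each one runs under
wsl --list --verbose
# Convert a distro to the WSL2 lightweight-VM backend
wsl --set-version Ubuntu 2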

Which Intel virtualization techniques are necessary for Docker?

On a Linux system running on an Intel CPU, which Intel virtualization technologies are necessary to execute a Docker container? E.g. there are VT-x, ...
Or is there no need for such a technology because Docker is somehow different from existing virtualization solutions like VirtualBox? In that case, why is there no need?
None. Docker uses a completely different system - it's not running a virtual machine so much as a super chroot. See the question below:
Can I run Docker directly on a non VT-X machine (no Virtual Machine used)?
The tutorials that tell you you'll need VT-x are usually based on running Docker on Windows (via Hyper-V) or in VirtualBox.
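A quick way to convince yourself of this on Linux: a container shares the host's kernel rather than booting its own, so the CPU's virtualization flags never come into play. A small sketch, assuming Docker and an alpine image are available:

# The container reports the host's kernel version - there is no guest kernel:
uname -r
docker run --rm alpine uname -r    # prints the same version as above

# VT-x (vmx) / AMD-V (svm) flags are irrelevant to Docker on Linux:
grep -Ec 'vmx|svm' /proc/cpuinfo   # can be 0 and Docker still runs fine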

What are the RedHat Minishift hardware requirements?

As much as I've looked, I can't find the hardware requirements for running minishift. Nothing mentioned in the Container Development Kit documentation, and the OpenShift documentation only mentions hardware requirements for production deployments.
I've been following RedHat's advice on running their Container Development Kit with nested KVM.
https://developers.redhat.com/blog/2018/02/13/red-hat-cdk-nested-kvm/
I may be pushing the limits. On a MacBook Air with 4x1.7 GHz cores and 8 GB RAM, I'm running Fedora 27. I gave 6 GB RAM and 2 cores to the RHEL server, and when starting Minishift I saw that it was giving 2 cores and 4 GB RAM to its VM. It took about 30 minutes to download and extract the 4 Docker images. Things got progressively worse from there.
I’m trialing OpenShift Online. Would I run into a world of pain using Minishift directly on Fedora?
You would be better off running Minishift directly on Fedora 27 with KVM. Personally, I use Minishift on Fedora 27. Nested virtualisation will not give optimum performance, because Minishift creates yet another VM to provision OpenShift, so I would not recommend nested virtualisation for Minishift. With the default settings (4 GB RAM, 2 cores and a 20 GB disk) you should be able to run a few simple microservices. The resource requirements come from the application you are trying to run on top of it, so if you are running an application that needs a lot of resources, you need to increase the resources given to Minishift.
Once you know how many resources are right for your application, save your configuration with the "minishift config set" command. It will persist the settings across start/delete.
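For example (the values here are illustrative; check your Minishift version's docs for the accepted units):

# Persist resource settings across minishift delete/start cycles
minishift config set cpus 4
minishift config set memory 8192      # in MB on older releases
minishift config set disk-size 50GB
minishift config view                 # confirm what the next start will use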

How can I convert Solaris Containers to something VMWare or VirtualBox can use?

I have a number of application environments based in Solaris containers. Is there some method to take those environments and port them to something usable by either VMWare Workstation or Sun VirtualBox? Both the source and target hardware is x86, if that helps.
It's possible to migrate a container to a new computer. See for example https://www.sun.com/offers/details/moving_containers.xml. You could set up a VM to be the new container host, then migrate the container into the VM.
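The documented flow is roughly detach, copy, attach. A rough sketch, assuming a zone named myzone with zonepath /zones/myzone (both names are placeholders):

# On the source host: stop and detach the zone
zoneadm -z myzone halt
zoneadm -z myzone detach
# Archive the zone path and copy it to the new host (the VM)
cd /zones/myzone && tar -cf /tmp/myzone.tar .
# On the VM, after restoring the files under /zones/myzone:
zonecfg -z myzone create -a /zones/myzone
zoneadm -z myzone attach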
Not automatically, that I know of anyway. But you should be able to spin up a VM in about 2-5 minutes in VirtualBox or VMware. Moving the apps is where the work is, but it all depends on how you set things up in the zones. If the systems are all in one or a few directories, then just move them over and configure things again. How many zones are you talking about? Besides, this is an opportunity to clean things up, if needed, anyway.

How can developers make use of Virtualization?

Where can virtualization techniques be applied by an application developer? How can virtualization be applied on a day-to-day basis?
I would like to hear from veteran developers who make use of it. I am interested in the following things:
How it helps in development.
How it could be used for testing purposes.
What are the recommended practices.
The main benefit, in my view, is that in a single machine, you can test an application in:
Different OSs, in case your app is multiplatform
Different configurations, like testing a client in one machine and a server in the other, or trying different parameters
Different performance characteristics, like with minimal CPU and RAM, or with multiple cores and high amounts of RAM
Additionally, you can provide VM images to distribute applications preconfigured, be it for testing or for running applications in virtualized environments where that makes sense (for apps that do not demand much power)
Can't say I'm a veteran developer, but I've used virtualization extensively when environments need to be controlled. That goes for:
Development: not only is it really useful to have VMs about for different deployment environments (e.g. browser versions, Windows XP / Vista / 7) but especially for maintenance it's handy to have a VM with the right development tools configured for a particular job.
Testing: this is where VMs really shine: it's great to have different deployment environments that can be set back to a known good configuration and multiple server instances running in parallel to test load balancing.
I've also found it useful to have a standard test image available that I can run locally to verify that a fix works. If it doesn't then I can roll back to the previous snapshot with no problems.
I've been using Virtual PC running Windows XP to test products I'm developing. I have clients who still need XP support while my primary dev environment is Vista (haven't had time to jump to Win7 yet), so having a virtual setup for XP is a big time saver.
Before each client drop, I build and test on my Vista dev machine then fire up VPC with XP, drag the binaries to the XP guest OS (enabled by installing Virtual PC additions on the guest OS) and run my tests there. I use the Undo disk feature of Virtual PC so I can always start with a clean XP image. This process would have been really cumbersome without virtualization.
I can now dump my old PCs at the local PC Recycle with no regrets :)
Some sort of test environment: if you are debugging malware (either writing it or developing a countermeasure against it), it is not clever to use the real OS. The only possible disadvantage is that viruses can detect that they are being run under virtualization. :( One way they can do this is that VM engines emulate only a finite set of hardware.
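For instance, on Linux a guest can often tell it is virtualized just from what the VM engine exposes; a rough sketch:

# Most hypervisors set the "hypervisor" CPU flag in the guest
grep -q hypervisor /proc/cpuinfo && echo "probably inside a VM"
# Emulated devices also give the game away (VirtualBox, VMware, virtio, ...)
lspci | grep -iE 'virtualbox|vmware|virtio|qemu'
# systemd-based systems ship a ready-made check
systemd-detect-virt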