What are the Red Hat Minishift hardware requirements?

As much as I've looked, I can't find the hardware requirements for running minishift. Nothing mentioned in the Container Development Kit documentation, and the OpenShift documentation only mentions hardware requirements for production deployments.
I've been following Red Hat's advice on running their Container Development Kit with nested KVM.
https://developers.redhat.com/blog/2018/02/13/red-hat-cdk-nested-kvm/
I may be pushing the limits. On a MacBook Air with 4x1.7GHz & 8GB RAM I'm running Fedora 27. I gave 6GB RAM and 2 cores to the RHEL server, and when starting Minishift I saw that it allocated 2 cores and 4GB RAM to its VM. It took about 30 minutes to download and extract the 4 Docker images. Things got progressively worse from there.
I’m trialing OpenShift Online. Would I run into a world of pain using Minishift directly on Fedora?

You would be better off running Minishift directly on Fedora 27 with KVM; personally, that is what I use. Nested virtualisation will not give optimum performance, because Minishift creates yet another VM to provision OpenShift, so I would not recommend it for Minishift. With the default settings, i.e. 4GB RAM, 2 cores, and a 20GB disk, you should be able to run a few simple microservices. The real resource requirement comes from the application you are trying to run on top of it: if you are running an application that needs a lot of resources, then you need to increase the resources allocated to Minishift.
Once you know how many resources are right for your application, you should save your configuration using the "minishift config set" command. It will persist the settings across start/delete.
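For example (the values here are illustrative; size them to your workload):

```shell
# Persist resource settings; they survive `minishift delete` / `minishift start`
minishift config set memory 8GB
minishift config set cpus 4
minishift config set disk-size 40GB

# Review what is currently persisted
minishift config view
```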

Related

MongoDB Performance : Windows 2016 Server DC vs Same machine running Hyper-V Ubuntu

I have read a few articles that say that running MongoDB on Windows is a lot slower than on Linux. They mention that filesystems like XFS are better than NTFS, and that MongoDB is designed more with Linux in mind.
Reference Why Mongodb performance better on Linux than on Windows?
So my question is, has anyone done any benchmarking of MongoDB performance on Windows (e.g. installed directly on the server) vs the same machine (running Windows) but running it in a VM (Ubuntu 18.04, XFS) via Hyper-V?
the same machine (running Windows) but it running a VM (Ubuntu 18.04, XFS) via HyperV
The reason why Linux performs better than Windows for MongoDB is because Linux is more efficient with hardware resources (disk, memory and networking were called out in the post you referenced). Putting Linux in a Windows VM does not eliminate the overhead of Windows that makes it slower for MongoDB. Instead you would have two overheads (Linux AND Windows).
You should also troubleshoot your actual performance problems (per your other post) rather than trying random things like OS changes in the hope that they will make your performance issues go away. The particular issue might go away but chances are you'll run into another one down the road, then what?

Kubernetes in docker for Ubuntu

Is there a Kubernetes-in-Docker option for Ubuntu that works like Docker for Mac (https://blog.docker.com/2018/01/docker-mac-kubernetes/)
and Docker for Windows (https://docs.docker.com/docker-for-windows/#kubernetes)?
Minikube consumes a lot of resources, and I want to try a lighter alternative. I found that Docker for Mac supports Kubernetes, but my machine runs Ubuntu 18.04.
As you may know, there are a lot of projects that offer a K8s solution. Minikube is the closest thing to an official mini distribution for local testing and development, but if you want to try lightweight options you can check:
Kind runs Kubernetes clusters in Docker containers. It supports multi-node as well as HA clusters. Because it runs K8s in Docker, kind can run on Windows, Mac, and Linux, though it may lack some developer-friendly conveniences.
K3s is a project by Rancher: a lightweight Kubernetes offering suitable for edge environments, IoT devices, CI pipelines, and even ARM devices like Raspberry Pis. It runs on any Linux distribution without additional external dependencies or tools. K3s achieves its light weight by replacing Docker with containerd and using sqlite3 as the default DB (instead of etcd). This solution consumes about 512 MB of RAM and 200 MB of disk space.
K3d runs k3s, the lightweight Kubernetes distribution above, in Docker containers (similar in spirit to kind).
Microk8s runs upstream Kubernetes as native services on Linux systems that support snap. It is a good option if you are running Ubuntu on your laptop, and there is a very good installation tutorial available.
And there are plenty more. You can check what solution suits you best.
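As a sketch of how little ceremony k3s needs, its install is a single script (the URL is the one Rancher publishes; as always, review piped scripts before running them):

```shell
# Install and start k3s as a systemd service
curl -sfL https://get.k3s.io | sh -

# k3s bundles its own kubectl; check that the node came up
sudo k3s kubectl get nodes
```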
Check out kind; it is Kubernetes in Docker.
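A minimal kind session looks roughly like this (assuming Docker and the kind binary are already installed):

```shell
# Create a single-node cluster; each "node" is a Docker container
kind create cluster

# Talk to it with kubectl via the context kind configures
kubectl cluster-info --context kind-kind

# Tear everything down again
kind delete cluster
```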

What are the minimum requirements to run an Eclipse Che server for 1 user?

I like the features of Codenvy, but the server takes too long to wake up and can't be used from a mobile device. Yes, I code in the subway as a hobby. I had the idea of installing it on a small VPS. I will be coding a Django site roughly the same size as a small eShop. What would be the minimum requirements for the server to run it smoothly?
If you're using Minishift I suggest granting it at least 6GB of RAM.
minishift config set memory 6GB
For more information, see the Eclipse Che admin guide
See Eclipse Che documentation - Single-User: Installation on Docker:
Minimum one CPU, 2GB of RAM, 3GB disk space.
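For a single-user trial on a small VPS, the Docker launcher from that guide looks roughly like this (the /opt/che/data path is just an example host directory for Che's data):

```shell
# Launch the single-user Che server; it needs the Docker socket
# to spin up workspace containers, and a host directory for its data.
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/che/data:/data \
  eclipse/che start
```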

Can KVM be used inside a GCE instance?

Is it possible to run a KVM virtual machine inside of a Google Compute Engine instance? Nested virtualization, in short?
As of right now, the virtualized environment that GCE instances run on does not offer the virtualization extensions KVM requires to function. The installer indicates as much, and running:
sudo /etc/init.d/qemu-kvm start
[FAIL] Your system does not have the CPU extensions required to use
KVM. Not doing anything. ... failed!
PS - Even so, at least in theory, there's nothing preventing the execution of virtualized environments that do not depend on these extensions: Docker, QEMU (stand-alone), etc...
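You can check for those CPU extensions yourself from inside any Linux instance:

```shell
# KVM requires hardware virtualization extensions exposed to the CPU:
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means KVM cannot run here.
vmcount=$(grep -E -c '(vmx|svm)' /proc/cpuinfo || true)
echo "virtualization-capable cores: $vmcount"
```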
Yes, you can use nested virtualization in the GCE environment.
When you first asked this question (and when #sammy-villoldo first answered), you could not.
But on September 28, 2017, Google announced:
Google Compute Engine now supports nested virtualization in beta
You used to need to be careful, as the feature was restricted to CPU architectures based on Haswell or newer, and those were not available everywhere. Scanning the list now, every GCE zone appears to offer Haswell or newer as the default, so that is no longer a problem.
Their documentation contains all the details.
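Under the beta, you enabled nested virtualization by creating a custom image that carries a special VMX license. The sketch below follows the pattern from Google's documentation at the time; the disk, image, and zone names are placeholders:

```shell
# Create an image from an existing boot disk, attaching the license
# that turns on VMX inside guests built from it.
gcloud compute images create nested-vm-image \
  --source-disk example-disk --source-disk-zone us-central1-a \
  --licenses "https://compute.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

# Boot an instance from that image (on a Haswell-or-newer machine type),
# then verify from inside it that the extensions are visible.
gcloud compute instances create nested-vm \
  --image nested-vm-image --zone us-central1-a
```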
Even CI environments layered on GCE can now do nested virtualization; Travis CI, for instance, implements it with their Ubuntu Bionic / language: general (or bash) images. You can create a free GitHub or GitLab account and connect a repo to Travis to play with it at zero cost if you like.
Here is an example config https://travis-ci.org/ankidroid/Anki-Android/builds/607187626/config

How can developers make use of Virtualization?

Where can virtualization techniques be applied by an application developer? How can virtualization be applied on a day-to-day basis?
I would like to understand this from veteran developers who use it. I am interested in the following things:
How it helps in development.
How it could be used for testing purposes.
What are the recommended practices.
The main benefit, in my view, is that in a single machine, you can test an application in:
Different OSs, in case your app is multiplatform
Different configurations, like testing a client in one machine and a server in the other, or trying different parameters
Different performance characteristics, e.g. with minimal CPU and RAM, or with multiple cores and large amounts of RAM
Additionally, you can distribute applications as preconfigured VM images, whether for testing or for running them in virtualized environments where that makes sense (for apps which do not demand much power).
Can't say I'm a veteran developer, but I've used virtualization extensively when environments need to be controlled. That goes for:
Development: not only is it really useful to have VMs about for different deployment environments (e.g. browser versions, Windows XP / Vista / 7) but especially for maintenance it's handy to have a VM with the right development tools configured for a particular job.
Testing: this is where VMs really shine: it's great to have different deployment environments that can be set back to a known good configuration and multiple server instances running in parallel to test load balancing.
I've also found it useful to have a standard test image available that I can run locally to verify that a fix works. If it doesn't then I can roll back to the previous snapshot with no problems.
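That snapshot-and-rollback workflow can be scripted; with VirtualBox, for example, it might look like this (the VM and snapshot names are hypothetical):

```shell
# Take a known-good snapshot of a test VM once it is configured
VBoxManage snapshot "test-xp" take "clean-config"

# ...run tests, install builds, break things...

# Roll the VM back to the known-good state (power it off first)
VBoxManage controlvm "test-xp" poweroff
VBoxManage snapshot "test-xp" restore "clean-config"
```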
I've been using Virtual PC running Windows XP to test products I'm developing. I have clients who still need XP support while my primary dev environment is Vista (haven't had time to jump to Win7 yet), so having a virtual setup for XP is a big time saver.
Before each client drop, I build and test on my Vista dev machine then fire up VPC with XP, drag the binaries to the XP guest OS (enabled by installing Virtual PC additions on the guest OS) and run my tests there. I use the Undo disk feature of Virtual PC so I can always start with a clean XP image. This process would have been really cumbersome without virtualization.
I can now dump my old PCs at the local PC Recycle with no regrets :)
Some sort of test environment: if you are debugging malware (either writing it or developing a defense against it), it is not clever to use your real OS. The one possible disadvantage is that viruses can detect that they are being run under virtualization. :( One way they can do this is that VM engines emulate only a finite set of hardware.