I am trying to use KubeVirt with a GKE cluster.
I found I am able to create a nested-virtualization-enabled GCP VM, but I didn't find a way to achieve the same thing for GKE cluster nodes.
If I cannot enable nested virtualization for GKE cluster nodes, I can only use KubeVirt with debug.useEmulation, which is not what I want.
Thanks
Yes you can -- it isn't even hard to do, it just isn't very intuitive.
Start a GKE cluster with ubuntu/containerd, n1-standard nodes, and a minimum CPU platform of Haswell. I think you also need to enable "Basic Authentication" to get virtctl working (sorry).
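For reference, the cluster creation could look something like this (the cluster name, zone, and node size are placeholders of my own, not anything the setup requires):

gcloud container clusters create kubevirt-test --zone us-central1-a --image-type UBUNTU_CONTAINERD --machine-type n1-standard-4 --min-cpu-platform "Intel Haswell"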
Find the instance template used for your new cluster, then determine the proper source image:
gcloud compute instance-templates describe $TEMPLATE_NAME --format=json | jq ".properties.disks[0].initializeParams.sourceImage"
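If you don't know the template name, you can list the templates first; the filter below is just a guess at the GKE naming pattern:

gcloud compute instance-templates list --filter="name ~ gke"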
Create a copy of the source disk with nested virtualization enabled:
gcloud compute images create $NEW_IMAGE_NAME --project $PROJECT --source-image $SOURCE_IMAGE --source-image-project $SOURCE_PROJECT --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
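As a quick optional sanity check, the new image should now list the enable-vmx license:

gcloud compute images describe $NEW_IMAGE_NAME --format="value(licenses)"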
Use "Create Similar" on the template for your GKE cluster. Change the boot disk to $NEW_IMAGE_NAME. You will also need to drill down to networking/alias and change the default subnet to your pod network.
Trigger a rolling update on the group for your GKE nodes to move them to the new template.
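If you prefer the CLI over the console for this step, it should look roughly like the following (group, template, and zone are placeholders):

gcloud compute instance-groups managed rolling-action start-update $GROUP_NAME --version template=$NEW_TEMPLATE_NAME --zone $ZONE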
You can now install KubeVirt (I had to use v0.38.1 instead of the current release).
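If I remember the standard operator-based install correctly, pinning that version looks like this:

export KUBEVIRT_VERSION=v0.38.1
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml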
Caveats: I don't know how to use Google disk images with KubeVirt, which would be an obvious match. I haven't even figured out how to get private GCR working with CDI. Oh, and the console doesn't work due to websocket problems. But... you can shell into a GKE node and see /dev/kvm, and you can create a VM with KubeVirt and then SSH into it, so yes, it does work.
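To check that nested virtualization is really exposed to a node, something like this should show the KVM device ($NODE_NAME and $ZONE are placeholders):

gcloud compute ssh $NODE_NAME --zone $ZONE -- ls -l /dev/kvm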
Anyone know how to make any of this better?
Currently, nested virtualization is available only on GCE, as per these docs.
There is already a question about supporting nested virtualization on GKE, and it can be found here. I'd say it's not introduced yet; that's why you cannot find proper documentation about GKE and nested virtualization.
Also, please consider that GCE and GKE are quite different.
A Google Compute Engine VM instance is not managed by Google, so beyond the ready-made base image, you can do whatever you need, as with a normal VM.
However, Google Kubernetes Engine was created especially for containers. Those VMs are managed by Google. GKE already creates the cluster for you, and all VMs are automatically part of the cluster. In GKE you are unable to run Minikube or kubeadm.
Here are some characteristics of GKE.
Related
I am currently running two Kubernetes clusters.
The first cluster runs on bare metal, and the second runs on EKS.
Since maintaining EKS costs a lot, I am looking for a way to turn this setup into a single cluster that autoscales on AWS.
I have looked at several solutions such as RHACM, Rancher, and Anthos,
but those solutions are for managing multiple clusters.
I just want to turn this into an "on-premises cluster that autoscales (onto AWS) when it runs out of resources".
I found the "EKS Anywhere" solution, but since its price is too high, I want to build a similar architecture myself.
I need advice on ingress controllers, (physical) load balancers, or any other architecture that could satisfy those conditions.
Cluster API is probably what you need. It is a concept of creating Clusters with Machine objects. These Machine objects are then provisioned using a Provider. This provider can be Bare Metal Operator provider for your bare metal nodes and Cluster API Provider AWS for your AWS nodes. All resting in a single cluster (see the docs below for many other provider types).
You will run a local Kubernetes cluster which will have the Cluster API running in it. This will include components that will allow you to be able to create different Machine objects and tell Kubernetes also how to provision those machines.
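As a rough sketch of what the workflow looks like (I haven't tested this exact sequence; the cluster name, Kubernetes version, and machine count are placeholders):

# Turn an existing cluster into a Cluster API management cluster with the AWS provider
clusterctl init --infrastructure aws
# Render a workload cluster manifest and apply it; the Machines get provisioned on AWS
clusterctl generate cluster aws-burst --infrastructure aws --kubernetes-version v1.24.0 --worker-machine-count 3 | kubectl apply -f -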
Here is some more reading:
Cluster API Book: Excellent reading on the topic.
Documentation for CAPI Provider - AWS.
Documentation for the Bare Metal Operator. I worked on this project for a couple of years and the community is pretty amazing. This GitHub repository hosts the CAPI provider for bare metal nodes.
This should definitely get you going. You can start by running different providers individually to get a taste of how they work and then work with Cluster API and see it in function.
I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or by time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution, since it is very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster, as sketched below.
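A minimal sketch of the kubeadm route (the pod CIDR shown matches the Flannel default; tokens and hashes come from the kubeadm init output):

# On the control-plane VM
kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker VM, using the join command printed by kubeadm init
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# To simulate a node failure for your scheduler tests, stop the kubelet on a worker
systemctl stop kubelet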
I'm looking for a way to create a live Kubernetes cluster without too much hassle.
I've got a nice HP server, which could run a few VMs with Kubernetes on top. The reason for VMs is to isolate this from the host machine. Ideally, the VMs should only run containerd and kubelet and be essentially disposable for node upgrades.
However, I get lost in what tooling would provide this. minikube? microk8s? k3s? Rancher? Charmed Kubernetes? Some existing QEMU image? Some existing Vagrant config? The more managed it is, the better. So far I have liked minikube, but it doesn't start on reboot, for example, nor does it have the flexibility for node upgrades.
I have tried a lot of tools while training for the CKAD certification. For my usage, the best option for a local cluster was k3s with multipass (for online clusters, I used Civo). Both perform their respective tasks very quickly, which allows me to create clusters at will and dispose of them to work in clean environments.
multipass to create VMs quickly
k3s, which is nothing other than a lightweight Kubernetes (see the sketch below)
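To give an idea, a two-node cluster can be stood up roughly like this (VM names and sizes are arbitrary, and the memory flag may be spelled --memory on newer multipass versions):

multipass launch --name k3s-server --cpus 2 --mem 2G --disk 10G
multipass exec k3s-server -- bash -c "curl -sfL https://get.k3s.io | sh -"
TOKEN=$(multipass exec k3s-server -- sudo cat /var/lib/rancher/k3s/server/node-token)
SERVER_IP=$(multipass info k3s-server | grep IPv4 | awk '{print $2}')
multipass launch --name k3s-agent
multipass exec k3s-agent -- bash -c "curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER_IP:6443 K3S_TOKEN=$TOKEN sh -"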
You can easily find tutorials to automate the creation of clusters, for example:
https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5
https://github.com/superseb/multipass-k3s
I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
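To give an idea of the workflow (the inventory layout follows the repo's sample; host IPs and the cluster name are placeholders you fill in):

git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
pip install -r requirements.txt
cp -r inventory/sample inventory/mycluster
# Fill the inventory with your master and worker node IPs, then:
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml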
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
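For the kubeadm HA route, the core of it looks roughly like this (the load balancer endpoint, tokens, and keys are placeholders taken from the kubeadm init output):

# First control-plane node, fronted by a load balancer for the API server
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs
# Additional control-plane nodes join with the --control-plane flag
kubeadm join LOAD_BALANCER_DNS:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>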
However, in Kubernetes there is a different definition of "self-hosted": running Kubernetes itself as a workload in Kubernetes. If you are interested in a truly self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose the cloud provider used to provision your infrastructure (which is done using Terraform), and the fact that it gives you upstream Kubernetes is a big plus too.
Internally, it uses bootkube to bring up a temporary control plane, which consists of:
kube-apiserver
kube-controller-manager
kube-scheduler
Once this temporary control plane is running, the permanent control-plane objects are injected into the API server, and we have our Kubernetes cluster.
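If you want to poke at the bootkube mechanism itself, the flow is roughly the following (the asset directory is arbitrary, and I've elided the flags pointing render at your API server and etcd endpoints):

# Render the control-plane manifests into an asset directory
bootkube render --asset-dir=assets
# Run the temporary control plane until the self-hosted one takes over
bootkube start --asset-dir=assets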
Have a look at this KubeCon talk given by CoreOS, which explains how this works.
Is it in any way possible to configure a Kubernetes cluster that utilizes resources from multiple IaaS providers at the same time, e.g. a cluster running partially on GCE and AWS? Or a Kubernetes cluster running on your bare metal and an IaaS provider? Maybe in combination with some other tools like Mesos? Are there any other tools like Kubernetes that provide this capability? If it's not possible with Kubernetes, what would one have to do in order to provide that feature?
Any help or suggestions would be very much appreciated.
There is currently no supported way to achieve what you're trying to do. But there is a Kubernetes project under way to address it, which goes under the name of Kubernetes Cluster Federation, alternatively known as "Ubernetes". Further details are available here:
http://www.slideshare.net/quintonh/federation-of-kubernetes-clusters-aka-ubernetes-kubecon-2015-slides-quinton-hoole
http://tinyurl.com/ubernetesv2
http://tinyurl.com/ubernetes-wg-notes