Production-ready Kubernetes cluster on Linux VMs

We are running all our applications on Linux VMs. We tried a Kubernetes cluster locally on a Mac using minikube and it looks promising.
We are interested in setting up Kubernetes on our Linux VMs, but:
Is it possible to set up a production-ready cluster on Linux VMs?

As shown in kubernetes/kubeadm issue 465, setting up a cluster using VMs can be a challenge.
Using Calico will help, since it provides secure network connectivity for containers and virtual machine workloads.
Use Calico 2.6.
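A minimal sketch of what that looks like with kubeadm (the pod CIDR and the manifest filename are assumptions; take the exact values from the Calico docs for the release you install):

# Initialize the control plane with a pod CIDR that Calico can use
# (192.168.0.0/16 is Calico's usual default; adjust to your environment).
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Apply the Calico manifest for your chosen release; download calico.yaml
# from the Calico documentation first (placeholder filename here).
kubectl apply -f calico.yaml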

Related

Q: Rancher + Calico + Ubuntu 20.04 bare metal - no access to service network (10.43.0.10)

Looking for a piece of advice on troubleshooting an issue with Rancher + Calico on bare-metal Ubuntu 20.04.
Here is the issue.
We have a few Rancher (2.5.7) clusters built on top of Ubuntu 20.04 running on KVM (Proxmox) VMs.
All clusters have a similar setup and use Calico as the CNI. Everything works like a charm.
The other day we decided to add a bare-metal Ubuntu 20.04 node to one of the clusters.
Everything worked pretty well at first - Rancher shows the new node as healthy and k8s schedules pods there - however,
it turned out that pods on that node can't access the service network (10.43.x.x). Specifically, they can't reach DNS at 10.43.0.10.
If I run "nc 10.43.0.10 53" on a VM Ubuntu host, it connects to the DNS pod through the service network with no issues. If I try the same on a bare-metal host, the connection hangs.
The Ubuntu setup is exactly the same for VM and BM. All VMs and BMs are on the same VLAN. For the sake of the experiment we configured only one NIC on the BM, with no fancy stuff like bonding.
calicoctl shows all the BGP peers Established.
I tried creating a fresh cluster and reproduced the same problem: a cluster built of VMs works with no issues, and each VM (and the pods on it) can connect to the service network; as soon as I add a BM node, that node has issues connecting to the service network.
My guess is that the issue is somewhere in iptables, but I'm not sure how to troubleshoot WHY iptables would behave differently on BM and on VM.
Will greatly appreciate any piece of advice.
After a few hours of debugging we figured out that the issue was TCP offloading.
On the VMs the virtual NIC does not support offloading, so everything worked fine.
On the BMs we had to run
sudo ethtool -K <interface> tx off rx off
to disable offloading and that fixed the issue.
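If you hit a similar symptom, a hedged way to check and then disable checksum offloading (the interface name is a placeholder; ethtool settings do not persist across reboots, so re-apply them from your network configuration):

# Show the current offload settings for the interface (replace eth0 with yours).
sudo ethtool -k eth0 | grep -E 'checksum|segmentation'

# Disable TX/RX checksum offloading, as in the fix above.
sudo ethtool -K eth0 tx off rx off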

LXD vs classic VMs in production Cluster (Kubernetes)

I'm setting up a bare-metal hypervisor with VMware ESXi on a local server, which will host a Kubernetes cluster.
Should I use Linux containers with LXD to set up my Kubernetes cluster, or should I use several VMs provisioned with my VMware hypervisor?
I'm not sure what you are referring to by using LXD to set up your Kubernetes cluster. Kubernetes doesn't officially support LXC/LXD.
So, you can use several VMs for your Kubernetes control plane (masters) and data plane (nodes). For the container runtime you can use straight Docker, or containerd or CRI-O via their CRI shims.
In any case, most of this is already set up for you by deployment tools like the following (a minimal kubeadm sketch is shown after these lists):
kubespray
kubeadm
kops
Vendor offerings (EKS, GKE, AKS, etc.)
etc
If you are looking for something more minimal you can try:
minikube
kind
microk8s
K3s
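To give a sense of what the kubeadm path looks like on plain VMs, here is a hedged sketch (the packages come from the Kubernetes apt repository, which must be added first; the pod CIDR, token, and hash are placeholders):

# On every VM: install a container runtime and the kubeadm tooling.
sudo apt-get update && sudo apt-get install -y containerd kubelet kubeadm kubectl

# On the control-plane VM: initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# On each worker VM: join using the command printed by kubeadm init.
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>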

Is there major difference between Minikube and Kind?

I know Kind needs Docker, and Minikube needs VirtualBox - but for learning Kubernetes features, are they the same?
Thank you.
In terms of learning Kubernetes features, they are the same. You will get the same Kubernetes and the same Kubernetes resources in both: Pods, Deployments, ConfigMaps, StatefulSets, Secrets, etc., assuming they both run the same Kubernetes version.
Under the hood they are very similar too, with some implementation differences.
Minikube
Runs K8s in a VM (version 1.7.0 now supports running minikube on Docker)
Supports multiple hypervisors (VirtualBox, HyperKit, Parallels, etc.)
You need to ssh into the VM to run docker (minikube ssh).
On the positive side, if you are using VMs, you get the VM isolation which is 'more secure' per se.
Update: It does support running in docker with --driver=docker
Kind
Runs Docker in a VM (part of the Docker Desktop installation on Mac or Windows)
Runs Kubernetes in that "Docker" VM
Supports the HyperKit (Mac) or Hyper-V (Windows) hypervisors.
Has the convenience that you can run the docker client directly from your Mac or Windows host.
You can actually run it on Linux with no VM at all (Docker runs natively on Linux).
It runs all K8s components in a single container.
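For a quick hands-on comparison, the startup commands are roughly as follows (the driver and cluster name are just examples):

# Minikube: start a local cluster; --driver selects docker, virtualbox, hyperkit, etc.
minikube start --driver=docker

# Kind: create a local cluster that runs entirely inside Docker containers.
kind create cluster --name learning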

Adding nodes to a Windows Minikube Kubernetes Installation - How?

I have MiniKube running on my Windows 10 machine. I would like to add an additional node to the cluster.
I have a CentOS VM running on a different host that has k8s installed. How do I get the kubeadm join command to run on the VM from the master node running on my Windows machine?
Do I need to install an overlay network on the MiniKube VM? Or is one already installed?
Minikube is officially single-node at the moment. There's a discussion about this limitation at https://github.com/kubernetes/minikube/issues/94. But it seems people have found ways to do it with VirtualBox, and there are other ways to run a multi-node cluster locally (see the sketch below). Otherwise I'd suggest creating a cluster with one of the cloud providers (e.g. GKE).
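As one hedged illustration of such a local multi-node alternative, kind (discussed above) accepts a cluster config with several nodes; the roles and counts here are just an example:

# Create a kind cluster with one control-plane node and two workers,
# passing the cluster config on stdin.
cat <<EOF | kind create cluster --name multinode --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF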

Where does minikube configure master node components?

If I have installed K8s using minikube, where will the master node components be installed (e.g. the API server, replication controller, etcd, etc.)?
Is it on the host, or in the VM?
I understand the worker node is the VM configured by minikube.
Everything is installed in the virtual machine. Based on the localkube project, minikube creates an all-in-one single-node cluster.
More information here: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md
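A quick hedged way to see this for yourself (pod names and how the components are packaged vary by minikube and Kubernetes version):

# The control-plane components all show up inside the cluster, not on your host.
kubectl get pods -n kube-system

# You can also ssh into the minikube VM and list the containers running there.
minikube ssh "docker ps"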