LXD vs classic VMs in a production Kubernetes cluster

I'm setting up a bare-metal hypervisor with VMware ESXi on a local server, which will host a Kubernetes cluster.
Should I use Linux containers with LXD to set up my Kubernetes cluster, or should I use several VMs that I can provision with my VMware hypervisor?

I'm not sure what you are referring to by using LXD to set up your Kubernetes cluster. Kubernetes doesn't officially support LXC/LXD.
So you can use several VMs for your Kubernetes control plane (masters) and data plane (worker nodes). For the container runtime you can use Docker directly, or containerd or CRI-O via their CRI shims.
In any case, most of this is already set up for you by deployment tools like:
kubespray
kubeadm
kops
Vendor offerings (EKS, GKE, AKS, etc.)
etc
If you are looking for something more minimal you can try:
minikube
kind
microk8s
K3s
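If you go the plain-VM route without one of the tools above, a minimal sketch of bootstrapping with kubeadm could look roughly like this (the pod CIDR, the Flannel CNI choice, and the placeholders are illustrative assumptions, not part of the original answer):

```
# On the control-plane VM (assumes a container runtime such as containerd is already installed):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a CNI plugin; Flannel is just one example:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker VM, run the join command that `kubeadm init` printed, e.g.:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```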

Related

Kubernetes PV through Ceph RBD

I'm testing with a small k8s cluster and ceph cluster to create and assign persistent volumes. Ceph cluster version is Nautilus and the machines in k8s cluster are ubuntu 20.04. As far as I understand, I can't install ceph nautilus common packages on Ubuntu 20.04. Is it possible to install Ceph Octopus common packages on the machines and connect them to a Nautilus cluster?
Yes, you can connect to a Nautilus cluster with a client in both the Octopus and Pacific versions.
Edit: never mind, I was wrong.
It depends on what you intend to do.
Obviously yes, an Octopus client can connect to a Nautilus cluster.
However, if you're looking to set up dynamic volume provisioning, nowadays you would need CSI (ceph-csi on GitHub), and then I think you need at least an Octopus cluster.
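For the client-package part of the question, a hedged sketch of installing the Octopus ceph-common package on Ubuntu 20.04 (focal) might look like this; the repository line follows the usual download.ceph.com layout and should be verified against the Ceph docs for your release:

```
# Add the upstream Ceph Octopus repository for Ubuntu 20.04 (focal):
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo "deb https://download.ceph.com/debian-octopus/ focal main" | \
    sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update && sudo apt install -y ceph-common

# Copy ceph.conf and a client keyring from the Nautilus cluster, then test connectivity:
ceph -s
```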

Adding nodes to a Windows Minikube Kubernetes Installation - How?

I have Minikube running on my Windows 10 machine. I would like to add an additional node to the cluster.
I have a CentOS VM running on a different host that has k8s installed. How do I get the kubectl join command to run on the VM from the master node running on my Windows machine?
Do I need to install an overlay network on the MiniKube VM? Or is one already installed?
Minikube is officially single-node at the moment. There's a discussion about this limitation at https://github.com/kubernetes/minikube/issues/94, but it seems people have found ways to do it with VirtualBox, and there are other ways to run a multi-node cluster locally (see the kind sketch below). Otherwise I'd suggest creating a cluster with one of the cloud providers (e.g. GKE).
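As one example of those other local options, a hedged multi-node sketch with kind (mentioned in the first answer above); the node layout is purely illustrative:

```
# Write a minimal multi-node kind configuration:
cat <<EOF > kind-multinode.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Create the cluster (each node runs as a Docker container on the local machine):
kind create cluster --config kind-multinode.yaml
kubectl get nodes
```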

Creation of nodes in VMs by using Kubernetes kubeadm and minikube

I am trying to create a Kubernetes cluster with several nodes on the same machine. I want to create separate VMs and run a node in each of them. I am currently exploring kubeadm and minikube for this.
While exploring I ran into the following questions:
I need to create 4 nodes, each in a different VM. Can I use kubeadm for this requirement?
I also found that Minikube is used to create a single-node cluster and can also create VMs. What is the difference between kubeadm and minikube?
If I want to create nodes in different VMs, which tool should I use, and how do I install the Kubernetes cluster master?
If I am using VMs, can I directly install VMware Workstation / VirtualBox on my Ubuntu 16.04?
In AWS EC2, Ubuntu is already provided as a virtual machine. Is it possible to install VMware Workstation on that Ubuntu, since it would be VMs on top of another VM?
Kubeadm should be a good choice for you; it is quite easy to use by just following the documentation. Minikube would give you only a single-node Kubernetes cluster (although, as of minikube 1.10.1, multi-node clusters are possible).
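A hedged sketch of that newer multi-node mode (requires minikube >= 1.10.1; the driver name is only an example):

```
# Start a four-node local cluster; pick whichever driver you have installed (virtualbox, kvm2, docker, ...):
minikube start --nodes 4 --driver=virtualbox
kubectl get nodes
```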
Kubeadm is a tool to get Kubernetes up and running on already existing machines. It will basically configure and start all required Kubernetes components (for a minimum viable cluster). Kubeadm is the right tool to bootstrap the Kubernetes cluster on your virtual machines, but you need to prepare the machines yourself (install the OS plus required software, networking, ...); kubeadm will not do that for you.
Minikube is a tool which starts a single-node Kubernetes cluster locally. This is usually done in a VM (minikube supports VirtualBox, KVM, and others). It will start the virtual machine for you and take care of everything, but it will not build a 4-node cluster for you.
Kubeadm takes care of both the master and the worker nodes: you first set up the master and then use kubeadm on the worker nodes to join them to the master.
When you use kubeadm, it doesn't really care what you use for the virtualization; you can choose whatever you want.
Why do you want to run virtual machines on top of your EC2 machine? Why not just create more (perhaps smaller) EC2 machines for the cluster? You can use this as inspiration: https://github.com/scholzj/terraform-aws-kubernetes. There are also more advanced tools for setting up the whole cluster, such as (for example) kops.
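As an illustration of that last option, a hedged kops sketch; the bucket name, cluster domain, and zone are placeholders, not values from the original answer:

```
# kops keeps its cluster state in an S3 bucket:
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Create and apply a cluster definition with four worker nodes on AWS:
kops create cluster --name=k8s.example.com --zones=us-east-1a --node-count=4
kops update cluster k8s.example.com --yes
```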

Production ready Kubernetes cluster on Linux VM

We are running all our applications in Linux VMs and tried a Kubernetes cluster on a local Mac using minikube, and it looks promising.
We are interested in setting up Kubernetes on Linux VMs, but:
Is it possible to set up a production-ready cluster on Linux VMs?
As shown in kubernetes/kubeadm issue 465, setting up a cluster using VMs can be a challenge.
Using Calico will help, since it provides secure network connectivity for containers and virtual machine workloads.
Use Calico 2.6.
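For reference, a hedged sketch of applying Calico on a kubeadm-bootstrapped cluster; the manifest URL points at the current upstream location rather than the Calico 2.6 release mentioned above, so pin whichever version you actually need:

```
# Apply the Calico manifest after `kubeadm init` has completed:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Verify the calico-node pods come up on every node:
kubectl get pods -n kube-system -l k8s-app=calico-node
```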

Where does minikube configure master node components?

If I have installed K8s using minikube, where will the master node components be installed (e.g. the API server, replication controller, etcd, etc.)?
Is it on the host, or in the VM?
I understand the worker node is the VM configured by minikube.
Everything is installed in the virtual machine. Minikube is based on the localkube project and creates an all-in-one, single-node cluster.
More information here: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md
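A hedged way to check this yourself; the exact process names, and whether the components also appear as kube-system pods, depend on the minikube version and bootstrapper:

```
# The control-plane processes run inside the minikube VM, not on the host:
minikube ssh -- pgrep -fl kube-apiserver
minikube ssh -- pgrep -fl etcd

# With newer, kubeadm-based minikube they are also visible as static pods from the host:
kubectl get pods -n kube-system
```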