I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use one of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (capped by usage or time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution, as it is very resource-intensive. To make adding nodes easier, you could use kubeadm to provision your cluster.
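For the join/leave/failure scenarios in your question, a rough sketch of the kubeadm workflow on local VMs might look like this (node names such as worker-2 are placeholders, and kubectl flag names may differ slightly between versions):

    # Add a node: print a fresh join command on the control-plane VM,
    # then run the printed `kubeadm join ...` on the new VM
    kubeadm token create --print-join-command

    # Remove a node gracefully (simulates a planned leave)
    kubectl drain worker-2 --ignore-daemonsets --delete-emptydir-data
    kubectl delete node worker-2

    # Simulate a hard node failure: stop the kubelet (or just power off the VM)
    ssh worker-3 'sudo systemctl stop kubelet'

Stopping the kubelet makes the node go NotReady after the node-monitor grace period, which is exactly the signal your scheduler has to react to.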
Related
I'm looking for a way to create a live Kubernetes cluster without too much hassle.
I've got a nice HP server, which could run a few VMs with Kubernetes on top. The reason for VMs is to isolate all this from the host machine. Ideally, the VMs should only run containerd and kubelet and be essentially disposable for node upgrades.
However, I'm lost as to which tooling would provide this. minikube? MicroK8s? k3s? Rancher? Charmed Kubernetes? Some existing QEMU image? Some existing Vagrant config? The more managed it is, the better. So far I've liked minikube, but it doesn't have "start on reboot", for example, nor the flexibility for node upgrades.
I have tried a lot of tools while training for the CKAD certification. For my usage, the best option for a local cluster was k3s with multipass (for online clusters, I have used Civo). Both perform their respective tasks very quickly, which lets me create clusters at will and dispose of them to work in clean environments.
multipass to create VMs quickly
k3s, which is nothing other than a lightweight Kubernetes
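As a rough sketch of how the two combine (VM names and sizes are arbitrary, and multipass flag names can vary between versions):

    # Create a VM and install k3s on it (this becomes the server node)
    multipass launch --name k3s-server --cpus 2 --memory 2G --disk 10G
    multipass exec k3s-server -- bash -c 'curl -sfL https://get.k3s.io | sh -'

    # Collect the server IP and node token, then join a second VM as an agent
    SERVER_IP=$(multipass info k3s-server | awk '/IPv4/ {print $2}')
    TOKEN=$(multipass exec k3s-server -- sudo cat /var/lib/rancher/k3s/server/node-token)
    multipass launch --name k3s-agent
    multipass exec k3s-agent -- bash -c \
      "curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER_IP:6443 K3S_TOKEN=$TOKEN sh -"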
You can easily find tutorials that automate the creation of such clusters, for example:
https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c
https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5
https://github.com/superseb/multipass-k3s
I am running a single-master Kubernetes cluster and am researching how to set up a highly available one. I was considering a multi-master setup, then realized a self-hosted cluster might be the better option to be future-ready.
An additional challenge is that I am doing this on bare metal (meaning I will use cloud VMs from providers such as Hetzner, Linode, or DigitalOcean, which offer a CSI driver, a cloud controller manager, etc.).
In this case, I see two options:
Setup with bootkube (https://github.com/kubernetes-sigs/bootkube)
Setup with kubeadm self-hosting. (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/)
I assume this is still an early-stage topic, hence I am not able to find guidance on choosing the right approach, nor correct documentation. I need this for a scalable production environment where I will start small with at least 8 nodes and may grow quickly.
Is bootkube a viable choice for future readiness?
Or, since kubeadm self-hosting is still in the alpha stage, am I taking a risk by running it in a production environment?
Any good documentation, blogs, or articles that point in this direction?
I use Keepalived + HAProxy and Ansible to deploy an HA Kubernetes cluster. kubeadm now supports a join command for additional control-plane nodes, so it is easy to integrate with Ansible.
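As a minimal sketch of that flow (the VIP 192.0.2.100, standing in for the Keepalived/HAProxy endpoint, is a placeholder):

    # On the first control-plane node: point kubeadm at the load-balanced endpoint
    sudo kubeadm init --control-plane-endpoint "192.0.2.100:6443" --upload-certs

    # On each additional master: run the join command that init printed, e.g.
    # sudo kubeadm join 192.0.2.100:6443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash> \
    #     --control-plane --certificate-key <key>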
You can also refer to https://github.com/kubernetes-sigs/kubespray.
Let's say we're using EKS on AWS, would we need to manually manage the underlying Node's OS, installing patches and updates?
I would imagine that the pods and containers running inside the node can be updated by simply bumping the version of the container's base OS image in your Dockerfile, but I'm unsure how that works for the node's OS. Would the provider (AWS, in this case) manage that?
It would be great to get an explanation for both Windows and Linux nodes. Are they different? Thank you!
Yes, you need to keep the nodes updated. But this has recently become easier with the new Bottlerocket, a container-optimized OS for nodes in EKS.
Updates to Bottlerocket can be automated using container orchestration services such as Amazon EKS, which lowers management overhead and reduces operational costs.
See also the blog post Bottlerocket – Open Source OS for Container Hosting
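For example, eksctl can provision a Bottlerocket node group directly. A sketch, assuming eksctl is installed; the cluster name, region, and instance sizes are placeholders:

    # Declare a cluster whose self-managed node group runs Bottlerocket
    cat > cluster.yaml <<'EOF'
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo
      region: us-east-1
    nodeGroups:
      - name: ng-bottlerocket
        instanceType: m5.large
        desiredCapacity: 2
        amiFamily: Bottlerocket
    EOF
    eksctl create cluster -f cluster.yaml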
I am searching for a solution that enables me to set up a single-node K8s cluster and add nodes to it later if needed.
I am aware of solutions such as minikube and MicroK8s, but they are not expandable. I am trying k3s at the moment precisely because it offers this feature, but I have some problems with storage and other things that I am still working on.
Now my questions:
What other solutions for this exist?
What are the disadvantages if I untaint the master node and run everything there (for a long period, not just for testing)?
You can use kubeadm to set up a single-node "cluster", then use the join command to add more nodes later.
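A minimal sketch of that flow (the IP in the join command is a placeholder; kubeadm init prints the real one):

    # Initialize the single-node control plane
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # kubeadm init prints a join command; on each new node, run it, e.g.
    # sudo kubeadm join 192.0.2.10:6443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>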
You can expand a k3s cluster via k3sup join. Here is a guide.
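Roughly, it looks like this (the IPs and the SSH user are placeholders; k3sup reaches both hosts over SSH):

    # Create the k3s server on the first host
    k3sup install --ip 192.0.2.10 --user ubuntu

    # Later, add another host to the same cluster as an agent
    k3sup join --ip 192.0.2.11 --server-ip 192.0.2.10 --user ubuntu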
Key Kubernetes services such as kube-apiserver and kube-scheduler should be available and running smoothly at all times on the master nodes. Therefore, it is essential to dedicate resources to the master nodes and avoid having other, non-critical workloads interfere with the functioning of the master services.
What are the disadvantages if I untaint the master node and run everything there (for a long period, not just for testing)?
Failure of a worker will of course bring down your applications. When you recover it or spin up another one, K8s will recover your apps for you.
Failure of the master will not adversely affect your running workloads, only the cluster's ability to manage itself and its self-healing capabilities (which will affect uptime at some point).
I am searching for a solution that enables me to set up a single-node K8s cluster and add nodes to it later if needed.
To the best of my knowledge, there is no such thing as a single-node, production-ready k8s cluster.
For something small and simple you can check Rancher.
What other solutions for this exist?
kubeadm allows you to install everything on a single node: install kubeadm on the node, run "kubeadm init", install a pod network, then remove the master taint.
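The last two steps, sketched (Flannel is just one illustrative choice of pod network, and the taint key depends on your Kubernetes version):

    # Install a pod network add-on
    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

    # Remove the control-plane taint so regular workloads can schedule here
    # (older releases use the key node-role.kubernetes.io/master instead)
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-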
Another solution you may be interested in is Kubespray.
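Kubespray drives the cluster install through Ansible playbooks; the outline from its README looks roughly like this (the inventory paths are examples):

    git clone https://github.com/kubernetes-sigs/kubespray.git
    cd kubespray
    pip install -r requirements.txt

    # Copy the sample inventory and declare your hosts in hosts.yaml
    cp -rfp inventory/sample inventory/mycluster

    # Run the playbook against your inventory
    ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml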
Some "honorable mentions" are:
Charmed Kubernetes by Canonical allows you to do everything on one node; however, it needs to be quite a big node, so it may not fit this case (but it is still worth mentioning).
If you don't really require all the k8s power (with only one small node), then Nomad could be an alternative.
Let me know if that helps.
Does Kubernetes have the ability/need to hook into a cloud provider (AWS, Rackspace) to spin up new nodes? If so, how does it then provision the node - does it run Ansible etc? Or will Kubernetes need to have all the nodes available to it manually?
The short answer is no.
The longer answer is explained in the following blog posting that describes the new kubeadm command:
http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html
There are three stages in setting up a Kubernetes cluster, and we decided to focus on the second two (to begin with):
Provisioning: getting some machines
Bootstrapping: installing Kubernetes on them and configuring certificates
Add-ons: installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc
We realized early on that there's enormous variety in the way that users want to provision their machines. They use lots of different cloud providers, private clouds, bare metal, or even Raspberry Pi's, and almost always have their own preferred tools for automating provisioning machines: Terraform or CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare metal. So we made an important decision: kubeadm would not provision machines. Instead, the only assumption it makes is that the user has some computers running Linux.
Update
http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html