Start/stop a local dev Kubernetes cluster created by kubeadm (like microk8s or minikube)

I have a 3-node Kubernetes cluster created using kubeadm v1.19.9. The nodes are VMs (KVM hypervisor on Ubuntu 20.04).
This cluster is used for development and Kubernetes exercises. I'd like to stop the cluster and later restart it where it left off, in the same fashion as the stop and start commands available with minikube or microk8s.
EDIT: to clarify the question and avoid suggested duplicate posts: I am looking for an elegant solution to stop and restart the same cluster, NOT to destroy / reset / uninstall it.
I couldn't find a simple solution from various web searches. Some answers suggest tearing down the cluster, which is not my use case here. An answer from 3 years ago, proper shutdown of a kubernetes cluster, is closer to what I want but sounds quite complicated. Another solution, How to Setup & Recover a Self-hosted Kubeadm Kubernetes Cluster After Reboot, doesn't explain the underlying principle well enough.
I hope there is a simpler solution now.
EDIT (2021-04-11): Kubernetes 1.21 release notes:
Kubelet Graceful Node Shutdown feature graduates to Beta and enabled by default.
kubernetes/enhancements Graceful node shutdown #2000
Enhancement target (which target equals to which milestone):
Alpha release target (1.20)
Beta release target (1.21)
Stable release target (1.23)

To summarize:
k8s itself should be able to handle shutdowns. What may not handle them are the applications/containers that you run - just make sure your containers restart on their own and don't require manual intervention, and you should be fine.
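For a small dev cluster like yours, a minimal stop/start sketch could look like the following (the node names master-1, worker-1, worker-2 and the matching libvirt domain names are placeholders for your own VMs; draining is optional but lets pods terminate cleanly):

# Stop: (optionally) drain the workers, then shut the VMs down, control plane last
kubectl drain worker-1 --ignore-daemonsets
kubectl drain worker-2 --ignore-daemonsets
ssh worker-1 sudo shutdown -h now
ssh worker-2 sudo shutdown -h now
ssh master-1 sudo shutdown -h now

# Start: boot the control-plane VM first, wait for the API server, then the workers
virsh start master-1
virsh start worker-1
virsh start worker-2
kubectl uncordon worker-1
kubectl uncordon worker-2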
I mentioned in the comments flushing etcd data to disk, but (after some research) this should not be necessary since etcd does it itself and implements a strong consistency model to make sure it doesn't lose data. But this doesn't mean you shouldn't take backups - it's better to have a backup and never use it than to not have one when you need it.
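For reference, a hedged example of taking an etcd snapshot on a kubeadm control-plane node (the certificate paths below are the kubeadm defaults, the snapshot path is arbitrary):

# Run on the control-plane node; kubeadm keeps the etcd certs under /etc/kubernetes/pki/etcd
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Sanity-check the snapshot
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db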
The solution mentioned in How to Setup & Recover a Self-hosted Kubeadm Kubernetes Cluster After Reboot is relevant only if you use SelfHosting.
Also (for convenience) make sure that all configuration persists between reboots. For example, swap should stay disabled: if you only run swapoff -a it won't persist after a reboot, so it's much better to make the change in /etc/fstab so that you don't have to disable it manually again after every reboot.
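A small sketch of making that persistent (review /etc/fstab afterwards to make sure only the swap entry was touched):

# Disable swap immediately
sudo swapoff -a
# Comment out swap entries in /etc/fstab so the setting survives reboots
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab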
Here are some links:
Backing up an etcd cluster
etcd disaster recovery
Permanently Disable Swap for Kubernetes Cluster

Related

Highly available Kubernetes cluster? bootkube or kubeadm self-hosting

I am already running a single-master Kubernetes cluster and I am researching how to set up a highly available Kubernetes cluster. I was thinking of a multi-master setup, then realized a self-hosted cluster might be a better option to be future-ready.
An additional challenge is that I am doing it on bare metal (meaning I am going to use cloud VMs from providers such as Hetzner, Linode and DigitalOcean, which have a CSI driver, cloud controller manager, etc.).
In this case, I see 2 options.
Setup with bootkube (https://github.com/kubernetes-sigs/bootkube)
Setup with kubeadm self-hosting. (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/)
I assume this is still an early-stage topic, hence I am not able to find guidance on choosing the right approach, nor correct documentation. I need this for a scalable production environment where I will start small with at least 8 nodes and can grow quickly.
Is bootkube a sound choice for future readiness?
Or, given that kubeadm self-hosting is still in the alpha stage, am I taking a risk by running it in a production environment?
Any good documentation, blog, or article pointing in this direction?
I use Keepalived + HAProxy and Ansible to deploy an HA Kubernetes cluster. kubeadm now supports joining additional control-plane nodes, so it is easy to integrate with Ansible.
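As an illustration only (the VIP, token, hash and certificate key are placeholders that kubeadm prints for you), joining extra control-plane nodes behind the Keepalived/HAProxy VIP looks roughly like this, assuming the cluster was initialized with --control-plane-endpoint and --upload-certs:

# On the first control-plane node
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_VIP:6443" --upload-certs

# On each additional control-plane node
sudo kubeadm join LOAD_BALANCER_VIP:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <key-printed-by-upload-certs>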
You can also refer to https://github.com/kubernetes-sigs/kubespray.

Expandable single node K8s cluster

I am searching for a solution that enables me to set up a single-node K8s cluster and, if needed, add nodes to it later.
I am aware of solutions such as minikube and microk8s, but they are not expandable. I am trying k3s at the moment precisely because it offers this feature, but I have some problems with storage and other things that I am still working on.
Now my questions:
What other solutions for this exist?
What are the disadvantages if I untaint the master node and run everything there (for a long period and not just for test)?
You can use kubeadm to set up a single-node "cluster", then use the kubeadm join command to add more nodes later.
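A hedged sketch of that flow (the address, token and hash are placeholders printed by kubeadm itself):

# On the existing control-plane node: print a fresh join command
kubeadm token create --print-join-command

# On the new node: run the command it printed, e.g.
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>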
You can expand a k3s cluster via k3sup join. Here is a guide.
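For illustration, expanding a k3s cluster with k3sup might look roughly like this (IPs and SSH user are placeholders; check k3sup --help for the exact flags of your version):

# Install the k3s server on the first node
k3sup install --ip 192.168.1.10 --user ubuntu

# Join an additional node as an agent
k3sup join --ip 192.168.1.11 --server-ip 192.168.1.10 --user ubuntu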
Key Kubernetes services such as kube-apiserver and kube-scheduler should be available and running smoothly at all times on the master nodes. Therefore, it is essential to dedicate resources to the master nodes and to avoid having other, non-critical workloads interfere with the functioning of the master services.
What are the disadvantages if I untaint the master node and run everything there (for a long period and not just for test)?
Failure of a worker will of course bring down your applications. When you recover it or spin up another one, K8s will recover your apps for you.
Failure of the master will not adversely affect your running workloads, only the cluster's ability to manage itself and its self-healing capabilities (which will affect uptime at some point).
I am searching for a solution that enables me to set up a single-node K8s cluster and, if needed, add nodes to it later.
To the best of my knowledge, there is no such thing as a single-node, production-ready k8s cluster.
For something small and simple you can check Rancher.
What other solutions for this exist?
kubeadm allows you to install everything on a single node: install kubeadm on the node, run kubeadm init, install a pod network, then remove the master taint.
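A sketch of those steps (the pod network manifest is a placeholder; note that on releases from v1.24 onward the taint is node-role.kubernetes.io/control-plane rather than .../master):

# Initialize the control plane on the single node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a pod network add-on, e.g. Flannel or Calico, per its own docs
kubectl apply -f <pod-network-manifest.yaml>

# Remove the master taint so regular workloads can schedule on this node
kubectl taint nodes --all node-role.kubernetes.io/master-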
Another solution you may be interested in is Kubespray.
Some "honorable mentions" are:
Charmed Kubernetes by Canonical allows you to run everything on one node; however, it should be quite a big node, so it may not fit this case (but it is still worth mentioning).
If you don't really require all the k8s power (with only one small node), then Nomad could be an alternative.
Let me know if that helps.

When to use Minikube and when to use Kubernetes?

I've found a partial answer in "Difference between Minikube, Kubernetes, Docker Compose, Docker Swarm, etc" here, but I still do not completely get it:
In my understanding, kubernetes is a container-orchestration system. However, Minikube looks very similar to me.
Can somebody explain me when you would use minikube versus when you would use minikube, and why?
I think your question should have been "Can somebody explain me when you would use minikube versus when you would use Kubernetes, and why?"
Minikube is a small and easy Kubernetes setup for your work PC. You can install and configure a Kubernetes cluster very easily with it. However, for a production environment it is not the best choice. Minikube normally starts a virtual machine on your PC, which affects the performance of your cluster, unlike Kubernetes itself, which runs directly on your kernel if you use Linux. Furthermore, as Butuzov already answered, it is only one node, not a "real" cluster.
So you use Kubernetes if you are in a production environment where you need distributed systems and workloads as well as redundancy and failure safety.
Hope that helps your understanding.
Edit: Use cases
Minikube:
A developer or DevOps engineer trying to run a complex distributed system locally for testing purposes, but with deployment via Helm (see the sketch after this list).
A developer or DevOps engineer creating a Helm deployment locally.
Kubernetes (standalone):
Running complex distributed systems in production.
Running heavy workloads (multiple products, distributed systems) in production.
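For instance, the local Minikube-plus-Helm workflow from the use cases above looks roughly like this (repository, chart and release names are just examples):

# Start a local single-node cluster
minikube start

# Deploy something with Helm (Helm 3 syntax)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx

# Check that the release's pods are running
kubectl get pods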
minikube is a one-node cluster, with a master that can also take workloads, and with a lot of issues solved and automated for you. It is intended for testing and learning things from the Kubernetes ecosystem.
Kubernetes itself is the orchestrator, which can come to you either as a managed service with a lot of problems (PVs, load balancers) already solved, or as a box of Lego bricks that you tune here and there until you get the thing we call production ready.
minikube is OK for learning (not always, but in 90% of cases) or for experimenting with tiny loads.

Is kubeadm production ready now?

I'd like to know about kubeadm. I'm planning to create a Kubernetes cluster using kubeadm for my production environment. So, I wanted to know: is kubeadm production ready and safe to deploy for my product?
Happy news!
It is now. We have a production release: "we're excited to announce that it has now graduated from beta to stable and generally available (GA)!"
https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/
POST EDITED in 2019 to reflect the current situation.
In 2018, according to the K8s documentation here:
The cluster created here has a single master, with a single etcd database running on it. This means that if the master fails, your cluster may lose data and may need to be recreated from scratch. Adding HA support (multiple etcd servers, multiple API servers, etc) to kubeadm is still a work-in-progress.
Whereas in November 2019:
The cluster created here has a single control-plane node, with a single etcd database running on it. This means that if the control-plane node fails, your cluster may lose data and may need to be recreated from scratch.
Workarounds:
Regularly back up etcd. The etcd data directory configured by kubeadm is at /var/lib/etcd on the control-plane node.
Use multiple control-plane nodes by completing the HA setup instead
So, in 2018, kubeadm was still a work in progress and not production ready yet. For development and testing, kubeadm should be good enough. Look here for other solutions.
In 2019, this availability issue is fixed.
Officially it is not production ready yet, but I've successfully set up 1.10 and later clusters with no problems.
If you want to create an HA cluster with multiple masters, there's also a kubeadm guide here, but use it at your own risk.
Also, keep in mind that if your master(s) go down, your workloads will keep running; you just won't be able to make changes or schedule new pods until the master(s) come back up.
You can also use any of the other solutions depending on your environment as pointed out in the other answer here.

How to recover from master failure with kubeadm

I set up a Kubernetes cluster with a single master node and two worker nodes using kubeadm, and I am trying to figure out how to recover from node failure.
When a worker node fails, recovery is straightforward: I create a new worker node from scratch, run kubeadm join, and everything's fine.
However, I cannot figure out how to recover from master node failure (without interrupting the deployments running on the worker nodes). Do I need to back up and restore the original certificates, or can I just run kubeadm init to create a new master from scratch? How do I re-join the existing worker nodes?
I ended up writing a Kubernetes CronJob that backs up the etcd data. If you are interested, I wrote a blog post about it: https://labs.consol.de/kubernetes/2018/05/25/kubeadm-backup.html
In addition to that, you may want to back up all of /etc/kubernetes/pki to avoid issues with secrets (tokens) having to be renewed.
For example, kube-proxy uses a secret to store a token, and this token becomes invalid if only the etcd data is backed up and the pki directory is lost.
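A minimal sketch of such a backup (the destination path is arbitrary):

# On the control-plane node: archive the kubeadm-generated certificates and keys
sudo tar czf /var/backups/kubeadm-pki-$(date +%F).tar.gz /etc/kubernetes/pki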
Regarding your mention of a master backup: backup procedures in the traditional/legacy sense (backup tools and techniques) aren't mentioned directly in the official documentation (as far as I know), but you can take precautions with some options/workarounds:
Setup HA Masters (only for GCE)
Set up High-Availability Kubernetes Masters
Setup HA etcd cluster / Master Load Balancer
Setting-up-an-ha-etcd-cluster
Set up master Load Balancer
Operating etcd clusters for Kubernetes
OS filesystem snapshot/backup
kubeadm init will definitely not work out of the box, as it will create a new cluster altogether: new credentials, IP space, etc.
At a minimum, restoring the master node will require a backup of your etcd data. This typically lives in the /var/lib/etcd directory.
You will also need the kubeadm config from the cluster;
kubeadm config view should output this (v1.8 and upward).
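If you do have an etcd snapshot, the kubeadm config and a copy of /etc/kubernetes/pki, a rough and hedged outline of the recovery looks like this (the snapshot path is a placeholder; exact steps vary with your etcd and kubeadm versions):

# Restore the snapshot into the directory kubeadm's static etcd pod expects
# (/var/lib/etcd must be empty, or restore elsewhere and adjust the static pod manifest)
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir /var/lib/etcd

# Keep a copy of the cluster configuration for rebuilding the master
kubeadm config view > kubeadm-config.yaml   # v1.8+; newer kubeadm keeps this in the kubeadm-config ConfigMap instead

From there, rebuilding the control plane typically means restoring /etc/kubernetes/pki and re-running kubeadm init --config kubeadm-config.yaml on the new master, though the exact preflight and manifest adjustments depend on the versions involved.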
The step-by-step process to restore a master node really isn't so clean-cut, which is why HA (High Availability) was introduced. It is a much safer way of maintaining redundancy and uptime, particularly because restoring anything from etcd can be a real pain (in my humble opinion and experience).
If I may go a bit off topic from your question: if you are still getting started with Kubernetes and not deeply invested in kubeadm, I would suggest you consider creating your cluster with kops instead. It already supports HA, and I found kops more robust and easier to use than either kubeadm or kube-aws (the CoreOS cluster builder).
https://kubernetes.io/docs/getting-started-guides/kops/