Run k8s on single node without minikube - kubernetes

Is it possible to run k8s on a single node without using minikube? Today I use kubeadm with 2 hosts, but I would like to know if it is possible to run it using only one host.

You can run the kubeadm init command to initialize a single-node cluster, and you can add or remove nodes later. Untaint the master so that it can run containers, using the command below (the trailing - removes the taint):
kubectl taint nodes --all node-role.kubernetes.io/master-
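Putting it together, a hedged sketch of the full single-node bring-up (the Flannel manifest URL is only an example; use whichever pod network add-on you prefer):
# initialize the control plane on the single host (the CIDR shown assumes Flannel)
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# make kubectl usable for your user
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a pod network add-on (Flannel used here as an example)
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# allow regular pods to be scheduled on the master
$ kubectl taint nodes --all node-role.kubernetes.io/master-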

You need to look into the hardware requirements for running a single-node cluster. You would need to run
etcd, which is the backing store for all cluster data,
the control plane software (kube-apiserver, kube-scheduler, kube-controller-manager), and
the worker node software (kubelet, kube-proxy)
all on one node.
When installing kubeadm I see the hardware requirements (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) as
2 GB or more of RAM per machine (any less will leave little room for your apps)
and 2 CPUs or more.
Example configurations for etcd (https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations).
For the CKA exam training material, the recommended setting for a single machine is 2 vCPUs and 7.5 GB of memory, with a note of caution that you may experience slowness.
I am going by Ubuntu 18.04 Linux for my recommendations. Another thing you need to do is disable swap (https://serverfault.com/questions/881517/why-disable-swap-on-kubernetes). This is necessary since Kubernetes expects to make full use of the CPU and disk resources it is given and does not account for swap.
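A minimal sketch of disabling swap on Ubuntu, assuming the swap entry lives in /etc/fstab:
# turn swap off for the running system
$ sudo swapoff -a
# keep it off across reboots by commenting out the swap line in /etc/fstab
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab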
So if it is for your learning, go ahead and start with 2 vCPUs and 7.5 GB of memory.

You could check k3s, KinD, or MicroK8s for single-node Kubernetes installations.
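Rough install sketches for each (the exact commands and channels may have changed, so check each project's docs):
# k3s: single-node server installed via the upstream script
$ curl -sfL https://get.k3s.io | sh -
# KinD: creates a cluster inside Docker containers (assumes Docker and the kind binary are present)
$ kind create cluster
# MicroK8s: installed as a snap
$ sudo snap install microk8s --classic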

Related

What is minikube config specifying?

According to the minikube handbook the configuration commands are used to "Configure your cluster". But what does that mean?
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
Are these the values it will reserve on the host machine in preparation for use?
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
Thanks for the help in advance.
Answering the question:
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
In short: it will be the limit for the whole minikube instance (a VM, a container, etc., depending on the --driver used). That allocation is shared by the underlying OS, the Kubernetes components and the workload that you are trying to run on it.
Are these the values it will reserve on the host machine in preparation for use?
I'd reckon this would be related to the --driver you are using and how it handles resources. I personally doubt it's reserving 100% of the CPU and memory you've passed to $ minikube start, and I'm more inclined to the idea that it uses as much as it needs during specific operations.
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
By default, when you create a minikube instance with $ minikube start ... you will create a single-node cluster capable of acting as a control-plane node and a worker node simultaneously. You will be able to run your workloads (like an nginx Deployment) without adding an additional node.
You can add a node to your minikube cluster with just $ minikube node add. This will create another node marked as a worker (with no control-plane components). You can read more about it here:
Minikube.sigs.k8s.io: Docs: Tutorials: Multi node
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
As said previously, you don't need to delete the minikube cluster to add another node. You can run $ minikube node add to add a node on a minikube host. There are also options to delete/stop/start nodes.
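For illustration, a hedged command sequence (the flags are standard minikube ones; adjust the values to your machine):
# start a single-node cluster with an explicit resource allocation
$ minikube start --cpus=2 --memory=7500mb
# add a worker-only node to the same cluster
$ minikube node add
# list the nodes minikube is managing
$ minikube node list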
Personally speaking, if the workload that you are trying to run requires multiple nodes, I would consider other Kubernetes clusters built on top of / with:
Kubeadm
Kubespray
Microk8s
This would allow you more flexibility in where you create your Kubernetes cluster (as far as I know, minikube works within a single host, like your laptop for example).
A side note!
There is an answer (written more than 2 years ago) which shows the way to add a Kubernetes cluster node to minikube here:
Stackoverflow.com: Answer: How do I get the minikube nodes in a local cluster
Additional resources:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
Github.com: Kubernetes sigs: Kubespray
Microk8s.io

Expandable single node K8s cluster

I am searching for a solution that enables me to set up a single-node K8s cluster and, if needed, add nodes to it later.
I am aware of solutions such as minikube and microk8s, but they are not expandable. I am trying k3s at the moment precisely because it offers this feature, but I have some problems with storage and other things that I am working on.
Now my questions:
What other solution for this exists?
What are the disadvantages if I untaint the master node and run everything there (for a long period and not just for testing)?
You can use kubeadm to set up a single-node "cluster". Then you can use the join command to add more nodes.
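As a sketch of how that join works (the token and hash values are placeholders):
# on the existing control-plane node: print a ready-made join command
$ kubeadm token create --print-join-command
# on the new node: run the command that was printed, which looks roughly like
$ sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>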
You can expand a k3s cluster via k3sup join. Here is a guide.
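A hedged k3sup example (the IP addresses and SSH user are placeholders; check the k3sup docs for the current flags):
# install the k3s server on the first machine
$ k3sup install --ip <server-ip> --user ubuntu
# join an additional machine to it as an agent
$ k3sup join --ip <agent-ip> --server-ip <server-ip> --user ubuntu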
Key Kubernetes services such as kube-apiserver and kube-scheduler should be available and running smoothly at all times on the master nodes. Therefore, it is essential to have dedicated resources for the master nodes and to avoid having other, non-critical workloads interfere with the functioning of the master services.
What are the disadvantages if I untaint the master node and run everything there (for a long period and not just for testing)?
Failure of the worker will of course bring down your applications. When you recover it or spin up another one, K8s will recover your apps for you.
Failure of the master will not adversely affect your running systems, only the cluster's ability to manage itself and its self-healing capabilities (which will affect uptime at some point).
I am searching for a solution that enables me to set up a single node K8s cluster and if I needed I add nodes to it later.
To the best of my knowledge, there is no such thing as a single-node, production-ready k8s cluster.
For something small and simple you can check Rancher.
What other solution for this exists?
kubeadm allows you to install everything on a single node. Install kubeadm on the node, run "kubeadm init", install a pod network, then remove the master taint.
Another solution you may be interested in is Kubespray.
Some "honorable mentions" are:
Charmed Kubernetes by Canonical allows you to do everything on one node; however, it should be quite a big node, so it may not be the case here (but still worth mentioning).
If you don't really require all the k8s power (with only one small node), then Nomad could be an alternative.
Let me know if that helps.

Running Kubernetes master and node on the same server (scheduling pods on Kubernetes master)

If you run this taint command on the Kubernetes master:
kubectl taint nodes --all node-role.kubernetes.io/master-
it allows you to schedule pods there, so it acts as both node and master.
I have tried to run a 3-server cluster where all nodes have both roles. I didn't notice any issues at first look.
Do you think this solution can be used nowadays to run a small cluster for a production service? If not, what are the real downsides? In which situations does this setup fail compared with the standard setup?
Assume that etcd is running on all three servers.
Thank you
The standard reason to run separate master nodes and worker nodes is to keep a busy workload from interfering with the cluster proper.
Say you have three nodes as proposed. One winds up running a database; one runs a Web server; the third runs an asynchronous worker pod. Suddenly you get a bunch of traffic into your system, the Rails application is using 100% CPU, the Sidekiq worker is cranking away at 100% CPU, the MySQL database is trying to handle some complicated joins and is both high CPU and also is using all of the available disk bandwidth. You run kubectl get pods: which node is actually able to service these requests? If your application triggers the Linux out-of-memory killer, can you guarantee that it won't kill etcd or kubelet, both of which are critical to the cluster working?
If this is running in a cloud environment, you can often get away with smaller (cheaper) nodes to be the masters. (Kubernetes on its own doesn't need a huge amount of processing power, but it does need it to be reliably available.)

kubernetes - can we create 2 node master-only cluster with High availability

I am new to Kubernetes and clustering.
I would like to bring up a High Availability, master-only Kubernetes cluster (no dedicated worker nodes needed).
I have 2 instances/servers running the Kubernetes daemons, with different kinds of pods running on both nodes.
Now I would like to somehow create the cluster so that if one of the hosts (host 2) goes down, all the pods from host 2 move to the other host (host 1).
Once host 2 comes back up, the pods should float back.
Please let me know if there is any way I can achieve this.
Since your requirement is to have a 2-node, master-only cluster that also has HA capabilities, unfortunately there is no straightforward way to achieve it.
The reason is that a 2-node, master-only cluster deployed by kubeadm has only 2 etcd pods (one on each node). This gives you no fault tolerance: if one of the nodes goes down, the etcd cluster would lose quorum and the remaining k8s master won't be able to operate.
Now, if you were OK with having an external etcd cluster, where you can maintain an odd number of etcd members, then yes, you can have a 2-node k8s cluster and still have HA capabilities.
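A minimal sketch of pointing kubeadm at an external etcd cluster (the endpoints, certificate paths and the kubeadm API version shown are placeholders/assumptions; check the kubeadm docs for your release):
$ cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://etcd-0.example.com:2379
      - https://etcd-1.example.com:2379
      - https://etcd-2.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
$ sudo kubeadm init --config kubeadm-config.yaml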
It is possible for a master node to also serve as a worker node; however, it is not advisable in production environments, mainly for performance reasons.
By default, kubeadm configures the master node so that no workload can run on it; only the regular nodes added later would handle it. But you can easily override this default behaviour.
In order to enable workloads to be scheduled on the master node as well, you need to remove the following taint from it, which is added by default:
kubectl taint nodes --all node-role.kubernetes.io/master-
To install and configure a multi-master Kubernetes cluster you can follow this tutorial. It describes a scenario with 3 master nodes, but you can easily customize it to your needs.
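A hedged outline of the kubeadm flow for multiple control-plane nodes (the load balancer address, token, hash and certificate key are placeholders):
# on the first control-plane node: point the API server address at a load balancer and upload the certs
$ sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs
# on each additional control-plane node: use the join command printed above, plus --control-plane
$ sudo kubeadm join LOAD_BALANCER_DNS:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>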

Creation of nodes in VMs by using Kubernetes kubeadm and minikube

I am trying to create a Kubernetes cluster with a number of nodes on the same machine. I want to create separate VMs and create a node in each of those VMs. I am currently exploring kubeadm and minikube for these tasks.
While exploring I ran into the following points of confusion:
I need to create 4 nodes, each in a different VM. Can I use kubeadm for this requirement?
I also found that minikube is used for creating a single-node setup and can also create the VM itself. What is the difference between kubeadm and minikube?
If I want to create nodes in different VMs, which tool should I use along with installing the Kubernetes cluster master?
If I am using VMs, can I directly install VMware Workstation / VirtualBox on my Ubuntu 16.04?
In AWS EC2, Ubuntu is already provided as a virtual machine. So is it possible to install VMware Workstation on that Ubuntu, given that it would be VMs inside another VM?
Kubeadm should be a good choice for you. It is quite easy to use by just following the documentation. Minikube would traditionally give you only single-node Kubernetes, although as of minikube 1.10.1 it is possible to create multi-node clusters.
Kubeadm is a tool to get Kubernetes up and running on an already existing machine. It will basically configure and start all the required Kubernetes components (for a minimum viable cluster). Kubeadm is the right tool to bootstrap the Kubernetes cluster on your virtual machines, but you need to prepare the machines yourself (install the OS + required software, networking, ...); kubeadm will not do that for you.
Minikube is a tool which allows you to start a single-node Kubernetes cluster locally. This is usually done in a VM - minikube supports VirtualBox, KVM and others. It will start the virtual machine for you and take care of everything, but it will not build a 4-node cluster for you.
Kubeadm takes care of both the master and the workers: you first set up the master and then use kubeadm on the worker nodes to join them to the master.
When you use kubeadm, it doesn't really care what you use for virtualization. You can choose whatever you want.
Why do you want to run virtual machines on top of your EC2 machine? Why not just create more (perhaps smaller) EC2 machines for the cluster? You can use this as inspiration: https://github.com/scholzj/terraform-aws-kubernetes. There are also some more advanced tools for setting up the whole cluster, such as Kops.