How to specify "master" and "worker" nodes when using one machine to run Kubernetes? - kubernetes

I am using an Ubuntu 22.04 machine to run and test Kubernetes locally. I want functionality similar to Docker Desktop, where both the master and worker nodes appear to be installed on the same machine. But when I try to install Kubernetes and follow instructions like this, at some point it says to run the following command on the master node:
sudo hostnamectl set-hostname kubernetes-master
Or to run the following command on the worker node machine:
sudo hostnamectl set-hostname kubernetes-worker
I don't know how to specify master/worker nodes if I only have my local Ubuntu machine.
Should I run the join command after the kubeadm init command? I can't tell whether the commands I run in my terminal will be treated as commands for the master or for the worker machine.
I am a bit confused by this master/worker node (or client/server machine) distinction while I am using a single machine for both.

Prerequisites for installing kubernetes in cluster:
Ubuntu instance with 4 GB RAM - Master Node - (with ports open to all traffic)
Ubuntu instance with at least 2 GB RAM - Worker Node - (with ports open to all traffic)
It means you need to create separate instances (one master node and one or more worker nodes) from any cloud provider such as Google Cloud (GCP), Amazon (AWS), Atlantic.Net Cloud Platform, or CloudSigma, as per your convenience.
For creating an instance in GCP, follow this guide. If you don't have an account, create a new one; new customers also get $300 in free credits to run, test, and deploy workloads.
After creating the instances you will get their IPs; using them you can SSH into each instance from a terminal on your local machine with the command: ssh root@<ip address>
From there you can follow any guide for installing kubernetes by using worker and master nodes.
example:
sudo hostnamectl set-hostname <host name>
The above should be executed over SSH on the master node; similarly, you need to execute it on the worker node (with its own hostname).
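For example, assuming two instances with placeholder IPs 203.0.113.10 (master) and 203.0.113.11 (worker), the hostname setup would roughly look like this:
# on the master instance (IP is a placeholder)
ssh root@203.0.113.10
sudo hostnamectl set-hostname kubernetes-master
# on the worker instance (IP is a placeholder)
ssh root@203.0.113.11
sudo hostnamectl set-hostname kubernetes-worker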

The hostname has nothing to do with node roles.
If you run kubeadm init, the node will become a master node (now called a control plane node).
This node can also be used as a worker node (currently called just a node), but by default, Pods cannot be scheduled on the control plane node.
You can turn off this restriction by removing its taints with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
and then you can use this node as both control-plane and node.
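As a rough sketch (the pod network CIDR is just an example and depends on the CNI plugin you pick), a single-node setup with kubeadm could look like this:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # example CIDR, e.g. for Flannel
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a CNI add-on of your choice, then allow regular Pods on the control plane:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
Depending on your Kubernetes version, one of the two taint commands may report that the taint was not found; that is harmless.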
But I guess lightweight Kubernetes distributions like k0s, k3s, and MicroK8s are better options for your use case than kubeadm.
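For example, a single-node k3s or MicroK8s installation is essentially a one-liner (sketches below; check the official docs for current instructions):
curl -sfL https://get.k3s.io | sh -        # k3s single-node server
sudo snap install microk8s --classic       # MicroK8s on Ubuntu, via snap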

Related

What is minikube config specifying?

According to the minikube handbook the configuration commands are used to "Configure your cluster". But what does that mean?
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
Are these the values it will reserve on the host machine in preparation for use?
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
Thanks for the help in advance.
Answering the question:
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
In short: it will be a limit for the whole resource (a VM, a container, etc., depending on the --driver used). It will be shared by the underlying OS, the Kubernetes components, and the workload that you are trying to run on it.
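For example (the flag values are arbitrary), these limits are set when the cluster is created:
minikube start --driver=docker --cpus=4 --memory=8192   # give the whole minikube container/VM 4 CPUs and ~8 GiB of RAM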
Are these the values it will reserve on the host machine in preparation for use?
I'd reckon this is related to the --driver you are using and how it handles resources. I personally doubt it reserves 100% of the CPU and memory you pass to $ minikube start; I'm more inclined to think it uses as much as it needs during specific operations.
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
By default, when you create a minikube instance with $ minikube start you create a single-node cluster that acts as a control-plane node and a worker node simultaneously. You will be able to run your workloads (like an nginx Deployment) without adding an additional node.
You can add a node to your minikube ecosystem with just: $ minikube node add. This will make another node marked as a worker (with no control-plane components). You can read more about it here:
Minikube.sigs.k8s.io: Docs: Tutorials: Multi node
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
As said previously, you don't need to delete the minikube cluster to add another node. You can run $ minikube node add to add a node on a minikube host. There are also options to delete/stop/start nodes.
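A minimal sketch (the profile name multinode-demo is arbitrary):
minikube start --nodes 2 -p multinode-demo   # start a fresh 2-node cluster
minikube node add -p multinode-demo          # add one more worker node to it
minikube node list -p multinode-demo         # list all nodes of that profile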
Personally speaking, if the workload that you are trying to run requires multiple nodes, I would consider other Kubernetes clusters built on top of / with:
Kubeadm
Kubespray
Microk8s
This would allow you to have more flexibility on where you want to create your Kubernetes cluster (as far as I know, minikube works within a single host (like your laptop for example)).
A side note!
There is an answer (written more than 2 years ago) which shows the way to add a node to a minikube cluster here:
Stackoverflow.com: Answer: How do I get the minikube nodes in a local cluster
Additional resources:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
Github.com: Kubernetes sigs: Kubespray
Microk8s.io

Move kubernetes (kubespray) nodes to another IP range

I installed a Kubernetes cluster by using Kubespray on my internal network, 192.168.0.0/24.
Now I need more nodes and these nodes will be located on other networks.
So I will set up a VPN between the current nodes and the new nodes.
The problem is that I cannot find any information specifically related to kubespray on how to change the internal IPs of the nodes in order to "move them on the VPN".
I think that after moving the nodes onto the VPN, it's just a matter of adding the new nodes to the cluster and I'm set.
So: using Kubespray (or manually, if it's not possible via Kubespray directly), how can I change the internal IPs of the nodes in order to move them onto the VPN?
Kubespray supports kubeadm for cluster creation since v2.3 and deprecated non-kubeadm deployment starting from v2.8.
I assume that you can use kubeadm with your Kubespray installation.
I see two ways to achieve your goal, both from the Kubernetes side:
By using the ifconfig command (a sketch follows these steps):
run kubeadm reset on the node you want to reconfigure
run ifconfig <network interface> <IP address>
run kubeadm join in order to add the node again with the new IP
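A rough sketch of that sequence (interface name, IPs, token, and hash are placeholders; you can print a fresh join command on the control plane with kubeadm token create --print-join-command):
sudo kubeadm reset
sudo ifconfig eth1 10.8.0.5                  # move the node onto the VPN subnet
sudo kubeadm join 10.8.0.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>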
By editing the kubelet.conf file (a sketch follows these steps):
run systemctl status kubelet to find out the location of your kubelet.conf (usually /etc/kubernetes/kubelet.conf)
edit it by adding KUBELET_EXTRA_ARGS=--node-ip=<IP_ADDRESS>
run systemctl daemon-reload
run systemctl restart kubelet
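As a sketch, on Ubuntu the KUBELET_EXTRA_ARGS variable is usually read from /etc/default/kubelet (the exact file can differ per distribution, so treat the path as an assumption and check your kubelet systemd unit):
# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=10.8.0.5        # the node's new VPN address (placeholder)
# then reload and restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet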
Please let me know if that helped.

Run k8s on single node without minikube

Is it possible to run k8s on a single node without using minikube? Today I use kubeadm with 2 hosts, but I would like to know if it is possible to run it using only one host.
You can run the kubeadm init command to initialize a single-node cluster. You can add/remove nodes to/from the cluster later.
Untaint the master so that it can run containers, using the command below:
kubectl taint nodes --all node-role.kubernetes.io/master-
You need to look into the hardware requirements for running a single-node cluster. You would need to run
etcd, which is the backing store for all cluster data,
the control plane software (kube-apiserver, kube-scheduler, kube-controller-manager, kubeadm), and
the worker node software (kubelet, kube-proxy)
all on one node.
When installing kubeadm I see the hardware requirements (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) as
2 GB or more of RAM per machine (any less will leave little room for your apps)
and 2 CPUs or more
Example configurations for etcd (https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations).
In the CKA exam training material, the recommended setting for a single machine is 2 vCPUs and 7.5 GB of memory, with a note of caution that you may experience slowness.
I am going by Ubuntu 18.04 Linux for my recommendations. Another thing you need to do is disable swap (https://serverfault.com/questions/881517/why-disable-swap-on-kubernetes). This is necessary because, by default, the kubelet will not start with swap enabled.
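For example, disabling swap typically looks like this:
sudo swapoff -a                              # turn swap off for the running system
sudo sed -i '/swap/ s/^/#/' /etc/fstab       # comment out swap entries so it stays off after reboot; verify the file afterwards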
So if it is for your learning, go ahead and start with 2 vCPUs and 7.5 GB of memory.
You could check
k3s
KinD
MicroK8s
for single-node Kubernetes installations.

Is it possible to join the kubernetes worker node to kubernetes master without enabling ssh

I am installing a Kubernetes cluster on bare metal. One of the prerequisites:
An SSH key pair on your local Linux machines. This SSH key pair is used to join the worker node to the master.
Is it possible to join the kubernetes worker node to kubernetes master without enabling ssh?
Well, SSH is only required if you want to control all nodes from a single machine, e.g. for operations like transferring all the binaries/config files/certs required by the minions.
If you want to set up a Kubernetes cluster the really hard way, without the SSH part, you need to find an alternative way to run on each worker node a command* similar to this (assuming the rest of the prerequisites are already in place):
kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443
*I'm assuming you are bootstrapping your cluster with kubeadm; if not, please check this tutorial.
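As a sketch, the join command itself can be generated on the control plane and then executed on each worker through whatever channel replaces SSH for you (cloud-init/user-data, a serial console, configuration management, ...):
kubeadm token create --print-join-command   # on the control plane: prints a ready-to-use join command
# on each worker (values below are placeholders; use the output from above)
sudo kubeadm join 1.2.3.4:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:<hash>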

Creation of nodes in VMs by using Kubernetes kubeadm and minikube

I am trying to create a Kubernetes cluster with a varying number of nodes using the same machine. I want to create separate VMs and create a node in each of those VMs. I am currently exploring kubeadm and minikube for this task.
When I am exploring I had the following confusions:
I need to create 4 nodes, each in a different VM. Can I use kubeadm for this requirement?
I also found that Minikube is used for creating a single-node setup and can also create VMs. What is the difference between kubeadm and minikube?
If I want to create nodes in different VMs, which tool should I use along with the installation of the Kubernetes cluster master?
If I am using VMs, can I directly install VMware Workstation / VirtualBox on my Ubuntu 16.04?
In AWS EC2, they already give you Ubuntu as a virtual machine. So is it possible to install VMware Workstation on that Ubuntu, since it would be VMs inside another VM?
Kubeadm should be a good choice for you. It is quite easy to use by just following the documentation. Minikube traditionally gives you only single-node Kubernetes, although as of minikube 1.10.1 it is possible to use multi-node clusters.
Kubeadm is a tool to get Kubernetes up and running on already existing machines. It will basically configure and start all required Kubernetes components (for a minimum viable cluster). Kubeadm is the right tool to bootstrap the Kubernetes cluster on your virtual machines. But you need to prepare the machines yourself (install the OS + required software, networking, ...); kubeadm will not do that for you.
Minikube is a tool which allows you to start a single-node Kubernetes cluster locally. This is usually done in a VM - minikube supports VirtualBox, KVM, and others. It will start the virtual machine for you and take care of everything. But it will not create a 4-node cluster for you.
Kubeadm takes care of both. You first set up the master and then use kubeadm on the worker nodes to join them to the master.
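A rough outline of that flow (the join parameters are placeholders; kubeadm init prints the real ones when it finishes):
# on the VM chosen as master
sudo kubeadm init
# on each of the other VMs (worker nodes)
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>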
When you use kubeadm, it doesn't really care what you use for the virtualization. You can choose whatever you want.
Why do you want to run virtual machines on top of your EC2 machine? Why not just create more (perhaps smaller) EC2 machines for the cluster? You can use this as an inspiration: https://github.com/scholzj/terraform-aws-kubernetes. There are also some more advanced tools for setting up the whole cluster such as (for example) Kops.