I am new to Kubernetes and trying to set up a master plus two-node architecture using Oracle VirtualBox.
OS: Ubuntu 16.04.6 LTS
Docker: 17.03.2-ce
Kubernetes
Client Version: v1.17.4
Server Version: v1.17.4
When I run the join command on the worker node, the "kube-controller-manager" and "kube-apiserver" pods disappear and the worker nodes do not join (even though the join command reports success).
I have set Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs", but the error persists.
Please see the snapshot below.
Thanks.
The link you have provided is no longer available. While learning and trying out Kubernetes for the first time, I highly recommend using the official docs.
There you will find a detailed guide on Creating a single control-plane cluster with kubeadm. Note that:
To follow this guide, you need:
One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
2 GiB or more of RAM per machine–any less leaves little room for your apps.
At least 2 CPUs on the machine that you use as a control-plane node.
Full network connectivity among all machines in the cluster. You can use either a public or a private network.
You also need to use a version of kubeadm that can deploy the version
of Kubernetes that you want to use in your new cluster.
Kubernetes’ version and version skew support policy applies to kubeadm
as well as to Kubernetes overall. Check that policy to learn about
what versions of Kubernetes and kubeadm are supported. This page is
written for Kubernetes v1.18.
The kubeadm tool’s overall feature state is General Availability (GA).
Some sub-features are still under active development. The
implementation of creating the cluster may change slightly as the tool
evolves, but the overall implementation should be pretty stable.
If you encounter any issues, first try the troubleshooting steps.
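If the control-plane pods keep disappearing as in the question above, a cgroup-driver mismatch between Docker and the kubelet is a common culprit. A rough diagnostic sequence, run on the control-plane node (paths follow kubeadm defaults and may differ on your install):

```shell
# 1. Compare the cgroup driver Docker reports with the one kubelet is
#    started with -- a mismatch makes control-plane pods restart or vanish.
docker info 2>/dev/null | grep -i 'cgroup driver'
grep -ri 'cgroup-driver' /var/lib/kubelet/kubeadm-flags.env \
    /etc/systemd/system/kubelet.service.d/ 2>/dev/null

# 2. Watch the static control-plane pods; kube-apiserver and
#    kube-controller-manager should stay in the Running state.
kubectl get pods -n kube-system

# 3. Read the kubelet logs for the underlying error.
journalctl -u kubelet --no-pager | tail -n 50
```

Whatever driver Docker reports is the one the kubelet should be configured with, not the other way around.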
Please let me know if that helped.
Related
What are the steps for upgrading Kubernetes offline via kubeadm? I have a vanilla Kubernetes cluster with no access to the internet. When the kubeadm upgrade plan command is executed, it reaches out to the internet for the plan.
Kubernetes version: 22.1.2.
CNI used: flannel.
Cluster size: 3 masters, 5 workers.
Managing an offline Kubernetes cluster is time-consuming, because you need to set up your own package repositories and image registries. Once the nodes and registries are set up, you can upgrade the cluster as required. There are plenty of resources online that explain how to manage repositories for each OS distribution.
You can build your own images as required and push them to the registry; these images are then used to create the Pods. You also need to set up your own CA certificates, because container engines require SSL. Example SSL setup.
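One way to pre-stage the control-plane images is to mirror everything `kubeadm config images list` reports into your private registry. The sketch below assumes a hypothetical registry at `registry.example.local:5000`; the helper just rewrites an upstream image reference to point at it, and the commented loop shows how it would be used on a machine that does have internet access:

```shell
# Rewrite an upstream image reference so it points at a private registry,
# keeping the repository and tag (everything after the first '/').
mirror_name() {
  local image="$1" registry="$2"
  printf '%s/%s\n' "$registry" "${image#*/}"
}

# On an internet-connected machine, pull, retag and push each image:
# kubeadm config images list --kubernetes-version v1.22.1 | while read -r img; do
#   docker pull "$img"
#   docker tag  "$img" "$(mirror_name "$img" registry.example.local:5000)"
#   docker push "$(mirror_name "$img" registry.example.local:5000)"
# done

mirror_name k8s.gcr.io/kube-apiserver:v1.22.1 registry.example.local:5000
# prints: registry.example.local:5000/kube-apiserver:v1.22.1
```

The air-gapped nodes then pull from the private registry instead of the upstream one.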
For more information, refer to this Kubernetes community discussion forum.
I am already running a single-master Kubernetes cluster, and I am researching how to set up a highly available cluster. I was considering a multi-master setup, then realized a self-hosted cluster might be the more future-proof option.
An additional challenge is that I am treating this as bare metal (meaning I will use cloud VMs from providers such as Hetzner, Linode and DigitalOcean, which do have a CSI driver, cloud controller manager, etc.).
In this case, I see 2 options.
Setup with bootkube (https://github.com/kubernetes-sigs/bootkube)
Setup with kubeadm self-hosting. (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/)
I assume this is still an emerging topic, hence I am not able to find guidance on choosing the right approach, or solid documentation. I need this for a scalable production environment where I will start small with at least 8 nodes and may grow quickly.
Is bootkube a reasonable choice for future readiness?
Since kubeadm self-hosting is still in the alpha stage, am I taking a risk by running it in a production environment?
Is there any good documentation, blog or article pointing in this direction?
I use Keepalived + HAProxy and Ansible to deploy HA Kubernetes clusters. kubeadm now supports a join command for control-plane nodes, so it is easy to integrate with Ansible.
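For reference, the stacked control-plane flow with kubeadm looks roughly like this. The VIP 10.0.0.100 stands in for the Keepalived/HAProxy address, and the token, hash and certificate key are placeholders for the values kubeadm init prints:

```shell
# On the first control-plane node: initialise against the load-balanced
# endpoint and upload the certificates so other control planes can fetch them.
sudo kubeadm init --control-plane-endpoint "10.0.0.100:6443" --upload-certs

# On each additional control-plane node, run the join command printed by
# init (with the real token, hash and certificate key substituted):
sudo kubeadm join 10.0.0.100:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <key>
```

Because the output of init carries all the secrets the other nodes need, wrapping these two steps in Ansible tasks is straightforward.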
You can also refer: https://github.com/kubernetes-sigs/kubespray.
If you are adding worker nodes to a cluster, can you use different operating systems and OS versions, as long as the Kubernetes and Docker versions are the same? I would think so, provided the Kubernetes components within each OS are the same, but I have not tested this.
Yes, a cluster can be heterogeneous, with worker nodes running different operating systems. The services running on worker nodes are the container runtime, kubelet and kube-proxy, and they must be compatible with the control-plane version. You also need to make sure the CNI plugin is compatible with the different operating systems in the cluster.
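To see what each node in a mixed cluster is actually running, the node status already exposes the OS, kernel and kubelet version; for example:

```shell
# The wide output includes OS-IMAGE, KERNEL-VERSION and CONTAINER-RUNTIME.
kubectl get nodes -o wide

# Or extract just the fields of interest, one node per line:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'
```

Comparing the kubelet versions in that output against the control-plane version is a quick way to check the skew policy is respected.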
Yes, this is possible.
Since you have not tested this yourself, you can refer to the official Kubernetes documentation on adding a Windows-based node to a cluster whose control plane is Linux-based:
Guide for adding Windows Nodes in Kubernetes
I want to ask something, because I looked for it and couldn't find a clear answer anywhere.
Can kubelet be used on Windows 10?
All I have found is usage of kubelet on Linux operating systems.
My working theory is that kubectl is maybe the Windows version of kubelet?
I'm really confused and couldn't find any clear answer about kubelet on Windows, nor a comparison between kubelet and kubectl.
I'd be really grateful if someone could explain this to me.
Can kubelet be used in windows 10
The kubelet is one of the Node Components and is part of the Kubernetes infrastructure. It is required for Kubernetes to work properly, and it runs on Linux/Unix and Windows nodes.
Also what became my theory is that kubectl is the kubelet version of
windows operating system maybe?
kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
kubectl is a command-line interface for running commands against Kubernetes clusters. More information can be found in the documentation.
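A quick way to see the difference in practice (this assumes a working cluster and a Linux node managed by systemd):

```shell
# kubelet runs as a node-level service on each node:
systemctl status kubelet

# kubectl is just a client binary you run from any machine with a kubeconfig:
kubectl version --client
kubectl get nodes   # queries the API server, which aggregates what kubelets report
```

In other words, you never invoke kubelet by hand during day-to-day work; you only talk to the cluster through kubectl.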
Please visit Kubernetes Components to get familiar with the other Kubernetes components. Here you can find more information about kubelet, and here about the K8s infrastructure.
I'm really confused about it and couldn't find any clear answer about
kubelet in windows and about a comparison between kubelet and kubectl.
The two cannot really be compared: one is an infrastructure component, the other is a command-line tool for executing Kubernetes commands.
===
To run Kubernetes on Linux/Windows/macOS you need a container manager such as Docker. For Linux there are distribution packages to download; for Windows there is Docker for Windows. (Recent versions of Kubernetes also support Windows containers, but that is a different topic.)
To run Kubernetes locally on Windows, you can use Minikube. It runs a single-node Kubernetes cluster inside a virtual machine.
You can find how to configure Kubernetes on Windows in this tutorial.
Hope this helps you understand.
You can add a Windows node to the Kubernetes cluster by following the instructions on the official documentation page. As mentioned in the documentation, you can get all the required components using the links on the Kubernetes CHANGELOG-1.15.md page:
Client binaries (kubectl.exe)
Server binaries (no Windows binaries, because Windows cannot be a master node at the moment)
Node binaries (kube-proxy.exe, kubeadm.exe, kubectl.exe, kubelet.exe)
If you need a different version of the binaries, please find the CHANGELOG for that specific version on the Kubernetes Releases page.
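As an illustration, the per-release download URLs for the node binaries follow a predictable layout on the release buckets. The helper below builds those URLs for an assumed v1.15.12 patch release; treat the exact version and layout as assumptions and double-check against the CHANGELOG links:

```shell
# Build the download URL for a Windows node binary of a given release.
# The dl.k8s.io path layout here mirrors the official release buckets.
node_binary_url() {
  local version="$1" binary="$2"
  printf 'https://dl.k8s.io/%s/bin/windows/amd64/%s\n' "$version" "$binary"
}

# Print the URLs for all four Windows node binaries:
for bin in kubelet.exe kube-proxy.exe kubeadm.exe kubectl.exe; do
  node_binary_url v1.15.12 "$bin"
done
```

The resulting URLs can then be fetched with curl or a browser on the Windows machine.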
You need to have the Docker engine installed on your Windows machine. Here is the manual for doing that on Windows 10.
I'm looking to run Kubernetes in production on a single bare-metal machine, with no VM, but I can't seem to find a write-up for this scenario. The reason is that we have small on-premise installations, and we'd prefer to base everything on Kubernetes rather than maintaining two different environments: one cloud and one on-prem.
Update
With the official Ubuntu guide I managed to get it up and running, in conjunction with the following: http://www.dangtrinh.com/2017/09/how-to-deploy-openstack-in-single.html. The LXD version was wrong for what Conjure was expecting, and IPv6 needs to be turned off for LXD. Now I am the happy owner of an Intel NUC for testing, which acts as both a master and a node. Thanks for all the assistance in the comments!
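For anyone following the same single-machine path with plain kubeadm instead of the Ubuntu/conjure-up route, the equivalent trick is to remove the control-plane taint so ordinary workloads can schedule on the master. A minimal sketch, assuming kubeadm is installed and Flannel is the CNI (its default pod CIDR is shown):

```shell
# Initialise a single-node cluster with a pod CIDR matching Flannel's default.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# After installing the CNI, allow regular pods on the control-plane node:
kubectl taint nodes --all node-role.kubernetes.io/master-
```

The trailing dash on the taint name removes the taint, turning the machine into both master and worker.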