How to install Kubernetes inside Kubernetes

I have a Kubernetes cluster with one master and 3 workers. I want to use it to create another Kubernetes cluster, meaning the other cluster's master and workers should run as containers. For example, building a cluster with 1 master and 5 workers this way.

Since all Kubernetes components except the kubelet can run as Pods, you can deploy the Kubernetes apiserver, controller-manager, and scheduler as Pods in another Kubernetes cluster.
You will need to create a Service exposing the API server as a NodePort.
In the next step, this NodePort can be used as the apiserver URL for the second cluster's kubelets.
The one challenge you will face: if you are running Calico, there can be only one instance of Calico on the master node.
So if you are using operators, your API server Pod will not be able to reach the operator controller Pod.
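As a minimal sketch of such a Service (the Service name, Pod label, and ports below are assumptions for illustration, not from the original answer):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nested-apiserver
spec:
  type: NodePort
  selector:
    app: nested-apiserver   # assumed label on the hosted kube-apiserver Pod
  ports:
  - port: 6443              # assumed apiserver port
    targetPort: 6443
    nodePort: 30443         # must fall in the cluster's NodePort range (default 30000-32767)
EOF
The second cluster's kubelets would then use https://<any-node-ip>:30443 as their API server URL.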

That is going to be very difficult. What are you using to run Kubernetes (minikube, Google Kubernetes Engine, etc.)?
If using minikube (local): minikube makes use of VirtualBox's hypervisor to create new "containers". If you create a cluster, you are already using that hypervisor. What hypervisor would your 'new' cluster use?
Right now, I think your question doesn't make much sense either. You want to deploy a 'new' Kubernetes cluster as a Pod? Where would this Pod get its resources? Let's say you used GKE (Google Cloud) and had a really big node (100 vCPUs, 1000 GB of RAM). Once you're in this node as a Pod and you create another cluster within it (theoretically), would this Pod act as the master node? What if this Pod goes down? If the master node goes down, the cluster is lost. Pods are ephemeral. It is theoretically possible, but there is no compelling reason to implement this. This isn't even an answer, but more of a probe for you to help answer our questions, so we can attempt to answer yours.

apt-get update
apt-get install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
apt-get update -y
apt-get install docker.io -y
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install kubelet kubeadm kubectl -y
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
mkdir -p /root/.kube
sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
sudo chown $(id -u):$(id -g) /root/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master-
Use the above commands to install Kubernetes and run it as a single-node cluster for demo purposes (OS family: Ubuntu 16.04/18.04).
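To sanity-check the result, a couple of standard kubectl commands (not part of the original list) can be run afterwards:
kubectl get nodes                 # the single node should eventually report Ready
kubectl get pods -n kube-system   # control-plane components and Weave Net should be Running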

Related

Installed minikube (Kubernetes) because I have only one master server/node, but it is not pointing to my IP, but to IP 172.17.0.2

Good afternoon, I am new to Kubernetes and I am installing Kubernetes for a development environment. I have a Red Hat server to act as node/master. I followed the steps below to install it, following the tutorial on this page:
https://www.linuxtechi.com/install-kubernetes-k8s-minikube-centos-8/
sudo dnf update -y
sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo systemctl start docker
sudo systemctl enable docker
sudo dnf install conntrack -y
#Installing Kubectl
sudo cat <<EOF > /etc/yum.repos.d/kubernetes.repo   # run as root; the repo definition was omitted in the original post
yum install -y kubectl   # run as root
#Installing Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
mkdir -p /usr/local/bin/
install minikube /usr/local/bin/
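The tutorial presumably finishes by starting the cluster; on a recent minikube that would look something like the following (the --driver flag is an assumption here; older releases used --vm-driver):
minikube start --driver=docker   # start a single-node cluster using the Docker driver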
However, when I configure minikube, it does not point to the IP of my server, but to 172.17.0.2:
minikube ip
172.17.0.2
kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 172.17.0.2:8443 was refused - did you specify the right host or port?
My IP is 10.154.7.209
What could I be doing wrong? If I can't use minikube to bring up a server as master/node, what can I use?
You are not doing anything wrong 👌. minikube is using the Docker driver, so 172.17.0.2 is the IP address of the container where your cluster is running.
But it looks like you are using a proxy somewhere on your system, so you need to include the 172.17.0.0/24 range in the NO_PROXY environment variable.
Something like this:
export HTTP_PROXY=http://<proxy hostname:port>
export HTTPS_PROXY=https://<proxy hostname:port>
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.0/24
minikube start
# get pods
minikube kubectl -- get pods --all-namespaces
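As a quick check, you can confirm which proxy variables are actually set in your shell:
env | grep -i proxy   # lists HTTP_PROXY/HTTPS_PROXY/NO_PROXY if configured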
✌️

The connection to the server localhost:8080 was refused - did you specify the right host or port? FAQ

I am new to Kubernetes. I got the error below while interacting with the cluster via kubectl get nodes.
ERROR:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
After searching on the internet, I fixed my issue with:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Your kubectl is probably not referring to the right kubeconfig file, or the kubeconfig file does not have the right details.
kubeadm init gives clear instructions to execute a few commands as a regular user; if you miss running them, you end up with the issue reported.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check back the output from kubeadm init and you will find something similar to the below, asking you to execute these commands:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Maybe you have not set the environment variable; try this:
export KUBERNETES_MASTER=http://MasterIP:8080
where MasterIP is your Kubernetes master IP.
The issue can also be with the current context used by kubectl. Please check the current-context in your kubeconfig file.
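To inspect and switch contexts, the standard kubectl config subcommands are:
kubectl config current-context              # show the context kubectl is using now
kubectl config get-contexts                 # list all contexts in the kubeconfig
kubectl config use-context <context-name>   # switch to another context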

How to deploy a microservice application with Kubernetes using three machines on my local network?

As I am new to Kubernetes, I still haven't figured out how to deploy an application on my local network using one or more physical machines.
The tutorials on the internet usually describe setups using minikube, but that is only for tests on a local machine, isn't it? Or setups where the deployment is performed on cloud platforms, like Google's.
I would really appreciate some guidance on where to begin. In my case, will I need to install only Kubernetes on the machines? Is it a trivial task?
One way to install Kubernetes is via kubeadm.
I've done this with my own VMs, but it should also work for your physical machines.
You can use kubeadm; here is a doc which will help you install it.
Each server will need to have kubeadm, kubelet and kubectl installed.
On the master node you run kubeadm init, and once the whole process is finished it prints a command you can run to add worker nodes.
Of course, those servers need to be able to reach each other on your network.
There are also other options to install Kubernetes. For example using kops or kubespray.
From https://kubernetes.io/docs/setup/learning-environment/minikube/
Minikube is a tool that makes it easy to run Kubernetes locally.
Minikube runs a single-node Kubernetes cluster inside a Virtual
Machine (VM) on your laptop for users looking to try out Kubernetes or
develop with it day-to-day
If you want to set up your own cluster, there are quite a few options, including kubeadm or kubespray.
Personally I've used Vagrant and Ansible to set up a local Kubernetes cluster using kubeadm - there's a good tutorial here... and here's my implementation :)
Kubeadm allows you to create a cluster using any machine that runs Docker.
This guide will help you create your first cluster with 1 master and 1 worker node. (This is based on Debian.)
Execute the following commands on both machines:
Installing Docker from official repository
$ sudo apt-get update
$ sudo apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
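Note that the new group membership normally takes effect at your next login; a common way to pick it up in the current shell is:
$ newgrp docker   # re-evaluate group membership without logging out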
Install kubeadm, kubectl and kubelet from official repository
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo su -c "cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF"
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
Execute the following commands only on your master node:
Initialize your cluster
$ sudo kubeadm init
When the init finishes, kubeadm will give you the command to add your workers to your cluster. Save this command for later use.
EXAMPLE:
kubeadm join 10.128.0.17:6443 --token 27evky.hoel95h16poqici6 \
--discovery-token-ca-cert-hash sha256:521f69cb935951bbfee142432108caeaeaf1682d8647208ba2f558624890ab63
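If you lose this output, a recent kubeadm can print a fresh join command on the master:
$ sudo kubeadm token create --print-join-command   # generates a new token and prints the full join command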
After the kubeadm init command completes, run the following commands to start using your new cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Your master is not ready yet; we need to deploy a CNI network (you can choose among different CNIs):
$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
Check if your master node is ready. It can take a while for all dependencies to start:
$ kubectl get nodes
EXAMPLE:
user#kubemaster:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
kubemaster   NotReady   master   32s   v1.16.2
To check more details about your node:
$ kubectl describe node <NODENAME>
When your master node is ready, you can proceed and run the kubeadm join command (saved previously) on your worker nodes:
$ sudo kubeadm join 10.128.0.17:6443 --token 27evky.hoel95h16poqici6 --discovery-token-ca-cert-hash sha256:521f69cb935951bbfee142432108caeaeaf1682d8647208ba2f558624890ab63
Check if your worker node is ready (this command must be executed on the master node):
$ kubectl get nodes
If you want to have more workers, just repeat the kubeadm join command on the new workers.

Cannot connect to Kubernetes api on AWS vm's

I have deployed Kubernetes following the official Kubernetes documentation.
I can see that Kubernetes is deployed, because at the end I got this:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
When I do kubectl get pods, it throws this error:
# kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
When I check the cluster info, it says the following:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
But when I look at the config, it shows the following:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.32.101:6443
Even telnet shows that there is a process listening on 6443, but not on 6553.
How can I change the port, and how can I fix this issue?
Any help would be of great use.
Thanks in advance.
It looks like your latest kubectl configuration interferes with a previous cluster's configuration.
It is possible to have settings for several different clusters in one .kube/config or in separate files.
But in some cases, you may want to manage only the cluster you've just created.
Note: After tearing down the exited cluster using kubeadm reset followed by initializing fresh cluster using kubeadm init, new certificates will be generated. To operate the new cluster, you have to update kubectl configuration or replace it with the new one.
To clean up old kubectl configurations and apply the last one, run the following commands:
rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This gives you the up-to-date configuration for the last cluster you created using the kubeadm tool.
Note: You should copy the kubectl configuration for all user accounts that you are going to use to manage the cluster.
Here are some examples of how to manage config file using the command line.
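For example, a standard way to view several kubeconfig files merged together (the second file path here is hypothetical):
KUBECONFIG=$HOME/.kube/config:$HOME/.kube/other-config kubectl config view --flatten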
I figured out the issue: it was the firewall on the machine. I could join nodes to the cluster once I allowed traffic via port 6443. I didn't fix the original issue with this post, but for beginners, use this K8s on AWS guide for a better idea.
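For reference, a sketch of opening that port, assuming ufw happens to be the active firewall on the node (on AWS you must also allow 6443 in the instance's security group):
sudo ufw allow 6443/tcp   # permit inbound API server traffic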
Thanks for the help guys...!!!

How to completely uninstall kubernetes

I installed a Kubernetes cluster using kubeadm following this guide. After some period of time, I decided to reinstall K8s, but I ran into trouble removing all the related files, and I could not find any docs on the official site about how to remove a cluster installed via kubeadm.
Has somebody run into the same problem and knows the proper way to remove all the files and dependencies? Thank you in advance.
For more information: I removed kubeadm, kubectl and kubelet using apt-get purge/remove, but when I started installing the cluster again I got the following errors:
[preflight] Some fatal errors occurred:
Port 6443 is in use
Port 10251 is in use
Port 10252 is in use
/etc/kubernetes/manifests is not empty
/var/lib/kubelet is not empty
Port 2379 is in use
/var/lib/etcd is not empty
In my "Ubuntu 16.04", I use next steps to completely remove and clean Kubernetes (installed with "apt-get"):
kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove
sudo rm -rf ~/.kube
And restart the computer.
Use the kubeadm reset command. This will un-configure the Kubernetes cluster.
If you are clearing the cluster so that you can start again then, in addition to what @rib47 said, I also do the following to ensure my systems are in a state ready for kubeadm init again:
kubeadm reset -f
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
systemctl restart docker
You then need to re-install docker.io, kubeadm, kubectl, and kubelet to make sure they are at the latest versions for your distribution before you re-initialize the cluster.
EDIT: Discovered that Calico adds firewall rules to the raw table, so that needs clearing out as well.
kubeadm reset

# On Debian-based operating systems:
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove

# On CentOS-based distributions:
sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*
sudo yum autoremove

# For all:
sudo rm -rf ~/.kube
The guide you linked now has a Tear Down section:
Talking to the master with the appropriate credentials, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm installed state:
kubeadm reset
I use the following script to completely uninstall an existing Kubernetes cluster and its running Docker containers:
sudo kubeadm reset
sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y
sudo apt autoremove
sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/
sudo systemctl daemon-reload
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# remove all running docker containers
docker rm -f `docker ps -a | grep "k8s_" | awk '{print $1}'`
If you want to make this easily repeatable, it makes sense to turn it into a script. This assumes you are using a Debian-based OS:
#!/bin/sh
# Kube Admin Reset
kubeadm reset
# Remove all packages related to Kubernetes
apt remove -y kubeadm kubectl kubelet kubernetes-cni
apt purge -y kube*
# Remove docker containers/images (optional if using docker)
docker image prune -a
systemctl restart docker
apt purge -y docker-engine docker docker.io docker-ce docker-ce-cli containerd containerd.io runc --allow-change-held-packages
# Remove parts
apt autoremove -y
# Remove all folder associated to kubernetes, etcd, and docker
rm -rf ~/.kube
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/lib/etcd2/ /var/run/kubernetes ~/.kube/*
rm -rf /var/lib/docker /etc/docker /var/run/docker.sock
rm -f /etc/apparmor.d/docker /etc/systemd/system/etcd*
# Delete docker group (optional)
groupdel docker
# Clear the iptables
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
NOTE:
This will destroy everything related to Kubernetes, etcd, and Docker on the node/server this script is run against!
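For instance, saving the script under a hypothetical name and running it as root:
sudo bash k8s-teardown.sh   # the filename is an example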