How to remove Kubernetes with all its dependencies on CentOS 7

I want to uninstall Kubernetes from CentOS 7 along with all its dependencies and files, such as:
kube-apiserver
kube-controller-manager
kubectl
kubelet
kube-proxy
kube-scheduler

Check this thread, or else these steps should help.
First remove all nodes from your k8s cluster (this also cleans up the pods running on them):
$ kubectl delete node --all
Then remove data volumes and back-ups (if they are not needed) from your host system. Finally, stop all the k8s services using this script (kubectl is a client binary, not a service, so it is omitted):
$ for service in kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler; do
    systemctl stop $service
done
$ yum -y remove kubernetes  # if it was installed as a package
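If the cluster was originally set up with kubeadm, a fuller cleanup might look like the sketch below; the package names and data directories are typical defaults rather than guarantees, so verify them on your system first:
$ kubeadm reset   # tears down kubeadm-managed cluster state on this node
$ yum -y remove kubeadm kubectl kubelet kubernetes-cni
$ rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd ~/.kube   # leftover config and data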

Related

Confluent Platform setup on minikube multinode cluster

I am trying to set up the Confluent Platform on a minikube multi-node cluster, following the Confluent documentation.
Below are the steps I followed:
Created a minikube multi-node cluster: minikube start --nodes 3
Verified the nodes: kubectl get nodes
Created a namespace: kubectl create namespace confluent && kubectl config set-context --current --namespace confluent
Installed the Confluent operator: helm upgrade --install confluent-operator confluentinc/confluent-for-kubernetes
Pulled the images below into minikube with minikube ssh docker pull <image-name> (see the loop sketch after this list):
docker.io/confluentinc/cp-server:7.3.0
docker.io/confluentinc/cp-server-connect:7.3.0
docker.io/confluentinc/cp-schema-registry:7.3.0
docker.io/confluentinc/cp-ksqldb-server:7.3.0
docker.io/confluentinc/cp-kafka-rest:7.3.0
docker.io/confluentinc/cp-enterprise-control-center:7.3.0
docker.io/confluentinc/confluent-operator:0.581.34
docker.io/confluentinc/confluent-init-container:2.5.0
Initiated the setup: kubectl apply -f https://raw.githubusercontent.com/confluentinc/confluent-kubernetes-examples/master/quickstart-deploy/confluent-platform.yaml
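Assuming the image list stays as above, the pulls can also be scripted. Note that on a multi-node cluster minikube ssh targets the primary node by default, so this may need to be repeated with minikube ssh -n <node> for the other nodes:
$ for img in cp-server:7.3.0 cp-server-connect:7.3.0 cp-schema-registry:7.3.0 \
      cp-ksqldb-server:7.3.0 cp-kafka-rest:7.3.0 \
      cp-enterprise-control-center:7.3.0 confluent-operator:0.581.34 \
      confluent-init-container:2.5.0; do
    # pull each image inside the minikube node's Docker daemon
    minikube ssh -- docker pull docker.io/confluentinc/$img
  done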
Now after initiating the setup, it starts by creating pods for zookeeper and connect.
These pods never succeed and keep restarting with Error and CrashLoopBackOff statuses.
I also tried to look into the logs of these pods, but no errors or exceptions are logged.
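When the container logs are empty, the pod's events usually explain CrashLoopBackOff restarts (failed probes, OOM kills, missing configuration). A generic debugging sketch, with <pod-name> as a placeholder:
$ kubectl describe pod <pod-name> -n confluent   # check the Events section at the bottom
$ kubectl get events -n confluent --sort-by=.lastTimestamp
$ kubectl logs <pod-name> -n confluent --previous   # logs of the previously crashed container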

How to restore an accidentally deleted kube-proxy DaemonSet in a Kubernetes cluster?

I accidentally deleted the kube-proxy DaemonSet with the command kubectl delete -n kube-system daemonset kube-proxy, which runs the kube-proxy pods in my cluster. What is the best way to restore it?
Kubernetes allows you to reinstall kube-proxy by running the following command, which installs the kube-proxy addon components via the API server.
$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address string
This will generate the following output:
[addons] Applied essential addon: kube-proxy
Here --apiserver-advertise-address is the IP address the API server will advertise it's listening on; if not set, the default network interface will be used.
Hence kube-proxy will be reinstalled in the cluster by creating a DaemonSet and launching the pods.
The kube-proxy DaemonSet was created at cluster-creation time, so unless you have a backup to restore it from, you need to recreate it yourself, either with the kubeadm command above or by writing your own DaemonSet manifest.
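If you happen to have access to another healthy cluster running the same Kubernetes version, a pragmatic alternative is to export its kube-proxy DaemonSet and re-apply it; a sketch, assuming such a cluster exists:
# on the healthy cluster
$ kubectl get daemonset kube-proxy -n kube-system -o yaml > kube-proxy-ds.yaml
# strip cluster-specific fields (status, uid, resourceVersion) from the file,
# then on the broken cluster:
$ kubectl apply -f kube-proxy-ds.yaml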

Kubernetes Deployment: Error: failed to create deployment:

Environment Details:
Kubernetes version: `v1.20.2`
Master Node: `Bare Metal/Host OS: CentOS 7`
Worker Node: `VM/Host OS: CentOS 7`
I have installed and configured the Kubernetes cluster: the master node runs on the bare-metal server, and the worker node is a CentOS 7 VM hosted on Windows Server 2012 Hyper-V. Both master and worker nodes run the same Kubernetes version (v1.20.2) and CentOS 7. The worker node joined the master successfully; below is the get nodes status.
$ kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
k8s-worker-node1   Ready    <none>                 2d2h   v1.20.2
master-node        Ready    control-plane,master   3d4h   v1.20.2
While creating a deployment on the worker node, I am getting the error message below.
On the worker node, I issued the following command.
$ kubectl create deployment nginx-depl --image=nginx
Error message is:
error: failed to create deployment: Post "http://localhost:8080/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Please help me resolve this issue, as I am not able to understand what the problem is.
You may have to run minikube start first. I'm learning, and between one class and another I forgot to run this command. I hope this helps someone.
This worked for me.
It seems that you are issuing the kubectl create deployment command on the worker node. This won't work because kubectl communicates with the kube-apiserver for cluster operations, and since the apiserver does not run on the worker node, executing the command there raises an error.
Instead, execute the same kubectl command on the master node as a non-root user, after running the following additional commands:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl create deployment nginx-depl --image=nginx
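If you do need to run kubectl from the worker node as well, you can copy the admin kubeconfig over from the master; a sketch, assuming SSH access and that master-node resolves to the master's address:
$ mkdir -p $HOME/.kube   # run on the worker node
$ scp root@master-node:/etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get nodes   # now talks to the real apiserver instead of localhost:8080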

`kubectl` not found. If you need it, try: 'minikube kubectl -- get pods -A'

I installed minikube on Windows 10. I am able to start minikube:
C:\WINDOWS\system32>minikube start
* minikube v1.15.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
But there is a warning in the above output (2nd-to-last line) that says:
kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
After that I also executed this command: minikube kubectl -- get pods -A
Still, I get the error below when trying kubectl:
C:\WINDOWS\system32>kubectl
'kubectl' is not recognized as an internal or external command,
operable program or batch file.
Minikube installs kubectl inside of itself.
So to use the kubectl which you installed via minikube, you have to prepend the command arguments with minikube kubectl --. For example:
# the same as `kubectl version --client`
minikube kubectl -- version --client
For convenience, you may want to add an alias in your shell configuration.
Source: https://minikube.sigs.k8s.io/docs/handbook/kubectl/
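For instance, in a bash shell the alias from the handbook linked above looks like this (add it to ~/.bashrc to make it permanent):
alias kubectl="minikube kubectl --"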
kubectl here is wrapped by minikube.
Don't forget to add a -- after minikube kubectl
minikube kubectl -- describe pod kube-scheduler-minikube --namespace kube-system
minikube kubectl -- get pods --namespace kube-system
You have installed minikube; kubectl is not part of the minikube package.
When you run minikube start, it tells you that kubectl is not present and that, if you need it, you can use minikube kubectl instead.
This is also mentioned here
If you already have kubectl installed, you can now use it to access your shiny new cluster
It means that kubectl might not be present on your machine, or that it has not been added to your PATH.
You can follow these instructions to install it either by downloading executable or by using curl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/windows/amd64/kubectl.exe
After that, add the binary to your PATH.
You can run kubectl version --client to ensure the correct version was downloaded.
Use doskey.exe to create an alias for kubectl.
Example:
doskey kubectl="%PROGRAMFILES%\Kubernetes\Minikube\minikube.exe" kubectl -- $*
You might need to update the path if you've installed minikube somewhere else.

Installing a Kubernetes pod network for cluster nodes hosted on VirtualBox VMs

On OS X 10.11.6, I created 4 CentOS 7 VMs in VirtualBox, each with two interfaces (one NAT and one host-only network). Each VM's host-only interface receives an IP via DHCP and DNS via dnsmasq.
OS X is running dnsmasq configured via a /usr/local/etc/dnsmasq.conf file that contains:
interface=vboxnet0
bind-interfaces
dhcp-range=vboxnet0,192.168.56.100,192.168.56.200,255.255.255.0,infinite
dhcp-leasefile=/usr/local/etc/dnsmasq.leases
local=/dev/
expand-hosts
domain=dev
address=/kube-master.dev/192.168.56.100
address=/kube-minion1.dev/192.168.56.101
address=/kube-minion2.dev/192.168.56.102
address=/kube-minion3.dev/192.168.56.103
address=/vbox-host.dev/192.168.56.1
dhcp-host=08:00:27:09:48:16,192.168.56.100
dhcp-host=0a:00:27:00:00:00,192.168.56.1
dhcp-host=08:00:27:95:AE:39,192.168.56.101
dhcp-host=08:00:27:97:C9:D4,192.168.56.102
dhcp-host=08:00:27:9B:AD:B5,192.168.56.103
I can ssh into each VM through its host-only adapter's associated address (e.g., kube-master.dev, kube-minion1.dev, kube-minion2.dev, kube-minion3.dev), run
yum update
and then, skipping a few steps, get to the point of installing kubeadm as per http://kubernetes.io/docs/getting-started-guides/kubeadm/, that is:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni ebtables
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
Then (it is unclear to me whether the following is correct) on kube-master.dev I execute:
kubeadm init --api-advertise-addresses=192.168.56.100 --api-external-dns-names=kube-master.dev
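(As an aside, those flags belong to an early kubeadm; later releases renamed them, so on a newer version the rough equivalent would be the sketch below, not what was run here.)
$ kubeadm init --apiserver-advertise-address=192.168.56.100 \
    --apiserver-cert-extra-sans=kube-master.dev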
And then on each minion execute:
rm -Rf /etc/kubernetes/manifests/
kubeadm join --token=e7cd12.68011e93d5db7670 192.168.56.100
On kube-master.dev, I then run
kubectl get nodes
to verify that each node has joined the cluster.
The command returns:
NAME               STATUS    AGE
kube-master.dev    Ready     44m
kube-minion1.dev   Ready     40m
kube-minion2.dev   Ready     39m
kube-minion3.dev   Ready     39m
indicating things are groovy.
Afterward, things go entirely off the rails when I attempt to install a pod network.
On kube-master.dev, I run:
kubectl apply -f https://git.io/weave-kube
to install Weave Net, and once the pod network is installed I start monitoring whether the network is working by executing:
watch kubectl get pods --all-namespaces
and the ready column of
kube-dns-654381707-05i1t 0/3
never moves off of zero.
So please, what am I doing wrong? I've hammered at this for days. The kubeadm documentation is a bit thin in a few places, so I'm not sure I init'ed the master correctly, and installing the pod network is a bit of conjecture on my part. Also, I haven't found a tutorial, other than the Kubernetes kubeadm guide and the associated YouTube video, documenting the use of kubeadm to set up a Kubernetes cluster.
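One thing worth checking with dual-NIC VirtualBox VMs is whether the nodes registered their host-only IPs rather than the NAT ones, since that mismatch commonly breaks pod networking; a diagnostic sketch using the pod name reported above:
$ kubectl describe node kube-minion1.dev | grep -A 3 Addresses   # should list the 192.168.56.x host-only address
$ kubectl describe pod kube-dns-654381707-05i1t --namespace=kube-system   # the Events section shows why containers are not starting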