How to delete all files related to k8s (Kubernetes)?

I had configured a multi-master k8s setup repeatedly.
It looks like the previous master configuration is still in place.
The calico-node pod doesn't work properly.
I want to remove all k8s files.
Does anyone know how to remove them all?

Much depends on how you created the cluster.
If you used kubeadm, you can run kubeadm reset, which will revert the changes made by kubeadm init or kubeadm join.
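For example:
$ sudo kubeadm reset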
Besides that, you need to delete kubectl, kubeadm, etc. If you installed them using apt-get (this depends on the distro; the following example is for Ubuntu), you can purge them:
$ sudo apt-get purge kubeadm kubectl kube*
Then use autoremove to get rid of all the related dependencies:
$ sudo apt-get autoremove
And at the end remove the rest of the config:
$ sudo rm -rf ~/.kube
Also clean up the leftover ipvs/iptables rules with something like:
$ sudo ipvsadm --clear
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
If this doesn't answer your question, please add more information about your cluster.

Related

When I try to set up a k8s cluster on an Ubuntu machine, I get an error while running the kubeadm init command on the master

A. ON BOTH MASTER AND WORKER
Install and enable the container runtime:
i. sudo apt-get update
ii. sudo apt-get install docker.io
Install the k8s packages (kubelet, kubeadm, kubectl):
i. sudo apt-get update
ii. sudo apt-get install -y apt-transport-https ca-certificates curl
iii. sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
iv. echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
v. sudo apt-get update
vi. sudo apt-get install -y kubelet kubeadm kubectl
vii. sudo apt-mark hold kubelet kubeadm kubectl
B. ON MASTER ONLY
(RUN AS ROOT) Initialize the API server:
i. kubeadm init --apiserver-advertise-address= --pod-network-cidr=192.168.0.0/16
The output shows the master was bootstrapped successfully. As a non-root user, you can then create the .kube config and apply the network add-on, as sketched below. kubeadm init also prints the join command, which can be run on the worker nodes to join them to the k8s cluster.
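A minimal sketch of those post-init steps (the calico manifest URL is an assumption - use whichever network add-on matches your --pod-network-cidr - and the join values are placeholders for what your own kubeadm init printed):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# apply a pod network add-on (calico shown; manifest URL is an assumption)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# on each worker node, run the join command printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>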

Kops: autoscale policy did not configure the master node correctly?

I have a running k8s cluster created with kops. The autoscaling policy terminated the master machine and recreated a new one. Since then, every time I try to run a kubectl command it returns "The connection to the server was refused - did you specify the right host or port?". I tried to SSH into the master machine but did not find any of the k8s services, so I think the autoscale policy did not configure the master node correctly. What should I do in this situation?
Update: I also found this in the syslog file:
E: Package 'ebtables' has no installation candidate
Jun 25 12:03:33 ip-172-20-35-193 nodeup[7160]: I0625 12:03:33.389286 7160 executor.go:145] No progress made, sleeping before retrying 2 failed task(s)
The issue was that kops was unable to install ebtables and conntrack, so I installed them manually:
sudo apt-get -o Acquire::Check-Valid-Until=false update
sudo apt-get install -y ebtables --allow-unauthenticated
sudo apt-get install --yes conntrack
Everything is running fine now.
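Once nodeup finishes, you can confirm the master came up properly with kops' own health check (assuming your KOPS_STATE_STORE is already configured):
kops validate cluster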

How to change the default port of microk8s?

Microk8s is installed on the default port 16443. I want to change it to 6443. I am using Ubuntu 16.04 and installed microk8s using snapd and conjure-up.
None of the following options I tried worked:
Editing the port in /snap/microk8s/current/kubeproxy.config - the volume is read-only, so I could not edit it.
Editing /home/user_name/.kube/config and restarting the cluster.
Running the following command and restarting the cluster:
sudo kubectl config set clusters.microk8s-cluster.server https://my_ip_address:6443
Using kubectl proxy --port=6443 --address=0.0.0.0 --accept-hosts=my_ip_address &. It listens on 6443, but serves only HTTP, not HTTPS traffic.
That was initially resolved in microk8s issue 43, but detailed in microk8s issue 300:
This is the right one to use for the latest microk8s:
#!/bin/bash
# define our new port number
API_PORT=8888
# update kube-apiserver args with the new port
# tell other services about the new port
sudo find /var/snap/microk8s/current/args -type f -exec sed -i "s/8080/$API_PORT/g" {} ';'
# create new, updated copies of our kubeconfig for kubelet and kubectl to use
mkdir -p ~/.kube && microk8s.config -l | sed "s/:8080/:$API_PORT/" | sudo tee /var/snap/microk8s/current/kubelet.config > ~/.kube/microk8s.config
# tell kubelet about the new kubeconfig
sudo sed -i 's#${SNAP}/configs/kubelet.config#${SNAP_DATA}/kubelet.config#' /var/snap/microk8s/current/args/kubelet
# disable and enable the microk8s snap to restart all services
sudo snap disable microk8s && sudo snap enable microk8s
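After the snap comes back up, a quick sanity check that the new port is in use (assuming the API_PORT=8888 value from the script above):
# confirm the apiserver is listening on the new port
sudo ss -tlnp | grep 8888
# talk to the cluster through the kubeconfig the script wrote
kubectl --kubeconfig ~/.kube/microk8s.config get nodes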

Kubeadm init stops at image pulling

I'm having some trouble initializing the master using kubeadm.
I'm trying to follow https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ . I installed docker, kubelet, kubeadm and kubectl.
Now I executed kubeadm init, but it stops at [init] This might take a minute or longer if the control plane images have to be pulled.
I looked into journalctl and found: Unable to update cni config: No networks found in /etc/cni/net.d and Failed to list *v1.Pod: Get https://10.159.43.30:6443/api/v1/pods?fieldSelector=spec.nodeName%3Deskubernv01&limit=500&resourceVersion=0: dial tcp 10.159.43.30:6443: getsockopt: connection refused.
I tried to set up weave-net with kubectl apply -f https://git.io/weave-kube but it cannot connect: The connection to the server localhost:8080 was refused - did you specify the right host or port?.
I cannot copy the admin.conf file from /etc/kubernetes, which should allow me to connect, because kubeadm init failed, so those are not proper files.
I feel like I'm in a loop here and I'm missing something.
I'm out of options right now. Any ideas?
I found the way out.
If anyone has a problem like this - check docker logs.
In my case it was the proxy, which was unset for the docker service.
To set it, I did the following:
Create a systemd drop-in directory for the docker service:
$ sudo mkdir -p /etc/systemd/system/docker.service.d
Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Source: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
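Per those same Docker docs, the drop-in only takes effect after systemd reloads its units and the docker service restarts:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker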
I solved it by specifying version 1.9.7-00 when installing kubeadm, kubectl, and kubelet, like this:
# ----- Install kubernetes -----
# kubeadm docs: https://kubernetes.io/docs/setup/independent/install-kubeadm/
echo " "
echo - Installing Kubernetes...
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.9.7-00 kubeadm=1.9.7-00 kubectl=1.9.7-00
Note the pinned versions: kubelet=1.9.7-00 kubeadm=1.9.7-00 kubectl=1.9.7-00.
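If you pin versions this way, it can also help to hold the packages so a later apt-get upgrade doesn't move them (the same apt-mark hold step used in the install answer above):
apt-mark hold kubelet kubeadm kubectl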

Invalid x509 certificate for kubernetes master

I am trying to reach my k8s master from my workstation. I can access the master from the LAN fine, but not from my workstation. The error message is:
% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
What can I do to add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster, and then re-sign the client certificate? I deployed my cluster with kubeadm and I am not sure how to do these steps manually.
One option is to tell kubectl that you don't want the certificate to be validated. Obviously this raises security issues, but I guess you are only testing, so here you go:
kubectl --insecure-skip-tls-verify --context=employee-context get pods
The better option is to fix the certificate. The easiest way is to reinitialize the cluster by running kubeadm reset on all nodes, including the master, and then run:
kubeadm init --apiserver-cert-extra-sans=114.215.201.87
It's also possible to fix the certificate without wiping everything, but that's a bit trickier. Execute something like this on the master as root:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
On Kubernetes >= 1.8 the command is:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
It would also be better to add a DNS name to --apiserver-cert-extra-sans to avoid issues like this next time.
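For example (k8s-api.example.com is a hypothetical name; substitute your own):
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87,k8s-api.example.com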
For kubeadm v1.13.3:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
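To confirm which IPs and names the regenerated certificate actually covers, you can inspect its SANs with standard openssl tooling:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'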
If you used kubespray to provision your cluster, then you need to add a 'floating ip' (in your case it's '114.215.201.87'). This variable is called supplementary_addresses_in_ssl_keys in the group_vars/k8s-cluster/k8s-cluster.yml file. After updating it, just re-run ansible-playbook -b -v -i inventory/<WHATEVER-YOU-NAMED-IT>/hosts.ini cluster.yml.
NOTE: you still have to remove all the apiserver certs (rm /etc/kubernetes/pki/apiserver.*) from each of your master nodes prior to running!
Issue cause:
Your configs at $HOME/.kube/ still contain your old IP address.
Try running:
rm $HOME/.kube/* -rf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
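Then verify that kubectl reaches the apiserver with the refreshed config:
kubectl get nodes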
For Kubernetes 1.12.2/CentOS 7.4 the sequence is as follows:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=51.158.75.136
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
Use the following command:
kubeadm init phase certs all
For me, the error occurred when I was trying to access the cluster as root (after sudo -i). I exited, and with a normal user it worked.
For me the following helped:
rm -rf ~/.minikube
minikube delete
minikube start
Probably steps 2 and 3 alone would have been sufficient.