kubernetes lost ~/.kube/config

Unfortunately I lost my local ~/.kube/config, where I had the configuration for my namespace.
Is there a way to regenerate this config if I have access to the master nodes?
Thanks in advance.

I believe you're using kubeadm to bootstrap your Kubernetes cluster. If so, you can regenerate the kubeconfig file with the following command (this is the pre-1.13 syntax; on kubeadm 1.13+ the phase graduated to kubeadm init phase kubeconfig admin):
kubeadm alpha phase kubeconfig admin --kubeconfig-dir /etc/kubernetes --cert-dir /etc/kubernetes/pki
This will generate a new config file at /etc/kubernetes/admin.conf. Then copy it into place as follows:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
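A quick sanity check that the restored config actually talks to your cluster (assuming the API server is up and reachable from this machine):
kubectl config current-context
kubectl get nodes
# you should see your control-plane and worker nodes listed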

There's no need to re-install Kubernetes. Just copy the file from /etc/kubernetes/admin.conf:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
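If you'd rather not overwrite an existing ~/.kube/config, you can also point kubectl at the admin file directly (sudo because admin.conf is typically root-owned):
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes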

Thanks to @PrafullLadha, there's a similar solution for OpenShift:
Install kubeadm with snap:
sudo snap install kubeadm --classic
cd to your OpenShift cluster installation directory.
Make a copy of your TLS certificate:
cp ./tls/journal-gatewayd.crt ./tls/ca.crt
Make a copy of your TLS key:
cp ./tls/journal-gatewayd.key ./tls/ca.key
Run kubeadm as follows:
kubeadm init phase kubeconfig admin --kubeconfig-dir ./auth --cert-dir "${PWD}/tls"
It should output: [kubeconfig] Writing "admin.conf" kubeconfig file
Open ./auth/admin.conf (e.g. with vi), check that the certificates were added, and make sure the server address (https://api.your-cluster:6443) is correct.
Rename: mv ./auth/admin.conf ./auth/kubeconfig and you're all set.
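You can verify the resulting kubeconfig before relying on it (kubectl, like oc, works against an OpenShift API server):
export KUBECONFIG=./auth/kubeconfig
kubectl get nodes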

Try this (the kubeadm 1.13+ equivalent of the alpha phase command above):
kubeadm init phase kubeconfig admin
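By default this writes /etc/kubernetes/admin.conf; a quick way to test it before copying it anywhere:
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf cluster-info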

Related

change master ip address --apiserver-advertise-address

How do you change the IP address of the master or of any worker node?
I have experimented with:
kubeadm init --control-plane-endpoint=cluster-endpoint --apiserver-advertise-address=<x.x.x.x>
And then I guess I need the new config with the right certificate:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
I tried the following, suggested by Hajed.Kh:
Changed the IP address in:
etcd.yaml (contained the IP)
kube-apiserver.yaml (contained the IP)
kube-controller-manager.yaml (not this one?)
kube-scheduler.yaml (not this one?)
But I still get the same IP address in the config I copy out:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
The apiserver-advertise-address flag is set in the api-server manifest, and all Kubernetes control-plane component manifests are located in /etc/kubernetes/manifests/. These are static pod manifests watched by the kubelet in real time, so change and save a file and the component will be redeployed instantly:
etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
As for the worker nodes, I think they will pick up the change automatically as long as the kubelet can still reach the api-server.
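For example, a minimal sketch of that edit-and-redeploy cycle on the master (assuming the default kubeadm layout; <NEW_IP> is a placeholder):
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# update the line: --advertise-address=<NEW_IP>
# the kubelet watches this directory and restarts the static pod on save;
# watch the api-server come back up:
kubectl -n kube-system get pods -w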

Config not found: /etc/kubernetes/admin.conf -- After setting up kubeadm worker node

Following this tutorial, I set up a worker node for my cluster. However, after running the join command and attempting kubectl get node to verify the node was connected, I was met with the following error:
W0215 17:58:44.648813 3084402 loader.go:223] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Checking for the existence of admin.conf in /etc/kubernetes/ shows it does not exist. I have ensured that $HOME/.kube/config is also clear. Why is the join command not creating an admin.conf file?
TL;DR
Run join with sudo, then rename the generated config:
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/admin.conf
After some tinkering, I realized it was a combination of a permissions error and the correct file being generated with an incorrect name.
Instead of running kubeadm join ... bare, running it with sudo allowed the command to create the necessary files in /etc/kubernetes:
sudo kubeadm join <MASTER_IP:PORT> --token <TOKEN> --discovery-token-ca-cert-hash <HASH>
However, this doesn't generate an admin.conf; it does create a kubelet.conf. This appears to be expected: kubeadm join only provisions worker-node credentials, while admin.conf is created by kubeadm init on control-plane nodes. Running kubectl with the following parameter solved my issue:
kubectl get nodes --kubeconfig /etc/kubernetes/kubelet.conf
Rename kubelet.conf to admin.conf for your convenience at this point.
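If you'd rather not pass --kubeconfig on every call, exporting KUBECONFIG works too (the file is typically root-owned, so you may need sudo or a chown first):
export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes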

how to delete all files related to k8s

I have configured a multi-master k8s cluster repeatedly.
It looks like the previous master configuration still persists.
The calico-node pod doesn't work properly.
I want to remove all files related to k8s.
Could you tell me how to remove all of them?
Much depends on how you created the cluster.
If you used kubeadm you can run kubeadm reset, which will revert the changes made by kubeadm init or kubeadm join.
Besides that, you need to delete kubectl, kubeadm, etc. If you installed them using apt-get (this depends on the distro; the example below is for Ubuntu), you can purge them:
$ sudo apt-get purge kubeadm kubectl kube*
Then use autoremove to get rid of all related dependencies:
$ sudo apt-get autoremove
And at the end you should remove the rest of the config:
$ sudo rm -rf ~/.kube
Finally, run ipvsadm --clear, or iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X, or similar, to clean up leftover IPVS/iptables rules.
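Depending on how far the old installs got, there may also be leftover state directories. A sketch of the usual kubeadm locations; double-check before deleting, since this wipes etcd data:
sudo rm -rf /etc/kubernetes /var/lib/etcd /var/lib/kubelet /etc/cni/net.d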
If this doesn't answer your question, please add more information about your cluster.

put custom ssh public key into authorized_keys on minikube cluster

How do you put a custom ssh public key into authorized_keys on a minikube cluster? Why are changes to /home/docker/.ssh/authorized_keys lost after a reboot? How can I edit this file effectively?
This works (minikube v1.2.0):
minikube ssh
cd /var/lib/boot2docker
sudo cp userdata.tar userdata-backup.tar
cd /home/docker
echo YOUR_SSH_PUBLIC_KEY_HERE >> .ssh/authorized_keys
sudo tar cf /var/lib/boot2docker/userdata.tar .
This works because minikube extracts files from userdata.tar at boot; the source code is here.
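To check that the key really survives a restart (assuming a VM-based minikube driver and that the matching private key is at ~/.ssh/id_rsa):
minikube stop && minikube start
ssh -i ~/.ssh/id_rsa docker@$(minikube ip)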

Invalid x509 certificate for kubernetes master

I am trying to reach my k8s master from my workstation. I can access the master from the LAN fine, but not from my workstation. The error message is:
% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
What can I do to add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster, and then re-sign the client certificate? I deployed my cluster with kubeadm and I am not sure how to do these steps manually.
One option is to tell kubectl that you don't want the certificate to be validated. Obviously this raises security issues, but I guess you are only testing, so here you go:
kubectl --insecure-skip-tls-verify --context=employee-context get pods
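To persist this instead of typing the flag every time, you can set it on the cluster entry in your kubeconfig (employee-cluster is a placeholder for whatever cluster your context points at):
kubectl config set-cluster employee-cluster --insecure-skip-tls-verify=true
# kubectl refuses to combine an embedded CA with the insecure flag,
# so you may also need to drop the CA from that cluster entry:
kubectl config unset clusters.employee-cluster.certificate-authority-data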
The better option is to fix the certificate. It is easiest if you reinitialize the cluster by running kubeadm reset on all nodes, including the master, and then do:
kubeadm init --apiserver-cert-extra-sans=114.215.201.87
It's also possible to fix that certificate without wiping everything, but that's a bit more tricky. Execute something like this on the master as root:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
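To confirm the extra SAN actually made it into the regenerated certificate, you can inspect it with openssl:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'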
The same fix on older kubeadm releases (1.8 through 1.12) uses the alpha phase command:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
It would also be better to add a DNS name to --apiserver-cert-extra-sans, to avoid issues like this next time.
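For example, with the 1.13+ syntax (api.example.com standing in for your real DNS name):
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87,api.example.com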
For kubeadm v1.13.3
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
If you used kubespray to provision your cluster, then you need to add a 'floating ip' (in your case that is 114.215.201.87). The variable is called supplementary_addresses_in_ssl_keys in the group_vars/k8s-cluster/k8s-cluster.yml file. After updating it, just re-run ansible-playbook -b -v -i inventory/<WHATEVER-YOU-NAMED-IT>/hosts.ini cluster.yml.
NOTE: you still have to remove all the apiserver certs (rm /etc/kubernetes/pki/apiserver.*) from each of your master nodes prior to running!
Issue cause:
The configs at $HOME/.kube/ still contain your old IP address.
Try running:
rm -rf $HOME/.kube/*
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
For Kubernetes 1.12.2/CentOS 7.4 the sequence is as follows:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=51.158.75.136
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
Use the following command:
kubeadm init phase certs all
For me, the error occurred when I was trying to access the cluster as root (after sudo -i). I exited back to my normal user and it worked.
For me the following helped:
rm -rf ~/.minikube
minikube delete
minikube start
Probably steps 2 and 3 alone would have been sufficient.