How do you change the IP address of the master or any worker node?
I have experimented with:
kubeadm init --control-plane-endpoint=cluster-endpoint --apiserver-advertise-address=<x.x.x.x>
And then I guess I need the new config with the right certificate:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
I tried the following, suggested by Hajed.Kh:
Changed the IP address in:
etcd.yaml (contained ip)
kube-apiserver.yaml (contained ip)
kube-controller-manager.yaml (not this one?)
kube-scheduler.yaml (not this one?)
But I still get the same IP address in the config copied with:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
The apiserver-advertise-address flag is located in the api-server manifest file, and all Kubernetes component manifests are located in /etc/kubernetes/manifests/. These are static pod manifests watched by the kubelet, so change and save a file and that component will be redeployed instantly:
etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
For the worker nodes, I think changes will be picked up automatically as long as the kubelet is connected to the api-server.
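For example, a minimal sketch with placeholder addresses (10.0.0.10 is the old IP and 10.0.0.20 the new one; substitute your own):

sudo grep advertise-address /etc/kubernetes/manifests/kube-apiserver.yaml   # shows the current --advertise-address flag
sudo sed -i 's/10\.0\.0\.10/10.0.0.20/g' /etc/kubernetes/manifests/kube-apiserver.yaml   # swap old IP for new; the kubelet restarts the pod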
Related
Following this tutorial, I set up a worker node for my cluster. However, after running the join command and attempting kubectl get node to verify the node was connected, I am met with the following error
W0215 17:58:44.648813 3084402 loader.go:223] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Checking for the existence of admin.conf in /etc/kubernetes/ shows it does not exist. I have ensured that $HOME/.kube/config is also clear. Why is the join command not creating an admin.conf file?
TLDR
Run join with sudo
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/admin.conf
After some tinkering, I realized it was a combination of a permissions error and the correct file being generated with an incorrect name.
Instead of running kubeadm join ... unprivileged, running it with sudo allowed the command to create the necessary files in /etc/kubernetes:
sudo kubeadm join <MASTER_IP:PORT> --token <TOKEN> --discovery-token-ca-cert-hash <HASH>
However, this doesn't generate an admin.conf, but it does create a kubelet.conf. I'm unsure why this happens and couldn't find any documentation on this behavior; however, running kubectl with the following parameter solved my issue:
kubectl get nodes --kubeconfig /etc/kubernetes/kubelet.conf
Rename kubelet.conf to admin.conf for your convenience at this point.
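If you want kubectl to pick the file up without passing --kubeconfig every time, a minimal sketch of that rename (run as root, since files under /etc/kubernetes are typically readable only by root) is:

mv /etc/kubernetes/kubelet.conf /etc/kubernetes/admin.conf
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes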
Now I did three things:
First, install kubectl on one Linux machine.
Second, copy the admin.conf file from the remote k8s server to the ~/.kube/ directory on the Linux host.
Third, running kubectl get nodes under Linux reports an error.
wanlei@kf-test:~/.kube$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I want to know what steps I have missed.
The goal is to use kubectl from my linux host to manage k8s on the remote host
You need to place the kubeconfig file at ~/.kube/config, i.e. there should be a file named config in the .kube directory. That's where kubectl looks for the kubeconfig file by default.
An alternative to the above would be defining the KUBECONFIG environment variable to point to a kubeconfig file in a different location.
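For example, assuming admin.conf ended up in ~/.kube/ as in the question, either rename it or point KUBECONFIG at it:

mv ~/.kube/admin.conf ~/.kube/config   # kubectl reads ~/.kube/config by default
kubectl get nodes

export KUBECONFIG=~/.kube/admin.conf   # or keep the name and set the variable instead
kubectl get nodes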
What happened:
When I reboot the CentOS 7 server and run kubectl get pod, I see the error below:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What you expected to happen:
Before I rebooted the system, the Kubernetes cluster had three nodes, and all pods/services/etc. were working fine.
How to reproduce it (as minimally and precisely as possible):
reboot the server
kubectl get pod
Anything else we need to know?
I even used sudo kubeadm reset and init again but the issue still exists!
There are a few things to consider:
kubeadm reset performs a best effort revert of changes made by kubeadm init or kubeadm join. So some configurations may stay on the cluster.
Make sure you run kubectl as the proper user. You might need to copy the admin.conf to .kube/config in the user's home directory.
After kubeadm init you need to run the following commands:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Make sure you do so.
Check CentOS' firewall configuration. After the restart it might go back to defaults.
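For example, with firewalld you could re-open the ports a kubeadm control plane typically needs (the list below is the usual default set and may differ in your setup):

sudo firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
sudo firewall-cmd --reload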
Please let me know if that helped.
I am trying reach my k8s master from my workstation. I can access the master from the LAN fine but not from my workstation. The error message is:
% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
What can I do to add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster and then re-sign the client certificate? I have deployed my cluster with kubeadm and I am not sure how to do these steps manually.
One option is to tell kubectl that you don't want the certificate to be validated. Obviously this brings up security issues but I guess you are only testing so here you go:
kubectl --insecure-skip-tls-verify --context=employee-context get pods
The better option is to fix the certificate. It's easiest if you reinitialize the cluster by running kubeadm reset on all nodes including the master, and then do:
kubeadm init --apiserver-cert-extra-sans=114.215.201.87
It's also possible to fix that certificate without wiping everything, but that's a bit more tricky. Execute something like this on the master as root:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
This is the command for newer Kubernetes (>= 1.8):
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
It would also be better to add a DNS name to --apiserver-cert-extra-sans to avoid issues like this in the future.
For kubeadm v1.13.3
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
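Whichever variant matches your version, you can verify that the regenerated certificate now includes the extra address by listing its SANs (the path assumes the default kubeadm PKI directory):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'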
If you used kubespray to provision your cluster, then you need to add a 'floating ip' (in your case it's 114.215.201.87). This variable is called supplementary_addresses_in_ssl_keys in the group_vars/k8s-cluster/k8s-cluster.yml file. After updating it, just re-run your ansible-playbook -b -v -i inventory/<WHATEVER-YOU-NAMED-IT>/hosts.ini cluster.yml.
NOTE: you still have to remove all the apiserver certs (rm /etc/kubernetes/pki/apiserver.*) from each of your master nodes prior to running!
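As a sketch (the inventory name mycluster below is a placeholder), the variable is a list of extra addresses that end up in the certificate SANs:

grep -n supplementary_addresses_in_ssl_keys inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# edit that line so it reads, for example:
#   supplementary_addresses_in_ssl_keys: [114.215.201.87]
ansible-playbook -b -v -i inventory/mycluster/hosts.ini cluster.yml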
Issue cause:
The config at $HOME/.kube/ still contains your old IP address.
Try running:
rm $HOME/.kube/* -rf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
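To double-check that the refreshed config now points at the new address, you can print the server it targets:

kubectl config view --minify | grep server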
For Kubernetes 1.12.2/CentOS 7.4 the sequence is as follows:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=51.158.75.136
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
Use the following command:
kubeadm init phase certs all
For me, I got the error when I was trying to access the cluster as root (after sudo -i). I exited, and with a normal user it was working.
For me the following helped:
rm -rf ~/.minikube
minikube delete
minikube start
Probably items 2 and 3 would have been sufficient.
So I have a Kubernetes cluster, and I am using Flannel for an overlay network. It has been working fine (for almost a year actually) then I modified a service to have 2 ports and all of a sudden I get this about a completely different service, one that was working previously and I did not edit:
<Timestamp> <host> flanneld[873]: I0407 18:36:51.705743 00873 vxlan.go:345] L3 miss: <Service's IP>
<Timestamp> <host> flanneld[873]: I0407 18:36:51.705865 00873 vxlan.go:349] Route for <Service's IP> not found
Is there a common cause to this? I am using Kubernetes 1.0.X and Flannel 0.5.5 and I should mention only one node is having this issue, the rest of the nodes are fine. The bad node's kube-proxy is also saying it can't find the service's endpoint.
Sometimes flannel will change its subnet configuration... you can tell this if the IP and MTU from cat /run/flannel/subnet.env don't match ps aux | grep docker (or cat /etc/default/docker)... in which case you will need to reconfigure docker to use the new flannel config.
First you have to delete the docker network interface
sudo ip link set dev docker0 down
sudo brctl delbr docker0
Next you have to reconfigure docker to use the new flannel config.
Note: sometimes this step has to be done manually (i.e. read the contents of /run/flannel/subnet.env and then alter /etc/default/docker)
source /run/flannel/subnet.env
echo DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\" | sudo tee /etc/default/docker
Finally, restart docker
sudo service docker restart
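To confirm docker actually picked up the flannel settings after the restart, a quick comparison is:

cat /run/flannel/subnet.env    # the FLANNEL_SUBNET and FLANNEL_MTU flannel is using now
grep bip /etc/default/docker   # the values docker was told to use
ip addr show docker0           # the subnet the docker bridge actually has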