My Kubernetes PKI certificates expired (the API server certificate, to be exact) and I can't find a way to renew them. The error I get is:
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922595 8751 server.go:417] Version: v1.14.2
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922784 8751 plugins.go:103] No cloud provider specified.
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922800 8751 server.go:754] Client rotation is on, will bootstrap in background
May 27 08:43:51 node1 kubelet[8751]: E0527 08:43:51.925859 8751 bootstrap.go:264] Part of the existing bootstrap client certificate is expired: 2019-05-24 13:24:42 +0000 UTC
May 27 08:43:51 node1 kubelet[8751]: F0527 08:43:51.925894 8751 server.go:265] failed to run Kubelet: unable to load bootstrap
kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
The documentation at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ describes how to renew certificates, but it only works if the API server certificate is not already expired. I have tried to run
kubeadm alpha cert renew all
and reboot, but that just made the entire cluster fail, so I rolled back to a snapshot (my cluster is running on VMware).
The cluster is running and all containers seem to work, but I can't access it via kubectl, so I can't really deploy or query anything.
My kubernetes version is 1.14.2.
So the solution was (first take a backup):
$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot
then
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
That did the job for me. Thanks for your hints :)
This topic is also discussed in:
https://github.com/kubernetes/kubeadm/issues/581
After 1.15, kubeadm upgrade will automatically renew the certificates for you!
Also, 1.15 added a command to check certificate expiration in kubeadm.
Kubernetes: expired certificate
Kubernetes v1.15 provides docs for "Certificate Management with kubeadm":
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
Check certificate expiration:
kubeadm alpha certs check-expiration
Automatic certificate renewal:
kubeadm renews all the certificates during control plane upgrade.
Manual certificate renewal:
You can renew your certificates manually at any time with the kubeadm alpha certs renew command.
This command performs the renewal using CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki.
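For example, on a v1.15 control plane node the manual renewal flow looks roughly like this (a sketch; the alpha subcommand was promoted to plain kubeadm certs in later releases, and the static-pod restart trick is just one way to reload the new certs):
# renew everything kubeadm manages (or name a single cert, e.g. "apiserver")
kubeadm alpha certs renew all
# the control plane components only read certs at startup, so restart the static pods,
# e.g. by temporarily moving the manifests out of the kubelet's watch directory
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.off
sleep 20
mv /etc/kubernetes/manifests.off /etc/kubernetes/manifests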
Overall for Kubernetes v1.14 I find this procedure the most helpful:
https://stackoverflow.com/a/56334732/1147487
I was using Kubernetes v1.15.1 and updated my certificates as explained above, but I still got the same error.
The /etc/kubernetes/kubelet.conf was still referring to the expired/old "client-certificate-data".
After some research I found out that kubeadm does not update the /etc/kubernetes/kubelet.conf file if certificate renewal was not set to true. So please be aware of this kubeadm bug in versions below 1.17 (https://github.com/kubernetes/kubeadm/issues/1753).
kubeadm only updates kubelet.conf if the cluster upgrade was done with certificate-renewal=true. So I manually had to delete /etc/kubernetes/kubelet.conf and regenerate it with kubeadm init phase kubeconfig kubelet, which finally fixed my problem.
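For reference, the sequence on the affected node looked roughly like this (a sketch, assuming a kubeadm-managed control plane node; keep a backup of the old file):
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.bak
kubeadm init phase kubeconfig kubelet
systemctl restart kubelet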
Try to do the cert renewal via the kubeadm init phase certs command.
You can check cert expiration via the following commands:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text
openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text
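If you only want the expiry dates rather than the full text dump, something like this works:
for c in /etc/kubernetes/pki/*.crt; do echo "$c"; openssl x509 -enddate -noout -in "$c"; done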
First, ensure that you have a recent backup of the k8s certificate inventory under /etc/kubernetes/pki/.
Delete the apiserver.* and apiserver-kubelet-client.* cert files in the /etc/kubernetes/pki/ directory.
Spawn new certificates via the kubeadm init phase certs command:
sudo kubeadm init phase certs apiserver
sudo kubeadm init phase certs apiserver-kubelet-client
Restart kubelet and docker daemons:
sudo systemctl restart docker; sudo systemctl restart kubelet
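Optionally, verify that the API server is now serving the fresh certificate (assuming it listens on the default port 6443 on this host):
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -enddate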
You can find more related information in the official K8s documentation.
This will update all certs under /etc/kubernetes/ssl
kubeadm alpha certs renew all --config=/etc/kubernetes/kubeadm-config.yaml
and do this to restart the control plane components:
kill -s SIGHUP $(pidof kube-apiserver)
kill -s SIGHUP $(pidof kube-controller-manager)
kill -s SIGHUP $(pidof kube-scheduler)
[root@nrchbs-slp4115 ~]# kubectl get apiservices | egrep metrics
v1beta1.metrics.k8s.io   kube-system/metrics-server   True   125m
[root@nrchbs-slp4115 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   20d
metrics-server   ClusterIP   10.99.2.11   <none>        443/TCP                  125m
[root@nrchbs-slp4115 ~]# kubectl get ep -n kube-system
NAME                      ENDPOINTS                                               AGE
kube-controller-manager   <none>                                                  20d
kube-dns                  10.244.0.5:53,10.244.0.6:53,10.244.0.5:53 + 3 more...   20d
kube-scheduler            <none>                                                  20d
metrics-server            10.244.2.97:443                                         125m
[root@nrchbs-slp4115 ~]#
To help anyone else with a multi-master setup: I was searching for the answer after the first master had been updated. On the second master I did the following, based on what I found in another question:
kubeadm only updates kubelet.conf if the cluster upgrade was done with certificate-renewal=true. So I manually had to delete /etc/kubernetes/kubelet.conf and regenerate it with kubeadm init phase kubeconfig kubelet, which finally fixed my problem.
I use a config.yaml to configure the masters, so for me the answer was:
sudo -i
mkdir -p ~/k8s_backup/etcd
cd /etc/kubernetes/pki/
mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/k8s_backup
cd /etc/kubernetes/pki/etcd
mv {healthcheck-client.crt,healthcheck-client.key,peer.crt,peer.key,server.crt,server.key} ~/k8s_backup/etcd/
kubeadm init phase certs all --ignore-preflight-errors=all --config /etc/kubernetes/config.yaml
cd /etc/kubernetes
mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/k8s_backup
kubeadm init phase kubeconfig all --config /etc/kubernetes/config.yaml --ignore-preflight-errors=all
For good measure, I reboot:
shutdown -r now
The best solution is the one Kim Nielsen wrote:
$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot
With the following command, you can check when the new certificates will expire:
$ kubeadm alpha certs check-expiration --config=/etc/kubernetes/kubeadm-config.yaml
or
$ kubeadm certs check-expiration --config=/etc/kubernetes/kubeadm-config.yaml
However, if you have more than one master, you need to copy these files to them.
Log in to the second master and take a backup:
$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
Then log in to the first master (where you created the new certificates) and run the following commands (replace node2 with the IP address of the second master machine):
$ rsync /etc/kubernetes/pki/*.crt -e ssh root@node2:/etc/kubernetes/pki/
$ rsync /etc/kubernetes/pki/*.key -e ssh root@node2:/etc/kubernetes/pki/
$ rsync /etc/kubernetes/*.conf -e ssh root@node2:/etc/kubernetes/
In my case, I got the same issue after certificate renewal. My cluster was built using Kubespray. My kubelet stopped working, saying that the /etc/kubernetes/bootstrap-kubelet.conf file did not exist, so I looked up what this config does:
--bootstrap-kubeconfig string
Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated client certificate and key is written to the path specified by --kubeconfig. The client certificate and key file will be stored in the directory pointed by --cert-dir.
I understood that this file might not be needed.
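Before deciding whether a bootstrap file is needed at all, it can help to check which client certificate kubelet.conf actually points at (a diagnostic sketch; the paths assume a standard kubeadm/Kubespray layout):
grep client-certificate /etc/kubernetes/kubelet.conf
# on recent setups kubelet.conf usually references the rotated cert below
openssl x509 -enddate -noout -in /var/lib/kubelet/pki/kubelet-client-current.pem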
Note that I renewed k8s 1.19 certificates with:
kubeadm alpha certs renew apiserver-kubelet-client
kubeadm alpha certs renew apiserver
kubeadm alpha certs renew front-proxy-client
... and that was not sufficient.
The solution was
cp -r /etc/kubernetes /etc/kubernetes.backup
kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf
kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf
kubeadm alpha kubeconfig user --client-name system:node:YOUR_MASTER_HOSTNAME_IS_HERE --org system:nodes > /etc/kubernetes/kubelet.conf
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
I was able to solve this issue with the steps below:
openssl req -new -key <existing.key> -subj "/CN=system:node:<HOST_NAME>/O=system:nodes" -out new.csr
<existing.key> - use the key from kubelet.conf (see the extraction sketch after these steps)
<HOST_NAME> - the hostname in lower case (check the expired certificate with openssl, e.g. openssl x509 -in old.pem -text -noout)
openssl x509 -req -in new.csr -CA <ca.crt> -CAkey <ca.key> -CAcreateserial -out new.crt -days 365
<ca.crt> - from the master node
<ca.key> - from the master node
new.crt is the renewed certificate; replace the expired certificate with it.
Once replaced, restart kubelet and docker or any other container service.
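If the key is embedded in kubelet.conf as client-key-data (base64) rather than referenced as a file path, you can extract it first, for example (adjust the field name if your kubeconfig references a file instead):
grep client-key-data /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d > existing.key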
kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) >/etc/kubernetes/kubelet.conf
Renewing kubelet.conf helped me solve this issue.
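After overwriting kubelet.conf this way, you will most likely also need to restart the kubelet so it picks up the new file:
systemctl restart kubelet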
My certificates for Rancher server expired and now I cannot log in to the UI anymore to manage my k8s clusters.
Error:
2021-05-26 00:57:52.437334 I | http: TLS handshake error from 127.0.0.1:43238: remote error: tls: bad certificate
2021/05/26 00:57:52 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version?timeout=30s: x509: certificate has expired or is not yet valid
So what I did was roll back the date on the RancherOS machine that is running the Rancher Server container. After that I restarted the container and it refreshed the certificates. I checked with:
for i in `ls /var/lib/rancher/k3s/server/tls/*.crt`; do echo $i; openssl x509 -enddate -noout -in $i; done
Since I was now able to log into the UI, I forced a certificate rotation on the k8s cluster.
But I still get the same error once the date is reset to the current one, and I cannot log in to the Rancher Server UI.
What am I missing here?
This was the missing piece: https://github.com/rancher/rancher/issues/26984#issuecomment-818770519
Deleting the dynamic-cert.json and running kubectl delete secret
I recently had to do this again, and this is how I do it now:
sudo docker exec -it <container_id> sh -c "rm /var/lib/rancher/k3s/server/tls/dynamic-cert.json" && \
sudo docker exec -it <container_id> k3s kubectl --insecure-skip-tls-verify=true delete secret -n kube-system k3s-serving && \
sudo docker restart <container_id>
I want to set up an etcd cluster running on multiple nodes. I have 2 Ubuntu 18.04 machines running on Hyper-V.
I followed this guide on the official kubernetes site:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
Therefore, I adapted the scripts and executed them on HOST0 and HOST1:
export HOST0=192.168.101.90
export HOST1=192.168.101.91
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1")
for i in "${!ETCDHOSTS[#]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
local:
serverCertSANs:
- "${HOST}"
peerCertSANs:
- "${HOST}"
extraArgs:
initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380
initial-cluster-state: new
name: ${NAME}
listen-peer-urls: https://${HOST}:2380
listen-client-urls: https://${HOST}:2379
advertise-client-urls: https://${HOST}:2379
initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
After that, I executed this command on HOST0
kubeadm init phase certs etcd-ca
Then I created all the necessary certificates on HOST0:
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
# clean up certs that should not be copied off this host
find /tmp/${HOST1} -name ca.key -type f -delete
After that, I copied the files to the second etcd node (HOST1). Before that, I had created a user mbesystem with root access:
USER=mbesystem
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
I checked that all the files were there on HOST0 and HOST1.
On HOST0 I started the etcd cluster using:
kubeadm init phase etcd local --config=/tmp/192.168.101.90/kubeadmcfg.yaml
On HOST1 I started it using:
kubeadm init phase etcd local --config=/home/mbesystem/kubeadmcfg.yaml
After that I executed:
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://192.168.101.90:2379 endpoint health --cluster
I discovered my cluster is not healthy; I received a connection refused error.
I can't figure out what went wrong. Any help will be appreciated.
I've looked into it, reproduced what was in the link that you provided (Kubernetes.io: Set up HA etcd with kubeadm), and managed to make it work.
Here are some explanations, troubleshooting steps, and tips.
First of all, etcd should be configured with an odd number of nodes, i.e. it should be created as a 3- or 5-node cluster.
Why an odd number of cluster members?
An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for quorum. Although adding a node to an odd-sized cluster appears better since there are more machines, the fault tolerance is worse since exactly the same number of nodes may fail without losing quorum but there are more nodes that can fail. If the cluster is in a state where it can't tolerate any more failures, adding a node before removing nodes is dangerous because if the new node fails to register with the cluster (e.g., the address is misconfigured), quorum will be permanently lost.
-- Github.com: etcd documentation
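Concretely, with quorum = (n/2)+1: 3 members give a quorum of 2 and tolerate 1 failure; 4 members give a quorum of 3 and still tolerate only 1 failure; and the 2-member setup in the question has a quorum of 2, so it tolerates no failures at all.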
Additionally here are some troubleshooting steps:
Check if Docker is running. You can do that by running the following command (on a systemd-based OS):
$ systemctl show --property ActiveState docker
Check if the etcd container is running properly with:
$ sudo docker ps
Check the logs of the etcd container (if it is running) with:
$ sudo docker logs ID_OF_CONTAINER
How I've managed to make it work:
Assuming 2 Ubuntu 18.04 servers with IP addresses of:
10.156.0.15 and name: etcd-1
10.156.0.16 and name: etcd-2
Additionally:
SSH keys configured for root access
DNS resolution working for both of the machines ($ ping etcd-1)
Steps:
Pre-configuration before the official guide.
I did all of the below configuration with the usage of root account
Configure the kubelet to be a service manager for etcd.
Create configuration files for kubeadm.
Generate the certificate authority.
Create certificates for each member
Copy certificates and kubeadm configs.
Create the static pod manifests.
Check the cluster health.
Pre-configuration before the official guide
Pre-configuration of these machines was done following this StackOverflow post with Ansible playbooks:
Stackoverflow.com: 3 kubernetes clusters 1 base on local machine
You can also follow official documentation: Kubernetes.io: Install kubeadm
Configure the kubelet to be a service manager for etcd.
Run the commands below on etcd-1 and etcd-2 as root.
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
Restart=always
EOF
$ systemctl daemon-reload
$ systemctl restart kubelet
Create configuration files for kubeadm.
Create your configuration file on your etcd-1 node.
Here is a modified script that will create kubeadmcfg.yaml for only 2 nodes:
export HOST0=10.156.0.15
export HOST1=10.156.0.16
# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/
ETCDHOSTS=(${HOST0} ${HOST1})
NAMES=("etcd-1" "etcd-2")
for i in "${!ETCDHOSTS[#]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
local:
serverCertSANs:
- "${HOST}"
peerCertSANs:
- "${HOST}"
extraArgs:
initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380
initial-cluster-state: new
name: ${NAME}
listen-peer-urls: https://${HOST}:2380
listen-client-urls: https://${HOST}:2379
advertise-client-urls: https://${HOST}:2379
initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
Take a special look at:
export HOSTX on the top of the script. Paste the IP addresses of your machines there.
NAMES=("etcd-1" "etcd-2"). Paste the names of your machines (hostname) there.
Run this script from the root account and check if it created files in the /tmp/IP_ADDRESS directories.
Generate the certificate authority
Run the command below from the root account on your etcd-1 node:
$ kubeadm init phase certs etcd-ca
Create certificates for each member
Below is the part of the script responsible for creating certificates for each member of the etcd cluster. Please modify the HOST0 and HOST1 variables.
#!/bin/bash
HOST0=10.156.0.15
HOST1=10.156.0.16
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
Run the above script from the root account and check if there is a pki directory inside /tmp/10.156.0.16/.
There shouldn't be any pki directory inside /tmp/10.156.0.15/, as it's already in place.
Copy certificates and kubeadm configs.
Copy the kubeadmcfg.yaml of etcd-1 from /tmp/10.156.0.15 to the /root/ directory with:
$ mv /tmp/10.156.0.15/kubeadmcfg.yaml /root/
Copy the content of /tmp/10.156.0.16 from your etcd-1 to your etcd-2 node to /root/ directory:
$ scp -r /tmp/10.156.0.16/* root@10.156.0.16:
After that, check that the files were copied correctly and have correct permissions, then move the pki folder to /etc/kubernetes/ with this command on etcd-2:
$ mv /root/pki /etc/kubernetes/
Create the static pod manifests.
Run below command on etcd-1 and etcd-2:
$ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
All should be running now.
Check the cluster health.
Run the command below on etcd-1 to check the cluster health:
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.156.0.15:2379 endpoint health --cluster
Modify --endpoints https://10.156.0.15:2379 to use the correct IP address of your etcd-1 node.
It should give you a message like this:
https://10.156.0.15:2379 is healthy: successfully committed proposal: took = 26.308693ms
https://10.156.0.16:2379 is healthy: successfully committed proposal: took = 26.614373ms
The above message shows that etcd is working correctly, but please be aware of the caveats of running an even number of nodes.
Please let me know if you have any questions about that.
Running under Ubuntu, I used kubeadm init to set up my cluster (master node), copied /etc/kubernetes/admin.conf to $HOME/.kube/config, and all was well when using kubectl.
However, after a reboot my master node got a new IP address, which no longer matches what is in $HOME/.kube/config, so now I can no longer connect with kubectl.
So how do I regenerate the admin.conf now that I have a new IP address? Running kubeadm init would just kill everything, which is not what I want.
I found this solution on the internet and it works for me:
systemctl stop kubelet docker
cd /etc/
mv kubernetes kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup
mkdir -p kubernetes
cp -r kubernetes-backup/pki kubernetes
rm kubernetes/pki/{apiserver.*,etcd/peer.*}
systemctl start docker
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd
#Run "kubeadm reset" on all nodes if was this error "error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists"
cp kubernetes/admin.conf ~/.kube/config
kubectl get nodes --sort-by=.metadata.creationTimestamp
kubectl delete node $(kubectl get nodes -o jsonpath='{.items[?(@.status.conditions[0].status=="Unknown")].metadata.name}')
kubectl get pods --all-namespaces
After these steps, join your slaves to the master.
Reference: https://medium.com/@juniarto.samsudin/ip-address-changes-in-kubernetes-master-node-11527b867e88
The following command can be used to regenerate admin.conf:
kubeadm alpha phase kubeconfig admin --apiserver-advertise-address <new_ip>
However, if you use an IP instead of a hostname, your API server certificate will be invalid for the new address. So either regenerate your certs (kubeadm alpha phase certs renew apiserver), use hostnames instead of IPs, or add the insecure --insecure-skip-tls-verify flag when using kubectl.
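For example, to re-issue the API server certificate so that it includes the new IP, a rough sketch (using the old kubeadm alpha phase syntax this answer assumes; back up /etc/kubernetes/pki first) is:
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
kubeadm alpha phase certs apiserver --apiserver-advertise-address <new_ip>
Then restart the kube-apiserver static pod (or reboot the node) so it loads the new certificate.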
You do not want to use kubeadm reset. That will reset everything and you would have to start configuring your cluster again.
Well, in your scenario, please have a look at the steps below:
nano /etc/hosts (update your new IP against YOUR_HOSTNAME)
nano /etc/kubernetes/config (configuration settings related to your cluster); in this file, look for the following params and update them accordingly:
KUBE_MASTER="--master=http://YOUR_HOSTNAME:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME:2379" #2379 is default port
nano /etc/etcd/etcd.conf (conf related to etcd)
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME/WHERE_EVER_ETCD_HOSTED:2379"
2379 is the default port for etcd, and you can have multiple etcd servers defined here, comma separated.
Restart the kubelet, apiserver, and etcd services.
It is good to use a hostname instead of an IP to avoid such scenarios.
Hope it helps!
I am trying to make my Kubernetes cluster certificates valid for 5 years, so I have generated ca.crt, apiserver.crt, kubelet-client.crt, and front-proxy.crt with 5-year validity and placed them in /etc/kubernetes/pki.
Also, I have enabled client certificate rotation on my kubelet:
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki --feature-gates=RotateKubeletClientCertificate=true"
To verify my cluster keeps working, I changed the date on my system to 1 day before the 1-year expiration, and certificate rotation was done properly:
Oct 22 06:00:16 ip-10-0-1-170.ec2.internal kubelet[28887]: I1022 06:00:16.806115 28887 reconciler.go:154] Reconciler: start to sync state
Oct 22 06:00:23 ip-10-0-1-170.ec2.internal kubelet[28887]: I1022 06:00:23.546154 28887 transport.go:126] certificate rotation detected, shutting down client connections to start using new credentials
But once my cluster passes the one-year mark, any kubectl get nodes/pods command starts showing the error:
"error: You must be logged in to the server (Unauthorized)"
The only possible issue I can think of is that /etc/kubernetes/admin.conf contains certificates with only one year of validity. Thanks for your help.
Your client certificate (in /etc/kubernetes/admin.conf) is generated for one year. You can regenerate your client certificate using the following command:
kubeadm alpha phase kubeconfig admin --cert-dir /etc/kubernetes/pki --kubeconfig-dir /etc/kubernetes/
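You can confirm the validity period of the client certificate embedded in the regenerated admin.conf with something like:
grep client-certificate-data /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -dates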
I have figured out a way to regenerate a new admin.conf certificate before the cluster certificates expire.
Generate admin.key and admin.csr using openssl:
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/O=system:masters/CN=kubernetes-admin"
Now create a CSR object in Kubernetes using the admin.csr generated above:
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: admin_csr
spec:
  groups:
  - system:authenticated
  request: $(cat admin.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
Now approve the generated CSR using:
kubectl certificate approve admin_csr
Now extract admin.crt from the approved CSR:
kubectl get csr admin_csr -o jsonpath='{.status.certificate}' | base64 -d > admin.crt
Now change the current user and context to use the new admin key and certificates.
kubectl config set-credentials kubernetes-admin --client-certificate=/home/centos/certs/admin.crt --client-key=/home/centos/certs/admin.key
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin
After this step, your kubeconfig (in my case /root/.kube/config) has the new client certificate data and key.
Hope this helps.
Verified to work with K8s / kubeadm versions v1.14.x.
Make sure to perform these steps on every control plane node:
To manually regenerate certificates, use the following:
# all certs
kubeadm alpha certs renew all
# individual cert
# see `kubeadm alpha certs renew --help` for list
kubeadm alpha certs renew apiserver-kubelet-client
Here's a helpful script to automate checking cert expiration and renewing if expired: https://gist.github.com/anapsix/974d6c51c7691af45e33302a704ad72b
To regenerate the /etc/kubernetes/admin.conf config, one can use the following command, which is evidently a heavily guarded secret, since I was unable to find any documentation mentioning the crucial --org system:masters part:
kubeadm alpha kubeconfig user \
--org system:masters \
--client-name kubernetes-admin
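In practice the output is redirected to the usual kubeconfig locations, for example (paths are the standard kubeadm defaults; adjust as needed):
kubeadm alpha kubeconfig user --org system:masters --client-name kubernetes-admin > /etc/kubernetes/admin.conf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes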
kubeadm in K8s v1.15.x has new helpful capabilities.
Make sure to perform these steps on every control plane node:
To check expiration:
kubeadm alpha certs check-expiration
To renew (for example, the cert in scheduler.conf):
kubeadm alpha certs renew scheduler.conf
I'm using Arch Linux.
I had VirtualBox 5.2.12 installed.
I had minikube 0.27.0-1 installed.
I had Kubernetes v1.10.0 installed.
When I try to start minikube with sudo minikube start I get this error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0527 12:58:18.929483 22672 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: running command:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: exit status 1
I have already tried starting minikube with other options like:
sudo minikube start --kubernetes-version v1.10.0 --bootstrapper kubeadm
sudo minikube start --bootstrapper kubeadm
sudo minikube start --vm-driver none
sudo minikube start --vm-driver virtualbox
sudo minikube start --vm-driver kvm
sudo minikube start --vm-driver kvm2
I always get the same error. Can someone help me?
The minikube VM is usually started for simple experiments, without any important payload.
That's why it's much easier to recreate the minikube cluster than to try to fix it.
To delete the existing minikube VM, execute the following command:
minikube delete
This command shuts down and deletes the minikube virtual machine. No data or state is preserved.
Check that you have all dependencies in place and run the command:
minikube start
This command creates a “kubectl context” called “minikube”. This context contains the configuration to communicate with your minikube cluster. minikube sets this context to default automatically, but if you need to switch back to it in the future, run:
kubectl config use-context minikube
Or pass the context on each command like this:
kubectl get pods --context=minikube
More information about command line arguments can be found here.
Update:
The answer below did not work, due to what I suspect are version differences between my environment and the information I found, and I'm not willing to sink more time into this problem. The VM itself does start up, so if you have important information in it (e.g. other Docker containers), you can log in to the VM and extract such data from it before running minikube delete.
Same problem here. I SSH'ed into the VM and ran:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml
The result was
failure loading apiserver-kubelet-client certificate: the certificate has expired
So like any good engineer, I googled that and found this:
Source: https://github.com/kubernetes/kubeadm/issues/581
If you are using a version of kubeadm prior to 1.8, where I understand certificate rotation #206 was put into place (as a beta feature), or your certs already expired, then you will need to manually update your certs (or recreate your cluster, which it appears some (not just @kachkaev) end up resorting to).
You will need to SSH into your master node. If you are using kubeadm >= 1.8 skip to step 2.
1. Update kubeadm, if needed. I was on 1.7 previously.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
2. Backup old apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old
3. Generate new apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo kubeadm alpha phase certs apiserver --apiserver-advertise-address <IP address of your master server>
$ sudo kubeadm alpha phase certs apiserver-kubelet-client
$ sudo kubeadm alpha phase certs front-proxy-client
4. Backup old configuration files.
$ sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
$ sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
$ sudo mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
$ sudo mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old
5. Generate new configuration files. There is an important note here: if you are on AWS, you will need to explicitly pass the --node-name parameter in this request. Otherwise you will get an error like Unable to register node "ip-10-0-8-141.ec2.internal" with API server: nodes "ip-10-0-8-141.ec2.internal" is forbidden: node ip-10-0-8-141 cannot modify node ip-10-0-8-141.ec2.internal in your logs (sudo journalctl -u kubelet --all | tail), and the master node will report that it is Not Ready when you run kubectl get nodes.
Please be certain to replace the values passed in --apiserver-advertise-address and --node-name with the correct values for your environment.
$ sudo kubeadm alpha phase kubeconfig all --apiserver-advertise-address 10.0.8.141 --node-name ip-10-0-8-141.ec2.internal
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
6. Ensure that your kubectl is looking in the right place for your config files.
$ mv .kube/config .kube/config.old
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo chmod 777 $HOME/.kube/config
$ export KUBECONFIG=.kube/config
7. Reboot your master node.
$ sudo /sbin/shutdown -r now
8. Reconnect to your master node, grab your token, and verify that your master node is "Ready". Copy the token to your clipboard; you will need it in the next step.
$ kubectl get nodes
$ kubeadm token list
If you do not have a valid token, you can create one with:
$ kubeadm token create
The token should look something like 6dihyb.d09sbgae8ph2atjw
9. SSH into each of the slave nodes and reconnect them to the master.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
$ sudo kubeadm join --token=<token> <master-ip>:<master-port> --node-name <node-name>
10. Repeat step 9 for each connecting node. From the master node, you can verify that all slave nodes have connected and are ready with:
$ kubectl get nodes
Hopefully this gets you where you need to be @davidcomeyne.
As root, first try a full clean-up with:
minikube delete --all --purge
Then start up minikube; you will also need to copy the root certs for other users.
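For example, to make the freshly created cluster usable for a regular user after a root-driven start, something like the following usually suffices (the user name alice and the paths are placeholders; adjust to your system):
# hypothetical user "alice"; minikube stores certs under the starting user's home
sudo cp -r /root/.minikube /home/alice/.minikube
sudo cp -r /root/.kube /home/alice/.kube
sudo chown -R alice:alice /home/alice/.minikube /home/alice/.kube
# rewrite absolute cert paths inside the copied kubeconfig
sudo sed -i 's|/root/|/home/alice/|g' /home/alice/.kube/config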