Rancher Kubernetes Dashboard - Service Unavailable - kubernetes

I am new to Rancher and to containers in general. While setting up a Kubernetes cluster using Rancher, I'm running into a problem accessing the Kubernetes dashboard.
rancher/server: 1.6.6
Single node Rancher server + External MySQL + 3 agent nodes
Infrastructure Stack versions:
healthcheck: v0.3.1
ipsec: net:v0.11.5
network-services: metadata:v0.9.2 / network-manager:v0.7.7
scheduler: k8s:v1.7.2-rancher5
kubernetes (if applicable): kubernetes-agent:v0.6.3
# docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.03.1-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.34-rancher
Operating System: RancherOS v1.0.3
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.798 GiB
Name: ch7radod1
ID: IUNS:4WT2:Y3TV:2RI4:FZQO:4HYD:YSNN:6DPT:HMQ6:S2SI:OPGH:TX4Y
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Http Proxy: http://proxy.ch.abc.net:8080
Https Proxy: http://proxy.ch.abc.net:8080
No Proxy: localhost,.xyz.net,abc.net
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Accessing the UI URL http://10.216.30.10/r/projects/1a6633/kubernetes-dashboard:9090/# shows “Service unavailable”.
If I use the CLI section from the UI, I get the following:
> kubectl get nodes
NAME STATUS AGE VERSION
ch7radod3 Ready 1d v1.7.2
ch7radod4 Ready 5d v1.7.2
ch7radod1 Ready 1d v1.7.2
> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-4285517626-4njc2 0/1 ContainerCreating 0 5d
kube-system kube-dns-3942128195-ft56n 0/3 ContainerCreating 0 19d
kube-system kube-dns-646531078-z5lzs 0/3 ContainerCreating 0 5d
kube-system kubernetes-dashboard-716739405-lpj38 0/1 ContainerCreating 0 5d
kube-system monitoring-grafana-3552275057-qn0zf 0/1 ContainerCreating 0 5d
kube-system monitoring-influxdb-4110454889-79pvk 0/1 ContainerCreating 0 5d
kube-system tiller-deploy-737598192-f9gcl 0/1 ContainerCreating 0 5d
The setup uses a private registry (Artifactory). I checked Artifactory and could see several Docker-related images present. I was going through the private registry section and also saw this file. In case this file is required, where exactly do I keep it so that Rancher can fetch it and configure the Kubernetes dashboard?
UPDATE:
$ sudo ros engine switch docker-1.12.6
> ERRO[0031] Failed to load https://raw.githubusercontent.com/rancher/os-services/v1.0.3/index.yml: Get https://raw.githubusercontent.com/rancher/os-services/v1.0.3/index.yml: Proxy Authentication Required
> FATA[0031] docker-1.12.6 is not a valid engine
I thought maybe it was due to NGINX, so I stopped the NGINX container, but I am still getting the above error. I have tried the same command on this Rancher server before and it used to work fine. It works fine on the agent nodes, although they already have 1.12.6 configured.
UPDATE 2:
> kubectl -n kube-system get po
NAME READY STATUS RESTARTS AGE
heapster-4285517626-4njc2 1/1 Running 0 12d
kube-dns-2588877561-26993 0/3 ImagePullBackOff 0 5h
kube-dns-646531078-z5lzs 0/3 ContainerCreating 0 12d
kubernetes-dashboard-716739405-zq3s9 0/1 CrashLoopBackOff 67 5h
monitoring-grafana-3552275057-qn0zf 1/1 Running 0 12d
monitoring-influxdb-4110454889-79pvk 1/1 Running 0 12d
tiller-deploy-737598192-f9gcl 0/1 CrashLoopBackOff 72 12d

None of your pods are running; you need to resolve that issue first. Try restarting the whole cluster and check that all of the above pods reach Running status.
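For example, a minimal sketch of forcing the stuck kube-system pods to be recreated and then watching them come back (the pod names are taken from the output above; deleting them is safe because they are managed by deployments):
# delete the stuck pods so their deployments recreate them
kubectl -n kube-system delete pod kube-dns-3942128195-ft56n kubernetes-dashboard-716739405-lpj38
# watch the replacements come up
kubectl -n kube-system get pods -w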

Based on @ivan.sim's suggestion, I posted 'UPDATE 2'. This finally got me looking in the right direction. I then searched for the CrashLoopBackOff error online, came across this link, and tried the following command (using the CLI option from the Rancher console). It was quite similar to what @ivan.sim suggested above, but it also showed me the node where the dashboard process was running:
> kubectl get pods -a -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system heapster-4285517626-4njc2 1/1 Running 0 12d 10.42.224.157 radod4
kube-system kube-dns-2588877561-26993 0/3 ImagePullBackOff 0 5h <none> radod1
kube-system kube-dns-646531078-z5lzs 0/3 ContainerCreating 0 12d <none> radod4
kube-system kubernetes-dashboard-716739405-zq3s9 0/1 Error 70 5h 10.42.218.11 radod1
kube-system monitoring-grafana-3552275057-qn0zf 1/1 Running 0 12d 10.42.202.44 radod4
kube-system monitoring-influxdb-4110454889-79pvk 1/1 Running 0 12d 10.42.111.171 radod4
kube-system tiller-deploy-737598192-f9gcl 0/1 CrashLoopBackOff 76 12d 10.42.213.24 radod4
Then I went to the host where the process was executing and tried the following command:
[rancher@radod1 ~]$
[rancher@radod1 ~]$ docker ps -a | grep dash
282334b0ed38 gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:b537ce8988510607e95b8d40ac9824523b1f9029e6f9f90e9fccc663c355cf5d "/dashboard --insecur" About a minute ago Exited (1) 55 seconds ago k8s_kubernetes-dashboard_kubernetes-dashboard-716739405-zq3s9_kube-system_7b0afda7-8271-11e7-ae86-021bfe69c163_72
99836d7824fd gcr.io/google_containers/pause-amd64:3.0 "/pause" 5 hours ago Up 5 hours k8s_POD_kubernetes-dashboard-716739405-zq3s9_kube-system_7b0afda7-8271-11e7-ae86-021bfe69c163_1
[rancher@radod1 ~]$
[rancher@radod1 ~]$
[rancher@radod1 ~]$ docker logs 282334b0ed38
Using HTTP port: 8443
Creating API server client for https://10.43.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: the server has asked for the client to provide credentials
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
After I got the above error, I again searched online and tried a few things. Finally, this link helped. After I executed the following commands on all agent nodes, the Kubernetes dashboard finally started working!
docker volume rm etcd
rm -rf /var/etcd/backups/*
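In case it helps anyone, a rough sketch of running that cleanup on every agent node in one pass (it assumes SSH access as the rancher user; the host names are the ones from my cluster):
for host in radod1 radod3 radod4; do
  ssh rancher@"$host" 'docker volume rm etcd && sudo rm -rf /var/etcd/backups/*'
done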

Related

Kubeadm Failed to create SubnetManager: error retrieving pod spec for kube-system

No matter what I do, it seems I cannot get rid of this problem. I have installed Kubernetes using kubeadm many times quite successfully; however, adding a v1.16.0 node is giving me a heck of a headache.
O/S: Ubuntu 18.04.3 LTS
Kubernetes version: v1.16.0
Kubeadm version: Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"
A query of the cluster shows:
NAME STATUS ROLES AGE VERSION
kube-apiserver-1 Ready master 110d v1.15.0
kube-apiserver-2 Ready master 110d v1.15.0
kube-apiserver-3 Ready master 110d v1.15.0
kube-node-1 Ready <none> 110d v1.15.0
kube-node-2 Ready <none> 110d v1.15.0
kube-node-3 Ready <none> 110d v1.15.0
kube-node-4 Ready <none> 110d v1.16.0
kube-node-5 Ready,SchedulingDisabled <none> 3m28s v1.16.0
kube-node-databases Ready <none> 110d v1.15.0
I have temporarily disabled scheduling to the node until I can fix this problem. A query of the pod status in the kube-system namespace shows the problem:
$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-55zjs 1/1 Running 128 21d
coredns-fb8b8dccf-kzrpc 1/1 Running 144 21d
kube-flannel-ds-amd64-29xp2 1/1 Running 11 110d
kube-flannel-ds-amd64-hp7nq 1/1 Running 14 110d
kube-flannel-ds-amd64-hvdpf 0/1 CrashLoopBackOff 5 8m28s
kube-flannel-ds-amd64-jhhlk 1/1 Running 11 110d
kube-flannel-ds-amd64-k6dzc 1/1 Running 2 110d
kube-flannel-ds-amd64-lccxl 1/1 Running 21 110d
kube-flannel-ds-amd64-nnn7g 1/1 Running 14 110d
kube-flannel-ds-amd64-shss5 1/1 Running 7 110d
kubectl -n kube-system logs -f kube-flannel-ds-amd64-hvdpf
I1002 01:13:22.136379 1 main.go:514] Determining IP address of default interface
I1002 01:13:22.136823 1 main.go:527] Using interface with name ens3 and address 192.168.5.46
I1002 01:13:22.136849 1 main.go:544] Defaulting external address to interface address (192.168.5.46)
E1002 01:13:52.231471 1 main.go:241] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-hvdpf': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-amd64-hvdpf: dial tcp 10.96.0.1:443: i/o timeout
Although I found a few hits about iptables issues and kernel routing, I don't understand why previous versions installed without a hitch but this version is giving me such a problem.
I have installed this node and destroyed it quite a few times, yet the result is always the same.
Is anyone else having this issue, or does anyone have a solution?
This occurs when it's not able to look up the host. Add the below after name: POD_NAMESPACE:
- name: KUBERNETES_SERVICE_HOST
value: "10.220.64.186" #ip address of the host where kube-apiservice is running
- name: KUBERNETES_SERVICE_PORT
value: "6443"
According to the documentation on the version skew policy:
kubelet
kubelet must not be newer than kube-apiserver, and may be up to two minor versions older.
Example:
kube-apiserver is at 1.13
kubelet is supported at 1.13, 1.12, and 1.11
That means that a worker node at version v1.16.0 is not supported with a master node at version v1.15.0.
To fix this issue I recommend reinstalling the node with version v1.15.0 to match the rest of the cluster.
Optionally, you can upgrade the whole cluster to v1.16.1; however, there are some problems with running flannel as the network plugin on it at the moment. Please review this guide from the documentation before proceeding.
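A rough outline of that downgrade, assuming a Debian/Ubuntu node and the standard Kubernetes apt repository (the package versions are examples; adjust them to your patch level):
# on the master: take the mismatched node out of the cluster
kubectl drain kube-node-5 --ignore-daemonsets --delete-local-data
kubectl delete node kube-node-5
# on kube-node-5: reset, install the matching 1.15 packages, then rejoin
sudo kubeadm reset
sudo apt-get install -y --allow-downgrades kubeadm=1.15.0-00 kubelet=1.15.0-00 kubectl=1.15.0-00
# finally run the kubeadm join command printed by the master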

How to install kube-dns on minikube?

I've looked at How does one install the kube-dns addon for minikube?, but the issue is that in that question the addon is installed. However, when I run
minikube addons list
I get the following:
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- heapster: disabled
- ingress: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
none of which is kube-dns. I can't find instructions anywhere, as it's supposed to be there by default, so what have I missed?
EDIT: This is minikube v1.0.1 running on Ubuntu 18.04.
The Stack Overflow case you are referring to is from 2017, so it's a bit outdated.
According to the documentation, CoreDNS is the recommended DNS server and has replaced kube-dns. There was a transitional period when KubeDNS and CoreDNS were deployed in parallel; however, in the latest versions only CoreDNS is deployed.
By default, Minikube creates 2 CoreDNS pods. To verify, execute:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-g4vs2 1/1 Running 1 20m
coredns-5c98db65d4-k4s7v 1/1 Running 1 20m
etcd-minikube 1/1 Running 0 19m
kube-addon-manager-minikube 1/1 Running 0 20m
kube-apiserver-minikube 1/1 Running 0 19m
kube-controller-manager-minikube 1/1 Running 0 19m
kube-proxy-thbv5 1/1 Running 0 20m
kube-scheduler-minikube 1/1 Running 0 19m
storage-provisioner 1/1 Running 0 20m
You can also see that there is a CoreDNS deployment:
$ kubectl get deployments coredns -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 37m
Here you can find a comparison between the two DNS servers.
So in short, you did not miss anything. CoreDNS is deployed by default during minikube start.
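If you want to double-check that DNS actually resolves inside the cluster, a quick test (busybox:1.28 is commonly used for this because its nslookup behaves predictably):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default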

Error from server (NotFound): podmetrics.metrics.k8s.io "mem-example/memory-demo" not found

I am following this tutorial: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
I have created the memory pod demo and I am trying to get the metrics from the pod but it is not working.
I installed the metrics server by cloning: https://github.com/kubernetes-incubator/metrics-server
And then running this command from top level:
kubectl create -f deploy/1.8+/
I am using kubernetes version 1.10.11.
The pod is definitely created:
λ kubectl get pod memory-demo --namespace=mem-example
NAME READY STATUS RESTARTS AGE
memory-demo 1/1 Running 0 6m
But the metrics command does not work and gives an error:
λ kubectl top pod memory-demo --namespace=mem-example
Error from server (NotFound): podmetrics.metrics.k8s.io "mem-example/memory-demo" not found
What did I do wrong?
A few patches need to be applied to the metrics-server deployment to get the metrics working.
Follow the steps below:
kubectl delete -f deploy/1.8+/
Wait till the metrics server gets undeployed, then run the command below:
kubectl create -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/metrics-server.yaml
master $ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-6zg78 1/1 Running 0 2h
coredns-78fcdf6894-gk4sb 1/1 Running 0 2h
etcd-master 1/1 Running 0 2h
kube-apiserver-master 1/1 Running 0 2h
kube-controller-manager-master 1/1 Running 0 2h
kube-proxy-f5z9p 1/1 Running 0 2h
kube-proxy-ghbvn 1/1 Running 0 2h
kube-scheduler-master 1/1 Running 0 2h
metrics-server-85c54d44c8-rmvxh 2/2 Running 0 1m
weave-net-4j7cl 2/2 Running 1 2h
weave-net-82fzn 2/2 Running 1 2h
master $ kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-78fcdf6894-6zg78 2m 11Mi
coredns-78fcdf6894-gk4sb 2m 9Mi
etcd-master 14m 90Mi
kube-apiserver-master 24m 425Mi
kube-controller-manager-master 26m 62Mi
kube-proxy-f5z9p 2m 19Mi
kube-proxy-ghbvn 3m 17Mi
kube-scheduler-master 8m 14Mi
metrics-server-85c54d44c8-rmvxh 1m 19Mi
weave-net-4j7cl 2m 59Mi
weave-net-82fzn 1m 60Mi
Check and verify the lines below in the metrics-server deployment manifest:
command:
- /metrics-server
- --metric-resolution=30s
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
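Alternatively, instead of deleting and recreating everything, the same flags can be appended to the live deployment with a JSON patch (a sketch; it assumes the flags live under args, so if your manifest puts them under command as shown above, adjust the path accordingly):
kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP"}
]'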
On Minikube, I had to wait 20-25 minutes after enabling the metrics-server addon. I was getting the same error for that whole time, but later I could see the output without attempting any fix.
I faced a similar issue:
Error from server (NotFound): podmetrics.metrics.k8s.io "default/apple-app" not found
I followed two steps and I was able to resolve the issue.
Download the latest customized components.yaml, which is their official file used for easy deployment.
Apply the following change
# - /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
to the command section of the deployment specification. I have commented out the first line because it is already the entrypoint of the image used by the Kubernetes metrics-server:
$ docker image inspect k8s.gcr.io/metrics-server-amd64:v0.3.6 -f {{.ContainerConfig.Entrypoint}}
[/metrics-server]
Whether you keep that line or leave it commented out doesn't matter.
Note: You have to wait a few seconds for it to start working properly.
After this, running the top command will work for you:
$ kubectl top pod apple-app
NAME CPU(cores) MEMORY(bytes)
apple-app 1m 3Mi
I know this is an old thread; maybe someone will find this answer useful.
You have to check out the following repo:
https://github.com/kubernetes-incubator/metrics-server
Go to the root of the repo and check out release-0.3.2.
Remove default metrics server by:
kubectl delete -f deploy/1.8+/
Download the components YAML:
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Edit components.yaml by adding the following lines to the args section; after editing you will see these two lines there:
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls=true
There is only one args parameter in that file.
Deploy your pod/deployment and you should be able to do:
kubectl top pod <pod-name>
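A minimal usage sketch after editing the file (the label selector assumes the stock manifest; give metrics-server a minute to scrape before running top):
kubectl apply -f components.yaml
kubectl -n kube-system get pods -l k8s-app=metrics-server   # wait for 1/1 Running
kubectl top pod <pod-name>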

Kubernetes Canal CNI error on masters

I'm setting up a Kubernetes cluster for a customer.
I've done this process multiple times before, including dealing with Vagrant specifics, and I've been able to consistently get a K8s cluster up and running without too much fuss.
Now, for this customer I'm doing the same, but I've been running into a lot of issues when setting things up, which is completely unexpected.
Compared to other places where I've set up Kubernetes, the only obvious difference I see is that I have a proxy server which I constantly have to battle with. Nothing that a NO_PROXY env var hasn't been able to handle, though.
The main issue I'm facing is setting up Canal (Calico + Flannel).
For some reason, on Masters 2 and 3 it just won't start.
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system canal-2pvpr 2/3 CrashLoopBackOff 7 14m 10.136.3.37 devmn2.cpdprd.pt
kube-system canal-rdmnl 2/3 CrashLoopBackOff 7 14m 10.136.3.38 devmn3.cpdprd.pt
kube-system canal-swxrw 3/3 Running 0 14m 10.136.3.36 devmn1.cpdprd.pt
kube-system kube-apiserver-devmn1.cpdprd.pt 1/1 Running 1 1h 10.136.3.36 devmn1.cpdprd.pt
kube-system kube-apiserver-devmn2.cpdprd.pt 1/1 Running 1 4h 10.136.3.37 devmn2.cpdprd.pt
kube-system kube-apiserver-devmn3.cpdprd.pt 1/1 Running 1 1h 10.136.3.38 devmn3.cpdprd.pt
kube-system kube-controller-manager-devmn1.cpdprd.pt 1/1 Running 0 15m 10.136.3.36 devmn1.cpdprd.pt
kube-system kube-controller-manager-devmn2.cpdprd.pt 1/1 Running 0 15m 10.136.3.37 devmn2.cpdprd.pt
kube-system kube-controller-manager-devmn3.cpdprd.pt 1/1 Running 0 15m 10.136.3.38 devmn3.cpdprd.pt
kube-system kube-dns-86f4d74b45-vqdb4 0/3 ContainerCreating 0 1h <none> devmn2.cpdprd.pt
kube-system kube-proxy-4j7dp 1/1 Running 1 2h 10.136.3.38 devmn3.cpdprd.pt
kube-system kube-proxy-l2wpm 1/1 Running 1 2h 10.136.3.36 devmn1.cpdprd.pt
kube-system kube-proxy-scm9g 1/1 Running 1 2h 10.136.3.37 devmn2.cpdprd.pt
kube-system kube-scheduler-devmn1.cpdprd.pt 1/1 Running 1 1h 10.136.3.36 devmn1.cpdprd.pt
kube-system kube-scheduler-devmn2.cpdprd.pt 1/1 Running 1 4h 10.136.3.37 devmn2.cpdprd.pt
kube-system kube-scheduler-devmn3.cpdprd.pt 1/1 Running 1 1h 10.136.3.38 devmn3.cpdprd.pt
Looking into the specific error, I found that the issue is with the kube-flannel container, which is throwing an error:
[exXXXXX@devmn1 ~]$ kubectl logs canal-rdmnl -n kube-system -c kube-flannel
I0518 16:01:22.555513 1 main.go:487] Using interface with name ens192 and address 10.136.3.38
I0518 16:01:22.556080 1 main.go:504] Defaulting external address to interface address (10.136.3.38)
I0518 16:01:22.565141 1 kube.go:130] Waiting 10m0s for node controller to sync
I0518 16:01:22.565167 1 kube.go:283] Starting kube subnet manager
I0518 16:01:23.565280 1 kube.go:137] Node controller sync successful
I0518 16:01:23.565311 1 main.go:234] Created subnet manager: Kubernetes Subnet Manager - devmn3.cpdprd.pt
I0518 16:01:23.565331 1 main.go:237] Installing signal handlers
I0518 16:01:23.565388 1 main.go:352] Found network config - Backend type: vxlan
I0518 16:01:23.565440 1 vxlan.go:119] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0518 16:01:23.565619 1 main.go:279] Error registering network: failed to acquire lease: node "devmn3.cpdprd.pt" pod cidr not assigned
I0518 16:01:23.565671 1 main.go:332] Stopping shutdownHandler...
I just can't understand why.
Some relevant info:
My clusterCIDR and podCIDR are: 192.168.151.0/25 (I know, it's weird, don't ask unless it's a huge issue)
I've set up etcd under systemd.
I've modified kube-controller-manager.yaml to change the mask size to 25 (otherwise the CIDR mentioned before wouldn't work).
I'm installing everything with kubeadm. One weird thing I did notice was that, when viewing the config (kubeadm config view), much of the information that I had set up in the kubeadm config.yaml (for kubeadm init) was not present in the config view, including the paths to the etcd certs.
I'm also not sure why that happened, but I've fixed it (hopefully) by editing the kubeadm config map (kubectl edit cm kubeadm-config -n kube-system) and saving it.
Still no luck with canal.
Can anyone help me figure out what's wrong?
I have documented pretty much every step of the configuration I've done, so if required I may be able to provide it.
EDIT:
I figured out in the meantime that indeed my masters 2 and 3 do not have a podCIDR associated. Why would this happen? And how can I add it?
Try to edit:
/etc/kubernetes/manifests/kube-controller-manager.yaml
and add
--allocate-node-cidrs=true
--cluster-cidr=192.168.151.0/25
then reload the kubelet.
I found this information here and it was useful for me.
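After the controller manager restarts, you can verify that every node now has a podCIDR assigned, for example:
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR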

Kubernetes - kube-system pods in master node keep restarting after worker node joins

I have followed this tutorial and this tutorial and this one, but have been facing the same issue for the last 3 days.
I am able to set up the master node correctly with the following steps:
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
and everything seems fine in
kubectl get all --namespace=kube-system
then,
on the worker node:
kubeadm join --token 864655.fdf6d0b389867b79 192.168.100.17:6443 --discovery-token-ca-cert-hash sha256:a2d840808b17b53b9612e6271ccde489f13dbede7d354f97188d0faa9e210af2
The output seems fine and is as below:
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.100.17:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.100.17:6443"
[discovery] Requesting info from "https://192.168.100.17:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.100.17:6443"
[discovery] Successfully established connection with API Server "192.168.100.17:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
BUT as soon as I run this command, all hell breaks loose. The
kubectl get all --namespace=kube-system
starts showing that all pods are kind of restarting all the time. The status keeps changing between Pending and Running, and at times some of the pods even disappear or show ContainerCreating status, etc.
NAME READY STATUS RESTARTS AGE
po/etcd-ubuntu 0/1 Pending 0 0s
po/kube-controller-manager-ubuntu 0/1 Pending 0 0s
po/kube-dns-6f4fd4bdf-cmcfk 3/3 Running 0 13m
po/kube-proxy-2chb6 1/1 Running 0 13m
po/kube-scheduler-ubuntu 0/1 Pending 0 0s
po/weave-net-ptdxr 2/2 Running 0 11m
I have also tried the second tutorial, with flannel, and get the exact same issue.
My Set Up
I created two new VMs with a fresh installation of Ubuntu 17.10 on VMware, with 2 processors/2 cores, 6 GB of RAM, and a 50 GB hard disk each. My physical machine is an i7-6700K with 32 GB of RAM.
I installed kubeadm, kubelet and docker on both of them and then followed the steps as mentioned above.
I have also tried switching between NAT and Bridge on VMware and nothing changed.
The initial IPs of both VMs on the bridged network were 192.168.100.12 and 192.168.100.17.
The hostname -I output for the master:
192.168.100.17 172.17.0.1 10.32.0.1 10.32.0.2
The hostname -I output for the worker node:
192.168.100.12 172.17.0.1 10.44.0.0 10.32.0.1
journalctl -xeu kubelet shows the following:
https://gist.github.com/saad749/9a771a3460bf88c274498b5bc4b7fd84
While trying with flannel (and still the same issue), the result from
kubectl describe nodes
is
https://gist.github.com/saad749/d24c453c8b4e663e9abf572a0fb38bf4
Am I missing any step before kubeadm init? Should I change the IP addresses (to what)? Are there any specific logs I should look into? Is there a more comprehensive tutorial for this?
All issues start after kubeadm join on the worker node; I can deploy Kubernetes workloads or anything else on the master node alone, and it works fine.
UPDATE:
Even after applying the suggestions from @errordeveloper, the same issue persists.
I added the following flag to kubeadm init:
--apiserver-advertise-address 192.168.100.17
I updated kubeadm.conf to the following and did a reload and restart:
https://gist.github.com/saad749/c7149c87ec3e75a40586f626cf04279a
and also tried changing the cluster DNS:
https://gist.github.com/saad749/5fa66bebc22841e58119333e75600e40
This is the log from after initializing the master:
kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-ubuntu 1/1 Running 0 22s 192.168.100.17 ubuntu
kube-system kube-apiserver-ubuntu 1/1 Running 0 29s 192.168.100.17 ubuntu
kube-system kube-controller-manager-ubuntu 1/1 Running 0 13s 192.168.100.17 ubuntu
kube-system kube-dns-6f4fd4bdf-wfqhb 3/3 Running 0 1m 10.32.0.7 ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 1m 192.168.100.17 ubuntu
kube-system kube-scheduler-ubuntu 1/1 Running 0 34s 192.168.100.17 ubuntu
kube-system weave-net-fkgnh 2/2 Running 0 32s 192.168.100.17 ubuntu
The hostname -I and hostname -i results:
kube-master@ubuntu:~$ hostname -I
192.168.100.17 172.17.0.1 10.32.0.1 10.32.0.2 10.32.0.3 10.32.0.4 10.32.0.5 10.32.0.6 10.244.0.0 10.244.0.1
kube-master@ubuntu:~$ hostname -i
192.168.100.17
Results from:
kubectl describe nodes
https://gist.github.com/saad749/8f460650182a04d0ddf3158a52761a9a
The Internal IP seems correct now.
After joining from the second node, this happens:
kube-master@ubuntu:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu Ready master 49m v1.9.3
kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-controller-manager-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system kube-dns-6f4fd4bdf-wfqhb 0/3 ContainerCreating 0 49m <none> ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 49m 192.168.100.17 ubuntu
kube-system kube-scheduler-ubuntu 1/1 Running 0 1s 192.168.100.17 ubuntu
kube-system weave-net-fkgnh 2/2 Running 0 48m 192.168.100.17 ubuntu
ifconfig -a results:
https://gist.github.com/saad749/63a5a52bd3246ff72477b2aca7d158d0
journalctl -xeu kubelet results
https://gist.github.com/saad749/8a60870b35f93df8565e66cb208aff32
Sometimes the pods' IP is shown as 192.168.100.12, which is the IP of the non-master second node.
kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system etcd-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system kube-apiserver-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system kube-controller-manager-ubuntu 1/1 Running 0 0s 192.168.100.12 ubuntu
kube-system kube-dns-6f4fd4bdf-wfqhb 2/3 Running 0 3h 10.32.0.7 ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 3h 192.168.100.12 ubuntu
kube-system kube-scheduler-ubuntu 0/1 Pending 0 0s <none> ubuntu
kube-system weave-net-fkgnh 2/2 Running 1 3h 192.168.100.17 ubuntu
kube-master@ubuntu:~$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-dns-6f4fd4bdf-wfqhb 3/3 Running 0 3h 10.32.0.7 ubuntu
kube-system kube-proxy-h4hz9 1/1 Running 0 3h 192.168.100.12 ubuntu
kube-system weave-net-fkgnh 2/2 Running 0 3h 192.168.100.12 ubuntu
kubectl describe nodes
Name: ubuntu
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=ubuntu
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: node-role.kubernetes.io/master:NoSchedule
CreationTimestamp: Fri, 02 Mar 2018 08:21:47 -0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 08:21:43 -0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 08:21:43 -0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 08:21:43 -0800 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Fri, 02 Mar 2018 11:38:36 -0800 Fri, 02 Mar 2018 11:28:25 -0800 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.100.12
Hostname: ubuntu
Capacity:
cpu: 4
memory: 6080832Ki
pods: 110
Allocatable:
cpu: 4
memory: 5978432Ki
pods: 110
System Info:
Machine ID: 59bf65b835b242a3aa182f4b8a542219
System UUID: 0C3C4D56-4747-D59E-EE09-F16F2793677E
Boot ID: 658b4a08-d724-425e-9246-2b41995ecc46
Kernel Version: 4.13.0-36-generic
OS Image: Ubuntu 17.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.9.3
Kube-Proxy Version: v1.9.3
ExternalID: ubuntu
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system kube-dns-6f4fd4bdf-wfqhb 260m (6%) 0 (0%) 110Mi (1%) 170Mi (2%)
kube-system kube-proxy-h4hz9 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-fkgnh 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
280m (7%) 0 (0%) 110Mi (1%) 170Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rebooted 12m (x814 over 2h) kubelet, ubuntu Node ubuntu has been rebooted, boot id: 16efd500-a2a5-446f-ba25-1187857996e0
Normal NodeHasNoDiskPressure 10m kubelet, ubuntu Node ubuntu status is now: NodeHasNoDiskPressure
Normal Starting 10m kubelet, ubuntu Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet, ubuntu Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 10m kubelet, ubuntu Node ubuntu status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 10m kubelet, ubuntu Node ubuntu status is now: NodeHasSufficientMemory
Normal NodeNotReady 10m kubelet, ubuntu Node ubuntu status is now: NodeNotReady
Warning Rebooted 2m (x870 over 2h) kubelet, ubuntu Node ubuntu has been rebooted, boot id: 658b4a08-d724-425e-9246-2b41995ecc46
Warning Rebooted 15s (x60 over 10m) kubelet, ubuntu Node ubuntu has been rebooted, boot id: 16efd500-a2a5-446f-ba25-1187857996e0
What am I doing wrong?
So after following the advice from @errordeveloper and still hitting the wall, I was able to solve the issue, which turned out to be pretty simple.
Both my VMs had the same hostname.
hostname -f
would return
ubuntu
on both, and that apparently causes issues with Kubernetes.
I changed the name on my non-master node with
hostnamectl set-hostname kminion
and in the following files:
/etc/hostname
/etc/hosts
and everything went smoothly from then on!
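For completeness, the rename on the worker boiled down to roughly this (a sketch; kminion is just the name I picked, and depending on when you rename you may also need to reset and rejoin the node):
sudo hostnamectl set-hostname kminion
sudo sed -i 's/\bubuntu\b/kminion/g' /etc/hosts   # update the old hostname entries; hostnamectl already rewrites /etc/hostname
# if the node had already joined under the old name:
sudo kubeadm reset
# then rejoin with the kubeadm join command from the master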
Should I change the IP addresses (to what)?
Yes, this is typically the way to make things work on VMs where the default route is for NATed access to the Internet.
You want to use the IP of the bridged network; for your master that appears to be 192.168.100.17 (but please double-check).
First, please try using kubeadm init --apiserver-advertise-address 192.168.100.17, but that may not solve all of the issues.
In your output of kubectl describe nodes, I can see this:
Addresses:
InternalIP: 172.17.0.1
Hostname: ubuntu
So you probably want to make sure that the kubelet also doesn't use the NATed interface, for which you would need to use the kubelet's --node-ip flag.
However, there are other ways to fix this problem, e.g. if you can ensure that hostname -i returns the IP of the bridged interface (which you can do by tweaking /etc/hosts).
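For example, on a kubeadm-based install you could pin the kubelet to the bridged address via a systemd drop-in (a sketch; the drop-in file name is arbitrary, KUBELET_EXTRA_ARGS is the hook kubeadm's unit file reads, and the IP is the bridged address from above):
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-node-ip.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.100.17"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet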